Distributed Computing Meets Game Theory: Robust Mechanisms for Rational Secret Sharing and Multiparty Computation

Ittai Abraham∗ (Hebrew University), [email protected]
Danny Dolev† (Hebrew University), [email protected]
Rica Gonen‡ (Bell Labs, Lucent Technologies), [email protected]
Joe Halpern§ (Cornell University), [email protected]

ABSTRACT
We study k-resilient Nash equilibria, joint strategies where no member of a coalition C of size up to k can do better, even if the whole coalition defects. We show that such k-resilient Nash equilibria exist for secret sharing and multiparty computation, provided that players prefer getting the information to not getting it. Our results hold even if there are only 2 players, so we can do multiparty computation with only two rational agents. We extend our results so that they hold even in the presence of up to t players with "unexpected" utilities. Finally, we show that our techniques can be used to simulate games with mediators by games without mediators.

Categories and Subject Descriptors: F.0 [Theory of Computation]: General.
General Terms: Economics, Security, Theory.
Keywords: Distributed Computing, Game Theory, Secret Sharing, Secure Multiparty Computation.

1. INTRODUCTION
Traditionally, work on secret sharing and multiparty computation in the cryptography community, just like work in distributed computation, has divided the agents into "good guys" and "bad guys". The good guys follow the protocol; the bad guys do everything in their power to make sure it does not work. Results are then proved showing that if no more than a certain fraction of the agents are "bad", the protocol will succeed.

Halpern and Teague [10] studied secret sharing under the assumption that agents were rational: they would do only what was in their self-interest. For three or more players, under the assumption that a player prefers to get the secret over not getting it, they give a randomized protocol with constant expected running time in which all players learn the secret. They prove that their protocol is a Nash equilibrium that survives iterated deletion of weakly dominated strategies.

Indeed, traditional results in game theory mostly consider equilibrium notions (like Nash equilibrium) that tolerate the deviation of only one agent.
That is, a joint strategy (σ1,...,σn) is a Nash equilibrium if no agent can do better by unilaterally deviating (while all the other agents continue to play their part of the joint strategy). However, in practice, agents can form coalitions. It could well be that if three agents form a coalition and they all deviate from the protocol, then they could all do better.

We define an equilibrium to be k-resilient if it tolerates deviations by coalitions of size up to k. Roughly speaking, a joint strategy (σ1,...,σn) is k-resilient if, for any coalition C with |C| ≤ k that deviates from the equilibrium, none of the agents in C do better than they do with (σ1,...,σn). This is a very strong notion of resilience (much stronger than other notions in the literature). We will be interested in k-resilient practical mechanisms which, roughly speaking, are protocols that define a k-resilient Nash equilibrium that survives iterated deletion of weakly dominated strategies.

∗ Part of the work was done while the author visited Microsoft Research Silicon Valley.
† Part of the work was done while the author visited Cornell University. The work was funded in part by ISF, NSF, CCR, and AFOSR.
‡ Supported in part by the IDA.
§ Supported in part by NSF under grants CCR-0208535, ITR-0325453, and IIS-0534064, by ONR under grant N00014-01-10-511, by the DoD Multidisciplinary University Research Initiative (MURI) program administered by the ONR under grants N00014-01-1-0795 and N00014-04-1-0725, and by AFOSR under grant FA9550-05-1-0055.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
PODC'06, July 22-26, 2006, Denver, Colorado, USA.
Copyright 2006 ACM 1-59593-384-0/06/0007 ...$5.00.

1.1 Our contributions
In this paper we significantly extend and improve the results of Halpern and Teague in several important ways. While continuing to use rationality so as to move away from the traditional "good guys"-"bad guys" adversary model that is standard in the distributed computing community, we mitigate the "fragility" of Nash equilibrium by allowing coalitions and tolerating a certain number of agents whose utilities may be unknown or nonstandard. Our specific contributions include the following:

1. While Halpern and Teague's results provide a 1-resilient equilibrium, our main result shows that we can achieve optimal resilience, an (n−1)-resilient practical mechanism, for the n out of n secret-sharing game. This is of particular interest in the context of secure multiparty computation. Replacing the use of n out of n secret sharing in standard multiparty computation protocols by our (n−1)-resilient rational secret sharing protocol, it follows that any multiparty computation that can be carried out with a trusted mediator can also be carried out without the trusted mediator, in a highly resilient way.

2. While Halpern and Teague's results are appropriate for three or more players, our results work even for two players. The n = 2 setting is of great interest. For example, consider Yao's [25] classic millionaires' problem, where two millionaires meet and want to learn which is richer, in a setting where both millionaires would like to learn the answer, but would prefer that the other millionaire does not. We provide the first fair solution to the problem in this setting. On a perhaps more practical note, consider rival companies that wish to securely compute aggregate statistics (e.g., medians or expectations) based on their combined private databases. Malkhi et al. [18] recently built a full distributed implementation of secure computation for two parties. One of the main open questions raised in the context of the real-world applicability of such systems is exactly the problem of fair termination. Our results solve this problem for any number of agents and arbitrary coalitions if agents are rational.

3. Halpern and Teague's randomized protocol includes a parameter that essentially determines the probability of terminating in any given round. To set this parameter, the protocol designer must know the utilities of all the agents (or, at least, a lower bound on the gap between the utility of everyone learning the secret and the utility of no one learning the secret). This is also the case for our general protocol. However, we can show that if k < ⌈n/3⌉, then we can give a single k-resilient practical mechanism that works for all choices of numerical utility, as long as agents do not strictly prefer not learning the secret to learning the secret. Moreover, this protocol does not require cryptographic assumptions. For the case that k = 1, this solves an open problem raised by Halpern and Teague [10].

4. A system designer may not be able to count on all agents having the utility she expects. For example, in our setting, especially if n is large, there may be some "altruists" who actually prefer it if more agents learn the secret. We extend our results to the case where there are t agents whose utility can be arbitrary, as long as the remaining n − t agents have the utility that the designer expects.

5. Finally, our results lead to a deeper understanding of the power of cheap talk as a method to securely simulate an equilibrium without depending on honest mediators. In many cases of interest, it can be shown that a mechanism with desired properties exists if there is a trusted mediator. But a reliable trusted mediator is not always available. Izmalkov, Micali, and Lepinski [12] and Lepinski et al. [16] show, roughly speaking, that any equilibrium of a game with a mediator can be simulated by a game without a mediator. However, their construction relies on a very strong primitive they call an envelope and ballot-box; it is not clear that envelopes and ballot-boxes can be implemented in practice. Ben-Porath [4] shows that we can simulate a Nash equilibrium with a mediator provided that there is a "punishment strategy" which players can use to threaten potential deviators. Heller [11] strengthens Ben-Porath's result to allow coalitions. We show that, if we can assume private channels or if we make a standard cryptographic assumption (one that implies oblivious transfer and computationally bounded players), we can simulate any equilibrium of a game with a mediator provided that there is a punishment strategy. We also show that if k is sufficiently small, then we can do the simulation even without the punishment strategy.

Perhaps the most significant issue that we have not yet addressed is asynchrony. All our results depend on the model being synchronous. We are currently working on the asynchronous case.

2. DEFINITIONS
We consider games in extensive form, described by a game tree whose leaves are labeled by the utilities of the players. We assume that at each node in the game tree player i is in some local state (intuitively, this local state describes player i's initial information and the messages sent and received by player i). With each run r (i.e., path that ends in a leaf) in the game tree and player i, we can associate i's utility, denoted ui(r), if that path is played. A strategy for player i is a (possibly randomized) function from i's local states to actions. Thus a strategy for player i tells player i what to do at each node in the game tree. A joint strategy ~σ = (σ1,...,σn) for the players determines a distribution over paths in the game tree. Let ui(~σ) denote player i's expected utility if ~σ is played.

Let N = {1,...,n} be the set of players. Let Si denote the set of possible strategies for player i, and let S = S1 × ... × Sn. Given a space A = A1 × ... × An and a set I ⊂ N, let AI = ∏_{i∈I} Ai and A−I = ∏_{i∉I} Ai. Thus, SI is the strategy set of players in I. Given x ∈ AI and y ∈ A−I, let (x, y) be the tuple in A such that (x, y)i = xi if i ∈ I and (x, y)i = yi otherwise.

A joint strategy is a Nash equilibrium if no player can gain any advantage by using a different strategy, given that all the other players do not change their strategies. We want to define a notion of k-resilient equilibrium that generalizes Nash equilibrium, but allows a coalition of up to k players to change their strategies. There has been a great deal of work on dealing with deviations by coalitions of players, going back to Aumann [2]. Perhaps most relevant to our work is that of Bernheim, Peleg, and Whinston [5]. They define a notion of coalition-proof Nash equilibrium that, roughly speaking, attempts to capture the intuition that ~σ is a coalition-proof equilibrium if there is no deviation that allows all players to do better. However, they argue that this is too strong a requirement, in that some deviations are not viable: they are not immune from further deviations. Thus, they give a rather complicated definition that tries to capture the intuition of a deviation that is immune from further deviations. This work is extended by Moreno and Wooders [19] to allow correlated strategies. Since we want to prove possibility results, we are willing to consider a notion of equilibrium that is even stronger than those considered earlier in the literature.

Definition 1 (k-resilient equilibrium). Given a nonempty set C ⊆ N, σC ∈ SC is a group best response for C to σ−C ∈ S−C if, for all τC ∈ SC and all i ∈ C, we have ui(σC, σ−C) ≥ ui(τC, σ−C). A joint strategy ~σ ∈ S is a k-resilient equilibrium if, for all C ⊆ N with |C| ≤ k, σC is a group best response for C to σ−C. A strategy is strongly resilient if it is k-resilient for all k ≤ n − 1.

3. GAMES WITH MEDIATORS
To prove our results, it is useful to consider games with mediators. We can think of a mediator as a trusted party. We will show that if there is an equilibrium in a game using a mediator, then we can get a similar equilibrium even without the mediator, provided utilities satisfy certain properties.

Following Forges [6], we view a mediator as a communication device. Consider a multistage game with T stages with complete recall. Formally, a mediator is a tuple ⟨Ii^t, Oi^t, P^t⟩ for i ∈ N and t ∈ T, where Ii^t is a set of inputs that player i can send at stage t, Oi^t is a set of outputs for player i at stage t, and P^t is a function from ∏_{i∈N, r≤t} Ii^r (and possibly some random bits) to ∏_{i∈N} Oi^t. Less formally, at each stage t, each player sends a message (input), and the mediator computes a function (which is possibly random) of all the messages ever sent and sends each player a piece of advice (output). Given a multistage game Γ and a mediator d, we can define a game Γd, the extension of Γ with d.
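Viewed operationally, such a communication device is just a stateful map from input histories (plus private randomness) to per-player outputs. The sketch below is our own illustration of this reading, not a construction from the paper; the class name, the single `output_rule` standing in for the family of functions P^t, and the toy rule at the end are all assumptions.

```python
import random

class Mediator:
    """A communication device in the sense of Forges: at each stage it
    collects one input per player, then applies a (possibly randomized)
    function of the entire input history to produce one private output
    ("advice") per player."""

    def __init__(self, n, output_rule):
        self.n = n
        self.history = []               # history[t][i] = player i's input at stage t
        self.output_rule = output_rule  # plays the role of the functions P^t

    def stage(self, inputs):
        """Run one stage: record the players' phase-1 inputs and return
        the list of phase-2 outputs, one per player."""
        assert len(inputs) == self.n
        self.history.append(list(inputs))
        coins = random.random()  # the "possibly some random bits"
        return self.output_rule(self.history, coins)

# Toy output rule (our own example): tell every player how many stages
# have elapsed, ignoring the random bits.
med = Mediator(3, lambda hist, coins: [len(hist)] * 3)
print(med.stage(["a", "b", "c"]))  # -> [1, 1, 1]
```

The point of the abstraction is only that the advice may depend on everything sent so far, which is exactly what the secret-sharing mediator of Section 4 exploits.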
Each stage t of Γd can consists of three phases: in the first phase, each player, t Given some desired functionality F, we say that (Γ, ~σ) is a i, sends some input in Ii to d; in the second phase, d sends t t k-resilient mechanism for F if ~σ is a k-resilient equilibrium each player i the appropriate output in Oi according to P ; of Γ and the outcome of (Γ, ~σ) satisfies F. For example, if and in the third phase, each player makes a move in Γ or no ~σ is a k-resilient mechanism for secret sharing, then in all move at all. The utilities of the players depend only on the runs of ~σ, everyone would get the secret. moves made in Γ. Observe that a 1-resilient equilibrium is just a Nash equi- librium; thus, this notion generalizes Nash equilibrium. The notion of k-resilience strengthens Moreno and Wooder’s no- 4. RESILIENT SECRET SHARING tion of coalition-proof equilibrium by tolerating arbitrary de- In this section we focus on a specific game: m out of n viations, not just ones that are viable (in that the deviations secret sharing based on Shamir’s scheme [23]. The secret themselves are immune to further deviations). Moreover, k- is f(0), where f is a degree m − 1 (random) polynomial resilience implies tolerance of deviations where only a single over a field F such that |F | > n. Agent i is given f(i), player in the coalition does better; coalition-proof equilibria for i = 1,...,n; f(i) is agent i’s “share” of the secret. We tolerate only deviations where everyone in the coalition does assume, without loss of generality, that f(0) 6= 0 (and that better. By considering deviations where only one player does this fact is common knowledge among the players). For ease better, we allow situations where one player effectively con- of exposition, we assume that the secret is equally likely to trols the coalition. This can happen in practice in a network be any nonzero field value (and that this fact is common if one player can “hijack” a number of nodes in the network. 
knowledge among the players). It could also happen if one player can threaten others, or For most of this section we assume for ease of exposition does so much better as a result of the deviation that he per- that the initial shares given to each player are signed by the suades other players to go along, perhaps by the promise issuer, in such a way that each other player can verify the of side payments. 1 We do restrict to coalitions of size at signature of the issuer and no player can forge the signature. most k, where k is a parameter. Such a restriction seems We remove the assumption at the end of the section. reasonable; it is difficult in practice to form and coordinate To prove our results, we must formalize the intuition that large coalitions. players prefer getting the secret to not getting it. Halpern We take a mechanism to be a pair (Γ, ~σ) consisting of a and Teague [10] did this by assuming that it was possible to game and a joint strategy for that game. Intuitively, a mech- determine whether or not a player learned a secret just by anism designer designs the game Γ and recommends that looking at a run (complete path) in a game tree. To avoid player i follow σi in that game. The expectation is that a this assumption, we assume that players must output the “good” outcome will arise if all the players play the recom- secret (or their best guess at the secret) at the end of the mended strategy in the game. Designing a mechanism es- game. Of course, if they guess wrong, the output will be sentially amounts to designing a protocol; the recommended incorrect. This approach has the additional advantage that strategy is the protocol, and the game is defined by all possi- we can model situations where players get partial informa- ble deviations from the protocol. (Γ, ~σ) is a practical mech- tion about the secret (so that they can guess the value with anism if ~σ is a Nash equilibrium of the game Γ that survives high probability). 
iterated deletion of weakly-dominated strategies.2 Similarly Given a run r in the game tree, let out(r) be a tuple a k-resilient practical mechanism is a practical mechanism where outi(r) = 1 if player i correctly outputs the secret where ~σ is k-resilient. and outi(r) = 0 if player i outputs a value that is not f(0). 1 We can now model the fact that players prefer to get the Of course, in a more refined model of the game that took the threats or side-payments into account, everyone in the secret to not getting it using two assumptions that are iden- coalition would do better. tical to those made by Halpern and Teague, except that we 2We assume the reader is familiar with the concept of iter- talk about what a player outputs rather than what a player ated deletion; see [21, 10] for details. learns. The following assumption says that player i’s utility depends just on the outputs: expected utility” here, note that if the players in A learn

0 0 the secret, we assume that their output is the secret. But U1. If out(r)= out(r ) then ui(r)= ui(r ) . the utility of player i also depends on what the players not The next assumption is that each player strictly prefers in A output. Since the players not in A have no informa- learning the secret to not learning it. tion about the secret, the probability that a player not in A will guess the secret correctly is 1/(|F |− 1). However, 0 0 U2. If outi(r)=1 and outi(r ) = 0 then ui(r) > ui(r ). this probability alone does not determine i’s expected pay- off, since the payoff may depend in part on how the players In some of our results we require a weaker assumption, not in A correlate their guesses. For example, if the play- namely, that each player never prefers not to learn the se- ers not in A agree to all guess the same value, then with cret. probability 1/(|F |− 1) they all guess the secret, and with probability (|F |− 2)/(|F |− 1) none do. On the other hand, U20. If out (r)=1 and out (r0) = 0 then u (r) ≥ u (r0). i i i i if they always make different guesses, then with probability Halpern and Teague [10] made an additional assumption (n − |A|)/(|F |− 1), exactly one player not in A guesses the that was needed for their impossibility result, which cap- secret, and with probability (|F |− 1 − n + |A|)/(|F |− 1), no tures the intuition that players prefer that as few as possible player in A guesses the secret. Let ui(A) be i’s payoff if the of the other players will learn the secret. This property was players not in A correlate their guesses in a way that gives not used by Halpern and Teague for their possibility result. i the best expected payoff, and let mi = maxA⊆N ui(A). We do not need it for ours either, so we do not make it here. ui(N)−ui(∅) We provide a strategy for an augmented game with a me- Proposition 1. If α ≤ mini∈N mi−ui(∅) and k 0, this will be an ack message. 
if either (1) some players in C lie about their initial share If it does not receive an appropriate message from each in their phase 1 message to the mediator or do not send a player, the mediator stops playing. If the mediator is message at all, (2) some players in C do not send an ack still playing at stage t, then after receiving a phase message to the mediator in phase 1 of some round t> 0, or 1 (of stage t) message from each player, it chooses a (3) some players in C either lie or do not send their shares binary random variable ct with Pr[ct = 1] = α (we (received from the mediator in phase 2) to the other players specify α below) and a random degree m − 1 polyno- in phase 3 of some round t. mial gt over F such that gt(0) = 0. It computes the Since we have assumed that shares are signed by the is- polynomial ht = f · ct + gt and, in phase 2, sends each suer, if some i ∈ C lies about his initial share, then i will player i the value ht(i). Note that if ct = 0, then be caught and the mediator will not continue. This clearly ht(0) = 0; if ct = 1, ht(0) = f(0). Thus, if ct = 1, results in a worse outcome for i. Similarly, if some i ∈ C ht encodes the same secret as f; if ct = 0, then ht does not send a share in stage 0 or does not send an ack does not encode a secret (and if the players get all the message in stage t > 0, then the game stops and no player shares of ht, they will realize that ht does not encode learns the secret. This is also a worse outcome for i. a secret, since f(0) 6= 0, by assumption). Now suppose that some i ∈ C does not send a share in phase 3 of round t. With probability 1 − α, the game will

Consider the following strategy σi for player i. stop and no player will learn the secret. If the game does not stop (which happens with probability α), the best outcome 1. In phase 1 of stage 0, send your share of the secret to player i can hope for is to gain mi. Thus, i ∈ C gains from the mediator and set ok := true. In phase 1 of stage sending nothing only if αmi + (1 − α)ui(∅) > ui(N). But t> 0, if ok = true, send ack to the mediator. we have assumed that this is not the case. Thus, i will send t 2. If the mediator sends the message ai in phase 2 of something at phase 3. stage t ≥ 0 and ok = true, then in phase 3 of stage t Now suppose that players in C decide to send X = {xi | t t send ai to all the other players. If not all players sent i ∈ C} instead of {h (i) | i ∈ C}. We say that X is a a message in phase 3, set ok := false. Otherwise, con- compatible response if and only if there exists a degree m−1 struct the polynomial ht by interpolating the received polynomial q such that shares. If there is no such polynomial, set ok := false. t Otherwise, if ht(0) 6= 0, then stop, set ok := false, and xi − h (i) if i ∈ C t t q(i)= output h (0). If h (0) = 0, continue to the next stage. (0 if i ∈ {0, 1,...,n}− C.
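This compatibility condition is a statement about polynomial interpolation and can be tested mechanically. The sketch below is our own illustration, not code from the paper; the prime P, the dict-based encoding of the coalition's messages, and all function names are assumptions.

```python
P = 101  # a small prime field GF(P); the paper only requires |F| > n

def lagrange_eval(points, x):
    """Evaluate, at x, the unique polynomial over GF(P) of degree
    < len(points) passing through the given (x_i, y_i) points."""
    total = 0
    for x_i, y_i in points:
        num, den = 1, 1
        for x_j, _ in points:
            if x_j != x_i:
                num = num * (x - x_j) % P
                den = den * (x_i - x_j) % P
        # pow(den, P-2, P) is the modular inverse of den (Fermat)
        total = (total + y_i * num * pow(den, P - 2, P)) % P
    return total

def is_compatible(m, n, C, xs, hs):
    """xs[i]: value that coalition member i sends in phase 3;
    hs[i]: the value h_t(i) the mediator actually sent i.
    X is compatible iff q, defined by q(i) = xs[i] - hs[i] for i in C
    and q(i) = 0 for i in {0, 1, ..., n} - C, has degree at most m - 1."""
    pts = [(i, (xs[i] - hs[i]) % P if i in C else 0) for i in range(n + 1)]
    base = pts[:m]  # q has degree <= m-1 iff m points determine the rest
    return all(lagrange_eval(base, x) == y for x, y in pts[m:])

# With m = 3, n = 3, C = {1, 2}: shifting both shares by q(i) = i*(i - 3)
# keeps q(0) = q(3) = 0, so the deviation is compatible ...
assert is_compatible(3, 3, {1, 2}, {1: 8, 2: 18}, {1: 10, 2: 20})
# ... while an arbitrary single-share lie is not.
assert not is_compatible(3, 3, {1}, {1: 15}, {1: 10})
```

The first example also illustrates why compatibility is harmless: the shift polynomial q vanishes at 0, so the reconstructed constant term, and hence the outcome, is unchanged.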

Note that this condition can be checked by simple interpolation. There are two cases to consider. If X is a compatible response, then sending X has the same result as sending {ht(i) | i ∈ C}: essentially, sending a compatible response X at stage t will cause the players to reconstruct the polynomial ht + q instead of the polynomial ht. However, since [ht + q](0) = ht(0) + q(0) = ht(0), this does not change the outcome; either with probability α all players learn the secret, or with probability 1 − α all players interpolate 0 as the secret and continue to the next stage. On the other hand, if X is not a compatible response, then the deviation is detected and the game stops; arguing as before, some i ∈ C could gain this way only if αmi + (1 − α)ui(∅) > ui(N), which we assume is not the case. Note also that, since α > 0, if all players follow the protocol then, with probability 1, everyone will eventually learn the secret. Finally, the fact that ~σ survives iterated deletion is immediate from the sufficient condition given by Halpern and Teague [10, Theorem 3.2] for a randomized protocol with unbounded running time to survive the iterated deletion process.

If in the protocol above we use n out of n secret sharing (that is, if the mediator chooses a polynomial of degree n − 1 = m − 1), then we have a strongly resilient protocol, provided that α is chosen appropriately. That is, we can essentially tolerate arbitrary coalitions.

Note that the choice of α here depends on the utilities of the agents. Implicitly, we are assuming that the mediator knows these utilities, so that it can choose α appropriately. Moreover, the expected running time of the protocol depends on α. We now show that if k < m ≤ n, the protocol can be modified so that the expected running time is 2 rounds. However, the modification still depends on the mediator knowing the players' utilities.

The argument above shows that coalition members will send their shares in phase 3. If the coalition members actually send incorrect values of ai^t rather than no value at all, however, the non-coalition members may have a problem. The modified protocol therefore lets each player verify the shares he receives. It requires that |F| > 1/β, where β is a security parameter determined below, and differs from the earlier protocol as follows:

• c is chosen so that Pr(c = 1) = 1/2, rather than α.
• In addition to sending each player i the value ht(i) at round t, the mediator also sends i a random element yij^t in F, one for each player j ≠ i. Finally, it sends j a pair (bij^t, cij^t) of field elements such that cij^t = bij^t ht(i) + yij^t (so that it sends i the pairs (bji^t, cji^t) for all j ≠ i).

We modify σi to the protocol σi′ in which i sends j the value yij^t in addition to ht(i). Note that if i modifies the value of ht(i) to h′, i must also modify the value of yij^t to y′ such that cij^t = bij^t h′ + y′; otherwise, i's lie will be caught. The probability of being able to guess an appropriate y′ is less than β.

The argument for k-resiliency proceeds much as that for Proposition 1. But note that in the case that c = 1, the non-coalition players will get the secret unless some coalition members lie about their shares and the lie is not detected. The probability that a single lie is not detected is at most β; if there are more lies, the probability of detection is even greater. Thus, if c = 1 and someone in the coalition lies, with probability at most β, i's payoff will be at most mi, and with probability at least 1 − β, i's payoff will be ui(N). If c = 0 and someone in the coalition lies, then with probability at least 1 − β, the lie will be detected, and the game will end. If the lie is not detected, the most that i can hope to get is mi. Thus, i's expected utility from being part of a coalition where someone lies is at most (1/2)(1 − β)ui(∅) + (1/2)(1 − β)ui(N) + βmi. So i would prefer its coalition to lie only if (1/2)(1 − β)ui(∅) + (1/2)(1 − β)ui(N) + βmi > ui(N).

Proposition 2. If β < max_{i∈N} (ui(N) − ui(∅))/(2mi − (ui(N) + ui(∅))) and k < m ≤ n, then (σ1′,...,σn′) is a k-resilient equilibrium; with probability 1, everyone will eventually learn the secret, and this will happen in expected time 2.

We can further reduce the expected running time by appropriately modifying β (i.e., by taking the field F sufficiently large) and taking Pr(c = 1) = 1/(1 + ε) for all ε > 0, giving an expected running time of 1 + ε for the protocol. In fact, (σ1,...,σn) is a k-resilient Nash equilibrium even if Pr(c = 1) = 1, that is, if the mediator sends player i the value f(i) in stage 0. By results of Halpern and Teague, however, this protocol does not survive iterated deletion. To get a practical mechanism, we must take Pr(c = 1) < 1.

There are many games with mediators that can deal with secret sharing. For example, the mediator can just send the secret to each of the players. The advantage of the game that we have just described is that it can be simulated by the players without a mediator using multiparty computation. The simulation proceeds as follows.

Assume again that the players are given signed shares. We actually have a sequence of multiparty computations, one for each round t. For round t, the input of player i to the multiparty computation consists of the following inputs, where a and b are parameters of the protocol chosen so that a/2^b ≤ min_{i∈N} (ui(N) − ui(∅))/(mi − ui(∅)) (intuitively, a/2^b is playing the role of α):

1. the (signed) secret share that i received initially;
2. a random bitstring ci^t of length b, uniformly distributed in {0, 1, ..., 2^b − 1};
3. a random degree m − 1 polynomial gi^t such that gi^t(0) = 0 and the coefficients of gi^t are uniformly distributed in the underlying field F;
4. a random bitstring bi of length ⌈log(|F|)⌉.

Let

  ct = 1 if ⊕_{i∈N} ci^t < a, and ct = 0 otherwise,

where ⊕ denotes bitwise exclusive or; let gt = g1^t + ··· + gn^t. The multiparty computation then does essentially what the mediator does in the previous algorithm. More precisely, let F^t be the function that, given the inputs above, does the following computation. It checks that the shares sent are correctly signed; if so, it computes the polynomial f that interpolates them. It also checks that gi^t(0) = 0 for each random polynomial. If both of these properties hold, then let ht = ct · f + gt, as before. The output of F^t is then (ht(1) ⊕ b1, ..., ht(n) ⊕ bn). Player i recovers ht(i) from his output using bi and then shares ht(i) with the other players, just as in σi.

A coalition member i will follow the protocol during the circuit evaluation of F^t for the same reasons that i follows it when playing with a mediator: deviation will result either in i being caught (if i does not send the correct signed share, does not send a share to some or to all, or sends an incompatible response), or will not affect the outcome (if i sends a compatible response), or will cause some players to obtain the wrong final shares and may cause all players to learn the wrong secret (if i sends an incorrect value during the circuit evaluation). The argument that players will share their secrets (i.e., that player i will send all the other players ht(i)), giving a k-resilient equilibrium, proceeds just as before.

The exact cryptographic assumptions we need depend in part on the protocol we consider. Since we do not wish to focus here on issues of cryptography, we say assuming cryptography to indicate that we assume players are computationally bounded and that enhanced trapdoor permutations [7] exist.³ (Of course, the assumption that the issuer signs all the shares with an unforgeable signature also requires cryptography; we show how to get rid of this assumption later.)

If we use cryptography (even given the assumptions above), we typically cannot perfectly implement a desired functionality F, since, for example, there is a small probability that the cryptography will be broken. For our protocols that use cryptography, we can typically make no guarantee as to what happens if the cryptography is broken. Thus, we need to weaken the notion of k-resilient mechanism for F somewhat to capture this small probability of "error".

Definition 2. (Γ, ~σ) is an ε-k-resilient mechanism for F if ~σ satisfies F but, for each coalition C such that |C| ≤ k, σC is not necessarily a group best response to σ−C, but it is an ε-best response: no one in the group can do more than ε better than they would using a best response. That is, for all C such that |C| ≤ k, for all τC ∈ SC, and all i ∈ C, we have ui(σC, σ−C) ≥ ui(τC, σ−C) − ε.

Intuitively, if ε is small, although players may be able to do better, it may not be worth their while to figure out how. An ε-1-resilient equilibrium is an ε-Nash equilibrium [21].

Theorem 1. Consider the m out of n secret sharing game. Suppose that players' utilities satisfy U1.
(a) If 3k < n, then, assuming U2′, there exists a practical k-resilient mechanism for m out of n secret sharing with an expected running time of 2 rounds.
(b) If 2k < n, then, assuming U2, there exists a mechanism that takes a parameter β and is a practical k-resilient mechanism for m out of n secret sharing with an expected running time of 2 rounds if β < max_{i∈N} (ui(N) − ui(∅))/(2mi − (ui(N) + ui(∅))).
(c) If k < n, then, assuming U2 and cryptography, for all ε > 0, there exists a mechanism without a mediator that takes parameter α and is a practical ε-k-resilient mechanism for m out of n secret sharing with an expected running time of O(1/α) rounds if α ≤ min_{i∈N} (ui(N) − ui(∅))/(mi − ui(∅)).

Note that Theorem 1(c) applies if m = n = 2 and k = 1; that is, for all ε > 0, there is a practical ε-1-resilient mechanism for 2 out of 2 secret sharing (assuming cryptography).
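The joint coin ct used in the multiparty simulation above can be sketched directly. This is our own minimal illustration; in particular, taking the coin to be 1 exactly when the XOR of the contributions, read as a b-bit integer, is below the threshold a is an assumption, chosen so that Pr[ct = 1] = a/2^b, the role the text assigns to α.

```python
import random

def joint_coin(n, a, b):
    """Shared coin for the simulation: each player contributes a uniform
    b-bit string, and the coin is 1 exactly when the XOR of all
    contributions, read as an integer, is below a, so Pr[c_t = 1] = a / 2**b.
    As long as even one contribution is uniform and independent, the XOR
    is uniform, so no strict subset of players can bias the coin."""
    contributions = [random.randrange(2 ** b) for _ in range(n)]
    x = 0
    for c in contributions:
        x ^= c
    return 1 if x < a else 0

# Empirical sanity check with hypothetical parameters a = 1, b = 2,
# i.e. alpha = 1/4:
trials = 20000
hits = sum(joint_coin(5, 1, 2) for _ in range(trials))
```

Because the coin is computed inside the multiparty computation, no player learns the other contributions before committing to his own, which is what prevents a coalition from steering ct.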
2.2] claim that there is no evaluation). The argument that players will share their se- practical mechanism for 2 out of 2 secret sharing. Their t crets (i.e., that player i will send all the other players h (i)) argument also suggests that there is no -1 resilient mech- gives a k-resilient equilibrium proceeds just as before. anism for secret sharing. It seems that this corollary is Thus, coalition members can be viewed as what is known 3 as “honest-but-curious” or passive adversaries. As is well Enhanced trapdoor permutations are a set of permutations known, we can do multiparty computation with such ad- such that, given a permutation fα in the set and an element x generated using an algorithm S, it is hard to compute versaries without cryptographic techniques if k < n/2 [7, −1 fα (x) even if the random coins used by S to generate x 3]; on the other hand, if n/2 ≤ k < n, it seems we must are known. If enhanced trapdoor permutations exist, then use the techniques of GMW [8], which use cryptography it is possible to do oblivious transfer and to sign secrets with and thus our results depend on the existence of oblivious signatures that cannot be forged. false. Halpern and Teague show correctly that a strategy in In the context of multiparty computation, we take outi(r) what they call A2, where a player sends his share to another to be 1 if player i correctly outputs the function value given player, will be eliminated by the iterated deletion process. the inputs in r, and 0 otherwise; with this change, U1, U2, The argument for Corollary 2.2 implicitly assumes that if and U20 are well defined. there is an equilibrium in the 2 out of 2 case, where a player learns the secret, then one of the strategies must be in A2, Theorem 2. Suppose that players’ utilities satisfy U1 and that is, a player must realize that he is sending his share to the players can compute f with a trusted mediator. the other player. 
But our protocol has the property that a 0 player does not know when he is sending a “useful” share (a) If 3k < n, then, assuming U2 , there exists a practi- to the other player (where a share is useful if α = 1).4 We cal k-resilient mechanism without a mediator for the remark that Gordon and Katz [9] also describe a mechanism multiparty computation of f with an expected running 1- mechanism for 2 out of 2 secret sharing (although they time of 2 rounds. do not explicitly mention the need for ). (b) If 2k < n, then assuming U2, there exists a mecha- It is also worth noting that, unlike the protocol given by nism without a mediator that takes a parameter β and Halpern and Teague, our protocol has the property that the is a practical k-resilient mechanism for the multiparty secret issuer has to issue the shares only once, at the begin- computation of f with an expected running time of 2 ui(N)−ui(∅) ning of the protocol. Gordon and Katz [9] and Lysyanskaya rounds if β < maxi∈N 2mi−(ui(N)+ui(∅)) . and Triandopoulos [17] also describe protocols for rational (c) If k < n, then, assuming U2 and cryptography, for all secret sharing with this property.  > 0, there exists a practical -k-resilient mechanism Up to now we have assumed that initial shares given to for the multiparty computation of f with an expected ui(N)−ui(∅) each player are signed by the issuer using unforgeable sig- running time of O(1/α) rounds if α ≤ mini∈N mi−ui(∅) . natures. This means that a rational player will always send his true shares to the mediator at stage 0. We can remove this assumption using ideas from check vectors and the ICP 5. TOLERATING PLAYERS WITH protocol [3], much as in the mechanism for the case that “UNEXPECTED” UTILITIES m ≤ n − k. With a mediator, the issuer sends the mediator In Theorem 1, as well as Propositions 1, 2, and 3, we as- the pair (bi, ci) so it can verify i’s signature; without a me- sumed that all players have utilities that satisfy U1 and U2. 
diator, the issuer sends a different verification pair (bij , cij ) to each agent j 6= i. Using verification in this way, the prob- However, in large systems, it seems almost invariably the ability of a lie going undetected is 1/|F |. We can modify case that there will be some fraction of users who do not re- the bounds on α and in Proposition 1 and Theorem 1(c) spond to incentives the way we expect. (Certainly this seems and on β in Proposition 2 and Theorem 1(b) to take this to be the case with students in large undergraduate classes!) into account; we leave details to the reader. Note that if This is an issue that arises in practice. For example, in a k 0, there exists a practical -(k,t)- robust mechanism for multiparty computation with an Intuitively, this says that, for all C such that |C| ≤ k, σ C expected running time of 2 rounds. continues to be a best response no matter what strategy τ T (d) If 2t+k < n, then, assuming U2 and cryptography, for the players in a subset T of size t use, as long as the remain- all > 0, there exists a practical -(k,t)-robust mecha- ing players (i.e., those in N − (C ∪ T )), continue to use their nism for multiparty computation with an expected run- component of ~σ). Note that a k-resilient strategy is a (k, 0)- ui(N)−ui(∅) ning time of O(1/α) rounds if α ≤ min ∈ . robust strategy and a t-immune strategy is a (0,t)-robust i N mi−ui(∅) strategy.5 We can define -(k,t)-robust equilibrium mech- anisms in the obvious way; we leave details to the reader. Proof Sketch. For parts (a) and (b), the secret sharing A modification of our earlier techniques gives a general- stage of protocol σ00 is similar to the protocol in Theorem 1, ization of Theorem 1. Roughly speaking, we replace the except that we now simulate a mediator that continues play- condition k
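To make the share-blinding algebra concrete, here is a small Python sketch of one round in the style described above: f encodes the secret, each g_i is a random polynomial with g_i(0) = 0, and h = c·f + Σ g_i, so m shares of h determine c·s. The tiny prime field, the treatment of c as a single bit, and all function names are our own illustrative choices (and the XOR masking with the pads b_i is omitted); this is a sketch, not the paper's full protocol.

```python
import random

P = 257  # tiny illustrative prime field; a real protocol uses a large one

def poly_eval(coeffs, x):
    # Horner evaluation (constant term first) mod P
    y = 0
    for c in reversed(coeffs):
        y = (y * x + c) % P
    return y

def random_poly(deg, const):
    # random polynomial of degree `deg` with prescribed constant term
    return [const] + [random.randrange(P) for _ in range(deg)]

def interpolate_at_zero(points):
    # Lagrange interpolation at x = 0 over GF(P)
    total = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def round_shares(secret, n, m, c):
    # h = c*f + g_1 + ... + g_n with every g_i(0) = 0, so h(0) = c * secret:
    # the round reveals the secret exactly when c = 1.
    f = random_poly(m - 1, secret)
    gs = [random_poly(m - 1, 0) for _ in range(n)]

    def h(x):
        return (c * poly_eval(f, x) + sum(poly_eval(g, x) for g in gs)) % P

    return [(i, h(i)) for i in range(1, n + 1)]

shares = round_shares(42, n=5, m=3, c=1)
print(interpolate_at_zero(shares[:3]))  # any 3 of the 5 shares recover 42
```

When c = 0 the same reconstruction yields 0, so a player who forwards a share cannot tell from the share alone whether this is the round that reveals the secret.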

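Since the bounds on the parameters α and β in Theorems 1(c) and 2(b) are simple functions of the utilities, they can be computed directly. The helper below is our own illustrative code (the names u_all, u_none, and m stand for u_i(N), u_i(∅), and m_i; it assumes m_i strictly exceeds the other payoffs, so no denominator vanishes):

```python
def alpha_max(u_all, u_none, m):
    # Theorem 1(c): alpha <= min_i (u_i(N) - u_i(empty)) / (m_i - u_i(empty))
    return min((a - e) / (mi - e) for a, e, mi in zip(u_all, u_none, m))

def beta_threshold(u_all, u_none, m):
    # Theorem 2(b), as stated: beta < max_i
    #   (u_i(N) - u_i(empty)) / (2*m_i - (u_i(N) + u_i(empty)))
    return max((a - e) / (2 * mi - (a + e)) for a, e, mi in zip(u_all, u_none, m))

# Two players who value learning the secret at 1, learning nothing at 0,
# and being the only one to learn it at 2:
print(alpha_max([1, 1], [0, 0], [2, 2]))       # (1-0)/(2-0) = 0.5
print(beta_threshold([1, 1], [0, 0], [2, 2]))  # (1-0)/(4-1) = 1/3
```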
^5 In fact, our results hold for an even stronger form of robustness: the payoffs of the agents not in T can be taken to be independent of the actions of the agents in T.

^6 U2′(t) allows us to continue thinking of out_i(r) as binary—either 0 or 1. Suppose that we instead assume that out_i(r) takes arbitrary values in the interval [0, 1], where, intuitively, the closer i's output is to the true function value in run r (given the private inputs in r), the closer out_i(r) is to 1. Then we can prove our results as long as, whenever i outputs a t-variant of f(~x) in r, u_i(r) is greater than i's expected utility from just guessing the function value.

…that we can simulate a mediator, so that if there is a solution with a mediator, there is a solution without one. This question has been studied in both the economics literature and the computer science literature. In the economics literature, Ben-Porath [4] showed that simulation of a Nash equilibrium is possible if there is a "punishment" strategy. Intuitively, a punishment strategy provides a threat that players can use to force compliance with the simulation. For example, in our secret-sharing game the threat is that players will stop playing if they detect misbehavior. Since we assume that all players prefer to get the secret than not to get it (no matter how many other players also get the secret), this is indeed a punishment. Heller [11] extends Ben-Porath's results to allow for coalitions.

In the computer science literature, Izmalkov, Micali, and Lepinski [12], generalizing previous work of Lepinski et al. [16], show that simulation is possible provided that we can use a very strong primitive called an envelope and an entity called a ballot-box. A ballot-box is essentially an honest dealer that can do specific limited actions (like randomly permuting envelopes). Envelopes have two operations: Send(R, m), which corresponds to the action where the sender puts m in an envelope, writes his name on the envelope, and hands it to R; and Receive(S), which corresponds to the action where the receiver publicly opens all envelopes from S and privately reads their contents. As Lepinski, Micali, and Shelat [15] observe, envelopes cannot be implemented even over broadcast channels. Thus, a solution that depends on envelopes is inadequate when, for example, players are playing a game over the Internet.

We show that we can simulate a (k, t)-robust equilibrium using multiparty computation, provided that k and t satisfy the same bounds as in Theorem 3. Moreover, we do not always need to assume the existence of a punishment strategy; provided that k and t satisfy the appropriate bounds, we can replace the use of a punishment strategy by using Reed-Solomon decoding or (assuming cryptography) by using verifiable secret sharing and the GMW compiler. To make this precise, we first formalize the notion of a (k, t)-punishment strategy.

Definition 5. A joint strategy ~ρ is a (k, t)-punishment strategy with respect to ~σ if, for all disjoint C, T, P ⊆ N such that |C| ≤ k, |T| ≤ t, and |P| > t, for all ~τ_T ∈ S_T, all ~φ_C ∈ S_C, and all i ∈ C, we have

    u_i(~σ_{−T}, ~τ_T) > u_i(~σ_{N−(C∪T∪P)}, ~φ_C, ~τ_T, ~ρ_P).

Intuitively, ~ρ is a (k, t)-punishment strategy with respect to ~σ if, for any coalition C of at most k players and any set T of nonstandard players, as long as more than t players use the punishment strategy and the remaining players play their component of ~σ, all the players in C are worse off than they would be had everyone not in T played ~σ. The idea is that the threat of having more than t players use their component of ~ρ is enough to stop players in C from deviating from ~σ.

Given this definition, we give a complete characterization of equilibria that can be simulated using what economists call cheap talk. In the language of distributed computing, we assume that each pair of agents has a secure private channel, and can use communication over this channel to facilitate reaching an equilibrium. Formally, Γ_CT is a private-channel Cheap-Talk (CT) extension of the game Γ if the mediator in Γ_CT acts as a private channel between every pair of players. Specifically, we assume that the inputs that each player i sends to the mediator d in phase 1 of a stage always have the form ((m_1, i_1), …, (m_k, i_k)); such an input should be interpreted as "send message m_j to i_j", for j = 1, …, k. In phase 2 of that stage, the mediator simply relays these messages to the appropriate recipients. We omit the formal details here. Although we present our protocols as if there are private channels between the players, for the results where we use cryptography (parts (c) and (d) of Theorem 4), it suffices that there are public channels.

We can now get a generalization of Theorem 3. The U2 requirement is replaced by the assumption that there is a punishment strategy; we do not need this assumption for the cases where U2′ suffices. (Recall that U1 and U2 imply that not sending messages is a punishment strategy.) Note that perhaps the right way to think about Theorem 3(a) (and part (a) of all our other theorems) is that we have a mechanism that works for a family of games that have the same game tree except that the utilities of the outcomes may differ (although they always satisfy U1 and U2). Formally, we say that a game Γ′ is a utility variant of a game Γ if Γ′ and Γ have the same game tree, but the utilities of the players may be different in Γ and Γ′.

Given a game Γ, let u_i^h(~ρ) be i's best-case payoff if at least h players use strategy ρ_j (and the remaining players use an arbitrary strategy); let m_i now denote the maximum payoff that i receives in Γ. Like the other simulation results in the literature, our simulation results apply only to normal-form games, which can be viewed as games where each player moves only once, and the moves are made simultaneously.

Theorem 4. Suppose that Γ is an n-player normal-form game and that ~σ is a (k, t)-robust strategy for a game Γ_d with a mediator d based on Γ.
(a) If 3(t + k) < n, then there exists a strategy ~σ′ and a CT extension Γ_CT of Γ such that, for all utility variants Γ′ of Γ, if ~σ is a (k, t)-robust strategy for Γ′_d, then (~σ′, Γ′_CT) is a (k, t)-robust mechanism implementing (~σ, Γ′_d), where Γ′_CT is the utility variant of Γ_CT corresponding to Γ′_d. Moreover, ~σ′ has an expected running time of 2 rounds.
(b) If 3t + 2k < n and there exists a (k, t)-punishment strategy ~ρ with respect to ~σ, then there exists a strategy ~σ′ that takes a parameter β and a CT extension Γ_CT of Γ such that if β < max_{i∈N} (u_i(~σ) − u_i(~ρ))/(2m_i − (u_i(~σ) + u_i(~ρ))), then (~σ′, Γ_CT) is a (k, t)-robust mechanism implementing (~σ, Γ_d), and ~σ′ has an expected running time of 2 rounds.
(c) If 2(t + k) < n, then, assuming cryptography, for all ε > 0, there exists a strategy ~σ′ and a CT extension Γ_CT of Γ such that if ~σ is a (k, t)-robust strategy for Γ_d, then (~σ′, Γ_CT) is an ε-(k, t)-robust mechanism that implements (~σ, Γ_d), and ~σ′ has an expected running time of 2 rounds.
(d) If 2t + k < n and there exists a (k, t)-punishment strategy ~ρ with respect to ~σ, then, assuming cryptography, for all ε > 0, there exists a strategy ~σ′ that takes a parameter α such that if α ≤ min_{i∈N} (u_i(N) − u_i(∅))/(m_i − u_i(∅)), then (~σ′, Γ_CT) is an ε-(k, t)-robust mechanism implementing (~σ, Γ_d), and ~σ′ has an expected running time of O(1/α) rounds.

We remark that if we assume that the strategy ~σ survives iterated deletion in Γ_d, then we can assume that ~σ′ does as well. Note that in part (a) of Theorem 4 we do not need to know the utilities; in parts (b), (c), and (d) we do. In parts (a) and (c), we do not need to assume a punishment strategy; in parts (b) and (d), we do.
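The remark above that Reed-Solomon decoding can substitute for a punishment strategy rests on a unique-decoding fact: n evaluations of a degree-k polynomial, at most t of them corrupted, determine the polynomial whenever n > 2t + k. The brute-force check below is our own illustrative sketch (an actual protocol would use an efficient Reed-Solomon decoder [13]); it searches for a polynomial consistent with all but t of the shares:

```python
from itertools import combinations

P = 97  # small illustrative prime field

def poly_eval(coeffs, x):
    # Horner evaluation (constant term first) mod P
    y = 0
    for c in reversed(coeffs):
        y = (y * x + c) % P
    return y

def interp_eval(points, x):
    # evaluate at x the polynomial interpolating `points`, mod P
    total = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = (num * (x - xj)) % P
                den = (den * (xi - xj)) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def decode_secret(shares, k, t):
    # With n > 2t + k, at most one degree-k polynomial can agree with
    # n - t of the shares; find it by trying every subset of k+1 shares.
    n = len(shares)
    for subset in combinations(shares, k + 1):
        if sum(interp_eval(subset, x) == y for x, y in shares) >= n - t:
            return interp_eval(subset, 0)  # the secret is the value at 0
    return None

coeffs = [5, 17, 3]  # degree-2 polynomial; the secret is its value at 0
shares = [(x, poly_eval(coeffs, x)) for x in range(1, 8)]  # n = 7 > 2*2 + 2
shares[0] = (1, (shares[0][1] + 1) % P)   # corrupt two shares arbitrarily
shares[4] = (5, (shares[4][1] + 40) % P)
print(decode_secret(shares, 2, 2))        # recovers 5 despite the corruption
```

Any wrong candidate polynomial can agree with at most k of the honest shares plus the t corrupted ones, which is fewer than n − t, so only the true polynomial passes the agreement test.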
Ben-Porath [4] essentially proved the special case of part (b) where t = 0 and k = 1 (i.e., where ~σ is a Nash equilibrium). Theorem 4(a) implies that any Nash equilibrium can be simulated if there are at least four players, even without a punishment strategy, and thus significantly strengthens Ben-Porath's result. Heller [11] extends Ben-Porath's result to arbitrary k, proving the special case of Theorem 4(b) with t = 0. Heller also claims a matching lower bound. While we believe that a matching lower bound does hold (see Conjecture 1), Heller's lower-bound argument seems to have some gaps. Specifically, he seems to assume that all the randomization (if any) in the implementation happens after messages have been exchanged, rather than allowing the message exchange itself to depend on the randomization.

We believe we have tight lower bounds corresponding to Theorem 4. In particular, we believe we can prove the following, although details remain to be checked.

Conjecture 1.
(a) If n = 3(t + k) > 0, then there exists an n-player game Γ and a strategy ~σ such that ~σ is a (k, t)-robust equilibrium for a game Γ_d with a mediator d based on Γ, but there is no strategy ~σ′ and CT extension Γ_CT such that (~σ′, Γ_CT) is a (k, t)-robust mechanism implementing (~σ, Γ_d).
(b) If n = 3t + 2k > 0, then there exists an n-player game Γ and strategies ~σ and ~ρ such that ~σ is a (k, t)-robust strategy with respect to a (k, t)-punishment strategy ~ρ for a game Γ_d with a mediator d based on Γ, but there is no strategy ~σ′ and CT extension Γ_CT such that (~σ′, Γ_CT) is a (k, t)-robust mechanism implementing (~σ, Γ_d).
(c) If n = 2t + 2k > 0, then, assuming cryptography, for any ε > 0, there exists an n-player game Γ and strategies ~σ and ~ρ such that ~σ is a (k, t)-robust strategy with respect to a (k, t)-punishment strategy ~ρ for a game Γ_d with a mediator d based on Γ, but there is no strategy ~σ′ and CT extension Γ_CT such that (~σ′, Γ_CT) is an ε-(k, t)-robust mechanism implementing (~σ, Γ_d).
(d) If n = 2t + k > 0, then, assuming cryptography, for any ε > 0, there exists an n-player game Γ and a strategy ~σ such that ~σ is a (k, t)-robust strategy for a game Γ_d with a mediator d based on Γ, but there is no strategy ~σ′ and CT extension Γ_CT such that (~σ′, Γ_CT) is an ε-(k, t)-robust mechanism implementing (~σ, Γ_d).

Our sketch proof of Conjecture 1 uses ideas from the Byzantine agreement literature. For example, the proof of part (a) involves constructing a game that essentially captures Byzantine agreement; we then appeal to the fact that Byzantine agreement cannot be done without cryptography if there are more than n/3 Byzantine processes [14]. Similarly, part (c) uses the fact that uniform agreement, where the faulty processes have to decide on the same value as the nonfaulty processes (if they decide on anything at all), cannot be done with generalized omission failures (where processes may both fail to send messages and fail to receive messages) if there are more than n/2 faulty processes [20]. We also need to use ideas from the GMW compiler to force processes to behave as if they are only suffering from omission failures, rather than Byzantine failures. We hope to complete the proof and report on the details shortly.

7. REFERENCES
[1] E. Adar and B. Huberman. Free riding on Gnutella. First Monday, 5(10), 2000.
[2] R. Aumann. Acceptable points in general cooperative n-person games. In Contributions to the Theory of Games, Annals of Mathematical Studies IV, pages 287–324, 1959.
[3] M. Ben-Or, S. Goldwasser, and A. Wigderson. Completeness theorems for non-cryptographic fault-tolerant distributed computation. In Proc. 20th ACM Symp. Theory of Computing, pages 1–10, 1988.
[4] E. Ben-Porath. Cheap talk in games with incomplete information. J. Economic Theory, 108(1):45–71, 2003.
[5] B. D. Bernheim, B. Peleg, and M. Whinston. Coalition-proof Nash equilibrium: Concepts. J. Economic Theory, 42(1):1–12, 1989.
[6] F. M. Forges. An approach to communication equilibria. Econometrica, 54(6):1375–85, 1986.
[7] O. Goldreich. Foundations of Cryptography, Vol. 2. Cambridge University Press, 2004.
[8] O. Goldreich, S. Micali, and A. Wigderson. How to play any mental game. In Proc. 19th ACM Symp. Theory of Computing, pages 218–229, 1987.
[9] D. Gordon and J. Katz. Rational secret sharing, revisited. Unpublished manuscript, 2006.
[10] J. Y. Halpern and V. Teague. Rational secret sharing and multiparty computation: extended abstract. In Proc. 36th ACM Symp. Theory of Computing, pages 623–632, 2004.
[11] Y. Heller. A minority-proof cheap-talk protocol. Unpublished manuscript, 2005.
[12] S. Izmalkov, S. Micali, and M. Lepinski. Rational secure computation and ideal mechanism design. In Proc. 46th IEEE Symp. Foundations of Computer Science, pages 585–595, 2005.
[13] J. Justesen. On the complexity of decoding Reed-Solomon codes (corresp.). IEEE Trans. on Information Theory, 22(2):237–238, 1976.
[14] L. Lamport, R. Shostak, and M. Pease. The Byzantine Generals problem. ACM Trans. on Programming Languages and Systems, 4(3):382–401, 1982.
[15] M. Lepinski, S. Micali, and A. Shelat. Collusion-free protocols. In Proc. 37th ACM Symp. Theory of Computing, pages 543–552, 2005.
[16] M. Lepinski, S. Micali, C. Peikert, and A. Shelat. Completely fair SFE and coalition-safe cheap talk. In Proc. 23rd ACM Symp. Principles of Distributed Computing, pages 1–10, 2004.
[17] A. Lysyanskaya and N. Triandopoulos. Rationality and adversarial behavior in multi-party computation. Unpublished manuscript, 2006.
[18] D. Malkhi, N. Nisan, B. Pinkas, and Y. Sella. Fairplay—a secure two-party computation system. In Proc. 13th USENIX Security Symposium, pages 287–302, 2004.
[19] D. Moreno and J. Wooders. Coalition-proof equilibrium. Games and Economic Behavior, 17(1):80–112, 1996.
[20] G. Neiger and S. Toueg. Automatically increasing the fault-tolerance of distributed algorithms. Journal of Algorithms, 11(3):374–419, 1990.
[21] M. J. Osborne and A. Rubinstein. A Course in Game Theory. MIT Press, Cambridge, Mass., 1994.
[22] T. Rabin and M. Ben-Or. Verifiable secret sharing and multiparty protocols with honest majority. In Proc. 21st ACM Symp. Theory of Computing, pages 73–85, 1989.
[23] A. Shamir. How to share a secret. Commun. ACM, 22(11):612–613, 1979.
[24] Y. Shoham and M. Tennenholtz. Non-cooperative computing: Boolean functions with correctness and exclusivity. Theoretical Computer Science, 343(1–2):97–113, 2005.
[25] A. Yao. Protocols for secure computation (extended abstract). In Proc. 23rd IEEE Symp. Foundations of Computer Science, pages 160–164, 1982.