Mechanism Design in Large Games: Incentives and Privacy∗

[Extended Abstract]†

Michael Kearns
Department of Computer and Information Science
University of Pennsylvania
Philadelphia, PA

Mallesh M. Pai
Department of Economics
University of Pennsylvania
Philadelphia, PA

Aaron Roth
Department of Computer and Information Science
University of Pennsylvania
Philadelphia, PA

Jonathan Ullman
School of Engineering and Applied Sciences
Harvard University
Cambridge, MA

ABSTRACT

We study the problem of implementing equilibria of complete information games in settings of incomplete information, and address this problem using "recommender mechanisms." A recommender mechanism is one that does not have the power to enforce outcomes or to force participation; rather, it only has the power to suggest outcomes on the basis of voluntary participation. We show that despite these restrictions, recommender mechanisms can implement equilibria of complete information games in settings of incomplete information, under the condition that the game is large—i.e. that there are a large number of players, and any player's action affects any other's payoff by at most a small amount.

Our result follows from a novel application of differential privacy. We show that any algorithm that computes a correlated equilibrium of a complete information game while satisfying a variant of differential privacy—which we call joint differential privacy—can be used as a recommender mechanism while satisfying our desired incentive properties. Our main technical result is an algorithm for computing a correlated equilibrium of a large game while satisfying joint differential privacy.

Although our recommender mechanisms are designed to satisfy game-theoretic properties, our solution ends up satisfying a strong privacy property as well. No group of players can learn "much" about the type of any player outside the group from the recommendations of the mechanism, even if these players collude in an arbitrary way. As such, our algorithm is able to implement equilibria of complete information games without revealing information about the realized types.

Categories and Subject Descriptors

Theory of Computation [Algorithmic Game Theory and Mechanism Design]: Algorithmic Mechanism Design

Keywords

Game Theory, Mechanism Design, Large Games, Differential Privacy

∗ We gratefully acknowledge the support of NSF Grant CCF-1101389. We thank Nabil Al-Najjar, Eduardo Azevedo, Eric Budish, Tymofiy Mylovanov, Andy Postlewaite, Al Roth and Tim Roughgarden for helpful comments and discussions.
† A full version of this working paper appears on the arXiv preprint site [KPRU13].

1. INTRODUCTION

A useful simplification common in game theory is the model of games of complete (or full) information. Informally, in a game of complete information, each player knows with certainty the utility function of every other player. In games of complete information, there are a number of solution concepts at our disposal, such as Nash equilibrium and correlated equilibrium. Common to these is the idea that each player is playing a best response against his opponents—because of randomness, players might be uncertain about what actions their opponents are taking, but they understand their opponents' incentives exactly.

In many situations, it is unreasonable to assume that players have exact knowledge of each other's utilities. For example, players may have few means of communication outside of the game, or may regard their type as valuable private information.

403 formation, which are commonly modeled as Bayesian games that in such games there exists a recommender mechanism in which the players’ utilities functions, or types, are drawn such that for any prior on agent types, it is an approximate from a commonly known prior distribution. The most com- Bayes-Nash equilibrium for every agent in the game to opt mon in such games is Bayes-Nash Equilib- in to the proxy, and then follow its recommended action. rium. This stipulates that every player i, as a function of his Moreover, when players do so, the resulting play forms an type, plays an action that maximizes his expected payoff, in approximate correlated equilibrium of the full information expectation both over the random draw of the types of his game. The approximation error we require tends to 0 as the opponents from the prior distribution and over the possible number of players grows. randomization of the other players. Unsurprisingly, equilibrium welfare can suffer in games 1.1 Overview of Techniques and Results of incomplete information, because coordination becomes A tempting approach is to use the following form of proxy: more difficult amongst players who do not know each other’s The proxy accepts a report of each agent’s type, which de- types. One way to measure the quality of equilibria is via the fines an instance of a full information game. It then com- “”—how much worse the social welfare can putes a correlated equilibrium of the full information game, be in an equilibrium , as opposed to the welfare- and suggests an action to each player which is a draw from maximizing outcome. The price of anarchy can depend this correlated equilibrium. By definition of a correlated significantly on the notion of equilibrium. For example, equilibrium, if all players opt into the proxy, then they can 1 Roughgarden [Rou12] notes that even smooth games that do no better than subsequently following the recommended have a constant price of anarchy under full information so- action. However, this proxy does not solve the problem, as lution concepts (e.g. Nash or correlated equilibrium) can it may not be in a player’s best interest to opt in, even if the have an unboundedly large price of anarchy under partial in- other n 1 players do opt in! Intuitively, by opting out, the formation solution concepts (e.g. Bayes-Nash Equilibrium). player can− cause the proxy to compute a correlated equilib- Therefore, given a game of partial information, where all rium of the wrong game, or to compute a different correlated that we can predict is that players will choose some Bayes- equilibrium of the same game!2 The problem is an instance Nash Equilibrium (if even that), it may be preferable to im- of the well known equilibrium-selection problem—even in a plement an equilibrium of the complete information game game of full information, different players may disagree on defined by the actual realized types of the players. Doing their preferred equilibrium, and may have trouble coordi- so would guarantee welfare bounded by the price of anarchy nating. The problem is only more difficult in settings of of the full information game, rather than suffering from the incomplete information. In our case, by opting out of the large price of anarchy of the partial information setting. 
In mechanism, a player can have a substantial affect on the a smooth game, we would be just as happy implementing a computed equilibrium, even if each player has only small correlated equilibrium as a Nash equilibrium, since the price affect on the utilities of other players. of anarchy is no worse over correlated equilibria. Our solution is to devise a means of computing corre- In this paper we ask whether it is possible to help coordi- lated equilibria such that any single player’s reported type nate on an equilibrium of the realized full information game to the algorithm only has a small effect on the distribution using a certain type of proxy that we call a “recommender of suggested actions to all other players. The precise no- mechanism.” That is, we augment the game with an addi- tion of “small effect” that we use is a variant of the well tional option for each player to use a proxy. If players opt in studied notion of differential privacy. It is not hard to see to using the proxy, and reveal their type to the proxy, then that computing an equilibrium of even a large game is not it will suggest some action for them to take. However, play- ers may also simply opt out of using the proxy and play the 2As a simple example, consider a large number n of people original game using any they choose. We make the who must each choose whether to go to the beach (B) or assumption that if players use the proxy, then they must re- mountains (M). People privately know their types— each port their type truthfully (or, alternatively, that the proxy person’s utility depends on his own type, his action, and the has the ability to verify a player’s type and punish those fraction of other people p who go to the beach. A Beach type gets a payoff of 10p if he visits the beach, and 5(1 p) who report dishonestly). However, the proxy has very lim- if he visits the mountain. A mountain type gets a payoff− ited power in other respects, because it does not have the 5p from visiting the beach, and 10(1 p) from visiting the ability to modify payoffs of the game (i.e. make payments) mountain. Note that the game is ‘insensitive’− (an agent’s or to enforce that its recommendations be followed. visit decision has a small impact on others’ payoffs). Fur- Our main result is that it is indeed possible to implement ther, note that “everyone visits beach” and “everyone visits approximate correlated equilibria of the realized full infor- mountain” are both equilibria of the game, regardless of the realization of types. Consider the mechanism that attempts mation game using recommender mechanisms, assuming the to implement the following social choice rule—“if the num- original game is “large”. Informally, a game is large if there ber of beach types is less than half the population, send are many players and that each player has individually only everyone to the beach, and vice versa.” It should be clear a small affect on the utility of any other player. We show that if mountain types are just in the majority, then each mountain type has an incentive to opt out of the mechanism, 1Of particular interest to us are smooth games, defined by and vice versa. As a result, even though the game is “large” Roughgarden [Rou09]. 
Almost all known price of anarchy and agents’ actions do not affect others’ payoffs significantly, bounds (including those for the well studied model of traf- simply computing equilibria from reported type profiles does fic routing games) are bounds on smooth games, and many not in general lead to even approximately truthful mecha- are quite good. Although price of anarchy bounds are typi- nisms. This is a general phenomenon that is not specific to cally proven for exact Nash equilibria of the full information our example. Finding an exact correlated equilibrium sub- games, in smooth games, the price of anarchy bounds extend ject to any objective is a linear programming problem, and without loss to (and even beyond) approximate correlated in general small changes in the objective (or constraints) of equilibria, again of the full information game. an LP can lead to wild changes in its solution.
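To make the incentive failure in footnote 2 concrete, the following minimal Python sketch (our illustration, not part of the paper) instantiates the beach/mountain game and the naive majority rule; the payoff numbers are those of the footnote, while the population size, function names, and printout are arbitrary choices of ours.

    # Sketch of footnote 2's beach/mountain example: under the naive social
    # choice rule, a mountain type in a bare majority gains by opting out.

    def payoff(own_type, own_action, p_beach):
        """Payoff given own type, own action, and fraction p_beach of others at the beach."""
        if own_type == "beach":
            return 10 * p_beach if own_action == "B" else 5 * (1 - p_beach)
        return 5 * p_beach if own_action == "B" else 10 * (1 - p_beach)

    def naive_rule(reports):
        """Footnote 2's rule: if beach types are less than half of the
        reports, send everyone to the beach, and vice versa."""
        n_beach = sum(1 for t in reports if t == "beach")
        return "B" if n_beach < len(reports) / 2 else "M"

    types = ["mountain"] * 501 + ["beach"] * 500  # mountain types just in the majority

    # Everyone opts in: 500 beach reports < 1001/2, so all are sent to the
    # beach; a mountain type who follows earns payoff("mountain", "B", 1.0) == 5.
    print(naive_rule(types))       # "B"

    # One mountain type opts out: 500 beach reports are no longer less than
    # half of the remaining 1000, so the rest are sent to the mountain; the
    # deviator joins them and earns payoff("mountain", "M", 0.0) == 10.
    print(naive_rule(types[1:]))   # "M"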

It is not hard to see that computing an equilibrium of even a large game is not possible under the standard constraint of differential privacy, because although agents' actions have only a small effect on the utilities of other players in large games, they can have a large effect on their own utility functions. Thus, it is not possible to privately announce a best response for player i while protecting the privacy of i's type. Instead, we introduce a variant which we call joint differential privacy, which requires that simultaneously for every player i, the joint distribution on the suggested actions to all players j ≠ i be differentially private in the type of agent i. We show that a proxy mechanism which calculates an α-approximate correlated equilibrium of the game induced by players' reported types, under the constraint of ε-joint differential privacy, makes it an (ε + α)-approximate Bayes-Nash equilibrium for players to opt into the proxy, and then follow their suggested action, as desired.

Our main technical result is an instantiation of this plan: a pair of algorithms for computing α-approximate correlated equilibria in large games, such that we can take the approximation parameter ε + α tending to zero. The first algorithm is efficient, but has a suboptimal dependence on the number of actions k in the game. The other algorithm is inefficient, but has a nearly optimal dependence on k. Both have an optimal dependence on the number of players n in the game, which we show by exhibiting a matching lower bound.

We introduce joint differential privacy, large games, and our game theoretic solution concepts in Section 2. In Section 3, we formally introduce our notion of a proxy, and state our main results. Formal proofs are deferred to the full working paper version available on the arXiv preprint site [KPRU13].

1.2 Related Work and Discussion

Market and Mechanism Design. Our work is related to the large body of literature on mechanism/market design in "large games," which uses the large number of agents to provide mechanisms which have good incentive properties, even when the small market versions do not. It stretches back to [RP76], who showed that market (Walrasian) equilibria are approximately strategy proof in large economies. More recently, [IM05], [KP09], and [KPR10] have shown that various two-sided matching mechanisms are approximately strategy proof in large markets. There are similar results in the literature for one-sided matching markets, market economies, and double auctions. The most general result is that of [AB11], who design incentive compatible mechanisms for large economies that satisfy a smoothness assumption. While we only allow agents to opt in/opt out rather than mis-report, we do not assume any such smoothness condition. Further, the literature on mechanism design normally gives the mechanism the power to "enforce" actions, while here our mechanism can only "recommend" actions.

Our work is also related to mediators in games [MT03, MT09]. This line of work aims to modify the equilibrium structure of full information games by introducing a mediator, which can coordinate agent actions if they choose to opt in to using the mediator. Mediators can be used to convert Nash equilibria into dominant strategy equilibria [MT03], or to implement equilibria that are robust to collusion [MT09]. Our notion of a recommender mechanism is related, but is even weaker than that of a mediator. For example, our mechanisms do not need the power to make payments [MT03], or the power to enforce suggested actions [MT09]. Our mediators are thus closer to the communication devices in the "communication equilibria" of Forges [For86]—that work investigates the set of achievable payoffs via such mediators rather than how to design one, which we do here. It also does not allow players to opt out of using the mediator.

Large Games. Our results hold under a "largeness condition," i.e. a player's action affects the payoff of all others by a small amount. These are closely related to the literature on large games, see e.g. [ANS00] or [Kal04]. There has been recent work studying large games using tools from theoretical computer science (but in this case, studying robustness of equilibrium concepts)—see [GR08, GR10].

Differential Privacy. Differential privacy was first defined by [DMNS06], and is now the standard privacy "solution concept" in the theoretical computer science literature. It quantifies the worst-case harm that can befall an individual from allowing his data to be used in a computation, as compared to if he did not provide his data. There is by now a very large literature on differential privacy; readers can consult [Dwo08] for a more thorough introduction to the field. [MT07] were the first to observe that a differentially private algorithm is also approximately truthful. This line of work was extended by [NST12] to give mechanisms in several special cases which are exactly truthful, by combining private mechanisms with non-private mechanisms which explicitly punish non-truthful reporting. [HK12] showed that the mechanism of [MT07] (the "exponential mechanism") is in fact maximal in distributional range, and so can be made exactly truthful with the addition of payments. This immediate connection between privacy and truthfulness does not carry over to the notion of joint differential privacy that we study here, but as we show, it is regained if the object that we compute privately is an equilibrium of the underlying game.

Another interesting line of work considers the problem of designing truthful mechanisms for agents who explicitly experience a cost for privacy loss as part of their utility function [CCK+13, NOS12, Xia13]. The main challenge in this line of work is to formulate a reasonable model for how agents experience cost as a function of privacy. We remark that the approaches taken in the former two can also be adapted to work in our setting, for agents who explicitly value privacy. [Gra12] studies the problem of implementation for various assumptions about players' preferences for privacy and permissible game forms. A related line of work, which also takes into account agent values for privacy, considers the problem of designing markets by which analysts can procure private data from agents who explicitly experience costs for privacy loss [FL12, GR11, LR12, RS12]. See [PR13] for a survey.

Finally, our results rely on several technical tools from the differential privacy literature. The most well studied problem is that of accurately answering numeric-valued queries on a data set. A basic result of [DMNS06] is that any low sensitivity query (i.e. one for which the addition or removal of a single entry can change the value of the query by at most 1) can be answered efficiently and (ε-differentially) privately while introducing only O(1/ε) error. Another fundamental result of [DKM+06, DRV10] is that differential privacy composes gracefully: any algorithm composed of T subroutines, each of which is O(ε)-differentially private, is itself √T·ε-differentially private.

Combined, these give an efficient algorithm for privately answering any T low sensitivity queries with only O(√T) error, a result which we make use of.

Using computationally inefficient algorithms, it is possible to privately answer queries much more accurately [BLR08, DRV10, RR10, HR10, GHRU11, GRU12]. Combining the results of the latter two yields an algorithm which can privately answer arbitrary low sensitivity queries as they arrive, with error that scales only logarithmically in the number of queries. We use this when we consider games with large action spaces.

Our lower bounds for privately computing equilibria use recent information theoretic lower bounds on the accuracy with which queries can be answered while preserving differential privacy [DN03, DMT07, DY08, De12]. Namely, we construct games whose equilibria encode answers to large numbers of queries on a database.

Variants of differential privacy related to joint differential privacy have been considered in the setting of query release, specifically for analyst privacy [DNV12]. In particular, the definition of one-analyst-to-many-analyst privacy used by [HRU13] can be seen as an instantiation of joint differential privacy.
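The two building blocks just cited can be illustrated with a short sketch. The following Python fragment is our own toy rendering (not the paper's algorithm) of the Laplace mechanism for sensitivity-1 queries and of splitting a privacy budget ε across T queries in the spirit of the composition theorem; constants and the δ terms of advanced composition are elided.

    import numpy as np

    def laplace_answer(true_value, eps):
        """eps-differentially private answer to one sensitivity-1 query:
        Laplace noise of scale 1/eps gives O(1/eps) expected error."""
        return true_value + np.random.laplace(scale=1.0 / eps)

    def answer_queries(true_values, eps):
        """Answer T sensitivity-1 queries with roughly eps overall privacy by
        giving each query a budget of eps/sqrt(T); per-query error then grows
        as O(sqrt(T)/eps), matching the composition bound cited above."""
        per_query_eps = eps / np.sqrt(len(true_values))
        return [laplace_answer(v, per_query_eps) for v in true_values]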

2. MODEL & PRELIMINARIES

We consider games G of up to n players {1, 2, . . . , n}, indexed by i. Player i can take actions in a set A, |A| = k. To allow our games to be defined also for fewer than n players, we will imagine that the null action ⊥ ∈ A, which corresponds to "opting out" of the game. We index actions by j. A tuple of actions, one for each player, will be denoted a = (a_1, a_2, . . . , a_n) ∈ A^n.³

Let U be the set of player types.⁴ There is a utility function u : U × A^n → ℝ that determines the payoff for a player given his type t_i and a joint action profile a for all players. When it is clear from context, we will refer to the utility function of player i, writing u_i : A^n → ℝ to denote u(t_i, ·). We write a generic profile of utilities as u = (u_1, u_2, . . . , u_n) ∈ U^n. We will be interested in implementing equilibria of the complete information game in settings of incomplete information. In the complete information setting, the type t_i of each player is fixed and commonly known to all players. In such settings, we can ignore the abstraction of "types" and consider each player i simply to have a fixed utility function u_i. In models of incomplete information, players know their own type, but do not know the types of others. In the Bayesian model of incomplete information, there is a commonly known prior distribution τ from which the agents' types are jointly drawn: (t_1, . . . , t_n) ∼ τ. We now define the solution concepts we will use, both in the full information setting and in the Bayesian setting.

³ In general, subscripts will refer to indices, i.e. players and periods, while superscripts will refer to components of vectors.
⁴ It is trivial to extend our results when agents have different typesets U_i; U will then be ⋃_{i=1}^n U_i.

Denote a distribution over A^n by π, the marginal distribution over the actions of player i by π_i, and the marginal distribution over the (joint tuple of) actions of every player but player i by π_{-i}. We now present two standard solution concepts—approximate correlated and coarse correlated equilibrium.

Definition 1 (Approximate CE). Let (u_1, u_2, . . . , u_n) be a tuple of utility functions, one for each player. Let π be a distribution over tuples of actions A^n. We say that π is an α-approximate correlated equilibrium of the (complete information) game defined by (u_1, u_2, . . . , u_n) if for every player i ∈ [n], and any function f : A → A,

    E_π[u_i(a)] ≥ E_π[u_i(f(a_i), a_{-i})] − α.
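Definition 1 can be checked mechanically on toy instances. Below is a brute-force Python sketch of ours (purely illustrative and exponential in k); `u[i]` is assumed to be a mapping from joint-action tuples to player i's payoff (e.g. a dict keyed by tuples), and `pi` a dict from joint actions to probabilities.

    import itertools

    def is_approx_correlated_eq(u, pi, n, k, alpha):
        """Brute-force test of Definition 1 for a finite game."""
        for i in range(n):
            expected = sum(p * u[i][a] for a, p in pi.items())
            # Definition 1 quantifies over all switching functions f: A -> A,
            # of which there are k**k -- feasible only for tiny k.
            for f in itertools.product(range(k), repeat=k):
                deviation = sum(p * u[i][a[:i] + (f[a[i]],) + a[i + 1:]]
                                for a, p in pi.items())
                if deviation > expected + alpha:
                    return False
        return True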
We now define a solution concept in the Bayesian model. Let τ be a commonly known joint distribution over U^n, and let τ|t_i be the posterior distribution on types conditioned on the type of player i being t_i. A (pure) strategy for player i is a function s_i : U → A, and we write s = (s_1, . . . , s_n) to denote a vector of strategies.

Definition 2 (Approximate BNE). Let τ be a distribution over U^n, and let s = (s_1, . . . , s_n) be a vector of strategies. We say that s is an α-approximate (pure strategy) Bayes-Nash Equilibrium under τ if for every player i, for every t_i ∈ U, and for every alternative strategy s'_i:

    E_{t_{-i}∼τ|t_i}[u_i(t_i, s_i(t_i), s_{-i}(t_{-i}))] ≥ E_{t_{-i}∼τ|t_i}[u_i(t_i, s'_i(t_i), s_{-i}(t_{-i}))] − α.

We restrict attention to "insensitive" games. Roughly speaking, a game is γ-sensitive if a player's choice of action affects any other player's payoff by at most γ. Note that we do not constrain the effect of a player's own actions on his payoff—a player's action can have a large impact on his own payoff. Formally:

Definition 3 (γ-Sensitive). A game is said to be γ-sensitive if for any two distinct players i ≠ i', any two actions a_i, a'_i for player i, any type t_{i'} for player i', and any tuple of actions a_{-i} for everyone else:

    |u_{i'}(a_i, a_{-i}) − u_{i'}(a'_i, a_{-i})| ≤ γ.    (1)

A key tool in our paper is the design of differentially private "proxy" algorithms for suggesting actions to play. Agents are able to opt out of participating in the proxy: each agent can submit to the proxy either their type t_i, or else a null symbol ⊥ which represents opting out. A proxy algorithm is then a function from a profile of utility functions (and ⊥ symbols) to a probability distribution over R^n, i.e. M : (U ∪ {⊥})^n → ΔR^n, where R is an appropriately defined range space.

First we recall the definition of differential privacy, both to provide a basis for our modified definition, and since it will be a technical building block in our algorithms. Roughly speaking, a mechanism is differentially private if for every u and every i, knowledge of the output M(u) as well as u_{-i} does not reveal "much" about u_i.

Definition 4 ((Standard) Differential Privacy). A mechanism M satisfies (ε, δ)-differential privacy if for any player i, any two types for player i, t_i, t'_i ∈ U ∪ {⊥}, any tuple of types for everyone else t_{-i} ∈ (U ∪ {⊥})^{n−1}, and any S ⊆ R^n,

    P[M(t_i; t_{-i}) ∈ S] ≤ e^ε · P[M(t'_i; t_{-i}) ∈ S] + δ.

We would like something slightly different for our setting. We propose a relaxation of the above definition, motivated by the fact that when computing a correlated equilibrium, the action recommended to a player is only observed by her. Roughly speaking, a mechanism is jointly differentially private if, for each player i, knowledge of the other n − 1 recommendations (and submitted types) does not reveal "much" about player i's report. Note that this relaxation is necessary in our setting if we are going to privately compute correlated equilibria, since knowledge of player i's recommended action can reveal a lot of information about his type. It is still very strong: the privacy guarantee remains even if everyone else colludes against a given player i, so long as i does not himself make the component reported to him public.

Definition 5 (Joint Differential Privacy). A mechanism M satisfies (ε, δ)-joint differential privacy if for any player i, any two possible types for player i, t_i, t'_i ∈ U ∪ {⊥}, any tuple of types for everyone else t_{-i}, and any S ⊆ R^{n−1},

    P[(M(t_i; t_{-i}))_{-i} ∈ S] ≤ e^ε · P[(M(t'_i; t_{-i}))_{-i} ∈ S] + δ.
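The gap between Definitions 4 and 5 can be made concrete for tiny finite mechanisms. The following brute-force Python sketch (our illustration, not the paper's algorithm; for δ = 0, comparing probabilities atom by atom suffices) tests whether changing player i's type is visible in the full output vector but not in the recommendations to the other players—exactly the slack that joint differential privacy grants.

    import math

    def close(p, q, eps):
        """Pointwise e^eps closeness of two finitely supported distributions
        (sufficient for (eps, 0)-indistinguishability)."""
        events = set(p) | set(q)
        return all(p.get(e, 0.0) <= math.exp(eps) * q.get(e, 0.0) + 1e-12 and
                   q.get(e, 0.0) <= math.exp(eps) * p.get(e, 0.0) + 1e-12
                   for e in events)

    def drop_coordinate(dist, i):
        """Marginal on the outputs to everyone but player i."""
        out = {}
        for vec, pr in dist.items():
            key = vec[:i] + vec[i + 1:]
            out[key] = out.get(key, 0.0) + pr
        return out

    def joint_but_not_standard(mech, t, t_alt, i, eps):
        """True if mech is eps-close on the others' outputs (Definition 5
        style) while the full output vector is not eps-close (Definition 4
        fails). `mech` maps a type profile to {output vector: probability}."""
        p, q = mech(t), mech(t_alt)
        return close(drop_coordinate(p, i), drop_coordinate(q, i), eps) \
            and not close(p, q, eps)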
3. JOINT DIFFERENTIAL PRIVACY AND TRUTHFULNESS

The main result of this paper is a reduction that takes an arbitrary large game G of incomplete information and modifies it so that it has an equilibrium implementing equilibrium outcomes of the corresponding full information game defined by the realized agent types. Specifically, we modify the game by introducing the option for players to use a proxy that can recommend actions to the players. The modified game is called G'. For any prior on agent types, it will be an approximate Bayes-Nash equilibrium of G' for every player to opt in to using the proxy, and to subsequently follow its recommendation. Moreover, the resulting set of actions will correspond to an approximate correlated equilibrium of the complete information game defined by the realized agent types. For concreteness, we consider Bayesian games; however, our results are not specific to this model of incomplete information.

More precisely, the modified game G' will be identical to G with an added option. Each player i has the opportunity to submit their type to a proxy, which will then suggest to them an action â_i ∈ A to play. They can use this advice however they like: that is, they can choose any function f : A → A and choose to play the action a_i = f(â_i). Alternately, they can opt out of the proxy (and not submit their type), and choose an action a_i ∈ A to play directly. In the end, each player experiences utility u(t_i, (a_1, . . . , a_n)), just as in the original game G. We assume that types are verifiable—agent i does not have the ability to opt in to the proxy but report a false type t'_i ≠ t_i. However, he does have the ability to opt out (and submit ⊥), and the proxy has no power to do anything other than suggest which action he should play. In the end, each player is free to play any action a_i, regardless of what the proxy suggests, even if he opts in.

Formally, given a game G defined by an action set A, a type space U, and a utility function u, we define a proxy game G'_M, parameterized by an algorithm M : {U ∪ {⊥}}^n → A^n. In G', each agent has two types of actions. They can opt in to the proxy, which means they submit their type, receive an action recommendation â, and choose a function f : A → A which determines how they use that recommendation; we denote this set of choices A'_1 = {(⊤, f) | f : A → A}. Alternately, they can opt out of the proxy, which means that they do not submit their type, and directly choose an action to play; we denote this set of choices A'_2 = {(⊥, a) | a ∈ A}. Together, the action set in G'_M is A' = A'_1 ∪ A'_2. Given a set of choices by the players, we define a vector x such that x_i = t_i for each player i who chose (⊤, f_i) ∈ A'_1 (each player who opted in), and x_i = ⊥ for each player i who chose (⊥, a_i) ∈ A'_2 (each player who opted out). The proxy then computes M(x) = â. Finally, this results in a vector of actions a from the game G, one for each player: each player who opted in plays the action a_i = f_i(â_i), and each player who opted out plays the action a_i he chose directly. Finally, each player receives utility u(t_i, a) as in the original game G.
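The mechanics of G'_M can be summarized in a few lines of Python. This is only a schematic of the definitions above, assuming an abstract recommender `M`; the choice tags "in"/"out", the name `OPT_OUT`, and the function name are ours, and the paper's private algorithm for M is described in the full version [KPRU13].

    OPT_OUT = None  # stands in for the null report (the ⊥ symbol)

    def play_proxy_game(choices, types, M):
        """choices[i] is ("in", f_i) for opt-in with post-processing map f_i,
        or ("out", a_i) for opt-out with a directly chosen action a_i.
        Returns the realized action profile of the underlying game."""
        # Build the report vector x: player i's type if she opted in, else null.
        x = [t if c[0] == "in" else OPT_OUT for c, t in zip(choices, types)]
        a_hat = M(x)  # the proxy's recommendations, one per player
        actions = []
        for i, (mode, arg) in enumerate(choices):
            if mode == "in":
                actions.append(arg(a_hat[i]))  # play f_i applied to the recommendation
            else:
                actions.append(arg)            # play the directly chosen action
        return actions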

We now show that if the algorithm M satisfies certain properties, then for any prior on agent types, it is always an approximate Bayes-Nash equilibrium for every player to opt in and follow the recommendation of the proxy.

Theorem 6. Let M be an algorithm that satisfies (ε, δ)-joint differential privacy, and is such that for every vector of types t ∈ U^n, M(t) induces a distribution over actions that is an α-approximate correlated equilibrium of the full information game G induced by the type vector t. Then for every prior distribution on types τ, it is an η-approximate Bayes-Nash equilibrium of G'_M for every player to play (⊤, f) for the identity function f(a) = a (i.e. for every player to opt into the proxy, and then follow its suggested action), where η = ε + δ + α.

Remark 7. Observe that when agents play according to the approximate Bayes-Nash equilibrium of G'_M guaranteed by Theorem 6, the resulting distribution over actions played, and the resulting utilities of the players, correspond to an α-approximate correlated equilibrium of the full information game G induced by the realized type vector t.

Proof of Theorem 6. Fix any prior distribution on player types τ, and let s_1, . . . , s_n be the strategies corresponding to the action (⊤, f) for each player, where f is the identity function (i.e. the strategy corresponding to opting into the proxy and following the suggested action). There are two types of deviations that a player i can consider: (⊤, f'_i) for some function f'_i : A → A that is not the identity, and (⊥, a_i) for some action a_i. First, we consider deviations of the first kind. Let s'_i(t_i) be the strategy corresponding to playing (⊤, f'_{ŝ(t_i)}) for some function ŝ(t_i). For every type t_i:

    E_{t_{-i}∼τ|t_i}[u_i(t_i, s_i(t_i), s_{-i}(t_{-i}))]
        = Σ_{t_{-i}} Pr_{τ|t_i}[t_{-i}] · E_{a∼M(t)}[u_i(a)]
        ≥ Σ_{t_{-i}} Pr_{τ|t_i}[t_{-i}] · E_{a∼M(t)}[u_i(f'_{ŝ(t_i)}(a_i), a_{-i})] − α
        = E_{t_{-i}∼τ|t_i}[u_i(t_i, s'_i(t_i), s_{-i}(t_{-i}))] − α

where the inequality follows from the fact that M computes an α-approximate correlated equilibrium. Now, consider a deviation of the second kind. Let s'_i(t_i) be the strategy corresponding to playing (⊥, a_{ŝ(t_i)}) for some function ŝ(t_i). For every type t_i:

    E_{t_{-i}∼τ|t_i}[u_i(t_i, s_i(t_i), s_{-i}(t_{-i}))]
        = Σ_{t_{-i}} Pr_{τ|t_i}[t_{-i}] · E_{a∼M(t)}[u_i(a)]
        ≥ Σ_{t_{-i}} Pr_{τ|t_i}[t_{-i}] · E_{a∼M(t)}[u_i(a_{ŝ(t_i)}, a_{-i})] − α
        ≥ Σ_{t_{-i}} Pr_{τ|t_i}[t_{-i}] · exp(−ε) · E_{a∼M(⊥, t_{-i})}[u_i(a_{ŝ(t_i)}, a_{-i})] − δ − α
        ≥ Σ_{t_{-i}} Pr_{τ|t_i}[t_{-i}] · E_{a∼M(⊥, t_{-i})}[u_i(a_{ŝ(t_i)}, a_{-i})] − ε − δ − α
        = E_{t_{-i}∼τ|t_i}[u_i(t_i, s'_i(t_i), s_{-i}(t_{-i}))] − ε − δ − α

where the first inequality follows from the α-approximate correlated equilibrium condition, the second follows from the (ε, δ)-joint differential privacy condition, and the third follows from the fact that for ε ≥ 0, exp(−ε) ≥ 1 − ε, and that utilities are bounded in [0, 1].

The main technical contribution of the paper is an algorithm M which satisfies (ε, δ)-joint differential privacy, and computes an α-approximate correlated equilibrium of games which are γ-large. We in fact give two such algorithms: one that runs in time polynomial in n and |A| = k, and one that runs in time exponential in n and k. The efficient algorithm computes an α_1-approximate correlated equilibrium, and the inefficient algorithm computes an α_2-approximate correlated equilibrium, where:

    α_1 = Õ( γ k^{3/2} √(n log(1/δ)) / ε ),
    α_2 = Õ( γ log k · log^{3/2}(|U|) · √(n log(1/δ)) / ε ).

In combination with Theorem 6, the existence of these algorithms together with optimal choices of ε and δ gives our main result:

Theorem 8. Let G be any γ-large game. Then there exists a proxy game G' such that for any prior distribution on types τ, it is an η-approximate Bayes-Nash equilibrium to opt into the proxy and follow its advice. Moreover, the resulting distribution on actions forms an η-approximate correlated equilibrium of the full information game induced by the realized types. If we insist that the proxy be implemented using a computationally efficient algorithm, then we can take:

    η = Õ( √γ · n^{1/4} · k^{3/4} ).

If we can take the proxy to be computationally inefficient, then we can take:

    η = Õ( √γ · n^{1/4} · √(log k) · log^{3/4}|U| ).

Remark 9. In large games, the parameter γ tends to zero as n grows large. For γ = 1/n, our approximation error is η = Õ(k^{3/4}/n^{1/4}) and η = Õ(√(log k) · log^{3/4}|U| / n^{1/4}), respectively. Note that the approximation error η in the equilibrium concepts tends to zero in any γ-large game such that γ = o(1/(√n · k^{3/2})) or γ = o(1/(√n · log k · log^{3/2}|U|)), respectively.
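As a quick sanity check on Remark 9 (our arithmetic, not an additional claim of the paper), substituting γ = 1/n into the efficient bound of Theorem 8 recovers the stated rate:

    η = Õ( √γ · n^{1/4} · k^{3/4} ) with γ = 1/n
      = Õ( n^{−1/2} · n^{1/4} · k^{3/4} )
      = Õ( k^{3/4} / n^{1/4} ) → 0 as n → ∞, for fixed k.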
4. DISCUSSION

In this work, we have introduced a new variant of differential privacy (joint differential privacy), and have shown how it can be used as a tool to construct extremely weak proxy mechanisms which can implement equilibria of full information games, even when the game is being played in a setting of only partial information. Moreover, our privacy solution concept maintains the property that no coalition of players can learn (much) more about any player's type outside of the coalition than they could have learned in the original game, and thus players have almost no incentive not to participate even if they view their type as sensitive information. Although our proxies are weak in most respects (they cannot enforce actions, they cannot make payments or charge fees, they cannot compel participation), we do make the assumption that player types are verifiable in the event that they choose to opt into the proxy. This assumption is reasonable in many settings: for example, in financial markets, there may be legal penalties for a firm misrepresenting relevant facts about itself, and in traffic routing games, the proxy may be embodied as a physical device (e.g. a GPS device) that can itself verify player types (e.g. physical location). Nevertheless, we view relaxing this assumption as an important direction for future work.

In this extended abstract, we have merely summarized our results. Our full paper [KPRU13] includes a formal description of our algorithms, formal statements of our theorems, and proofs of all results.

5. REFERENCES

[AB11] E. Azevedo and E. Budish. Strategyproofness in the large as a desideratum for market design. Technical report, University of Chicago, 2011.
[ANS00] N. I. Al-Najjar and R. Smorodinsky. Pivotal players and the characterization of influence. Journal of Economic Theory, 92(2):318–342, 2000.
[BLR08] Avrim Blum, Katrina Ligett, and Aaron Roth. A learning theory approach to non-interactive database privacy. In STOC, pages 609–618. ACM, 2008.
[CCK+13] Yiling Chen, Stephen Chong, Ian A. Kash, Tal Moran, and Salil Vadhan. Truthful mechanisms for agents that value privacy. In Proceedings of the fourteenth ACM conference on Electronic commerce, pages 215–232. ACM, 2013.
[De12] Anindya De. Lower bounds in differential privacy. In TCC, pages 321–338, 2012.
[DKM+06] Cynthia Dwork, Krishnaram Kenthapadi, Frank McSherry, Ilya Mironov, and Moni Naor. Our data, ourselves: Privacy via distributed noise generation. In EUROCRYPT, pages 486–503, 2006.
[DMNS06] Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity in private data analysis. In TCC '06, pages 265–284, 2006.
[DMT07] Cynthia Dwork, Frank McSherry, and Kunal Talwar. The price of privacy and the limits of LP decoding. In STOC, pages 85–94, 2007.

[DN03] Irit Dinur and Kobbi Nissim. Revealing information while preserving privacy. In PODS, pages 202–210, 2003.
[DNV12] Cynthia Dwork, Moni Naor, and Salil Vadhan. The privacy of the analyst and the power of the state. In Foundations of Computer Science (FOCS), 2012 IEEE 53rd Annual Symposium on, pages 400–409. IEEE, 2012.
[DRV10] Cynthia Dwork, Guy N. Rothblum, and Salil P. Vadhan. Boosting and differential privacy. In FOCS, pages 51–60. IEEE Computer Society, 2010.
[Dwo08] C. Dwork. Differential privacy: A survey of results. Theory and Applications of Models of Computation, pages 1–19, 2008.
[DY08] Cynthia Dwork and Sergey Yekhanin. New efficient attacks on statistical disclosure control mechanisms. In CRYPTO, pages 469–480, 2008.
[FL12] Lisa Fleischer and Yu-Han Lyu. Approximately optimal auctions for selling privacy when costs are correlated with data. In EC, pages 568–585, 2012.
[For86] Françoise Forges. An approach to communication equilibria. Econometrica: Journal of the Econometric Society, pages 1375–1385, 1986.
[GHRU11] Anupam Gupta, Moritz Hardt, Aaron Roth, and Jonathan Ullman. Privately releasing conjunctions and the statistical query barrier. In STOC '11, pages 803–812, 2011.
[GR08] R. Gradwohl and O. Reingold. Fault tolerance in large games. In EC, pages 274–283. ACM, 2008.
[GR10] R. Gradwohl and O. Reingold. Partial exposure in large games. Games and Economic Behavior, 68(2):602–613, 2010.
[GR11] Arpita Ghosh and Aaron Roth. Selling privacy at auction. In EC, pages 199–208, 2011.
[Gra12] R. Gradwohl. Privacy in implementation. 2012.
[GRU12] Anupam Gupta, Aaron Roth, and Jonathan Ullman. Iterative constructions and private data release. In TCC, pages 339–356, 2012.
[HK12] Zhiyi Huang and Sampath Kannan. The exponential mechanism for social welfare: Private, truthful, and nearly optimal. In FOCS, 2012.
[HR10] Moritz Hardt and Guy N. Rothblum. A multiplicative weights mechanism for privacy-preserving data analysis. In FOCS, pages 61–70, 2010.
[HRU13] Justin Hsu, Aaron Roth, and Jonathan Ullman. Differential privacy for the analyst via private equilibrium computation. In Proceedings of the 45th annual ACM symposium on Symposium on theory of computing, pages 341–350. ACM, 2013.
[IM05] N. Immorlica and M. Mahdian. Marriage, honesty, and stability. In SODA, pages 53–62. Society for Industrial and Applied Mathematics, 2005.
[Kal04] E. Kalai. Large robust games. Econometrica, 72(6):1631–1665, 2004.
[KP09] F. Kojima and P. A. Pathak. Incentives and stability in large two-sided matching markets. American Economic Review, 99(3):608–627, 2009.
[KPR10] F. Kojima, P. A. Pathak, and A. E. Roth. Matching with couples: Stability and incentives in large markets. Technical report, National Bureau of Economic Research, 2010.
[KPRU13] Michael Kearns, Mallesh M. Pai, Aaron Roth, and Jonathan Ullman. Mechanism design in large games: Incentives and privacy. arXiv preprint arXiv:1207.4084, 2013.
[LR12] Katrina Ligett and Aaron Roth. Take it or leave it: Running a survey when privacy comes at a cost. CoRR, abs/1202.4741, 2012.
[MT03] Dov Monderer and Moshe Tennenholtz. k-implementation. In Proceedings of the 4th ACM conference on Electronic commerce, pages 19–28. ACM, 2003.
[MT07] Frank McSherry and Kunal Talwar. Mechanism design via differential privacy. In FOCS, pages 94–103, 2007.
[MT09] Dov Monderer and Moshe Tennenholtz. Strong mediated equilibrium. Artificial Intelligence, 173(1):180–195, 2009.
[NOS12] Kobbi Nissim, Claudio Orlandi, and Rann Smorodinsky. Privacy-aware mechanism design. In EC, pages 774–789, 2012.
[NST12] Kobbi Nissim, Rann Smorodinsky, and Moshe Tennenholtz. Approximately optimal mechanism design via differential privacy. In ITCS, pages 203–213, 2012.
[PR13] Mallesh Pai and Aaron Roth. Privacy and mechanism design. SIGecom Exchanges, pages 8–29, 2013.
[Rou09] T. Roughgarden. Intrinsic robustness of the price of anarchy. In STOC, pages 513–522. ACM, 2009.
[Rou12] T. Roughgarden. The price of anarchy in games of incomplete information. In Proceedings of the 13th ACM Conference on Electronic Commerce, pages 862–879. ACM, 2012.
[RP76] D. J. Roberts and A. Postlewaite. The incentives for price-taking behavior in large exchange economies. Econometrica, pages 115–127, 1976.
[RR10] Aaron Roth and Tim Roughgarden. Interactive privacy via the median mechanism. In STOC '10, pages 765–774, 2010.
[RS12] Aaron Roth and Grant Schoenebeck. Conducting truthful surveys, cheaply. In EC, pages 826–843, 2012.
[Xia13] David Xiao. Is privacy compatible with truthfulness? In Proceedings of the 4th conference on Innovations in Theoretical Computer Science, pages 67–86. ACM, 2013.
