
Essays in Mechanism Design and Implementation Theory

Dissertation

Presented in Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy

in the Graduate School of The Ohio State University

By

Ritesh Jain

Graduate Program in Economics

The Ohio State University

2018

Dissertation Committee:

Paul J Healy, Advisor

Yaron Azrieli

James Peck


Copyrighted by

Ritesh Jain

2018


Abstract

My research falls broadly within microeconomic theory. This dissertation brings together my work in mechanism design and implementation theory.

In Chapter 1, titled Generalized Groves-Ledyard Mechanisms (jointly with Paul Healy), we study Nash implementation in a public goods setting. Groves and Ledyard (1977) construct a mechanism for public goods procurement that can be viewed as a direct-revelation Groves mechanism in which agents announce a parameter of a quadratic approximation of their true preferences. The mechanism's outcomes are efficient. The budget is balanced because Groves mechanisms are balanced for the announced quadratic preferences. Tian (1996) subsequently discovered a richer set of budget-balancing preferences. We replicate the Groves-Ledyard construction using this expanded set of preferences, and uncover a new set of complex mechanisms that generalize the original Groves-Ledyard mechanism.

In Chapter 2, titled Symmetric Mechanism Design (jointly with Yaron Azrieli), we study the extent to which regulators can guarantee fair outcomes by a policy requiring mechanisms to treat agents symmetrically. This is an exercise in mechanism design. Our main result is a characterization of the class of social choice functions that can be implemented under this constraint. In many environments, extremely discriminatory social choice functions can be implemented by symmetric mechanisms, but there are also cases

in which symmetry is binding. Our characterization is based on a revelation-principle type of result, where we show that a social choice function can be symmetrically implemented if and only if a particular kind of (indirect) symmetric mechanism implements it. We illustrate the result in environments of voting with private values, voting with a common value, and assignment of indivisible goods.

In Chapter 3, titled Rationalizable Implementation of Social Choice Correspondences, I study the implementation of social choice correspondences (SCCs) in a complete information setting, using rationalizability as the solution concept. I find a condition, which I call r-monotonicity, to be necessary for the rationalizable implementation of an SCC. r-monotonicity is strictly weaker than Maskin monotonicity, a condition introduced by Maskin (1999). If an SCC satisfies a no worst alternative condition and a condition which I call F-distinguishability, then it is shown that r-monotonicity is also sufficient for rationalizable implementation. I discuss the strength of these additional conditions. In particular, I find that, whenever there are more than three agents, a social choice correspondence that always selects at least two alternatives is rationalizably implementable if and only if it satisfies r-monotonicity. This chapter therefore extends Bergemann et al. (2011) to the case of social choice correspondences.


To my family


Acknowledgments

I am deeply indebted to Paul Healy and Yaron Azrieli for being inspirations for me in academia and beyond. They have been instrumental in helping me conceptualize, write, and rewrite the chapters in this dissertation. I thank James Peck for numerous conversations and comments which have helped shape this dissertation.


Vita

2009...... B.A. in Economics, University of Delhi, India

2011...... M.A. in Economics, Jawaharlal Nehru University, New Delhi, India

2013...... M.A. in Economics, The Ohio State University, Columbus, OH

2013 to 2018 ...... Graduate Associate, Department of Economics, The Ohio State University, Columbus, OH

Fields of Study

Major Field: Economics


Table of Contents

Abstract ...... ii

Dedication ...... iv

Acknowledgments...... v

Vita ...... vi

List of Tables ...... viii

Chapter 1. Generalized Groves Ledyard Mechanisms ...... 1

Chapter 2. Symmetric Mechanism Design ...... 20

Chapter 3. Rationalizable Implementation of Social Choice Correspondences ...... 50

References ...... 78

Appendix for Chapter 1 ...... 84

Appendix for Chapter 2 ...... 91

Appendix for Chapter 3 ...... 95


List of Tables

Table 1 Preferences in Example 4 ...... 50

Table 2 Preferences in Example 5 ...... 58

Table 3 Preferences in Example 6 ...... 59

Table 4 Preferences in Example 10 ...... 75

Table 5 Preferences in Example 11 ...... 106


Chapter 1: Generalized Groves-Ledyard Mechanisms

1 Introduction

Groves and Ledyard (1977) provided one of the first decentralized economic mechanisms to solve the free rider problem for general economies. Specifically, they devised a government (or, mechanism) such that self-interested equilibrium behavior by all parties always leads to a Pareto optimal allocation. This work has been cited widely, with many researchers building off its original insights. What has not been appreciated, however, is the manner in which Groves and Ledyard (1977) actually construct their mechanism.

The final mechanism looks simple: players announce a single number, and taxes are based on a proportional share of the cost and a quadratic penalty. In fact, the mechanism has a more complex foundation: Groves and Ledyard (1977) present it as being derived from a Groves mechanism (Groves, 1973) in which agents announce entire quasilinear functions and are taxed (or rewarded) based on the Marshallian surplus calculated from the announced preferences of all other agents in the economy. But agents may not actually have quasilinear preferences, so these announcements represent approximations of their true preferences. The space of allowable announcements is parameterized with a single parameter, which greatly simplifies agents' messages. In equilibrium, agents with non-quasilinear preferences announce quasilinear preferences that best approximate their true willingness to pay at the Pareto optimal allocation, and that allocation is selected by the mechanism.1

What is crucial to the Groves-cum-Groves-Ledyard construction is that the set of approximating preferences that the agents are allowed to announce always generates a budget-balanced allocation in the Groves mechanism. Budget balance does not always obtain with general quasilinear preferences; Groves and Loeb (1975) showed that balance is achieved if we further restrict agents to quadratic valuation functions. Following this

1Groves (1973) mechanisms are usually only discussed in settings where all agents actually have quasilinear preferences, and therefore have a dominant strategy of truthful revelation. Groves and Ledyard (1977) are novel in that they consider the Groves mechanism for more general preferences.

insight, Groves and Ledyard (1977) only allow agents to announce quadratic (approximate) preferences, guaranteeing that the final allocation will balance the budget. Agents need only announce the intercept of their (approximate) valuation function, and are charged according to the balanced Groves mechanism. Algebraic manipulation reveals that these allocation and tax functions reduce to the familiar Groves-Ledyard mechanism with its quadratic form.

For two decades it was believed that quadratic preferences are the only ones for which a Groves mechanism could be budget balanced.2 This belief was overturned when Tian (1996) discovered a much richer class of preferences for which Groves mechanisms can be balanced. And, like the quadratic preferences, these are parameterized by a single parameter.

In this paper we replicate the Groves-Ledyard procedure of turning a Groves mechanism into a one-dimensional, budget-balanced, Pareto efficient mechanism, but we do so on Tian's larger domain of preferences. In doing so, we uncover a broad family of complex mechanisms that, in theory, could also be used to solve the free-rider problem. We also gain a deeper understanding of the incentive properties of Groves mechanisms applied to general preferences. And we see that satisfying the Samuelson condition (equating the sum of marginal rates of substitution to marginal costs) is a relatively trivial matter, so that the real difficulties in the design problem lie in achieving budget balance and equilibrium existence. We lean on Tian to accomplish the former; for the latter, we devise a simple trick of allowing agents to announce transfers among their partners, providing just enough richness to the range of the mechanisms (without distorting incentives) to guarantee equilibrium existence in these mechanisms.

2This was conjectured by Laffont and Maskin (1980), who provide general conditions for budget balance.

2 Setup

2.1 The Economic Environment

We present here a simplified version of the Arrow-Debreu setting studied by Groves and Ledyard (1977). Specifically, there is one private good, one public good, I consumers, F firms, and a mechanism (that can be thought of as a government). The mechanism receives messages from the consumers and uses this information to procure public goods from the firms. These purchases are financed by (message-dependent) taxes collected from the consumers. The price of the private good is normalized to one, and the price of the public good is given by p ∈ R. Quantities are given by x ∈ R for the private good and y ∈ R for the public good. Each consumer i has a consumption set Xi ⊆ R², a preference relation ≿i on Xi that is representable by a differentiable utility function ui, and an initial endowment of private goods ωi ∈ R. Firms are characterized by a production set Zf ⊆ R² and a vector of profit shares θf = (θf^1, . . . , θf^I). Production vectors are given by zf = (zf^x, zf^y), with negative components representing inputs. Firm profits are distributed to consumers according to θf.

An economy is therefore represented by e = ((Xi, ≿i, ωi)_{i=1}^I, (Zf, θf)_{f=1}^F). We think of there being a set of admissible economies E, where each e ∈ E differs only in the preference profiles of the consumers. We assume that, for every e ∈ E, each ui is continuously differentiable, quasi-concave, and strictly increasing in the private good. In some cases we may discuss further restrictions on E, such as requiring that all preferences are quasilinear (ui(xi, y) = vi(y) + xi for some concave function vi), or even quadratic-quasilinear (vi(y) is concave and quadratic).

An allocation is a vector ((xi)_{i=1}^I, y, (zf)_{f=1}^F) with xi ∈ R for each consumer i, y ∈ R, and zf ∈ R² for each firm f. An allocation is feasible if (i) (xi, y) ∈ Xi for each i, (ii) zf ∈ Zf for each f, and (iii) (∑i (xi − ωi), y) ≤ ∑f zf. An allocation ((xi)_{i=1}^I, y, (zf)_{f=1}^F) is Pareto optimal if it is feasible and there is no other feasible allocation ((xi′)_{i=1}^I, y′, (zf′)_{f=1}^F) such that (xi′, y′) ≿i (xi, y) for all i and (xi′, y′) ≻i (xi, y) for some i.

2.2 Mechanisms and Competitive Equilibrium

A mechanism (or government) is simply a message space M = ×i Mi (one for each consumer), an allocation rule y(m, p) selecting a public goods level for each message profile m and price p, and a list of tax functions ti(m, p) indicating how much wealth each consumer must sacrifice for financing the public good.3 Thus, a mechanism is given by Γ = ((Mi)_{i=1}^I, y, (ti)_{i=1}^I).4

Firm f's profit is given by zf^x + p zf^y. Its supply correspondence φf(p) is the set of production vectors zf ∈ Zf that maximize profit in Zf given price p. Resulting profits are then given by πf(p) := zf^x + p zf^y for any zf ∈ φf(p).

At price p, consumer i has wealth wi(ωi, p) := ωi + ∑f θf^i πf(p). This consumer must choose a private goods consumption level xi and a message mi ∈ Mi to send to the government, taking as given p and m−i. She faces the constraint that (xi, mi) must be such that (xi, y(mi, m−i, p)) ∈ Xi and xi + ti(mi, m−i, p) ≤ wi(ωi, p). Let Bi(m−i, p) be the 'budget set' of choices (xi, mi) satisfying this constraint, and define βi(m−i, p) to be the set of ≿i-most-preferred (or 'optimal') choices in Bi(m−i, p).

We will define a competitive equilibrium for an economy with a given mechanism as an allocation and vector of messages that satisfy preference maximization and profit maximization. Groves and Ledyard (1977) further require exact market clearing, which implies that the government balances the budget. We allow here the possibility of governments that collect a surplus in equilibrium; equilibrium is defined formally after a few simplifications.

For simplicity, we assume a linear production technology Zf = {zf ∈ R² : zf^x + κ zf^y ≤ 0} for all f, meaning all firms have a constant marginal cost of κ > 0. Profit maximization yields p = κ and πf(κ) = 0 (so that wi(ωi, κ) = ωi). The budget balance conditions simplify to ∑f zf^y = y(m, p) and ∑i xi + κ y(m, p) = ∑i ωi. Since we know that p = κ

3We assume later a constant marginal cost of production, which implies that wealth is measured simply by the private good endowment.
4We abuse notation, letting y represent both a public good level and the function that determines this level; this should cause no confusion.

in any competitive equilibrium, and we are only interested in equilibrium conditions, we henceforth assume p = κ and drop p from the arguments of all functions.

We further assume that Xi = R² for each i, so that preference maximization implies xi = ωi − ti(m).5 Since the choice of xi is now trivial, we redefine βi(m−i) to be the set of ≿i-optimal messages in Mi, given m−i.6 This further simplifies the budget balance conditions to ∑f zf^y = y(m) and ∑i ti(m) = κ y(m).

Now, a (Nash or competitive) equilibrium is a message profile m* ∈ M such that (i) mi* ∈ βi(m−i*) for all i, and (ii) ∑i ti(m*) ≥ κ y(m*). If (ii) holds with equality, we call it a balanced (Nash or competitive) equilibrium. It is unbalanced if (ii) holds with strict inequality. Given an equilibrium m*, the equilibrium allocation is the allocation that results from m*. A message mi is a dominant strategy if mi ∈ βi(m−i) for all m−i.

Our ultimate goal is to design mechanisms such that (i) at least one balanced equilibrium exists for every e ∈ E, and (ii) all balanced equilibrium allocations are Pareto optimal.7 We refer to such mechanisms as Nash efficient mechanisms. If a mechanism satisfies (ii) but not (i), then we say it is conditionally Nash efficient.

Define agent i's marginal rate of substitution at any allocation (xi, y) by

MRSi(xi, y) = (∂ui(xi, y)/∂y) / (∂ui(xi, y)/∂xi).

A mechanism is conditionally Nash efficient if and only if, for every e ∈ E and every balanced equilibrium m∗ in e,

∑i MRSi(ωi − ti(m*), y(m*)) = κ.    (1)

We refer to (1) as the Samuelson (1954) condition, since it is necessary (but not sufficient) for Pareto optimality. If, in addition, the allocation is budget-balanced then efficiency

5We thus allow for negative quantities of all goods. This simplifies the analysis by avoiding corner solutions. See Healy and Mathevet (2012) for a discussion of this assumption.
6Formally, βi(m−i) = {mi ∈ Mi : (ωi − ti(mi, m−i), y(mi, m−i)) ≿i (ωi − ti(mi′, m−i), y(mi′, m−i)) ∀mi′ ∈ Mi}.
7Unbalanced equilibrium allocations cannot be Pareto optimal.

obtains. Thus, if existence of a balanced equilibrium is guaranteed for every e ∈ E, then conditional Nash efficiency implies Nash efficiency.

2.3 Direct, Quasi-Direct, and Indirect Mechanisms

A mechanism is called direct if each Mi is the space of possible preferences (or utility functions) for consumer i. For example, the Vickrey-Clarke-Groves mechanism introduced below is a direct mechanism when preferences are quasi-linear. We introduce here the notion of a quasi-direct mechanism, in which agents can be thought of as announcing their preferred allocation, rather than an abstract message.

Definition 1. Mechanism Γ = ((Mi)_{i=1}^I, y, (ti)_{i=1}^I) is quasi-direct if

(1.1) y(·, m−i) is surjective for each i and m−i (meaning, for each m−i and yi there exists

some mi such that y(mi, m−i) = yi), and

(1.2) each ti(·) is measurable with respect to {(y(m), m−i)}_{m∈M} (meaning, y(mi, m−i) = y(mi′, m−i) implies ti(mi, m−i) = ti(mi′, m−i)).

We will see a direct mechanism that is also quasi-direct; these definitions are not mutually exclusive. A mechanism is said to be indirect if it is neither direct nor quasi-direct.

In a quasi-direct mechanism, choosing mi (given m−i) is equivalent to choosing a public goods level yi(mi, m−i), and then paying a tax that depends only on yi and m−i.

With quasi-direct mechanisms we sometimes view the message as yi rather than mi, and re-write the tax function as ti(yi, m−i).

Consider an agent in a quasi-direct mechanism choosing yi ∈ R to maximize ui(ωi − ti(yi, m−i), yi) given m−i. Suppose ti(yi, m−i) is differentiable in yi. The individually optimal choice yi* will be characterized by the first-order condition for utility maximization:

∂ti(yi*, m−i)/∂yi = MRSi(ωi − ti(yi*, m−i), yi*).    (2)

Combining this with the Samuelson condition (1) (and our assumption of quasi-concave utility functions) gives the following lemma.

Lemma 1. A quasi-direct mechanism is conditionally Nash efficient if and only if, for every balanced equilibrium m∗,

∑i ∂ti(y(m*), m−i*)/∂yi = κ.

It is not hard to construct quasi-direct mechanisms that are conditionally Nash efficient. For example, consider the proportional tax mechanism described by Groves and Ledyard (1977).

Definition 2. The proportional tax mechanism ΓP is given by

(2.1) Mi^P = R for each i,

(2.2) y^P(m) = ∑i mi,

(2.3) ti^P(m) = αi κ y^P(m),

where (αi)_{i=1}^I is any vector of parameters satisfying ∑i αi = 1.

The proportional tax mechanism is quasi-direct; we can view the agent as choosing yi (by setting mi = yi − ∑_{j≠i} mj) and then paying ti^P(yi, m−i) = αi κ yi. By Lemma 1, this mechanism is conditionally Nash efficient. The difficulty, however, is that equilibrium rarely exists. At any Nash equilibrium m*, all agents must select the same value of yi. Call this value y*. By equation (2) we have MRSi(ωi − αi κ y*, y*) = αi κ for each i. Thus, we have equilibrium existence only in the knife-edge economies in which there exists a y satisfying this requirement for all agents. In those economies the equilibrium will be balanced and will generate Pareto optimal allocations. In all other economies no equilibrium will exist.
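To make the proportional tax mechanism concrete, the short sketch below implements Definition 2 for a hypothetical three-consumer economy (the shares αi, the cost κ, and the messages are arbitrary illustrative values, not taken from the text). It confirms that the budget balances identically in messages, so that all of the difficulty indeed lies in equilibrium existence rather than in balance.

```python
# Sketch of the proportional tax mechanism (Definition 2).
# The economy here (I = 3, equal shares alpha_i = 1/3, kappa = 2) is a
# hypothetical example, not one taken from the text.

def proportional_tax_mechanism(m, alphas, kappa):
    """Return the public good level y^P(m) and the tax vector (t_i^P(m))_i."""
    y = sum(m)                                  # y^P(m) = sum_i m_i
    taxes = [a * kappa * y for a in alphas]     # t_i^P(m) = alpha_i * kappa * y^P(m)
    return y, taxes

alphas = [1/3, 1/3, 1/3]
kappa = 2.0
m = [1.0, 4.0, -2.0]                            # an arbitrary message profile
y, taxes = proportional_tax_mechanism(m, alphas, kappa)

# Budget balance holds for *every* message profile, not only in equilibrium:
assert abs(sum(taxes) - kappa * y) < 1e-12
```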


The Groves-Ledyard mechanism can be viewed as an extension of the proportional tax mechanism that guarantees existence of equilibrium by adding a quadratic penalty term.

Definition 3. The Groves-Ledyard mechanism ΓL is given by

(3.1) Mi^L = R for each i,

(3.2) y^L(m) = ∑i mi,

(3.3) ti^L(m) = αi κ y^L(m) + (γ/2) [ ((I−1)/I) (mi − m̄−i)² − σ²(m−i) ], where

• ((αi)_{i=1}^I, γ) is a vector of parameters satisfying ∑i αi = 1 and γ > 0,

• m̄−i = (1/(I−1)) ∑_{j≠i} mj, and

• σ²(m−i) = (1/(I−2)) ∑_{j≠i} (mj − m̄−i)².
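The tax functions of Definition 3 can be checked numerically. The sketch below (a hypothetical four-consumer economy with arbitrary values of γ, κ, and the messages) verifies that the quadratic penalty terms cancel across consumers, so that the budget balances for every message profile, not only in equilibrium.

```python
# Sketch of the Groves-Ledyard tax functions (Definition 3), with a
# hypothetical economy of I = 4 consumers; gamma, kappa, and the messages
# are arbitrary illustrative values.

def gl_taxes(m, alphas, kappa, gamma):
    """Return (y^L(m), [t_i^L(m)]) per Definition 3 (requires I >= 3)."""
    I = len(m)
    y = sum(m)
    taxes = []
    for i in range(I):
        others = [m[j] for j in range(I) if j != i]
        mbar = sum(others) / (I - 1)                            # bar m_{-i}
        var = sum((mj - mbar) ** 2 for mj in others) / (I - 2)  # sigma^2(m_{-i})
        t = alphas[i] * kappa * y \
            + (gamma / 2) * (((I - 1) / I) * (m[i] - mbar) ** 2 - var)
        taxes.append(t)
    return y, taxes

alphas = [0.25] * 4
y, taxes = gl_taxes([1.0, 2.0, 4.0, -1.0], alphas, kappa=2.0, gamma=1.5)

# The penalty terms cancel in aggregate: sum_i t_i^L(m) = kappa * y^L(m).
assert abs(sum(taxes) - 2.0 * y) < 1e-9
```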

Groves and Ledyard (1977, 1980) show that this mechanism is (unconditionally) Nash efficient. One deficiency is that it may violate individual rationality, meaning there are economies in which some agents prefer the initial endowment over the equilibrium allocation of the mechanism. Hurwicz (1979a) shows that if one wants both Nash efficiency and individual rationality for every economy (subject to a continuity condition), then one must select a mechanism whose equilibrium outcomes are Lindahl allocations.8 We refer to such mechanisms as Nash-Lindahl mechanisms. One of the first Nash-Lindahl mechanisms was identified by Walker (1981). Healy and Mathevet (2012) provide a full characterization of continuously differentiable Nash-Lindahl mechanisms when the set of admissible economies is sufficiently rich. They prove that, for any Nash-Lindahl mechanism, ti(m) must be of the form ti(m) = qi(m−i)y(m). The proportional tax mechanism

8A Lindahl allocation is a vector ((xi*)_{i=1}^I, y*) such that there exists some (p1, . . . , pI) with ∑i pi = κ for which the maximizers (x̂i, ŷi) ∈ arg max {ui(xi, yi) : xi + pi yi ≤ ωi} satisfy x̂i = xi* and ŷi = y* for each i.

and the Walker mechanism satisfy this property, while the Groves-Ledyard mechanism does not. Other Nash-Lindahl mechanisms have been proposed by Hurwicz (1979b), Tian (1990), Kim (1993), Chen (2002), Healy and Mathevet (2012), and Van Essen (2009), among others.

3 Using Groves Mechanisms to Construct Nash Efficient Mechanisms

3.1 Groves Mechanisms

We begin with the definition of the Groves mechanisms.

Definition 4. A Groves mechanism ΓG is given by

(4.1) Mi^G is a set of strictly concave, differentiable functions mi such that, for all m, the set arg max_y {∑i mi(y) − κy} is not empty,

(4.2) y^G(m) = arg max_y {∑i mi(y) − κy},

(4.3) ti^G(m) = αi κ y^G(m) − ∑_{j≠i} (mj(y(m)) − αj κ y(m)) + hi(m−i),

where ∑i αi = 1 and hi(m−i) is any function that does not depend on mi.

The tax function ti^G is measurable in {(y(m), m−i)}_m. Furthermore, for any yi and m−i, player i can announce any mi ∈ Mi^G with mi′(yi) = κ − ∑_{j≠i} mj′(yi) to get y^G(mi, m−i) = yi. Thus, Groves mechanisms are quasi-direct. If we restrict ourselves to economies where the actual preferences are quasi-linear and the vi functions are in Mi^G, then the Groves mechanisms are also direct mechanisms, and truth-telling is a dominant strategy (Groves, 1973). In the literature, Groves mechanisms are almost always applied to quasi-linear economies and viewed as direct mechanisms. The key innovation of Groves and Ledyard (1977) is to view them instead as quasi-direct mechanisms for more general economies.
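Quasi-directness is easy to see in a parametric special case. In the sketch below, messages are hypothetical quadratic functions mi(y) = ai·y − (b/2)y² with a common fixed curvature b (none of these parameter values come from the text), so y^G(m) has a closed form and agent i can reverse-engineer an announcement that steers the public good level to any target, given m−i.

```python
# Sketch: Groves mechanisms (Definition 4) are quasi-direct. We use a
# hypothetical family of quadratic messages m_i(y) = a_i*y - (b/2)*y^2
# (b > 0 fixed), so the maximizer of sum_i m_i(y) - kappa*y solves
# sum_i a_i - I*b*y = kappa.

b, kappa = 2.0, 3.0

def y_groves(a):
    """y^G(m) = argmax_y sum_i m_i(y) - kappa*y for quadratic messages."""
    return (sum(a) - kappa) / (len(a) * b)

def steer(target_y, a_others):
    """Intercept a_i that makes y^G hit target_y, given the others' messages."""
    I = len(a_others) + 1
    return I * b * target_y + kappa - sum(a_others)

a_others = [4.0, 7.5]        # arbitrary announcements by the agents j != i
target = 1.25                # any public good level agent i wants
a_i = steer(target, a_others)
assert abs(y_groves([a_i] + a_others) - target) < 1e-12
```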

Groves and Ledyard (1977) prove the following properties of Groves mechanisms in general economies.

Lemma 2 (Groves and Ledyard (1977), Theorem 3.1). Consider any e ∈ E and any equilibrium m∗ of a Groves mechanism ΓG in e.

(2.1) Each agent’s message reveals their marginal rate of substitution at the realized allocation:9 ∂m∗(yG(m∗)) i = MRS (ω − tG(m∗), yG(m∗)). ∂y i i i

P G ∗ G ∗ (2.2) The Samuelson condition is satisfied: i MRSi(ωi − ti (m ), y (m )) = κ.

A proof appears in the appendix.

Remark. There are many possible best responses to a given m−i, even with quasi-linear preferences. For example, consumer i can directly affect the taxes of others by adding a scalar to their message. The resulting message will still be a best response, and the added scalar will not alter the public good level or their own tax. Other similar manipulations are possible.

Remark. In quasi-linear economies, any mi with mi′(y^G(mi, m−i)) = vi′(y^G(mi, m−i)) is a best response to m−i. If other agents change their message to some m̂−i at which mi′(y^G(mi, m̂−i)) ≠ vi′(y^G(mi, m̂−i)), then mi is no longer a best response to m̂−i. Thus, mi is not a dominant strategy. If, however, mi ≡ vi (up to an additive scalar), then mi is a best response regardless of the value of y^G. Thus, truth-telling is a dominant strategy in Groves mechanisms with quasi-linear preferences. With general preferences, however, consumer i's MRSi depends on the level of taxes; as others change their messages (for example, by adding scalars), consumer i must alter her message in response to match the resulting change in MRSi.

9Groves and Ledyard (1977) actually show that agents reveal their ‘marginal willingness to pay’, but it is equivalent here to the MRSi function.

That the Samuelson condition is satisfied is only one step toward establishing Nash efficiency. What remains is to ensure that an equilibrium exists, and that it is balanced. For general Groves mechanisms, equilibrium existence is impossible if the mechanism fails to collect sufficient funds to finance y^G(m). The VCG mechanism cures this problem by ensuring that ti(m) ≥ αi κ y(m) for all m.

Definition 5. A mechanism ΓV = ((Mi^V)i, y^V, (ti^V)i) is the VCG mechanism if it is a Groves mechanism in which

(5.1) hi(m−i) = max_{y−i} ∑_{j≠i} (mj(y−i) − αj κ y−i).

Though the VCG mechanism will never run a shortage, it collects excess funds in many economies. The excess cannot be returned to agents in the economy without altering their preferences, and therefore represents social inefficiency. To achieve Pareto optimality, we must use a Groves mechanism that is exactly balanced. Unfortunately, it is well-known that balanced Groves mechanisms are impossible

when Mi^G is sufficiently large.10 Thus, we must look to smaller domains of quasilinear preferences to achieve balance.
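The surplus property of the VCG choice of hi can be illustrated numerically. The sketch below again uses hypothetical quadratic messages mj(y) = aj·y − (b/2)y² and equal shares αj = 1/I (all parameter values are arbitrary); because each hi is a maximum over the others' surplus, the collected taxes weakly exceed the cost of the public good.

```python
# Sketch of the VCG choice of h_i (Definition 5) for hypothetical quadratic
# messages m_j(y) = a_j*y - (b/2)*y^2 with equal shares alpha_j = 1/I.
# It never runs a shortage: sum_i t_i^V(m) >= kappa * y^G(m).

b, kappa = 1.0, 2.0

def vcg(a):
    I = len(a)
    y = (sum(a) - kappa) / (I * b)                 # y^G(m)
    taxes = []
    for i in range(I):
        others = [a[j] for j in range(I) if j != i]
        # sum_{j != i} (m_j(y') - alpha_j*kappa*y') = A*y' - (I-1)*(b/2)*y'^2
        A = sum(others) - (I - 1) * kappa / I
        h_i = A ** 2 / (2 * (I - 1) * b)           # its maximum over y'
        surplus_at_y = A * y - (I - 1) * b / 2 * y ** 2
        taxes.append(kappa * y / I - surplus_at_y + h_i)
    return y, taxes

y, taxes = vcg([3.0, 1.0, 5.0, 2.0])
# Each h_i is a maximum, so the mechanism weakly over-collects:
assert sum(taxes) >= kappa * y - 1e-9
```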

3.2 Quadratic Preferences and the Groves-Ledyard Mechanism

Groves and Loeb (1975) show that the Groves mechanism can be exactly balanced when preferences are quadratic-quasilinear and only vary in the intercept of the marginal utility for y. Groves and Ledyard (1977) use this space of preferences to construct their mechanism in general economies. Specifically, they apply the Groves mechanism with message space

Mi^Q := { vi(·|θi) : vi(y|θi) = (γθi + αi κ) y − (γ/(2I)) y², θi ∈ R },    (3)

where ((αi)_{i=1}^I, γ) is a fixed vector of parameters satisfying ∑i αi = 1 and γ > 0. Each θi ∈ R is a free parameter that indexes the set of functions in Mi^Q. Thus, we can

10See footnote 22 of Groves and Ledyard (1977).

view consumers as selecting a θi ∈ R and then submitting mi = vi(·|θi) to the VCG mechanism. Letting θ be the vector of chosen parameter values, a few calculations reveal that

y^G(θ) = ∑i θi,

ti^G(θ) = αi κ y^G(θ) + (γ/2) [ ((I−1)/I) (θi − θ̄−i)² − σ²(θ−i) ].

This is exactly the Groves-Ledyard mechanism ΓL. Because it is balanced, we have from Lemma 2 that this mechanism is conditionally Nash efficient. Furthermore, Groves and Ledyard (1980) show that equilibrium exists for a wide class of economies, giving the following result.

Proposition 1 (Groves and Ledyard (1977, 1980)). Let Γ be a Groves mechanism in which Mi^G = Mi^Q (the space of parameterized quadratic-quasilinear functions given in (3)). Then Γ is equivalent to the Groves-Ledyard mechanism ΓL, has a balanced equilibrium for every e, and is Nash efficient.

3.3 Tian’s Preferences and the Generalized Groves-Ledyard Mechanism

Originally, it was conjectured that quadratic-quasilinear preferences are the only preferences for which Groves mechanisms can be balanced (see Laffont and Maskin, 1980). Then Tian (1996) showed that budget balance could be obtained on a wider class of polynomial functions that may have higher order than two. Liu and Tian (1999) extended this line of inquiry by providing a characterization of domains of functions on which there exists a balanced Groves mechanism. We can now replicate Groves and Ledyard's construction of their mechanism, but with the more general budget-balancing preferences of Tian. The resulting mechanism will generalize the original Groves-Ledyard mechanism.

12 Unfortunately, Tian’s preferences were derived for the case of κ = 0. And his proof that these preferences result in a balanced budget rely on the arguments of Laffont and Maskin (1980), who also assume κ = 0. Thus, before we can proceed, we must re-derive the Laffont-Maskin result and Tian’s balanced preferences for the case of κ > 0.

Proposition 2 (Based on Laffont and Maskin (1980), Theorem 4.1). Assuming y^G(m) is differentiable and κ ≥ 0, there exists a balanced Groves mechanism if and only if there exist (αi)_{i=1}^I such that ∑i αi = 1 and

∑_{i=1}^I (∂^{I−1}/∂m−i) [ (vi′(y(m)|θi) − αi κ) (∂y/∂θi) ] ≡ 0.

Proposition 3 (Based on Tian (1996), Theorem 1). Assume y^G(m) is differentiable and κ ≥ 0. Fix any (αi)_{i=1}^I such that ∑i αi = 1, any c > 0 and d ≥ 0, and any continuously differentiable functions f(y) and (ψi(θi))i such that f(y) ≥ 0 for all y, f′(y) ≠ 0 for all y, f is invertible, and ψi′(θi) ≠ 0 for all i and θi. For each r ∈ {2, 3, . . . , I−1}, there exists a balanced Groves mechanism if Mi^G = Mi^Tr, where

Mi^Tr = { vi(·|θi) : vi(y|θi) = αi κ y + f(y) ψi(θi) − ((r−1)/(rc)) (c f(y) + d)^{r/(r−1)}, θi ∈ R }.

Proofs for both propositions appear in the appendix, and follow closely those of the original authors. Now we apply the Tian preferences to generate a family of generalized Groves-Ledyard mechanisms.

Proposition 4. Fix r ∈ {2, 3, . . . , I−1} and parameters (αi)_{i=1}^I such that ∑i αi = 1, c > 0, d ≥ 0, f(·), and (ψi(·))i. Using message space Mi^Tr in the Groves mechanism gives the following generalized Groves-Ledyard mechanism.

(4.1) Mi* = R,

(4.2) y*(θ) = f⁻¹( (1/c)(ψ̄(θ)^{r−1} − d) ),

(4.3) ti*(θ) = αi κ y*(θ) − (I−1) [ (1/c)(ψ̄(θ)^{r−1} − d) ] ψ̄−i(θ−i) + (I−1) ((r−1)/(rc)) ψ̄(θ)^r + hi(θ−i),

where ψ̄(θ) = ∑i ψi(θi)/I and ψ̄−i(θ−i) = ∑_{j≠i} ψj(θj)/(I−1). The mechanism is balanced if, in addition,

r  r   q  X X I − 1 2r − 1 t1,t2,...,tq X X X Y h (θ ) = ··· ψ (θ )tk i −i  Ir−1 cr I − q  ik ik  q=1 Tq,r i16=i i26∈{i,ik}k<2 iq6∈{i,ik}k 1)tk−1 > tk (5) k=1 is the set of all strictly decreasing length-q sequences of positive integers that sum to r, and  r  r! = (6) t1, t2, . . . , tq t1!t2! ··· tq! is the multinomial coefficient.

A detailed derivation appears in the appendix. The original Groves-Ledyard mechanism is obtained by setting r = 2, f(y) = y, ψi(θi) = γθi for any γ > 0, c = γ/I, and d = 0.
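The reduction claimed in the last sentence is easy to verify for the public good rule (4.2): with r = 2, f(y) = y, ψi(θi) = γθi, c = γ/I, and d = 0, the rule collapses to y*(θ) = ∑i θi. A minimal numeric sketch (the values of γ and θ below are arbitrary test inputs, not taken from the text):

```python
# Sketch checking the reduction claimed above: with r = 2, f(y) = y,
# psi_i(theta_i) = gamma*theta_i, c = gamma/I, and d = 0, the public good
# rule of Proposition 4 collapses to y*(theta) = sum_i theta_i.

gamma, r, d = 1.7, 2, 0.0
theta = [0.5, -1.0, 2.0, 4.0]
I = len(theta)
c = gamma / I

psi_bar = sum(gamma * t for t in theta) / I          # bar psi(theta)
y_star = (psi_bar ** (r - 1) - d) / c                # f^{-1} is the identity here

assert abs(y_star - sum(theta)) < 1e-12
```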

3.4 Example Mechanisms

In this section we adapt the parameter restrictions that imply the Groves-Ledyard mechanism, but consider higher-order preferences represented by r > 2.11 For any

11Technically, these preferences deviate from M^Tr defined above by adding a scalar multiple to the f(y)^{r/(r−1)} term, but it can be shown easily that this does not impact budget balance.

r ∈ {2, 3, . . . , I−1}, define

Mi* = { vi(·|θi) : vi(y|θi) = αi κ y + γ θi f(y) − (γ(r−1)/(rI)) f(y)^{r/(r−1)}, θi ∈ R }.

The resulting public good function is characterized by

f(y*(θ)) = (∑i θi)^{r−1},

and the tax function is given by

" # X X γ(r − 1) X t∗(θ) = α κy∗(θ) − γθ ( θ )r−1 − ( θ )r + h (θ ). (4) i i j k rI k i −i j6=i k k

Algebraic manipulation of equation (4) yields a version of the tax function somewhat similar to the original Groves-Ledyard mechanism. Specifically, we find

ti*(θ) = αi κ (∑_{i=1}^I θi)^{r−1} + (γ(r−1)/r) [ ((I−1)/I) (θi − θ̄−i)^r + ∑_{j=1}^{r−1} bj I^j θi^{r−j} θ̄−i^j ] + γ I^{r−1} (I−1) ((r−1)/r) br θ̄−i^r + hi(θ−i),

where

bi = −C(r, i) + C(r−1, i−1) ((−1)^i/(I−1)^i) (I^r − (I−1)^r)/(I−1),

where C(n, k) denotes the binomial coefficient.

For the case of r = 2 and f(y) = y, we have y*(θ) = ∑i θi, as in Groves-Ledyard, and

hi(θ−i) = (γ/2)(I−1) [ (1/(I−1)) ∑_{j≠i} θj² + (2/(I−2)) ∑_{j≠i} ∑_{k≠i,j} θj θk ].

Further manipulation gives the exact form for ti^L above.
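The equivalence between the Groves mechanism on Mi^Q and the Groves-Ledyard form can also be verified numerically. The sketch below computes the Groves taxes directly from the quadratic messages in (3), using the balancing hi for the r = 2 case (reading its double sum as running over unordered pairs {j, k}, an interpretive assumption on the notation), and compares them with the closed-form taxes of Definition 3; the economy (I = 3, γ = 2, κ = 3, equal shares) is a hypothetical example.

```python
from itertools import combinations

# Sketch: the Groves tax computed directly from the quadratic messages in (3),
# with the balancing h_i for r = 2 (double sum read over unordered pairs {j,k},
# an interpretive assumption), against the closed-form Groves-Ledyard tax of
# Definition 3. Parameters (I = 3, gamma = 2, kappa = 3) are arbitrary.

gamma, kappa = 2.0, 3.0
alphas = [1/3, 1/3, 1/3]
theta = [1.0, 2.0, 4.0]
I = len(theta)
y = sum(theta)                                     # y^G(theta) = sum_i theta_i

def v(j, yy):   # v_j(y | theta_j) from (3)
    return (gamma * theta[j] + alphas[j] * kappa) * yy - gamma / (2 * I) * yy ** 2

def h(i):       # balancing h_i(theta_{-i}) for r = 2
    others = [j for j in range(I) if j != i]
    s1 = sum(theta[j] ** 2 for j in others) / (I - 1)
    s2 = 2 * sum(theta[j] * theta[k] for j, k in combinations(others, 2)) / (I - 2)
    return (gamma / 2) * (I - 1) * (s1 + s2)

def t_groves(i):   # Groves tax (4.3) with the balancing h_i
    return alphas[i] * kappa * y - sum(v(j, y) - alphas[j] * kappa * y
                                       for j in range(I) if j != i) + h(i)

def t_gl(i):    # Definition 3 with m = theta
    others = [theta[j] for j in range(I) if j != i]
    tbar = sum(others) / (I - 1)
    var = sum((t - tbar) ** 2 for t in others) / (I - 2)
    return alphas[i] * kappa * y \
        + (gamma / 2) * (((I - 1) / I) * (theta[i] - tbar) ** 2 - var)

assert all(abs(t_groves(i) - t_gl(i)) < 1e-9 for i in range(I))
assert abs(sum(t_groves(i) for i in range(I)) - kappa * y) < 1e-9   # balanced
```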

Finally, if r = 3 and f(y) = y, we have

y*(θ) = (∑i θi)²

and

ti*(θ) = αi κ (∑_{i=1}^I θi)² + (γ(r−1)/r) [ ((I−1)/I) (θi − θ̄−i)³ + ∑_{j=1}^{2} bj I^j θi^{3−j} θ̄−i^j ] + (2γ/3) I² (I−1) b3 θ̄−i³ + hi(θ−i),

where

hi(θ−i) = (2γ/3)(I−1) [ (1/(I−1)) ∑_{j≠i} θj³ + (3/(I−2)) ∑_{j≠i} ∑_{k≠i,j} θj² θk + (1/(I−3)) ∑_{j≠i} ∑_{k≠i,j} ∑_{l≠i,j,k} θj θk θl ].

4 Equilibrium Existence and Two-Dimensional Mechanisms

Groves and Ledyard (1980) prove that a competitive equilibrium exists for their mechanism in a wide range of economies. Here we find conditions on the strategy space under which equilibrium existence is assured for Groves mechanisms, and then ask whether or not those conditions can translate to the generalized Groves-Ledyard mechanisms with their more restrictive strategy spaces.

Proposition 5. Suppose ΓG is a Groves mechanism such that, for every i, mi ∈ Mi^G, and scalar ai, the function mi(·) + ai is also in Mi^G. If an economy e has an allocation ((xi°)i, y°) satisfying the Samuelson condition (eq. 1) and there is a message m̂ ∈ M^G for which m̂i′(y°) = MRSi(xi°, y°) for all i, then there exists a Nash equilibrium of ΓG in e whose allocation is ((xi°)i, y°).

Unfortunately, this existence result does not apply to the generalized Groves-Ledyard mechanisms proposed above because their message spaces do not contain functions that differ only by a scalar. But if we add a scalar to each function as a second parameter—

which the agent must also announce—then the message space becomes rich enough to permit equilibrium existence via Proposition 5. We now briefly formalize the resulting mechanism.

Fix any generalized Groves-Ledyard mechanism, which is a Groves mechanism Γ^G = ((M_i^{T_r})_i, y^G, (t_i^G)_i) with Tian preferences M^{T_r} for some r ∈ {2, . . . , I−1}. Now construct a new ‘shifting’ mechanism Γ^S = ((M_i^S)_i, y^S, (t_i^S)_i) as follows. First, set

M_i^S = { v_i(·|θ_i, β_i) = v_i(·|θ_i) + β_i : v_i(·|θ_i) ∈ M_i^{T_r}, β_i ∈ ℝ }.

We can equivalently view agents as submitting (θ_i, β_i) instead of m_i^S. Next, set y^S(θ, β) = y^G(θ). Now, if we simply apply the Groves tax function with these new preferences, we would have t_i^S(θ, β) = t_i^G(θ) − Σ_{j≠i} β_j for each i. This is undesirable; if the original mechanism were balanced, this new mechanism would be unbalanced by a factor of Σ_i Σ_{j≠i} β_j. To compensate, we need to add this amount back into Σ_i h_i(θ_{−i}, β_{−i}). This can be done by adding (I − 1)β_{i+1} to each h_i(θ_{−i}, β_{−i}), where i + 1 is taken modulo I. In total, we now have

t_i^S(θ, β) = t_i^G(θ) − Σ_{j≠i} β_j + (I − 1)β_{i+1}.

The added terms clearly balance out in aggregate, preserving the balance properties of the original mechanism.
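The cancellation of the added β terms is easy to confirm numerically; a minimal sketch (variable names ours):

```python
import numpy as np

rng = np.random.default_rng(1)
I = 6
beta = rng.normal(size=I)             # arbitrary announced shifts beta_1,...,beta_I

# Per-agent adjustment to the Groves tax: subtract the other agents' shifts,
# then add back (I - 1) * beta_{i+1}, with i + 1 taken modulo I.
adjust = np.array([-(beta.sum() - beta[i]) + (I - 1) * beta[(i + 1) % I]
                   for i in range(I)])

# The adjustments sum to zero, so budget balance of the original mechanism
# is preserved for every beta.
print(np.isclose(adjust.sum(), 0.0))   # → True
```

Each β_j is subtracted I − 1 times across the tax functions and handed back exactly once with weight I − 1, so the aggregate effect is zero by construction.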

Since βi does not affect consumer i’s allocation, this added dimension does not alter the strategic properties of the mechanism. It does, however, give us enough flexibility to use the above existence result. Thus, we have the following proposition.

Proposition 6. For any generalized Groves-Ledyard mechanism Γ*, the two-dimensional ‘shifting’ mechanism Γ^S defined above

(6.1) satisfies the Samuelson condition (Lemma 2) at any equilibrium,

(6.2) is budget balanced (Σ_i t_i^S(θ, β) = κ y^S(θ, β)) for every θ and β, and

(6.3) has an equilibrium in every economy e ∈ E.

Consequently, Γ^S is a Nash efficient mechanism.

A simple interpretation of this mechanism is that it is identical to Γ*, except that each agent i also gets to take any amount of private good from their neighbor and redistribute it equally among the agents j ≠ i. This makes any vector of private-good consumption obtainable at any public-good level, subject to the aggregate resource constraint. Any generalized Groves-Ledyard mechanism can be augmented in this way to guarantee existence, including the original Groves-Ledyard mechanism.

5 Discussion

We do not claim that the generalized Groves-Ledyard mechanisms are in any way more practical or useful than the original mechanism. Instead, our goal here is to highlight how the method of creating mechanisms used by Groves and Ledyard (1977) can be extended to find other mechanisms. In some sense, we view our results as negative; the Tian preferences are the broadest class of preferences for which it is known that the Groves mechanism can be balanced, and we fully characterize the mechanisms that can be created using these preferences. For all but the simplest quadratic preferences, the resulting mechanisms are quite unruly. But, if there are no other preferences that balance the Groves mechanism, then there are no other mechanisms that can be constructed in this way.

Of course, this is also not the only way to construct Nash efficient mechanisms. Brock (1980) used a differential approach to show how such mechanisms might be constructed.12 Healy and Mathevet (2012) used this approach to characterize Nash-Lindahl mechanisms. In their setting it is easy to see that conditional Nash efficiency (ignoring the Lindahl requirement) is characterized by Σ_i (∂t_i/∂m_i)/(∂y/∂m_i) = κ for all m, and budget balance is given by Σ_i t_i(m) = κ y(m) for all m. These are fairly easy to satisfy. Thus, it is the equilibrium existence requirement that is the real challenge, not conditional Nash efficiency.

We study a two-good economy for simplicity. Groves and Ledyard (1977) study economies with arbitrary finite numbers of goods. They demonstrate that the procedure of using Groves mechanisms to construct Nash efficient mechanisms faces no difficulty with the larger number of goods; messages simply become vectors instead of scalars. Thus, it is immediately clear that our more general construction applies to the larger setting as well.

Though our ‘shifting’ mechanism has the benefit of equilibrium existence, it has a few undesirable properties. It is not symmetric: agents are forced to treat their neighbors (and to be treated by their neighbors) differently than everyone else. This creates obvious political difficulties in practice. But it appears there is no simple way to make the mechanism symmetric without sacrificing either budget balance or equilibrium existence. Behavior would also be distorted by the slightest degree of . Each agent essentially engages in a  in which they unilaterally redistribute money among the other parties. The required vector of redistribution that guarantees equilibrium for some economy may be seen by some agents as undesirable, leading again to a non-existence result. Agents in a larger context may also fear reciprocity for their redistribution decisions. Finally, equilibrium existence may require a very specific vector of transfers, to be executed simultaneously by all parties, all of whom are indifferent over their own choices. It seems unlikely that equilibrium would obtain in practice.

12Groves and Ledyard (1987) describe the procedure in detail. Like us, Brock (1980) finds that the Groves-Ledyard mechanism is the ‘simplest’ mechanism one might hope to construct that guarantees existence. Another notably simple mechanism was constructed by Walker (1981).

Chapter 2: Symmetric Mechanism Design

1 Introduction

Designers of economic mechanisms can often benefit by biasing the rules in favor of some of the participating agents. In auction design, for example, it is well known (Myerson (1981)) that if bidders are heterogeneous then the seller can intensify the competition by subsidizing the bids of weaker bidders. Thus, this ‘affirmative action’ type of bias may increase the seller’s revenue.13 A second example is the design of reward schemes whose goal is to incentivize agents to exert costly effort, where even in a completely symmetric environment the principal can benefit by introducing discriminatory rewards. Results of this type have been obtained by Winter (2004) in a joint production setup with increasing-returns-to-scale technology, and by Pérez-Castrillo and Wettstein (2016) in the setup of contests. Third, the admission criteria employed by academic institutions in order to achieve a diverse student body are often criticized as discriminatory and unfair.14

On the other hand, fairness is a high-priority goal for policy makers. This is evident from the many existing acts and regulations whose goal is to guarantee that markets are not biased against certain groups in the population.15 In the context of mechanism design, a natural candidate for an anti-discriminatory regulation is that the “rules of the game” be the same for all participants. In other words, the mechanism should be symmetric across agents. Symmetry is a normatively appealing property that has been used extensively in the social choice and mechanism design literature. It appears for

13In a recent paper, Deb and Pai (2017) show how the seller can implement the revenue-maximizing auction without explicitly biasing the rules in favor of weaker bidders. We discuss their result and its relation to our work in detail below. Another relevant recent paper is Knyazev (2016), who studies auction design when the goal of the seller is to maximize the surplus of his ‘favorite’ bidder.

14These policies have been challenged in courts; a recent example is the case of Fisher v. University of Texas, US Supreme Court, vol. 570 (2013).

15For example, the U.S. Equal Employment Opportunity Commission (EEOC) enforces several anti-discriminatory labor-market laws. Another example is the Genetic Information Nondiscrimination Act of 2008 in health insurance markets.

example in the classic result of May (1952) on simple majority rule, in the theory of social welfare functions (e.g. (Mas-Colell et al., 1995, Chapter 22)), and in the theory of values of cooperative games (e.g. Shapley (1953)), among many others.

The goal of this paper is to analyze the extent to which symmetric mechanisms guarantee fair outcomes. We consider an abstract mechanism design environment with incomplete information, allowing for both correlated types and interdependent values. The output of a mechanism specifies a public outcome and a private outcome for each agent. For instance, in a public good provision problem (as in e.g. Ledyard and Palfrey (1999)) a mechanism determines whether the good is provided or not (the public outcome) and the transfer required from each of the agents (the private outcomes) as a function of the profile of messages sent to the mechanism.16 Roughly speaking, a symmetric mechanism is one in which a permutation of the vector of messages results in no change to the public outcome and the corresponding permutation of the private outcomes.

Our first main result (Theorem 1) characterizes the class of social choice functions (i.e., mappings from type profiles to outcomes) that a designer can implement in Bayes-Nash equilibrium using symmetric mechanisms. In many environments this class is large and contains functions that clearly favor certain agents over others. In some environments this includes even the extreme case of dictatorial functions, as we illustrate in Section 2 below. Thus, a regulation requiring mechanisms to treat agents symmetrically (in the sense we have defined it) is not necessarily an effective way to achieve fairness. On the other hand, in many environments symmetry is not a vacuous condition; some implementable social choice functions can no longer be implemented once symmetry is required.

The characterization in Theorem 1 can be roughly described as follows.
A given (incentive compatible) social choice function f can be implemented by a symmetric mechanism if and only if for every pair of agents i and j there exists another social

16Environments with only public outcomes (such as voting environments) and environments with only private outcomes (such as auctions) can of course be accommodated in our framework.

choice function f_ij such that (1) f_ij treats i and j symmetrically; and (2) for any type of agent i, the expected utility i receives under f by truthfully revealing his type is weakly higher than the expected utility he receives under f_ij from any possible type report. When such functions f_ij exist, we explicitly show how to construct a symmetric mechanism that implements f. The construction is based on the idea that the equilibrium message of an agent encodes his identity as well as his type, and the mechanism uses f to determine the outcome when the message profile contains all identities. The function f_ij is used to incentivize agent i to reveal his true identity instead of ‘pretending’ to be agent

j.17 Conversely, if for some pair of agents i and j an appropriate function f_ij does not exist, then not only does this construction fail, symmetric implementation is impossible altogether. Thus, this result has the flavor of a ‘revelation principle’, in the sense that one should only consider a particular kind of (indirect) mechanism to determine whether symmetric implementation is possible or not.

We emphasize that we do not argue that the type of mechanism described above is a practical way for designers to get around a rule requiring symmetry. It is merely a theoretical construction that enables us to prove the sufficiency part of our theorem and to provide an upper bound on what can be symmetrically implemented. It may very well be that simpler and more practical symmetric mechanisms can be found that implement a given social choice function, as is the case in Deb and Pai (2017) for example. However, we do think that an important contribution of the paper is to expose the role that indirect mechanisms have in overcoming exogenous constraints such as symmetry.18

It is important to point out that Theorem 1 concerns partial implementation, and

17This construction is reminiscent of mechanisms used in the literature on implementation with complete information (e.g. Maskin (1999)), where each agent reports everyone’s preferences and an agent is punished if his message disagrees with the messages of all other agents. There are important differences, however. First, and most importantly, in our case the designer does not use the punishment scheme to elicit any new information from the agents; it is only used to create symmetry in cases where the original social choice function is not symmetric. Second, under complete information the construction works for every social choice function, while this is not the case in our setup. Third, our construction is meaningful even if there are only two agents, which is not the case with complete information.

18Other examples that illustrate the potential importance of indirect mechanisms include Bester and Strausz (2001) for the case where the designer lacks commitment power, Saran (2011) when agents have menu-dependent preferences, and Strausz (2003) when only deterministic mechanisms are allowed.

that the mechanism used in its proof would typically have multiple equilibria that may generate different outcomes than the one the designer intended. We thus follow the standard approach of the mechanism design literature in assuming that the designer can induce the agents to play a particular equilibrium by making it focal.19 Notice that, if the underlying environment is symmetric across agents, then in order to implement a biased social choice function with a symmetric mechanism the equilibrium itself must generate the asymmetry, i.e., the designer should make an asymmetric equilibrium the focal one. A regulator interested in preventing discrimination should therefore be concerned not only with the symmetry of the rules of the mechanism, but also with the way the mechanism is framed and with the communication between the designer and the participants.

A well-known criticism of implementation in Bayes-Nash equilibrium is that it assumes too much common knowledge among the agents and the planner (see Bergemann and Morris (2005)). One solution to this problem is to consider more robust solution concepts. Here, we consider symmetric implementation in dominant strategies. Dominant strategy mechanisms are typically studied in environments with private values, so we restrict attention to this type of environment. In Theorem 2 we prove that if the environment is symmetric then only symmetric social choice functions can be implemented in (weakly) dominant strategies by symmetric mechanisms. In other words, indirect mechanisms are not useful in overcoming the symmetry constraint. This stands in stark contrast to the results for implementation in Bayes-Nash equilibrium.

The current work is motivated by a recent paper of Deb and Pai (2017), who study the possibility for a seller to design a discriminatory auction in an independent private-values setup under the same symmetry constraint as in the current paper.
However, the set of feasible mechanisms they consider is further restricted to include only mechanisms in which bidders submit a single number (the bid), and the highest bidder wins the object. This rules out the type of mechanisms that we use in our proof. Nevertheless, they are able to show that essentially any incentive compatible direct mechanism (social choice function in our terminology) can be implemented in Bayes-Nash equilibrium by some such symmetric auction.20 In other words, symmetry puts almost no restriction on what can be implemented. In Corollary 1 below we show that the same is true in voting environments with independent private values, i.e. every incentive compatible social choice function can be symmetrically implemented in Bayes-Nash equilibrium. However, as we show by examples, this is no longer true with correlated types or interdependent values.

In the following section we illustrate our results with three examples. The first is in an environment of voting with private values, the second in a common-value voting setup (as in the ‘Condorcet Jury’ theorem), and the third is in a model of assignment of indivisible goods. Section 3 presents the notation and the definition of symmetry that we use, and Section 4 contains Theorem 1 and its proof. The special case in which the outcome space contains only public outcomes (voting environment) is further analyzed in Section 5. Section 6 contains the analysis of implementation in dominant strategies. Section 7 concludes.

19The equilibrium used in the proof of Theorem 1 is focal in the sense that each agent reports his true name and type.

2 Motivating examples

2.1 Voting with private values

Consider a society of two agents that needs to choose an alternative from the set {x, y} (they may also choose to randomize between the alternatives). Each agent can either be of type X or of type Y. Both agents have the same (private-values) utility function, given by u(X, x) = u(Y, y) = 1 and u(X, y) = u(Y, x) = 0; that is, an agent of type X prefers alternative x and an agent of type Y prefers alternative y. The realized profile of types is drawn according to the distribution given by the following matrix, where rows correspond to types of agent 1 and columns to types of agent 2:

20The notion of implementation used in Deb and Pai (2017) is somewhat weaker than ours, since they only require that the expected payment in the indirect mechanism is equal to the payment in the direct mechanism. Our definition requires that they are equal ex-post.

1 \ 2 |    X    |    Y
------+---------+---------
  X   |    µ    | 1/2 − µ
  Y   | 1/2 − µ |    µ

The parameter 0 ≤ µ ≤ 1/2 measures the likelihood that agents have similar preferences. Notice that when µ > 1/4 agents’ types are ‘positively correlated’ and when µ < 1/4 they are ‘negatively correlated’. When µ = 1/4 types are independent.

Consider the case where agent 1 is a dictator who gets to choose his favorite alternative. We can describe this Social Choice Function (SCF) f by the following matrix:

1 \ 2 | X | Y
------+---+---
  X   | x | x
  Y   | y | y

Clearly f is incentive compatible, since agent 1 always gets his favorite alternative and agent 2 cannot influence the outcome. However, f treats the agents asymmetrically, favoring agent 1 over 2. The violation of symmetry is expressed by the fact that f(X, Y) = x while f(Y, X) = y, so that two type profiles which are permutations of one another lead to different outcomes.

Is there a way to implement f as a Bayesian equilibrium of some symmetric indirect mechanism? We claim that the answer is positive if and only if µ ≥ 1/4. To see this, consider the indirect mechanism in which the set of messages available to each agent is M = {m1, m2, m3, m4}, and the mapping from profiles of messages to alternatives is given by the following matrix:

1 \ 2 | m1 | m2 | m3 | m4
------+----+----+----+----
  m1  |  β |  γ |  x |  x
  m2  |  γ |  δ |  y |  y
  m3  |  x |  y |  x |  x
  m4  |  x |  y |  x |  x

In the above matrix β, γ and δ stand for numbers in [0, 1] representing the probability with which alternative x is chosen (y is chosen with the complementary probability). Notice that this is a symmetric mechanism in the sense that when the agents switch their messages the outcome does not change.

Consider the strategy profile in which agent 1 plays σ1(X) = m1 and σ1(Y) = m2 and agent 2 plays σ2(X) = m3 and σ2(Y) = m4. Note that if the agents follow these strategies then the SCF f is implemented, as can be seen in the upper-right block of the matrix (rows m1, m2 against columns m3, m4). Also, notice that σ1 is a best response for agent 1 to σ2. To make this profile of strategies an equilibrium we therefore need to choose β, γ, δ ∈ [0, 1] so that the following inequalities are satisfied:

(2µ)β + (1 − 2µ)γ ≤ 2µ (1)

(2µ)γ + (1 − 2µ)δ ≤ 2µ (2)

(1 − 2µ)(1 − β) + (2µ)(1 − γ) ≤ 2µ (3)

(1 − 2µ)(1 − γ) + (2µ)(1 − δ) ≤ 2µ (4)

Inequalities (1) and (2) are the incentive constraints for agent 2 of type X not to deviate to messages m1 or m2, respectively. Similarly, inequalities (3) and (4) are the incentive constraints for agent 2 of type Y. Now, if µ ≥ 1/4 then it is immediate to check that β = γ = δ = 1/2 satisfies the above inequalities, and hence f can be implemented by a symmetric mechanism. On the other hand, if µ < 1/4 then adding (1) and (3) gives (4µ − 1)(β − γ) ≤ 4µ − 1; since 4µ − 1 < 0, this forces β − γ ≥ 1, so β = 1 and γ = 0 must hold. But if γ = 0 then (4) cannot be satisfied, hence there is no solution to this system.

This argument only implies that the particular kind of mechanism considered above cannot symmetrically implement f when µ < 1/4. But our main result (Theorem 1) says that looking at this kind of mechanism is without loss of generality; that is, it is impossible to find any symmetric mechanism that implements f. To understand this

result better, let us rename the messages in the above mechanism so that M = {1, 2} ×

{X, Y}. Thus, each agent reports a name of an agent and a type. Replace m1 by 1X, m2 by 1Y, m3 by 2X, and m4 by 2Y. The strategy profile we considered is the one in which each agent truthfully reports his name and type. The outcome function of the mechanism is designed so that under this truthful strategy profile the given SCF f is used to determine the outcome. To make this truthful reporting an equilibrium, each agent should be incentivized not to deviate to messages that ‘belong’ to the other agent. It is easy to design a punishment for agent 1 to prevent him from ‘pretending’ to be agent 2, since agent 1 gets his favorite alternative in equilibrium. However, when µ < 1/4 it is not possible to design an effective punishment for agent 2 to prevent him from ‘pretending’ to be 1. The difficulty comes from the fact that the punishment should satisfy the symmetry constraint (the outcome should be γ in both message profiles (1X, 1Y) and (1Y, 1X)). We show in Theorem 1 that if such a symmetric punishment scheme is not available then no symmetric mechanism that implements f exists.
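The cutoff at µ = 1/4 can also be illustrated by brute force: the system of incentive constraints has a solution in β, γ, δ exactly when µ ≥ 1/4. The sketch below (function and variable names ours) checks feasibility on a grid; a grid search cannot prove infeasibility in general, but here it agrees with the analytic argument above.

```python
import numpy as np

def feasible(mu, step=0.05):
    """Grid search for beta, gamma, delta in [0, 1] satisfying the four
    incentive constraints of agent 2 in the symmetric indirect mechanism."""
    grid = np.arange(0.0, 1.0 + 1e-9, step)
    b, g, d = np.meshgrid(grid, grid, grid, indexing="ij")
    tol = 1e-12
    ok = (
        (2*mu*b + (1 - 2*mu)*g <= 2*mu + tol) &          # type X, deviate to m1
        (2*mu*g + (1 - 2*mu)*d <= 2*mu + tol) &          # type X, deviate to m2
        ((1 - 2*mu)*(1 - b) + 2*mu*(1 - g) <= 2*mu + tol) &  # type Y, to m1
        ((1 - 2*mu)*(1 - g) + 2*mu*(1 - d) <= 2*mu + tol)    # type Y, to m2
    )
    return bool(ok.any())

print(feasible(0.30))   # → True  (beta = gamma = delta = 1/2 works)
print(feasible(0.25))   # → True  (independent types: boundary case)
print(feasible(0.20))   # → False (no symmetric punishment exists)
```

At µ = 0.20 the constraints force β = 1 and γ = 0, after which the last constraint would require δ ≥ 1.5, so no grid point (and indeed no real point) is feasible.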

2.2 Common value voting

Consider a ‘Condorcet Jury’ setup, where given the state of nature all agents agree on the correct decision, but there is uncertainty about the realized state. Specifically, assume that the state is either H or L, and that each of a group of n agents receives a signal in {h, l}. Let µ be the (full-support) distribution over {H, L} × {h, l}^n according to which the state and signals are drawn. The set of possible decisions is {H, L}. All agents receive a utility of 1 if the decision matches the state and a utility of 0 otherwise. To fit our formal framework, it will be convenient to define the utility of agents as a function of the realized profile of signals (denoted t = (t_1, . . . , t_n)) and the decision chosen (by taking expectation over the state of nature). That is, the utility function of each agent i is given by u_i(t, H) = µ(H|t) and u_i(t, L) = µ(L|t) = 1 − u_i(t, H).

Consider the SCF f which chooses the decision more likely to be correct given any profile of types. That is, f(t) = H if µ(H|t) ≥ 1/2 and f(t) = L otherwise. Clearly

f is incentive compatible, since even if agent i knows everyone else’s types it is still a best response for him to be truthful (i.e., truth-telling is an ex-post equilibrium under f). However, since we make no assumption on the structure of µ, it may be that some agents’ signals are highly correlated with the state and hence provide valuable information, while others’ signals are independent of the state. If this is the case then f would not be symmetric: the decision will not depend just on the number of h signals in the message profile but also on the identity of the agents who sent those h signals.

It is possible, however, to implement f by a symmetric mechanism. To see this, consider the ‘names mechanism’ described in the previous example, where the set of messages available to each agent is M = {1, . . . , n} × {h, l}. The mapping from message profiles to decisions is defined as follows. If all the names 1 through n appear in the message profile then choose the decision prescribed by f at the reported type profile, treating the names as if they were truthfully announced (even if they were not). At all other message profiles choose alternative H. This mechanism is clearly symmetric, in the sense that when the message profile is permuted the decision is not affected. In addition, the strategy profile in which each agent i reports his real name and his type is an equilibrium, since in this case f is used to determine the decision, so i gets his preferred alternative at each type profile. Thus, this symmetric mechanism implements f.
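The outcome function of the names mechanism can be sketched directly. The stand-in SCF f below is hypothetical (the real f depends on the distribution µ), but the permutation-invariance of the mechanism does not depend on which f is plugged in:

```python
from itertools import permutations

def g(messages, f):
    """'Names mechanism': each message is a (name, signal) pair.

    If every name 1..n appears exactly once, apply the SCF f to the signals
    ordered by reported name; at all other profiles default to decision 'H'.
    """
    n = len(messages)
    names = [name for name, _ in messages]
    if sorted(names) != list(range(1, n + 1)):
        return "H"
    by_name = dict(messages)
    return f(tuple(by_name[i] for i in range(1, n + 1)))

# Stand-in SCF (hypothetical): follow agent 1's signal -- an asymmetric rule.
f = lambda t: "H" if t[0] == "h" else "L"

profile = [(1, "h"), (2, "l"), (3, "l")]
# Symmetry: permuting the message profile never changes the public outcome,
# because the decision depends only on the set of (name, signal) pairs.
print(all(g(list(p), f) == g(profile, f) for p in permutations(profile)))  # → True
```

The mechanism is symmetric even though the SCF it embeds is not: the asymmetry is carried entirely by the (focal) equilibrium in which agents report their true names.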

2.3 An assignment model

Three goods {a, b, c} are to be allocated to three agents {1, 2, 3}, one good per agent.21 Each agent’s type is drawn from the set

T = {abc, acb, bac, bca, cab, cba},

21This example is based on the assignment model of Hylland and Zeckhauser (1979). It is important for our argument that the designer must allocate all three goods and cannot leave some agents empty-handed, as is often the case in applications.

where each t ∈ T corresponds to a strict ordering over the goods. Agents’ types are

independent and identically distributed according to the probabilities µ(abc) = 1/6 + 5δ and µ(t) = 1/6 − δ for any other t ∈ T, with δ ∈ [0, 1/6). Note that when δ = 0 all types are equally likely, and that as δ increases the ordering a ≻ b ≻ c becomes more likely relative to all other types. For concreteness, suppose that the utility an agent gets from consuming his most preferred good is 3, from consuming his second-best is 2, and from consuming his least favorite is 1.

A mechanism specifies a lottery over assignments as a function of the agents’ messages. Our definition of symmetry requires in this setup that if the profile of messages is permuted then the vector of lotteries over the goods is permuted accordingly. For example, suppose that under some message profile agent 1 gets good a for sure while agents 2 and 3 each get good b with probability 1/2 or good c with probability 1/2.22 Symmetry implies that if agents 1 and 2 switch their messages (and agent 3’s message is unchanged) then agent 2 gets good a for sure and agents 1 and 3 each get good b with probability 1/2 or good c with probability 1/2. Note that this implies in particular that when two agents send the same message to the mechanism they should get the same lottery over goods.

Consider the ‘serial dictatorship’ SCF in which agent 1 gets to choose his favorite good first, and agent 2 gets to choose his favorite good from what’s left after 1’s choice. Agent 3 is assigned the good not chosen by the first two agents.23 This SCF clearly does not satisfy the above symmetry requirement. Yet, is there a symmetric indirect mechanism that can implement serial dictatorship? It turns out that the answer depends on the parameter δ: if δ is sufficiently small, specifically δ ≤ δ* := (−3 + √13)/12 ≈ 0.05, then symmetric implementation is possible; if δ > δ* then it is not.

We present here a sketch of the argument for the above claim; a detailed proof appears in Appendix A.
Consider again the ‘names mechanism’, and recall that the

22By the Birkhoff–von Neumann Theorem this collection of lotteries corresponds to some distribution over assignments.

23See Abdulkadiroğlu and Sönmez (1998) for an analysis of the ‘random serial dictatorship’ mechanism in this model. In Bogomolnaia and Moulin (2001) this mechanism is referred to as ‘random priority’.

key for symmetric implementation is the existence of symmetric punishment schemes that prevent agents from sending other agents’ messages. Since agent 1 always gets his favorite good according to the given SCF, it is easy to construct such a punishment scheme to prevent him from lying, for any value of δ. The same is true for agent 2: if, for example, he pretends to be agent 3 then we can use the SCF in which agent 1 picks his favorite good as before, and the other two goods are randomly (uniformly) assigned to agents 2 and 3. This SCF is symmetric between agents 2 and 3, and agent 2 is worse off under this function relative to the original one for any value of δ. A similar construction works when agent 2 pretends to be agent 1.

Symmetric implementation therefore depends on whether we can prevent agent 3 from pretending to be one of the other agents. The cutoff δ* is obtained by equating the probability with which agent 3 gets good c under the given serial dictatorship to 1/2: if δ ≤ δ* then this probability is less than or equal to 1/2, and if δ > δ* then it is greater than 1/2. If this probability is below 1/2 then the punishment scheme is simply the constant SCF in which the lotteries over the goods for agent 3 and for the agent he pretends to be are both equal to the lottery that 3 gets under the serial dictatorship. Since the probabilities in this lottery are all below 1/2 this yields a legitimate random assignment, and agent 3 has no incentive to deviate. However, if this probability is greater than 1/2 then this construction no longer works, and we show in Appendix A that nothing else works in this case. By our main result this implies that no symmetric mechanism can implement serial dictatorship.24
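The cutoff can be checked by enumeration. The sketch below (function names ours) computes the probability that agent 3 is left with good c under serial dictatorship and confirms that it crosses 1/2 exactly at δ* = (−3 + √13)/12:

```python
from itertools import product
import math

TYPES = ["abc", "acb", "bac", "bca", "cab", "cba"]  # strict orderings over goods

def mu(t, d):
    """Type probabilities: mu(abc) = 1/6 + 5d, mu(t) = 1/6 - d otherwise."""
    return 1/6 + 5*d if t == "abc" else 1/6 - d

def prob_3_gets_c(d):
    """P(agent 3 is left with good c) under serial dictatorship, types i.i.d."""
    total = 0.0
    for t1, t2 in product(TYPES, repeat=2):         # agent 3's type is irrelevant
        pick1 = t1[0]                               # agent 1 takes his top good
        pick2 = next(x for x in t2 if x != pick1)   # agent 2: best remaining good
        if {pick1, pick2} == {"a", "b"}:            # the leftover for agent 3 is c
            total += mu(t1, d) * mu(t2, d)
    return total

d_star = (-3 + math.sqrt(13)) / 12                  # ≈ 0.0505
print(abs(prob_3_gets_c(d_star) - 0.5) < 1e-9)      # → True
# The probability exceeds 1/2 (blocking the punishment) only above the cutoff:
print(prob_3_gets_c(0.04) < 0.5 < prob_3_gets_c(0.06))   # → True
```

Collecting terms, the enumeration reduces to (1/2 + 3δ)(2/3 + 2δ); setting this equal to 1/2 gives 36δ² + 18δ − 1 = 0, whose positive root is δ*.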

3 Notation and definitions

We will work in a standard abstract mechanism design environment with incomplete information as in e.g. Holmström and Myerson (1983). There is a finite set [n] = {1, . . . , n}

24It is easy to generalize this example to any number of agents and any (i.i.d.) type distribution. The key to whether symmetric implementation is possible or not is the lottery over the goods for the last agent: If this agent receives one of the goods with probability greater than 1/2 then symmetric implementation is not possible, otherwise it is.

of agents, with n ≥ 2. The private information of agents is described by their types, where T is the finite set of possible types of each agent. A state or a type profile is an assignment of an element of T to each agent, and we denote a state by t = (t_1, . . . , t_n) ∈ T^n, where t_i ∈ T is the type of agent i. We use the standard notation in which a subscript −i means that the ith coordinate of a vector is omitted. In particular, given a state t,

we write t_{−i} for the vector of types of all agents except i, and we write (t_i′, t_{−i}) for the state in which the type of i is t_i′ and the types of all other agents are the same as in t. The set of finite-support distributions over a set A is denoted ∆(A).25 Agents’ beliefs about the types of others may depend on their own type in an arbitrary way. For

each i ∈ [n] there is a function µ_i : T → ∆(T^{n−1}) which specifies the beliefs of agent i given each one of his possible types. For any state t we write µ_i(t_{−i} | t_i) for the probability that i assigns to the other agents’ type profile being t_{−i} when his type is t_i. Note that we allow for correlated types and that beliefs need not be consistent with a common prior.

If every µ_i is constant, that is, µ_i(t_{−i} | t_i) = µ_i(t_{−i} | t_i′) for all i, t_i, t_i′ and t_{−i}, then we say that types are independent.

Let O be a set whose elements are called public outcomes, and let P be a set whose elements are called private outcomes. For example, in a public good provision problem O may stand for whether the good is provided or not and P for the transfer required from the agent to finance the good. Agents’ preferences are defined over lotteries over the set T^n × O × P. Specifically, we assume that agent i’s preferences are represented by the von

Neumann-Morgenstern utility function u_i : T^n × O × P → ℝ. Note that this allows the possibility of interdependent values, where an agent’s preferences depend on other agents’ types. Agents are assumed to be expected utility maximizers. With abuse of notation, if λ ∈ ∆(O × P) then we write u_i(t, λ) for the expected utility Σ_{o∈O} Σ_{p∈P} λ(o, p) u_i(t, o, p).

The last component in the description of the environment is a set D ⊆ O × P^n, whose elements are called feasible outcomes. Each d = (o, p) ∈ D is composed of a public

^25 Formally, ∆(A) is the set of all mappings ϕ : A → [0, 1] such that ϕ(a) ≠ 0 for only finitely many a ∈ A and Σ_{a∈A} ϕ(a) = 1. The restriction to finite-support lotteries is for expositional reasons. We often identify a ∈ A with the lottery that assigns probability 1 to a.

outcome o ∈ O and a vector p = (p1, . . . , pn) ∈ P^n that specifies the private outcome for each agent. By allowing D to be a strict subset of O × P^n we can incorporate constraints such as budget balance in a public good provision problem, or the requirement that each agent gets a single good in an assignment problem (as in subsection 2.3 above). Throughout the paper we assume that D is closed under permutations of the vector of private outcomes. Formally, if π : [n] → [n] is any permutation of the agents' names and (o, p) ∈ D, then also (o, πp) ∈ D, where πp = (p_{π(1)}, . . . , p_{π(n)}).

A mechanism specifies the set of messages available to each player and an outcome function which maps each profile of messages to a lottery over feasible outcomes. We restrict our attention to mechanisms in which all players have the same finite set of messages. A mechanism is therefore formally defined as a pair (M, g), where M is some finite set and g : M^n → ∆(D).

Before stating the main definition of this section we need one more piece of notation. Given any vector x of length n and any pair i, j ∈ [n], we denote by x^{ij} the vector of length n which is identical to x except that the ith and jth coordinates are switched.

Definition 6. A mechanism (M, g) is ij-symmetric if for every m ∈ M^n and every (o, p) ∈ D, g(m)(o, p) = g(m^{ij})(o, p^{ij}).^26

A mechanism (M, g) is symmetric if it is ij-symmetric for every pair i, j ∈ [n]. Equivalently, (M, g) is symmetric if for every permutation π of the agents

g(m)(o, p) = g(πm)(o, πp).

In words, symmetric mechanisms have the property that if one message profile is a permutation of another, then the mechanism's output at the former is the corresponding permutation of its output at the latter. For deterministic output this simply means that the public outcome is not affected by permutations of the message vector and the private outcomes

^26 g(m)(o, p) is the probability that the lottery g(m) ∈ ∆(D) assigns to the outcome (o, p) ∈ D.

are permuted accordingly. When the output is stochastic we require that the joint distribution over D is permuted. Note that symmetry implies in particular that the marginal distribution over the public outcome o is not affected by permutations of the message profile, and that if m is such that mi = mj then the marginal of g(m) on O × Pi is the same as its marginal on O × Pj. In addition, note that symmetry not only means that agents are treated equally in the outcomes they receive themselves, but also that agents are symmetric in the influence they have on other agents' outcomes.

Remark. There are other possible ways to define symmetry that capture the idea that agents are treated equally by the mechanism. For instance, one could use a weaker notion than Definition 6, requiring only that the marginal on O × Pi of the distribution g(m) is the same as the marginal on O × Pj of the distribution g(m^{ij}), and that for any other agent k the marginal on O × Pk is the same under these two message profiles. This definition makes sense since agents only care about the public outcome and their own private outcome. The two definitions coincide for deterministic mechanisms. Our results can be adapted to this alternative definition.
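For finite message and outcome spaces, the condition of Definition 6 can be checked mechanically. The following Python sketch is purely illustrative (the encoding of g as a dictionary and all identifiers such as `is_ij_symmetric` are our own, not part of the text):

```python
from itertools import product

def swap(vec, i, j):
    """Return vec with coordinates i and j switched (vec^ij in the text)."""
    v = list(vec)
    v[i], v[j] = v[j], v[i]
    return tuple(v)

def is_ij_symmetric(g, M, n, i, j):
    """Check Definition 6: g(m)(o, p) == g(m^ij)(o, p^ij) for every m and (o, p).
    Here g maps a message profile (tuple of length n) to a lottery encoded as a
    dict {(o, p): prob}, where p is a length-n tuple of private outcomes."""
    for m in product(M, repeat=n):
        gm, gmij = g[m], g[swap(m, i, j)]
        # Union of the supports of both sides of the condition.
        outcomes = set(gm) | {(o, swap(p, i, j)) for (o, p) in gmij}
        for (o, p) in outcomes:
            if abs(gm.get((o, p), 0.0) - gmij.get((o, swap(p, i, j)), 0.0)) > 1e-9:
                return False
    return True

# Toy example (hypothetical): n = 2, the public outcome is the number of agents
# sending message 1, and each agent's private outcome is his own message.
M = [0, 1]
g = {m: {(sum(m), m): 1.0} for m in product(M, repeat=2)}
print(is_ij_symmetric(g, M, 2, 0, 1))  # True: swapping messages swaps private outcomes
```

A mechanism in which, say, the public outcome always followed agent 1's message would fail this check.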

Given the beliefs and preferences of the players, every mechanism induces a (finite) Bayesian game. A (pure) strategy for an agent is a mapping σ : T → M. If σ = (σ1, . . . , σn) is a strategy profile and t is a state, then we write σ(t) = (σ1(t1), . . . , σn(tn)) for the vector of induced messages. Given some λ ∈ ∆(D), it will be convenient to denote by λ_{O,i} the marginal of λ on O × P, that is, λ_{O,i}(o, p) = Σ_{p′∈P^n : p′i=p} λ(o, p′) for every (o, p) ∈ O × P.

A strategy profile σ is a Bayes–Nash Equilibrium (BNE) of a mechanism (M, g) if

Σ_{t−i} µi(t−i|ti) ui(t, g(σ(t))_{O,i}) ≥ Σ_{t−i} µi(t−i|ti) ui(t, g(mi, σ−i(t−i))_{O,i})

for every i ∈ [n], every ti ∈ T, and every mi ∈ M.

A Social Choice Function (SCF) is a mapping f : T^n → ∆(D). The pair (T, f) is itself a mechanism, called the direct mechanism associated with f. A SCF f is ij-symmetric (symmetric) if the direct mechanism associated with f is ij-symmetric (symmetric). Truth-telling refers to the strategy in the direct mechanism in which a player reports his true type. A SCF f is Bayesian Incentive Compatible (BIC) if truth-telling is a BNE of the direct mechanism associated with f. Finally, we define the notion of symmetric implementation that will be used in the next sections.

Definition 7. Let f be a SCF. A mechanism (M, g) implements f in BNE if there is a BNE σ of (M, g) such that f(t) = g(σ(t)) for every t ∈ T n. We say that f is symmetrically implementable in BNE if there is a symmetric mechanism (M, g) that implements f in BNE.

Note that we only require ‘partial implementation’, i.e. that there is an equilibrium for which the outcome of the mechanism coincides with that prescribed by the SCF. There may be other equilibria with different outcomes.

4 Symmetric implementation in BNE

Before stating the result we need one more definition.

Definition 8. Fix an environment and two SCFs f and f′. We say that f dominates f′ for agent i if

Σ_{t−i} µi(t−i|ti) ui(t, f(t)_{O,i}) ≥ Σ_{t−i} µi(t−i|ti) ui(t, f′(si, t−i)_{O,i})    (9)

for every ti, si ∈ T.

The left-hand side of (9) is the expected utility for agent i of type ti under f when all agents are truth-telling. The right-hand side is the expected utility for i of type ti under f′ when he reports type si and all other agents are truth-telling. Thus, f dominates f′ for i if the former is not smaller than the latter for any pair of types ti and si.
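The domination condition in Definition 8 is a finite system of inequalities and is easy to verify computationally. Below is a hedged Python sketch; the encoding (SCFs as functions returning lotteries, beliefs as a function `mu_i`) and all identifiers are our own illustrative choices:

```python
from itertools import product

def dominates(f_scf, f_prime, u_i, mu_i, types, i, n):
    """Check Definition 8: f dominates f' for agent i, i.e. for all ti, si,
    sum_{t-i} mu_i(t-i|ti) u_i(t, f(t)) >= sum_{t-i} mu_i(t-i|ti) u_i(t, f'(si, t-i)).
    SCFs map a type profile to a lottery {outcome: prob}."""
    def eu(ti, scf, report):
        total = 0.0
        for t_minus in product(types, repeat=n - 1):
            t = t_minus[:i] + (ti,) + t_minus[i:]       # true state
            r = t_minus[:i] + (report,) + t_minus[i:]   # state with i's report
            lottery = scf(r)
            total += mu_i(t_minus, ti) * sum(pr * u_i(t, o) for o, pr in lottery.items())
        return total
    return all(eu(ti, f_scf, ti) >= eu(ti, f_prime, si) - 1e-9
               for ti in types for si in types)

# Toy check (hypothetical 2-agent environment): agent 0 is a dictator over {0, 1};
# a constant fair coin is uniformly worse for him, so f dominates it.
types = (0, 1)
mu = lambda t_minus, ti: 0.5                   # independent uniform beliefs
u0 = lambda t, o: 1.0 if o == t[0] else 0.0    # agent 0 wants his own type chosen
f = lambda t: {t[0]: 1.0}                      # dictatorship of agent 0
coin = lambda t: {0: 0.5, 1: 0.5}              # constant punishment SCF
print(dominates(f, coin, u0, mu, types, i=0, n=2))  # True
```

This is the computational content of the remark below Theorem 1: finding suitable punishment SCFs amounts to solving finite (linear) feasibility problems.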

Theorem 1. A SCF f is symmetrically implementable in BNE if and only if f is BIC and for every ordered pair of agents (i, j) there is an ij-symmetric SCF fij such that f dominates fij for agent i.

Before the formal proof we would like to make a few comments. First, from a technical point of view the theorem significantly simplifies the task of checking whether a given SCF f is symmetrically implementable, since checking whether appropriate SCFs fij exist amounts to solving a collection of linear programs. This is well illustrated by the voting example of subsection 2.1 and the constraints (1)–(4) that determine whether symmetric implementation is possible. Second, the theorem implies that the key to symmetric implementation of f is the ability of the designer to construct, for each player i, other SCFs that are uniformly worse for i than f. In particular, in an environment with monetary transfers, if the designer can freely tax agents arbitrarily high amounts then such punishment schemes clearly exist, and therefore symmetry has no bite. Third, the sufficiency part of the proof is based on the 'names mechanism', as explained in the introduction and illustrated in the examples of Section 2. For the proof of necessity we use an argument reminiscent of the standard revelation principle: if some symmetric mechanism (M, g) and an equilibrium σ implement f, then consider the strategy profile in which agent i plays j's equilibrium strategy σj instead of his own, while all other agents follow their equilibrium strategies. This new strategy profile induces another SCF that is ij-symmetric and dominated for i by the original one, which proves the existence of fij as required.

Proof. We start with the 'If' direction. Assume that f is BIC and that there are SCFs {fij} satisfying the conditions in the statement of the theorem. We will now construct a symmetric mechanism (M, g) and an equilibrium σ of this mechanism that implements f. Let the set of messages available to each player be M = [n] × T. Thus, each player reports to the mechanism a name of a player and a type, and we denote by mi = (bi, si) the message sent by agent i, where bi ∈ [n] and si ∈ T. We describe the outcome function g in three steps.

Step 1: Consider message profiles in which each agent truthfully reports his name, that is, for all i, mi = (i, si) for some si ∈ T. Define g(m) = g((1, s1), . . . , (n, sn)) = f(s1, . . . , sn) in this case.

Consider now some message profile m = ((b1, s1), . . . , (bn, sn)) in which {b1, . . . , bn} = [n]. Any such m is a permutation of a unique message profile on which g was just defined in the previous paragraph. Therefore, there is a unique way to extend g to such message profiles while preserving symmetry. We define g in this way. Note that our assumption that D is closed under permutations of the vector of private outcomes guarantees that the range of g is contained in ∆(D).

Step 2:

Fix two agents i, j ∈ [n]. Consider message profiles m in which bk = k for every k ≠ i and bi = j, that is, all agents truthfully report their name except agent i, who reports name j. Thus, the message profile is of the form

m = ((1, s1),..., (i − 1, si−1), (j, si), (i + 1, si+1),..., (j, sj),..., (n, sn)).

Define g(m) = fij(s1, . . . , sn), that is, the SCF fij is used instead of f on the reported types. Notice that the fact that fij is ij-symmetric guarantees that our definition does not violate the symmetry of g when i and j switch their messages. Now, as in Step 1 above, there is a unique way to symmetrically extend g to all message profiles in which (b1, . . . , bn) contains all names except i and in which j appears twice. We extend g to preserve symmetry in this way. The same procedure is repeated for every ordered pair of agents (i, j).

Step 3: The message profiles on which g has not been defined in the previous two steps are those in which at least two names are missing from the list (b1, . . . , bn). Define g arbitrarily on such message profiles, while preserving symmetry. Given the assumption on D such a symmetric function clearly exists.

The fact that the above mechanism is indeed symmetric is obvious from its construction. Consider the strategy profile defined by σi(ti) = (i, ti) for every i, that is, each player truthfully announces his name and his type. Clearly, f(t) = g(σ(t)) for every t ∈ T^n.

We claim that σ is a BNE of the mechanism. Indeed, note first that agent i of type ti cannot gain by deviating to a message of the form (i, ti′), since in this case the mechanism would still use f to determine the outcome; since f is BIC, such misreporting of the type is not beneficial. Second, consider a deviation of agent i of type ti to a message of the form mi = (j, si) with j ≠ i. In this case the mechanism uses the function fij to determine the outcome, and since f dominates fij for agent i, such a deviation is also not profitable. Thus, we have proven that (M, g) symmetrically implements f in BNE.
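The three-step construction can be sketched concretely for a voting environment (only public outcomes), where symmetry of g reduces to anonymity. To keep the sketch short we specialize the punishment SCFs fij to constant lotteries, which are trivially ij-symmetric; the function name `names_mechanism` and the data encoding are our own illustrative assumptions:

```python
from collections import Counter

def names_mechanism(f, n, punish, default):
    """Illustrative 'names mechanism' for a voting environment.
    Each message is a pair (b, s): a claimed name b in range(n) and a type s.
    punish[i] is a constant lottery in Delta(O) that f is assumed to dominate
    for agent i; default is any fixed lottery (used when >= 2 names are missing).
    Since the rule below depends only on the multiset of messages, g is anonymous,
    i.e. symmetric in the voting sense."""
    def g(m):
        names = [b for (b, s) in m]
        counts = Counter(names)
        missing = [i for i in range(n) if counts[i] == 0]
        if not missing:
            # Step 1: the names form a permutation of range(n); treat the type
            # reported by whoever claims name k as agent k's type.
            types_by_name = {b: s for (b, s) in m}
            return f(tuple(types_by_name[k] for k in range(n)))
        if len(missing) == 1:
            # Step 2: exactly one agent misreported his name; punish him with a
            # constant (hence trivially symmetric) lottery.
            return punish[missing[0]]
        # Step 3: at least two names missing; any fixed lottery preserves symmetry.
        return default
    return g

# Toy usage (hypothetical): n = 2, agent 0 is a dictator over outcomes {'x', 'y'}.
f = lambda t: {t[0]: 1.0}
g = names_mechanism(f, 2, punish={0: {'y': 1.0}, 1: {'x': 0.5, 'y': 0.5}},
                    default={'x': 0.5, 'y': 0.5})
print(g(((0, 'x'), (1, 'y'))))  # truthful names: {'x': 1.0}
print(g(((1, 'y'), (0, 'x'))))  # permuted positions: same outcome, {'x': 1.0}
print(g(((1, 'x'), (1, 'y'))))  # agent 0 claims name 1: punished, {'y': 1.0}
```

In the general model (with private outcomes) Step 2's extension is pinned down by the ij-symmetry of fij, exactly as argued in the proof.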

Consider now the 'Only If' direction, and assume that f is symmetrically implementable in BNE. First, as is well known, if f is not BIC then it cannot be implemented in BNE by any mechanism, so in particular it cannot be implemented by a symmetric mechanism (the 'revelation principle'). Thus, f must be BIC.

It is left to construct the collection of SCFs {fij} from the theorem. Let (M, g) be a symmetric mechanism that together with the equilibrium σ implements f. Given a pair of players (i, j), define the strategy profile σ′ by σ′k = σk for every k ≠ i and σ′i = σj. That is, in σ′ all players except i play their equilibrium strategies, and player i plays j's equilibrium strategy. Define fij(t) = g(σ′(t)).

We claim that this fij satisfies the conditions in the theorem. First, fij is ij-symmetric, since

fij(t)(o, p) = g(σ′(t))(o, p) = g((σ′(t))^{ij})(o, p^{ij}) = g(σ′(t^{ij}))(o, p^{ij}) = fij(t^{ij})(o, p^{ij}),

where the first and last equalities are by the definition of fij, the second equality follows from the symmetry of g, and the third equality follows from the fact that i and j play the same strategy under σ′.

Second, for every two types ti, si ∈ T we have that

Σ_{t−i} µi(t−i|ti) ui(t, f(t)_{O,i}) = Σ_{t−i} µi(t−i|ti) ui(t, g(σ(t))_{O,i})
≥ Σ_{t−i} µi(t−i|ti) ui(t, g(σj(si), σ−i(t−i))_{O,i})
= Σ_{t−i} µi(t−i|ti) ui(t, g(σ′(si, t−i))_{O,i})
= Σ_{t−i} µi(t−i|ti) ui(t, fij(si, t−i)_{O,i}),

where the first equality follows from f(t) = g(σ(t)), the inequality from the fact that σ is a BNE, the next equality from the definition of σ′, and the last equality from the definition of fij. Thus, f dominates fij for agent i and the proof is complete.

5 Voting environments

We call an environment a voting environment if it contains only public outcomes. This can be formally embedded in our general framework by letting P be a singleton. For ease of notation we will identify D with O in this case. Agents' preferences are described by ui : T^n × O → R, and an outcome function of a mechanism is g : M^n → ∆(O). Symmetry here means that g is invariant under permutations of the vector of messages, that is, g(m) = g(πm) for every permutation π.

5.1 Voting with private values

We start by analyzing voting environments in which the preferences of a player do not depend on the types of his opponents.

Definition 9. Agents in a voting environment have private values if ui(t, o) = ui(t′, o) for every i ∈ [n], every o ∈ O, and every t, t′ ∈ T^n with ti = ti′. When agents have private values we will write ui(ti, o) for the utility of agent i of type ti when alternative o is chosen.

We now use Theorem 1 to derive two conditions, one necessary and one sufficient, for a SCF to be symmetrically implementable. These conditions are easier to check and to understand than the condition of Theorem 1, as will be illustrated by examples below. The following notation will be useful. Let

λi(ti; f) = Σ_{t−i} µi(t−i|ti) f(ti, t−i) ∈ ∆(O)

be the expected distribution over O under a SCF f when agent i is of type ti and all agents are truthful. Also, let

LCi(ti; f) = {λ ∈ ∆(O): ui(ti, λ) ≤ ui(ti, λi(ti; f))}

be the lower contour set in ∆(O) of the lottery λi(ti; f) for type ti of agent i, i.e. the set of lotteries which are weakly worse for i of type ti than the expected lottery he gets under f when everyone is truthful.

Proposition 7. Consider a voting environment with private values.

1. Let f be a BIC SCF. If ∩_{ti∈T} LCi(ti; f) ≠ ∅ for each agent i, then f is symmetrically implementable in BNE.

2. If n = 2 and f is symmetrically implementable in BNE, then LCi(ti; f) ∩ LCi(ti′; f) ≠ ∅ for each agent i and for each pair of types ti, ti′ ∈ T.

Proof. 1. Fix an agent i and let λ* ∈ ∩_{ti∈T} LCi(ti; f). For each other agent j and for every t define fij(t) = λ*. Clearly, fij(t) = fij(t^{ij}) for all t, so fij is ij-symmetric. Also, for every ti, si ∈ T we have

Σ_{t−i} µi(t−i|ti) ui(ti, f(t)) = ui(ti, λi(ti; f)) ≥ ui(ti, λ*) = Σ_{t−i} µi(t−i|ti) ui(ti, fij(si, t−i)),

so f dominates fij for agent i. It follows from Theorem 1 that f can be symmetrically implemented.

2. We will prove the result for agent i = 1; the proof for i = 2 is identical. Fix two types t1, t1′ ∈ T. Since f is symmetrically implementable in BNE, it follows from Theorem 1 that there is a symmetric function f12 : T^2 → ∆(O) such that for every s ∈ T

u1(t1, λ1(t1; f)) ≥ Σ_{t∈T} µ1(t|t1) u1(t1, f12(s, t)),

and

u1(t1′, λ1(t1′; f)) ≥ Σ_{t∈T} µ1(t|t1′) u1(t1′, f12(s, t)).

For each s ∈ T multiply the first inequality by µ1(s|t1′) and sum over all s ∈ T. Similarly, multiply the second inequality by µ1(s|t1) and sum over all s ∈ T. We get

u1(t1, λ1(t1; f)) ≥ Σ_{(s,t)∈T^2} µ1(s|t1′) µ1(t|t1) u1(t1, f12(s, t)) = u1(t1, Σ_{(s,t)∈T^2} µ1(s|t1′) µ1(t|t1) f12(s, t)),

and

u1(t1′, λ1(t1′; f)) ≥ Σ_{(s,t)∈T^2} µ1(s|t1) µ1(t|t1′) u1(t1′, f12(s, t)) = u1(t1′, Σ_{(s,t)∈T^2} µ1(s|t1) µ1(t|t1′) f12(s, t)).

But since f12(s, t) = f12(t, s) for every pair (s, t), it follows that

Σ_{(s,t)∈T^2} µ1(s|t1′) µ1(t|t1) f12(s, t) = Σ_{(s,t)∈T^2} µ1(s|t1) µ1(t|t1′) f12(s, t).

Hence, this lottery belongs to both LC1(t1; f) and LC1(t1′; f).

Remark. In the simple case where n = 2 and |O| = 2, the sufficient condition of part 1 and the necessary condition of part 2 in Proposition 7 coincide, hence we get a characterization. Indeed, when |O| = 2 the set ∆(O) is one-dimensional, so non-empty intersection of each pair of sets LCi(ti; f), LCi(ti′; f) implies that the whole collection intersects.

Example 1. We first illustrate the conditions in Proposition 7 by revisiting the example of subsection 2.1. Recall that f is a dictatorial SCF in which agent 1 gets to choose his favorite alternative out of {x, y}. We identify ∆(O) with the interval [0, 1], so that any p ∈ [0, 1] corresponds to the probability of x being chosen. If agent 1 is of type X then x is chosen for sure, and when he is of type Y then y is chosen for sure. In the above notation this means that λ1(X; f) = 1 and λ1(Y; f) = 0, which implies LC1(X; f) = [0, 1] and LC1(Y; f) = [0, 1]. Thus, the condition in Proposition 7 holds for this agent for any value of µ.

For agent 2, λ2(X; f) = 2µ and λ2(Y; f) = 1 − 2µ, so LC2(X; f) = [0, 2µ] and LC2(Y; f) = [1 − 2µ, 1]. Thus, LC2(X; f) ∩ LC2(Y; f) ≠ ∅ if and only if µ ≥ 1/4. Proposition 7 then tells us that f can be symmetrically implemented in BNE if and only if µ ≥ 1/4, as we saw in subsection 2.1.
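The interval computation for agent 2 can be checked numerically; a minimal sketch, taking the expressions λ2(X; f) = 2µ and λ2(Y; f) = 1 − 2µ from the text as given (function names are ours):

```python
def lc2_intervals(mu):
    """Lower contour sets of agent 2 in Example 1, identifying Delta(O) with
    [0, 1] (the probability of x). Per the text, lambda_2(X; f) = 2*mu and
    lambda_2(Y; f) = 1 - 2*mu; type X likes x, so LC_2(X; f) = [0, 2*mu], and
    type Y likes y, so LC_2(Y; f) = [1 - 2*mu, 1]."""
    return (0.0, 2 * mu), (1 - 2 * mu, 1.0)

def intersects(a, b):
    """Two closed intervals meet iff max of left endpoints <= min of right ones."""
    return max(a[0], b[0]) <= min(a[1], b[1])

for mu in (0.1, 0.2, 0.25, 0.3):
    print(mu, intersects(*lc2_intervals(mu)))
# The intervals meet exactly when 2*mu >= 1 - 2*mu, i.e. mu >= 1/4.
```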

Example 2. Let n = 2, O = {x, y, z}, and T = {X,Y,Z}. Both agents have the same utility function given by u(X, x) = u(Y, y) = u(Z, z) = 1 and u = 0 otherwise. Thus, each agent’s expected utility is the expected probability of the alternative corresponding to his type being chosen. Agents’ beliefs are the posteriors generated from the (common) prior µ given by the matrix

1 \ 2    X     Y     Z
X        0    1/3    0
Y        0     0    1/3
Z       1/3    0     0

We identify each distribution in ∆(O) with the vector of probabilities (px, py, pz). Let α = (4/9, 4/9, 1/9), β = (1/9, 4/9, 4/9), and γ = (4/9, 1/9, 4/9). Consider the SCF f defined by the matrix

1 \ 2    X    Y    Z
X        α    β    γ
Y        α    β    γ
Z        α    β    γ

Note that f is BIC, since agent 1’s type does not affect the outcome at all, and agent 2 gets one of his favorite outcomes (out of α, β, γ) for each of his types. We now check the conditions of Proposition 7, starting with agent 2. We have

λ2(X; f) = α, λ2(Y ; f) = β, and λ2(Z; f) = γ. Thus, at every type, agent 2 has expected utility of 4/9 when he reveals his true type. It follows that the center of the simplex, i.e. the distribution (1/3, 1/3, 1/3), is in the intersection of the lower contour sets for the three types, since for each type it gives agent 2 an expected utility of just 1/3.

Moving to agent 1, we have λ1(X; f) = β, λ1(Y; f) = γ, and λ1(Z; f) = α. Thus, agent 1's expected utility when he truthfully reports his type is only 1/9. Since for any distribution in ∆(O) at least one outcome has probability greater than 1/9, it follows that the intersection of the three lower contour sets for agent 1 is empty. However, notice that for every pair of types of agent 1 the lower contour sets for these types intersect.

For example, the distribution (0, 0, 1) is in both LC1(X; f) and LC1(Y; f). We conclude that the necessary condition of Proposition 7 is satisfied in this example, but the sufficient condition is not. We therefore cannot rely on this proposition to determine whether f is symmetrically implementable, and instead use the characterization in Theorem 1 directly to show that the answer is in fact positive. Indeed, we can define f21(t) = (1/3, 1/3, 1/3) for every t as the 'punishment' SCF for agent 2 when he pretends to be agent 1. For agent 1, the punishment SCF f12 can be, for example,

1 \ 2    X    Y    Z
X        x    y    x
Y        y    y    z
Z        x    z    z

It is straightforward to verify that these f21 and f12 satisfy the conditions of Theorem 1. Finally, it is worth noting that the example can be modified so that the prior has full support. All the relevant inequalities in the example are strict, so for any prior sufficiently close to µ none of the arguments will be affected.
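The verification claimed in Example 2 (BIC of f, and the domination of f21 and f12 by f) can be carried out mechanically. A sketch in Python, with exact rational arithmetic; the encodings (`post1`, `post2`, `eu`, etc.) are our own illustrative choices:

```python
from fractions import Fraction as F

# Primitives of Example 2.
types = ('X', 'Y', 'Z')
alts = ('x', 'y', 'z')
like = dict(zip(types, alts))                    # type X likes x, and so on
alpha = {'x': F(4, 9), 'y': F(4, 9), 'z': F(1, 9)}
beta  = {'x': F(1, 9), 'y': F(4, 9), 'z': F(4, 9)}
gamma = {'x': F(4, 9), 'y': F(1, 9), 'z': F(4, 9)}
f = lambda t1, t2: {'X': alpha, 'Y': beta, 'Z': gamma}[t2]   # f depends only on t2
# Posteriors from the prior matrix: each type puts probability 1 on one opponent type.
post1 = {'X': 'Y', 'Y': 'Z', 'Z': 'X'}          # agent 1's belief about t2
post2 = {'X': 'Z', 'Y': 'X', 'Z': 'Y'}          # agent 2's belief about t1

def eu(lottery, ti):
    """Expected utility of type ti: the probability of his favorite alternative."""
    return lottery[like[ti]]

# BIC of f: agent 1's report is irrelevant; agent 2's truthful report is optimal.
for t2 in types:
    t1 = post2[t2]
    assert all(eu(f(t1, t2), t2) >= eu(f(t1, r), t2) for r in types)

# f dominates f21 (the uniform lottery) for agent 2: truthful utility 4/9 >= 1/3,
# and f21 is constant, so deviations cannot help.
uniform = {a: F(1, 3) for a in alts}
for t2 in types:
    assert eu(f(post2[t2], t2), t2) >= eu(uniform, t2)

# f dominates f12 for agent 1: truthful utility 1/9, while every deviation yields 0.
f12 = {('X', 'X'): 'x', ('X', 'Y'): 'y', ('X', 'Z'): 'x',
       ('Y', 'X'): 'y', ('Y', 'Y'): 'y', ('Y', 'Z'): 'z',
       ('Z', 'X'): 'x', ('Z', 'Y'): 'z', ('Z', 'Z'): 'z'}
for t1 in types:
    t2 = post1[t1]
    truth = eu(f(t1, t2), t1)
    assert all(truth >= (1 if f12[(s, t2)] == like[t1] else 0) for s in types)
print("all conditions of Theorem 1 verified for Example 2")
```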

Example 3. The purpose of this example is to show that the necessary condition in Proposition 7 (part 2) is no longer necessary when the number of agents is greater than two. Let n = 3, O = {x, y}, and T = {X,Y }. All agents have the same (private values) utility function given by u(X, x) = u(Y, y) = 1 and u(X, y) = u(Y, x) = 0. The beliefs of agent 3 over the types of agents 1 and 2 conditional on his own type are given by

t3 = X:                      t3 = Y:
1 \ 2    X     Y             1 \ 2    X     Y
X        0    1/3            X       1/2   1/6
Y       1/6   1/2            Y       1/3    0

Notice that, whatever his type is, agent 3 assigns probability 1/3 to the event that agent 1 has the same type as his own, and probability only 1/6 to the event that agent 2 has the same type as his own. The beliefs of agent 2 over the types of agents 1 and 3 are similar to those above, with the roles of 2 and 3 switched. The beliefs of agent 1 are arbitrary.

Consider the SCF f in which agent 1 is the dictator. Then when 3 is of type X he believes that x will be chosen with probability 1/3, and when he is of type Y he believes that x will be chosen with probability 2/3. Thus, LC3(X; f) ∩ LC3(Y; f) = ∅ and the necessary condition from Proposition 7 is not satisfied.

Nevertheless, f can be symmetrically implemented by the 'names mechanism' of Theorem 1. Since agent 1 is the dictator it is easy to incentivize him not to pretend to be one of the other agents. Consider now agent 3. We can take f32 to be f itself, so 3 would not want to pretend to be agent 2. Define f31 to be the SCF in which agent 2 (rather than 1) is the dictator. Since agent 3 believes that he is more likely to have the same preferences as agent 1 than as agent 2, he strictly prefers agent 1 to be the dictator (as prescribed by f) to agent 2 being the dictator (as prescribed by f31). It follows that f31 is dominated by f for agent 3, and obviously f31 is 31-symmetric. Preventing agent 2 from misreporting his name can be achieved in a similar way.
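The belief computations behind Example 3 are easy to verify numerically. A short sketch, encoding agent 3's beliefs from the matrices above (all identifiers are ours; since neither dictatorship depends on agent 3's report, the domination condition reduces to the interim comparison computed here):

```python
from fractions import Fraction as F

# Agent 3's beliefs over (t1, t2), conditional on his own type.
belief3 = {
    'X': {('X', 'X'): F(0), ('X', 'Y'): F(1, 3),
          ('Y', 'X'): F(1, 6), ('Y', 'Y'): F(1, 2)},
    'Y': {('X', 'X'): F(1, 2), ('X', 'Y'): F(1, 6),
          ('Y', 'X'): F(1, 3), ('Y', 'Y'): F(0)},
}

def prob_match(t3, dictator):
    """Probability (from agent 3's viewpoint) that the dictator shares his type."""
    idx = 0 if dictator == 1 else 1
    return sum(p for (t12, p) in belief3[t3].items() if t12[idx] == t3)

for t3 in ('X', 'Y'):
    # Under f (agent 1 dictates), agent 3's interim utility is P(t1 = t3);
    # under f31 (agent 2 dictates) it is only P(t2 = t3).
    print(t3, prob_match(t3, dictator=1), prob_match(t3, dictator=2))
# Both types give 1/3 versus 1/6, so f dominates f31 for agent 3.
```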

An important corollary of Proposition 7 is that, in voting environments with private values and independent types, symmetry constraints imposed on the mechanism never bind.

Corollary 1. In a voting environment with private values and independent types a SCF f is symmetrically implementable in BNE if and only if f is BIC.

Proof. We prove just the 'if' direction, as the 'only if' direction is clear. Let f be some BIC SCF, and fix an agent i. Since i's belief does not depend on his type, we simply write µi(t−i) instead of µi(t−i|ti). Fix some type ti* ∈ T and denote λ* = λi(ti*; f). We claim that λ* ∈ ∩_{ti∈T} LCi(ti; f), and hence that this intersection is non-empty. Indeed, for every ti ∈ T,

ui(ti, λ*) = Σ_{t−i} µi(t−i) ui(ti, f(ti*, t−i)) ≤ Σ_{t−i} µi(t−i) ui(ti, f(ti, t−i)) = ui(ti, λi(ti; f)),

where the first equality is by the definition of λ*, the inequality follows from the assumption that f is BIC, and the last equality is by the definition of λi(ti; f).

We point out that both assumptions, private values and independent types, are needed for the result. The fact that not every BIC SCF can be symmetrically implemented in voting environments with private values (but correlated types) can be seen in Example 1 above. In Appendix B we provide an example of an environment with independent types but interdependent values, and a BIC SCF in this environment that cannot be symmetrically implemented in BNE.

5.2 Common value voting

We now show how the example of subsection 2.2 generalizes to arbitrary common-value voting environments.

Definition 10. A common value voting environment is a voting environment in which ui(t, o) = uj(t, o) for every i, j ∈ [n], every o ∈ O, and every t ∈ T^n. We denote by u the common utility function of the agents.

Proposition 8. If a SCF f in a common value voting environment satisfies u(t, f(t)) ≥ u(t, o) for every o ∈ O and every t ∈ T^n (i.e., f is utilitarian), then f is symmetrically implementable in BNE.

Proof. Since f is utilitarian, it follows that u(t, f(t)) ≥ u(t, f(ti′, t−i)) for every t, every i, and every ti′. Thus, truth-telling is an ex-post equilibrium under f, so in particular f is BIC. Choose an arbitrary outcome o* ∈ O and define fij(t) = o* for every i, j ∈ [n] and every t ∈ T^n. Then clearly each fij is symmetric, and since f is utilitarian we have that u(t, f(t)) ≥ u(t, o*) = u(t, fij(si, t−i)) for every t and si. It follows that f dominates fij for agent i, so by Theorem 1 f is symmetrically implementable in BNE.

6 Symmetric implementation in dominant strategies

In this section we demonstrate that the possibility of using indirect mechanisms in order to symmetrically implement asymmetric SCFs is very limited when one requires implementation in dominant strategies rather than BNE. We focus here on environments with private values, since dominant strategy mechanisms are typically studied in such environments. Thus, the utility function of each agent i depends only on his own type, and we view ui as a mapping ui : T × O × P → R. In addition, to make our point particularly clear, we assume that the labeling of types is the same for all agents, in the sense that ui(t, ·, ·) ≡ uj(t, ·, ·) for any type t ∈ T and every i, j ∈ [n]. This assumption is satisfied in the voting examples considered in the previous section, as well as in the assignment model of subsection 2.3.

Given a mechanism (M, g), a strategy σi of player i is (weakly) dominant if for every ti ∈ T and every mi ∈ M

ui(ti, g(σi(ti), m−i)_{O,i}) ≥ ui(ti, g(mi, m−i)_{O,i})

holds for every m−i ∈ M^{n−1}, with a strict inequality for at least one m−i. A strategy profile σ is a Dominant Strategy Equilibrium (DSE) if σi is a dominant strategy for all i ∈ [n].

Definition 11. Let f be a SCF. A mechanism (M, g) implements f in DSE if there is a DSE σ of (M, g) such that f(t) = g(σ(t)) for every t ∈ T n. We say that f is symmetrically implementable in DSE if there is a symmetric mechanism (M, g) that implements f in DSE.

Theorem 2. Under the assumptions of this section, if a SCF f is symmetrically implementable in DSE then the direct mechanism associated with f is symmetric.

In words, under the assumptions of this section, the use of indirect mechanisms does not increase the class of SCFs that can be symmetrically implemented in dominant strategies. Only SCFs that inherently treat the agents symmetrically can be implemented. This stands in stark contrast to the situation with BNE implementation, where indirect mechanisms can often be used to create artificial symmetry, as we saw in the previous sections.

Proof. Suppose that f is symmetrically implementable in DSE. Let (M, g) be a symmetric mechanism that together with the strategy profile σ implements f in DSE. We will now show that it must be the case that σ1 = · · · = σn, i.e., all players use the same strategy. Notice that this will complete the proof, since it implies (together with the symmetry of g) that for any permutation π of the agents, any t ∈ T^n, and any (o, p) ∈ D

f(t)(o, p) = g(σ(t))(o, p) = g(π(σ(t)))(o, πp) = g(σ(πt))(o, πp) = f(πt)(o, πp).

To show that all players use the same strategy, fix i, j ∈ [n] and a type t∗ ∈ T. Denote m∗ = σi(t∗). We need to show that σj(t∗) = m∗. Fix an arbitrary message m ∈ M and a profile of messages m−j for the players other than j. Let m̄∗ = (m∗, m−j) and m̄ = (m, m−j). Notice that in the profile m̄∗^{ij} agent i plays m∗, while in the profile m̄^{ij} agent i plays m; all agents except i send the same messages in these two profiles. Thus, we get that

uj(t∗, g(m̄∗)_{O,j}) = ui(t∗, g(m̄∗^{ij})_{O,i}) ≥ ui(t∗, g(m̄^{ij})_{O,i}) = uj(t∗, g(m̄)_{O,j}),

where the two equalities follow from the symmetry of g and our assumption on the labeling of types, and the inequality follows from the assumption that m∗ = σi(t∗) is a dominant message for i of type t∗. It follows that m∗ is a best response for agent j of type t∗ against any profile m−j. Since there cannot be two distinct weakly dominant strategies, this implies that σj(t∗) = m∗ as needed.

7 Final remarks

We have analyzed the extent to which a symmetry constraint limits the set of SCFs that a designer can implement. The main takeaway from the analysis is that symmetry by itself is typically not very restrictive if one considers implementation in Bayes–Nash equilibrium. In particular, symmetry does not prevent the implementation of highly biased SCFs. This apparent contradiction between the symmetry of the mechanism and the unfair outcomes it generates may seem surprising at first glance, but notice that the bias is generated by the asymmetry in the equilibrium strategies of the players and not by the mechanism itself. When considering implementation in dominant strategies, the symmetry of the mechanism implies that all agents use the same strategy, and hence that the resulting SCF is symmetric.

In some cases the agents participating in the mechanism have inherently different roles, such as buyers and sellers, or firms and workers. In such situations it seems more suitable to define symmetry only within each group of agents having the same role and not across all agents. While we do not pursue this direction here, we believe that such an extension would not create significant complications.

In our formal analysis we make two assumptions that are more restrictive than typically assumed in the mechanism design literature. The first is that all players have the same set of types T. Generalizing Theorem 1 to the case in which each player has a potentially different set of types is straightforward, but would complicate the notation.

Essentially, the difference would be that the domain of the punishment scheme fij changes to (∏_{l≠i} Tl) × Tj instead of being ∏_{l∈[n]} Tl, i.e., fij would no longer be a SCF. The second restriction we make is that we only consider BNE in pure strategies. Theorem 1 remains true without any change even if mixed strategies are allowed.^27 Indeed, the sufficiency part of the proof needs no modification, while in the necessity part the function fij should be defined as the expectation of g(m), where m is distributed according to the (possibly mixed) strategy profile σ′. We choose to focus on pure strategies in the main text in order to avoid further complicating the exposition and notation.

We would like to emphasize again that Theorem 1 concerns partial rather than full implementation. Consider again the example of subsection 2.1: if the agents switch their strategies, that is, agent 1 plays σ1(X) = m3 and σ1(Y) = m4 and agent 2 plays σ2(X) = m1 and σ2(Y) = m2, then we get an equilibrium that implements the SCF in which agent 2 is the dictator instead of agent 1! This phenomenon is not unique to this particular example; a similar multiplicity of equilibria in the 'names mechanism' arises whenever the underlying environment is symmetric. We leave a detailed study of full implementation under symmetry for future research.^28

An anonymous referee suggested the following question: what if, as in Deb and Pai (2017), the notion of implementation were weakened to require only that the interim distribution over outcomes for each type of each agent under the mechanism is the same as under the SCF? In environments with private values (as in Deb and Pai

^27 Implementing a SCF f in BNE with mixed strategies σ is defined by the requirement that at every message profile m in the support of σ we have f(t) = g(m).
^28 The recent paper Korpela (2016) studies symmetric Nash implementation in a complete information environment. The definition of symmetry used in that paper is different from our definition.

(2017)) such a modification would not change Theorem 1. Indeed, if a mechanism (M, g) implements f in this sense then one can construct the functions fij in the same way as in the current proof, and the argument goes through. In environments with interdependent values the resulting fij need not be dominated by f for i; but in such environments agents' preferences are not determined by the distribution over outcomes alone, so this weaker notion of implementation appears less appropriate.

Finally, given our results, an important question is what other regulations, either in isolation or combined with symmetry, are more effective in preventing implementation of discriminatory SCFs.^29 Our analysis suggests that limiting the message space of the mechanism, either by imposing bounds on its cardinality or by prohibiting certain kinds of labeling of messages, may help in achieving this goal. Another possibility is to restrict the set of feasible outcomes (D in our notation) so that it is more difficult for the designer to punish agents who deviate from their designated behavior. Exploring these interesting directions is beyond the scope of this paper.

29 In an auction setting similar to that of Deb and Pai (2017), Mass et al. (2017) study mechanisms that satisfy a condition they name ‘imitation perfection’. They show that combining symmetry with imitation perfection imposes non-trivial restrictions on the class of implementable outcomes.

Chapter 3: Rationalizable Implementation of Social Choice Correspondences

1 Introduction

The goal of implementation theory is to design a game form such that, for every preference profile, all the “equilibrium outcomes” of the game coincide with those recommended by the social choice rule. A central ingredient in this exercise is the choice of the solution concept. An extensive literature, starting from Maskin (1999), assumes Nash equilibrium as the solution concept. As is well known, Nash equilibrium relies on the ability of the agents to correctly predict the strategy choices of their opponents. This can be problematic from the point of view of implementation theory. To see why, consider the following example:

Example 4. Let X = {a, b, c} be the set of alternatives, N = {1, 2} the set of agents, and {θ, θ′} the set of states. The preference profile for each state is given in the table below; numbers in parentheses specify the vNM utilities.

Table 1: Preferences in Example 4

            θ                   θ′
      1         2         1         2
    a(3)      b(3)      a(3)      a(3)
    b(2)      a(2)      b(2)      c(2)
    c(1)      c(1)      c(1)      b(1)

Suppose the planner wants to select all the Pareto optimal alternatives in X. This objective can be represented by the following social choice correspondence: F(θ) = {a, b} and F(θ′) = {a}.

Let us consider the following game form, which implements the SCC F in Nash equilibrium:

            h        l
    H       a        c
    L       a        b

When the state of the world is θ, we have a game of complete information among the agents. The Nash equilibria of this game are (H, h) and (L, l), with a and b as the equilibrium outcomes. This is consistent with the outcomes prescribed by F at θ. However, suppose the agents fail to coordinate on which equilibrium to play. This coordination failure may lead to the strategy profile (H, l), with the undesirable outcome c.
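The equilibrium analysis of this 2×2 game can be checked mechanically. The following sketch (with the outcomes and utilities of Example 4 at state θ hard-coded) enumerates the pure-strategy Nash equilibria by testing unilateral deviations:

```python
from itertools import product

# Outcomes and vNM utilities of Example 4 at state theta.
outcome = {("H", "h"): "a", ("H", "l"): "c", ("L", "h"): "a", ("L", "l"): "b"}
u = {1: {"a": 3, "b": 2, "c": 1}, 2: {"b": 3, "a": 2, "c": 1}}

def pure_nash(rows=("H", "L"), cols=("h", "l")):
    # A profile is a Nash equilibrium if no player gains by a unilateral deviation.
    eq = set()
    for r, c in product(rows, cols):
        best_r = all(u[1][outcome[(r, c)]] >= u[1][outcome[(r2, c)]] for r2 in rows)
        best_c = all(u[2][outcome[(r, c)]] >= u[2][outcome[(r, c2)]] for c2 in cols)
        if best_r and best_c:
            eq.add((r, c))
    return eq

print(sorted(pure_nash()))  # [('H', 'h'), ('L', 'l')]
```

Both equilibria survive, while the miscoordinated profile (H, l) yields the outcome c outside the equilibrium set, exactly as described above.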

To avoid the potential for miscoordination, we assume (correlated) rationalizability as our solution concept (Brandenburger and Dekel (1987)). A mechanism specifies a list of messages, one for each agent, and an outcome function. For each message profile, the outcome function specifies a lottery over alternatives. We say a social choice rule is rationalizably implementable if there exists a mechanism such that for every state of the world θ, (i) for every a ∈ F(θ) there exists a rationalizable strategy profile m such that a is selected with strictly positive probability under m; and (ii) the support of the outcome of any rationalizable strategy profile is contained in F(θ). The goal of this paper is to study which social choice correspondences are rationalizably implementable in the above sense. We provide a condition, which we call r-monotonicity, and prove that it is necessary for rationalizable implementation of an SCC (Proposition 9). We show by example (see Example 6 and Remark 3) that r-monotonicity is strictly weaker than Maskin monotonicity, a condition central to the theory of Nash implementation. For social choice functions, r-monotonicity coincides with Maskin monotonicity. In general, r-monotonicity is closely related to the ‘set monotonicity’ condition of Mezzetti and Renou (2012) and the ‘extended monotonicity with respect to support F’ condition of Bochet and Maniquet (2010).

Under two additional conditions, no worst alternative (NWA) and ΘF-distinguishability, we show that r-monotonicity is also sufficient for rationalizable implementation. The no worst alternative (NWA) condition requires that for every state θ, no full-support lottery over F(θ) is worst for some agent. Assuming strict preferences, the NWA property places no restrictions at states where the SCC selects more than one alternative. ΘF-distinguishability, a monotonicity-type condition, applies only to states where the SCC is single valued. In particular, if we consider an SCC F which always selects more than one alternative, then the conditions NWA and ΘF-distinguishability place no restrictions. Our main theorem (Theorem 3) states that, for a society with at least 3 agents, if an SCC satisfies NWA, ΘF-distinguishability and r-monotonicity, then there exists a mechanism which rationalizably implements it.30 An important corollary (Corollary 2) of Theorem 3 is that an SCC which always selects at least two alternatives is rationalizably implementable if and only if it satisfies r-monotonicity. The current work is motivated by a recent paper of Bergemann et al. (2011), who study the problem of rationalizable implementation of social choice functions.31 We argue that restricting attention to social choice functions is limiting for several reasons. First, there are important social choice correspondences within economics, such as the Pareto correspondence and the Walrasian correspondence, which have received considerable attention in the theory of implementation. Second, as shown by Bergemann et al. (2011), a stronger version of Maskin monotonicity is necessary for the rationalizable implementation of a social choice function.32 However, Maskin monotonicity itself can sometimes be very restrictive for social choice functions.33 In contrast, Maskin monotonicity is less

30 To prove Theorem 3 we construct a canonical mechanism.
31 The main point where Bergemann et al. (2011) departs from the earlier literature is studying rationalizable implementation (i) without any domain restriction or the use of transfers and (ii) allowing mechanisms which use integer games. Recently, Oury and Tercieux (2012) extend Bergemann et al. (2011) to environments with incomplete information.
32 They show that a stronger version of Maskin monotonicity, strict Maskin monotonicity∗, is necessary for rationalizable implementation. In Appendix B, we present an example of an SCF which violates strict Maskin monotonicity∗ but satisfies Maskin monotonicity.
33 See Muller and Satterthwaite (1977) and Saijo (1987). For example, Saijo (1987) shows that if

restrictive for SCC’s. This makes the study of SCC’s quite different from that of SCF’s. The results of this paper stand in sharp contrast to the case of social choice functions in two respects. First, r-monotonicity is strictly weaker than Maskin monotonicity. Second, the “responsiveness” type of condition, which appears as a sufficient condition in Bergemann et al. (2011), is no longer required at states of the world where an SCC selects more than one alternative. This point can be seen clearly in Corollary 2, described above. This paper therefore complements Bergemann et al. (2011) by extending their analysis to the important case of social choice correspondences.34 In a closely related paper, Kunimoto and Serrano (2016) also study the case of correspondences. The key distinction between their work and the current work lies in the notion of implementation. While our notion of implementation is based on Mezzetti and Renou (2012), the notion used in Kunimoto and Serrano (2016) is based on Maskin (1999). Specifically, they say that a social choice correspondence is rationalizably implementable if there exists a mechanism such that for every state of the world θ, (i) for every a ∈ F(θ) there exists a rationalizable strategy profile m such that g(m) = a; and (ii) the outcome at any rationalizable strategy profile belongs to F(θ). Starting from different notions of implementation, we both independently discover essentially the same necessary condition. There are, however, important differences in the sufficiency parts of our results. First, the NWA condition we require is weaker than the one required in Kunimoto and Serrano (2016). Second, Kunimoto and Serrano (2016) do not require the ΘF-distinguishability condition that we require. Finally, the canonical mechanisms constructed to prove sufficiency differ across the two papers. We therefore regard the two papers as complementary.
the domain of an SCF includes the complete indifference profile, then any Maskin monotonic SCF is constant.
34 Unlike Nash implementation, rationalizable implementation of SCC’s is not a trivial extension of that of SCF’s.

For social choice functions, rationalizable implementation has also been studied in a variety of settings. Abreu and Matsushima (1992) study virtual implementation as opposed to exact implementation. They show that in a complete information setting, any social choice function is virtually implementable in iterated elimination of strictly dominated strategies. Abreu and Matsushima (1994) strengthen virtual implementation to exact implementation but assume iterated elimination of weakly dominated strategies as the solution concept. By allowing small transfers, they show that any SCF can be exactly implemented in iterated elimination of weakly dominated strategies. In economic environments, Sjöström (1994) studies a production economy and shows that any SCF can be exactly implemented in iterated elimination of weakly dominated strategies. The rest of the paper is organized as follows. Section 2 presents the notation and definitions. In Section 3, we define r-monotonicity and prove that it is a necessary condition for rationalizable implementation (Proposition 9). In Section 4, we define two additional conditions. We state and prove our main theorem (Theorem 3) in Section 5. As an application of Theorem 3, in Section 6 we show that in a society with more than 3 agents the Pareto correspondence can be rationalizably implemented. In Section 7, we study rationalizable implementation when a partially honest agent is present in the society. A partially honest agent strictly prefers to tell the truth whenever he is indifferent between lying and telling the truth. Finally, in Section 8 we discuss stronger notions of implementation.

2 Notation and Definitions

There is a set N = {1, 2, . . . , n} of n ≥ 2 agents and a finite set of alternatives X = {a, b, . . . , w}. Let ∆(X) denote the set of all lotteries over X with a generic element

α. For α ∈ ∆(X), αx denotes the probability assigned to alternative x, and supp(α) denotes the set of alternatives which receive positive probability in α: supp(α) = {x ∈ X s.t. αx > 0}. We assume that there is a finite set of states of the world, which we denote by Θ. We assume that the state of the world is not known to the planner but is common knowledge among the agents.

Thus we are in the setting of complete information, as in e.g. Maskin (1999). A state θ ∈ Θ specifies the preferences of the players: for every agent i it specifies a vNM utility function ui : X × Θ → R. We further assume that agents are expected utility maximizers, which allows us to extend the vNM utility function over X to ∆(X). With some abuse of notation we write ui(α, θ) = Σx αx ui(x, θ). For agent i, denote by Li(α, θ) = {α′ ∈ ∆(X) s.t. ui(α, θ) ≥ ui(α′, θ)} and SLi(α, θ) = {α′ ∈ ∆(X) s.t. ui(α, θ) > ui(α′, θ)} the lower contour set and the strict lower contour set of lottery α at state θ, respectively. For any set X′ ⊆ X, let U[X′] ∈ ∆(X′) denote the uniform lottery over the set X′. Furthermore, we assume that agents have strict preferences over pure outcomes: for any two distinct pure alternatives x and y, ui(x, θ) ≠ ui(y, θ) for every i ∈ N and every θ ∈ Θ. We discuss this assumption in Section 9.

A mechanism is a game form Γ = (M1, . . . , Mn; g), where Mi is a countable strategy set for agent i and g : M → ∆(X) is the outcome function. For every strategy profile, the outcome function specifies a lottery over X. Given a state of the world θ, a mechanism Γ induces a game of complete information among the agents. In this paper we assume (correlated) rationalizability as our solution concept, in the sense of Brandenburger and Dekel (1987) (see also Chen et al. (2016), Bernheim (1984) and Pearce (1984)). Given a mechanism Γ and a state of the world

θ, we denote by Ri(Γ, θ) the set of rationalizable strategies for agent i, and by R(Γ, θ) = R1(Γ, θ) × R2(Γ, θ) × · · · × Rn(Γ, θ) the set of rationalizable strategy profiles. R(Γ, θ) can be described as the outcome of an iterative procedure in which each round eliminates strategies that are never best responses.

Let us now formally define the iterative process. Fix an agent i, and let Ri^0(Γ, θ) = Mi and R−i^0(Γ, θ) = ×j≠i Rj^0(Γ, θ) = ×j≠i Mj. Then

Ri^1(Γ, θ) = { mi ∈ Ri^0(Γ, θ) : ∃ λi ∈ ∆(R−i^0(Γ, θ)) s.t. mi ∈ argmax_{mi′ ∈ Mi} Σ_{m−i ∈ R−i^0(Γ, θ)} λi(m−i) ui(g(mi′, m−i), θ) }.

For any k ≥ 1 and every i ∈ N we can recursively define Ri^k(Γ, θ) as follows:

Ri^k(Γ, θ) = { mi ∈ Ri^{k−1}(Γ, θ) : ∃ λi ∈ ∆(R−i^{k−1}(Γ, θ)) s.t. mi ∈ argmax_{mi′ ∈ Mi} Σ_{m−i ∈ R−i^{k−1}(Γ, θ)} λi(m−i) ui(g(mi′, m−i), θ) }.

Given the mechanism Γ and a state of the world θ, the set of rationalizable strategies for player i is the limit of the iterative process defined above: Ri(Γ, θ) = ∩_{k≥1} Ri^k(Γ, θ).35 We denote the set of rationalizable strategy profiles by R(Γ, θ) = ×_{i∈N} Ri(Γ, θ). Finally, we say that in a mechanism Γ a set of strategy profiles T(θ) ⊆ M satisfies the best response property at state θ if for every i ∈ N and every mi ∈ Ti(θ) there exists a belief λi(mi) ∈ ∆(T−i(θ)) such that mi is a best response to λi(mi). It is easy to see that if a set T(θ) satisfies the best response property at state θ, then T(θ) ⊆ R(Γ, θ). The goal of the social planner is summarized by a social choice correspondence F : Θ → 2^X \ {∅}. A special case of an SCC is a social choice function, a mapping from the set of states to the set of pure alternatives, f : Θ → X. To achieve these goals, the planner needs information about the true state of the world. The problem of the planner is to design a mechanism Γ such that, when players use rationalizable strategies, every alternative in F(θ) gets a positive chance of being selected and any alternative outside of F(θ) receives no chance of being selected.36 The following
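For finite games, Brandenburger and Dekel (1987) show that correlated rationalizability coincides with iterated deletion of strictly dominated strategies. The sketch below implements a simplified version of the iterative procedure that only tests domination by pure strategies (so in general it may retain too many strategies); the names and the encoding of games are ours, for illustration only:

```python
from itertools import product

def iterated_elimination(players, strategies, util):
    # Iteratively delete strategies that are strictly dominated by another
    # PURE strategy.  This is a simplification: correlated rationalizability
    # corresponds to deleting strategies dominated by possibly MIXED
    # strategies, so this sketch can keep too many strategies in general.
    surviving = {i: list(strategies[i]) for i in players}
    changed = True
    while changed:
        changed = False
        for i in players:
            others = [j for j in players if j != i]
            profiles = [dict(zip(others, p))
                        for p in product(*(surviving[j] for j in others))]
            keep = [si for si in surviving[i]
                    if not any(all(util(i, ti, prof) > util(i, si, prof)
                                   for prof in profiles)
                               for ti in surviving[i] if ti != si)]
            if len(keep) < len(surviving[i]):
                surviving[i], changed = keep, True
    return surviving

# Example 4 at state theta: no strategy is ever dominated, so every strategy
# survives -- including the miscoordinated profile (H, l) that yields c.
outcomes4 = {("H", "h"): "a", ("H", "l"): "c", ("L", "h"): "a", ("L", "l"): "b"}
u4 = {1: {"a": 3, "b": 2, "c": 1}, 2: {"b": 3, "a": 2, "c": 1}}

def util4(i, si, prof):
    key = (si, prof[2]) if i == 1 else (prof[1], si)
    return u4[i][outcomes4[key]]

print(iterated_elimination([1, 2], {1: ["H", "L"], 2: ["h", "l"]}, util4))
```

In Example 4 nothing is eliminated, illustrating why rationalizable implementation must control the outcomes at all surviving profiles, not only at equilibria.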

35 If the game is infinite, transfinite induction is required to reach the fixed point of the iteration (see Lipman (1994) for a more formal treatment).
36 In Section 8 we discuss stronger notions of implementation, demanding that every alternative in F(θ) gets at least an ε chance of being selected and any alternative outside of F(θ) receives no chance of being selected.

definition captures this notion.

Definition 12. We say a mechanism Γ = (M; g) rationalizably implements an SCC F if for every θ ∈ Θ,

1. For every a ∈ F (θ), there exists m ∈ R(Γ, θ) s.t. a ∈ supp (g(m)).

2. For every m′ ∈ R(Γ, θ), supp(g(m′)) ⊆ F(θ).

3 Necessary condition

In this section, we present a necessary condition for an SCC to be rationalizably implementable. In a related model, using an implementability notion based on Maskin (1999), Kunimoto and Serrano (2016), in a working paper dated May 2016, independently prove a necessity result that is close to my result in this section. Furthermore, in that draft they come close to closing the gap between necessity and sufficiency in their model. An earlier draft of my paper, dated June 2016, contained that necessity result, together with a sufficiency result that was far from closing the gap between the necessary and sufficient conditions. The current draft provides a sufficiency result that comes closer to closing that gap in my framework.

Definition 13. For any pair (θ, θ′), we say that a lottery α ∈ ∆(X) maintains position if Li(α, θ) ⊆ Li(α, θ′) for every i ∈ N.

In words, while moving from state θ to θ′, a lottery α maintains position if it does not fall in any agent’s preference ordering relative to any other lottery. Equivalently, while moving from state θ to θ′, a lottery α does not maintain position if there exists an agent i and a lottery β such that β ∈ Li(α, θ) and β ∉ Li(α, θ′).

Definition 14. We say that an SCC F is r-monotonic if, for any pair (θ, θ′) ∈ Θ × Θ, whenever every lottery α ∈ ∆(F(θ)) maintains position, F(θ) ⊆ F(θ′).

r-monotonicity requires that, while moving from state θ to θ′, if every lottery over F(θ) maintains position, then the alternatives that are socially optimal in state θ as prescribed by F remain socially optimal in state θ′, i.e. F(θ) ⊆ F(θ′). To illustrate r-monotonicity, we give below an example of an SCC F which violates it. We then study its relation to Maskin monotonicity (Maskin (1999)).

Example 5. Let X = {a, b, c} and let N = {1, 2} be the set of agents. The preference profile for each state is given in the table below; numbers in parentheses specify the vNM utilities. Suppose the planner wants to implement the social choice correspondence F(θ) = {b, c} and F(θ′) = {a, b}.

Table 2: Preferences in Example 5

            θ                   θ′
      1         2         1         2
    a(3)      b(3)      b(3)      b(3)
    b(2)      a(2)      c(2)      a(2)
    c(1)      c(1)      a(1)      c(1)

We claim that for the pair (θ, θ′) every lottery α ∈ ∆(F(θ)) = ∆({b, c}) maintains position. To see this, notice that for any α ∈ ∆(X), L2(α, θ) = L2(α, θ′), and for any α ∈ ∆({b, c}), L1(α, θ) ⊆ L1(α, θ′). Therefore for the pair (θ, θ′) the hypothesis of r-monotonicity is satisfied. However, F(θ) ⊄ F(θ′). Thus F violates r-monotonicity.

Definition 15. We say that an SCC F is Maskin monotonic if for any pair (θ, θ′) ∈ Θ × Θ, if a ∈ F(θ) maintains position, then a ∈ F(θ′).

Maskin monotonicity states that, when moving from state θ to θ′, if a socially optimal alternative maintains position, then it must remain socially optimal at state θ′.

Remark. If an SCC satisfies Maskin monotonicity then it satisfies r-monotonicity.

Proof. Consider a Maskin monotonic SCC F and assume that for the pair (θ, θ′) every lottery α ∈ ∆(F(θ)) maintains position. In particular, every degenerate lottery in ∆(F(θ)) maintains position; that is, every alternative a ∈ F(θ) maintains position. Since F satisfies Maskin monotonicity, F(θ) ⊆ F(θ′).

r-monotonicity is strictly weaker than Maskin monotonicity. To see why, consider the following example.

Example 6. Let X = {a, b, c, d} and let N = {1, 2} be the set of agents. The preference profile for each state is given in the table below; numbers in parentheses specify the vNM utilities. Suppose the planner wants to implement the following social choice correspondence: F(θ) = {a, b, c, d} and F(θ′) = {a}.

Table 3: Preferences in Example 6

            θ                   θ′
      1         2         1         2
    a(4)      a(4)      b(4)      a(4)
    b(3)      d(3)      c(3)      d(3)
    c(2)      b(2)      d(2)      b(2)
    d(1)      c(1)      a(1)      c(1)

Consider the pair (θ, θ′). For every i ∈ N, Li(b, θ) ⊆ Li(b, θ′), so b maintains position. Since b ∈ F(θ), Maskin monotonicity would require that b ∈ F(θ′). However b ∉ F(θ′), so F violates Maskin monotonicity at b. A similar argument holds for alternatives c and d. In contrast, F does not violate r-monotonicity; in fact, r-monotonicity is vacuously satisfied. To see this, note that while the alternatives b, c and d maintain position, not every lottery α ∈ ∆(F(θ)) maintains position. In particular, consider the alternative a ∈ F(θ): for agent 1, L1(a, θ) ⊄ L1(a, θ′), so a does not maintain position. Therefore for the pair (θ, θ′), r-monotonicity imposes no restriction on F. For the pair (θ′, θ), F(θ′) ⊆ F(θ), so the SCC F satisfies r-monotonicity.

Proposition 9. If a social choice correspondence F is rationalizably implementable, then it satisfies r-monotonicity.

Proof. Consider a mechanism Γ = ((Mi)i∈N, g) which rationalizably implements the SCC F. Recall that under the mechanism Γ, the set of rationalizable strategies at state θ is denoted by R(Γ, θ) = (R1(Γ, θ), . . . , Rn(Γ, θ)). We assume that the hypothesis of r-monotonicity holds: for the pair (θ, θ′), Li(α, θ) ⊆ Li(α, θ′) for every i and every α ∈ ∆(F(θ)). To prove Proposition 9 we need to show that F(θ) ⊆ F(θ′).

Select an arbitrary agent i. For every rationalizable strategy mi ∈ Ri(Γ, θ) there exists a belief λi(mi, θ) ∈ ∆(R−i(Γ, θ)) such that mi is a best response to λi(mi, θ) in Mi. Select an arbitrary strategy mi ∈ Ri(Γ, θ) with the associated belief λi(mi, θ) ∈ ∆(R−i(Γ, θ)). For any strategy mi′ ∈ Mi we denote by αi(mi, mi′) the lottery the agent gets when he selects strategy mi′ ∈ Mi under the belief λi(mi, θ):37

αi(mi, mi′) = Σ_{m−i ∈ R−i(Γ, θ)} λi(mi, θ)(m−i) g(mi′, m−i)    (10)

Claim 1. For mi ∈ Ri(Γ, θ), supp (αi(mi, mi)) ⊆ F (θ).

Proof. Consider supp(αi(mi, mi)). Denote by R(mi) the set of strategy profiles which can occur when agent i plays mi with the belief λi(mi, θ). Then

supp(αi(mi, mi)) = ∪_{m ∈ R(mi)} supp(g(m)).    (11)

Notice that by construction R(mi) ⊆ R(Γ, θ). Therefore

supp(αi(mi, mi)) = ∪_{m ∈ R(mi)} supp(g(m)) ⊆ ∪_{m ∈ R(Γ, θ)} supp(g(m)).    (12)

Since Γ = ((Mi)i∈N, g) rationalizably implements F, we can conclude that

∪_{m ∈ R(Γ, θ)} supp(g(m)) = F(θ).    (13)

Therefore

supp(αi(mi, mi)) = ∪_{m ∈ R(mi)} supp(g(m)) ⊆ ∪_{m ∈ R(Γ, θ)} supp(g(m)) = F(θ),    (14)

and hence

supp(αi(mi, mi)) ⊆ F(θ).    (15)

37 Notice that for every strategy mi′ ∈ Mi, αi(mi, mi′) is a lottery under the fixed belief λi(mi, θ).

We will use Claim 1 to show that the set R(Γ, θ) has the best response property at state θ′. Since mi ∈ Ri(Γ, θ) is a best response to λi(mi, θ) ∈ ∆(R−i(Γ, θ)), for every mi′ ∈ Mi,

ui(αi(mi, mi), θ) ≥ ui(αi(mi, mi′), θ).    (16)

Thus for every mi′ ∈ Mi, αi(mi, mi′) ∈ Li(αi(mi, mi), θ). By Claim 1, αi(mi, mi) ∈ ∆(F(θ)). The hypothesis of r-monotonicity is assumed to hold: for the pair (θ, θ′), Li(α, θ) ⊆ Li(α, θ′) for every i and every α ∈ ∆(F(θ)). In particular, Li(αi(mi, mi), θ) ⊆ Li(αi(mi, mi), θ′) for every i. Therefore

ui(αi(mi, mi), θ′) ≥ ui(αi(mi, mi′), θ′) for every mi′ ∈ Mi.    (17)

This means that at state θ′, mi is a best response to the belief λi(mi, θ) ∈ ∆(R−i(Γ, θ)). Since i ∈ N and mi ∈ Ri(Γ, θ) were chosen arbitrarily, we conclude that R(Γ, θ) has the best response property at state θ′. Thus R(Γ, θ) ⊆ R(Γ, θ′). To complete the proof we need to show that F(θ) ⊆ F(θ′). Notice that R(Γ, θ) ⊆ R(Γ, θ′) implies

∪_{m ∈ R(Γ, θ)} supp(g(m)) ⊆ ∪_{m ∈ R(Γ, θ′)} supp(g(m)).    (18)

Since Γ rationalizably implements F, we can conclude that ∪_{m ∈ R(Γ, θ)} supp(g(m)) = F(θ) and ∪_{m ∈ R(Γ, θ′)} supp(g(m)) = F(θ′). Together with equation (18), this implies F(θ) ⊆ F(θ′).

4 Sufficient Conditions

In this section we present two additional conditions under which r-monotonicity also becomes sufficient for rationalizable implementation. For any X′ ⊆ X, let ∆o(X′) = {α ∈ ∆(X′) | supp(α) = X′} denote the set of lotteries which assign positive probability to every alternative in X′.

4.1 No Worst Alternative

Definition 16. An SCC F satisfies the no worst alternative (NWA) condition if for every i ∈ N, every θ ∈ Θ, and every α ∈ ∆o(F(θ)) there exists yi(θ) ∈ ∆(X) s.t. ui(α, θ) > ui(yi(θ), θ).

In our setting the NWA property is a natural extension of the NWA property used by Bergemann et al. (2011). Under the assumption of strict preferences, the NWA property places restrictions only at states where the SCC F is single valued.38 In implementation theory, the NWA property appears in various contexts: see, for example, Cabrales and Serrano (2009), where it guarantees full implementation in best response dynamics, and Tumennasan (2013), where it guarantees full implementation in quantal response equilibrium. In many environments this condition is easily satisfied. Below we give some examples.

Example 7. (Environments with money). Consider an environment where the mechanism designer is allowed to use arbitrarily small transfers. This allows the mechanism designer to select alternatives from an extended outcome space ∆(X) × Rn. In this setting a typical outcome is an (n + 1)-tuple (α, t1, . . . , tn), where α ∈ ∆(X) and (t1, . . . , tn) ∈ Rn s.t. Σ_{i∈N} ti = 0. Furthermore, for every agent and every θ, ui(α, ti; θ) is strictly decreasing in ti. In this setting, every SCC F satisfies NWA: for every α ∈ ∆(F(θ)) we can define yi(θ) to be the outcome (α, t) in which agent i pays a small amount ε > 0, so that ui(α, θ) > ui(yi(θ), θ).

38 Let θ be such that |F(θ)| ≥ 2. For every agent i, let yi(θ) = argmin_{y ∈ F(θ)} ui(y, θ).

Example 8. (Environments with time). Our second example is an environment where the mechanism designer is allowed to deliver the outcome with a delay; such environments are studied in Artemov (2015). This allows the mechanism designer to select alternatives from an extended outcome space ∆(X) × R+. In this setting a typical outcome is a pair (α, t), where α ∈ ∆(X) and t ∈ R+. Furthermore, every agent prefers early delivery of the outcome to late delivery: formally, ui(α, t; θ) is strictly decreasing in t. As in the example above, for every α ∈ ∆(F(θ)) we can define yi(θ) = (α, ε) for some delay ε > 0, so that ui(α, θ) > ui((α, ε), θ). In this setting, therefore, every SCC F satisfies NWA. Notice that the distinction between environments with time and environments with transfers is one between public outcomes and private outcomes.

The NWA property guarantees the existence of a set of allocations {yi(θ)}i,θ. A typical allocation yi(θ) may depend both on the agent and on the state. In the construction of our canonical mechanism we require the existence of two further kinds of allocations: first, a state-independent allocation, which we call ȳi; second, a state- and agent-independent allocation, which we call ȳ. Furthermore, we require that at any state θ these allocations are not best for any agent. The NWA property guarantees the existence of such allocations. To see this, consider the following average allocations:

ȳi = (1/#Θ) Σ_{θ∈Θ} yi(θ)

ȳ = (1/N) Σ_{i∈N} ȳi

Lemma 3. For every i ∈ N and every θ ∈ Θ, there exists a pair of allocations yi∗(θ) and yi∗∗(θ) s.t.

1. ui(yi∗(θ), θ) > ui(ȳi, θ)

2. ui(yi∗∗(θ), θ) > ui(ȳ, θ)

Proof. First we prove (1) and show that for any i and θ, ui(yi∗(θ), θ) > ui(ȳi, θ). Define yi∗(θ) as

yi∗(θ) = (1/#Θ) Σ_{θ′ ∈ Θ\{θ}} yi(θ′) + (1/#Θ) α(θ), where α(θ) ∈ ∆o(F(θ)).

Consider the utility from the lottery yi∗(θ) at state θ:

ui(yi∗(θ), θ) = (1/#Θ) Σ_{θ′ ∈ Θ\{θ}} ui(yi(θ′), θ) + (1/#Θ) ui(α(θ), θ).

By the NWA property we know that ui(α(θ), θ) > ui(yi(θ), θ). Therefore

ui(yi∗(θ), θ) > (1/#Θ) Σ_{θ′ ∈ Θ\{θ}} ui(yi(θ′), θ) + (1/#Θ) ui(yi(θ), θ) = ui(ȳi, θ).

Second we prove (2) and show that for any i and θ, ui(yi∗∗(θ), θ) > ui(ȳ, θ). Define yi∗∗(θ) as

yi∗∗(θ) = (1/N) Σ_{j ∈ N\{i}} ȳj + (1/N) yi∗(θ).

Consider the utility from the lottery yi∗∗(θ) at state θ:

ui(yi∗∗(θ), θ) = (1/N) Σ_{j ∈ N\{i}} ui(ȳj, θ) + (1/N) ui(yi∗(θ), θ).    (19)

We have already proved that ui(yi∗(θ), θ) > ui(ȳi, θ); therefore

ui(yi∗∗(θ), θ) > (1/N) Σ_{j ∈ N\{i}} ui(ȳj, θ) + (1/N) ui(ȳi, θ) = ui(ȳ, θ).
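The averaging construction behind Lemma 3 can be sanity-checked numerically. The sketch below builds ȳi and yi∗(θ) for a single agent in a small hypothetical environment (two states and utilities chosen purely for illustration, with yi(θ) taken to be the worst pure alternative and α(θ) the uniform lottery over F(θ)) and verifies the first inequality of the lemma:

```python
# A small hypothetical environment: two states, three alternatives, one
# agent's utilities; F and the utilities are chosen for illustration only.
u = {"t1": {"a": 3, "b": 2, "c": 1}, "t2": {"c": 3, "b": 2, "a": 1}}
F = {"t1": ["a", "b"], "t2": ["b", "c"]}

def eu(state, lot):
    return sum(p * u[state][x] for x, p in lot.items())

def mix(pairs):
    # Convex combination of lotteries given as (weight, lottery) pairs.
    out = {}
    for w, lot in pairs:
        for x, p in lot.items():
            out[x] = out.get(x, 0.0) + w * p
    return out

# NWA data: y_i(theta) = worst pure alternative; alpha(theta) = uniform over F(theta).
y = {t: {min(u[t], key=u[t].get): 1.0} for t in u}
alpha = {t: {x: 1 / len(F[t]) for x in F[t]} for t in u}

n = len(u)  # number of states, #Theta
y_bar = mix([(1 / n, y[t]) for t in u])                      # the average allocation
y_star = {t: mix([(1 / n, y[s]) for s in u if s != t]
                 + [(1 / n, alpha[t])]) for t in u}          # y*_i(theta)

for t in u:
    print(t, eu(t, y_star[t]) > eu(t, y_bar))  # True for both states
```

Replacing the true-state worst lottery by a strictly better interior lottery lifts the average strictly, which is exactly the one-line argument of the proof.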

The allocation ȳ appears directly in the construction of our canonical mechanism. The allocations ȳi and yi∗(θ) allow us to construct, for each agent i, a set of allocations {zi(θ, θ′)}θ,θ′.39 Think of a situation where agent i plays a game against a group of people who always announce the same state; {zi(θ, θ′)}θ,θ′ is the outcome function of this game. The following lemma establishes that for every agent i we can find allocations {zi(θ, θ′)}θ,θ′ such that if the true state is θ then zi(θ, θ′) is better than zi(θ′, θ′), and if the true state is θ′ then zi(θ, θ′) is not better than any α(θ′) ∈ ∆o(F(θ′)). This generalizes Lemma 2 of Bergemann et al. (2011) to social choice correspondences.

Lemma 4. If an SCC F satisfies the NWA property, then for every i ∈ N and every pair θ, θ′ there exist allocations {zi(θ, θ′)}θ,θ′ such that for every α(θ′) ∈ ∆o(F(θ′)),

ui(α(θ′), θ′) > ui(zi(θ, θ′), θ′),

and for θ ≠ θ′,

ui(zi(θ, θ′), θ) > ui(zi(θ′, θ′), θ).

Proof. Define zi(θ, θ′) as follows:

zi(θ′, θ′) = (1 − ε) yi(θ′) + ε ȳi,

and for θ ≠ θ′, zi(θ, θ′) = (1 − ε) yi(θ′) + ε yi∗(θ).

To prove the two conditions in the lemma, first consider the utility of agent i from zi(θ, θ′) at state θ, assuming that θ ≠ θ′:

ui(zi(θ, θ′), θ) = (1 − ε) ui(yi(θ′), θ) + ε ui(yi∗(θ), θ).    (20)

From Lemma 3 we know that ui(yi∗(θ), θ) > ui(ȳi, θ); therefore

ui(zi(θ, θ′), θ) > (1 − ε) ui(yi(θ′), θ) + ε ui(ȳi, θ) = ui(zi(θ′, θ′), θ).    (21)

Notice that this holds irrespective of the value of ε. Now, by NWA we know that for every α(θ′) ∈ ∆o(F(θ′)), ui(α(θ′), θ′) > ui(yi(θ′), θ′). If ε = 0, then zi(θ, θ′) = yi(θ′), so for ε = 0 we have ui(α(θ′), θ′) > ui(zi(θ, θ′), θ′) for every α(θ′) ∈ ∆o(F(θ′)). By continuity this remains true for some ε > 0. Thus we can always choose a positive ε, however small, such that ui(α(θ′), θ′) > ui(zi(θ, θ′), θ′) holds for every α(θ′) ∈ ∆o(F(θ′)).

39 Here a typical allocation zi(θ, θ′) is a lottery over X.
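The choice of ε can likewise be checked numerically. The sketch below builds zi(θ, θ′) with ε = 0.1 in the same kind of hypothetical two-state environment used for the Lemma 3 sketch (all data chosen for illustration) and verifies both inequalities of Lemma 4, using the uniform lottery over F(θ′) as the interior test lottery:

```python
# Hypothetical environment for a single agent; "t1"/"t2" are the two states.
u = {"t1": {"a": 3, "b": 2, "c": 1}, "t2": {"c": 3, "b": 2, "a": 1}}
F = {"t1": ["a", "b"], "t2": ["b", "c"]}

def eu(state, lot):
    return sum(p * u[state][x] for x, p in lot.items())

def mix(pairs):
    out = {}
    for w, lot in pairs:
        for x, p in lot.items():
            out[x] = out.get(x, 0.0) + w * p
    return out

eps = 0.1
y = {t: {min(u[t], key=u[t].get): 1.0} for t in u}        # NWA worst lotteries
alpha = {t: {x: 1 / len(F[t]) for x in F[t]} for t in u}  # interior lottery over F
n = len(u)
y_bar_i = mix([(1 / n, y[t]) for t in u])
y_star = {t: mix([(1 / n, y[s]) for s in u if s != t]
                 + [(1 / n, alpha[t])]) for t in u}

def z(s, sp):
    # z_i(theta, theta') = (1-eps) y_i(theta') + eps * (y_bar_i or y*_i(theta)).
    return mix([(1 - eps, y[sp]), (eps, y_bar_i if s == sp else y_star[s])])

# Announcing the true state beats mimicking the group at the true state...
print(eu("t1", z("t1", "t2")) > eu("t1", z("t2", "t2")))  # True
# ...and z is worse than the interior lottery over F at the announced state.
print(eu("t2", alpha["t2"]) > eu("t2", z("t1", "t2")))    # True
```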

In addition to the NWA condition, we require a further condition, which we call ΘF-distinguishability.

4.2 ΘF-Distinguishability

Definition 17 (Distinguishability). We say that an SCC is distinguishable on a set Θ̂ ⊆ Θ if for every ordered pair (θ, θ′) ∈ Θ̂ × Θ there exists a lottery α ∈ ∆(F(θ)) which does not maintain position.

Our sufficient condition requires F to be distinguishable on the set of states where F is single valued. Define ΘF = {θ ∈ Θ s.t. |F(θ)| = 1}.

Definition 18. We say that an SCC F is ΘF-distinguishable if it is distinguishable on ΘF ⊆ Θ.

ΘF-distinguishability requires that, whenever we move from a state θ at which F(θ) is single valued to any other state θ′, there exists an agent i and a lottery β ∈ ∆(X) such that β ∈ Li(F(θ), θ) and β ∉ Li(F(θ), θ′). For SCF’s, ΘF-distinguishability is strictly stronger than Maskin monotonicity, but for SCC’s the two conditions are logically independent.

5 Main Theorem

In this section we present our main theorem. To prove it we construct a “canonical mechanism.” We will show that, for a society with at least 3 agents, our canonical mechanism rationalizably implements any SCC which satisfies r-monotonicity, NWA and ΘF-distinguishability. The canonical mechanism we construct shares many basic features with the mechanism constructed by Maskin (1999), with important differences and modifications. In particular, our canonical mechanism is closely related to the one used by Bergemann et al. (2011) and differs from theirs, non-trivially, to accommodate the case of correspondences.

Theorem 3. Assume n ≥ 3. If an SCC F satisfies NWA, r-monotonicity and ΘF-distinguishability, then it is rationalizably implementable.

Proof. Below we describe the canonical mechanism which will be used to prove our main result. For each state of the world θ, the mechanism induces a game of complete information among the agents. We study the set of rationalizable strategy profiles for each θ, which we denote by R(Γ, θ).

Message Space

M1 = Θ, M2 = Z+, M3 = {m3 ∈ ∆(X)^Θ s.t. m3(θ) ∈ ∆(F(θ)) for every θ}, M4 = ∆(X)^Θ, M5 = ∆(X).

A typical message for a player i, mi = (mi^1, mi^2, mi^3, mi^4, mi^5), can be described as follows:

• mi^1 ∈ M1: a state of the world.

• mi^2 ∈ M2: an integer.

• mi^3 ∈ M3: a state-contingent allocation plan such that mi^3(θ) ∈ ∆(F(θ)).

• mi^4 ∈ M4: a state-contingent allocation plan such that mi^4(θ) ∈ ∆(X).

• mi^5 ∈ M5: a lottery over X.

Outcome Function

1 0 2 1. Universal Agreement: For every i, mi = θ , mi = 1.

g(m) = U[F (θ0)]

1 2 1 2 2. Unilateral Deviation: There is an agent i ∈ N s.t. (mi , mi ) 6= (mj , mj ) and for 1 2 0 0 0 every j 6= {i},(mj , mj ) = (θ , 1). Let b(θ ) ∈ argmin ui(y, θ ). y∈F (θ0)

0 2 I Coordination Failure: |F (θ )| ≥ 2 and mi = 1.

   U[F (θ0)] with probability 1  g(m) = 2 0 1  b(θ ) with probability 2 

68 0 2 II Other Cases: |F (θ )| = 1 or mi > 1

i. If u_i(U[F(θ′)], θ′) ≥ u_i(m_i^4(θ′), θ′):

g(m) = m_i^4(θ′) with probability 1 − 1/(m_i^2 + 1), and z_i(θ′, θ′) with probability 1/(m_i^2 + 1).

ii. If u_i(U[F(θ′)], θ′) < u_i(m_i^4(θ′), θ′):

g(m) = z_i(θ′, θ′).

3. Disagreement:

I. Coordination Failure: For every i ∈ N, m_i^2 = 1 and |F(m_i^1)| ≥ 2. Select an agent i at random.

g(m) = m_i^3(m_i^1) with probability 1/2, and U[F(m_i^1)] with probability 1/2.

II. Other Cases: Select the agent i announcing the highest integer m_i^2, with ties broken arbitrarily.

g(m) = m_i^5 with probability 1 − 1/(m_i^2 + 1), and y with probability 1/(m_i^2 + 1).
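The case structure of the outcome function can be rendered as a small sketch. Everything below is illustrative, not the text's formal object: `F` maps states to the sets F(θ), `u(i, a, s)` gives utilities, `z(i, s, s)` stands in for the test lotteries z_i(·,·) used above, and `y0` plays the role of the fixed lottery y in Rule 3.II.

```python
import random

def exp_u(u, i, lottery, s):
    """Expected utility of lottery {alt: prob} for agent i at state s."""
    return sum(p * u(i, a, s) for a, p in lottery.items())

def mix(l1, l2, p1):
    """Mixture: lottery l1 with probability p1, lottery l2 with 1 - p1."""
    out = {}
    for a, p in l1.items():
        out[a] = out.get(a, 0.0) + p1 * p
    for a, p in l2.items():
        out[a] = out.get(a, 0.0) + (1 - p1) * p
    return out

def g(m, F, u, z, y0):
    """Outcome for messages m = [(m1, m2, m3, m4, m5), ...]: m1 a state,
    m2 an integer >= 1, m3/m4 dicts state -> lottery, m5 a lottery."""
    n = len(m)
    uniform = lambda A: {a: 1.0 / len(A) for a in A}
    reports = [(mi[0], mi[1]) for mi in m]
    # Rule 1: universal agreement on some (theta', 1).
    if len(set(reports)) == 1 and reports[0][1] == 1:
        return uniform(F[reports[0][0]])
    # Rule 2: a single agent i deviates from a common report (theta', 1).
    for i in range(n):
        rest = reports[:i] + reports[i + 1:]
        if len(set(rest)) == 1 and rest[0][1] == 1 and reports[i] != rest[0]:
            th = rest[0][0]
            b = min(F[th], key=lambda x: u(i, x, th))  # i's worst F-optimal outcome
            if len(F[th]) >= 2 and m[i][1] == 1:       # case 2.I
                return mix(uniform(F[th]), {b: 1.0}, 0.5)
            if exp_u(u, i, uniform(F[th]), th) >= exp_u(u, i, m[i][3][th], th):
                return mix(m[i][3][th], z(i, th, th), 1 - 1.0 / (m[i][1] + 1))
            return z(i, th, th)                        # case 2.II.ii
    # Rule 3.I: full disagreement with all integers equal to one.
    if all(mi[1] == 1 and len(F[mi[0]]) >= 2 for mi in m):
        i = random.randrange(n)
        return mix(m[i][2][m[i][0]], uniform(F[m[i][0]]), 0.5)
    # Rule 3.II: the highest announced integer wins the integer game.
    i = max(range(n), key=lambda j: m[j][1])
    return mix(m[i][4], y0, 1 - 1.0 / (m[i][1] + 1))
```

The sketch only mirrors the case analysis; measurability and tie-breaking details are ignored.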

In Appendix A we show that the above mechanism rationalizably implements an SCC satisfying NWA, r-monotonicity, and ΘF-distinguishability.

Remark. For SCFs our canonical mechanism is identical to the one constructed in Bergemann et al. (2011). Bergemann et al. (2011) show that an SCF is rationalizably implementable if it satisfies responsiveness40, Maskin monotonicity, and NWA (see Proposition 2, p. 1261). It is easily checked that for SCFs, responsiveness and Maskin monotonicity together imply ΘF-distinguishability. For non-responsive SCFs, Bergemann et al. (2011) identify a condition which they call strict Maskin monotonicity∗. Under a weak restriction on the class of mechanisms, they prove that it is necessary for rationalizable implementation of an SCF. In Appendix B we show that strict Maskin monotonicity∗ is strictly stronger than Maskin monotonicity.

An important implication of Theorem 5 is that NWA and ΘF-distinguishability place no restrictions at states where F is multi-valued. This is in sharp contrast with the case of functions. To see the contrast clearly, consider an SCC F which always selects more than one alternative. Within this class we can use Theorem 5 to completely characterize the set of rationalizably implementable SCCs.

Corollary 2. Assume n ≥ 3 and |F(θ)| ≥ 2 for every θ. An SCC F is rationalizably implementable if and only if it satisfies r-monotonicity.

6 Application: The Pareto Correspondence

In this section we use our main theorem to show that the Pareto correspondence is rationalizably implementable.

40A social choice function is responsive if θ ≠ θ′ ⇒ f(θ) ≠ f(θ′).

Example 9. Consider the Pareto correspondence

F(θ) = {a ∈ X : there is no alternative b ∈ X such that u_i(b, θ) > u_i(a, θ) for every i ∈ N}.
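For finite X, the correspondence can be computed by brute force. The helper below, with its toy utilities, is our own illustration and not part of the text:

```python
def pareto_set(utilities):
    """utilities[i][a] is agent i's utility of alternative a at a fixed state.
    Returns the alternatives that are not strictly Pareto dominated, i.e. F(theta)."""
    alts = list(utilities[0])
    return {a for a in alts
            if not any(all(ui[b] > ui[a] for ui in utilities) for b in alts)}

# Two agents over {a, b, d}: d is strictly worse than a (and b) for everyone,
# so it drops out; a and b each top some agent's ranking and survive.
example = [{'a': 3, 'b': 2, 'd': 0}, {'a': 1, 'b': 2, 'd': 0}]
```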

Corollary 3. The Pareto correspondence is rationalizably implementable.

Proof. Partition the set of states as {Θ_F, Θ \ Θ_F}. For every θ ∈ Θ_F, F(θ) is a singleton; this implies that for every θ ∈ Θ_F, F(θ) is the top alternative for every agent. Further partition Θ_F according to the alternative that is top ranked in state θ, and denote this partition by {Θ_F^a}_{a∈X}; for example, if θ ∈ Θ_F^a then F(θ) = a. Informationally, the mechanism designer cares about the cell Θ_F^a rather than the actual state within it. Next we claim that the Pareto correspondence is distinguishable with respect to the partition {Θ_F^a}_{a∈X} of Θ_F.

Claim 2. The Pareto correspondence is distinguishable w.r.t. {Θ_F^a}_{a∈X}; i.e., for every pair (θ, θ′) such that θ ∈ Θ_F^a and F(θ) ≠ F(θ′), there exist an agent i ∈ N and an alternative b such that

u_i(a, θ) > u_i(b, θ) for every θ ∈ Θ_F^a, and

u_i(b, θ′) > u_i(a, θ′).

Proof. Select a state θ ∈ Θ_F^a. By definition, F(θ) = a. Now consider a θ′ ∈ Θ such that F(θ) ≠ F(θ′). This implies that there exists an alternative b ∈ F(θ′) with b ≠ a. By the definition of the Pareto correspondence, for every i ∈ N and every θ ∈ Θ_F^a we have u_i(a, θ) > u_i(b, θ). Also b ∈ F(θ′); now suppose there were no i such that u_i(b, θ′) > u_i(a, θ′). But this would mean that b ∉ F(θ′). Hence we arrive at a contradiction.

Claim 3. Pareto correspondence satisfies r-monotonicity.

Proof. We can in fact prove that the Pareto correspondence is Maskin monotonic. To see this, consider an alternative a and a state θ such that a ∈ F(θ). By the definition of the Pareto correspondence there does not exist an alternative b ∈ X such that u_i(b, θ) > u_i(a, θ) for every i ∈ N. Thus for every b ∈ X there exists an agent i such that u_i(a, θ) > u_i(b, θ). Consider a θ′ such that L_i(a, θ) ⊆ L_i(a, θ′) for every i ∈ N. Then at state θ′, for every b ∈ X there exists an agent i such that u_i(a, θ′) > u_i(b, θ′). By the definition of the Pareto correspondence, a ∈ F(θ′). Since a was arbitrary, we have shown that the Pareto correspondence satisfies Maskin monotonicity. Finally, the proof is completed by the fact that Maskin monotonicity implies r-monotonicity.

Claim 4. The Pareto correspondence satisfies the NWA property.

Proof. First consider θ ∈ Θ_F. In these states F(θ) is a singleton and also the top alternative for everyone, so any Pareto-dominated alternative serves as a worst alternative. For states of the world θ′ where F is multi-valued, for every i ∈ N select y_i(θ′) ∈ argmin_{y ∈ F(θ′)} u_i(y, θ′). Then for any lottery α ∈ ∆°(F(θ′)), y_i(θ′) serves as a worst alternative for agent i.

Using Claims 2, 3, and 4 along with Theorem 3, we conclude that the Pareto correspondence is rationalizably implementable.

7 Implementation with Partially Honest Agents

Recently there has been a growing literature which allows for the possibility that at least one agent is partially honest. A partially honest agent strictly prefers telling the truth to lying, conditional on receiving the same outcome; that is, the agent has a lexicographic preference for honesty. Moreover, the designer knows that such an agent exists but need not know the agent's identity. In this section we show that the set of implementable SCCs expands drastically, a result consistent with the existing literature on Nash implementation (see, for example, Matsushima (2008) and Dutta and Sen (2012)). In this setting, the mechanism designer can

ask everyone to announce a state of the world as "evidence." The utility of an agent now depends on the alternative chosen and also on the evidence he announces; i.e., the utility from alternative a at state θ when announcing θ′ is given by u_i((a, θ′); θ).

Definition 19. An agent i is partially honest if for every θ ∈ Θ and every θ′ ≠ θ, u_i((a, θ); θ) > u_i((a, θ′); θ).

The following claim follows immediately from the definition of partial honesty. The existential quantifier in the claim emphasizes that we just need the existence of at least one partially honest agent.

Claim 5. Consider θ ≠ θ′. Then there exists an agent i such that for every a ∈ X,

u_i((a, θ); θ) > u_i((a, θ′); θ) and u_i((a, θ′); θ′) > u_i((a, θ); θ′).

Theorem 4. If there exists an agent i who is partially honest, then any SCC which satisfies NWA is rationalizably implementable.

Proof. To prove the theorem we show that in the presence of a partially honest agent, every SCC satisfies the conditions of our main theorem. Interpreting an outcome as (a, θ), it follows from Claim 5 that the SCC is ΘF-distinguishable and also r-monotonic. Since the SCC satisfies NWA by assumption, the result follows directly from our main theorem (Theorem 3).

8 Stronger Notions of Implementation

The notion of implementation studied in this paper is the most conservative. In particular, we assume that at any rationalizable strategy profile, in each state, the mechanism designer is completely agnostic about the probability received by each alternative. In this section we relax this assumption and allow the mechanism designer to demand a minimal probability for each socially optimal alternative. More formally, we call this notion ε-rationalizable implementation.

Definition 20. Let ε ∈ [0, 1]. We say a mechanism Γ(M, g) ε-rationalizably implements an SCC F if for every θ ∈ Θ:

1. For every a ∈ F(θ) there exists m ∈ R(Γ, θ) such that g(m)(a) > ε.

2. For every m′ ∈ R(Γ, θ), supp(g(m′)) ⊆ F(θ).
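On finite data, the two conditions of Definition 20 can be checked mechanically. In the sketch below (names are illustrative, not from the text), `R[th]` is taken as given — a list of rationalizable message profiles — and `g` maps a profile to a lottery `{alternative: probability}`:

```python
def eps_implements(eps, states, F, R, g):
    """Check both conditions of eps-rationalizable implementation."""
    for th in states:
        lotteries = [g(prof) for prof in R[th]]
        # Condition 2: every rationalizable outcome is supported on F(theta).
        if any(p > 0 and a not in F[th]
               for lot in lotteries for a, p in lot.items()):
            return False
        # Condition 1: each F-optimal alternative exceeds eps at some profile.
        if any(all(lot.get(a, 0.0) <= eps for lot in lotteries) for a in F[th]):
            return False
    return True
```

For ε = 0 the first check reduces to requiring positive probability, which is why ε-implementation for any ε implies 0-implementation (used in Proposition 10 below).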

Proposition 10. Let ε ∈ [0, 1]. If a social choice correspondence F is ε-rationalizably implementable, then it satisfies r-monotonicity.

Proof. Assume that F is ε-rationalizably implementable for some fixed ε ∈ [0, 1]. This implies that there exists a mechanism, say Γ(M, g), which ε-rationalizably implements it. From the definition of ε-rationalizable implementation it follows that Γ(M, g) also 0-rationalizably implements F. Therefore, by Proposition 9 we conclude that F satisfies r-monotonicity.

The necessity of r-monotonicity for ε-rationalizable implementation for any ε ∈ [0, 1] is not surprising. What is surprising is that our main theorem (Theorem 3) remains true for any ε ∈ [0, 1).

Theorem 5. Let ε ∈ [0, 1) and n ≥ 3. If an SCC F satisfies NWA and r-monotonicity and is ΘF-distinguishable, then it is ε-rationalizably implementable.

Proof. See Appendix C.

For ε = 1 the above theorem is no longer valid. In particular, we provide an example of an SCC (Example 10) which cannot be 1-rationalizably implemented (Claim 6) but satisfies all the conditions of the above theorem, and hence can be ε-rationalizably implemented for any ε ∈ [0, 1) (Claim 7). Before moving to the example, let us recall the definition of 1-rationalizable implementation.

Definition 21. We say a mechanism Γ(M, g) 1-rationalizably implements an SCC F if for every θ ∈ Θ:

1. For every a ∈ F(θ) there exists m ∈ R(Γ, θ) such that a = g(m).

2. For every m′ ∈ R(Γ, θ), supp(g(m′)) ⊆ F(θ).

Example 10. Let X = {a, b, c, d} be the set of alternatives and N = {1, 2, 3} the set of agents. There are two states of the world, θ and θ′. The preference profile for each state is given in the table below; numbers in parentheses specify the vNM utilities.

Table 4: Preferences in Example 10

              θ                         θ′
    1       2       3         1       2       3
   d(3)    d(3)    d(3)      d(3)    d(3)    d(3)
   a(2)    b(2)    a(2)      a(2)    a(2)    a(2)
   c(1)    c(1)    c(1)      c(1)    b(1)    c(1)
   b(0)    a(0)    b(0)      b(0)    c(0)    b(0)

Suppose the planner's objective is described by the following social choice correspondence: F(θ) = {a, b, c, d} and F(θ′) = {a}.
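The facts about Table 4 that drive Claims 6 and 7 can be verified directly; the encoding below is our own ('th' and 'thp' stand for θ and θ′):

```python
# vNM utilities from Table 4, keyed by (state, agent).
u = {
    ('th', 1): {'d': 3, 'a': 2, 'c': 1, 'b': 0},
    ('th', 2): {'d': 3, 'b': 2, 'c': 1, 'a': 0},
    ('th', 3): {'d': 3, 'a': 2, 'c': 1, 'b': 0},
    ('thp', 1): {'d': 3, 'a': 2, 'c': 1, 'b': 0},
    ('thp', 2): {'d': 3, 'a': 2, 'b': 1, 'c': 0},
    ('thp', 3): {'d': 3, 'a': 2, 'c': 1, 'b': 0},
}

def L(i, a, s):
    """Agent i's lower contour set of alternative a at state s."""
    return {x for x in 'abcd' if u[(s, i)][x] <= u[(s, i)][a]}

# Driving Claim 6: d is every agent's top alternative in both states.
assert all(max(v, key=v.get) == 'd' for v in u.values())
# Driving Claim 7: a is in L2(b, th) but not L2(b, thp), and
# b is in L2(a, thp) but not L2(a, th).
assert 'a' in L(2, 'b', 'th') and 'a' not in L(2, 'b', 'thp')
assert 'b' in L(2, 'a', 'thp') and 'b' not in L(2, 'a', 'th')
```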

Claim 6. There is no mechanism which 1-rationalizably implements F.

Proof. Assume by way of contradiction that there exists a mechanism, say Γ = (M, g), which 1-rationalizably implements F. Then at state θ there exists an m ∈ R(Γ, θ) such that g(m) = d. For every agent, d is the top alternative at both θ and θ′. Therefore m is a Nash equilibrium at both θ and θ′, which implies that m ∈ R(Γ, θ′). Finally, since Γ implements F, d ∈ F(θ′), which is a contradiction.

Claim 7. For any ε ∈ [0, 1), the SCC F can be ε-rationalizably implemented.

Proof. To prove the claim we check the conditions of Theorem 5. First consider r-monotonicity. For the pair (θ, θ′), we know that for b ∈ F(θ), a ∈ L_2(b, θ) and a ∉ L_2(b, θ′). This means that r-monotonicity is vacuously satisfied. For the pair (θ′, θ) we know that F(θ′) ⊆ F(θ). Thus we have established that F satisfies r-monotonicity. Now consider ΘF-distinguishability; we need to consider the pair (θ′, θ). From the preferences we know that b ∈ L_2(a, θ′) and b ∉ L_2(a, θ). Hence F satisfies ΘF-distinguishability. Finally, it is easy to see that the NWA property is satisfied; we only need to check NWA at state θ′. Since alternative a = F(θ′) is ranked second by everyone, we can find a lottery which every agent ranks strictly below a. Since F satisfies all the conditions of Theorem 5, we conclude that F can be ε-rationalizably implemented for any ε ∈ [0, 1).

9 Discussion

The canonical mechanism used to prove Theorem 3 has the property that it also implements the SCC in mixed Nash equilibrium, a notion of implementation introduced and studied in Mezzetti and Renou (2012). Recently there is a growing literature which allows the agents to provide some costly evidence (see, e.g., Ben-Porath and Lipman (2012) and Kartik and Tercieux (2012)). This is in contrast to the standard literature, where all the messages in a mechanism are cheap talk. We touched upon a specific case of this literature by studying the presence of a partially honest agent. A general study of rationalizable implementation with evidence is an interesting topic left for future research. The assumption of strict preferences plays a crucial role in the proof of Theorem 3. However, the necessary condition still holds even if we allow for indifference. This centrality of strict preferences is reminiscent of the recent literature on dropping no veto power in characterizing Nash implementable SCCs (see, e.g., Benoît and Ok (2008), Bochet (2007)). Recently, Artemov (2015) adds the possibility of a delay to the theory of Nash implementation. This allows a planner to deliver the F-optimal alternative with some delay. Furthermore, every agent discounts the future, so earlier delivery of the outcome is better than later delivery. If we allow for some ε delay, then we can always define the SCC so that it selects F-optimal alternatives along with an ε delay. This means that the modified SCC is never single-valued and, as a result, responsiveness or distinguishability-type conditions can be dropped altogether. A thorough study of rationalizable implementation with delay is an interesting topic which we leave for future research.

References

Abdulkadiroğlu, A. and Sönmez, T. (1998). Random serial dictatorship and the core from random endowments in house allocation problems. Econometrica, 66(3):689–701.

Abreu, D. and Matsushima, H. (1992). Virtual implementation in iteratively undominated strategies I: Complete information. Econometrica, 60:993–1008.

Abreu, D. and Matsushima, H. (1994). Exact implementation. Journal of Economic Theory, 64:1–19.

Artemov, G. (2015). Time and Nash implementation. Games and Economic Behavior, 91:229–236.

Ben-Porath, E. and Lipman, B. L. (2012). Implementation with partial provability. Journal of Economic Theory, 147(5):1689–1724.

Benoît, J.-P. and Ok, E. A. (2008). Nash implementation without no-veto power. Games and Economic Behavior, 64(1):51–67.

Bergemann, D. and Morris, S. (2005). Robust mechanism design. Econometrica, 73(6):1771–1813.

Bergemann, D., Morris, S., and Tercieux, O. (2011). Rationalizable implementation. Journal of Economic Theory, 146(3):1253–1274.

Bernheim, B. D. (1984). Rationalizable strategic behavior. Econometrica: Journal of the Econometric Society, pages 1007–1028.

Bester, H. and Strausz, R. (2001). Contracting with imperfect commitment and the revelation principle: the single agent case. Econometrica, 69(4):1077–1098.

Bochet, O. (2007). Nash implementation with lottery mechanisms. Social Choice and Welfare, 28(1):111–125.

Bochet, O. and Maniquet, F. (2010). Virtual Nash implementation with admissible support. Journal of Mathematical Economics, 46(1):99–108.

Bogomolnaia, A. and Moulin, H. (2001). A new solution to the random assignment problem. Journal of Economic Theory, 100(2):295–328.

Brandenburger, A. and Dekel, E. (1987). Rationalizability and correlated equilibria. Econometrica: Journal of the Econometric Society, pages 1391–1402.

Brock, W. A. (1980). The design of mechanisms for efficient allocation of public goods. In Klein, L. R., Nerlove, M., and Tsiang, S. C., editors, Quantitative Economics and Development: Essays in Honor of Ta-Chung Liu, Economic Theory, Econometrics, and Mathematical Economics, pages 45–80. Academic Press, New York.

Cabrales, A. and Serrano, R. (2009). Implementation in adaptive better-response dy- namics. IMDEA working paper.

Chen, Y. (2002). A family of supermodular Nash mechanisms implementing Lindahl allocations. 19:773–790.

Chen, Y.-C., Luo, X., and Qu, C. (2016). Rationalizability in general situations. Eco- nomic Theory, 61(1):147–167.

Deb, R. and Pai, M. M. (2017). Discrimination via symmetric auctions. American Economic Journal: Microeconomics, 9(1):275–314.

Dutta, B. and Sen, A. (2012). Nash implementation with partially honest individuals. Games and Economic Behavior, 74(1):154–169.

Groves, T. (1973). Incentives in teams. Econometrica, 41:617–633.

Groves, T. and Ledyard, J. O. (1977). Optimal allocation of public goods: A solution to the 'free-rider' problem. Econometrica, 45:783–809.

Groves, T. and Ledyard, J. O. (1980). The existence of efficient and incentive compatible equilibria with public goods. Econometrica, 48(6):1487–1506.

Groves, T. and Ledyard, J. O. (1987). Incentive compatibility since 1972. In Groves, T., Radner, R., and Reiter, S., editors, Information, Incentives, and Economic Mechanisms. University of Minnesota Press, Minneapolis.

Groves, T. and Loeb, M. (1975). Incentives and public inputs. Journal of Public Eco- nomics, 4:211–226.

Healy, P. J. and Mathevet, L. (2012). Designing stable mechanisms for economic envi- ronments. Theoretical Economics, 7:609–661.

Holmström, B. and Myerson, R. B. (1983). Efficient and durable decision rules with incomplete information. Econometrica: Journal of the Econometric Society, pages 1799–1819.

Hurwicz, L. (1979a). On allocations attainable through Nash equilibria. Journal of Economic Theory, 21:140–165.

Hurwicz, L. (1979b). Outcome functions yielding Walrasian and Lindahl allocations at Nash equilibrium points. Review of Economic Studies, 46:217–225.

Hylland, A. and Zeckhauser, R. (1979). The efficient allocation of individuals to positions. Journal of Political Economy, 87(2):293–314.

Kartik, N. and Tercieux, O. (2012). Implementation with evidence. Theoretical Eco- nomics, 7(2):323–355.

Kim, T. (1993). A stable Nash mechanism implementing Lindahl allocations for quasi-linear environments. 22(4):359–371.

Knyazev, D. (2016). Favoritism in auctions: A mechanism design approach. Technical report, working paper.

Korpela, V. P. (2016). Procedurally fair implementation: The cost of insisting on symmetry. Working paper.

Kunimoto, T. and Serrano, R. (2016). Rationalizable implementation of correspondences. Working Paper 2016-04, Department of Economics, Brown University.

Laffont, J.-J. and Maskin, E. (1980). A differential approach to dominant strategy mechanism design. Econometrica, 48:1507–1520.

Ledyard, J. O. and Palfrey, T. R. (1999). A characterization of interim efficiency with public goods. Econometrica, 67(2):435–448.

Lipman, B. L. (1994). A note on the implications of common knowledge of rationality. Games and Economic Behavior, 6(1):114–129.

Liu, L. and Tian, G. (1999). A characterization of the existence of optimal dominant strategy mechanisms. Review of Economic Design, 4:205–218.

Mas-Colell, A., Whinston, M. D., Green, J. R., et al. (1995). Microeconomic Theory, volume 1. Oxford University Press, New York.

Maskin, E. (1999). Nash equilibrium and welfare optimality. Review of Economic Studies, 66:23–38.

Mass, H., Fugger, N., Gretschko, V., and Wambach, A. (2017). Imitation perfection: A simple rule to prevent discrimination in procurement.

Matsushima, H. (2008). Role of honesty in full implementation. Journal of Economic Theory, 139(1):353–359.

May, K. O. (1952). A set of independent necessary and sufficient conditions for simple majority decision. Econometrica: Journal of the Econometric Society, pages 680–684.

Mezzetti, C. and Renou, L. (2012). Implementation in mixed Nash equilibrium. Journal of Economic Theory, 147(6):2357–2375.

Muller, E. and Satterthwaite, M. A. (1977). The equivalence of strong positive association and strategy-proofness. Journal of Economic Theory, 14(2):412–418.

Myerson, R. B. (1981). Optimal auction design. Mathematics of operations research, 6(1):58–73.

Oury, M. and Tercieux, O. (2012). Continuous implementation. Econometrica, 80(4):1605–1637.

Pearce, D. G. (1984). Rationalizable strategic behavior and the problem of perfection. Econometrica: Journal of the Econometric Society, pages 1029–1050.

Pérez-Castrillo, D. and Wettstein, D. (2016). Discrimination in a model of contests with incomplete information about ability. International Economic Review, 57(3):881–914.

Saijo, T. (1987). On constant Maskin monotonic social choice functions. 42(2):382–386.

Samuelson, P. A. (1954). The pure theory of public expenditure. Review of Economics and Statistics, 36:387–389.

Saran, R. (2011). Menu-dependent preferences and revelation principle. Journal of Economic Theory, 146(4):1712–1720.

Shapley, L. S. (1953). A value for n-person games. Contributions to the Theory of Games, 2(28):307–317.

Sjöström, T. (1994). Implementation in undominated Nash equilibria without integer games. Games and Economic Behavior, 6(3):502–511.

Strausz, R. (2003). Deterministic mechanisms and the revelation principle. Economics Letters, 79(3):333–337.

Tian, G. (1990). Completely feasible and continuous implementation of the Lindahl correspondence with a message space of minimal dimension. 51:443–452.

82 Tian, G. (1996). On the existence of optimal truth-dominant mechanisms. Economics Letters, 53(1):17–24.

Tumennasan, N. (2013). To err is human: Implementation in quantal response equilibria. Games and Economic Behavior, 77(1):138–152.

Van Essen, M. (2009). A simple supermodular mechanism that implements Lindahl allocations. Journal of Public Economic Theory, forthcoming.

Walker, M. (1981). A simple incentive compatible scheme for attaining Lindahl allocations. Econometrica, 49:65–71.

Winter, E. (2004). Incentives and discrimination. American Economic Review, 94(3):764–773.

Appendix for Chapter 1

A Proof of Lemma 2

In any quasi-direct mechanism, agent i chooses yi in response to m−i to maximize

u_i(w_i − t_i(y_i, m_{−i}), y_i).

At any best response y_i^∗ we have the first-order condition

∂t_i(y_i^∗, m_{−i})/∂y = [∂u_i(w_i − t_i(y_i^∗, m_{−i}), y_i^∗)/∂y] / [∂u_i(w_i − t_i(y_i^∗, m_{−i}), y_i^∗)/∂x_i].

The best-response message m_i^∗ is the one that generates y(m_i^∗, m_{−i}) = y_i^∗. The Groves mechanism is quasi-direct. By the definition of t_i^G, we have that

∂t_i^G(y_i, m_{−i})/∂y = κ − Σ_{j≠i} m_j′(y_i).

Recall that y^G(m) satisfies Σ_i m_i′(y^G(m)) = κ. Thus, at any y_i and m_{−i}, the message m_i that will lead to y_i = y^G(m) must satisfy

m_i′(y_i) = ∂t_i^G(y_i, m_{−i})/∂y.

If the message is in equilibrium then it is a best response, and so m_i′(y^G(m)) is agent i's marginal rate of substitution at (ω_i − t_i^G(m), y^G(m)). The Samuelson condition then follows from Σ_i m_i′(y^G(m)) = κ.

B Proof of Proposition 2

Denote the message of agent i by θi. First we demonstrate the envelope theorem result. The Groves tax function is

t_i(θ) = κ y(θ) − Σ_{j≠i} v_j(y(θ) | θ_j) + h_i(θ_{−i}), so

∂t_i/∂θ_i = κ ∂y/∂θ_i − Σ_{j≠i} v_j′(y(θ) | θ_j) ∂y/∂θ_i.

Because y(θ) is defined by Σ_i v_i′(y(θ) | θ_i) ≡ κ, we have

∂t_i/∂θ_i = ( κ − Σ_j v_j′(y(θ) | θ_j) + v_i′(y(θ) | θ_i) ) ∂y/∂θ_i

= v_i′(y(θ) | θ_i) ∂y/∂θ_i.

Integrating this gives

t_i(θ) = ∫ v_i′(y(θ) | θ_i) (∂y/∂θ_i) dθ_i + h_i(θ_{−i}).
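The envelope step can be checked numerically with a toy quadratic family (our own choice for illustration, not from the text): with v_i(y|θ_i) = θ_i·y − y²/2, the efficiency condition Σ_i v_i′(y) = κ pins down y(θ) = (Σ_i θ_i − κ)/n, and a finite difference of the Groves tax reproduces v_i′(y(θ)|θ_i)·∂y/∂θ_i.

```python
# Finite-difference check of dt_i/dθ_i = v_i'(y(θ)|θ_i)·dy/dθ_i for a toy
# quadratic family; h_i is set to zero throughout.
kappa, n = 1.0, 3

def y(theta):                          # efficient level: sum of v_i' equals kappa
    return (sum(theta) - kappa) / n

def v(level, th):
    return th * level - level * level / 2

def vprime(level, th):
    return th - level

def t(i, theta):                       # Groves tax t_i(θ) with h_i ≡ 0
    level = y(theta)
    return kappa * level - sum(v(level, theta[j]) for j in range(n) if j != i)

theta = [0.9, 1.4, 0.7]
step = 1e-6
up = list(theta); up[0] += step
dn = list(theta); dn[0] -= step
numeric = (t(0, up) - t(0, dn)) / (2 * step)        # dt_0/dθ_0, numerically
envelope = vprime(y(theta), theta[0]) * (1.0 / n)   # v_0'(y(θ))·dy/dθ_0
assert abs(numeric - envelope) < 1e-6
```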

Now suppose there exists a balanced Groves mechanism. Then

Σ_i [ ∫ v_i′(y(θ) | θ_i) (∂y/∂θ_i) dθ_i + h_i(θ_{−i}) ] ≡ κ y(θ).

If we differentiate both sides with respect to θ1 through θI , we get

Σ_i (∂^{n−1}/∂θ_{−i}) [ v_i′(y(θ) | θ_i) ∂y/∂θ_i ] ≡ κ ∂^n y(θ)/∂θ.

This is equivalent to

Σ_i (∂^{n−1}/∂θ_{−i}) [ (v_i′(y(θ) | θ_i) − α_i κ) ∂y/∂θ_i ] ≡ 0

for any parameters such that Σ_i α_i = 1. This is necessary and sufficient for budget balance, and is equivalent to the Laffont and Maskin (1980) condition when κ = 0.
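For a concrete balanced instance, take I = 3 with quadratic preferences (the r = 2, Groves–Ledyard-style case): v_i(y|θ_i) = (κ/I)·y + θ_i·y − y²/2, so that Σ_i v_i′ = κ gives y(θ) = Σ_i θ_i/I. The decomposition of the side payments h_i below is our own choice for this example — each h_i depends only on θ_{−i}, and the Groves taxes then sum exactly to κ·y(θ):

```python
# Budget-balance check for a toy I = 3 quadratic Groves mechanism.
kappa, I = 2.0, 3

def y(theta):
    return sum(theta) / I

def v(level, th):
    return (kappa / I) * level + th * level - level * level / 2

def h(i, theta):                       # free of theta_i (I = 3 specific form)
    a, b = [theta[j] for j in range(I) if j != i]
    return (0.5 * (a * a + b * b) + 2 * a * b) / 3

def t(i, theta):                       # Groves tax for agent i
    level = y(theta)
    return (kappa / I) * level \
        - sum(v(level, theta[j]) - (kappa / I) * level
              for j in range(I) if j != i) \
        + h(i, theta)

theta = [0.3, 1.1, 0.6]
budget = sum(t(i, theta) for i in range(I))
assert abs(budget - kappa * y(theta)) < 1e-12      # taxes exactly cover kappa*y
```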

C Proof of Proposition 3

Recall that

M_i^{T_r} := { v_i(·|θ_i) : v_i(y|θ_i) = α_i κ y + f(y) ψ_i(θ_i) − ((r−1)/(rc)) (c f(y) + d)^{r/(r−1)}, θ_i ∈ ℝ },

where c > 0, d ≥ 0, f(y) ≥ 0 for all y, both f and ψ_i are continuously differentiable with f′ ≠ 0 and ψ_i′ ≠ 0 at every point, and Σ_i α_i = 1. We restrict attention to r ∈ {2, 3, …, I−1}, as those are the values that ultimately will give budget balance. Because y^G(θ) equates Σ_i v_i′ with κ, we have

Σ_i ψ_i(θ_i) − I (c f(y^G(θ)) + d)^{1/(r−1)} = 0

at every θ. Letting ψ̄(θ) ≡ Σ_i ψ_i(θ_i)/I, we now have the useful fact that

(c f(y^G(θ)) + d)^{1/(r−1)} = ψ̄(θ). (26)

Solving for f(y) gives f(y^G(θ)) = ψ̄(θ)^{r−1}/c − d/c, which can be differentiated to find that

∂y^G/∂θ_i = (r−1) ψ̄(θ)^{r−2} ψ_i′(θ_i) / (c I f′(y^G(θ))).

Using the fact that ∂t_i^G/∂θ_i = v_i′(y^G(θ)|θ_i) ∂y/∂θ_i (the envelope condition) and using (26), we now have

v_i′(y^G(θ)|θ_i) ∂y^G/∂θ_i = α_i κ ∂y^G/∂θ_i + f′(y^G(θ)) ψ_i(θ_i) [ (r−1) ψ̄(θ)^{r−2} ψ_i′(θ_i) / (c I f′(y^G(θ))) ] − f′(y^G(θ)) ψ̄(θ) [ (r−1) ψ̄(θ)^{r−2} ψ_i′(θ_i) / (c I f′(y^G(θ))) ].

This simplifies to

[ v_i′(y^G(θ)|θ_i) − α_i κ ] ∂y/∂θ_i = ((r−1)/(cI)) ψ_i(θ_i) ψ_i′(θ_i) ψ̄(θ)^{r−2} − ((r−1)/(cI)) ψ_i′(θ_i) ψ̄(θ)^{r−1}.

Following Tian (1996), we derive the Laffont-Maskin condition

Σ_i (∂^{n−1}/∂θ_{−i}) [ (v_i′(y(θ)|θ_i) − α_i κ) ∂y/∂θ_i ] = ((I−1)/(c I^{I−1})) (r−1)(r−2) ⋯ (r−I+1) ψ̄(θ)^{r−I} Π_{j=1}^I ψ_j′(θ_j),

which identically equals zero for any r ∈ {2, 3, …, I−1}. By Proposition 2, the Groves mechanism can be balanced.

D Proof of Proposition 4

We derived in equation (26) that

y^∗(θ) ≡ f^{−1}( ψ̄(θ)^{r−1}/c − d/c ).

The Groves tax function is

t_i(θ) = α_i κ y(θ) − Σ_{j≠i} [ v_j(y(θ)|θ_j) − α_j κ y(θ) ] + h_i(θ_{−i}),

where h_i is arbitrary. Substituting in the parameterized preferences and using the fact that (c f(y(θ)) + d)^{1/(r−1)} = ψ̄(θ), this reduces to

t_i^∗(θ) = α_i κ y^∗(θ) − (I−1) [ ψ̄(θ_{−i}) f(y^∗(θ)) + ((r−1)/(rc)) ψ̄(θ)^r ] + h_i(θ_{−i}).

Plugging in f (y (θ)) gives

t_i^∗(θ) = α_i κ y^∗(θ) − ((I−1)/c) [ ψ̄(θ_{−i}) ( ψ̄(θ)^{r−1} − d ) + ((r−1)/r) ψ̄(θ)^r ] + h_i(θ_{−i}).

To get balance we need

Σ_i h_i(θ_{−i}) = (I−1) Σ_i [ ψ̄(θ_{−i}) f(y^∗(θ)) + ((r−1)/(rc)) ψ̄(θ)^r ].

After manipulation, this yields

Σ_i h_i(θ_{−i}) = (1/c) [ ((I−1)/I^{r−1}) ((2r−1)/r) ( Σ_i ψ_i(θ_i) )^r − d Σ_i Σ_{j≠i} ψ_j(θ_j) ].

The goal is to express the right-hand side as sums over i of terms that do not depend on θ_i. To accomplish this, we need to break apart the (Σ_i ψ_i(θ_i))^r term. The following describes the procedure.

For notational simplicity, assume temporarily that ψ_i(θ_i) = θ_i for all i. It is easy to see that

( Σ_i θ_i )^r = Σ_{i_1} Σ_{i_2} ⋯ Σ_{i_r} θ_{i_1} θ_{i_2} ⋯ θ_{i_r}. (27)

Now we can break that sum into partial sums based on exponent groupings, to get

( Σ_i θ_i )^r = Σ_{i_1} θ_{i_1}^r + Σ_{i_1} Σ_{i_2 ≠ i_1} θ_{i_1}^{r−1} θ_{i_2} + Σ_{i_1} Σ_{i_2 ≠ i_1} θ_{i_1}^{r−2} θ_{i_2}^2 + Σ_{i_1} Σ_{i_2 ≠ i_1} Σ_{i_3 ∉ {i_k}_{k<3}} θ_{i_1}^{r−2} θ_{i_2} θ_{i_3} + ⋯ .

Take any one grouping in this sum, written generically as

Σ_{i_1} Σ_{i_2 ≠ i_1} ⋯ Σ_{i_q ∉ {i_k}_{k<q}} θ_{i_1}^{t_1} θ_{i_2}^{t_2} ⋯ θ_{i_q}^{t_q},

where Σ_{k=1}^q t_k = r and t_k > t_{k+1} for all k < q. Now, suppose for each i we remove all terms containing the ith index, but then sum the entire expression over i. That would be

Σ_i [ Σ_{i_1 ≠ i} Σ_{i_2 ∉ {i, i_1}} ⋯ Σ_{i_q ∉ {i} ∪ {i_k}_{k<q}} θ_{i_1}^{t_1} θ_{i_2}^{t_2} ⋯ θ_{i_q}^{t_q} ]. (28)

How many times does each term appear? There are

I · C(r, t_1) · (I−1) · C(r−t_1, t_2) · (I−2) · C(r−t_1−t_2, t_3) ⋯ (I−q+1) · C(r − Σ_{k=1}^{q−2} t_k, t_{q−1})

such terms in the original sum. This reduces to

I (I−1) ⋯ (I−q+1) · M(r; t_1, t_2, …, t_q),

where M(r; t_1, …, t_q) denotes the multinomial coefficient r!/(t_1! t_2! ⋯ t_q!). Thus, in expression (28) we 'overcounted' terms by a factor of

[ I (I−1) ⋯ (I−q) ] / [ I (I−1) ⋯ (I−q+1) · M(r; t_1, …, t_q) ] = (I−q) / M(r; t_1, …, t_q).

To adjust for this, we alter (28) to get

Σ_i [ M(r; t_1, …, t_q) / (I−q) ] [ Σ_{i_1 ≠ i} Σ_{i_2 ∉ {i, i_1}} ⋯ Σ_{i_q ∉ {i} ∪ {i_k}_{k<q}} θ_{i_1}^{t_1} θ_{i_2}^{t_2} ⋯ θ_{i_q}^{t_q} ]. (29)

Return now to the expression

( Σ_i θ_i )^r = Σ_{i_1} θ_{i_1}^r + Σ_{i_1} Σ_{i_2 ≠ i_1} θ_{i_1}^{r−1} θ_{i_2} + Σ_{i_1} Σ_{i_2 ≠ i_1} θ_{i_1}^{r−2} θ_{i_2}^2 + Σ_{i_1} Σ_{i_2 ≠ i_1} Σ_{i_3 ∉ {i_k}_{k<3}} θ_{i_1}^{r−2} θ_{i_2} θ_{i_3} + ⋯ .

For any r and q, let the set of decreasing sequences of positive integers that sum to r be given by

T_{q,r} = { (t_1, …, t_q) ∈ ℕ^q : Σ_{k=1}^q t_k = r and t_{k−1} > t_k for all k > 1 }.

Using (29), we now have

( Σ_i θ_i )^r = Σ_i Σ_{q=1}^r Σ_{T_{q,r}} [ M(r; t_1, …, t_q) / (I−q) ] [ Σ_{i_1 ≠ i} Σ_{i_2 ∉ {i, i_1}} ⋯ Σ_{i_q ∉ {i} ∪ {i_k}_{k<q}} θ_{i_1}^{t_1} θ_{i_2}^{t_2} ⋯ θ_{i_q}^{t_q} ].

Returning to the question of budget balance, we derive that

Σ_i h_i(θ_{−i}) = Σ_i (1/c) [ ((I−1)/I^{r−1}) ((2r−1)/r) Σ_{q=1}^r Σ_{T_{q,r}} ( M(r; t_1, …, t_q) / (I−q) ) Σ_{i_1 ≠ i} ⋯ Σ_{i_q ∉ {i} ∪ {i_k}_{k<q}} Π_{k=1}^q ψ_{i_k}(θ_{i_k})^{t_k} − d Σ_{j≠i} ψ_j(θ_j) ],

giving the desired expression for each hi (θ−i).

E Proof of Proposition 5

Fix any allocation ((x_i°)_i, y°) in the range of Γ^G satisfying the Samuelson condition

Σ_i MRS_i(x_i°, y°) = κ,

and some message profile m̂ such that m̂_i′(y°) = MRS_i(x_i°, y°) for all i. Because Σ_i m̂_i′(y°) = κ, the mechanism selects y^G(m̂) = y°. Note that if we add a scalar a_j to each m̂_j then y^G(·) is unaffected, but each t_i^G(·) decreases by Σ_{j≠i} a_j. Define m_i^a(·) = m̂_i(·) + a_i for each i and let (a_i)_{i=1}^I be the unique vector of scalars such that ω_i − t_i^G(m^a) = x_i° for all i.41 Suppose all agents j ≠ i announce m_j^a. By the arguments in Lemma 1, agent i's best response will be a message m_i^∗ satisfying

MRS_i(ω_i − t_i^G(m_i^∗, m_{−i}^a), y^G(m_i^∗, m_{−i}^a)) = κ − Σ_{j≠i} d m_j^a(y^G(m_i^∗, m_{−i}^a))/dy.

The message m_i^a satisfies this condition, because MRS_i(ω_i − t_i^G(m^a), y^G(m^a)) = MRS_i(x_i°, y°), d m_j^a(y^G(m^a))/dy = MRS_j(x_j°, y°) for all j, and Σ_i MRS_i(x_i°, y°) = κ. This is true for all i, so m^a is a Nash equilibrium.

Appendix for Chapter 2

A Proof of claim in the assignment example

Claim: In the assignment example described in Subsection 2.3, serial dictatorship is symmetrically implementable in BNE if and only if δ ≤ δ^∗ := (−3 + √13)/12 ≈ 0.05.

Proof. We use the characterization in Theorem 1 to prove the claim. Denote by f the given serial dictatorship SCF. First, f is clearly BIC for any value of δ. Second, if we define f12 and f13 to be the uniform random assignment of goods to agents, then these functions satisfy the conditions of the theorem for any value of δ. Indeed, these functions are clearly symmetric, and since agent 1 always gets his favorite good under f, his expected utility cannot increase under any alternative SCF and any possible report of type.

41To find a, let A be the I × I matrix with zeros on the main diagonal and ones in every other entry. Then a = x° · A^{−1}.

Next, define the SCF f23 to be the one in which agent 1 gets his favorite good and agents 2 and 3 are equally likely to get each of the remaining goods. Then f23 is 23-symmetric and is dominated by f for agent 2. Indeed, at any realization of types, agent 2 gets his favorite good after 1's favorite is removed under f, while under f23 he is equally likely to get either that good or a good he likes less, regardless of the type he reports.

Similarly, define f21 to be the SCF in which agent 3 gets his favorite good, and agents 1 and 2 are equally likely to get each of the other goods. By the symmetry between the agents, a similar argument shows that this SCF satisfies the conditions of the theorem as well.

It remains to check whether we can find appropriate SCFs f31 and f32 to prevent agent 3 from pretending to be one of the other agents. We claim that such functions exist if and only if δ ≤ δ^∗. Let (p_a(δ), p_b(δ), p_c(δ)) be the probabilities with which agent 3 gets each of the goods under f for a particular value of δ. Notice that these probabilities do not depend on 3's type. A simple computation gives

p_a(δ) = 1/3 − 4δ + 12δ²,
p_b(δ) = 1/3 + δ − 18δ²,
p_c(δ) = 1/3 + 3δ + 6δ².

We now argue that appropriate f31 and f32 exist if and only if p_c(δ) ≤ 1/2, which is equivalent to δ ≤ δ^∗ (note that p_a(δ) and p_b(δ) are always less than 1/2). Suppose first that this condition holds. Define f31 to be the constant SCF that at every type profile selects the random assignment given by the following matrix:

          a              b              c
1      p_a(δ)         p_b(δ)         p_c(δ)
2      1 − 2p_a(δ)    1 − 2p_b(δ)    1 − 2p_c(δ)
3      p_a(δ)         p_b(δ)         p_c(δ)

Since all three probabilities p_a(δ), p_b(δ), and p_c(δ) are in [0, 1/2], this is a bi-stochastic matrix, and hence corresponds to some lottery over assignments. Since this SCF is constant and gives identical lotteries over the goods to agents 1 and 3, it is 31-symmetric as needed. Finally, agent 3 gets the same lottery over goods under f and under f31 regardless of his report, so inequality (9) holds with equality. The construction of the SCF f32 is similar. We have therefore proved that f is symmetrically implementable in BNE when δ ≤ δ^∗.

Suppose now that δ > δ^∗, so that p_c(δ) > 1/2. Assume by contradiction that there exists a 31-symmetric SCF f31 that is dominated by f. For some fixed type t ∈ T, denote by (q_a(δ), q_b(δ), q_c(δ)) the expected lottery over the goods that agent 3 obtains by reporting type t under f31. Then (9) implies that the following inequalities must hold:

3pa(δ) + 2pb(δ) + pc(δ) ≥ 3qa(δ) + 2qb(δ) + qc(δ),

pa(δ) + 2pb(δ) + 3pc(δ) ≥ qa(δ) + 2qb(δ) + 3qc(δ).

The first inequality means that if agent 3 is of type abc then truthfully reporting his name (and his type) is not worse for him than pretending to be agent 1 of type t. The second inequality corresponds to agent 3 of type cba. Together, and taking into account that these are probability distributions, these inequalities imply that

pa(δ) − qa(δ) = pc(δ) − qc(δ).

Similarly, using the inequalities for types acb and bca of agent 3, we get that

pa(δ) − qa(δ) = pb(δ) − qb(δ)

must hold. Altogether this implies that (pa(δ), pb(δ), pc(δ)) = (qa(δ), qb(δ), qc(δ)).
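For completeness, the algebra behind the displayed equalities uses only the normalization ∑p = ∑q = 1: subtracting twice the normalization from each side of each inequality gives

```latex
\begin{align*}
3p_a + 2p_b + p_c \ge 3q_a + 2q_b + q_c
  &\;\Longleftrightarrow\; p_a - p_c \ge q_a - q_c,\\
p_a + 2p_b + 3p_c \ge q_a + 2q_b + 3q_c
  &\;\Longleftrightarrow\; p_c - p_a \ge q_c - q_a,
\end{align*}
```

and the two inequalities together force p_a − p_c = q_a − q_c, i.e., p_a − q_a = p_c − q_c; the pair for types acb and bca yields p_a − q_a = p_b − q_b in the same way.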

Hence, since t was arbitrary, under f31 the expected lottery over the goods for agent 3 is the same regardless of his type, and is equal to the lottery he obtains under the serial dictatorship f. This implies in turn that under f31 the ex-ante (before learning his

type) expected lottery over the goods for agent 3 is also (pa(δ), pb(δ), pc(δ)). Since f31 is 31-symmetric, and since the distribution of types is i.i.d. across agents, the ex-ante expected lottery for agent 1 under f31 must be the same as that of agent 3. Therefore, agents 1 and 3 each get good c with ex-ante probability pc(δ) > 1/2. However, the ex-ante expected assignment is a convex combination of the ex-post (random) assignments, in each of which good c is assigned to exactly one agent; hence the agents' probabilities of receiving good c must sum to exactly 1, a contradiction.

B Independent types and interdependent values example

Let n = 2, O = {x, y}, and T = {G, R}. Types are independent, with the common prior µ(R, R) = µ(R, G) = µ(G, R) = µ(G, G) = 1/4. Both agents have the same utility function, given by the following matrices, where rows correspond to the agent's own type, columns to the other agent's type, and each matrix to a different alternative:

x:      G   R        y:      G   R
    G   0   1            G   1   0
    R   0   0            R   1   1

In words, each agent prefers alternative y over x in all type profiles, except when his type is G and the other agent type is R, in which case he prefers x over y. Consider the SCF f given by the following matrix:

        G   R
    G   x   y
    R   x   y

This is a dictatorial SCF with agent 2 being the dictator. It is clearly incentive compatible for agent 1. It is also incentive compatible for agent 2, since he believes that agent 1 is

equally likely to be of type G or R, so if his own type is G then he is indifferent between x and y, and if his own type is R then he prefers y. We claim that f cannot be symmetrically implemented in BNE, and we use the condition in Theorem 1 to prove this. Consider agent 1 of type G. His expected utility from truthfully reporting his type is 0. Thus, for symmetric implementation we need to

find probabilities (for outcome x) f12(G, G), f12(G, R) = f12(R,G), and f12(R,R) such that

0 ≥ (1/2)[f12(G, G)·0 + (1 − f12(G, G))·1] + (1/2)[f12(G, R)·1 + (1 − f12(G, R))·0], and

0 ≥ (1/2)[f12(R, G)·0 + (1 − f12(R, G))·1] + (1/2)[f12(R, R)·1 + (1 − f12(R, R))·0].

However, the first inequality implies that f12(G, R) = 0, while the second implies that f12(R, G) = 1, contradicting f12(G, R) = f12(R, G).
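The infeasibility can also be confirmed by brute force. The sketch below (ours, with a coarse probability grid) encodes agent 1's type-G utilities from the matrices above and searches for symmetric probabilities satisfying both inequalities:

```python
# Brute-force check: no probabilities f12(G,G), f12(G,R) = f12(R,G), f12(R,R)
# of choosing x give agent 1 of type G expected utility <= 0 (his truthful
# payoff under f) for both possible reports.
import itertools

ux = {"G": 0, "R": 1}  # agent 1's utility of x when his own type is G
uy = {"G": 1, "R": 0}  # agent 1's utility of y when his own type is G

def eu_report(r, f):
    """Expected utility for agent 1 of type G reporting r, where f[(t1, t2)]
    is the probability that f12 selects x at reported profile (t1, t2)."""
    return sum(0.5 * (f[(r, t2)] * ux[t2] + (1 - f[(r, t2)]) * uy[t2])
               for t2 in "GR")

grid = [k / 20 for k in range(21)]  # probabilities 0, 0.05, ..., 1
feasible = []
for fGG, fGR, fRR in itertools.product(grid, repeat=3):
    f = {("G", "G"): fGG, ("G", "R"): fGR, ("R", "G"): fGR, ("R", "R"): fRR}
    if eu_report("G", f) <= 0 and eu_report("R", f) <= 0:
        feasible.append((fGG, fGR, fRR))

assert feasible == []  # the two inequalities are jointly infeasible
```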

Appendix for Chapter 3

A Proof of Theorem 3

Message Space

M^1 = Θ, M^2 = Z_+, M^3 = {m^3 ∈ ∆(X)^Θ | for every θ ∈ Θ, m^3(θ) ∈ ∆(F(θ))}, M^4 = ∆(X)^Θ, M^5 = ∆(X).

A typical message for a player i, m_i = (m_i^1, m_i^2, m_i^3, m_i^4, m_i^5), can be described as follows:

• m_i^1 ∈ M^1: A state of the world.

• m_i^2 ∈ M^2: An integer.

• m_i^3 ∈ M^3: A state-contingent allocation plan such that m_i^3(θ) ∈ ∆(F(θ)).

• m_i^4 ∈ M^4: A state-contingent allocation plan such that m_i^4(θ) ∈ ∆(X).

• m_i^5 ∈ M^5: A lottery over X.

Outcome Function

1. Universal Agreement: For every i, m_i^1 = θ′ and m_i^2 = 1. Then

g(m) = U[F(θ′)]

2. Unilateral Deviation: There is an agent i ∈ N such that (m_i^1, m_i^2) ≠ (m_j^1, m_j^2) and for every j ≠ i, (m_j^1, m_j^2) = (θ′, 1). Let b(θ′) ∈ argmin_{y∈F(θ′)} u_i(y, θ′).

I Coordination Failure: |F(θ′)| ≥ 2 and m_i^2 = 1. Then

g(m) = U[F(θ′)] with probability 1/2, b(θ′) with probability 1/2.

II Other Cases: |F(θ′)| = 1 or m_i^2 > 1.

i. If u_i(U[F(θ′)], θ′) ≥ u_i(m_i^4(θ′), θ′), then

g(m) = m_i^4(θ′) with probability 1 − 1/(m_i^2 + 1), z_i(θ′, θ′) with probability 1/(m_i^2 + 1).

ii. If u_i(U[F(θ′)], θ′) < u_i(m_i^4(θ′), θ′), then

g(m) = z_i(θ′, θ′)

3. Disagreement:

I Coordination Failure: For every i ∈ N, m_i^2 = 1 and |F(m_i^1)| ≥ 2. Select an agent i at random. Then

g(m) = m_i^3(m_i^1) with probability 1/2, U[F(m_i^1)] with probability 1/2.

II Other Cases: Select an agent i announcing the highest integer m_i^2, with ties broken arbitrarily. Then

g(m) = m_i^5 with probability 1 − 1/(m_i^2 + 1), y with probability 1/(m_i^2 + 1).

Throughout the proof we will assume that the true state of the world is θ ∈ Θ.

Lemma 5. If an SCC F satisfies NWA, then for any i ∈ N, if m_i ∈ R_i(Γ, θ) then m_i^2 = 1.

Proof. Assume by way of contradiction that there exists an i ∈ N such that m_i ∈ R_i(Γ, θ) and m_i^2 > 1. In the mechanism, the outcome is then decided by either Rule 2(II) or Rule 3(II). For a strategy m_i, denote the sets of opponents' strategy profiles that lead to Rule 2(II) and Rule 3(II) as follows.

M_{−i}^{2(II)} = {m_{−i} ∈ M_{−i} | m_j^1 = θ′ and m_j^2 = 1 for some θ′, for every j ≠ i}

M_{−i}^{3(II)} = M_{−i} \ M_{−i}^{2(II)}

We will construct a strategy m_i^* for agent i and show that at state θ, for any belief λ_i ∈ ∆(M_{−i}^{2(II)} ∪ M_{−i}^{3(II)}), strategy m_i^* is better than strategy m_i. Notice that m_i^4 is relevant for Rule 2(II) and m_i^5 is relevant for Rule 3(II). This allows us to break the argument into two separate cases: first λ_i ∈ ∆(M_{−i}^{2(II)}), and second λ_i ∈ ∆(M_{−i}^{3(II)}).

Claim 8. If there exists an i ∈ N with a strategy m_i ∈ R_i(Γ, θ) with m_i^2 > 1 and beliefs such that λ_i ∈ ∆(M_{−i}^{2(II)}), then

1. At strategy profiles where Rule 2(II)(ii) is applied, m_i is not a best response.

2. m_i is not a best response at Rule 2(II)(i) either.

Proof. Let m_i ∈ R_i(Γ, θ) be a strategy of agent i with m_i^2 > 1, and consider a strategy profile (m_i, m_{−i}) such that Rule 2(II)(ii) applies. This implies that m_j^1 = θ′ for all j ≠ i, that u_i(U[F(m_{−i}^1)], m_{−i}^1) < u_i(m_i^4(m_{−i}^1), m_{−i}^1), and that g(m_i, m_{−i}) = z_i(m_{−i}^1, m_{−i}^1). By Lemma 4 we know that for m_{−i}^1 ≠ θ there exist allocations {z_i(θ′, θ)}_{θ,θ′∈Θ} such that

u_i(z_i(θ, m_{−i}^1), θ) > u_i(z_i(m_{−i}^1, m_{−i}^1), θ)

u_i(U[F(m_{−i}^1)], m_{−i}^1) > u_i(z_i(θ, m_{−i}^1), m_{−i}^1)

Therefore by announcing

m_i^{*4}(θ̂) = U[F(θ)] if θ̂ = θ; z_i(θ, θ̂) if θ̂ ≠ θ,

agent i is better off by inducing Rule 2(II)(i) as compared to Rule 2(II)(ii). Since

the choice of strategy profile which induces Rule 2(II)(ii) was arbitrary, the claim holds for any such strategy profile.

Now consider a belief λ_i ∈ ∆(M_{−i}^{2(II)}). The utility from strategy m_i^*, in which m_i^4 is replaced in m_i by m_i^{*4}, is given by:

∑_{m_{−i}∈M_{−i}^{2(II)}} λ_i(m_{−i}) [ (m_i^2/(m_i^2 + 1)) u_i(m_i^{*4}(m_{−i}^1), θ) + (1/(m_i^2 + 1)) u_i(z_i(m_{−i}^1, m_{−i}^1), θ) ].

Since u_i(m_i^{*4}(m_{−i}^1), θ) > u_i(z_i(m_{−i}^1, m_{−i}^1), θ), this utility is increasing in the choice of integer m_i^2. Therefore m_i is not a best response for any such beliefs.

Claim 9. If there exists an i ∈ N with a strategy m_i ∈ R_i(Γ, θ) with m_i^2 > 1 and a belief such that λ_i ∈ ∆(M_{−i}^{3(II)}), then m_i is not a best response.

Proof. Define m_i^{*5} = argmax_{y∈∆(X)} u_i(y, θ) and a very high integer m_i^{*2} such that agent i wins the integer game. Now consider a belief such that λ_i(m_{−i} ∈ M_{−i}^{3(II)}) = 1. The utility from strategy m_i^*, in which m_i^5 is replaced in m_i by m_i^{*5} and m_i^2 by m_i^{*2}, is given by:

∑_{m_{−i}∈M_{−i}^{3(II)}} λ_i(m_{−i}) [ (m_i^{*2}/(m_i^{*2} + 1)) u_i(m_i^{*5}, θ) + (1/(m_i^{*2} + 1)) u_i(y, θ) ].

Since u_i(m_i^{*5}, θ) > u_i(y, θ), this utility is increasing in the choice of integer m_i^{*2}. Therefore m_i is not a best response to any beliefs such that λ_i ∈ ∆(M_{−i}^{3(II)}).

Now consider the strategy m_i with m_i^2 > 1 and any belief such that λ_i ∈ ∆(M_{−i}^{2(II)} ∪ M_{−i}^{3(II)}). We use Claims 8 and 9 to show that the strategy m_i^* = (m_i^1, m_i^{*2}, m_i^3, m_i^{*4}, m_i^{*5}) is better than strategy m_i. Consider the utility from m_i^*. From Claim 8(1) we know that, conditional on Rule 2, the outcome is determined by Rule 2(II)(i):

∑_{m_{−i}∈M_{−i}^{2(II)}} λ_i(m_{−i}) [ (m_i^{*2}/(m_i^{*2} + 1)) u_i(m_i^{*4}(m_{−i}^1), θ) + (1/(m_i^{*2} + 1)) u_i(z_i(m_{−i}^1, m_{−i}^1), θ) ]
+ ∑_{m_{−i}∈M_{−i}^{3(II)}} λ_i(m_{−i}) [ (m_i^{*2}/(m_i^{*2} + 1)) u_i(m_i^{*5}, θ) + (1/(m_i^{*2} + 1)) u_i(y, θ) ]

This is greater than the utility from the strategy m_i with m_i^2 > 1 under any beliefs such that λ_i ∈ ∆(M_{−i}^{2(II)} ∪ M_{−i}^{3(II)}):

∑_{m_{−i}∈M_{−i}^{2(II)}} λ_i(m_{−i}) [ (m_i^2/(m_i^2 + 1)) u_i(m_i^4(m_{−i}^1), θ) + (1/(m_i^2 + 1)) u_i(z_i(m_{−i}^1, m_{−i}^1), θ) ]
+ ∑_{m_{−i}∈M_{−i}^{3(II)}} λ_i(m_{−i}) [ (m_i^2/(m_i^2 + 1)) u_i(m_i^5, θ) + (1/(m_i^2 + 1)) u_i(y, θ) ]

This immediately follows from Claim 8 and Claim 9, hence our assumption that

m_i ∈ R_i(Γ, θ) together with m_i^2 > 1 leads to a contradiction. We have thus established that if m_i is a rationalizable strategy for agent i in state θ, then m_i^2 = 1.

Lemma 6. If m_i = (θ′, 1, m_i^3, m_i^4, m_i^5) ∈ R_i(Γ, θ), then

(1) m_j = (θ′, 1, m_j^3, m_j^4, m_j^5) ∈ R_j(Γ, θ) for every j ∈ N.

(2) m_i is a best response only to beliefs such that λ_i(m_{−i} ∈ M_{−i}^1) = 1, where

M_{−i}^1 = {m_{−i} ∈ M_{−i} | ∀j ∈ N \ {i}, m_j = (θ′, 1, m_j^3, m_j^4, m_j^5)}

Proof. First we show that if m_i = (θ′, 1, m_i^3, m_i^4, m_i^5) ∈ R_i(Γ, θ), then for every j ∈ N, m_j = (θ′, 1, m_j^3, m_j^4, m_j^5) ∈ R_j(Γ, θ). Assume instead that there exists a j ∈ N such that m_j ∉ R_j(Γ, θ). We will show that m_i is not a best response to any belief λ_i ∈ ∆(R_{−i}(Γ, θ)). To show this we construct a strategy m̂_i which is better than m_i for any belief λ_i ∈ ∆(R_{−i}(Γ, θ)). We replace m_i^4 in m_i with m̂_i^4 and m_i^5 in m_i by m_i^{*5}.⁴²

m̂_i^4(θ̂) = U[F(θ′)] if θ̂ = θ′; U[F(θ̂)] if |F(θ̂)| ≥ 2 and u_i(U[F(θ̂)], θ) > u_i(b(θ̂), θ); b(θ̂) if |F(θ̂)| ≥ 2 and u_i(U[F(θ̂)], θ) < u_i(b(θ̂), θ); m_i^{*4}(θ̂) otherwise.

Under our assumption that there exists a j ∈ N such that m_j ∉ R_j(Γ, θ), we know that the outcome is not decided by Rule 1. The strategy m̂_i is designed in such a way that for any m_{−i} ∈ R_{−i}(Γ, θ), m̂_i is better than m_i. There are four cases to consider.

Case 1: m_{−i} ∈ R_{−i}(Γ, θ) such that Rule 2(I) is applied.
In this case everyone but i agrees. Let m_j = (θ̃, 1, m_j^3, m_j^4, m_j^5) for j ≠ i. Under strategy m_i, i gets a uniform lottery over U[F(θ̃)] and b(θ̃). Under m̂_i, i can ensure lottery U[F(θ̃)] or lottery b(θ̃) by announcing a very high integer m̂_i^2.

Case 2: m_{−i} ∈ R_{−i}(Γ, θ) such that Rule 2(II)(i) is applied.
As in the previous case, everyone but i agrees. Let m_j = (θ̃, 1, m_j^3, m_j^4, m_j^5) for j ≠ i. Under strategy m_i, i gets a uniform lottery over m_i^4(θ̃) and z_i(θ̃, θ̃). Under m̂_i, i can ensure lottery m̂_i^4(θ̃) by announcing a very high integer m̂_i^2.

Case 3: m_{−i} ∈ R_{−i}(Γ, θ) such that Rule 3(I) is applied.
In this case i can choose m_i^5 = m_i^{*5}. This allocation can be achieved with an arbitrarily high probability, which is better than a uniform lottery over m_i^3(θ′) and U[F(θ′)].

Case 4: m_{−i} ∈ R_{−i}(Γ, θ) such that Rule 3(II) is applied.
In this case i can choose m_i^5 = m_i^{*5} and achieve this allocation with an arbitrarily high probability.

Thus we have shown that if m_i = (θ′, 1, m_i^3, m_i^4, m_i^5) ∈ R_i(Γ, θ), then for every j ∈ N, m_j = (θ′, 1, m_j^3, m_j^4, m_j^5) ∈ R_j(Γ, θ).

⁴² Where m_i^{*5} = argmax_{y∈∆(X)} u_i(y, θ).

Now we will show that if m_i = (θ′, 1, m_i^3, m_i^4, m_i^5) ∈ R_i(Γ, θ), then m_i is a best response only to beliefs such that λ_i(m_{−i} ∈ M_{−i}^1) = 1. Assume instead that m_i is a best response to beliefs with λ_i(m_{−i} ∈ M_{−i}^1) < 1. There are two cases to consider.

Case 1: λ_i(m_{−i} ∈ M_{−i}^1) = 0. In this case, by assumption, the outcome is not decided by Rule 1, and the argument is the same as in part (1) of this lemma. The interesting case is the following.

Case 2: 0 < λ_i(m_{−i} ∈ M_{−i}^1) < 1.

In this case we can show that m̂_i is better than m_i. The argument is simple. When everyone agrees with i, i can ensure the lottery U[F(θ′)] with a very high probability; there is, however, some loss compared to receiving U[F(θ′)] with probability one. In all other cases agent i strictly gains by securing the lottery m̂_i^4(θ̂) or m_i^{*5} with an arbitrarily high probability. By a suitable choice of integer, the loss in the event where everyone agrees is overcome by the gain in all other events.

Now we are ready to characterize the set R(Γ, θ). Let the set of Nash equilibria at state θ be denoted by NE(Γ, θ).

Lemma 7. If m_i = (θ′, 1, m_i^3, m_i^4, m_i^5) ∈ R_i(Γ, θ), then the strategy profile m = (m_1, . . . , m_n) where m_1 = · · · = m_n = (θ′, 1, m_i^3, m_i^4, m_i^5) is a Nash equilibrium.

Proof. Let m_i = (θ′, 1, m_i^3, m_i^4, m_i^5) ∈ R_i(Γ, θ). By Lemma 6, m_i is a best response to beliefs with λ_i(m_{−i} ∈ M_{−i}^1) = 1. This is true for every agent, since by part (1), m_j = (θ′, 1, m_j^3, m_j^4, m_j^5) ∈ R_j(Γ, θ) for every j ∈ N, and i was chosen arbitrarily in the proof of Lemma 6. Hence m_1 = · · · = m_n = (θ′, 1, m_i^3, m_i^4, m_i^5) is a Nash equilibrium of Γ.

With some abuse of notation, let us denote a Nash equilibrium at state θ by θ′. That is, if m_1 = · · · = m_n = (θ′, 1, m_i^3, m_i^4, m_i^5) is a Nash equilibrium of Γ at state θ, then we write θ′.

Let NE(Γ, θ) = {θ′ ∈ Θ | m_1 = · · · = m_n = (θ′, 1, m_i^3, m_i^4, m_i^5) is a Nash equilibrium}.

Lemma 8. If θ′ ∈ NE(Γ, θ), then F(θ′) ⊆ F(θ).

Proof. Assume that θ′ ∈ NE(Γ, θ) but there exists an alternative a such that a ∈ F(θ′) and a ∉ F(θ). Since θ′ ∈ NE(Γ, θ), for every i ∈ N, m_i = (θ′, 1, ·, ·, ·) ∈ R_i(Γ, θ). Since a ∈ F(θ′) and a ∉ F(θ), by the assumption that F satisfies r-monotonicity there exists an agent i ∈ N and a pair of lotteries α ∈ ∆(F(θ′)) and α′ ∈ ∆(X) such that

u_i(α, θ′) ≥ u_i(α′, θ′) and u_i(α′, θ) > u_i(α, θ).

Under the assumption that agent i is an expected utility maximizer, for the lottery U[F(θ′)] there exists a lottery γ ∈ ∆(X) such that

u_i(U[F(θ′)], θ′) ≥ u_i(γ, θ′) and u_i(γ, θ) > u_i(U[F(θ′)], θ).

To see why this is true, note that we can always express U[F(θ′)] as a linear combination of two lotteries with support in F(θ′). Formally, for every β ∈ ∆(F(θ′)) there exist δ ∈ ∆(F(θ′)) and p ∈ (0, 1) such that

U[F(θ′)] = pβ + (1 − p)δ.

In particular, we can express the uniform lottery as a linear combination of α and some δ. Therefore u_i(U[F(θ′)], θ′) can be written as

u_i(U[F(θ′)], θ′) = p·u_i(α, θ′) + (1 − p)·u_i(δ, θ′).

Define γ = pα′ + (1 − p)δ. We know that u_i(α, θ′) ≥ u_i(α′, θ′) and u_i(α′, θ) > u_i(α, θ). Therefore it follows that

u_i(U[F(θ′)], θ′) = p·u_i(α, θ′) + (1 − p)·u_i(δ, θ′) ≥ p·u_i(α′, θ′) + (1 − p)·u_i(δ, θ′) = u_i(γ, θ′)

u_i(γ, θ) = p·u_i(α′, θ) + (1 − p)·u_i(δ, θ) > p·u_i(α, θ) + (1 − p)·u_i(δ, θ) = u_i(U[F(θ′)], θ)
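The linearity argument above can be illustrated numerically. The utilities below are hypothetical, chosen only so that the premises on α and α′ hold; the check then confirms the decomposition and the two displayed inequalities:

```python
# Numerical sanity check of the lottery-decomposition step (illustrative values).
u_prime = {"a": 2.0, "b": 1.0}   # u_i(., theta'): prefers a
u_true  = {"a": 1.0, "b": 2.0}   # u_i(., theta):  preference reversed

def eu(lottery, u):
    """Expected utility of a lottery given as dict alternative -> probability."""
    return sum(p * u[x] for x, p in lottery.items())

alpha       = {"a": 1.0, "b": 0.0}   # alpha in Delta(F(theta'))
alpha_prime = {"a": 0.0, "b": 1.0}   # alpha' in Delta(X)
# premises: u_i(alpha,th') >= u_i(alpha',th') and u_i(alpha',th) > u_i(alpha,th)
assert eu(alpha, u_prime) >= eu(alpha_prime, u_prime)
assert eu(alpha_prime, u_true) > eu(alpha, u_true)

# Decompose the uniform lottery: U[F(theta')] = p*alpha + (1-p)*delta
p = 0.5
uniform = {"a": 0.5, "b": 0.5}
delta = {x: (uniform[x] - p * alpha[x]) / (1 - p) for x in uniform}
assert all(q >= 0 for q in delta.values())  # delta is a valid lottery

# gamma = p*alpha' + (1-p)*delta satisfies both displayed inequalities
gamma = {x: p * alpha_prime[x] + (1 - p) * delta[x] for x in uniform}
assert eu(uniform, u_prime) >= eu(gamma, u_prime)
assert eu(gamma, u_true) > eu(uniform, u_true)
```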

Now consider a deviation of this agent to a strategy m_i^* = (m_i^1, m_i^{*2}, m_i^3, m_i^{*4}, m_i^{*5}) where m_i^{*2} > 1 and m_i^{*4}(θ̂) is γ if θ̂ = θ′ and m_i^4(θ̂) otherwise. In this case the agent is sure that Rule 2(II) is triggered. The utility of i from strategy m_i^* is given by

(m_i^{*2}/(m_i^{*2} + 1))·u_i(γ, θ) + (1/(m_i^{*2} + 1))·u_i(z_i(θ′, θ′), θ).    (34)

Since we know that u_i(γ, θ) > u_i(U[F(θ′)], θ), strategy m_i^* can be made better than m_i by choosing a very high integer m_i^{*2}. This, however, contradicts the assumption that θ′ ∈ NE(Γ, θ).

Lemma 9. If θ′ ∈ NE(Γ, θ), then either θ′ = θ or |F(θ′)| ≥ 2.

Proof. Assume that θ′ ∈ NE(Γ, θ), θ′ ≠ θ, and |F(θ′)| = 1, and let F(θ′) = {a}. Since |F(θ′)| = 1, by the assumption of Θ_F-distinguishability we know that there exists an agent i ∈ N and a lottery α ∈ ∆(X) such that

u_i(a, θ′) ≥ u_i(α, θ′) and u_i(α, θ) > u_i(a, θ).

By an argument very similar to that of Lemma 8 we can show that θ′ ∉ NE(Γ, θ), a contradiction. Hence either θ′ = θ or |F(θ′)| ≥ 2 must hold.

Lemma 10. For every θ ∈ Θ and m ∈ R(Γ, θ), g(m) ∈ ∆(F (θ)).

Proof. Let R(Γ, θ) be the set of rationalizable strategy profiles at state θ, and select an arbitrary strategy profile m ∈ R(Γ, θ). By Lemma 5 we know that for every i ∈ N, m_i is of the form m_i = (θ′, 1, m_i^3, m_i^4, m_i^5).

First consider the case where the true state θ ∈ Θ_F. In this case, every profile in R(Γ, θ) has, for every i ∈ N, m_i = (θ, 1, m_i^3, m_i^4, m_i^5), and each such profile also forms a Nash equilibrium. To see why this is true, consider a strategy profile m ∈ R(Γ, θ) such that there exists an agent i ∈ N with m_i = (θ′, 1, m_i^3, m_i^4, m_i^5) and θ′ ≠ θ. By Lemma 7 we know that θ′ ∈ NE(Γ, θ), and by Lemma 8 that F(θ′) ⊆ F(θ). Since F(θ) is a singleton, F(θ′) must be a singleton. Furthermore, we have assumed that θ′ ≠ θ. The facts that θ′ ≠ θ and that F(θ′) is a singleton together contradict Lemma 9.

Now consider the case where the true state θ ∈ Θ \ Θ_F. Notice that in this case |F(θ)| ≥ 2. Select an arbitrary rationalizable strategy profile m ∈ R(Γ, θ). Using Lemma 7 we know that for every i ∈ N, m_i^1 is a Nash equilibrium at state θ, and by Lemma 8, F(m_i^1) ⊆ F(θ). Finally, using Lemma 9 we know that |F(m_i^1)| ≥ 2.

We have thus established that for any rationalizable strategy profile m, |F(m_i^1)| ≥ 2. This means that the outcome is decided by Rule 1, 2(I), or 3(I). In all these cases g(m) ∈ ∆(F(θ)). This completes the proof.

Lemma 11. For every θ ∈ Θ and every i, m_i = (θ, 1, m_i^3, m_i^4, m_i^5) ∈ R_i(Γ, θ).

Proof. This follows from the fact that θ ∈ NE(Γ, θ). To see this, note that by the construction of the mechanism the payoff from any unilateral deviation is bounded above by that of U[F(θ)]. This verifies part (1) of the definition of rationalizable implementation.

B Example

Bergemann et al. (2011) introduce a condition which they call strict Maskin monotonicity* and show that, under a weak restriction on the class of mechanisms, it is also necessary for the rationalizable implementation of an SCF f. The goal of this section is to provide an example demonstrating that strict Maskin monotonicity* is strictly stronger than Maskin monotonicity. Since we assume strict preferences, our example does not rely on indifferences in the agents' preferences. To the best of our knowledge this is the first such example in the literature. Before presenting the example we define strict Maskin monotonicity*.

Given an SCF f, consider the partition of Θ induced by f: P_f = {Θ_z}_{z∈f(Θ)}, where Θ_z = {θ ∈ Θ | f(θ) = z}. We are now ready to define the notion of strict Maskin monotonicity*.

Definition 22. A social choice function f satisfies strict Maskin monotonicity* if there exists a partition P of Θ which is finer than P_f such that for any θ: θ′ ∈ P(θ) whenever, for all i and y,

[∀θ̂ ∈ P(θ): u_i(f(θ̂), θ̂) > u_i(y, θ̂)] ⇒ [u_i(f(θ), θ′) ≥ u_i(y, θ′)].

Example 11. Let X = {a, b, c, d}, N = {1, 2, 3}, and Θ = {θ, θ′, θ″, θ‴}. The preference profile for each state is given in the table below; numbers in parentheses specify the vNM utilities. Consider the SCF given by f(θ) = f(θ′) = f(θ″) = a and f(θ‴) = b.

Table 5: Preferences in Example 11

         θ                  θ′                 θ″                 θ‴
    1    2    3        1    2    3        1    2    3        1    2    3
   a(3) a(3) a(3)     b(3) c(3) b(3)     c(3) b(3) c(3)     b(3) c(3) c(3)
   b(2) c(2) b(2)     a(2) a(2) a(2)     a(2) a(2) a(2)     a(2) a(2) a(2)
   c(1) b(1) c(1)     c(1) d(1) c(1)     d(1) c(1) b(1)     c(1) b(1) b(1)
   d(0) d(0) d(0)     d(0) b(0) d(0)     b(0) d(0) d(0)     d(0) d(0) d(0)

   f(θ) = a           f(θ′) = a          f(θ″) = a          f(θ‴) = b

Claim 10. The SCF f violates strict Maskin monotonicity*.

Proof. First consider the partition P_f = {{θ, θ′, θ″}, {θ‴}}. Assume that the SCF f satisfies strict Maskin monotonicity* with respect to P_f. With the help of the following claim we will show that this leads to a contradiction.

Claim 11. For every i ∈ N, ∩_{θ̂∈P_f(θ)} L_i(a, θ̂) ⊆ L_i(a, θ‴).

Proof. Consider agent 1. Let α ∈ ∩_{θ̂∈P_f(θ)} L_1(a, θ̂). This implies α ∈ L_1(a, θ′). Since L_1(a, θ′) = L_1(a, θ‴), it follows that α ∈ L_1(a, θ‴). Now consider agent 2. Let α ∈ ∩_{θ̂∈P_f(θ)} L_2(a, θ̂). This implies α ∈ L_2(a, θ′). Since L_2(a, θ′) = L_2(a, θ‴), it follows that α ∈ L_2(a, θ‴). Finally, consider agent 3. Let α ∈ ∩_{θ̂∈P_f(θ)} L_3(a, θ̂). This implies α ∈ L_3(a, θ″). Since L_3(a, θ″) = L_3(a, θ‴), it follows that α ∈ L_3(a, θ‴).

Using the above claim and the definition of strict Maskin monotonicity*, we can conclude that θ‴ ∈ P(θ), which is a contradiction. With the help of the following remark we can show that a similar argument holds for any partition of Θ which is finer than P_f.

Remark. For every i, L_i(a, θ′) ⊆ L_i(a, θ) and L_i(a, θ″) ⊆ L_i(a, θ).

Assume now that the SCF f satisfies strict Maskin monotonicity* with respect to a partition P strictly finer than P_f. Since P is finer than P_f, there exists θ′ or θ″ such that θ ∉ P(θ′) or θ ∉ P(θ″). There are four mutually exclusive and exhaustive cases to consider. First, consider the partition P = {{θ}, {θ′}, {θ″}, {θ‴}}. The Remark, together with the assumption that f satisfies strict Maskin monotonicity*, implies that θ ∈ P(θ″), which is a contradiction since P(θ″) = {θ″}. A similar argument applies to the partitions P = {{θ, θ′}, {θ″}, {θ‴}} and P = {{θ, θ″}, {θ′}, {θ‴}}. Finally, consider P = {{θ″, θ′}, {θ}, {θ‴}}. The Remark, together with the assumption that f satisfies strict Maskin monotonicity*, implies that θ ∈ P(θ″), which is a contradiction since P(θ″) = {θ″, θ′}.

Claim 12. The SCF f satisfies Maskin monotonicity.

Proof. To prove that f satisfies Maskin monotonicity we need to show that for every pair (θ, θ′) such that f(θ) ≠ f(θ′), there exist i, j ∈ N such that L_i(f(θ), θ) ⊄ L_i(f(θ), θ′) and L_j(f(θ′), θ′) ⊄ L_j(f(θ′), θ). There are three pairs to consider: (θ, θ‴), (θ′, θ‴), and (θ″, θ‴).

• (θ, θ‴): b ∈ L_1(f(θ), θ) and b ∉ L_1(f(θ), θ‴).

• (θ′, θ‴): c ∈ L_3(f(θ′), θ′) and c ∉ L_3(f(θ′), θ‴).

• (θ″, θ‴): b ∈ L_1(f(θ″), θ″) and b ∉ L_1(f(θ″), θ‴).

• (θ‴, θ): a ∈ L_1(f(θ‴), θ‴) and a ∉ L_1(f(θ‴), θ).

• (θ‴, θ′): d ∈ L_2(f(θ‴), θ‴) and d ∉ L_2(f(θ‴), θ′).

• (θ‴, θ″): a ∈ L_1(f(θ‴), θ‴) and a ∉ L_1(f(θ‴), θ″).
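These six facts can be verified mechanically from Table 5. The short script below (a sanity check, ours, not part of the proof) encodes the rankings and tests each bullet over pure alternatives:

```python
# Rankings from Table 5, listed best-to-worst; utility = 3 - position.
# States: t = theta, t1 = theta', t2 = theta'', t3 = theta'''.
R = {
    "t":  {1: "abcd", 2: "acbd", 3: "abcd"},
    "t1": {1: "bacd", 2: "cadb", 3: "bacd"},
    "t2": {1: "cadb", 2: "bacd", 3: "cabd"},
    "t3": {1: "bacd", 2: "cabd", 3: "cabd"},
}

def u(i, s, x):
    return 3 - R[s][i].index(x)

def in_lower(i, y, x, s):
    """True iff y lies in L_i(x, s), the lower contour set of x at state s."""
    return u(i, s, y) <= u(i, s, x)

# f(t) = f(t1) = f(t2) = a and f(t3) = b; the six bullets in order:
assert in_lower(1, "b", "a", "t")  and not in_lower(1, "b", "a", "t3")
assert in_lower(3, "c", "a", "t1") and not in_lower(3, "c", "a", "t3")
assert in_lower(1, "b", "a", "t2") and not in_lower(1, "b", "a", "t3")
assert in_lower(1, "a", "b", "t3") and not in_lower(1, "a", "b", "t")
assert in_lower(2, "d", "b", "t3") and not in_lower(2, "d", "b", "t1")
assert in_lower(1, "a", "b", "t3") and not in_lower(1, "a", "b", "t2")
```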

C Proof of Theorem 5

Theorem 5 (restated). Let ε ∈ [0, 1) and n ≥ 3. If an SCC F satisfies NWA and r-monotonicity and is Θ_F-distinguishable, then it is ε-rationalizably implementable.

The proof of Theorem 5 follows very closely the proof of Theorem 3. This extension is reminiscent of the extension of Maskin's theorem from SCFs to SCCs. Before describing the canonical mechanism, we denote by α(θ, a) the lottery which assigns probability ε to an alternative a ∈ F(θ) and the rest of the probability to U[F(θ)]. Formally,

α(θ′, a) = εa + (1 − ε)U[F(θ′)].
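As a small illustration (with hypothetical values), α(θ′, a) can be constructed explicitly as a probability vector:

```python
# alpha(theta', a) mixes the point mass on a (weight eps) with the uniform
# lottery over F(theta') (weight 1 - eps).
def alpha(F_theta, a, eps):
    """Return alpha(theta', a) as a dict alternative -> probability."""
    n = len(F_theta)
    lot = {x: (1 - eps) / n for x in F_theta}
    lot[a] = lot.get(a, 0.0) + eps  # a receives eps plus its uniform share
    return lot

lot = alpha(["a", "b", "c"], "a", 0.25)
assert abs(sum(lot.values()) - 1.0) < 1e-12  # a valid lottery
assert lot["a"] == 0.25 + 0.75 / 3           # eps + uniform share of a
```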

C.1 Message Space

M^{1(i)} = Θ, M^{1(ii)} = {m^{1(ii)} ∈ X^Θ | for every θ ∈ Θ, m^{1(ii)}(θ) ∈ F(θ)}, M^2 = Z_+, M^3 = {m^3 ∈ ∆(X)^Θ | for every θ ∈ Θ, m^3(θ) ∈ ∆(F(θ))}, M^4 = ∆(X)^{Θ×X}, M^5 = ∆(X).

A typical message for a player i, m_i = (m_i^1, m_i^2, m_i^3, m_i^4, m_i^5), can be described as follows:

• m_i^1 = (m_i^{1(i)}, m_i^{1(ii)}), where

– m_i^{1(i)} ∈ M^{1(i)}: A state of the world.

– m_i^{1(ii)} ∈ M^{1(ii)}: A state-contingent recommendation such that m_i^{1(ii)}(θ) ∈ F(θ).

• m_i^2 ∈ M^2: An integer.

• m_i^3 ∈ M^3: A state-contingent allocation plan such that m_i^3(θ) ∈ ∆(F(θ)).

• m_i^4 ∈ M^4: A state- and alternative-contingent allocation plan such that m_i^4(θ, a) ∈ ∆(X).

• m_i^5 ∈ M^5: A lottery over X.

C.2 Outcome Function

1. Universal Agreement: For every i, m_i^1 = (m_i^{1(i)}, m_i^{1(ii)}) = (θ′, a′) and m_i^2 = 1. Then

g(m) = α(θ′, a′)

2. Unilateral Deviation: There is an agent i ∈ N such that (m_i^1, m_i^2) ≠ (m_j^1, m_j^2) and for every j ≠ i, (m_j^1, m_j^2) = ((θ′, a′), 1). Let b(θ′) ∈ argmin_{y∈F(θ′)} u_i(y, θ′).

I Coordination Failure: |F(θ′)| ≥ 2 and m_i^2 = 1. Then

g(m) = α(θ′, a′) with probability 1/2, b(θ′) with probability 1/2.

II Other Cases: |F(θ′)| = 1 or m_i^2 > 1.

i. If u_i(α(θ′, a′), θ′) ≥ u_i(m_i^4(θ′, a′), θ′), then

g(m) = m_i^4(θ′, a′) with probability 1 − 1/(m_i^2 + 1), z_i(θ′, θ′) with probability 1/(m_i^2 + 1).

ii. If u_i(α(θ′, a′), θ′) < u_i(m_i^4(θ′, a′), θ′), then

g(m) = z_i(θ′, θ′)

3. Disagreement:

I Coordination Failure: For every i ∈ N, m_i^2 = 1 and |F(m_i^{1(i)})| ≥ 2. Select an agent i at random. Then

g(m) = m_i^3(m_i^{1(i)}) with probability 1/2, U[F(m_i^{1(i)})] with probability 1/2.

II Other Cases: Select an agent i announcing the highest integer m_i^2, with ties broken arbitrarily. Then

g(m) = m_i^5 with probability 1 − 1/(m_i^2 + 1), y with probability 1/(m_i^2 + 1).

Throughout the proof we will assume that the true state of the world is θ ∈ Θ.

Lemma 12. If an SCC F satisfies NWA, then for any i ∈ N, if m_i ∈ R_i(Γ, θ) then m_i^2 = 1.

Proof. Assume by way of contradiction that there exists an i ∈ N such that m_i ∈ R_i(Γ, θ) and m_i^2 > 1. Given the outcome function, the outcome is decided by either Rule 2(II) or Rule 3(II). For a strategy m_i, denote the sets of opponents' strategy profiles that lead to Rule 2(II) and Rule 3(II) as follows.

M_{−i}^{2(II)} = {m_{−i} ∈ M_{−i} | m_j^1 = (θ′, a) and m_j^2 = 1 for some (θ′, a), for every j ≠ i}

M_{−i}^{3(II)} = M_{−i} \ M_{−i}^{2(II)}

We will construct a strategy m_i^* for agent i and show that at state θ, for any belief λ_i ∈ ∆(M_{−i}^{2(II)} ∪ M_{−i}^{3(II)}), strategy m_i^* is better than strategy m_i. Notice that m_i^4 is relevant for Rule 2(II) and m_i^5 is relevant for Rule 3(II). This allows us to break the argument into two separate cases: first λ_i ∈ ∆(M_{−i}^{2(II)}), and second λ_i ∈ ∆(M_{−i}^{3(II)}).

Claim 13. If there exists an i ∈ N with a strategy m_i ∈ R_i(Γ, θ) with m_i^2 > 1 and beliefs such that λ_i ∈ ∆(M_{−i}^{2(II)}), then

1. At strategy profiles where Rule 2(II)(ii) is applied, m_i is not a best response.

2. m_i is not a best response at Rule 2(II)(i) either.

Proof. Let m_i ∈ R_i(Γ, θ) be a strategy of agent i with m_i^2 > 1, and consider a strategy profile (m_i, m_{−i}) such that Rule 2(II)(ii) applies. This implies that m_j^1 = (θ′, a) for all j ≠ i, that u_i(α(θ′, a), θ′) < u_i(m_i^4(θ′, a), θ′), and that g(m_i, m_{−i}) = z_i(θ′, θ′). By Lemma 4 we know that for θ′ ≠ θ there exist allocations {z_i(θ′, θ)}_{θ,θ′∈Θ} such that

u_i(z_i(θ, θ′), θ) > u_i(z_i(θ′, θ′), θ)

u_i(α(θ′, a), θ′) > u_i(z_i(θ, θ′), θ′)

Therefore by announcing

m_i^{*4}(θ̂, x) = α(θ, x) if θ̂ = θ; z_i(θ, θ̂) if θ̂ ≠ θ,

agent i is better off by inducing Rule 2(II)(i) as compared to Rule 2(II)(ii). Since the choice of strategy profile which induces Rule 2(II)(ii) was arbitrary, the claim holds for any such strategy profile.

Now consider a belief λ_i ∈ ∆(M_{−i}^{2(II)}). The utility from strategy m_i^*, in which m_i^4 is replaced in m_i by m_i^{*4}, is given by:

∑_{m_{−i}∈M_{−i}^{2(II)}} λ_i(m_{−i}) [ (m_i^2/(m_i^2 + 1)) u_i(m_i^{*4}(m_{−i}^1), θ) + (1/(m_i^2 + 1)) u_i(z_i(m_{−i}^{1(i)}, m_{−i}^{1(i)}), θ) ].

Since u_i(m_i^{*4}(m_{−i}^1), θ) > u_i(z_i(m_{−i}^{1(i)}, m_{−i}^{1(i)}), θ), this utility is increasing in the choice of integer m_i^2. Therefore m_i is not a best response for any such beliefs.

Claim 14. If there exists an i ∈ N with a strategy m_i ∈ R_i(Γ, θ) with m_i^2 > 1 and a belief such that λ_i ∈ ∆(M_{−i}^{3(II)}), then m_i is not a best response.

Proof. Define m_i^{*5} = argmax_{y∈∆(X)} u_i(y, θ) and a very high integer m_i^{*2} such that agent i wins the integer game. Now consider a belief such that λ_i(m_{−i} ∈ M_{−i}^{3(II)}) = 1. The utility from strategy m_i^*, in which m_i^5 is replaced in m_i by m_i^{*5} and m_i^2 by m_i^{*2}, is given by:

∑_{m_{−i}∈M_{−i}^{3(II)}} λ_i(m_{−i}) [ (m_i^{*2}/(m_i^{*2} + 1)) u_i(m_i^{*5}, θ) + (1/(m_i^{*2} + 1)) u_i(y, θ) ].

Since u_i(m_i^{*5}, θ) > u_i(y, θ), this utility is increasing in the choice of integer m_i^{*2}. Therefore m_i is not a best response to any beliefs such that λ_i ∈ ∆(M_{−i}^{3(II)}).

Now consider the strategy m_i with m_i^2 > 1 and any belief such that λ_i ∈ ∆(M_{−i}^{2(II)} ∪ M_{−i}^{3(II)}). We use Claims 13 and 14 to show that the strategy m_i^* = (m_i^1, m_i^{*2}, m_i^3, m_i^{*4}, m_i^{*5}) is better than strategy m_i. Consider the utility from m_i^*. From Claim 13(1) we know that, conditional on Rule 2, the outcome is determined by Rule 2(II)(i):

∑_{m_{−i}∈M_{−i}^{2(II)}} λ_i(m_{−i}) [ (m_i^{*2}/(m_i^{*2} + 1)) u_i(m_i^{*4}(m_{−i}^1), θ) + (1/(m_i^{*2} + 1)) u_i(z_i(m_{−i}^{1(i)}, m_{−i}^{1(i)}), θ) ]
+ ∑_{m_{−i}∈M_{−i}^{3(II)}} λ_i(m_{−i}) [ (m_i^{*2}/(m_i^{*2} + 1)) u_i(m_i^{*5}, θ) + (1/(m_i^{*2} + 1)) u_i(y, θ) ]

This is greater than the utility from the strategy m_i with m_i^2 > 1 under any beliefs such that λ_i ∈ ∆(M_{−i}^{2(II)} ∪ M_{−i}^{3(II)}):

∑_{m_{−i}∈M_{−i}^{2(II)}} λ_i(m_{−i}) [ (m_i^2/(m_i^2 + 1)) u_i(m_i^4(m_{−i}^1), θ) + (1/(m_i^2 + 1)) u_i(z_i(m_{−i}^{1(i)}, m_{−i}^{1(i)}), θ) ]
+ ∑_{m_{−i}∈M_{−i}^{3(II)}} λ_i(m_{−i}) [ (m_i^2/(m_i^2 + 1)) u_i(m_i^5, θ) + (1/(m_i^2 + 1)) u_i(y, θ) ]

This immediately follows from Claim 13 and Claim 14, hence our assumption that

m_i ∈ R_i(Γ, θ) together with m_i^2 > 1 leads to a contradiction. We have thus established that if m_i is a rationalizable strategy for agent i in state θ, then m_i^2 = 1.

Lemma 13. If m_i = ((θ′, a), 1, m_i^3, m_i^4, m_i^5) ∈ R_i(Γ, θ), then

(1) m_j = ((θ′, a), 1, m_j^3, m_j^4, m_j^5) ∈ R_j(Γ, θ) for every j ∈ N.

(2) m_i is a best response only to beliefs such that λ_i(m_{−i} ∈ M_{−i}^1) = 1, where

M_{−i}^1 = {m_{−i} ∈ M_{−i} | ∀j ∈ N \ {i}, m_j = ((θ′, a), 1, m_j^3, m_j^4, m_j^5)}

Proof. First we show that if m_i = ((θ′, a), 1, m_i^3, m_i^4, m_i^5) ∈ R_i(Γ, θ), then for every j ∈ N, m_j = ((θ′, a), 1, m_j^3, m_j^4, m_j^5) ∈ R_j(Γ, θ). Assume instead that there exists a j ∈ N such that m_j ∉ R_j(Γ, θ). We will show that m_i is not a best response to any belief λ_i ∈ ∆(R_{−i}(Γ, θ)). To show this we construct a strategy m̂_i which is better than m_i for any belief λ_i ∈ ∆(R_{−i}(Γ, θ)). We replace m_i^4 in m_i with m̂_i^4 and m_i^5 in m_i by m_i^{*5}.

m̂_i^4(θ̂, x) = α(θ′, a) if θ̂ = θ′ and x = a; α(θ̂, x) if |F(θ̂)| ≥ 2 and u_i(α(θ̂, x), θ) > u_i(b(θ̂), θ); b(θ̂) if |F(θ̂)| ≥ 2 and u_i(α(θ̂, x), θ) < u_i(b(θ̂), θ); m_i^{*4}(θ̂, x) otherwise.

Under our assumption that there exists a j ∈ N such that m_j ∉ R_j(Γ, θ), we know that the outcome is not decided by Rule 1. The strategy m̂_i is designed in such a way that for any m_{−i} ∈ R_{−i}(Γ, θ), m̂_i is better than m_i. There are four cases to consider.

Case 1: m_{−i} ∈ R_{−i}(Γ, θ) such that Rule 2(I) is applied.
In this case everyone but i agrees. Let m_j = ((θ̃, ã), 1, m_j^3, m_j^4, m_j^5) for j ≠ i. Under strategy m_i, i gets a uniform lottery over α(θ̃, ã) and b(θ̃). Under m̂_i, i can ensure lottery α(θ̃, ã) or lottery b(θ̃) by announcing a very high integer m̂_i^2.

Case 2: m_{−i} ∈ R_{−i}(Γ, θ) such that Rule 2(II)(i) is applied.
As in the previous case, everyone but i agrees. Let m_j = ((θ̃, ã), 1, m_j^3, m_j^4, m_j^5) for j ≠ i. Under strategy m_i, i gets a uniform lottery over m_i^4(θ̃, ã) and z_i(θ̃, θ̃). Under m̂_i, i can ensure lottery m̂_i^4(θ̃, ã) by announcing a very high integer m̂_i^2.

Case 3: m_{−i} ∈ R_{−i}(Γ, θ) such that Rule 3(I) is applied.
In this case i can choose m_i^5 = m_i^{*5}. This allocation can be achieved with an arbitrarily high probability, which is better than a uniform lottery over m_i^3(θ′) and U[F(θ′)].

Case 4: m_{−i} ∈ R_{−i}(Γ, θ) such that Rule 3(II) is applied.
In this case i can choose m_i^5 = m_i^{*5} and achieve this allocation with an arbitrarily high probability.

0 3 4 5 Thus we have shown that if mi = ((θ , a), 1, mi , mi , mi ) ∈ Ri(Γ, θ) then, for every 0 3 4 5 j ∈ N, mj = ((θ , a), 1, mj , mj , mj ) ∈ Rj(Γ, θ). 0 3 4 5 Now we will show that if mi = ((θ , a), 1, mi , mi , mi ) ∈ Ri(Γ, θ) then, mi is a best 1 response to only beliefs such that λi(m−i ∈ M−i) = 1. 1 Let us assume that mi is a best response to beliefs λi(m−i ∈ M−i) 6= 1. There are two cases to be considered.

Case 1: $\lambda_i(m_{-i} \in M_{-i}^1) = 0$. In this case, by assumption, the outcome is not decided by Rule 1. This case then becomes similar to part (1) of this Lemma. The interesting case is then the following.

Case 2: $0 < \lambda_i(m_{-i} \in M_{-i}^1) < 1$.

In this case we can show that $\hat{m}_i$ is better than $m_i$. The argument is simple. When everyone agrees with $i$, $i$ can ensure the lottery $\alpha(\theta', a)$ with a very high probability. There is, however, some loss compared to receiving $\alpha(\theta', a)$ with probability one. In all other cases agent $i$ strictly gains by ensuring the lottery $\hat{m}_i^4(\hat{\theta}, x)$ or $m_i^{*5}$ with an arbitrarily high probability. By a suitable choice of the integer $m_i^2$, the loss in the event where everyone agrees is overcome by the gain in all other events.
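The tradeoff can be written out schematically. The constants $C$ and $\varepsilon$ below are our own bookkeeping, not part of the construction: $C$ bounds the (finite) utility loss in the agreement event, and $\varepsilon > 0$ is the minimal strict gain across the remaining events.

```latex
% Expected change in utility from switching from m_i to \hat{m}_i,
% under a belief that puts probability \lambda_i < 1 on agreement:
\[
\underbrace{\lambda_i \left( -\frac{C}{m_i^2 + 1} \right)}_{\text{loss when everyone agrees}}
\;+\;
\underbrace{(1 - \lambda_i)\,\varepsilon}_{\text{gain in all other events}}
\;>\; 0
\quad \text{for } m_i^2 \text{ large enough.}
\]
```

Since $\lambda_i < 1$, the first term vanishes as $m_i^2$ grows while the second stays bounded away from zero, which is exactly why a suitably high integer makes $\hat{m}_i$ profitable.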

Now we are ready to characterize the set $R(\Gamma, \theta)$. Let the set of Nash equilibria at state $\theta$ be denoted by $NE(\Gamma, \theta)$.

Lemma 14. If $m_i = ((\theta', a'), 1, m_i^3, m_i^4, m_i^5) \in R_i(\Gamma, \theta)$ then the strategy profile $m = (m_1, \ldots, m_n)$ where $m_1 = \cdots = m_n = ((\theta', a'), 1, m_i^3, m_i^4, m_i^5)$ is a Nash equilibrium.

Proof. Let $m_i = ((\theta', a'), 1, m_i^3, m_i^4, m_i^5) \in R_i(\Gamma, \theta)$. Then by Lemma 13, $m_i$ is a best response to $\lambda_i(m_{-i} \in M_{-i}^1, \theta) = 1$. This is true for every agent since, by part (1), $m_j = ((\theta', a'), 1, m_j^3, m_j^4, m_j^5) \in R_j(\Gamma, \theta)$ for every $j \in N$ and $i$ was chosen arbitrarily in the proof of Lemma 13. Hence $m_1 = \cdots = m_n = ((\theta', a'), 1, m_i^3, m_i^4, m_i^5)$ is a Nash equilibrium of $\Gamma$.

With some abuse of notation let us denote a Nash equilibrium at state $\theta$ by $(\theta', a')$. That is, if $m_1 = \cdots = m_n = ((\theta', a'), 1, m_i^3, m_i^4, m_i^5)$ is a Nash equilibrium of $\Gamma$ at state $\theta$ then we write $(\theta', a')$.

Let $NE(\Gamma, \theta) = \{(\theta', a') \in \Theta \times X \mid m_1 = \cdots = m_n = ((\theta', a'), 1, m_i^3, m_i^4, m_i^5) \text{ is a Nash equilibrium}\}$.

Lemma 15. If $(\theta', a') \in NE(\Gamma, \theta)$ then $F(\theta') \subseteq F(\theta)$.

Proof. Let us assume that $(\theta', a') \in NE(\Gamma, \theta)$ but there exists an alternative $a$ such that $a \in F(\theta')$ and $a \notin F(\theta)$. Using the assumption that $F$ satisfies r-monotonicity, there exist an agent $i \in N$ and a pair of lotteries $\alpha \in \Delta(F(\theta'))$ and $\alpha' \in \Delta(X)$ such that

\[
u_i(\alpha, \theta') \geq u_i(\alpha', \theta') \quad \text{and} \quad u_i(\alpha', \theta) > u_i(\alpha, \theta).
\]

Under the assumption of expected utility theory, for agent $i$ and the lottery $\alpha(\theta', a')$ there exists a lottery $\gamma \in \Delta(X)$ such that

\[
u_i(\alpha(\theta', a'), \theta') \geq u_i(\gamma, \theta') \quad \text{and} \quad u_i(\gamma, \theta) > u_i(\alpha(\theta', a'), \theta).
\]

To see why this is true, note that we can always express $\alpha(\theta', a')$ as a linear combination of two lotteries with support in $F(\theta')$. Formally, for every $\beta \in \Delta(F(\theta'))$ there exist a $\delta \in \Delta(F(\theta'))$ and $p \in (0, 1)$ such that

\[
\alpha(\theta', a') = p\beta + (1 - p)\delta.
\]

In particular we can express the lottery $\alpha(\theta', a')$ as a linear combination of $\alpha$ and $\delta$.
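As a concrete illustration of the decomposition (the specific numbers are our own and serve only to build intuition): suppose $F(\theta') = \{x, y\}$, the target lottery is $\alpha(\theta', a') = (\tfrac{1}{2}, \tfrac{1}{2})$, and $\alpha = (\tfrac{3}{4}, \tfrac{1}{4})$. Taking $p = \tfrac{1}{3}$ and $\delta = (\tfrac{3}{8}, \tfrac{5}{8}) \in \Delta(F(\theta'))$ gives

```latex
% Verifying \alpha(\theta', a') = p\alpha + (1-p)\delta coordinate by coordinate:
\[
\tfrac{1}{3}\left(\tfrac{3}{4}, \tfrac{1}{4}\right)
+ \tfrac{2}{3}\left(\tfrac{3}{8}, \tfrac{5}{8}\right)
= \left(\tfrac{1}{4} + \tfrac{1}{4},\; \tfrac{1}{12} + \tfrac{5}{12}\right)
= \left(\tfrac{1}{2}, \tfrac{1}{2}\right).
\]
```

so the target lottery is indeed recovered as the required convex combination.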

Therefore $u_i(\alpha(\theta', a'), \theta')$ can be written as

\[
u_i(\alpha(\theta', a'), \theta') = p\,u_i(\alpha, \theta') + (1 - p)\,u_i(\delta, \theta').
\]

Define $\gamma = p\alpha' + (1 - p)\delta$. We know that $u_i(\alpha, \theta') \geq u_i(\alpha', \theta')$ and $u_i(\alpha', \theta) > u_i(\alpha, \theta)$. Therefore it follows that

\[
u_i(\alpha(\theta', a'), \theta') = p\,u_i(\alpha, \theta') + (1 - p)\,u_i(\delta, \theta') \geq p\,u_i(\alpha', \theta') + (1 - p)\,u_i(\delta, \theta') = u_i(\gamma, \theta')
\]

\[
u_i(\gamma, \theta) = p\,u_i(\alpha', \theta) + (1 - p)\,u_i(\delta, \theta) > p\,u_i(\alpha, \theta) + (1 - p)\,u_i(\delta, \theta) = u_i(\alpha(\theta', a'), \theta).
\]

Now consider a deviation of this agent to a strategy $m_i^* = (m_i^{*1}, m_i^{*2}, m_i^{*3}, m_i^{*4}, m_i^{*5})$ where $m_i^{*2} > 1$ and $m_i^{*4}(\hat{\theta})$ is $\gamma$ if $\hat{\theta} = \theta'$ and $m_i^4$ otherwise. In this case the agent is sure that Rule 2(II) is triggered. The utility of $i$ under strategy $m_i^*$ is given by

\[
\left(\frac{m_i^{*2}}{m_i^{*2} + 1}\right) u_i(\gamma, \theta) + \left(\frac{1}{m_i^{*2} + 1}\right) u_i(z_i(\theta', \theta'), \theta) \tag{39}
\]

Since we know that $u_i(\gamma, \theta) > u_i(\alpha(\theta', a'), \theta)$, the strategy $m_i^*$ can be made better than $m_i$ by choosing a very high integer $m_i^{*2}$. This, however, contradicts the assumption that $(\theta', a') \in NE(\Gamma, \theta)$.
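To make the role of the integer explicit, observe that the payoff in (39) converges to $u_i(\gamma, \theta)$ as the announced integer grows:

```latex
% Limit of the deviation payoff (39) as the integer m_i^{*2} grows:
\[
\lim_{m_i^{*2} \to \infty}
\left[
\left(\frac{m_i^{*2}}{m_i^{*2} + 1}\right) u_i(\gamma, \theta)
+ \left(\frac{1}{m_i^{*2} + 1}\right) u_i(z_i(\theta', \theta'), \theta)
\right]
= u_i(\gamma, \theta)
> u_i(\alpha(\theta', a'), \theta).
\]
```

Hence for $m_i^{*2}$ sufficiently large the deviation payoff strictly exceeds the candidate equilibrium payoff, delivering the contradiction.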

Lemma 16. If $(\theta', a') \in NE(\Gamma, \theta)$ then either $\theta' = \theta$ or $|F(\theta')| \geq 2$.

Proof. Let us assume that $(\theta', a') \in NE(\Gamma, \theta)$, $\theta' \neq \theta$, and $|F(\theta')| = 1$. Let us denote $F(\theta') = \{a\}$. By the assumption of $\Theta_F$-distinguishability, we know that there exist an agent $i \in N$ and a lottery $\alpha \in \Delta(X)$ such that

\[
u_i(a, \theta') \geq u_i(\alpha, \theta') \quad \text{and} \quad u_i(\alpha, \theta) > u_i(a, \theta).
\]

By an argument very similar to that of Lemma 15, we can show that $(\theta', a') \notin NE(\Gamma, \theta)$. This leads to a contradiction. Hence either $\theta' = \theta$ or $|F(\theta')| \geq 2$ must be true.

Lemma 17. For every $\theta \in \Theta$ and $m \in R(\Gamma, \theta)$, $g(m) \in \Delta(F(\theta))$.

Proof. Let $R(\Gamma, \theta)$ be the set of rationalizable strategies at state $\theta$, and select an arbitrary $m \in R(\Gamma, \theta)$. By Lemma 5 we know that for every $i \in N$, $m_i$ is of the following form: $m_i = ((\theta', a'), 1, m_i^3, m_i^4, m_i^5)$.

First consider the case where $\theta \in \Theta_F$. In this case, the set $R(\Gamma, \theta)$ can be described by a strategy profile where for every $i \in N$, $m_i = ((\theta', a'), 1, m_i^3, m_i^4, m_i^5)$, which also forms a Nash equilibrium. To see why this is true, consider a strategy profile $m \in R(\Gamma, \theta)$ such that there exists an agent $i \in N$ with $m_i = ((\theta', a'), 1, m_i^3, m_i^4, m_i^5)$ and $\theta' \neq \theta$. By Lemma 14 we know that $(\theta', a') \in NE(\Gamma, \theta)$. By Lemma 15 we know that $F(\theta') \subseteq F(\theta)$. Since $F(\theta)$ is a singleton, $F(\theta')$ must be a singleton. Furthermore, we have assumed that $\theta \neq \theta'$. The fact that $\theta \neq \theta'$ and $F(\theta')$ is a singleton together contradict Lemma 16.

Now consider the case where $\theta \in \Theta \setminus \Theta_F$. Notice that in this case $|F(\theta)| > 1$. Select an arbitrary rationalizable strategy profile $m \in R(\Gamma, \theta)$. Using Lemma 14 we know that for every $i \in N$, $m_i^1$ is a Nash equilibrium at state $\theta$, and by Lemma 15, $F(m_i^1) \subseteq F(\theta)$. Finally, using Lemma 16, we know that $|F(m_i^1)| \geq 2$.

We have thus established that for any rationalizable strategy profile $m$, $|F(m_i^1)| \geq 2$. This means that the outcome is decided by Rule 1, 2(I), or 3(I). In all these cases $g(m) \in \Delta(F(\theta))$. This completes the proof.

Lemma 18. For every $\theta \in \Theta$ and every $a \in F(\theta)$, $m_i = ((\theta, a), 1, m_i^3, m_i^4, m_i^5) \in R_i(\Gamma, \theta)$ for every $i$.

Proof. This follows from the fact that $(\theta, a) \in NE(\Gamma, \theta)$. To see this, note that by the construction of the mechanism the payoff from any unilateral deviation is bounded above by $\alpha(\theta, a)$. This verifies part (1) of the definition of rationalizable implementation.
