Efficient Auctions with Altruism∗

Ruggiero Cavallo
Microsoft Research
1290 Avenue of the Americas, 6th floor
New York, NY 10104
[email protected]

September 6, 2012

∗A significant portion of this work was completed while the author was a postdoctoral fellow at the University of Pennsylvania.

Abstract

We introduce a novel regret-based model of altruism, wherein agents are willing to forgo a small amount of value if doing so increases social welfare, and consider its implications in a single-item allocation setting. We demonstrate that even for very mildly altruistic agents classic approaches such as VCG are manipulable, but straightforward variants of known mechanisms that are strategyproof in the purely-selfish case succeed in achieving dominant strategy efficiency and strong budget-balance, i.e., full social welfare for the group, a result unachievable in the case of completely selfish agents. We contrast this positive result with a negative analysis of a more traditional (non-regret-based) altruism model, demonstrating that nothing short of complete altruism yields existence of a strongly budget-balanced efficient mechanism there.

JEL Classification: D64, D44, D01, D82

1 Introduction

Mechanism design is a cynical enterprise: the goal is to derive schemes under which selfish agents, unconcerned with the welfare of others, cannot benefit from doing other than what is considered optimal from a social perspective (e.g., truthfully reporting private information to allow identification of a social welfare maximizing action). When such schemes succeed in simultaneously satisfying the goals of the social planner and each agent, the assumption of selfishness makes them robust. But unfortunately often they cannot succeed; most notably, efficient mechanisms in which individuals in the group retain all value from the chosen outcome do not exist, even for very simple restricted settings. So mechanism design is cynical because built into its framework is the selfishness assumption and its dark consequence: that efficient decision making is impossible.

But in the real world the situation may not be so dark. The main observation of this paper is that, at least in some important settings, small amounts of the right kind of altruism can go a long way in achieving an efficient framework for group decision-making. The positive contribution here lies primarily not in the design of a mechanism that dramatically deviates from previously known schemes; rather, it lies in the identification of reasonable characteristics of agent utility functions that, when present, allow simple variants on known schemes to bring success. Specifically, when agents are altruistic in a regret-based way, indifferent between giving up a small amount of value for the good of the group and not, budget-balanced and efficient single-item auctions exist. In environments where transferring money outside the group of agents is a loss that detracts from social welfare, true efficiency requires strong budget-balance, and this is not attainable in the purely selfish setting.

The positive results we obtain for single-item allocation are paired with negative results for more general decision settings, and strong negative results—even in the single-item allocation case—for a more traditional (non-regret-based) model of altruism wherein each agent's utility is a weighted linear combination of the values obtained by individuals in the group. This justifies the focus on a regret-based altruism notion, despite the fact that—like any specific utility model—it will not always apply. The following list summarizes the main contributions of the paper:

• We introduce a regret-based model of altruism, capturing the idea that individuals may be willing to give up a certain fixed amount of value for the good of the group, and use it to exactly characterize the degree of altruism that is necessary and sufficient for dominant strategy implementation of a class of efficient mechanisms.

• We demonstrate that when agents are even slightly altruistic, mechanisms that are strategyproof in the case of selfish agents become manipulable (Proposition 3).

• We present a strongly budget-balanced variant of the redistribution mechanism of Cavallo (2006), and demonstrate that in single-item allocation settings it is efficient in dominant strategies and ex post individually rational if agents are “mildly” altruistic (i.e., if for the good of the group they are willing to give up an amount of utility that is small compared to the expected utility they obtain) (Theorem 2).

• We demonstrate that this positive result does not extend from allocations to general, unrestricted types settings (Theorem 4).

• We consider the historically more standard non-regret-based model of altruism in which an agent's utility is a linear combination of his own value and that of the other agents. We demonstrate that no anonymous, strongly budget-balanced, dominant strategy efficient mechanism exists unless agents are completely altruistic in this model (Theorem 5).

• We consider the case of “proportionally altruistic” agents: those that are willing to give up a certain percentage of the maximum selfish utility they could obtain. We show that the mechanisms successful for our first altruism model fail here, but we also present a simple alternative mechanism demonstrating that—unlike in the case of non-regret-based altruism—efficiency is possible for small groups of moderately (rather than completely) proportionally-altruistic agents (Theorem 7).

In the rest of this section we provide background and discuss related work. In Section 2 we introduce our model of altruism, starting with a general class of other-regarding utility functions and moving on to specific instances; here we also demonstrate the failure of previous mechanisms. (Readers well-versed in the relevant mechanism design literature may wish to skip Section 1.2.) In Section 3 we present solutions: efficient and strongly budget-balanced mechanisms for single-item allocation. In Section 4 we consider several natural generalizations. We end with a discussion in Section 5.

1.1 Altruism and mechanism design

The enterprise of mechanism design is situated in a context of game-theoretic agents, each of whom acts in a way that maximizes some individual utility function. The agents hold private information that is critical to evaluating any potential decision; agents are asked to make claims about their private information and a decision is made. The goal of a direct mechanism is to align agent incentives towards a desired objective, such as social welfare maximization, so that they will truthfully reveal their private information and an optimal choice can be identified. The tool used for this purpose is monetary payments. The classical setting is one in which utility functions are quasilinear, with each agent's utility independent of the payments imposed on other agents. However, to state the obvious, there is abundant evidence that people—as opposed, say, to corporations—are often concerned with the welfare of others (see, e.g., Andreoni and Miller (2002) or Charness and Rabin (2002) for empirical evidence in an economic setting), and in such settings new approaches are required. Before presenting the relevant background in classical mechanism design, we draw attention to related work that considers agents that are not fully self-interested.

Bowles and Hwang (2008) provide an analysis of the provision of optimal incentives in a mechanism design setting, taking into account factors such as reciprocity, intrinsic motivation, and respect for ethical norms. Their work is a response to compelling evidence that monetary incentives frequently either undermine or exaggerate individuals' inherent inclination to be “civic-minded”. Incentives may inhibit altruism by signaling that self-interested behavior is expected (Hoffman et al, 1994; Irlenbusch and Sliwka, 2005) or by diminishing individual feelings of self-determination (Deci et al, 1999) (but see Cameron et al (2001) for a counter-perspective); they may directly signal the mechanism designer's preferences and in so doing undermine individuals' valuation of the desired behavior (Benabou and Tirole, 2003; Seabright, 2004); or they may promote altruism via trust and bandwagon effects that result when individuals take incentives as indication that others will behave altruistically (Shinada and Yamagishi, 2007). Charness and Rabin (2002) obtain evidence of social-welfare-motivated behavior in simple money-allocation behavioral experiments. Frey and Jegen (2001) is a good survey of this type of empirical evidence; for a broader view including ethnographic studies, see Henrich et al (2004). Chen and Kempe (2008) analyze how the price of anarchy in a traffic-routing network problem changes if one refrains from assuming agents are indifferent to the latency effects their choices cause for other users. Brandt and Weiss (2001) consider the implications of spitefulness in an auction setting (see also Liang and Qi (2007)). Kucuksenel (forthcoming) provides a broad analysis of the mechanism design problem with other-regarding participants. In these papers and others (e.g., Levine (1998)), the established context is the non-regret-based altruism model wherein individual utility is a linear combination of standard selfish utility and total social welfare, which we consider and move beyond in this paper.

No previous work that we are aware of demonstrates the effect of limited altruism in yielding new positive mechanism design results; the introduction of our regret-based altruism model and such a demonstration are the main contributions of this paper.

1.2 Setup and mechanism design background

There is a set of agents $I = \{1, 2, \ldots, n\}$ and a set of outcomes $O$. Each agent $i \in I$ holds private information (or type) $\theta_i$, an element of typespace $\Theta_i$. A vector of agent types (a type profile) is $\theta = (\theta_1, \ldots, \theta_n) \in \Theta_1 \times \ldots \times \Theta_n = \Theta$, and the same vector excluding the type of some agent $i$ is denoted $\theta_{-i}$. A mechanism consists of a decision function $f : \Theta \to \Re$... more precisely, a decision function $f : \Theta \to O$ and transfer function vector $T = (T_1, \ldots, T_n)$, with $T_i : \Theta \to \Re$ for each $i \in I$. Given reported types $\hat\theta \in \Theta$, a mechanism $(f, T)$ implements outcome $f(\hat\theta)$ and delivers monetary payment $T_i(\hat\theta)$ to each agent $i$ (outcomes and payments are enforced by a social planner—“the center”).

Given the context of a mechanism $(f, T)$, being as general as possible we can consider an agent $i$'s utility function $u_i$ to be an arbitrary mapping from his true type $\theta_i$, a reported type profile $\hat\theta$, and the mechanism itself to a real number;1 i.e., letting $\mathcal{M}$ denote the space of all mechanisms, $u_i : \Theta_i \times \Theta \times \mathcal{M} \to \Re$. However, there is good reason for backing off somewhat from this level of generality. The Gibbard-Satterthwaite theorem (Gibbard, 1973; Satterthwaite, 1975) demonstrates that without placing some restrictions on agent utility functions, essentially no interesting decision functions can be implemented in equilibrium. For this reason, and also because of the nice fit with a strong selfishness assumption, in mechanism design it is generally assumed that agent utility functions are quasilinear.2

Definition 1 (quasilinear utility function). A utility function $u_i : \Theta_i \times \Theta \times \mathcal{M} \to \Re$ is quasilinear if and only if there exists a function $v_i : \Theta_i \times O \to \Re$ such that:

$$\forall (f, T) \in \mathcal{M},\ \forall \theta_i \in \Theta_i,\ \forall \hat\theta \in \Theta, \quad u_i(\theta_i, \hat\theta, f, T) = v_i(\theta_i, f(\hat\theta)) + T_i(\hat\theta) \quad (1)$$

Thus in the quasilinear context, utility functions can be expressed as functions of type, outcome, and individual transfer; i.e., for each $i \in I$, $u_i : \Theta_i \times O \times \Re \to \Re$. For quasilinear and other utility models in which utility decomposes into a value component and other factors,3 we let $f^*$ denote an aggregate-value maximizing decision function, i.e., $\forall \theta \in \Theta$, $f^*(\theta) = \arg\max_{o \in O} \sum_{i \in I} v_i(\theta_i, o)$, and let $v_{-i}(\theta, o) = \sum_{j \in I \setminus \{i\}} v_j(\theta_j, o)$, $\forall i \in I$ and $o \in O$.

For quasilinear utility functions, the class of mechanisms that are truthful and efficient in dominant strategies corresponds to the Groves class of mechanisms (Green and Laffont, 1977), wherein the efficient outcome (according to agent reports) is selected and each agent is paid the reported aggregate value obtained by the other agents, plus or minus some quantity independent of his report. Formally, a Groves mechanism is a mechanism $(f^*, T)$ where, $\forall i \in I$, $\forall \hat\theta \in \Theta$, $T_i(\hat\theta) = v_{-i}(\hat\theta_{-i}, f^*(\hat\theta)) + h_i(\hat\theta_{-i})$, for some function $h_i : \Theta_{-i} \to \Re$. A Groves mechanism with $h_i(\hat\theta_{-i}) = 0$, $\forall i \in I$, $\forall \hat\theta \in \Theta$, simply pays each agent the reported value obtained by others; we will call this the basic-Groves mechanism. The VCG mechanism (Clarke, 1971; Groves, 1973; Vickrey, 1961), another member of the Groves class, defines, $\forall i \in I$, $\forall \hat\theta \in \Theta$, $T_i(\hat\theta) = v_{-i}(\hat\theta_{-i}, f^*(\hat\theta)) - v_{-i}(\hat\theta_{-i}, f^*(\hat\theta_{-i}))$, where $f^*(\theta_{-i}) = \arg\max_{o \in O} \sum_{j \in I \setminus \{i\}} v_j(\theta_j, o)$.

A mechanism is strategyproof if every agent maximizes his utility by reporting his private type truthfully, whatever it is, regardless of what the other agents report. A mechanism is ex post individually rational if truthful reporting is guaranteed to yield non-negative utility, and is strongly budget-balanced if it makes zero payments to the agents in aggregate (i.e., if $\forall \hat\theta$, $\sum_{i \in I} T_i(\hat\theta) = 0$). The VCG mechanism is strategyproof and ex post individually rational but not strongly budget-balanced.

1The form of the utility functions is taken to be known by the center—it is only the agent's type that is private—so that, given a vector of reported types and a mechanism specification $(f, T)$, each agent's “reported utility” is known.
2Exceptions include, e.g., studies of risk-aversion in auctions, such as Maskin and Riley (1984).
3We will be more formal on this matter in the next section, specifically in Footnote 6.
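To make these transfer rules concrete, here is a minimal sketch of the VCG mechanism in the single-item special case that will be central below. The code is illustrative only—the function name and the representation of types as scalar bids are our own assumptions—and it presumes at least two bidders.

```python
# Illustrative sketch: VCG for single-item allocation (the AON special
# case discussed below). bids[i] is agent i's reported value for winning;
# transfers[i] is the payment TO agent i (negative means i pays the center).

def vcg_single_item(bids):
    n = len(bids)
    winner = max(range(n), key=lambda i: bids[i])
    transfers = []
    for i in range(n):
        others = [bids[j] for j in range(n) if j != i]
        # v_{-i}(theta_{-i}, f*(theta)): others' value under the chosen outcome
        value_to_others = 0 if i == winner else bids[winner]
        # v_{-i}(theta_{-i}, f*(theta_{-i})): others' value if i were absent
        value_without_i = max(others)
        transfers.append(value_to_others - value_without_i)
    return winner, transfers

# vcg_single_item([100, 80, 60, 40]) -> (0, [-80, 0, 0, 0]):
# the winner pays the second-highest bid; losers pay nothing.
```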

1.2.1 Budget balance and redistribution

In settings where no agent has negative value for any outcome, the VCG mechanism never runs a deficit: no agent will ever receive a positive payment from the center, though agents may have to make payments to the center, potentially of a very large magnitude. Though perhaps desirable from the center's perspective, these “charges” imposed on agents are undesirable when the goal is to maximize agents' aggregate welfare. In such cases any payments made to the center detract from this objective and the ideal scenario would be for net payments to be exactly 0 (i.e., $\sum_{i \in I} T_i(\hat\theta) = 0$, $\forall \hat\theta$). Unfortunately, though, for unrestricted typespaces and quasilinear utility there is no efficient and strategyproof mechanism that achieves this in general (Green and Laffont, 1979).4 In Section 4.2 (Theorem 5) we prove that even for restricted typespaces and a spectrum of utility functions that includes and goes far beyond quasilinear, no strongly budget-balanced and dominant strategy efficient mechanism exists.

The redistribution mechanism of Cavallo (2006) modifies VCG by returning revenue to the agents, while maintaining the desirable properties of no-deficit, ex post individual rationality, and efficiency in dominant strategies in non-negative value typespaces.5 We define the VCG-revenue guarantee $G(\Theta_i, \hat\theta_{-i})$ for an agent $i$ with typespace $\Theta_i$ when other agents report types $\hat\theta_{-i}$ as the minimum revenue that could result under VCG given $\Theta_i$ and $\hat\theta_{-i}$, taken over all possible reports by $i$. The redistribution mechanism is defined to implement VCG and additionally pay each agent a $1/n$ share of his VCG-revenue guarantee:

Definition 2 (The redistribution mechanism (Cavallo, 2006)). A mechanism $(f^*, T)$ where, $\forall i \in I$, $\forall \hat\theta \in \Theta$,

$$T_i(\hat\theta) = v_{-i}(\hat\theta_{-i}, f^*(\hat\theta)) - v_{-i}(\hat\theta_{-i}, f^*(\hat\theta_{-i})) + \frac{G(\Theta_i, \hat\theta_{-i})}{n}, \quad (2)$$

with $G(\Theta_i, \hat\theta_{-i}) = \min_{\hat\theta_i \in \Theta_i} \sum_{j \in I} \big[ v_{-j}(\hat\theta_{-j}, f^*(\hat\theta_{-j})) - v_{-j}(\hat\theta_{-j}, f^*(\hat\theta)) \big]$.

While this mechanism is applicable to arbitrary decision making scenarios, it has a particularly simple form—and is also most effective—in the important subclass of domains that have the “all-or-nothing” (AON) property.

Definition 3 (AON typespace). Given $|I| = |O|$, a typespace $\Theta$ is AON if and only if, for each $i \in I$, there exists a distinct $o_i \in O$ such that, $\forall \theta_{-i} \in \Theta_{-i}$, $\forall j \in I \setminus \{i\}$, $v_j(\theta_j, o_i) = 0$.

4In fact, for non-negative valued typespaces that are sufficiently broad, the VCG mechanism is unique among all mechanisms that are efficient in dominant strategies, ex post individually rational, and never run a deficit (see Corollary 3.2 of Cavallo (2008)). Moreover, Hurwicz and Walker (1990) show that efficiency and strong budget-balance is unattainable even in very restricted typespaces.
5Bailey (1997), Guo and Conitzer (2009), and Moulin (2009) are examples of other work in the same vein. In certain restricted settings including single-item allocation (which we discuss in the next subsection), the redistribution mechanism in fact coincides with Bailey's earlier mechanism.

In AON domains each agent $i$ gets “all or none” of the social value, and can fully express his preferences over outcomes with just a single number $v_i$, the value for his favored outcome. The most natural and important example of an AON setting is that in which a single item is to be allocated. Here the redistribution mechanism takes the following form: the item is allocated to the highest bidder; the highest bidder pays the second highest bid to the center; and the center pays each agent $i$ (including the winner) $1/n$ times the second highest bid amongst agents other than $i$. The redistribution mechanism does very well here in terms of utility for the agents—only $2/n$ times the difference between the second and third highest bids is transferred to the center—but of course does not achieve strong budget-balance, which we know to be impossible for selfish agents. In this paper we will seek to describe, qualitatively and quantitatively, the kind of altruism that will make this impossibility go away.
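The single-item form just described can be sketched on top of the VCG code above. Again the function name is ours, and at least three bidders are assumed (so that a second-highest bid amongst the other agents always exists).

```python
# Illustrative sketch: the redistribution mechanism (Cavallo, 2006) in its
# single-item form: run VCG, then pay each agent i a 1/n share of the
# second-highest bid amongst the agents OTHER than i.

def redistribution_single_item(bids):
    n = len(bids)
    winner, transfers = vcg_single_item(bids)
    for i in range(n):
        others = sorted((bids[j] for j in range(n) if j != i), reverse=True)
        transfers[i] += others[1] / n  # 1/n times 2nd-highest bid among others
    return winner, transfers

# With bids (100, 80, 60, 40): the winner pays 80 and gets back 60/4 = 15,
# agent 2 gets 60/4 = 15, agents 3 and 4 get 80/4 = 20 each. The center
# keeps 2/n times (second bid - third bid) = (80 - 60)/2 = 10.
```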

2 Altruistic utility functions

We will consider a utility model that deviates from quasilinearity via added components manifesting regret and concern for others. As in Definition 1, there will continue to exist a function $v_i : \Theta_i \times O \to \Re$ that can be considered agent $i$'s “selfish value” function. Given this, we use notation $w_i(\theta_i, o, x_i)$ to denote quasilinear or “standard-utility” for an agent $i$ whose type is $\theta_i$, when outcome $o \in O$ is chosen and he receives payment $x_i$; i.e., $w_i(\theta_i, o, x_i) = v_i(\theta_i, o) + x_i$. For arbitrary agent $i$, reported type profile $\hat\theta_{-i} \in \Theta_{-i}$, outcome $o \in O$, and payments $x \in \Re^n$, we also define:

• $w_{-i}(\hat\theta_{-i}, o, x_{-i}) = \sum_{j \in I \setminus \{i\}} [v_j(\hat\theta_j, o) + x_j]$, i.e., the (reported) aggregate standard-utility for agents other than $i$.

Finally, we define mechanism-specific notation for an agent's maximum possible standard-utility, given a context of type and other agents' reported types. For mechanism $(f, T)$, $i \in I$ with type $\theta_i \in \Theta_i$, and reported type profile $\hat\theta_{-i} \in \Theta_{-i}$ for the other agents:

• $\bar{w}_i(\theta_i, \hat\theta_{-i}, f, T) = \max_{\theta_i' \in \Theta_i} \{ v_i(\theta_i, f(\theta_i', \hat\theta_{-i})) + T_i(\theta_i', \hat\theta_{-i}) \}$, i.e., the maximum standard-utility that $i$ could realize with any type report.

Note that the mechanism context $(f, T)$ is critical to determining the quantity $\bar{w}_i$, and this is why we must model utility as not just a function of types and outcomes, but also of the mechanism itself. Now, the kind of utility functions we will focus on have the following general form; for any agent $i$ with type $\theta_i$, when the joint type reported is $\hat\theta$ and the mechanism context is $(f, T)$, for some function $g : \Re^3 \to \Re$:

$$u_i(\theta_i, \hat\theta, f, T) = w_i(\theta_i, f(\hat\theta), T_i(\hat\theta))\ + \quad (3)$$
$$g\big( w_i(\theta_i, f(\hat\theta), T_i(\hat\theta)),\ \bar{w}_i(\theta_i, \hat\theta_{-i}, f, T),\ w_{-i}(\hat\theta_{-i}, f(\hat\theta), T_{-i}(\hat\theta)) \big) \quad (4)$$

Thus an agent's utility is quasilinear plus an additional term that may depend on: the agent's standard-utility ($w_i$), the greatest standard-utility he could possibly realize given other agents' reported types ($\bar{w}_i$), and the aggregate standard-utility (reportedly) obtained by the other agents ($w_{-i}$).6 Dependence on the final term is where other-regarding elements can be exhibited.7

In order for the other-regarding quantities to play a role in an agent's utility, the agent must be able to determine them. Whereas when agents have quasilinear utility the center must only execute the outcome and give each agent his individual transfer payment, in an other-regarding setting where agent utilities are parameterized by information about other agents' welfare, some communication must take place. We can envision the following mechanism paradigm (where the mechanism form $(f, T)$ is presumed to be common-knowledge):

1. Each agent communicates a claim regarding his private type to the center.

2. The center computes and executes an outcome and transfer payments.

3. The center communicates $w_{-i}$ and $\bar{w}_i$ to each agent $i$ and utilities are realized.

In step 3 the center could alternatively just broadcast all reported types, but this may raise privacy concerns. By only communicating the relevant aggregate value information to each agent, the amount of information leakage (or violation of privacy) is arguably on a par with that of the basic-Groves or VCG mechanisms, where aggregate value information is indirectly communicated in each agent's transfer payment. One might question whether it is appropriate that an agent's “other-regardingness” should be manifested in concern for the value that other agents report obtaining. Wouldn't an altruistic agent care about the value they actually obtain? Yes, but the “true” value obtained by an agent is by nature forever private to that agent; the utility of others can only be based on observable manifestations of that underlying reality. It is probably most realistic to say that an altruistic agent would obtain utility proportional to his beliefs about other agents' utilities based on what they report. For this reason it is natural to restrict consideration

6Given this, formally an agent $i$'s selfish-value function $v_i$ is defined to be the $v_i : \Theta_i \times O \to \Re$ that solves, $\forall (f, T) \in \mathcal{M}$, $\forall \theta_i \in \Theta_i$, $\forall \hat\theta \in \Theta$, $u_i(\theta_i, \hat\theta, f, T) = v_i(\theta_i, f(\hat\theta)) + T_i(\hat\theta) + g\big( v_i(\theta_i, f(\hat\theta)) + T_i(\hat\theta),\ \max_{\theta_i' \in \Theta_i} \{ v_i(\theta_i, f(\theta_i', \hat\theta_{-i})) + T_i(\theta_i', \hat\theta_{-i}) \},\ w_{-i}(\hat\theta_{-i}, f(\hat\theta), T_{-i}(\hat\theta)) \big)$. While one may be able to construct a utility function with multiple distinct $v_i$ as solutions, we restrict attention to utility functions with a unique solution $v_i$ so that an agent's “selfish-value” is unambiguously defined.
7Note that while agent utilities here depend on the perceived standard-utility that other agents experience, this does not put us in a multidimensional interdependent values setting of the sort considered in Dasgupta and Maskin (2000), which is shown to preclude efficient mechanism design in general. The reason is that an agent's private information holds no “secret” about what another agent will experience once an outcome has finally been realized—it is the report itself that generates more or less utility for the others.

to strategyproof mechanisms, and we do; in such contexts we contend that simply taking an agent's report to be an accurate reflection of the truth is sensible.8

So the class of utility functions represented in Eq. (4) is a basic extension of the quasilinear model; each agent has the standard quasilinear utility plus some bonus term that considers the impact of his report on the aggregate standard-utility obtained by the other agents, but in relation to the “opportunity cost” in standard-utility he bears. We can immediately identify the most basic special cases of this class: completely-altruistic utilities are represented by Eq. (4) with $g(a, b, c) = c$,9 and completely-selfish utilities with $g(a, b, c) = 0$, $\forall a, b, c \in \Re$. These two cases are very well-understood. In a setting where all agents have completely-altruistic utility functions, a mechanism that chooses aggregate-value maximizing outcomes based on agent reports and never makes any transfer payments is efficient in dominant strategies and trivially strongly budget-balanced. When agents have completely-selfish (i.e., quasilinear) utility functions, we know that the Groves class exactly characterizes the efficient mechanisms that can be implemented in dominant strategies (Green and Laffont, 1977), and in settings of significance there is no strongly budget-balanced mechanism.

So our approach will be to look at reasonable “semi-altruistic” utility functions somewhere in between these two extremes. The notion of altruism that will drive our positive results is the following, which captures the idea that agents are willing to sacrifice up to an amount $\alpha$ of personal standard-utility if at least that same amount is gained by the other agents:

Definition 4 ($\alpha$-altruism). For arbitrary mechanism context $(f, T)$, an agent $i$ with utility function $u_i$ is $\alpha$-altruistic if and only if, $\forall \theta_i, \theta_i' \in \Theta_i$, $\forall \hat\theta_{-i} \in \Theta_{-i}$, letting $(o, x)$ denote $(f(\theta_i, \hat\theta_{-i}), T(\theta_i, \hat\theta_{-i}))$ and $(o', x')$ denote $(f(\theta_i', \hat\theta_{-i}), T(\theta_i', \hat\theta_{-i}))$:

(a) $w_i(\theta_i, o, x_i) \ge \bar{w}_i(\theta_i, \hat\theta_{-i}, f, T) - \alpha$  ∧
(b) $w_i(\theta_i, o, x_i) + w_{-i}(\hat\theta_{-i}, o, x_{-i}) \ge w_i(\theta_i, o', x_i') + w_{-i}(\hat\theta_{-i}, o', x_{-i}')$
⇒ (c) $u_i(\theta_i, (\theta_i, \hat\theta_{-i}), f, T) \ge u_i(\theta_i, (\theta_i', \hat\theta_{-i}), f, T)$

8This is a nuanced argument: We will propose mechanisms which ask players to be honest. Given the mechanism, if all agents have faith in the other agents as honest, then a game results in which each agent maximizes his utility by being honest regardless of whether or not any of the other agents actually acts honestly. We submit that given this scenario and the fact that honesty is something that can never be verified or disconfirmed, it is plausible that agents will have faith in the others as honest. Importantly, we will make no assumption about whether agents actually are honest. At any rate, the typical approach in precedent work is apparently to either assume other agents' standard-utility functions are known (see, e.g., Hori (2006))—and this is an extremely strong assumption that obviates the need for mechanism design—or to more or less ignore the issue.
9Or, perhaps some would prefer to describe this as completely social-welfare-concerned; if altruism is construed as regard for others, complete altruism with no self-regard would be $g(a, b, c) = c - a$.

The concept is straightforward: considering any alternative to truthful reporting for agent $i$, condition (a) states that $i$'s standard-utility from truthtelling is not more than $\alpha$ less than what he could attain from his standard-utility maximizing deviation; (b) states that social standard-utility is not improved by the alternative report; and then (c) states that $i$'s utility is no greater under it. Perhaps more intuitively, the definition entails that: each agent $i$ weakly prefers obtaining standard-utility $y \in [\bar{w}_i - \alpha, \bar{w}_i]$ with others (in aggregate) obtaining standard-utility at least $z$ over obtaining $y + \epsilon$ with others obtaining at most $z - \epsilon$.10 It will also be useful to consider the following stronger altruism property:

Definition 5 (strong-$\alpha$-altruism). For arbitrary mechanism context $(f, T)$, an agent $i$ with utility function $u_i$ is strongly-$\alpha$-altruistic if and only if, $\forall \theta_i, \hat\theta_i', \hat\theta_i'' \in \Theta_i$, $\forall \hat\theta_{-i} \in \Theta_{-i}$, $\alpha$-altruism holds and, letting $(o', x')$ denote $(f(\hat\theta_i', \hat\theta_{-i}), T(\hat\theta_i', \hat\theta_{-i}))$ and $(o'', x'')$ denote $(f(\hat\theta_i'', \hat\theta_{-i}), T(\hat\theta_i'', \hat\theta_{-i}))$:

(a′) $w_i(\theta_i, o', x_i') \ge \bar{w}_i(\theta_i, \hat\theta_{-i}, f, T) - \alpha$  ∧
(b′) $w_i(\theta_i, o', x_i') + w_{-i}(\hat\theta_{-i}, o', x_{-i}') > w_i(\theta_i, o'', x_i'') + w_{-i}(\hat\theta_{-i}, o'', x_{-i}'')$
⇒ (c′) $u_i(\theta_i, (\hat\theta_i', \hat\theta_{-i}), f, T) > u_i(\theta_i, (\hat\theta_i'', \hat\theta_{-i}), f, T)$

In words, for any report that will garner an agent standard-utility within $\alpha$ of the maximum he could obtain, he will strictly prefer making that report over making any alternative report that yields strictly less social standard-utility. This second altruism notion strengthens the first in two ways: first, it imposes constraints on relative agent utilities for two type reports even when neither one of them is the truthful report; second, it says the utility preference is strict when the social standard-utility inequality is strict. Both $\alpha$-altruism properties get at a threshold-type quality, which we crystallize in the next subsection. Note that $\alpha$-altruism implies $(\alpha - \epsilon)$-altruism and strong-$\alpha$-altruism implies strong-$(\alpha - \epsilon)$-altruism for any $\epsilon \ge 0$.

Whether or not an agent is $\alpha$-altruistic is only a well-formed question given a concrete mechanism context, since an agent's utility may depend on regret, which depends on the set of counterfactual possibilities, which in turn depends on the mechanism context. For instance, in the context of an “oblivious” mechanism that picks a predetermined outcome regardless of what types agents report, every agent will technically be strongly-$\alpha$-altruistic for any value of $\alpha$ because the utility they receive is invariant to the reports they make ((b′) is always false). This will not be the case in a mechanism that chooses outcomes to maximize social (or a particular individual's) welfare. In the next subsection we will discuss a broad class of concrete utility functions (parameterized by variable $\alpha$) that satisfy these altruism

10Strategyproofness for $\alpha$-altruistic agents has a flavor of so-called $\epsilon$-Nash equilibrium, in that both concepts describe conditions under which an agent can't gain more than $\alpha$ from deviating (for $\epsilon = \alpha$). A key difference, though, is that with $\alpha$-altruism if an agent can gain some $x < \alpha$ by deviating and doing so will not hurt the other agents, then he may do so. So $\alpha$-altruism is a significantly weaker assumption than “$\alpha$ indifference”, which forms the basis for $\epsilon$-Nash equilibrium.

properties in general, i.e., across all mechanism contexts. But the above definitions are useful because they allow us to assume less in making positive statements about the incentive properties of a mechanism. On a similar note, throughout this paper bear in mind that a proof that $\alpha$-altruism leads to a desired result implies that strong-$\alpha$-altruism does as well; likewise, demonstrating that—for a given class of utility functions—strong-$\alpha$-altruism holds implies that $\alpha$-altruism holds as well. Thus having these two notions allows us to be more precise in our findings.

Going forward we will simplify notation somewhat, frequently using $u_i(\theta_i, \hat\theta)$, $w_i(\theta_i, \hat\theta)$, and $\bar{w}_i(\theta_i, \hat\theta_{-i})$ in place of $u_i(\theta_i, \hat\theta, f, T)$, $w_i(\theta_i, f(\hat\theta), T_i(\hat\theta))$, and $\bar{w}_i(\theta_i, \hat\theta_{-i}, f, T)$, respectively, with the understanding that the quantities are defined with respect to a tacit mechanism context $(f, T)$.

In order to be able to talk about individual rationality of a given mechanism without positing any specific $\alpha$-altruistic utility function, we will make the following natural assumption about utility, which only plays a role in IR analysis and is not limiting in any practical way.

Assumption 1. If an individual agent's standard-utility is non-negative and the social standard-utility is non-negative, then the individual agent's actual utility is non-negative. That is, $\forall i \in I$, $\forall \theta_i \in \Theta_i$, $\forall \hat\theta \in \Theta$,

$$w_i(\theta_i, \hat\theta) \ge 0 \ \wedge\ w_i(\theta_i, \hat\theta) + w_{-i}(\hat\theta_{-i}, \hat\theta) \ge 0 \ \Rightarrow\ u_i(\theta_i, \hat\theta) \ge 0 \quad (5)$$

2.1 Regret-based altruism utility functions

$\alpha$-altruism tells us something important about an agent's utility function, but it does not actually specify a particular function. Before we get to results revolving around this property, we here make the case that it is something that corresponds to actual other-regarding utility functions of interest. Consider the following natural class, where utility equals a constant times the sum of the standard-utility and the “regret-weighted” standard-utility of the other agents:

Definition 6. (regret-based altruism utility function) For constant $k \in \Re^+$ and altruism-coefficient function $\rho : \Re^2 \to (-\infty, 1]$:

$$u_i(\theta_i, \hat\theta) = k \big[ w_i(\theta_i, \hat\theta) + \rho\big( \bar{w}_i(\theta_i, \hat\theta_{-i}),\, w_i(\theta_i, \hat\theta) \big) \cdot w_{-i}(\hat\theta) \big] \quad (6)$$

This class covers a spectrum ranging from complete selfishness ($\rho(c, d) = 0$, $\forall c, d$) or even spitefulness ($\rho(c, d)$ negative) to complete altruism ($\rho(c, d) = 1$, $\forall c, d$), with more plausible examples in-between.11 Our positive results will revolve around utility functions reflective of agents that are completely-altruistic as long as being so does not cause them to “give up” more than a quantity $\alpha$ of standard-utility:

11Note that this class also captures the standard model of non-regret-based linear altruism adopted in much previous work, wherein utility equals the sum of the agent's standard-utility and a constant times the other agents' standard-utility (see Section 4.2). Such a function does not meet the requirements of Definition 7.

Definition 7. ($\alpha$ threshold regret-based altruism utility function) A regret-based altruism utility function with $\rho(c, d) = 1$ for all $c, d \in \Re$ such that $c - d < \alpha$.

An $\alpha$ threshold regret-based altruism utility function can be represented graphically as in Figure 1, where the x-axis is “regret” and the y-axis is the “degree of altruism” as represented by the $\rho$ function.

[Figure 1 here: the $\alpha$ threshold regret-based altruism utility function class.]

Figure 1: For a given $\alpha$, a visualization of the space of functions within the class of $\alpha$ threshold regret-based altruism utility functions. The x-axis (regret) is $\bar{w}_i(\theta_i, \hat\theta_{-i}) - w_i(\theta_i, \hat\theta)$ and the y-axis (the altruism coefficient this regret leads to) is $\rho(\bar{w}_i(\theta_i, \hat\theta_{-i}), w_i(\theta_i, \hat\theta))$. For regret greater than $\alpha$, an $\alpha$ threshold regret-based altruism utility function can be defined arbitrarily between 0 and 1, as depicted by the shaded area.

As the figure shows, the definition requires that agents are completely altruistic within α of their best-response standard-utility, but allows for arbitrary levels of altruism (including none) outside of this range. Perhaps the simplest example of α threshold utility is the following class:

$$u_i(\theta_i, \hat\theta) = \begin{cases} w_i(\theta_i, \hat\theta) + w_{-i}(\hat\theta) & \text{if } \bar{w}_i(\theta_i, \hat\theta_{-i}) - \alpha \le w_i(\theta_i, \hat\theta) \\ w_i(\theta_i, \hat\theta) & \text{otherwise} \end{cases} \quad (7)$$

If a given report yields standard-utility for an agent within $\alpha$ of the maximum standard-utility he could have achieved with a standard-utility maximizing best-

response, the agent's utility function is altruistic. Otherwise, it is selfish.12 Put another way: each agent is willing to sacrifice amount $\alpha$ of standard-utility for the other agents. Note that $\alpha < 0$ corresponds to the usual completely selfish quasilinear utility setting, and $\alpha \ge \bar{w}_i(\theta_i, \hat\theta_{-i})$ corresponds to a completely altruistic setting where the agent cares only about social welfare (including payments).

For a perhaps more plausible (and continuous) example, consider another concrete member of the threshold regret-based altruism class: utility equals the social standard-utility when the agent obtains standard-utility within $\alpha$ of his selfish best-response; outside this range the agent's utility smoothly decreases from social standard-utility towards individual standard-utility as the obtained individual standard-utility decreases. Formally (assuming values normalized between 0 and 1):

$$u_i(\theta_i, \hat\theta) = w_i(\theta_i, \hat\theta) + \frac{1 - \max\{ \bar{w}_i(\theta_i, \hat\theta_{-i}) - w_i(\theta_i, \hat\theta),\ \alpha \}}{1 - \alpha}\, w_{-i}(\hat\theta) \quad (8)$$

Other intuitive examples include a utility function in which the altruism coefficient decreases with a growing rate as regret increases:

$$u_i(\theta_i, \hat\theta) = w_i(\theta_i, \hat\theta) + \big[ 1 - \big( \max\{ \bar{w}_i(\theta_i, \hat\theta_{-i}) - w_i(\theta_i, \hat\theta),\ \alpha \} - \alpha \big)^2 \big]\, w_{-i}(\hat\theta), \quad (9)$$

or an example of the opposite, where altruism decreases with a decreasing rate:

$$u_i(\theta_i, \hat\theta) = w_i(\theta_i, \hat\theta) + \frac{1}{10 \cdot \max\{ \bar{w}_i(\theta_i, \hat\theta_{-i}) - w_i(\theta_i, \hat\theta),\ \alpha \} - 10\alpha + 1}\, w_{-i}(\hat\theta) \quad (10)$$

The above utility functions are just a few members of the $\alpha$ threshold regret-based altruism class, and can be visualized as in Figure 2.
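For concreteness, the four members of the class given in Eqs. (7)-(10) can be written out directly. The sketch below is ours (with $k = 1$): w_i is the realized standard-utility, w_bar_i the maximum attainable standard-utility (so regret is w_bar_i − w_i), and w_others the other agents' aggregate standard-utility.

```python
# Illustrative sketches of the alpha threshold regret-based altruism
# utilities of Eqs. (7)-(10), with constant k = 1. In each case the
# altruism coefficient rho equals 1 whenever regret <= alpha.

def utility_eq7(w_i, w_bar_i, w_others, alpha):
    # Completely altruistic within the threshold, selfish outside it.
    return w_i + w_others if w_bar_i - alpha <= w_i else w_i

def utility_eq8(w_i, w_bar_i, w_others, alpha):
    # Coefficient decays linearly once regret exceeds alpha
    # (values assumed normalized between 0 and 1).
    rho = (1 - max(w_bar_i - w_i, alpha)) / (1 - alpha)
    return w_i + rho * w_others

def utility_eq9(w_i, w_bar_i, w_others, alpha):
    # Coefficient decreases at a growing rate as regret increases.
    rho = 1 - (max(w_bar_i - w_i, alpha) - alpha) ** 2
    return w_i + rho * w_others

def utility_eq10(w_i, w_bar_i, w_others, alpha):
    # Coefficient decreases at a decreasing rate as regret increases.
    rho = 1 / (10 * max(w_bar_i - w_i, alpha) - 10 * alpha + 1)
    return w_i + rho * w_others
```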

Having an $\alpha$ threshold regret-based altruism utility function implies $\alpha$-altruism:

Proposition 1. For any non-negative-value typespace13 and any $\alpha \ge 0$, an agent with an $\alpha$ threshold regret-based altruism utility function is strongly-$\alpha$-altruistic for any mechanism context $(f, T)$.

Proof. Consider arbitrary non-negative-value typespace $\Theta$, agent $i$ with an $\alpha$ threshold regret-based altruism utility function, $\hat\theta_{-i} \in \Theta_{-i}$, and $\theta_i, \theta_i' \in \Theta_i$. We will first show that if conditions (a) and (b) of Definition 4 are satisfied, then condition (c) is also satisfied. If (a) is satisfied then $i$'s utility will be in the “completely-altruistic portion” (i.e., $w_i(\theta_i, (\theta_i, \hat\theta_{-i})) \ge \bar{w}_i(\theta_i, \hat\theta_{-i}) - \alpha$) if he reports truthfully, so his utility would be $k[w_i(\theta_i, (\theta_i, \hat\theta_{-i})) + w_{-i}(\hat\theta_{-i}, (\theta_i, \hat\theta_{-i}))]$ for some constant $k \in \Re^+$. When making report $\theta_i'$, $i$'s utility will be $k[w_i(\theta_i, (\theta_i', \hat\theta_{-i})) + \gamma \cdot w_{-i}(\hat\theta_{-i}, (\theta_i', \hat\theta_{-i}))]$ for some $\gamma \le 1$ (since $\rho(\bar{w}_i(\theta_i, \hat\theta_{-i}), w_i(\theta_i, (\theta_i', \hat\theta_{-i}))) \le 1$, by definition).

12Note that this meets the form of Eq. (4) with $g(a, b, c) = c$ if $b - a \le \alpha$ and $g(a, b, c) = 0$ otherwise.
13I.e., any in which $v_i(\theta_i, o) \ge 0$, $\forall i \in I$, $\forall \theta_i \in \Theta_i$, $\forall o \in O$.

[Figure 2 here: four panels — (a) utility form of Eq. (7), (b) utility form of Eq. (8), (c) utility form of Eq. (9), (d) utility form of Eq. (10).]

Figure 2: A visualization of four utility functions within the $\alpha$ threshold regret-based altruism class. As in the previous figure, the x-axis is $\bar{w}_i(\theta_i, \hat\theta_{-i}) - w_i(\theta_i, \hat\theta)$ and the y-axis is $\rho(\bar{w}_i(\theta_i, \hat\theta_{-i}), w_i(\theta_i, \hat\theta))$.

But then condition (b) of Definition 4, $w_i(\theta_i, (\theta_i, \hat\theta_{-i})) + w_{-i}(\hat\theta_{-i}, (\theta_i, \hat\theta_{-i})) \ge w_i(\theta_i, (\theta_i', \hat\theta_{-i})) + w_{-i}(\hat\theta_{-i}, (\theta_i', \hat\theta_{-i}))$, implies (using non-negative values in the second inequality):

$$u_i(\theta_i, (\theta_i, \hat\theta_{-i})) = k[w_i(\theta_i, (\theta_i, \hat\theta_{-i})) + w_{-i}(\hat\theta_{-i}, (\theta_i, \hat\theta_{-i}))] \quad (11)$$
$$\ge k[w_i(\theta_i, (\theta_i', \hat\theta_{-i})) + w_{-i}(\hat\theta_{-i}, (\theta_i', \hat\theta_{-i}))] \quad (12)$$
$$\ge k[w_i(\theta_i, (\theta_i', \hat\theta_{-i})) + \gamma \cdot w_{-i}(\hat\theta_{-i}, (\theta_i', \hat\theta_{-i}))] = u_i(\theta_i, (\theta_i', \hat\theta_{-i})), \quad (13)$$

i.e., (c) of Definition 4 holds, and so $\alpha$-altruism is satisfied. Furthermore, strong-$\alpha$-altruism is satisfied because for arbitrary $\hat\theta_{-i}$, $\theta_i$, $\hat\theta_i'$, and $\hat\theta_i''$, if $w_i(\theta_i, (\hat\theta_i', \hat\theta_{-i})) + w_{-i}(\hat\theta_{-i}, (\hat\theta_i', \hat\theta_{-i})) > w_i(\theta_i, (\hat\theta_i'', \hat\theta_{-i})) + w_{-i}(\hat\theta_{-i}, (\hat\theta_i'', \hat\theta_{-i}))$, then for any $\gamma \le 1$:

$$u_i(\theta_i, (\hat\theta_i', \hat\theta_{-i})) = k[w_i(\theta_i, (\hat\theta_i', \hat\theta_{-i})) + w_{-i}(\hat\theta_{-i}, (\hat\theta_i', \hat\theta_{-i}))] \quad (14)$$
$$> k[w_i(\theta_i, (\hat\theta_i'', \hat\theta_{-i})) + w_{-i}(\hat\theta_{-i}, (\hat\theta_i'', \hat\theta_{-i}))] \quad (15)$$
$$\ge k[w_i(\theta_i, (\hat\theta_i'', \hat\theta_{-i})) + \gamma \cdot w_{-i}(\hat\theta_{-i}, (\hat\theta_i'', \hat\theta_{-i}))] = u_i(\theta_i, (\hat\theta_i'', \hat\theta_{-i})) \quad (16)$$

Broad though the class of $\alpha$ threshold regret-based altruism utility functions is, there are potentially compelling utility functions that are not members yet still lead to satisfaction of the $\alpha$-altruism property. For arbitrary $i$ and $\hat\theta_{-i} \in \Theta_{-i}$, let $\tilde{w}_{-i}(\hat\theta_{-i}) = \max_{o \in O} v_{-i}(\hat\theta_{-i}, o)$, i.e., the aggregate standard-utility agents other than $i$ would obtain under a strongly budget-balanced mechanism $(f^*, T)$ if $i$ were not present (or, equivalently, had constant value for every outcome and received no payments). Now consider the following, under which agents' utilities are scaled in a way that reflects their individual abilities to obtain standard-utility:

Definition 8. (scaled threshold altruism utility function)

$$u_i(\theta_i, \hat\theta) = \begin{cases} \dfrac{\bar{w}_i(\theta_i, \hat\theta_{-i})}{\tilde{w}_{-i}(\hat\theta_{-i})} \cdot \big( w_i(\theta_i, \hat\theta) + w_{-i}(\hat\theta) \big) & \text{if } \bar{w}_i(\theta_i, \hat\theta_{-i}) - \alpha \le w_i(\theta_i, \hat\theta) \\ w_i(\theta_i, \hat\theta) & \text{otherwise} \end{cases}$$

This utility function is the same as the one described in Eq. (7), except in the “altruistic range” of the function agents’ utilities are bumped up by an amount proportional to their potential to obtain standard-utility and are bumped down proportionally to the other agents’ potential to obtain standard-utility. When α is large enough, this will mean agents that get large standard-utility obtain a bonus greater than those that obtain little, and each agent is “hurt” when the other agents come far from meeting their standard-utility potential. We will see how this works out in an example in the beginning of the next section, but here we note that such functions satisfy α-altruism.
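Definition 8 can be sketched in the same illustrative style as the examples above; w_tilde_others stands for the quantity $\tilde{w}_{-i}(\hat\theta_{-i})$ defined earlier, and the numeric check uses the values of Table 2 in Section 3.2.

```python
# Illustrative sketch of the scaled threshold altruism utility function
# (Definition 8). Outside the altruistic range it reverts to selfish
# standard-utility, as in Eq. (7).

def scaled_threshold_utility(w_i, w_bar_i, w_others, w_tilde_others, alpha):
    if w_bar_i - alpha <= w_i:
        # Social standard-utility, scaled by the ratio of i's own
        # standard-utility potential to the other agents' potential.
        return (w_bar_i / w_tilde_others) * (w_i + w_others)
    return w_i

# E.g., for agent 1 in Table 2 of Section 3.2: w_i = w_bar_i = 35,
# w_others = 55, w_tilde_others = 80, alpha = 12.5
# -> (35/80) * 90 = 39.375, matching the table.
```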

Proposition 2. For any non-negative-value typespace and any $\alpha \ge 0$, an agent with an $\alpha$ scaled threshold altruism utility function is $\alpha$-altruistic in any strongly budget-balanced mechanism context $(f^*, T)$.

Proof. See Appendix.

2.2 Efficiency in altruistic settings

Our goal is to achieve strong budget-balance as a means to achieving efficiency, but care must be taken in defining the concept in this altruistic setting. A choice function $f^*$ is efficient for standard-utility settings; but we are concerned with maximizing actual utility—that which takes altruistic elements into account—instead. Specifically, since $\alpha$ threshold regret-based altruism utilities remain linear in money (at least as long as standard-utility regret is no greater than $\alpha$), as in the standard setting money provides a common denomination across agents and we will be concerned with maximizing the sum of agent utilities. Given our restriction to strategyproof mechanisms (which is motivated by the nature of altruism), a pertinent design question thus revolves around the following optimality concept:

Definition 9 (welfare-optimal). Given a set of agents $I$,14 a mechanism $(f, T)$ is (utilitarian) welfare-optimal if and only if it is strategyproof, no-deficit, and, for every strategyproof and no-deficit mechanism $(f', T')$,

$$\forall \theta \in \Theta, \quad \sum_{i \in I} u_i(\theta_i, \theta, f(\theta), T(\theta)) \ge \sum_{i \in I} u_i(\theta_i, \theta, f'(\theta), T'(\theta)) \quad (17)$$

∗ ∗ ∗ k wi(θˆi,f (θˆ),Ti(θˆ)) = [vi(θˆi,f (θˆ)) + Ti(θˆ)] = vi(θˆi,f (θˆ)), (18) i∈I i∈I i∈I X X X where the last equality holds by strong budget-balance of (f ∗,T ). Each agent’s ′ ′ ′ ′ utility ui(θˆi, θ,fˆ ,T ) under (f ,T ) is at most:

$$k \sum_{i \in I} w_i(\hat\theta_i, f'(\hat\theta), T_i'(\hat\theta)) = k \sum_{i \in I} [v_i(\hat\theta_i, f'(\hat\theta)) + T_i'(\hat\theta)] \quad (19)$$
$$\le k \sum_{i \in I} v_i(\hat\theta_i, f'(\hat\theta)) \quad (20)$$
$$\le k \sum_{i \in I} v_i(\hat\theta_i, f^*(\hat\theta)), \quad (21)$$

14Specifically, given the agents' utility functions. Note that a mechanism may be strategyproof (and thus potentially welfare-optimal) for agents with one form of utility function but not for another.

where the first inequality holds by no-deficit of $(f', T')$ and the second by the definition of $f^*$. Therefore $(f^*, T)$ is welfare-optimal for agents with $\alpha$ threshold regret-based altruism utility functions, and at the same time it is efficient with respect to standard (selfish) utility.

The main implication of this lemma is that, in a context of a strongly budget-balanced mechanism and $\alpha$ threshold regret-based altruism utilities where, $\forall i \in I$, $\forall \theta \in \Theta$, $\alpha \ge \bar{w}_i(\theta_i, \theta_{-i}) - w_i(\theta_i, \theta)$, welfare-optimality (maximization of actual altruistic utility functions) and efficiency with respect to standard-utility completely coincide.

2.3 Classic mechanisms fail

We now observe the negative implications of using some well-known mechanisms designed for quasilinear utilities when agents in fact are strongly-$\alpha$-altruistic. In the formal statement of the result, we will use the term “non-trivial AON” to mean an AON domain in which there exist at least three positive values that are admitted by each agent's selfish value space, along with value 0. One typespace is “at least as general as” another if every type in the second is included in the first.

Proposition 3. For any typespace at least as general as some non-trivial AON typespace, when agents are strongly-α-altruistic for any α > 0, the basic-Groves, VCG, and redistribution mechanisms are all not truthful in dominant strategies.

Proof. First recall that strong-$\alpha$-altruism implies that for two reports that both yield standard-utility meeting the $\alpha$ threshold, the one yielding higher social welfare yields greater utility for the agent. Now note that under all three mechanisms, truthful reporting always maximizes an agent's standard-utility (this is a direct consequence of strategyproofness of the mechanisms; see Cavallo (2006)). From this it follows that if there is another (besides truth) standard-utility maximizing report that yields greater social welfare—e.g., if the outcome chosen remains efficient but there is less revenue (net payment to the center)—this will be a beneficial deviation.

Consider an example in which there are $n$ agents and, for arbitrary $z > x > y > 0$, agent 1 has value $x$ for outcome $o_1$ and value 0 for other outcomes, agent 2 has value $y$ for outcome $o_2$ and 0 for others, and all other agents' values for all outcomes are 0. Under basic-Groves, it is clear that increasing reported valuations may yield less revenue for the center. If agent 1 reports value $z$ instead of $x$ for outcome $o_1$, revenue will be $-(n-1)z$ rather than $-(n-1)x$ and social standard-utility (and $u_1$) is accordingly higher. Under VCG, revenue will be decreased in this example if agent 2 reports value 0 rather than $y$ for outcome $o_2$—revenue will be 0 rather than $y$. Under the redistribution mechanism, also, this misreport will decrease revenue. With truthful reporting revenue will equal $y - (n-2) \cdot \frac{y}{n} = \frac{2y}{n}$. With the misreport,

revenue will equal 0. All these deviations (like truth) yield maximum standard-utility and do not change the selected outcome (which is efficient), regardless of the number of agents. Since we were able to find deviations that all fit within an AON framework, this shows that when agents are strongly-$\alpha$-altruistic for any positive $\alpha$ the mechanisms will not be strategyproof for any typespace that has at least the generality of non-trivial AON (including, e.g., combinatorial allocation domains or the unrestricted typespace). In the case of the redistribution mechanism, to complete the proof we must also observe that—holding any given instance of valuations constant—a typespace that is strictly larger than another will admit weakly less redistribution.
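The redistribution-mechanism manipulation used in the proof is easy to verify numerically with the single-item sketch from Section 1.2.1; the helper below and the choice of numbers ($n = 4$, $x = 100$, $y = 80$) are ours.

```python
# Numeric check of the misreport in the proof of Proposition 3, reusing
# redistribution_single_item from the sketch in Section 1.2.1.

def center_revenue(bids):
    _, transfers = redistribution_single_item(bids)
    return -sum(transfers)

truthful  = [100, 80, 0, 0]  # x = 100, y = 80, two zero-value agents
misreport = [100,  0, 0, 0]  # agent 2 reports 0 instead of y

# center_revenue(truthful)  -> 40.0, i.e., 2y/n
# center_revenue(misreport) -> 0.0
# Agent 2's own standard-utility is 0 either way (his redistribution share
# does not depend on his own bid), so for a strongly-alpha-altruistic
# agent 2 the misreport is strictly better: it raises the others'
# aggregate standard-utility by 40 at no cost to himself.
```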

The combination of Propositions 1 and 3 immediately yields the following:

Corollary 1. For any typespace at least as general as some non-trivial AON typespace, if agents have $\alpha$ threshold regret-based altruism utility functions for any $\alpha > 0$, the basic-Groves, VCG, and redistribution mechanisms are all not truthful in dominant strategies.

This fact is worth reflecting on because it implies that the classic solutions of mechanism design fail if agents are not purely selfish as the mechanism designer might have imagined.15 If agents are even slightly altruistic instead, they can benefit from gaming these mechanisms, and the equilibrium properties break down. Alternative solutions are required, which we now provide.

3 Balanced redistribution mechanisms

In this section we present positive results regarding what can be achieved in a context of semi-altruistic agents. The following will simplify our analysis.

Lemma 2. For arbitrary typespace $\Theta$, any strongly budget-balanced mechanism $(f^*, T)$ is strategyproof if and only if each agent $i \in I$ is $\alpha_i$-altruistic, where:

$$\alpha_i = \max_{\theta \in \Theta} [\bar{w}_i(\theta_i, \theta_{-i}) - w_i(\theta_i, \theta)] \quad (22)$$

Proof. Consider arbitrary typespace $\Theta$ and strongly budget-balanced mechanism $(f^*, T)$.16 For arbitrary agent $i \in I$, let $\alpha_i = \max_{\theta \in \Theta} [\bar{w}_i(\theta_i, \theta_{-i}) - w_i(\theta_i, \theta)]$. If $i$ is not $\alpha_i$-altruistic, then there exists some $\theta \in \Theta$ and $\theta_i' \in \Theta_i$ such that $\bar{w}_i(\theta_i, \theta_{-i}) - \alpha_i \le$

15There are previous demonstrations that deviations from quasilinearity such as budget constraints can cause VCG to break down (see, e.g., Che and Gale (1998) or Borgs et al (2005)); but Proposition 3 is noteworthy because it shows specifically that unselfishness will break these mechanisms.
16Note that mechanism $(f^*, T)$ provides the implicit context for the shorthand notation used for $w_i$, $\bar{w}_i$, and $u_i$ in what follows (e.g., $u_i(\theta_i, (\theta_i', \theta_{-i}))$ for $u_i(\theta_i, (\theta_i', \theta_{-i}), f^*, T)$).

$w_i(\theta_i, (\theta_i, \theta_{-i}))$ and $w_i(\theta_i, (\theta_i, \theta_{-i})) + w_{-i}(\theta_{-i}, (\theta_i, \theta_{-i})) \ge w_i(\theta_i, (\theta_i', \theta_{-i})) + w_{-i}(\theta_{-i}, (\theta_i', \theta_{-i}))$, yet $u_i(\theta_i, (\theta_i, \theta_{-i})) < u_i(\theta_i, (\theta_i', \theta_{-i}))$; truthful reporting then does not maximize $i$'s utility, and so $(f^*, T)$ is not strategyproof. Conversely, assume each agent $i \in I$ is $\alpha_i$-altruistic, and consider arbitrary $i \in I$, $\theta \in \Theta$, and $\theta_i' \in \Theta_i$. Condition (a) of Definition 4 holds by the definition of $\alpha_i$, and condition (b) holds because, under strong budget-balance, social standard-utility equals the true aggregate value of the chosen outcome, which $f^*$ maximizes at the truthful report:

$$w_i(\theta_i, (\theta_i, \theta_{-i})) + w_{-i}(\theta_{-i}, (\theta_i, \theta_{-i})) \ge w_i(\theta_i, (\theta_i', \theta_{-i})) + w_{-i}(\theta_{-i}, (\theta_i', \theta_{-i})) \quad (23)$$

$\alpha_i$-altruism thus implies that (c) of Definition 4 is satisfied for all $i$, $\theta$, and $\theta_i'$, which is exactly the criterion for strategyproofness.

This lemma demonstrates that the concept of $\alpha$-altruism precisely represents the amount of altruism required to obtain strategyproofness in a strongly budget-balanced mechanism in which truthfulness yields a standard-utility loss of up to $\alpha$ for any given agent. The next lemma will also be useful for deriving strategyproof mechanisms for altruistic agents.

Lemma 3. For arbitrary typespace $\Theta$, arbitrary mechanism $(f^*, T)$, and arbitrary mechanism $(f^*, T^*)$ that is strategyproof for agents with quasilinear utility, if for some constant $\alpha \in \Re^+$, $\forall i \in I$, $\forall \theta \in \Theta$, $T_i(\theta) - T_i^*(\theta) \in [0, \alpha]$, then $(f^*, T)$ is strategyproof for $\alpha$-altruistic agents.

Proof. Consider arbitrary mechanism $(f^*, T^*)$ that is strategyproof for agents with quasilinear utility, arbitrary $\alpha \in \Re^+$, and arbitrary mechanism $(f^*, T)$ satisfying, $\forall i \in I$, $\forall \theta \in \Theta$, $T_i(\theta) - T_i^*(\theta) \in [0, \alpha]$. Consider arbitrary $i \in I$ and $\theta \in \Theta$. We have, $\forall \theta_i' \in \Theta_i$:

$$[v_i(\theta_i, f^*(\theta_i', \theta_{-i})) + T_i(\theta_i', \theta_{-i})] - [v_i(\theta_i, f^*(\theta_i', \theta_{-i})) + T_i^*(\theta_i', \theta_{-i})] \in [0, \alpha] \quad (24)$$

Since $(f^*, T^*)$ is strategyproof, $\theta_i \in \arg\max_{\hat\theta_i \in \Theta_i} [v_i(\theta_i, f^*(\hat\theta_i, \theta_{-i})) + T_i^*(\hat\theta_i, \theta_{-i})]$, and thus, $\forall \theta_i' \in \Theta_i$:

$$[v_i(\theta_i, f^*(\theta_i', \theta_{-i})) + T_i(\theta_i', \theta_{-i})] - [v_i(\theta_i, f^*(\theta)) + T_i^*(\theta)] \le \alpha \quad (25)$$

Then, letting $\theta_i' \in \arg\max_{\hat\theta_i \in \Theta_i} [v_i(\theta_i, f^*(\hat\theta_i, \theta_{-i})) + T_i(\hat\theta_i, \theta_{-i})]$, we have:

$$\bar{w}_i(\theta_i, \theta_{-i}, f^*, T) = v_i(\theta_i, f^*(\theta_i', \theta_{-i})) + T_i(\theta_i', \theta_{-i}) \quad (26)$$
$$\le v_i(\theta_i, f^*(\theta)) + T_i^*(\theta) + \alpha \quad (27)$$
$$\le v_i(\theta_i, f^*(\theta)) + T_i(\theta) + \alpha \quad (28)$$
$$= w_i(\theta_i, f^*(\theta), T_i(\theta)) + \alpha \quad (29)$$

If all agents are $\alpha$-altruistic, then this inequality and Lemma 2 imply mechanism $(f^*, T)$ is strategyproof.

3.1 A simple balanced mechanism

Now we are ready to explore new mechanisms for altruistic agents. We start by considering what is arguably the simplest method of forming a budget-balanced mechanism through a modification of one that is strategyproof and efficient for selfish agents: simply run VCG and redistribute an equal share of the resulting revenue to each agent.

Definition 10. (simple balanced mechanism) A mechanism $(f^*, T)$ where, $\forall i \in I$, $\forall \hat\theta \in \Theta$:

$$T_i(\hat\theta) = v_{-i}(\hat\theta_{-i}, f^*(\hat\theta)) - v_{-i}(\hat\theta_{-i}, f^*(\hat\theta_{-i})) + \frac{Z(\hat\theta)}{n}, \quad (30)$$

where $Z(\hat\theta) = \sum_{j \in I} [v_{-j}(\hat\theta_{-j}, f^*(\hat\theta_{-j})) - v_{-j}(\hat\theta_{-j}, f^*(\hat\theta))]$, i.e., the revenue that would result under VCG.

The mechanism is defined to be strongly budget-balanced. In terms of positive equilibrium properties, it is difficult to establish much in the general case (we return to this topic in Section 4.1). But AON domains provide the structure necessary for a positive analysis. Let $\hat{v}^{(1)}_{-i}$ denote $v_{-i}(\hat\theta_{-i}, f^*(\hat\theta_{-i}))$, i.e., the highest “bid” amongst agents other than $i$. Consider the mechanism's implementation in the scenario depicted in Table 1, below, with agent values of 100, 80, 60 and 40, when each agent $i$ has an $\alpha$ threshold regret-based altruism utility function for some $\alpha \ge \hat{v}^{(1)}_{-i}/n$.17 Agent 1 is the winner (i.e., outcome $o_1$ is chosen) and pays the center 80, and then each agent (including agent 1) is paid 20.

agent | $v_i(o_i)$ | $v_i(o_1)$ | $T_i$ | $w_i$ | $\bar{w}_i$ | $\hat{v}^{(1)}_{-i}/n$ | $u_i$
1 | 100 | 100 | −60 | 40 | 40 | 20 | 100
2 | 80 | 0 | 20 | 20 | 25 | 25 | 100
3 | 60 | 0 | 20 | 20 | 25 | 25 | 100
4 | 40 | 0 | 20 | 20 | 25 | 25 | 100

Table 1: Four-agent AON example illustrating the simple balanced mechanism.

Agent 2, 3, or 4 could increase his standard-utility from 20 to 25 by bidding 100 (assuming the tie is broken in favor of agent 1), because revenue under VCG—of which each agent gets a share—would then be 100 rather than 80. But the difference between $\bar{w}_i$ and $w_i$ for each agent $i$ is less than $\hat{v}^{(1)}_{-i}/n$, and no deviation can increase social welfare, which entails that $\hat{v}^{(1)}_{-i}/n$-altruistic agents will be truthful.

17With constant $k = 1$.
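Table 1 can be reproduced with a short extension of the VCG sketch from Section 1.2; as before the function name is our own.

```python
# Illustrative sketch of the simple balanced mechanism (Definition 10)
# for single-item allocation: run VCG, then return an equal 1/n share
# of the VCG revenue Z(theta) to every agent.

def simple_balanced(bids):
    n = len(bids)
    winner, transfers = vcg_single_item(bids)   # sketch from Section 1.2
    share = -sum(transfers) / n                 # Z(theta)/n
    return winner, [t + share for t in transfers]

# simple_balanced([100, 80, 60, 40]) -> (0, [-60.0, 20.0, 20.0, 20.0]):
# the transfers of Table 1, summing to zero (strong budget-balance).
```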

Theorem 1. For arbitrary non-negative value AON typespace18 with maximum value $K$, the simple balanced mechanism is strongly budget-balanced and ex post individually rational. It is strategyproof if and only if every agent is $\frac{K}{n}$-altruistic. If every agent has an $\alpha$ threshold regret-based altruism utility function for $\alpha \ge \frac{K}{n}$, it is welfare-optimal.

Proof. Ex post individual rationality and strong budget-balance hold by construction of the mechanism. Now consider arbitrary non-negative value AON typespace with maximum value $K$, and assume agents are $\frac{K}{n}$-altruistic. Let $(f^*, T^{SB})$ denote the simple balanced mechanism and $(f^*, T^{VCG})$ denote the VCG mechanism. By Lemma 3, strategyproofness of $(f^*, T^{SB})$ holds if, $\forall i \in I$, $\forall \theta \in \Theta$, $T_i^{SB}(\theta) - T_i^{VCG}(\theta) \in [0, \frac{K}{n}]$. For any $\theta \in \Theta$, $T_i^{SB}(\theta) - T_i^{VCG}(\theta) = \frac{Z(\theta)}{n}$, i.e., $1/n$ times the second highest value in $\theta$. This clearly is always in the range $[0, \frac{K}{n}]$, and so $(f^*, T^{SB})$ is strategyproof. Alternatively, if there exists an $i \in I$ that is not $\frac{K}{n}$-altruistic, by Lemma 2 $(f^*, T^{SB})$ is not strategyproof, since $\bar{w}_i(\theta_i, \theta_{-i}) - w_i(\theta_i, \theta) = \frac{K}{n}$ in the case where $\theta$ consists of value 0 for all agents besides some $j \in I \setminus \{i\}$ who has value $K$—assuming ties broken in favor of $j$ over $i$, $i$ reporting value $K$ rather than 0 increases VCG revenue by $K$, and increases $i$'s payoff by $\frac{K}{n}$.

Then, by Lemma 1, strategyproofness and strong budget-balance of the mechanism entail that it is welfare-optimal for agents with $\alpha$ threshold regret-based altruism utility functions with $\alpha \ge \frac{K}{n}$, since the mechanism chooses outcomes according to $f^*$.

3.2 The Balanced-Redistribution Mechanism

We can achieve a balanced and efficient mechanism that has a much more modest requirement of agent altruism. The following implements the redistribution mechanism of Cavallo (2006), then portions out equal shares of the remaining revenue. Recall that $G(\Theta_i, \hat\theta_{-i})$ is the minimum revenue that could result under the VCG mechanism given $\Theta_i$ and $\hat\theta_{-i}$, taken over all possible reports by $i$.

Definition 11. (The balanced-redistribution mechanism) A mechanism $(f^*, T)$ where, $\forall i \in I$, $\forall \hat\theta \in \Theta$:

$$T_i(\hat\theta) = v_{-i}(\hat\theta_{-i}, f^*(\hat\theta)) - v_{-i}(\hat\theta_{-i}, f^*(\hat\theta_{-i})) + \frac{G(\Theta_i, \hat\theta_{-i})}{n} + \frac{Y(\hat\theta)}{n}, \quad (31)$$

where $Y(\hat\theta) = \sum_{j \in I} [v_{-j}(\hat\theta_{-j}, f^*(\hat\theta_{-j})) - v_{-j}(\hat\theta_{-j}, f^*(\hat\theta)) - G(\Theta_j, \hat\theta_{-j})]$, i.e., the revenue that would result under the redistribution mechanism of Cavallo (2006).

In AON domains the mechanism reduces to the following, where $v^{(k)}$ and $v^{(k)}_{-i}$ denote the $k$th highest elements of vectors $v$ and $v_{-i}$, respectively:

18E.g., a single-item allocation problem with free-disposal.

Definition 12. (The balanced-redistribution mechanism in AON typespaces) Given bids $v = (v_1, \ldots, v_n) \in \Re^n$, the outcome preferred by the highest bidder $h$ (with ties broken arbitrarily) is chosen and the following transfer payments are made:

$$T_h(v) = \frac{1}{n} v^{(2)}_{-h} + \frac{2}{n^2} (v^{(2)} - v^{(3)}) - v^{(2)} \quad (32)$$
$$T_i(v) = \frac{1}{n} v^{(2)}_{-i} + \frac{2}{n^2} (v^{(2)} - v^{(3)}), \quad \forall i \in I \setminus \{h\} \quad (33)$$
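Eqs. (32) and (33) translate directly into code. The sketch below is ours ($n \ge 3$ assumed, ties broken by lowest index), and with bids (100, 80, 60, 40) it reproduces the transfers of Table 3 below.

```python
# Illustrative sketch of the balanced-redistribution mechanism in AON
# typespaces (Definition 12).

def kth_highest(values, k):
    return sorted(values, reverse=True)[k - 1]

def balanced_redistribution_aon(bids):
    n = len(bids)
    h = max(range(n), key=lambda i: bids[i])              # highest bidder
    bonus = (2 / n**2) * (kth_highest(bids, 2) - kth_highest(bids, 3))
    transfers = []
    for i in range(n):
        others = [bids[j] for j in range(n) if j != i]
        t = kth_highest(others, 2) / n + bonus            # Eq. (33)
        if i == h:
            t -= kth_highest(bids, 2)                     # Eq. (32)
        transfers.append(t)
    return h, transfers

# balanced_redistribution_aon([100, 80, 60, 40])
# -> (0, [-62.5, 17.5, 22.5, 22.5]); the transfers sum to zero.
```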

Theorem 2. For arbitrary non-negative value AON typespace with maximum value $K$, the balanced-redistribution mechanism is strongly budget-balanced and ex post individually rational. It is strategyproof if and only if every agent is $\frac{2K}{n^2}$-altruistic. If every agent has an $\alpha$ threshold regret-based altruism utility function for $\alpha \ge \frac{2K}{n^2}$, it is welfare-optimal.

Proof. Ex post individual rationality and strong budget-balance hold by construction of the mechanism. Now consider arbitrary non-negative value AON typespace with maximum value $K$, and assume agents are $\frac{2K}{n^2}$-altruistic. Let $(f^*, T^{BR})$ denote the balanced-redistribution mechanism and $(f^*, T^{RM})$ denote the redistribution mechanism (see Definition 2). By Lemma 3, strategyproofness of $(f^*, T^{BR})$ holds if, $\forall i \in I$, $\forall v \in [0, K]^n$, $T_i^{BR}(v) - T_i^{RM}(v) \in [0, \frac{2K}{n^2}]$. For any $v \in \Re^n$, $T_i^{BR}(v) - T_i^{RM}(v) = \frac{2}{n^2}(v^{(2)} - v^{(3)})$ (see Theorem 3.5 of Cavallo (2008)). Since values range between 0 and $K$, $0 \le \frac{2}{n^2}(v^{(2)} - v^{(3)}) \le \frac{2}{n^2} K$, and so $(f^*, T^{BR})$ is strategyproof.

Alternatively, if there exists an $i \in I$ that is not $\frac{2K}{n^2}$-altruistic, by Lemma 2, $(f^*, T^{BR})$ is not strategyproof, since $\bar{w}_i(\theta_i, \theta_{-i}) - w_i(\theta_i, \theta) = \frac{2K}{n^2}$ in the case where all agents have value 0 except $i$ and some $j, l \in I \setminus \{i\}$, who have value $K$: $i$ reporting value 0 rather than $K$ increases the revenue $Y$ retained under the redistribution mechanism by $\frac{2}{n}(K - 0) - \frac{2}{n}(K - K) = \frac{2K}{n}$, and thus increases $i$'s payoff by $\frac{2K}{n^2}$.

Finally, by Lemma 1, strategyproofness and strong budget-balance of $(f^*, T^{BR})$ entail that it is welfare-optimal for agents with $\alpha$ threshold regret-based altruism utility functions with $\alpha \ge \frac{2K}{n^2}$, since the mechanism chooses outcomes according to $f^*$.

Imagine that 100 is an upper-bound on the value that any agent could have for any outcome ($K$). These results tell us that in, say, a scenario with 4 agents, if the agents are 12.5-altruistic these desirable properties will hold in equilibrium. If there were 8 agents, the requirement would be only 3.125-altruism.

Consider again the example in which agents 1 through 4 have values 100, 80, 60, and 40, respectively, for their preferred outcomes. The redistribution mechanism alone already retains all but $\frac{2}{n}(v^{(2)} - v^{(3)})$ value amongst the agents, which here

22 1 amounts to 2 (80−60) = 10, or 10% of the outcome value. If each agent has a scaled threshold altruism utility function with α ≥ 12.5 and reports truthfully, the original redistribution mechanism yields Table 2.

agent   v_i(o_i)   v_i(o_1)    T_i    w_i   w̄_i   w̃_{-i}   2K/n^2     u_i
  1       100        100      −65     35    35      80       12.5    39.375
  2        80          0       15     15    15     100       12.5    13.5
  3        60          0       20     20    20     100       12.5    18
  4        40          0       20     20    20     100       12.5    18

Table 2: Illustration of the outcome that would result, despite incentives to deviate, if scaled threshold regret-based altruistic agents were truthful under the redistribution mechanism.

But we know from Proposition 3 that rational altruistic agents may not be truthful under the redistribution mechanism. The balanced-redistribution mechanism amends the redistribution mechanism by giving an equal share (2.5, here) of the revenue generated by that mechanism to each agent, and in so doing yields truthful reporting as a dominant strategy for 12.5-altruistic agents (Table 3).

agent   v_i(o_i)   v_i(o_1)     T_i     w_i    w̄_i   w̃_{-i}   2K/n^2     u_i
  1       100        100      −62.5    37.5   37.5     80       12.5    46.875
  2        80          0       17.5    17.5   20      100       12.5    20
  3        60          0       22.5    22.5   25      100       12.5    25
  4        40          0       22.5    22.5   25      100       12.5    25

Table 3: Illustration of the dominant strategy truthful results of the balanced-redistribution mechanism.

In the example above, truthfulness would be a best-response strategy even if agents were only 2.5-altruistic (2.5 is the greatest difference between $\overline{w}_i$ and $w_i$ across all $i$). But the $\frac{2K}{n^2}$ figure is useful because, when agents' altruism thresholds meet that level, it provides a guarantee of truthtelling as a utility-maximizing strategy for any valuations that may arise.

We've now seen two mechanisms: one very simple, requiring a significant amount of altruism to achieve the desired results, and one somewhat more sophisticated with a much more modest requirement. Any strategyproof mechanism can be transformed, via an uncomplicated equal-redistribution step, into a strongly budget-balanced mechanism that is strategyproof conditioned on a certain level of

altruism on the part of the agents; the closer the original "core" mechanism is to strong budget-balance, the less altruism will be required in the transformed mechanism. As in other areas of mechanism design, a tradeoff can be identified between simplicity and (formal) efficacy. The simple balanced mechanism sits at one end of this spectrum, while the balanced-redistribution mechanism is able to achieve strong results while maintaining a certain simplicity in description. In the next section we will evaluate just how effective it is. But first we consider the mechanism for AON domains that, among all consisting of a strategyproof core mechanism plus equal redistribution, requires the least altruism for strategyproofness. (A schematic sketch of the equal-redistribution transformation follows.)
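The following Python fragment (ours, not the paper's formalism; the function names are illustrative) sketches the transformation just described, for any core mechanism given as a transfer rule:

```python
# A schematic sketch (ours) of the equal-redistribution transformation:
# wrap any strategyproof "core" mechanism and hand each agent an equal
# share of the revenue the core mechanism would otherwise retain.
def balance_by_equal_redistribution(core_transfers):
    def balanced(bids):
        t = core_transfers(bids)       # transfers of the core mechanism
        revenue = -sum(t)              # revenue the core mechanism retains
        n = len(t)
        return [t_i + revenue / n for t_i in t]
    return balanced
```

The less revenue the core mechanism retains, the smaller the incentive the added share creates to deviate, which is exactly the tradeoff described above.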

3.3 The balanced-WCO mechanism

The WCO (worst-case optimal) mechanism of Guo and Conitzer (2009) and Moulin (2009), which is only defined for typespaces with multi-unit allocation structure (this includes AON), minimizes the worst-case (maximum) revenue of the VCG mechanism (see Remark 1 in Moulin (2009)). The description of this mechanism is relatively complex, but for AON settings it can serve as the core of a balanced mechanism that requires a level of altruism that decreases exponentially as the population size grows. For agents that are altruistic, but not quite to the level required by the balanced-redistribution mechanism, trading simplicity for the lower altruism requirement of the following mechanism may be worthwhile. We introduce the balanced-WCO mechanism for AON settings:

Definition 13. (The balanced-WCO mechanism for AON typespaces) Given bids $v = (v_1, \ldots, v_n) \in \Re^n$, the outcome preferred by the highest bidder $h$ (with ties broken arbitrarily) is chosen and the following transfer payments are made. Letting:

$$r_j(v_{-j}) = \sum_{k=2}^{n-1} \frac{(n-1)(-1)^{k-2} \sum_{c=k}^{n-1} \binom{n-1}{c}}{k \binom{n-1}{k} (2^{n-1} - 1)}\, v^{(k)}_{-j}, \quad \forall j \in I, \qquad (34)$$

$$T_h(v) = r_h(v_{-h}) + \frac{v^{(2)} - \sum_{j \in I} r_j(v_{-j})}{n} - v^{(2)} \qquad (35)$$

$$T_i(v) = r_i(v_{-i}) + \frac{v^{(2)} - \sum_{j \in I} r_j(v_{-j})}{n}, \quad \forall i \in I \setminus \{h\} \qquad (36)$$
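A direct transcription of these transfers into Python (our sketch of Eqs. (34)-(36) as reconstructed here, with illustrative names) might look as follows; note that the rebates and equal shares cancel against the winner's payment, so strong budget-balance holds by construction.

```python
# A sketch (ours) of the balanced-WCO transfers of Definition 13.
from math import comb

def wco_rebate(others):
    """r_j(v_{-j}) per Eq. (34); `others` holds the n-1 other agents' bids."""
    n = len(others) + 1
    ranked = sorted(others, reverse=True)
    total = 0.0
    for k in range(2, n):                # k = 2, ..., n-1
        coef = ((n - 1) * (-1) ** (k - 2) * sum(comb(n - 1, c) for c in range(k, n))) \
               / (k * comb(n - 1, k) * (2 ** (n - 1) - 1))
        total += coef * ranked[k - 1]    # k-th highest bid among the others
    return total

def balanced_wco_transfers(bids):
    n = len(bids)
    h = bids.index(max(bids))            # highest bidder, ties broken arbitrarily
    v2 = sorted(bids, reverse=True)[1]   # second-highest bid
    rebates = [wco_rebate(bids[:i] + bids[i+1:]) for i in range(n)]
    share = (v2 - sum(rebates)) / n      # equal share of the remaining revenue
    transfers = [r + share for r in rebates]
    transfers[h] -= v2                   # the winner pays the second-highest bid
    return h, transfers
```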

Theorem 3. For arbitrary non-negative value AON typespace with maximum value $K$, the balanced-WCO mechanism is strongly budget-balanced and ex post individually rational. It is strategyproof if every agent is $\frac{(n-1)K}{n(2^{n-1}-1)}$-altruistic. If every agent has an $\alpha$ threshold regret-based altruism utility function for $\alpha \ge \frac{(n-1)K}{n(2^{n-1}-1)}$, it is welfare-optimal.

Proof sketch. The proof follows the same lines as that of the previous theorems, here noting from (Moulin, 2009, Remark 1) that the maximum revenue yielded by the WCO mechanism in AON domains with maximum value $K$ is $\frac{(n-1)K}{2^{n-1}-1}$.

Note that for large $n$, $\frac{(n-1)K}{n(2^{n-1}-1)} \approx \frac{2K}{2^n}$. We have found that the rate of decrease of the required altruism level is $\frac{K}{n^2}$ in the simple balanced mechanism, $\frac{4K}{n^3}$ in the balanced-redistribution mechanism, and exponential under the balanced-WCO mechanism. Because the redistribution mechanism already yields very little revenue for small $n$ while retaining a very simple form, we now briefly focus further attention there in considering whether the altruism it requires for strategyproofness should in fact be considered small.
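For a sense of scale, here is a tiny numeric comparison (ours) of the two altruism thresholds stated in Theorems 2 and 3, taking $K = 1$:

```python
# Comparing (ours) the altruism thresholds of Theorem 2, 2K/n^2, and
# Theorem 3, (n-1)K/(n(2^{n-1}-1)), with K = 1.
K = 1.0
for n in (3, 5, 10, 15):
    balanced_redistribution = 2 * K / n**2
    balanced_wco = (n - 1) * K / (n * (2 ** (n - 1) - 1))
    print(n, balanced_redistribution, balanced_wco)
# e.g., at n = 10 the WCO-based bound is already about 0.0018 versus 0.02.
```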

3.4 Mildness of the required altruism

We now provide some context for evaluating the positive statement of Theorem 2. We can start with the kind of anecdotal observation of the previous subsection: for, say, a value range of [0, 1] and 10 agents, the theorem proves that the balanced-redistribution mechanism succeeds if agents are all 0.02-altruistic. While this does seem mild, it is more informative to compare the required altruism level to a baseline such as an agent's expected standard-utility from truthful participation in the mechanism, given a distribution over values. If the ratio of the required altruism level to expected standard-utility is low, then we can justifiably say the mechanism succeeds for "mildly altruistic" agents.

This is in fact the case. The illustrations in Figure 3 depict, for values uniformly distributed between 0 and 1, the ex ante distribution over standard-utility for an agent as it relates to the required altruism level.19 Figure 4 depicts the relationship between the required altruism level and interim expected standard utility as a function of value. Figure 5 illustrates the rate of decrease, as group size increases, of the ratio of the required altruism level to (ex ante) expected standard-utility, and also the probability that any given agent obtains less standard-utility than the required altruism level. As we would hope and expect, the required altruism level decreases significantly more quickly, as population size grows, than does the expected standard-utility from participating in the mechanism. For groups of size greater than 5, there is virtually zero probability that an individual will obtain standard-utility less than the required altruism level.

19We also tested truncated normal distributions on the interval [0, 1] with mean 0.5; the results were qualitatively very similar, for diverse standard deviation levels. Results become more favorable as n increases; we provide illustrations for n = 5 and n = 15 to give a sense of the general trend.
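The comparison is easy to reproduce in simulation. The following Monte Carlo sketch (ours, not the paper's experimental code) estimates an agent's ex ante expected standard-utility under the Definition 12 transfers for i.i.d. uniform [0, 1] values, alongside the Theorem 2 bound $2K/n^2$ with $K = 1$:

```python
# A rough Monte Carlo sketch (ours) of the comparison shown in Figures 3-5.
import random

def standard_utility(bids, i):
    """Agent i's standard-utility under Definition 12, truthful bids assumed."""
    n = len(bids)
    ranked = sorted(bids, reverse=True)
    others = sorted(bids[:i] + bids[i+1:], reverse=True)
    t = others[1] / n + (2 / n**2) * (ranked[1] - ranked[2])
    if i == bids.index(ranked[0]):       # i is the (tie-broken) winner
        return bids[i] + t - ranked[1]   # value won, minus the second-highest bid
    return t

for n in (5, 15):
    samples = [standard_utility([random.random() for _ in range(n)], 0)
               for _ in range(200_000)]
    print(n, 2 / n**2, sum(samples) / len(samples))
# Expected standard-utility is roughly 1/(n+1): about 0.167 for n = 5 and
# 0.0625 for n = 15, versus required altruism 0.08 and 0.0089.
```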

[Figure 3 here: two panels, for 5 and 15 agents, each plotting a probability density over standard utility, with the required α (0.08 and 0.0089, respectively) and the expected standard-utility (0.167 and 0.0625, respectively) marked.]

Figure 3: Illustration of the ex ante probability distribution over standard-utility an agent will receive under the balanced-redistribution mechanism, with the expected standard-utility and the bound on required altruism (α) of Theorem 2 highlighted. For an AON domain where agent values are all (independently) uniformly distributed on the interval [0, 1].

[Figure 4 here: two panels, for 5 and 15 agents, plotting expected standard-utility in the balanced-redistribution mechanism against selfish value in [0, 1], with the required α shown for comparison.]

Figure 4: Illustration of the interim expected standard utility for an agent under the balanced-redistribution mechanism, as a function of his value in an AON domain with values independently and uniformly distributed over [0, 1]. Again, the bound on required altruism (α) of Theorem 2 is highlighted.

These pictures notwithstanding, it should be noted that the practical strength of the bound relies on having good information about the range of values agents might have. If, for instance, agent values ranged between 0 and 1 but we only knew that they ranged between 0 and 10, things would not look as good. Similarly, for distributions in which there is an infinitesimal positive probability of getting extreme high values (compared to the average), the $\frac{2K}{n^2}$ bound looks weak because $K$ will be so large compared to the values that are seen in practice.

[Figure 5 here: the ratio of required α to expected standard-utility, and the probability that an individual's standard-utility falls below the required α, plotted against the number of agents (3 through 20).]

Figure 5: The ratio of required altruism to expected standard-utility is less than 0.5 for groups of size 5 or greater, and decreases as the group size grows. For group size greater than 5 there is virtually 0 probability that the required altruism level will be greater than the standard-utility an agent obtains. For i.i.d. uniform values and the balanced-redistribution mechanism.

4 Other settings, negative results

The results of the last section, while positive, are far from dispositive. We saw that strongly budget-balanced auctions can be implemented in equilibrium when agents are α-altruistic for some α that is greater than or equal to a specific value that is, relative to their own valuations, constant and small. While there are plausible utility functions meeting this criterion, it may also be of interest to consider models of altruism of a different sort. Additionally, the results were limited to AON domains. What about more general, unstructured decision-making problems?

In this section we consider generalizations of the following three kinds: 1) we examine incentives in an unrestricted typespace context, i.e., where agent valuations do not necessarily fit AON or any other limiting structure; 2) we consider the more traditional "non-regret-based" linear altruism model where each agent's utility is the sum of his standard-utility plus a constant times the other agents' standard-utility; 3) we consider agents that are "proportionally-altruistic", willing to give up some percentage of their best-response standard-utility (whatever it is), but not all of it. In the first two cases we will show that significant negative results apply in more or less all cases in which agents are not completely altruistic; the situation is somewhat more hopeful for proportionally-altruistic agents, but still difficult. These observations bring an added measure of significance to the positive results of the previous section.

4.1 Difficulty of the unrestricted values setting

The positive results of Section 3 were about what is possible in AON domains, where types have a very specific structure that has its most natural realization in single-item allocation scenarios. These results were achieved by leveraging the upper bounds on revenue that are established under the redistribution mechanism of Cavallo (2006) or the WCO mechanism of Moulin (2009) and Guo and Conitzer (2009) for AON domains. Any upper bound on revenue places, in turn, an upper bound on the amount of agent altruism required for equilibrium implementation of a mechanism that shares that revenue evenly. Unfortunately, though, in an unrestricted values setting no significant bound on revenue can be established. We note the following fact, a direct consequence of results (specifically, Corollary 3.1) in Cavallo (2008):

Proposition 4. For a typespace restricted only by lower bound 0 and upper bound K on agent valuations for any outcome, VCG is the only mechanism that is truthful and efficient in dominant strategies, ex post individually rational, and no-deficit for agents with quasilinear utility.

Using this we now obtain a bound on the potential of any agent to increase revenue by deviating from truthful reporting in an efficient mechanism.

Lemma 4. For a typespace restricted only by lower bound 0 and upper bound K on agent valuations for any outcome, under any mechanism that is truthful and efficient in dominant strategies, ex post individually rational, and no-deficit for agents with quasilinear utility, (n − 1) · K is a tight upper bound on the possible increase in revenue an agent can unilaterally induce by deviating from truth.

Proof. In light of Proposition 4, to prove the lemma it is sufficient to establish that: 1) the VCG mechanism doesn’t generate revenue exceeding (n − 1) · K on any problem instance; and 2) there is a problem instance in which an agent induces an (n − 1) · K increase in revenue by deviating under VCG. The first part can be shown algebraically. On a problem instance in which type profile θ is reported, the revenue under VCG equals:

$$\sum_{i \in I} \big[ v_{-i}(\theta_{-i}, f^*(\theta_{-i})) - v_{-i}(\theta_{-i}, f^*(\theta)) \big] \qquad (37)$$
$$= \sum_{i \in I} v_{-i}(\theta_{-i}, f^*(\theta_{-i})) - (n-1) \cdot v(\theta, f^*(\theta)) \qquad (38)$$
$$\le n \cdot \max_{i \in I} v_{-i}(\theta_{-i}, f^*(\theta_{-i})) - (n-1) \cdot v(\theta, f^*(\theta)) \qquad (39)$$
$$\le \max_{i \in I} v_{-i}(\theta_{-i}, f^*(\theta_{-i})) \qquad (40)$$
$$\le (n-1) \cdot K \qquad (41)$$

The move from Eq. (39) to Eq. (40) holds since $v(\theta, f^*(\theta)) \ge v_{-i}(\theta_{-i}, f^*(\theta_{-i}))$ for arbitrary $\theta$ and $i$. For the second part, consider the following example with upper bound on values $K$:

        v_1   v_2   v_3
  o_1    0     K     K
  o_2    K     0     K
  o_3    K     K     0

Under VCG one of the outcomes is chosen (all are efficient and it doesn't matter which) and revenue is: $(2K - K) + (2K - K) + (2K - 2K) = 2K = (n-1) \cdot K$. But imagine instead that agent $i$ had valuation 0 for every outcome; outcome $o_i$ would be chosen and the revenue would be 0 if he (and the other agents) reported truthfully. He could increase revenue from 0 to $(n-1) \cdot K$ by misreporting according to the table above. Completely analogous examples with analogous results can be constructed for any other value of $n \ge 2$. Simply set the number of outcomes equal to the number of agents, and set values such that each outcome is valued $K$ by each member of a distinct set of $n-1$ agents. (A small numeric check of this example appears below.)

In the unrestricted values setting, the redistribution mechanism reduces to VCG, and moreover since VCG is unique amongst mechanisms with the desired properties, the preceding lemma indicates a strict requirement on the necessary altruism level to achieve strong budget-balance.
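The following snippet (ours; the helper names are illustrative) computes VCG revenue for the three-agent example and confirms the jump from 0 to $(n-1)K$:

```python
# A small check (ours) of the example in the proof of Lemma 4.
def vcg_revenue(values):
    """values[i][o] = agent i's value for outcome o."""
    n, m = len(values), len(values[0])
    def best(agents):
        return max(range(m), key=lambda o: sum(values[i][o] for i in agents))
    chosen = best(range(n))
    revenue = 0.0
    for i in range(n):
        others = [j for j in range(n) if j != i]
        v_others = lambda o: sum(values[j][o] for j in others)
        revenue += v_others(best(others)) - v_others(chosen)  # i's VCG payment
    return revenue

K = 1.0
truthful = [[0, 0, 0], [K, 0, K], [K, K, 0]]     # agent 1 truly values nothing
misreport = [[0, K, K], [K, 0, K], [K, K, 0]]    # agent 1 misreports per the table
print(vcg_revenue(truthful), vcg_revenue(misreport))   # 0.0 and 2K = (n-1)K
```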

Theorem 4. For typespaces restricted only by lower bound 0 and upper bound $K$ on valuations for any outcome, any mechanism that modifies a mechanism that is strategyproof, ex post individually rational, no-deficit, and efficient for agents with quasilinear utility by evenly redistributing the resulting revenue is strategyproof if and only if every agent is $\frac{n-1}{n}K$-altruistic.

Proof. The theorem follows from Lemmas 2 and 4. First, in the positive direction, Lemma 4 implies that $\max_{i,\theta}[\overline{w}_i(\theta_i, \theta_{-i}) - w_i(\theta_i, \theta)] \le \frac{n-1}{n}K$, and given this, Lemma 2 implies that $\frac{n-1}{n}K$-altruism yields strategyproofness. In the negative direction, the example above in the proof of Lemma 4 (along with completely analogous examples for other values of $n$) demonstrates that there exists a scenario in which each agent could increase his standard-utility by a full $\frac{n-1}{n}K$ by deviating from truth. This combined with Lemma 2 entails that the mechanism is not strategyproof when agents are not all $\frac{n-1}{n}K$-altruistic.

Corollary 2. For typespaces restricted only by lower bound 0 and upper bound $K$ on valuations for any outcome, the simple balanced and balanced-redistribution mechanisms are strategyproof if and only if every agent is $\frac{n-1}{n}K$-altruistic.

This result is essentially negative, as it demonstrates that almost total altruism on the part of all agents is required for equilibrium implementation of the balanced-redistribution mechanism. While we haven't proven that no mechanism can balance the budget with only mildly altruistic agents in an unrestricted values setting, the evidence points towards that conjecture. Theorem 4 tells us that if there does exist a balanced mechanism that requires less altruism in this unrestricted setting, it must be identified as such via a very different approach than the one we've used here. Known structure to agent valuations is crucial. While positive results are potentially achievable in other important structured domains such as combinatorial allocation, these results suggest that a complete lack of structure precludes strong budget-balance unless agents are almost completely altruistic.

4.2 Linear non-regret-based altruism

Perhaps the most basic and ubiquitous model of altruism in the previous literature is what we will call linear non-regret-based altruism (see, e.g., Bell and Keeney (2009); Chen and Kempe (2008); Ledyard (1997)). In this model, an agent's utility for a given outcome is proportional to his own selfish-value plus some constant times the value obtained by the other agents. The key distinction from our regret-based notion is that other agents' values are appreciated uniformly across all outcomes, regardless of how the individual fares. In a mechanism design context, this kind of altruism can be defined as follows:

Definition 14. (linear non-regret-based altruism utility function) For constants $\beta \in \Re$ and $k \in \Re$, $\forall \theta_i \in \Theta_i$, $\forall \hat\theta \in \Theta$,

$$u_i(\theta_i, \hat\theta) = k\,[w_i(\theta_i, \hat\theta) + \beta\, w_{-i}(\hat\theta)] \qquad (42)$$

So, for instance, if we take $\beta = 1$ and $k = \frac{1}{n}$, $u_i(\theta_i, \theta)$ will equal the average standard-utility obtained among the group (according to reports $\theta$). One might hope and even expect that this kind of altruism, particularly for $\beta$ close to 1, would allow for efficient allocation of a single item with agents retaining all value. The following two lemmas will lead to a theorem demonstrating that this will not be possible except in the case of $\beta = 1$ (complete-altruism). The first lemma shows that, without complete-altruism, in AON typespaces if two reports for an agent yield the same outcome given others' reports, then he must receive the same payment in either case.20

20We continue to focus on mechanisms that are efficient in the sense of maximizing the aggregate selfish-value amongst agents; for agents with β linear non-regret-based altruism for β = 1, any strongly budget-balanced mechanism that is efficient in this sense also maximizes agents' actual utilities.

Lemma 5. For arbitrary AON typespace, for agents with $\beta$ linear non-regret-based altruism utility functions for any $\beta \ne 1$, if a mechanism $(f^*, T)$ is strategyproof and strongly budget-balanced, then $\forall i \in I$, $\forall \theta_i', \theta_i'' \in \Theta_i$, $\forall \theta_{-i} \in \Theta_{-i}$, if $f^*(\theta_i', \theta_{-i}) = f^*(\theta_i'', \theta_{-i})$ then $T_i(\theta_i', \theta_{-i}) = T_i(\theta_i'', \theta_{-i})$.

Proof. See Appendix.

The next lemma shows that, assuming some basic expressivity of the value space, each agent's transfer must reflect his impact on the welfare of the other agents. We use the term "0,1-admitting" to indicate a value space that includes the values 0 and 1 for every agent (i.e., one in which we can't exclude, a priori, the possibility of any agent having value 0 or 1 for his preferred outcome).21

Lemma 6. For arbitrary AON typespace with a continuous, 0,1-admitting value space, for agents with $\beta$ linear non-regret-based altruism utility functions for any $\beta \ne 1$, if a mechanism $(f^*, T)$ is strategyproof and strongly budget-balanced, then $\forall i \in I$, $\forall \theta_i', \theta_i'' \in \Theta_i$, $\forall \theta_{-i} \in \Theta_{-i}$, $T_i(\theta_i', \theta_{-i}) - T_i(\theta_i'', \theta_{-i}) = v_{-i}(\theta_{-i}, f^*(\theta_i', \theta_{-i})) - v_{-i}(\theta_{-i}, f^*(\theta_i'', \theta_{-i}))$.

Proof. See Appendix.

Lemma 6 tells us that if a mechanism $(f^*, T)$ exists that is strategyproof and strongly budget-balanced for continuous, 0,1-admitting AON typespaces and agents with linear non-regret-based altruism utility for $\beta \ne 1$, it must satisfy $T_i(\theta) = v_{-i}(\theta_{-i}, f^*(\theta)) + h_i(\theta_{-i})$, $\forall \theta \in \Theta$, for some function $h_i : \Theta_{-i} \to \Re$; i.e., it must be a Groves mechanism. Given this, we will now be able to show that no such mechanism exists, at least if we restrict attention to anonymous mechanisms. Informally, an anonymous mechanism is one that does not discriminate based on agent identity; i.e., the payments it specifies would be the same if the identities of the agents were mixed up or hidden in computing payments (and the outcome) and reassociated only after-the-fact. An agent's expected value for the outcome chosen by an anonymous mechanism, and his transfer payment received, is invariant to identity information about the agent, i.e., information that could not, in principle, be associated with any other agent.

Theorem 5. For any typespace that is at least as general as AON with a continuous, 0,1-admitting value space, for agents with $\beta$ linear non-regret-based altruism utility functions for any $\beta \ne 1$ (i.e., not completely altruistic), there exists no anonymous mechanism $(f^*, T)$ that is strategyproof and strongly budget-balanced.

Proof. Consider an arbitrary AON typespace $\Theta$ with a continuous, 0,1-admitting value space. Assume $\beta$ linear non-regret-based altruism utility functions for any $\beta \ne 1$, and assume existence of an anonymous, strongly budget-balanced, and strategyproof mechanism $(f^*, T)$ (this is without loss of generality by the revelation principle). We will show a contradiction.

21Note that Lemma 6 holds for the weaker assumption that agent value spaces are continuous and overlapping, i.e., that there is some continuous interval $(x, y)$ for some $y > x$ such that all agent value spaces include $(x, y)$. We present the lemma with the stronger 0,1-admitting assumption only for clarity.

By Lemma 6 we know that $(f^*, T)$ must be a Groves mechanism, and so specifying $(f^*, T)$ amounts to specifying the $h_i$ function for each $i$. Also, since $\Theta$ is AON each agent $i$'s type can be fully represented by a single value $z_i$, and so for each $i$, $h_i : \Re^{n-1} \to \Re$. By the assumption that $(f^*, T)$ is anonymous, for any set of $n-1$ values $a, b, c, \ldots$, for any two agents $i$ and $j$, $E[h_i(a, b, c, \ldots)] = E[h_j(a, b, c, \ldots)]$. That is, there is a function $h : \Re^{n-1} \to \Re$ such that, for any length $n-1$ vector $x$ of real values admitted by the value space, and any $i, j \in I$, $E[h_i(x)] = E[h_j(x)] = h(x)$.

Now consider an arbitrary value profile $z \in \Re^n$ and observe that, by the definition of AON, for a single agent $i \in \arg\max_{j \in I} z_j$ (the "winner"), $T_i(z) = h_i(z_{-i})$, and for all $j \ne i$, $T_j(z) = z_i + h_j(z_{-j})$. So by strong budget-balance of the mechanism, for any vector of values $z$, $\sum_{j \in I} h_j(z_{-j}) = -(n-1) \cdot \max_{i \in I} z_i$. This implies the weaker condition that, considering uncertainty due, e.g., to random tie-breaking, in expectation the two sides are equal, i.e., $\sum_{j \in I} E[h_j(z_{-j})] =$

$$\sum_{j \in I} h(z_{-j}) = -(n-1) \cdot \max_{i \in I} z_i \qquad (43)$$

Consider the case in which all agents have value 0 ($z_i = 0$ for every $i$). Then $z_{-i}$ for each $i$ will be the vector of $n-1$ zeros and $\sum_{i \in I} h(z_{-i}) = n \cdot h(0, 0, \ldots, 0)$. From Eq. (43), since $\max\{0, 0, \ldots, 0\} = 0$, we know that $h(0, 0, \ldots, 0) = 0$. Also, in the case where $z_i = 1$ for every $i$, we have:

$$\sum_{j \in I} h(1, 1, \ldots, 1) = -(n-1) \cdot \max_{i \in I} z_i, \text{ i.e.,} \qquad (44)$$
$$n \cdot h(1, 1, \ldots, 1) = 1 - n, \text{ i.e.,} \qquad (45)$$
$$h(1, 1, \ldots, 1) = \frac{1-n}{n} \qquad (46)$$

More generally, let $\sigma_q$ denote the length $n-1$ vector in which $q$ values are 1 and the other $n-q-1$ values are 0. We showed above that $h(\sigma_0) = 0$ and $h(\sigma_{n-1}) = \frac{1-n}{n}$. Now, getting more abstract, for arbitrary $m \in \{0, \ldots, n-2\}$ consider a scenario in which $m+1$ agent values are 1 and the other $n-m-1$ are 0. For any of the agents with value 1, the $h$ value computed for his payment will have input $\sigma_m$, and for an agent with value 0, the $h$ value computed for his payment will have input $\sigma_{m+1}$. The budget-balance equation Eq. (43) requires that:

$$(m+1)\,h(\sigma_m) + (n-m-1)\,h(\sigma_{m+1}) = -(n-1) \qquad (47)$$

Solving for $h(\sigma_m)$, we have:

$$h(\sigma_m) = \frac{-(n-1) - (n-m-1)\,h(\sigma_{m+1})}{m+1} \qquad (48)$$

Assume that $h(\sigma_{m+1}) = \frac{1-n}{n}$. Then:

$$h(\sigma_m) = \frac{-(n-1) - (n-m-1)\frac{1-n}{n}}{m+1} \qquad (49)$$
$$= \frac{1}{m+1}\left(\frac{n-n^2}{n} - \frac{n - n^2 - m + mn - 1 + n}{n}\right) \qquad (50)$$
$$= \frac{1}{m+1} \cdot \frac{-mn - n + m + 1}{n} \qquad (51)$$
$$= \frac{m + 1 - n(m+1)}{n(m+1)} \qquad (52)$$
$$= \frac{1-n}{n} \qquad (53)$$

Since we know that $h(\sigma_{n-1})$ does equal $\frac{1-n}{n}$ (Eq. (46)), this demonstrates that $h(\sigma_{n-2}), h(\sigma_{n-3}), \ldots, h(\sigma_0)$ all equal $\frac{1-n}{n}$. But this contradicts the fact that $h(\sigma_0) = 0$. Since we've shown that no anonymous, strongly budget-balanced, and dominant strategy efficient mechanism exists for continuous, 0,1-admitting AON domains, none exists for any typespace that includes such a domain as a subspace. The theorem follows.

The anonymity condition is not essential to the result, but allows for a cleaner proof; the 0,1-admitting condition could be replaced with x,y-admitting for any $x < y$. Note also that this negative theorem does not rely at all on individual rationality constraints, which will also typically be required by a mechanism designer. Also, the special case of $\beta = 0$ (and constant $k = 1$) corresponds to quasilinear utility, and so the above provides a proof of the impossibility of an efficient mechanism with strong budget-balance in the typical selfish setting, already known to hold by Hurwicz and Walker (1990).
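The counting argument is easy to verify numerically; the following check (ours) unwinds the recursion of Eq. (48) backwards from $h(\sigma_{n-1})$:

```python
# A quick numeric check (ours) of the recursion in Eqs. (47)-(48): starting
# from h(sigma_{n-1}) = (1-n)/n, every h(sigma_m) solves to (1-n)/n,
# contradicting h(sigma_0) = 0.
def h_values(n):
    h = {n - 1: (1 - n) / n}
    for m in range(n - 2, -1, -1):
        h[m] = (-(n - 1) - (n - m - 1) * h[m + 1]) / (m + 1)
    return h

print(h_values(5))   # every entry equals (1-5)/5 = -0.8
```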

Corollary 3. For any typespace that is at least as general as AON with a continuous, 0,1-admitting value space, for agents with quasilinear utility there exists no anonymous mechanism that is strongly budget-balanced and efficient in dominant strategies.

In a sense, this fact is the underlying motivation for this entire paper: if a strongly budget-balanced and efficient mechanism did exist for selfish agents, then it would be strategyproof for completely selfish and altruistic agents alike.

4.3 Altruism proportional to lost standard-utility

In this subsection we return to a regret-based model of altruism, but we now explore whether the positive results of Section 3 carry over to a setting with a variation of α-altruism in which agents are willing to give up a certain percentage of the standard-utility they could achieve. Intuitively, we can take α for each agent $i$ with type $\theta_i$ to be a fraction of $\overline{w}_i(\theta_i, \hat\theta_{-i})$ when other agents report $\hat\theta_{-i}$. We will call an agent with this kind of altruism "proportionally-altruistic".

Definition 15 (ǫ-proportional-altruism). An agent $i$ is ǫ-proportionally-altruistic if he has a regret-based altruism utility function that is fully altruistic if proportion $1 - \epsilon$ of best-response standard-utility is achieved and fully selfish if all best-response standard-utility is sacrificed; it can be otherwise arbitrarily defined. That is, for constant $k \in \Re^+$ and altruism-coefficient function $\rho : \Re^2 \to [0, 1]$, $\forall \theta_i \in \Theta_i$, $\forall \hat\theta \in \Theta$:

$$u_i(\theta_i, \hat\theta) = k\,[w_i(\theta_i, \hat\theta) + \rho(\overline{w}_i(\theta_i, \hat\theta_{-i}), w_i(\theta_i, \hat\theta)) \cdot w_{-i}(\hat\theta)] \qquad (54)$$

with the constraint:

$$\rho(\overline{w}_i(\theta_i, \hat\theta_{-i}), w_i(\theta_i, \hat\theta)) = \begin{cases} 1 & \text{if } (1-\epsilon)\,\overline{w}_i(\theta_i, \hat\theta_{-i}) \le w_i(\theta_i, \hat\theta) \\ 0 & \text{if } \epsilon \ne 1 \text{ and } \overline{w}_i(\theta_i, \hat\theta_{-i}) > w_i(\theta_i, \hat\theta) = 0 \end{cases}$$

So, for instance, a 1-proportionally-altruistic agent is completely-altruistic, and a 0.25-proportionally-altruistic agent will always be willing to give up one quarter of his best-response standard-utility for the good of the group. This notion of proportional-altruism becomes nonsensical in an environment where $w_i$ may be negative, so we will consider mechanisms that are selfishly-IR, where agents obtain standard-utility at least 0 whenever they are truthful, i.e., where $w_i(\theta_i, \theta_i, \hat\theta_{-i}) \ge 0$, $\forall \theta_i, \hat\theta_{-i}$.

While α-altruism is a purely positive statement about how altruistic an agent is, we've defined ǫ-proportional-altruism (for $\epsilon < 1$) to imply selfishness at the extreme end of the proportional spectrum where an agent sacrifices all standard-utility. This is motivated by a desire to make a clear distinction between completely-altruistic agents and merely proportionally-altruistic agents, which will allow us to present a negative result that is illuminating (Lemma 8). ǫ-proportional-altruism for $\epsilon < 1$ definitionally means there are at least some cases in which the agent is not completely-altruistic. For example, in a setting with 5 agents, if a particular agent $i$ is $\frac{1}{n}$-proportionally-altruistic and able to realize standard-utility 10 given his type and the reports of other agents, he would be willing to sacrifice at least $\frac{10}{5} = 2$ units of utility for the benefit of the group, but he would not be willing to give up all 10.

Perhaps the simplest proportional-altruism utility function is the analog of the sharp threshold utility function of Eq. (7), which can be written for this setting as follows ($\epsilon = 1$ is completely-altruistic, $\epsilon = 0$ is completely-selfish):

$$u_i(\theta_i, \hat\theta) = \begin{cases} w_i(\theta_i, \hat\theta) + w_{-i}(\hat\theta) & \text{if } (1-\epsilon)\,\overline{w}_i(\theta_i, \hat\theta_{-i}) \le w_i(\theta_i, \hat\theta) \\ w_i(\theta_i, \hat\theta) & \text{otherwise} \end{cases}$$

We now provide a pair of lemmas, one positive and one negative, that will help us understand what we may be able to achieve in this setting, and how to achieve it.
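Before proceeding, a small illustration: the sharp-threshold variant just written is straightforward to express in code (a sketch of ours; the argument names are illustrative):

```python
# A minimal sketch (ours) of the sharp-threshold proportional-altruism
# utility function written above.
def proportional_threshold_utility(w_i, w_best, w_others, eps):
    """w_i: agent i's standard-utility under the reports made; w_best: i's
    best-response standard-utility; w_others: the others' aggregate
    standard-utility; eps: the proportional-altruism level in [0, 1]."""
    if (1 - eps) * w_best <= w_i:    # enough standard-utility secured: altruistic
        return w_i + w_others
    return w_i                       # otherwise selfish
```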

Lemma 7. For any typespace $\Theta$ and any strongly budget-balanced mechanism $(f^*, T)$, for each $i \in I$ letting $\delta_i = \min_{\theta \in \Theta} \frac{w_i(\theta_i, \theta)}{\overline{w}_i(\theta_i, \theta_{-i})}$, $(f^*, T)$ is strategyproof if each agent $i$ is $(1 - \delta_i)$-proportionally-altruistic.

Proof. For arbitrary agent $i \in I$, let $\delta_i = \min_{\theta \in \Theta} \frac{w_i(\theta_i, \theta)}{\overline{w}_i(\theta_i, \theta_{-i})}$. Assume that each agent $i$ is $(1 - \delta_i)$-proportionally-altruistic. Now, for all $i$, $\theta_i$, and $\hat\theta_{-i}$,

$$\delta_i \cdot \overline{w}_i(\theta_i, \hat\theta_{-i}) = \min_{\tilde\theta \in \Theta} \frac{w_i(\tilde\theta_i, \tilde\theta)}{\overline{w}_i(\tilde\theta_i, \tilde\theta_{-i})} \cdot \overline{w}_i(\theta_i, \hat\theta_{-i}) \qquad (55)$$

$$\le \frac{w_i(\theta_i, \theta_i, \hat\theta_{-i})}{\overline{w}_i(\theta_i, \hat\theta_{-i})} \cdot \overline{w}_i(\theta_i, \hat\theta_{-i}) \qquad (56)$$

$$= w_i(\theta_i, \theta_i, \hat\theta_{-i}) \qquad (57)$$

So $\rho(\overline{w}_i(\theta_i, \hat\theta_{-i}), w_i(\theta_i, \theta_i, \hat\theta_{-i})) = 1$ for all $\theta_i, \hat\theta_{-i}$. Since the mechanism implements $f^*$ and is strongly budget-balanced, social standard-utility is maximized by truth, and so truthfulness and efficiency in dominant strategies follows.

Lemma 8. For any non-negative value typespace $\Theta$, for any $\epsilon < 1$, a strongly budget-balanced mechanism $(f^*, T)$ is not strategyproof for agents that are ǫ-proportionally-altruistic if $\exists i \in I$ and $\theta \in \Theta$ such that $\overline{w}_i(\theta_i, \theta_{-i}) > w_i(\theta_i, \theta) = 0$.

Proof. Assume there exists an $i \in I$ and $\theta \in \Theta$ such that $\overline{w}_i(\theta_i, \theta_{-i}) > w_i(\theta_i, \theta) = 0$. Let $\theta_i' = \arg\max_{\theta_i'' \in \Theta_i} w_i(\theta_i, \theta_i'', \theta_{-i})$. By the definition of ǫ-proportional-altruism (for $\epsilon < 1$), $u_i(\theta_i, \theta) = w_i(\theta_i, \theta) = 0$ and $u_i(\theta_i, \theta_i', \theta_{-i}) \ge k \cdot w_i(\theta_i, \theta_i', \theta_{-i}) > 0$ (for some $k \in \Re^+$). So strategyproofness fails.

Unfortunately, this demonstrates that the positive results we saw in Section 3 fail to carry over generically to this intuitively compelling proportional-altruism setting. The lemma says, essentially, that we must minimally guarantee that any agent that can improve standard-utility by deviation will obtain some positive standard-utility from being honest. This excludes the mechanisms of the previous section.

Theorem 6. The balanced-redistribution mechanism is not strategyproof for agents that are ǫ-proportionally-altruistic for any ǫ < 1, even for the restricted setting of AON domains.

Proof. We will prove the theorem by example. Consider a three-agent AON scenario in which agent 1 has value 9 and agents 2 and 3 both have value 0. If all agents report truthfully, outcome $o_1$ will be chosen and no payments will occur. Notably, $w_2 = w_3 = 0$. If, on the other hand, agent 2 reports value $x \in (0, 9)$, the revenue from the redistribution mechanism will be $x - \frac{x}{3} = \frac{2x}{3}$. Then under the balanced-redistribution mechanism, agent 2 will get a 1/3 share of that, for a total standard-utility of $\frac{2x}{9}$. So $\overline{w}_2 > w_2 = 0$, and given Lemma 8, strategyproofness fails.
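A quick arithmetic check of this example (ours, with an arbitrary choice of $x$):

```python
# Checking the Theorem 6 example (ours): true values (9, 0, 0); if agent 2
# reports x in (0, 9), the redistribution revenue is 2x/3 and agent 2's
# equal share (on top of its zero rebate) gives standard-utility 2x/9 > 0.
x = 6.0
winner_pays = x                              # v^{(2)} given bids (9, x, 0)
rebates = [0.0, 0.0, x / 3]                  # v^{(2)}_{-i} / n for each agent
revenue = winner_pays - sum(rebates)         # = 2x/3
agent2_utility = rebates[1] + revenue / 3    # = 2x/9
print(agent2_utility, 2 * x / 9)             # both print 1.333...
```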

However, this is not the end of the story. The balanced-redistribution and balanced-WCO mechanisms succeed for α-altruistic agents, intuitively, because they have strategyproof mechanisms at their core that yield very little revenue. Thus, redistributing this revenue evenly can only present a limited incentive for deviation from truth. The example above shows that, though that incentive is indeed small in absolute terms, it is enough to sway proportionally-altruistic agents. But we can take a very different approach here and achieve some success. If we can guarantee that every agent will get a certain share of the welfare an outcome generates, then we can cap the proportional gain that deviation can bring. As before, given a vector of bids $\hat{v} = \{\hat{v}_1, \ldots, \hat{v}_n\}$, let $\hat{v}^{(i)}$ denote the $i$th highest bid, and consider the following mechanism.

Definition 16. (simple proportional balanced-redistribution mechanism for AON typespaces) Given bids $\hat{v} = \{\hat{v}_1, \ldots, \hat{v}_n\}$, with high bidder $k$ (breaking ties arbitrarily), outcome $o_k$ is chosen and the following transfers are made:

$$T_k(\hat{v}) = -\gamma \hat{v}^{(1)} + \frac{1}{n}\gamma \hat{v}^{(1)} \quad \text{and} \quad T_i(\hat{v}) = \frac{1}{n}\gamma \hat{v}^{(1)}, \quad \forall i \ne k, \qquad (58)$$

where $\gamma = \arg\max_{\gamma' \in [0,1]} \min\left\{\frac{n - \gamma' n + \gamma'}{n}, \frac{\gamma'}{n - \gamma' n + \gamma'}\right\}$.

Each agent is paid a constant $\frac{\gamma}{n}$ times the highest bid, and the highest bidder must additionally pay $\gamma$ times his bid. Let $g(n) = \max_{\gamma' \in [0,1]} \min\left\{\frac{n - \gamma' n + \gamma'}{n}, \frac{\gamma'}{n - \gamma' n + \gamma'}\right\}$. We will see in the theorem below that $(1 - g(n))$-proportional-altruism is sufficient for strategyproofness of the above mechanism. To get a sense of what this means, these altruism levels for population sizes 2 to 13 are presented in Table 4.

Theorem 7. For any non-negative value AON typespace, the simple proportional balanced-redistribution mechanism is strategyproof, strongly budget-balanced, ex post individually rational, and welfare-optimal for agents that are $(1 - g(n))$-proportionally-altruistic, where $n$ denotes the number of agents.

Proof. As before, let $v_1, \ldots, v_n$ denote the true values of the agents and $\hat{v}^{(i)}$ denote the $i$th highest reported value. The aggregate payments made by the center equal:

$$-\gamma \hat{v}^{(1)} + \frac{1}{n}\gamma \hat{v}^{(1)} + (n-1)\frac{1}{n}\gamma \hat{v}^{(1)} = 0, \qquad (59)$$

so strong budget-balance holds. Considering ex post individual rationality, the top-bidder $k$ (making bid $\hat{v}^{(1)}$), if truthful, obtains standard-utility:

$$\hat{v}^{(1)} - \gamma \hat{v}^{(1)} + \frac{1}{n}\gamma \hat{v}^{(1)} = \frac{n - \gamma n + \gamma}{n}\hat{v}^{(1)} = \frac{n - \gamma n + \gamma}{n} v_k, \qquad (60)$$

  n    1 − g(n)        n    1 − g(n)
  2     0.385          8     0.691
  3     0.500          9     0.705
  4     0.570         10     0.720
  5     0.612         11     0.733
  6     0.642         12     0.742
  7     0.669         13     0.753

Table 4: A listing of population size along with a corresponding sufficient proportional-altruism level to yield dominant strategy efficiency of the simple proportional balanced-redistribution mechanism.
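The $g(n)$ optimization is one-dimensional and easy to approximate by grid search, as in the sketch below (ours); small discrepancies from Table 4 may come from discretization or rounding.

```python
# A grid-search sketch (ours) of gamma and g(n) as defined above.
def g(n, steps=100_000):
    best = 0.0
    for s in range(steps + 1):
        gamma = s / steps
        denom = n - gamma * n + gamma      # always >= 1 for gamma in [0, 1]
        best = max(best, min(denom / n, gamma / denom))
    return best

for n in range(2, 14):
    print(n, round(1 - g(n), 3))   # sufficient proportional-altruism levels
```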

which is at least 0 for any $\gamma \in [0, 1]$ and any $v_k \ge 0$. All other bidders obtain standard-utility equal to $\frac{1}{n}\gamma \hat{v}^{(1)}$ which, again, is at least 0 for $\gamma \ge 0$ and $v_k \ge 0$. Now we address truthfulness. First consider the agent $k$ whose true value is highest. When all other agents report value 0, if $k$ reports value arbitrarily close to 0 his standard-utility will be arbitrarily close to $v_k$, and reporting any higher value will only decrease his standard-utility. Alternatively if there is another agent whose value is positive, let $j$ be the highest-value reporting agent besides $k$. If $k$ reports the highest value he will make some positive payment and obtain standard-utility less than $v_k$. If he under-reports his value sufficiently so as to "lose", he will obtain standard-utility equal to $\frac{1}{n}\gamma \hat{v}_j \le \frac{1}{n}\gamma v_k < v_k$. So an upper bound on the standard-utility $k$ can obtain is $v_k$. Then letting $w_k$ be $k$'s standard-utility when truthful, and $\overline{w}_k$ be his best-response standard-utility, we know that, over any possible realization of values and reports,

$$\frac{w_k}{\overline{w}_k} \ge \frac{\frac{n - \gamma n + \gamma}{n} v_k}{v_k} = \frac{n - \gamma n + \gamma}{n} \qquad (61)$$

Now consider an arbitrary agent $j$ who is not the "winner" ($j \ne k$). $j$'s standard-utility will either be $\frac{1}{n}\gamma \hat{v}_k$ (from reporting a value less than $\hat{v}_k$) or $v_j - \frac{n-1}{n}\gamma \hat{v}_j$ (from reporting a value $\hat{v}_j \ge \hat{v}_k$), which is less than or equal to $v_j - \frac{n-1}{n}\gamma \hat{v}_k$. $j$'s value from truthful reporting is $\frac{1}{n}\gamma \hat{v}_k$. If $\frac{1}{n}\gamma \hat{v}_k \ge v_j - \frac{n-1}{n}\gamma \hat{v}_k$, then $\frac{w_j}{\overline{w}_j} = 1$. Otherwise, $v_j - \frac{n-1}{n}\gamma \hat{v}_k \ge \frac{1}{n}\gamma \hat{v}_k \ge 0$, and:

$$\frac{w_j}{\overline{w}_j} \ge \frac{\frac{1}{n}\gamma \hat{v}_k}{v_j - \frac{n-1}{n}\gamma \hat{v}_k} = \frac{\gamma \hat{v}_k}{n v_j - (n-1)\gamma \hat{v}_k} \ge \frac{\gamma \hat{v}_k}{n \hat{v}_k - (n-1)\gamma \hat{v}_k} = \frac{\gamma}{n - \gamma n + \gamma} \qquad (62)$$

Thus we can conclude that the ratio of each agent's standard-utility under truth

compared to best-response standard-utility is at least:

$$\min_{h \in I} \frac{w_h}{\overline{w}_h} \ge \min\left\{\frac{n - \gamma n + \gamma}{n}, \frac{\gamma}{n - \gamma n + \gamma}\right\} \qquad (63)$$

In the mechanism, $\gamma$ is chosen from the interval [0, 1] to maximize this minimum. Then, by Lemma 7, an agent that is $(1 - g(n))$-proportionally-altruistic for $g(n) = \max_{\gamma' \in [0,1]} \min\left\{\frac{n - \gamma' n + \gamma'}{n}, \frac{\gamma'}{n - \gamma' n + \gamma'}\right\}$ will be truthful given that no non-truthful reports can improve social standard-utility; and none can, by strong budget-balance of the mechanism and the definition of $f^*$. A direct analog of Lemma 1 applies here, which yields welfare-optimality.

The simple proportional balanced-redistribution mechanism is just that: simple. This style of altruism is not the focus of this paper, and we strongly suspect that other mechanisms will achieve smaller bounds on the altruism sufficient for strategyproofness. Our purpose in presenting the mechanism is mainly to highlight the fact that, unlike in the case of linear non-regret-based altruism, the situation for proportionally-altruistic agents is far from hopeless. In fact, in a decision-making scenario with 3 agents that are all willing to give up half of the maximum standard-utility they could achieve, even the simple mechanism we presented is successful.

5 Discussion

The main positive contributions of this paper are the introduction of a regret-based model of altruism, and the demonstration that a relatively small amount of such altruism is sufficient to overcome a critical negative result for mechanism design in fully self-interested (quasilinear) settings. We provided a single-item allocation mechanism that is strategyproof, strongly budget-balanced, ex post individually rational, and welfare-optimal for agents with a certain amount of altruism that, as we demonstrated, is small relative to the expected standard-utility the mechanism will yield for them. Although we obtained a negative result for the unrestricted typespace, it seems clear that a similar positive analysis of the impact of relatively small amounts of altruism could be done for other restricted settings such as trade of a single item or decisions regarding a public project.

The crucial observation underlying these positive results is of the same character as that underlying the earlier work on redistribution mechanisms and in fact much other work in mechanism design: namely, when the mechanism designer knows something a priori about the way agents value outcomes (and transfer payments), this knowledge can be harnessed to obtain positive results where otherwise none would be possible. Here, that "something" is the fact that agents are semi-altruistic, i.e., are happy to give up some (perhaps small) amount of utility for the good of the group as a whole.

The relevance of these results stands or falls on the legitimacy of the "knowledge" that the center may have. Here, that means our findings solidly tell us something useful about scenarios where it is known that agents have at least this small amount of altruism. But, one might object, it is very difficult to know this will be the case. Our response is that this is very true and indeed problematic, but it is a problem here no more and no less than it is a problem of mechanism design as a whole. As we demonstrated in Proposition 3, standard celebrated results including the VCG mechanism are equally fragile when faced with the possibility that agent utilities are non-quasilinear. If agents are even a little bit altruistic, using the standard mechanisms instead of the ones we presented here could lead to bad results. The moral of the story is that the whole enterprise is fragile in its reliance on having accurate models of agent utilities. We hope to have contributed here an approach to be used when there is good knowledge that points to at least a small willingness to sacrifice utility for the good of all, rather than the extreme selfishness that has traditionally been assumed by mechanism designers.

Acknowledgments

I am grateful to Michael Kearns for some helpful ideas regarding the presentation of utility functions satisfying the α-altruism property in Section 2.1, and to David Parkes for detailed discussions, and also specifically for important suggestions on how to demonstrate that the bound on required altruism is in fact mild (Section 3.4). Thanks also to Preston McAfee.

References

Andreoni J, Miller J (2002) Giving according to GARP: An experimental test of the consistency of preferences for altruism. Econometrica 70(2):737–753

Bailey MJ (1997) The demand revealing process: To distribute the surplus. Public Choice 91:107–126

Bell DE, Keeney RL (2009) Altruistic utility functions for joint decisions. In: The Mathematics of Preference, Choice and Order, Springer Berlin Heidelberg, pp 27–38

Benabou R, Tirole J (2003) Intrinsic and extrinsic motivation. Review of Economic Studies 70:489–520

Borgs C, Chayes J, Immorlica N, Mahdian M, Saberi A (2005) Multi-unit auctions with budget-constrained bidders. In: Proceedings of the 6th ACM conference on Electronic Commerce, pp 44–51

Bowles S, Hwang SH (2008) Social preferences and public economics: Mechanism design when social preferences depend on incentives. Journal of Public Economics 92:1811–1820

Brandt F, Weiss G (2001) Antisocial agents and Vickrey auctions. In: Proceedings of the 8th Workshop on Agent Theories, Architectures and Languages, pp 335–347

Cameron J, Banko KM, Pierce WD (2001) Pervasive negative effects of rewards on intrinsic motivation: The myth continues. Behavior Analyst, Special Issue 24(1):1–44

Cavallo R (2006) Optimal decision-making with minimal waste: Strategyproof redistribution of VCG payments. In: Proceedings of the 5th International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS-06), pp 882–889

Cavallo R (2008) Social Welfare Maximization in Dynamic Strategic Decision Problems. Ph.D. Thesis, Harvard University

Charness G, Rabin M (2002) Understanding social preferences with simple tests. Quarterly Journal of Economics 117:817–869

Che YK, Gale I (1998) Standard auctions with financially constrained bidders. Review of Economic Studies 65:1–21

Chen PA, Kempe D (2008) Altruism, selfishness, and spite in traffic routing. In: Proceedings of the 9th ACM Conference on Electronic Commerce (EC-08), pp 140–149

Clarke E (1971) Multipart pricing of public goods. Public Choice 8:19–33

Dasgupta P, Maskin E (2000) Efficient auctions. The Quarterly Journal of Economics 115(2):341–388

Deci EL, Koestner R, Ryan RM (1999) A meta-analytic review of experiments examining the effects of extrinsic rewards on intrinsic motivation. Psychological Bulletin 125(6):627–668

Frey BS, Jegen R (2001) Motivation crowding theory: A survey of empirical evidence. Journal of Economic Surveys 15:589–611

Gibbard A (1973) Manipulation of voting schemes: a general result. Econometrica 41(4):587–601

Green JR, Laffont JJ (1977) Characterization of satisfactory mechanisms for the revelation of preferences for public goods. Econometrica 45:427–438

Green JR, Laffont JJ (1979) Incentives in public decision-making. North Holland, New York

Groves T (1973) Incentives in teams. Econometrica 41:617–631

Guo M, Conitzer V (2008) Undominated redistribution mechanisms. In: Proceedings of the Seventh International Conference on Autonomous Agents and Multiagent Systems (AAMAS-08), pp 1039–1046

Guo M, Conitzer V (2009) Worst-case optimal redistribution of VCG payments in multi-unit auctions. Games and Economic Behavior 67(1):69–98

Henrich J, Boyd R, Bowles S, Camerer C, Fehr E, Gintis H (eds) (2004) Foundations of Human Sociality: Economic Experiments and Ethnographic Evidence from Fifteen Small-Scale Societies. Oxford University Press, New York

Hoffman E, McCabe KA, Shachat K, Smith VL (1994) Preferences, property rights, and anonymity in bargaining games. Games and Economic Behavior 7(3):346–380

Hori H (2006) Altruistic utility functions, unpublished

Hurwicz L, Walker M (1990) On the generic nonoptimality of dominant-strategy allocation mechanisms: A general theorem that includes pure exchange economies. Econometrica 58(3):683–704

Irlenbusch B, Sliwka D (2005) Incentives, decision frames and motivation crowding out: An experimental investigation. Discussion Paper No. 1758

Kucuksenel S (forthcoming) Behavioral mechanism design. Journal of Public Economic Theory

Ledyard JO (1997) Public goods: A survey of experimental research. In: Kagel J, Roth A (eds) Handbook of Experimental Economics, Princeton University Press, pp 111–194

Levine DK (1998) Modeling altruism and spitefulness in experiments. Review of Economic Dynamics 1:593–622

Liang L, Qi Q (2007) Cooperative or vindictive: Bidding strategies in sponsored search auctions. In: Proceedings of the 3rd Workshop on Internet and Network Economics (WINE), pp 167–178

Maskin E, Riley JG (1984) Optimal auctions with risk averse buyers. Econometrica 52(6):1473–1518

Moulin H (2009) Almost budget-balanced VCG mechanisms to assign multiple objects. Journal of Economic Theory 144:96–119

Satterthwaite MA (1975) Strategy-proofness and Arrow's conditions: Existence and correspondence theorems for voting procedures and social welfare functions. Journal of Economic Theory 10:187–217

Seabright P (2004) Continuous preferences can cause discontinuous choices: An application to the impact of incentives on altruism. IDEI Working Paper No. 257

Shinada M, Yamagishi T (2007) Punishing free riders: Direct and indirect promotion of cooperation. Evolution and Human Behavior 28:330–339

Vickrey W (1961) Counterspeculations, auctions, and competitive sealed tenders. Journal of Finance 16:8–37

Appendix

Proposition 2. For any non-negative-value typespace and any $\alpha \ge 0$, an agent with an $\alpha$ scaled threshold altruism utility function is $\alpha$-altruistic in any strongly budget-balanced mechanism context $(f^*, T)$.

Proof. Consider arbitrary non-negative-value typespace $\Theta$, strongly budget-balanced mechanism $(f^*, T)$, agent $i \in I$ with an $\alpha$ scaled threshold utility function, $\hat\theta_{-i} \in \Theta_{-i}$, and $\theta_i, \theta_i' \in \Theta_i$. First consider the case in which $w_i(\theta_i, \theta_i', \hat\theta_{-i}) \ge \overline{w}_i(\theta_i, \hat\theta_{-i}) - \alpha$. If conditions (a) and (b) of Definition 4 are satisfied then:

$$u_i(\theta_i, \theta_i, \hat\theta_{-i}) = \frac{\overline{w}_i(\theta_i, \hat\theta_{-i})}{\tilde{w}_{-i}(\theta_{-i})} \cdot \big(w_i(\theta_i, \theta_i, \hat\theta_{-i}) + w_{-i}(\theta_i, \hat\theta_{-i})\big) \qquad (64)$$

$$\ge \frac{\overline{w}_i(\theta_i, \hat\theta_{-i})}{\tilde{w}_{-i}(\theta_{-i})} \cdot \big(w_i(\theta_i, \theta_i', \hat\theta_{-i}) + w_{-i}(\theta_i', \hat\theta_{-i})\big) = u_i(\theta_i, \theta_i', \hat\theta_{-i}), \qquad (65)$$

and so (c) is satisfied. Now alternatively consider the case in which $w_i(\theta_i, \theta_i', \hat\theta_{-i}) < \overline{w}_i(\theta_i, \hat\theta_{-i}) - \alpha$. Again assuming (a) and (b) of Definition 4 are satisfied then:

$$u_i(\theta_i, \theta_i, \hat\theta_{-i}) = \frac{\overline{w}_i(\theta_i, \hat\theta_{-i})}{\tilde{w}_{-i}(\theta_{-i})} \cdot \big(w_i(\theta_i, \theta_i, \hat\theta_{-i}) + w_{-i}(\theta_i, \hat\theta_{-i})\big) \qquad (66)$$

$$\ge \overline{w}_i(\theta_i, \hat\theta_{-i}) \qquad (67)$$

$$\ge w_i(\theta_i, \theta_i', \hat\theta_{-i}) = u_i(\theta_i, \theta_i', \hat\theta_{-i}), \qquad (68)$$

where the move from (66) to (67) is valid because strong budget-balance and choice function $f^*$ imply $w_i(\theta_i, \hat\theta) + w_{-i}(\hat\theta) \ge \tilde{w}_{-i}(\theta_{-i})$, since $i$'s value for every outcome is non-negative. So (c) is again satisfied, and thus $\alpha$-altruism holds.

Lemma 5. For arbitrary AON typespace, for agents with $\beta$ linear non-regret-based altruism utility functions for any $\beta \ne 1$, if a mechanism $(f^*, T)$ is strategyproof and strongly budget-balanced, then $\forall i \in I$, $\forall \theta_i', \theta_i'' \in \Theta_i$, $\forall \theta_{-i} \in \Theta_{-i}$, if $f^*(\theta_i', \theta_{-i}) = f^*(\theta_i'', \theta_{-i})$ then $T_i(\theta_i', \theta_{-i}) = T_i(\theta_i'', \theta_{-i})$.

Proof. Assume $\beta$ linear non-regret-based altruism utility functions for any $\beta \ne 1$, and consider an arbitrary strategyproof and strongly budget-balanced mechanism $(f^*, T)$. Since we are considering an AON domain, agent $i$'s type can be reported as a single number $z_i$. Thus let $T_i(z_i, z_{-i})$ be the transfer to agent $i$, and $f^*(z_i, z_{-i})$ be the outcome selected, when values reported are $z_i$ for agent $i$ and $z_{-i}$ for the others. Assume the lemma fails, i.e., that for some $z_i$, $z_i'$, and $z_{-i}$, $f^*(z_i, z_{-i}) = f^*(z_i', z_{-i})$ and yet $T_i(z_i, z_{-i}) > T_i(z_i', z_{-i})$. Then if $\beta < 1$, the outcome being the same, $i$ prefers the greater transfer to himself despite the fact that that means proportionally lower transfer to the others (by strong budget-balance), and strategyproofness would be violated because agent $i$ would benefit from reporting $z_i$ when his true value is $z_i'$. We have $u_i(z_i', (z_i, z_{-i})) =$

$$v_i(z_i', f^*(z_i, z_{-i})) + \beta v_{-i}(z_{-i}, f^*(z_i, z_{-i})) + T_i(z_i, z_{-i}) + \beta(-T_i(z_i, z_{-i})) \qquad (69)$$
$$= v_i(z_i', f^*(z_i, z_{-i})) + \beta v_{-i}(z_{-i}, f^*(z_i, z_{-i})) + (1 - \beta) T_i(z_i, z_{-i}) \qquad (70)$$
$$> v_i(z_i', f^*(z_i', z_{-i})) + \beta v_{-i}(z_{-i}, f^*(z_i', z_{-i})) + (1 - \beta) T_i(z_i', z_{-i}) \qquad (71)$$
$$= u_i(z_i', (z_i', z_{-i})) \qquad (72)$$

(Note that $\sum_{j \in I \setminus \{i\}} T_j = -T_i$ by strong budget-balance.) Likewise if $\beta > 1$, $i$ prefers greater transfer for the others, and strategyproofness would be violated because $i$ would benefit from reporting $z_i'$ when his true value is $z_i$. We have $u_i(z_i, (z_i, z_{-i})) =$

$$v_i(z_i, f^*(z_i, z_{-i})) + \beta v_{-i}(z_{-i}, f^*(z_i, z_{-i})) + T_i(z_i, z_{-i}) + \beta(-T_i(z_i, z_{-i})) \qquad (73)$$
$$= v_i(z_i, f^*(z_i, z_{-i})) + \beta v_{-i}(z_{-i}, f^*(z_i, z_{-i})) + (1 - \beta) T_i(z_i, z_{-i}) \qquad (74)$$
$$< v_i(z_i, f^*(z_i', z_{-i})) + \beta v_{-i}(z_{-i}, f^*(z_i', z_{-i})) + (1 - \beta) T_i(z_i', z_{-i}) \qquad (75)$$
$$= u_i(z_i, (z_i', z_{-i})) \qquad (76)$$

Either way strategyproofness is violated, so the lemma holds.

Proof of Lemma 6. For any vector of values $z$, let $\bar{z}_{-i} = \max_{j \in I \setminus \{i\}} z_j$. Given Lemma 5, for any $z_i'$, $z_i''$, and $z_{-i}$, we know that if $f^*(z_i', z_{-i}) = f^*(z_i'', z_{-i})$, then $T_i(z_i', z_{-i}) = T_i(z_i'', z_{-i})$. Also, given any $z_{-i}$ there are only two possible outcomes, so we can simplify notation and let $T_i^+(z_{-i}) = T_i(z_i', z_{-i})$ for any $z_i'$ that leads to outcome $o_i$ (i.e., for $z_i' > \bar{z}_{-i}$) and let $T_i^-(z_{-i}) = T_i(z_i', z_{-i})$ for any $z_i'$ that is not the maximum. So in light of Lemma 5 the current lemma can be restated in these terms as saying, for arbitrary $i$ and $z_{-i}$, $T_i^-(z_{-i}) - T_i^+(z_{-i}) = \bar{z}_{-i} - 0 = \bar{z}_{-i}$.

Letting $T_{-i}(z)$ denote the aggregate payment to other agents when $z$ is reported, note that by strong budget-balance, $T_{-i}(z_i', z_{-i}) = -T_i^+(z_{-i})$ when $z_i' > \bar{z}_{-i}$ and $T_{-i}(z_i', z_{-i}) = -T_i^-(z_{-i})$ otherwise. Now consider an arbitrary $z_{-i}$ with $\bar{z}_{-i} \in (0, 1)$ and an arbitrary $\delta \in (0, \min\{\bar{z}_{-i}, 1 - \bar{z}_{-i}\})$. Strategyproofness demands that when agent $i$'s value is $\bar{z}_{-i} + \delta$ (and we know this is possible because the value space is 0,1-admitting and continuous), agent $i$ prefers outcome $o_i$. That is:

$$\bar{z}_{-i} + \delta + T_i^+(z_{-i}) - \beta T_i^+(z_{-i}) \ge \beta \bar{z}_{-i} + T_i^-(z_{-i}) - \beta T_i^-(z_{-i}), \text{ i.e.,} \qquad (77)$$
$$\delta \ge (1 - \beta)\big(T_i^-(z_{-i}) - T_i^+(z_{-i}) - \bar{z}_{-i}\big) \qquad (78)$$

Strategyproofness also demands that when agent $i$'s value is $\bar{z}_{-i} - \delta$, agent $i$ prefers the outcome preferred by the highest-bidding other agent. That is:

$$\beta \bar{z}_{-i} + T_i^-(z_{-i}) - \beta T_i^-(z_{-i}) \ge \bar{z}_{-i} - \delta + T_i^+(z_{-i}) - \beta T_i^+(z_{-i}), \text{ i.e.,} \qquad (79)$$
$$\delta \ge (\beta - 1)\big(T_i^-(z_{-i}) - T_i^+(z_{-i}) - \bar{z}_{-i}\big) \qquad (80)$$

But then, fixing any $\beta \ne 1$ and also any $T_i^+(z_{-i})$ and $T_i^-(z_{-i})$ such that $T_i^-(z_{-i}) - T_i^+(z_{-i}) \ne \bar{z}_{-i}$, there is always some $\delta > 0$ small enough such that either Eq. (78) or Eq. (80) must be violated. Since $\beta$ cannot be 1 by supposition of the theorem, this demonstrates that $T_i^-(z_{-i}) - T_i^+(z_{-i})$ must equal $\bar{z}_{-i}$.
