
Complexity of Mechanism Design∗

Vincent Conitzer and Tuomas Sandholm
{conitzer, sandholm}@cs.cmu.edu
Computer Science Department
Carnegie Mellon University
Pittsburgh, PA 15213

∗ This material is based upon work supported by NSF under CAREER Award IRI-9703122, Grant IIS-9800994, ITR IIS-0081246, and ITR IIS-0121678.

Abstract

The aggregation of conflicting preferences is a central problem in multiagent systems. The key difficulty is that the agents may report their preferences insincerely. Mechanism design is the art of designing the rules of the game so that the agents are motivated to report their preferences truthfully and a (socially) desirable outcome is chosen. We propose an approach where a mechanism is automatically created for the preference aggregation setting at hand. This has several advantages, but the downside is that the mechanism design problem needs to be solved anew each time. Focusing on settings where side payments are not possible, we show that the mechanism design problem is NP-complete for deterministic mechanisms. This holds both for dominant-strategies implementation and for Bayes-Nash implementation. We then show that if we allow randomized mechanisms, the mechanism design problem becomes tractable. In other words, the coordinator can tackle the computational complexity introduced by its uncertainty about the agents' preferences by making the agents face additional uncertainty. This comes at no loss, and in some cases at a gain, in the (social) objective.

1 Introduction

In multiagent settings, agents generally have different preferences, and it is of central importance to be able to aggregate these, i.e., to pick a socially desirable outcome from a set of outcomes. Such outcomes could be potential presidents, joint plans, allocations of goods or resources, etc.
For example, voting mechanisms constitute an important class of preference aggregation methods.

The key problem is the following. The coordinator of the aggregation generally does not know the agents' preferences a priori. Rather, the agents report their preferences to the coordinator. Unfortunately, in this setting, most naive preference aggregation mechanisms suffer from manipulability. An agent may have an incentive to misreport its preferences in order to mislead the mechanism into selecting an outcome that is more desirable to the agent than the outcome that would be selected if the agent revealed its preferences truthfully. Manipulation is an undesirable phenomenon because preference aggregation mechanisms are tailored to aggregate preferences in a socially desirable way, and if the agents reveal their preferences insincerely, a socially undesirable outcome may be chosen.

Manipulability is a pervasive problem across preference aggregation mechanisms. A seminal negative result, the Gibbard-Satterthwaite theorem, shows that under any nondictatorial preference aggregation scheme, if there are at least 3 possible outcomes, there are preferences under which an agent is better off reporting untruthfully [9, 19]. (A preference aggregation scheme is called dictatorial if one of the agents dictates the outcome no matter how the others vote.)

With software agents, the algorithms they use for deciding how to report their preferences must be coded explicitly. Given that the reporting algorithm needs to be designed only once (by an expert), and can be copied to large numbers of agents (even ones representing unsophisticated humans), it is likely that manipulative preference reporting will increasingly become an issue, unmuddied by irrationality, emotions, etc.

What the coordinator would like to do is to design a preference aggregation mechanism so that 1) the self-interested agents are motivated to report their preferences truthfully, and 2) the mechanism chooses an outcome that is desirable from the perspective of some social objective. This is the classic setting of mechanism design in game theory. In mechanism design, there are two different types of uncertainty: the coordinator's uncertainty about the agents' preferences, and the agents' uncertainty about each others' preferences (which can affect how each agent tries to manipulate the mechanism).

Mechanism design provides us with a variety of carefully crafted definitions of what it means for a mechanism to be nonmanipulable, and goals to pursue under this constraint (e.g., social welfare maximization). It also provides us with some general mechanisms which, under certain assumptions, are nonmanipulable and socially desirable (among other properties). The upside of these mechanisms is that they do not rely on (even probabilistic) information about the agents' preferences (e.g., the Vickrey-Clarke-Groves mechanism [5, 10, 20]), or they can be easily applied to any probability distribution over the preferences (e.g., the dAGVA mechanism [1, 7]). The downside is that these mechanisms only work under very restrictive assumptions, the most common of which is to assume that the agents can make side payments and have quasilinear utility functions. The quasilinearity assumption is quite unrealistic. It means that each agent is risk neutral, does not care about what happens to its friends or enemies, and values money independently of all other attributes of the outcome. Also, in many voting settings, the use of side payments would not be politically feasible. Furthermore, among software agents, it might be more desirable to construct mechanisms that do not rely on the ability to make payments.

In this paper, we propose that the mechanism be designed automatically for the specific preference aggregation problem at hand. We formulate the mechanism design problem as an optimization problem. The input is characterized by the number of agents, the agents' possible types (preferences), and the coordinator's prior probability distributions over the agents' types. The output is a nonmanipulable mechanism that is optimal with respect to some (social) objective.

This approach has three advantages over the classic approach of designing general mechanisms. First, it can be used even in settings that do not satisfy the assumptions of the classical mechanisms. Second, it may yield better mechanisms (in terms of stronger nonmanipulability guarantees and/or better social outcomes) than the classical mechanisms. Third, it may allow one to circumvent impossibility results (such as the Gibbard-Satterthwaite theorem) which state that there is no mechanism that is desirable across all preferences. When the mechanism is designed for the setting at hand, it does not matter that it would not work on preferences beyond those in that setting.

However, this approach requires the mechanism design optimization problem to be solved anew for each preference aggregation setting. In this paper we study how hard this computational problem is under the two most common nonmanipulability requirements: dominant strategies, and Bayes-Nash equilibrium [16]. We will study preference aggregation settings where side payments are not an option.

The rest of the paper is organized as follows. In Section 3 we define the deterministic mechanism design problem, where the mechanism coordinator is constrained to deterministically choose an outcome on the basis of the preferences reported by the agents. We then show that this problem is NP-complete for the two most common concepts of nonmanipulability. In Section 4 we generalize this to the randomized mechanism design problem, where the mechanism coordinator may stochastically choose an outcome on the basis of the preferences reported by the agents. (On the side, we demonstrate that randomized mechanisms may be strictly more efficient than deterministic ones.) We then show that this problem is solvable by linear programming for both concepts of nonmanipulability.

2 The setting

Before we define the computational problem of mechanism design, we should justify our focus on nonmanipulable mechanisms. After all, it is not immediately obvious that there are no manipulable mechanisms that, even when agents report their types strategically and hence sometimes untruthfully, still reach better outcomes (according to whichever objective we use) than any nonmanipulable mechanism.
Additionally, given our computational focus, we should also be concerned that manipulable mechanisms that do as well as nonmanipulable ones may be easier to find. It turns out that we need not worry about either of these points: given any mechanism, we can quickly construct a nonmanipulable mechanism whose performance is exactly identical. For, given such a mechanism, we can build an interface layer between the agents and the mechanism. The agents input (some report of) their preferences (or types) into the interface layer; subsequently, the interface layer inputs into the original mechanism the types that the agents would have strategically reported if their types were as declared, and the resulting outcome is the outcome of the new mechanism. Since the interface layer acts "strategically on each agent's best behalf", there is never an incentive to report falsely to the interface layer; hence, the types reported by the interface layer are the strategic types that would have been reported without the interface layer, so the results are exactly as they would have been with the original mechanism. This argument (or at least the existential part of it, if not the constructive one) is known in the mechanism design literature as the revelation principle [16]. Given this, we can focus on truthful mechanisms in the rest of the paper.
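To make the interface-layer construction concrete, here is a minimal sketch in Python. It assumes the agents' equilibrium reporting strategies under the original mechanism have already been computed somehow (itself a nontrivial game-theoretic computation, not shown); all names are ours, not the paper's.

```python
# Minimal sketch of the revelation-principle construction described above.
# Assumes we already know each agent's equilibrium (strategic) reporting
# function under the original mechanism; computing those strategies is the
# hard part and is not shown here. All names are illustrative.

def truthful_wrapper(original_mechanism, strategies):
    """Wrap a manipulable mechanism in an 'interface layer'.

    original_mechanism: function from a tuple of reported types to an outcome.
    strategies: list of functions, one per agent; strategies[i](theta_i) is
        the report agent i would strategically make under the original
        mechanism if its true type were theta_i.
    Returns a mechanism under which truthful reporting is optimal: the layer
    submits, on each agent's behalf, the report it would have made anyway.
    """
    def new_mechanism(reported_types):
        strategic_reports = tuple(
            s(theta) for s, theta in zip(strategies, reported_types)
        )
        return original_mechanism(strategic_reports)
    return new_mechanism
```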
We first define a preference aggregation setting.

Definition 1 A preference aggregation setting consists of a set of outcomes O, a set of agents A with |A| = N, and for each agent i:

• A set of types Θ_i;
• A probability distribution p_i over Θ_i;¹
• A utility function u_i : Θ_i × O → ℝ.²

These are all common knowledge. However, each agent's type θ_i ∈ Θ_i is private information.

¹ This assumes that the agents' types are drawn independently. If this were not the case, the input to the mechanism design algorithm would include a joint probability distribution over the agents' types instead. All of the results of this paper apply to that setting as well.

² Throughout the paper we allow the utility functions, objective functions, and goals in the input to be real-valued. This makes the usual assumption of a computing framework that can handle real numbers. If that is not available, our results hold if the inputs are rational numbers.

Though this follows standard game theory notation [16], the fact that agents have both utility functions and types is perhaps confusing. The types encode the various possible preferences that agents may turn out to have, and the agents' types are not known by the coordinator. The utility functions are common knowledge, but the agent's type is a parameter in the agent's utility function. So, the utility of agent i is u_i(θ_i, o), where o ∈ O is the outcome and θ_i is the agent's type.
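As a concrete illustration of Definition 1, a small preference aggregation setting can be written down directly as data. The sketch below is ours; the two-type, two-outcome instance is invented purely for illustration.

```python
# A hypothetical two-agent instance of Definition 1, encoded as plain data.
outcomes = ["a", "b"]                    # the outcome set O
types = [["x", "y"], ["x", "y"]]         # Theta_1 and Theta_2
prior = [{"x": 0.5, "y": 0.5},           # p_1 (types drawn independently)
         {"x": 0.5, "y": 0.5}]           # p_2
# utility[i][(theta, o)] = u_i(theta, o); common knowledge
utility = [
    {("x", "a"): 1.0, ("x", "b"): 0.0, ("y", "a"): 0.0, ("y", "b"): 2.0},
    {("x", "a"): 2.0, ("x", "b"): 0.0, ("y", "a"): 0.0, ("y", "b"): 1.0},
]
```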

3 Deterministic mechanisms

We now formally define a deterministic mechanism.

Definition 2 Given a preference aggregation setting, a deterministic mechanism is a function that, given any vector of reported types, produces a single outcome. That is, it is a function o : Θ_1 × Θ_2 × ... × Θ_N → O.

A solution concept is some definition of what it means for a mechanism to be nonmanipulable. We now present the two most common solution concepts; they are the ones analyzed in this paper.

Definition 3 Given a preference aggregation setting, a mechanism is said to implement its outcome function in dominant strategies if truthtelling is always optimal, no matter what types the other agents report. For a deterministic mechanism, this means that for any i ∈ A, for any type vector (θ_1, θ_2, ..., θ_i, ..., θ_N) ∈ Θ_1 × Θ_2 × ... × Θ_i × ... × Θ_N, and for any θ̂_i ∈ Θ_i, we have u_i(θ_i, o(θ_1, θ_2, ..., θ_i, ..., θ_N)) ≥ u_i(θ_i, o(θ_1, θ_2, ..., θ̂_i, ..., θ_N)).

Dominant strategy implementation is very robust. It does not rely on the prior probability distributions being (commonly) known, or on the types being private information. Furthermore, each agent is motivated to report truthfully even if the other agents report untruthfully, for example due to irrationality. The second most prevalent solution concept in mechanism design, Bayes-Nash equilibrium, does not have any of these robustness benefits. Often there is a trade-off between the robustness of the solution concept and the social welfare (or some other goal) that we expect to attain, so both concepts are worth investigating.

Definition 4 Given a preference aggregation setting, a mechanism is said to implement its outcome function in Bayes-Nash equilibrium if truthtelling is always optimal as long as the other agents report truthfully. For a deterministic mechanism, this means that for any i ∈ A, and for any θ_i, θ̂_i ∈ Θ_i, we have E_{θ_-i}(u_i(θ_i, o(θ_1, θ_2, ..., θ_i, ..., θ_N))) ≥ E_{θ_-i}(u_i(θ_i, o(θ_1, θ_2, ..., θ̂_i, ..., θ_N))). (Here E_{θ_-i} means the expectation taken over the types of all agents except i, according to their true type distributions p_j.)
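When the type sets are finite, Definitions 3 and 4 translate directly into finite checks. The following sketch, under the dictionary encoding assumed above (a deterministic mechanism is a dict mapping each reported type vector to an outcome), verifies both conditions by brute force; the function names are ours.

```python
from itertools import product

def is_dominant_strategy_ic(outcome, types, utility):
    """Definition 3: no agent ever gains by deviating, for any reports."""
    n = len(types)
    for vec in product(*types):
        for i in range(n):
            truthful = utility[i][(vec[i], outcome[vec])]
            for dev in types[i]:
                lied = list(vec)
                lied[i] = dev
                if utility[i][(vec[i], outcome[tuple(lied)])] > truthful:
                    return False  # agent i gains by reporting dev here
    return True

def is_bayes_nash_ic(outcome, types, prior, utility):
    """Definition 4: no agent gains in expectation over others' types."""
    n = len(types)
    for i in range(n):
        for true_t in types[i]:
            def expected(report):
                # E over the other agents' types, who report truthfully
                total = 0.0
                for others in product(*(types[j] for j in range(n) if j != i)):
                    vec = list(others)
                    vec.insert(i, report)
                    pr, k = 1.0, 0
                    for j in range(n):
                        if j != i:
                            pr *= prior[j][others[k]]
                            k += 1
                    total += pr * utility[i][(true_t, outcome[tuple(vec)])]
                return total
            truthful = expected(true_t)
            # small tolerance guards against floating-point noise
            if any(expected(dev) > truthful + 1e-9 for dev in types[i]):
                return False
    return True
```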

We are now ready to define the computational problem that we study.

Definition 5 (DETERMINISTIC-MECHANISM-DESIGN) We are given a preference aggregation setting, a solution concept, and an objective function g : Θ_1 × Θ_2 × ... × Θ_N × O → ℝ with a goal G. We are asked whether there exists a deterministic mechanism for the preference aggregation setting which

• satisfies the given solution concept, and
• attains the goal, i.e., E(g(θ_1, θ_2, ..., θ_N, o(θ_1, θ_2, ..., θ_N))) ≥ G. (Here the expectation is taken over the types of all the agents, according to the distributions p_j.)

One common objective function is the social welfare function g(θ_1, θ_2, ..., θ_N, o) = Σ_{i∈A} u_i(θ_i, o).

If the number N of agents is unbounded, specifying an outcome function o will require exponential space, and in this case it is perhaps not surprising that computational complexity becomes an issue even for the decision variant of the problem given here. However, we will demonstrate NP-completeness even in the case where the number of agents is fixed at 2, and g is restricted to be the social welfare function, for both solution concepts.
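Definition 5 can be decided naively by enumerating every outcome function and testing it, reusing the checkers sketched above. The following sketch makes the search space explicit: there are |O|^(|Θ_1|···|Θ_N|) candidate mechanisms, so the enumeration is exponential even for two agents, which is consistent with (though of course no proof of) the hardness results below.

```python
from itertools import product

def deterministic_mechanism_design(outcomes, types, prior, utility, g, G,
                                   concept="dominant"):
    """Brute-force decision procedure for Definition 5 (illustrative only)."""
    type_vectors = list(product(*types))
    for choice in product(outcomes, repeat=len(type_vectors)):
        outcome = dict(zip(type_vectors, choice))
        if concept == "dominant":
            ok = is_dominant_strategy_ic(outcome, types, utility)
        else:
            ok = is_bayes_nash_ic(outcome, types, prior, utility)
        if not ok:
            continue
        # expected value of the objective under truthful reporting
        exp_g = 0.0
        for vec in type_vectors:
            pr = 1.0
            for j, t in enumerate(vec):
                pr *= prior[j][t]
            exp_g += pr * g(vec, outcome[vec])
        if exp_g >= G:
            return outcome  # a mechanism witnessing a 'yes' answer
    return None
```

For the social welfare objective one would pass g = lambda vec, o: sum(utility[i][(vec[i], o)] for i in range(len(vec))).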
We begin with dominant strategy implementation: for this, we will reduce from the NP-complete INDEPENDENT-SET problem [8].

Definition 6 (INDEPENDENT-SET) We are given a graph (V, E) and a goal K. We are asked whether there exists a subset S ⊆ V, |S| = K, such that for any i, j ∈ S, (i, j) ∉ E.

Theorem 1 2-agent DETERMINISTIC-MECHANISM-DESIGN with dominant strategies implementation (DMDDS) is NP-complete, even with the social welfare function as the objective function.

Proof: To show that the problem is in NP, we observe that specifying an outcome function here requires only the specification of |Θ_1| × |Θ_2| outcomes, and given such an outcome function we can verify whether it meets our requirements in polynomial time. To show NP-hardness, we reduce an arbitrary INDEPENDENT-SET instance to the following DMDDS instance. There are 2 agents, whose type sets are as follows: Θ_1 = {θ_1^1, θ_2^1, ..., θ_n^1} and Θ_2 = {θ_1^2, θ_2^2, ..., θ_n^2}, each with a uniform distribution. For every pair i, j (1 ≤ i ≤ n and 1 ≤ j ≤ n), there are the following outcomes: if i = j, we have o_ii^H and o_ii^L; if i ≠ j and (i, j) is not an edge in the graph, we have o_ij; if it is an edge in the graph, we have o_ij^1 and o_ij^2. The utility functions are as follows:

• u_1(θ_i^1, o_ii^H) = u_2(θ_j^2, o_jj^H) = 2;
• u_1(θ_i^1, o_kk^H) = u_2(θ_j^2, o_ll^H) = 2 for i ≠ k and j ≠ l;
• u_1(θ_i^1, o_ii^L) = u_2(θ_j^2, o_jj^L) = 1;
• u_1(θ_i^1, o_kk^L) = u_2(θ_j^2, o_ll^L) = −5n² for i ≠ k and j ≠ l;
• u_1(θ_i^1, o_ij) = u_2(θ_j^2, o_ij) = 2 for i ≠ j, (i, j) ∉ E;
• u_1(θ_i^1, o_kl) = u_2(θ_j^2, o_kl) = −5n² for k ≠ l, (k, l) ∉ E, i ≠ k and j ≠ l;
• u_1(θ_i^1, o_ij^1) = u_2(θ_j^2, o_ij^2) = 5n² for i ≠ j, (i, j) ∈ E;
• u_1(θ_i^1, o_ij^2) = u_2(θ_j^2, o_ij^1) = 1 for i ≠ j, (i, j) ∈ E;
• u_1(θ_i^1, o_kl^1) = u_2(θ_j^2, o_kl^1) = u_1(θ_i^1, o_kl^2) = u_2(θ_j^2, o_kl^2) = −5n² for k ≠ l, (k, l) ∈ E, i ≠ k and j ≠ l.

Finally, we set G = (2m(5n² + 1) + 2(n − K) + 4K + 4(n² − 2m − n)) / n² (here n = |V| and m = |E|). We now proceed to show that the two problem instances are equivalent.

First suppose there is a solution to the INDEPENDENT-SET instance, that is, a subset S of V of size K such that for any i, j ∈ S, (i, j) ∉ E. Let the outcome function be as follows:

• o(θ_i^1, θ_i^2) = o_ii^H if i ∈ S;
• o(θ_i^1, θ_i^2) = o_ii^L if i ∉ S;
• o(θ_i^1, θ_j^2) = o_ij if i ≠ j, (i, j) ∉ E;
• o(θ_i^1, θ_j^2) = o_ij^1 if i ≠ j, (i, j) ∈ E, and j ∈ S;
• o(θ_i^1, θ_j^2) = o_ij^2 otherwise.

First we show that this mechanism is incentive compatible. We first demonstrate that agent 1 never has an incentive to misreport. Suppose agent 1's true type is θ_i^1 and agent 2 is reporting θ_j^2. Then agent 1 never has an incentive to instead report θ_k^1 with k ≠ i, k ≠ j, since this will lead to the selection of o_kj, o_kj^1, or o_kj^2, all of which give agent 1 utility −5n². What about reporting θ_j^1 instead (when i ≠ j)? If j ∉ S, this will lead to utility −5n² for agent 1, so in this case there is no incentive to do so. If j ∈ S, this will lead to utility 2 for agent 1. However, if (i, j) ∉ E, reporting truthfully will also give agent 1 utility 2; and if (i, j) ∈ E, then since j ∈ S, reporting truthfully will in fact give agent 1 utility 5n². It follows that again, there is no incentive to misreport. To show that agent 2 has no incentive to misreport, we apply the exact same argument as for agent 1; the only claim we need to prove in order to make this work is that if i ≠ j, (i, j) ∈ E, and i ∈ S, then o(θ_i^1, θ_j^2) = o_ij^2. All that is necessary to show is that in this case, j ∉ S. But this follows immediately from the fact that there are no edges between elements of S, since i ∈ S and (i, j) ∈ E. Hence, we have established incentive compatibility. All that is left to show is that we reach the goal. Suppose the agents' types are θ_i^1 and θ_j^2. If (i, j) is an edge, which happens with probability 2m/n², we get a social welfare of 5n² + 1. If i = j and i ∉ S, which happens with probability (n − K)/n², we get a social welfare of 2. If i = j and i ∈ S, which happens with probability K/n², we get a social welfare of 4. Finally, if i ≠ j and (i, j) is not an edge, which happens with probability (n² − 2m − n)/n², we get a social welfare of 4. Adding up all the terms, we find that the goal is exactly satisfied. So there is a solution to the DMDDS instance.

Now suppose the DMDDS instance has a solution, that is, a function o satisfying the desired properties. Suppose there are r_1 distinct vectors (p, θ_{i_1}^1, θ_{i_2}^2) such that u_p(θ_{i_p}^p, o(θ_{i_1}^1, θ_{i_2}^2)) = 5n² (that is, r_1 cases where someone gets the very large payoff) and r_2 distinct vectors (p, θ_{i_1}^1, θ_{i_2}^2) such that u_p(θ_{i_p}^p, o(θ_{i_1}^1, θ_{i_2}^2)) = −5n². Let r = r_1 − r_2.
Then, since the highest other payoff that can be reached is 2, the expected social welfare can be at most (r(5n²) + 4n²)/n². But the goal is greater than 2m(5n²)/n², and since by assumption o reaches this goal, we have r ≥ 2m. Of course, we can only achieve a payoff of 5n² with outcomes o_ij^1 or o_ij^2. However, if it is the case that o(θ_i^1, θ_j^2) ∈ {o_kl^1, o_kl^2} for k ≠ i or l ≠ j, one of the agents in fact receives a utility of −5n², and the contribution of this outcome to r can be at most 0. It follows that there must be at least 2m pairs (θ_i^1, θ_j^2) whose outcome is o_ij^1 or o_ij^2. But this is possible only if (i, j) is an edge, and there are only 2m such pairs. It follows that whenever (i, j) is an edge, o_ij^1 or o_ij^2 is chosen. Now, we set S = {i : o(θ_i^1, θ_i^2) = o_ii^H}. First we show that S is an independent set. Suppose not: i.e., i, j ∈ S and (i, j) ∈ E. Consider o(θ_i^1, θ_j^2). If it is o_ij^1, then agent 2 receives 1 in this scenario and has an incentive to misreport its type as θ_i^2, since o(θ_i^1, θ_i^2) = o_ii^H, which gives it utility 2. On the other hand, if it is o_ij^2, then agent 1 receives 1 in this scenario and has an incentive to misreport its type as θ_j^1, since o(θ_j^1, θ_j^2) = o_jj^H, which gives it utility 2. But this contradicts incentive compatibility. Finally, we show that S has at least K elements. Suppose the agents' types are θ_i^1 and θ_j^2. If (i, j) is an edge, which happens with probability 2m/n², we get a social welfare of 5n² + 1. If i = j and i ∉ S, which happens with probability (n − |S|)/n², we get a social welfare of at most 2. If i = j and i ∈ S, which happens with probability |S|/n², we get a social welfare of 4. Finally, if i ≠ j and (i, j) is not an edge, which happens with probability (n² − 2m − n)/n², we get a social welfare of at most 4. Adding up all the terms, we find that we get a social welfare of at most (2m(5n² + 1) + 2(n − |S|) + 4|S| + 4(n² − 2m − n))/n². But this must be at least G = (2m(5n² + 1) + 2(n − K) + 4K + 4(n² − 2m − n))/n². It follows that |S| ≥ K. So there is a solution to the INDEPENDENT-SET instance. ∎
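The instance construction in this proof is mechanical enough to state as code. The sketch below builds the DMDDS instance from a graph given as n = |V| (vertices 1..n), an undirected edge list, and K; the tuple encoding of outcomes and all names are ours, and the utility table reflects our reading of the bullets above. Both type sets are uniform, so the priors are left implicit (each type has probability 1/n).

```python
def build_dmdds_instance(n, edges, K):
    """Construct the DMDDS instance of Theorem 1 from an INDEPENDENT-SET
    instance. 'edges' lists each undirected edge once, so m = len(edges)
    while the proof's 2m counts ordered pairs."""
    m = len(edges)
    E = {(i, j) for i, j in edges} | {(j, i) for i, j in edges}
    big = 5 * n * n                      # the 5n^2 payoff / penalty
    outcomes, u1, u2 = [], {}, {}
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            if i == j:
                outcomes += [("H", i, i), ("L", i, i)]   # o_ii^H, o_ii^L
            elif (i, j) in E:
                outcomes += [("1", i, j), ("2", i, j)]   # o_ij^1, o_ij^2
            else:
                outcomes += [("plain", i, j)]            # o_ij
    for o in outcomes:
        kind, k, l = o
        for i in range(1, n + 1):        # agent 1 with type theta_i^1
            if kind == "H":
                u1[(i, o)] = 2
            elif kind == "L":
                u1[(i, o)] = 1 if i == k else -big
            elif kind == "plain":
                u1[(i, o)] = 2 if i == k else -big
            else:                        # edge outcomes o_kl^1 / o_kl^2
                u1[(i, o)] = (big if kind == "1" else 1) if i == k else -big
        for j in range(1, n + 1):        # agent 2 with type theta_j^2
            if kind == "H":
                u2[(j, o)] = 2
            elif kind == "L":
                u2[(j, o)] = 1 if j == l else -big
            elif kind == "plain":
                u2[(j, o)] = 2 if j == l else -big
            else:
                u2[(j, o)] = (big if kind == "2" else 1) if j == l else -big
    G = (2*m*(big + 1) + 2*(n - K) + 4*K + 4*(n*n - 2*m - n)) / (n*n)
    return outcomes, u1, u2, G
```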
We now move on to Bayes-Nash implementation. For this, we will reduce from the NP-complete KNAPSACK problem [8].

Definition 7 (KNAPSACK) We are given a set I of m pairs of (nonnegative) integers (w_i, v_i), a constraint C > 0 and a goal D > 0. We are asked whether there exists a subset S ⊆ I such that Σ_{j∈S} w_j ≤ C and Σ_{j∈S} v_j ≥ D.

Theorem 2 2-agent DETERMINISTIC-MECHANISM-DESIGN with Bayes-Nash implementation (DMDBN) is NP-complete, even with the social welfare function as the objective function.

Proof: Membership in NP follows as in Theorem 1. To show NP-hardness, we reduce an arbitrary KNAPSACK instance to the following DMDBN instance. Let W = Σ_{j∈I} w_j and V = Σ_{j∈I} v_j. There are m + 2 outcomes: o_1, o_2, ..., o_m, o_{m+1}, o_{m+2}. We have Θ_1 = {θ_1^1, θ_2^1, ..., θ_m^1}, where θ_j^1 occurs with probability w_j/W; and Θ_2 = {θ_1^2, θ_2^2}, where each of these types occurs with probability 1/2. The utility functions are as follows: for all j, u_1(θ_j^1, o_j) = (v_j/w_j + 1)W; u_1(θ_j^1, o_{m+2}) = −W; and u_1 is 0 everywhere else. Furthermore, u_2(θ_1^2, o_{m+1}) = W; u_2(θ_1^2, o_{m+2}) = W − C; u_2(θ_2^2, o_{m+2}) = W(2V + 1); and u_2 is 0 everywhere else. We set G = WV + (W + D)/2. We now proceed to show that the two problem instances are equivalent.

First suppose there is a solution to the KNAPSACK instance, that is, a subset S of I such that Σ_{j∈S} w_j ≤ C and Σ_{j∈S} v_j ≥ D. Then let the outcome function be as follows: o(θ_j^1, θ_1^2) = o_j if j ∈ S, o(θ_j^1, θ_1^2) = o_{m+1} otherwise; and for all j, o(θ_j^1, θ_2^2) = o_{m+2}. First we show that truthtelling is a Bayes-Nash equilibrium. Clearly, agent 1 never has an incentive to misrepresent its type, since if agent 2 has type θ_2^2, the type that agent 1 reports makes no difference; and if agent 2 has type θ_1^2, misrepresenting its type will lead to utility at most 0 for agent 1, whereas truthful reporting will give it utility at least 0. It is also clear that agent 2 has no incentive to misrepresent its type when its type is θ_2^2. What if agent 2 has type θ_1^2? Reporting truthfully will give it utility Σ_{j∈I−S} (w_j/W) W = Σ_{j∈I−S} w_j ≥ W − C, whereas reporting θ_2^2 instead will give it utility W − C. So there is no incentive for agent 2 to misrepresent its type in this case, either. Now, we show that the expected social welfare attains the goal. With probability 1/2 agent 2 has type θ_2^2 and the social welfare is W(2V + 1) − W = 2WV. On the other hand, if agent 2 has type θ_1^2, the expected social welfare is Σ_{j∈I−S} (w_j/W) W + Σ_{j∈S} (w_j/W)((v_j/w_j + 1)W) = W + Σ_{j∈S} v_j ≥ W + D. So, the total expected social welfare is greater than or equal to (2WV + W + D)/2 = G. Hence, the DMDBN instance has a solution.

Now suppose the DMDBN instance has a solution, that is, a function o satisfying the desired properties. First suppose that there exists a θ_j^1 such that o(θ_j^1, θ_2^2) ≠ o_{m+2}. The social welfare in this case can be at most (v_j/w_j + 1)W