
Incremental Mechanism Design*

Vincent Conitzer, Duke University, Department of Computer Science, [email protected]
Tuomas Sandholm, Carnegie Mellon University, Computer Science Department, [email protected]

*This material is based upon work supported by the National Science Foundation under ITR grants IIS-0121678 and IIS-0427858, a Sloan Fellowship, and an IBM Ph.D. Fellowship.

Abstract

Mechanism design has traditionally focused almost exclusively on the design of truthful mechanisms. There are several drawbacks to this: 1. in certain settings (e.g. voting settings), no desirable strategy-proof mechanisms exist; 2. truthful mechanisms are unable to take advantage of the fact that computationally bounded agents may not be able to find the best manipulation, and 3. when designing mechanisms automatically, this approach leads to constrained optimization problems for which current techniques do not scale to very large instances. In this paper, we suggest an entirely different approach: we start with a naïve (manipulable) mechanism, and incrementally make it more strategy-proof over a sequence of iterations. We give examples of mechanisms that (variants of) our approach generate, including the VCG mechanism in general settings with payments, and the plurality-with-runoff voting rule. We also provide several basic algorithms for automatically executing our approach in general settings. Finally, we discuss how computationally hard it is for agents to find any remaining beneficial manipulation.

1 Introduction

In many multiagent settings, we must choose an outcome based on the preferences of multiple self-interested agents, who will not necessarily report their preferences truthfully if it is not in their best interest to do so. Typical settings in which this occurs include auctions, reverse auctions, exchanges, voting settings, public good settings, resource/task allocation settings, ranking pages on the web [1], etc. Research in mechanism design studies how to choose outcomes in such a way that good outcomes are obtained even when agents respond to incentives to misreport their preferences (or manipulate). For the most part, researchers have focused simply on creating truthful (or strategy-proof) mechanisms, in which no agent ever has an incentive to misreport. This approach is typically justified by appealing to a result known as the revelation principle, which states that for any mechanism that does well in the face of strategic misreporting by agents, there is a truthful mechanism that will perform just as well.

The traditional approach to mechanism design has been to try to design good mechanisms that are as general as possible. Probably the best-known general mechanism is the Vickrey-Clarke-Groves (VCG) mechanism [16; 4; 10], which chooses the allocation that maximizes the sum of the agents' utilities (the social welfare), and makes every agent pay the externality that he (we will use "she" for the center/designer, and "he" for an agent) imposes on the other agents. This is sufficient to ensure that no individual agent has an incentive to manipulate, but it also has various drawbacks: for example, the surplus payments can, in general, not be redistributed, and the designer may have a different objective than social welfare, e.g. she may wish to maximize revenue. Other general mechanisms have their own drawbacks, and there are various impossibility results such as the Gibbard-Satterthwaite theorem [8; 15] that show that certain objectives cannot be achieved by truthful mechanisms.
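To make the VCG description above concrete, the following is a minimal sketch of our own (not code from the paper): it chooses the outcome maximizing the sum of reported values and charges each agent the externality he imposes on the others. The representation of valuations as per-outcome dictionaries, and the function name `vcg`, are assumptions made for the illustration.

```python
# Minimal VCG sketch (illustrative; assumes a finite outcome set and
# quasilinear utilities, with each agent's reported values given as a
# {outcome: value} dictionary).

def vcg(outcomes, valuations):
    """Return (chosen outcome, list of payments) for the VCG mechanism."""
    def welfare(outcome, agents):
        return sum(valuations[i][outcome] for i in agents)

    n = len(valuations)
    everyone = range(n)
    # Choose the outcome maximizing the sum of reported values (social welfare).
    chosen = max(outcomes, key=lambda o: welfare(o, everyone))

    payments = []
    for i in everyone:
        others = [j for j in everyone if j != i]
        # Welfare the others would get if agent i were absent...
        without_i = max(welfare(o, others) for o in outcomes)
        # ...minus the welfare the others actually get: i's externality.
        payments.append(without_i - welfare(chosen, others))
    return chosen, payments


# Example: single-item auction with bidders valuing the item at 10, 7, and 3.
# Here VCG reduces to a second-price auction: bidder 0 wins and pays 7.
outcomes = ["win0", "win1", "win2"]
valuations = [
    {"win0": 10, "win1": 0, "win2": 0},
    {"win0": 0, "win1": 7, "win2": 0},
    {"win0": 0, "win1": 0, "win2": 3},
]
print(vcg(outcomes, valuations))  # ('win0', [7, 0, 0])
```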
The lack of a general mechanism that is always satisfactory led to the creation of the field of automated mechanism design [5]. Rather than try to design a mechanism that works for a range of settings, the idea is to have a computer automatically compute the optimal mechanism for the specific setting at hand, by solving an optimization problem. A drawback of that approach is that current techniques do not scale to very large instances. This is in part due to the fact that, to ensure strategy-proofness, one must simultaneously decide on the outcome that the mechanism chooses for every possible input of revealed preferences, and the strategy-proofness constraints interrelate these decisions.

Another observation that has been made is that in complex settings, it is unreasonable to believe that every agent is endowed with the computational abilities to compute an optimal manipulation. This invalidates the above-mentioned revelation principle, in that restricting attention to truthful mechanisms may in fact come at a cost in the quality of the outcomes that the mechanism produces. Adding to this the observation that in some domains, all strategy-proof mechanisms are unsatisfactory (by the Gibbard-Satterthwaite theorem), it becomes important to be able to design mechanisms that are not strategy-proof. Recent research has already proposed some manipulable mechanisms. There has been work that proposes relaxing the constraint to approximate truthfulness (in various senses). Approximately truthful mechanisms can be easier to execute [12; 2], or can circumvent impossibility results that apply to truthful mechanisms [14; 9]. Other work has studied manipulable mechanisms in which finding a beneficial manipulation is computationally difficult in various senses [3; 13; 6; 7].

In this paper, we introduce a new approach. We start with a naïvely designed mechanism that is not strategy-proof (for example, the mechanism that would be optimal in the absence of strategic behavior), and we attempt to make it more strategy-proof. Specifically, the approach systematically identifies situations in which an agent has an incentive to manipulate, and corrects the mechanism to take away this incentive. This is done iteratively, and the mechanism may or may not become (completely) strategy-proof eventually. The final mechanism may depend on the order in which possible manipulations are considered.

One can conceive of this approach as being a computationally more efficient approach to automated mechanism design, insofar as the updates to the mechanism to make it more strategy-proof can be executed automatically (by a computer). Indeed, we will provide algorithms for doing so. It is also possible to think about the results of this approach theoretically, and use them as a guide in "traditional" mechanism design. We will pursue this as well, giving various examples. Finally, we will argue that if the mechanism that the approach produces remains manipulable, then any remaining manipulations will be computationally hard to find.

This approach bears some similarity to how mechanisms are designed in the real world. Real-world mechanisms are often initially naïve, leading to undesirable strategic behavior; once this is recognized, the mechanism is amended to disincent the undesirable behavior. For example, some naïvely designed mechanisms give bidders incentives to postpone submitting their bids until just before the event closes (i.e., sniping); often this is (partially) fixed by adding an activity rule, which prevents bidders that do not bid actively early from winning later. As another example, in the 2003 Trading Agent Competition Supply Chain Management (TAC/SCM) game, the rules of the game led the agents to procure most of their components on day 0. This was deemed undesirable, and the designers tried to modify the rules for the 2004 competition to disincent this behavior [11].

As we will see, there are many variants of the approach, each with its own merits.

2 Definitions

A mechanism design setting includes:
• a set of agents $N$ (with $|N| = n$);
• a set of outcomes $O$;
• for each agent $i \in N$, a set of types $\Theta_i$ (we write $\Theta = \Theta_1 \times \cdots \times \Theta_n$);
• for each $i \in N$, a utility function $u_i : \Theta_i \times O \rightarrow \mathbb{R}$;
• an objective function $g : \Theta \times O \rightarrow \mathbb{R}$.

For example, in a single-item auction, $N$ is the set of bidders; $O = S \times \Pi$, where $S$ is the set of all possible allocations of the item (one for each bidder, plus potentially one allocation where no bidder wins), and $\Pi$ is the set of all possible vectors $\langle \pi_1, \ldots, \pi_n \rangle$ of payments to be made by the agents (e.g., $\Pi = \mathbb{R}^n$); assuming no allocative externalities (that is, it does not matter to a bidder which other bidder wins the item if the bidder does not win himself), $\Theta_i$ is the set of possible valuations that the bidder may have for the item (for example, $\Theta_i = \mathbb{R}^{\geq 0}$); the utility function $u_i$ is given by: $u_i(\theta_i, (s, \langle \pi_1, \ldots, \pi_n \rangle)) = \theta_i - \pi_i$ if $s$ is the outcome in which $i$ wins the item, and $u_i(\theta_i, (s, \langle \pi_1, \ldots, \pi_n \rangle)) = -\pi_i$ otherwise. (In situations in which a type consists of a single value, we will typically use $v_i$ rather than $\theta_i$ for the type.)
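As a concrete rendering of the single-item auction example above, here is a small sketch of our own in which an outcome is encoded as a pair of a winner and a payment vector; the names `winner`, `payments`, and `utility` are our assumptions, but the function implements $u_i$ exactly as defined.

```python
# Sketch of the single-item auction setting from the example above.
# An outcome is a pair (winner, payments): `winner` is the index of the
# winning bidder (or None if nobody wins), and `payments` is the vector
# <pi_1, ..., pi_n> of payments made by the agents.

def utility(i, theta_i, outcome):
    """u_i(theta_i, (s, <pi_1,...,pi_n>)) = theta_i - pi_i if i wins, else -pi_i."""
    winner, payments = outcome
    if winner == i:
        return theta_i - payments[i]
    return -payments[i]


# Bidder 0 values the item at 10 (a single-value type, so theta_0 = v_0 = 10).
# If he wins and pays 7, his utility is 3; if bidder 1 wins instead, it is 0.
print(utility(0, 10, (0, [7, 0])))  # 3
print(utility(0, 10, (1, [0, 7])))  # 0
```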
A (deterministic) mechanism consists of a function $M : \Theta \rightarrow O$, specifying an outcome for every vector of (reported) types. Given a mechanism $M$, a beneficial manipulation consists of an agent $i \in N$, a type vector $\langle \theta_1, \ldots, \theta_n \rangle \in \Theta$, and an alternative type report $\hat{\theta}_i$ for agent $i$ such that $u_i(\theta_i, M(\langle \theta_1, \ldots, \theta_n \rangle)) < u_i(\theta_i, M(\langle \theta_1, \ldots, \theta_{i-1}, \hat{\theta}_i, \theta_{i+1}, \ldots, \theta_n \rangle))$. In this case we say that $i$ manipulates from $\langle \theta_1, \ldots, \theta_n \rangle$ into $\langle \theta_1, \ldots, \theta_{i-1}, \hat{\theta}_i, \theta_{i+1}, \ldots, \theta_n \rangle$. A mechanism is strategy-proof or (dominant-strategies) incentive compatible if there are no beneficial manipulations. (We will not consider Bayes-Nash equilibrium incentive compatibility here.)

In settings with payments, we enforce an ex-post individual rationality constraint: we cannot make an agent worse off than he would have been if he had not participated. That is, we cannot charge an agent more than he reported the outcome (disregarding payments) was worth to him.

3 Our approach and techniques

In this section, we explain the approach and techniques that we consider in this paper. We recall that our goal is not to (immediately) design a strategy-proof mechanism; rather, we start with some manipulable mechanism, and attempt to incrementally make it more strategy-proof.
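For finite type spaces, the definition of a beneficial manipulation above can be checked exhaustively, and finding such manipulations is the basic subroutine the incremental approach needs. The sketch below is our own illustration: the representation of a mechanism as a dictionary from type vectors to outcomes, and names such as `find_beneficial_manipulation`, are assumptions made for the example.

```python
from itertools import product

# A mechanism M is represented as a dict mapping each type vector
# (a tuple with one type per agent) to an outcome.

def find_beneficial_manipulation(mechanism, type_spaces, utility):
    """Return (i, types, alt) such that agent i gains by reporting alt
    instead of types[i], or None if the mechanism is strategy-proof.
    type_spaces[i] is the finite list of agent i's possible types;
    utility(i, theta_i, outcome) is agent i's utility function."""
    for types in product(*type_spaces):
        for i, theta_i in enumerate(types):
            truthful = utility(i, theta_i, mechanism[types])
            for alt in type_spaces[i]:
                misreport = types[:i] + (alt,) + types[i + 1:]
                # Beneficial manipulation: true utility strictly increases.
                if utility(i, theta_i, mechanism[misreport]) > truthful:
                    return i, types, alt
    return None  # no beneficial manipulation exists: strategy-proof


# Example: a two-bidder first-price auction with possible values {1, 2}.
# Truthful bidding is not optimal here, so a manipulation is found.
values = [[1, 2], [1, 2]]
def u(i, v, outcome):
    winner, price = outcome
    return (v - price) if winner == i else 0
first_price = {(a, b): ((0, a) if a >= b else (1, b))
               for a in values[0] for b in values[1]}
print(find_beneficial_manipulation(first_price, values, u))  # (0, (2, 1), 1)
```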
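The section is cut off before the update step is specified, so the following is only a speculative sketch of one way the incremental idea from the introduction could be realized: repeatedly find a profile with a beneficial manipulation and change the outcome at that profile to the outcome the manipulating agent would have obtained by deviating, so that this particular incentive disappears. This update rule and the name `make_more_strategy_proof` are our assumptions for illustration, not necessarily among the variants the paper goes on to define.

```python
from itertools import product

def make_more_strategy_proof(mechanism, type_spaces, utility, max_iters=1000):
    """Speculative sketch of an incremental loop: find a beneficial
    manipulation, patch the mechanism at the manipulated-from profile,
    and repeat. The loop need not terminate in general, so we cap it."""
    mechanism = dict(mechanism)  # work on a copy
    for _ in range(max_iters):
        found = None
        for types in product(*type_spaces):
            for i, theta_i in enumerate(types):
                for alt in type_spaces[i]:
                    misreport = types[:i] + (alt,) + types[i + 1:]
                    if (utility(i, theta_i, mechanism[misreport])
                            > utility(i, theta_i, mechanism[types])):
                        found = (types, misreport)
                        break
                if found:
                    break
            if found:
                break
        if found is None:
            return mechanism  # no beneficial manipulations remain
        types, misreport = found
        # Give the manipulator at this profile what he would have obtained
        # by deviating, removing this particular incentive to misreport.
        mechanism[types] = mechanism[misreport]
    return mechanism  # may still be manipulable if max_iters was reached
```

Run on the two-bidder first-price auction from the previous sketch, this loop happens to terminate with a strategy-proof mechanism after a single patch; in general the result can depend on the order in which manipulations are found, which matches the remark in the introduction that the final mechanism may depend on that order.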