
Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI-16)

Facility Location with Envy

Qingpeng Cai (Tsinghua University, China) [email protected]
Aris Filos-Ratsikas (University of Oxford, UK) [email protected]
Pingzhong Tang (Tsinghua University, China) [email protected]

Abstract

We study the problem of locating a public facility on a real line or an interval, when agents' costs are their (expected) distances from the location of the facility. Our goal is to minimize the maximum envy over all agents, which we will refer to as the minimax envy objective, while at the same time ensuring that agents report their most preferred locations truthfully. First, for the problem of locating the facility on a real line, we propose a class of truthful-in-expectation mechanisms that generalize the well-known LRM mechanism [Procaccia and Tennenholtz, 2009; Alon et al., 2009], the best of which has performance arbitrarily close to the social optimum. Then, we restrict the possible locations of the facility to a real interval and consider two cases: when the interval is determined relative to the agents' reports and when the interval is fixed in advance. For the former case, we prove that for any choice of such an interval, there is a mechanism in the aforementioned class with additive approximation arbitrarily close to the best approximation achieved by any truthful-in-expectation mechanism. For the latter case, we prove that the approximation of the best truthful-in-expectation mechanism is between 1/3 and 1/2.

1 Introduction

Over the past years, facility location has been a topic of intensive study at the intersection of AI and game theory. In the basic version of the problem [Procaccia and Tennenholtz, 2009], a central planner is asked to locate a facility on a real line, based on the reported preferences of self-interested agents. Agents' preferences are single-peaked and are expressed through cost functions; an agent's cost is the distance between her most preferred position (her "peak") and the location of the facility [Procaccia and Tennenholtz, 2009].

Our work falls under the umbrella of approximate mechanism design without money, a term coined in [Procaccia and Tennenholtz, 2009] to describe problems where some objective function is approximately maximized under the constraint that the truthful behavior of the participants must be ensured. In fact, the setting in [Procaccia and Tennenholtz, 2009] is the very same facility location setting presented above. The objectives studied in [Procaccia and Tennenholtz, 2009], as well as in a large body of subsequent literature [Alon et al., 2009; Lu et al., 2009; 2010; Fotakis and Tzamos, 2013b], are minimizing the social cost or the maximum cost. The social cost is the equivalent of the utilitarian objective from economics, whereas the maximum cost is often referred to as the egalitarian solution and is in general interpreted as a more "fair" objective, in comparison to its utilitarian counterpart.

In this paper, we adopt a different fairness criterion from the literature [Caragiannis et al., 2009; Lipton et al., 2004; Netzer and Meisels, 2013], that of minimax envy. Since in the standard facility location setting in computer science agents' preferences are expressed through cost functions, quantified comparisons between costs are possible and, in fact, such comparisons are inherent in both the social cost and the maximum cost objectives. In a similar spirit, we can define a quantified version of envy; the envy of an agent with respect to another agent is simply their difference in distance from the facility (normalized by the length of the location profile). For example, when there are two agents, placing the facility on either agent's most preferred point generates an envy of 1, while placing it in the middle of the interval defined by their most preferred locations generates an envy of 0. The goal is to find the location that minimizes the maximum envy of any agent, while at the same time making sure that agents report their most preferred locations truthfully.

1.1 Our contributions

We study two versions of the problem: when the facility can be placed anywhere on a real line and when the space of allowed facility locations is restricted to an interval. As we will argue later on, the usual notion of the (multiplicative) approximation ratio typically used in facility location problems is not fit for the minimax envy objective. In fact, it is not hard to see that no truthful-in-expectation mechanism can achieve a finite approximation ratio, the reason being that there are inputs where the minimum envy is 0, which is however not achievable by any truthful mechanism. For that reason, we will employ an additive approximation to quantify the performance of truthful mechanisms. Additive

approximations are quite common in the literature of approximation algorithms [Alon et al., 2005; Goemans, 2006; Williamson and Shmoys, 2011] as well as in the mechanism design literature [Roughgarden and Sundararajan, 2009; Cohler et al., 2011], even for facility location [Nissim et al., 2012].

First, we study the version where the facility can be placed anywhere on the real line. We design a class of truthful-in-expectation mechanisms, which we call $\alpha$-LRM, that generalizes the well-known LRM ("left-right-median") mechanism introduced in [Procaccia and Tennenholtz, 2009] and named in [Alon et al., 2009]. We prove that there exists a mechanism in the $\alpha$-LRM class that is near-optimal, i.e. it achieves an $\epsilon$ additive approximation, for any $\epsilon > 0$.

Next, we consider the variant of the problem where the facility is restricted to an interval. This case captures most real-life scenarios and is often implicit in the motivation of the problem; when we locate a library in a university, the candidate locations are clearly constrained to be within the university premises, and when choosing the ideal temperature in a room, it might not be sensible to consider temperatures much higher or lower than what any agent would ideally prefer. The reason why this restriction was not discussed explicitly in previous work is that the best performance guarantees for other objective functions are achievable even if we restrict our attention to the interval defined by the reported locations of the agents. As we will show, this is no longer the case for the minimax envy objective.

When the interval is defined by the reports of the agents, we prove that for any choice of such an interval, there exists a mechanism in the class $\alpha$-LRM with additive approximation that is arbitrarily close to the approximation achieved by the best truthful-in-expectation mechanism. For the case when all reports lie within a fixed interval, we prove that the approximation of the best truthful-in-expectation mechanism is between $1/3 - \epsilon$ and $1/2$, for any $\epsilon > 0$. For both the real line and the restriction to an interval, we prove that any truthful deterministic mechanism has a minimum maximum envy of 1, which shows that without randomization, making some agent maximally envious is unavoidable.

1.2 Related work

The facility location problem has been studied extensively in the computer science literature [Procaccia and Tennenholtz, 2009; Alon et al., 2009; Lu et al., 2009; 2010; Fotakis and Tzamos, 2013a; 2013b]. Crucially however, most of the rich literature on facility location in computer science considers either the social cost or the maximum cost objective. It is only recently that different objectives have been considered. Feigenbaum et al. [Feigenbaum et al., 2013] consider facility location on a real line for minimizing the $L_p$ norm of agents' costs, while Feldman and Wilf [Feldman and Wilf, 2013] design truthful mechanisms for approximating the least squares objective on networks. Our work introduces a different objective which is not based on aggregate measures, but rather on a quantitative notion of individual fairness.

The objective of envy minimization has attracted considerable attention in the recent literature of fair division. Lipton et al. [Lipton et al., 2004] and Caragiannis et al. [Caragiannis et al., 2009] study indivisible item allocation under the minimax envy criterion and bound the performance of truthful mechanisms. For the same problem, Nguyen and Rothe [Nguyen and Rothe, 2013] study the minimization of the maximum envy as well as the total envy, but from a computational standpoint and in the absence of incentives. Netzer and Meisels [Netzer and Meisels, 2013] apply the minimax envy criterion to resource allocation problems in a distributed framework. It is worth noting that relaxations of envy-freeness and the goal of envy minimization have also been considered in the past in economics [Zeckhauser, 1991].

2 Preliminaries

In the facility location problem, a set $N = \{1, \ldots, n\}$ of agents have to decide collectively on the location of a facility $y \in \mathbb{R}$. Each agent $i$ is associated with a location $x_i \in \mathbb{R}$, which is her most preferred point on the real line. Let $x = (x_1, \ldots, x_n)$ be the vector of agents' locations, which we will refer to as a location profile. Given a location of the facility $y$, the cost of an agent is simply the distance between her location and the location of the facility, i.e. $cost_i(y) = |y - x_i|$. For a given location profile $x$, we will use $lm(x)$ and $rm(x)$ to denote the locations of the leftmost and the rightmost agents respectively. We will let $mid(x) = (rm(x) + lm(x))/2$. Finally, we will let $L(x) = rm(x) - lm(x)$ denote the length of profile $x$.

A mechanism is a function $f : \mathbb{R}^n \to \mathbb{R}$ mapping location profiles to locations of the facility. The cost of an agent from mechanism $f$ is defined as $cost_i(f) = cost_i(f(x))$. In this paper, we will also consider randomized mechanisms; a randomized mechanism $f$ is a random variable $X$ with probability density function $p(y, x_1, x_2, \ldots, x_n)$ such that $\int_{-\infty}^{\infty} p(y, x_1, x_2, \ldots, x_n)\,dy = 1$. Informally, randomized mechanisms output different locations with different probabilities. The cost of agent $i$ from a randomized mechanism $f$ is defined as

$cost_i(f) = \int_{-\infty}^{\infty} |y - x_i|\, p(y, x_1, x_2, \ldots, x_n)\,dy.$

We will be interested in truthful mechanisms, i.e. mechanisms that do not incentivize agents to misreport their locations. Formally, a mechanism $f$ is truthful if for every agent $i \in N$ and for any $\delta \in \mathbb{R}$, it holds that

$cost_i(f(x_i + \delta, x_{-i})) \geq cost_i(f(x)),$

where $x_{-i}$ is the vector of reported locations of agents in $N \setminus \{i\}$. For randomized mechanisms, the notion of truthfulness is truthfulness-in-expectation and the definition is similar, with respect to the expected cost of an agent.

We will aim to minimize the maximum envy over all agents. For a given location of the facility $y$, the envy of agent $i$ with respect to agent $j$ is defined as $cost_i(y) - cost_j(y)$. The maximum envy is defined as

$F(y) = \frac{\max_{1 \leq i \neq j \leq n} \left( cost_i(y) - cost_j(y) \right)}{L(x)}.$
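To make the definition concrete, the normalized maximum envy $F(y)$ of a deterministic facility location can be computed directly. The sketch below is ours (the function name does not come from the paper); since the envy of $i$ with respect to $j$ is $cost_i(y) - cost_j(y)$, the maximum over ordered pairs is just the largest cost minus the smallest cost:

```python
def max_envy(profile, y):
    """Normalized maximum envy F(y): the largest cost_i(y) - cost_j(y)
    over ordered pairs of agents, divided by the profile length L(x)."""
    length = max(profile) - min(profile)
    if length == 0:
        return 0.0  # degenerate profile: every agent has the same peak
    costs = [abs(y - x) for x in profile]
    return (max(costs) - min(costs)) / length

# Two agents at 0 and 1: the midpoint is envy-free, either peak is worst.
print(max_envy([0.0, 1.0], 0.5))  # 0.0
print(max_envy([0.0, 1.0], 1.0))  # 1.0
```

This matches the two-agent example from the introduction: the midpoint gives envy 0 and either peak gives envy 1.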

Here, without loss of generality, we can assume that $L(x) \neq 0$ (otherwise outputting $lm(x)$ trivially minimizes the maximum envy). We define the maximum envy of a mechanism $f$ on input profile $x$ to be $F(f(x)) = F(f, x)$. For randomized mechanisms, the maximum envy is defined with respect to the expected costs and, given the definition of a randomized mechanism as a random variable, can be written as

$F(f, x) = \frac{E\left[\max_{1 \leq i \neq j \leq n} \left( |X - x_i| - |X - x_j| \right)\right]}{L(x)}.$

Our objective will be to find mechanisms that minimize the quantity $F(f, x)$ over all location profiles. Note that by the definitions above, the maximum envy of any mechanism is at most 1 and the minimum envy is 0.

In a location profile with two agents, the location that minimizes the envy of any agent is the midpoint of the location interval. Furthermore, the minimum envy at that location is 0. On the other hand, it is not hard to see that a mechanism that outputs the midpoint of the location interval is not truthful, while at the same time, assigning positive probabilities to other locations renders the minimum envy strictly positive. This means that if we employ the commonly used notion of multiplicative approximation (the approximation ratio) [Procaccia and Tennenholtz, 2009], the ratio of any truthful mechanism is infinite, and hence this measure of efficiency is not appropriate for the problem that we study here.

We will instead consider an additive approximation: a mechanism achieves an approximation of $\rho \in [0, 1]$ if the maximum envy that it generates for any location profile $x$ is at most $O(x) + \rho$, where $O(x)$ is the minimum possible envy achievable on profile $x$. We will be interested in truthful mechanisms with good additive approximations.

The following lemma will be very useful for analyzing the approximations of our mechanisms. The lemma implies that for any number of agents, we can assume that the location that minimizes the envy is in fact the midpoint of the location profile. The proof is based on a case analysis and is omitted due to lack of space.

Lemma 1. Given a location profile $x$, the location $mid(x)$ of the facility minimizes the maximum envy over all agents.

3 Location on the real line

In this section, we consider the variant of the problem where the location of the facility can be any point $y \in \mathbb{R}$ on the real line. We start with the following theorem, which implies that deterministic truthful mechanisms are bound to perform poorly; the proof is not very involved and is omitted due to lack of space.

Theorem 1. The approximation of any truthful deterministic mechanism is 1.

Theorem 1 suggests that in order to obtain any reasonable minimax envy guarantees, we need to resort to randomization. Next, we define a class of truthful-in-expectation mechanisms, parametrized by a constant $\alpha \in (0, 1/4]$, that we will call $\alpha$-LRM.

Mechanism $\alpha$-LRM. For any location profile $x$, let

$L^{\alpha}(x) = \frac{1 - 4\alpha}{4\alpha} L(x).$

Place the facility at $mid(x)$ with probability $1 - 2\alpha$, at $lm(x) - L^{\alpha}(x)$ with probability $\alpha$, and at $rm(x) + L^{\alpha}(x)$ with probability $\alpha$.

Note that the class $\alpha$-LRM generalizes the well-known LRM mechanism [Procaccia and Tennenholtz, 2009; Alon et al., 2009], since the LRM mechanism can be obtained from the above definition by setting $\alpha = 1/4$. We prove the following:

Theorem 2. Any mechanism in the class $\alpha$-LRM is truthful-in-expectation.

Proof. Let $x = (x_1, x_2, \ldots, x_n)$ be any location profile. For ease of notation and without loss of generality, assume that $x_1 \leq x_2 \leq \ldots \leq x_n$. Agents 1 and $n$ can change the location of the facility by misreporting their locations, but any other agent $j$ cannot impact the output unless the misreported location $x'_j$ is such that $x'_j < x_1$ or $x'_j > x_n$. We consider two main cases.

Case 1: The deviating agent is either agent 1 or agent $n$. We only need to prove that agent 1 cannot benefit from misreporting; the argument for agent $n$ is completely symmetric. We will consider the cases when agent 1 reports $x_1 + \delta$ and $x_1 - \delta$, for $\delta > 0$, separately.

• First consider the case when agent 1 reports $x_1 + \delta$ for $\delta > 0$. Without loss of generality, we can assume that $lm(x_1 + \delta, x_{-1}) = x_1 + \delta$, because any misreport $x_1 + \delta \in (x_2, x_n]$ gives agent 1 the same expected cost as the misreport $x_1 + \delta = x_2$ (and any misreport $x_1 + \delta > x_n$ is obviously worse for the agent). By this discussion, it holds that $mid(x_1 + \delta, x_{-1}) = (x_1 + x_n + \delta)/2$. Then the contribution to the cost of agent 1 from $mid(x_1 + \delta, x_{-1})$ is larger by $(1 - 2\alpha)\delta/2$ when compared to the contribution to the cost of the agent from $mid(x)$. The total contribution to the cost of the agent from $lm(x) - L^{\alpha}(x)$ and $rm(x) + L^{\alpha}(x)$ is $\alpha L^{\alpha}(x) + \alpha(L^{\alpha}(x) + L(x)) = (1 - 2\alpha)L(x)/2$. When agent 1 reports $x_1 + \delta$, the corresponding contribution to the cost of the agent from $lm(x_1 + \delta, x_{-1}) - L^{\alpha}(x_1 + \delta, x_{-1})$ and $rm(x_1 + \delta, x_{-1}) + L^{\alpha}(x_1 + \delta, x_{-1})$ is $(1 - 2\alpha)(L(x) - \delta)/2$, which is smaller than the contribution to the cost of the agent from locations $lm(x) - L^{\alpha}(x)$ and $rm(x) + L^{\alpha}(x)$ by $(1 - 2\alpha)\delta/2$, and hence agent 1 cannot benefit by misreporting.

• If agent 1 reports $x_1 - \delta$, for $\delta > 0$, it holds that $mid(x_1 - \delta, x_{-1}) = (x_1 + x_n - \delta)/2$. Then the contribution to the cost of the agent from $mid(x_1 - \delta, x_{-1})$ is smaller by $(1 - 2\alpha)\delta/2$ when compared to the contribution to the cost from $mid(x)$. Again, the contribution to the cost from locations $lm(x) - L^{\alpha}(x)$ and $rm(x) + L^{\alpha}(x)$ is $\alpha L^{\alpha}(x) + \alpha(L^{\alpha}(x) + L(x)) = (1 - 2\alpha)L(x)/2$. When agent 1 reports $x_1 - \delta$, the corresponding contribution from $lm(x_1 - \delta, x_{-1}) - L^{\alpha}(x_1 - \delta, x_{-1})$ and $rm(x_1 - \delta, x_{-1}) + L^{\alpha}(x_1 - \delta, x_{-1})$ is $(L(x) + \delta)(1 - 2\alpha)/2$, which is larger than the contribution to the cost of the agent from locations $lm(x) - L^{\alpha}(x)$ and $rm(x) + L^{\alpha}(x)$ by $(1 - 2\alpha)\delta/2$, and hence agent 1 cannot benefit from misreporting.
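The cancellation at the heart of Case 1 can also be checked numerically. The sketch below uses our own helper names (it simulates the mechanism as defined above; it is not code from the paper) and enumerates the three-point support of $\alpha$-LRM to confirm that the leftmost agent cannot reduce her expected cost by misreporting:

```python
def alpha_lrm(profile, alpha):
    """Support of the alpha-LRM lottery, for alpha in (0, 1/4]:
    mid(x) with probability 1 - 2*alpha, and lm(x) - L^a(x), rm(x) + L^a(x)
    with probability alpha each, where L^a(x) = (1-4*alpha)/(4*alpha) * L(x)."""
    lo, hi = min(profile), max(profile)
    l_alpha = (1 - 4 * alpha) / (4 * alpha) * (hi - lo)
    return [((lo + hi) / 2, 1 - 2 * alpha),
            (lo - l_alpha, alpha),
            (hi + l_alpha, alpha)]

def expected_cost(reported, alpha, true_peak):
    """Expected distance from an agent's true peak to the lottery outcome,
    when the mechanism is run on the reported profile."""
    return sum(p * abs(y - true_peak) for y, p in alpha_lrm(reported, alpha))

# Agent 1 has true peak 0.0 and agent n sits at 1.0; no misreport helps.
alpha = 1 / 8
honest = expected_cost([0.0, 1.0], alpha, 0.0)
for report in [-0.5, -0.1, 0.3, 0.7]:
    assert expected_cost([report, 1.0], alpha, 0.0) >= honest - 1e-9
```

The two contributions in the proof (from the midpoint and from the two outer points) offset each other exactly, so every deviation leaves the agent's expected cost at least as large as under truthful reporting.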

Case 2: The deviating agent is some agent $i \notin \{1, n\}$. From the earlier discussion, in order to affect the location of the facility, it has to be the case that the misreport $x'_i$ of agent $i$ satisfies $x'_i < x_1$ or $x'_i > x_n$. We will only consider the case when $x'_i < x_1$; the other case is symmetric. Let $\delta > 0$ be such that $x'_i = x_1 - \delta$. Since $x_1 - \delta$ is now the leftmost point of the location interval, it holds that $mid(x_1 - \delta, x_{-i}) = (x_1 + x_n - \delta)/2$. Then the contribution to the cost of agent $i$ from $mid(x_1 - \delta, x_{-i})$ is smaller by at most $(1 - 2\alpha)\delta/2$, compared to the contribution to the cost from $mid(x)$. On the other hand, the contribution to the cost of the agent from $lm(x_1 - \delta, x_{-i}) - L^{\alpha}(x_1 - \delta, x_{-i})$ and $rm(x_1 - \delta, x_{-i}) + L^{\alpha}(x_1 - \delta, x_{-i})$, when compared to the contribution from $lm(x) - L^{\alpha}(x)$ and $rm(x) + L^{\alpha}(x)$, is larger by at least $\alpha(\delta + (1 - 4\alpha)\delta/2\alpha) = (1 - 2\alpha)\delta/2$, and thus agent $i$ cannot benefit from misreporting her location.

Next, we prove that a mechanism in this class is arbitrarily close to the optimal assignment that minimizes the maximum envy.

Theorem 3. For any $\epsilon > 0$, there exists a mechanism in the class $\alpha$-LRM with additive approximation $\epsilon$.

Proof. Recall that $O(x)$ is the minimum maximum envy on profile $x$. By Lemma 1, it holds that $O(x) = F(mid(x), x)$, and hence the maximum envy achieved by a mechanism $\alpha$-LRM, for some $\alpha \in (0, 1/4]$, is at most $(1 - 2\alpha)O(x) + 2\alpha$, and the approximation of the mechanism is at most $2\alpha - 2\alpha O(x)$. For any $\epsilon > 0$, we can choose $\alpha$ small enough to achieve an approximation of $\epsilon$.

4 Location on an interval

In real-world applications, the designer might not have the freedom to place the facility anywhere he wants; the allowed locations might be constrained by physical borders (like a university campus) or practicality constraints (placing the library too far away, even with a small probability, is impractical). We will consider the minimax envy when the facility is restricted to some interval $[a, b]$, and consider two cases: when the interval is relative, i.e. determined by the reports of the agents, and when it is fixed in advance, with all the reports lying within the interval. Again, we have the following theorem, the proof of which is omitted due to lack of space.

Theorem 4. The approximation of any deterministic truthful mechanism for facility location on an interval is 1.

Next, we state some useful lemmas that will allow us to first consider profiles with two agents and then generalize the results to any number of agents. We start with a lemma that uses a mechanism for $n$ agents to simulate a mechanism for two agents. Similar lemmas in spirit have been proven before in the literature [Lu et al., 2010; Filos-Ratsikas et al., 2015]; we omit the proof due to lack of space.

Lemma 2. Let $f$ be a truthful mechanism for $n$ agents with locations $x_1, \ldots, x_n$. Let $f_k$ be the mechanism for two agents with locations $x'_1, x'_2$ such that

$f_k(x'_1, x'_2) = f(x_1, x_2, \ldots, x_k, x_{k+1}, \ldots, x_n)$

whenever $x_1 = x_2 = \ldots = x_k = x'_1$ and $x_{k+1} = x_{k+2} = \ldots = x_n = x'_2$. Then, $f_k$ is truthful. Furthermore, if the output of $f$ is restricted to an interval $[a, b]$, the output of $f_k$ is restricted to the interval $[a, b]$ as well.

The next lemma allows us to bound the approximation of truthful mechanisms by only considering the case of two agents.

Lemma 3. Let $f$ be a truthful mechanism for $n$ players with approximation at most $\rho$. Then, there exists a truthful mechanism $g$ for two agents with approximation at most $\rho$.

Proof. Let $f$ be such a mechanism as above and let $g$ be some mechanism $f_k$ for two agents constructed as in Lemma 2, for some value of $k$. Since the approximation of $f$ is the worst-case approximation over all input profiles, the achieved approximation of $f$ (i.e. the difference between the minimax envy of the mechanism and the minimum maximum envy) on any input of the form $x = (x_1, x_2, \ldots, x_k, x_{k+1}, \ldots, x_n)$, with $x_1 = \cdots = x_k$ and $x_{k+1} = \cdots = x_n$, is at most $\rho$. By the construction of mechanism $g$ and given that $cost_i(f(x)) = cost_{i'}(f(x))$ for all $i, i' = 1, \ldots, k$ and that $cost_j(f(x)) = cost_{j'}(f(x))$ for all $j, j' = k+1, \ldots, n$, the approximation of $g$ on any profile $(x'_1, x'_2)$ is at most $\rho$.

4.1 Relative Interval

Here, we consider the case where the allowed interval is determined by the reports of the extremal (leftmost and rightmost) agents. This setting corresponds to scenarios where the designer aims to balance the quality of the solution with the set of reports, in order to meet some natural restrictions ex post. For example, the designer might want to make sure that in every possible run of the mechanism, the facility does not lie too far away from any agent, no matter how small the probability of that happening is. In the following, we will prove that for any choice of such an interval, there is a mechanism in the class $\alpha$-LRM that achieves the best possible approximation among all truthful-in-expectation mechanisms.

First, we consider the case when the allowed interval is (strictly) contained in the interval defined by the profile.

Lemma 4. Let $(x_1, x_2)$ be any location profile with two agents and assume without loss of generality that $x_1 \leq x_2$. For any $k \in [0, 1)$, there does not exist a truthful mechanism such that it always holds that

$f(x_1, x_2) \in \left[\frac{(1+k)x_1}{2} + \frac{(1-k)x_2}{2},\; \frac{(1-k)x_1}{2} + \frac{(1+k)x_2}{2}\right],$

i.e. the length of the allowed interval is $r = k(x_2 - x_1)$.

Proof. Assume for contradiction that there exists a truthful mechanism $f$ in this setting. For ease of notation, let $(x, y)$ denote the location profile $(x_1, x_2)$ and for any such profile, let

$l_{x,y} = \frac{(1+k)x}{2} + \frac{(1-k)y}{2}, \quad r_{x,y} = \frac{(1-k)x}{2} + \frac{(1+k)y}{2}$

denote the endpoints of the allowed interval, and let $g_{x,y} = E[|X_{x,y} - l_{x,y}|]$ denote the expected distance between the facility and the left endpoint $l_{x,y}$ of the allowed interval. It is easy to see that

$0 \leq g_{x,y} \leq k(y - x). \quad (1)$
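Stepping back to Theorem 3 for a moment, the bound on the expected envy of $\alpha$-LRM is easy to verify numerically: the two outer support points each generate normalized envy exactly 1, and the midpoint generates envy $O(x)$ by Lemma 1, so the expectation is exactly $(1 - 2\alpha)O(x) + 2\alpha$. A self-contained sketch (helper names are ours, not the paper's):

```python
def alpha_lrm_support(profile, alpha):
    """Three-point support of alpha-LRM: mid(x) w.p. 1 - 2a, and the points
    lm(x) - L^a(x), rm(x) + L^a(x) w.p. a each."""
    lo, hi = min(profile), max(profile)
    l_alpha = (1 - 4 * alpha) / (4 * alpha) * (hi - lo)
    return [((lo + hi) / 2, 1 - 2 * alpha),
            (lo - l_alpha, alpha),
            (hi + l_alpha, alpha)]

def envy(profile, y):
    """Normalized maximum envy F(y) of a deterministic location y."""
    costs = [abs(y - x) for x in profile]
    return (max(costs) - min(costs)) / (max(profile) - min(profile))

def expected_envy(profile, alpha):
    """Expected normalized maximum envy of alpha-LRM on the profile."""
    return sum(p * envy(profile, y)
               for y, p in alpha_lrm_support(profile, alpha))

# On every non-degenerate profile the expectation equals (1-2a)O(x) + 2a,
# so the additive approximation is 2a - 2a*O(x) <= 2a, as in Theorem 3.
for profile in [[0.0, 1.0], [0.0, 0.25, 1.0], [-2.0, 0.0, 3.0, 5.0]]:
    opt = envy(profile, (min(profile) + max(profile)) / 2)  # O(x), Lemma 1
    for alpha in [0.25, 0.1, 0.01]:
        assert abs(expected_envy(profile, alpha)
                   - ((1 - 2 * alpha) * opt + 2 * alpha)) < 1e-9
```

Choosing $\alpha$ small drives the additive term $2\alpha$ toward 0, at the price of placing the outer support points of the lottery further away from the profile.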

Next, consider a location profile $(x_0, y_0)$ with $x_0 < y_0$. By Equation (4), we obtain the closed form (5) for $a_i$. As $0 \leq k < 1$, there exists some $m$ such that $a_m < 0$, and hence $g_{x_0,y_0} < 0$, contradicting Inequality (1). We conclude that a truthful mechanism for this case does not exist.

By Lemma 4 and Lemma 2, we obtain the following theorem. The proof is simple and omitted due to lack of space.

Theorem 5. Let $x$ be any location profile with $n$ agents and assume without loss of generality that $x_1 \leq x_2 \leq \ldots \leq x_n$. For any $k \in [0, 1)$, there does not exist a truthful mechanism such that it always holds that

$f(x) \in \left[\frac{(1+k)x_1}{2} + \frac{(1-k)x_n}{2},\; \frac{(1-k)x_1}{2} + \frac{(1+k)x_n}{2}\right].$

Next, we consider intervals that contain the interval defined by the agents' reports.

Lemma 5. Let $(x_1, x_2)$ be any location profile with two agents and assume without loss of generality that $x_1 \leq x_2$. For any $k > 1$ and for any truthful mechanism $f$ such that

$f(x_1, x_2) \in \left[\frac{(1+k)x_1}{2} + \frac{(1-k)x_2}{2},\; \frac{(1-k)x_1}{2} + \frac{(1+k)x_2}{2}\right],$

i.e. the length of the allowed interval is $r = k(x_2 - x_1)$, the approximation of $f$ is at least $1/(1+k) - \epsilon$ for any $\epsilon > 0$.

Proof. Since $O(x) = 0$ in the case of two agents, it suffices to prove that for any truthful mechanism $f$, the worst-case maximum envy of the mechanism over all profiles is at least $1/(1+k) - \epsilon$, for any $\epsilon > 0$. Assume by contradiction that there exists a truthful mechanism $f$ such that for any input profile $(x_1, x_2)$, it holds that

$F(f, x_1, x_2) < 1/(1+k) - \epsilon.$

Consider a profile $(x_0, y_0)$ and let $p_0$ denote the probability that $X_{x_0,y_0}$ lies outside the interval $[x_0, y_0]$. Let

$E^{out}_i = E\left[\,\left|X_{x_i,y_i} - \frac{x_i + y_i}{2}\right| \;\middle|\; X_{x_i,y_i} \notin [x_i, y_i]\right]$

denote the expected distance between the facility and the midpoint of the interval, conditioned on $X_{x_i,y_i}$ lying outside the interval, and let $E^{in}_i$ denote the analogous conditional expectation when $X_{x_i,y_i}$ lies inside the interval. By the definition of the maximum envy, it holds that

$F(f, x_0, y_0) = p_0 + \frac{2(1 - p_0)E^{in}_0}{y_0 - x_0} < 1/(1+k) - \epsilon$

and

$A_0 = p_0 E^{out}_0 + (1 - p_0) E^{in}_0 \geq (y_0 - x_0)/4.$

By the restriction of the facility location to the allowed interval, we have $E^{out}_0 \leq k(y_0 - x_0)/2$. Then

$p_0 > \frac{1}{2(1+k)} \quad (6)$

$E^{out}_0 > \frac{(k+1)(y_0 - x_0)}{4} \quad (7)$

Let $E^x_i = E[\,|X_{x_i,y_i} - x_i| \mid X_{x_i,y_i} \notin [x_i, y_i]\,]$ denote the expected distance between the facility and $x_i$, conditioned on $X_{x_i,y_i}$ lying outside the interval, and similarly let $E^y_i = E[\,|X_{x_i,y_i} - y_i| \mid X_{x_i,y_i} \notin [x_i, y_i]\,]$ denote the expected distance between the facility and $y_i$ under the same conditioning. We have that

$E^x_i + E^y_i = 2E^{out}_i. \quad (8)$

By (7) and (8), we get that

$E^x_0 + E^y_0 > \frac{(k+1)(y_0 - x_0)}{2}$

and then

$E[|X_{x_0,y_0} - x_0|] + E[|X_{x_0,y_0} - y_0|] > \left(1 + \frac{(k-1)p_0}{2}\right)(y_0 - x_0).$

Without loss of generality, we can assume that

$E[|X_{x_0,y_0} - x_0|] > \left(\frac{1}{2} + \frac{(k-1)p_0}{4}\right)(y_0 - x_0). \quad (9)$

By (6) and (9), we have

$E[|X_{x_0,y_0} - x_0|] > \left(\frac{1}{2} + \frac{k-1}{8k+8}\right)(y_0 - x_0).$

Now consider the location profile $(x_1, y_1)$, with $x_1 = 2x_0 - y_0$ and $y_1 = y_0$. Then, by truthfulness of mechanism $f$, it must hold that

$A_1 > \left(\frac{1}{4} + \frac{k-1}{16k+16}\right)(y_1 - x_1).$

By steps similar to the ones above, we get that

$p_1 > \frac{5}{4} \cdot \frac{1}{2(k+1)} \quad \text{and} \quad E^{out}_1 > \left(\frac{1}{2} + \frac{5k-5}{16}\right)(y_1 - x_1).$

We can use this method to obtain a sequence of profiles $(x_0, y_0), (x_1, y_1), \ldots, (x_m, y_m)$ such that for each input profile,

$p_i > a_i, \quad E^{out}_i > b_i(y_i - x_i),$

where

$b_{i+1} = \frac{(1+k)(1 + (2b_i - 1)a_i)}{4} \quad (10)$

$a_{i+1} = \frac{2b_{i+1} - 1}{(k-1)(k+1)}, \quad a_0 = \frac{1}{2(1+k)}, \quad b_0 = \frac{k+1}{4}. \quad (11)$

By (10) and (11), we get that

$b_{i+1} = \frac{1+k}{4}\left(1 + \frac{(2b_i - 1)^2}{(k+1)(k-1)}\right).$

Note that $b_i$ is an increasing and bounded sequence, and the limit of $b_i$ is $k/2$. By the same argument, we conclude that the limit of $a_i$ is $1/(1+k)$. Thus, there exists $m$ such that $p_m > 1/(1+k) - \epsilon$ and $F(f, x_m, y_m) > 1/(1+k) - \epsilon$, which is a contradiction, and hence there does not exist a truthful-in-expectation mechanism $f$ with worst-case maximal envy less than $1/(1+k) - \epsilon$.

When the allowed interval is the location profile itself, we have the following lemma and the following theorem.

Lemma 6. Let $(x_1, x_2)$ be any location profile with two agents and assume without loss of generality that $x_1 \leq x_2$. For any truthful mechanism $f$ such that it is always the case that $f(x) \in [x_1, x_2]$, the approximation of $f$ is at least $1/2$.

Theorem 6. Let $x$ be any location profile with $n$ agents and assume without loss of generality that $x_1 \leq x_2 \leq \ldots \leq x_n$. For any $k > 1$ and any truthful mechanism $f$ such that it always holds that

$f(x) \in \left[\frac{(1+k)x_1}{2} + \frac{(1-k)x_n}{2},\; \frac{(1-k)x_1}{2} + \frac{(1+k)x_n}{2}\right],$

the approximation of $f$ is at least $1/(1+k) - \epsilon$ for any $\epsilon > 0$. Furthermore, for any truthful mechanism $g$ such that $g(x) \in [x_1, x_n]$, the approximation of $g$ is at least $1/2$.

We omit the proofs due to lack of space. For an appropriate choice of $\alpha$, we can construct a mechanism in the class $\alpha$-LRM that always outputs a valid location within the interval; for $k \geq 1$, we can set $\alpha = 1/(2(1+k))$ and the approximation of $f$ will then be at most $1/(1+k) - O(x)/(1+k) \leq 1/(1+k)$. By Theorem 6, the mechanism is optimal among truthful mechanisms.

4.2 Fixed interval

In this subsection, we consider the setting where the allowed interval is fixed and the most preferred positions of the agents, as well as the set of their reports, lie in this interval. This corresponds to scenarios where the facility location is constrained by physical borders and agents are asked to choose their most preferred points within these restrictions. For example, when planning to build a university library, it makes sense to assume that the library will be built within the university campus and the participants should only specify their preferences over locations within the premises.

Recall that the approximation of any mechanism in the class $\alpha$-LRM is at most $2\alpha - 2\alpha O(x)$. For $\alpha = 1/4$, we obtain the LRM mechanism, which has an approximation of at most $1/2$. Due to lack of space, we only state the theorem that establishes the lower bound for any truthful mechanism.

Theorem 7. Let $f$ be any truthful-in-expectation mechanism for $n$ agents and let $I = [0, 1]$ be the fixed interval. Then the approximation of $f$ is at least $1/3 - \epsilon$, for any $\epsilon > 0$.

5 Conclusion

The minimax envy is an alternative criterion for the quality of truthful facility location mechanisms, and future work could adopt the same approaches that have been popularized for other objectives in the facility location literature, such as studying more general metric spaces or multiple facilities. Finally, our work adds to the existing work on fair division under the minimax objective and offers a new natural setting in which the objective can be applied. It would be interesting to apply the same objective to other domains, in combination with truthfulness.

Acknowledgments

Qingpeng Cai and Pingzhong Tang were supported by the National Basic Research Program of China Grants 2011CBA00300 and 2011CBA00301, NSFC Grants 61033001, 61361136003 and 61303077, a Tsinghua Initiative Scientific Research Grant and a National Youth 1000-talent program. Aris Filos-Ratsikas was supported by the ERC Advanced Grant 321171 (ALGAME) and acknowledges support from the Danish National Research Foundation and the National Science Foundation of China (under grant 61061130540) for the Sino-Danish Center for the Theory of Interactive Computation, within which this work was performed, and from the Center for Research in Foundations of Electronic Markets (CFEM), supported by the Danish Strategic Research Council.

References

[Alon et al., 2005] Noga Alon, Asaf Shapira, and Benny Sudakov. Additive approximation for edge-deletion problems. In Foundations of Computer Science, 2005. FOCS 2005. 46th Annual IEEE Symposium on, pages 419–428. IEEE, 2005.

[Alon et al., 2009] Noga Alon, Michal Feldman, Ariel D. Procaccia, and Moshe Tennenholtz. Strategyproof approximation mechanisms for location on networks. arXiv preprint arXiv:0907.2049, 2009.

[Caragiannis et al., 2009] Ioannis Caragiannis, Christos Kaklamanis, Panagiotis Kanellopoulos, and Maria Kyropoulou. On low-envy truthful allocations. In Algorithmic Decision Theory, pages 111–119. Springer, 2009.

[Cohler et al., 2011] Yuga J. Cohler, John K. Lai, David C. Parkes, and Ariel D. Procaccia. Optimal envy-free cake cutting. In AAAI, 2011.

[Feigenbaum et al., 2013] Itai Feigenbaum, Jay Sethuraman, and Chun Ye. Approximately optimal mechanisms for strategyproof facility location: Minimizing $L_p$ norm of costs. arXiv preprint arXiv:1305.2446, 2013.

[Feldman and Wilf, 2013] Michal Feldman and Yoav Wilf. Strategyproof facility location and the least squares objective. In Proceedings of the fourteenth ACM conference on Electronic commerce, pages 873–890. ACM, 2013.

[Filos-Ratsikas et al., 2015] Aris Filos-Ratsikas, Minming Li, Jie Zhang, and Qiang Zhang. Facility location with double-peaked preferences. In Proceedings of the 29th AAAI Conference on Artificial Intelligence (AAAI-15), 2015.

[Fotakis and Tzamos, 2013a] Dimitris Fotakis and Christos Tzamos. On the power of deterministic mechanisms for facility location games. In Automata, Languages, and Programming, pages 449–460. Springer, 2013.

[Fotakis and Tzamos, 2013b] Dimitris Fotakis and Christos Tzamos. Strategyproof facility location for concave cost functions. In Proceedings of the fourteenth ACM conference on Electronic commerce, pages 435–452. ACM, 2013.

[Goemans, 2006] Michel X. Goemans. Minimum bounded degree spanning trees. In Foundations of Computer Science, 2006. FOCS'06. 47th Annual IEEE Symposium on, pages 273–282. IEEE, 2006.

[Lipton et al., 2004] Richard J. Lipton, Evangelos Markakis, Elchanan Mossel, and Amin Saberi. On approximately fair allocations of indivisible goods. In Proceedings of the 5th ACM conference on Electronic commerce, pages 125–131. ACM, 2004.

[Lu et al., 2009] Pinyan Lu, Yajun Wang, and Yuan Zhou. Tighter bounds for facility games. In Internet and Network Economics, pages 137–148. Springer, 2009.

[Lu et al., 2010] Pinyan Lu, Xiaorui Sun, Yajun Wang, and Zeyuan Allen Zhu. Asymptotically optimal strategy-proof mechanisms for two-facility games. In Proceedings of the 11th ACM conference on Electronic commerce, pages 315–324. ACM, 2010.

[Netzer and Meisels, 2013] Arnon Netzer and Amnon Meisels. Distributed envy minimization for resource allocation. In ICAART (1), pages 15–24, 2013.

[Nguyen and Rothe, 2013] Trung Thanh Nguyen and Jörg Rothe. How to decrease the degree of envy in allocations of indivisible goods. In Algorithmic Decision Theory, pages 271–284. Springer, 2013.

[Nissim et al., 2012] Kobbi Nissim, Rann Smorodinsky, and Moshe Tennenholtz. Approximately optimal mechanism design via differential privacy. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, pages 203–213. ACM, 2012.

[Procaccia and Tennenholtz, 2009] Ariel D. Procaccia and Moshe Tennenholtz. Approximate mechanism design without money. In Proceedings of the 10th ACM conference on Electronic commerce, pages 177–186. ACM, 2009.

[Roughgarden and Sundararajan, 2009] Tim Roughgarden and Mukund Sundararajan. Quantifying inefficiency in cost-sharing mechanisms. Journal of the ACM (JACM), 56(4):23, 2009.

[Williamson and Shmoys, 2011] David P. Williamson and David B. Shmoys. The design of approximation algorithms. Cambridge University Press, 2011.

[Zeckhauser, 1991] Richard Zeckhauser. Strategy and choice. MIT Press, 1991.
