Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI-16)
Facility Location with Minimax Envy
Qingpeng Cai, Tsinghua University, China ([email protected])
Aris Filos-Ratsikas, University of Oxford, UK ([email protected])
Pingzhong Tang, Tsinghua University, China ([email protected])
Abstract

We study the problem of locating a public facility on a real line or an interval, when agents' costs are their (expected) distances from the location of the facility. Our goal is to minimize the maximum envy over all agents, which we will refer to as the minimax envy objective, while at the same time ensuring that agents will report their most preferred locations truthfully. First, for the problem of locating the facility on a real line, we propose a class of truthful-in-expectation mechanisms that generalize the well-known LRM mechanism [Procaccia and Tennenholtz, 2009; Alon et al., 2009], the best of which has performance arbitrarily close to the social optimum. Then, we restrict the possible locations of the facility to a real interval and consider two cases: when the interval is determined relative to the agents' reports and when the interval is fixed in advance. For the former case, we prove that for any choice of such an interval, there is a mechanism in the aforementioned class with additive approximation arbitrarily close to the best approximation achieved by any truthful-in-expectation mechanism. For the latter case, we prove that the approximation of the best truthful-in-expectation mechanism is between 1/3 and 1/2.

1 Introduction

Over the past years, facility location has been a topic of intensive study at the intersection of AI and game theory. In the basic version of the problem [Procaccia and Tennenholtz, 2009], a central planner is asked to locate a facility on a real line, based on the reported preferences of self-interested agents. Agents' preferences are single-peaked and are expressed through cost functions; an agent's cost is the distance between her most preferred position (her "peak") and the location of the facility [Procaccia and Tennenholtz, 2009].

Our work falls under the umbrella of approximate mechanism design without money, a term coined in [Procaccia and Tennenholtz, 2009] to describe problems where some objective function is approximately maximized under the constraint that the truthful behavior of the participants must be ensured. In fact, the setting in [Procaccia and Tennenholtz, 2009] is the very same facility location setting presented above. The objectives studied in [Procaccia and Tennenholtz, 2009], as well as in a large body of subsequent literature [Alon et al., 2009; Lu et al., 2009; 2010; Fotakis and Tzamos, 2013b], are minimizing the social cost or the maximum cost. The social cost is equivalent to the utilitarian objective from economics, whereas the maximum cost is often referred to as the egalitarian solution and is in general interpreted as a more "fair" outcome, in comparison to its utilitarian counterpart.

In this paper, we adopt a different fairness criterion from the fair division literature [Caragiannis et al., 2009; Lipton et al., 2004; Netzer and Meisels, 2013], that of minimax envy. Since in the standard facility location setting in computer science agents' preferences are expressed through cost functions, quantified comparisons between costs are possible; in fact, such comparisons are inherent in both the social cost and the maximum cost objectives. In a similar spirit, we can define a quantified version of envy: the envy of an agent with respect to another agent is simply their difference in distance from the facility (normalized by the length of the location profile). For example, when there are two agents, placing the facility on either agent's most preferred point generates an envy of 1, while placing it in the middle of the interval defined by their most preferred locations generates an envy of 0. The goal is to find the location that minimizes the maximum envy of any agent, while at the same time making sure that agents will report their most preferred locations truthfully.

1.1 Our contributions

We study two versions of the problem: when the facility can be placed anywhere on a real line and when the space of allowed facility locations is restricted to an interval. As we will argue later on, the usual notion of the (multiplicative) approximation ratio typically used in facility location problems is not fit for the minimax envy objective. In fact, it is not hard to see that no truthful-in-expectation mechanism can achieve a finite approximation ratio, the reason being that there are inputs where the minimum envy is 0, which is not, however, achievable by any truthful mechanism. For that reason, we will employ an additive approximation to quantify the performance of truthful mechanisms. Additive
approximations are quite common in the literature of approximation algorithms [Alon et al., 2005; Goemans, 2006; Williamson and Shmoys, 2011] as well as in the mechanism design literature [Roughgarden and Sundararajan, 2009; Cohler et al., 2011], even for facility location [Nissim et al., 2012].

First, we study the version where the facility can be placed anywhere on the real line. We design a class of truthful-in-expectation mechanisms, which we call α-LRM, that generalizes the well-known LRM ("left-right-median") mechanism introduced in [Procaccia and Tennenholtz, 2009] and named in [Alon et al., 2009]. We prove that there exists a mechanism in the α-LRM class that is near-optimal, i.e., it achieves an ε additive approximation, for any ε > 0.

Next, we consider the variant of the problem where the facility is restricted to an interval. This case captures most real-life scenarios and is often implicit in the motivation of the problem; when we locate a library in a university, the candidate locations are clearly constrained to be within the university premises, and when choosing the ideal temperature in a room, it might not be sensible to consider temperatures much higher or lower than what any agent would ideally prefer. The reason why this restriction was not discussed explicitly in previous work is that, for other objective functions, the best performance guarantees are achievable even if we restrict our attention to the interval defined by the reported locations of the agents. As we will show, this is no longer the case for the minimax envy objective.

When the interval is defined by the reports of the agents, we prove that for any choice of such an interval, there exists a mechanism in the class α-LRM with additive approximation that is arbitrarily close to the approximation achieved by the best truthful-in-expectation mechanism. For the case when all reports lie within fixed intervals, we prove that the approximation of the best truthful-in-expectation mechanism is between 1/3 − ε and 1/2, for any ε > 0. For both the real line and the restriction to the interval, we prove that any truthful deterministic mechanism has a minimum maximum envy of 1, which shows that without randomization, making some agent maximally envious is unavoidable.

1.2 Related work

The facility location problem has been studied extensively in the computer science literature [Procaccia and Tennenholtz, 2009; Alon et al., 2009; Lu et al., 2009; 2010; Fotakis and Tzamos, 2013a; 2013b]. Crucially, however, most of this rich literature considers either the social cost or the maximum cost objective. It is only recently that different objectives have been considered. Feigenbaum et al. [Feigenbaum et al., 2013] consider facility location on a real line for minimizing the L_p norm of agents' costs, while Feldman and Wilf [Feldman and Wilf, 2013] design truthful mechanisms for approximating the least squares objective on tree networks. Our work introduces a different objective which is not based on aggregate measures, but rather on a quantitative notion of individual fairness.

The objective of envy minimization has attracted considerable attention in the recent literature on fair division and mechanism design. Lipton et al. [Lipton et al., 2004] and Caragiannis et al. [Caragiannis et al., 2009] study indivisible item allocation under the minimax envy criterion and bound the performance of truthful mechanisms. For the same problem, Nguyen and Rothe [Nguyen and Rothe, 2013] study the minimization of the maximum envy as well as the total envy, but from a computational standpoint and in the absence of incentives. Netzer and Meisels [Netzer and Meisels, 2013] employ the minimax envy criterion in resource allocation problems in a distributed framework. It is worth noting that relaxations of envy-freeness and the goal of envy minimization have also been considered in the past in economics [Zeckhauser, 1991].

2 Preliminaries

In the facility location problem, a set N = {1, ..., n} of agents have to decide collectively on the location of a facility y ∈ R. Each agent i is associated with a location x_i ∈ R, which is her most preferred point on the real line. Let x = (x_1, ..., x_n) be the vector of agents' locations, which we will refer to as a location profile. Given a location of the facility y, the cost of an agent is simply the distance between her location and the location of the facility, i.e., cost_i(y) = |y − x_i|. For a given location profile x, we will use lm(x) and rm(x) to denote the locations of the leftmost and rightmost agents respectively. We will let mid(x) = (rm(x) + lm(x))/2. Finally, we will let L(x) = rm(x) − lm(x) denote the length of profile x.

A mechanism is a function f : R^n → R mapping location profiles to locations of the facility. The cost of an agent from mechanism f is defined as cost_i(f) = cost_i(f(x)). In this paper, we will also consider randomized mechanisms; a randomized mechanism f is a random variable X with probability density function p(y, x_1, x_2, ..., x_n) such that

  ∫_{−∞}^{∞} p(y, x_1, x_2, ..., x_n) dy = 1.

Informally, randomized mechanisms output different locations with different probabilities. The cost of agent i from a randomized mechanism f is defined as

  cost_i(f) = ∫_{−∞}^{∞} |y − x_i| · p(y, x_1, x_2, ..., x_n) dy.

We will be interested in truthful mechanisms, i.e., mechanisms that do not incentivize agents to misreport their locations. Formally, a mechanism f is truthful if for every agent i ∈ N and for any δ ∈ R, it holds that

  cost_i(f(x_i + δ, x_{−i})) ≥ cost_i(f(x)),

where x_{−i} is the vector of reported locations of the agents in N \ {i}. For randomized mechanisms, the corresponding notion is truthfulness-in-expectation and the definition is similar, with respect to the expected cost of an agent.

We will aim to minimize the maximum envy over all agents. For a given location of the facility y, the envy of agent i with respect to agent j is defined as cost_i(y) − cost_j(y). The maximum envy is defined as

  F(y) = max_{1 ≤ i ≠ j ≤ n} (cost_i(y) − cost_j(y)) / L(x).
Here, without loss of generality, we may assume that L(x) ≠ 0 (otherwise, outputting lm(x) trivially minimizes the maximum envy). We define the maximum envy of a mechanism f on input profile x to be F(f(x)) = F(f, x). For randomized mechanisms, the maximum envy is defined with respect to the expected costs; given the definition of a randomized mechanism as a random variable, it can be written as

  F(f, x) = max_{1 ≤ i ≠ j ≤ n} E[ |X − x_i| − |X − x_j| ] / L(x).

Our objective will be to find mechanisms that minimize the quantity F(f, x) over all location profiles. Note that by the definitions above, the maximum envy of any mechanism is at most 1 and the minimum envy is 0.

In a location profile with two agents, the location that minimizes the envy of any agent is the midpoint of the location interval. Furthermore, the minimum envy at that location is 0.

Mechanism (α-LRM). Place the facility at mid(x) with probability 1 − 2α, at lm(x) − L_α(x) with probability α, and at rm(x) + L_α(x) with probability α.

Note that the class α-LRM generalizes the well-known LRM mechanism [Procaccia and Tennenholtz, 2009; Alon et al., 2009], since the LRM mechanism can be obtained from the above definition by setting α = 1/4. We prove the following:

Theorem 2. Any mechanism in the class α-LRM is truthful-in-expectation.

Proof. Let x = (x_1, x_2, ..., x_n) be any location profile. For ease of notation and without loss of generality, assume that x_1 ≤ x_2 ≤ ... ≤ x_n. Agents 1 and n can change the location of the facility by misreporting their locations, but any other agent j cannot impact the output unless the misreported location x'_j is such that x'_j
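The α-LRM class can be sketched as an explicit three-point distribution. The definition of the quantity L_α(x) does not appear in this excerpt, so the sketch below assumes L_α(x) = (1/(4α) − 1) · L(x) for α ∈ (0, 1/4]: the symmetric choice under which α = 1/4 places the two outer points exactly at lm(x) and rm(x), recovering LRM. Both this choice of L_α and all function names are our assumptions, not the paper's.

```python
def alpha_lrm(x, alpha):
    # alpha-LRM as an explicit distribution: a list of (location, probability)
    # pairs. Places mid(x) w.p. 1 - 2*alpha, and lm(x) - L_a(x), rm(x) + L_a(x)
    # w.p. alpha each, where L_a(x) is ASSUMED to be (1/(4*alpha) - 1) * L(x).
    left, right = min(x), max(x)
    l_a = (1 / (4 * alpha) - 1) * (right - left)  # assumed form of L_alpha(x)
    return [((left + right) / 2, 1 - 2 * alpha),
            (left - l_a, alpha),
            (right + l_a, alpha)]

def expected_cost(xi, x, alpha):
    # Expected cost of an agent located at xi, summed over the support.
    return sum(p * abs(y - xi) for y, p in alpha_lrm(x, alpha))

def expected_max_envy(x, alpha):
    # F(f, x): the largest difference in expected costs over pairs of
    # agents, normalized by L(x).
    costs = [expected_cost(xi, x, alpha) for xi in x]
    return (max(costs) - min(costs)) / (max(x) - min(x))

# alpha = 1/4 recovers LRM: mid(x) w.p. 1/2, lm(x) and rm(x) w.p. 1/4 each.
print(alpha_lrm([0.0, 1.0], 0.25))  # [(0.5, 0.5), (0.0, 0.25), (1.0, 0.25)]
print(expected_max_envy([0.0, 0.5, 1.0], 0.25))  # 0.25

# Under the assumed L_alpha, an extreme agent (true location 0.0) gains
# nothing by misreporting outward: her expected cost is unchanged.
print(expected_cost(0.0, [-0.5, 1.0], 0.25))  # 0.5 (same as truthful)
```

On the three-agent profile above, the expected maximum envy works out to (1 − 2α)/2 under this assumption, which equals 1/4 at α = 1/4, matching the behavior of LRM.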
Case 2: The deviating agent is some agent i ∉ {1, n}. From the earlier discussion, in order to affect the location of the facility, it has to be the case that for any misreport x'_i of agent i, it holds that x'_i

whenever x_1 = x_2 = ... = x_k = x'_1 and x_{k+1} = x_{k+2} = ... = x_n = x'_2. Then, f_k is truthful. Furthermore, if the output of f is restricted to an interval [a, b], the output of f_k
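The case analysis turns on one observation: a mechanism in the α-LRM class selects its distribution as a function of lm(x) and rm(x) only, so a non-extreme agent can influence the outcome only by reporting outside the interval of the current extremes. A minimal self-contained sketch of this observation (names are ours):

```python
def extremes(x):
    # An alpha-LRM output distribution is determined entirely by the
    # leftmost and rightmost reports.
    return (min(x), max(x))

profile = [0.0, 0.4, 1.0]

# Case 2 intuition: any misreport by the middle agent that stays within
# [lm(x), rm(x)] leaves the extremes, and hence the distribution, unchanged.
for misreport in (0.1, 0.6, 0.99):
    assert extremes([0.0, misreport, 1.0]) == extremes(profile)

# Only a report strictly outside [lm(x), rm(x)] changes the extremes.
print(extremes([0.0, -0.5, 1.0]))  # (-0.5, 1.0)
```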