Large Population Aggregative Potential Games∗

Ratul Lahkar†

April 27, 2016

Abstract

We consider population games in which payoffs depend upon the aggregate strategy level and which admit a potential function. Examples of such aggregative potential games include the tragedy of the commons and the model of Cournot competition. These games are technically simple as they can be analyzed using a one-dimensional variant of the potential function. We use such games to model the presence of externalities, both positive and negative. We characterize Nash equilibria in such games as socially inefficient. Evolutionary dynamics in such games converge to socially inefficient Nash equilibria.

Keywords: Externalities, Aggregative Games, Potential Games, Evolutionary Dynamics.

JEL classification: C72; C73; D62.

∗I thank an anonymous associate editor and two anonymous referees for various comments and suggestions. Any remaining error or omission is my responsibility. †School of Economics, Ashoka University, Rajiv Gandhi Education City, Kundli, Haryana, 131 028, India. email: [email protected].

1 Introduction

Potential games are games in which information about players' payoffs can be summarized using a real valued function. Monderer and Shapley (1996) define and analyze the fundamental properties of a normal form potential game. Sandholm (2001, 2009) extends the notion of a potential game to population games. In the context of a population game, a potential game is one in which payoffs are equal to the gradient of a real valued function called the potential function. Potential games are of interest in evolutionary game theory because a variety of evolutionary dynamics converge to Nash equilibria in such games.1

In this paper, we consider a particularly simple class of potential games which we call aggregative potential games. These are potential games which belong to the class of aggregative population games, i.e. population games in which payoffs depend upon the strategy used and the aggregate strategy level at a population state. This notion of an aggregative population game is an extension of the original concept introduced by Corchón (1994) and elaborated further by, for example, Acemoglu and Jensen (2013), in the context of finite player games.

We describe aggregative potential games as "simple" because, as we show here, they can be analyzed using a one-dimensional analogue of the potential function, which we call the quasi-potential function. In general, Nash equilibria of a potential game are related to the maximizers of its potential function (Sandholm, 2001). For aggregative potential games, the maximizers of the potential function are well approximated by the maximizers of the quasi-potential function, particularly if the strategy set of the underlying game is sufficiently dense. Since finding the maximizers of the quasi-potential function is a trivial task in comparison to computing the maximizers of the potential function, identifying Nash equilibria in aggregative potential games becomes particularly easy.

In addition to their technical simplicity, aggregative potential games are of interest because they include important economic models like the tragedy of the commons, the model of Cournot competition and models of search. Further, aggregative potential games provide a parsimonious framework to model both negative and positive externalities in the context of population games. This allows us to use existing results on evolution in potential games to explain how, in the presence of externalities, societies may converge to inefficient aggregate economic outcomes.

We consider aggregative potential games with a finite set of positive strategies. We analyze two types of such games: one with negative externalities and the other with positive externalities. Our results on games with negative externalities are particularly interesting. In this case, under reasonable assumptions, the equilibrium aggregate strategy level is uniquely defined. This equilibrium aggregate strategy level is either exactly characterized by or well approximated by the unique maximizer of the quasi-potential function. Games with positive externalities do not have unique equilibrium aggregate strategy levels. Nor is their characterization as elegant as in the case of negative externalities. Nevertheless, with some relatively strong assumptions on the quasi-potential

1See Sandholm (2010) for a review of such results.

function, we are able to relate the equilibrium aggregate strategy levels under positive externalities to the maximizers of the quasi-potential function.

Externalities imply a difference between Nash equilibria and the socially efficient state, i.e. the state that maximizes aggregate payoff in the population. For aggregative potential games, we establish this distinction by using a technique similar to characterizing Nash equilibria in these games. We construct a one-dimensional analogue to the aggregate payoff and show that the maximizer of this function corresponds to the socially efficient state, either exactly or with a high degree of approximation. We then show that under negative externalities, the unique equilibrium aggregate social state is higher than the socially efficient aggregate state. Under positive externalities, aggregate states at equilibria are lower than the socially efficient state. Such conclusions are, of course, consistent with standard microeconomic theory. The novelty lies in the technique through which we arrive at these conclusions. Use of the one-dimensional analogues of the potential function and the aggregate payoff function makes it particularly easy to relate Nash equilibria and efficient states in aggregative potential games.

We can then apply standard results on evolution in potential games (Sandholm, 2001) to conclude that the social state in aggregative potential games converges to a state that is different from the socially efficient state. Deterministic evolutionary dynamics like the replicator dynamic, the logit dynamic, the BNN dynamic and the Smith dynamic converge either to Nash equilibria or to a perturbed version of Nash equilibria in potential games from all or almost all initial states. In aggregative potential games, therefore, all such dynamics converge to an aggregate social state that differs from the socially efficient aggregate state.

From a broader perspective, the evolutionary analysis of aggregative potential games allows us to appreciate how inefficient aggregate behavior becomes prevalent in societies. If we regard the revision protocols that generate the dynamics commonly studied in evolutionary game theory as reasonable descriptions of human behavior, then our analysis helps explain why we should expect social inefficiency under a wide variety of behavioral norms and from a wide range of initial conditions.

There are precedents to the evolutionary analysis of the models considered here. For example, Vega-Redondo (1997) and Alós-Ferrer and Ania (2005) consider an imitative model of Cournot competition with a finite population of players. Their main result is that the evolutionarily stable strategy (ESS) in the finite population model is not the Nash equilibrium of the model, but the Walrasian or competitive equilibrium. The key reason why Nash equilibria are not finite population ESS is that a strategy change by a single player affects a finite population state. Hence, when a player playing, say, strategy i observes the payoff of strategy j he is seeking to imitate, he is not observing the payoff that he would actually obtain when he changes the strategy to j. This is because when the i-player shifts to j, the number of j-players increases by one, thereby changing the payoff to strategy j. The failure to appreciate this distinction may make it advantageous to imitate a mutant and change strategy at a Nash equilibrium in a finite population Cournot model.
Alós-Ferrer and Ania (2005) also extend this result on finite population ESS to other models like the tragedy of the

commons and the search model that we have considered here.

Schaffer (1988) notes an important implication of the finite population ESS. It is as if players seek to maximize not absolute payoffs but relative payoffs. Thus, deviation from a Nash equilibrium becomes worthwhile because even though it may reduce one's absolute payoff, it can reduce opponents' payoff even more, thereby increasing relative payoff. Another significant model that explores the distinction between Nash equilibria and finite population ESS under imitative learning due to the concern with relative payoff is Bergin and Bernhardt (2004). In their model, this distinction does not arise if agents learn from their own experience instead of imitating the experience of others.

Unlike the models discussed in the previous two paragraphs, in our paper, only Nash equilibria can be evolutionarily stable. This is because in an infinite population model such as ours, a strategy change by a single agent has no effect on the population state and on payoffs. Hence, the difference between the payoff observed before a strategy change and the payoff obtained after the strategy change cannot arise in this context. In terms of Schaffer's (1988) interpretation, concern with relative payoffs is irrelevant and so, deviation from a Nash equilibrium is never worthwhile. We should note, however, that a stable Nash equilibrium in our model coincides with the finite population ESS identified in these other models. Thus, in our Cournot model, Nash equilibria are also Walrasian equilibria. Intuitively, this is because with an infinite population, our model of Cournot competition is actually a model of perfect competition. Indeed, as the number of producers in the finite population Cournot models in Vega-Redondo (1997) and Alós-Ferrer and Ania (2005) increases, the Nash equilibrium in their model would also converge to the Walrasian equilibrium, or the Nash equilibrium in our model. Thus, while there are technical differences between the results in our model and these other models, the substantive difference is more a matter of degree.

In contrast to the models in Vega-Redondo (1997) and Alós-Ferrer and Ania (2005), there are other dynamic models which do show convergence to Nash equilibria in finite population aggregative games. For example, Kukushkin (2004) shows that every deterministic best response path leads to a Nash equilibrium in finite player aggregative games that satisfy certain strategic complementarity or substitutability conditions. Dindoš and Mezzetti (2006) consider a stochastic process in which players move towards better replies ("better-reply dynamics") in finite population aggregative games. They show that these dynamics converge globally to Nash equilibria if the aggregative game satisfies strategic complementarity or substitutability conditions. The examples we consider in our paper (the tragedy of the commons, Cournot competition and search) satisfy either strategic complementarity or strategic substitutability. Hence, the assumptions in Kukushkin (2004) or Dindoš and Mezzetti (2006) are more general than the property of being an aggregative potential game. Their results will hold for finite player versions of the examples we consider. On the other hand, use of the theory of potential games allows us to establish convergence results under a wider class of dynamics, even if for a more limited set of aggregative games.
Within the literature on evolution in potential games, this paper is most closely related to Sandholm (2002). That paper considers a network (for example, a road network) in which users

choose paths between two points. Additional users create delay and, hence, disutility to existing users of a path. In fact, in a broad sense, this is also a commons problem since the network is universally accessible and the Nash equilibrium involves congestion. There are, however, differences in the details of the two models (for example, different types of strategies, different payoff functions) due to which we cannot analyze such congestion games in the specific framework of aggregative potential games we are considering. However, towards the end of the paper, we discuss an extension of our model that can incorporate congestion games. This extension uses the notion of a multi-dimensional aggregate of the type considered, for example, in Acemoglu and Jensen (2013). Another extension we discuss is allowing for negative strategies.

The rest of the paper is organized as follows. Section 2 defines aggregative population games and Section 3 introduces the notion of aggregative potential games. In Section 4, we introduce the quasi-potential function and characterize Nash equilibria of aggregative potential games. Section 5 shows that such equilibria are inefficient. In Section 6, we discuss the evolutionary implications of our results. Section 7 concludes by providing brief extensions of our model to negative strategies and congestion games. Some proofs are in the appendix.

2 Aggregative Population Games

We consider a population consisting of a continuum set of agents. The mass of the population is 1. For the applications we are concerned with in this paper, it is natural to think of the entire positive subset of reals as the strategy set of agents. However, to avoid the technical difficulties of the analysis of infinite dimensional evolutionary dynamics, we restrict the agents' choice of strategies to the set S_n = {0, 1/n, 2/n, ···, (mn−2)/n, (mn−1)/n, m}, where n is a positive integer. The highest strategy m is a finite positive integer that can be arbitrarily large. For n sufficiently large, S_n is a close approximation of the set [0, m] ⊂ R_+. The set of population states is X = {x ∈ R_+^{mn+1} : ∑_{i∈S_n} x_i = 1}.2 The scalar x_i ∈ [0, 1] represents the proportion of players choosing strategy i ∈ S_n. Throughout, we use the notation e_i to denote the monomorphic state where every agent uses the strategy i. We characterize a population game with a smooth payoff function F : X → R^{mn+1} such that

Fi(x) denotes the payoff of an agent using strategy i when the population state is x. A population state x∗ is a Nash equilibrium of the population game F if

x∗_i > 0 ⇒ i ∈ argmax_{j∈S_n} F_j(x∗), for all i ∈ S_n.

If a Nash equilibrium has only one strategy in its support, we call it a monomorphic equilibrium. An aggregative population game is one in which the payoff to an agent depends upon his individual strategy and the aggregate level of strategy used in the population. We denote the aggregate

2The assumptions that m and n are positive integers ensure that the dimension mn + 1 in R_+^{mn+1} makes sense.

strategy level as a(x) and define it as

a(x) = ∑_{i∈S_n} i x_i.   (1)

We note that given the strategy set S_n, a(x) ∈ [0, m].3 We are interested in aggregative population games in which payoffs take the form

Fi(x) = iβ(a(x)) − c(i). (2)

The function β : [0, m] → R describes the benefit an agent receives when the aggregate strategy level in the population is a(x). We refer to this function as the aggregate benefit function. The gross benefit an agent using i receives when the population state is x is, therefore, iβ(a(x)). The function c : [0, m] → R describes the cost of using strategy i ∈ S_n.4 The payoff F_i(x) is, therefore, the net benefit of using strategy i at population state x. Throughout the paper, we assume that β and c are differentiable. We also assume c is a strictly increasing and convex function with c(0) = 0. In addition, in stating our results in the following sections, we impose relevant conditions on the slopes of β and c.
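To fix ideas, the payoff structure in (1)-(2) can be written out directly. The sketch below is purely illustrative: the particular β and c are placeholders chosen to satisfy the assumptions above (a decreasing β and a strictly increasing, convex c with c(0) = 0), not functions taken from the paper.

```python
import numpy as np

# Discretized strategy set S_n = {0, 1/n, 2/n, ..., (mn-1)/n, m} with mn+1 strategies.
n, m = 10, 5
S = np.arange(0, m * n + 1) / n

beta = lambda a: 10.0 - a        # placeholder decreasing aggregate benefit function
c = lambda i: 1.5 * i**2         # placeholder strictly increasing, convex cost with c(0) = 0

def aggregate(x):
    """Aggregate strategy level a(x) = sum_{i in S_n} i * x_i, as in (1)."""
    return S @ x

def payoffs(x):
    """Payoff vector F_i(x) = i * beta(a(x)) - c(i), as in (2)."""
    return S * beta(aggregate(x)) - c(S)

x = np.full(len(S), 1.0 / len(S))     # uniform population state: all strategies equally used
print(aggregate(x))                    # 2.5
print(payoffs(x)[:3])                  # payoffs of the three lowest strategies
```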

3 Aggregative Potential Games

We seek to consider aggregative population games which can be analyzed as potential games. Sandholm (2001) defines a potential game as follows.5

Definition 3.1 (Sandholm, 2001) The population game F : R_+^{mn+1} → R^{mn+1} is a potential game if there exists a continuously differentiable function f : R_+^{mn+1} → R such that

∇f(x) = F(x) for all x ∈ R_+^{mn+1}.   (3)

Therefore, F is a potential game if for every strategy i, ∂f/∂x_i = F_i(x). The function f is called the potential function for the game F.

We can appreciate the potential function's role better by considering a strategy change from i to j by a small group of agents. This change is represented by the displacement vector z = e_j − e_i. The marginal change in the value of the potential function is then ∇f(x)′z = ∂f/∂x_j − ∂f/∂x_i = F_j(x) − F_i(x). Therefore, if the strategy revision is profitable, i.e. F_j(x) > F_i(x), then the value of the potential increases. It is this relationship between the potential function and payoffs that underlies many

3It is possible to define more general versions of the strategy aggregate, as in Alós-Ferrer and Ania (2005). See the last paragraph of Section 3 for more on this.
4Note that we define the cost function with respect to the domain [0, m] and not the finite set S_n. This is primarily because when we define the quasi-potential function in Section 4, we need the continuous version of the cost function.
5See Sandholm (2009) for an alternative definition of potential games. Definition 3.1 extends the domain of F from X to R_+^{mn+1} in order to ensure that partial derivatives of the type ∂f/∂x_i do exist. The alternative way is to confine oneself to X and define a potential game using affine calculus, as in Sandholm (2009).

of the attractive evolutionary properties of potential games. Sensible strategy revisions increase potential and move the population state towards a maximizer of the potential function.

Condition (3) also implies an economically more meaningful characterization of potential games. Let i, j ∈ S_n. The marginal change in the payoff of i due to a change in x_j, ∂F_i(x)/∂x_j, is a measure of the externality imposed by strategy j users on the payoff of strategy i users. Clearly, if F is a potential game, then ∂F_i(x)/∂x_j = ∂F_j(x)/∂x_i. We may, therefore, interpret potential games equivalently as population games that generate symmetric externalities.

The following proposition establishes that an aggregative game of the form (2) is a potential game. We, therefore, call such games aggregative potential games.

Proposition 3.2 Consider an aggregative population game F : R_+^{mn+1} → R^{mn+1} defined by the payoff function (2). Such a game is a potential game with potential function f : R_+^{mn+1} → R defined as

f(x) = ∫_0^{a(x)} β(z) dz − ∑_{i∈S_n} c(i)x_i.   (4)

Proof. Differentiating (4), we get ∂f(x)/∂x_i = iβ(a(x)) − c(i), which is the payoff function (2). Therefore, ∇f(x) = F(x).
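Condition (3) can be sanity-checked numerically by comparing finite differences of f with the payoff vector. The sketch below does this for the same placeholder primitives as before (β(z) = 10 − z, c(i) = 1.5i²); it is only an illustration of Proposition 3.2, not part of its proof.

```python
import numpy as np

n, m = 4, 2
S = np.arange(0, m * n + 1) / n

def potential(x):
    # f(x) = int_0^{a(x)} (10 - z) dz - sum_i c(i) x_i, as in (4), with c(i) = 1.5 i^2
    a = S @ x
    return 10.0 * a - a**2 / 2.0 - (1.5 * S**2) @ x

def payoffs(x):
    # F_i(x) = i * (10 - a(x)) - 1.5 i^2, as in (2)
    return S * (10.0 - S @ x) - 1.5 * S**2

x = np.random.default_rng(0).dirichlet(np.ones(len(S)))   # an arbitrary population state
h = 1e-6
grad = np.array([(potential(x + h * e) - potential(x)) / h for e in np.eye(len(S))])
print(np.max(np.abs(grad - payoffs(x))))   # of order h: the gradient of f matches F(x)
```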

In our analysis in the following sections, we distinguish between two types of aggregative potential games depending on whether β is decreasing or increasing. This distinction is significant because the slope of β determines whether payoffs exhibit negative or positive externalities. To see this, note that if the payoff function is (2), then ∂F_i(x)/∂x_j = ijβ′(a(x)), which is negative if β′(a(x)) < 0 and positive if β′(a(x)) > 0. We now present three examples of aggregative potential games. The first two exhibit negative externalities while the third has positive externalities.

Example 3.3 (Tragedy of the Commons): Let i ∈ S_n be a specific level of input that an agent chooses for use in a common pool resource. The population state x determines aggregate input level a(x), which then generates total output π(a(x)) through a standard increasing and strictly concave production function π : R_+ → R_+. The average product is AP(a(x)) = π(a(x))/a(x) and the marginal product is MP(a(x)) = π′(a(x)). Since the resource is commonly owned, the payoff to i ∈ S_n is

F_i(x) = i·π(a(x))/a(x) − c(i) = i·AP(a(x)) − c(i),   (5)

where c(i) is the cost of using input level i. This payoff function is of the form (2) with aggregate benefit function β(z) = AP(z) and potential function

f(x) = ∫_0^{a(x)} AP(z) dz − ∑_{i∈S_n} c(i)x_i.   (6)

The strict concavity of π implies that AP(z) is strictly decreasing in z. Hence, (5) exhibits negative externalities.§

Example 3.4 (Cournot Competition): Each firm in a population chooses an output level from the set Sn. Let p : R+ → R+ be a decreasing inverse demand function of aggregate output a(x). Then, the payoff obtained by a firm using strategy i is the profit level

Fi(x) = ip(a(x)) − c(i), (7) where c(i) is the cost of producing output i. This is an aggregative potential game with aggregate benefit function β(a(x)) = p(a(x)) and potential function

f(x) = ∫_0^{a(x)} p(z) dz − ∑_{i∈S_n} c(i)x_i.   (8)

As p is a strictly decreasing function, (7) exhibits negative externalities. Sandholm (2010) provides the economic interpretation of this potential function. It is the total surplus derived by consumers and producers. Even though we do not model consumers as active agents here, we can consider the first term in the right hand side of (8) as consumer surplus plus the total revenue obtained by producers. Once we subtract the total cost, ∑_{i∈S_n} c(i)x_i, incurred by producers, we are left with consumer surplus plus producer surplus.§

Example 3.5 (Search with Positive Externalities): Sandholm (2010) presents a model of macroeconomic spillovers in which agents choose a search level from Sn. The aggregate level of strategy, a(x), in this case is the total search effort. It is natural to assume that payoffs are increasing in both individual search effort and aggregate search effort. We assume that payoffs take the form

F_i(x) = iβ(a(x)) − c(i),

where β is a strictly concave and increasing aggregate benefit function, and c is a convex and increasing cost function. With β′ > 0, there are positive externalities.§

Each of these three examples extends the corresponding finite-player versions considered in Vega-Redondo (1997) or Alós-Ferrer and Ania (2005) to our scenario of a continuum of agents. In addition, Alós-Ferrer and Ania (2005) also consider other examples of finite player aggregative games like a rent-seeking model and a minimum effort model using more generalized notions of the aggregate strategy level. These other examples are, however, not readily amenable to interpretation as population games.

4 Nash Equilibria of Aggregative Potential Games

4.1 Quasi–Potential Function

Sandholm (2001) establishes that local maximizers of the potential function f in X are Nash equilibria of the potential game F.6 The characterization is particularly strong if f is a concave function. In that case, all Nash equilibria of the potential game are maximizers of the potential function. We now show that if F is an aggregative potential game, then we can use these results from Sandholm (2001) to derive a simpler characterization of Nash equilibria of F, at least with a high degree of approximation.

To obtain this characterization, we introduce the function g : [0, m] → R defined as

g(α) = ∫_0^α β(z) dz − c(α).   (9)

We establish certain relationships between this function g and the potential function f. Recall our earlier assumption (at the end of Section 2) that c is a strictly increasing and convex function with c(0) = 0.7 The relationship between g and f depends upon whether c is linear or strictly convex.

First, consider the case where c is linear and of the form c(i) = ki, k > 0. Let x ∈ X be such that a(x) = α. Then, ∑_{i∈S_n} c(i)x_i = ka(x) = kα = c(α). Therefore, for a linear c, f(x) = g(α) if a(x) = α. This relationship proves useful to us when we characterize Nash equilibria of an aggregative potential game with a linear cost function. For future reference, we also note that due to the linearity of the function a(x), the set {x ∈ X : a(x) = α} is a convex set.

Now consider the case where c is strictly convex. The relationship between f and g now depends upon whether the population state x is monomorphic or polymorphic. For a monomorphic state e_i, a(e_i) = i and f(e_i) = g(i). For a polymorphic state x with a(x) = α, the strict convexity of c implies c(α) = c(∑_{i∈S_n} i x_i) < ∑_{i∈S_n} c(i)x_i. Therefore, in this case, g(α) > f(x). Hence, for a strictly convex c, if a(x) = α, then g(α) ≥ f(x) with equality only if x is the monomorphic state e_α.

Due to such relationships between the potential function and g, we interpret g as a one-dimensional analogue of f and call it the quasi-potential function. To obtain some intuition of the role of the quasi-potential function, consider a linear cost function c(i) = ki and note that g′(α) = β(α) − c′(α) = β(α) − k. Consider a population state x such that a(x) = α. Let i, j ∈ S_n such that j > i. A strategy change from i to j at state x is profitable if F_j(x) > F_i(x) which, given (2), reduces to β(α) > k. But this is precisely the condition for g to be increasing. Hence, if g is increasing, agents have an incentive to move to a higher strategy, which then increases the aggregate strategy level. This preliminary argument, therefore, suggests that evolutionary forces would tend to move the aggregate strategy value towards a maximizer of the quasi-potential function, in a

6The converse is not true. It is possible for Nash equilibria to be local minimizers of the potential function. For example, completely mixed equilibria of pure coordination games minimize the corresponding potential function. 7Also note that in defining g, we use the fact that c has domain [0, m]. See also footnote 4.

manner analogous to directing the population state towards a maximizer of the potential function (see the paragraph after (3)). But a maximizer of the potential function is a Nash equilibrium (Sandholm, 2001). Hence, a maximizer of the quasi-potential function should intuitively correspond to the aggregate strategy level at a Nash equilibrium of the underlying aggregative potential game. This is what our paper will show.

We now turn to the formal characterization of Nash equilibria of aggregative potential games using the quasi-potential function. While the formal analysis requires us to consider various cases depending upon whether c is linear or strictly convex and whether β is decreasing or increasing, one can obtain a preview of the results by looking at perhaps the simplest case: the one where g has a unique maximizer α∗ ∈ S_n. In that case, irrespective of the shape of g, e_{α∗} is a global maximizer of f and, hence, is a Nash equilibrium of F. One way to see this is that in that case, for all α ∈ [0, m] and x such that a(x) = α, we have f(e_{α∗}) = g(α∗) ≥ g(α) ≥ f(x), where the final inequality is due to the convexity of c. We may, therefore, identify a Nash equilibrium of F by relating the maximizer of g to a maximizer of f. This is the general strategy we adopt in most of our results below. Further details like the uniqueness of Nash equilibrium then depend upon the properties of β and c.
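The one-dimensional computation suggested by this argument is straightforward to carry out. The sketch below evaluates g on a grid over [0, m] for the placeholder primitives β(z) = 10 − z and c(α) = 1.5α² (so that g′(α) = (10 − α) − 3α) and reads off the maximizer; these primitives are illustrative and not drawn from the paper.

```python
import numpy as np

m = 5.0

def g(alpha):
    # Quasi-potential g(alpha) = int_0^alpha (10 - z) dz - 1.5 alpha^2, as in (9)
    return 10.0 * alpha - alpha**2 / 2.0 - 1.5 * alpha**2

grid = np.linspace(0.0, m, 100_001)
alpha_star = grid[np.argmax(g(grid))]
print(alpha_star)    # approx 2.5, the root of g'(alpha) = (10 - alpha) - 3*alpha = 0
```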

4.2 Negative Externalities

Let β′(z) < 0 so that externalities in the aggregative potential game F are negative. We state the following lemma. The proof is in Appendix A.1.

Lemma 4.1 Let F be an aggregative potential game with potential function f as defined in (4) and quasi-potential function g as defined in (9). If the aggregate benefit function β is strictly decreasing, then f is a concave function but not strictly concave. Moreover, in this case, g is strictly concave with a unique maximizer α∗ ∈ [0, m].

The concavity of the potential function f implies that F has either a unique Nash equilibrium or a convex set of equilibria, which coincides with the set of maximizers of f (Corollary 3.1.4; Sandholm, 2010). Proposition 4.2 below characterizes Nash equilibria of F when the cost function is strictly convex. We show that in this case, F has a unique Nash equilibrium identifiable with the maximizer of the quasi-potential function g. It is important to note that uniqueness of equilibrium follows despite f not being strictly concave. In fact, had f been strictly concave, uniqueness would be a trivial consequence of existing results on potential games in Sandholm (2001). In stating the proposition, we use the notations ⌊b⌋ and ⌈b⌉ to denote respectively the largest integer not greater than b and the smallest integer not less than b. The proof of the result is in Appendix A.1.

Proposition 4.2 Consider the aggregative potential game F of the form (2) with potential function (4). Let the aggregate benefit function β be strictly decreasing and the cost function c be strictly increasing and strictly convex. Let α∗ ∈ [0, m] be the unique maximizer of the quasi-potential function g. Then,

1. If α∗ ∈ S_n, the unique Nash equilibrium of F is e_{α∗}.

2. If α∗ ∉ S_n, F has a unique Nash equilibrium. In that equilibrium, at most two strategies are in use, ⌊nα∗⌋/n ∈ S_n and ⌈nα∗⌉/n ∈ S_n.

By Lemma 4.1, g has a unique maximizer α∗. Part 1 of Proposition 4.2 uses the relationship between g and f to show that f is uniquely globally maximized at the strategy α∗ if α∗ ∈ S_n. The intuition behind part 2 is that the maximizer of f should have support in {⌊nα∗⌋/n, ⌈nα∗⌉/n}, i.e. support as close to α∗ as possible, in order to minimize the strictly convex cost of attaining an aggregate strategy level close to α∗ (a numerical sketch of this two-strategy construction is given below). We note that this maximizer of f, and hence the Nash equilibrium, need not have aggregate strategy level equal to α∗. This is obvious if the equilibrium is monomorphic. In case of a polymorphic equilibrium, the aggregate strategy level depends upon which aggregate value in [⌊nα∗⌋/n, ⌈nα∗⌉/n] equalizes the payoffs of these two strategies, and that need not be equal to α∗. We also note that there does not seem to be any readily apparent way to determine whether that equilibrium will be monomorphic or polymorphic. The proposition, however, makes it clear that for n large enough, α∗ is a close approximation of the aggregate strategy level at the unique Nash equilibrium even if α∗ ∉ S_n.

The uniqueness of equilibrium in Proposition 4.2 is a consequence of the convexity of the cost function c. This is clear from Proposition 4.3 below where we consider a linear cost function instead of a strictly convex one. We then have a convex set of equilibria, albeit all having the same aggregate level of strategy use.8 The proof of the proposition is in Appendix A.1.
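The following sketch illustrates the two-strategy construction numerically. It uses hypothetical primitives (β(a) = 10 − a and c(i) = 1.5i², so α∗ = 2.5) with n = 3, in which case α∗ ∉ S_n, and solves the payoff-equalization condition for the aggregate level and the weights. This is a heuristic illustration of the equilibrium described in Proposition 4.2(2), not the construction used in the proof.

```python
import numpy as np
from math import floor, ceil

n, m = 3, 5
beta = lambda a: 10.0 - a                 # decreasing benefit (placeholder)
beta_inv = lambda b: 10.0 - b             # inverse of beta, used to solve beta(a) = b
c = lambda i: 1.5 * i**2                  # strictly convex cost (placeholder), c'(i) = 3i

alpha_star = 2.5                          # root of beta(alpha) = c'(alpha): 10 - alpha = 3*alpha
lo, hi = floor(n * alpha_star) / n, ceil(n * alpha_star) / n   # adjacent strategies in S_n

if lo == hi:
    x_lo, x_hi, a = 1.0, 0.0, alpha_star  # alpha* in S_n: monomorphic equilibrium
else:
    # Equal payoffs, lo*beta(a) - c(lo) = hi*beta(a) - c(hi), pin down the aggregate a ...
    a = beta_inv((c(hi) - c(lo)) / (hi - lo))
    # ... and a = lo*x_lo + hi*x_hi with x_lo + x_hi = 1 pins down the weights.
    x_hi = float(np.clip((a - lo) / (hi - lo), 0.0, 1.0))   # clipping covers boundary cases
    x_lo = 1.0 - x_hi

print(lo, hi, x_lo, x_hi, a)   # 2.333..., 2.666..., 0.5, 0.5, aggregate level 2.5
```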

Proposition 4.3 Let F be a potential game of the form (2) with a linear cost function of the form c(i) = ki, k > 0. Let α∗ be the maximizer of the quasi–potential function g. Then, a population state x∗ ∈ X is a Nash equilibrium of F if and only if a(x∗) = α∗. In particular, the set of Nash equilibria is convex.

We note that the only assumption that we have made about the aggregate benefit function in Propositions 4.2 and 4.3 is that the function is decreasing. However, for the results to have some substance, it is also reasonable to assume that β(0) > 0. Otherwise, a decreasing β function would imply that α∗ = 0 and 0 is the dominant strategy.

We apply Propositions 4.2 and 4.3 to Examples 3.3 and 3.4. In the tragedy of the commons, α∗ is characterized by AP(α∗) = c′(α∗). Hence, in equilibrium, the aggregate input level in the population makes the average product equal to or close to marginal cost, at least when n is sufficiently large. In the model of Cournot competition, equilibrium aggregate output by producers is equal or close to α∗, which is characterized by p(α∗) = c′(α∗). The equilibrium aggregate output, therefore,

8As an example of a game with multiple equilibria, consider a tragedy of the commons model with production function π(z) = 5√z, strategy set S_n = {0, 1, 2, 3, 4, 5} and linear cost function c(i) = 3i. The average product, and so the aggregate benefit function, is AP(z) = 5/√z. Hence, the resulting quasi-potential function is g(α) = ∫_0^α AP(z)dz − c(α) = 10√α − 3α. This function is maximized at α∗ = 25/9. The linearity of the cost function implies that if a(x) = α, then f(x) = g(α). Hence, the potential function f is maximized at any state x∗ such that a(x∗) = α∗ = 25/9. The set of such states is convex which, by Proposition 4.3, coincides with the set of Nash equilibria.

tends to equate price to marginal cost and, thereby, approximate the perfectly competitive solution or the Walrasian equilibrium.9
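A quick numerical check of the commons example in footnote 8 (π(z) = 5√z, linear cost c(i) = 3i): the sketch below maximizes g(α) = 10√α − 3α on a grid and confirms that the average product at the maximizer equals the marginal cost of 3.

```python
import numpy as np

# Commons example from footnote 8: AP(z) = 5/sqrt(z), linear cost c(i) = 3i.
g = lambda a: 10.0 * np.sqrt(a) - 3.0 * a       # quasi-potential from footnote 8

grid = np.linspace(1e-12, 5.0, 1_000_001)
alpha_star = grid[np.argmax(g(grid))]
print(alpha_star, 25.0 / 9.0)                   # both approx 2.778
print(5.0 / np.sqrt(alpha_star))                # AP(alpha*) approx 3, the marginal cost
```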

4.3 Positive Externalities

We now consider the case where β′(z) > 0 so that externalities are positive. We seek to relate the set of Nash equilibria of F to the maximizers of the corresponding quasi-potential function g. In this case, however, our results are not as precise or elegant as those under negative externalities. This is because with positive externalities, the potential function f is convex. This follows as a corollary to Lemma 4.1. Hence, there may be Nash equilibria of F that do not maximize f. Therefore, any link with the maximizers of g can achieve only a partial characterization of the set of Nash equilibria of F. Moreover, with β′(z) > 0, it is not necessary that g has a unique maximizer.10 Hence, we need to impose additional, relatively stronger conditions on g to make the characterization of Nash equilibria more interesting. With this in mind, we consider two types of quasi-potential function; one with a unique maximizer and the other with a unique minimizer.

We first consider the case where g is strictly quasiconcave such that its unique maximizer α∗ > 0. For an example, consider β(α) = 2α^{0.5} and c(α) = α². For m ≥ 1, the resulting quasi-potential function has maximizer 1. We then arrive at the following partial characterization of Nash equilibria of a game F with such a quasi-potential function. The proof is in Appendix A.1.

Proposition 4.4 Consider the aggregative potential game F of the form (2) with potential function (4). Let the aggregate benefit function β be strictly increasing with β(0) ≥ 0. Further, suppose that β and the cost function c are such that the quasi-potential function g is strictly quasiconcave with unique maximizer α∗ ∈ (0, m]. Then,

1. If β(0) = 0, then e0 is a Nash equilibrium of F .

2. Every local maximizer of the potential function f is a Nash equilibrium of F. Specifically, if α∗ ∈ S_n, then e_{α∗} is the global maximizer of f and, hence, is a Nash equilibrium of F.

3. If c is linear, α∗ = m and, hence, e_m is the global maximizer of f. Furthermore, e_m is the unique Nash equilibrium of F.

4. If c is strictly convex and α∗ ∈ S_n, then e_{α∗} is the global maximizer of f and, hence, is a Nash equilibrium of F. If α∗ ∉ S_n, either e_{⌊nα∗⌋/n} or e_{⌈nα∗⌉/n} is the global maximizer of f. Therefore, at least one of these two states is a Nash equilibrium of F. For n large, the aggregate level of strategy at every monomorphic equilibrium except e_0 is close to α∗.

9The large population model of Cournot competition is, therefore, actually a model of perfect competition. Formally, this follows from the fact that each producer has zero weight in the whole population. See also the related discussion in the Introduction and comparison with results in Vega-Redondo (1997) and Alós-Ferrer and Ania (2005).
10For example, β(z) = (3/2)√z and c(α) = α^{3/2}. In that case, g(α) = 0 for all α, and there is no unique maximizer.

The assumptions on g imply that it must be increasing in the immediate neighbourhood of 0.11 The first part of the proposition establishes e_0 as a strict equilibrium of F if β(0) = 0. In the second part, the fact that every local maximizer of the potential function is a Nash equilibrium follows directly from Sandholm (2001). We then identify one particular maximizer, the global maximizer, when α∗ ∈ S_n. This is e_{α∗} which is, therefore, also a Nash equilibrium. For a linear c, α∗ = m ∈ S_n and, therefore, e_m is a Nash equilibrium (by part 2). Indeed, this is the unique equilibrium.12 For a strictly convex c, part 4 characterizes the global maximizer of f and hence, one monomorphic equilibrium of F, by using α∗. It is possible that for a strictly convex c, there are also other monomorphic Nash equilibria corresponding to local maximizers of the convex potential function. But it is not apparent how we can characterize them completely by using g. However, part 4 of Proposition 4.4 does show that for n large, only such monomorphic states (except, possibly, e_0) in which the aggregate strategy level is close to α∗ survive as Nash equilibria. The assumption that β(α) > 0 for α > 0 proves useful in establishing this result. From a dynamic standpoint, this is sufficient to characterize population behavior as, typically, only monomorphic equilibria can maximize a convex potential function and most evolutionary dynamics converge to a maximizer of the potential function.13

Interior minimizers of the potential function in Proposition 4.4 would also typically be Nash equilibria. But again, there is no apparent way to characterize such equilibria using the quasi-potential function.14 Such equilibria are also not significant from an evolutionary point of view as dynamics will diverge away from them. Nevertheless, in the second type of quasi-potential function that we consider, one in which g is strictly convex with a minimizer in (0, m), we can identify Nash equilibria that minimize the potential function. As an example, consider β(α) = 1 + √α, c(α) = 2α and m > 1. The resulting quasi-potential function has minimizer 1. We arrive at the following characterization of Nash equilibria in this case. The proof is in Appendix A.1.

Proposition 4.5 Consider the aggregative potential game F of the form (2) with potential function (4). Let the aggregate benefit function β be strictly increasing with β(0) ≥ 0. Further, suppose that β and the cost function c are such that the quasi-potential function g is strictly convex with unique minimizer αˆ ∈ (0, m). Then,

1. Every local maximizer of the potential function f is a Nash equilibrium of F. Specifically, e_0

and e_m are local maximizers of f and are, therefore, Nash equilibria of F.

11The converse of Proposition 4.4(1) does not hold if S_n is not sufficiently dense. For example, consider F with S_n = {0, 1}, β(α) = (1/2)(1 + α) and c(i) = i². Then, e_0 is a Nash equilibrium as F_0(e_0) = 0 > F_1(e_0) = −1/2.
12We note that if c is linear, then a necessary (but not sufficient) condition for g to be as assumed in Proposition 4.4 is β(0) > 0. If β(0) = 0 and c is linear, then g′(0) < 0 and we will be in the case considered in Proposition 4.5 below.
13We refer the reader to Section 6 for a more detailed discussion on how evolutionary dynamics behave in potential games.
14Monomorphic states that minimize the potential function in Proposition 4.4 need not be Nash equilibria. For example, consider the aggregative potential game F with β(α) = (α + 1)^{0.5}, c(α) = α² and S_n = {0, 0.25, 0.5, 0.75, 1}. The resulting potential function is globally minimized at the state e_0, which corresponds to α = 0 that minimizes the quasi-potential function. But, e_0 is not a Nash equilibrium of F as F_{0.5}(e_0) = 0.25 > F_0(e_0) = 0.

2. If the cost function c is linear, then e_0 and e_m are the only local maximizers of f. Further, any state x such that a(x) = α̂ is a Nash equilibrium of F.

3. If c is strictly convex and α̂ ∈ S_n, then e_{α̂} is a Nash equilibrium of F. If α̂ ∉ S_n, then there exists a Nash equilibrium with support in {⌊nα̂⌋/n, ⌈nα̂⌉/n}.

The assumptions on g imply that it must be strictly decreasing in the immediate neighborhood of 0 and strictly increasing in the immediate neighborhood of m. Hence, 0 and m are the local maximizers of g. This allows us to identify e_0 and e_m as local maximizers of f and, hence, as Nash equilibria of F. In part 2, these are the only Nash equilibria that locally maximize the potential function. Apart from these, there exists a convex set of equilibria, all of which have aggregate strategy level α̂. This is obviously the set of global minimizers of f (because with linear c, f(x) = g(α), if a(x) = α).15 For a strictly convex c, part 3 identifies Nash equilibria in addition to e_0 and e_m. These Nash equilibria have aggregate strategy level close to α̂. We should note, however, that these are boundary Nash equilibria and, therefore, may not be minimizers of f, even though α̂ itself is the minimizer of g.16
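Both parameterizations mentioned in this subsection can be checked numerically. The sketch below evaluates the quasi-potential for β(α) = 2√α with c(α) = α² (strictly quasiconcave, maximizer 1) and for β(α) = 1 + √α with c(α) = 2α (strictly convex, minimizer 1); the choice m = 2 is only for illustration.

```python
import numpy as np

grid = np.linspace(0.0, 2.0, 200_001)            # take m = 2 for illustration

# Quasiconcave case: beta(a) = 2*sqrt(a), c(a) = a^2, so g(a) = (4/3) a^1.5 - a^2.
g_concave = (4.0 / 3.0) * grid**1.5 - grid**2
print(grid[np.argmax(g_concave)])                # approx 1, the maximizer in Proposition 4.4

# Strictly convex case: beta(a) = 1 + sqrt(a), c(a) = 2a, so g(a) = (2/3) a^1.5 - a.
g_convex = (2.0 / 3.0) * grid**1.5 - grid
print(grid[np.argmin(g_convex)])                 # approx 1, the minimizer in Proposition 4.5
```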

5 Inefficiency and Efficiency of Nash Equilibria

Standard microeconomic theory implies that a Nash equilibrium is inefficient in the presence of externalities, whether positive or negative. To examine this issue formally in the context of aggregative potential games, we define F̄(x) = ∑_{i∈S_n} x_i F_i(x) as the aggregate (or equivalently, average) payoff in the population under population state x. For the aggregative potential game with payoff function (2),

F̄(x) = ∑_{i∈S_n} x_i F_i(x) = a(x)β(a(x)) − ∑_{i∈S_n} c(i)x_i.   (10)

A population state x∗∗ that globally maximizes F̄(x) is an efficient population state.

Sandholm (2001) establishes certain results on local efficiency (states that locally maximize the aggregate payoff) in potential games on the basis of the homogeneity of the payoff function

F_i(x). Here, we introduce a simpler way to identify (globally) efficient states in an aggregative potential game and compare such states to Nash equilibria of the game. This method is similar to the characterization of Nash equilibria in such games and is based on the analysis of the one-dimensional analogue of the aggregate payoff function. We denote this function as ḡ : [0, m] → R and define it as

ḡ(α) = αβ(α) − c(α).   (11)

15Indeed, the similarity of the proof of this part of the proposition and the if part of Proposition 4.3 leads to a more general conclusion. If the cost function is linear and g has an interior maximizer or minimizer, say α, then there exists a convex set of Nash equilibria of F, with each such equilibrium having aggregate strategy level α. This is irrespective of the shape of β.
16Proposition 4.5 considers the case where a strictly convex g has minimizer α̂ ∈ (0, m). If g is strictly convex with minimizer 0, then that case is covered by Proposition 4.4. If g is strictly convex with minimizer m, then a variant of Proposition 4.4 holds. In that case, e_0 is a Nash equilibrium but e_m is not necessarily so. For example, if β(α) = 1 + √α, c(α) = 2α and m < 1, then the minimizer of g is m. In that case, F_m(e_m) = m(1 + √m) − 2m < 0 = F_0(e_m).

As with the quasi-potential function, we note that if c is linear and a(x) = α, then ḡ(α) = F̄(x).

If c is strictly convex and a(x) = α, then ḡ(α) ≥ F̄(x), with equality holding only if x = e_α. Let us first consider a game F with a decreasing β function. For such games, αβ(α) will often be a strictly concave function on R_+, although that may not always be the case. For example, in the tragedy of the commons (Example 3.3), αβ(α) = π(α), the total output when aggregate input is α. In the Cournot competition model (Example 3.4), αβ(α) is the total revenue when aggregate output by producers is α. The following lemma then establishes F̄ as a concave function. The proof of the lemma is in Appendix A.2.

Lemma 5.1 Let F be an aggregative potential game of the form (2) with a strictly decreasing aggregate benefit function β. Suppose that αβ(α) is a strictly concave function on R_+. Then F̄ defined in (10) is a concave function but not strictly concave. Hence, either F̄ has a unique maximizer in X or its set of maximizers is convex.

We also establish certain properties of ḡ(α) in the following lemma under the same assumptions that β is strictly decreasing and αβ(α) is strictly concave. The proof is in Appendix A.2.

Lemma 5.2 Consider ḡ defined in (11). Let β be strictly decreasing and αβ(α) be a strictly concave function on R_+. Then, ḡ(α) is strictly concave on R_+ with a unique maximizer α∗∗ > 0. Moreover, α∗∗ ≤ α∗, where α∗ is the maximizer of the quasi-potential function (9), with equality holding only if α∗∗ = m.

We now prove the analogue of Propositions 4.2 and 4.3 and relate x∗∗ to α∗∗. If c is strictly convex, the efficient state is unique with aggregate strategy level close to α∗∗. If c is linear, there exists a convex set of efficient states such that a(x∗∗) = α∗∗. Lemma 5.2 then allows us to conclude that a(x∗∗) ≤ a(x∗), at least for n sufficiently large. Thus, under negative externalities, the socially efficient aggregate strategy level in an aggregative potential game is less than the equilibrium aggregate level. We state these conclusions formally in the following proposition. The proof is in Appendix A.2.

Proposition 5.3 Let F be the aggregative potential game (2) with a strictly decreasing aggregate benefit function β such that αβ(α) is strictly concave on [0, m]. Denote by α∗∗ the unique maximizer of ḡ defined in (11).

1. Let the cost function c be strictly convex.

(a) If α∗∗ ∈ S_n, then the unique efficient state of F is x∗∗ = e_{α∗∗}.
(b) If α∗∗ ∉ S_n, then F has a unique efficient state x∗∗. This efficient state has support in the set {⌊nα∗∗⌋/n, ⌈nα∗∗⌉/n}.
Therefore, for n sufficiently large, a(x∗∗) is arbitrarily close to α∗∗, and hence, a(x∗∗) ≤ a(x∗), with equality only if x∗∗ = e_m. Here, a(x∗) is the aggregate strategy level at the unique Nash equilibrium of F.

2. Let c be linear. Then x∗∗ ∈ X is an efficient state of F if and only if a(x∗∗) = α∗∗. Therefore, the set of efficient states is convex. Hence, for all n, a(x∗∗) ≤ a(x∗), with equality only if x∗∗ = e_m. Here, a(x∗) is the identical aggregate strategy level at every Nash equilibrium x∗ of F.

We apply Proposition 5.3 to Examples 3.3 and 3.4. In the tragedy of the commons, F̄(x) = π(a(x)) − ∑_{i∈S_n} c(i)x_i, the net output at state x. Therefore, ḡ(α) = π(α) − c(α), which we can rewrite as

ḡ(α) = ∫_0^α MP(z) dz − c(α).

Hence, α∗∗ is characterized by MP(α∗∗) = c′(α∗∗). Therefore, the efficient state x∗∗ seeks to make the marginal product of the aggregate strategy level a(x∗∗) equal to the marginal cost. On the other hand, the Nash equilibrium tends to equate the average product to the marginal cost. Intuitively, due to the concavity of the production function π, MP(α) < AP(α). Therefore, the efficient aggregate level of strategy is less than the equilibrium aggregate level. In the model of Cournot competition (Example 3.4), ḡ(α) = TR(α) − c(α), where TR(α) is the total revenue obtained from aggregate output α. Therefore, the efficient aggregate output level a(x∗∗) is the monopoly output level that equates marginal revenue to marginal cost. On the other hand, the Nash equilibrium tends to equate the average revenue, or the price, to the marginal cost.17

We now characterize the efficient population state when β is strictly increasing. In this case, even if β is strictly concave, αβ(α) is often strictly convex. A corollary to Lemma 5.1 then implies that F̄ is convex on X. Subject to assumptions on g we make in Propositions 4.4 and 4.5, we show that α∗∗ ≥ α∗. Hence, typically, the efficient aggregate level of strategy is higher than the aggregate strategy level in a Nash equilibrium. We state these conclusions formally in the following proposition. The proof is in Appendix A.2.

Proposition 5.4 Let F be the aggregative potential game (2) with a strictly increasing β function such that β(0) ≥ 0. Suppose αβ(α) is strictly convex on [0, m]. Denote by x∗∗, α∗ and α∗∗ the global maximizers of F̄, g and ḡ, respectively.

1. Suppose the quasi-potential function g is strictly quasiconcave, with maximizer α∗ ∈ (0, m], as in Proposition 4.4. Then, α∗∗ ≥ α∗. Further, if α∗∗ ∈ S_n, x∗∗ = e_{α∗∗}. If α∗∗ ∉ S_n, then either x∗∗ = e_{⌊nα∗∗⌋/n} or x∗∗ = e_{⌈nα∗∗⌉/n}. Hence, for n sufficiently large, a(x∗∗) is arbitrarily close to α∗∗. Therefore, if c is strictly convex and n is sufficiently large, then a(x∗∗) ≥ a(x∗) for all

17We note, however, that x∗∗ here is efficient only in the restricted sense of maximizing the aggregate profit of the population of producers, i.e. maximizing producer surplus. If, however, we also consider consumers as active agents, then the Nash equilibrium, which is the maximizer of the potential function (8), is indeed efficient in the wider sense of maximizing the sum of producer and consumer surplus. Under a perfectly discriminating monopolist, the Nash equilibrium aggregate quantity level would have maximized producer surplus. But that requires that the monopolist obtains a different price for every unit of output sold, i.e. price p(i) for the ith unit of output. In the competitive scenario that we have modeled, that is not possible as producers get the same price p(a(x)) from every unit of output.

monomorphic Nash equilibria x∗ of F, with equality holding only if e_m is a Nash equilibrium of F. If c is linear, then a(x∗∗) = a(x∗) = m.

2. Let g be strictly convex, with minimizer α̂ ∈ (0, m), as in Proposition 4.5. Then, α∗∗ = m. Therefore, x∗∗ = e_m. Hence, for all n, a(x∗∗) ≥ a(x∗), with equality holding only for the Nash equilibrium x∗ = e_m.

We note that in part 1 of this proposition, we restrict ourselves to the relationship between the socially efficient state and monomorphic equilibria. As noted earlier in the context of Proposition 4.4, this is because evolutionary dynamics typically converge to only such equilibria in potential games with convex potential functions.
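The comparisons in Propositions 5.3 and 5.4 can be illustrated numerically with examples used earlier: the commons primitives of footnote 8 (negative externalities) and β(α) = 2√α with c(α) = α² (positive externalities). The efficient commons value 25/36 quoted in the comment below is not stated in the paper; it follows from solving MP(α) = 5/(2√α) = 3 and is shown purely as an illustration.

```python
import numpy as np

grid = np.linspace(1e-12, 5.0, 1_000_001)

# Negative externalities (footnote 8): AP(z) = 5/sqrt(z), c(alpha) = 3*alpha.
g_neg    = 10.0 * np.sqrt(grid) - 3.0 * grid       # quasi-potential (9)
gbar_neg = 5.0 * np.sqrt(grid) - 3.0 * grid        # aggregate-payoff analogue (11)
print(grid[np.argmax(g_neg)], grid[np.argmax(gbar_neg)])   # 25/9 = 2.78 > 25/36 = 0.69

# Positive externalities: beta(alpha) = 2*sqrt(alpha), c(alpha) = alpha^2.
g_pos    = (4.0 / 3.0) * grid**1.5 - grid**2
gbar_pos = 2.0 * grid**1.5 - grid**2
print(grid[np.argmax(g_pos)], grid[np.argmax(gbar_pos)])   # 1 < 9/4 = 2.25
```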

5.1 Monomial β and a constant cost function

While we have generally assumed that the cost function c is increasing, one particular case in which the cost is constant deserves special mention. This is the case where β is a monomial of the form β(z) = z^r, r > −1, and the cost function is a constant, i.e. of the form c(i) = c ≥ 0.18 The resulting aggregative potential game has payoff function F_i(x) = i·a(x)^r − c. We note here that if r ∈ (−1, 0), then β(0) is not defined. We, therefore, define F_i(e_0) = ∞, for i > 0, and F_0(e_0) = −c, if r ∈ (−1, 0).

With these assumptions, the quasi-potential function, g(α) = ∫_0^α β(z)dz − c, is a strictly increasing function with maximizer m. A modification of our results in Section 4 then implies that e_m is a Nash equilibrium in F. If β′(z) < 0 (r ∈ (−1, 0)), e_m is the unique Nash equilibrium. If β′(z) > 0,19 e_0 is also a Nash equilibrium. The assumptions on β also imply that ḡ(α) = αβ(α) − c is a strictly increasing function with maximizer m. Results in Section 5 then imply that e_m is the efficient state. Hence, the set of efficient states either coincides with the set of Nash equilibria (if β′(z) < 0) or is a subset of the set of Nash equilibria (if β′(z) > 0).

We can provide an alternative interpretation of this conclusion based on the homogeneity of a potential game (Sandholm, 2001, 2010). Note that the aggregative potential game with a constant cost function, F_i(x) = iβ(a(x)) − c, is strategically equivalent to the aggregative potential game

F̂_i(x) = iβ(a(x)). If β is a monomial with exponent r > −1, then the game F̂ is positively homogeneous.20 Corollary 3.1.11 of Sandholm (2010) establishes that for a positively homogeneous potential game, the set of efficient states is a subset of the set of Nash equilibria; and the two sets coincide if the potential function is concave. Hence, if β′ < 0, so that the potential function is concave, the unique Nash equilibrium of F̂ (and, hence, F) is efficient. If β′ > 0, the efficient state is a Nash equilibrium.

18An example of such a game is the Cournot competition model with constant elasticity of demand function p(α) = α^{−1/ε}, ε > 1, and zero marginal cost. I thank one of the anonymous referees for suggesting this example.
19For β to be increasing, r > 0, which means β(0) = 0. Hence, F_i(e_0) = −c, for all i ∈ S_n.
20Positive homogeneity of F̂ means F̂_i(tx) = t^k F̂_i(x), k > −1. The term positive homogeneity arises from the fact that if a potential game is homogeneous of degree k > −1, then its potential function is homogeneous of degree l > 0. See Section 3.1.6, Sandholm (2010) for details.
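The monotonicity claims of this subsection are easy to verify numerically. The sketch below uses the constant-elasticity demand of footnote 18 with ε = 2 (so r = −1/2) and an arbitrary constant cost c = 1, which is an illustrative choice rather than a value from the paper.

```python
import numpy as np

# Monomial benefit beta(z) = z**r with r = -0.5 (footnote 18, epsilon = 2); constant cost c = 1.
r, c_const, m = -0.5, 1.0, 5.0
grid = np.linspace(1e-9, m, 500_001)

g    = grid**(r + 1) / (r + 1) - c_const     # quasi-potential: int_0^a z^r dz - c = 2*sqrt(a) - 1
gbar = grid**(r + 1) - c_const               # aggregate-payoff analogue: a*beta(a) - c = sqrt(a) - 1

print(np.all(np.diff(g) > 0), np.all(np.diff(gbar) > 0))    # True True: both strictly increasing
print(grid[np.argmax(g)], grid[np.argmax(gbar)])             # both maximized at m
```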

6 Discussion: Evolutionary Implications

Potential games have attractive evolutionary properties. A wide range of evolutionary dynamics converge to Nash equilibria, or to an approximation of Nash equilibria, in such games from all or almost all initial states. Sandholm (2001) establishes that if an evolutionary dynamic admits a unique solution from every initial condition in X, and satisfies the three properties of forward invariance, Nash stationarity and positive correlation, then the set of Nash equilibria of a potential game F is globally asymptotically stable. Examples of such dynamics are the Brown-von Neumann-Nash (BNN) dynamic (Hofbauer, 2000) and the Smith dynamic (Smith, 1984). The replicator dynamic (Taylor and Jonker, 1978) does not satisfy Nash stationarity, but it does converge to Nash equilibria in potential games from every state in the interior of X. The logit dynamic (Fudenberg and Levine, 1998) converges to logit equilibria, which is a perturbed version of Nash equilibria, from every initial condition in potential games. Applying these results to aggregative potential games, we conclude that all such dynamics converge to a social state that is, in general, different from the efficient social state.

Propositions 4.2 and 4.3 imply that under negative externalities, the aggregate social state must converge to a value equal to or close to α∗, the maximizer of the quasi-potential function, under all the dynamics we have mentioned in the previous paragraph. By Proposition 5.3, the equilibrium aggregate strategy level is typically more than the efficient aggregate level, the only exception being when the efficient state itself is e_m. Hence, under negative externalities, all well known evolutionary dynamics in aggregative potential games converge to an aggregate strategy level that is too high relative to the efficient level. In contrast, when externalities are positive, the resulting potential function is convex and, hence, maximized at monomorphic equilibria. Therefore, evolutionary dynamics converge to such monomorphic equilibria at which, by Propositions 4.4 and 4.5, the aggregate strategy level is either 0, α∗ or close to α∗, at least when n is large. On the other hand, by Proposition 5.4, the efficient aggregate state is typically larger than α∗. Hence, the population converges to an aggregate strategy level that is too low relative to the efficient level.

The evolutionary dynamics we have mentioned in this section are generated by a wide variety of behavioral rules.21 The fact that such a diversity of behavioral rules leads to a common inefficient outcome enables us to understand why the problem of social inefficiency is endemic in the presence of externalities. For example, these evolutionary results illustrate why a problem like the tragedy of the commons is so ubiquitous in such diverse contexts as environmental degradation and depletion of natural resources. The evolutionary explanation is particularly valid when the number of agents involved is large. If the population size is small and agents know one another, then it may be possible to overcome such inefficiency using more sophisticated, forward looking strategies that rely on punishment mechanisms, as noted, for example, by Ostrom (1990). In a large population in which agents are mostly anonymous, such mechanisms may be unfeasible. Instead, agents may

21See, for example, Lahkar and Sandholm (2008) for a more detailed exposition of behavioral rules, also called revision protocols, and the resulting evolutionary dynamics.

come to rely on simple and myopic behavioral norms which generate the evolutionary dynamics that converge to inefficient Nash equilibria.

We note one exception to our argument of externalities being the cause of evolutionary inefficiency in aggregative potential games. This is the case we have discussed in Section 5.1 where β is a monomial with an exponent r > −1. The argument in that section establishes that if c is constant, so that the game is positively homogeneous, efficient states are Nash equilibria. Sandholm (2001) establishes that in positively homogeneous games, efficient states are stable, at least locally, under dynamics that satisfy positive correlation. Therefore, in the context of games we discuss in Section 5.1, dynamics converge to efficient states if costs are constant despite the presence of negative or positive externalities. This means that any evolutionary inefficiency that arises when costs are variable cannot be attributed to externalities. Such inefficiency would purely be due to the variation in costs between different strategies.
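As a rough illustration of this convergence (and not a result from the paper), the sketch below runs a forward-Euler discretization of the replicator dynamic on the placeholder game used in earlier sketches, β(a) = 10 − a and c(i) = 1.5i². For these primitives the quasi-potential maximizer is α∗ = 2.5, while ḡ(α) = 10α − 2.5α² is maximized at 2, so the simulated aggregate settles above the efficient level.

```python
import numpy as np

# Replicator dynamic x_i' = x_i (F_i(x) - F_bar(x)) on the placeholder game
# beta(a) = 10 - a, c(i) = 1.5 i^2: quasi-potential maximizer alpha* = 2.5,
# efficient aggregate level 2 (maximizer of g_bar(a) = 10a - 2.5 a^2).
n, m = 10, 5
S = np.arange(0, m * n + 1) / n

def payoffs(x):
    a = S @ x
    return S * (10.0 - a) - 1.5 * S**2

x = np.random.default_rng(1).dirichlet(np.ones(len(S)))   # random interior initial state
dt = 0.01
for _ in range(200_000):                                   # forward-Euler steps
    F = payoffs(x)
    x = x + dt * x * (F - x @ F)
    x = np.clip(x, 0.0, None)
    x = x / x.sum()                                        # guard against numerical drift

print(S @ x)   # aggregate strategy level close to alpha* = 2.5, above the efficient level 2
```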

7 Extensions

7.1 Negative Strategies

In the aggregative potential games we have considered up to now, the strategy set S_n ⊂ R_+. This is because in models like the tragedy of the commons or Cournot competition, it is natural to assume that strategies take positive values. Formally, however, there is nothing in our method of analysis that requires us to confine ourselves to positive strategies. We can extend our analysis to include negative strategies as well.

As an example, let us consider an aggregative potential game F with the strategy set Sn = {−m, −(mn − 1)/n, · · · , −1/n, 0, 1/n, · · · , (mn − 1)/n, m}. A population state is now x ∈ R+^{2mn+1} such that Σ_{i∈Sn} xi = 1. Let the aggregate benefit function be decreasing and of the form β(a(x)) = m − a(x), where a(x) = Σ_{i∈Sn} i xi ∈ [−m, m]. A model like this can represent a situation in which there exists a scarce resource of initial value m. Agents either use up or contribute to the scarce resource, with positive strategies representing the extent of use of the resource, and negative strategies representing the extent of contribution. The aggregate strategy level a(x) then represents the net usage of the resource. If a(x) < 0, then agents contribute more than they use and, therefore, the value of the resource increases, i.e. β(a(x)) > m, the initial value.

The payoff function is, as before, Fi(x) = iβ(a(x)) − c(i). We note that if the strategy i < 0, then the gross benefit the agent receives, iβ(a(x)), is also negative. The potential function f and the quasi–potential function g have the same forms as in (4) and (9) respectively. We retain our usual assumptions on c, i.e. it is a smooth convex function with c(0) = 0. This leaves open the possibility that c is increasing on [−m, m] (so that negative strategies involve negative cost, maybe because contributing to the resource is altruistic and produces a “warm glow” effect), or that it is positive but decreasing on [−m, 0) and positive and increasing on (0, m] (for example, c(i) = i²). Either way, f remains concave and g remains strictly concave, so that we obtain the equilibrium characterization of Propositions 4.2–4.3. We may also characterize the efficient state of this model with the maximizer of the function ḡ, as in Proposition 5.3.

We should note, however, that while the methodological extension to negative strategies is relatively straightforward, the economic interpretation of the results, particularly the comparison between equilibrium aggregate and efficient aggregate states, is no longer clear–cut. This is because a decreasing β is no longer sufficient for negative externalities. Instead, with ∂Fi(x)/∂xj = ijβ′(a(x)), externalities can be either positive or negative depending upon the signs of i and j. Hence, a result like Lemma 5.2 (α∗∗ ≤ α∗), which we had interpreted in terms of negative externalities, may no longer hold when there are negative strategies. Therefore, in this case, we can no longer conclude that the equilibrium aggregate strategy level is necessarily higher than the efficient aggregate strategy level even if β′ < 0. A similar extension to negative strategies is possible with an increasing benefit function. Subject to the assumptions made in Propositions 4.4–4.5, we can obtain an analogous equilibrium characterization. Once again, however, we note the caveat that an increasing β function is no longer sufficient for positive externalities.
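As a small numerical companion to this subsection (a sketch only; the parameterization β(a) = m − a and c(i) = i² is hypothetical and not taken from the paper), the snippet below evaluates the quasi–potential g and the function ḡ on a grid over [−m, m]. It shows that the maximizers are computed exactly as before once negative strategies are allowed, although, as noted above, the comparison between the two should no longer be read as a statement about externalities.

    import numpy as np

    # Minimal sketch: the quasi-potential approach with negative strategies.
    # Hypothetical specification: beta(a) = m - a and c(i) = i**2, so c is
    # positive and decreasing on [-m, 0) and increasing on (0, m].
    m = 1.0
    alpha = np.linspace(-m, m, 2001)

    def g(a):                # quasi-potential: int_0^a beta(z)dz - c(a)
        return m * a - a ** 2 / 2 - a ** 2

    def g_bar(a):            # aggregate-payoff analogue: a*beta(a) - c(a)
        return a * (m - a) - a ** 2

    print("equilibrium aggregate (argmax g):    ", round(alpha[np.argmax(g(alpha))], 3))
    print("efficient aggregate   (argmax g_bar):", round(alpha[np.argmax(g_bar(alpha))], 3))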

7.2 Congestion Games and Multi-dimensional Aggregates

Another possible extension of our model is the introduction of multi-dimensional aggregates, as in Acemoglu and Jensen (2013). To motivate this extension, let us consider a congestion game with two parallel (non-intersecting) links in a network. We call these links A and B. Consider the strategy set Sn as introduced in Section 2.

We now interpret i ∈ Sn as representing the intensity of use of a particular link. Therefore, a strategy for a player now has two components: which link to use and the intensity with which to use that particular link. Hence, a typical strategy is of the form iK, where i ∈ Sn and K ∈ {A, B}. A population state is, therefore, of the form

x = (x0A, x(1/n)A, · · · , xmA, x0B, x(1/n)B, · · · , xmB) ∈ R+^{2(mn+1)} such that Σ_{K∈{A,B}} Σ_{i∈Sn} xiK = 1.

Here, xiK represents the proportion of agents who are using link K with intensity i. It is convenient to write the population state as x = (xA, xB), where xA = (x0A, x(1/n)A, · · · , xmA) and xB = (x0B, x(1/n)B, · · · , xmB). We now define two aggregates a(xA) ∈ [0, m] and a(xB) ∈ [0, m] as

a(xA) = Σ_{i∈Sn} i xiA   and   a(xB) = Σ_{i∈Sn} i xiB.

The first aggregate, a(xA), measures the intensity of use of link A while the second aggregate, a(xB), measures the intensity of usage of link B. Note that a(xA) + a(xB) ∈ [0, m]. We then define the congestion game F in which the payoff to strategy iK, i ∈ Sn, K ∈ {A, B}, is

FiK(x) = bK(i) − i cK(a(xK)).

This payoff function is different from (2) in that the benefit, bK, depends upon the strategy i, whereas the cost, cK, depends upon the aggregate intensity of usage of link K.22 This is a potential game with potential function

f(x) = Σ_{i∈Sn} bA(i) xiA − ∫_0^{a(xA)} cA(z) dz + Σ_{i∈Sn} bB(i) xiB − ∫_0^{a(xB)} cB(z) dz.

The quasi-potential function is g : [0, m] × [0, m] → R defined as

g(αA, αB) = bA(αA) − ∫_0^{αA} cA(z) dz + bB(αB) − ∫_0^{αB} cB(z) dz.

Since there are two aggregate variables, the quasi–potential function is two–dimensional here. Nevertheless, one can establish results analogous to those in Section 4. For example, suppose the benefit functions bA and bB are increasing and linear while the cost functions cA and cB are increasing and strictly convex. Then, g is strictly concave with a unique maximizer (αA∗, αB∗), subject to the conditions that αA∗, αB∗ ∈ [0, m] and αA∗ + αB∗ ≤ m. Further, the potential function f is a concave function. Any population state x such that a(xA) = αA∗ and a(xB) = αB∗ is a maximizer of f and, hence, is a Nash equilibrium of the congestion game F.
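To see how the two–dimensional quasi–potential can be used in practice, the following sketch (hypothetical functional forms, not from the paper) maximizes g(αA, αB) by grid search for linear benefits bK(i) = bK · i and strictly convex costs cK(z) = kK · z², imposing αA, αB ∈ [0, m] and αA + αB ≤ m.

    import numpy as np

    # Minimal sketch: maximizing the two-dimensional quasi-potential of the
    # two-link congestion game.  Assumed: linear benefits b_K(i) = bK*i and
    # strictly convex costs c_K(z) = kK*z**2 (all values hypothetical).
    m = 1.0
    bA, bB = 1.0, 0.8                     # slopes of the linear benefits
    kA, kB = 2.0, 1.0                     # convexity of the link costs

    def g(aA, aB):
        # g(aA, aB) = bA*aA - int_0^aA kA*z^2 dz + bB*aB - int_0^aB kB*z^2 dz
        return bA * aA - kA * aA ** 3 / 3 + bB * aB - kB * aB ** 3 / 3

    grid = np.linspace(0.0, m, 201)
    best, arg = -np.inf, (0.0, 0.0)
    for aA in grid:
        for aB in grid:
            if aA + aB <= m and g(aA, aB) > best:
                best, arg = g(aA, aB), (aA, aB)

    print("maximizer (alpha_A*, alpha_B*):", tuple(round(a, 3) for a in arg))

With these hypothetical values, the unconstrained maximizers of the two link terms would sum to more than m, so the constraint αA + αB ≤ m binds and the grid search returns a point on that boundary.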

A Appendix

A.1 Proofs of Section 4

Proof of Lemma 4.1: From (4), we have

∂²f(x)/∂xj∂xi = ∂Fi(x)/∂xj = ijβ′(a(x)).

Since a typical strategy in Sn is i = k/n for some k ∈ {0, 1, 2, · · · , mn}, the Hessian matrix of f is

D²f(x) = (β′(a(x))/n²) ×
  [ 0   0    0    ···  0     ]
  [ 0   1    2    ···  mn    ]
  [ 0   2    4    ···  2mn   ]
  [ ⋮   ⋮    ⋮         ⋮     ]
  [ 0   mn   2mn  ···  m²n²  ]

If β′(a(x)) < 0, this is a negative semidefinite matrix. Therefore, f is concave. To check that f is not strictly concave, take x, y ∈ X, x ≠ y, such that a(x) = a(y). In that case, f(λx + (1 − λ)y) = λf(x) + (1 − λ)f(y), for all λ ∈ [0, 1]. For the properties of g, note that g′(α) = β(α) − c′(α) and g′′(α) = β′(α) − c′′(α).

22 To obtain a congestion game of the type considered in Sandholm (2001) with two parallel links from FiK, take bK(i) = 0 and Sn = {1}.

Since β is strictly decreasing and c is convex, g′′(α) < 0. Concavity and the existence of a unique maximizer follow.
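The structure of the Hessian in this proof is easy to verify numerically: D²f(x) is the rank–one matrix (β′(a(x))/n²) k kᵀ with k = (0, 1, · · · , mn), so all of its eigenvalues are zero except one, whose sign is that of β′(a(x)). A minimal check with hypothetical values (not from the paper):

    import numpy as np

    # Minimal numerical check of the Hessian structure in Lemma 4.1:
    # D^2 f(x) = (beta'(a(x))/n^2) * k k^T with k = (0, 1, ..., mn).
    m, n = 1, 5
    beta_prime = -0.7                      # assumed beta'(a(x)) < 0
    k = np.arange(0, m * n + 1)            # strategies are i = k/n
    H = (beta_prime / n ** 2) * np.outer(k, k)

    eig = np.linalg.eigvalsh(H)
    # the only nonzero eigenvalue is negative; the rest are (numerically) zero
    print("largest eigenvalue:", round(eig.max(), 8))
    print("smallest eigenvalue:", round(eig.min(), 8))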

Proof of Proposition 4.2: Let f be the potential function of F . Since f is concave, all local maximizers of f are global maximizers. Therefore, it is sufficient to consider only global maximizers of f to identify Nash equilibria.

1. Let α∗ ∈ Sn. Due to the concavity of f (by Lemma 4.1), it is sufficient to show that eα∗ is the unique global maximizer of f. Consider x ≠ eα∗ such that a(x) = α. Then,

f(eα∗) = g(α∗) ≥ g(α) > f(x),

where the weak inequality holds with equality only if x is polymorphic with a(x) = α∗, and the strict inequality holds due to the strict convexity of c. Therefore, for all x ≠ eα∗, f(eα∗) > f(x).

2. To simplify notation, denote i = ⌊nα∗⌋/n and j = ⌈nα∗⌉/n. First, consider i. Since g is strictly concave and i < α∗, it is clear that for all k < i, g(k) < g(i). Suppose x is such that a(x) < i. Then, as in part 1 of this proof, if x is monomorphic, f(ei) > f(x). If x is polymorphic,

f(ei) = g(i) ≥ g(a(x)) > f(x),

where the strict inequality follows from the strict convexity of c. Therefore, f(ei) > f(x) for all x such that a(x) < i. A similar argument implies that if a(x) > j, then f(ej) > f(x). This follows from the fact that for all k > j, g(k) < g(j). Hence, if x is a global maximizer of f, it must be the case that a(x) ∈ [i, j].

Now consider x and x′ such that a(x) = a(x′) ∈ [i, j], x has support in {i, j} and there exists k ∈ Sn \ {i, j} in the support of x′. Then, the strict convexity of c implies Σ_l c(l) xl < Σ_l c(l) x′l. This then implies that f(x) > f(x′). We, therefore, conclude that if x is a global maximizer of f, and, hence, a Nash equilibrium of F, then its support must be limited to {i, j}.

To show uniqueness, suppose both ei and ej are Nash equilibria. If ei is a Nash equilibrium, then Fi(ei) ≥ Fj(ei) or iβ(i) − c(i) ≥ jβ(i) − c(j), which implies

(c(j) − c(i))/(j − i) ≥ β(i).    (12)

Similarly, if ej is a Nash equilibrium, Fj(ej) ≥ Fi(ej) or jβ(j) − c(j) ≥ iβ(j) − c(i) which implies

β(j) ≥ (c(j) − c(i))/(j − i).    (13)

From (12) and (13), we get β(j) ≥ β(i), which contradicts the assumption that β is strictly decreasing. Hence, both ei and ej cannot be Nash equilibria.

Now consider an equilibrium x∗ which has both strategies i and j in its support. Clearly, there can be only one such Nash equilibrium. Note that i < a(x∗) < j and it is possible that a(x∗) ≠ α∗. In such an equilibrium, Fi(x∗) = Fj(x∗), or iβ(a(x∗)) − c(i) = jβ(a(x∗)) − c(j), which implies

(c(j) − c(i))/(j − i) = β(a(x∗)).    (14)

Suppose both ei and x∗ are Nash equilibria. Then, from (12) and (14), we obtain β(a(x∗)) ≥ β(i), which contradicts the assumption that β is strictly decreasing. Suppose both ej and x∗ are Nash equilibria. Then, from (13) and (14), we obtain β(j) ≥ β(a(x∗)), which leads to the same contradiction.

Hence, there exists a unique Nash equilibrium, which is either e_{⌊nα∗⌋/n}, e_{⌈nα∗⌉/n}, or a polymorphic state x∗ having {⌊nα∗⌋/n, ⌈nα∗⌉/n} as support.
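A small numerical instance (hypothetical parameterization, not from the paper) may help fix ideas. With β(a) = m − a and c(i) = i²/2, the quasi–potential g(α) = mα − α² is maximized at α∗ = m/2; for n = 5, α∗ lies strictly between the adjacent grid points 0.4 and 0.6, and the sketch below locates the unique, polymorphic Nash equilibrium supported on these two strategies by maximizing the potential over that support and then checking the Nash property directly.

    import numpy as np

    # Minimal sketch illustrating Proposition 4.2.  Hypothetical spec:
    # beta(a) = m - a, c(i) = i**2/2, so g(a) = m*a - a**2 peaks at a* = m/2,
    # which lies between the grid points 0.4 and 0.6 when n = 5.
    m, n = 1.0, 5
    S = np.arange(0, int(m * n) + 1) / n
    beta = lambda a: m - a
    cost = lambda i: i ** 2 / 2

    a_star = m / 2.0
    i, j = np.floor(n * a_star) / n, np.ceil(n * a_star) / n

    def potential(x):
        a = S @ x
        return m * a - a ** 2 / 2 - cost(S) @ x     # f(x) = int_0^a beta - sum_i c(i)x_i

    def payoffs(x):
        a = S @ x
        return S * beta(a) - cost(S)                # F_i(x) = i*beta(a(x)) - c(i)

    # one-dimensional search over the weight w placed on strategy j
    best_x, best_f = None, -np.inf
    for w in np.linspace(0.0, 1.0, 10001):
        x = np.zeros_like(S)
        x[np.isclose(S, i)], x[np.isclose(S, j)] = 1.0 - w, w
        if potential(x) > best_f:
            best_f, best_x = potential(x), x

    F = payoffs(best_x)
    print("support of x*:", S[best_x > 1e-9])
    print("a(x*):", round(S @ best_x, 3), " vs  a* =", a_star)
    print("Nash check (support payoffs maximal):",
          np.isclose(F[best_x > 1e-9].max(), F.max()))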

Proof of Proposition 4.3: For the only if part, let x∗ be a Nash equilibrium of F. Therefore, by concavity of f, it is a global maximizer of f. Hence, f(x∗) ≥ f(x), ∀x ∈ X. Take α̃ ∈ [0, m]. For any such α̃, there exists x̃ ∈ X such that a(x̃) = α̃.23 Due to the linearity of c, f(x) = g(α) if a(x) = α. Therefore, f(x∗) ≥ f(x̃) implies g(a(x∗)) ≥ g(a(x̃)) = g(α̃). Since this holds for all α̃ ∈ [0, m], a(x∗) is the global maximizer of g, or a(x∗) = α∗.

For the if part, we need to consider three cases.

1. Let α∗ = 0. In that case, the unique x∗ such that a(x∗) = α∗ is e0. We show that this is the unique Nash equilibrium. Note that α∗ = 0 implies g′(α∗) = β(0) − k ≤ 0, which implies β(α) − k < 0, for all α > 0, due to the fact that β is strictly decreasing. Let x ≠ e0, a(x) = α > 0, be a Nash equilibrium of F, with strategy j ≠ 0 in its support. If x ≠ e0, there must be at least one such strategy in its support. Then, Fj(x) = j(β(α) − k) < 0 = F0(x), which is a contradiction. Thus, the only Nash equilibrium is x∗ = e0.

2. Let α∗ = m. In that case, the unique x∗ such that a(x∗) = α∗ is em. We show that this is the unique Nash equilibrium. Note that α∗ = m implies g′(α∗) = β(m) − k ≥ 0, which implies β(α) − k > 0, for all α < m, due to the fact that β is strictly decreasing. Let x ≠ em, a(x) = α < m, be a Nash equilibrium of F, with strategy j ≠ m in its support. If x ≠ em, there must be at least one such strategy in its support. Then, Fj(x) = j(β(α) − k) < m(β(α) − k) = Fm(x), since β(α) > k and m > j. But this is a contradiction. Thus, the only Nash equilibrium is x∗ = em.

3. Let α∗ ∈ (0, m). In that case, the set {x∗ ∈ X : a(x∗) = α∗} is convex. For all such x∗, g′(α∗) = 0 ⇒ β(α∗) − k = 0 ⇒ Fi(x∗) = i(β(α∗) − k) = 0, for all i ∈ Sn. Hence, any such x∗ is a Nash equilibrium.

Note that in each of the three cases above, the set of Nash equilibria is convex. Hence, the second part of the proposition follows.

23 To see the existence of such x̃, note that {0, m} ⊆ Sn, for all n. Take the support of x̃ to be {0, m} and let x̃ = (1 − α̃/m, 0, · · · , 0, α̃/m). Then a(x̃) = α̃.

Proof of Proposition 4.4: Note that if β is strictly increasing, then the potential function f is convex. Hence, there may exist more than one local maximizer of f, which implies that the set of Nash equilibria is not necessarily convex.

1. Let β(0) = 0. Then, since c(0) = 0, F0(e0) = 0. Moreover, since c(i) > 0 for i > 0, Fi(e0) < 0 for i > 0. Therefore, e0 is a Nash equilibrium of F .

2. The fact that every local maximizer of f is a Nash equilibrium of F follows from results in Sandholm (2001). If α ∈ Sn, g(α) = f(eα). Therefore, if α∗ ∈ Sn, then f(ei) < f(eα∗), for all i ∈ Sn \ {α∗}. This, combined with the convexity of f, implies f(x) ≤ Σ_{i∈Sn} xi f(ei) < f(eα∗), for all x ∈ X \ {eα∗}. Therefore, eα∗ is the global maximizer of f and, hence, is a Nash equilibrium of F.

3. For linear c, let c(i) = ki, for some k > 0. Then, g′(α) = β(α) − k. If g is strictly quasiconcave, then g′(α) > 0 in the immediate neighborhood of 0, which implies β(α) > k in the neighborhood of 0. Increasing β(α) then implies β(α) > k for all α, which implies g is upward sloping for all α > 0. Hence, α∗ = m. By part 2, therefore, em is the global maximizer of f, which implies it is a Nash equilibrium of F.

For uniqueness of equilibrium, note that Fi(x) = i(β(a(x)) − k) = i(β(α) − k), where a(x) = α, which is increasing in i as β(α) > k, for all α ∈ [0, m]. Consider x ≠ em. Let a(x) = α and let j be in the support of x. Hence, Fj(x) = j(β(α) − k) > 0 (due to β(α) > k). But then, Fm(x) > Fj(x), which means x ≠ em cannot be a Nash equilibrium.

4. If α∗ ∈ Sn, then the conclusion follows from part 2. If α∗ ∉ Sn, α∗ ∈ (0, m). The continuity of g then implies that, over Sn, g is maximized at either ⌊nα∗⌋/n or ⌈nα∗⌉/n. Then, the same argument as in part 2 of this proof establishes the corresponding monomorphic state, e_{⌊nα∗⌋/n} or e_{⌈nα∗⌉/n}, as the global maximizer of f and, hence, as a Nash equilibrium of F. We note that as n → ∞, the aggregate strategy level at this equilibrium becomes arbitrarily close to α∗.

To show that for large n, all monomorphic equilibria except e0 must have aggregate strategy level close to α∗, fix i ∈ (0, m] and let Sn(i) denote the strategy set Sn containing i as a strategy. We show that if i ≠ α∗, then, for n large enough, ei is not a Nash equilibrium in the game F with strategy set Sn(i). Combined with the previous paragraph of this proof, this suffices to establish the result.

For fixed i ∈ (0, m], define φi : [0, m] → R as φi(j) = jβ(i) − c(j). Note that if i, j ∈ Sn, then φi(j) = Fj(ei). The strict convexity of c and the fact that β(i) > 0 for i > 0 imply that φi(j) is strictly concave with a unique maximizer, say j∗(i) ∈ (0, m], characterized by β(i) ≥ c′(j∗(i)), with equality if j∗(i) < m. Also, by assumption, α∗ ∈ (0, m]. If α∗ ∈ (0, m), then α∗ is the unique strictly positive solution to the equation β(α) = c′(α). If α∗ = m, then β(α) > c′(α), ∀α ∈ [0, m).

We now show that if i ≠ α∗, then j∗(i) ≠ i. First, suppose α∗ = m. Hence, i ≠ α∗ implies i < m. Hence, j∗(i) = i implies β(i) = c′(i) (from the condition β(i) = c′(j∗(i)) if j∗(i) = i < m). But if α∗ = m, β(i) > c′(i), ∀i ∈ [0, m). This is a contradiction.

Second, suppose α∗ ∈ (0, m) and i ≠ α∗. Then, for j∗(i) = i, we require β(i) ≥ c′(i) (from the condition β(i) ≥ c′(j∗(i))), with equality if i < m. But if this condition holds with equality, then i = α∗ (since if α∗ ∈ (0, m), it is the unique positive solution to the equation β(i) = c′(i)). If this condition holds with strict inequality, then i = m and β(m) ≥ c′(m). But this is not possible if α∗ ∈ (0, m).

Therefore, if i ≠ α∗, then j∗(i) ≠ i. Hence, for every i ∈ Sn \ {0}, i ≠ α∗, there exists j ∈ [0, m], j ≠ i, such that φi(j) > φi(i). But if n is large enough, either j∗(i) ∈ Sn(i) or there exists j ∈ Sn(i) arbitrarily close to j∗(i). Then, either F_{j∗(i)}(ei) > Fi(ei), or, by the continuity of the payoff functions, Fj(ei) > Fi(ei). Either way, ei is not a Nash equilibrium if the strategy set is Sn(i), for n large.
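The role of the map j∗(i) in this argument can be illustrated numerically. The sketch below (hypothetical functional forms, not from the paper) takes an increasing benefit β(a) = 0.2 + a and strictly convex cost c(j) = j², for which α∗ = 0.2 solves β(α) = c′(α), and checks that i is a fixed point of j∗(i) = argmax_j φi(j) only at i = α∗.

    import numpy as np

    # Minimal sketch for part 4 of Proposition 4.4.  Hypothetical spec:
    # beta(a) = 0.2 + a (increasing) and c(j) = j**2 (strictly convex), so
    # a* = 0.2 is the unique solution of beta(a) = c'(a) = 2a on (0, m).
    m = 1.0
    beta = lambda a: 0.2 + a
    a_star = 0.2

    def j_star(i):
        # argmax over [0, m] of the strictly concave phi_i(j) = j*beta(i) - j**2
        return min(beta(i) / 2.0, m)

    for i in [0.05, 0.2, 0.5, 0.8, 1.0]:
        print(f"i = {i:.2f}   j*(i) = {j_star(i):.3f}   fixed point: {np.isclose(j_star(i), i)}")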

Proof of Proposition 4.5: 1. The fact that every local maximizer of f is a Nash equilibrium follows from Sandholm (2001). For the remainder of this part of the proposition, note that both 0 and m are local maximizers of g. We need to show that e0 and em are local maximizers of f.

First, consider e0. Consider the strategy set Sn and denote Sn− = {i ∈ Sn : 0 ≤ i ≤ α̂}. Since 0 is a local maximizer of g, f(e0) = g(0) > g(i) = f(ei), ∀i ∈ Sn− \ {0}. Let x ∈ X be such that it has support only in Sn−. Thus, if j ∉ Sn−, xj = 0. The convexity of f implies f(x) ≤ f(e0), with equality only if x = e0. Now consider x′ ∈ X such that x′j > 0 for at least some j ∈ Sn \ Sn−. By continuity of f, if x′j is small enough for all j ∈ Sn \ Sn−, then f(x′) < f(e0). Hence, e0 is a local maximizer of f and, therefore, a Nash equilibrium of F.

An analogous argument with the set Sn+ = {i ∈ Sn : α̂ ≤ i ≤ m} establishes em as a Nash equilibrium of F.

2. A linear c implies g(α) = f(x) if a(x) = α. Now consider x′ ∉ {e0, em}. Therefore, a(x′) = α′ ∉ {0, m}. Suppose α′ ∈ [α̂, m). Then there exists α′′ ∈ (α′, m), arbitrarily close to α′, such that g(α′′) > g(α′). But this implies there exists x′′ arbitrarily close to x′ such that a(x′′) = α′′. In that case, f(x′′) > f(x′) and, therefore, x′ is not a local maximizer of f. A similar argument works if α′ ∈ (0, α̂].

Let c(i) = ki, k > 0. At α̂, g′(α̂) = β(α̂) − c′(α̂) = 0 ⇒ β(α̂) = k. Now, consider x such that a(x) = α̂. At such x, for all i ∈ Sn,

Fi(x) = iβ(a(x)) − c(i) = iβ(α̂) − ki = 0.

Hence, any x such that a(x) = α̂ is a Nash equilibrium.

3. Consider φi(h) = hβ(i) − c(h), as defined in the proof of part 4 of Proposition 4.4, and note that for h, i ∈ Sn, φi(h) = Fh(ei). For fixed i > 0, the strict convexity of c and the fact that β(i) > 0 imply that φi(h) is strictly concave with a unique maximizer, which we denote as h∗(i).

Now, let α̂ ∈ Sn and note that h∗(α̂) is characterized by β(α̂) ≥ c′(h∗(α̂)), with equality if h∗(α̂) < m. But from the fact that α̂ is the unique minimizer of g and that α̂ ∈ (0, m), we know that β(α̂) = c′(α̂). Hence, h∗(α̂) = α̂. Therefore, φα̂(α̂) > φα̂(j), for all j ≠ α̂, or Fα̂(eα̂) > Fj(eα̂), for all j ∈ Sn \ {α̂}. Hence, eα̂ is a Nash equilibrium of F.

For α̂ ∉ Sn, denote i = ⌊nα̂⌋/n and j = ⌈nα̂⌉/n. Note that we only need to consider the case where i ≠ 0 and j ≠ m, as otherwise, by part 1 of the proposition, the statement is automatically satisfied.

First, we show that Fi(ei) > Fj(ei) and Fj(ej) > Fi(ej). For Fi(ei) > Fj(ei), note that β(α) < c′(α) for α < α̂ (this is because g is declining at α < α̂). From this, we can conclude that φα(h) is maximized at h∗(α) < α for α ∈ (0, α̂). To see this, note that β(α) < c′(α) rules out h∗(α) = m, as that would mean β(α) ≥ c′(m) > c′(α) (the first inequality holds if h∗(α) = m, the second follows from the strict convexity of c). Hence, h∗(α) ∈ (0, m) and is, therefore, characterized by β(α) = c′(h∗(α)). This must mean h∗(α) ≠ α, as otherwise α = α̂ (by the earlier paragraph). If h∗(α) > α, then β(α) < c′(α) < c′(h∗(α)), by the strict convexity of c. Therefore, φα(h) is maximized at h∗(α) < α for α < α̂. But i < α̂ and, hence, h∗(i) < i. Hence, φi(h), which is a strictly concave function, starts declining at h∗(i) < i. Therefore, since j > i, φi(i) > φi(j), or Fi(ei) > Fj(ei). A similar argument, using the fact that g is increasing at α > α̂, establishes that φj(h) is increasing at h = j and, hence, that Fj(ej) > Fi(ej).

Therefore, there must exist some α̃ ∈ (i, j) such that Fi(x̃) = Fj(x̃), where x̃ is the unique state having support in {i, j} such that a(x̃) = α̃. We show that this x̃ is a Nash equilibrium of F.

For this, note that Fi(x̃) = Fj(x̃) implies φα̃(i) = φα̃(j). Now, consider φα̃(h). The strict concavity of φα̃(h) implies that φα̃(h) > φα̃(i) = φα̃(j) if and only if h ∈ (i, j). But since α̂ ∉ Sn, {h ∈ Sn : i < h < j} = ∅. Hence, there exists no h ∈ Sn such that Fh(x̃) > Fi(x̃) = Fj(x̃), where x̃ is the unique state with support in {i, j} such that a(x̃) = α̃. Therefore, this state is a Nash equilibrium of F.

A.2 Proofs of Section 5

Proof of Lemma 5.1: The Hessian matrix of F̄ is

D²F̄(x) = ((a(x)β′′(a(x)) + 2β′(a(x)))/n²) ×
  [ 0   0    0    ···  0     ]
  [ 0   1    2    ···  mn    ]
  [ 0   2    4    ···  2mn   ]
  [ ⋮   ⋮    ⋮         ⋮     ]
  [ 0   mn   2mn  ···  m²n²  ]

The strict concavity of αβ(α) on [0, m] implies that a(x)β′′(a(x)) + 2β′(a(x)) < 0. Therefore, D²F̄(x) is a negative semidefinite matrix, which implies F̄ is concave. To check that F̄ is not strictly concave, take x, y ∈ X, x ≠ y, such that a(x) = a(y). In that case, F̄(λx + (1 − λ)y) = λF̄(x) + (1 − λ)F̄(y), for all λ ∈ [0, 1].

Proof of Lemma 5.2: Note that ḡ′(α) = αβ′(α) + β(α) − c′(α) and ḡ′′(α) = αβ′′(α) + 2β′(α) − c′′(α). Since αβ(α) is strictly concave, αβ′′(α) + 2β′(α) < 0. The convexity of c implies c′′(α) ≥ 0. Hence, ḡ′′(α) < 0, i.e. ḡ is strictly concave with a unique maximizer, which we denote α∗∗ > 0.

On the other hand, g′(α) = β(α) − c′(α) > ḡ′(α), since β′(α) < 0. Therefore, if α∗∗ ∈ (0, m), so that ḡ′(α∗∗) = 0, then g′(α∗∗) > 0. Hence, g is increasing at α∗∗. Since g is concave (by Lemma 4.1) with a unique maximizer α∗, this implies that α∗ > α∗∗. If α∗∗ = m, then, by the same argument, g is rising at m, the maximum possible value of α∗. Therefore, α∗ = m.
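The ordering α∗∗ ≤ α∗ established here is easy to check numerically. The sketch below (an illustration with hypothetical functional forms, not from the paper) keeps β(a) = m − a and compares the grid maximizers of g and ḡ for two convex cost functions.

    import numpy as np

    # Minimal numerical check of Lemma 5.2: with beta decreasing, the
    # maximizer a** of g_bar does not exceed the maximizer a* of g.
    # Hypothetical benefit beta(a) = m - a; two hypothetical convex costs.
    m = 1.0
    alpha = np.linspace(0.0, m, 100001)
    beta = lambda a: m - a

    for name, c in [("c(i) = i^2", lambda i: i ** 2), ("c(i) = i^3/3", lambda i: i ** 3 / 3)]:
        g = m * alpha - alpha ** 2 / 2 - c(alpha)      # g(a) = int_0^a beta(z)dz - c(a)
        g_bar = alpha * beta(alpha) - c(alpha)         # g_bar(a) = a*beta(a) - c(a)
        a_star, a_2star = alpha[np.argmax(g)], alpha[np.argmax(g_bar)]
        print(f"{name}:  a* ~ {a_star:.3f}   a** ~ {a_2star:.3f}   a** <= a*: {a_2star <= a_star}")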

Proof of Proposition 5.3: Since ḡ is strictly concave on [0, m] and F̄ is concave on X, the proofs of 1(a) and 1(b) follow from the arguments in the proofs of Propositions 4.2(1) and (2) respectively, with F̄ replacing f, ḡ replacing g and α∗∗ replacing α∗. This implies that for n sufficiently large, even if α∗∗ ∉ Sn, a(x∗∗) is arbitrarily close to α∗∗. On the other hand, Proposition 4.2 implies that for n large, a(x∗) is close to α∗. But by Lemma 5.2, α∗∗ ≤ α∗, with equality only if α∗∗ = m. Hence, for n sufficiently large, a(x∗∗) ≤ a(x∗), with equality only if a(x∗∗) = m.

If c is linear, then F̄(x) = a(x)β(a(x)) − c·a(x). Hence, if a(x) = α, F̄(x) = ḡ(α). The proof of part 2 then follows from the argument in the proof of Proposition 4.3 with, once again, F̄ replacing f, ḡ replacing g and α∗∗ replacing α∗. Therefore, at all n, every efficient state is characterized by a(x∗∗) = α∗∗. The relationship between a(x∗∗) and a(x∗) then follows from the relationship between α∗∗ and α∗ (Lemma 5.2) and the fact that at every Nash equilibrium x∗, a(x∗) = α∗ (Proposition 4.3).

Proof of Proposition 5.4: The proof relies on the relationship between α∗∗ and α∗.

1. Note that since β′(α) > 0, ḡ′(α) > g′(α) for α > 0. If g is as in Proposition 4.4, then g increases monotonically to α∗ > 0. Therefore, on (0, α∗], ḡ′(α) > g′(α) ≥ 0. Hence, ḡ is strictly increasing on [0, α∗], which implies α∗∗ ≥ α∗, with equality only if α∗ = m. The characterization of x∗∗ follows from the strict convexity of F̄ and the argument in the proof of Proposition 4.4(2 and 4), with F̄ replacing f and ḡ replacing g. Therefore, for n large, a(x∗∗) is either equal to or very close to α∗∗. On the other hand, if c is strictly convex, then by Proposition 4.4(4), a(x∗) is either 0, α∗ or very close to α∗ for all monomorphic Nash equilibria x∗ of F, if n is large enough. The relationship between α∗ and α∗∗ then implies the stated relationship between a(x∗∗) and a(x∗). If c is linear, then the relationship follows from the fact that x∗ = em is the unique Nash equilibrium (Proposition 4.4(3)).

2. We show that if g is as in Proposition 4.5, then α∗∗ = m. The minimizer of g, α̂ ∈ (0, m), is characterized by β(α̂) = c′(α̂). Denote by MC(α) = c′(α) the marginal cost and by AC(α) = c(α)/α the average cost. Note that g(0) = ḡ(0) = 0. At α̂, ḡ(α̂) = α̂β(α̂) − c(α̂) = α̂(MC(α̂) − AC(α̂)) ≥ 0, since MC(α) is greater than (equal to) AC(α) when the cost function is strictly convex (linear). Moreover, (MC(α) − AC(α)) is strictly increasing in α (zero) if c is strictly convex (linear). Therefore, if α ∈ (0, α̂), so that g is strictly decreasing (which implies β(α) < c′(α)),

ḡ(α) = αβ(α) − c(α) < αc′(α) − c(α) = α(MC(α) − AC(α)) ≤ α̂(MC(α̂) − AC(α̂)) = ḡ(α̂).

Hence, ḡ(α̂) ≥ ḡ(α), for all α ∈ [0, α̂), with equality only at α = 0. Furthermore, on [α̂, m] (where g is increasing so that β(α) ≥ c′(α)), ḡ′(α) = β(α) + αβ′(α) − c′(α) > 0 (since β′ > 0). We, therefore, conclude that ḡ(α̂) ≥ ḡ(α) for all α ∈ [0, α̂], and that ḡ is strictly increasing on (α̂, m]. Hence, α∗∗ = m.

Note that α∗ is either 0 or m. Hence, α∗∗ ≥ α∗, with equality only if α∗ = m. Since α∗∗ ∈ Sn, the argument in the proof of Proposition 4.4(2) implies that x∗∗ = em. Hence, a(x∗∗) = m. Since a(x∗) ≤ m for all Nash equilibria x∗, the relationship between a(x∗) and a(x∗∗) follows.

References

[1] D. Acemoglu, M. Jensen, 2013, Aggregate Comparative Statics, Games Econ. Behav. 81, 27– 49.

[2] C. Alós-Ferrer, A. Ania, 2005, The Evolutionary Stability of Perfectly Competitive Behavior, Econ. Theory 26, 497–516.

[3] J. Bergin, D. Bernhardt, 2004, Comparative Learning Dynamics, Int. Econ. Rev. 2, 431–465.

[4] L. Corchón, 1994, Comparative Statics for Aggregative Games: The Strong Concavity Case, Math. Soc. Sci. 23, 151–165.

[5] M. Dindoš, C. Mezzetti, 2006, Better-reply Dynamics and Global Convergence to Nash Equilibria in Aggregative Games, Games Econ. Behav. 54, 261–292.

[6] D. Fudenberg, D. Levine, 1998, Theory of Learning in Games, MIT Press, Cambridge, MA, USA.

[7] J. Hofbauer, 2000, From Nash and Brown to Maynard Smith: Equilibria, dynamics, and ESS, Selection 1, 81–88.

[8] N. Kukushkin, 2004, Best response dynamics in finite games with additive aggregation, Games Econ. Behav. 48, 94–110.

[9] R. Lahkar, W. Sandholm, 2008, The projection dynamic and the geometry of population games, Games Econ. Behav. 64, 565–590.

[10] D. Monderer, L. Shapley, 1996, Potential games, Games Econ. Behav. 14, 124–143.

[11] E. Ostrom, 1990, Governing the Commons: The Evolution of Institutions for Collective Action, Cambridge University Press, Cambridge, UK.

[12] W. Sandholm, 2001, Potential Games with Continuous Player Sets, J. Econ. Theory 97, 81– 108.

[13] W. Sandholm, 2002, Evolutionary Implementation and Congestion Pricing, Rev. Econ. Stud. 69, 667–689.

[14] W. Sandholm, 2009, Large Population Potential Games, J. Econ. Theory 144, 1710–1725.

[15] W. Sandholm, 2010, Population Games and Evolutionary Dynamics, MIT Press, Cambridge, MA, USA.

[16] M. Schaffer, 1988, Evolutionarily Stable Strategies for a Finite Population and a Variable Contest Size, J. Theor. Biol., 132, 469–478.

[17] M. Smith, 1984, The Stability of a Dynamic Model of Traffic Assignment, Transp. Sci. 18, 245–252.

[18] P. Taylor, L. Jonker, 1978, Evolutionarily stable strategies and game dynamics, Math. Biosci. 40, 145–156.

[19] F. Vega-Redondo, 1997, The evolution of Walrasian Behavior, Econometrica 65, 375–384.
