
Homophily, payoff distributions, and truncation selection in replicator dynamics

by

Bryce Morsky

A Thesis presented to The University of Guelph

In partial fulfilment of requirements for the degree of Doctor of Philosophy in Applied Mathematics

Guelph, Ontario, Canada

© Bryce Morsky, April, 2016

ABSTRACT

HOMOPHILY, PAYOFF DISTRIBUTIONS, AND TRUNCATION SELECTION IN

REPLICATOR DYNAMICS

Bryce Morsky
University of Guelph, 2016

Advisors: Chris T. Bauch & Daniel Ashlock

This dissertation explores the field of replicator dynamics by examining extensions to and relaxations of the classical replicator equation and complementary agent-based models. We extend the replicator equation by incorporating homophilic imitation, a form of tag-based selection. We show that though the equilibria are not affected by this modification, the population's diversity may increase or decrease depending on two invasion scenarios we detail, and there is a significant impact on the rates of convergence to equilibria. Two important assumptions of the replicator equation that we relaxed are: mean payoffs, where all replicators earn the mean payoff of the underlying game; and proportional selection, where the probabilities for survival and reproduction are proportional to the difference between the fitness of a replicator and the mean fitness of the population.

Our models thus comprise payoff distributions and two types of truncation selection: independent, where replicators with fitness above a threshold, φ, survive; and dependent, where the top τ fraction of replicators survive. The reproduction rates are equal for all survivors. We show that the classical replicator equation is a special case of our independent truncation equation.

Further, for any boundary fixed point, we may choose a φ such that the point is stable (or unstable). We observed complex and transient dynamics in both truncation methods. We applied this framework to evolutionary graphs that included diffusion, and show where cooperation is facilitated by these models in comparison to spatial and non-spatial proportional selection. Alfred Russel Wallace reasoned that the relatively unfit could coexist with the fit, and it has been argued that this would result in a genotypically diverse population resistant to extinction. This is because natural selection may be better encapsulated by the phrases "survival of the fit" or "non-survival of the non-fit," rather than Spencer's "survival of the fittest." We argue that truncation selection, as explored here, can model this phenomenon, and is thus an important addition to the theoretical biology literature.

ACKNOWLEDGEMENTS

I would like to acknowledge the contributions and support of the Department of Mathematics & Statistics, my advisory committee, and my examining committee. I also wish to express my deep gratitude to my advisor, Chris Bauch, for his mentorship.

Table of Contents

List of Tables

List of Figures

1 Introduction
  1.1 The replicator equation
    1.1.1 ESS
    1.1.2 Limitations
    1.1.3 Games
  1.2 Extensions to the replicator equation
    1.2.1 Imitation dynamics
    1.2.2 Truncation selection
  1.3 Outline

2 Homophilic replicator equations
  B. MORSKY, R. CRESSMAN, & C. T. BAUCH
  2.1 Abstract
  2.2 Introduction
    2.2.1 Replicator equations
    2.2.2 Games
  2.3 Methods
    2.3.1 The Model
    2.3.2 Measures of diversity
    2.3.3 Simulations
      2.3.3.1 Recurrent mutations
      2.3.3.2 Further simulations
  2.4 Results
    2.4.1 Fixed points and stability
    2.4.2 The two-tag Snowdrift game with recurrent mutations
    2.4.3 Coat-tailing and diversity
      2.4.3.1 Invasion scenario 1
      2.4.3.2 Invasion scenario 2
    2.4.4 Rates of convergence
  2.5 Discussion
  2.6 Appendix: Games

3 Truncation selection and payoff distributions applied to the replicator equation
  B. MORSKY & C. T. BAUCH
  3.1 Abstract
  3.2 Introduction
  3.3 Methods
    3.3.1 Payoff distributions
    3.3.2 The replicator equation
    3.3.3 Proportional selection
    3.3.4 Truncation selection
    3.3.5 Agent-based models
  3.4 Results
    3.4.1 Evolutionary stability
    3.4.2 Agent-based simulations vs the 2- model
      3.4.2.1 Independent truncation
      3.4.2.2 Dependent truncation
  3.5 Discussion
  3.6 Conclusions
  3.7 Appendix: Mean payoffs and proportional selection
  3.8 Fixed points and stability proofs

4 Truncation selection facilitates cooperation on random spatially structured populations of replicators
  B. MORSKY & C. T. BAUCH
  4.1 Abstract
  4.2 Introduction
  4.3 Methods
  4.4 Results
    4.4.1 Proportional selection
    4.4.2 Independent truncation
    4.4.3 Dependent truncation
  4.5 Discussion

5 Discussion
  5.1 Summary
  5.2 Directions for future work

A Appendix of Java code
  A.1 Code for chapter 3
    A.1.1 DepFogel.java
    A.1.2 IndepFogel.java
    A.1.3 Paper2.java
    A.1.4 Player.java
  A.2 Code for chapter 4
    A.2.1 CA.java
    A.2.2 CAdep.java
    A.2.3 CAindep.java
    A.2.4 Paper3.java
    A.2.5 Player.java

References

List of Tables

3.1 Parameter values for the Hawk-Dove game.

List of Figures

2.1 Relative entropy vs. rjj for invasion scenario 1.
2.2 Relative entropies vs. κ for invasion scenario 1.
2.3 Relative entropy vs. rjj for invasion scenario 2.
2.4 Relative entropy vs. κ for invasion scenario 2.
2.5 HRE rates of convergence.

3.1 Equilibria for the independent truncation equation.
3.2 Independent truncation agent-based simulation averages and extinction rates.
3.3 Independent truncation agent-based simulation results.
3.4 Equilibria for the dependent truncation equation vs. simulation results.
3.5 Dependent truncation agent-based simulation results.
3.6 Time series of the agent-based dependent truncation model.

4.1 Heatmap of cooperation in parameter space.
4.2 Heatmaps of cooperation in parameter space for proportional selection.
4.3 Heatmaps of cooperation in parameter space for independent truncation.
4.4 Cooperator density, ρc, of each game vs. φ.
4.5 Heatmaps of cooperation in parameter space for dependent truncation.
4.6 Cooperator density, ρc, of each game vs. τ.

Chapter 1

Introduction

In this dissertation, we explore evolutionary dynamics: the mathematical framework by which we study evolution. Our focus is upon replicators, evolutionary agents that interact with each other and evolve over time. Within the framework of evolutionary game theory (Maynard Smith and Price (1973); Maynard Smith (1974)), replicators earn payoffs from their interactions with each other. The aggregation of these payoffs determines fitness, which in turn determines survival and replication. Replicator dynamics have been used to study a variety of fields: biological evolution, animal behaviour, genetics, ecology, chemistry, sociology, evolutionary economics, and even cryptography (Dosi and Nelson (1994); Dugatkin and Reeve (1998); Hines (1987); Hammerstein et al. (1994); Hofbauer and Sigmund (2003); Nowak and Sigmund (2004); Schuster and Sigmund (1983)). Here, we study different mathematical models of replicator dynamics with the purpose of better understanding social group formation, biological evolution, and cooperation.

1.1 The replicator equation

Introduced in Taylor and Jonker (1978), the replicator equation is the classical implementation of evolutionary dynamics (Schuster and Sigmund (1983)). It models the change in time of frequencies of replicator phenotypes, which are represented as strategies of a game. Mathematically, we say that $s_i \in S$ is the strategy of $x_i$, the frequency of replicators playing $s_i$. When replicators interact, they play a game, and the strategies of the players determine the payoffs each receives. Since the replicator equation is a mean field model, the fitness of $x_i$, $f_i(\mathbf{x})$, is a function of the strategy profile of the population, $\mathbf{x} = [x_1, x_2, \ldots, x_n]^T$, which is a vector of the frequencies of each phenotype/strategy. We thus have:

\[
f_i(\mathbf{x}) = \sum_{j=1}^{n} \pi_{ij} x_j, \tag{1.1}
\]

where $\pi_{ij}$ is the payoff to replicators playing $s_i$ vs. $s_j$, and is an element of the payoff matrix, $\Pi$. We then apply a Darwinian process by letting the change in frequencies of replicators be proportional to the difference between their fitness and the average fitness of the population, $\bar{f}(\mathbf{x})$. Since $\sum_{i=1}^{n} x_i = 1$ is an invariant attracting hyperplane for the system, we have $\bar{f}(\mathbf{x}) = \sum_{i=1}^{n} x_i f_i(\mathbf{x})$. The replicator equation is thus:

\[
\dot{x}_i = x_i\left(f_i(\mathbf{x}) - \bar{f}(\mathbf{x})\right) = x_i\left(f_i(\mathbf{x}) - \sum_{j=1}^{n} x_j f_j(\mathbf{x})\right). \tag{1.2}
\]

Of note, this equation has an important relation to the Lotka-Volterra equation. The replicator equation can be embedded into the Lotka-Volterra equation; further, the Lotka-Volterra equation with equivalent growth rates for all species can be reduced to the replicator equation (Bomze (1983, 1995); Hofbauer and Sigmund (1998)).
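To make the dynamics concrete, the following minimal sketch (an illustration, not this dissertation's appendix code; the class name, step size, and initial frequencies are assumptions) integrates equation (1.2) by the forward Euler method for the two-strategy Snowdrift game used in later chapters with $\kappa = 0.5$, for which the interior ESS is $x_c = (1 - \kappa)/(1 - \kappa/2) = 2/3$.

```java
// Illustrative sketch: forward-Euler integration of the replicator equation (1.2)
// for the Snowdrift game with kappa = 0.5, i.e. Pi = [[0.75, 0.5], [1, 0]].
public class ReplicatorSketch {
    public static void main(String[] args) {
        double[][] pi = {{0.75, 0.5}, {1.0, 0.0}}; // payoff matrix Pi
        double[] x = {0.1, 0.9};                   // frequencies: cooperate, defect
        double dt = 0.01;                          // illustrative step size
        for (int step = 0; step < 100000; step++) {
            double[] f = new double[2];
            double fBar = 0.0;
            for (int i = 0; i < 2; i++) {
                for (int j = 0; j < 2; j++) f[i] += pi[i][j] * x[j]; // f_i = (Pi x)_i
                fBar += x[i] * f[i];                                 // mean fitness
            }
            for (int i = 0; i < 2; i++) x[i] += dt * x[i] * (f[i] - fBar); // eq. (1.2)
        }
        System.out.printf("x = (%.4f, %.4f)%n", x[0], x[1]); // approaches (2/3, 1/3)
    }
}
```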

1.1.1 ESS

The stability concept for the replicator equation is the evolutionary stable state (ESS), which is equivalent to an asymptotically stable rest point (Hofbauer and Sigmund, 1998; Weibull, 1997). Let $\bar{\mathbf{x}}$ be a fixed point of the replicator equation. Then, it is an ESS if:

\[
\exists\, \varepsilon > 0 \text{ such that } \forall \mathbf{x} \in B(\bar{\mathbf{x}}, \varepsilon),\quad \mathbf{x}^T \Pi \mathbf{x} < \bar{\mathbf{x}}^T \Pi \mathbf{x}. \tag{1.3}
\]

A Nash Equilibrium is a strategy profile in which no unilateral deviation in a player's strategy can be profitable to that player. By the folk theorem of evolutionary game theory (Cressman (2003)), all interior fixed points are Nash Equilibria and all Nash Equilibria are fixed points. However, not all Nash Equilibria are ESSes, although they are all fixed points.

1.1.2 Limitations

The replicator equation makes several assumptions: the population is infinite; if the elements of the payoff matrix are stochastic, the replicators earn the mean payoffs; each replicator interacts with every other replicator non-preferentially; and selection is proportional to the difference between a replicator's fitness and the average fitness of the population. The population is infinite (or infinitely divisible), since $\mathbf{x} \in \mathbb{R}^n$. Further, the payoffs in a payoff matrix are the means of the payoffs earned in the underlying game (as elaborated upon in Chapter 3). The literature showcases a variety of relaxations of these assumptions with other replicator dynamics, and the development of further evolutionary stability concepts (Ohtsuki and Nowak (2008); Nowak and Sigmund (2004)). Examples include: finite populations (Taylor et al. (2004)), heterogeneity (Bergstrom and Godfrey-Smith (1998)), networks (Roca et al. (2009); Szabó and Fáth (2007)), and stochasticity (Traulsen et al. (2006)). Here, we will look at the following modifications: homophilic imitation, truncation selection, and spatial effects.

1.1.3 Games

Thus far, we have not detailed the games that determine payoffs, which lead to fitnesses, which can then be selected upon. When replicators interact, they play a game, the outcome of which determines the payoffs each receives. Often, we consider a single game that is played for any contest between any two replicators in the system. Each replicator has a strategy that determines how it plays the game, and replicators earn payoffs for each contest. The sum or average of these payoffs is their fitness. Fit individuals survive and reproduce (as detailed later). A classic example is the Prisoner's Dilemma, a two-strategy non-zero-sum game.

The Prisoner's Dilemma is conceived as follows. Two individuals are caught for a crime. The police place them in separate rooms and attempt to convince them to confess to the crime. If both confess, then they are imprisoned and earn the punishment, $P$. If, however, they do not, then they are only charged with a lesser crime, since the police do not have enough information to charge them with the entirety of their crimes; thus both are rewarded with the payoff, $R > P$. However, the prosecutor makes an offer: if one will rat out the other, then that person is released and thus earns the temptation payoff,

$T > R$. The other will earn the sucker's payoff, $S < P$. The payoff matrix for this is:

\[
\Pi = \begin{pmatrix} R & S \\ T & P \end{pmatrix}, \qquad T > R > P > S. \tag{1.4}
\]

If both criminals can cooperate by not speaking to the police, then they both will face minimal time. We assume that this is the socially optimal solution, i.e. the least amount of days behind bars for both individuals. However, the offer of immunity creates an incentive to cheat, which is present for both players. Thus, they both should rationally defect and speak to the police. As a result, they enter the socially suboptimal solution, where both are imprisoned for a moderate period, which is the game's Nash Equilibrium.
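As a quick numerical check of this reasoning, the sketch below (hypothetical payoff values satisfying $T > R > P > S$, not taken from the thesis) compares the two strategies against each possible opponent. Defection earns strictly more in both cases, so it dominates cooperation regardless of what the other player does.

```java
// Illustrative check: with T > R > P > S, defection strictly dominates
// cooperation, so mutual defection is the unique Nash Equilibrium.
public class DilemmaCheck {
    public static void main(String[] args) {
        double R = 3, S = 0, T = 5, P = 1;  // hypothetical values, T > R > P > S
        double[][] pi = {{R, S}, {T, P}};   // row 0: cooperate, row 1: defect
        String[] opponent = {"cooperator", "defector"};
        for (int j = 0; j < 2; j++)
            System.out.printf("vs %s: cooperate earns %.0f, defect earns %.0f%n",
                    opponent[j], pi[0][j], pi[1][j]);
    }
}
```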

This is a social dilemma, and the Prisoner's Dilemma has been used to model cooperation and the incentive to cheat in biology and economics. How can we foster cooperation? What mechanisms are required to do so? A variety of literature has explored this important topic. Spatial evolutionary dynamics is a particularly fruitful field by which cooperation can be facilitated. In spatial games, the position of replicators in space determines with whom they interact, or the degree of interaction, as well as which strategies survive. Evolutionary graphs, as reviewed in Nowak et al. (2010); Szabó and Fáth (2007), are spatial games where replicators are positioned upon vertices and interact with neighbours with whom they share an edge (Hauert (2001); Nowak and May (1992)).

The Hawk-Dove (aka Snowdrift), Stag Hunt, and harmony games (Axelrod and Hamilton (1981); Skyrms (2004); Sugden (1986)) are other important two-player models of biological systems with a rich body of literature. In contrast to the Prisoner's Dilemma, the Hawk-Dove game yields a mixed Nash Equilibrium (where both strategies are played with a positive probability), which is an ESS. The Stag Hunt has three Nash Equilibria; two are ESSes, and the third is an evolutionarily unstable mixed Nash Equilibrium. The harmony game has one Nash Equilibrium that is also an ESS. We detail these games completely in the following chapters.

1.2 Extensions to the replicator equation

1.2.1 Imitation dynamics

In terms of cultural evolution, replicator dynamics can be formulated as imitation dynamics. In this conception, survival and replication are replaced by imitation; replicators imitate each other by comparing their fitnesses. Imitation dynamics are important for the study of the formation and evolution of social groups. Social groups may be distinguished by behaviours as well as by dress, social norms, and a variety of other social tags of the members (Cialdini and Trost (1998)). How do these groups form? How are they reinforced? Do social traits buttress group cohesion and differentiation from other groups?

In contrast to behaviours, tags and traits may have no discernible effects upon the fitness of the individuals involved. However, if they exhibit the "greenbeard effect" (Hamilton (1964a,b); Dawkins (1976); Laird (2011); Lehmann and Perrin (2002); Queller (2011)), they may have an important impact upon evolutionary dynamics by maintaining or establishing groups. A greenbeard is a gene that codes for a conspicuous tag that has no direct benefit to fitness; it is not sharp teeth or claws, nor is it intelligence or speed. However, it induces altruistic behaviour towards other greenbeards, and thus it is advantageous and can spread through the population by evolutionary dynamics. Homophily is an imitation dynamic that utilizes such tags. Homophilic imitation is a process by which individuals imitate others with whom they share tags (Centola et al. (2007); Ehrlich and Levin (2005)), and has been used to study how groups form and evolve (Bernhard et al. (2006); Durrett and Levin (2005)).

1.2.2 Truncation selection

Returning to the biological realm, let us clarify what we mean by survival and reproduction. Survival is dependent upon comparing the fitnesses of replicators. A method is then employed to determine survival and reproduction. In the replicator equation, we used proportional selection: selection is proportional to the difference between the fitness of a replicator and the mean fitness of the population. Thus, the "fitter" a replicator is, the greater its chance of survival and the greater the number of offspring it will produce. Another selection method is truncation selection (Bäck et al. (2000); Blickle and Thiele (1995); Ficici et al. (2000)). In truncation selection, we rank players by their fitness, and those above some threshold survive and reproduce. However, the number of offspring is not contingent upon how fit a replicator is, only upon it passing the threshold.
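The two truncation rules studied in the later chapters can be sketched as follows (an illustrative reimplementation under assumed data shapes, not the thesis's appendix code): independent truncation keeps every replicator whose fitness exceeds a fixed threshold φ, while dependent truncation ranks the population and keeps the top τ fraction.

```java
import java.util.Arrays;
import java.util.Comparator;

// Illustrative sketch of the two truncation rules. Survivors reproduce at
// equal rates; these helpers only decide who survives.
public class TruncationSketch {
    // Independent truncation: survive if fitness exceeds the fixed threshold phi.
    static boolean[] independent(double[] fitness, double phi) {
        boolean[] survives = new boolean[fitness.length];
        for (int i = 0; i < fitness.length; i++) survives[i] = fitness[i] > phi;
        return survives;
    }

    // Dependent truncation: rank by fitness and keep the top tau fraction.
    static boolean[] dependent(double[] fitness, double tau) {
        int n = fitness.length;
        Integer[] rank = new Integer[n];
        for (int i = 0; i < n; i++) rank[i] = i;
        Arrays.sort(rank, Comparator.comparingDouble(i -> -fitness[i])); // fittest first
        boolean[] survives = new boolean[n];
        int keep = (int) Math.round(tau * n);
        for (int k = 0; k < keep; k++) survives[rank[k]] = true;
        return survives;
    }

    public static void main(String[] args) {
        double[] f = {0.9, 0.2, 0.5, 0.7, 0.1};
        System.out.println(Arrays.toString(independent(f, 0.4))); // phi = 0.4
        System.out.println(Arrays.toString(dependent(f, 0.4)));   // top 40%
    }
}
```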

1.3 Outline

Here we present an outline of the dissertation. Each of the next three chapters is a paper that explores the field of replicator dynamics.

In chapter 2, we explore the incorporation of homophilic imitation into replicator dynamics by modifying the replicator equation. In addition to a strategy, which determines fitness, replicators are assigned a tag. Replicators imitate one another depending on the similarity of their tags, i.e. like imitates like. We studied this model in the context of social group formation. We may imagine social groups as sets of agents with the same strategy, tag, or strategy and tag pair. How then does homophily impact group formation? Using Shannon entropy as our measure of diversity, we discuss how our model may promote or inhibit diversity compared to the classical replicator equation.

Chapter 3 introduces payoff distributions and truncation selection. These two model characteristics are relaxations of assumptions of the classical replicator equation, namely, mean payoffs and proportional selection. With mean payoffs, replicators earn the mean payoff from all contests in which they engage. For example, imagine the coin flipping game, where a player wins $1 if a heads is flipped, and nothing if a tails is flipped. With the mean payoff assumption, players would earn 50¢. When we use payoff distributions, we incorporate the distributions of payoffs that may be earned from repeated games. In the coin flipping case, some players would fortunately win many times, some few, and some would on average earn the mean payoff. We will show how this simple phenomenon has important implications for replicator dynamics when truncation selection is incorporated into our models. In this chapter we distinguish between two types of truncation selection, independent and dependent, and explore their impact upon replicator dynamics.
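The coin-flipping example is simple to simulate; the sketch below (illustrative, not thesis code; the player count and flip count are arbitrary choices) contrasts the spread of accumulated winnings over repeated flips with the uniform 50¢-per-flip mean payoff that the classical replicator equation would assign.

```java
import java.util.Random;

// Illustrative sketch of the coin-flipping game: each player wins $1 per heads.
// Totals vary across players, whereas the mean-payoff assumption gives everyone
// exactly half the number of flips.
public class CoinFlipPayoffs {
    public static void main(String[] args) {
        Random rng = new Random(42);
        int players = 5, flips = 100;
        for (int p = 0; p < players; p++) {
            int winnings = 0;
            for (int k = 0; k < flips; k++) if (rng.nextBoolean()) winnings++;
            System.out.printf("player %d: $%d (mean payoff would give $%d)%n",
                    p, winnings, flips / 2);
        }
    }
}
```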

Chapter 4 extends the idea of truncation selection to evolutionary graphs, where replicators live on the vertices of graphs and play games with their neighbours. Truncation selection is a generalization of the rule "imitate the best," which has been extensively researched in the literature. We compare truncation selection to proportional selection on these graphs, and show that truncation selection furthers cooperation for some games while hindering it for others. Further, we study diffusion on our graphs, where replicators "swap" positions with neighbouring replicators. These models are discussed with attention to their impact on the density of cooperators.

We discuss our results and their broader conclusions in chapter 5. We explore several future areas of research that can expand on these ideas and address limitations of our models, and discuss some preliminary results in this research program.

Chapter 2 has been accepted in The Journal of Mathematical Biology, chapter 3 has been accepted in The Journal of Theoretical Biology, and chapter 4 is in preparation for submission to Physical Review E.

Chapter 2

Homophilic replicator equations¹

B. MORSKY, R. CRESSMAN, & C. T. BAUCH

2.1 Abstract

Tags are conspicuous attributes of organisms that affect the behaviour of other organisms toward the holder, and have previously been used to explore group formation and altruism. Homophilic imitation, a form of tag-based selection, occurs when organisms imitate those with similar tags. Here we further explore the use of tag-based selection by developing homophilic replicator equations to model homophilic imitation dynamics. We assume that replicators have both tags (sometimes called traits) and strategies. Fitnesses are determined by the strategy profile of the population, and imitation is based upon the strategy profile, fitness differences, and similarity in tag space. We show the characteristics of resulting fixed manifolds and conditions for stability. We discuss the phenomenon of coat-tailing (where tags associated with successful strategies increase in abundance, even though the tags are not inherently beneficial) and its implications for population diversity. We extend our model to incorporate recurrent mutations and invasions to explore their implications upon tag and strategy diversity. We find that homophilic imitation based upon tags significantly affects the diversity of the population, although not the ESS. We classify two different types of invasion scenarios by the strategy and tag compositions of the invaders and invaded. In one scenario, we find that novel tags introduced by invaders become more readily established with homophilic imitation than without it. In the other, diversity decreases. Lastly, we find a negative correlation between homophily and the rate of convergence.

¹ Accepted in The Journal of Mathematical Biology.

2.2 Introduction

Social norms, which include behaviours, dress, and other cultural aspects, are an important part of group identity and may serve to form and enforce human social groups and impact diversity through such mechanisms as homophily, where individuals are more likely to imitate those who are similar (Centola et al. (2007); Ehrlich and Levin (2005)), and conformity (van de Waal et al. (2013)). Many of these norms have no discernible benefit in that they do not ostensibly increase the payoffs of the individuals. However, they may be "greenbeards" (Hamilton (1964a,b); Dawkins (1976); Laird (2011); Lehmann and Perrin (2002); Queller (2011)). A greenbeard is a noticeable tag that induces beneficial treatment toward others that have that tag, and it is an important concept in the exploration of group dynamics and norms. Such tag-based effects have been used to explore altruism and the questions of how cooperation can emerge among selfish individuals (Antal et al. (2009); Fehr and Fischbacher (2002, 2003, 2004); Jansen and Van Baalen (2006); Lehmann and Perrin (2002); McNamara et al. (2008); Riolo et al. (2001); Traulsen and Schuster (2003)); how social groups are formed and evolve over time (Bernhard et al. (2006); Durrett and Levin (2005)); and the impact of behaviours and norms on groups (Bendor and Swistak (2001); Helbing and Johansson (2010); Hruschka and Henrich (2006); Rimal et al. (2005); Rimal and Real (2003)).

Previous agent-based models of imitation dynamics have exhibited group formation (Durrett and Levin (2005); Riolo et al. (2001); Tanimoto (2007); Traulsen and Schuster (2003)). With homophily and an affiliation between strategies and tags, groups can emerge without punishment or reward schemes (Durrett and Levin (2005)). With tags, cooperation can occur without reciprocity, when repeated interactions are rare, reputations are not established, memory is not required, and tags are arbitrary (Riolo et al. (2001); Antal et al. (2009); Tanimoto (2007); Traulsen and Schuster (2003)). Further studies that have explored tag systems have uncovered group dynamics with agent-based modelling (Jansen and Van Baalen (2006); Traulsen and Nowak (2007)). However, these previous models assume mechanisms in addition to homophily.

Our objective has been to explore tag dynamics with only homophilic imitation in order to understand the influence of strategies on tag structure and population diversity. We begin by assuming that individuals have tags and strategies, where the tags do not affect their payoffs, but strategies do. Further, individuals imitate others by comparing fitnesses. However, this imitation rate is lower if they share few tags than if they share many. We discuss fixed points and stability in our equations, and the phenomenon of coat-tailing (where a tag increases in abundance because it is associated with a successful strategy) and its implications for diversity.

2.2.1 Replicator equations

Replicator equations model the change in frequencies of replicators, asexual players that reproduce exact copies of themselves, in a population (Taylor and Jonker (1978)). These replicators each have a strategy that determines their payoff when playing other strategies. Fitness is a function of these payoffs and the composition of the entire population, and determines the reproduction rate. Let $p_i \in [0, 1]$ be the population density of replicator type $i$, where $\sum_{i=1}^{n} p_i = 1$, and let $f_i(\mathbf{p})$ be the fitness of type $i$ playing against the entire population of size $n$, $\mathbf{p} = [p_1, p_2, \ldots, p_n]^T$.

The fitness of each $p_i$ is derived from the payoffs received from interacting with the entire population weighted by the size of each proportion. Letting $a_{ij}$ be the payoff group $i$ receives from interacting with group $j$ and $A$ the matrix composed of these values, we have:

\[
f_i(\mathbf{p}) = \sum_{j=1}^{n} a_{ij} p_j = (A\mathbf{p})_i, \tag{2.1}
\]

where $(A\mathbf{p})_i$ is the $i$th row of $A\mathbf{p}$.

We define the change of $p_i$ as $p_i$ multiplied by the difference of $f_i(\mathbf{p})$ and the average fitness of the population. Mathematically, this is:

\[
\dot{p}_i = p_i\left(f_i(\mathbf{p}) - \sum_{j=1}^{n} p_j f_j(\mathbf{p})\right) = p_i\left(\sum_{j=1}^{n} p_j\left((A\mathbf{p})_i - (A\mathbf{p})_j\right)\right). \tag{2.2}
\]

An evolutionary stable state (ESS) is a composition of the population that is stable under small perturbations. The condition for a fixed point, $\bar{\mathbf{p}}$, to be an ESS is:

\[
\exists\, \varepsilon > 0 \text{ such that } \forall \mathbf{p} \in B(\bar{\mathbf{p}}, \varepsilon),\quad \mathbf{p}^T A \mathbf{p} < \bar{\mathbf{p}}^T A \mathbf{p}. \tag{2.3}
\]

If a state in the replicator equations is an ESS, then it is an asymptotically stable rest point (Hofbauer and Sigmund, 1998; Weibull, 1997). More broadly, if $\bar{\mathbf{p}}$ is a Nash equilibrium of the game, then it is also a rest point (although not necessarily an ESS). A related concept is the evolutionary stable set (ESSet), which is a set of strategy profiles that have the same fitness and would be ESSes if the other members of the set did not exist.

Replicator equations may be interpreted as a selection mechanism for strategies played by imitators. Replicators change their strategies by comparing their fitnesses to those with different strategies and changing their strategies to those that result in higher payoffs (Hofbauer and Sigmund (1998); Weibull (1997)). We will refer to replicator equations (2.2) as SRE (standard replicator equations) throughout the remainder of this paper.

2.2.2 Games

We examine our model with respect to three games: the Snowdrift, Prisoner's Dilemma, and Stag Hunt (Axelrod and Hamilton (1981); Sugden (1986); Skyrms (2004)). Details for these games can be found in the appendix. Letting $1 > \kappa > 0$, and $\Pi_{SD}$ (2.4), $\Pi_{PD}$ (2.5), and $\Pi_{SH}$ (2.6) be the payoff matrices for the Snowdrift, Prisoner's Dilemma, and Stag Hunt games, respectively, we have:

\[
\Pi_{SD} = \begin{pmatrix} 1 - \kappa/2 & 1 - \kappa \\ 1 & 0 \end{pmatrix}, \tag{2.4}
\]
\[
\Pi_{PD} = \begin{pmatrix} 1 - \kappa & -\kappa \\ 1 & 0 \end{pmatrix}, \tag{2.5}
\]
\[
\Pi_{SH} = \begin{pmatrix} 1 - \kappa & -\kappa \\ 0 & 0 \end{pmatrix}. \tag{2.6}
\]

Letting $p_c$ be the density of cooperators and $p_d$ the density of defectors, the ESSes for the Snowdrift and Prisoner's Dilemma games are $(p_c, p_d) = ((1-\kappa)/(1-\kappa/2), \kappa/(2-\kappa))$ and $(0, 1)$, respectively. Further, the fixed points $(1, 0)$ and $(0, 1)$ are unstable. The Stag Hunt game has two ESSes, $(1, 0)$ and $(0, 1)$, and an unstable interior fixed point, $(\kappa, 1-\kappa)$.

2.3 Methods

2.3.1 The Model

To model homophily, we assume that the population is partitioned into groups of replicators that have the same combination of $M$ tags. That is, a group $g_j$ is given by a tag vector $V_j = [v_1, v_2, \ldots, v_M]^T$, where each component $v_m$ represents a tag that can be chosen from an alphabet of size $\alpha_m$ (e.g. if there are two options per tag, we can represent each $v_m$ as 0 or 1). Thus, the number of possible groups is $N \equiv \prod_{m=1}^{M} \alpha_m$. Each replicator also has a strategy, $s_i$, from the set of strategies, $S$. We can thus compartmentalize the population into $|S|N$ subgroups and let $p_{ij}$ be the proportion (or frequency) of the population that use strategy $i$ and have tag vector $j$ (where $j \in \{1, \ldots, N\}$).

Let $\Pi$ be the $|S| \times |S|$ payoff matrix with entries $\pi_{ik}$ for a game with strategy set $S$ (i.e. $\pi_{ik}$ is the payoff received by a player playing strategy $i$ interacting with a player playing strategy $k$). The payoff matrix for our model, $A$, is thus an $(N|S|) \times (N|S|)$ matrix composed of $N^2$ non-overlapping $\Pi$ submatrices. The subpopulations are ordered so that the first $|S|$ components correspond to the strategies for the group with $j = 1$, etc.

Let $f_{ij}(\mathbf{p})$ be the fitness of replicators in subgroup $ij$. This fitness is derived from the expected payoff received from interacting with a random replicator in the entire population. Letting $a_{ij,kl}$ be the entries of $A$:

\[
f_{ij}(\mathbf{p}) = \sum_{k,l} a_{ij,kl}\, p_{kl} = (A\mathbf{p})_{ij}, \tag{2.7}
\]

where $(A\mathbf{p})_{ij}$ is the $ij$th row of $A\mathbf{p}$.

Let $r_{ij,kl}$ be the tag relatedness, i.e. the closeness of subgroups $ij$ and $kl$ in tag space. The more alike the tags are, the larger $r_{ij,kl}$, with a maximum, $r_0$, if they have identical tags. We also require symmetry. Thus, the axioms for the tag closeness measure are:

\[
0 \leq r_{ij,kl} \leq r_0, \tag{2.8}
\]
\[
r_{ij,kl} = r_0 \iff j = l, \tag{2.9}
\]
\[
r_{ij,kl} = r_{kl,ij}. \tag{2.10}
\]

Let $R = \{r_{ij,kl}\}$ be the relatedness matrix. We will assume that relatedness is independent of strategy. Together with the assumption above that the payoff between two replicators is independent of their tag vectors, it follows that $r_{ij,kl} = r_{jl}$, $a_{ij,kl} = \pi_{ik}$, and $f_{ij}(\mathbf{p}) = f_i(\mathbf{p})$.
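Any measure satisfying axioms (2.8)-(2.10) will do. As one concrete illustration (an assumption made for exposition; the simulations in this chapter instead fix the $r_{jl}$ values directly), we can scale $r_0$ by the fraction of tag components two groups share:

```java
// Illustrative relatedness measure: r0 scaled by the fraction of matching tag
// components. It satisfies (2.8) boundedness, (2.9) r = r0 iff identical tags,
// and (2.10) symmetry.
public class TagRelatedness {
    static double relatedness(int[] tagsJ, int[] tagsL, double r0) {
        int matches = 0;
        for (int m = 0; m < tagsJ.length; m++) if (tagsJ[m] == tagsL[m]) matches++;
        return r0 * matches / tagsJ.length;
    }

    public static void main(String[] args) {
        int[] v1 = {0, 1, 1}, v2 = {0, 0, 1};
        System.out.println(relatedness(v1, v2, 0.5)); // 2 of 3 tags shared: ~0.333
        System.out.println(relatedness(v1, v1, 0.5)); // identical tags: r0 = 0.5
    }
}
```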

In our model, replicators change their strategy and tag by comparing their fitnesses to the fitnesses of others, scaling this difference by their relatedness. The flow of a replicator from compartment $ij$ to compartment $kl$ is the difference in their payoffs multiplied by the population density of each and their relatedness, i.e. $p_{ij} p_{kl} (f_i(\mathbf{p}) - f_k(\mathbf{p})) r_{jl}$. Explicitly, the homophilic replicator equations (HRE) are:

\[
\dot{p}_{ij} = \sum_{k,l} p_{ij} p_{kl} \left( f_i(\mathbf{p}) - f_k(\mathbf{p}) \right) r_{jl} = p_{ij} \left( \sum_{k,l} p_{kl} \left( (A\mathbf{p})_{ij} - (A\mathbf{p})_{kl} \right) r_{jl} \right). \tag{2.11}
\]

Clearly, if $r_{jl} = 1$ for every $j, l$, then we recover SRE.
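A direct sketch of the right-hand side of equation (2.11) (assumed array shapes and parameter values; illustrative rather than thesis code) makes the flow structure explicit:

```java
import java.util.Arrays;

// Illustrative evaluation of the HRE right-hand side (2.11): p[i][j] is the
// frequency of strategy i with tag j, f[i] the strategy fitnesses, and r[j][l]
// the tag relatedness matrix.
public class HreRhs {
    static double[][] rhs(double[][] p, double[] f, double[][] r) {
        int nS = p.length, nT = p[0].length;
        double[][] dp = new double[nS][nT];
        for (int i = 0; i < nS; i++)
            for (int j = 0; j < nT; j++) {
                double flow = 0.0;
                for (int k = 0; k < nS; k++)
                    for (int l = 0; l < nT; l++)
                        flow += p[k][l] * (f[i] - f[k]) * r[j][l];
                dp[i][j] = p[i][j] * flow;
            }
        return dp;
    }

    public static void main(String[] args) {
        double[][] p = {{0.4, 0.1}, {0.1, 0.4}};     // cooperators/defectors, tags 1-2
        double[] f = {0.6, 0.5};                     // hypothetical fitnesses f_c, f_d
        double[][] r = {{0.45, 0.05}, {0.05, 0.45}}; // default r_jj and r_jl values
        System.out.println(Arrays.deepToString(rhs(p, f, r)));
    }
}
```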

As a natural modification of this model, we explored a discrete version of HRE and observed qualitatively similar results in our simulations (results not shown).

2.3.2 Measures of diversity

There are several methods of measuring the diversity of a population. These methods include species richness, i.e. the number of species; Shannon entropy (Shannon (1948a,b)), the uncertainty in predicting the species of a randomly sampled organism from the ecosystem; the Simpson index (Simpson (1949)), $\sum_{i=1}^{n} p_{ij}^2$, the probability that two organisms belong to the same species; and the Berger-Parker index (Berger and Parker (1970)), $\max(p_{ij})$, the proportion belonging to the most abundant species (further discussion appears in Hill (1973); Jost (2006); Magurran (2013)). The importance of information theory to our model stems from the fact that replicator equations can be interpreted as the continuous Bayesian inference equation (Harper (2011)). We use Shannon entropy to explore the change in diversity due to invasions, particularly the phenomenon of coat-tailing.

We will sample individuals from the population and examine their characteristics. As we continue this process, we produce a list of characteristics, $\chi_0 \chi_1 \chi_2 \ldots$, derived from a finite set. In our case, this finite set is the set of all combinations of tags and strategies. We can assign a probability, $p_{ij}$, of picking an individual with strategy $i$ and tag $j$. Since these probabilities are independent of individuals picked before, we can observe the average amount of uncertainty over many sampling trials. This uncertainty is a measure of the diversity of the population. The maximum diversity occurs when we are equally likely to find each possible characteristic in the population. The minimum diversity occurs when the population is composed of only one type and we are thus certain of finding it every time we sample the population. Letting $H_{tot}$ be this measure of diversity, the Shannon entropy, we have:

\[
H_{tot} = -\sum_{i,j} p_{ij} \log(p_{ij}). \tag{2.12}
\]

$H_{tot}$ is the total diversity of the population, the diversity when we segment the population into every possible combination of tag and strategy. We are also interested in the tag diversity, $H_{tag}$ (2.13):

\[
H_{tag} = -\sum_{j} \left( \sum_{i} p_{ij} \right) \log\left( \sum_{i} p_{ij} \right). \tag{2.13}
\]

We will study these entropies for HRE relative to SRE. Let $H_{tot,HRE}$ be the total entropy for HRE, and $H_{tot,SRE}$ be the total entropy for SRE. The relative total entropy is thus $h_{tot} = H_{tot,HRE}/H_{tot,SRE}$. Similarly, the relative tag entropy is $h_{tag} = H_{tag,HRE}/H_{tag,SRE}$. We ignore the strategy entropy in this study, since HRE do not alter it.
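The entropies (2.12) and (2.13) are straightforward to compute from the compartment frequencies. A small sketch (illustrative; the natural logarithm is an assumed choice of base, which only rescales the entropies):

```java
// Illustrative computation of the total entropy (2.12) and the marginal tag
// entropy (2.13) from the compartment frequencies p[i][j].
public class EntropyMeasures {
    static double hTot(double[][] p) {
        double h = 0.0;
        for (double[] row : p)
            for (double pij : row)
                if (pij > 0) h -= pij * Math.log(pij); // eq. (2.12)
        return h;
    }

    static double hTag(double[][] p) {
        double h = 0.0;
        for (int j = 0; j < p[0].length; j++) {
            double pj = 0.0;
            for (double[] row : p) pj += row[j];       // marginal tag frequency
            if (pj > 0) h -= pj * Math.log(pj);        // eq. (2.13)
        }
        return h;
    }

    public static void main(String[] args) {
        double[][] p = {{0.4, 0.1}, {0.1, 0.4}}; // p_c1, p_c2; p_d1, p_d2
        System.out.printf("H_tot = %.4f, H_tag = %.4f%n", hTot(p), hTag(p));
    }
}
```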

2.3.3 Simulations

In our simulations, we explored a two-tag, two-strategy HRE. Every replicator has either tag-1 or tag-2. Further, we set $r_{jl} = 0.5 - r_{jj}$. The two strategies are cooperate, $s_c$, and defect, $s_d$. Thus, we divide up the population into four subgroups, $p_{c1}$, $p_{c2}$, $p_{d1}$, and $p_{d2}$. The default parameter values are $\kappa = 0.5$, $r_{jj} = 0.45$, and $r_{jl} = 0.05$.

2.3.3.1 Recurrent mutations

As we will see, HRE for the two-tag Snowdrift game have an interior globally asymptotically stable plane (g.a.s. plane) that is determined by the strategy densities alone, $p_c = p_{c1} + p_{c2}$ and $p_d = p_{d1} + p_{d2}$. To observe the effects of recurrent mutations on our model, we ran simulations in MATLAB that invoked random recurrent mutations within the population. Specifically, we initialized the population on the g.a.s. plane. We then perturbed from these points 1% in a randomly determined direction in the population density space, which placed them on or near the plane. We then calculated the population's new position on the plane using MATLAB's ode45 after it had converged. We then perturbed it again. We continued in this manner to observe the movement of the population across the plane. We used the default parameter values and ran 500 realizations with 500 perturbations for each initial position on the plane.
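For concreteness, the sketch below mirrors this perturb-and-relax loop (the thesis used MATLAB's ode45; here a fixed-step Euler integration of (2.11) for the two-tag Snowdrift game stands in, and all numerical choices are illustrative assumptions):

```java
import java.util.Arrays;
import java.util.Random;

// Illustrative perturb-and-relax walk on the g.a.s. plane of the two-tag
// Snowdrift HRE (kappa = 0.5, r_jj = 0.45, r_jl = 0.05).
// State order: p_c1, p_c2, p_d1, p_d2.
public class PerturbationWalk {
    static final double[][] R = {{0.45, 0.05}, {0.05, 0.45}};

    public static void main(String[] args) {
        Random rng = new Random(1);
        double[] p = {1.0 / 3, 1.0 / 3, 1.0 / 6, 1.0 / 6}; // on the plane p_c = 2/3
        for (int round = 0; round < 500; round++) {
            double norm = 0.0;
            for (int i = 0; i < 4; i++) {
                p[i] = Math.max(0.0, p[i] * (1 + 0.01 * (2 * rng.nextDouble() - 1)));
                norm += p[i];
            }
            for (int i = 0; i < 4; i++) p[i] /= norm; // renormalize onto the simplex
            relax(p);                                 // settle back to the plane
        }
        System.out.println(Arrays.toString(p));
    }

    // Fixed-step Euler integration of equation (2.11) until roughly converged.
    static void relax(double[] p) {
        for (int t = 0; t < 20000; t++) {
            double pc = p[0] + p[1], pd = p[2] + p[3];
            double[] f = {0.75 * pc + 0.5 * pd, pc}; // Snowdrift fitnesses, kappa = 0.5
            double[] dp = new double[4];
            for (int i = 0; i < 4; i++) {
                for (int k = 0; k < 4; k++)
                    dp[i] += p[k] * (f[i / 2] - f[k / 2]) * R[i % 2][k % 2];
                dp[i] *= p[i];
            }
            for (int i = 0; i < 4; i++) p[i] += 0.01 * dp[i];
        }
    }
}
```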

2.3.3.2 Further simulations

We explored the two-tag Snowdrift, Prisoner's Dilemma, and Stag Hunt games for HRE systematically by varying $r_{jj}$ and $\kappa$. Further, we examined the effects of two invasion scenarios. Invasion scenario 1 models the introduction of a novel tag: a population of tag-1 replicators at the ESS is invaded by a smaller group of tag-2 replicators, half cooperators and half defectors. In invasion scenario 2, a novel successful strategy is introduced into the population. For the Snowdrift and Prisoner's Dilemma, we have an invaded population of cooperators. Since a population of cooperators is resistant to invasion by a small group of defectors in the Stag Hunt, we set the invaded population at the unstable internal fixed point. We varied the distribution of tags in the invaded population. The invaded population comprises 90% of the replicators, and the remaining 10% are an invading group of defectors, equally split between tags 1 and 2.

Finally, we compared the rates of convergence for the Stag Hunt game near the unstable interior equilibrium for various values of rjj. We consider a system converged when it is within 1% of the ESS.

2.4 Results

2.4.1 Fixed points and stability

The following theorems show that the fixed points of HRE are solely dependent on the payoff matrix $A$, and more specifically, the submatrix $\Pi$. If there are interior fixed points, they form a convex polytope with $N^{|S'|}$ vertices, where $S' = \{s_i \mid p_{s_ij} > 0\}$. Further, if the payoff matrix $\Pi$ has an interior ESS, then the corresponding ESSet in the $(mn) \times (mn)$ game is globally asymptotically stable (g.a.s.). This theorem is the tag-structured generalization of the corresponding theorem for SRE.

Theorem 2.4.1. Suppose $p_{ij} > 0$ for all $ij$ and $r_{jl} > 0$ for all $j, l$. Then $\mathbf{p}$ is a rest point of Equation (2.11) if and only if $(A\mathbf{p})_{ij} = (A\mathbf{p})_{kl}$ for all $ij, kl$.

Proof. (a) If $(A\mathbf{p})_{ij} = (A\mathbf{p})_{kl}$ for all $ij, kl$, clearly $\dot{p}_{ij} = 0$ for all $ij$ (i.e. $\mathbf{p}$ is a rest point).

(b) For the converse, assume that $(A\mathbf{p})_{ij} \neq (A\mathbf{p})_{kl}$ for some $ij, kl$. Then, there are some $i_0 j_0$ and $k_0 l_0$ such that $(A\mathbf{p})_{i_0j_0} \leq (A\mathbf{p})_{kl}$ for all $kl$ and $(A\mathbf{p})_{i_0j_0} < (A\mathbf{p})_{k_0l_0}$. Thus, $\dot{p}_{i_0j_0} \leq p_{i_0j_0} p_{k_0l_0} \left( (A\mathbf{p})_{i_0j_0} - (A\mathbf{p})_{k_0l_0} \right) r_{j_0l_0} < 0$ at $\mathbf{p}$. That is, $\mathbf{p}$ is not a rest point. This completes the proof.

Theorem 2.4.2. Suppose there are $m = 2$ tags and $n$ strategies. If the payoff matrix $A$ has an interior ESS and each entry in the matrix $R$ is positive, then the corresponding ESSet in the $(mn) \times (mn)$ game is globally asymptotically stable.

Proof. $a_{ij,k'l'} = \pi_{ik'}$ since payoffs depend only on strategies. Thus

\begin{align*}
\dot{p}_{ij} &= p_{ij} \sum_{k,l} p_{kl} \Bigl( \sum_{k',l'} (\pi_{ik'} - \pi_{kk'})\, p_{k'l'} \Bigr) r_{jl} \\
&= p_{ij} \Bigl( \sum_{k} r_{j1} p_{k1} \sum_{k',l'} (\pi_{ik'} - \pi_{kk'})\, p_{k'l'} + \sum_{k} r_{j2} p_{k2} \sum_{k',l'} (\pi_{ik'} - \pi_{kk'})\, p_{k'l'} \Bigr) \\
&= p_{ij} \Bigl( \sum_{k} r_{j1} p_{k1} (e_i - e_k)^T \Pi (p_1 + p_2) + \sum_{k} r_{j2} p_{k2} (e_i - e_k)^T \Pi (p_1 + p_2) \Bigr),
\end{align*}

where $p_1 = (p_{11}, p_{21}, \ldots, p_{n1})$, $p_2 = (p_{12}, p_{22}, \ldots, p_{n2})$, and $e_i$ is the $n$-dimensional unit vector with 1 in the $i$th entry and 0 everywhere else. Suppose $p^* = (p_1^*, \ldots, p_n^*)$ is the interior ESS of the payoff matrix $\Pi$ and define $p_{i1}^* = p_{i2}^* = p_i^*$ for all $i = 1, \ldots, n$. Consider

\[
V(p) \equiv \prod_{ij} p_{ij}^{\,p_{ij}^*}. \tag{2.14}
\]

Then, for $p = (p_{11}, p_{21}, \ldots, p_{n1}, p_{12}, p_{22}, \ldots, p_{n2})$ in the interior of the tag-strategy space, $V(p) > 0$ and

\begin{align*}
\frac{\dot{V}(p)}{V(p)} &= \sum_{i,j} p_{ij}^* \frac{\dot{p}_{ij}}{p_{ij}} \\
&= \sum_{i} p_{i1}^* \sum_{k} (r_{11} p_{k1} + r_{12} p_{k2}) (e_i - e_k)^T \Pi (p_1 + p_2) \\
&\qquad + \sum_{i} p_{i2}^* \sum_{k} (r_{21} p_{k1} + r_{22} p_{k2}) (e_i - e_k)^T \Pi (p_1 + p_2) \\
&= \sum_{i} p_i^* \sum_{k} \bigl( (r_{11} + r_{21}) p_{k1} + (r_{12} + r_{22}) p_{k2} \bigr) (e_i - e_k)^T \Pi (p_1 + p_2) \\
&= r \sum_{i} p_i^* \sum_{k} (p_{k1} + p_{k2}) (e_i - e_k)^T \Pi (p_1 + p_2),
\end{align*}

where $r_{12} + r_{11} = r_{21} + r_{22} \equiv r$. Letting $p_1 + p_2 = p'$, we have

\begin{align*}
\frac{\dot{V}(p)}{V(p)} &= r \sum_{i} p_i^* \sum_{k} (p_{k1} + p_{k2}) \bigl( e_i^T \Pi (p_1 + p_2) - e_k^T \Pi (p_1 + p_2) \bigr) \\
&= r \bigl( p^* \Pi (p_1 + p_2) - (p_1 + p_2) \Pi (p_1 + p_2) \bigr) \\
&= r (p^* \Pi p' - p' \Pi p') \\
&> 0,
\end{align*}

since $p^* \Pi p' > p' \Pi p'$ unless $p_1 + p_2 = p^*$, because $p^*$ is an ESS. By the standard Lyapunov argument, $p(t)$ evolves to the set $E \equiv \{p \mid p_1 + p_2 = p^*\}$. This is an ESSet of the payoff matrix $\Pi$ given by those $p$ that satisfy $\sum_j p_{ij} = p_i^*$ for all $i = 1, \ldots, n$.

Remark: For $m > 2$ tags, the proof can be generalized if $\sum_{l=1}^{m} r_{jl} = \sum_{l=1}^{m} r_{lj}$ is independent of $j$ (i.e. the "total" homophilic degree is the same for all tags). That is, Theorem 2.4.2 remains true for any number of tags in this special case.

2.4.2 The two-tag Snowdrift game with recurrent mutations

Since the ESS for the Snowdrift game is interior, we have a g.a.s. interior manifold for HRE. Although we proved this property, we were motivated to explore the trajectories along the stable plane due to such perturbations, since perturbations are always present in real populations. Our computations show that the density of points on the plane spread out to become nearly uniform as the number of perturbations increases. The mean of the walks along the plane at one of the initial points was that initial point. Further, the standard deviation did not vary significantly among initial points. Thus, recurrent mutations result in a random walk along the fixed plane without directional bias. Similarly, for the Prisoner's Dilemma and Stag Hunt games, we found that random recurrent mutations resulted in random walks on the ESSets (i.e. $p_{d1} + p_{d2} = 1$ for the Prisoner's Dilemma; and $p_{c1} + p_{c2} = 1$ and $p_{d1} + p_{d2} = 1$ for the Stag Hunt).

2.4.3 Coat-tailing and diversity

2.4.3.1 Invasion scenario 1

In figure 2.1, we varied $r_{jj}$ from 0.25 to 0.5 (i.e. from less to more homophily) and observed the relative total and tag entropy at equilibrium for invasion scenario 1, a population at equilibrium invaded by a novel tag. The initial conditions for the Snowdrift game were $(p_{c1}, p_{c2}, p_{d1}, p_{d2}) = (0.9((2 - 2\kappa)/(2 - \kappa)), 0.05, 0.9(\kappa/(2 - \kappa)), 0.05)$, i.e. a tag-1 population at the ESS is invaded by an equal mix of tag-2 cooperators and defectors. Similarly, the initial conditions for the Prisoner's Dilemma and Stag Hunt were $(0, 0.05, 0.9, 0.05)$. We observe higher total and tag entropy for HRE than for SRE due to the increase in frequency of tag-2 replicators. This effect was greater the more homophilic the replicators were. We omit results for total relative diversity, since they were similar. The effect is minuscule for the Snowdrift game. The increase in diversity is due to the increase in tag-2 replicators, which coat-tailed upon the successful strategy. These results show how a new tag may become established in a population once introduced.

In figure 2.2, we varied the parameter $\kappa$ from 0.05 to 0.95 with $r_{jj} = 0.45$ and $r_{jl} = 0.05$ for HRE, and $r_{jj} = r_{jl} = 0.25$ for SRE. Note that the results for the Prisoner's Dilemma remain unchanged for various values of $\kappa$. This is because $\dot{p}_{c1} = -\kappa p_{c1}(p_{d1} r_{11} + p_{d2} r_{12})$ (and similarly for $p_{c2}$, $p_{d1}$, and $p_{d2}$); thus, $\kappa$ will only affect the rate of convergence (explored in section 2.4.4). The results for the Stag Hunt are identical to those of the Prisoner's Dilemma, except for $\kappa = 0.05$. For this value, the Stag Hunt game is at the unstable equilibrium, and is represented by the cross in the figure. Both tag and total diversity are relatively higher for HRE vs SRE for the Prisoner's Dilemma and the Stag Hunt.

For the Snowdrift game, at $\kappa = 2/3$, the system will be at the equilibrium point, and thus, clearly, relative diversity will be 1. For $\kappa > 2/3$, $f_c(\mathbf{p}) < f_d(\mathbf{p})$, and for $\kappa < 2/3$, $f_c(\mathbf{p}) > f_d(\mathbf{p})$. In figure 2.2a, $h_{tag} \geq 1$. This result is strongest for low and high $\kappa$.

[Figure 2.1: panels (a) relative entropies vs. $r_{jj}$ for the Snowdrift and (b) for the Prisoner's Dilemma/Stag Hunt.]

Figure 2.1: Relative entropy vs. $r_{jj}$. For the Snowdrift game (a), relative total and tag entropies. Note that the results for the Stag Hunt are identical to those of the Prisoner's Dilemma (b). The invaded population frequencies are at the stable equilibria and all tag-1. The frequency of the invaders is $(p_{c2}, p_{d2}) = (0.05, 0.05)$. $\kappa = 0.5$.

However, in figure 2.2b, we see that $h_{tot} \approx 1$ for most parameter values of $\kappa$. The exception to this observation occurs near $\kappa \approx 0.1053$. At this value, $p_c = p_1$ and $p_d = p_2$, where $p_1$ and $p_2$ are the frequencies of tags 1 and 2, respectively. For $\kappa$ less than this threshold, $p_c > p_1$ and $p_d < p_2$; and for $\kappa$ greater than this threshold, $p_c < p_1$ and $p_d > p_2$.

2.4.3.2 Invasion scenario 2

In figure 2.3, we see the effects of various degrees of tag similarity upon diversity at equilibrium under invasion scenario 2 for different initial conditions of two-tag HRE. We examine this invasion for initial tag densities of the invaded population, $(p_1, p_2)$, of $(0.45, 0.45)$, $(0.6, 0.3)$, $(0.7, 0.2)$, and $(0.8, 0.1)$. The strategy profile of the invaded population was at the unstable equilibrium. The invaders were defectors split equally between tags 1 and 2. We set $\kappa = 0.5$ and varied $r_{jj}$ from 0.25 to 0.5. We only display the relative tag diversity for the Prisoner's Dilemma and Stag Hunt, since we have boundary equilibria and thus $H_{tot} = H_{tag}$ at equilibrium. When the distribution of tags to strategies was equal at the initial conditions, there was no change in relative diversity. However, for unequal initial distributions of tags to strategies, relative diversity is lower at equilibrium. Further, the greater this inequality, the lower the final relative diversity; tags are coat-tailing on the success of strategies with which they were initially associated. As we increase homophily (increase $r_{jj}$), we observe decreased diversity for the Prisoner's Dilemma and Stag Hunt. However, for the Prisoner's Dilemma and Stag Hunt, we observe a minimum diversity in the interior of $r_{jj}$'s domain.

However, for the Prisoner’s Dilemma and Stag Hunt, we observe a minimum diversity in 29

(a)

htag vs κ, scenario 1 1.3 SD SD 1.25 PD 1.2

1.15 tag h 1.1

1.05

1

0.95 0 0.2 0.4 0.6 0.8 1 κ (b)

htot vs κ, scenario 1 1.3 SD SD 1.25 PD 1.2

1.15 tot h 1.1

1.05

1

0.95 0 0.2 0.4 0.6 0.8 1 κ Figure 2.2: Relative entropies vs. κ for the Snowdrift and the Prisoner’s Dilemma. Note that the results for the Stag Hunt are identical to those of the Prisoner’s Dilemma, except for κ = 0.05. For this value, the Stag Hunt game is at equilibrium, and is represented by the cross. The invaded population frequencies are pd1 = 0.9 for the Prisoner’s Dilemma and Stag Hunt, and (pc1, pd1) = (0.9((2 − 2k)/(2 − k)), 0.9(k/(2 − k))) for the Snowdrift. The frequencies of the invaders is (pc2, pd2) = (0.05, 0.05). rjj = 0.45, rjl = 0.05. 30

[Figure 2.3, panels (a) $h_{tag}$ and (b) $h_{tot}$ vs. $r_{jj}$ for the Snowdrift, invasion scenario 2, with invaded-population tag densities $p_1 = 0.45, 0.60, 0.70, 0.80$; the caption follows the remaining panels below.]

In figure 2.4, we observe the effects of varying $\kappa$ from 0.05 to 0.95 upon relative diversity. We omit the results for the Prisoner's Dilemma, since relative diversity remains unchanged (see section 2.4.3.1). Again, we notice lower relative entropy except for low $\kappa$ in the Snowdrift game. At $\kappa \approx 0.18$, the initial conditions are at the stable equilibrium, and hence relative diversity is 1. For $\kappa$ below this value, we are on the other side of the equilibrium, and hence an unsuccessful strategy is invading.

2.4.4 Rates of convergence

Here we depict the results for the Stag Hunt game with initial conditions perturbed from the unstable equilibrium. Letting $\kappa = 0.5$, the unstable equilibrium is at $p_{c1} + p_{c2} = p_{d1} + p_{d2} = 0.5$. The initial conditions we used are $(p_{c1}, p_{c2}, p_{d1}, p_{d2}) = (0.4, 0.05, 0.05, 0.5)$. Figure 2.5a shows the time series for $r_{jj} = 0.5, 0.45, 0.35$, and $0.25$. In figure 2.5b, we show the time to convergence for varying $r_{jj}$ relative to SRE. Although the population will always converge to all defectors in these realizations, the rates of convergence for HRE can differ significantly from SRE, and are negatively correlated with the degree of homophily.

[Figure 2.3, panels (c) and (d): $h_{tag}$ vs. $r_{jj}$ for the Prisoner's Dilemma and Stag Hunt, invasion scenario 2.]

Figure 2.3: Relative entropy vs. $r_{jj}$ for the Snowdrift (a,b), Prisoner's Dilemma (c), and Stag Hunt (d). Initial conditions were at the unstable equilibrium (all cooperators for the Snowdrift) invaded by $(p_{d1}, p_{d2}) = (0.05, 0.05)$ (invasion scenario 2). $\kappa = 0.5$.

[Figure 2.4, panels (a) $h_{tag}$ and (b) $h_{tot}$ vs. $\kappa$ for the Snowdrift, and (c) $h_{tag}$ vs. $\kappa$ for the Stag Hunt, invasion scenario 2.]

Figure 2.4: Relative entropy vs. $\kappa$ for the Snowdrift (a,b) and Stag Hunt (c). Initial conditions were at the unstable equilibrium (all cooperators for the Snowdrift) invaded by $(p_{d1}, p_{d2}) = (0.05, 0.05)$ (invasion scenario 2). $r_{jj} = 0.45$, $r_{jl} = 0.05$.

[Figure 2.5: panels (a) time series of $p_c$ for several $r_{jj}$ and (b) relative convergence time vs. $r_{jj}$.]

Figure 2.5: (a) Stag Hunt time series with $r_{jj} = 0.5, 0.45, 0.35$, and $0.25$. (b) Relative convergence time for the Stag Hunt vs. $r_{jj}$. Initial conditions are $(p_{c1}, p_{c2}, p_{d1}, p_{d2}) = (0.4, 0.05, 0.05, 0.5)$, and $\kappa = 0.5$.

2.5 Discussion

We developed homophilic replicator equations and have shown that their fixed points and the stability of these points depend solely on the game and thus strategies involved, not tag structure. With perturbations, a population may wander through a stable manifold with no preference to evolve distinct affiliations between tags and strategies. However, tag structure does have a significant impact on the diversity of the population.

Our model demonstrates the phenomenon of coat-tailing, whereby tags associated with successful strategies increase in abundance, even though the tags themselves do not directly confer greater success. Diversity decreases as we move along a trajectory from an initial position toward the g.a.s. manifold. Thus, a tag can increase in frequency due to an initially higher density among those replicators employing a particular strategy. That is, the tag propagates through the population coat-tailing on the growth of successful strategies.

Tag structure changes the rate of convergence to an ESS, for the same average imitation rate between HRE and SRE. In highly homophilic and tag-strategy correlated populations, replicators with unfit strategies may be buffered from changing their strategies due to a phobia of replicators with different tags. However, since the strategy profile of the entire population determines fitness, in the long run the system will converge to the ESS. This phenomenon was depicted in the Stag Hunt scenario we observed. The tag-1 subpopulation's equilibrium was all cooperators, and the tag-2 subpopulation's equilibrium was all defectors. Since we were slightly off of the unstable equilibrium by having too many defectors, the tag-1 replicators would eventually be defectors. However, movement to the ESS was exceptionally slow for highly homophilic systems.

Technology has greatly increased communication (Rogers (2010)), which can increase cultural homogeneity (Greig (2002)). We observed this phenomenon in scenario 2. Increased diversity has been shown in spatial models that employ homophilic imitation (Axelrod (1997)). Our results are surprising, since they show homophily increasing diversity without spatial structure. This phenomenon occurs via two mechanisms. The first mechanism is through invasion scenario 1: novel tags introduced into the population result in increased diversity under the appropriate conditions. The second is by increasing convergence times, which keep the population away from the boundary equilibria where the diversity is lowest (as in the Stag Hunt scenario). This observation is important, since populations are typically off-equilibrium in the real world (Hastings (2004)).

HRE and modifications thereof may be applied to human social groups, phenotypes, or species. Further research could explore applications of HRE to group formation. Invasions of mutants with a new strategy and tag composition can increase diversity (scenario 1) and thus provide a tag pool from which coat-tailing can associate strategies and traits under other invasions (scenario 2). Repeated applications of these invasion scenarios may then serve as a first step to group formation, as the population begins to cluster around particular configurations of strategies and tags. The diversity dynamics uncovered in this paper suggest that this work would be fruitful.

2.6 Appendix: Games

The following background describes the Snowdrift, Prisoner’s Dilemma, and Stag Hunt games. For simplicity in the work above, we alter payoff matrices so that the games are in terms of one parameter, κ = c/b.

Imagine that two people, who will be the players in this game, are driving along a highway during a blizzard. They careen off of the road into a snowdrift. Snow must be shovelled away from the vehicle so that it may continue its journey. Each player has the choice to step outside and shovel away the snow or remain in the vehicle. If at least one of the players cleans away the snow, then they both can reach home, thereby receiving the benefit, $b$. If no one does, they receive a payoff of 0. Snow shovelling is exhausting; therefore, there is a cost, $c$, to shovelling. This energy cost decreases the overall payoff. If both players shovel, then they split the cost; if only one shovels, then that player suffers the entire energy cost. There are three Nash equilibria: (Shovel, Don't Shovel), (Don't Shovel, Shovel), and one mixed strategy. However, the ESS for this game is the internal equilibrium. The following payoff matrix represents this game:

\[
\Pi_{SD} = \begin{pmatrix} b - c/2 & b - c \\ b & 0 \end{pmatrix}, \tag{2.15}
\]

where $b > c > 0$.

In the Prisoner’s Dilemma, payoff matrix 2.16, we have two players that choose from a strategy set of cooperate and defect. Let b > c > 0. If both cooperate, they receive the 39 socially optimal payoff, b − c. If one cooperates and the other defects, the cooperator earns

−c, and the defector earns b, the temptation. If both defect, they each receive nothing. The socially optimal outcome of cooperation is unstable. At the ESS, the population is entirely composed of defectors.   b − c −c   ΠPD =   (2.16)   b 0

Consider two hunters, the players, who have the choice between hunting a stag or a hare. Cooperation is required to successfully harvest a stag and receive the benefit, $b$; if only one player hunts stag, then that player will fail at catching anything. Hunting stag requires an investment of $c$ regardless of the success. A player who hunts hare will receive a payoff of 0 regardless of the strategy decision of the other player. Again, we have that $b > c > 0$. The payoff matrix for this game is:

\[
\Pi_{SH} = \begin{pmatrix} b - c & -c \\ 0 & 0 \end{pmatrix} \tag{2.17}
\]

There are two ESSes for this game: all stag hunting or all hare hunting. The internal equilibrium is unstable.

Chapter 3

Truncation selection and payoff distributions applied to the replicator equation¹

B. MORSKY & C. T. BAUCH

3.1 Abstract

The replicator equation has been frequently used in the theoretical literature to explain a diverse array of biological phenomena. However, it makes several simplifying assumptions, namely: complete mixing, an infinite population, asexual reproduction, proportional selection, and mean payoffs. Here, we relax the conditions of mean payoffs and proportional selection by incorporating payoff distributions and truncation selection into extensions of the replicator equation and agent-based models. In truncation selection, replicators with fitnesses above a threshold survive. The reproduction rate is equal for all survivors and is sufficient to replace the replicators that did not survive. We distinguish between two types of truncation: independent and dependent with respect to the fitness threshold. If the payoff variances from all strategy pairings are the same, then we recover the replicator equation from the independent truncation equation. However, if all payoff variances are not equal, then any boundary fixed point can be made stable (or unstable) if the fitness threshold is chosen appropriately. We observed transient and complex dynamics in our models, which are not observed in replicator equations incorporating the same games. We conclude that the assumptions of mean payoffs and proportional selection in the replicator equation significantly impact replicator dynamics.

¹ Accepted in The Journal of Theoretical Biology.

3.2 Introduction

The range of applications of evolutionary dynamics is great, spanning fields from animal behaviour (Dugatkin and Reeve (1998)) to economics (Dopfer (2005); Friedman (1991, 1998)). Introduced in Maynard Smith (1982), evolutionary game theory has bloomed in the last few decades as a means of explaining biological phenomena (Hammerstein et al. (1994); Hofbauer and Sigmund (2003); Nowak and Sigmund (2004)). Examples of evolutionary dynamics include: Brown-von Neumann-Nash, imitation, best response, and replicator dynamics (Hofbauer and Sigmund (2003)). In particular, replicator dynamics is immensely important, with applications that span such fields as genetics, ecology, chemistry, and sociology (Schuster and Sigmund (1983)).

Replicators are the focus of replicator dynamics. They are agents that can replicate themselves with, potentially, mutations. The evolutionary dynamics determine the change in frequencies of these agents over time. Commonly, we envisage this process as selection of replicators for survival and reproduction. 'Fit' replicators survive to reproduce, which determines the replicator frequencies in the next generation. The replicator equation is frequently used in this framework to model the frequency dynamics of replicators due to proportional selection, where the increase in frequencies of replicators is proportional to the difference between their fitness and the average fitness of the population (Taylor and Jonker (1978)).

The replicator equation makes several assumptions: the population is infinite; if the elements of the payoff matrix are stochastic, the replicators earn the mean payoffs; each replicator interacts with every other replicator non-preferentially; and selection is proportional. Much work has explored relaxations of these assumptions with other replicator dynamics, and the development of further evolutionary stability concepts (Ohtsuki and Nowak (2008); Nowak and Sigmund (2004)). Examples include: finite populations (Taylor et al. (2004)), heterogeneity (Bergstrom and Godfrey-Smith (1998)), networks (Roca et al. (2009); Szabó and Fáth (2007)), and stochasticity (Traulsen et al. (2006)).

Other selection methods have been employed in the literature (Bäck et al. (2000); Blickle and Thiele (1995); Ficici et al. (2000)). In truncation selection, after players have interacted, we rank the players from highest to lowest fitness, and a top fraction of the population survives to reproduce. The reproduction rates are equal for all survivors. The population is then normalized. The key differences between this method and proportional selection are that survival is dependent upon meeting a threshold fitness, and that the reproduction rate is identical for all surviving players. In proportional selection, reproduction rates are proportional to the difference between the fitness of a player and the average fitness of the population. Truncation selection is important in biology where thresholds for survival exist, and because survival (and often reproduction) are binary events (they either happen or they do not). In truncation selection, selection pressure is concentrated near the threshold for survival and reproduction, and thus selection pressure is weaker at the extreme high end of the fitness distribution; the system is not selecting for excellence, but for adequacy.

Agent-based round-robin simulations have suggested that the ESS is not a useful concept in biology when truncation selection is used (Fogel et al. (1998); Fogel and Fogel (2011)). Oscillations and apparent chaos may occur in such games where the ESS predicts no such phenomena. Further, the average population frequencies are significantly different from the ESS. The difference between these results and the ESS is due to selection at the extreme lower ends of the payoff distributions of each replicator (caused by stochastic elements of the agent-based models). Thus, the discrepancy is due to asymmetric selection pressures on either side of the ESS (Ficici et al. (2005); Ficici and Pollack (2007)).

The primary objective of our paper is to explore how relaxing simplifying assumptions of the classic replicator equation, namely relaxing mean payoffs and proportional selection in favour of payoff distributions and truncation selection, respectively, influences replicator dynamics and the corresponding evolutionary stable states (ESSes). The assumptions made in the replicator equation are for mathematical tractability. However, in modeling biological systems, we should be wary of an axiomatic approach that rests on such assumptions (Gintis (2009); Mailath (1998)). To explore the relaxations to the mean payoff and proportional selection assumptions, we develop and analyze truncation equations and agent-based simulations.

3.3 Methods

Here we examine two assumptions of the replicator equation, namely: mean payoffs and proportional selection. We will show that there are significant differences between models when both of these assumptions are altered. Focussing on the Hawk-Dove game, we will begin with a discussion of fitness distributions followed by truncation selection methods.

3.3.1 Fitness distributions

The Hawk-Dove, Prisoner’s Dilemma, Stag Hunt, and harmony games (Axelrod and Hamilton (1981); Skyrms (2004); Sugden (1986)) are important two-player models of biological systems with a rich body of literature. In contrast to the other games, the Hawk-Dove game yields a stable interior equilibrium. We will primarily focus on the Hawk-Dove game for our examples. It is set up as follows.

The Hawk-Dove game has two strategies: hawk, S_h, and dove, S_d. Let there be a resource worth a payoff of 50 that may be gained when any two players meet. If a hawk meets a dove, the hawk receives the resource and the dove receives nothing. If, however, a hawk meets another hawk, they fight. Each having an equal chance of winning, the winner receives the resource, and the loser is wounded, receiving a negative payoff, −100. If two doves meet, they posture, attempting to intimidate each other, which has a payoff cost of −10. With probability 0.5, a dove intimidates its opponent, thus receiving the resource as its opponent flees. Therefore, the average payoff is 15. The following payoff matrix represents these averages of the game:

    ahh ahd −25 50     A =   =   . (3.1)     adh add 0 15

This payoff matrix is used in the Hawk-Dove replicator equation. However, notice that in same-strategy pairings, no player receives these averages (e.g. between two doves, one will earn −10 and the other 40). As more pairings occur, the fitness (which is the average of all the payoffs earned from the pairings) will approach this average for most players. However, there will be players that receive much higher and much lower fitnesses.

To factor in the range of possible fitnesses that can be earned in the Hawk-Dove game, we begin by observing the fitness distributions of each strategy pair. For simplicity, we will assume that the fitnesses are normally distributed. Thus, for hawks, the mean fitness of a hawk playing a hawk is µ_hh = −25, and the standard deviation is:

\sigma_{hh} = \sqrt{(50 - (-25))^2/2 + (-100 - (-25))^2/2} = 75. \qquad (3.2)

So that we may have a normal distribution for hawks vs. doves, let us assume that doves are quicker than hawks and thus may take the resource before hawks arrive with probability 0.1. Otherwise, the hawk receives the resource as usual. Table 3.1 summarizes these parameter values.

i, j    µ_ij    σ_ij
h, h    −25     75
h, d     45     15
d, h      5     15
d, d     15     25

Table 3.1: Parameter values for the Hawk-Dove game.

Now, we may derive the fitness distribution for S_i, which is dependent upon the frequencies of hawks (x_h) and doves (x_d). Since we have normal distributions for all strategy pairings, we have that µ_i = µ_ii x_i + µ_ij x_j, and σ_i² = σ_ii² x_i² + σ_ij² x_j². The fitness probability density function for x_i is thus:

\rho_i(\phi') = \frac{1}{\sigma_i\sqrt{2\pi}} \exp\left(-\frac{(\phi' - \mu_i)^2}{2\sigma_i^2}\right). \qquad (3.3)

In section 3.3.4, we introduce independent and dependent truncation, which model selection upon the fitness distribution defined by Equation 3.3.
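As an illustrative aside (our addition, not part of the analysis above), Equation 3.3 is straightforward to evaluate numerically. The following Python sketch computes µ_i, σ_i, and ρ_i from the Table 3.1 parameters, indexing hawks as 0 and doves as 1:

```python
# A minimal sketch of the fitness distribution of Equation 3.3 for the
# Hawk-Dove parameters of Table 3.1 (hawk = 0, dove = 1).
import numpy as np

MU = np.array([[-25.0, 45.0],
               [  5.0, 15.0]])      # mu_ij from Table 3.1
SIGMA = np.array([[75.0, 15.0],
                  [15.0, 25.0]])    # sigma_ij from Table 3.1

def fitness_params(i, x):
    """mu_i = sum_j mu_ij x_j and sigma_i^2 = sum_j sigma_ij^2 x_j^2."""
    mu_i = MU[i] @ x
    sigma_i = np.sqrt((SIGMA[i] ** 2) @ (x ** 2))
    return mu_i, sigma_i

def rho(i, phi, x):
    """Fitness probability density function, Equation 3.3."""
    mu_i, sigma_i = fitness_params(i, x)
    return np.exp(-(phi - mu_i) ** 2 / (2 * sigma_i ** 2)) / (sigma_i * np.sqrt(2 * np.pi))

x = np.array([0.5, 0.5])
print(fitness_params(0, x))  # hawks: mu_h = 10.0, sigma_h ~ 38.24
```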

3.3.2 The replicator equation

The replicator equation is a mean field model that incorporates concepts from game theory. Replicators each have a strategy, S_i, drawn from a strategy set, S, of size n. The strategies determine the replicators’ payoffs, and fitness is a function of the strategy profile of the entire population. Let x_i be the population frequency of replicators playing S_i, x = [x_1, x_2, ⋯, x_n]^T, and let f_i(x) be their fitness earned from playing the entire population.

Using proportional selection, we say that ẋ_i is equal to x_i times the difference between f_i(x) and the average fitness, f̄(x). Mathematically, this is:

\dot{x}_i = x_i(f_i(x) - \bar{f}(x)) = x_i\left(f_i(x) - \sum_{j=1}^{n} x_j f_j(x)\right). \qquad (3.4)

f_i(x) is derived from the payoffs received from interacting with the entire population weighted by the frequencies of each strategy. Letting a_ij be the payoff to the player playing strategy S_i versus an opponent playing S_j, we have:

f_i(x) = \sum_{j=1}^{n} a_{ij} x_j. \qquad (3.5)

An evolutionary stable state (ESS, not to be confused with an evolutionary stable strategy) is a composition of the population that is stable under small perturbations due to mutations or invasions. It is one of a variety of related definitions of evolutionary stability (Lessard (1990)), which have evolved over time (Eshel (1996); Vincent and Brown (1988)). If we perturb the system slightly, we will asymptotically return to the ESS (Hofbauer and Sigmund (1998); Weibull (1997)). The folk theorem of evolutionary game theory (Cressman (2003)) states that all ESSes are Nash equilibria of the game defined by the payoff matrix, A = {a_ij}. However, not all Nash equilibria of A are ESSes (although they all are fixed points).

There is an important connection between the replicator equation and the Lotka-Volterra equation that we note. The replicator equation can be embedded into the Lotka-Volterra equation; further, the Lotka-Volterra equation with equivalent growth rates for all species can be reduced to the replicator equation (Bomze (1983, 1995); Hofbauer and Sigmund (1998)).
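For reference, a minimal numerical sketch of Equations 3.4 and 3.5 applied to the mean-payoff Hawk-Dove matrix (3.1) follows (our code, assuming SciPy is available); it provides the baseline behaviour against which the truncation models below are compared:

```python
# A sketch of the classical replicator equation (3.4)-(3.5) for the
# mean-payoff Hawk-Dove matrix (3.1).
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[-25.0, 50.0],
              [  0.0, 15.0]])       # payoff matrix (3.1)

def replicator(t, x):
    f = A @ x                       # fitnesses, Equation 3.5
    return x * (f - x @ f)          # Equation 3.4

sol = solve_ivp(replicator, (0.0, 10.0), [0.1, 0.9])
print(sol.y[0, -1])  # approaches the interior ESS x_h = 35/60 ~ 0.583
```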

3.3.3 Proportional selection

Here we modify the continuous replicator equation by using fitness distributions in order to explore the relaxation of the mean payoff assumption. We begin by introducing the winning profile: w_i = [w_i1, w_i2, …, w_in]^T, where w_ij is the probability of x_i winning in a paired match against x_j. Let f_i(x, w_i) be the fitness of x_i playing against the entire population, x, with winning profile, w_i. Further, let a_ij and a′_ij be the payoffs of S_i vs. S_j winning and losing, respectively. We thus have:

f_i(x, w_i) = \sum_{j=1}^{n} x_j \big(w_{ij} a_{ij} + (1 - w_{ij}) a'_{ij}\big). \qquad (3.6)

Let ψ_i(x, w_i) be the probability density function of the winning profile, w_i. Applying the familiar replicator equation to find the rate of change of x_i with respect to x_j, we have: x_i x_j (f_i(x, w_i) − f_j(x, w_j)) ψ_i(x, w_i) ψ_j(x, w_j). However, we must integrate over all possible winning profiles for all strategies in S, giving us the equation:

\dot{x}_i = x_i \sum_{j=1}^{n} x_j \int_{w_i}\!\int_{w_j} \big(f_i(x, w_i) - f_j(x, w_j)\big)\,\psi_i(x, w_i)\,\psi_j(x, w_j)\,dw_j\,dw_i. \qquad (3.7)

However, we find that this equation is equivalent to the replicator equation using the mean payoffs (see Section 3.7). Therefore, removing the mean payoff assumption alone does not have an effect. For results different from the replicator equation, we need to couple fitness distributions with truncation selection, as detailed in the following section.

3.3.4 Truncation selection

Two ways to view truncation selection are: survival of individuals above some threshold fitness; and survival of some top percentile of individuals. In the former case, the truncation threshold is independent of the fitnesses of the population, and thus the number of individuals that survive will vary at each selection event. We call this our independent truncation model. In the latter case, the truncation threshold is dependent upon the fitnesses of the population, and thus we denote it dependent truncation.

Let φ be the truncation threshold. The proportion of the population playing strategy S_i with fitness above φ survives. We determine this proportion by integrating the fitness probability density functions (3.3) from φ to infinity. Where erfc(·) is the complementary error function and z_i = (φ − µ_i)/σ_i, we have:

\int_\phi^\infty \rho_i(\phi')\,d\phi' = \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{\phi - \mu_i}{\sqrt{2}\,\sigma_i}\right) = \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{z_i}{\sqrt{2}}\right). \qquad (3.8)

Thus, the frequency of survivors of S_i is x_i erfc(z_i/√2)/2. The population is then normalized (i.e. the survivors’ offspring replace the culled population with the same growth rate). Therefore, we divide the survivors of S_i over all of the survivors to determine the frequencies after selection and reproduction, x′_i. Since x_h + x_d = 1 and ẋ_h + ẋ_d = 0, we will focus on hawks:

x'_h = \frac{x_h\,\mathrm{erfc}(z_h/\sqrt{2})}{x_h\,\mathrm{erfc}(z_h/\sqrt{2}) + x_d\,\mathrm{erfc}(z_d/\sqrt{2})}. \qquad (3.9)

Taking the time limit of the difference quotient of x_h, we may convert this discrete process to the following differential equation:

\dot{x}_h = x_h\left[\mathrm{erfc}\!\left(\frac{z_h}{\sqrt{2}}\right) - \left(x_h\,\mathrm{erfc}\!\left(\frac{z_h}{\sqrt{2}}\right) + x_d\,\mathrm{erfc}\!\left(\frac{z_d}{\sqrt{2}}\right)\right)\right]. \qquad (3.10)

Equation 3.10 is our independent truncation model, where φ is a parameter. Recalling that µ_i, σ_i, and thus z_i are functions of x, we note that the independent truncation equation is of the same form as the replicator equation, where we have erfc(z_i/√2) instead of f_i(x).
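Equation 3.10 can be integrated directly; a sketch (our code, with the helper z reimplementing z_i = (φ − µ_i)/σ_i from the Table 3.1 parameters) follows:

```python
# A sketch integrating the independent truncation equation (3.10) for the
# Hawk-Dove parameters of Table 3.1; phi is the fixed truncation threshold.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import erfc

MU = np.array([[-25.0, 45.0], [5.0, 15.0]])
SIGMA = np.array([[75.0, 15.0], [15.0, 25.0]])

def z(i, x, phi):
    mu_i = MU[i] @ x
    sigma_i = np.sqrt((SIGMA[i] ** 2) @ (x ** 2))
    return (phi - mu_i) / sigma_i

def independent_truncation(t, x, phi):
    s = np.array([erfc(z(i, x, phi) / np.sqrt(2)) for i in (0, 1)])
    return x * (s - x @ s)          # Equation 3.10, two-strategy form

sol = solve_ivp(independent_truncation, (0.0, 200.0), [0.3, 0.7], args=(0.0,))
print(sol.y[0, -1])                 # equilibrium hawk frequency at phi = 0
```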

For dependent truncation, we permit φ to vary, but fix the proportion that survives, τ. Therefore, φ is determined by solving the following algebraic constraint:

g(\phi, x) = \tau - \left[\frac{x_h}{2}\,\mathrm{erfc}\!\left(\frac{z_h}{\sqrt{2}}\right) + \frac{x_d}{2}\,\mathrm{erfc}\!\left(\frac{z_d}{\sqrt{2}}\right)\right] = 0. \qquad (3.11)

Since:

\partial_\phi g(\phi, x) = \frac{x_h}{\sqrt{2\pi}\,\sigma_h} \exp\!\left(-\frac{z_h^2}{2}\right) + \frac{x_d}{\sqrt{2\pi}\,\sigma_d} \exp\!\left(-\frac{z_d^2}{2}\right) \neq 0, \qquad (3.12)

we have an index 1 system of differential algebraic equations (DAEs), which can be reduced, giving us our dependent truncation model:

\begin{aligned}
\dot{x}_h &= x_h\left(\frac{1}{2}\,\mathrm{erfc}\!\left(\frac{z_h}{\sqrt{2}}\right) - \tau\right), \\
\dot{\phi} &= -\frac{\partial_{x_h} g(\phi, x)\,\dot{x}_h + \partial_{x_d} g(\phi, x)\,\dot{x}_d}{\partial_\phi g(\phi, x)}.
\end{aligned} \qquad (3.13)

The dependent truncation DAE, Equation 3.13, is a continuous time and infinite population model of the agent-based model in Fogel et al. (1998); Fogel and Fogel (2011).
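In practice (a sketch under our own assumptions, not the thesis’s code), the index-1 DAE can be simulated by root-finding: solve the constraint g(φ, x) = 0 of Equation 3.11 for φ at each state and substitute the result into the ẋ_h equation, rather than integrating φ̇ directly. The root bracket passed to brentq below is a generous assumption:

```python
# A sketch of the dependent truncation model (3.13): solve g(phi, x) = 0
# (Equation 3.11) for phi at each state, then step x_h.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq
from scipy.special import erfc

MU = np.array([[-25.0, 45.0], [5.0, 15.0]])
SIGMA = np.array([[75.0, 15.0], [15.0, 25.0]])

def z(i, x, phi):
    mu_i = MU[i] @ x
    sigma_i = np.sqrt((SIGMA[i] ** 2) @ (x ** 2))
    return (phi - mu_i) / sigma_i

def surviving_fraction(phi, x):
    return sum(x[i] * erfc(z(i, x, phi) / np.sqrt(2)) / 2 for i in (0, 1))

def dependent_truncation(t, x, tau):
    # phi such that exactly the top tau of the population survives.
    phi = brentq(lambda p: surviving_fraction(p, x) - tau, -1e4, 1e4)
    dxh = x[0] * (erfc(z(0, x, phi) / np.sqrt(2)) / 2 - tau)
    return np.array([dxh, -dxh])    # x_h + x_d = 1

sol = solve_ivp(dependent_truncation, (0.0, 200.0), [0.5, 0.5], args=(0.5,))
print(sol.y[0, -1])
```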

3.3.5 Agent-based models

We ran computer simulations for agent-based models using stochastic payoffs with independent and dependent truncation. The dependent truncation agent-based model is a replication of models in the literature (Fogel et al. (1998); Fogel and Fogel (2011)).

Each turn consists of a round-robin tournament. Payoffs are calculated for each game played, and the fitness of each player is the average of these payoffs. At the end of each tournament, truncation selection occurs. For independent truncation, those players with fitnesses below φ are culled. For dependent truncation, the top τ proportion of players survives. In both cases, the population is normalized by the survivors spawning 1/τ players with strategy composition equal to the composition of the survivors. We round to the nearest player. Both models used a population size of 500 and random initial strategy compositions, were run for 200 turns, and 500 simulations were averaged per chosen parameter.
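A condensed sketch of the independent truncation agent-based model is given below (our illustration, not the code used to produce the reported results; for brevity it uses a smaller population, and it draws each player’s payoff independently from the pairing’s normal distribution):

```python
# A sketch of the round-robin agent-based model with independent truncation.
import numpy as np

rng = np.random.default_rng(0)
MU = np.array([[-25.0, 45.0], [5.0, 15.0]])
SIGMA = np.array([[75.0, 15.0], [15.0, 25.0]])

def turn(strategies, phi):
    n = len(strategies)
    payoffs = np.zeros(n)
    for a in range(n):                      # round-robin tournament
        for b in range(a + 1, n):
            i, j = strategies[a], strategies[b]
            payoffs[a] += rng.normal(MU[i, j], SIGMA[i, j])
            payoffs[b] += rng.normal(MU[j, i], SIGMA[j, i])
    fitness = payoffs / (n - 1)             # average payoff per pairing
    survivors = strategies[fitness >= phi]  # independent truncation
    if survivors.size == 0:
        return survivors                    # extinction
    return rng.choice(survivors, size=n)    # equal reproduction rates

strategies = rng.integers(0, 2, size=100)   # 0 = hawk, 1 = dove
for _ in range(50):
    strategies = turn(strategies, phi=0.0)
    if strategies.size == 0:
        break
print("extinct" if strategies.size == 0 else np.mean(strategies == 0))
```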

3.4 Results

3.4.1 Evolutionary stability

We prove in Section 3.8 that if the variance is constant across all strategy payoffs, then the fixed points of both truncation equations and the classical replicator equation coincide. Further, if the variance is constant across all strategy payoffs and x* is an ESS of the replicator equation, then x* is evolutionary stable in the independent truncation equation, and vice versa. Thus, we must have different payoff variances to observe qualitatively different dynamics.

Let us return to our Hawk-Dove model. We may linearize with respect to x_h to find that we have stability when:

\frac{\partial}{\partial x_h}(\dot{x}_h) = (1 - 2x_h)\left[\mathrm{erfc}\!\left(\frac{z_h}{\sqrt{2}}\right) - \mathrm{erfc}\!\left(\frac{z_d}{\sqrt{2}}\right)\right] + 2(x_h - x_h^2)\left[\frac{\partial_{x_h} z_d}{\sqrt{2\pi}\,e^{z_d^2/2}} - \frac{\partial_{x_h} z_h}{\sqrt{2\pi}\,e^{z_h^2/2}}\right] < 0.

We can simplify this expression greatly depending on whether the fixed point is interior or on the boundary. If x_h = 0, then we have stability when z_h > z_d. If x_h = 1, then we have stability when z_h < z_d. Finally, in the interior, we have a fixed point if z_h = z_d. This point is stable when ∂_{x_h} z_h > ∂_{x_h} z_d. An interesting implication of these stability conditions is that, since the conditions are affine functions of φ, if σ_ij ≠ σ_ii for i ≠ j, we may make any boundary fixed point stable by selecting an appropriate threshold, φ. We can see this attribute of the independent truncation equation by expanding the stability conditions to:

z_d - z_h = \phi(\sigma_{hd} - \sigma_{dd}) + (\mu_{hd}\sigma_{dd} - \mu_{dd}\sigma_{hd}) < 0, \qquad (3.14)

z_h - z_d = \phi(\sigma_{dh} - \sigma_{hh}) + (\mu_{dh}\sigma_{hh} - \mu_{hh}\sigma_{dh}) < 0, \qquad (3.15)

\partial_{x_h} z_d - \partial_{x_h} z_h = \phi\left(\frac{\partial_{x_h}\sigma_h}{\sigma_h^2} - \frac{\partial_{x_h}\sigma_d}{\sigma_d^2}\right) + \left(\frac{\partial_{x_h}\mu_h\,\sigma_h - \mu_h\,\partial_{x_h}\sigma_h}{\sigma_h^2} - \frac{\partial_{x_h}\mu_d\,\sigma_d - \mu_d\,\partial_{x_h}\sigma_d}{\sigma_d^2}\right) < 0. \qquad (3.16)

Conditions 3.14 and 3.15 are for x*_h = 0 and x*_h = 1, respectively. Where an interior fixed point exists, its stability is determined by Condition 3.16. Here we may similarly adjust φ to affect the inequality. However, we cannot guarantee that we can make this fixed point stable, since we are constrained by x_h ∈ [0, 1].
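As a concrete check (a worked substitution we add here, using the Table 3.1 values), Condition 3.15 becomes

\phi(\sigma_{dh} - \sigma_{hh}) + (\mu_{dh}\sigma_{hh} - \mu_{hh}\sigma_{dh}) = \phi(15 - 75) + (5 \cdot 75 - (-25) \cdot 15) = -60\phi + 750 < 0,

so x*_h = 1 is stable precisely when φ > 12.5; likewise, Condition 3.14 becomes −10φ + 900 < 0, so x*_h = 0 is stable for φ > 90. Both thresholds lie in the domain φ ∈ [−100, 100] examined below.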

We may observe these effects upon the Hawk-Dove model parameters we have chosen. Since σ_hd − σ_dd < 0 and σ_dh − σ_hh < 0, Conditions 3.14 and 3.15 are decreasing linear functions of φ. Thus, both boundary points may be stable (or unstable). From this fact, we can infer by topological reasoning that there is at least one interior equilibrium when this condition is met. Further, since the payoff variances for hawks are greater than or equal to those for doves, a higher proportion of hawks than doves will be in the upper echelons of fitnesses. This fact has less importance for low fitness thresholds, as we are selecting for adequacy. However, for high fitness thresholds, this greater variance of fitness permits hawks, as a strategy, to dominate. We observe these effects upon the domain φ ∈ [−100, 100] in Figure 3.1, which we shall discuss in the following section.

[Figure 3.1 appears here: plot titled “Independent truncation equation equilibria”, showing the stable and unstable branches of x*_h and |ẋ_h(φ, x_h)| against φ ∈ [−100, 100].]

Figure 3.1: We observe a variety of fixed point regimes dependent upon φ. Here are plotted the stable and unstable equilibria along with |ẋ_h(φ, x_h)| for the independent truncation equation.

3.4.2 Agent-based simulations vs the 2-strategy model

3.4.2.1 Independent truncation

In Figure 3.1, we plot the equilibria for the independent truncation equation along with |ẋ_h(φ, x_h)|. For φ < 12.5 the model behaves qualitatively like the standard Hawk-Dove game; there is a stable interior fixed point. However, this equilibrium rises sharply as we increase φ until we have stability only at x*_h = 1. As we continue along the φ axis, a blue sky bifurcation occurs, producing two interior fixed points, one stable and one unstable. At this point, we observe more equilibria than the three that are supported in symmetric two-player games. Further, there are areas of phase space where ẋ_h ≈ 0 and thus dynamics are slow.

[Figure 3.2 appears here: panel (a) “Mean simulation results” and panel (b) “Extinction frequencies”, both plotted against φ ∈ [−100, 100].]

Figure 3.2: There are significant differences between the equilibria in the equation (Figure 3.1) and the agent-based model for independent truncation. For the frequency of hawks, we observe the mean (x̄_h) and standard deviation (s_h) in panel (a). Further, extinction can occur in the agent-based model. The extinction rates for hawks and doves (ε_h and ε_d, respectively) are produced in panel (b).

As we move back along the φ axis below φ = 12.5, the area around the stable equilibria with slow dynamics increases, and thus convergence takes longer (results not shown). This result is not surprising intuitively; the truncation threshold is so low that the vast majority of the replicator population is above it, and thus a very small fraction is culled. This phenomenon is an example of transient dynamics (Hastings (2004)) and of their importance in these models.

We notice significant differences between the equilibria curves in Figure 3.1 and the average hawk frequency, x̄_h, in Figure 3.2a. For low φ, x̄_h ≈ 0.5 and s_h ≈ 0.3. We expect that this result is due to the weakness of selection for this domain of φ, and thus the stochastic effects dominate the dynamics. However, x̄_h decreases to 0 as we increase φ.

[Figure 3.3 appears here: plot titled “Independent truncation simulation results”, final x_h against φ ∈ [−100, 100].]

Figure 3.3: Final frequency of hawks for 100 simulations after 200 steps for the independent truncation agent-based model.

We include extinction rates for hawks and doves, ε_h and ε_d (Figure 3.2b), since it is possible for no agents to meet high culling thresholds. Notice that ε_h > ε_d; thus, hawks are more prone to becoming extinct. However, both extinction rates rise sharply to 1 as φ approaches 12.5, the point at which x*_h = 1 becomes stable in the equations.

Figure 3.3 depicts the final frequencies of hawks for 100 simulations run for 200 steps at random initial conditions for φ from −100 to 100 in increments of 1. Below φ ≈ −30 the distribution of final frequencies appears random. However, as we increase φ, a linear boundary appears that decreases until φ ≈ 10. It then increases, matching the equilibria curve for the independent truncation equation (Figure 3.1), until φ ≈ 12.5, at which point all final sizes are 0 (due to extinction).

3.4.2.2 Dependent truncation

[Figure 3.4 appears here: plot titled “Dependent truncation”, showing x*_h, x̄_h, and s_h against τ ∈ [0, 1].]

Figure 3.4: For contrast, here are plotted the stable equilibria and |ẋ_h(τ, x_h)| for the dependent truncation equations, and the agent-based average hawk frequency (x̄_h) and sample standard deviation (s_h).

In Figure 3.4, we depict the stable equilibria for the dependent truncation equations vs. the agent-based simulation results. For τ ≲ 0.3, the equilibrium is all hawks. However, as we move along the τ axis beyond 0.3, we observe stable equilibria, and x*_h decreases as τ increases. Near the equilibria, |ẋ_h(τ, x_h)| is small, and thus the dynamics slow as they approach the equilibria curve. Unlike independent truncation (Figure 3.1), we do not observe all doves as a stable equilibrium. Figure 3.4 further depicts x̄_h and s_h from the agent-based simulations. We observe significant differences between x̄_h and x*_h. For τ ∈ (0, 0.4), x̄_h < 1 and increases as we increase τ. In Figure 3.5, we observe that the frequencies of hawks by the end of the simulations are either 0 or 1 in this domain; thus, x̄_h is the frequency at which doves become extinct.

[Figure 3.5 appears here: plot titled “Dependent truncation simulation results”, final x_h against τ ∈ [0, 1].]

Figure 3.5: Here we plot the final frequencies of hawks for 100 simulations after 200 steps for the dependent truncation agent-based model.

For τ ≳ 0.4, we observe a precipitous drop in x̄_h and x*_h. As τ is increased further, the rate of decrease of x̄_h declines. Unlike independent truncation, the thresholds at which the results for the equations and the simulations each qualitatively change with respect to the truncation parameter are not equal; they differ by approximately 0.1.

Figure 3.5 depicts the final frequencies of hawks for 100 simulations run for 200 steps at random initial conditions for τ from 0.01 to 0.99 in increments of 0.01. As we decrease τ from 1, the range of final x_h values increases. By observing the time series in this regime (see Figure 3.6), we see that this is due to increased amplitudes for lower τ. At τ ≈ 0.4, this behaviour ends. We observe final frequencies of 0 and 1 for τ ≲ 0.4, except for a few internal points.

[Figure 3.6 appears here: time series of x_h over 200 turns; panel (a) τ = 0.5, panel (b) τ = 0.75.]

Figure 3.6: Time series of the agent-based dependent truncation model for τ = 0.5 in panel (a) and τ = 0.75 in panel (b). We initialized the systems at x_h = 0.5.

3.5 Discussion

Here we removed two assumptions of the replicator equation: mean payoffs and proportional selection. We have done so by exploring payoff distributions and two types of truncation selection methods: dependent and independent truncation with respect to the fitness threshold, φ. The independent truncation equation is similar to the replicator equation, but with modified fitness functions. The dependent truncation equation incorporates an additional constraint on the system. We further explored agent-based representations of these truncation types.

We uncovered that the standard replicator equation is a special case of our independent truncation equation when the payoff variances are equal for all strategy pairings. If, however, the payoff variance differs across strategy pairings, we showed that the stability of our fixed points is dependent upon our choice of φ. In particular, we may always choose a fitness threshold such that a boundary fixed point is stable (or unstable).

We observed many similarities and differences between the models, which suggests that this approach to modeling the behaviours observed in Fogel et al. (1998); Fogel and Fogel (2011) can be fruitful. However, the dynamics are complex, and further research should be conducted to examine the effects of finite populations. An analysis of the fixed points of the equations did not factor in the strength of selection. We observed transient dynamics when selection was weak: convergence to the equilibrium was slow, and, in the agent-based model, this phenomenon resulted in the selection pressure being lost in the stochasticity. Where this occurred, selection did not alter the populations in the agent-based models due to the population being finite. Extinction was possible in the agent-based models, yet was not modeled in the equations. Thus, for a high φ for independent truncation, the agent-based simulations led to extinction where the equations showed survival of hawks and doves.

We observed complex dynamics in the agent-based dependent truncation model, unlike the stable interior fixed point of the corresponding differential equation. We are conducting further studies to explore this behaviour. It may be due to several factors of the agent-based model: the finite population, stochasticity, discrete time steps, or the discrete nature of the fitness distributions (our equations use continuous distributions).

3.6 Conclusions

The implications of the fact that we may control stability by adjusting the fitness threshold extend beyond the modeling of conflict in the Hawk-Dove game. For example, we may apply this idea to the Prisoner’s Dilemma, which is frequently used to model cooperation. A variety of methods have been discovered to enable cooperation to occur: kin selection, direct reciprocity, indirect reciprocity, network reciprocity, and group selection (Nowak (2006)). With independent truncation, we may choose a fitness threshold such that all replicators cooperating is stable, which is a novel method of enabling cooperation. Note that these cooperators cooperate unconditionally, and no other mechanisms, such as spatial assortment or conditional cooperation, are required.

Reflecting upon natural selection, one questions which selection method is most appropriate: proportional or truncation. Perhaps Herbert Spencer’s “survival of the fittest” is less apt than “survival of the fit” (Smith (2012a,b)) or “non-survival of the non-fit” (Den Boer (1999)), which truncation selection can arguably model. Alfred Russel Wallace thought that the relatively unfit could coexist (though in fewer numbers) with the fit (Bulmer (2005)). Thus, the population is more genotypically diverse and the risk of extinction is spread over a variety of genotypes (den Boer (1968)). These concepts are absent from the classical replicator equation, where the relatively most fit survive and reproduce. The truncation equations permit relatively unfit replicators to survive where they would be driven to extinction in the replicator equation.

Because of these ideas, we suggest that there needs to be more focus on truncation selection. Indeed, truncation dynamics can profoundly differ from those of proportional selection, as has been shown here and in the literature (Fogel et al. (1998); Fogel and Fogel (2011)). Due to the importance of evolutionary dynamics in a variety of fields (Dosi and Nelson (1994); Hines (1987)), much important future work could be done to better understand these equations and to apply them in place of the replicator equation and other selection methods in the literature.

Acknowledgements

We thank the editor and the two anonymous reviewers for many helpful comments on the manuscript.

3.7 Appendix: mean payoffs and proportional selection

In the following theorem and proof, we show that removing the mean payoff assumption of the replicator equation does not alter the qualitative dynamics.

Theorem 3.7.1. The modified replicator equation that incorporates payoff distributions, as given by Equation 3.7, is equivalent to the classical replicator equation.

Proof.

\begin{aligned}
\dot{x}_i &= x_i \sum_{j=1}^n x_j \int_{w_i}\!\int_{w_j} (f_i - f_j)\,\psi_i\psi_j\,dw_j\,dw_i \\
&= x_i \sum_{j=1}^n x_j \int_{w_i}\!\int_{w_j} \sum_{k=1}^n x_k \Big[ w_{ik}a_{ik} + (1 - w_{ik})a'_{ik} - w_{jk}a_{jk} - (1 - w_{jk})a'_{jk} \Big]\,\psi_i\psi_j\,dw_j\,dw_i \\
&= x_i \sum_{j=1}^n \sum_{k=1}^n x_j x_k \Bigg[ (a'_{ik} - a'_{jk}) \int_{w_{ik}}\!\psi_i\,dw_{ik} \int_{w_{jk}}\!\psi_j\,dw_{jk} + (a_{ik} - a'_{ik}) \int_{w_{ik}}\!w_{ik}\psi_i\,dw_{ik} \int_{w_{jk}}\!\psi_j\,dw_{jk} - (a_{jk} - a'_{jk}) \int_{w_{ik}}\!\psi_i\,dw_{ik} \int_{w_{jk}}\!w_{jk}\psi_j\,dw_{jk} \Bigg] \\
&= x_i \sum_{j=1}^n \sum_{k=1}^n x_j x_k \Big[ a'_{ik} - a'_{jk} + (a_{ik} - a'_{ik})\bar{w}_{ik} - (a_{jk} - a'_{jk})\bar{w}_{jk} \Big] \\
&= x_i \sum_{j=1}^n x_j \Bigg[ \sum_{k=1}^n x_k\big(\bar{w}_{ik}a_{ik} + (1 - \bar{w}_{ik})a'_{ik}\big) - \sum_{k=1}^n x_k\big(\bar{w}_{jk}a_{jk} + (1 - \bar{w}_{jk})a'_{jk}\big) \Bigg] \\
&= x_i \sum_{j=1}^n x_j(\bar{f}_i - \bar{f}_j) = x_i\Big(\bar{f}_i - \sum_{j=1}^n x_j\bar{f}_j\Big),
\end{aligned}

where w̄_ik is the mean probability of i winning against k, and f̄_i is the mean value of f_i. We have recovered the standard replicator equation. Therefore, the mean payoff assumption for the replicator equation has no effect upon the model other than to simplify the formula.

3.8 Fixed points and stability proofs

Lemma 3.8.1. Let σ_i = σ ∀i. Then x* is an interior fixed point for the independent and dependent truncation equations (Equations 3.10 and 3.13) if and only if it is a Nash equilibrium.

Proof. Let x* be a Nash equilibrium. Thus, it is a fixed point of the replicator equation with payoff matrix A = {a_ij}, and satisfies Σ_{k=1}^n a_ik x*_k = Σ_{k=1}^n a_jk x*_k ∀ i, j.

For both truncation equations, we have boundary equilibria when x*_i = 1, and internal equilibria when z_i = z_j ∀ i, j. Let σ_i = σ ∀i; then z_i = z_j implies that µ_i = µ_j, where µ_i = Σ_{j=1}^n µ_ij x*_j. Thus, where a_ij = µ_ij, x* is a fixed point if and only if it is a Nash equilibrium in this special case.

Note: If x*_i = 1, then this holds if and only if σ_ii = σ_ji ∀j (since this implies that σ_i = σ_j).

Theorem 3.8.2. Let σ > 0 and σ_i = σ√(Σ_{j=1}^n x_j²) ∀i. Then x* is asymptotically stable for the independent truncation equation if and only if it is an ESS of the replicator equation.

Proof. We will use the Kullback-Leibler divergence as our Lyapunov function, V(x) = D_KL(x*||x) = Σ_{i=1}^n x*_i log(x*_i/x_i). Suppose x* is an ESS of the replicator equation employing payoff matrix A = {a_ij}. Then, by Hofbauer and Sigmund (1998), V̇(x) = −Σ_{i=1}^n (x*_i − x_i) f_i(x) < 0 for x ≠ x*.

Let µ_i = f_i(x) = Σ_{j=1}^n a_ij x_j and note that Σ_{i=1}^n (x*_i − x_i)φ = 0, since Σ_{i=1}^n x*_i = Σ_{i=1}^n x_i = 1, and that −1/(2√2 σ_i) < 0. We thus have:

\begin{aligned}
0 &> -\sum_{i=1}^n (x_i^* - x_i) f_i(x) \\
&= \sum_{i=1}^n \big[(x_i^* - x_i)\phi - (x_i^* - x_i)\mu_i\big], \\
0 &< \sum_{i=1}^n \big[(x_i^* - x_i)\phi - (x_i^* - x_i)\mu_i\big]\,\frac{-1}{2\sqrt{2}\,\sigma_i} \\
&= -\sum_{i=1}^n \frac{1}{2}(x_i^* - x_i)\,\frac{\phi - \mu_i}{\sqrt{2}\,\sigma_i} = -\sum_{i=1}^n \frac{1}{2}(x_i^* - x_i)\,\frac{z_i}{\sqrt{2}}.
\end{aligned}

Since erfc(·) is monotonically decreasing and logarithmically concave, we have the following:

\begin{aligned}
1 = \mathrm{erfc}(0) &> \mathrm{erfc}\left(\sum_{i=1}^n \frac{1}{2}(-x_i^* + x_i)\frac{z_i}{\sqrt{2}}\right) \\
&= \mathrm{erfc}\left(\sum_{i=1}^n \left(\frac{x_i^*}{2}\left(-\frac{z_i}{\sqrt{2}}\right) + \frac{x_i}{2}\,\frac{z_i}{\sqrt{2}}\right)\right) \\
&\geq \prod_{i=1}^n \mathrm{erfc}\left(-\frac{z_i}{\sqrt{2}}\right)^{x_i^*/2} \mathrm{erfc}\left(\frac{z_i}{\sqrt{2}}\right)^{x_i/2}. \qquad (3.17)
\end{aligned}

The n-strategy form of the independent truncation equation is:

\dot{x}_i = x_i\left[\mathrm{erfc}\left(\frac{z_i}{\sqrt{2}}\right) - \sum_{j=1}^n x_j\,\mathrm{erfc}\left(\frac{z_j}{\sqrt{2}}\right)\right]. \qquad (3.18)

Applying V(x) to 3.18 and taking the derivative with respect to time, we have:

\begin{aligned}
\dot{V}(x) &= -\sum_{i=1}^n x_i^*\,\frac{\dot{x}_i}{x_i} \\
&= -\sum_{i=1}^n x_i^*\left[\mathrm{erfc}\left(\frac{z_i}{\sqrt{2}}\right) - \sum_{j=1}^n x_j\,\mathrm{erfc}\left(\frac{z_j}{\sqrt{2}}\right)\right] \\
&= -\left[\sum_{i=1}^n x_i^*\,\mathrm{erfc}\left(\frac{z_i}{\sqrt{2}}\right)\right] + \left[\sum_{i=1}^n x_i^* \sum_{j=1}^n x_j\,\mathrm{erfc}\left(\frac{z_j}{\sqrt{2}}\right)\right] \\
&= -\left[\sum_{i=1}^n x_i^*\,\mathrm{erfc}\left(\frac{z_i}{\sqrt{2}}\right)\right] + \left[\sum_{j=1}^n x_j\,\mathrm{erfc}\left(\frac{z_j}{\sqrt{2}}\right)\right] \\
&= -\sum_{i=1}^n (x_i^* - x_i)\,\mathrm{erfc}\left(\frac{z_i}{\sqrt{2}}\right). \qquad (3.19)
\end{aligned}

We have stability when 3.19 is negative. Since −erfc(z) = erfc(−z) − 2, and by the AM-GM inequality, we have:

\begin{aligned}
0 &> -\sum_{i=1}^n (x_i^* - x_i)\,\mathrm{erfc}\left(\frac{z_i}{\sqrt{2}}\right) \\
&= -2 + \sum_{i=1}^n \left(x_i^*\,\mathrm{erfc}\left(-\frac{z_i}{\sqrt{2}}\right) + x_i\,\mathrm{erfc}\left(\frac{z_i}{\sqrt{2}}\right)\right) \\
1 &> \sum_{i=1}^n \left(\frac{x_i^*}{2}\,\mathrm{erfc}\left(-\frac{z_i}{\sqrt{2}}\right) + \frac{x_i}{2}\,\mathrm{erfc}\left(\frac{z_i}{\sqrt{2}}\right)\right) \\
&\geq \prod_{i=1}^n \mathrm{erfc}\left(-\frac{z_i}{\sqrt{2}}\right)^{x_i^*/2} \mathrm{erfc}\left(\frac{z_i}{\sqrt{2}}\right)^{x_i/2}. \qquad (3.20)
\end{aligned}

Inequalities 3.17 and 3.20 are the same. Thus, if the payoff variances are identical, x* is asymptotically stable for the independent truncation model if and only if it is an ESS of the replicator equation.

Chapter 4

Truncation selection facilitates cooperation on random spatially structured populations of replicators¹

B. MORSKY & C. T. BAUCH

¹In preparation for submission to Physical Review E.

4.1 Abstract

Two-strategy evolutionary games on graphs have been extensively studied in the literature. A variety of graph structures, degrees of graph dynamics, and behaviours of replicators have been explored. These models have primarily been studied in the framework of the facilitation of cooperation, and much excellent work has shed light on this field of study.

However, there has been little attention paid to truncation selection, as most models employ proportional selection (reminiscent of the replicator equation) or “imitate the best.” Thus, here we systematically explore truncation selection on random graphs, where replicators below a fitness threshold are culled and the reproduction probabilities are equal for all survivors, and find that truncation selection generally results in greater cooperation than proportional selection. We employ two variations of this method: independent truncation, where the threshold is fixed; and dependent truncation, which is a generalization of “imitate the best.” Further, we explore the effects of diffusion in our networks, and vary the order of the following operations of our algorithm: contests, reproduction, and diffusion. For independent truncation, we find three regimes determined by the fitness threshold: cooperation decreases as we raise the threshold; cooperation initially rises as we raise the threshold and there exists an optimal threshold for facilitating cooperation; and the entire population goes extinct. For dependent truncation, we find that culling a large proportion of the population hinders cooperation in the Hawk-Dove game and promotes it in the Stag Hunt. Conversely, culling a small portion of the population promotes cooperation in the Hawk-Dove game and hinders it in the Stag Hunt. DCO diffusion reduces cooperation relative to the static case. However, CDO yields approximately as much or more cooperation than the static case.

4.2 Introduction

The evolution of cooperation is frequently modelled by the Prisoner’s Dilemma. However, this model faces a social dilemma in which defection is favoured over cooperation. The Prisoner’s Dilemma is a game with two strategies, cooperate and defect, with the following payoff matrix:

\Pi = \begin{pmatrix} R & S \\ T & P \end{pmatrix} \qquad (4.1)

(rows and columns ordered C, D),

with T > R > P > S. Thus, though the socially optimal strategy profile is (C, C), due to the temptation to cheat, T, we will be in the suboptimal evolutionary stable strategy (ESS), (D, D) (Hofbauer and Sigmund (1998)). This game and others are frequently studied in the parameter space determined by R = 1, −1 ≤ S ≤ 1, P = 0, and 0 ≤ T ≤ 2 (Santos et al. (2006)). Thus, for −1 ≤ S < 0 and 1 < T ≤ 2, we have the Prisoner’s Dilemma. For 0 < S ≤ 1 and 1 < T ≤ 2, we have the Hawk-Dove game and a mixed ESS. For −1 ≤ S < 0 and 0 ≤ T < 1, we have the Stag Hunt and bistability. And, for 0 < S ≤ 1 and 0 ≤ T < 1, we have the harmony game, where the ESS is socially optimal. Figure 4.1 depicts the frequency of cooperators at the interior equilibrium (if there is one) or at the exterior ESS. For the area of parameter space that represents the Hawk-Dove game, this is the ESS. For the area of the Stag Hunt, this represents the size of the basin of attraction of cooperation.

[Figure 4.1 appears here: heatmap titled “Cooperation in parameter space”, S ∈ [−1, 1] against T ∈ [0, 2].]

Figure 4.1: Interpolated heatmap of cooperation in parameter space. White corresponds to defection and black to cooperation. In the harmony (top left quadrant), Hawk-Dove (top right quadrant), and Prisoner’s Dilemma (bottom right quadrant) areas of parameter space, the colour represents the equilibrium frequency of cooperation (white is all defect, black is all cooperate, and grey an intermediate amount of cooperation). In the Stag Hunt area (bottom left quadrant), we have bistability; thus, the heatmap represents the magnitude of the basin of attraction of cooperation (Buesser and Tomassini (2012)).
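For reference, the partition of the (T, S) plane described above amounts to the following small helper (a sketch we add; R = 1 and P = 0 fixed, as in Santos et al. (2006)):

```python
# A sketch classifying a (T, S) pair into the four game regions (R = 1, P = 0).
def game_region(T: float, S: float) -> str:
    if T > 1:
        return "Hawk-Dove" if S > 0 else "Prisoner's Dilemma"
    return "harmony" if S > 0 else "Stag Hunt"

print(game_region(1.5, -0.5))  # Prisoner's Dilemma
```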

A variety of evolutionary dynamics have been used to explore these games (Lehmann and Keller (2006); Perc and Szolnoki (2010)). Here we will focus on an extensively studied framework, evolutionary games on graphs (reviewed in Nowak et al. (2010); Roca et al. (2009); Szabó and Fáth (2007)), where agents are represented by vertices and interact with other vertices with which they share edges (Hauert (2001); Nowak and May (1992)). Each vertex has a specific strategy that it follows. In many studies, including this one, we assume that the players always play the same pure strategy regardless of the strategies of their neighbours. From these interactions, vertices earn payoffs, the average or sum of which is their fitness, which determine survival by some selection method. Players playing the strategy tit-for-tat will initially cooperate, but will defect if their opponent defects. However, if their opponent returns to cooperating, then so will they. Lattice models of this kind have shown that tit-for-tat can invade defectors (Nakamaru et al. (1997)). Further, such models can increase cooperation (Killingback and Doebeli (1996)) or reduce cooperation in the Snowdrift game (Hauert and Doebeli (2004)), depending on the selection method and games employed. A variety of different invasion scenarios have been studied in this framework (Fu et al. (2010)), and it has been shown that cooperators can successfully invade (Langer et al. (2008)).

Once fitnesses have been calculated, selection occurs. Each vertex compares its payoff to those of its neighbours to determine what strategy will occupy the vertex next turn. A common selection mechanism used in spatial games is proportional selection, where a vertex will randomly choose one of its neighbours and adopt the neighbour’s strategy with a probability proportional to the difference between the payoffs of the vertices. Another, similarly proportional, selection mechanism is where the probabilities of switching to the strategies of neighbouring vertices are dependent upon the fitnesses of each. The vertex in question will adopt the strategy of its neighbour (or, equivalently, be replaced by an offspring of its neighbour) when selection occurs.

Another common selection mechanism is “imitate the best,” in which the focal vertex will compare its fitness (the sum of all interactions with its neighbours) to the fitnesses of neighbouring vertices (Hauert (2001)). Its strategy will then become the strategy of the vertex with the greatest fitness. If there is a tie, the strategy will be determined randomly from the maximal-fitness neighbours.

Truncation selection occurs when a proportion of the population is culled and the survivors reproduce to fill the gap in the population. However, the reproduction rate is equal amongst all survivors (it is not scaled by fitness). Two types of truncation selection are dependent and independent (Morsky and Bauch (2016)). In dependent truncation, the top τ proportion of the population survives and reproduces, while the bottom 1 − τ is culled. For τ = 1/n, where n is the number of neighbours, we have the “imitate the best” rule. In independent truncation, replicators with fitnesses greater than some fitness threshold φ survive and reproduce, and those below it are culled. Note that reproduction is not dependent upon the degree to which a replicator is above the threshold for survival. This asymmetry in selection results in significant differences from the replicator equation, displaying chaos and significant levels of cooperation where none are represented in the replicator equation employing identical games (Ficici et al. (2005); Ficici and Pollack (2007); Fogel et al. (1998); Fogel and Fogel (2011); Morsky and Bauch (2016)).

A model that employs a degree of independent truncation is studied in Zhang et al. (2011). Vertices are removed from the graph if their fitnesses are below a threshold. Vertices created as replacements have preferential connections to vertices with high fitnesses. After this process, there is proportional selection. Thus, this model has elements of both independent truncation and proportional selection.

A variety of graphs have been explored, ranging from lattices with periodic or aperiodic boundaries, to small world graphs, to random regular graphs (Buesser and Tomassini (2012)). Scale-free networks with different levels of degree-degree correlations and enhanced clustering have been shown to facilitate cooperation (Pusch et al. (2008)). Cooperators perform better on random regular graphs than they do on regular small world graphs, which perform better than square lattices (Hauert and Szabó (2005)). Further, dynamic graphs have been studied where the graph changes over time due either to random processes or to nodal behaviour in which vertices will break edges with uncooperative vertices and attach edges to cooperative ones.

Dynamic graphs are graphs where the edges change over time. Edges may be broken between some vertices, and formed between others that do not share one. By altering the degree of dynamism of the graph, a variety of mechanisms (such as the Red Queen) can lead to high levels of cooperation (Szolnoki and Perc (2009)). This process can be random or determined by vertex behaviour (Wardil and Hauert (2014)). In the behavioural model, vertices may choose to break edges by examining the payoffs earned from neighbours with whom they share them (Cavaliere et al. (2012); Pacheco et al. (2008)), breaking edges with non-cooperating neighbours (Rezaei and Kirley (2012)), or forming edges with those vertices that have high payoffs (Wu et al. (2010, 2011)). Other means to study this behaviour include models where the agents move on a plane (Antonioni et al. (2014); Gómez-Gardeñes et al. (2007)). They interact with those within some Euclidean distance, which in some models is heterogeneous (Zhang et al. (2011)). After a certain time they reproduce. Cooperation can be supported in such models, but only when the agents’ velocities are low (Meloni et al. (2009)). Scale-free graphs are the most resilient to this effect (Kun and Scheuring (2009)).

Another method of graph dynamism occurs when vertices swap places in the graphs, or, equivalently, vertices swap strategies with neighbouring vertices. This process is called diffusion. The order of the operations (contest, C; diffusion (graph dynamism), D; and offspring, O) heavily affects the results (Sicardi et al. (2009); Vainstein et al. (2007)). CDO ordering of operations often inhibits the effects of graph structure (Sicardi et al. (2009)). A discussion of these operations is found in Section 4.3.

Here we systematically explore independent and dependent truncation selection on random graphs, since it has not been sufficiently studied in the literature. Additionally, we study diffusion with both DCO and CDO operations. We compare our results to models that use proportional selection.

4.3 Methods

Let G(V, E) be an undirected graph with vertex set V and edge set E. V_i = {j : {i, j} ∈ E} is the set of neighbours of vertex i. We construct a random graph using the Erdős-Rényi G(n, p) model with expected vertex degree E[|V_i|] = 5 and population size |V| = 500. We assign to vertices the cooperator strategy with probability 0.5, and the remaining vertices are defectors. We averaged 100 simulations with 200 turns each for each parameter value we explored, and employed synchronous contests and reproduction. The order of operations each turn is: contest, diffusion, and offspring for CDO; and diffusion, contest, and offspring for DCO. We detail these operations in the following paragraphs in the order: contest, diffusion, and offspring.

During the contest phase, players interact with all their neighbours, and earn payoffs from these interactions. From this we calculate the fitness of vertex i, f_i = Σ_{j∈V_i} π_ij / |V_i|. The payoffs come from π_ij ∈ Π (payoff matrix 4.1). We vary T and S in increments of 0.1.

Diffusion occurs by randomly selecting vertices to swap strategies with their neighbours n times per turn. We ran simulations with mean diffusion rates d = n/|V| = 0, 1, 2, …, 25. However, we present the results for d = 1, since we found that the higher diffusion rates did not appreciably affect our results.

During the offspring phase, the vertices’ strategies are updated. We employ three different selection/updating rules: proportional, dependent truncation, and independent truncation. For each vertex, we examine its and its neighbours’ fitnesses and employ our selection method to determine what strategy will occupy the vertex next turn. For proportional selection, vertex i with strategy s_i will randomly choose a neighbour, j, and adopt its strategy, s_j, with probability P(s_i → s_j):

P(s_i \to s_j) = \begin{cases} \dfrac{f_j - f_i}{\max\{1, T\} - \min\{0, S\}} & \text{if } f_j - f_i > 0, \\ 0 & \text{otherwise,} \end{cases} \qquad (4.2)

where f_i = Σ_j π_ij / n_i is the fitness of vertex i and n_i is the number of neighbours of vertex i. Since max{1, T} − min{0, S} is the maximum difference in fitness between two vertices for given T and S, the probabilities always satisfy 0 ≤ P(s_i → s_j) ≤ 1. This method is identical to the replicator equation for an infinite population (Helbing (1992)). This selection mechanism is proportional to the differences between the payoffs of the vertices.

For dependent truncation, we compare the fitnesses of vertex i and its neighbouring vertices, V_i, and determine the set of vertices that are in the top τ proportion (rounding up) with respect to fitness. We then set the strategy of vertex i to the strategy of a randomly selected vertex from this set.

For independent truncation, for each vertex i we determine the set:

V'_i = \{\, j \in V_i \cup \{i\} : f_j \geq \phi \,\}, \qquad (4.3)

where φ is the truncation threshold. We then set vertex i’s strategy to the strategy of a randomly selected vertex of V'_i. Note that independent truncation can result in V'_i = ∅, and thus we may have empty vertices. These empty vertices hold no strategy and do not compete with neighbours. However, they are still a part of the graph, and thus offspring may be born at them. We ran simulations for τ from 0.05 to 0.95 in increments of 0.05, and φ from −1 to 2 (the range of possible fitnesses) in increments of 0.1.
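To make these rules concrete, here is a sketch of one independent truncation update on the Erdős-Rényi graph described above (our code; function names are illustrative, and networkx is assumed to be available):

```python
# A sketch of one independent truncation update (Equation 4.3) on an
# Erdos-Renyi graph; 1 = cooperate, 0 = defect, None = empty vertex.
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)
R, P = 1.0, 0.0

def payoff(si, sj, T, S):
    return [[P, T], [S, R]][si][sj]          # payoff matrix 4.1

def fitnesses(G, strat, T, S):
    # f_i = mean payoff over neighbours (a full implementation must also
    # skip empty neighbours once they can appear).
    return {i: np.mean([payoff(strat[i], strat[j], T, S) for j in G[i]])
            for i in G if len(G[i]) > 0}

def independent_truncation_update(G, strat, phi, T, S):
    f = fitnesses(G, strat, T, S)
    new = {}
    for i in G:
        pool = [j for j in list(G[i]) + [i] if f.get(j, -np.inf) >= phi]
        new[i] = strat[rng.choice(pool)] if pool else None  # V'_i of Eq. 4.3
    return new

G = nx.gnp_random_graph(500, 5 / 499, seed=0)       # E[|V_i|] = 5
strat = {i: int(rng.random() < 0.5) for i in G}     # cooperate w.p. 0.5
strat = independent_truncation_update(G, strat, phi=0.5, T=1.5, S=0.5)
```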

4.4 Results

4.4.1 Proportional selection

To enable comparison with the truncation selection results, in Figure 4.2 we plot the interpolated heatmaps for proportional selection with no diffusion, and with d = 1 for DCO and CDO. Not shown here are results for d > 1, because we did not observe an appreciable impact from those rates. Clustering in structured models facilitates cooperation in the Stag Hunt game and inhibits it in the Hawk-Dove game (Roca et al. (2009)). We observe less cooperation in Figure 4.2 than in Figure 4.1, since the clustering coefficient, C̄ = p = 0.02, is low for our graph (Albert and Barabási (2002)). Notice that the results for DCO do not differ significantly from the no diffusion case. However, CDO diffusion increases cooperation, as observed in the Hawk-Dove, Stag Hunt, and harmony domains of parameter space. Due to contests occurring before diffusion in CDO, cooperators within a cluster of cooperators will have a higher fitness than defectors in a defector cluster. And, after diffusion, the competition for reproduction is more likely to occur between unrelated vertices than related vertices; i.e. cooperators that diffuse into a defector cluster will produce many offspring replacing neighbouring defectors, whilst defectors that diffuse into a cooperator cluster will likely die. Thus, although clustering is low in our graphs, it is exploited to promote cooperation in the CDO algorithm. This effect has been similarly observed in Sicardi et al. (2009); Vainstein et al. (2007).

[Figure 4.2 appears here: heatmaps of S against T; panel (a) “Proportional selection, d = 0”, panel (b) “Proportional selection, d = 1, DCO”, panel (c) “Proportional selection, d = 1, CDO”.]

Figure 4.2: Bilinearly interpolated heatmaps of the average number of cooperating vertices over 100 simulations for proportional selection with a simulation length of 200 turns and d = 0 (a), DCO d = 1 (b), and CDO d = 1 (c). White corresponds to defection and black to cooperation.

4.4.2 Independent truncation

In general, we observe greater cooperation for both independent and dependent truncation than for proportional selection. Further, we observe a variety of behaviours for independent truncation that can be classified into three regimes that are dependent upon the fitness threshold, φ: φ ∈ [−1, 0), φ ∈ [0, 1], and φ ∈ (1, 2]. In the first, cooperation decreases as we increase φ. In the second, cooperation initially rises. And, in the third, no cooperation is present, as extinction occurs.

Figure 4.3 portrays a variety of heatmaps of the independent truncation model. In panels (a)-(d) we explore the effects of various values of φ for the no diffusion case. These results are summarized in panel (a) of Figure 4.4, which plots the density of cooperators, ρ_c, in each game region for various φ.

For independent truncation, we can alter the nature of fixed points by altering φ (Morsky and Bauch (2016)). The possible fitness values range from −1 to 2, and from this we have three regimes: −1 ≤ φ < 0, 0 ≤ φ ≤ 1, and 1 < φ ≤ 2. All players will make the threshold φ = −1, and thus we would expect, and observe, ρ_c = 1/2. As we increase φ to 0, we will only select against cooperators, since we may have S < 0, but T ≥ 0. Therefore, we observe less cooperation as φ rises to 0. Panel (a) of Figure 4.3 depicts this case for φ = −1/2; we observe little cooperation below the line S = −1/2.

Panels (b)-(f) display the heatmap results for φ > 0. We observe greater cooperation than in the proportional selection model even with diffusion, which is due to the low clustering of the random graphs.

[Figure 4.3 appears here: heatmaps of S against T; panels (a)-(d) “Independent, d = 0” with φ = −1/2, 1/4, 1/2, and 3/4; panel (e) “Independent, d = 1, DCO, φ = 1/2”; panel (f) “Independent, d = 1, CDO, φ = 1/2”.]

Figure 4.3: Bilinearly interpolated heatmaps of the average number of cooperating vertices over 100 simulations for independent truncation with a simulation length of 200 turns. Black corresponds to the density of cooperators. Note that, unlike the proportional selection and dependent truncation heatmaps, white corresponds to the density of defectors and empty vertices (which do not occur in those other models).

Further, cooperation is maximized at approximately φ = 1/2 for d = 0 and the DCO model (as can additionally be seen in panels (a) and (b) of Figure 4.4).

For 0 ≤ φ ≤ 1, selection will occur on both strategies. We observe more cooperation in panels (b)-(d) of Figure 4.3 than the ESS predicts (Figure 4.1) and than we observe in the proportional selection model (Figure 4.2). Cooperation increases for all games as φ is raised, reaches a peak, and then decreases (Figure 4.3).

For φ > 1, cooperators cannot survive, since their maximum fitness is 1. The population could then only consist of defectors, which earn a payoff of 0 playing one another; therefore, they too will become extinct. Thus, we do not plot this parameter range.

When we incorporate diffusion into our models, the effects of diffusion rates of 1, 2, ⋯, 25 are not significantly different. Thus, we only depict the diffusion results for d = 1. The DCO results depicted in panels (b) and (c) of Figure 4.4 are qualitatively similar to the no diffusion case in Figure 4.3. However, we have less cooperation in the regime 0 ≤ φ ≤ 1 for the DCO model. Diffusion permits defectors to invade clusters of cooperators and thereby disrupt them, reducing cooperation. We can see this effect in the heatmaps of panels (c) and (e) (Figure 4.3). The impact is greatest in the Prisoner’s Dilemma region, but it also affects the parameter space bordering it.

The CDO model exhibits many of the same phenomena as the no diffusion and DCO cases: cooperation initially decreases as we increase φ from −1, and increases for φ > 0. However, we observe far greater densities of cooperators as we continue to increase φ, and the rise in cooperation does not fall off as it does in the other models.

[Figure 4.4 appears here: ρ_c against φ ∈ [−1, 1]; panel (a) “Independent truncation”, panel (b) “Independent truncation, d = 1, DCO”, panel (c) “Independent truncation, d = 1, CDO”; curves for the harmony, Hawk-Dove, Stag Hunt, and Prisoner’s Dilemma regions.]

Figure 4.4: Cooperator density, ρ_c, of each game vs. φ.

Rather, the population is nearly entirely cooperating at φ = 1. This occurs because cooperators in a cooperator cluster earn good payoffs with their neighbours, and then may disperse into defector regions, where they will dominate the defectors due to their higher fitnesses. For φ > 1, extinction occurs for the previously discussed reasons.

4.4.3 Dependent truncation

In general, we observe more cooperation in dependent truncation than we do in proportional selection. Further, the levels of cooperation across the space of game parameters are roughly the same (as depicted in Figure 4.6). However, the proportion of replicators that survive, τ, affects each game differently.

Figure 4.5 depicts the heatmap results for dependent truncation for a variety of cases. By comparing the figures in the left column with those on the right, we may observe the effects of low and high τ upon cooperation. There is no effect upon the harmony game; all players cooperate. There is little effect upon the low levels of cooperation for the Prisoner’s Dilemma. However, increasing τ increases cooperation in the Hawk-Dove game and reduces it for the Stag Hunt. We can see this summarized in Figure 4.6 for various values of τ.

Panels (c)-(f) of Figure 4.5 display the effects of the DCO vs. CDO algorithms with d = 1, for τ = 1/4 and τ = 3/4. We observe more cooperation for CDO than DCO, which is true regardless of τ, as is summarized in panels (b) and (c) of Figure 4.6.

[Figure 4.5 appears here: heatmaps of S against T; panels (a) and (b) “Dependent, d = 0” with τ = 1/4 and 3/4; panels (c) and (d) “Dependent, d = 1, DCO” with τ = 1/4 and 3/4; panels (e) and (f) “Dependent, d = 1, CDO” with τ = 1/4 and 3/4.]

Figure 4.5: Bilinearly interpolated heatmaps of the average number of cooperating vertices over 100 simulations for dependent truncation with a simulation length of 200 turns. White corresponds to defection and black to cooperation.

Further, these figures contain the same phenomenon as the case with no diffusion: as τ increases (i.e. fewer replicators are culled), cooperation in the Hawk-Dove game increases while it decreases in the Stag Hunt. This phenomenon is due to the interior equilibrium in the Hawk-Dove game and the bistability in the Stag Hunt game. With low culling rates, we increase the survival of doves and hare hunters, which are the cooperators and defectors for the Hawk-Dove and Stag Hunt games, respectively.

4.5 Discussion


Here we have systematically explored diffusion and different selection mechanisms on a random graph of cooperators and defectors. We have expanded the analysis of these selection mechanisms from proportional selection and “imitate the best” to incorporate two truncation schemes and various levels of truncation. In general, we have found that truncation, both independent and dependent, facilitates cooperation in comparison to proportional selection.

We have uncovered two regimes for independent truncation: cooperation decreases as we increase the threshold parameter, φ, from −1; and cooperation increases for φ > 0, where it peaks in the diffusionless and DCO cases and reaches nearly 100% cooperation in the CDO case, before the whole population becomes extinct for φ > 1.

[Figure 4.6: three panels of cooperator density ρc vs. τ (τ from 0.2 to 0.8, ρc from 0 to 1). (a) Dependent truncation; (b) Dependent truncation, d = 1, DCO; (c) Dependent truncation, d = 1, CDO. Curves: Harmony, Hawk-Dove, Stag Hunt, and Prisoner's Dilemma.]

Figure 4.6: Cooperator density, ρc, of each game vs. τ.

The impact of diffusion is most profound for one diffusion event per player on average, d = 1. We ran simulations for d = 1, 2, ..., 25 and observed only negligible effects upon our results. Although the cluster coefficient is small (near zero), we still observe some cluster effects. DCO and CDO differ in the extent to which a vertex competes over reproduction with its interaction partners. For CDO, a vertex's interaction partners are relatives (“spatial reciprocity”). However, for DCO, kinship is low among interaction partners because diffusion occurs before the combat, and competition is very local (a vertex's interaction partners are also its main competitors for reproduction). Cooperation is adaptive under high kinship and global competition (CDO), but is not adaptive under low kinship and local competition (DCO) (Grafen and Archetti (2008); Rachlin and Jones (2008); Taylor and Grafen (2010)).

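The two orderings can be summarized schematically as follows; diffusion(), combat(), and offspring() are stand-ins for the corresponding steps of the graph model (cf. CA.runGame() in Appendix A.2.1), and the class is purely illustrative.

public class TurnOrder {
    void diffusion() { /* randomly swap neighbouring vertices */ }
    void combat()    { /* play the game with neighbours and accrue payoffs */ }
    void offspring() { /* apply the selection rule to set next turn's strategies */ }

    // DCO: kinship among interaction partners is low, and competition is local.
    void turnDCO() { diffusion(); combat(); offspring(); }

    // CDO: interaction partners are relatives ("spatial reciprocity").
    void turnCDO() { combat(); diffusion(); offspring(); }
}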

Dependent truncation is an extension of “imitate the best”: by choosing τ, we vary how many of the best players may be chosen for reproduction. For low τ, we select from the very best of the population; for high τ, the majority of the players may be chosen to reproduce. τ has different effects on the density of cooperators in different games. While the Harmony game and the Prisoner's Dilemma were not much affected, the Hawk-Dove and Stag Hunt games were: as we raise τ, we increase cooperation in the Hawk-Dove game, but decrease it in the Stag Hunt. This phenomenon occurs with and without diffusion (and for both DCO and CDO).
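As a hedged sketch of this generalization at the neighbourhood level, a vertex may adopt cooperation with probability equal to the fraction of cooperators among the top τ of its neighbourhood, in the spirit of CAdep.sorter() in Appendix A.2.2; the Vertex type here is illustrative.

import java.util.ArrayList;
import java.util.Comparator;

public class LocalDependentTruncation {
    static class Vertex {
        double fitness;
        boolean cooperates;
        Vertex(double f, boolean c) { fitness = f; cooperates = c; }
    }

    // Probability that the focal vertex cooperates next turn: the fraction of
    // cooperators among the best fraction tau of its neighbourhood.
    static double coopProbability(ArrayList<Vertex> neighbourhood, double tau) {
        ArrayList<Vertex> sorted = new ArrayList<>(neighbourhood);
        sorted.sort(Comparator.comparingDouble((Vertex v) -> v.fitness).reversed());
        int trunc = (int) Math.round(tau * sorted.size());
        double nCoop = 0;
        for (int i = 0; i < trunc; i++) {
            if (sorted.get(i).cooperates) { nCoop++; }
        }
        // As tau shrinks, only the very best are imitated ("imitate the best").
        return trunc == 0 ? 0 : nCoop / trunc;
    }
}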

We have several suggestions for future work to expand on the ideas in this paper. For one, we should explore stochastic payoffs, which have been shown to have a significant impact upon non-spatial models (Fogel and Fogel (2011); Morsky and Bauch (2016)). Further, we believe that a systematic exploration of other graphs would be a worthwhile endeavour.

Chapter 5

Discussion

5.1 Summary

We have explored replicator dynamics by modifying the classical replicator equation in a variety of ways. To explore social group formation, we incorporated tags and homophilic imitation into the replicator equation. We relaxed the assumptions of mean payoffs and proportional selection to develop independent and dependent truncation equations and agent-based models. Finally, we studied truncation selection on evolutionary graphs with diffusion. We have been concerned with how these model alterations affect the nature and existence of fixed points, as well as cooperation in two-player games.

We showed that although the fixed points and their stability for the homophilic replicator equations are determined by the underlying game, tag structure can play an important role in the population's diversity through coat-tailing. Further, the rate of convergence to an ESS is impacted by homophily. Although homophily is predicted to reduce diversity, which we observed, we found that it could increase diversity given the appropriate initial conditions. This phenomenon produces a stronger establishment of an invading novel tag than in a less homophilic system.

We explored both the limitations and features of homophily applied to the replicator equation, which is an important step towards understanding group formation. Other approaches have combined a variety of methods, whereas we wished to focus singularly on homophily. It may be that homophily is a necessary condition to induce group formation; however, as we have modeled it, it is not a sufficient one.

For independent truncation, the replicator equation can be recovered when only the mean payoff assumption is relaxed and the standard deviations are equal for all strategy pairings. When these standard deviations are not equal, we may choose a fitness threshold that changes the stability of a boundary fixed point. There were some similarities and differences between the truncation equations and the agent-based models; in particular, we observed some curious complex dynamics in the dependent truncation agent-based model. We hypothesize several possible explanations for these observations: stochasticity, discrete time steps, or the discrete nature of the fitness distributions.

In the final project, we used dependent truncation to generalize the selection method “imitate the best.” Both truncation mechanisms produced generally more cooperation than the proportional selection models. However, this effect was highly dependent on the degree of truncation. The truncation parameter τ (i.e., the proportion of survivors) has opposite effects upon the levels of cooperation in the Hawk-Dove and Stag Hunt games, resulting in lower levels of cooperation for one where it is higher for the other. For independent truncation, we discovered two interesting regimes: cooperation decreases as we increase the truncation threshold parameter; and cooperation increases, but peaks before decreasing in the diffusionless and DCO models. In general, the DCO model had less cooperation than the CDO and diffusionless models.

We wish to leave the reader with a clear contrast of the selection methods and how important they are to replicator dynamics. Further, we stress the important distinction between natural selection as a mechanism of “survival of the fittest” and as “survival of the fit”. This is a subtle, yet very important, distinction that has a tremendous impact upon the behaviour of the dynamical systems we have studied in this work. It may also play a role in fields outside of evolutionary biology that employ replicator dynamics, providing a vast array of intriguing future research projects.

5.2 Directions for future work

Here we propose several ideas for further work that this research has inspired. This work is in a variety of stages, some nebulous and some more concrete.

We wish to establish the optimal conditions under which diversity is maximized and minimized for homophilic replicator equations. Additionally, we would generalize the equations to incorporate a distribution of homophily. Agent-based models could be employed to explore invasion scenarios and mutations. Perhaps we could show that repeated applications of the two invasion scenarios we studied could provide a tag pool from which groups could be formed. Finally, we only applied proportional selection to our study of homophily in chapter 2. We wish to combine our truncation and homophily models to explore how truncation selection impacts the diversity of social groups under homophilic imitation.

Chapter 3 presents us with some interesting work still to be completed. As mentioned above, we would like to better understand the dynamics we observed in the truncation agent-based models by applying the Fokker-Planck equation to the truncation equations, and studying the effects of discretization of our systems in time and population. Further, we think it would be fruitful to apply these truncation equations to problems in the literature that use the replicator equation and compare the results.

We would like to explore the effects of stochastic payoffs on the spatial models in chapter 4, since we have only used mean payoffs. Exploring other graphs would also be an interesting extension to this study.

Appendix A

Appendix of Java code

A.1 Code for chapter 3

A.1.1 DepFogel.java

//Dependent truncation
package paper2;

import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;

/**
 * @author brycemorsky
 */
public class DepFogel extends IndepFogel {

    public static int tau;

    public DepFogel(int x) {
        super(x);
        tau = f;
    }

    @Override
    public void selection(ArrayList<Player> p) {
        // Sort the population by fitness, highest first.
        Collections.sort(p, new Comparator<Player>() {
            @Override
            public int compare(Player a, Player b) {
                return (int) Math.signum(b.fitness - a.fitness);
            }
        });
        double temp_hawks;
        double temp;
        double missing;
        double survivor_hawks = 0;
        double survivors = 0;
        int i = -1;
        boolean satisfied = false;
        // Walk down the sorted list in blocks of equal fitness until exactly
        // tau survivors have been counted; ties at the cutoff are apportioned
        // by the hawk fraction of the tied block.
        do {
            temp_hawks = 0;
            temp = 0;
            do {
                i++;
                if (p.get(i).strat == Strategy.Hawk) { temp_hawks++; }
                temp++;
            } while (i + 1 < p.size() && p.get(i).fitness == p.get(i + 1).fitness);
            if (survivors + temp <= tau) {
                survivors += temp;
                survivor_hawks += temp_hawks;
            }
            if (survivors + temp > tau) {
                missing = tau - survivors;
                survivors = tau;
                survivor_hawks += missing * (temp_hawks / temp);
            }
            if (survivors == tau) { satisfied = true; }
        } while (satisfied == false);
        hawks = (int) Math.round(((double) total_pop) * (survivor_hawks / survivors));
        doves = total_pop - hawks;
        if (survivors == 0) { doves = 0; }
    }
}

A.1.2 IndepFogel.java

package paper2;

import java.util.ArrayList;
import java.util.Random;

/*
 * The game for paper 2. This object is a shell. It defines the strategies
 * available, the number of hawks and total population, and the parameter tau.
 * totalHawks returns the number of hawks. initializeGame and runRound are
 * shells to be defined in the subclasses PDFGame and IndepFogel. runGame
 * initializes the game to a random initial condition, runs initializeGame,
 * and does runRound for game_length (usually 200).
 */
public class IndepFogel {

    public enum Strategy { Hawk, Dove };

    public static int game_length = 200;
    public static int doves;
    public static int hawks;
    public static int f;
    public static int total_pop = 500;
    public static double bluff = -10;
    public static double cost = -100;
    public static double resource = 50;
    public static ArrayList<Player> Population;
    Random rn = new Random();

    public IndepFogel(int x) { f = x; }

    public void initializeGame(int x, int y) {
        Population = new ArrayList<Player>(x + y);
        Population = resetGame(x, y);
    }

    public void averageFitness(ArrayList<Player> p) {
        double avg_fit;
        for (int i = 0; i < p.size(); i++) {
            avg_fit = p.get(i).fitness;
            p.get(i).fitness = avg_fit / (p.size() - 1);
        }
    }

    public void calculatePayoff(Player m, Player n) {
        if (m.strat == Strategy.Hawk && n.strat == Strategy.Hawk) {
            if (rn.nextBoolean()) { m.fitness += resource; n.fitness += cost; }
            else { m.fitness += cost; n.fitness += resource; }
        }
        if (m.strat == Strategy.Hawk && n.strat == Strategy.Dove) {
            if (rn.nextDouble() <= 0.9) { m.fitness += resource; n.fitness += 0; }
            else { m.fitness += 0; n.fitness += resource; }
        }
        if (m.strat == Strategy.Dove && n.strat == Strategy.Hawk) {
            if (rn.nextDouble() > 0.9) { m.fitness += resource; n.fitness += 0; }
            else { m.fitness += 0; n.fitness += resource; }
        }
        if (m.strat == Strategy.Dove && n.strat == Strategy.Dove) {
            if (rn.nextBoolean()) { m.fitness += resource + bluff; n.fitness += bluff; }
            else { m.fitness += bluff; n.fitness += resource + bluff; }
        }
    }

    public void match(ArrayList<Player> p) {
        // Round robin: every pair of players plays once.
        for (int i = 0; i < p.size() - 1; i++) {
            for (int j = i + 1; j < p.size(); j++) {
                calculatePayoff(p.get(i), p.get(j));
            }
        }
    }

    public ArrayList<Player> resetGame(int x, int y) {
        ArrayList<Player> z = new ArrayList<Player>(x + y);
        for (int i = 0; i < x; i++) { z.add(new Player(Strategy.Hawk)); }
        for (int i = 0; i < y; i++) { z.add(new Player(Strategy.Dove)); }
        return z;
    }

    public double[] runGame() {
        double[] i = new double[2];
        hawks = rn.nextInt(total_pop);
        initializeGame(hawks, total_pop - hawks);
        for (int j = 0; j < game_length; j++) {
            Population = runRound(Population);
            if (doves == 0 && hawks == 0) { break; }
        }
        if (hawks + doves != 0) {
            i[0] = ((double) hawks) / ((double) (hawks + doves));
            i[1] = ((double) doves) / ((double) (hawks + doves));
        } else {
            i[0] = 0;
            i[1] = 0;
        }
        return i;
    }

    public ArrayList<Player> runRound(ArrayList<Player> p) {
        match(p);
        averageFitness(p);
        selection(p);
        p = resetGame(hawks, doves);
        if (doves == 0 && hawks == 0) { return null; }
        else { return p; }
    }

    public void selection(ArrayList<Player> p) {
        // Independent truncation: players with fitness at or above the
        // threshold f survive.
        double survivor_hawks = 0;
        double survivors = 0;
        for (int i = 0; i < total_pop; i++) {
            if (p.get(i).fitness >= f) {
                survivors++;
                if (p.get(i).strat == Strategy.Hawk) { survivor_hawks++; }
            }
        }
        if (survivors != 0) {
            hawks = (int) Math.round(((double) total_pop) * (survivor_hawks / survivors));
            doves = total_pop - hawks;
            System.out.println(hawks + " " + doves);
        } else { hawks = 0; doves = 0; }
    }
}

A.1.3 Paper2.java

package paper2;

import java.io.BufferedWriter;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintStream;
import java.io.PrintWriter;

/*
 * The top class. Runs num_sims number of games (either Fogel or PDFGame),
 * sums the total number of hawks at the end of a simulation to calculate the
 * average and standard deviation of the hawk population in terms of
 * frequencies (not absolute amounts). computeStd computes the standard
 * deviation from an array input that contains the final number of hawks
 * after each simulation.
 */
public class Paper2 {

    public static double average;
    public static double extinction;
    public static double num_sims = 100;
    public static double stdev;
    public static double[] final_pop = new double[2];
    public static double[] st_hawks = new double[(int) num_sims];
    public static double extinct_hawks;
    public static double extinct_doves;

    public static void main(String[] args) {
        // Sweep the independent truncation threshold f.
        for (int i = -100; i <= 100; i++) {
            average = 0;
            extinction = 0;
            stdev = 0;
            extinct_hawks = 0;
            extinct_doves = 0;
            for (int j = 0; j < num_sims; j++) {
                IndepFogel f = new IndepFogel(i);
                final_pop = f.runGame();
                try {
                    File file = new File("indep_fogel.txt");
                    if (!file.exists()) { file.createNewFile(); }
                    FileWriter fw = new FileWriter(file.getAbsoluteFile(), true);
                    BufferedWriter bw = new BufferedWriter(fw);
                    bw.write(i + " " + final_pop[0]);
                    bw.newLine();
                    bw.close();
                } catch (IOException e) { e.printStackTrace(); }
                if (final_pop[0] == 0 && final_pop[1] == 0) { extinction++; }
                if (final_pop[0] == 0) { extinct_hawks++; }
                if (final_pop[1] == 0) { extinct_doves++; }
                average += final_pop[0];
                st_hawks[j] = final_pop[0];
            }
            average = average / num_sims;
            extinction = extinction / num_sims;
            stdev = computeStd();
            extinct_hawks = extinct_hawks / num_sims;
            extinct_doves = extinct_doves / num_sims;
        }
        // Sweep the dependent truncation count tau.
        for (int i = 5; i < 500; i = i + 5) {
            //int i = 250;
            double to_bw = 0;
            average = 0;
            extinction = 0;
            stdev = 0;
            for (int j = 0; j < num_sims; j++) {
                DepFogel f = new DepFogel(i);
                final_pop = f.runGame();
                to_bw = ((double) i) / 500;
                try {
                    File file = new File("timeseries_dep.txt");
                    if (!file.exists()) { file.createNewFile(); }
                    FileWriter fw = new FileWriter(file.getAbsoluteFile(), true);
                    BufferedWriter bw = new BufferedWriter(fw);
                    bw.write(to_bw + " " + final_pop[0]);
                    bw.newLine();
                    bw.close();
                } catch (IOException e) { e.printStackTrace(); }
                if (final_pop[0] == 0 && final_pop[1] == 0) { extinction++; }
                average += final_pop[0];
                st_hawks[j] = final_pop[0];
            }
            average = average / num_sims;
            extinction = extinction / num_sims;
            stdev = computeStd();
            System.out.println(average + " " + stdev + " " + extinction + " " + i);
        }
    }

    public static double computeStd() {
        // Sample standard deviation of the final hawk frequencies.
        double x = 0;
        for (int i = 0; i < num_sims; i++) {
            x += Math.pow(st_hawks[i] - average, 2);
        }
        x = Math.sqrt(x / (num_sims - 1));
        return x;
    }
}

A.1.4 Player.java

package paper2;

import paper2.IndepFogel.Strategy;

/*
 * A player object for IndepFogel.java. Each player has a strategy (from enum
 * Strategy: Hawk or Dove) and a fitness (double). The constructor assigns a
 * fitness of 0 and a strategy input. resetFitness resets the fitness to 0.
 */
public class Player {

    public double fitness;
    public Strategy strat;

    public Player(Strategy i) {
        strat = i;
        fitness = 0;
    }
}

A.2 Code for chapter 4

A.2.1 CA.java

package paper3;

import java.util.ArrayList;
import java.util.Random;

public class CA {

    public static double[] hawks = new double[2];
    public enum Strategy { Coop, Def, Empty };
    public static int game_length = 200;
    public static int flips;
    public static double S;
    public static double T;
    public static int truncation;
    public static int pop_size = 500;
    public static ArrayList<Player> Population;
    Random rn = new Random();

    public CA(double s, double t, int x) {
        S = s;
        T = t;
        flips = x;
    }

    public void calculatePayoff(Player m, Player n) {
        if (m.strat == Strategy.Coop && n.strat == Strategy.Coop) { m.fitness += 1; n.fitness += 1; }
        if (m.strat == Strategy.Coop && n.strat == Strategy.Def) { m.fitness += S; n.fitness += T; }
        if (m.strat == Strategy.Def && n.strat == Strategy.Coop) { m.fitness += T; n.fitness += S; }
        if (m.strat == Strategy.Def && n.strat == Strategy.Def) { m.fitness += 0; n.fitness += 0; }
    }

    public void diffusion() {
        Player c_player;
        Player n_player;
        Strategy c_strat;
        double c_fitness;
        int c;
        int n;
        for (int i = 0; i < flips; i++) {
            c = rn.nextInt(Population.size());
            c_player = Population.get(c);
            c_strat = c_player.strat;
            c_fitness = c_player.fitness;
            if (!c_player.neighbours.isEmpty()) {
                // Swap strategies and fitnesses with a randomly chosen neighbour.
                n = c_player.neighbours.get(rn.nextInt(c_player.neighbours.size()));
                n_player = Population.get(n);
                c_player.strat = n_player.strat;
                c_player.fitness = n_player.fitness;
                n_player.strat = c_strat;
                n_player.fitness = c_fitness;
                Population.set(c, c_player);
                Population.set(n, n_player);
            }
        }
    }

    public void initializeNetwork() {
        // Random graph: each possible edge is present with probability link_prob.
        double link_prob = 0.01;
        for (int i = 0; i < pop_size - 1; i++) {
            for (int j = i + 1; j < pop_size; j++) {
                if (rn.nextDouble() < link_prob) {
                    Population.get(i).neighbours.add(j);
                    Population.get(j).neighbours.add(i);
                }
            }
        }
    }

    public void initializeStrategies() {
        Population = new ArrayList<>(pop_size);
        for (int i = 0; i < pop_size; i++) {
            if (rn.nextBoolean()) { Population.add(i, new Player(Strategy.Coop)); }
            else { Population.add(i, new Player(Strategy.Def)); }
        }
    }

    public void match() {
        Player c_player;
        for (int i = 0; i < pop_size; i++) {
            c_player = Population.get(i);
            for (int j = 0; j < c_player.neighbours.size(); j++) {
                if (c_player.neighbours.get(j) > i) {
                    calculatePayoff(c_player, Population.get(c_player.neighbours.get(j)));
                }
            }
        }
        for (int i = 0; i < pop_size; i++) {
            c_player = Population.get(i);
            c_player.fitness = c_player.fitness / ((double) c_player.neighbours.size());
        }
    }

    public void resetPopulation() {
        for (int i = 0; i < pop_size; i++) { Population.get(i).reset(); }
    }

    public double[] runGame() {
        double[] out = new double[3];
        out[0] = 0;
        out[1] = 0;
        out[2] = 0;
        initializeStrategies();
        initializeNetwork();
        for (int i = 0; i < game_length; i++) {
            //match(); // CDO: uncomment this call and comment out the one below.
            if (flips != 0) { diffusion(); }
            match();   // DCO: combat occurs after diffusion.
            Population = selection();
            resetPopulation();
        }
        for (int i = 0; i < pop_size; i++) {
            if (Population.get(i).strat == Strategy.Coop) { out[0]++; }
            if (Population.get(i).strat == Strategy.Def) { out[1]++; }
            if (Population.get(i).strat == Strategy.Empty) { out[2]++; }
        }
        out[0] = out[0] / 500;
        out[1] = out[1] / 500;
        out[2] = out[2] / 500;
        return out;
    }

    public ArrayList<Player> selection() {
        // Proportional (pairwise comparison) selection: imitate a random
        // neighbour with probability proportional to the payoff difference.
        Player c_player;
        Player n_player;
        ArrayList<Player> ShadowPopulation = new ArrayList<Player>(pop_size);
        for (int i = 0; i < pop_size; i++) {
            c_player = Population.get(rn.nextInt(pop_size));
            if (!c_player.neighbours.isEmpty()) {
                n_player = Population.get(c_player.neighbours.get(rn.nextInt(c_player.neighbours.size())));
                if (n_player.fitness - c_player.fitness > 0) {
                    if (rn.nextDouble() <= (n_player.fitness - c_player.fitness) / (Math.max(1, T) - Math.min(0, S))) {
                        c_player.strat = n_player.strat;
                    }
                }
            }
            ShadowPopulation.add(i, c_player);
        }
        return ShadowPopulation;
    }
}

A.2.2 CAdep.java

package paper3;

import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;

/**
 * @author brycemorsky
 */
public class CAdep extends CA {

    public static double tau;

    public CAdep(double s, double t, int x, double y) {
        super(s, t, x);
        tau = y;
    }

    @Override
    public ArrayList<Player> selection() {
        // Dependent truncation: each vertex adopts cooperation with
        // probability equal to the fraction of cooperators among the top
        // fraction tau of its neighbourhood (itself included).
        Player c_player;
        ArrayList<Player> ShadowPopulation = new ArrayList<Player>(pop_size);
        ArrayList<Player> locals;
        for (int i = 0; i < Population.size(); i++) {
            c_player = Population.get(i);
            locals = new ArrayList<>(c_player.neighbours.size());
            for (int j = 0; j < c_player.neighbours.size(); j++) {
                locals.add(Population.get(c_player.neighbours.get(j)));
            }
            locals.add(c_player);
            if (rn.nextDouble() <= sorter(locals)) { c_player.strat = Strategy.Coop; }
            else { c_player.strat = Strategy.Def; }
            ShadowPopulation.add(i, c_player);
        }
        return ShadowPopulation;
    }

    public double sorter(ArrayList<Player> p) {
        // Sort the neighbourhood by fitness, highest first, and return the
        // fraction of cooperators among its top fraction tau.
        Collections.sort(p, new Comparator<Player>() {
            @Override
            public int compare(Player a, Player b) {
                return (int) Math.signum(b.fitness - a.fitness);
            }
        });
        double n_coop = 0;
        int trunc = (int) Math.round(((double) p.size()) * tau);
        for (int i = 0; i < trunc; i++) {
            if (p.get(i).strat == Strategy.Coop) { n_coop++; }
        }
        return n_coop / ((double) trunc);
    }
}

A.2.3 CAindep.java

package paper3;

import java.util.ArrayList;

/**
 * @author brycemorsky
 */
public class CAindep extends CA {

    public static double phi;

    public CAindep(double s, double t, int x, double y) {
        super(s, t, x);
        //phi = y * Math.max(1, t) + (1 - y) * Math.min(0, s);
        phi = y;
    }

    @Override
    public void calculatePayoff(Player m, Player n) {
        if (m.strat == Strategy.Coop && n.strat == Strategy.Coop) { m.fitness += 1; n.fitness += 1; }
        if (m.strat == Strategy.Coop && n.strat == Strategy.Def) { m.fitness += S; n.fitness += T; }
        if (m.strat == Strategy.Def && n.strat == Strategy.Coop) { m.fitness += T; n.fitness += S; }
        if (m.strat == Strategy.Def && n.strat == Strategy.Def) { m.fitness += 0; n.fitness += 0; }
        if (n.strat == Strategy.Empty) { m.emptyNeighbours++; }
    }

    @Override
    public void match() {
        Player c_player;
        double n;
        for (int i = 0; i < pop_size; i++) {
            c_player = Population.get(i);
            for (int j = 0; j < c_player.neighbours.size(); j++) {
                if (c_player.neighbours.get(j) > i) {
                    calculatePayoff(c_player, Population.get(c_player.neighbours.get(j)));
                }
            }
        }
        for (int i = 0; i < pop_size; i++) {
            // Average the payoff over non-empty neighbours only.
            c_player = Population.get(i);
            n = ((double) c_player.neighbours.size()) - c_player.emptyNeighbours;
            if (n != 0) { c_player.fitness = c_player.fitness / n; }
        }
    }

    @Override
    public ArrayList<Player> selection() {
        // Independent truncation: neighbours with fitness at or above phi
        // survive; a vertex adopts cooperation with probability equal to the
        // fraction of cooperators among the survivors of its neighbourhood.
        Player c_player;
        ArrayList<Player> ShadowPopulation = new ArrayList<Player>(pop_size);
        ArrayList<Player> locals;
        double cull_prob;
        for (int i = 0; i < Population.size(); i++) {
            c_player = Population.get(i);
            locals = new ArrayList<>(c_player.neighbours.size());
            for (int j = 0; j < c_player.neighbours.size(); j++) {
                locals.add(Population.get(c_player.neighbours.get(j)));
            }
            locals.add(c_player);
            cull_prob = culling(locals);
            if (cull_prob == 100) { c_player.strat = Strategy.Empty; } // no survivors: the vertex empties
            else {
                if (rn.nextDouble() <= cull_prob) { c_player.strat = Strategy.Coop; }
                else { c_player.strat = Strategy.Def; }
            }
            ShadowPopulation.add(i, c_player);
        }
        return ShadowPopulation;
    }

    public double culling(ArrayList<Player> p) {
        // Returns the fraction of cooperators among those at or above the
        // threshold phi, or the sentinel value 100 if nobody survives.
        double n_coop = 0;
        double survivors = 0;
        for (int i = 0; i < p.size(); i++) {
            if (p.get(i).fitness >= phi) {
                survivors++;
                if (p.get(i).strat == Strategy.Coop) { n_coop++; }
            }
        }
        if (survivors == 0) { return 100; }
        else { return n_coop / survivors; }
    }
}

A.2.4 Paper3.java

package paper3;

import java.io.BufferedWriter;
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;

public class Paper3 {

    public static double[] average;
    public static double stdev;
    public static double survivors;
    public static double num_sims = 100;
    public static int num_flips;
    public static double[] output = new double[2];
    public static double[] st_coop = new double[(int) num_sims];
    public static double stdev_surv;
    public static double[] st_surv = new double[(int) num_sims];
    public static double[] total_hawks = new double[2];
    public static double[] sh_avg;
    public static double[] pd_avg;
    public static double[] h_avg;
    public static double[] hd_avg;

    public static void main(String[] args) {
        //Proportional selection.
        /*
        for (int t = 0; t <= 20; t++) {
            for (int s = -10; s <= 10; s++) {
                double averageM = 0;
                //stdev = 0;
                for (int j = 0; j < num_sims; j++) {
                    CA c = new CA(((double) s) / 10.0, ((double) t) / 10.0, 500);
                    output = c.runGame();
                    averageM += output[0];
                    //st_coop[j] = output[0];
                }
                averageM = averageM / num_sims;
                //stdev = computeStd(st_coop, average);
                try {
                    File file = new File("prop_DCO.txt");
                    if (!file.exists()) { file.createNewFile(); }
                    FileWriter fw = new FileWriter(file.getAbsoluteFile(), true);
                    BufferedWriter bw = new BufferedWriter(fw);
                    bw.write(0 + " " + ((double) t) / 10.0 + " " + ((double) s) / 10.0 + " " + averageM);
                    bw.newLine();
                    bw.close();
                } catch (IOException e) { e.printStackTrace(); }
            }
        }
        */

        //Dependent truncation.
        /*
        for (int t = 0; t <= 20; t++) {
            for (int s = -10; s <= 10; s++) {
                double averageM = 0;
                for (int k = 0; k < num_sims; k++) {
                    CAdep c = new CAdep(((double) s) / 10.0, ((double) t) / 10.0, 500, 0.75);
                    output = c.runGame();
                    averageM += output[0];
                }
                averageM = averageM / num_sims;
                try {
                    File file = new File("dep_t75_DCO.txt");
                    if (!file.exists()) { file.createNewFile(); }
                    FileWriter fw = new FileWriter(file.getAbsoluteFile(), true);
                    BufferedWriter bw = new BufferedWriter(fw);
                    bw.write(((double) t) / 10.0 + " " + ((double) s) / 10.0 + " " + averageM);
                    bw.newLine();
                    bw.close();
                } catch (IOException e) { e.printStackTrace(); }
            }
        }
        */

        //Independent truncation.
        /*
        for (int t = 0; t <= 20; t++) {
            for (int s = -10; s <= 10; s++) {
                double averageM = 0;
                stdev = 0;
                survivors = 0;
                stdev_surv = 0;
                for (int k = 0; k < num_sims; k++) {
                    CAindep c = new CAindep(((double) s) / 10.0, ((double) t) / 10.0, 500, 0.5);
                    output = c.runGame();
                    averageM += output[0];
                    st_coop[k] = output[0];
                    survivors += output[1];
                    st_surv[k] = output[1];
                }
                averageM = averageM / num_sims;
                survivors = survivors / num_sims;
                //stdev = computeStd(st_coop, average);
                //stdev_surv = computeStd(st_surv, survivors);
                try {
                    File file = new File("indep_phi_5_CDO.txt");
                    if (!file.exists()) { file.createNewFile(); }
                    FileWriter fw = new FileWriter(file.getAbsoluteFile(), true);
                    BufferedWriter bw = new BufferedWriter(fw);
                    bw.write(((double) t) / 10.0 + " " + ((double) s) / 10.0 + " " + averageM);
                    bw.newLine();
                    bw.close();
                } catch (IOException e) { e.printStackTrace(); }
            }
        }
        */

        // Sweep the truncation parameter and average cooperation within each
        // quadrant of the (S, T) plane: Stag Hunt (T < 1, S < 0), Prisoner's
        // Dilemma (T > 1, S < 0), Harmony (T < 1, S > 0), Hawk-Dove (T > 1, S > 0).
        for (double j = -1; j <= 1; j = j + 0.05) {
            sh_avg = new double[3];
            pd_avg = new double[3];
            h_avg = new double[3];
            hd_avg = new double[3];
            for (int t = 0; t <= 20; t++) {
                for (int s = -10; s <= 10; s++) {
                    average = new double[3];
                    for (int k = 0; k < num_sims; k++) {
                        CAindep c = new CAindep(((double) s) / 10.0, ((double) t) / 10.0, 500, j);
                        output = c.runGame();
                        average[0] += output[0];
                        average[1] += output[1];
                        average[2] += output[2];
                    }
                    average[0] = average[0] / num_sims;
                    average[1] = average[1] / num_sims;
                    average[2] = average[2] / num_sims;
                    if (0 <= t && t < 10 && -10 <= s && s < 0) {
                        sh_avg[0] += average[0];
                        sh_avg[1] += average[1];
                        sh_avg[2] += average[2];
                    }
                    if (10 < t && t <= 20 && -10 <= s && s < 0) {
                        pd_avg[0] += average[0];
                        pd_avg[1] += average[1];
                        pd_avg[2] += average[2];
                    }
                    if (0 <= t && t < 10 && 0 < s && s <= 10) {
                        h_avg[0] += average[0];
                        h_avg[1] += average[1];
                        h_avg[2] += average[2];
                    }
                    if (10 < t && t <= 20 && 0 < s && s <= 10) {
                        hd_avg[0] += average[0];
                        hd_avg[1] += average[1];
                        hd_avg[2] += average[2];
                    }
                }
            }
            pd_avg[0] = pd_avg[0] / 400;
            pd_avg[1] = pd_avg[1] / 400;
            pd_avg[2] = pd_avg[2] / 400;
            hd_avg[0] = hd_avg[0] / 400;
            hd_avg[1] = hd_avg[1] / 400;
            hd_avg[2] = hd_avg[2] / 400;
            sh_avg[0] = sh_avg[0] / 400;
            sh_avg[1] = sh_avg[1] / 400;
            sh_avg[2] = sh_avg[2] / 400;
            h_avg[0] = h_avg[0] / 400;
            h_avg[1] = h_avg[1] / 400;
            h_avg[2] = h_avg[2] / 400;
            try {
                File file = new File("indep_d_0.txt");
                if (!file.exists()) { file.createNewFile(); }
                FileWriter fw = new FileWriter(file.getAbsoluteFile(), true);
                BufferedWriter bw = new BufferedWriter(fw);
                //bw.write(pd_avg[0] + " " + pd_avg[2] + " " + hd_avg[0] + " " + hd_avg[2] + " " + sh_avg[0] + " " + sh_avg[2] + " " + h_avg[0] + " " + h_avg[2]);
                bw.write(j + " " + pd_avg[0] + " " + hd_avg[0] + " " + sh_avg[0] + " " + h_avg[0]);
                bw.newLine();
                bw.close();
            } catch (IOException e) { e.printStackTrace(); }
            System.out.println(j);
        }
    }

    public static double computeStd(double[] x, double y) {
        double z = 0;
        for (int i = 0; i < num_sims; i++) {
            z += Math.pow(x[i] - y, 2);
        }
        z = Math.sqrt(z / (num_sims - 1));
        return z;
    }
}

A.2.5 Player.java

package paper3;

import java.util.ArrayList;
import paper3.CA.Strategy;

public class Player {

    public double emptyNeighbours;
    public double fitness;
    public Strategy strat;
    public ArrayList<Integer> neighbours;

    public Player(Strategy i) {
        strat = i;
        fitness = 0;
        neighbours = new ArrayList<>();
    }

    public void reset() { fitness = 0; emptyNeighbours = 0; }
}

References

Réka Albert and Albert-László Barabási. Statistical mechanics of complex networks. Reviews of modern physics, 74(1):47, 2002.

Tibor Antal, Hisashi Ohtsuki, John Wakeley, Peter D. Taylor, and Martin A. Nowak. Evo- lution of cooperation by phenotypic similarity. PNAS, 106(21):8597–8600, 2009.

Alberto Antonioni, Marco Tomassini, and Pierre Buesser. Random diffusion and cooper- ation in continuous two-dimensional space. Journal of theoretical biology, 344:40–48, 2014.

Robert Axelrod. The dissemination of culture a model with local convergence and global polarization. Journal of conflict resolution, 41(2):203–226, 1997.

Robert Axelrod and W. D. Hamilton. The evolution of cooperation. Science, 211(4489): 1390–1396, 1981.

Thomas Bäck, David B. Fogel, and Zbigniew Michalewicz. Evolutionary computation 1: Basic algorithms and operators, volume 1. CRC Press, 2000.

Jonathan Bendor and Piotr Swistak. The evolution of norms. American Journal of Sociology, 106(6):1493–1545, 2001.

Wolfgang H. Berger and Frances L. Parker. Diversity of planktonic foraminifera in deep-sea sediments. Science, 168(3937):1345–1347, 1970.

Carl T. Bergstrom and Peter Godfrey-Smith. On the evolution of behavioral heterogeneity in individuals and populations. Biology and Philosophy, 13(2):205–231, 1998.

Helen Bernhard, Ernst Fehr, and Urs Fischbacher. Group affiliation and altruistic norm enforcement. The American Economic Review, 96(2):217–221, 2006.

Tobias Blickle and Lothar Thiele. A mathematical analysis of tournament selection. In ICGA, pages 9–16, 1995.

Immanuel M. Bomze. Lotka-Volterra equation and replicator dynamics: a two-dimensional classification. Biological cybernetics, 48(3):201–211, 1983.

Immanuel M. Bomze. Lotka-Volterra equation and replicator dynamics: new issues in classification. Biological cybernetics, 72(5):447–453, 1995.

Pierre Buesser and Marco Tomassini. Evolution of cooperation on spatially embedded networks. Physical Review E, 86(6):066107, 2012.

Michael Bulmer. The theory of natural selection of Alfred Russel Wallace FRS. Notes and Records of the Royal Society, 59(2):125–136, 2005.

Matteo Cavaliere, Sean Sedwards, Corina E. Tarnita, Martin A. Nowak, and Attila Csikász-Nagy. Prosperity is associated with instability in dynamical networks. Journal of theoretical biology, 299:126–138, 2012.

Damon Centola, Juan Carlos Gonzalez-Avella, Victor M. Eguiluz, and Maxi San Miguel. Homophily, cultural drift, and the co-evolution of cultural groups. Journal of Conflict Resolution, 51(6):905–929, 2007.

Robert B. Cialdini and Melanie R. Trost. Social influence: Social norms, conformity and compliance. 1998.

Ross Cressman. Evolutionary dynamics and extensive form games, volume 5. MIT Press, 2003.

Richard Dawkins. The selfish gene. Oxford University Press, New York, 1976.

P. J. den Boer. Spreading of risk and stabilization of animal numbers. Acta biotheoretica, 18(1):165–194, 1968.

P. J. Den Boer. Natural selection or the non-survival of the non-fit. Acta biotheoretica, 47 (2):83–97, 1999.

Kurt Dopfer. The evolutionary foundations of economics. Cambridge University Press, 2005.

Giovanni Dosi and Richard R. Nelson. An introduction to evolutionary theories in economics. Journal of evolutionary economics, 4(3):153–172, 1994.

Lee Alan Dugatkin and Hudson Kern Reeve. Game theory and animal behavior. Oxford University Press, 1998.

Richard Durrett and Simon A. Levin. Can stable social groups be maintained by ho- mophilous imitation alone? Journal of Economic Behavior & Organization, 57(3): 267–286, 2005.

Paul R. Ehrlich and Simon A. Levin. The evolution of norms. PLoS Biol, 3(6):e194, 2005.

Ilan Eshel. On the changing concept of evolutionary population stability as a reflection of a changing point of view in the quantitative theory of evolution. Journal of mathematical biology, 34(5-6):485–510, 1996.

Ernst Fehr and Urs Fischbacher. Why social preferences matter - the impact of non-selfish motives on competition, cooperation and incentives. The Economic Journal, 112:C1–C33, 2002.

Ernst Fehr and Urs Fischbacher. The nature of human altruism. Nature, 425(6960):785–791, 2003.

Ernst Fehr and Urs Fischbacher. Third-party punishment and social norms. Evolution and human behavior, 25(2):63–87, 2004.

Sevan G. Ficici and Jordan B. Pollack. Evolutionary dynamics of finite populations in games with polymorphic fitness equilibria. Journal of theoretical biology, 247(3):426–441, 2007.

Sevan G. Ficici, Ofer Melnik, and Jordan B. Pollack. A game-theoretic investigation of selection methods used in evolutionary algorithms. In Evolutionary Computation, 2000. Proceedings of the 2000 Congress on, volume 2, pages 880–887. IEEE, 2000.

Sevan G. Ficici, Ofer Melnik, and Jordan B. Pollack. A game-theoretic and dynamical-systems analysis of selection methods in coevolution. Evolutionary Computation, IEEE Transactions on, 9(6):580–602, 2005.

Gary B. Fogel and David B. Fogel. Simulating natural selection as a culling mechanism on finite populations with the hawk–dove game. Biosystems, 104(1):57–62, 2011.

Gary B. Fogel, Peter C. Andrews, and David B. Fogel. On the instability of evolutionary stable strategies in small populations. Ecological Modelling, 109(3):283–294, 1998.

Daniel Friedman. Evolutionary games in economics. Econometrica: Journal of the Econometric Society, pages 637–666, 1991.

Daniel Friedman. On economic applications of evolutionary game theory. Journal of Evolutionary Economics, 8(1):15–43, 1998.

Feng Fu, Martin A. Nowak, and Christoph Hauert. Invasion and expansion of cooperators in lattice populations: Prisoner’s dilemma vs. snowdrift games. Journal of theoretical biology, 266(3):358–366, 2010.

Herbert Gintis. The bounds of reason: Game theory and the unification of the behavioral sciences. Princeton University Press, 2009.

Jesús Gómez-Gardeñes, M. Campillo, L. M. Floría, and Yamir Moreno. Dynamical organization of cooperation in complex topologies. Physical Review Letters, 98(10):108103, 2007.

Alan Grafen and Marco Archetti. Natural selection of altruism in inelastic viscous homogeneous populations. Journal of Theoretical Biology, 252(4):694–710, 2008.

J. Michael Greig. The end of geography? globalization, communications, and culture in the international system. Journal of Conflict Resolution, 46(2):225–243, 2002.

W. D. Hamilton. The genetical evolution of social behaviour I. Journal of Theoretical Biology, 7:1–16, 1964a.

W. D. Hamilton. The genetical evolution of social behaviour II. Journal of Theoretical Biology, 7:17–52, 1964b.

Peter Hammerstein and Reinhard Selten. Game theory and evolutionary biology. Handbook of game theory with economic applications, 2:929–993, 1994.

Marc Harper. Escort evolutionary game theory. Physica D: Nonlinear Phenomena, 240 (18):1411–1415, 2011.

Alan Hastings. Transients: the key to long-term ecological understanding? Trends in Ecology & Evolution, 19(1):39–45, 2004.

Christoph Hauert. Fundamental clusters in spatial 2× 2 games. Proceedings of the Royal Society of London B: Biological Sciences, 268(1468):761–769, 2001.

Christoph Hauert and Michael Doebeli. Spatial structure often inhibits the evolution of cooperation in the snowdrift game. Nature, 428(6983):643–646, 2004.

Christoph Hauert and György Szabó. Game theory and physics. American Journal of Physics, 73(5):405–414, 2005.

Dirk Helbing. Interrelations between stochastic equations for systems with pair interactions. Physica A: Statistical Mechanics and its Applications, 181(1):29–52, 1992.

Dirk Helbing and Anders Johansson. Cooperation, norms, and revolutions: a unified game- theoretical approach. PloS one, 5(10):e12530, 2010.

Mark O. Hill. Diversity and evenness: a unifying notation and its consequences. Ecology, 54(2):427–432, 1973.

W. G. S. Hines. Evolutionary stable strategies: a review of basic theory. Theoretical Population Biology, 31(2):195–272, 1987.

Josef Hofbauer and Karl Sigmund. Evolutionary games and population dynamics. Cambridge University Press, 1998.

Josef Hofbauer and Karl Sigmund. Evolutionary game dynamics. Bulletin of the American Mathematical Society, 40(4):479–519, 2003.

Daniel J. Hruschka and Joseph Henrich. Friendship, cliquishness, and the emergence of cooperation. Journal of Theoretical Biology, 239(1):1–15, 2006.

Vincent Jansen and Minus Van Baalen. Altruism through beard chromodynamics. Nature, 440(7084):663–666, 2006.

Lou Jost. Phylogenetic diversity measures based on hill numbers. Oikos, 113:363–375, 2006.

Timothy Killingback and Michael Doebeli. Spatial evolutionary game theory: Hawks and doves revisited. Proceedings of the Royal Society of London B: Biological Sciences, 263 (1374):1135–1144, 1996.

Ádám Kun and István Scheuring. Evolution of cooperation on dynamical graphs. Biosystems, 96(1):65–68, 2009.

Robert A. Laird. Green-beard effect predicts the evolution of traitorousness in the two-tag prisoner’s dilemma. Journal of Theoretical Biology, 288:84–91, 2011.

Philipp Langer, Martin A. Nowak, and Christoph Hauert. Spatial invasion of cooperation. Journal of Theoretical Biology, 250(4):634–641, 2008.

Laurent Lehmann and Laurent Keller. The evolution of cooperation and altruism - a general framework and a classification of models. Journal of evolutionary biology, 19(5):1365–1376, 2006.

Laurent Lehmann and Nicolas Perrin. Altruism, dispersal, and phenotype-matching kin recognition. The American Naturalist, 159(5):451–468, 2002.

Sabin Lessard. Evolutionary stability: one concept, several meanings. Theoretical population biology, 37(1):159–170, 1990.

Anne E. Magurran. Measuring biological diversity. John Wiley & Sons, 2013.

George J. Mailath. Do people play Nash equilibrium? Lessons from evolutionary game theory. Journal of Economic Literature, 36(3):1347–1374, 1998.

John Maynard Smith. The theory of games and the evolution of animal conflicts. Journal of theoretical biology, 47(1):209–221, 1974.

John Maynard Smith. Evolution and the Theory of Games. Cambridge University Press, 1982.

John Maynard Smith and George R. Price. The logic of animal conflict. Nature, 246:15, 1973.

John M. McNamara, Zoltan Barta, Lutz Fromhage, and Alasdair I. Houston. The coevolution of choosiness and cooperation. Nature, 451(7175):189–192, 2008.

S. Meloni, A. Buscarino, L. Fortuna, M. Frasca, J. Gómez-Gardeñes, V. Latora, and Y. Moreno. Effects of mobility in a population of prisoner's dilemma players. Physical Review E, 79(6):067101, 2009.

Bryce Morsky and C. T. Bauch. Truncation selection and payoff distributions applied to the replicator equation. Manuscript submitted for publication (copy on file with author), 0:0, 2016.

Mayuko Nakamaru, H. Matsuda, and Y. Iwasa. The evolution of cooperation in a lattice- structured population. Journal of theoretical Biology, 184(1):65–81, 1997.

Martin A. Nowak. Five rules for the evolution of cooperation. Science, 314(5805):1560–1563, 2006.

Martin A. Nowak and Robert M. May. Evolutionary games and spatial chaos. Nature, 359 (6398):826–829, 1992.

Martin A. Nowak and Karl Sigmund. Evolutionary dynamics of biological games. Science, 303(5659):793–799, 2004.

Martin A. Nowak, Corina E. Tarnita, and Tibor Antal. Evolutionary dynamics in structured populations. Philosophical Transactions of the Royal Society B: Biological Sciences, 365(1537):19–30, 2010.

Hisashi Ohtsuki and Martin A. Nowak. Evolutionary stability on graphs. Journal of Theoretical Biology, 251(4):698–707, 2008.

Jorge M. Pacheco, Arne Traulsen, Hisashi Ohtsuki, and Martin A. Nowak. Repeated games and direct reciprocity under active linking. Journal of theoretical biology, 250(4):723–731, 2008.

Matjaž Perc and Attila Szolnoki. Coevolutionary games: a mini review. BioSystems, 99(2):109–125, 2010.

Andreas Pusch, Sebastian Weber, and Markus Porto. Impact of topology on the dynamical organization of cooperation in the prisoner's dilemma game. Physical Review E, 77(3):036120, 2008.

David C. Queller. Expanded social fitness and Hamilton's rule for kin, kith, and kind. Proceedings of the National Academy of Sciences, 108(Supplement 2):10792–10799, 2011.

Howard Rachlin and Bryan A. Jones. Altruism among relatives and non-relatives. Behavioural Processes, 79(2):120–123, 2008.

Golriz Rezaei and Michael Kirley. Dynamic social networks facilitate cooperation in the n-player prisoner's dilemma. Physica A: Statistical Mechanics and its Applications, 391(23):6199–6211, 2012.

Rajiv N. Rimal and Kevin Real. Understanding the influence of perceived norms on behaviors. Communication Theory, 13(2):184–203, 2003.

Rajiv N. Rimal, Maria K. Lapinski, Rachel J. Cook, and Kevin Real. Moving toward a theory of normative influences: How perceived benefits and similarity moderate the impact of descriptive norms on behaviors. Journal of health communication, 10(5):433–450, 2005.

Rick L. Riolo, Michael D. Cohen, and Robert Axelrod. Evolution of cooperation without reciprocity. Nature, 414(6862):441–443, 2001.

Carlos P. Roca, José A. Cuesta, and Ángel Sánchez. Evolutionary game theory: Temporal and spatial effects beyond replicator dynamics. Physics of Life Reviews, 6(4):208–249, 2009.

Everett M. Rogers. Diffusion of innovations. Simon and Schuster, 2010.

Francisco C. Santos, Jorge M. Pacheco, and Tom Lenaerts. Evolutionary dynamics of social dilemmas in structured heterogeneous populations. Proceedings of the National Academy of Sciences of the United States of America, 103(9):3490–3494, 2006.

Peter Schuster and Karl Sigmund. Replicator dynamics. Journal of theoretical biology, 100(3):533–538, 1983.

C. E. Shannon. A mathematical theory of communication. Bell System Tech. J., 27(3): 379–423, 1948a.

C. E. Shannon. A mathematical theory of communication. Bell System Tech. J., 27(4): 623–656, 1948b.

Estrella A. Sicardi, Hugo Fort, Mendeli H. Vainstein, and Jeferson J. Arenzon. Random mobility and spatial structure often enhance cooperation. Journal of theoretical biology, 256(2):240–246, 2009.

E. H. Simpson. Measurement of diversity. Nature, 163:688, 1949.

Brian Skyrms. The stag hunt and the evolution of social structure. Cambridge University Press, 2004.

Charles H. Smith. Natural selection: A concept in need of some evolution? Complexity, 17(3):8–17, 2012a.

Charles H. Smith. Alfred Russel Wallace and the elimination of the unfit. Journal of biosciences, 37(2):203–205, 2012b.

Robert Sugden. The economics of rights, co-operation and welfare. Blackwell Oxford, 1986.

György Szabó and Gábor Fáth. Evolutionary games on graphs. Physics Reports, 446(4):97–216, 2007.

Attila Szolnoki and Matjaž Perc. Resolving social dilemmas on evolving random networks. EPL (Europhysics Letters), 86(3):30007, 2009.

Jun Tanimoto. Does a tag system effectively support emerging cooperation? Journal of Theoretical Biology, 247:756–764, 2007.

Christine Taylor, Drew Fudenberg, Akira Sasaki, and Martin A. Nowak. Evolutionary game dynamics in finite populations. Bulletin of mathematical biology, 66(6):1621–1644, 2004.

Peter D. Taylor and Alan Grafen. Relatedness with different interaction configurations. Journal of theoretical biology, 262(3):391–397, 2010.

Peter D. Taylor and Leo B. Jonker. Evolutionary stable strategies and game dynamics. Mathematical biosciences, 40(1):145–156, 1978.

Arne Traulsen and Martin Nowak. Chromodynamics of cooperation in finite populations. PLoS ONE, 2:1–6, 2007.

Arne Traulsen and Heinz Georg Schuster. Minimal model for tag-based cooperation. Physical Review E, 68(4):046129, 2003.

Arne Traulsen, Jorge Pacheco, and Lorens Imhof. Stochasticity and evolutionary stability. Physical review E, 74(2):021905, 2006.

Mendeli Vainstein, Ana Silva, and Jeferson Arenzon. Does mobility decrease cooperation? Journal of Theoretical Biology, 244(4):722–728, 2007.

Erica van de Waal, Christèle Borgeaud, and Andrew Whiten. Potent social learning and conformity shape a wild primate's foraging decisions. Science, 340(6131):483–485, 2013.

Thomas Vincent and Joel Brown. The evolution of ESS theory. Annual Review of Ecology and Systematics, pages 423–443, 1988.

Lucas Wardil and Christoph Hauert. Origin and structure of dynamic cooperative networks. Scientific reports, 4, 2014.

Jörgen Weibull. Evolutionary game theory. MIT Press, 1997.

Bin Wu, Da Zhou, Feng Fu, Qingjun Luo, Long Wang, and Arne Traulsen. Evolution of cooperation on stochastic dynamical networks. PLoS ONE, 5:e11187, 2010.

Bin Wu, Da Zhou, and Long Wang. Evolutionary dynamics on stochastic evolving net- works for multiple-strategy games. Physical Review E, 84(4):046111, 2011.

Jun Zhang, Wei-Ye Wang, Wen-Bo Du, and Xian-Bin Cao. Evolution of cooperation among mobile agents with heterogenous view radii. Physica A: Statistical Mechanics and its Applications, 390(12):2251–2257, 2011.