A Sheep in Wolf’s Clothing: Levels of Deceit and Detection in the Evolution of Communication
by
Shahab Zareyan
B.Sc., Honours in Biology and Minor in Mathematics, 2017
a thesis submitted in partial fulfillment of the requirements for the degree of
Master of Science
in
The Faculty of Graduate and Postdoctoral Studies (Zoology)
The University of British Columbia (Vancouver)
December 2018
© Shahab Zareyan, 2018

The following individuals certify that they have read, and recommend to the Faculty of Graduate and Postdoctoral Studies for acceptance, the thesis entitled:
A Sheep in Wolf’s Clothing: Levels of Deceit and Detection in the Evolution of Communication submitted by Shahab Zareyan in partial fulfillment of the requirements for the degree of Master of Science in Zoology.
Examining Committee:
Christoph Hauert, Mathematics (Co-Supervisor)
Sarah Otto, Zoology (Co-Supervisor)
Darren Irwin, Zoology (Additional Examiner)
Additional Supervisory Committee Members:
Michael Doebeli, Zoology and Mathematics (Supervisory Committee Member)
Abstract
Trivers has hypothesized that self-deception in our species has evolved for the better deception of others: in an arms race between deception and deception-detection, the dishonest individuals evolve ever-more complex trickery and the deceived an ever-more refined ability to distinguish honesty from deception. Detection at some point becomes so precise that a degree of self-deception can evolve to avoid emitting secondary cues that otherwise give away the deceit. In an attempt to formalize this, we focus on aspects of self-deception that can be generalized to non-humans, as human self-deception by itself relies on concepts that are difficult to define or to apply to other organisms. We formally explore one central aspect of Trivers’ hypothesis: the evolution of costly and well-integrated, or deep, deceptive morphs that span multiple signals and cues. We demonstrate that the depth of deception in a communicative interaction is correlated with the number of signals detected, the cost of errors in judgment for signal detectors, and the benefits of successful deception. We also show that the frequency of well-integrated deceptive strategies is highest when the cost of errors in judgment is high and the cost of detection of other less well-integrated forms of deception is low. These results may partially explain variation in deception in nature and provide researchers with predictions that can be tested empirically, with obvious implications for self-deception. Moreover, we argue that self-deception under Trivers’ hypothesis is the product of a hierarchical system, in this case the cognitive system, with some parts (e.g., the subconscious) controlling and ultimately manipulating
the information that is received by other parts (e.g., the conscious). Although we do not model this, we emphasize that hierarchies are integral parts of many systems such as gene regulatory networks. Thus, in response to an arms race with an adversary, these hierarchies can potentially evolve “internal deception”, with some parts transmitting manipulated information to other parts to prevent information leakage. We argue that modeling how properties of hierarchies affect the evolution of deception can allow for testable predictions and a better understanding of deception and self-deception in general.
Lay Summary
It has been hypothesized that self-deception has evolved for the better deception of others: those that believe in their lies are better at convincing others. The focus of this thesis is using mathematical models to investigate whether aspects of the hypothesis can be generalized to non-humans and thus can be empirically tested and placed on a firmer evolutionary foundation. Many deceptive strategies in nature, like self-deception, are costly and exaggerated; they involve dishonest signaling as well as attempts to hide cues of dishonesty. We show that these well-integrated forms of deception evolve when many signals are detected, when there is a large benefit to the deception, and when the cost of errors in judgment for those that are deceived is high. We further comment on other aspects of self-deception that can be tested in non-humans.
Preface
Chapter 2 has been submitted for publication. The candidate’s contributions are outlined below:
• Zareyan, S., Otto, S. P., and Hauert, C. (2019) A Sheep in Wolf’s Clothing: Levels of Deceit and Detection in the Evolution of Communication. Submitted. Conceived and designed the experiments: SZ, CH, SPO. Performed the experiments: SZ. Analyzed the data: SZ, SPO, CH. Wrote the paper: SZ, CH, SPO.
Table of Contents
Abstract
Lay Summary
Preface
Table of Contents
List of Figures
List of Supplementary Material
Acknowledgments

1 Introduction
  1.0.1 Thesis Overview
  1.0.2 Trivers’ Hypothesis
  1.0.3 Evidence for Trivers’ Hypothesis
  1.0.4 Difficulties Modelling Self-Deception
  1.0.5 Deceptive Arms Races

2 Evolution of Well-Integrated Deception in A Simple Signaling Game
  2.1 Introduction
  2.2 Evolution of Deception
  2.3 Evolution of Deception-Detection
  2.4 Discussion
  2.5 Appendix

3 Conclusion
  3.0.1 Hierarchical Information Processing Systems
  3.0.2 Concluding Remarks

Bibliography
List of Figures
Figure 2.1 Surface dynamics around the (H, D; T, I) equilibrium under the (a) standard and the (b) adjusted replicator dynamics. For both panels, β = 2, γ = 1, c = 2, and s = 1. Equilibria are indicated by black dots.
Figure 2.2 Cycles of invasions and re-invasions upon the introduction of well-integrated deceivers.
Figure 2.3 Dynamics of prey (blue) and predators (black) around the (H, D, W; T, D, I) equilibrium point under the (a) standard and (b) adjusted replicator dynamics. The time period over which the replicator equations were numerically solved is indicated at the bottom of each sub-figure. For both panels, t indicates time, β = 3, c = 3, s = 1, γ = 2, δ = 1, and g = 2.
Figure 2.4 Response to perturbations away from the tri-morphic equilibrium in prey (blue) and predator (black) populations. For all panels, we perturb the system in directions indicated at the top of the figure. We then determine the response to each perturbation using the standard replicator dynamics (second row), and visualize the response by drawing vector fields that depict selection gradients following each perturbation. Note that perturbing the system in one of the populations induces selection only in the other population, not in the original population. Parameters: β = 3, γ = 2, δ = 1, c = 3, s = 1, and g = 2.
Figure 3.1 Schematic representation of the hierarchical information processing system underlying Trivers’ hypothesis on the evolution of self-deception. S: subconscious module; C: conscious module; 2°: secondary module.
List of Supplementary Material
A Mathematica file is attached to this thesis. It contains all the calculations for deriving the results in Chapter 2, as well as the code for recreating all the figures.
Acknowledgments
I would like to thank my supervisors, Christoph and Sally, for their unconditional support; for accepting me as a student even though my undergraduate background was different from their research focus; and for letting me pursue the ambitious questions I was interested in. I am deeply grateful for this; I do not think this would have been possible at any other institution or research group.
I would like to thank the members of the Otto, Hauert, and Doebeli labs, and Michael Doebeli in particular, my committee adviser, for valuable feedback on my projects, and for their support throughout my degree. I am also grateful to Darren Irwin, who, as the departmental examiner, thoroughly assessed the scholarship of this thesis and provided very helpful suggestions.
I would like to thank Ali for constant, forceful, and in-depth critique of the work and ideas related to it, the likes of which, with regard to its depth, I have rarely seen. I would also like to thank him for much-needed guidance on how to rigorously apply the scientific method to questions of interest, how to ask questions, and what type of questions to ask, an influence which permeates every single sentence in this thesis.
I would like to thank my family who, despite unfamiliarity with my research, gave me all the tools I needed to survive. I would particularly like to thank Yas for giving me much-needed hope.
Finally, I am grateful to NSERC CGS-M and the Zoology Department for financial support.
Chapter 1
Introduction
1.0.1 Thesis Overview

The core of this thesis, presented in Chapter 2, focuses on modeling the co-evolutionary arms race between deception and deception-detection. The type of deception modeled is generic, and thus, qualitatively, the results of the model apply to many different systems of communication in nature. What is not addressed in the thesis core, however, is the original motivation behind the project, the original question, and specifically how the modeling framework in Chapter 2 relates to that question. We address these here by placing this work in the broader, but more vague, context of self-deception. We then connect this material to the formal model in Chapter 2. In Chapter 3, we expand on the relevance of the results of Chapter 2 for the original question.
1.0.2 Trivers’ Hypothesis

The original intention of the thesis was to expand a hypothesis put forward by Robert Trivers on the arms race between deception and detection and its effects on cognitive evolution in humans [38, 40]. In the 1970s, Trivers had applied the logic of natural selection to a variety of social behaviors and had provided predictions that are to this day being tested in a variety of organisms [39]. Yet he encountered an interesting puzzle around the same
time [39]: “On the one hand, our sense organs have evolved to give us a marvelously detailed and accurate view of the outside world (...) But once this information arrives in our brains, it is often distorted and biased to our conscious minds. We deny the truth to ourselves. We project onto others traits that are in fact true of ourselves and then attack them! We repress painful memories, create completely false ones, rationalize immoral behavior, act repeatedly to boost positive self-opinion, and show a suite of ego-defense mechanisms. Why?” (p. 2).
Trivers used the phrase self-deception, or the construction and the subsequent belief in a false sense of the world and the self [10], to encapsulate all these phenomena. His interest in the question was warranted particularly given the implications of self-deception for our intentions: if one can fabricate a false sense of the world, one can also fabricate one’s intentions. This is a source of moral dilemmas as intention plays a central part in our conception of morality [23]: we praise ourselves or our compatriots for “trying” to do the right thing or simply “having” benevolent thoughts; our criminal code is heavily based on wrongdoers’ intentions (intentional battery, intentional homicide, intentional infliction of emotional distress, etc.); and we justify wars that kill hundreds of thousands of innocent civilians by the “good” intentions of our nation states. Given the dilemmas to which self-deception gives rise, it is of no surprise that great works of philosophy, religion, and literature, since the dawn of history, have grappled with our tendency towards exercising self-deception and its colossal consequences [10]. Nevertheless, at the time when Trivers was puzzled by the concept, there existed very few testable hypotheses and little empirical work on self-deception.
Indeed, the scientific study of self-deception has historically lagged behind the many debates it has elicited, perhaps because the phenomenon has been questioned on all possible grounds, best illustrated by the many attempts to define exactly what constitutes self-deception [10]. How can an organism be the deceiver and the deceived at the same time? Self-deception perhaps necessitates the conception of a non-unitary mind (itself a contested view [4]), with some parts deceiving other parts.
To demonstrate that an individual is self-deceiving, one would need to determine what these units are, assess the information stored in each one of them, and assess how they share information with each other, a task that is complicated by the organism hiding the information from itself and others. These roadblocks are further exacerbated when one attempts to apply the concept to non-humans.
More phenomenological studies are, however, available on certain human biases that are arguably consistent with self-deception. Examples include: 1) the better-than-average effect, or the belief of the majority that they are better than average in their attributes [12], as when 94% of academics believe that their teaching abilities are better than the average at their institution [5]; 2) self-handicapping, or engaging in a costly behavior that ensures failure on tasks that one is likely to fail anyway; the benefit being that the self-handicapper can then blame the costly behavior, as opposed to ineptitude, for his/her failures, and by doing so maintain self-confidence in a self-deceiving manner [41]; and 3) not gathering critical information about one’s health status, or doing so in a biased way, out of fear of obtaining diagnostic results that one might not want to hear [8, 11].
In light of observations similar to the above, the dominant school of thought at the time of Trivers considered self-deceptive biases as necessary illusions required for us to feel better about ourselves and to maintain our mental health [42]. Trivers, being partly a naturalist, observing psychological warfare in primates such as baboons, began first by asking questions about self-deception’s survival value [40].
In that context, there was a problem with the aforementioned explanation [40]: “Even if being happier is associated with higher survival and reproduction, as expected, why should we use such a dubious - and potentially costly - mechanism as self-deception to regulate our happiness?” (p. 4).
Various hypotheses can provide an answer to the above. Trivers proposed an alternative: self-deception is an offensive strategy in the service of deceit; those that believe in their own lies are also better at convincing others of their lies [38]. This, Trivers postulated, can originate from an evolutionary arms race, where evolution of deception-detection selects for
better deception, which then selects for better detection. At some point, deception-detection becomes so accurate that an organism might benefit from self-deceiving in order to not give away secondary cues of its deception.
The advantage of Trivers’ hypothesis in comparison with previous attempts is that it simultaneously addresses three of Tinbergen’s four categories of explanation of animal behavior [37]: 1) phylogeny: what can lead to the evolution of self-deception; 2) survival value: what are the benefits of self-deception to an organism that exercises it; and 3) mechanism, which is implicit in the hypothesis, and which we discuss in more detail in the conclusion of this thesis. As a consequence, it suggests avenues of research that previously could not be conceived.
1.0.3 Evidence for Trivers’ Hypothesis

The hypothesis has led to the discovery of some interesting patterns. A series of recent papers show that people, when incentivized to argue for some position, are likely to bias their initial perception of that position [1, 28] or are likely to bias their information gathering so that they are more likely to encounter information that supports their position [34]. As a consequence, these people become more convincing in the position for which they are asked to argue, in line with Trivers’ hypothesis. Most importantly, the biases are maintained even when they are incentivized to objectively assess and correct their position, indicating that they perhaps truly believe in them, that they are self-deceived, and that their self-deception comes at the cost of inflexibility [34]. Although the cost in such experimental settings is negligible, one can imagine that informational distortion of this type, when practiced regularly, can have substantial consequences for those that distort.
On the theoretical side, three important papers have attempted to model aspects of Trivers’ hypothesis [2, 18, 26]. All papers use some version of the hawk-dove game [33]: players are paired with one another and are given the option of engaging in a fight with their opponent over a resource. If no one
fights, no one gets anything. If one fights but the other does not, the one that chose to fight will get all the resource at no cost. If both fight, the outcome of the fight is determined by some attribute of each player. Players’ decision to fight or not is determined by their perception of their attribute. Furthermore, players can signal their perception to their opponents, which can affect the opponent’s decision to engage in the fight. Self-deception is defined as using an exaggerated version of the attribute to decide whether to fight or not, and it can be in the service of deceit if the exaggerated perception reduces the opponents’ chance of engaging in the fight. Only one of the studies defines a non-self-deceptive deceptive strategy, involving a player that does not believe in an exaggerated attribute but nevertheless signals an exaggerated trait value, and competes this strategy against the self-deceptive one [26]. These studies, in general, report that under appropriate costs for self-deception as compared to the other strategies, and/or under assumptions about the gullibility of the opponent, self-deception is favored.
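The payoff structure of this hawk-dove-style game can be sketched as follows. This is only an illustration of the verbal description above, not a reimplementation of the cited models (which add perception and signaling layers); the parameter values (resource V, fight cost C, win probability) are hypothetical.

```python
# Illustrative sketch of the hawk-dove-style payoff structure described
# above. V (resource value), C (cost of losing a fight), and p1_wins
# (win probability set by player 1's attribute) are hypothetical values.

def payoff(p1_fights, p2_fights, V=2.0, C=3.0, p1_wins=0.5):
    """Expected payoff to player 1 for one interaction."""
    if not p1_fights and not p2_fights:
        return 0.0               # no one fights, no one gets anything
    if p1_fights and not p2_fights:
        return V                 # uncontested resource, no cost
    if not p1_fights and p2_fights:
        return 0.0               # opponent takes the resource
    # both fight: outcome determined by player 1's attribute; the
    # loser pays the fight cost
    return p1_wins * V - (1 - p1_wins) * C

assert payoff(False, False) == 0.0
assert payoff(True, False) == 2.0
assert payoff(True, True) == -0.5   # evenly matched fight is costly
```

In the cited models, deception enters through the signaling layer: an exaggerated perceived attribute effectively lowers the opponent's willingness to fight, steering the interaction toward the profitable `(True, False)` cell.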
1.0.4 Difficulties Modelling Self-Deception

Although the above studies address the evolution of interesting strategies, none directly tackle why the self-deceptive strategies they considered are indeed self-deceptive as opposed to simple cognitive algorithms; that is, none fully addressed what aspects of their strategies are self-deceptive, and what aspects of self-deception are not included in their modelling framework. Most importantly, although the studies demonstrate how certain cognitive algorithms outperform other algorithms, none attempted to test Trivers’ hypothesis in its entirety; that is, none considered both the survival value and the phylogeny aspects of Tinbergen’s criteria [37]. The original aim of the thesis was to begin with deception in a population and attempt to demonstrate how such forms, when placed in an arms race with deception-detection, evolve to become self-deceptive.
Nevertheless, addressing the first of these problems, that is, attempting to formulate a comprehensive mathematical definition of self-deception,
turned out to be unfruitful. Such definitions necessitate the formulation of many different complicated assumptions that inevitably limit the generality of the model and hence constrain the applicability of its predictions. For example, what constitutes “self”? Does it refer to the conscious part of the brain? How are we to define consciousness mathematically? Unfortunately, these questions cannot be answered given our current understanding of humans and non-humans; attempting to provide an answer can lead to unproductive speculative work [9].
Indeed, Trivers himself had, seemingly, recognized this problem and attempted to address it. In one of his books [39], Trivers emphasized that although hypotheses on social behavior can be motivated by observations in our own species, for them to ultimately attain the rigor of a hard science, as opposed to a just-so story, the researcher should focus on generalizing the behavior to non-humans. In his most comprehensive work on self-deception [40], he states that: “it stands to reason that if our theory of self-deception rests on a theory of deception, advances in the latter will be especially valuable”. Thus, by attempting to study analogous behaviors in non-humans, one is more likely to find more generalizable results with greater predictive power that can then guide future research on the question of interest.
Instead of focusing exclusively on self-deception, the main aim of the thesis thus shifted towards developing a better understanding of the evolution of deception in arms races in general. Do deceivers evolve to conceal secondary cues of deception even when this would be of a very high cost? What would be the response of the deceived to sophisticated deception? Does the arms race continue forever, or does it settle on some stable equilibrium population state permanently?
Admittedly, these questions do not address self-deception directly, but attempting to answer them can result in robust, testable predictions that ultimately allow for a better understanding of self-deception.
1.0.5 Deceptive Arms Races

Arguably, many antagonistic interactions in nature are consequences of an arms race between deception and detection. In host-parasite interactions, the parasite attempts to mimic the host through novel strategies, and the host is constantly engaged in differentiating between new forms of deceit and non-deceit. In the context of sexual selection, individuals attempt to determine the quality of potential mates, and potential mates attempt to mislead through ever-exaggerated signals. In predator-prey interactions, predators benefit from knowing whether a potential prey is worth pursuing/eating, while prey evolve to appear toxic or able to defend themselves [24].
The importance of arms races in signaling interactions was popularized by Dawkins and Krebs in two papers in the 1980s [7, 21]. They argued that a player in a signaling interaction assumes either of two roles: a “manipulator” that sends deceptive signals as its interests do not overlap with a “mind-reader”, who nevertheless attempts to read through the deception. In an arms race between the two, signals and sensory systems evolve to become ever-more complex.
At the time, however, some argued that the “cynical” view of communication proposed by Dawkins and Krebs [32] is not logical [29] because it failed to explain why the deceived do not simply evolve to ignore signals instead of evolving to detect deception. The latter would make signaling futile, causing the collapse of communication altogether [29]. This so-called “logical flaw” of Dawkins’ and Krebs’ argument led to its dismissal by some. Alternatively, Zahavi’s handicap principle [44] proposed that full honesty might be achieved in signaling systems if receivers evolve to only pay attention to signals that are very costly to fake. Theoretical work on the subject subsequently focused on the analysis of Zahavi’s alternative [29].
Indeed, to our knowledge, only one model has focused on the possibility of an arms race between deception and deception-detection [20]. Instead of looking for stable population equilibria in a game theoretical model, which has been the standard theoretical approach to the handicap principle [29, 32], the authors use simulations to show that populations of signaler and
receiver neural networks can be locked into never-ending arms races. They demonstrate that in such contexts, deception jumps from one type of display to another. Whether and when deception evolves to become exaggerated, however, is not explicitly addressed in their model.
A main goal of Chapter 2 is to present an analytical model that simultaneously encapsulates Dawkins’ and Krebs’ arms race perspective and Zahavi’s handicap principle. Building on recent work [16, 45], we address the “logical flaw” of the arms race perspective and demonstrate that it is not really a flaw, even when the standard game theoretical approach is utilized. We demonstrate that signaling systems can easily evolve stable forms of deception and detection, even when the deceived are given the opportunity to simply ignore signals. This then provides exactly the forces required for the subsequent evolution of complex forms of detection, or “mind-reading”. Importantly, our model further finds that exaggerated displays, as argued in Dawkins and Krebs, can evolve and be stably maintained in communication systems, resulting in rich evolutionary dynamics.
Chapter 2
Evolution of Well-Integrated Deception in A Simple Signaling Game
2.1 Introduction

Natural selection has led to the evolution of complex systems of signaling and communication across the tree of life [29, 35]. Whenever the interests of interacting partners differ, however, communication systems are prone to cheating. Zahavi’s handicap principle suggests that communication remains immune to deception as long as the production cost of unreliable signaling is high [44]. When this does not hold, however, cheating invades and potentially sets off antagonistic co-evolution between deception and detection of that deception: selection could, for example, favor prey that communicate strength and high escape capability to predators even if they are truly undefended. In response, predators that are better able to discern weak prey are favoured, which in turn selects for prey that better hide their susceptibility, resulting in an arms race [7, 21].
Implicit in the notion that better discrimination evolves is the assumption that false signals are poorly coordinated with other cues that indicate
the status of an organism (e.g., defended or undefended). Hence, a more integrated level of deception is possible if these other cues evolve and become consistent with the false signals. This, however, is not always possible: physico-chemical and developmental factors, for example, constrain trait covariance, making certain alterations, such as the ones required for attaining signal consistency, very costly or impossible. This is particularly true in the context of an arms race: although there are no reasons to assume that constraints are at play initially, as the arms race proceeds, the number of signals and cues detected increases, and with it the likelihood that some co-vary in a limited number of ways. In such cases, successful deception may still be possible through highly pleiotropic and thus costly changes to the nature of development in the organism, as this can supply the variation needed for signal consistency. In light of these factors, our focus here is to determine how these well-integrated or deep forms of deception, defined more formally as deception involving manipulation of multiple signals and secondary cues that otherwise give away the deceit, evolve and are maintained in natural populations.
In nature, cases of multi-signal deception, spanning multiple domains (e.g., morphology, behavior) and manifested through large-scale, broad-acting developmental changes, provide the most convincing evidence for stable maintenance of deep deceivers, as such deception is expected to be of particularly high cost given the nature of the underlying changes.
The clearest examples of this are: 1) female mimicry in a variety of animal species, including insects, fish, birds, and mammals, initiated, in some cases, early on during development [27]; 2) Batesian mimicry in certain butterfly species, regulated by major developmental transcription factors [22, 25, 36]; 3) morphological and behavioral mimicry of ants by spiders to evade predators [30], which, importantly, is also associated with substantial costs such as a reduction in the number of eggs laid per eggsac due to the narrowing of the bodies in the mimics [6]; and 4) mimicry of sticks by stick insects (Phasmatodea), which necessitates very thin bodies and has resulted in loss of some of the internal organs that normally come in pairs [40].
A more relevant example for humans is self-deception, defined as deception of the conscious part of the mind by the subconscious, either through biasing the gathering of information or biasing the gathered information. Trivers [38] hypothesized that self-deception evolved for better deception of others: by virtue of believing in their own lies, self-deceivers do not give secondary cues that otherwise give away the deceit. Besides involving multiple signals and cues, and thus fulfilling the criteria of a well-integrated deception, its underlying mechanism of early information manipulation in the subconscious is interestingly analogous to the early developmental changes that underlie deception in the above examples. Self-deception is expected to come at a cost, however, in this case due to a biased perception of reality and suboptimal decision-making, which has been outlined in many different contexts in humans [40].
The aim of this work is to formulate that which is common to all of the above cases in mathematical terms. We extend recent work on the evolution of partially honest communication [16, 17, 45] by deriving analytical conditions for the evolution of stable systems with multi-signal detection and well-integrated but costly forms of deception. We extend a signaling game that has traditionally been used to provide support for costly signaling theory to demonstrate shifts in the levels of deception. For clarity, we discuss a prey-predator signaling interaction, though various aspects can easily be applied to other situations.
2.2 Evolution of Deception

The story behind the model is simple: a predator’s hunting success is affected by the type of prey it pursues. A prey that is strong, fast, and well-armed is unlikely to be caught; the predator wastes energy attempting to pursue such a prey, and it can also sustain injuries if the pursuit results in confrontation. However, pursuit of undefended prey, which are by definition slow and physically incapable of fighting back, is more likely to result in capture. As a result, it is beneficial for predators to distinguish between the different prey types and pursue only the undefended.
Thus, prey fall into two categories of defended and undefended, a trait
which we assume is exclusively a function of environmental factors such as prenatal or early-life conditions. The defended correspond to those that received adequate care and nutrition and the undefended to the ones that did not. We further assume that at any point in time a constant proportion of prey are defended, and all others are undefended.
Prior to the evolution of any signaling, the predator population is expected to be composed of those that always pursue prey. We refer to these as indiscriminate (I). Their payoff, Π_I, is the weighted benefit of pursuing the undefended, β, minus the weighted cost of pursuing the defended, γ:
Π_I = β − γ    (2.1)
Hence, if there exist phenotypic differences between the two prey types, predators could evolve to detect these and pursue only prey with the undefended phenotypes. Such a predator, which we refer to as trusting (T), as it trusts the signal conveyed by the phenotype of the prey, has a payoff:
Π_T = β    (2.2)

which is always higher than Π_I (equation 2.1). On the other hand, we assume prey pay a weighted cost c when they are pursued by predators when they are undefended¹. Thus a prey that faces a mixed population of indiscriminate and trusting predators with frequencies x_i and x_t, respectively, will always pay the cost of pursuit:
P_H = −x_i c − x_t c = −c    (2.3)
We refer to this prey as the honest (H) prey as it honestly communicates its phenotype. With payoffs defined for the three strategies, we can determine whether trusting predators invade a population of indiscriminate predators. The
¹ We could also consider the cost associated with being pursued when defended. However, as we will see, the difference between the payoff of the various prey strategies that we consider is manifested only when undefended. Including this additional cost does not change the qualitative results.
evolutionary dynamics is governed by the replicator equations:
ẋ_k = x_k (Π_k(y) − Π̄)    (2.4a)
ẏ_k = y_k (P_k(x) − P̄)    (2.4b)
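As a minimal numerical illustration of these replicator equations, the Python sketch below iterates the predator dynamics of equation (2.4a) against all-honest prey, using the constant payoffs Π_I = β − γ and Π_T = β from equations (2.1) and (2.2). The parameter values β = 2, γ = 1 are borrowed from the caption of Figure 2.1; the function names and the Euler time-stepping are our own illustrative choices, not the thesis's Mathematica code. Since Π_T > Π_I, a rare trusting mutant should sweep to fixation.

```python
# Sketch of the replicator dynamics x_k' = x_k (Π_k − Π̄) for the
# predator population facing all-honest prey. Parameters β = 2, γ = 1
# are illustrative (taken from the Figure 2.1 caption).

beta, gamma = 2.0, 1.0

# Against honest prey the predator payoffs are constant (eqs. 2.1, 2.2).
PAYOFF = {"I": beta - gamma, "T": beta}

def step(x, dt=0.01):
    """One Euler step of the replicator equation for frequencies x."""
    pi_bar = sum(x[k] * PAYOFF[k] for k in x)      # mean payoff Π̄
    return {k: x[k] + dt * x[k] * (PAYOFF[k] - pi_bar) for k in x}

x = {"I": 0.99, "T": 0.01}       # trusting predators start rare
for _ in range(5000):            # integrate to t = 50
    x = step(x)

assert x["T"] > 0.99             # trusting predators take over
```

Since both payoffs are frequency-independent here, the dynamics reduce to logistic growth of the trusting type at rate γ; the interesting (cyclic) behavior shown in Figures 2.1 to 2.4 arises only once deceptive prey strategies make the payoffs frequency-dependent.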