Evolutionary Dynamics in Changing Environments


EVOLUTIONARY DYNAMICS IN CHANGING ENVIRONMENTS

Dissertation to acquire the doctoral degree in mathematics and natural science ’Doctor rerum naturalium’ at the Georg-August-Universität Göttingen, in the doctoral degree programme ’Physics of Biological and Complex Systems’ at the Georg-August University School of Science (GAUSS).

Submitted by Frank Stollmeier from Lippstadt, Germany. Göttingen, 2018.

Thesis Committee:

• Prof. Dr. Marc Timme (First Referee), Institute for Theoretical Physics, Technical University of Dresden
• Prof. Dr. Theo Geisel (Second Referee), Max Planck Institute for Dynamics and Self-Organization
• Dr. Jan Nagler (Supervisor), Computational Physics for Engineering Materials, IfB, ETH Zurich
• Prof. Dr. Ulrich Parlitz, Max Planck Institute for Dynamics and Self-Organization

Further members of the examination board:

• Prof. Dr. Stefan Klumpp, Institute for Nonlinear Dynamics, University of Göttingen
• Prof. Dr. Reiner Kree, Institute for Theoretical Physics, University of Göttingen

Date of the oral examination: April 19, 2018

RELATED PUBLICATIONS

A. F. Stollmeier and J. Nagler. Unfair and Anomalous Evolutionary Dynamics from Fluctuating Payoffs. In: Physical Review Letters 120.5 (2018), p. 058101.
B. F. Stollmeier. [Re] A simple rule for the evolution of cooperation on graphs and social networks. In: ReScience 3.1 (2017).
C. F. Stollmeier and J. Nagler. Phenospace and mutational stability of shared antibiotic resistance. Submitted to Physical Review X.

Chapter 3 is based on publication A, chapter 4.3 on publication B, and chapter 5 on the submitted paper C. Each chapter also includes additional content; the texts, equations and figures in these chapters are modified and partly rearranged compared to the publications.

CONTENTS

1 Introduction
  1.1 Motivation
  1.2 Outline
2 Fundamentals of evolutionary game theory
  2.1 Unexpected results of Darwinian evolution
  2.2 Evolutionarily stable state
  2.3 Replicator equation
  2.4 Frequency-dependent Moran process
  2.5 Evolutionary games
3 Evolutionary game theory with payoff fluctuations
  3.1 Introduction
  3.2 Anomalous evolutionarily stable states
  3.3 Deterministic payoff fluctuations
  3.4 Classification of games with payoff fluctuations
  3.5 Discussion
  3.6 Proofs and methods
    3.6.1 Approximation of the geometric mean
    3.6.2 Stochastic payoff fluctuations
    3.6.3 Correspondence of payoff rank criteria and generalized criteria with constant payoffs
4 Evolutionary games on networks
  4.1 Introduction
  4.2 Existing results on structured populations indicate effects of inherent noise
  4.3 Implementation of evolutionary games on networks
    4.3.1 Motivation
    4.3.2 Short review of Ohtsuki et al.: A simple rule for the evolution of cooperation on graphs and social networks (Nature, 2006)
    4.3.3 Description of algorithms
    4.3.4 Comparison of own implementation with results in the paper by Ohtsuki et al.
  4.4 The effect of inherent payoff noise on evolutionary dynamics
    4.4.1 Methods
    4.4.2 Results
  4.5 Discussion
5 Shared antibiotic resistance
  5.1 Motivation
  5.2 Introduction
  5.3 The phenospace of antibiotic resistance
  5.4 Cell model
  5.5 Colony model
  5.6 Fitness landscape
  5.7 Evolutionary dynamics
    5.7.1 Strain A alone
    5.7.2 Strain A invaded by B
    5.7.3 Strain A and B, invaded by C
  5.8 Mutational stability and the direction of evolution
  5.9 Conclusion
  5.10 Outlook
6 Adaptation to fluctuating temperatures
  6.1 Introduction
  6.2 Outline
  6.3 Theoretical examples
    6.3.1 Periodically changing temperature
    6.3.2 Randomly fluctuating temperature
  6.4 Modeling the growth of nematodes
    6.4.1 Development
    6.4.2 Reproduction
    6.4.3 Population growth
    6.4.4 Growth rates at constant temperatures
    6.4.5 Growth rates with switching temperatures
  6.5 Weather data
  6.6 Estimation of soil temperatures
  6.7 Results
  6.8 Discussion
7 Conclusion
  7.1 Summary and Discussion
    7.1.1 Overview
    7.1.2 Chapter 3: Evolutionary game theory with payoff fluctuations
    7.1.3 Chapter 4: Evolutionary games on networks
    7.1.4 Chapter 5: Shared antibiotic resistance
    7.1.5 Chapter 6: Adaptation to fluctuating temperatures
  7.2 Outlook: Towards applied evolutionary dynamics
A Appendix
  A.1 Parameters for figure 5.4
  A.2 Metadata and results of weather data analysis
Bibliography
Acknowledgments

1 INTRODUCTION

1.1 Motivation

The aim of evolutionary dynamics is to describe how species change. It emerged after two fundamental insights: first, that species change at all, and second, that they do so through variation, heredity and natural selection. Both were controversially debated, but since Charles Darwin presented a line of argument backed by a large number of supporting observations, these insights have become the established theory of evolution. Which open questions remain now that these underlying mechanisms of evolution have been discovered? Can we calculate the course of evolution using a set of basic laws for variation, heredity and natural selection?

The starting point of such calculations is the fitness function, defined as the reproductive success of an organism. The fitness depends first of all on the phenotype of the organism. This part of the fitness function, often metaphorically referred to as the “fitness landscape”, can be considered the potential of evolutionary dynamics: we can obtain the direction of evolution from the gradient of the fitness with respect to the phenotype.
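As a toy illustration (not from the thesis), evolution on a static fitness landscape can be pictured as gradient ascent in phenotype space; the Gaussian landscape, step size, and starting phenotype below are hypothetical choices.

```python
import numpy as np

# Hypothetical single-peaked fitness landscape over a one-dimensional phenotype x,
# with its maximum at x = 2.
def fitness(x):
    return np.exp(-(x - 2.0) ** 2)

def gradient(f, x, h=1e-6):
    """Central-difference estimate of df/dx."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Caricature of selection: the phenotype climbs the fitness gradient.
x = 0.5
for _ in range(1000):
    x += 0.05 * gradient(fitness, x)

print(round(x, 2))  # → 2.0, the phenotype settles at the fitness peak
```

The point of the sketch is only that, for a fixed landscape, the gradient determines the direction of evolution; the next paragraphs explain why the landscape is in fact not fixed.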
While a species evolves, the fitness landscape changes dynamically, because the fitness also depends on the environmental conditions, on interactions with other individuals of the same species, and on interactions with individuals of other evolving species. At the same time, the individuals of the species have an impact on the environment and on the fitness of individuals of other species.

Taken together, these coupled fitness functions constitute a complex, nonlinear, high-dimensional, heterogeneous, stochastic system. Even if we knew all relevant biological mechanisms, seemingly simple questions would remain challenging problems. For example, which conditions lead to extinction or speciation? How many species can coexist, and how stable is such a system of coexisting species? The literature already contains a great number of answers, but each relies on a very specific set of assumptions.

For example, let us assume a homogeneous habitat with species whose fitness depends only on their own phenotype and on the environmental conditions. We further assume that the environmental conditions do not change and that the populations are infinitely large. Since the species with the highest fitness outcompetes all species with lower fitness, regardless of how small the advantage is, only one species can survive under these assumptions. This reasoning is known as the competitive exclusion principle. However, its relevance for natural systems is very limited, because the assumptions are almost never satisfied. A famous case is the high diversity of phytoplankton in ocean water: given the supposed homogeneity of ocean water and the limited number of resources, this surprisingly high diversity is called the “paradox of the plankton”.
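The competitive exclusion argument can be sketched with discrete-time replicator dynamics for two species with constant fitnesses; the 1% fitness advantage below is a hypothetical value chosen for illustration, not taken from the thesis.

```python
import numpy as np

# Two species with constant fitnesses; species 0 is only 1% fitter.
f = np.array([1.01, 1.00])
x = np.array([0.5, 0.5])     # initial frequencies

# Discrete-time replicator dynamics: x_i <- x_i * f_i / (mean fitness).
for _ in range(5000):
    x = x * f / (x @ f)

print(np.round(x, 3))  # → [1. 0.], the slightly fitter species takes over
```

However small the advantage, the frequency ratio is multiplied by a constant factor greater than one in every generation, so the fitter species fixates; this is the exclusion argument in its simplest form.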
Today, it is known that none of the aforementioned assumptions is satisfied for phytoplankton in ocean water, so the competitive exclusion principle does not apply to this case [1].

We can extend the example by assuming that the fitness of a species also depends on the phenotypes of other species. To keep it simple, we further assume that individuals interact with equal probability with every other individual, and that no new mutations occur except for those already present at the beginning. The resulting coupled evolutionary dynamics is the subject of evolutionary game theory. Imagine two species which each have a high fitness if their population is small and a low fitness if their population is large compared to the population of the other species. These two species can coexist, because natural selection always favors the species that is currently rare, which leads to an evolutionarily stable state at the ratio of populations for which both species have equal fitness.

These typical assumptions of evolutionary game theory are still far from realistic. Changing only one of them can lead to completely different results, but can also make it difficult or even impossible to find exact solutions. Hence, almost any attempt to relax one assumption has opened a new direction of research, e.g. evolutionary game theory with finite instead of infinite populations [2–11], with populations on networks or spatially distributed populations instead of well-mixed populations [12–19], or with mutations occurring during the process [7, 20, 21]. Attempts to replace the assumption of a constant environment with variable environments have headed in different directions, because the environment can affect several parameters in the system. For example, the environment may affect the selection strength [8], the reproduction rate [22–25], or the population size [10, 11, 26–28] of each species independently.
Typical properties of interest in these studies are the probability of extinction or fixation of a mutation and the mean time to extinction or fixation of a mutation [3, 5, 8, 9, 27, 29–43]. In this thesis, we aim to study the effects of changing environments on evolutionary dynamics in a different way.
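The frequency-dependent coexistence described earlier, where each species is fittest when rare, can be sketched with an Euler-discretized replicator equation; the hawk-dove-like payoff matrix and step size below are hypothetical choices for illustration, not values from the thesis.

```python
import numpy as np

# Payoff matrix in which each species does best when it is rare:
# species 0 earns 2 against species 1, species 1 earns 1 against species 0.
A = np.array([[0.0, 2.0],
              [1.0, 0.0]])

x = np.array([0.1, 0.9])           # species 0 starts rare
for _ in range(10000):
    f = A @ x                      # frequency-dependent fitnesses
    phi = x @ f                    # mean fitness of the population
    x = x + 0.01 * x * (f - phi)   # Euler step of the replicator equation

print(np.round(x, 2))  # → [0.67 0.33], the mixed evolutionarily stable state
```

The dynamics converge to the interior fixed point where both fitnesses are equal, 2(1 - x₀) = x₀, i.e. x₀ = 2/3: the rare-species advantage stabilizes coexistence instead of driving exclusion.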