Dynamics of Markov Chains for Undergraduates

Total pages: 16

File type: PDF, size: 1020 KB

Dynamics of Markov Chains for Undergraduates
Ursula Porod
February 19, 2021

Contents

Preface

1 Markov chains
  1.1 Introduction
  1.2 Construction of a Markov chain
    1.2.1 Finite-length trajectories
    1.2.2 From finite to infinite-length trajectories
  1.3 Basic computations for Markov chains
  1.4 Strong Markov Property
  1.5 Examples of Markov Chains
  1.6 Functions of Markov chains
  1.7 Irreducibility and class structure of the state space
  1.8 Transience and Recurrence
  1.9 Absorbing chains
    1.9.1 First step analysis
    1.9.2 Finite number of transient states
    1.9.3 Infinite number of transient states
  1.10 Stationary distributions
    1.10.1 Existence and uniqueness of an invariant measure
    1.10.2 Positive recurrence versus null recurrence
    1.10.3 Stationary distributions for reducible chains
    1.10.4 Steady state distributions
  1.11 Periodicity
  1.12 Exercises

2 Limit Theorems for Markov Chains
  2.1 The Ergodic Theorem
  2.2 Convergence
  2.3 Long-run behavior of reducible chains
  2.4 Exercises

3 Random Walks on Z
  3.1 Basics
  3.2 Wald's Equations
  3.3 Gambler's Ruin
  3.4 Pólya's Random Walk Theorem
  3.5 Reflection Principle and Duality
    3.5.1 The ballot problem
    3.5.2 Dual walks
    3.5.3 Maximum and minimum
  3.6 Arcsine Law
    3.6.1 Last returns
    3.6.2 How often in the lead?
  3.7 The Range of a Random Walk
  3.8 Law of the Iterated Logarithm
  3.9 Exercises

4 Branching Processes
  4.1 Generating functions
  4.2 Extinction
  4.3 Exercises

5 Martingales
  5.1 Definition of a Martingale
  5.2 Optional Stopping Theorem
  5.3 Martingale transforms
  5.4 Martingale Convergence Theorem
  5.5 Transience/recurrence via martingales
  5.6 Applications
    5.6.1 Waiting times for sequence patterns
    5.6.2 Gambler's ruin, revisited
    5.6.3 Branching process, revisited
    5.6.4 Pólya's Urn, revisited
  5.7 Exercises

6 Reversibility
  6.1 Time reversal of a Markov chain
  6.2 Reversible Markov chains
    6.2.1 Linear-algebraic interpretation of reversibility
  6.3 Exercises

7 Markov Chains and Electric Networks
  7.1 Reversible chains and graph networks
  7.2 Harmonic functions
  7.3 Voltage and Current
  7.4 Effective resistance and Escape probabilities
  7.5 Commute times and Cover times
  7.6 Exercises

8 Markov Chain Monte Carlo
  8.1 MCMC Algorithms
    8.1.1 Metropolis-Hastings Algorithm
    8.1.2 Gibbs Sampler
  8.2 Stochastic Optimization and Simulated Annealing
  8.3 Exercises

9 Random Walks on Groups
  9.1 Generators, Convolution powers
    9.1.1 Time reversal of a random walk
  9.2 Card shuffling
  9.3 Abelian groups
    9.3.1 Characters and eigenvalues
  9.4 Exercises

10 Rates of Convergence
  10.1 Basic set-up
  10.2 Spectral bounds
    10.2.1 Spectral decomposition of the transition matrix
    10.2.2 Spectral bounds on total variation distance
    10.2.3 Random walk on the discrete circle
    10.2.4 The Ehrenfest chain
  10.3 Coupling
    10.3.1 Definition of Coupling
    10.3.2 Coupling of Markov chains
  10.4 Strong Stationary Times
  10.5 The Cut-off phenomenon
  10.6 Exercises

A Appendix
  A.1 Miscellaneous
  A.2 Bipartite graphs
  A.3 Schur's theorem
  A.4 Iterated double series
  A.5 Infinite products
  A.6 The Perron-Frobenius Theorem

B Appendix
  B.1 Sigma algebras, Probability spaces
  B.2 Expectation, Basic inequalities
  B.3 Properties of Conditional Expectation
  B.4 Modes of Convergence of Random Variables
  B.5 Classical Limit Theorems
  B.6 Coupon Collector's Problem

C Appendix
  C.1 Lim sup, Lim inf
  C.2 Interchanging Limit and Integration

Bibliography
Index

This book is a first version and a work in progress. U. P.

Preface

This book is about the theory of Markov chains and their long-term dynamical properties. It is written for advanced undergraduates who have taken a course in calculus-based probability theory, such as a course at the level of Ash [3], and who are familiar with conditional expectation and the classical limit theorems such as the Central Limit Theorem and the Strong Law of Large Numbers. Knowledge of linear algebra and a basic familiarity with groups are expected. Measure theory is not a prerequisite.

The book covers in depth the classical theory of discrete-time Markov chains with countable state space and introduces the reader to more contemporary areas such as Markov chain Monte Carlo methods and the study of convergence rates of Markov chains. For example, it includes a study of random walks on the symmetric group S_n as a model of card shuffling, together with their rates of convergence. A possible novelty for an undergraduate text is the book's approach to studying convergence rates for natural sequences of Markov chains with increasing state spaces of N elements, rather than for chains of fixed size. This approach allows for simultaneous asymptotics in the time T and the size N, which reveals, in some cases, the so-called cut-off phenomenon, a kind of phase transition that occurs as these Markov chains converge to stationarity. The book also covers martingales, as well as random walks on graphs viewed as electric networks. The analogy with electric networks reveals interesting connections between certain laws of physics, discrete harmonic functions, and the study of reversible Markov chains, in particular for computations of their cover times and hitting times. Currently available undergraduate textbooks do not cover these topics in any detail. The following is a brief summary of the book's chapters.
Chapters 1 and 2 cover the classical theory of discrete-time Markov chains. They are the foundation for the rest of the book. Chapter 3 gives a detailed treatment of random walks on Z.
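The convergence to stationarity that the preface describes can be illustrated with a minimal sketch (not taken from the book; the chain and all names are illustrative): iterating the distribution dynamics µ ↦ µP for a small two-state chain drives µ toward the stationary distribution π, which satisfies πP = π.

```python
# Illustrative two-state Markov chain, not an example from the book.
# P[i][j] is the probability of moving from state i to state j.

def step(mu, P):
    """One step of the distribution dynamics: mu -> mu P."""
    n = len(mu)
    return [sum(mu[i] * P[i][j] for i in range(n)) for j in range(n)]

P = [[0.9, 0.1],
     [0.5, 0.5]]

# For a two-state chain with P[0][1] = a and P[1][0] = b, the
# stationary distribution is pi = (b/(a+b), a/(a+b)).
a, b = P[0][1], P[1][0]
pi = [b / (a + b), a / (a + b)]

mu = [1.0, 0.0]          # start deterministically in state 0
for _ in range(100):     # iterate mu <- mu P
    mu = step(mu, P)

print(max(abs(m - p) for m, p in zip(mu, pi)))  # distance to pi, essentially 0
```

The gap to π shrinks geometrically at rate |1 − a − b|, a baby case of the spectral bounds studied in Chapter 10.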
Recommended publications
  • On Singularity Properties of Convolutions of Algebraic Morphisms
    ON SINGULARITY PROPERTIES OF CONVOLUTIONS OF ALGEBRAIC MORPHISMS. ITAY GLAZER AND YOTAM I. HENDEL. Abstract. Let K be a field of characteristic zero, let X and Y be smooth K-varieties, and let V be a finite-dimensional K-vector space. For two algebraic morphisms ϕ : X → V and ψ : Y → V we define a convolution operation, ϕ ∗ ψ : X × Y → V, by ϕ ∗ ψ(x, y) = ϕ(x) + ψ(y). We then study the singularity properties of the resulting morphism, and show that, as in the case of convolution in analysis, it has improved smoothness properties. Explicitly, we show that for any morphism ϕ : X → V which is dominant when restricted to each irreducible component of X, there exists N ∈ N such that for any n > N the n-th convolution power ϕ^n := ϕ ∗ ··· ∗ ϕ is a flat morphism with reduced geometric fibers of rational singularities (this property is abbreviated (FRS)). By a theorem of Aizenbud and Avni, for K = Q, this is equivalent to good asymptotic behavior of the size of the Z/p^k Z-fibers of ϕ^n when ranging over both p and k. More generally, we show that given a family of morphisms {ϕ_i : X_i → V} of complexity D ∈ N (i.e. the number of variables and the degrees of the polynomials defining X_i and ϕ_i are bounded by D), there exists N(D) ∈ N such that for any n > N(D), the morphism ϕ_1 ∗ ··· ∗ ϕ_n is (FRS).
  • A Fourier Restriction Theorem Based on Convolution Powers
    PROCEEDINGS OF THE AMERICAN MATHEMATICAL SOCIETY, Volume 142, Number 11, November 2014, Pages 3897–3901. S 0002-9939(2014)12148-4. Article electronically published on July 21, 2014. A FOURIER RESTRICTION THEOREM BASED ON CONVOLUTION POWERS. XIANGHONG CHEN (Communicated by Alexander Iosevich). Abstract. We prove a Fourier restriction estimate under the assumption that a certain convolution power of the measure admits an r-integrable density. Introduction. Let F be the Fourier transform defined on the Schwartz space by f̂(ξ) = ∫_{R^d} e^{−2πi⟨ξ,x⟩} f(x) dx, where ⟨ξ,x⟩ is the Euclidean inner product. We are interested in Borel measures μ defined on R^d for which F maps L^p(R^d) boundedly to L^2(μ), i.e. (1) ‖f̂‖_{L^2(μ)} ≲ ‖f‖_{L^p(R^d)} for all f ∈ S(R^d). Here "≲" means the left-hand side is bounded by the right-hand side multiplied by a positive constant that is independent of f. If μ is a singular measure, then such a result can be interpreted as a restriction property of the Fourier transform. Such restriction estimates for singular measures were first obtained by Stein in the 1960s. If μ is the surface measure on the sphere, the Stein-Tomas theorem [12], [13] states that (1) holds for 1 ≤ p ≤ 2(d+1)/(d+3). Mockenhaupt [10] and Mitsis [9] have shown that Tomas's argument in [12] can be used to obtain an L^2-Fourier restriction theorem for a general class of finite Borel measures satisfying (2) |μ̂(ξ)|^2 ≲ |ξ|^{−β} for all ξ ∈ R^d, and (3) μ(B(x, r)) ≲ r^α for all x ∈ R^d, r > 0, where 0 < α, β < d; they showed that (1) holds for 1 ≤ p < p_0 = (4(d−α)+2β)/(4(d−α)+β). Bak and Seeger [2] proved the same result for the endpoint p_0 and further strengthened it by replacing the L^{p_0}-norm with the L^{p_0,2}-Lorentz norm.
  • 7. Transformations of Variables
    7. Transformations of Variables. Basic Theory. The Problem. As usual, we start with a random experiment with probability measure ℙ on an underlying sample space. Suppose that we have a random variable X for the experiment, taking values in S, and a function r : S → T. Then Y = r(X) is a new random variable taking values in T. If the distribution of X is known, how do we find the distribution of Y? This is a very basic and important question, and in a superficial sense, the solution is easy. 1. Show that ℙ(Y ∈ B) = ℙ(X ∈ r^{−1}(B)) for B ⊆ T. However, frequently the distribution of X is known either through its distribution function F or its density function f, and we would similarly like to find the distribution function or density function of Y. This is a difficult problem in general, because, as we will see, even simple transformations of variables with simple distributions can lead to variables with complex distributions. We will solve the problem in various special cases. Transformed Variables with Discrete Distributions. 2. Suppose that X has a discrete distribution with probability density function f (and hence S is countable). Show that Y has a discrete distribution with probability density function g given by g(y) = ∑_{x ∈ r^{−1}(y)} f(x), y ∈ T. 3. Suppose that X has a continuous distribution on a subset S ⊆ ℝ^n with probability density function f, and that T is countable.
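The discrete change-of-variables rule quoted in this excerpt — g(y) is the sum of f(x) over x in r^{−1}(y) — can be sketched in a few lines (the function names are illustrative, not from the text):

```python
# Minimal sketch of the pmf of Y = r(X) for a discrete X:
# g(y) = sum of f(x) over all x with r(x) = y.

from collections import defaultdict

def pushforward(f, r):
    """pmf of Y = r(X), given the pmf f of X as a dict {x: P(X = x)}."""
    g = defaultdict(float)
    for x, p in f.items():
        g[r(x)] += p   # every x in r^{-1}(y) contributes its mass to y
    return dict(g)

# Example: X uniform on {-2, -1, 0, 1, 2}, Y = X^2.
f = {x: 0.2 for x in (-2, -1, 0, 1, 2)}
g = pushforward(f, lambda x: x * x)
print(g)  # masses 0.4 at 4, 0.4 at 1, 0.2 at 0
```

The example shows why even a simple transformation changes the shape of the distribution: distinct points of S that share an image in T pool their probability mass.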
  • Finitary Isomorphisms of Renewal Point Processes and Continuous-Time Regenerative Processes
    FINITARY ISOMORPHISMS OF RENEWAL POINT PROCESSES AND CONTINUOUS-TIME REGENERATIVE PROCESSES. YINON SPINKA. Abstract. We show that a large class of stationary continuous-time regenerative processes are finitarily isomorphic to one another. The key is showing that any stationary renewal point process whose jump distribution is absolutely continuous with exponential tails is finitarily isomorphic to a Poisson point process. We further give simple necessary and sufficient conditions for a renewal point process to be finitarily isomorphic to a Poisson point process. This improves results and answers several questions of Soo [33] and of Kosloff and Soo [19]. 1. Introduction and main results. An important problem in ergodic theory is to understand which measure-preserving systems are isomorphic to which other measure-preserving systems. The measure-preserving systems considered here are the flows associated to continuous-time processes and point processes, equipped with the action of R by translations. The simplest and most canonical of these processes are the Poisson point processes. It is therefore of particular interest to identify those processes which are isomorphic to a Poisson point process. A consequence of Ornstein theory [25] is that any two Poisson point processes are isomorphic to one another. In fact, Poisson point processes are examples of infinite-entropy Bernoulli flows, and any two infinite-entropy Bernoulli flows are isomorphic. The factor maps provided by Ornstein theory are obtained through abstract existence results. Consequently, such maps are non-explicit and one usually has no control on the coding window, i.e., the size of the window one needs to observe in the source process in order to be able to determine the target process on a fixed interval.
  • A Stochastic Differential Equation Derived from Evolutionary Game Theory
    U.U.D.M. Project Report 2019:5. A stochastic differential equation derived from evolutionary game theory. Brian Treacy. Degree project in mathematics, 15 credits. Supervisor: Ingemar Kaj. Examiner: Denis Gaidashev. February 2019. Department of Mathematics, Uppsala University. Abstract. Evolutionary game theory models population dynamics of species when a concept of fitness is introduced. Initially, ordinary differential equation models were the primary tool, but these fail to capture inherent randomness in sufficiently small finite populations. Therefore stochastic models were created to capture the stochasticity in these populations. This thesis connects these two modeling paradigms. It does this by defining a stochastic process model for fitness-dependent population dynamics and deriving its associated infinitesimal generator. This is followed by linking the finite population stochastic process model to the infinite population ordinary differential equation model by taking the limit of the total population, which results in a stochastic differential equation. The behaviour of this stochastic differential equation is analyzed, showing how certain parameters affect how closely this stochastic differential equation mimics the behaviour of the ordinary differential equation. The thesis concludes by providing graphical evidence that the one-third rule holds with this stochastic differential equation.
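The Moran-process transition probabilities underlying this kind of model have a standard textbook form, sketched below. This is a generic neutral/selective Moran step, an assumption for illustration; the thesis's fitness functions may differ.

```python
# Standard Moran model one-step transition probabilities (textbook
# form, assumed here for illustration): with i type-A individuals of
# fitness fA among N individuals (the other N - i have fitness fB),
# one individual is chosen to reproduce proportionally to fitness and
# one is chosen uniformly at random to die.

def moran_step_probs(i, N, fA, fB):
    """Return (P(i -> i+1), P(i -> i-1)); otherwise the count is unchanged."""
    total = i * fA + (N - i) * fB
    up = (i * fA / total) * ((N - i) / N)    # A reproduces, a B dies
    down = ((N - i) * fB / total) * (i / N)  # B reproduces, an A dies
    return up, down

up, down = moran_step_probs(i=3, N=10, fA=1.0, fB=1.0)
print(up == down)  # neutral case: up- and down-moves equally likely
```

With fA > fB the up-probability exceeds the down-probability, which is the drift term that survives in the diffusion (SDE) limit.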
  • Populations in Environments with a Soft Carrying Capacity Are Eventually Extinct
    Populations in environments with a soft carrying capacity are eventually extinct. Peter Jagers, Sergei Zuyev. August 5, 2020. Abstract. Consider a population whose size changes stepwise by its members reproducing or dying (disappearing), but is otherwise quite general. Denote the initial (non-random) size by Z_0 and the size of the nth change by C_n, n = 1, 2, .... Population sizes hence develop successively as Z_1 = Z_0 + C_1, Z_2 = Z_1 + C_2 and so on, indefinitely or until there are no further size changes, due to extinction. Extinction is thus assumed final, so that Z_n = 0 implies that Z_{n+1} = 0, without there being any other finite absorbing class of population sizes. We make no assumptions about the time durations between the successive changes. In the real world, or in more specific models, those may be of varying length, depending upon individual life span distributions and their interdependencies, the age distribution at hand, and intervening circumstances. We could consider toy models of Galton-Watson type generation counting or of the birth-and-death type, with one individual acting per change, until extinction, or the most general multitype CMJ branching processes with, say, population-size dependence of reproduction. Changes may have quite varying distributions. The basic assumption is that there is a carrying capacity, i.e. a non-negative number K such that the conditional expectation of the change, given the complete past history, is non-positive whenever the population exceeds the carrying capacity. Further, to avoid unnecessary technicalities, we assume that the change C_n equals −1 (one individual dying) with a conditional (given the past) probability uniformly bounded away from 0.
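A toy ±1 birth-and-death simulation (far simpler than the paper's general setting, and purely illustrative) makes the conclusion tangible: with a soft carrying capacity K pushing the mean change down above K, every run is eventually absorbed at 0.

```python
# Toy simulation (illustrative assumption, not the paper's model):
# the size moves by +1 or -1 each step, with the down-step made more
# likely once the size exceeds the carrying capacity K, so the
# conditional mean change is <= 0 above K.  Extinction (state 0) is final.

import random

def extinction_time(z0, K, rng, max_steps=10_000_000):
    """Number of steps until the chain started at z0 first hits 0."""
    z = z0
    for t in range(max_steps):
        if z == 0:                      # extinction is an absorbing state
            return t
        p_up = 0.5 if z <= K else 0.4   # negative drift above K
        z += 1 if rng.random() < p_up else -1
    return None                         # did not go extinct within the cap

rng = random.Random(0)
times = [extinction_time(z0=10, K=50, rng=rng) for _ in range(20)]
print(all(t is not None for t in times))  # True: every run died out
```

Below K the walk is symmetric (hence recurrent), and above K the drift is strictly negative, so the chain keeps returning to low population sizes until a run of deaths finishes it, matching the paper's message that a soft ceiling plus a death probability bounded away from 0 forces eventual extinction.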
  • An Ergodic Theorem for Fleming-Viot Models in Random Environments
    An Ergodic Theorem for Fleming-Viot Models in Random Environments. Arash Jamshidpey, 2016. Abstract. The Fleming-Viot (FV) process is a measure-valued diffusion that models the evolution of type frequencies in a countable population which evolves under resampling (genetic drift), mutation, and selection. In the classic FV model the fitness (strength) of types is given by a measurable function. In this paper, we introduce and study the Fleming-Viot process in random environment (FVRE), where by random environment we mean that the fitness of types is a stochastic process with càdlàg paths. We identify FVRE as the unique solution to a so-called quenched martingale problem and derive some of its properties via martingale and duality methods. We develop the duality methods for general time-inhomogeneous and quenched martingale problems. In fact, some important aspects of the duality relations only appear for time-inhomogeneous (and quenched) martingale problems. For example, we see that duals evolve backward in time with respect to the main Markov process, whose evolution is forward in time. Using a family of function-valued dual processes for FVRE, we prove that, as the number N of individuals tends to ∞, the measure-valued Moran process µ^{e_N} (with fitness process e_N) converges weakly in the Skorokhod topology of càdlàg functions to the FVRE process µ^e (with fitness process e), if e_N → e a.s. in the Skorokhod topology of càdlàg functions. We also study the long-time behaviour of the FVRE process (µ^e_t)_{t≥0} jointly with its fitness process e = (e_t)_{t≥0} and prove that the joint FV-environment process (µ^e_t, e_t)_{t≥0} is ergodic under the assumption of weak ergodicity of e.
  • An Extended Moran Process That Captures the Struggle for Fitness
    An extended Moran process that captures the struggle for fitness. Marthe Måløy (Department of Mathematics and Statistics, The Arctic University of Norway), Frode Måløy (Department of Mathematics and Statistics, The Arctic University of Norway), Rafael Lahoz-Beltrá (Department of Biodiversity, Ecology and Evolution, Complutense University of Madrid, Spain), Juan Carlos Nuño (Departamento de Matemática Aplicada a los Recursos Naturales, Universidad Politécnica de Madrid, Spain), Antonio Bru (Department of Applied Mathematics, Complutense University of Madrid, Spain). Abstract. When a new type of individual appears in a stable population, the newcomer is typically not advantageous. Due to stochasticity, the new type can grow in numbers, but the newcomers can only become advantageous if they manage to change the environment in such a way that they increase their fitness. This dynamics is observed in several situations in which a relatively stable population is invaded by an alternative strategy, for instance the evolution of cooperation among bacteria, the invasion of cancer in a multicellular organism, and the evolution of ideas that contradict social norms. These examples also show that, by generating different versions of itself, the new type increases the probability of winning the struggle for fitness. Our model captures the imposed cooperation whereby the first generation of newcomers dies while changing the environment such that the next generations become more advantageous. Keywords: Evolutionary dynamics, Nonlinear dynamics, Mathematical modelling, Game theory, Cooperation. 1. Introduction. When unconditional cooperators appear in a large group of defectors, they are exploited until they become extinct. The best possible scenario for this type of cooperators is to change the environment such that another type of cooperators, which are regulated and only cooperate under certain conditions, becomes advantageous.
  • Scaling Limits of a Model for Selection at Two Scales
    Scaling limits of a model for selection at two scales. 2017 Nonlinearity 30 1682–1707. https://doi.org/10.1088/1361-6544/aa5499 (http://iopscience.iop.org/0951-7715/30/4/1682)
  • Stochastische Prozesse
    University of Vienna. Stochastische Prozesse. January 27, 2015.

    Contents

    1 Random Walks
      1.1 Heads or tails
      1.2 Probability of return
      1.3 Reflection principle
      1.4 Main lemma for symmetric random walks
      1.5 First return
      1.6 Last visit
      1.7 Sojourn times
      1.8 Position of maxima
      1.9 Changes of sign
      1.10 Return to the origin
      1.11 Random walks in the plane Z2
      1.12 The ruin problem
      1.13 How to gamble if you must
      1.14 Expected duration of the game
      1.15 Generating function for the duration of the game
      1.16 Connection with the diffusion process
    2 Branching processes
      2.1 Extinction or survival of family names
      2.2 Proof using generating functions
      2.3 Some facts about generating functions
      2.4 Moment and cumulant generating functions
      2.5 An example from genetics by Fischer (1930)
      2.6 Asymptotic behaviour
    3 Markov chains
      3.1 Definition
      3.2 Examples of Markov chains
      3.3 Transition probabilities
      3.4 Invariant distributions
      3.5 Ergodic theorem for primitive Markov chains
      3.6 Examples for stationary distributions
      3.7 Birth-death chains
      3.8 Reversible Markov chains
      3.9 The Markov chain tree formula
      3.10 Mean recurrence time
      3.11 Recurrence vs. transience
      3.12 The renewal equation
      3.13 Positive vs. null-recurrence
  • Approximation of a Free Poisson Process by Systems of Freely Independent Particles
    Approximation of a free Poisson process by systems of freely independent particles. Article, Accepted Version. Bożejko, M., da Silva, J. L., Kuna, T. and Lytvynov, E. (2018) Approximation of a free Poisson process by systems of freely independent particles. Infinite Dimensional Analysis, Quantum Probability and Related Topics, 21 (3). 1850020. ISSN 1793-6306. doi: https://doi.org/10.1142/s0219025718500200. Available at http://centaur.reading.ac.uk/78509/. Marek Bożejko, Instytut Matematyczny, Uniwersytet Wrocławski, Pl. Grunwaldzki 2/4, 50-384 Wrocław, Poland; e-mail: [email protected]. José Luís da Silva, Universidade da Madeira, Edifício da Penteada, Caminho da Penteada, 9020-105, Funchal, Madeira, Portugal; e-mail: [email protected]. Tobias Kuna, University of Reading, Department of Mathematics, Whiteknights, PO Box 220, Reading RG6 6AX, U.K.; e-mail: [email protected]. Eugene Lytvynov, Department of Mathematics, Swansea University, Singleton Park, Swansea SA2 8PP, U.K.; e-mail: [email protected]. Abstract. Let σ be a non-atomic, infinite Radon measure on R^d, for example, dσ(x) = z dx where z > 0.
  • The Moran Process As a Markov Chain on Leaf-Labeled Trees
    The Moran Process as a Markov Chain on Leaf-labeled Trees. David J. Aldous, University of California, Department of Statistics, 367 Evans Hall # 3860, Berkeley CA 94720-3860. [email protected], http://www.stat.berkeley.edu/users/aldous. March 29, 1999. Abstract. The Moran process in population genetics may be reinterpreted as a Markov chain on a set of trees with leaves labeled by [n]. It provides an interesting worked example in the theory of mixing times and coupling from the past for Markov chains. Mixing times are shown to be of order n^2, as anticipated by the theory surrounding Kingman's coalescent. Incomplete draft -- do not circulate. AMS 1991 subject classification: 05C05, 60C05, 60J10. Key words and phrases: Coupling from the past, Markov chain, mixing time, phylogenetic tree. Research supported by N.S.F. Grant DMS96-22859. 1. Introduction. The study of mixing times for Markov chains on combinatorial sets has attracted considerable interest over the last ten years [3, 5, 7, 14, 16, 18]. This paper provides another worked example. We must at the outset admit that the mathematics is fairly straightforward, but we do find the example instructive. Its analysis provides a simple but not quite obvious illustration of coupling from the past, reminiscent of the elementary analysis ([1], section 4) of riffle shuffle, and of the analysis of move-to-front list algorithms [12] and move-to-root algorithms for maintaining search trees [9]. The main result, Proposition 2, implies that while most combinatorial chains exhibit the cut-off phenomenon [8], this particular example has the opposite diffusive behavior.