Markov Chains and Mixing Times Second Edition

Total pages: 16

File type: PDF, size: 1020 KB

Markov Chains and Mixing Times, Second Edition

David A. Levin, University of Oregon
Yuval Peres, Microsoft Research
With contributions by Elizabeth L. Wilmer
With a chapter on "Coupling from the Past" by James G. Propp and David B. Wilson

AMERICAN MATHEMATICAL SOCIETY, Providence, Rhode Island
10.1090/mbk/107

2010 Mathematics Subject Classification. Primary 60J10, 60J27, 60B15, 60C05, 65C05, 60K35, 68W20, 68U20, 82C22.

FRONT COVER: The figure on the bottom left of the front cover, courtesy of David B. Wilson, is a uniformly random lozenge tiling of a hexagon (see Section 25.2). The figure on the bottom center, also from David B. Wilson, is a random sample of an Ising model at its critical temperature (see Sections 3.3.5 and 25.2) with mixed boundary conditions. The figure on the bottom right, courtesy of Eyal Lubetzky, is a portion of an expander graph (see Section 13.6).

For additional information and updates on this book, visit www.ams.org/bookpages/mbk-107

Library of Congress Cataloging-in-Publication Data
Names: Levin, David Asher, 1971- | Peres, Y. (Yuval) | Wilmer, Elizabeth L. (Elizabeth Lee), 1970- | Propp, James, 1960- | Wilson, David B. (David Bruce)
Title: Markov chains and mixing times / David A. Levin, Yuval Peres ; with contributions by Elizabeth L. Wilmer.
Description: Second edition. | Providence, Rhode Island : American Mathematical Society, [2017] | "With a chapter on Coupling from the past, by James G. Propp and David B. Wilson." | Includes bibliographical references and indexes.
Identifiers: LCCN 2017017451 | ISBN 9781470429621 (alk. paper)
Subjects: LCSH: Markov processes–Textbooks. | Distribution (Probability theory)–Textbooks.
| AMS: Probability theory and stochastic processes – Markov processes – Markov chains (discrete-time Markov processes on discrete state spaces). msc | Probability theory and stochastic processes – Markov processes – Continuous-time Markov processes on discrete state spaces. msc | Probability theory and stochastic processes – Probability theory on algebraic and topological structures – Probability measures on groups or semigroups, Fourier transforms, factorization. msc | Probability theory and stochastic processes – Combinatorial probability – Combinatorial probability. msc | Numerical analysis – Probabilistic methods, simulation and stochastic differential equations – Monte Carlo methods. msc | Probability theory and stochastic processes – Special processes – Interacting random processes; statistical mechanics type models; percolation theory. msc | Computer science – Algorithms – Randomized algorithms. msc | Computer science – Computing methodologies and applications – Simulation. msc | Statistical mechanics, structure of matter – Time-dependent statistical mechanics (dynamic and nonequilibrium) – Interacting particle systems. msc

Classification: LCC QA274.7 .L48 2017 | DDC 519.2/33–dc23
LC record available at https://lccn.loc.gov/2017017451

Copying and reprinting. Individual readers of this publication, and nonprofit libraries acting for them, are permitted to make fair use of the material, such as to copy select pages for use in teaching or research. Permission is granted to quote brief passages from this publication in reviews, provided the customary acknowledgment of the source is given. Republication, systematic copying, or multiple reproduction of any material in this publication is permitted only under license from the American Mathematical Society. Permissions to reuse portions of AMS publication content are handled by Copyright Clearance Center's RightsLink service. For more information, please visit: http://www.ams.org/rightslink.
Send requests for translation rights and licensed reprints to [email protected]. Excluded from these provisions is material for which the author holds copyright. In such cases, requests for permission to reuse or reprint material should be addressed directly to the author(s). Copyright ownership is indicated on the copyright page, or on the lower right-hand corner of the first page of each article within proceedings volumes.

Second edition © 2017 by the authors. All rights reserved.
First edition © 2009 by the authors. All rights reserved.
Printed in the United States of America.
∞ The paper used in this book is acid-free and falls within the guidelines established to ensure permanence and durability.
Visit the AMS home page at http://www.ams.org/

Contents

Preface ix
Preface to the Second Edition ix
Preface to the First Edition ix
Overview xi
For the Reader xii
For the Instructor xiii
For the Expert xiv
Acknowledgements xvi

Part I: Basic Methods and Examples 1

Chapter 1. Introduction to Finite Markov Chains 2
  1.1. Markov Chains 2; 1.2. Random Mapping Representation 5; 1.3. Irreducibility and Aperiodicity 7; 1.4. Random Walks on Graphs 8; 1.5. Stationary Distributions 9; 1.6. Reversibility and Time Reversals 13; 1.7. Classifying the States of a Markov Chain* 15; Exercises 17; Notes 19

Chapter 2. Classical (and Useful) Markov Chains 21
  2.1. Gambler's Ruin 21; 2.2. Coupon Collecting 22; 2.3. The Hypercube and the Ehrenfest Urn Model 23; 2.4. The Pólya Urn Model 25; 2.5. Birth-and-Death Chains 26; 2.6. Random Walks on Groups 27; 2.7. Random Walks on Z and Reflection Principles 30; Exercises 34; Notes 35

Chapter 3. Markov Chain Monte Carlo: Metropolis and Glauber Chains 38
  3.1. Introduction 38; 3.2. Metropolis Chains 38; 3.3. Glauber Dynamics 41; Exercises 45; Notes 45

Chapter 4. Introduction to Markov Chain Mixing 47
  4.1. Total Variation Distance 47; 4.2. Coupling and Total Variation Distance 49; 4.3. The Convergence Theorem 52; 4.4. Standardizing Distance from Stationarity 53; 4.5. Mixing Time 54; 4.6. Mixing and Time Reversal 55; 4.7. ℓp Distance and Mixing 56; Exercises 57; Notes 58

Chapter 5. Coupling 60
  5.1. Definition 60; 5.2. Bounding Total Variation Distance 61; 5.3. Examples 62; 5.4. Grand Couplings 69; Exercises 73; Notes 73

Chapter 6. Strong Stationary Times 75
  6.1. Top-to-Random Shuffle 75; 6.2. Markov Chains with Filtrations 76; 6.3. Stationary Times 77; 6.4. Strong Stationary Times and Bounding Distance 78; 6.5. Examples 81; 6.6. Stationary Times and Cesàro Mixing Time 83; 6.7. Optimal Strong Stationary Times* 84; Exercises 85; Notes 86

Chapter 7. Lower Bounds on Mixing Times 87
  7.1. Counting and Diameter Bounds 87; 7.2. Bottleneck Ratio 88; 7.3. Distinguishing Statistics 91; 7.4. Examples 95; Exercises 97; Notes 98

Chapter 8. The Symmetric Group and Shuffling Cards 99
  8.1. The Symmetric Group 99; 8.2. Random Transpositions 101; 8.3. Riffle Shuffles 106; Exercises 109; Notes 112

Chapter 9. Random Walks on Networks 115
  9.1. Networks and Reversible Markov Chains 115; 9.2. Harmonic Functions 116; 9.3. Voltages and Current Flows 117; 9.4. Effective Resistance 118; 9.5. Escape Probabilities on a Square 123; Exercises 125; Notes 126

Chapter 10. Hitting Times 127
  10.1. Definition 127; 10.2. Random Target Times 128; 10.3. Commute Time 130; 10.4. Hitting Times on Trees 133; 10.5. Hitting Times for Eulerian Graphs 135; 10.6. Hitting Times for the Torus 136; 10.7. Bounding Mixing Times via Hitting Times 139; 10.8. Mixing for the Walk on Two Glued Graphs 143; Exercises 145; Notes 147

Chapter 11. Cover Times 149
  11.1. Definitions 149; 11.2. The Matthews Method 149; 11.3. Applications of the Matthews Method 151; 11.4. Spanning Tree Bound for Cover Time 153; 11.5. Waiting for All Patterns in Coin Tossing 155; Exercises 157; Notes 157

Chapter 12. Eigenvalues 160
  12.1. The Spectral Representation of a Reversible Transition Matrix 160; 12.2. The Relaxation Time 162; 12.3. Eigenvalues and Eigenfunctions of Some Simple Random Walks 164; 12.4. Product Chains 168; 12.5. Spectral Formula for the Target Time 171; 12.6. An ℓ2 Bound 171; 12.7. Time Averages 172; Exercises 176; Notes 177

Part II: The Plot Thickens 179

Chapter 13. Eigenfunctions and Comparison of Chains 180
  13.1. Bounds on Spectral Gap via Contractions 180; 13.2. The Dirichlet Form and the Bottleneck Ratio 181; 13.3. Simple Comparison of Markov Chains 185; 13.4. The Path Method 188; 13.5. Wilson's Method for Lower Bounds 193; 13.6. Expander Graphs* 196; Exercises 198; Notes 199

Chapter 14. The Transportation Metric and Path Coupling 201
  14.1. The Transportation Metric 201; 14.2. Path Coupling 203; 14.3. Rapid Mixing for Colorings 207; 14.4. Approximate Counting 209; Exercises 213; Notes 214

Chapter 15. The Ising Model 216
  15.1. Fast Mixing at High Temperature 216; 15.2. The Complete Graph 219; 15.3. The Cycle 220; 15.4. The Tree 221; 15.5. Block Dynamics 224; 15.6. Lower Bound for the Ising Model on Square* 227; Exercises 229; Notes 230

Chapter 16. From Shuffling Cards to Shuffling Genes 233
  16.1. Random Adjacent Transpositions 233; 16.2. Shuffling Genes 237; Exercises 241; Notes 242

Chapter 17. Martingales and Evolving Sets 244
  17.1. Definition and Examples 244; 17.2. Optional Stopping Theorem 245; 17.3. Applications 247; 17.4. Evolving Sets 250; 17.5. A General Bound on Return Probabilities 255; 17.6. Harmonic Functions and the Doob h-Transform 257; 17.7. Strong Stationary Times from Evolving Sets 258; Exercises 260; Notes 260

Chapter 18. The Cutoff Phenomenon 262
  18.1. Definition 262; 18.2. Examples of Cutoff 263; 18.3. A Necessary Condition for Cutoff 268; 18.4. Separation Cutoff 269; Exercises 270; Notes 270

Chapter 19. Lamplighter Walks 273
  19.1. Introduction 273; 19.2. Relaxation Time Bounds 274; 19.3. Mixing Time Bounds 276; 19.4. Examples 278; Exercises 278; Notes 279

Chapter 20. Continuous-Time Chains* 281
  20.1. Definitions 281; 20.2. Continuous-Time Mixing 283; 20.3.
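The central objects of Chapters 4 and 5 in the table of contents above (total variation distance, mixing time) are easy to illustrate numerically. Below is a minimal sketch, not taken from the book: it builds the lazy simple random walk on an n-cycle, whose stationary distribution is uniform, and finds the first time t at which the total variation distance from stationarity drops to 1/4. All function names are my own.

```python
from fractions import Fraction

def lazy_cycle(n):
    """Transition matrix of the lazy simple random walk on the n-cycle."""
    P = [[Fraction(0) for _ in range(n)] for _ in range(n)]
    for x in range(n):
        P[x][x] = Fraction(1, 2)
        P[x][(x - 1) % n] += Fraction(1, 4)
        P[x][(x + 1) % n] += Fraction(1, 4)
    return P

def step(mu, P):
    """One step of the chain: the distribution mu becomes mu P."""
    n = len(P)
    return [sum(mu[x] * P[x][y] for x in range(n)) for y in range(n)]

def tv_from_uniform(mu):
    """Total variation distance between mu and the uniform distribution."""
    n = len(mu)
    pi = Fraction(1, n)
    return sum(abs(m - pi) for m in mu) / 2

def mixing_time(n, eps=Fraction(1, 4), t_max=500):
    """Smallest t with ||P^t(x, .) - pi||_TV <= eps; by the symmetry of the
    cycle, every starting state x gives the same distance, so start at 0."""
    P = lazy_cycle(n)
    mu = [Fraction(1 if x == 0 else 0) for x in range(n)]
    for t in range(1, t_max + 1):
        mu = step(mu, P)
        if tv_from_uniform(mu) <= eps:
            return t
    return None
```

Using exact rationals (`Fraction`) avoids any floating-point ambiguity in the stopping rule; `mixing_time(12)` exceeds `mixing_time(6)`, consistent with the order-n² mixing of the cycle.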
Recommended publications
  • Mixing, Chaotic Advection, and Turbulence
    Annual Reviews www.annualreviews.org/aronline Annu. Rev. Fluid Mech. 1990. 22:207–53. Copyright © 1990 by Annual Reviews Inc. All rights reserved.
    MIXING, CHAOTIC ADVECTION, AND TURBULENCE
    J. M. Ottino, Department of Chemical Engineering, University of Massachusetts, Amherst, Massachusetts 01003
    1. INTRODUCTION
    1.1 Setting
    The establishment of a paradigm for mixing of fluids can substantially affect the development of various branches of the physical sciences and technology. Mixing is intimately connected with turbulence, Earth and natural sciences, and various branches of engineering. However, in spite of its universality, there have been surprisingly few attempts to construct a general framework for analysis. An examination of any library index reveals that there are almost no works (textbooks or monographs) focusing on the fluid mechanics of mixing from a global perspective. In fact, there have been few articles published in these pages devoted exclusively to mixing since the first issue in 1969 [one possible exception is Hill's "Homogeneous Turbulent Mixing With Chemical Reaction" (1976), which is largely based on statistical theory]. However, mixing has been an important component in various other articles; a few of them are "Mixing-Controlled Supersonic Combustion" (Ferri 1973), "Turbulence and Mixing in Stably Stratified Waters" (Sherman et al. 1978), "Eddies, Waves, Circulation, and Mixing: Statistical Geofluid Mechanics" (Holloway 1986), and "Ocean Turbulence" (Gargett 1989). It is apparent that mixing appears in both industry and nature, and the problems span an enormous range of time and length scales; the Reynolds number in problems where mixing is important varies by 40 orders of magnitude (see Figure 1).
  • Stat 8112 Lecture Notes, Stationary Stochastic Processes, Charles J. Geyer
    Stat 8112 Lecture Notes. Stationary Stochastic Processes. Charles J. Geyer. April 29, 2012.
    1 Stationary Processes
    A sequence of random variables X1, X2, … is called a time series in the statistics literature and a (discrete-time) stochastic process in the probability literature. A stochastic process is strictly stationary if for each fixed positive integer k the random vector (X_{n+1}, …, X_{n+k}) has the same distribution for all nonnegative integers n. A stochastic process having second moments is weakly stationary or second-order stationary if the expectation of X_n is the same for all positive integers n and for each nonnegative integer k the covariance of X_n and X_{n+k} is the same for all positive integers n.
    2 The Birkhoff Ergodic Theorem
    The Birkhoff ergodic theorem is to strictly stationary stochastic processes what the strong law of large numbers (SLLN) is to independent and identically distributed (IID) sequences. In effect, despite the different name, it is the SLLN for stationary stochastic processes. Suppose X1, X2, … is a strictly stationary stochastic process and X1 has expectation (so by stationarity so do the rest of the variables). Write
        X̄_n = (1/n) ∑_{i=1}^{n} X_i.
    The Birkhoff ergodic theorem says
        X̄_n → Y almost surely,   (1)
    where Y is a random variable satisfying E(Y) = E(X1). More can be said about Y, but we will have to develop some theory first. The SLLN for IID sequences says the same thing as the Birkhoff ergodic theorem (1) except that in the SLLN for IID sequences the limit Y = E(X1) is constant.
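The convergence X̄_n → Y asserted by the Birkhoff ergodic theorem can be watched numerically. The sketch below is my own illustration, not part of Geyer's notes: it samples a strictly stationary AR(1) process started from its stationary normal law (a convenient concrete example of a stationary ergodic process with mean zero) and tracks the running sample mean.

```python
import math
import random

def ar1_sample_path(n, phi=0.5, seed=0):
    """Sample a strictly stationary AR(1) process X_{t+1} = phi*X_t + eps_t,
    eps_t ~ N(0, 1), started from its stationary N(0, 1/(1 - phi^2)) law."""
    rng = random.Random(seed)
    x = rng.gauss(0.0, math.sqrt(1.0 / (1.0 - phi * phi)))
    path = []
    for _ in range(n):
        path.append(x)
        x = phi * x + rng.gauss(0.0, 1.0)
    return path

def running_mean(path):
    """Running averages X-bar_1, X-bar_2, ..., X-bar_n of the path."""
    total, out = 0.0, []
    for i, x in enumerate(path, 1):
        total += x
        out.append(total / i)
    return out
```

For this ergodic process the limit Y in (1) is the constant E(X1) = 0, so the running mean of a long path should settle near zero.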
  • Pointwise and L1 Mixing Relative to a Sub-Sigma Algebra
    POINTWISE AND L1 MIXING RELATIVE TO A SUB-SIGMA ALGEBRA
    DANIEL J. RUDOLPH
    Abstract. We consider two natural definitions for the notion of a dynamical system being mixing relative to an invariant sub-σ-algebra H. Both concern the convergence of |E(f · g ◦ T^n | H) − E(f | H) E(g ◦ T^n | H)| → 0 as |n| → ∞ for appropriate f and g. The weaker condition asks for convergence in L1 and the stronger for convergence a.e. We will see that these are different conditions. Our goal is to show that both these notions are robust. As is quite standard, we show that one need only consider g = f and E(f | H) = 0, and in this case |E(f · f ◦ T^n | H)| → 0. We will see rather easily that for L1 convergence it is enough to check an L2-dense family. Our major result will be to show the same is true for pointwise convergence, making this a verifiable condition. As an application we will see that if T is mixing then for any ergodic S, S × T is relatively mixing with respect to the first coordinate sub-σ-algebra in the pointwise sense.
    1. Introduction
    Mixing properties for ergodic measure-preserving systems generally have versions "relative" to an invariant sub-σ-algebra (factor algebra). For most cases the fundamental theory for the absolute case lifts to the relative case. For example, one can say T is relatively weakly mixing with respect to a factor algebra H if 1) L2(µ) has no finite-dimensional invariant submodules over the subspace of H-measurable functions, or
    Date: August 31, 2005.
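The absolute (non-relative) version of the condition above, E(f · f ◦ T^n) → 0 for mean-zero f, can be estimated numerically for a concrete mixing transformation. The sketch below is my own illustration, not from Rudolph's paper: it Monte Carlo estimates the correlation E[f(x) f(T^n x)] for the doubling map T(x) = 2x mod 1 on [0, 1) with the mean-zero observable f(x) = cos(2πx).

```python
import math
import random

def correlation(n_iter, samples=200000, seed=1):
    """Monte Carlo estimate of E[f(x) f(T^n x)] for T(x) = 2x mod 1
    with f(x) = cos(2*pi*x), which integrates to zero on [0, 1)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        x = rng.random()
        # T^n x = 2^n * x mod 1; the multiply by a power of two is exact
        # in double precision for moderate n_iter.
        y = (x * (2 ** n_iter)) % 1.0
        total += math.cos(2 * math.pi * x) * math.cos(2 * math.pi * y)
    return total / samples
```

At n = 0 the estimate is E[cos²(2πx)] = 1/2; for n ≥ 1 the exact correlation is 0, reflecting the mixing of the doubling map, and the estimates sit near zero up to Monte Carlo error.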
  • On Li-Yorke Measurable Sensitivity
    PROCEEDINGS OF THE AMERICAN MATHEMATICAL SOCIETY, Volume 143, Number 6, June 2015, Pages 2411–2426. S 0002-9939(2015)12430-6. Article electronically published on February 3, 2015.
    ON LI-YORKE MEASURABLE SENSITIVITY
    JARED HALLETT, LUCAS MANUELLI, AND CESAR E. SILVA (Communicated by Nimish Shah)
    Abstract. The notion of Li-Yorke sensitivity has been studied extensively in the case of topological dynamical systems. We introduce a measurable version of Li-Yorke sensitivity, for nonsingular (and measure-preserving) dynamical systems, and compare it with various mixing notions. It is known that in the case of nonsingular dynamical systems, a conservative ergodic Cartesian square implies double ergodicity, which in turn implies weak mixing, but the converses do not hold in general, though they are all equivalent in the finite measure-preserving case. We show that for nonsingular systems, an ergodic Cartesian square implies Li-Yorke measurable sensitivity, which in turn implies weak mixing. As a consequence we obtain that, in the finite measure-preserving case, Li-Yorke measurable sensitivity is equivalent to weak mixing. We also show that with respect to totally bounded metrics, double ergodicity implies Li-Yorke measurable sensitivity.
    1. Introduction
    The notion of sensitive dependence for topological dynamical systems has been studied by many authors; see, for example, the works [3, 7, 10] and the references therein. Recently, various notions of measurable sensitivity have been explored in ergodic theory; see for example [2, 9, 11–14]. In this paper we are interested in formulating a measurable version of the topological notion of Li-Yorke sensitivity for the case of nonsingular and measure-preserving dynamical systems.
  • Perfect Filtering and Double Disjointness
    Hillel Furstenberg, Yuval Peres, Benjamin Weiss, "Perfect filtering and double disjointness", Annales de l'I. H. P., section B, tome 31, no. 3 (1995), p. 453-465. <http://www.numdam.org/item?id=AIHPB_1995__31_3_453_0> © Gauthier-Villars, 1995, all rights reserved.
    Institute of Mathematics, the Hebrew University, Jerusalem.
    ABSTRACT. Suppose a stationary process {U_n} is used to select from several stationary processes, i.e., if U_n = i then we observe Y_n, which is the n-th variable in the i-th process. When can we recover the selecting sequence {U_n} from the output sequence {Y_n}?
    1. INTRODUCTION
    Suppose a discrete-time stationary stochastic signal {U_n}, taking integer values, is transmitted over a noisy channel.
  • Chaos Theory
    By Nadha
    CHAOS THEORY
    What is chaos theory?
    - It is a field of study within applied mathematics.
    - It studies the behavior of dynamical systems that are highly sensitive to initial conditions.
    - It deals with nonlinear systems.
    - It is commonly referred to as the butterfly effect.
    What is a nonlinear system?
    - In mathematics, a nonlinear system is a system which is not linear.
    - It is a system which does not satisfy the superposition principle, or whose output is not directly proportional to its input.
    - Less technically, a nonlinear system is any problem where the variable(s) to be solved for cannot be written as a linear combination of independent components.
    The superposition principle states that, for all linear systems, the net response at a given place and time caused by two or more stimuli is the sum of the responses which would have been caused by each stimulus individually: if input A produces response X and input B produces response Y, then input (A + B) produces response (X + Y). A nonlinear system does not satisfy this principle.
    There are three criteria that a chaotic system satisfies:
    1) sensitive dependence on initial conditions
    2) topological mixing
    3) periodic orbits are dense
    1) Sensitive dependence on initial conditions
    - This is popularly known as the "butterfly effect."
    - It got its name from the title of a paper by Edward Lorenz: "Predictability: Does the Flap of a Butterfly's Wings in Brazil Set Off a Tornado in Texas?"
    - The flapping wing represents a small change in the initial condition of the system, which causes a chain of events leading to large-scale phenomena.
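The superposition principle described above is easy to test mechanically. A minimal sketch (my own illustration, not from the slides): additivity holds for a linear map and fails for a squaring map.

```python
def linear(x):
    """A linear system: output directly proportional to input."""
    return 3.0 * x

def nonlinear(x):
    """A nonlinear system: output is the square of the input."""
    return x * x

def satisfies_superposition(f, a, b, tol=1e-12):
    """Check the additivity half of superposition: f(a + b) == f(a) + f(b)."""
    return abs(f(a + b) - (f(a) + f(b))) < tol
```

For the squaring map, f(2 + 5) = 49 while f(2) + f(5) = 29, so superposition fails, which is exactly what makes the system nonlinear in the sense above.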
  • Turbulence, Fractals, and Mixing
    Turbulence, fractals, and mixing
    Paul E. Dimotakis and Haris J. Catrakis
    GALCIT Report FM97-1, 17 January 1997
    Firestone Flight Sciences Laboratory, Guggenheim Aeronautical Laboratory, Karman Laboratory of Fluid Mechanics and Jet Propulsion, Pasadena
    Graduate Aeronautical Laboratories, California Institute of Technology, Pasadena, California 91125
    Abstract. Proposals and experimental evidence, from both numerical simulations and laboratory experiments, regarding the behavior of level sets in turbulent flows are reviewed. Isoscalar surfaces in turbulent flows, at least in liquid-phase turbulent jets, where extensive experiments have been undertaken, appear to have a geometry that is more complex than (constant-D) fractal. Their description requires an extension of the original, scale-invariant, fractal framework that can be cast in terms of a variable (scale-dependent) coverage dimension, Dd(X). The extension to a scale-dependent framework allows level-set coverage statistics to be related to other quantities of interest. In addition to the pdf of point-spacings (in 1-D), it can be related to the scale-dependent surface-to-volume (perimeter-to-area in 2-D) ratio, as well as the distribution of distances to the level set. The application of this framework to the study of turbulent-jet mixing indicates that isoscalar geometric measures are both threshold- and Reynolds-number-dependent. As regards mixing, the analysis facilitated by the new tools, as well as by other criteria, indicates enhanced mixing with increasing Reynolds number, at least for the range of Reynolds numbers investigated. This results in a progressively less-complex level-set geometry, at least in liquid-phase turbulent jets, with increasing Reynolds number.
  • Blocking Conductance and Mixing in Random Walks
    Blocking conductance and mixing in random walks
    Ravindran Kannan, Department of Computer Science, Yale University
    László Lovász, Microsoft Research
    Ravi Montenegro, Georgia Institute of Technology
    Contents
    1 Introduction 3
    2 Continuous state Markov chains 7
      2.1 Conductance and other expansion measures 8
    3 General mixing results 11
      3.1 Mixing time estimates in terms of a parameter function 11
      3.2 Faster convergence to stationarity 13
      3.3 Constructing mixweight functions 13
      3.4 Explicit conductance bounds 16
      3.5 Finite chains 17
      3.6 Further examples 18
    4 Random walks in convex bodies 19
      4.1 A logarithmic Cheeger inequality 19
      4.2 Mixing of the ball walk 21
      4.3 Sampling from a convex body 21
    5 Proofs I: Markov Chains 22
      5.1 Proof of Theorem 3.1 22
      5.2 Proof of Theorem 3.2 24
      5.3 Proof of Theorem 3.3 26
    6 Proofs II: geometry 27
      6.1 Lemmas 27
      6.2 Proof of Theorem 4.5 29
      6.3 Proof of Theorem 4.6 30
      6.4 Proof of Theorem 4.3 32
    Abstract. The notion of conductance introduced by Jerrum and Sinclair [JS88] has been widely used to prove rapid mixing of Markov chains. Here we introduce a bound that extends this in two directions. First, instead of measuring the conductance of the worst subset of states, we bound the mixing time by a formula that can be thought of as a weighted average of the Jerrum-Sinclair bound (where the average is taken over subsets of states with different sizes).
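The bottleneck-ratio notion of conductance in the abstract can be computed by brute force for a tiny chain. The sketch below uses my own conventions, not the paper's: Φ = min over nonempty S with π(S) ≤ 1/2 of Q(S, S^c)/π(S) for the lazy walk on an n-cycle, and the spectral gap is taken from the standard eigenvalue formula for the cycle (assumed here rather than derived), so the Cheeger-type inequalities Φ²/2 ≤ gap ≤ 2Φ in the Jerrum-Sinclair tradition can be checked numerically.

```python
import math
from itertools import combinations

def conductance_lazy_cycle(n):
    """Brute-force bottleneck ratio Phi = min_S Q(S, S^c) / pi(S) over
    nonempty S with pi(S) <= 1/2, for the lazy walk on the n-cycle
    (pi is uniform, Q(x, y) = pi(x) P(x, y))."""
    pi = 1.0 / n
    def P(x, y):
        if x == y:
            return 0.5
        if (x - y) % n in (1, n - 1):
            return 0.25
        return 0.0
    states = range(n)
    best = float("inf")
    for k in range(1, n // 2 + 1):
        for S in combinations(states, k):
            Sset = set(S)
            flow = sum(pi * P(x, y) for x in S for y in states if y not in Sset)
            best = min(best, flow / (pi * len(S)))
    return best

def spectral_gap_lazy_cycle(n):
    """1 minus the second-largest eigenvalue; for the lazy cycle the
    eigenvalues are 1/2 + cos(2*pi*k/n)/2, a standard fact assumed here."""
    return (1.0 - math.cos(2.0 * math.pi / n)) / 2.0
```

For n = 6 the minimizing set is a contiguous arc of 3 vertices, giving Φ = 1/6, while the gap is 1/4; both Cheeger-type inequalities hold comfortably.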
  • Russell David Lyons
    Russell David Lyons
    Education
    Case Western Reserve University, Cleveland, OH: B.A. summa cum laude with departmental honors, May 1979, Mathematics.
    University of Michigan, Ann Arbor, MI: Ph.D., August 1983, Mathematics. Sumner Myers Award for best thesis in mathematics. Specialization: Harmonic Analysis. Thesis: A Characterization of Measures Whose Fourier-Stieltjes Transforms Vanish at Infinity. Thesis advisers: Hugh L. Montgomery, Allen L. Shields.
    Employment
    Indiana University, Bloomington, IN: James H. Rudy Professor of Mathematics, 2014–present.
    Indiana University, Bloomington, IN: Adjunct Professor of Statistics, 2006–present.
    Indiana University, Bloomington, IN: Professor of Mathematics, 1994–2014.
    Georgia Institute of Technology, Atlanta, GA: Professor of Mathematics, 2000–2003.
    Indiana University, Bloomington, IN: Associate Professor of Mathematics, 1990–94.
    Stanford University, Stanford, CA: Assistant Professor of Mathematics, 1985–90.
    Université de Paris-Sud, Orsay, France: Assistant Associé, half-time, 1984–85.
    Sperry Research Center, Sudbury, MA: Researcher, summers 1976, 1979.
    Hampshire College Summer Studies in Mathematics, Amherst, MA: Teaching staff, summers 1977, 1978.
    Visiting Research Positions
    University of Calif., Berkeley: Visiting Miller Research Professor, Spring 2001.
    Microsoft Research: Visiting Researcher, Jan.–Mar. 2000, May–June 2004, July 2006, Jan.–June 2007, July 2008–June 2009, Sep.–Dec. 2010, Aug.–Oct. 2011, July–Oct. 2012, May–July 2013, Jun.–Oct. 2014, Jun.–Aug. 2015, Jun.–Aug. 2016, Jun.–Aug. 2017, Jun.–Aug. 2018.
    Weizmann Institute of Science, Rehovot, Israel: Rosi and Max Varon Visiting Professor, Fall 1997.
    Institute for Advanced Studies, Hebrew University of Jerusalem, Israel: Winston Fellow, 1996–97.
    Université de Lyon, France: Visiting Professor, May 1996.
    University of Wisconsin, Madison, WI: Visiting Associate Professor, Winter 1994.
  • Chaos and Chaotic Fluid Mixing
    Snapshots of modern mathematics № 4/2014 from Oberwolfach
    Chaos and chaotic fluid mixing
    Tom Solomon
    Very simple mathematical equations can give rise to surprisingly complicated, chaotic dynamics, with behavior that is sensitive to small deviations in the initial conditions. We illustrate this with a single recurrence equation that can be easily simulated, and with mixing in simple fluid flows.
    1 Introduction to chaos
    People used to believe that simple mathematical equations had simple solutions and complicated equations had complicated solutions. However, Henri Poincaré (1854–1912) and Edward Lorenz [4] showed that some remarkably simple mathematical systems can display surprisingly complicated behavior. An example is the well-known logistic map equation:
        x_{n+1} = A x_n (1 − x_n),   (1)
    where x_n is a quantity (between 0 and 1) at some time n, x_{n+1} is the same quantity one time-step later, and A is a given real number between 0 and 4. One of the most common uses of the logistic map is to model variations in the size of a population. For example, x_n can denote the population of adult mayflies in a pond at some point in time, measured as a fraction of the maximum possible number the adult mayfly population can reach in this pond. Similarly, x_{n+1} denotes the population of the next generation (again, as a fraction of the maximum) and the constant A encodes the influence of various factors on the size of the population (ability of pond to provide food, fraction of adults that typically reproduce, fraction of hatched mayflies not reaching adulthood, overall mortality rate, etc.). As seen in Figure 1, the behavior of the logistic system changes dramatically with changes in the parameter A.
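The recurrence (1) really is "easily simulated," and sensitivity to initial conditions shows up immediately. The sketch below is my own illustration (parameter values chosen for the demonstration, not taken from the snapshot): it iterates the logistic map from two starting points a tiny distance apart and measures how far the orbits separate.

```python
def logistic_orbit(x0, A, n):
    """Iterate x_{k+1} = A * x_k * (1 - x_k) for n steps; return the orbit."""
    orbit = [x0]
    x = x0
    for _ in range(n):
        x = A * x * (1.0 - x)
        orbit.append(x)
    return orbit

def max_separation(x0, delta, A, n=100):
    """Largest gap between two orbits started delta apart."""
    a = logistic_orbit(x0, A, n)
    b = logistic_orbit(x0 + delta, A, n)
    return max(abs(p - q) for p, q in zip(a, b))
```

At A = 3.9 (a chaotic parameter) an initial offset of 10⁻⁹ is amplified to order one within the first hundred iterations, while at A = 2.5 the orbit simply settles on the attracting fixed point 1 − 1/A = 0.6 regardless of small perturbations.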
  • Generalized Bernoulli Process with Long-Range Dependence and Fractional Binomial Distribution
    Depend. Model. 2021; 9:1–12. Research Article. Open Access.
    Jeonghwa Lee*
    Generalized Bernoulli process with long-range dependence and fractional binomial distribution
    https://doi.org/10.1515/demo-2021-0100
    Received October 23, 2020; accepted January 22, 2021.
    Abstract: A Bernoulli process is a finite or infinite sequence of independent binary variables, X_i, i = 1, 2, …, whose outcome is either 1 or 0 with probability P(X_i = 1) = p, P(X_i = 0) = 1 − p, for a fixed constant p ∈ (0, 1). We will relax the independence condition of the Bernoulli variables, and develop a generalized Bernoulli process that is stationary and has an auto-covariance function that obeys a power law with exponent 2H − 2, H ∈ (0, 1). The generalized Bernoulli process encompasses various forms of binary sequence, from an independent binary sequence to a binary sequence that has long-range dependence. A fractional binomial random variable is defined as the sum of n consecutive variables in a generalized Bernoulli process; of particular interest is when its variance is proportional to n^{2H}, if H ∈ (1/2, 1).
    Keywords: Bernoulli process, long-range dependence, Hurst exponent, over-dispersed binomial model
    MSC: 60G10, 60G22
    1 Introduction
    Fractional processes have been of interest due to their usefulness in capturing long-lasting dependency in a stochastic process, called long-range dependence, and have been rapidly developed over the last few decades. They have been applied to internet traffic, queueing networks, hydrology data, etc. (see [3, 7, 10]). Among the most well-known models are fractional Gaussian noise, fractional Brownian motion, and the fractional Poisson process. Fractional Brownian motion (fBm) B_H(t) developed by Mandelbrot, B.
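The n^{2H} variance scaling is what distinguishes the fractional binomial from the ordinary binomial, which is the independent H = 1/2 case with Var(S_n) = n p (1 − p). The sketch below is my own baseline illustration, not from the paper: it estimates the variance of a sum of n independent Bernoulli draws to confirm the linear-in-n benchmark against which the over-dispersed n^{2H} case is measured.

```python
import random

def empirical_binomial_variance(n, p, trials=20000, seed=7):
    """Estimate Var(S_n) for S_n a sum of n independent Bernoulli(p) draws,
    i.e. the H = 1/2 (no-dependence) case; theory gives n * p * (1 - p)."""
    rng = random.Random(seed)
    sums = []
    for _ in range(trials):
        s = sum(1 for _ in range(n) if rng.random() < p)
        sums.append(s)
    mean = sum(sums) / trials
    return sum((s - mean) ** 2 for s in sums) / (trials - 1)
```

With n = 100 and p = 0.3 the estimate sits near n p (1 − p) = 21; a long-range-dependent binary sequence with H > 1/2 would instead show variance growing faster than linearly in n.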
  • Affine Embeddings of Cantor Sets on the Line (arXiv:1607.02849v3)
    Affine Embeddings of Cantor Sets on the Line
    Amir Algom
    August 5, 2021
    Abstract. Let s ∈ (0, 1), and let F ⊂ R be a self-similar set such that 0 < dim_H F ≤ s. We prove that there exists δ = δ(s) > 0 such that if F admits an affine embedding into a homogeneous self-similar set E and 0 ≤ dim_H E − dim_H F < δ, then (under some mild conditions on E and F) the contraction ratios of E and F are logarithmically commensurable. This provides more evidence for Conjecture 1.2 of Feng et al. (2014), which states that these contraction ratios are logarithmically commensurable whenever F admits an affine embedding into E (under some mild conditions). Our method combines an argument based on the approach of Feng et al. (2014) with a new result by Hochman (2016), which is related to the increase of entropy of measures under convolutions.
    1 Introduction
    Let A, B ⊂ R. We say that A can be affinely embedded into B if there exists an affine map g: R → R, g(x) = γ·x + t for γ ≠ 0, t ∈ R, such that g(A) ⊆ B. The motivating problem of this paper is to show that if A and B are two Cantor sets and A can be affinely embedded into B, then there should be some arithmetic dependence between the scales appearing in the multiscale structures of A and B. This phenomenon is known to occur, e.g., when A and B are two self-similar sets satisfying the strong separation condition that are Lipschitz equivalent; see Falconer and Marsh (1992).