Markov Chains

Chapter 1
Markov Chains

A sequence of random variables $X_0, X_1, \ldots$ with values in a countable set $S$ is a Markov chain if at any time $n$, the future states (or values) $X_{n+1}, X_{n+2}, \ldots$ depend on the history $X_0, \ldots, X_n$ only through the present state $X_n$. Markov chains are fundamental stochastic processes that have many diverse applications. This is because a Markov chain represents any dynamical system whose states satisfy the recursion

$$X_n = f(X_{n-1}, Y_n), \qquad n \ge 1,$$

where $Y_1, Y_2, \ldots$ are independent and identically distributed (i.i.d.) and $f$ is a deterministic function. That is, the new state $X_n$ is simply a function of the last state and an auxiliary random variable. Such system dynamics are typical of those for queue lengths in call centers, stresses on materials, waiting times in production and service facilities, inventories in supply chains, parallel-processing software, water levels in dams, insurance funds, stock prices, etc.

This chapter begins by describing the basic structure of a Markov chain and how its single-step transition probabilities determine its evolution. For instance, what is the probability of reaching a certain state, and how long does it take to reach it? The next and main part of the chapter characterizes the stationary or equilibrium distribution of Markov chains. These distributions are the basis of limiting averages of various cost and performance parameters associated with Markov chains. Considerable discussion is devoted to branching phenomena, stochastic networks, and time-reversible chains. Included are examples of Markov chains that represent queueing, production systems, inventory control, reliability, and Monte Carlo simulations.

Before getting into the main text, a reader would benefit from a brief review of conditional probabilities in Section 1.22 of this chapter and related material on random variables and distributions in Sections 1–4 in the Appendix. The rest of the Appendix, which provides more background on probability, would be appropriate for later reading.

1.1 Introduction

This section introduces Markov chains and describes a few examples.

A discrete-time stochastic process $\{X_n : n \ge 0\}$ on a countable set $S$ is a collection of $S$-valued random variables defined on a probability space $(\Omega, \mathcal{F}, P)$. The $P$ is a probability measure on a family of events $\mathcal{F}$ (a $\sigma$-field) in an event-space $\Omega$.¹ The set $S$ is the state space of the process, and the value $X_n \in S$ is the state of the process at time $n$. The $n$ may represent a parameter other than time, such as a length or a job number.

¹ Further details on probability spaces are in the Appendix. We follow the convention of not displaying the space $(\Omega, \mathcal{F}, P)$ every time random variables or processes are introduced; it is mentioned only when needed for clarity.

The finite-dimensional distributions of the process are

$$P\{X_0 = i_0, \ldots, X_n = i_n\}, \qquad i_0, \ldots, i_n \in S, \; n \ge 0.$$

These probabilities uniquely determine the probabilities of all events of the process. Consequently, two stochastic processes (defined on different probability spaces or the same one) are equal in distribution if their finite-dimensional distributions are equal. Various types of stochastic processes are defined by specifying the dependency among the variables that determine the finite-dimensional distributions, or by specifying the manner in which the process evolves over time (the system dynamics). A Markov chain is defined as follows.

Definition 1. A stochastic process $X = \{X_n : n \ge 0\}$ on a countable set $S$ is a Markov chain if, for any $i, j \in S$ and $n \ge 0$,

$$P\{X_{n+1} = j \mid X_0, \ldots, X_n\} = P\{X_{n+1} = j \mid X_n\}, \tag{1.1}$$

$$P\{X_{n+1} = j \mid X_n = i\} = p_{ij}. \tag{1.2}$$
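Before unpacking conditions (1.1) and (1.2), here is a minimal simulation sketch (Python, not from the text; the function name and the encoding of $P$ as a list of rows are illustrative assumptions). It generates a sample path from a transition matrix and is exactly the recursion $X_n = f(X_{n-1}, Y_n)$ from the introduction, with $Y_n$ a uniform random draw.

```python
import random

def simulate_chain(P, x0, n_steps, seed=42):
    """Sample a path X_0, ..., X_{n_steps} of a Markov chain.

    P is the transition matrix as a list of rows, where row i holds
    the probabilities p_ij; x0 is the initial state (an index into P).
    """
    rng = random.Random(seed)
    path = [x0]
    for _ in range(n_steps):
        i = path[-1]
        # The next state depends on X_0, ..., X_n only through i
        # (the Markov property): draw j with probability p_ij.
        j = rng.choices(range(len(P[i])), weights=P[i])[0]
        path.append(j)
    return path

# A hypothetical two-state example: state 0 is "idle", state 1 is "busy".
print(simulate_chain([[0.9, 0.1], [0.5, 0.5]], x0=0, n_steps=10))
```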
The $p_{ij}$ is the probability that the Markov chain jumps from state $i$ to state $j$. These transition probabilities satisfy

$$\sum_{j \in S} p_{ij} = 1, \qquad i \in S,$$

and the matrix $P = (p_{ij})$ is the transition matrix of the chain.

Condition (1.1), called the Markov property, says that, at any time $n$, the next state $X_{n+1}$ is conditionally independent of the past $X_0, \ldots, X_{n-1}$ given the present state $X_n$. In other words, the next state is dependent on the past and present only through the present state. The Markov property is an elementary condition that is satisfied by the state of many stochastic phenomena. Consequently, Markov chains, and related continuous-time Markov processes, are natural models or building blocks for applications.

Condition (1.2) simply says the transition probabilities do not depend on the time parameter $n$; the Markov chain is therefore "time-homogeneous". If the transition probabilities were functions of time, the process $X_n$ would be a non-time-homogeneous Markov chain. Such chains are like time-homogeneous chains, but the time dependency introduces added accounting details that we will not address here. See Exercises 12 and 13 for further insights.

Since the state space $S$ is countable, we will sometimes label the states by integers, such as $S = \{0, 1, 2, \ldots\}$ (or $S = \{1, \ldots, m\}$). Under this labeling, the transition matrix has the form

$$P = \begin{pmatrix}
p_{00} & p_{01} & p_{02} & \cdots \\
p_{10} & p_{11} & p_{12} & \cdots \\
p_{20} & p_{21} & p_{22} & \cdots \\
\vdots & \vdots & \vdots & \ddots
\end{pmatrix}$$

We end this section with a few preliminary examples.

Example 2. Binomial Markov Chain. A Bernoulli process is a sequence of independent trials in which each trial results in a success or failure with respective probabilities $p$ and $q = 1 - p$. Let $X_n$ denote the number of successes in $n$ trials, for $n \ge 1$. By direct reasoning, it follows that $X_n$ has a binomial distribution with parameters $n$ and $p$:

$$P\{X_n = k\} = \binom{n}{k} p^k (1-p)^{n-k}, \qquad 0 \le k \le n.$$

Now, suppose at the $n$th trial that $X_n = i$. Then at the next trial, $X_{n+1}$ will equal $i+1$ or $i$ with probabilities $p$ and $1-p$, respectively, regardless of the values of $X_1, \ldots, X_{n-1}$. Thus $X_n$ is a Markov chain with transition probabilities $p_{i,i+1} = p$, $p_{ii} = 1 - p$, and $p_{ij} = 0$ otherwise. This binomial Markov chain is a special case of the following random walk.

Example 3. Random Walk. Suppose $Y_1, Y_2, \ldots$ are i.i.d. integer-valued random variables, and define $X_0 = 0$ and

$$X_n = \sum_{m=1}^{n} Y_m, \qquad n \ge 1.$$

The process $X_n$ is a random walk on the set of integers $S$, where $Y_n$ is the step size at time $n$. A random walk represents a quantity that changes over time (e.g., a stock price, an inventory level, or a gambler's fortune) such that its increments (step sizes) are i.i.d. Since $X_{n+1} = X_n + Y_{n+1}$, and $Y_{n+1}$ is independent of $X_0, \ldots, X_n$, it follows that, for any $i, j \in S$ and $n \ge 0$,

$$P\{X_{n+1} = j \mid X_0, \ldots, X_{n-1}, X_n = i\} = P\{X_n + Y_{n+1} = j \mid X_n = i\} = P\{Y_1 = j - i\}.$$

Therefore, the random walk $X_n$ is a Markov chain on $S$ with transition probabilities $p_{ij} = P\{Y_1 = j - i\}$.

When the step sizes $Y_n$ take values $1$ or $-1$ with $p = P\{Y_1 = 1\}$ and $q = P\{Y_1 = -1\}$, the chain $X_n$ is a simple random walk. Its transition probabilities, for each $i$, are

$$p_{i,i+1} = p, \qquad p_{i,i-1} = q, \qquad p_{ij} = 0 \quad \text{for } j \ne i+1 \text{ or } i-1.$$

This type of walk restricted to a finite state space is described next.
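First, a brief simulation sketch of the random walk just defined (illustrative Python under the same assumptions as the earlier snippet, not from the text): it builds the path directly from the partial sums $X_n = Y_1 + \cdots + Y_n$.

```python
import random

def random_walk(step_dist, n_steps, seed=1):
    """Simulate X_n = Y_1 + ... + Y_n for i.i.d. steps Y_m, where
    step_dist maps each possible step size to its probability."""
    rng = random.Random(seed)
    steps, probs = zip(*step_dist.items())
    x, path = 0, [0]
    for _ in range(n_steps):
        x += rng.choices(steps, weights=probs)[0]  # draw Y_{n+1}
        path.append(x)
    return path

# Simple random walk: step +1 with probability p, -1 with probability q.
p = 0.6
print(random_walk({+1: p, -1: 1 - p}, n_steps=20))
```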
Example 4. Gambler's Ruin. Consider a Markov chain on $S = \{0, 1, \ldots, m\}$ with transition matrix

$$P = \begin{pmatrix}
1 & 0 & 0 & \cdots & & & \\
q & 0 & p & 0 & \cdots & & \\
0 & q & 0 & p & 0 & \cdots & \\
& & \ddots & \ddots & \ddots & & \\
& \cdots & & 0 & q & 0 & p \\
& & \cdots & & & 0 & 1
\end{pmatrix}$$

One can interpret the state of the Markov chain as the fortune of a gambler who repeatedly plays a game in which the gambler wins or loses \$1 with respective probabilities $p$ and $q = 1 - p$. If the fortune reaches state $0$, the gambler is ruined since $p_{00} = 1$ (state $0$ is absorbing; the chain stays there forever). On the other hand, if the fortune reaches $m$, the gambler retires with the fortune $m$ since $p_{mm} = 1$ ($m$ is another absorbing state).

A versatile generalization to state-dependent gambles (and other applications as well) is given by the transition matrix

$$P = \begin{pmatrix}
r_0 & p_0 & 0 & \cdots & & & \\
q_1 & r_1 & p_1 & 0 & \cdots & & \\
0 & q_2 & r_2 & p_2 & 0 & \cdots & \\
& & \ddots & \ddots & \ddots & & \\
& \cdots & & 0 & q_{m-1} & r_{m-1} & p_{m-1} \\
& & \cdots & & & q_m & r_m
\end{pmatrix}$$

In this case, the outcome of the game depends on the gambler's fortune. When the fortune is $i$, the gambler either wins or loses \$1 with respective probabilities $p_i$ or $q_i$, or breaks even (the fortune does not change) with probability $r_i$. Another interpretation is that the state of the chain is the location of a random walk with state-dependent steps of size $-1$, $0$, or $1$.

Markov chains are common models for a variety of systems and phenomena, such as the following, in which the Markov property is "reasonable".

Example 5. Flexible Manufacturing System. Consider a machine that is capable of producing three types of parts. The state of the machine at time period $n$ is denoted by a random variable $X_n$ that takes values in $S = \{0, 1, 2, 3\}$, where $0$ means the machine is idle and $i = 1, 2$ or $3$ means the machine produces a type $i$ part in the time period. Suppose the machine's production schedule is Markovian in the sense that the next type of part it produces, or a possible idle period, depends only on its current state, and the probabilities of these changes do not depend on time. Then $X_n$ is a Markov chain. For instance, its transition matrix might be

$$P = \begin{pmatrix}
1/5 & 1/5 & 1/5 & 2/5 \\
1/10 & 1/2 & 1/10 & 3/10 \\
1/5 & 0 & 1/5 & 3/5 \\
1/5 & 0 & 2/5 & 2/5
\end{pmatrix}$$

Such probabilities can be estimated as in Exercise 65, provided one can observe the evolution of the system.
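As a rough illustration of such estimation (a sketch of the standard empirical estimator; the details of Exercise 65 are not reproduced here and may differ): given an observed path of the chain, estimate each $p_{ij}$ by the fraction of one-step transitions out of state $i$ that land in state $j$.

```python
from collections import Counter

def estimate_transition_matrix(path, n_states):
    """Estimate p_ij by N_ij / N_i, where N_ij counts the observed
    one-step transitions i -> j along the path and N_i = sum_j N_ij."""
    counts = Counter(zip(path, path[1:]))
    P_hat = [[0.0] * n_states for _ in range(n_states)]
    for i in range(n_states):
        n_i = sum(counts[(i, j)] for j in range(n_states))
        if n_i > 0:  # leave row i at zero if state i was never left
            P_hat[i] = [counts[(i, j)] / n_i for j in range(n_states)]
    return P_hat
```

Applied to a long path simulated from the matrix in Example 5 (which is irreducible on a finite state space), these row-wise relative frequencies converge to the true $p_{ij}$.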