IMA BOOTCAMP: STOCHASTIC MODELING

JASMINE FOO AND KEVIN LEDER

Note: much of the material in these bootcamp lecture notes is adapted from the book `Introduction to Probability Models' by Sheldon Ross and lecture notes on the web by Janko Gravner for MAT135B.

1. Exponential random variables and their properties

We begin with a review of exponential random variables, which appear frequently in mathematical models of real-world phenomena because of their convenient mathematical properties.

Definition 1. A continuous random variable X is said to have an exponential distribution with parameter λ, λ > 0, if its probability density function is given by

    f(x) = λe^{−λx},  x ≥ 0,
    f(x) = 0,         x < 0.

1.1. Moments. The mean of the exponential distribution, E[X], is given by

    E[X] = ∫_{−∞}^{∞} x f(x) dx = ∫_0^∞ x λe^{−λx} dx = 1/λ.

The moment generating function φ(t) of the exponential distribution is

    φ(t) = E[e^{tX}] = ∫_0^∞ e^{tx} λe^{−λx} dx = λ/(λ − t),  for t < λ.

Using this we can derive the variance:

    Var(X) = E[X^2] − (E[X])^2 = (d^2/dt^2) φ(t)|_{t=0} − 1/λ^2 = 2/λ^2 − 1/λ^2 = 1/λ^2.

1.2. Memoryless property. Exponential distributions have the memoryless property:

    P(X > s + t | X > t) = P(X > s),  for all s, t ≥ 0.

To see this, note that the property is equivalent to

    P(X > s + t, X > t) / P(X > t) = P(X > s)

or

    P(X > s + t) = P(X > s) P(X > t).

For X exponentially distributed we get e^{−λ(s+t)} = e^{−λs} e^{−λt}, so the memoryless property is satisfied. It can be shown that the exponential distribution is the only continuous distribution that satisfies this property.

Example 1. A store must decide how much product to order to meet next month's demand. The demand is modeled as an exponential random variable with rate λ. The product costs c dollars per unit, and can be sold at a price of s > c dollars per unit. How much should be ordered to maximize the store's expected profit? Assume that any inventory left over at the end of the month is worthless.

Solution. Let X be the demand.
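Before working out the answer analytically, the expected profit can be estimated by straightforward Monte Carlo simulation. The sketch below uses only the Python standard library; the parameter values λ = 0.5, s = 10, c = 4 are hypothetical choices for illustration, not part of the original problem.

```python
import math
import random

random.seed(0)
lam, s, c = 0.5, 10.0, 4.0   # hypothetical demand rate, unit price, unit cost
n = 100_000                  # number of simulated months

def expected_profit(t):
    """Monte Carlo estimate of E[s*min(X, t) - c*t] for demand X ~ Exp(lam)."""
    total = 0.0
    for _ in range(n):
        x = random.expovariate(lam)          # one month's demand
        total += s * min(x, t) - c * t       # revenue on sold units minus order cost
    return total / n

# The estimated profit rises and then falls as the order size t grows.
for t in (1.0, 2.0, 3.0):
    print(t, round(expected_profit(t), 2))
```

The simulation traces out a concave profit curve in t, which is what the exact calculation that follows will confirm.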
If the store orders t units of the product, the profit is

    P = s min(X, t) − ct.

In other words,

    P = sX − ct     if X < t,
    P = (s − c)t    if X ≥ t.

So

    E[P] = (s E[X | X < t] − ct) P(X < t) + (s − c)t P(X ≥ t).

To calculate the conditional expectation, note that

    1/λ = E[X] = E[X | X < t] P(X < t) + E[X | X ≥ t] P(X ≥ t)
               = E[X | X < t] (1 − e^{−λt}) + (t + 1/λ) e^{−λt},

where in the last line we have used the memoryless property: conditioned on having exceeded t, the amount by which X exceeds t is distributed as an exponential random variable with parameter λ, so E[X | X ≥ t] = t + E[X]. Thus we have

    E[X | X < t] = [1/λ − (t + 1/λ) e^{−λt}] / (1 − e^{−λt}).

Plugging this into the expression for E[P], we have

    E[P] = (s [1/λ − (t + 1/λ) e^{−λt}] / (1 − e^{−λt}) − ct)(1 − e^{−λt}) + (s − c)t e^{−λt}
         = s/λ − (s/λ) e^{−λt} − ct.

To maximize the expected profit, we take the derivative in t and set it to zero, s e^{−λt} − c = 0, obtaining that the maximal profit is attained at

    t = (1/λ) log(s/c).

Definition 2. The hazard rate or failure rate of a random variable with cumulative distribution function F and density f is

    h(t) = f(t) / (1 − F(t)).

Consider the following related quantity: conditioned on X > t, what is the probability that X ∈ (t, t + dt)?

    P(X ∈ (t, t + dt) | X > t) = P(X ∈ (t, t + dt), X > t) / P(X > t)
                               = P(X ∈ (t, t + dt)) / P(X > t)
                               ≈ f(t) dt / (1 − F(t)).

So, if we think of X as the time to failure of some object, the hazard rate can be interpreted as the conditional probability density of failure at time t, given that the object has not failed up to time t. The failure rate of an exponentially distributed random variable is constant:

    h(t) = λe^{−λt} / e^{−λt} = λ.

1.3. Sums and minimums of exponential random variables. Suppose X_i, i = 1, ..., n are independent identically distributed exponential random variables with parameter λ. It can be shown (by induction, for example) that the sum X_1 + X_2 + ... + X_n has a Gamma distribution with shape parameter n and rate parameter λ, with pdf

    f(t) = λe^{−λt} (λt)^{n−1} / (n − 1)!.
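As a quick numerical sanity check on the Gamma claim, one can simulate sums of iid exponentials and compare the first two sample moments with the Gamma(n, λ) values n/λ and n/λ². A standard-library sketch, with the arbitrary choices λ = 2 and n = 5:

```python
import random

random.seed(1)
lam, k, n = 2.0, 5, 100_000   # rate, number of summands, number of samples

# Each sample is a sum of k iid Exp(lam) variables, i.e. a Gamma(k, lam) draw.
sums = [sum(random.expovariate(lam) for _ in range(k)) for _ in range(n)]

mean_s = sum(sums) / n                              # theory: k/lam = 2.5
var_s = sum((x - mean_s) ** 2 for x in sums) / n    # theory: k/lam**2 = 1.25
print(round(mean_s, 2), round(var_s, 2))
```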
Next, let Y be the minimum of n independent exponential random variables X_1, ..., X_n with parameters λ_1, ..., λ_n. Then

    P(Y > x) = ∏_{i=1}^n P(X_i > x) = ∏_{i=1}^n e^{−λ_i x} = e^{−(λ_1 + ... + λ_n) x}.

Thus we see that the minimum, Y, is exponentially distributed with parameter equal to the sum of the parameters, λ_1 + λ_2 + ... + λ_n.

2. Poisson Processes

Definition 3. A counting process is a random process N(t), t ≥ 0 (e.g. the number of events that have occurred by time t) such that
(1) for each t, N(t) is a nonnegative integer,
(2) N(t) is nondecreasing in t,
(3) N(t) is right continuous.
Thus N(t) − N(s) represents the number of events in (s, t].

Definition 4. A Poisson process with intensity (or rate) λ > 0 is a counting process N(t) such that
(1) N(0) = 0,
(2) it has independent increments: if (s_1, t_1] ∩ (s_2, t_2] = ∅, then N(t_1) − N(s_1) and N(t_2) − N(s_2) are independent,
(3) the number of events in any interval of length t is Poisson(λt).

Recall that a Poisson random variable Y with parameter λ takes values k = 0, 1, 2, ... with probability

    P(Y = k) = λ^k e^{−λ} / k!.

In particular, the probability mass function for the number of events in an interval of length h is

    P(N(t + h) − N(t) = k) = e^{−λh} (λh)^k / k!,  k = 0, 1, 2, ....

Now, as h → 0, the number of arrivals in a small interval of size h satisfies

    P(N(h) = 1) = λh + o(h)  and  P(N(h) ≥ 2) = o(h).

Also, from the properties of a Poisson random variable we have that E[N(t)] = λt.

2.1. Motivation for the Poisson process. Suppose we have a store where customers arrive at a rate of λ = 4.5 customers per hour on average, and the store owner wants to find the distribution of the number X of customers who arrive during a particular time period of length t hours. The arrival rate can be recast as 4.5 per hour ÷ 3600 seconds per hour = 0.00125 per second.
We can interpret this probabilistically by saying that during each second either 0 or 1 customers arrive, and the probability of an arrival during any single second is 0.00125. Under this setup the Binomial(3600t, 0.00125) distribution describes X, the number of customers who arrive during a time period of length t. Using the Poisson approximation to the Binomial, X is approximately a Poisson random variable with mean 3600t × 0.00125 = 4.5t = λt, which is consistent with our definition of a Poisson process. Recall that property (3) of a Poisson process is that the number of arrivals in any given time period of length t is Poisson(λt). In addition, by construction the numbers of arrivals in non-overlapping time intervals are independent. This example motivates the definition of a Poisson process, which is often used to model arrival processes of events with a constant average arrival rate.

2.2. Interarrival and waiting time distributions. Consider a Poisson process with rate λ, and let T_1 denote the time of the first event; for n > 1 let T_n denote the elapsed time between the (n − 1)st and nth events. The sequence {T_n, n = 1, 2, ...} is called the sequence of interarrival times.

Proposition 1. T_n, n = 1, 2, ... are independent identically distributed exponential random variables with parameter λ.

To see this, first consider T_1:

    P(T_1 > t) = P(N(t) = 0) = e^{−λt}.

Thus T_1 is exponentially distributed with parameter λ. Then

    P(T_2 > t) = ∫_0^∞ P(T_2 > t | T_1 ∈ (s, s + ds)) P(T_1 ∈ (s, s + ds))
               = ∫_0^∞ P(no events in (s, s + t]) λe^{−λs} ds
               = ∫_0^∞ e^{−λt} λe^{−λs} ds
               = e^{−λt}.

So T_2 is also exponentially distributed with parameter λ. From this argument we can also see that T_2 is independent of T_1, since conditioning on the value of T_1 has no impact on the distribution of T_2. Repeating the argument for n ≥ 2 gives the result.
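Proposition 1 connects back to the seconds-based construction of Section 2.1: if we simulate one arrival opportunity per second, each succeeding with probability 0.00125, the observed gaps between arrivals should average about 1/λ = 1/4.5 hours. A standard-library sketch (the simulated horizon of two million seconds is an arbitrary choice):

```python
import random

random.seed(4)
p = 0.00125               # per-second arrival probability from Section 2.1
seconds = 2_000_000       # about 555 hours of simulated time

# Record the second at which each arrival occurs.
arrivals = [sec for sec in range(seconds) if random.random() < p]

# Gaps between consecutive arrivals, converted from seconds to hours.
gaps = [(b - a) / 3600 for a, b in zip(arrivals, arrivals[1:])]
mean_gap = sum(gaps) / len(gaps)   # should be near 1/4.5 ≈ 0.222 hours
print(round(mean_gap, 3))
```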
Intuitively, the Poisson process has independent increments (so from any point onward it is independent of what happened in the past) and stationary increments (any increment of the same length has the same distribution), so the process has no memory; exponential waiting times until the next arrival are therefore to be expected.

Next we consider the waiting time until the nth event, S_n, and its distribution. Note that

    S_n = Σ_{i=1}^n T_i,

and since the T_i are iid exponential(λ) random variables, the variable S_n is Gamma(n, λ). In other words, S_n has pdf

    f_{S_n}(t) = λe^{−λt} (λt)^{n−1} / (n − 1)!,  t ≥ 0.

Note that we could also equivalently define a Poisson process by starting with iid exponential interarrival times with parameter λ, and then defining a counting process by saying that the nth event occurs at time

    S_n = T_1 + T_2 + ... + T_n.

The resulting counting process is a Poisson process with rate λ.
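The equivalent construction can be checked numerically: generate iid exponential interarrival times, count the events up to a horizon t, and compare the empirical mean and variance of N(t) with λt (for a Poisson distribution the two coincide). A sketch using only the standard library, with the arbitrary choices λ = 2 and t = 5:

```python
import random

random.seed(5)
lam, t_end, n_runs = 2.0, 5.0, 50_000
counts = []
for _ in range(n_runs):
    s = random.expovariate(lam)   # time of the first event, S_1 = T_1
    k = 0
    while s <= t_end:             # count events whose arrival time S_n <= t
        k += 1
        s += random.expovariate(lam)   # next interarrival time
    counts.append(k)

mean_n = sum(counts) / n_runs
var_n = sum((k - mean_n) ** 2 for k in counts) / n_runs
print(round(mean_n, 2), round(var_n, 2))  # both should be near λt = 10
```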