Chapter 1
Markov Chains

A sequence of random variables $X_0, X_1, \dots$ with values in a countable set $S$ is a Markov chain if at any time $n$, the future states (or values) $X_{n+1}, X_{n+2}, \dots$ depend on the history $X_0, \dots, X_n$ only through the present state $X_n$. Markov chains are fundamental stochastic processes that have many diverse applications. This is because a Markov chain represents any dynamical system whose states satisfy the recursion

$$X_n = f(X_{n-1}, Y_n), \quad n \ge 1,$$

where $Y_1, Y_2, \dots$ are independent and identically distributed (i.i.d.) and $f$ is a deterministic function. That is, the new state $X_n$ is simply a function of the last state and an auxiliary random variable. Such system dynamics are typical of those for queue lengths in call centers, stresses on materials, waiting times in production and service facilities, inventories in supply chains, parallel-processing software, water levels in dams, insurance funds, stock prices, etc.

This chapter begins by describing the basic structure of a Markov chain and how its single-step transition probabilities determine its evolution. For instance, what is the probability of reaching a certain state, and how long does it take to reach it? The next and main part of the chapter characterizes the stationary or equilibrium distributions of Markov chains. These distributions are the basis of limiting averages of various cost and performance parameters associated with Markov chains. Considerable discussion is devoted to branching phenomena, stochastic networks, and time-reversible chains. Included are examples of Markov chains that represent queueing, production systems, inventory control, reliability, and Monte Carlo simulations.

Before getting into the main text, a reader would benefit from a brief review of conditional probabilities in Section 1.22 of this chapter and related material on random variables and distributions in Sections 1–4 in the Appendix. The rest of the Appendix, which provides more background on probability, would be appropriate for later reading.

1.1 Introduction

This section introduces Markov chains and describes a few examples.

A discrete-time stochastic process $\{X_n : n \ge 0\}$ on a countable set $S$ is a collection of $S$-valued random variables defined on a probability space $(\Omega, \mathcal{F}, P)$. The $P$ is a probability measure on a family of events $\mathcal{F}$ (a $\sigma$-field) in an event-space $\Omega$.¹ The set $S$ is the state space of the process, and the value $X_n \in S$ is the state of the process at time $n$. The $n$ may represent a parameter other than time, such as a length or a job number. The finite-dimensional distributions of the process are

$$P\{X_0 = i_0, \dots, X_n = i_n\}, \quad i_0, \dots, i_n \in S, \; n \ge 0.$$

These probabilities uniquely determine the probabilities of all events of the process. Consequently, two stochastic processes (defined on different probability spaces or the same one) are equal in distribution if their finite-dimensional distributions are equal. Various types of stochastic processes are defined by specifying the dependency among the variables that determine the finite-dimensional distributions, or by specifying the manner in which the process evolves over time (the system dynamics). A Markov chain is defined as follows.

Definition 1. A stochastic process $X = \{X_n : n \ge 0\}$ on a countable set $S$ is a Markov chain if, for any $i, j \in S$ and $n \ge 0$,

$$P\{X_{n+1} = j \mid X_0, \dots, X_n\} = P\{X_{n+1} = j \mid X_n\}, \tag{1.1}$$

$$P\{X_{n+1} = j \mid X_n = i\} = p_{ij}. \tag{1.2}$$

The $p_{ij}$ is the probability that the Markov chain jumps from state $i$ to state $j$. These transition probabilities satisfy $\sum_{j \in S} p_{ij} = 1$, $i \in S$, and the matrix $P = (p_{ij})$ is the transition matrix of the chain.

¹ Further details on probability spaces are in the Appendix. We follow the convention of not displaying the space $(\Omega, \mathcal{F}, P)$ every time random variables or processes are introduced; it is mentioned only when needed for clarity.
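For readers who want to experiment, Definition 1 translates directly into a simulation: given the current state $i$, the next state is drawn from row $i$ of the transition matrix $P$. The following is a minimal Python sketch of this mechanism; it is not from the text, and the two-state matrix and the function names `step` and `simulate` are illustrative choices.

```python
import numpy as np

def step(i, P, rng):
    """Sample X_{n+1} given X_n = i: row i of the transition matrix
    is the distribution P[i, j] = P{X_{n+1} = j | X_n = i}."""
    return rng.choice(len(P), p=P[i])

def simulate(i0, P, n, rng):
    """Return a sample path X_0, X_1, ..., X_n with X_0 = i0."""
    path = [i0]
    for _ in range(n):
        path.append(step(path[-1], P, rng))
    return path

# An illustrative two-state transition matrix; each row sums to 1.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
rng = np.random.default_rng(0)
print(simulate(0, P, 10, rng))
```

Note that `simulate` is exactly the recursion $X_n = f(X_{n-1}, Y_n)$ from the chapter opening, with $Y_n$ the randomness consumed by the sampling step.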
Condition (1.1), called the Markov property, says that, at any time $n$, the next state $X_{n+1}$ is conditionally independent of the past $X_0, \dots, X_{n-1}$ given the present state $X_n$. In other words, the next state depends on the past and present only through the present state. The Markov property is an elementary condition that is satisfied by the state of many stochastic phenomena. Consequently, Markov chains, and related continuous-time Markov processes, are natural models or building blocks for applications.

Condition (1.2) simply says the transition probabilities do not depend on the time parameter $n$; the Markov chain is therefore "time-homogeneous". If the transition probabilities were functions of time, the process $X_n$ would be a non-time-homogeneous Markov chain. Such chains are like time-homogeneous chains, but the time dependency introduces added accounting details that we will not address here. See Exercises 12 and 13 for further insights.

Since the state space $S$ is countable, we will sometimes label the states by integers, such as $S = \{0, 1, 2, \dots\}$ (or $S = \{1, \dots, m\}$). Under this labeling, the transition matrix has the form

$$P = \begin{bmatrix} p_{00} & p_{01} & p_{02} & \cdots \\ p_{10} & p_{11} & p_{12} & \cdots \\ p_{20} & p_{21} & p_{22} & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix}.$$

We end this section with a few preliminary examples.

Example 2. Binomial Markov Chain. A Bernoulli process is a sequence of independent trials in which each trial results in a success or failure with respective probabilities $p$ and $q = 1 - p$. Let $X_n$ denote the number of successes in $n$ trials, for $n \ge 1$. By direct reasoning, it follows that $X_n$ has a binomial distribution with parameters $n$ and $p$:

$$P\{X_n = k\} = \binom{n}{k} p^k (1 - p)^{n-k}, \quad 0 \le k \le n.$$

Now, suppose at the $n$th trial that $X_n = i$. Then at the next trial, $X_{n+1}$ will equal $i + 1$ or $i$ with probabilities $p$ and $1 - p$, respectively, regardless of the values of $X_1, \dots, X_{n-1}$. Thus $X_n$ is a Markov chain with transition probabilities $p_{i,i+1} = p$, $p_{ii} = 1 - p$, and $p_{ij} = 0$ otherwise. This binomial Markov chain is a special case of the following random walk.

Example 3. Random Walk. Suppose $Y_1, Y_2, \dots$ are i.i.d. integer-valued random variables, and define $X_0 = 0$ and

$$X_n = \sum_{m=1}^{n} Y_m, \quad n \ge 1.$$

The process $X_n$ is a random walk on the set of integers $S$, where $Y_n$ is the step size at time $n$. A random walk represents a quantity that changes over time (e.g., a stock price, an inventory level, or a gambler's fortune) such that its increments (step sizes) are i.i.d. Since $X_{n+1} = X_n + Y_{n+1}$, and $Y_{n+1}$ is independent of $X_0, \dots, X_n$, it follows that, for any $i, j \in S$ and $n \ge 0$,

$$P\{X_{n+1} = j \mid X_0, \dots, X_{n-1}, X_n = i\} = P\{X_n + Y_{n+1} = j \mid X_n = i\} = P\{Y_1 = j - i\}.$$

Therefore, the random walk $X_n$ is a Markov chain on the integers $S$ with transition probabilities $p_{ij} = P\{Y_1 = j - i\}$.

When the step sizes $Y_n$ take values $1$ or $-1$ with $p = P\{Y_1 = 1\}$ and $q = P\{Y_1 = -1\}$, the chain $X_n$ is a simple random walk. Its transition probabilities, for each $i$, are

$$p_{i,i+1} = p, \quad p_{i,i-1} = q, \quad p_{ij} = 0 \ \text{for} \ j \ne i + 1 \ \text{or} \ i - 1.$$

This type of walk restricted to a finite state space is described next.
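Before turning to that example, note that the defining sum makes a random walk especially easy to simulate: generate the i.i.d. steps and take cumulative sums. Below is a minimal Python sketch for the simple random walk; it is not from the text, and the value $p = 0.6$ and the seed are arbitrary illustrative choices.

```python
import numpy as np

def simple_random_walk(n, p, rng):
    """X_0 = 0 and X_n = Y_1 + ... + Y_n, where the i.i.d. steps Y_m
    take the value +1 with probability p and -1 with probability 1 - p."""
    steps = rng.choice([1, -1], size=n, p=[p, 1 - p])
    return np.concatenate(([0], np.cumsum(steps)))

rng = np.random.default_rng(1)
print(simple_random_walk(20, p=0.6, rng=rng))
```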
Example 4. Gambler's Ruin. Consider a Markov chain on $S = \{0, 1, \dots, m\}$ with transition matrix

$$P = \begin{bmatrix}
1 & 0 & 0 & & & \\
q & 0 & p & 0 & & \\
0 & q & 0 & p & 0 & \\
& & \ddots & \ddots & \ddots & \\
& & 0 & q & 0 & p \\
& & & & 0 & 1
\end{bmatrix}.$$

One can interpret the state of the Markov chain as the fortune of a gambler who repeatedly plays a game in which the gambler wins or loses \$1 with respective probabilities $p$ and $q = 1 - p$. If the fortune reaches state $0$, the gambler is ruined, since $p_{00} = 1$ (state $0$ is absorbing: the chain stays there forever). On the other hand, if the fortune reaches $m$, the gambler retires with the fortune $m$, since $p_{mm} = 1$ ($m$ is another absorbing state).

A versatile generalization to state-dependent gambles (and other applications as well) has the transition matrix

$$P = \begin{bmatrix}
r_0 & p_0 & 0 & & & \\
q_1 & r_1 & p_1 & 0 & & \\
0 & q_2 & r_2 & p_2 & 0 & \\
& & \ddots & \ddots & \ddots & \\
& & 0 & q_{m-1} & r_{m-1} & p_{m-1} \\
& & & & q_m & r_m
\end{bmatrix}.$$

In this case, the outcome of the game depends on the gambler's fortune. When the fortune is $i$, the gambler either wins or loses \$1 with respective probabilities $p_i$ or $q_i$, or breaks even (the fortune does not change) with probability $r_i$. Another interpretation is that the state of the chain is the location of a random walk with state-dependent steps of size $-1$, $0$, or $1$.

Markov chains are common models for a variety of systems and phenomena, such as the following, in which the Markov property is "reasonable".

Example 5. Flexible Manufacturing System. Consider a machine that is capable of producing three types of parts. The state of the machine at time period $n$ is denoted by a random variable $X_n$ that takes values in $S = \{0, 1, 2, 3\}$, where $0$ means the machine is idle and $i = 1, 2$, or $3$ means the machine produces a type $i$ part in the time period. Suppose the machine's production schedule is Markovian in the sense that the next type of part it produces, or a possible idle period, depends only on its current state, and the probabilities of these changes do not depend on time. Then $X_n$ is a Markov chain. For instance, its transition matrix might be

$$P = \begin{bmatrix}
1/5 & 1/5 & 1/5 & 2/5 \\
1/10 & 1/2 & 1/10 & 3/10 \\
1/5 & 0 & 1/5 & 3/5 \\
1/5 & 0 & 2/5 & 2/5
\end{bmatrix}.$$

Such probabilities can be estimated, as in Exercise 65, provided one can observe the evolution of the system.
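As a rough illustration of such estimation (Exercise 65 itself is not reproduced here), a standard estimator takes $\hat{p}_{ij}$ to be the fraction of observed visits to state $i$ that are immediately followed by a jump to state $j$. The Python sketch below simulates a long path of the flexible manufacturing chain and compares this estimate with the true matrix; the path length and seed are arbitrary choices.

```python
import numpy as np

# Transition matrix of the flexible manufacturing chain (Example 5).
P = np.array([[1/5,  1/5, 1/5,  2/5],
              [1/10, 1/2, 1/10, 3/10],
              [1/5,  0,   1/5,  3/5],
              [1/5,  0,   2/5,  2/5]])

def estimate_transition_matrix(path, num_states):
    """Estimate p_ij by the fraction of observed jumps from i to j."""
    counts = np.zeros((num_states, num_states))
    for i, j in zip(path[:-1], path[1:]):
        counts[i, j] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return counts / np.where(row_sums == 0, 1, row_sums)  # guard empty rows

# Simulate a long observed path, then estimate P from it.
rng = np.random.default_rng(2)
path = [0]
for _ in range(100_000):
    path.append(rng.choice(4, p=P[path[-1]]))
print(np.round(estimate_transition_matrix(path, 4), 3))
```

With a path of this length, the estimated entries should agree with the true ones to about two decimal places.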