Chapter 2
Basic Markov Chain Theory

To repeat what we said in Chapter 1, a Markov chain is a discrete-time stochastic process $X_1, X_2, \ldots$ taking values in an arbitrary state space that has the Markov property and stationary transition probabilities:

- the conditional distribution of $X_n$ given $X_1, \ldots, X_{n-1}$ is the same as the conditional distribution of $X_n$ given $X_{n-1}$ only, and
- the conditional distribution of $X_n$ given $X_{n-1}$ does not depend on $n$.

The conditional distribution of $X_n$ given $X_{n-1}$ specifies the transition probabilities of the chain. In order to completely specify the probability law of the chain, we must also specify the initial distribution, the distribution of $X_1$.

2.1 Transition Probabilities

2.1.1 Discrete State Space

For a discrete state space $S$, the transition probabilities are specified by defining a matrix
$$
P(x, y) = \Pr(X_n = y \mid X_{n-1} = x), \qquad x, y \in S
\tag{2.1}
$$
that gives the probability of moving from the point $x$ at time $n - 1$ to the point $y$ at time $n$. Because of the assumption of stationary transition probabilities, the transition probability matrix $P(x, y)$ does not depend on the time $n$.

Some readers may object that we have not defined a "matrix." A matrix (I can hear such readers saying) is a rectangular array $P$ of numbers $p_{ij}$, $i = 1, \ldots, m$, $j = 1, \ldots, n$, called the entries of $P$. Where is $P$? Well, enumerate the points in the state space $S = \{x_1, \ldots, x_d\}$; then
$$
p_{ij} = \Pr\{X_n = x_j \mid X_{n-1} = x_i\}, \qquad i = 1, \ldots, d, \quad j = 1, \ldots, d.
$$
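To make the matrix representation concrete, here is a minimal Python sketch (not from the text; the three-state chain and its probabilities are invented for illustration). It stores $P(x, y)$ as a NumPy array whose rows are conditional distributions, checks that each row sums to one, and simulates a short sample path from a given initial distribution.

```python
import numpy as np

# Hypothetical three-state chain on S = {0, 1, 2}; the probabilities
# below are invented for illustration, not taken from the text.
# Row x holds P(x, y) = Pr(X_n = y | X_{n-1} = x).
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.2, 0.2, 0.6],
])

# Each row of a transition matrix is a probability distribution,
# so every row must sum to one.
assert np.allclose(P.sum(axis=1), 1.0)

def simulate(P, initial_dist, n_steps, rng):
    """Simulate a sample path X_1, ..., X_n of the chain.

    initial_dist is the distribution of X_1; together with P it
    completely specifies the probability law of the chain.
    """
    d = P.shape[0]
    x = rng.choice(d, p=initial_dist)   # draw X_1 from the initial distribution
    path = [x]
    for _ in range(n_steps - 1):
        x = rng.choice(d, p=P[x])       # draw X_n given X_{n-1} = x
        path.append(x)
    return path

rng = np.random.default_rng(seed=0)
print(simulate(P, initial_dist=[1.0, 0.0, 0.0], n_steps=10, rng=rng))
```

Note that stationarity of the transition probabilities shows up here as the same matrix `P` being used at every step of the loop; only the current state changes.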