Markov Chains

Stochastic Matrices

The following $3 \times 3$ matrix defines a discrete-time Markov process with three states:

$$P = \begin{bmatrix} P_{11} & P_{12} & P_{13} \\ P_{21} & P_{22} & P_{23} \\ P_{31} & P_{32} & P_{33} \end{bmatrix}$$

where $P_{ij}$ is the probability of going from state $j$ to state $i$ in one step. A stochastic matrix satisfies the following conditions:

$$P_{ij} \geq 0 \quad \forall i, j$$

and

$$\sum_{i=1}^{M} P_{ij} = 1 \quad \forall j.$$

Example

The following $3 \times 3$ matrix defines a discrete-time Markov process with three states:

$$P = \begin{bmatrix} 0.90 & 0.01 & 0.09 \\ 0.01 & 0.90 & 0.01 \\ 0.09 & 0.09 & 0.90 \end{bmatrix}$$

where $P_{23} = 0.01$ is the probability of going from state 3 to state 2 in one step. You can verify that $P_{ij} \geq 0 \; \forall i, j$ and $\sum_{i=1}^{3} P_{ij} = 1 \; \forall j$.

Example (contd.)

Figure 1: Three-state Markov process.

Single Step Transition Probabilities

$$x^{(1)} = Px^{(0)}$$
$$x^{(2)} = Px^{(1)}$$
$$\vdots$$
$$x^{(t+1)} = Px^{(t)}$$

$$\underbrace{\begin{bmatrix} 0.641 \\ 0.188 \\ 0.171 \end{bmatrix}}_{x^{(t+1)}} = \underbrace{\begin{bmatrix} 0.90 & 0.01 & 0.09 \\ 0.01 & 0.90 & 0.01 \\ 0.09 & 0.09 & 0.90 \end{bmatrix}}_{P} \underbrace{\begin{bmatrix} 0.7 \\ 0.2 \\ 0.1 \end{bmatrix}}_{x^{(t)}}$$

$$x_i^{(t+1)} = \sum_{j=1}^{M} P_{ij} \, x_j^{(t)}$$

n-step Transition Probabilities

Observe that $x^{(3)}$ can be written as follows:

$$x^{(3)} = Px^{(2)} = P\left(Px^{(1)}\right) = P\left(P\left(Px^{(0)}\right)\right) = P^3 x^{(0)}.$$

n-step Transition Probabilities (contd.)

Similar logic leads us to an expression for $x^{(n)}$:

$$x^{(n)} = \underbrace{P\left(P \cdots \left(P\vphantom{P}\right.\right.}_{n}\left.\left.x^{(0)}\right) \cdots \right) = P^n x^{(0)}.$$

An $n$-step transition probability matrix can be defined in terms of a single-step matrix and an $(n-1)$-step matrix:

$$\left(P^n\right)_{ij} = \sum_{k=1}^{M} P_{ik} \left(P^{n-1}\right)_{kj}.$$

Analysis of Two State Markov Process

$$P = \begin{bmatrix} 1-a & b \\ a & 1-b \end{bmatrix}$$

Figure 2: Two-state Markov process.

Analysis of Two State Markov Process (contd.)

It is readily shown by induction that the $n$-step transition probabilities for the two state Markov process are given by the following formula:

$$P^n = \frac{1}{a+b}\begin{bmatrix} b & b \\ a & a \end{bmatrix} + \frac{(1-a-b)^n}{a+b}\begin{bmatrix} a & -b \\ -a & b \end{bmatrix}.$$

For conciseness, we introduce the following abbreviations:

$$P_1 = \begin{bmatrix} b & b \\ a & a \end{bmatrix} \quad \text{and} \quad P_2 = \begin{bmatrix} a & -b \\ -a & b \end{bmatrix}.$$

Analysis of Two State Markov Process (contd.)

The following identities will also help:

• Identity 1

$$P_1 P = \begin{bmatrix} b & b \\ a & a \end{bmatrix}\begin{bmatrix} 1-a & b \\ a & 1-b \end{bmatrix} = \begin{bmatrix} b & b \\ a & a \end{bmatrix} = P_1$$
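The single-step update above is easy to check numerically. The following is a minimal sketch (assuming NumPy) that uses the example matrix and the starting distribution $x^{(t)} = [0.7, 0.2, 0.1]^T$ shown above, verifying both stochasticity conditions and reproducing $x^{(t+1)}$:

```python
import numpy as np

# Column-stochastic transition matrix from the example:
# P[i, j] is the probability of moving from state j to state i.
P = np.array([[0.90, 0.01, 0.09],
              [0.01, 0.90, 0.01],
              [0.09, 0.09, 0.90]])

# Check the two stochasticity conditions.
assert (P >= 0).all()                   # P_ij >= 0 for all i, j
assert np.allclose(P.sum(axis=0), 1.0)  # each column sums to one

# One step of the process: x(t+1) = P x(t).
x0 = np.array([0.7, 0.2, 0.1])
x1 = P @ x0
print(x1)  # [0.641 0.188 0.171]
```

Note that because the columns of $P$ sum to one, $x^{(t+1)}$ is again a probability distribution.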
• Identity 2

$$P_2 P = \begin{bmatrix} a & -b \\ -a & b \end{bmatrix}\begin{bmatrix} 1-a & b \\ a & 1-b \end{bmatrix} = \begin{bmatrix} a - a^2 - ab & -b + ab + b^2 \\ -a + a^2 + ab & b - ab - b^2 \end{bmatrix} = (1-a-b)\,P_2$$

Analysis of Two State Markov Process (contd.)

To do a proof by induction, we need to prove the basis step and the induction step. First the basis step:

$$P^1 = \frac{1}{a+b}\begin{bmatrix} b & b \\ a & a \end{bmatrix} + \frac{(1-a-b)}{a+b}\begin{bmatrix} a & -b \\ -a & b \end{bmatrix} = \frac{1}{a+b}\begin{bmatrix} b + a - a^2 - ab & b - b + ab + b^2 \\ a - a + a^2 + ab & a + b - ab - b^2 \end{bmatrix} = \begin{bmatrix} 1-a & b \\ a & 1-b \end{bmatrix} = P.$$

Analysis of Two State Markov Process (contd.)

Now we prove the induction step:

$$P^n P = \left(\frac{1}{a+b} P_1 + \frac{(1-a-b)^n}{a+b} P_2\right) P = \frac{1}{a+b} P_1 P + \frac{(1-a-b)^n}{a+b} P_2 P = \frac{1}{a+b} P_1 + \frac{(1-a-b)^{n+1}}{a+b} P_2 = P^{n+1}.$$

Limiting Distribution

$$P = \begin{bmatrix} 1-a & b \\ a & 1-b \end{bmatrix}$$

$$P^n = \frac{1}{a+b}\begin{bmatrix} b & b \\ a & a \end{bmatrix} + \frac{(1-a-b)^n}{a+b}\begin{bmatrix} a & -b \\ -a & b \end{bmatrix}$$

Note that $|1-a-b| < 1$ when $0 < a < 1$ and $0 < b < 1$. Thus, $|1-a-b|^n \to 0$ as $n \to \infty$.

Limiting Distribution (contd.)

It follows that:

$$\lim_{n \to \infty} P^n = \begin{bmatrix} \frac{b}{a+b} & \frac{b}{a+b} \\ \frac{a}{a+b} & \frac{a}{a+b} \end{bmatrix}.$$

It is easy to show that $\left[b/(a+b),\, a/(a+b)\right]^T$ is an eigenvector of $P$ with eigenvalue one:

$$\begin{bmatrix} 1-a & b \\ a & 1-b \end{bmatrix}\begin{bmatrix} \frac{b}{a+b} \\ \frac{a}{a+b} \end{bmatrix} = \begin{bmatrix} \frac{b}{a+b} - \frac{ab}{a+b} + \frac{ab}{a+b} \\ \frac{ab}{a+b} + \frac{a}{a+b} - \frac{ab}{a+b} \end{bmatrix} = \begin{bmatrix} \frac{b}{a+b} \\ \frac{a}{a+b} \end{bmatrix}.$$

Spectral Theorem (reprise)

$$P^n = X \Lambda^n Y^T = \lambda_1^n x_1 y_1^T + \lambda_2^n x_2 y_2^T = 1^n \begin{bmatrix} \frac{b}{a+b} \\ \frac{a}{a+b} \end{bmatrix}\begin{bmatrix} 1 & 1 \end{bmatrix} + (1-a-b)^n \begin{bmatrix} 1 \\ -1 \end{bmatrix}\begin{bmatrix} \frac{a}{a+b} & \frac{-b}{a+b} \end{bmatrix} = \frac{1}{a+b}\begin{bmatrix} b & b \\ a & a \end{bmatrix} + \frac{(1-a-b)^n}{a+b}\begin{bmatrix} a & -b \\ -a & b \end{bmatrix}.$$

Existence of Limiting Distribution

In order to understand when a Markov process will have a limiting distribution and when it will not, we will

• Prove that a stochastic matrix has no eigenvalue with magnitude greater than one.
• Prove that a stochastic matrix always has at least one eigenvalue equal to one.
• Identify those conditions in which this eigenvalue will be the unique eigenvalue of unit magnitude.
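The closed form for $P^n$ proved by induction above, and its limit, can be confirmed numerically. This is a minimal sketch with arbitrarily chosen rates $a = 0.2$ and $b = 0.5$; any values in $(0, 1)$ behave the same way:

```python
import numpy as np

# Two-state chain with hypothetical rates a, b in (0, 1).
a, b = 0.2, 0.5
P  = np.array([[1 - a, b], [a, 1 - b]])
P1 = np.array([[b, b], [a, a]])
P2 = np.array([[a, -b], [-a, b]])

# Closed form proved by induction: P^n = (P1 + (1-a-b)^n P2) / (a+b).
def P_n(n):
    return (P1 + (1 - a - b) ** n * P2) / (a + b)

# It should match the brute-force matrix power for every n.
for n in range(1, 10):
    assert np.allclose(P_n(n), np.linalg.matrix_power(P, n))

# As n grows, (1-a-b)^n -> 0 and P^n approaches the rank-one limit
# whose columns are both [b, a] / (a+b).
limit = np.array([[b, b], [a, a]]) / (a + b)
assert np.allclose(P_n(200), limit)
```

Since $|1-a-b| < 1$ here, the second term decays geometrically, which is exactly why the limit exists.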
Figure 3: (a) $\begin{bmatrix} 1-a & b \\ a & 1-b \end{bmatrix}$ maps points in lines of constant 1-norm to points in the same line. These lines have slope equal to $-1$. (b) $\begin{bmatrix} b & b \\ a & a \end{bmatrix}$ maps points to the line of slope $a/b$ through the origin. (c) $\begin{bmatrix} \frac{b}{a+b} & \frac{b}{a+b} \\ \frac{a}{a+b} & \frac{a}{a+b} \end{bmatrix}$ maps distributions (thick segment) to the point $\left[\frac{b}{a+b}, \frac{a}{a+b}\right]^T$.

Spectral Radius

• The spectral radius, $\rho(A)$, of a matrix $A$ is defined as the magnitude of its largest eigenvalue.
• The 1-norm of a vector $x$ is defined as follows: $\|x\|_1 = \sum_i |x_i|$.
• The 1-norm of a matrix $A$ is defined as follows: $\|A\|_1 = \max_{\|x\|_1 = 1} \|Ax\|_1$.
• The Gelfand spectral radius theorem states that $\rho(A) = \lim_{n \to \infty} \|A^n\|_1^{1/n}$.

Spectral Radius (contd.)

Lemma 3.1 Let $P$ be stochastic. Then $\|P\|_1 = 1$.

Proof: Let $x$ have non-negative components and let $y = Px$:

$$y_i = \sum_j P_{ij} x_j$$
$$\sum_i y_i = \sum_i \sum_j P_{ij} x_j = \sum_j \sum_i P_{ij} x_j = \sum_j x_j \sum_i P_{ij} = \sum_j x_j.$$

It follows that $\|Px\|_1 = \|x\|_1$. (For general $x$, the same computation applied to $|x_j|$ gives $\|Px\|_1 \leq \|x\|_1$, so the maximum is attained at non-negative $x$.)

Spectral Radius (contd.)

Consequently,

$$\|P\|_1 = \max_{\|x\|_1 = 1} \|Px\|_1 = \max_{\|x\|_1 = 1} \|x\|_1 = 1.$$

Spectral Radius (contd.)

Lemma 3.2 The product of two stochastic matrices is stochastic.

Proof: Let $P$ and $Q$ be stochastic. Then the entries of $PQ$ are clearly non-negative, and

$$(PQ)_{ij} = \sum_k P_{ik} Q_{kj}$$
$$\sum_i (PQ)_{ij} = \sum_i \sum_k P_{ik} Q_{kj} = \sum_k \sum_i P_{ik} Q_{kj} = \sum_k Q_{kj} \sum_i P_{ik} = \sum_k Q_{kj} = 1.$$

Spectral Radius (contd.)

Theorem 3 The spectral radius, $\rho(P)$, of a stochastic matrix, $P$, is one.

Proof: It is straightforward to show by induction on $n$ and Lemma 3.2 that $P^n$ is stochastic for all integers $n > 0$. It follows, by Lemma 3.1, that $\|P^n\|_1 = 1$ for all integers $n > 0$. Consequently,

$$\rho(P) = \lim_{n \to \infty} \|P^n\|_1^{1/n} = 1$$

by the Gelfand spectral radius theorem.

Existence of Limiting Distribution (contd.)

We just showed that a stochastic matrix cannot have an eigenvalue with magnitude greater than one. We will now show that every stochastic matrix has at least one eigenvalue equal to one.

Existence of λ = 1

Let $P$ be a stochastic matrix. Since $\sum_{i=1}^{M} P_{ij} = 1$ and $\sum_{i=1}^{M} I_{ij} = 1$, it follows that:

$$0 = \sum_{i=1}^{M} P_{ij} - \sum_{i=1}^{M} I_{ij} = \sum_{i=1}^{M} \left(P_{ij} - I_{ij}\right).$$

Consequently, the rows of $P - I$ are not linearly independent. Consequently, $P - I$ is singular: $\det(P - I) = 0$.
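The spectral facts from this section are easy to confirm numerically for the three-state example; a sketch, assuming NumPy:

```python
import numpy as np

# The column-stochastic example chain from earlier in the notes.
P = np.array([[0.90, 0.01, 0.09],
              [0.01, 0.90, 0.01],
              [0.09, 0.09, 0.90]])

# Lemma 3.1: the induced 1-norm (maximum absolute column sum)
# of a stochastic matrix is exactly one.
assert np.isclose(np.linalg.norm(P, 1), 1.0)

# Lemma 3.2 / Theorem 3: powers of P stay stochastic, so ||P^n||_1 = 1
# for every n, and Gelfand's theorem then gives rho(P) = 1.
for n in range(1, 6):
    Pn = np.linalg.matrix_power(P, n)
    assert (Pn >= 0).all() and np.allclose(Pn.sum(axis=0), 1.0)
assert np.isclose(max(abs(np.linalg.eigvals(P))), 1.0)

# Existence of lambda = 1: each column of P - I sums to zero,
# so P - I is singular.
assert np.isclose(np.linalg.det(P - np.eye(3)), 0.0)
```

`np.linalg.norm(A, 1)` computes the induced 1-norm directly as the maximum absolute column sum, which for a stochastic matrix is one by construction.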
Existence of λ = 1 (contd.)

Recall that $x$ is an eigenvector of $P$ with eigenvalue $\lambda$ iff:

$$\lambda x = Px.$$

The eigenvalues of $P$ are the roots of the characteristic polynomial:

$$\det(P - \lambda I) = 0.$$

Since $\det(P - I) = 0$, it follows that $\lambda = 1$ is an eigenvalue of $P$.

Existence of Limiting Distribution (contd.)

We just showed that

• A stochastic matrix cannot have an eigenvalue with magnitude greater than one.
• Every stochastic matrix has at least one eigenvalue $\lambda_1$ equal to one.

We will now identify those conditions in which this eigenvalue will be the unique eigenvalue of unit magnitude.

Uniqueness of |λ| = 1

A matrix, $P$, is positive if and only if for all $i$ and $j$ it is true that $P_{ij} > 0$. In 1907, Perron proved that every positive matrix has a positive eigenvalue, $\lambda_1$, with larger magnitude than the remaining eigenvalues. If $P$ is positive and of size $M \times M$, then:

$$\lambda_1 > |\lambda_i| \quad \text{for } 1 < i \leq M.$$

Irreducibility

Two states, $i$ and $j$, in a Markov process communicate iff 1) $i$ can be reached from $j$ with non-zero probability:

$$\sum_{n=1}^{N_1} \left(P^n\right)_{ij} > 0$$

and 2) $j$ can be reached from $i$ with non-zero probability:

$$\sum_{n=1}^{N_2} \left(P^n\right)_{ji} > 0$$

for some sufficiently large $N_1$ and $N_2$. If every state communicates with every other state, then the Markov process is irreducible.

Aperiodicity

A state $i$ in a Markov process is aperiodic if for all sufficiently large $N$ there is a non-zero probability of returning to $i$ in $N$ steps:

$$\left(P^N\right)_{ii} > 0.$$

If a state is aperiodic, then every state it communicates with is also aperiodic. Consequently, if a Markov process is irreducible, then either all of its states are aperiodic or none of them are.

Positive Stochastic Matrices

Theorem 4 If $P$ is irreducible and aperiodic then $P^N$ is positive for some sufficiently large $N$:

$$\forall i, j \quad \left(P^N\right)_{ij} > 0.$$

Proof: Let $P$ be irreducible and aperiodic, and let

$$N_{ij} = \min \left\{ N > 0 : \left(P^N\right)_{ij} > 0 \right\}.$$

We observe that $N_{ij}$ is guaranteed to exist for all $i$ and $j$ by irreducibility.
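The quantities $N_{ij}$ used in the proof can be computed by brute force. A sketch for a small hypothetical chain, chosen here so that $P$ itself has zero entries but is irreducible and aperiodic:

```python
import numpy as np

# Hypothetical 3-state chain (P[i, j] is the probability of j -> i).
P = np.array([[0.0, 0.0, 0.5],
              [1.0, 0.0, 0.5],
              [0.0, 1.0, 0.0]])

# N_ij = smallest N > 0 with (P^N)_ij > 0; it exists by irreducibility.
def N_ij(P, i, j, N_max=50):
    Pn = P.copy()
    for n in range(1, N_max + 1):
        if Pn[i, j] > 0:
            return n
        Pn = Pn @ P
    raise ValueError("state j cannot reach state i: P is not irreducible")

M = P.shape[0]
N = np.array([[N_ij(P, i, j) for j in range(M)] for i in range(M)])
print(N)  # every entry finite, confirming irreducibility
```

Since every $N_{ij}$ is finite, every pair of states communicates, so this chain is irreducible; the zero diagonal entries show that irreducibility alone does not make $P$ positive.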
Positive Stochastic Matrices (contd.)

Now let

$$N = M + \max_{i, j}\left(N_{ij}\right).$$

Then

$$N - N_{ij} = M + \max_{i, j}\left(N_{ij}\right) - N_{ij} \geq M,$$

where $M$ is chosen so that $\left(P^n\right)_{ii} > 0$ for all $i$ and all $n \geq M$; such an $M$ is guaranteed to exist by aperiodicity. Consequently, for every $i$ and $j$,

$$\left(P^N\right)_{ij} \geq \left(P^{N - N_{ij}}\right)_{ii} \left(P^{N_{ij}}\right)_{ij} > 0,$$

so $P^N$ is positive.
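A small hypothetical chain (with zeros in $P$, but irreducible and aperiodic) illustrates Theorem 4 and the resulting limiting distribution; a sketch assuming NumPy:

```python
import numpy as np

# Hypothetical irreducible, aperiodic chain with zero entries
# (P[i, j] is the probability of moving from state j to state i).
P = np.array([[0.0, 0.0, 0.5],
              [1.0, 0.0, 0.5],
              [0.0, 1.0, 0.0]])

# Theorem 4: some power P^N is strictly positive. Search for the first.
Pn, N = P.copy(), 1
while not (Pn > 0).all():
    Pn = Pn @ P
    N += 1
print(N)  # first exponent with all entries positive

# Since P^N is positive, Perron's result applies to P^N, and the chain
# has a limiting distribution: the eigenvector of P with eigenvalue 1,
# normalized to sum to one.
w, V = np.linalg.eig(P)
pi = np.real(V[:, np.argmin(abs(w - 1.0))])
pi = pi / pi.sum()
assert np.allclose(P @ pi, pi)

# Every column of P^n converges to pi.
assert np.allclose(np.linalg.matrix_power(P, 200), np.outer(pi, np.ones(3)))
```

The remaining eigenvalues of this $P$ have magnitude $\sqrt{0.5} < 1$, so the convergence of $P^n$ is geometric, just as in the two-state analysis.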
