Markov Chain Monte Carlo

Todd Ebert

Outline
1. Introduction
2. Markov-Chains
3. Hastings-Metropolis Algorithm
4. Simulated Annealing

Sampling an Unknown Distribution

The Problem
We need to sample a random variable X, but its distribution π(x) is unknown.
  • Only the relative values π(x)/π(y) are known, for each x, y ∈ dom(X).
  • |dom(X)| is too large to enumerate.

The Solution
Develop a method for randomly traversing the elements of dom(X).
  • The elements of dom(X) are called states.
  • Moving from one state to the next is called a state transition, and is governed by a probability distribution p(y|x) that gives the probability of transitioning to y given that the current state is x.
  • For each x ∈ dom(X), the fraction of visits that are to state x converges to π(x).

Markov-Chain Models

Markov-Chain State Transition Model
Given a finite set of states {1, ..., n}, a Markov-chain state-transition model is an n × n matrix P, where entry P_ij is the probability of transitioning from state i to state j.
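The definition above can be turned into a tiny sampler (a sketch; the matrix and the function name here are my own illustration, not from the slides): one step of a chain draws the next state from the row of P indexed by the current state.

```python
import random

def step(P, i, rng=random):
    """Sample the next state from row i of transition matrix P
    (a list of rows, each a probability distribution)."""
    u = rng.random()
    cumulative = 0.0
    for j, p in enumerate(P[i]):
        cumulative += p
        if u < cumulative:
            return j
    return len(P[i]) - 1  # guard against floating-point round-off

# An arbitrary 2-state chain for illustration.
P = [[0.9, 0.1],
     [0.3, 0.7]]
state = 0
path = []
for _ in range(10):
    state = step(P, state)
    path.append(state)
```

Repeating `step` many times produces a trajectory of states; the examples below use exactly this mechanism.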
Markov-chain Example
States: {1 = no rain, 2 = rain}.
State transition: moving from one day to the next.
State-transition matrix:
        | 0.8  0.2 |
    P = | 0.5  0.5 |

Markov Chains Can Use Both Past and Present

Markov-chain Example
States: {(no rain, no rain), (no rain, rain), (rain, no rain), (rain, rain)}.
State interpretation: for example, (no rain, rain) means "no rain yesterday, but rain today".
State-transition matrix P:

              (nr,nr)  (nr,r)  (r,nr)  (r,r)
    (nr,nr)    0.85     0.15    0       0
    (nr,r)     0        0       0.6     0.4
    (r,nr)     0.65     0.35    0       0
    (r,r)      0        0       0.7     0.3
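As a sketch (the simulation code is mine, not from the slides), the (yesterday, today) model can be simulated by treating each pair as one state of an ordinary chain; the fraction of visits to states whose "today" component is rain then estimates the long-run frequency of rainy days.

```python
import random

# States encode (yesterday, today): 0 = (nr, nr), 1 = (nr, r),
# 2 = (r, nr), 3 = (r, r), with the matrix from the slide above.
P = [[0.85, 0.15, 0.00, 0.00],
     [0.00, 0.00, 0.60, 0.40],
     [0.65, 0.35, 0.00, 0.00],
     [0.00, 0.00, 0.70, 0.30]]

def draw(row, rng):
    """Draw an index from a probability row."""
    u, c = rng.random(), 0.0
    for j, p in enumerate(row):
        c += p
        if u < c:
            return j
    # floating-point guard: fall back to the last state with positive probability
    return max(j for j, p in enumerate(row) if p > 0)

def visit_counts(P, start, steps, rng):
    """Run the chain for `steps` transitions, counting visits to each state."""
    counts = [0] * len(P)
    state = start
    for _ in range(steps):
        state = draw(P[state], rng)
        counts[state] += 1
    return counts

counts = visit_counts(P, 0, 100_000, random.Random(0))
rain_today = (counts[1] + counts[3]) / sum(counts)  # states (nr, r) and (r, r)
```

This is the same visit-fraction idea as in the introduction: the empirical fraction of visits to each state settles toward the chain's long-run distribution.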
Predicting Further into the Future

t-Step Transition Matrix P^t
The t-step transition matrix P^t is defined so that (P^t)_ij is the probability of being in state j, t steps after being in state i.

Proposition 1
P^t = P · P^(t−1). In other words, the t-step transition matrix is obtained by multiplying the one-step matrix by the (t − 1)-step matrix.

Proof of Proposition 1: Basis Step (t = 2)
Let S_i, i = 0, 1, 2, ..., denote the state at time i. Then for t = 2,

    (P^2)_ij = p(S_2 = j | S_0 = i)
             = Σ_{k=1}^{n} p(S_2 = j | S_1 = k, S_0 = i) p(S_1 = k | S_0 = i)
             = Σ_{k=1}^{n} p(S_2 = j | S_1 = k) p(S_1 = k | S_0 = i)
             = Σ_{k=1}^{n} P_ik P_kj,

which is the inner product of row i of P with column j of P. Thus P^2 = P · P.

Proof of Proposition 1: Inductive Step
Now assume the result holds for some t ≥ 2; we show it also holds for t + 1. Conditioning on the state at time 1, and using the fact that p(S_(t+1) = j | S_1 = k) = (P^t)_kj (the chain makes t steps between times 1 and t + 1),

    (P^(t+1))_ij = p(S_(t+1) = j | S_0 = i)
                 = Σ_{k=1}^{n} p(S_(t+1) = j | S_1 = k, S_0 = i) p(S_1 = k | S_0 = i)
                 = Σ_{k=1}^{n} p(S_(t+1) = j | S_1 = k) p(S_1 = k | S_0 = i)
                 = Σ_{k=1}^{n} P_ik (P^t)_kj,

which is the inner product of row i of P with column j of P^t. Thus P^(t+1) = P · P^t, and the proposition follows by induction on t.
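Proposition 1 can be sanity-checked numerically (a sketch; the helper names are mine): building P^t by repeatedly multiplying by P gives the same matrix as any other multiplication order, and its entries match direct computation.

```python
def matmul(A, B):
    """Product of two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matrix_power(P, t):
    """P^t computed as P · P^(t-1), mirroring Proposition 1."""
    result = P
    for _ in range(t - 1):
        result = matmul(P, result)
    return result

# Weather chain from the earlier example.
P = [[0.8, 0.2],
     [0.5, 0.5]]
P3 = matrix_power(P, 3)           # P · (P · P)
P3_alt = matmul(matmul(P, P), P)  # (P · P) · P -- equal by associativity
```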
t-Step Transition Weather Example

          | 0.8  0.2 |   | 0.8  0.2 |   | 0.74  0.26 |
    P^2 = | 0.5  0.5 | · | 0.5  0.5 | = | 0.65  0.35 |

          | 0.74  0.26 |   | 0.74  0.26 |   | 0.7166  0.2834 |
    P^4 = | 0.65  0.35 | · | 0.65  0.35 | = | 0.7085  0.2915 |

Interpretation of P^4
  • If it is not raining today, then there is a 71.66% chance of no rain in 4 days.
  • If it is raining today, then there is a 29.15% chance of rain in 4 days.
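The interpretation of P^4 can also be checked by simulation (a sketch; the function name is my own): run many independent 4-day trajectories starting from "no rain" and count how often day 4 is again dry; the fraction should land near the (1,1) entry of P^4, about 0.7166.

```python
import random

P = [[0.8, 0.2],   # state 0 = no rain, 1 = rain
     [0.5, 0.5]]

def state_after(P, start, t, rng):
    """Advance the 2-state chain t steps from `start`; return the final state."""
    state = start
    for _ in range(t):
        state = 0 if rng.random() < P[state][0] else 1
    return state

rng = random.Random(0)
trials = 200_000
dry = sum(state_after(P, 0, 4, rng) == 0 for _ in range(trials))
estimate = dry / trials  # empirical estimate of (P^4)_11 ≈ 0.7166
```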
