Probabilistic Modelling


Georgy Gimel'farb
COMPSCI 369 Computational Science

Outline

1 Random walks
2 Markov Chains
3 Bayesian Networks
4 Markov Random Fields

Learning outcomes on probabilistic modelling: be familiar with basic probabilistic modelling techniques and tools

• Be familiar with basic probability theory notions and Markov chains
• Understand the maximum likelihood (ML) approach and identify problems ML can solve
• Recognise and construct Markov models and hidden Markov models (HMMs)
• Recognise problems amenable to Monte Carlo algorithms and be able to identify which computational tools can best be used to solve them

Recommended reading:

• G. Strang, Computational Science and Engineering. Wellesley-Cambridge Press, 2007: Section 2.8
• C. M. Bishop, Pattern Recognition and Machine Learning. Springer, 2006: Chapters 1, 2, 8, 11
• L. Wasserman, All of Statistics: A Concise Course in Statistical Inference. Springer, 2004: Chapter 17

What Is a Random Walk?

• A 1D, 2D, 3D, or generally d-D trajectory consisting of successive random steps
• A fundamental model for a random process evolving in time
• Applications: computer science, physics, ecology, economics, ...
• Random walk hypothesis: a financial theory stating that stock market prices evolve as a random walk and thus cannot be predicted from their past movement

Random 1D Walk

[Figure: 1D grid of step positions ..., −3∆, −2∆, −∆, 0, ∆, 2∆, 3∆, ...]
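Such a walk is straightforward to simulate. Below is a minimal Python sketch (the function `walk_1d` and its parameters are illustrative names, not from the slides): each step moves +delta with probability p and −delta otherwise.

```python
import random

def walk_1d(n_steps, p=0.5, delta=1, seed=None):
    """Random 1D walk: each step is +delta with probability p, else -delta."""
    rng = random.Random(seed)
    position = 0
    path = [position]
    for _ in range(n_steps):
        position += delta if rng.random() <= p else -delta
        path.append(position)
    return path

# Averaging the final position over many walks approximates the mean
# distance n*delta*(2p - 1), e.g. 100 * (2*0.75 - 1) = 50 for p = 0.75:
mean_final = sum(walk_1d(100, p=0.75, seed=i)[-1] for i in range(10_000)) / 10_000
print(mean_final)
```

The drunkard's walk is the symmetric case `p=0.5`, for which the mean final position is 0.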
• Step probabilities: P(−∆) = 1 − p; P(+∆) = p
• The drunkard's walk: p = 0.5
• Distance L_n from the origin after n independent steps:
  • Expected (mean) distance: E[L_n] = n∆(2p − 1)
  • Distance variance: V[L_n] = 4n∆²p(1 − p)
  • Standard deviation: s_n ≡ √V[L_n] = 2∆√(np(1 − p))
  • If p = 0.5: E[L_n] = 0; V[L_n] = n∆² (so s_n = ∆√n)

1D Random Walk: A Few Numerical Examples

Step length ∆ = 1; P(+1) = P(−1) = 0.5 (the drunkard's walk): E[L_n] = 0; s_n = √n

Step n       | 1    | 10  | 100 | 1,000 | 10,000 | 100,000 | 1,000,000
Mean E[L_n]  | 0    | 0   | 0   | 0     | 0      | 0       | 0
St. d. s_n   | 1    | 3.2 | 10  | 31.6  | 100    | 316.2   | 1,000

Step length ∆ = 1; P(+1) = 0.64; P(−1) = 0.36: E[L_n] = 0.28n; s_n = 0.96√n

Step n       | 1    | 10  | 100 | 1,000 | 10,000 | 100,000 | 1,000,000
Mean E[L_n]  | 0.28 | 2.8 | 28  | 280   | 2,800  | 28,000  | 280,000
St. d. s_n   | 0.96 | 3.0 | 9.6 | 30.4  | 96     | 303.5   | 960

Step length ∆ = 1; P(+1) = 0.9; P(−1) = 0.1: E[L_n] = 0.8n; s_n = 0.6√n

Step n       | 1    | 10  | 100 | 1,000 | 10,000 | 100,000 | 1,000,000
Mean E[L_n]  | 0.8  | 8   | 80  | 800   | 8,000  | 80,000  | 800,000
St. d. s_n   | 0.6  | 1.9 | 6   | 19    | 60     | 189.7   | 600

Pseudocode to Simulate a 1D Random Walk

Unit step: ∆ = ±1; P(+1) + P(−1) = 1; Pright ≡ P(+1); Pleft ≡ P(−1)
Threshold: T = Pright

r = random()       // pseudo-random number, 0 ≤ r ≤ 1
if r ≤ T then
    move right     // ∆ = +1
else
    move left      // ∆ = −1

Example (P(+1) = 0.75):

n | r    | ∆  | L_n     n  | r    | ∆  | L_n     n  | r    | ∆  | L_n
1 | 0.84 | −1 | −1       6 | 0.20 | +1 | −2      11 | 0.48 | +1 |  1
2 | 0.39 | +1 |  0       7 | 0.34 | +1 | −1      12 | 0.63 | +1 |  2
3 | 0.78 | −1 | −1       8 | 0.77 | −1 | −2      13 | 0.36 | +1 |  3
4 | 0.80 | −1 | −2       9 | 0.28 | +1 | −1      14 | 0.51 | +1 |  4
5 | 0.91 | −1 | −3      10 | 0.55 | +1 |  0      15 | 0.95 | −1 |  3

Simulated 1D Random Walk (P(+1) = 0.75)

[Figure: one simulated trajectory L_n plotted against the expected distance E[L_n] for n = 1, ..., 20]

n      | 10   | 20   | 30   | 40   | 50   | 60   | 70   | 80   | 90   | 100  | ...
L_n    | 0    | 6    | 12   | 16   | 16   | 24   | 26   | 32   | 36   | 36   | ...
E[L_n] | 5    | 10   | 15   | 20   | 25   | 30   | 35   | 40   | 45   | 50   | ...
s_n    | 2.74 | 3.87 | 4.74 | 5.48 | 6.12 | 6.71 | 7.25 | 7.75 | 8.22 | 8.66 | ...

2D and 3D Random Walks

[Figure: a 2D walk in the (x, y) plane and a 3D walk in (x, y, z) space]

Example: 2D Random Walk

[Figure: a simulated 2D random walk]

Example: 3D Walks

http://www.audienceoftwo.com/pics/upload/542px-Walk3d 0.png

Pseudocode to Simulate a 2D Random Walk

Pright ≡ P(+1, 0); Pup ≡ P(0, +1); Pleft ≡ P(−1, 0); Pdown ≡ P(0, −1)
Pright + Pup + Pleft + Pdown = 1
Thresholds: T1 = Pright; T2 = T1 + Pup; T3 = T2 + Pleft

r = random()       // pseudo-random number, 0 ≤ r ≤ 1
if r ≤ T1 then
    move right     // ∆x = +1; ∆y = 0
else if r ≤ T2 then
    move up        // ∆x = 0; ∆y = +1
else if r ≤ T3 then
    move left      // ∆x = −1; ∆y = 0
else
    move down      // ∆x = 0; ∆y = −1

Some Properties of Random Walks

• Gambler's ruin,
or recurrence phenomenon: a simple 1D random walk (P(−1) = P(+1) = 0.5) crosses every point an infinite number of times
• A gambler with a finite amount of money playing a fair game against a bank with infinite funds will surely lose!
• Probability Pr(d) that a random walk on a d-D hypercubic lattice returns to the origin:
  Pr(1) = 1; Pr(2) = 1 (recurrent walks: d ≤ 2)
  Pr(3) = 0.3405...; Pr(4) = 0.1932... (transient walks: d ≥ 3)
• The drunkard eventually gets back to his house from the bar if his random walk is on the set of all points of the line or the plane with integer coordinates
• But in three dimensions, the probability of ever returning is only roughly 34%

Markov Chains

x_1 → ... → x_{n−1} → x_n → x_{n+1} → ... → x_N

• 1st-order Markov chain: a series of random variables x_1, ..., x_N with the conditional independence property, for n = 1, ..., N − 1:
  P(x_{n+1} | x_1, ..., x_n) = P(x_{n+1} | x_n)
• Homogeneous Markov chain: the same transition probabilities for all n
• Transition matrix:
  P = [p_{αβ}] ≡ [P(x_{n+1} = β | x_n = α)], α, β = 1, ..., K

Invariant marginal distribution for a homogeneous chain:

  P*(x_{n+1}) = Σ_{x_n} P(x_{n+1} | x_n) P*(x_n)

• A given Markov chain may have more than one invariant distribution
• Detailed balance, a sufficient (but not necessary) condition of invariance:
  P*(x_{n+1}) P(x_n | x_{n+1}) = P*(x_n) P(x_{n+1} | x_n)
• Reversible Markov chain: one for which detailed balance holds

Markov Chain: An Example

1D random walk with reflecting barriers: at each step n, the chain variable x^[n] takes values from {1, 2, 3, 4, 5}

Transition matrix P ≡ [P(x^[n+1] = β | x^[n] = α)], α, β = 1, ..., 5:

      | 0    1−p   0    0    0 |
      | 1    0    1−p   0    0 |
  P = | 0    p     0   1−p   0 |
      | 0    0     p    0    1 |
      | 0    0     0    p    0 |

Invariant p.d.:

  P*(x) = 1 / (1 + (1 − 2p)²) · [(1−p)³, (1−p)², p(1−p), p², p³]ᵀ

Markov Chains

• Ergodicity: irrespective of P(x_1), the distribution P(x_n) converges as n → ∞ to the required invariant distribution P*(x)
• A homogeneous Markov chain is ergodic under weak restrictions on the invariant distribution and the transition probabilities
• The invariant distribution is then called the equilibrium distribution
• An ergodic Markov chain has only one equilibrium distribution
• Higher-order Markov chains: P(x_{n+1} | x_1, ..., x_n) = P(x_{n+1} | x_n, ..., x_{n−k})
• Generally, the dependencies need not be on the nearest k variables

First-order Markov Chains

• m-step transition probability of going from state α to state β in m steps:
  p_{αβ}(m) = P(x_{n+m} = β | x_n = α)
• Chapman–Kolmogorov equations: p_{αβ}(m + n) = Σ_γ p_{αγ}(m) p_{γβ}(n), since
  Σ_γ P(x_{k+m} = γ | x_k = α) P(x_{k+m+n} = β | x_{k+m} = γ)
    = Σ_γ P(x_{k+m+n} = β, x_{k+m} = γ | x_k = α)
    ≡ P(x_{k+m+n} = β | x_k = α)
• In matrix form: P(1) = P by definition; P(n) = P^n; P(m + n) = P(m) P(n)

Simulation of a Homogeneous Markov Chain

• Initial data: the marginal p.d. P_0(x) and transition matrix P
• Sample x_0 = a from the initial marginal distribution P_0(x)
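The simulation procedure above can be sketched in Python and checked against the reflecting-barrier example: over a long run, the empirical state frequencies of the simulated chain should approach the invariant distribution P*(x). This is a minimal illustrative sketch (the function names are mine, not from the slides); note that it stores the transition matrix row-wise, with P[a][b] the probability of moving from state a to state b, and it reuses the cumulative-threshold trick from the random-walk pseudocode.

```python
import random

def simulate_chain(P, P0, n_steps, seed=None):
    """Simulate a homogeneous Markov chain.

    P[a][b] = transition probability from state a to state b (row-stochastic);
    P0[a]   = initial marginal probability of state a.
    """
    rng = random.Random(seed)

    def sample(dist):
        # Cumulative thresholds: pick the first state whose running sum covers r
        r, cum = rng.random(), 0.0
        for state, prob in enumerate(dist):
            cum += prob
            if r <= cum:
                return state
        return len(dist) - 1  # guard against floating-point round-off

    x = sample(P0)            # x_0 drawn from the initial marginal distribution
    states = [x]
    for _ in range(n_steps):  # each next state drawn from row P[x]
        x = sample(P[x])
        states.append(x)
    return states

# Reflecting-barrier walk on {1,...,5} (states 0..4 here), step-right probability p
p = 0.3
P = [
    [0, 1, 0, 0, 0],          # barrier: state 1 reflects to state 2
    [1 - p, 0, p, 0, 0],
    [0, 1 - p, 0, p, 0],
    [0, 0, 1 - p, 0, p],
    [0, 0, 0, 1, 0],          # barrier: state 5 reflects to state 4
]
P0 = [0.2] * 5                # uniform initial distribution

chain = simulate_chain(P, P0, 200_000, seed=42)
freq = [chain.count(s) / len(chain) for s in range(5)]

# Invariant p.d. from the slide: ((1-p)^3, (1-p)^2, p(1-p), p^2, p^3) / (1 + (1-2p)^2)
Z = 1 + (1 - 2 * p) ** 2
target = [(1 - p) ** 3 / Z, (1 - p) ** 2 / Z, p * (1 - p) / Z, p ** 2 / Z, p ** 3 / Z]
print(freq)    # empirical frequencies, close to target
print(target)
```

The chain is periodic (each step changes the parity of the state), so P(x_n) itself does not converge here, but the time-averaged frequencies still converge to P*(x).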
