
MA3H2 Markov Processes and Percolation Theory
Stefan Adams
2011, updated 13.03.2015

Contents

1 Simple random walk
  1.1 Nearest neighbour random walk on $\mathbb{Z}$
  1.2 How often do random walkers return to the origin?
  1.3 Transition function
  1.4 Boundary value problems
  1.5 Summary
2 Markov processes
  2.1 Poisson process
  2.2 Compound Poisson process on $\mathbb{Z}^d$
  2.3 The Q-matrix and Kolmogorov's backward equations
  2.4 Markov processes with bounded rates
  2.5 Jump chain and holding times
  2.6 Summary - Markov processes
3 Examples, hitting times, and long-time behaviour
  3.1 Birth-and-death process
  3.2 Hitting times and probabilities. Recurrence and transience
4 Percolation theory
  4.1 Introduction
  4.2 Some basic techniques
  4.3 Bond percolation in $\mathbb{Z}^2$

Preface

These notes are from 2011-2015. Stefan Adams

1 Simple random walk

1.1 Nearest neighbour random walk on $\mathbb{Z}$

Pick $p \in (0,1)$, and suppose that $(X_n : n \in \mathbb{N})$ is a sequence (family) of $\{-1,+1\}$-valued, independent identically distributed Bernoulli random variables with $P(X_i = 1) = p$ and $P(X_i = -1) = 1 - p = q$ for all $i \in \mathbb{N}$. That is, for any $n \in \mathbb{N}$ and sequence $E = (e_1, \ldots, e_n) \in \{-1,1\}^n$,
\[
P(X_1 = e_1, \ldots, X_n = e_n) = p^{N(E)} q^{n - N(E)},
\]
where $N(E) = \#\{m \colon e_m = 1\} = \frac{1}{2}\big(n + \sum_{m=1}^{n} e_m\big)$ is the number of "1"s in the sequence $E$.

Imagine a walker moving randomly on the integers $\mathbb{Z}$. The walker starts at $a \in \mathbb{Z}$, and at every integer time $n \in \mathbb{N}$ the walker flips a coin and moves one step to the right if it comes up heads ($P(\{\text{head}\}) = P(X_n = 1) = p$) and one step to the left if it comes up tails. Denote the position of the walker at time $n$ by $S_n$. The position $S_n$ is a random variable; it depends on the outcome of the $n$ flips of the coin. We set
\[
S_0 = a \quad\text{and}\quad S_n = S_0 + \sum_{i=1}^{n} X_i. \tag{1.1}
\]
Then $S = (S_n)_{n \in \mathbb{N}}$ is often called a nearest neighbour random walk on $\mathbb{Z}$. The random walk is called symmetric if $p = q = \frac{1}{2}$. We may record the motion of the walker as the set $\{(n, S_n) \colon n \in \mathbb{N}_0\}$ of Cartesian coordinates of points in the plane (the $x$-axis is the time and the $y$-axis is the position $S_n$). We write $P_a$ for the conditional probability $P(\cdot \mid S_0 = a)$ when we set $S_0 = a$, implying $P(S_0 = a) = 1$. It will be clear from the context which deterministic starting point we consider.

Lemma 1.1
(a) The random walk is spatially homogeneous, i.e., $P_a(S_n = j) = P_{a+b}(S_n = j + b)$ for $j, b, a \in \mathbb{Z}$.
(b) The random walk is temporally homogeneous, i.e., $P(S_n = j \mid S_0 = a) = P(S_{n+m} = j \mid S_m = a)$.
(c) Markov property:
\[
P(S_{m+n} = j \mid S_0, S_1, \ldots, S_m) = P(S_{m+n} = j \mid S_m), \quad n \geq 0.
\]

Proof. (a) $P_a(S_n = j) = P_a\big(\sum_{i=1}^{n} X_i = j - a\big) = P_{a+b}\big(\sum_{i=1}^{n} X_i = j - a\big) = P_{a+b}(S_n = j + b)$.
(b)
\[
\text{LHS} = P\Big(\sum_{i=1}^{n} X_i = j - a\Big) = P\Big(\sum_{i=m+1}^{m+n} X_i = j - a\Big) = \text{RHS}.
\]
(c) If one knows the value of $S_m$, then the distribution of $S_{m+n}$ depends only on the jumps $X_{m+1}, \ldots, X_{m+n}$, and cannot depend on further information concerning the values of $S_0, S_1, \ldots, S_{m-1}$. □

Having that, we get the following stochastic-process-oriented description of $(S_n)_{n \in \mathbb{N}_0}$, replacing (1.1). Let $(S_n)_{n \in \mathbb{N}_0}$ be a family of random variables $S_n$ taking values in $\mathbb{Z}$ such that $P(S_0 = a) = 1$ and
\[
P(S_n - S_{n-1} = e \mid S_0, \ldots, S_{n-1}) = \begin{cases} p & \text{if } e = 1, \\ q & \text{if } e = -1. \end{cases} \tag{1.2}
\]
Markov property: conditional upon the present, the future does not depend on the past.

The set of realisations of the walk is the set of sequences $S = (s_0, s_1, \ldots)$ with $s_0 = a$ and $s_{i+1} - s_i = \pm 1$ for all $i \in \mathbb{N}_0$, and such a sequence may be thought of as a sample path of the walk, drawn as in figure 1. Let us assume that $S_0 = 0$ and $p = \frac{1}{2}$.
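Definition (1.1) translates directly into a short simulation. The following Python sketch is not part of the original notes; the function name random_walk and its default arguments are illustrative choices only. It draws the i.i.d. steps $X_i$ and returns a sample path $(s_0, s_1, \ldots, s_n)$:

```python
import random

def random_walk(n, p=0.5, a=0):
    """Return a sample path (s_0, s_1, ..., s_n) of the nearest neighbour
    random walk (1.1): S_0 = a and S_n = S_0 + X_1 + ... + X_n, with
    i.i.d. steps satisfying P(X_i = +1) = p and P(X_i = -1) = 1 - p."""
    path = [a]
    for _ in range(n):
        step = 1 if random.random() < p else -1
        path.append(path[-1] + step)
    return path

# One sample path of the symmetric walk (p = 1/2) started at the origin:
print(random_walk(10))
```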
Natural questions to ask are:

• On the average, how far is the walker from the starting point?
• What is the probability that at a particular time the walker is at the origin?
• More generally, what is the probability distribution for the position of the walker?
• Does the random walker keep returning to the origin, or does the walker eventually leave forever?

We easily get that $E(S_n) = 0$ when $p = \frac{1}{2}$. For the average distance from the origin we compute the squared position at time $n$, i.e.,
\[
E(S_n^2) = E\big((X_1 + \cdots + X_n)^2\big) = \sum_{j=1}^{n} E(X_j^2) + \sum_{i \neq j} E(X_i X_j).
\]
Now $X_j^2 = 1$, and the independence of the $X_i$'s gives $E(X_i X_j) = E(X_i)E(X_j) = 0$ whenever $i \neq j$. Hence $E(S_n^2) = n$, and the expected distance from the origin is $\sim c\sqrt{n}$ for some constant $c > 0$.

In order to get more detailed information about the random walk at a given time $n$, we consider the set of possible sample paths. The probability that the first $n$ steps of the walk follow a given path $S = (s_0, s_1, \ldots, s_n)$ is $p^r q^\ell$, where

$r$ = number of steps of $S$ to the right = $\#\{i \colon s_{i+1} - s_i = 1\}$,
$\ell$ = number of steps of $S$ to the left = $\#\{i \colon s_{i+1} - s_i = -1\}$.

Hence, any event for the random walk may be expressed in terms of an appropriate set of paths:
\[
P(S_n = b) = \sum_{r} M_n^r(a, b)\, p^r q^{n-r}, \qquad P(S_0 = a) = 1,
\]
where $M_n^r(a, b)$ is the number of paths $(s_0, s_1, \ldots, s_n)$ with $s_0 = a$ and $s_n = b$ having exactly $r$ rightward steps. Note that $r + \ell = n$ and $r - \ell = b - a$. Hence,
\[
r = \frac{1}{2}(n + b - a) \quad\text{and}\quad \ell = \frac{1}{2}(n - b + a).
\]
If $\frac{1}{2}(n + b - a) \in \{0, 1, \ldots, n\}$, then
\[
P(S_n = b) = \binom{n}{\frac{1}{2}(n + b - a)} p^{\frac{1}{2}(n+b-a)} q^{\frac{1}{2}(n-b+a)}, \tag{1.3}
\]
and $P(S_n = b) = 0$ otherwise, since there are exactly $\binom{n}{r}$ paths of length $n$ having $r$ rightward steps and $n - r$ leftward steps.

Thus to compute probabilities of certain random walk events we shall count the corresponding set of paths. The following result is an important tool for this counting.

Notation: $N_n(a, b)$ = number of possible paths from $(0, a)$ to $(n, b)$. We denote by $N_n^0(a, b)$ the number of possible paths from $(0, a)$ to $(n, b)$ which touch the origin, i.e., which contain some point $(k, 0)$, $1 \leq k < n$.

Theorem 1.2 (The reflection principle) If $a, b > 0$ then
\[
N_n^0(a, b) = N_n(-a, b).
\]

Proof. Each path from $(0, -a)$ to $(n, b)$ intersects the $x$-axis at some earliest point $(k, 0)$. Reflect the segment of the path with times $0 \leq m \leq k$ in the $x$-axis to obtain a path joining $(0, a)$ and $(n, b)$ which intersects/touches the $x$-axis, see figure 2. This operation gives a one-one correspondence between the two collections of paths, and the theorem is proved. □

Lemma 1.3
\[
N_n(a, b) = \binom{n}{\frac{1}{2}(n + b - a)}.
\]

Proof. Choose a path from $(0, a)$ to $(n, b)$ and let $\alpha$ and $\beta$ be the numbers of positive and negative steps, respectively, in this path. Then $\alpha + \beta = n$ and $\alpha - \beta = b - a$, so that $\alpha = \frac{1}{2}(n + b - a)$. Now the number of such paths is exactly the number of ways of picking $\alpha$ positive steps out of the $n$ available steps. Hence,
\[
N_n(a, b) = \binom{n}{\alpha}. \qquad \Box
\]

Corollary 1.4 (Ballot theorem) If $b > 0$ then the number of paths from $(0, 0)$ to $(n, b)$ which do not revisit the $x$-axis (origin) equals $\frac{b}{n} N_n(0, b)$.

Proof. The first step of all such paths is to $(1, 1)$, and so the number of such paths is
\[
N_{n-1}(1, b) - N_{n-1}^0(1, b) = N_{n-1}(1, b) - N_{n-1}(-1, b) = \binom{n-1}{\frac{1}{2}(n - 1 + b - 1)} - \binom{n-1}{\frac{1}{2}(n - 1 + b + 1)}.
\]
Elementary computations give the result. □
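As a quick sanity check on the Ballot theorem (our own illustration, not part of the original notes), one can enumerate all $2^n$ step sequences for small $n$ and compare the brute-force count of paths that never revisit the $x$-axis with $\frac{b}{n} N_n(0, b)$:

```python
from itertools import product
from math import comb

def ballot_count(n, b):
    """Count paths (s_0, ..., s_n) with s_0 = 0 and s_n = b whose positions
    s_1, ..., s_n are all nonzero, by enumerating all 2^n step sequences."""
    count = 0
    for steps in product((1, -1), repeat=n):
        path = [0]
        for x in steps:
            path.append(path[-1] + x)
        if path[-1] == b and all(s != 0 for s in path[1:]):
            count += 1
    return count

n, b = 8, 4
# Ballot theorem: the count equals (b/n) * N_n(0, b) = (b/n) * C(n, (n+b)/2).
print(ballot_count(n, b), b * comb(n, (n + b) // 2) // n)  # both print 14
```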
What can be deduced from the reflection principle? We first consider the probability that the walk does not revisit its starting point in the first $n$ steps.

Theorem 1.5 Let $S_0 = 0$, $b \in \mathbb{Z}$, and $p \in (0, 1)$. Then
\[
P(S_1 S_2 \cdots S_n \neq 0,\, S_n = b) = \frac{|b|}{n}\, P(S_n = b),
\]
implying $P(S_1 S_2 \cdots S_n \neq 0) = \frac{1}{n} E(|S_n|)$.

Proof. Pick $b > 0$. The admissible paths for the event in question do not visit the $x$-axis in the time interval $[1, n]$, and the number of such paths is by the Ballot theorem exactly $\frac{b}{n} N_n(0, b)$; each such path has $\frac{1}{2}(n + b)$ rightward and $\frac{1}{2}(n - b)$ leftward steps. Hence,
\[
P(S_1 S_2 \cdots S_n \neq 0,\, S_n = b) = \frac{b}{n} N_n(0, b)\, p^{\frac{1}{2}(n+b)} q^{\frac{1}{2}(n-b)} = \frac{b}{n}\, P(S_n = b).
\]
The case $b < 0$ follows similarly, and $b = 0$ is obvious. □
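Theorem 1.5 can be checked in the same brute-force fashion (again an illustration we add here, not part of the notes), now weighting each path by its probability $p^r q^{n-r}$ so that the check also covers the asymmetric case $p \neq \frac{1}{2}$:

```python
from itertools import product

def check_theorem_1_5(n, b, p):
    """Compute P(S_1 ... S_n != 0, S_n = b) and (|b|/n) * P(S_n = b)
    exactly, by summing the path probabilities p^r q^(n-r) over all
    2^n step sequences."""
    q = 1 - p
    lhs = prob_b = 0.0
    for steps in product((1, -1), repeat=n):
        path = [0]
        for x in steps:
            path.append(path[-1] + x)
        if path[-1] != b:
            continue
        weight = p ** steps.count(1) * q ** steps.count(-1)
        prob_b += weight                  # contributes to P(S_n = b)
        if all(s != 0 for s in path[1:]):
            lhs += weight                 # walk never revisits the origin
    return lhs, abs(b) / n * prob_b

# The two values agree, also for the asymmetric walk (p != 1/2):
print(check_theorem_1_5(n=8, b=-2, p=0.3))
```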