1 Simple Random Walk


We consider one of the basic models for random walk, simple random walk on the integer lattice Z^d. At each time step, a random walker makes a random move of length one in one of the lattice directions.

1.1 One dimension

We start by studying simple random walk on the integers. At each time unit, a walker flips a fair coin and moves one step to the right or one step to the left depending on whether the coin comes up heads or tails. We let S_n denote the position of the walker at time n. If we assume that the walker starts at x, we can write

    S_n = x + X_1 + \cdots + X_n,

where X_1, X_2, \ldots are independent random variables with P\{X_j = 1\} = P\{X_j = -1\} = 1/2. In this chapter we assume that the walker starts at the origin (x = 0) and investigate where we expect the walker to be. The main questions are:

• On the average, how far is the walker from the starting point?
• What is the probability distribution for the position of the walker?
• What is the probability that at a particular time the walker is at the origin?
• Does the random walker keep returning to the origin or does the walker eventually leave forever?

Probabilists use the notation E for expectation (also called expected value, mean, average value), defined for discrete random variables by

    E[X] = \sum_z z \, P\{X = z\}.

The random walk satisfies E[S_n] = 0 since steps of +1 and -1 are equally likely. To compute the average distance, one might try to compute E[|S_n|]. It turns out to be much easier to compute E[S_n^2]:

    E[S_n^2] = E\left[ \left( \sum_{j=1}^{n} X_j \right)^2 \right]
             = E\left[ \sum_{j=1}^{n} \sum_{k=1}^{n} X_j X_k \right]
             = \sum_{j=1}^{n} \sum_{k=1}^{n} E[X_j X_k] = n + \sum_{j \neq k} E[X_j X_k].

This calculation uses an easy (but very important) property of average values: E[X + Y] = E[X] + E[Y]. If j \neq k, then X_j X_k is equally likely to be +1 and -1, and so the expectation is zero. We therefore get

    Var[S_n] = E[S_n^2] = n.
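The identities E[S_n] = 0 and Var[S_n] = n are easy to probe by simulation. The following is a minimal Monte Carlo sketch; Python, the function name, and the sample sizes are our choices, not part of the text:

```python
import random

def simulate_moments(n_steps, n_walks, seed=0):
    """Estimate E[S_n] and E[S_n^2] for simple random walk started at 0."""
    rng = random.Random(seed)
    total, total_sq = 0, 0
    for _ in range(n_walks):
        # one walk: the sum of n_steps i.i.d. +/-1 steps
        s = sum(rng.choice((1, -1)) for _ in range(n_steps))
        total += s
        total_sq += s * s
    return total / n_walks, total_sq / n_walks

mean, mean_sq = simulate_moments(n_steps=100, n_walks=20000)
# mean should be close to 0 and mean_sq close to n = 100
```

With 20000 walks the sampling error in both estimates is small compared to n, so the agreement with the exact calculation is visible directly.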
The variance of a random variable is defined by

    Var[X] = E\left[ (X - EX)^2 \right] = E[X^2] - (EX)^2

(a simple calculation establishes the second equality). Our calculation illustrates an important fact about variances of sums: if X_1, \ldots, X_n are independent, then

    Var[X_1 + \cdots + X_n] = Var[X_1] + \cdots + Var[X_n].

Sum rules like this make it much easier to compute averages of the square of a random variable than other powers. In many ways, this is just an analogue of the Pythagorean theorem from geometry: the independence of the random variables is the analogue of the idea of perpendicular or orthogonal vectors.

The next question is: what is the probability that the walker is at the origin after n steps? Before computing this very accurately, let us do a heuristic argument. Since E[S_n^2] = n, the typical distance away from the origin is of order \sqrt{n}. There are about \sqrt{n} even integers that are distance \sqrt{n} from the starting point, so one might guess that the probability of being at a particular one should decay like n^{-1/2}. This is indeed the case, as we demonstrate.

In fact, it is easy to give an exact formula for the distribution of the walker after n steps. It is easy to see that after an odd number of steps the walker is at an odd integer and after an even number of steps the walker is at an even integer. Therefore, P\{S_n = x\} = 0 if n + x is odd. Let us suppose the walker has taken an even number of steps, 2n. In order for the walker to be back at the origin at time 2n there need to be exactly n "+1" steps and n "-1" steps. The number of ways to choose which n steps are +1 is \binom{2n}{n}, and each particular choice of 2n +1s and -1s has probability 2^{-2n} of occurring. Therefore,

    P\{S_{2n} = 0\} = \binom{2n}{n} 2^{-2n} = \frac{(2n)!}{n! \, n!} \, 2^{-2n}.

More generally,

    P\{S_{2n} = 2j\} = \binom{2n}{n+j} 2^{-2n} = \frac{(2n)!}{(n+j)! \, (n-j)!} \, 2^{-2n}.

While these formulas are exact, it is not obvious how to use them because they contain ratios of very large numbers.
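For moderate n the large factorials are not an obstacle on a computer, since the binomial coefficients can be computed with exact integer arithmetic. A small sketch (Python and the helper name are ours):

```python
from math import comb

def prob_2j(n, j):
    """Exact P{S_{2n} = 2j} = C(2n, n+j) * 2^(-2n), using exact integers."""
    return comb(2 * n, n + j) / 4**n

n = 10
# the probabilities over all reachable sites 2j, -n <= j <= n, sum to 1
total = sum(prob_2j(n, j) for j in range(-n, n + 1))
p0 = prob_2j(n, 0)  # P{S_20 = 0} = C(20, 10) / 2^20
```

The asymptotic analysis of the following sections becomes necessary when n is large enough that one wants a closed-form approximation rather than a sum of binomial terms.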
Trying to understand this quantity leads to studying the behavior of n! as n gets large. This we discuss in the next section.

1.2 Stirling's formula

Stirling's formula states that as n \to \infty,

    n! \sim \sqrt{2\pi} \, n^{n + 1/2} e^{-n},

where \sim means that the ratio of the two sides tends to 1. We will prove this in the next two subsections. In this subsection we will prove that there is a positive number C_0 such that

    \lim_{n \to \infty} b_n = C_0, \quad \text{where } b_n = \frac{n!}{n^{n + 1/2} e^{-n}},    (1)

and in Section 1.3 we show that C_0 = \sqrt{2\pi}.

To start, it is easy to check that b_1 = e and, if n \geq 2,

    \frac{b_n}{b_{n-1}} = e \left(1 - \frac{1}{n}\right)^{n - 1/2} = e \left(1 - \frac{1}{n}\right)^{n} \left(1 - \frac{1}{n}\right)^{-1/2}.

Let \delta_n = (b_n / b_{n-1}) - 1, so that for n \geq 2,

    b_n = b_1 \prod_{m=2}^{n} \frac{b_m}{b_{m-1}} = e \prod_{m=2}^{n} [1 + \delta_m].

Then (1) can be restated as saying that the infinite product

    e \prod_{n=2}^{\infty} [1 + \delta_n] = \lim_{N \to \infty} e \prod_{n=2}^{N} [1 + \delta_n]

converges to a positive constant that we call C_0. By taking the logarithm of both sides, we see that convergence to a positive constant is equivalent to

    \sum_{n=2}^{\infty} \log[1 + \delta_n] < \infty.    (2)

One of the most important tools for determining limits is Taylor's theorem with remainder, a version of which we now recall. Suppose f is a C^{k+1} function. Let P_k(x) denote the kth order Taylor series polynomial about the origin. Then, for x > 0,

    |f(x) - P_k(x)| \leq a_k x^{k+1},

where

    a_k = \max_{0 \leq t \leq x} \frac{1}{(k+1)!} \, |f^{(k+1)}(t)|.

A similar estimate is derived for negative x by considering \tilde{f}(x) = f(-x). The Taylor series for the logarithm gives

    \log(1 + u) = u - \frac{u^2}{2} + \frac{u^3}{3} - \cdots,

which is valid for |u| < 1. In fact, the Taylor series with remainder tells us that for every positive integer k,

    \log(1 + u) = P_k(u) + O(|u|^{k+1}),    (3)

where P_k(u) = u - (u^2/2) + \cdots + (-1)^{k+1} (u^k / k). The O(|u|^{k+1}) denotes a term that is bounded by a constant times |u|^{k+1} for small u. The Taylor series with remainder implies that there is a constant c_k such that for all |u| \leq 1/2,

    |\log(1 + u) - P_k(u)| \leq c_k |u|^{k+1}.    (4)
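The convergence in (1) can be checked numerically before it is proved. A minimal sketch (Python; exact integer factorials keep the numerator accurate for moderate n, and the assertion about the rate of approach reflects the standard Stirling correction, not a claim made in the text):

```python
from math import factorial, exp, sqrt, pi

def b(n):
    """b_n = n! / (n^(n + 1/2) * e^(-n)); the claim is b_n -> C_0 = sqrt(2*pi)."""
    return factorial(n) / (n ** (n + 0.5) * exp(-n))

# b_n decreases toward sqrt(2*pi) ~ 2.5066, roughly like 1 + 1/(12n)
ratios = [b(n) / sqrt(2 * pi) for n in (10, 50, 100)]
```

Running this shows the ratios decreasing monotonically toward 1, consistent with the product representation in terms of the delta_n.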
We will use the O(\cdot) notation as in (3) when doing asymptotics; in all cases this will be shorthand for a more precise statement as in (4).

By taking the Taylor series, we can see that to prove (2), assuming 1 + \delta_n > 0 for all n, it suffices to show that \sum |\delta_n| < \infty. We will establish this by showing that \delta_n = O(n^{-2}), i.e., there is a c such that

    |\delta_n| \leq \frac{c}{n^2}.

To see this, consider (1 - 1/n)^n, which we know approaches e^{-1} as n gets large. We use the Taylor series to estimate how fast it converges. We write

    \log \left(1 - \frac{1}{n}\right)^{n} = n \log\left(1 - \frac{1}{n}\right) = n \left[ -\frac{1}{n} - \frac{1}{2n^2} + O(n^{-3}) \right] = -1 - \frac{1}{2n} + O(n^{-2}),

    \log \left(1 - \frac{1}{n}\right)^{-1/2} = \frac{1}{2n} + O(n^{-2}).

By adding the terms we finish the proof of (1). In fact (see Exercise 6.19), we can show that

    n! = C_0 \, n^{n + 1/2} e^{-n} \left[ 1 + O(n^{-1}) \right].    (5)

1.3 Central limit theorem

Let S_n be the position of a simple random walker on the integers assuming S_0 = 0. For every integer j,

    P\{S_{2n} = 2j\} = \binom{2n}{n+j} 2^{-2n} = \frac{(2n)!}{(n+j)! \, (n-j)!} \, 2^{-2n}.

Let us assume that |j| \leq n/2. Then plugging into Stirling's formula and simplifying gives

    P\{S_{2n} = 2j\} \sim \frac{\sqrt{2}}{C_0} \left(1 - \frac{j^2}{n^2}\right)^{-n} \left(1 + \frac{j}{n}\right)^{-j} \left(1 - \frac{j}{n}\right)^{j} \left(\frac{n}{n^2 - j^2}\right)^{1/2}.    (6)

In fact (if one uses (5)), there is a c such that the ratio of the two sides is within distance c/n of 1 (assuming |j| \leq n/2).

What does this look like as n tends to infinity? Let us first consider the case j = 0. Then we get

    P\{S_{2n} = 0\} \sim \frac{\sqrt{2}}{C_0 \, n^{1/2}}.

We now consider j of order \sqrt{n}. If we write j = r\sqrt{n}, the right-hand side of (6) becomes

    \frac{\sqrt{2}}{C_0 \sqrt{n}} \left(1 - \frac{r^2}{n}\right)^{-n} \left(1 + \frac{r}{\sqrt{n}}\right)^{-r\sqrt{n}} \left(1 - \frac{r}{\sqrt{n}}\right)^{r\sqrt{n}} \left[\frac{1}{1 - (r^2/n)}\right]^{1/2}.

As n \to \infty, this is asymptotic to

    \frac{\sqrt{2}}{C_0 \sqrt{n}} \, e^{r^2} e^{-r^2} e^{-r^2} = \frac{\sqrt{2}}{C_0 \sqrt{n}} \, e^{-j^2/n}.

For every a < b,

    \lim_{n \to \infty} P\{a\sqrt{2n} \leq S_{2n} \leq b\sqrt{2n}\} = \lim_{n \to \infty} \sum \frac{\sqrt{2}}{C_0 \sqrt{n}} \, e^{-j^2/n},

where the sum is over all j with a\sqrt{2n} \leq 2j \leq b\sqrt{2n}. The right-hand side is the Riemann sum approximation of an integral where the intervals in the sum have length \sqrt{2/n}.
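The local limit approximation P{S_{2n} = 2j} ≈ (√2/(C_0 √n)) e^{-j²/n}, with C_0 = √(2π), can be compared directly against the exact binomial formula. A sketch (Python; the tolerance 0.02 is our choice, intended to be comfortably consistent with the c/n error bound for n = 200):

```python
from math import comb, exp, sqrt, pi

def exact(n, j):
    """Exact P{S_{2n} = 2j} = C(2n, n+j) * 2^(-2n)."""
    return comb(2 * n, n + j) / 4**n

def clt_approx(n, j):
    """Approximation sqrt(2)/(C_0*sqrt(n)) * e^(-j^2/n) with C_0 = sqrt(2*pi),
    which simplifies to e^(-j^2/n) / sqrt(pi*n)."""
    return exp(-j * j / n) / sqrt(pi * n)

n = 200
# for j of order sqrt(n), the ratio of exact to approximate is 1 + O(1/n)
worst = max(abs(exact(n, j) / clt_approx(n, j) - 1) for j in range(int(sqrt(n)) + 1))
```

At j = 0 this reduces to the statement P{S_{2n} = 0} ≈ 1/√(πn) from the beginning of the subsection.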