Notes on WI4430 Martingales and Brownian Motion Robbert Fokkink


Robbert Fokkink, TU Delft
E-mail address: [email protected]

Abstract. These notes accompany the course WI4430 on Martingales and Brownian Motion that I teach in the fall of 2016 at Delft University. Normally, Frank Redig teaches this course, but he has a sabbatical and I step in for one time. The course is mainly based on Chapter 10 of Gut's book Probability: A Graduate Course, which is available through the WI4430 homepage. Apart from Gut, Brzeźniak and Zastawniak's text on Basic Stochastic Processes is also recommended. It has many exercises and solutions. I wrote these notes to help you digest Gut. I have included exercises, but they are not meant to be very hard. Frank Redig has also prepared notes (you can find them under 'scribenotes') and sets of more difficult exercises. This is your 'homework', which I include in between my notes. There is a homework session once every week in which you can seek assistance in solving these exercises. The difficult exercises are marked as challenges, to warn you. Sometimes a challenge offers a financial reward as an incentive. You can claim the reward by handing in a written solution and a transfer of copyright. These challenges are usually open research problems.

Contents

Chapter 1. A recap of measure theory
1.1. The Riemann integral ... and beyond
1.2. Putting the sigma into the algebra
1.3. A call for better integration
1.4. Swap until you drop
1.5. A final word

Chapter 2. E-learning
2.1. E-definitions
2.2. E-properties
2.3. E-laboration

Chapter 3. Meet the Martingales
3.1. Coming to terms with terminology
3.2. Thirteen examples
3.3. Linear Algebra Again

Chapter 4. Welcome to the California Casino
4.1. You can check out any time you like
4.2. But you can never lose

Chapter 5. A discourse on inequalities
5.1. Doob's optional stopping (or sampling) theorem
5.2. Doob's maximal inequality
5.3. Doob's L^p inequality
5.4. Doob's upcrossing inequality
5.5. And beyond

Chapter 6. Don't Stop Me Now
6.1. L^2 convergence
6.2. Almost sure convergence
6.3. L^p convergence
6.4. L^1 convergence
6.5. Wrapping it up

Chapter 7. Sum ergo cogito
7.1. All, or nothing at all
7.2. The domino effect
7.3. Show me the money, Jerry
7.4. Take it to the limit, one more time

Chapter 8. Get used to it
8.1. Are we there yet?
8.2. A deviation

Chapter 9. Walk the Line
9.1. Putting PDE's in your PC
9.2. Time's Arrow
9.3. Measuring motion on the atomic scale

Chapter 10. Get Real
10.1. A fishy frog
10.2. Get on up, get on the scene

Chapter 11. Meet the real Martingales
11.1. Continuous martingales
11.2. Crooked paths

Chapter 12. A tale of two thinkers
12.1. Cracking up crooked paths
12.2. The stochastic integral

Chapter 13. Stochastic Calculus
13.1. Itô's rule
13.2. Properties of the Stochastic Integral
13.3. The Itô formula
13.4. Exercises Chapter 13

Chapter 14. The End is Here
14.1. A review of lecture 9
14.2. Building Bridges
14.3. A final word
Chapter 1. A recap of measure theory

This material is covered by Chapter 1 of Brzeźniak and Zastawniak's text on Basic Stochastic Processes.

Measure theory was developed around the turn of the 20th century by the French mathematicians Émile Borel and Henri Lebesgue.[1] Most of you have already learned this theory, for instance in the courses TW2090 Real Analysis, TW3560 Advanced Probability, TW3570 Fourier Analysis, or if you have followed the Minor Finance. If you did not learn the theory yet, you need to catch up, because this here is only a partial recap to refresh your memory. Fortunately, there is plenty of material available on the internet. You could for instance consult the first part of the notes by Terry Tao, the world's most famous mathematician. You could also try Probability with Martingales, by David Williams, which is insightful but demanding. Finally, there are the very elegant notes on Probability by Varadhan.

[1] Delft pride: the Dutch mathematician Thomas Stieltjes, who studied in Delft but failed to get his BSA, got some of these ideas first.

1.1. The Riemann integral ... and beyond

You have received excessive training on calculating the integral

(1.1)   $\int_a^b f(x)\,dx$

But what does the integral mean? You need to remember your definitions. In particular, you need to remember your Riemann sums. Divide the interval $[a,b]$ into a finite union of subintervals

$[a,b] = [x_0, x_1) \cup [x_1, x_2) \cup \dots \cup [x_{n-1}, x_n]$

with $x_0 = a$ and $x_n = b$ and approximate the integral by

$\int_a^b f(x)\,dx \approx f(\xi_0)(x_1 - x_0) + f(\xi_1)(x_2 - x_1) + \dots + f(\xi_{n-1})(x_n - x_{n-1})$

[Figure 1. Upper and lower Riemann sum]

As the mesh of the subintervals gets smaller, the approximation gets better and better, and (hopefully) converges to the integral. The summands $f(\xi_i)(x_{i+1} - x_i)$ have two factors: a function value $f(\xi_i)$ and a length $x_{i+1} - x_i$. Suppose that we change the integral as follows:

(1.2)   $\int_a^b f(x)\,d(x^2)$

Then the function values remain the same, but we have changed the lengths. The Riemann sum with $d(x^2)$ instead of $dx$ is equal to

$\int_a^b f(x)\,d(x^2) \approx f(\xi_0)(x_1^2 - x_0^2) + f(\xi_1)(x_2^2 - x_1^2) + \dots + f(\xi_{n-1})(x_n^2 - x_{n-1}^2)$

So here is what you need to observe: the definition of the integral involves the definition of the lengths of the intervals. There are many possible notions of length. We can define the length of $[x,y]$ to be $y - x$ or $y^2 - x^2$ or $e^y - e^x$, and in general we may consider $\int_a^b f(x)\,d(g(x))$ where the length of $[x,y]$ is equal to $g(y) - g(x)$ for some monotonic function $g$. A measure is a more general notion of length.
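To make the role of the length function concrete, here is a small numerical sketch. It is not part of the original notes: the test function $f(x) = x$ on $[0,1]$, the left-endpoint tags and the number of subintervals are all my own choices.

```python
import numpy as np

def riemann_stieltjes_sum(f, g, a, b, n):
    """Riemann-Stieltjes sum of f with respect to the 'length function' g on [a, b],
    using n equal subintervals and left-endpoint tags xi_i = x_i."""
    x = np.linspace(a, b, n + 1)                       # partition points x_0, ..., x_n
    xi = x[:-1]                                        # tags: left endpoints
    return np.sum(f(xi) * (g(x[1:]) - g(x[:-1])))      # sum of f(xi_i) * (g(x_{i+1}) - g(x_i))

f = lambda x: x
print(riemann_stieltjes_sum(f, lambda x: x,    0.0, 1.0, 10_000))   # ~ 0.5000, the integral of x dx
print(riemann_stieltjes_sum(f, lambda x: x**2, 0.0, 1.0, 10_000))   # ~ 0.6666, the integral of x d(x^2)
```

With $g(x) = x^2$ the sums approach $\int_0^1 x\,d(x^2) = \int_0^1 2x^2\,dx = 2/3$: same function values, different lengths, different integral.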
I did not tell you yet what the $\xi_i$ are. They are arbitrary points in $[x_i, x_{i+1})$. If $f$ is continuous and if the interval is very small, then $f(\xi_i)$ hardly varies and the choice of $\xi_i$ is not really important. This is why $\int_a^b f(x)\,dx$ is well defined if $f$ is continuous. But if $f$ is discontinuous, then the Riemann sums may not converge. You probably remember this example:

(1.3)   $\int_a^b 1_{\mathbb{Q}}(x)\,dx$ is not defined

where the indicator function $1_{\mathbb{Q}}(x)$ is equal to 1 if $x$ is rational and it is equal to 0 if $x$ is irrational. However, we can fix this if we can decide what the length of $\mathbb{Q}$ is. The rational numbers are countable, so we can enumerate them $\mathbb{Q} = \{r_1, r_2, r_3, \dots\}$. To decide what the length of $\mathbb{Q}$ is, we cover it by intervals

$(r_1 - 0.01, r_1 + 0.01) \cup (r_2 - 0.001, r_2 + 0.001) \cup (r_3 - 0.0001, r_3 + 0.0001), \dots$

In other words, we cover each $r_n$ by an interval of length $2 \cdot 10^{-n-1}$. The total length of these intervals adds up to $0.0222\dots$ We can do even better and cover $\mathbb{Q}$ by intervals with lengths that add up to nearly nothing. So the length of $\mathbb{Q}$ must be equal to zero and that is why $\int_a^b 1_{\mathbb{Q}}(x)\,dx = 0$. Or to put this in technical terms: the Lebesgue integral $\int_a^b 1_{\mathbb{Q}}(x)\,dx$ is equal to zero.

The advantages of the Lebesgue integral over the Riemann integral are:
• Lebesgue is an upgrade: it is well defined for many more functions and allows you to integrate over spaces that are much more general than $\mathbb{R}$.
• Lebesgue is downward compatible: it produces the same value as Riemann, if Riemann is well defined.
• Lebesgue extends across platforms: it treats $\int$ and $\Sigma$ in the same manner, and it connects Analysis to Probability.
• Lebesgue has no ambiguity: there is no need for points $\xi_i$.
• Lebesgue has improved functionality: the swap between $\lim_{n\to\infty} \int f_n$ and $\int \lim_{n\to\infty} f_n$ is handled in a much more transparent manner.

In short, the Lebesgue environment is more user friendly than the Riemann environment. We are talking about a genuine upgrade here, we are not going Microsoft. The Lebesgue integral hinges on the idea of a measure, which is some kind of length function for sets rather than intervals. We need to describe these sets first.

1.2. Putting the sigma into the algebra

Let Ω be any arbitrary set, but let it be $[0, 1]$ in particular, or let it be $\mathbb{R}$ if you like. We would like to define the measure of all subsets of Ω. Unfortunately, this is impossible without running into fundamental difficulties, because there are so many subsets, more than you can ever describe.
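Returning to the indicator $1_{\mathbb{Q}}$ from (1.3): the following sketch is my own illustration, not from the notes. It shows why the Riemann sums cannot converge, since their value depends entirely on whether the tags $\xi_i$ are taken rational or irrational, and it repeats the covering computation for the length of $\mathbb{Q}$.

```python
from fractions import Fraction

def riemann_sum_indicator_Q(rational_tags, n=1000):
    """Riemann sum of 1_Q on [0, 1] with n equal subintervals.
    Rational tags: xi_i = i/n, where 1_Q = 1.
    Irrational tags: xi_i = i/n + sqrt(2)/(2n), where 1_Q = 0.
    The value of the indicator at the tag is known exactly, so it is supplied by hand."""
    value_at_tag = 1 if rational_tags else 0
    return sum(value_at_tag * Fraction(1, n) for _ in range(n))

print(riemann_sum_indicator_Q(True))    # 1: every subinterval contributes 1 * (1/n)
print(riemann_sum_indicator_Q(False))   # 0: every subinterval contributes 0 * (1/n)

# Cover r_n by an interval of length 2 * 10^(-n-1); the total length is 0.0222...
print(sum(2 * 10.0 ** (-(n + 1)) for n in range(1, 50)))
```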
Recommended publications
  • Markovian Bridges: Weak Continuity and Pathwise Constructions
    The Annals of Probability 2011, Vol. 39, No. 2, 609–647. DOI: 10.1214/10-AOP562. © Institute of Mathematical Statistics, 2011. MARKOVIAN BRIDGES: WEAK CONTINUITY AND PATHWISE CONSTRUCTIONS. By Loïc Chaumont and Gerónimo Uribe Bravo, Université d'Angers and Universidad Nacional Autónoma de México. A Markovian bridge is a probability measure taken from a disintegration of the law of an initial part of the path of a Markov process given its terminal value. As such, Markovian bridges admit a natural parameterization in terms of the state space of the process. In the context of Feller processes with continuous transition densities, we construct by weak convergence considerations the only versions of Markovian bridges which are weakly continuous with respect to their parameter. We use this weakly continuous construction to provide an extension of the strong Markov property in which the flow of time is reversed. In the context of self-similar Feller processes, the last result is shown to be useful in the construction of Markovian bridges out of the trajectories of the original process. 1. Introduction and main results. 1.1. Motivation. The aim of this article is to study Markov processes on $[0, t]$, starting at $x$, conditioned to arrive at $y$ at time $t$. Historically, the first example of such a conditional law is given by Paul Lévy's construction of the Brownian bridge: given a Brownian motion $B$ starting at zero, let $b^{x,y,t}_s = x + B_s - \tfrac{s}{t} B_t + (y - x)\tfrac{s}{t}$. (arXiv:0905.2155v3 [math.PR]; received May 2009; revised April 2010.)
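A minimal simulation sketch of Lévy's construction quoted above (not from the paper; grid size, endpoints and seed are arbitrary choices):

```python
import numpy as np

def brownian_bridge(x, y, t, n_steps, rng):
    """Levy's bridge: b_s = x + B_s - (s/t) * B_t + (y - x) * s/t,
    evaluated on a grid of n_steps + 1 points in [0, t], with B a standard Brownian motion."""
    s = np.linspace(0.0, t, n_steps + 1)
    dB = rng.normal(0.0, np.sqrt(t / n_steps), size=n_steps)
    B = np.concatenate(([0.0], np.cumsum(dB)))
    return s, x + B - (s / t) * B[-1] + (y - x) * (s / t)

rng = np.random.default_rng(0)
s, bridge = brownian_bridge(x=1.0, y=-2.0, t=1.0, n_steps=1000, rng=rng)
print(bridge[0], bridge[-1])   # the path starts at x = 1.0 and ends at y = -2.0 exactly
```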
  • Modern Discrete Probability I
    Modern Discrete Probability I - Introduction. Stochastic processes on graphs: models and questions. Sébastien Roch, UW–Madison Mathematics, September 6, 2017. Outline: 1. Preliminaries (review of graph theory, review of Markov chain theory); 2. Some fundamental models (random walks on graphs, percolation, some random graph models, Markov random fields, interacting particles on finite graphs); 3. A few more useful facts about... graphs, Markov chains, other things. Graphs. Definition (Undirected graph). An undirected graph (or graph for short) is a pair $G = (V, E)$ where $V$ is the set of vertices (or nodes, sites) and $E \subseteq \{\{u, v\} : u, v \in V\}$ is the set of edges (or bonds). The $V$ is either finite or countably infinite. Edges of the form $\{u\}$ are called loops. We do not allow $E$ to be a multiset. We occasionally write $V(G)$ and $E(G)$ for the vertices and edges of $G$. An example: the Petersen graph.
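The slides use the Petersen graph as their running example. A small self-contained sketch of the definition above, building the Petersen graph in its standard Kneser-graph description (vertices are the 2-element subsets of a 5-element set, joined when disjoint); this representation is my own choice, not the slides':

```python
from itertools import combinations

# Petersen graph as a Kneser graph: V = 2-element subsets of {0,...,4},
# E = pairs of disjoint subsets.
vertices = [frozenset(c) for c in combinations(range(5), 2)]
edges = {frozenset({u, v}) for u in vertices for v in vertices if u != v and not (u & v)}

degrees = {v: sum(1 for e in edges if v in e) for v in vertices}
print(len(vertices), len(edges))   # 10 vertices, 15 edges
print(set(degrees.values()))       # {3}: the Petersen graph is 3-regular, with no loops or multi-edges
```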
  • Patterns in Random Walks and Brownian Motion
    Patterns in Random Walks and Brownian Motion. Jim Pitman and Wenpin Tang (Department of Statistics, University of California, 367 Evans Hall, Berkeley, CA 94720-3860, USA; e-mail: [email protected], [email protected]). Abstract. We ask if it is possible to find some particular continuous paths of unit length in linear Brownian motion. Beginning with a discrete version of the problem, we derive the asymptotics of the expected waiting time for several interesting patterns. These suggest corresponding results on the existence/non-existence of continuous paths embedded in Brownian motion. With further effort we are able to prove some of these existence and non-existence results by various stochastic analysis arguments. A list of open problems is presented. AMS 2010 Mathematics Subject Classification: 60C05, 60G17, 60J65. 1. Introduction and Main Results. We are interested in the question of embedding some continuous-time stochastic processes $(Z_u, 0 \le u \le 1)$ into a Brownian path $(B_t; t \ge 0)$, without time-change or scaling, just by a random translation of origin in spacetime. More precisely, we ask the following: Question 1. Given some distribution of a process $Z$ with continuous paths, does there exist a random time $T$ such that $(B_{T+u} - B_T; 0 \le u \le 1)$ has the same distribution as $(Z_u, 0 \le u \le 1)$? The question of whether external randomization is allowed to construct such a random time $T$ is of no importance here. In fact, we can simply ignore Brownian... (© Springer International Publishing Switzerland 2015; C. Donati-Martin et al.)
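For the discrete version mentioned in the abstract, expected waiting times for patterns can already be estimated by direct simulation. The sketch below is my own toy illustration (the specific patterns of $\pm 1$ increments and the sample sizes are arbitrary), not the asymptotics derived in the paper:

```python
import random

def waiting_time(pattern, rng):
    """Number of fair +/-1 increments until `pattern` first appears as a contiguous block."""
    window, n = [], 0
    while True:
        window.append(rng.choice((-1, 1)))
        window = window[-len(pattern):]     # keep only the last len(pattern) increments
        n += 1
        if window == list(pattern):
            return n

rng = random.Random(0)
for pattern in [(1, 1, 1), (1, -1, 1)]:
    est = sum(waiting_time(pattern, rng) for _ in range(20_000)) / 20_000
    print(pattern, round(est, 2))   # classical exact values: 14 for (1,1,1) and 10 for (1,-1,1)
```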
  • Constructing a Sequence of Random Walks Strongly Converging to Brownian Motion Philippe Marchal
    Constructing a sequence of random walks strongly converging to Brownian motion. Philippe Marchal. To cite this version: Philippe Marchal. Constructing a sequence of random walks strongly converging to Brownian motion. Discrete Random Walks, DRW'03, 2003, Paris, France, pp. 181-190. HAL Id: hal-01183930, https://hal.inria.fr/hal-01183930, submitted on 12 Aug 2015. Discrete Mathematics and Theoretical Computer Science AC, 2003, 181–190. Philippe Marchal, CNRS and École normale supérieure, 45 rue d'Ulm, 75005 Paris, France, [email protected]. Abstract: We give an algorithm which constructs recursively a sequence of simple random walks converging almost surely to a Brownian motion. One obtains by the same method conditional versions of the simple random walk converging to the excursion, the bridge, the meander or the normalized pseudobridge. Keywords: strong convergence, simple random walk, Brownian motion. 1. Introduction. It is one of the most basic facts in probability theory that random walks, after proper rescaling, converge to Brownian motion. However, Donsker's classical theorem [Don51] only states a convergence in law.
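Marchal's point is a strong (almost sure) coupling, which the toy sketch below does not attempt. It only illustrates the classical Donsker rescaling mentioned in the opening sentence, with my own choice of step numbers and seed:

```python
import numpy as np

def rescaled_walk(n, rng):
    """Donsker rescaling of a simple random walk: W(k/n) = S_k / sqrt(n), k = 0, ..., n."""
    steps = rng.choice((-1, 1), size=n)
    return np.concatenate(([0.0], np.cumsum(steps))) / np.sqrt(n)

rng = np.random.default_rng(1)
endpoints = np.array([rescaled_walk(10_000, rng)[-1] for _ in range(2_000)])
print(endpoints.mean(), endpoints.var())   # close to 0 and 1, as for B_1 ~ N(0, 1)
```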
  • Size and Shape of Tracked Brownian Bridges (arXiv:2004.05394v1 [math.PR], 11 Apr 2020)
    SIZE AND SHAPE OF TRACKED BROWNIAN BRIDGES. Abdulrahman Alsolami, James Burridge and Michał Gnacik. Abstract. We investigate the typical sizes and shapes of sets of points obtained by irregularly tracking two-dimensional Brownian bridges. The tracking process consists of observing the path location at the arrival times of a non-homogeneous Poisson process on a finite time interval. The time varying intensity of this observation process is the tracking strategy. By analysing the gyration tensor of tracked points we prove two theorems which relate the tracking strategy to the average gyration radius, and to the asphericity, a measure of how non-spherical the point set is. The act of tracking may be interpreted either as a process of observation, or as a process of depositing time decaying "evidence" such as scent, environmental disturbance, or disease particles. We present examples of different strategies, and explore by simulation the effects of varying the total number of tracking points. 1. Introduction. Understanding the statistical properties of human and animal movement processes is of interest to ecologists [1, 2, 3], epidemiologists [4, 5, 6], criminologists [7], physicists and mathematicians [8, 9, 10, 11], including those interested in the evolution of human culture and language [12, 13, 14]. Advances in information and communication technologies have allowed automated collection of large numbers of human and animal trajectories [15, 16], allowing real movement patterns to be studied in detail and compared to idealised mathematical models. Beyond academic study, movement data has important practical applications, for example in controlling the spread of disease through contact tracing [6]. Due to the growing availability and applications of tracking information, it is useful to possess a greater analytical understanding of the typical shape and size characteristics of trajectories which are observed, or otherwise emit information.
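A rough simulation sketch of the setup (my own toy version: a standard bridge from the origin back to the origin, a constant tracking intensity as the simplest special case of the paper's time-varying strategy, and one common 2D convention for asphericity):

```python
import numpy as np

def bridge_2d(n_grid, rng):
    """Standard 2D Brownian bridge on [0, 1] from (0, 0) to (0, 0): B_t - t * B_1."""
    t = np.linspace(0.0, 1.0, n_grid + 1)
    dB = rng.normal(0.0, np.sqrt(1.0 / n_grid), size=(n_grid, 2))
    B = np.vstack([np.zeros(2), np.cumsum(dB, axis=0)])
    return t, B - t[:, None] * B[-1]

def tracked_points(t, path, rate, rng):
    """Observe the path at the arrival times of a constant-rate Poisson process on [0, 1]."""
    n_obs = max(rng.poisson(rate), 2)
    obs_times = np.sort(rng.uniform(0.0, 1.0, size=n_obs))
    return path[np.clip(np.searchsorted(t, obs_times), 0, len(t) - 1)]

rng = np.random.default_rng(2)
t, path = bridge_2d(5_000, rng)
pts = tracked_points(t, path, rate=50, rng=rng)
centered = pts - pts.mean(axis=0)
T = centered.T @ centered / len(pts)                     # gyration tensor of the tracked points
lam = np.sort(np.linalg.eigvalsh(T))[::-1]
print("radius of gyration:", np.sqrt(T.trace()))
print("asphericity:", (lam[0] - lam[1]) ** 2 / (lam[0] + lam[1]) ** 2)
```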
  • On Conditionally Heteroscedastic AR Models with Thresholds
    Statistica Sinica 24 (2014), 625-652. doi: http://dx.doi.org/10.5705/ss.2012.185. ON CONDITIONALLY HETEROSCEDASTIC AR MODELS WITH THRESHOLDS. Kung-Sik Chan, Dong Li, Shiqing Ling and Howell Tong. University of Iowa, Tsinghua University, Hong Kong University of Science & Technology and London School of Economics & Political Science. Abstract: Conditional heteroscedasticity is prevalent in many time series. By viewing conditional heteroscedasticity as the consequence of a dynamic mixture of independent random variables, we develop a simple yet versatile observable mixing function, leading to the conditionally heteroscedastic AR model with thresholds, or a T-CHARM for short. We demonstrate its many attributes and provide comprehensive theoretical underpinnings with efficient computational procedures and algorithms. We compare, via simulation, the performance of T-CHARM with the GARCH model. We report some experiences using data from economics, biology, and geoscience. Key words and phrases: Compound Poisson process, conditional variance, heavy tail, heteroscedasticity, limiting distribution, quasi-maximum likelihood estimation, random field, score test, T-CHARM, threshold model, volatility. 1. Introduction. We can often model a time series as the sum of a conditional mean function, the drift or trend, and a conditional variance function, the diffusion. See, e.g., Tong (1990). The drift attracted attention from very early days, although the importance of the diffusion did not go entirely unnoticed, with an example in ecological populations as early as Moran (1953); a systematic modelling of the diffusion did not seem to attract serious attention before the 1980s. For discrete-time cases, our focus, as far as we are aware it is in the econometric and finance literature that the modelling of the conditional variance has been treated seriously, although Tong and Lim (1980) did include conditional heteroscedasticity.
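The abstract does not spell out the exact T-CHARM specification, so the sketch below is only a generic stand-in: a two-regime threshold AR(1) whose noise scale switches with the sign of the previous observation, to show what conditional heteroscedasticity driven by a threshold can look like. All parameters are invented for illustration and are not the paper's model.

```python
import numpy as np

def threshold_ar(n, rng, phi=(0.5, -0.3), sigma=(1.0, 2.0), threshold=0.0):
    """Generic two-regime threshold AR(1): X_t = phi_j X_{t-1} + sigma_j eps_t,
    where the regime j depends on whether X_{t-1} exceeds the threshold.
    Illustrative only; not the paper's exact T-CHARM."""
    x = np.zeros(n)
    for t in range(1, n):
        j = 0 if x[t - 1] <= threshold else 1
        x[t] = phi[j] * x[t - 1] + sigma[j] * rng.normal()
    return x

rng = np.random.default_rng(3)
x = threshold_ar(5_000, rng)
print(x[1:][x[:-1] <= 0.0].var(), x[1:][x[:-1] > 0.0].var())   # the conditional variance differs by regime
```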
  • A Finite-Volume Method for Fluctuating Dynamical Density Functional Theory (arXiv:1910.05067v4 [physics.comp-ph], 1 Jan 2021)
    A FINITE-VOLUME METHOD FOR FLUCTUATING DYNAMICAL DENSITY FUNCTIONAL THEORY. Antonio Russo, Sergio P. Perez, Miguel A. Durán-Olivencia, Peter Yatsyshin, José A. Carrillo, and Serafim Kalliadasis. Abstract. We introduce a finite-volume numerical scheme for solving stochastic gradient-flow equations. Such equations are of crucial importance within the framework of fluctuating hydrodynamics and dynamic density functional theory. Our proposed scheme deals with general free-energy functionals, including, for instance, external fields or interaction potentials. This allows us to simulate a range of physical phenomena where thermal fluctuations play a crucial role, such as nucleation and other energy-barrier crossing transitions. A positivity-preserving algorithm for the density is derived based on a hybrid space discretization of the deterministic and the stochastic terms and different implicit and explicit time integrators. We show through numerous applications that not only our scheme is able to accurately reproduce the statistical properties (structure factor and correlations) of the physical system, but, because of the multiplicative noise, it allows us to simulate energy barrier crossing dynamics, which cannot be captured by mean-field approaches. 1. Introduction. The study of fluid dynamics encounters major challenges due to the inherently multiscale nature of fluids. Not surprisingly, fluid dynamics has been one of the main arenas of activity for numerical analysis and fluids are commonly studied via numerical simulations, either at molecular scale, by using molecular dynamics (MD) or Monte Carlo (MC) simulations; or at macro scale, by utilising deterministic models based on the conservation of fundamental quantities, namely mass, momentum and energy. While atomistic simulations take into account thermal fluctuations, they come with an important drawback, the enormous computational cost of having to resolve at least three degrees of freedom per particle.
  • Markov Chain Monte Carlo Estimation of Exponential Random Graph Models
    Markov Chain Monte Carlo Estimation of Exponential Random Graph Models. Tom A.B. Snijders, ICS, Department of Statistics and Measurement Theory, University of Groningen. April 19, 2002. (Author's address: Tom A.B. Snijders, Grote Kruisstraat 2/1, 9712 TS Groningen, The Netherlands, email <[email protected]>. I am grateful to Paul Snijders for programming the JAVA applet used in this article. In the revision of this article, I profited from discussions with Pip Pattison and Garry Robins, and from comments made by a referee. This paper is formatted in landscape to improve on-screen readability. It is read best by opening Acrobat Reader in a full screen window. Note that in Acrobat Reader, the entire screen can be used for viewing by pressing Ctrl-L; the usual screen is returned when pressing Esc; it is possible to zoom in or zoom out by pressing Ctrl-minus or Ctrl-equals, respectively. The sign in the upper right corners links to the page viewed previously.) Abstract. This paper is about estimating the parameters of the exponential random graph model, also known as the p* model, using frequentist Markov chain Monte Carlo (MCMC) methods. The exponential random graph model is simulated using Gibbs or Metropolis- ... bility to move from one region to another. In such situations, convergence to the target distribution is extremely slow. To be useful, MCMC algorithms must be able to make transitions from a given graph to a very different graph. It is proposed to include transitions to the graph complement as updating steps to improve the speed of convergence to the target distribution.
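The abstract refers to simulating the model with single-edge Gibbs or Metropolis updates. Below is a bare-bones Metropolis sketch for an ERGM with edge and triangle statistics; the choice of statistics, parameter values and network size are my own, and it omits the paper's complement moves and everything to do with estimation.

```python
import numpy as np
from itertools import combinations

def count_stats(A):
    """Sufficient statistics: number of edges and number of triangles."""
    return np.array([A.sum() / 2, np.trace(A @ A @ A) / 6])

def ergm_metropolis(n_nodes, theta, n_steps, rng):
    """Single-edge-flip Metropolis sampler for p(A) proportional to exp(theta . s(A))."""
    A = np.zeros((n_nodes, n_nodes), dtype=int)
    s = count_stats(A)
    dyads = list(combinations(range(n_nodes), 2))
    for _ in range(n_steps):
        i, j = dyads[rng.integers(len(dyads))]
        A[i, j] = A[j, i] = 1 - A[i, j]           # propose flipping one dyad
        s_new = count_stats(A)
        if rng.random() < np.exp(theta @ (s_new - s)):
            s = s_new                              # accept the move
        else:
            A[i, j] = A[j, i] = 1 - A[i, j]       # reject: undo the flip
    return A, s

rng = np.random.default_rng(4)
A, s = ergm_metropolis(n_nodes=20, theta=np.array([-1.0, 0.2]), n_steps=20_000, rng=rng)
print("edges, triangles:", s)
```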
  • Exploring Healing Strategies for Random Boolean Networks
    Exploring Healing Strategies for Random Boolean Networks. Christian Darabos (1), Alex Healing (2), Tim Johann (3), Amitabh Trehan (4), and Amélie Véron (5). (1) Information Systems Department, University of Lausanne, Switzerland; (2) Pervasive ICT Research Centre, British Telecommunications, UK; (3) EML Research gGmbH, Heidelberg, Germany; (4) Department of Computer Science, University of New Mexico, Albuquerque, USA; (5) Division of Bioinformatics, Institute for Evolution and Biodiversity, The Westphalian Wilhelms University of Muenster, Germany. [email protected], [email protected], [email protected], [email protected], [email protected]. Abstract. Real-world systems are often exposed to failures where those studied theoretically are not: neuron cells in the brain can die or fail, resources in a peer-to-peer network can break down or become corrupt, species can disappear from an environment, and so forth. In all cases, for the system to keep running as it did before the failure occurred, or to survive at all, some kind of healing strategy may need to be applied. As an example of such a system subjected to failure and subsequent healing, we study Random Boolean Networks. Deletion of a node in the network was considered a failure if it affected the functional output and we investigated healing strategies that would allow the system to heal either fully or partially. More precisely, our main strategy involves allowing the nodes directly affected by the node deletion (failure) to iteratively rewire in order to achieve healing. We found that such a simple method was effective when dealing with small networks and single-point failures.
  • Percolation Threshold Results on Erdős–Rényi Graphs
    Percolation Threshold Results on Erdős–Rényi Graphs: an Empirical Process Approach. Michael J. Kane. Keywords and phrases: threshold, directed percolation, stochastic approximation, empirical processes. 1. Introduction. Random graphs and discrete random processes provide a general approach to discovering properties and characteristics of random graphs and randomized algorithms. The approach generally works by defining an algorithm on a random graph or a randomized algorithm. Then, expected changes for each step of the process are used to propose a limiting differential equation and a large deviation theorem is used to show that the process and the differential equation are close in some sense. In this way a connection is established between the resulting process's stochastic behavior and the dynamics of a deterministic, asymptotic approximation using a differential equation. This approach is generally referred to as stochastic approximation and provides a powerful tool for understanding the asymptotic behavior of a large class of processes defined on random graphs. However, little work has been done in the area of random graph research to investigate the weak limit behavior of these processes before the asymptotic behavior overwhelms the random component of the process. This context is particularly relevant to researchers studying news propagation in social networks, sensor networks, and epidemiological outbreaks. In each of these applications, investigators may deal with graphs containing tens to hundreds of vertices and be interested not only in expected behavior over time but also error estimates. This paper investigates the connectivity of graphs, with emphasis on Erdős–Rényi graphs, near the percolation threshold when the number of vertices is not asymptotically large.
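Near the connectivity threshold, the behaviour for moderate numbers of vertices can be explored directly by simulation. A small sketch of my own (arbitrary n, replication counts and constants, unrelated to the paper's empirical-process machinery): estimate the probability that G(n, p) is connected for p = c log(n)/n.

```python
import numpy as np

def is_connected(n, p, rng):
    """Sample an Erdos-Renyi graph G(n, p) and check connectivity with a union-find."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                parent[find(i)] = find(j)
    return len({find(i) for i in range(n)}) == 1

rng = np.random.default_rng(5)
n, reps = 200, 200
for c in (0.5, 1.0, 1.5):
    p = c * np.log(n) / n
    freq = sum(is_connected(n, p, rng) for _ in range(reps)) / reps
    print(f"c = {c}: estimated P(connected) = {freq:.2f}")   # low for c < 1, high for c > 1
```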
  • Probability on Graphs: Random Processes on Graphs and Lattices
    Probability on Graphs: Random Processes on Graphs and Lattices. Geoffrey Grimmett, Statistical Laboratory, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge CB3 0WB, United Kingdom. © G. R. Grimmett 1/4/10, 17/11/10, 5/7/12. 2000 MSC: (Primary) 60K35, 82B20, (Secondary) 05C80, 82B43, 82C22. With 44 Figures. Contents: Preface ix. 1. Random walks on graphs 1 (1.1 Random walks and reversible Markov chains 1; 1.2 Electrical networks 3; 1.3 Flows and energy 8; 1.4 Recurrence and resistance 11; 1.5 Pólya's theorem 14; 1.6 Graph theory 16; 1.7 Exercises 18). 2. Uniform spanning tree 21 (2.1 Definition 21; 2.2 Wilson's algorithm 23; 2.3 Weak limits on lattices 28; 2.4 Uniform forest 31; 2.5 Schramm–Löwner evolutions 32; 2.6 Exercises 37). 3. Percolation and self-avoiding walk 39 (3.1 Percolation and phase transition 39; 3.2 Self-avoiding walks 42; 3.3 Coupled percolation 45; 3.4 Oriented percolation 45; 3.5 Exercises 48). 4. Association and influence 50 (4.1 Holley inequality 50; 4.2 FKG inequality 53; 4.3 BK inequality 54; 4.4 Hoeffding inequality 56; 4.5 Influence for product measures 58; 4.6 Proofs of influence theorems 63; 4.7 Russo's formula and sharp thresholds 75; 4.8 Exercises 78). 5. Further percolation 81 (5.1 Subcritical phase 81; 5.2 Supercritical phase 86; 5.3 Uniqueness of the infinite cluster 92; 5.4 Phase transition 95; 5.5 Open paths in annuli 99; 5.6 The critical probability in two dimensions 103; 5.7 Cardy's formula 110; 5.8 The
  • A Few More Good Inequalities, Martingale Variety (Chapter 3)
    3. A few more good inequalities, martingale variety. Contents: 3.1 From independence to martingales 1; 3.2 Hoeffding's inequality for martingales 4; 3.3 Bennett's inequality for martingales 8; 3.4 *Concentration of random polynomials 10; 3.5 *Proof of the Kim-Vu inequality 14; 3.6 Problems 18; 3.7 Notes 18. (© David Pollard, printed 8 October 2015.) Section 3.1 introduces the method for bounding tail probabilities using moment generating functions. Section 3.2 discusses the Hoeffding inequality, both for sums of independent bounded random variables and for martingales with bounded increments. Section 3.3 discusses the Bennett inequality, both for sums of independent random variables that are bounded above by a constant and their martingale analogs. *Section 3.4 presents an extended application of a martingale version of the Bennett inequality to derive a version of the Kim-Vu inequality for polynomials in independent, bounded random variables. 3.1 From independence to martingales. Throughout this chapter $\{(S_i, \mathcal{F}_i) : i = 1, \dots, n\}$ is a martingale on some probability space $(\Omega, \mathcal{F}, P)$. That is, we have sub-sigma-fields $\mathcal{F}_0 \subseteq \mathcal{F}_1 \subseteq \dots \subseteq \mathcal{F}_n \subseteq \mathcal{F}$ and integrable, $\mathcal{F}_i$-measurable random variables $S_i$ for which $P_{\mathcal{F}_{i-1}} S_i = S_{i-1}$ almost surely. Equivalently, the martingale differences $\xi_i := S_i - S_{i-1}$ are integrable, $\mathcal{F}_i$-measurable, and $P_{\mathcal{F}_{i-1}} \xi_i = 0$ almost surely, for $i = 1, \dots, n$. In that case $S_i = S_0 + \xi_1 + \dots + \xi_i = S_{i-1} + \xi_i$.
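A quick numerical check of the flavour of Section 3.2 (my own illustration, not from the notes): for the simplest bounded-increment martingale, a sum of independent $\pm 1$ variables, the empirical tail frequency sits below the Hoeffding bound $\exp(-t^2/(2n))$.

```python
import numpy as np

rng = np.random.default_rng(6)
n, reps, t = 100, 100_000, 20.0

# The simplest bounded-increment martingale: S_n = xi_1 + ... + xi_n with i.i.d. xi_i = +/-1.
S_n = rng.choice((-1, 1), size=(reps, n)).sum(axis=1)

empirical = np.mean(S_n >= t)
bound = np.exp(-t ** 2 / (2 * n))      # Hoeffding bound for increments with |xi_i| <= 1
print(empirical, bound)                # roughly 0.03 versus 0.135: the bound holds, with room to spare
```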