Math 288 - Probability Theory and Stochastic Process

Total pages: 16 · File type: PDF, size: 1020 KB

Math 288 - Probability Theory and Stochastic Process
Taught by Horng-Tzer Yau
Notes by Dongryul Kim
Spring 2017

This course was taught by Horng-Tzer Yau. The lectures were given MWF 12-1 in Science Center 310. The textbook was Brownian Motion, Martingales, and Stochastic Calculus by Jean-François Le Gall. There were 11 undergraduates and 12 graduate students enrolled. There were 5 problem sets and the course assistant was Robert Martinez.

Last Update: August 27, 2018

Contents

1 January 23, 2017: Gaussians; Gaussian vectors
2 January 25, 2017: Gaussian processes; conditional expectation for Gaussian vectors
3 January 27, 2017: Gaussian white noise; pre-Brownian motion
4 January 30, 2017: Kolmogorov's theorem
5 February 1, 2017: Construction of a Brownian motion; Wiener measure
6 February 3, 2017: Blumenthal's zero-one law; strong Markov property
7 February 6, 2017: Reflection principle; filtrations
8 February 8, 2017: Stopping times of filtrations; martingales
9 February 10, 2017: Discrete martingales - Doob's inequality
10 February 13, 2017: More discrete martingales - upcrossing inequality
11 February 15, 2017: Lévy's theorem; optional stopping for continuous martingales
12 February 17, 2017
13 February 22, 2017: Submartingale decomposition
14 February 24, 2017: Backward martingales; finite variation processes
15 February 27, 2017: Local martingales
16 March 1, 2017: Quadratic variation
17 March 3, 2017: Consequences of the quadratic variation
18 March 6, 2017: Bracket of local martingales; stochastic integration
19 March 8, 2017: Elementary processes
20 March 20, 2017: Review
21 March 22, 2017: Stochastic integration with respect to a Brownian motion; with respect to local martingales
22 March 24, 2017: Dominated convergence for semimartingales; Itô's formula
23 March 27, 2017: Applications of Itô calculus
24 March 29, 2017: Burkholder-Davis-Gundy inequality
25 March 31, 2017: Representation of martingales; Girsanov's theorem
26 April 3, 2017: Cameron-Martin formula
27 April 5, 2017: Application to the Dirichlet problem; stochastic differential equations
28 April 7, 2017: Existence and uniqueness in the Lipschitz case; Kolmogorov forward and backward equations
29 April 10, 2017: Girsanov's formula as an SDE; Ornstein-Uhlenbeck process; geometric Brownian motion
30 April 12, 2017: Bessel processes
31 April 14, 2017: Maximum principle; 2-dimensional Brownian motion
32 April 17, 2017: More on 2-dimensional Brownian motion
33 April 19, 2017: Feynman-Kac formula; Girsanov formula again
34 April 21, 2017: Kipnis-Varadhan cutoff lemma
35 April 24, 2017: Final review

1 January 23, 2017

The textbook is J. F. Le Gall, Brownian Motion, Martingales, and Stochastic Calculus.

A Brownian motion is a random function $X(t)\colon \mathbb{R}_{\ge 0} \to \mathbb{R}^n$. This is a model for the movement of one particle among many in "billiard" dynamics: for instance, many particles in a box that bump into each other. Robert Brown gave a description of this in the 19th century. $X(t)$ is supposed to describe the trajectory of one particle, and it is random. To simplify this, we assume $X(t)$ is a random walk.
For instance, let
\[ S_N = \sum_{i=1}^N X_i, \qquad X_i = \begin{cases} +1 & \text{with probability } 1/2, \\ -1 & \text{with probability } 1/2. \end{cases} \]
For non-integer values, you interpolate linearly and let
\[ S_N(t) = \sum_{i=1}^{\lfloor Nt \rfloor} X_i + (Nt - \lfloor Nt \rfloor) X_{\lfloor Nt \rfloor + 1}. \]
Then $\mathbb{E}(S_N^2) = N$. As we let $N \to \infty$, we have three properties:
- $\displaystyle \lim_{N\to\infty} \mathbb{E}\Big[\Big(\frac{S_N(t)}{\sqrt{N}}\Big)^2\Big] = t$
- $\displaystyle \lim_{N\to\infty} \frac{\mathbb{E}[S_N(t)\,S_N(s)]}{N} = t \wedge s = \min(t,s)$
- $S_N(t)/\sqrt{N} \to \mathcal{N}(0,t)$

This is the model we want.

1.1 Gaussians

Definition 1.1. $X$ is a standard Gaussian random variable if
\[ P(X \in A) = \frac{1}{\sqrt{2\pi}} \int_A e^{-x^2/2}\,dx \]
for a Borel set $A \subseteq \mathbb{R}$. For a complex variable $z \in \mathbb{C}$,
\[ \mathbb{E}(e^{zX}) = \frac{1}{\sqrt{2\pi}} \int e^{zx} e^{-x^2/2}\,dx = e^{z^2/2}. \]
This is the moment generating function. When $z = i\xi$, we have
\[ \mathbb{E} e^{i\xi X} = e^{-\xi^2/2} = \sum_{k=0}^\infty \frac{(-\xi^2)^k}{2^k k!} = 1 + i\xi\,\mathbb{E}X + \frac{(i\xi)^2}{2}\,\mathbb{E}X^2 + \cdots. \]
From this we can read off the moments $\mathbb{E}X^n$ of the Gaussian. We have
\[ \mathbb{E}X^{2n} = \frac{(2n)!}{2^n n!}, \qquad \mathbb{E}X^{2n+1} = 0. \]
Given $X \sim \mathcal{N}(0,1)$, we have $Y = \sigma X + m \sim \mathcal{N}(m, \sigma^2)$. Its characteristic function is given by
\[ \mathbb{E}e^{i\xi Y} = e^{im\xi - \sigma^2\xi^2/2}. \]
Also, if $Y_1 \sim \mathcal{N}(m_1, \sigma_1^2)$ and $Y_2 \sim \mathcal{N}(m_2, \sigma_2^2)$ and they are independent, then
\[ \mathbb{E}e^{i\xi(Y_1+Y_2)} = \mathbb{E}e^{i\xi Y_1}\,\mathbb{E}e^{i\xi Y_2} = e^{i(m_1+m_2)\xi - (\sigma_1^2+\sigma_2^2)\xi^2/2}. \]
So $Y_1 + Y_2 \sim \mathcal{N}(m_1 + m_2, \sigma_1^2 + \sigma_2^2)$.

Proposition 1.2. If $X_n \sim \mathcal{N}(m_n, \sigma_n^2)$ and $X_n \to X$ in $L^2$, then
(i) $X$ is Gaussian with mean $m = \lim_{n\to\infty} m_n$ and variance $\sigma^2 = \lim_{n\to\infty} \sigma_n^2$;
(ii) $X_n \to X$ in probability and $X_n \to X$ in $L^p$ for $p \ge 1$.

Proof. (i) By definition, $\mathbb{E}(X_n - X)^2 \to 0$ and so $|\mathbb{E}X_n - \mathbb{E}X| \le (\mathbb{E}|X_n - X|^2)^{1/2} \to 0$. So $\mathbb{E}X = m$. Similarly, because $L^2$ is complete, we have $\operatorname{Var} X = \sigma^2$. Now the characteristic function of $X$ is
\[ \mathbb{E}e^{i\xi X} = \lim_{n\to\infty} \mathbb{E}e^{i\xi X_n} = \lim_{n\to\infty} e^{i m_n \xi - \sigma_n^2 \xi^2/2} = e^{im\xi - \sigma^2\xi^2/2}. \]
So $X \sim \mathcal{N}(m, \sigma^2)$.
(ii) Because $\mathbb{E}X_n^{2p}$ can be expressed in terms of the mean and variance, we have $\sup_n \mathbb{E}|X_n|^{2p} < \infty$ and so $\sup_n \mathbb{E}|X_n - X|^{2p} < \infty$. Define $Y_n = |X_n - X|^p$. Then $Y_n$ is bounded in $L^2$ and hence is uniformly integrable. Also it converges to $0$ in probability. It follows that $Y_n \to 0$ in $L^1$, which means that $X_n \to X$ in $L^p$. It then follows that $X_n \to X$ in probability.
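The diffusive scaling at the start of the lecture is easy to check numerically. The following is a minimal sketch, not part of the notes; the walk length, time point, and trial count are arbitrary illustrative choices.

```python
import math
import random

def scaled_walk_value(N, t, rng):
    """S_N(t) / sqrt(N): a linearly interpolated +/-1 random walk, diffusively rescaled."""
    k = math.floor(N * t)
    steps = [rng.choice((-1, 1)) for _ in range(k + 1)]
    # Sum of the first k steps, plus the linear interpolation term.
    s = sum(steps[:k]) + (N * t - k) * steps[k]
    return s / math.sqrt(N)

rng = random.Random(0)
N, t, trials = 2_000, 0.5, 2_000
second_moment = sum(scaled_walk_value(N, t, rng) ** 2 for _ in range(trials)) / trials
# By the first property above, E[(S_N(t)/sqrt(N))^2] should be close to t = 0.5.
```

For this sample size the Monte Carlo standard error is about $0.016$, so the estimate should land well within $0.1$ of $t$.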
Definition 1.3. We say that $X_n$ is uniformly integrable if for every $\varepsilon > 0$ there exists a $K$ such that $\sup_n \mathbb{E}[|X_n| \cdot \mathbf{1}_{|X_n| > K}] < \varepsilon$.

Proposition 1.4. A sequence of random variables $X_n$ converges to $X$ in $L^1$ if and only if the $X_n$ are uniformly integrable and $X_n \to X$ in probability.

1.2 Gaussian vectors

Definition 1.5. A vector $X \in \mathbb{R}^n$ is a Gaussian vector if for every $u \in \mathbb{R}^n$, $\langle u, X \rangle$ is a Gaussian random variable.

Example 1.6. Define $X_1 \sim \mathcal{N}(0,1)$ and $\varepsilon = \pm 1$ with probability $1/2$ each, independent of $X_1$. Let $X_2 = \varepsilon X_1$. Then $X = (X_1, X_2)$ is not a Gaussian vector.

Note that $u \mapsto \mathbb{E}[u \cdot X]$ is a linear map and so $\mathbb{E}[u \cdot X] = u \cdot m_X$ for some $m_X \in \mathbb{R}^n$. We call $m_X$ the mean of $X$. If $X = \sum_{i=1}^n X_i e_i$ for an orthonormal basis $e_i$ of $\mathbb{R}^n$, then the mean of $X$ is
\[ m_X = \sum_{i=1}^n (\mathbb{E}X_i)\, e_i. \]
Likewise, the map $u \mapsto \operatorname{Var}(u \cdot X)$ is a quadratic form and so $\operatorname{Var}(u \cdot X) = \langle u, q_X u \rangle$ for some matrix $q_X$. We call $q_X$ the covariance matrix of $X$. We write $q_X(u) = \langle u, q_X u \rangle$. Then the characteristic function of $X$ is
\[ \mathbb{E}e^{iz(u \cdot X)} = \exp\big(iz\, u \cdot m_X - z^2 q_X(u)/2\big). \]

Proposition 1.7. If $(\operatorname{Cov}(X_j, X_k))_{1 \le j,k \le n}$ is diagonal and $X = (X_1, \ldots, X_n)$ is a Gaussian vector, then the $X_i$ are independent.

Proof. First check that
\[ q_X(u) = \sum_{i,j=1}^n u_i u_j \operatorname{Cov}(X_i, X_j). \]
Then, taking $X$ centered,
\[ \mathbb{E}\exp(i u \cdot X) = \exp\Big({-\frac{1}{2}} q_X(u)\Big) = \exp\Big({-\frac{1}{2}}\sum_{i=1}^n u_i^2 \operatorname{Var}(X_i)\Big) = \prod_{i=1}^n \exp\Big({-\frac{1}{2}} u_i^2 \operatorname{Var}(X_i)\Big). \]
The characteristic function factors, and this gives a full description of $X$.

Theorem 1.8. Let $\gamma$ be a positive semidefinite symmetric $n \times n$ matrix. Then there exists a Gaussian vector $X$ with $\operatorname{Cov} X = \gamma$.

Proof. Diagonalize $\gamma$ and let $\lambda_j \ge 0$ be the eigenvalues. Choose an orthonormal basis $v_1, \ldots, v_n$ of eigenvectors, so that $\gamma v_j = \lambda_j v_j$. Choose the $Y_j$ to be independent $\mathcal{N}(0,1)$ Gaussians. Then the vector
\[ X = \sum_{j=1}^n \sqrt{\lambda_j}\, v_j Y_j \]
is a Gaussian vector with the right covariance.

2 January 25, 2017

We start with a probability space $(\Omega, \mathcal{F}, P)$ and joint Gaussians (or a Gaussian vector) $X_1, \ldots, X_d$.
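The eigenvector construction in the proof of Theorem 1.8 can be sketched directly in code. This is an illustration rather than anything from the notes; the test matrix `gamma` and the sample size are arbitrary choices.

```python
import numpy as np

def gaussian_with_covariance(gamma, rng, size):
    """Sample X = sum_j sqrt(lambda_j) v_j Y_j, so that Cov(X) = gamma."""
    lam, V = np.linalg.eigh(gamma)             # gamma = V diag(lam) V^T
    lam = np.clip(lam, 0.0, None)              # guard against tiny negative round-off
    Y = rng.standard_normal((size, len(lam)))  # independent N(0,1) coordinates Y_j
    return (Y * np.sqrt(lam)) @ V.T            # each row is one sample of X

rng = np.random.default_rng(0)
gamma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])
samples = gaussian_with_covariance(gamma, rng, 200_000)
empirical_cov = np.cov(samples.T)
# empirical_cov should be close to gamma
```

Since $\operatorname{Cov}\big(\sum_j \sqrt{\lambda_j}\, v_j Y_j\big) = \sum_j \lambda_j v_j v_j^T = \gamma$, the empirical covariance of the samples recovers $\gamma$ up to Monte Carlo error.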
We have
\[ \operatorname{Cov}(X_j, X_k) = \mathbb{E}[X_j X_k] - \mathbb{E}[X_j]\,\mathbb{E}[X_k], \qquad \gamma_X = (\operatorname{Cov}(X_j, X_k))_{jk} = C. \]
Then the probability density is given by
\[ P_X(x) = \frac{1}{(2\pi)^{d/2} \sqrt{\det C}}\, e^{-\langle x, C^{-1}x \rangle/2} \]
where $C > 0$, and the characteristic function is
\[ \mathbb{E}[e^{i t \cdot X}] = e^{-\langle t, Ct \rangle/2}. \]

2.1 Gaussian processes

A (centered) Gaussian space is a closed subspace of $L^2(\Omega, \mathcal{F}, P)$ containing only Gaussian random variables.
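The characteristic function formula can be checked by Monte Carlo. This sketch generates centered samples with covariance $C$ through a Cholesky factor (an alternative to the eigenvector construction of Theorem 1.8); the matrix `C` and the test vector `t` are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
C = np.array([[1.0, 0.3],
              [0.3, 0.5]])
L = np.linalg.cholesky(C)                    # C = L L^T
X = rng.standard_normal((500_000, 2)) @ L.T  # centered Gaussian rows with Cov = C

t = np.array([0.4, -0.7])
empirical = np.exp(1j * (X @ t)).mean()      # Monte Carlo estimate of E[exp(i <t, X>)]
theoretical = np.exp(-t @ C @ t / 2)         # exp(-<t, C t>/2)
```

Because $|e^{i\langle t, X\rangle}| = 1$, the Monte Carlo error here is at most of order $1/\sqrt{500{,}000} \approx 0.0014$, so `empirical` and `theoretical` agree to two decimal places.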