Continuous martingales and stochastic calculus

Alison Etheridge

March 11, 2018

Contents

1 Introduction
2 An overview of Gaussian variables and processes
  2.1 Gaussian variables
  2.2 Gaussian vectors and spaces
  2.3 Gaussian spaces
  2.4 Gaussian processes
  2.5 Constructing distributions on $\big(\mathbb{R}^{[0,\infty)},\mathcal{B}(\mathbb{R}^{[0,\infty)})\big)$
3 Brownian Motion and First Properties
  3.1 Definition of Brownian motion
  3.2 Wiener Measure
  3.3 Extensions and first properties
  3.4 Basic properties of Brownian sample paths
4 Filtrations and stopping times
  4.1 Stopping times
5 (Sub/super-)Martingales in continuous time
  5.1 Definitions
  5.2 Doob's maximal inequalities
  5.3 Convergence and regularisation theorems
  5.4 Martingale convergence and optional stopping
6 Continuous semimartingales
  6.1 Functions of finite variation
  6.2 Processes of finite variation
  6.3 Continuous local martingales
  6.4 Quadratic variation of a continuous local martingale
  6.5 Continuous semimartingales
7 Stochastic Integration
  7.1 Stochastic integral w.r.t. $L^2$-bounded martingales
  7.2 Intrinsic characterisation of stochastic integrals using the quadratic co-variation
  7.3 Extensions: stochastic integration with respect to continuous semimartingales
  7.4 Itô's formula and its applications
A The Monotone Class Lemma / Dynkin's $\pi$-$\lambda$ Theorem
B Review of some basic probability
  B.1 Convergence of random variables
  B.2 Convergence in probability
  B.3 Uniform integrability
  B.4 The convergence theorems
  B.5 Information and independence
  B.6 Conditional expectation
C Convergence of backwards discrete time supermartingales
D A very short primer in functional analysis
E Normed vector spaces
F Banach spaces

These notes are based heavily on notes by Jan Obłój from last year's course, and on the book by Jean-François Le Gall, Brownian motion, martingales, and stochastic calculus, Springer, 2016. The first five chapters of that book cover everything in the course (and more). Other useful references (in no particular order) include:

1. D. Revuz and M. Yor, Continuous martingales and Brownian motion, Springer (revised 3rd ed.), 2001. Chapters 0-4.
2. I. Karatzas and S. Shreve, Brownian motion and stochastic calculus, Springer (2nd ed.), 1991. Chapters 1-3.
3. R. Durrett, Stochastic Calculus: A practical introduction, CRC Press, 1996. Sections 1.1-2.10.
4. F. Klebaner, Introduction to Stochastic Calculus with Applications, 3rd edition, Imperial College Press, 2012. Chapters 1, 2, 3.1-3.11, 4.1-4.5, 7.1-7.8, 8.1-8.7.
5. J. M. Steele, Stochastic Calculus and Financial Applications, Springer, 2010. Chapters 3-8.
6. B. Øksendal, Stochastic Differential Equations: An introduction with applications, 6th edition, Springer (Universitext), 2007. Chapters 1-3.
7. S. Shreve, Stochastic calculus for finance, Vol 2: Continuous-time models, Springer Finance, Springer-Verlag, New York, 2004. Chapters 3-4.

The appendices gather together some useful results that we take as known.

1 Introduction

Our topic is part of the huge field devoted to the study of stochastic processes. Since first year you have had the notion of a random variable, $X$ say, defined on a probability space $(\Omega,\mathcal{F},\mathbb{P})$ and taking values in a state space $(E,\mathcal{E})$. Here $X:\Omega\to E$ is just a measurable mapping, so that for each $e\in\mathcal{E}$ we have $X^{-1}(e)\in\mathcal{F}$ and so, in particular, we can assign a probability to the event that $X\in e$.
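As a toy illustration of this definition: take $\Omega=[0,1]$ with $\mathcal{F}$ the Borel sets and $\mathbb{P}$ Lebesgue measure, and let $X(\omega)=\mathbf{1}_{[1/2,1]}(\omega)$. For any $e\in\mathcal{B}(\mathbb{R})$ the preimage $X^{-1}(e)$ is one of $\emptyset$, $[0,1/2)$, $[1/2,1]$ or $[0,1]$, all of which lie in $\mathcal{F}$, so $X$ is measurable and, for instance, $\mathbb{P}[X=1]=\mathbb{P}\big([1/2,1]\big)=1/2$.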
Often $(E,\mathcal{E})$ is just $(\mathbb{R},\mathcal{B}(\mathbb{R}))$ (where $\mathcal{B}(\mathbb{R})$ denotes the Borel sets of $\mathbb{R}$), and this just says that for each $x\in\mathbb{R}$ we can assign a probability to the event $\{X\le x\}$.

Definition 1.1 A stochastic process, indexed by some set $T$, is a collection of random variables $\{X_t\}_{t\in T}$, defined on a common probability space $(\Omega,\mathcal{F},\mathbb{P})$ and taking values in a common state space $(E,\mathcal{E})$.

For us, $T$ will generally be either $[0,\infty)$ or $[0,T]$, and we think of $X_t$ as a random quantity that evolves with time.

When we model deterministic quantities that evolve with (continuous) time, we often appeal to ordinary differential equations as models. In this course we develop the 'calculus' necessary to build an analogous theory of stochastic (ordinary) differential equations. An ordinary differential equation might take the form
\[ dX(t) = a(t,X(t))\,dt, \]
for a suitably nice function $a$. A stochastic differential equation is often formally written as
\[ dX(t) = a(t,X(t))\,dt + b(t,X(t))\,dB_t, \]
where the second term on the right models 'noise' or fluctuations. Here $(B_t)_{t\ge 0}$ is an object that we call Brownian motion. We shall consider what appear to be more general driving noises, but the punchline of the course is that under rather general conditions they can all be built from Brownian motion. Indeed, if we added possible (random) 'jumps' in $X(t)$ at times given by a Poisson process, we would capture essentially the most general theory. We are not going to allow jumps, so we will be thinking of settings in which our stochastic equation has a continuous solution $t\mapsto X_t$, and Brownian motion will be a fundamental object. It requires some measure-theoretic niceties to make sense of all this.

Definition 1.2 The mapping $t\mapsto X_t(\omega)$, for a fixed $\omega\in\Omega$, represents a realisation of our stochastic process, called a sample path or trajectory. We shall assume that
\[ (t,\omega)\mapsto X_t(\omega) : \big([0,\infty)\times\Omega,\ \mathcal{B}([0,\infty))\otimes\mathcal{F}\big) \to \big(\mathbb{R},\mathcal{B}(\mathbb{R})\big) \]
is measurable (i.e. for all $A\in\mathcal{B}(\mathbb{R})$, the set $\{(t,\omega): X_t(\omega)\in A\}$ is in the product $\sigma$-algebra $\mathcal{B}([0,\infty))\otimes\mathcal{F}$). Our stochastic process is then said to be measurable.

Definition 1.3 Let $X,Y$ be two stochastic processes defined on a common probability space $(\Omega,\mathcal{F},\mathbb{P})$.
i. We say that $X$ is a modification of $Y$ if, for all $t\ge 0$, we have $X_t=Y_t$ a.s.;
ii. We say that $X$ and $Y$ are indistinguishable if $\mathbb{P}[X_t=Y_t \text{ for all } 0\le t<\infty]=1$.

If $X$ and $Y$ are modifications of one another then, in particular, they have the same finite-dimensional distributions,
\[ \mathbb{P}[(X_{t_1},\dots,X_{t_n})\in A] = \mathbb{P}[(Y_{t_1},\dots,Y_{t_n})\in A] \]
for all measurable sets $A$, but indistinguishability is a much stronger property. (For example, if $U$ is uniformly distributed on $[0,1]$, $X_t\equiv 0$ and $Y_t=\mathbf{1}_{\{t=U\}}$, then $Y$ is a modification of $X$, since $\mathbb{P}[Y_t=0]=1$ for each fixed $t$, but $\mathbb{P}[X_t=Y_t \text{ for all } t]=0$.)

Indistinguishability takes the sample path as the basic object of study, so that we could think of $(X_t(\omega),\ t\ge 0)$ (the path) as a random variable taking values in the space $E^{[0,\infty)}$ (of all possible paths). We are abusing notation a little by identifying the sample space $\Omega$ with the state space $E^{[0,\infty)}$. This state space then has to be endowed with a $\sigma$-algebra of measurable sets. For definiteness, we take real-valued processes, so $E=\mathbb{R}$.

Definition 1.4 An $n$-dimensional cylinder set in $\mathbb{R}^{[0,\infty)}$ is a set of the form
\[ C=\big\{\omega\in\mathbb{R}^{[0,\infty)} : (\omega(t_1),\dots,\omega(t_n))\in A\big\} \]
for some $0\le t_1<t_2<\dots<t_n$ and $A\in\mathcal{B}(\mathbb{R}^n)$.

Let $\mathcal{C}$ be the family of all finite-dimensional cylinder sets and $\mathcal{B}(\mathbb{R}^{[0,\infty)})$ the $\sigma$-algebra it generates. This is small enough for us to be able to build probability measures on $\mathcal{B}(\mathbb{R}^{[0,\infty)})$ using Carathéodory's Theorem (see B8.1). On the other hand, $\mathcal{B}(\mathbb{R}^{[0,\infty)})$ only contains events which can be defined using at most countably many coordinates.
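To illustrate: the event
\[ \big\{\omega : \omega(1)\le 3,\ \omega(2)>0\big\} = \big\{\omega : (\omega(1),\omega(2))\in(-\infty,3]\times(0,\infty)\big\} \]
is a two-dimensional cylinder set, and an event such as $\{\omega : \sup_{t\in\mathbb{Q}\cap[0,1]}\omega(t)\le 1\}=\bigcap_{t\in\mathbb{Q}\cap[0,1]}\{\omega:\omega(t)\le 1\}$, being a countable intersection of one-dimensional cylinder sets, still belongs to $\mathcal{B}(\mathbb{R}^{[0,\infty)})$.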
In particular, the event
\[ \big\{\omega\in\mathbb{R}^{[0,\infty)} : t\mapsto\omega(t) \text{ is continuous}\big\} \]
is not $\mathcal{B}(\mathbb{R}^{[0,\infty)})$-measurable. We will have to do some work to show that many processes can be assumed to be continuous, or right continuous. The sample paths are then fully described by their values at times $t\in\mathbb{Q}$, which will greatly simplify the study of quantities of interest such as $\sup_{0\le s\le t}|X_s|$ or $\tau_0(\omega)=\inf\{t\ge 0 : X_t(\omega)>0\}$.

A monotone class argument (see Appendix A) will tell us that a probability measure on $\mathcal{B}(\mathbb{R}^{[0,\infty)})$ is characterised by its finite-dimensional distributions, so if we can take continuous paths, then we only need to find the probabilities of cylinder sets to characterise the distribution of the process.

2 An overview of Gaussian variables and processes

Brownian motion is a special example of a Gaussian process, or at least a version of one that is assumed to have continuous sample paths. In this section we give an overview of Gaussian variables and processes, but in the next we shall give a direct construction of Brownian motion, due to Lévy, from which continuity of sample paths is an immediate consequence.

2.1 Gaussian variables

Definition 2.1 A random variable $X$ is called a centred standard Gaussian, or standard normal, if its distribution has density
\[ p_X(x)=\frac{1}{\sqrt{2\pi}}\,e^{-x^2/2}, \qquad x\in\mathbb{R}, \]
with respect to Lebesgue measure. We write $X\sim\mathcal{N}(0,1)$.

It is elementary to calculate its Laplace transform,
\[ \mathbb{E}[e^{\lambda X}]=e^{\lambda^2/2}, \qquad \lambda\in\mathbb{R}, \]
or, extending to complex values, the characteristic function
\[ \mathbb{E}[e^{i\xi X}]=e^{-\xi^2/2}, \qquad \xi\in\mathbb{R}. \]

We say $Y$ has Gaussian (or normal) distribution with mean $m$ and variance $\sigma^2$, written $Y\sim\mathcal{N}(m,\sigma^2)$, if $Y=\sigma X+m$ where $X\sim\mathcal{N}(0,1)$. Then
\[ \mathbb{E}[e^{i\xi Y}]=\exp\Big(im\xi-\frac{\sigma^2\xi^2}{2}\Big), \qquad \xi\in\mathbb{R}, \]
and if $\sigma>0$, the density on $\mathbb{R}$ is
\[ p_Y(x)=\frac{1}{\sigma\sqrt{2\pi}}\,e^{-\frac{(x-m)^2}{2\sigma^2}}, \qquad x\in\mathbb{R}. \]
We think of a constant 'random' variable as being a degenerate Gaussian.
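To see where the Laplace transform formula above comes from, complete the square in the exponent:
\[ \mathbb{E}[e^{\lambda X}] = \int_{\mathbb{R}} e^{\lambda x}\,\frac{1}{\sqrt{2\pi}}e^{-x^2/2}\,dx = e^{\lambda^2/2}\int_{\mathbb{R}}\frac{1}{\sqrt{2\pi}}e^{-(x-\lambda)^2/2}\,dx = e^{\lambda^2/2}, \]
since the last integrand is the density of $\mathcal{N}(\lambda,1)$ and so integrates to one. The formula for $\mathbb{E}[e^{i\xi Y}]$ then follows from writing $\mathbb{E}[e^{i\xi(\sigma X+m)}]=e^{im\xi}\,\mathbb{E}[e^{i(\sigma\xi)X}]=e^{im\xi}e^{-\sigma^2\xi^2/2}$.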