
Stochastic Calculus

Xi Geng

Contents

1 Review of probability theory
  1.1 Conditional expectations
  1.2 Uniform integrability
  1.3 The Borel-Cantelli lemma
  1.4 The law of large numbers and the central limit theorem
  1.5 Weak convergence of probability measures
2 Generalities on continuous time stochastic processes
  2.1 Basic definitions
  2.2 Construction of stochastic processes: Kolmogorov's extension theorem
  2.3 Kolmogorov's continuity theorem
  2.4 Filtrations and stopping times
3 Continuous time martingales
  3.1 Basic properties and the martingale transform: discrete stochastic integration
  3.2 The martingale convergence theorems
  3.3 Doob's optional sampling theorems
  3.4 Doob's martingale inequalities
  3.5 The continuous time analogue
  3.6 The Doob-Meyer decomposition
4 Brownian motion
  4.1 Basic properties
  4.2 The strong Markov property and the reflection principle
  4.3 The Skorokhod embedding theorem and the Donsker invariance principle
  4.4 Passage time distributions
  4.5 Sample path properties: an overview
5 Stochastic integration
  5.1 $L^2$-bounded martingales and the bracket process
  5.2 Stochastic integrals
  5.3 Itô's formula
  5.4 The Burkholder-Davis-Gundy inequalities
  5.5 Lévy's characterization of Brownian motion
  5.6 Continuous local martingales as time-changed Brownian motions
  5.7 Continuous local martingales as Itô's integrals
  5.8 The Cameron-Martin-Girsanov transformation
  5.9 Local times for continuous semimartingales
6 Stochastic differential equations
  6.1 Itô's theory of stochastic differential equations
  6.2 Different notions of solutions and the Yamada-Watanabe theorem
  6.3 Existence of weak solutions
  6.4 Pathwise uniqueness results
  6.5 A comparison theorem for one dimensional SDEs
  6.6 Two useful techniques: transformation of drift and time-change
  6.7 Examples: linear SDEs and Bessel processes
  6.8 Itô's diffusion processes and partial differential equations

1 Review of probability theory

In this section, we review several aspects of probability theory that are important for our study. Most proofs are contained in standard textbooks and hence will be omitted.

Recall that a probability space is a triple $(\Omega, \mathcal{F}, \mathbb{P})$ which consists of a non-empty set $\Omega$, a $\sigma$-algebra $\mathcal{F}$ over $\Omega$ and a probability measure $\mathbb{P}$ on $\mathcal{F}$. A random variable over $(\Omega, \mathcal{F}, \mathbb{P})$ is a real-valued $\mathcal{F}$-measurable function. For $1 \leq p < \infty$, $L^p(\Omega, \mathcal{F}, \mathbb{P})$ denotes the Banach space of (equivalence classes of) random variables $X$ satisfying $\mathbb{E}[|X|^p] < \infty$.

The following are a few conventions that we will be using in the course.

• A $\mathbb{P}$-null set is a subset of some $\mathcal{F}$-measurable set with zero probability.

• A property is said to hold almost surely (a.s.) or with probability one if it holds outside an $\mathcal{F}$-measurable set with zero probability, or equivalently, the set on which it does not hold is a $\mathbb{P}$-null set.

1.1 Conditional expectations

A fundamental concept in the study of martingale theory and stochastic calculus is the conditional expectation.

Definition 1.1. Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space, and let $\mathcal{G}$ be a sub-$\sigma$-algebra of $\mathcal{F}$. Given an integrable random variable $X \in L^1(\Omega, \mathcal{F}, \mathbb{P})$, the conditional expectation of $X$ given $\mathcal{G}$ is the unique $\mathcal{G}$-measurable and integrable random variable $Y$ such that
$$\int_A Y \, d\mathbb{P} = \int_A X \, d\mathbb{P}, \quad \forall A \in \mathcal{G}. \qquad (1.1)$$
It is denoted by $\mathbb{E}[X|\mathcal{G}]$.

The existence of the conditional expectation is a standard application of the Radon-Nikodym theorem, and the uniqueness follows from an easy measure-theoretic argument.

Here we recall a geometric construction of the conditional expectation. We start with the Hilbert space $L^2(\Omega, \mathcal{F}, \mathbb{P})$. Since $\mathcal{G} \subseteq \mathcal{F}$, the Hilbert space $L^2(\Omega, \mathcal{G}, \mathbb{P})$ can be regarded as a closed subspace of $L^2(\Omega, \mathcal{F}, \mathbb{P})$.
Given $X \in L^2(\Omega, \mathcal{F}, \mathbb{P})$, let $Y$ be the orthogonal projection of $X$ onto $L^2(\Omega, \mathcal{G}, \mathbb{P})$. Then $Y$ satisfies the characterizing property (1.1) of the conditional expectation. If $X$ is a non-negative integrable random variable, we consider $X_n = X \wedge n \in L^2(\Omega, \mathcal{F}, \mathbb{P})$ and let $Y_n$ be the orthogonal projection of $X_n$ onto $L^2(\Omega, \mathcal{G}, \mathbb{P})$. It follows that $Y_n$ is non-negative and increasing in $n$. Its pointwise limit, denoted by $Y$, is a non-negative, $\mathcal{G}$-measurable and integrable random variable which satisfies (1.1). The general case is treated by writing $X = X^+ - X^-$ and using linearity. We leave it as an exercise to provide the details of the construction.

The conditional expectation satisfies the following basic properties.

(1) $X \mapsto \mathbb{E}[X|\mathcal{G}]$ is linear.

(2) If $X \leq Y$, then $\mathbb{E}[X|\mathcal{G}] \leq \mathbb{E}[Y|\mathcal{G}]$. In particular, $|\mathbb{E}[X|\mathcal{G}]| \leq \mathbb{E}[|X|\,|\mathcal{G}]$.

(3) If $X$ and $ZX$ are both integrable, and $Z$ is $\mathcal{G}$-measurable, then $\mathbb{E}[ZX|\mathcal{G}] = Z\,\mathbb{E}[X|\mathcal{G}]$.

(4) If $\mathcal{G}_1 \subseteq \mathcal{G}_2$ are sub-$\sigma$-algebras of $\mathcal{F}$, then $\mathbb{E}[\mathbb{E}[X|\mathcal{G}_2]|\mathcal{G}_1] = \mathbb{E}[X|\mathcal{G}_1]$.

(5) If $X$ and $\mathcal{G}$ are independent, then $\mathbb{E}[X|\mathcal{G}] = \mathbb{E}[X]$.

In addition, we have the following Jensen's inequality: if $\varphi$ is a convex function on $\mathbb{R}$, and both $X$ and $\varphi(X)$ are integrable, then
$$\varphi(\mathbb{E}[X|\mathcal{G}]) \leq \mathbb{E}[\varphi(X)|\mathcal{G}]. \qquad (1.2)$$
Applying this to the function $\varphi(x) = |x|^p$ for $p \geq 1$, we see immediately that the conditional expectation is a contraction operator on $L^p(\Omega, \mathcal{F}, \mathbb{P})$.

The convergence theorems (the monotone convergence theorem, Fatou's lemma, and the dominated convergence theorem) also hold for the conditional expectation, stated in the obvious way.

For every measurable subset $A \in \mathcal{F}$, $\mathbb{P}(A|\mathcal{G}) := \mathbb{E}[\mathbf{1}_A|\mathcal{G}]$ is the conditional probability of $A$ given $\mathcal{G}$. However, $\mathbb{P}(A|\mathcal{G})$ is defined only up to a null set which depends on $A$, and in general there does not exist a universal null set outside which $A \mapsto \mathbb{P}(A|\mathcal{G})$ can be regarded as a probability measure. The resolution of this issue leads to the notion of regular conditional probabilities, which play an important role in the study of Markov processes and stochastic differential equations.
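The defining property (1.1) can be checked by hand in the simplest setting: when $\mathcal{G}$ is generated by a finite partition of $\Omega$, the conditional expectation $\mathbb{E}[X|\mathcal{G}]$ is the partition-wise average of $X$. The following sketch (the sample space, weights and partition are illustrative choices, not taken from the notes) verifies (1.1) for every $A \in \mathcal{G}$ numerically:

```python
from itertools import chain

# Finite probability space: Omega = {0,...,5} with the uniform measure.
# (Illustrative example, not from the notes.)
omega = [0, 1, 2, 3, 4, 5]
prob = {w: 1 / 6 for w in omega}
X = {w: float(w ** 2) for w in omega}    # a random variable X(w) = w^2

# G is the sigma-algebra generated by the partition {0,1}, {2,3}, {4,5}.
partition = [{0, 1}, {2, 3}, {4, 5}]

def cond_exp(X, partition, prob):
    """E[X|G] for partition-generated G: constant on each block,
    equal to the probability-weighted average of X over the block."""
    Y = {}
    for block in partition:
        mass = sum(prob[w] for w in block)
        avg = sum(X[w] * prob[w] for w in block) / mass
        for w in block:
            Y[w] = avg
    return Y

Y = cond_exp(X, partition, prob)

def integral(f, A, prob):
    """int_A f dP on the finite space."""
    return sum(f[w] * prob[w] for w in A)

# Property (1.1): int_A Y dP = int_A X dP for every A in G,
# i.e. for every union of partition blocks (enumerated by bit-masks).
for i in range(2 ** len(partition)):
    A = set(chain.from_iterable(
        b for k, b in enumerate(partition) if i >> k & 1))
    assert abs(integral(Y, A, prob) - integral(X, A, prob)) < 1e-12
```

Here $\mathcal{G}$ consists exactly of the unions of partition blocks, so the loop over bit-masks ranges over all of $\mathcal{G}$; the case $A = \Omega$ recovers $\mathbb{E}\big[\mathbb{E}[X|\mathcal{G}]\big] = \mathbb{E}[X]$, i.e. property (4) with $\mathcal{G}_1 = \{\emptyset, \Omega\}$.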
Definition 1.2. Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space and let $\mathcal{G}$ be a sub-$\sigma$-algebra of $\mathcal{F}$. A system $\{p(\omega, A)\}_{\omega \in \Omega, A \in \mathcal{F}}$ is called a regular conditional probability given $\mathcal{G}$ if it satisfies the following conditions:

(1) for every $\omega \in \Omega$, $A \mapsto p(\omega, A)$ is a probability measure on $(\Omega, \mathcal{F})$;

(2) for every $A \in \mathcal{F}$, $\omega \mapsto p(\omega, A)$ is $\mathcal{G}$-measurable;

(3) for every $A \in \mathcal{F}$ and $B \in \mathcal{G}$,
$$\mathbb{P}(A \cap B) = \int_B p(\omega, A) \, \mathbb{P}(d\omega).$$

The third condition tells us that for every $A \in \mathcal{F}$, $p(\cdot, A)$ is a version of $\mathbb{P}(A|\mathcal{G})$. It follows that for every integrable random variable $X$, $\omega \mapsto \int X(\omega') \, p(\omega, d\omega')$ is almost surely well-defined and is a version of $\mathbb{E}[X|\mathcal{G}]$.

In many situations, we are interested in the conditional distribution of a random variable taking values in another measurable space. Suppose that $\{p(\omega, A)\}_{\omega \in \Omega, A \in \mathcal{F}}$ is a regular conditional probability on $(\Omega, \mathcal{F}, \mathbb{P})$ given $\mathcal{G}$. Let $X$ be a measurable map from $(\Omega, \mathcal{F})$ to some measurable space $(E, \mathcal{E})$. We can define
$$Q(\omega, \Gamma) = p(\omega, X^{-1}(\Gamma)), \quad \omega \in \Omega, \ \Gamma \in \mathcal{E}.$$
Then the system $\{Q(\omega, \Gamma)\}_{\omega \in \Omega, \Gamma \in \mathcal{E}}$ satisfies:

(1)' for every $\omega \in \Omega$, $\Gamma \mapsto Q(\omega, \Gamma)$ is a probability measure on $(E, \mathcal{E})$;

(2)' for every $\Gamma \in \mathcal{E}$, $\omega \mapsto Q(\omega, \Gamma)$ is $\mathcal{G}$-measurable;

(3)' for every $\Gamma \in \mathcal{E}$ and $B \in \mathcal{G}$,
$$\mathbb{P}(\{X \in \Gamma\} \cap B) = \int_B Q(\omega, \Gamma) \, \mathbb{P}(d\omega).$$

In particular, we can see that $Q(\cdot, \Gamma)$ is a version of $\mathbb{P}(X \in \Gamma|\mathcal{G})$ for every $\Gamma \in \mathcal{E}$. The system $\{Q(\omega, \Gamma)\}_{\omega \in \Omega, \Gamma \in \mathcal{E}}$ is called a regular conditional distribution of $X$ given $\mathcal{G}$.

It is a deep result in measure theory that if $E$ is a complete and separable metric space and $\mathcal{E}$ is the $\sigma$-algebra generated by the open sets in $E$, then a regular conditional distribution of $X$ given $\mathcal{G}$ exists. In particular, if $(\Omega, \mathcal{F})$ is a complete and separable metric space (with $\mathcal{F}$ its Borel $\sigma$-algebra), by considering the identity map we know that a regular conditional probability given $\mathcal{G}$ exists. In this course we will mainly be interested in complete and separable metric spaces.
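As a concrete illustration (a standard textbook example, not taken from these notes), suppose $X$ and $Z$ are real random variables with a strictly positive joint density $f(x, z)$ on $\mathbb{R}^2$, and take $\mathcal{G} = \sigma(Z)$. Then a regular conditional distribution of $X$ given $\mathcal{G}$ is furnished by the familiar conditional density:
$$Q(\omega, \Gamma) = \int_\Gamma \frac{f(x, Z(\omega))}{\int_{\mathbb{R}} f(y, Z(\omega)) \, dy} \, dx, \qquad \Gamma \in \mathcal{B}(\mathbb{R}).$$
Indeed, $\Gamma \mapsto Q(\omega, \Gamma)$ is a probability measure for every $\omega$, $\omega \mapsto Q(\omega, \Gamma)$ is $\sigma(Z)$-measurable since it is a Borel function of $Z(\omega)$, and condition (3)' follows from Fubini's theorem.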
Sometimes we also consider conditional expectations given some random variable $X$. Let $X$ be as before, and let $\mathbb{P}^X$ be the law of $X$ on $(E, \mathcal{E})$. Similar to Definition 1.2, a system $\{p(x, A)\}_{x \in E, A \in \mathcal{F}}$ is called a regular conditional probability given $X$ if it satisfies:

(1)'' for every $x \in E$, $A \mapsto p(x, A)$ is a probability measure on $(\Omega, \mathcal{F})$;

(2)'' for every $A \in \mathcal{F}$, $x \mapsto p(x, A)$ is $\mathcal{E}$-measurable;

(3)'' for every $A \in \mathcal{F}$ and $\Gamma \in \mathcal{E}$,
$$\mathbb{P}(A \cap \{X \in \Gamma\}) = \int_\Gamma p(x, A) \, \mathbb{P}^X(dx).$$

In particular, $p(\cdot, A)$ gives a version of $\mathbb{P}(A|X = \cdot)$. If $(\Omega, \mathcal{F})$ is a complete and separable metric space, then a regular conditional probability given $X$ exists.

1.2 Uniform integrability

Now we review an important concept which is closely related to the study of $L^1$-convergence.

Definition 1.3. A family $\{X_t : t \in T\}$ of integrable random variables over a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ is called uniformly integrable if
$$\lim_{\lambda \to \infty} \sup_{t \in T} \int_{\{|X_t| > \lambda\}} |X_t| \, d\mathbb{P} = 0.$$

Uniform integrability can be characterized by the following two properties.
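To see how Definition 1.3 can fail, consider the classical counterexample (standard, not from these notes): $X_n = n \mathbf{1}_{[0, 1/n]}$ on $([0,1], \mathcal{B}([0,1]), \mathrm{Leb})$. Each $X_n$ satisfies $\mathbb{E}[X_n] = 1$, yet whenever $n > \lambda$ the tail integral $\int_{\{|X_n| > \lambda\}} |X_n| \, d\mathbb{P}$ equals $n \cdot (1/n) = 1$, so the supremum over $n$ never decays as $\lambda \to \infty$. A short sketch computing the tail integrals in closed form:

```python
def tail_integral(n, lam):
    """Exact value of int_{|X_n| > lam} |X_n| dP for X_n = n*1_[0,1/n]
    on ([0,1], Lebesgue).  If n > lam, the event {X_n > lam} is [0,1/n],
    on which X_n = n, so the integral is n * (1/n) = 1; otherwise the
    event {X_n > lam} is Lebesgue-null and the integral is 0."""
    return 1.0 if n > lam else 0.0

# The family {X_n} is NOT uniformly integrable:
# sup_n of the tail integral equals 1 for every threshold lam.
for lam in [1, 10, 100, 1000]:
    assert max(tail_integral(n, lam) for n in range(1, 10_000)) == 1.0

# By contrast, the single random variable X_1 = 1_[0,1] is uniformly
# integrable on its own: its tail integral vanishes once lam >= 1.
assert tail_integral(1, 2.0) == 0.0
```

The same family also shows that $\mathbb{E}[X_n] \to 1$ while $X_n \to 0$ almost surely, so uniform integrability is exactly the ingredient missing for $L^1$-convergence here.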