Stochastic Integration


Prakash Balachandran
Department of Mathematics, Duke University

June 11, 2008

These notes are based on Durrett's Stochastic Calculus, Revuz and Yor's Continuous Martingales and Brownian Motion, and Kuo's Introduction to Stochastic Integration.

1 Preliminaries

Definition: A continuous-time process $X_t$ is said to be a continuous local martingale w.r.t. $\{\mathcal{F}_t; t \geq 0\}$ if there are stopping times $T_n \uparrow \infty$ such that

$$X_t^{T_n} = \begin{cases} X_{T_n \wedge t} & \text{on } \{T_n > 0\} \\ 0 & \text{on } \{T_n = 0\} \end{cases}$$

is a martingale w.r.t. $\{\mathcal{F}_{t \wedge T_n} : t \geq 0\}$. The stopping times $\{T_n\}$ are said to reduce $X$.

Remarks:

1. I brooded over why we set $X_t^T = 0$ on $\{T = 0\}$ in the definition, and this is the only explanation I could find: if we defined

$$X_t^T = X_{T \wedge t} \quad \text{on } \{T \geq 0\},$$

then $X_0^T = X_0$, so according to the above definition of a local martingale, $X_t^T$ a martingale implies

$$E[X_0^T] = E[X_0] < \infty.$$

So, with this definition of $X_t^T$, $X_0$ has to be integrable. Since we want to consider more general processes in which $X_0$ need not be integrable, we set

$$X_t^T = \begin{cases} X_{T \wedge t} & \text{on } \{T > 0\} \\ 0 & \text{on } \{T = 0\} \end{cases}$$

so that

$$X_0^T = \begin{cases} X_0 & \text{on } \{T > 0\} \\ 0 & \text{on } \{T = 0\} \end{cases}$$

and, according to the definition of a local martingale, $X_t^T$ a martingale implies that

$$E[X_0^T] = E[X_0; T > 0] < \infty,$$

which does not necessarily imply that $X_0$ is integrable, since $E[X_0; T > 0] \leq E[X_0]$. Thus, our definition of a continuous local martingale frees us from integrability of $X_0$.

2. We say that a process $Y$ is locally $A$ if there is a sequence of stopping times $T_n \uparrow \infty$ so that the stopped processes $Y_t^{T_n}$ have property $A$.

Now, you might ask why the hell we should care about continuous local martingales. Again, I thought about this a lot, and these are the only reasons I could salvage:

Example 1: Let $B_t = \left(B_t^{(1)}, \ldots, B_t^{(n)}\right)$ be $n$-dimensional Brownian motion, where $\{B_t^{(j)}\}_{j=1}^n$ are independent Brownian motions on $\mathbb{R}$.

Suppose we're interested in the process $\|B_t\| = \sqrt{\left(B_t^{(1)}\right)^2 + \cdots + \left(B_t^{(n)}\right)^2}$; it can be shown that

$$\|B_t\| = \sum_{j=1}^n \int_0^t \frac{B_s^{(j)}}{\|B_s\|}\, dB_s^{(j)} + \frac{n-1}{2} \int_0^t \frac{1}{\|B_s\|}\, ds,$$

and that

$$W_t = \sum_{j=1}^n \int_0^t \frac{B_s^{(j)}}{\|B_s\|}\, dB_s^{(j)}$$

is a Brownian motion, and hence a martingale.

Now, what about the second integral above? It's not immediately obvious how this integral behaves, but it's certainly not a martingale. In fact, it can be shown that $1/\|B_t\|$, $t \geq 0$, is a continuous local martingale, and so $\int_0^t \frac{ds}{\|B_s\|}$ is a continuous local martingale. Since a continuous martingale is certainly a continuous local martingale, it follows that $\|B_t\|$ is a continuous local martingale.

Now, suppose that a particle is exhibiting Brownian motion. If $w(t, \omega)$ is any suitable process which represents a quantity that varies with the distance from the origin to the particle, the process

$$W_t(\omega) = \int_0^t w(s, \omega)\, d\|B_s\|(\omega)$$

represents the total accumulation of this quantity along a path of Brownian motion. So, we need to know how to integrate processes w.r.t. continuous local martingales to evaluate this quantity (and ensure that it does, in fact, exist).

Example 2: Let $X_t$ be a continuous martingale, and let $\varphi$ be a convex function (imagine that $X_t$ is the interest rate at time $t$, so that $\varphi(X_t) = e^{-t X_t}$ is the present value of a dollar made in $t$ years).

Theorem 1 If $E[|\varphi(X_t)|] < \infty$, then $\varphi(X_t)$ is a submartingale.

Proof: Jensen's inequality for conditional expectation states that if $\varphi$ is convex and $E[|X_t|], E[|\varphi(X_t)|] < \infty$ for each $t$, then for $s < t$:

$$\varphi(X_s) = \varphi\left(E[X_t \mid \mathcal{F}_s]\right) \leq E[\varphi(X_t) \mid \mathcal{F}_s].$$

On the other hand, we have:

Theorem 2 $\varphi(X_t)$ is always a local submartingale.

For the proof of this, see the corollary after Theorem 4.

So, if $\varphi(t)$ is a cash flow from now to time $T$, then $\int_0^T \varphi(t) e^{-t X_t}\, dt = \int_0^T \varphi(t)\, dY_t$ is the net present value of this cash flow, where we've set $dY_t = e^{-t X_t}\, dt$. Again, we need to know how to integrate processes w.r.t. continuous local martingales to evaluate this quantity (and ensure that it does, in fact, exist).
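Before moving on to Example 3, here is a small numerical sanity check of the Jensen step in the proof of Theorem 1. It is only an illustrative sketch: it takes Brownian motion for the continuous martingale $X_t$ and the convex map $\varphi(x) = e^{-x}$ (the discounting function of Example 2 frozen at $t = 1$); the helper name `phi`, the times `s` and `t`, the grid of values for $X_s$, and the sample size are arbitrary choices of mine, not anything prescribed by the notes.

```python
import numpy as np

rng = np.random.default_rng(0)


def phi(x):
    """Convex discounting map from Example 2, with the time frozen at 1."""
    return np.exp(-x)


s, t = 1.0, 2.0          # two fixed times, s < t
n_samples = 200_000

for x in [-1.0, 0.0, 0.5, 2.0]:   # possible observed values of X_s
    # Given X_s = x, X_t = x + (B_t - B_s) with B_t - B_s ~ N(0, t - s),
    # so E[phi(X_t) | X_s = x] can be estimated by plain Monte Carlo.
    increments = rng.normal(0.0, np.sqrt(t - s), n_samples)
    cond_mean = phi(x + increments).mean()
    # Jensen / Theorem 1: phi(E[X_t | F_s]) = phi(x) <= E[phi(X_t) | F_s]
    print(f"x = {x:5.2f}:  phi(x) = {phi(x):8.4f}   "
          f"E[phi(X_t) | X_s = x] ~ {cond_mean:8.4f}")
```

In this particular case the inequality can also be checked by hand: for $Z \sim N(0, t-s)$, $E[e^{-(x+Z)}] = e^{-x} e^{(t-s)/2} \geq e^{-x}$, so the Monte Carlo estimates should exceed $\varphi(x)$ by a factor close to $e^{1/2} \approx 1.65$.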
Example 3:

Definition: Define $L_{ad}(\Omega; L^2[a,b])$ to be the space of stochastic processes $f(t, \omega)$ satisfying:

1. $f(t, \omega)$ is adapted to the filtration $\{\mathcal{F}_t\}$ of Brownian motion.

2. $\int_a^b |f(t, \omega)|^2\, dt < \infty$ almost surely.

Definition: Define $L^2([a,b] \times \Omega)$ to be the space of $\{\mathcal{F}_t\}$-adapted processes $f(t, \omega)$ such that $\int_a^b E(|f(t, \omega)|^2)\, dt < \infty$.

Now, by Fubini's Theorem, if $f \in L^2([a,b] \times \Omega)$, then

$$E\left[\int_a^b |f(t, \omega)|^2\, dt\right] = \int_a^b E[|f(t, \omega)|^2]\, dt < \infty.$$

Thus, $\int_a^b |f(t, \omega)|^2\, dt < \infty$ almost surely, so that $f \in L_{ad}(\Omega; L^2[a,b])$. Since $f$ was arbitrary, we must have

$$L^2([a,b] \times \Omega) \subseteq L_{ad}(\Omega; L^2[a,b]).$$

Now, in Stochastic Calculus, one constructs the integral $\int_0^t f(s, \omega)\, dB_s$ for $f \in L^2([a,b] \times \Omega)$. In this case, we have that $\int_0^t f(s, \omega)\, dB_s$ is a martingale. However, when $f \in L_{ad}(\Omega; L^2[a,b])$, $\int_0^t f(s, \omega)\, dB_s$ need not be a martingale.

Now, in order to proceed, we need a couple of theorems about continuous local martingales, and theorems concerning variance and covariance processes. They may seem irrelevant now, but they'll come in handy later.

2 Continuous Local Martingales

The first section was supposed to convince you why you should care about continuous local martingales. Now, we prove some theorems.

Theorem 3 (The Optional Stopping Theorem) Let $X$ be a continuous local martingale. If $S \leq T$ are stopping times, and $X_{T \wedge t}$ is a uniformly integrable martingale, then $E[X_T \mid \mathcal{F}_S] = X_S$.

Proof: The classic Optional Stopping Theorem states that: if $L \leq M$ are stopping times and $Y_{M \wedge n}$ is a uniformly integrable martingale w.r.t. $\mathcal{G}_n$, then

$$E[Y_M \mid \mathcal{G}_L] = Y_L.$$

To extend the result from discrete to continuous time, let $S_n = \frac{[2^n S] + 1}{2^n}$. Applying the discrete-time result to the uniformly integrable martingale $Y_m = X_{T \wedge m 2^{-n}}$ with $L = 2^n S_n$ and $M = \infty$, we have

$$E[X_T \mid \mathcal{F}_{S_n}] = X_{T \wedge S_n}.$$

Now, the dominated convergence theorem for conditional expectation states: if $Z_n \to Z$ a.s., $|Z_n| \leq W$ for all $n$ where $E[W] < \infty$, and $\mathcal{F}_n \uparrow \mathcal{F}_\infty$, then

$$E[Z_n \mid \mathcal{F}_n] \to E[Z \mid \mathcal{F}_\infty] \quad a.s.$$

Taking $Z_n = X_T$ and $\mathcal{F}_n = \mathcal{F}_{S_n}$, and noticing that $X_{T \wedge t}$ a uniformly integrable martingale implies that $E[|X_T|] < \infty$, we have $E[X_T \mid \mathcal{F}_{S_n}] \to E[X_T \mid \mathcal{F}_S]$ a.s. Since $E[X_T \mid \mathcal{F}_{S_n}] = X_{T \wedge S_n} \to X_S$ a.s., we have that $X_S = E[X_T \mid \mathcal{F}_S]$.

Theorem 4 If $X$ is a continuous local martingale, we can always take the sequence which reduces $X$ to be

$$T_n = \inf\{t : |X_t| > n\},$$

or any other sequence $T_n' \leq T_n$ that has $T_n' \uparrow \infty$ as $n \uparrow \infty$.

Proof: Let $S_n$ be a sequence that reduces $X$. If $s < t$, then applying the optional stopping theorem to $X_r^{S_n}$ at times $r = s \wedge T_m'$ and $t \wedge T_m'$ gives:

$$E[X_{t \wedge T_m' \wedge S_n} 1_{S_n > 0} \mid \mathcal{F}_{s \wedge T_m' \wedge S_n}] = X_{s \wedge T_m' \wedge S_n} 1_{S_n > 0}.$$

Multiplying by $1_{T_m' > 0} \in \mathcal{F}_0 \subseteq \mathcal{F}_{s \wedge T_m' \wedge S_n}$:

$$E[X_{t \wedge T_m' \wedge S_n} 1_{T_m' > 0, S_n > 0} \mid \mathcal{F}_{s \wedge T_m' \wedge S_n}] = X_{s \wedge T_m' \wedge S_n} 1_{T_m' > 0, S_n > 0}.$$

As $n \uparrow \infty$, $\mathcal{F}_{s \wedge T_m' \wedge S_n} \uparrow \mathcal{F}_{s \wedge T_m'}$, and $X_{r \wedge T_m' \wedge S_n} 1_{S_n > 0, T_m' > 0} \to X_{r \wedge T_m'} 1_{T_m' > 0}$ for all $r \geq 0$, with $|X_{r \wedge T_m' \wedge S_n} 1_{S_n > 0, T_m' > 0}| \leq m$; it follows from the dominated convergence theorem for conditional expectation that:

$$E[X_{t \wedge T_m'} 1_{T_m' > 0} \mid \mathcal{F}_{s \wedge T_m'}] = X_{s \wedge T_m'} 1_{T_m' > 0}.$$
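As an aside, here is a rough numerical illustration of the canonical localizing sequence $T_n = \inf\{t : |X_t| > n\}$ from Theorem 4. It is only a sketch: it uses a single discretized Brownian path (Brownian motion is already a true martingale, so it does not need localizing; it is just easy to simulate), and the horizon, step size, and random seed are arbitrary choices of mine.

```python
import numpy as np

rng = np.random.default_rng(1)

T_horizon, dt = 50.0, 1e-3
n_steps = int(T_horizon / dt)
t_grid = np.linspace(0.0, T_horizon, n_steps + 1)

# Simulate X_t = B_t on the grid by summing independent N(0, dt) increments.
X = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n_steps))])

for n in range(1, 6):
    crossed = np.flatnonzero(np.abs(X) > n)
    if crossed.size == 0:
        print(f"n = {n}: T_n not reached before t = {T_horizon}")
        continue
    k = crossed[0]                     # first grid index with |X_t| > n
    # Up to the discretization overshoot at the crossing, |X| stays <= n on
    # [0, T_n], so the stopped process X_{t ∧ T_n} is bounded by (roughly) n.
    print(f"n = {n}:  T_n ~ {t_grid[k]:7.3f},   X_(T_n) ~ {X[k]:+6.3f}")
```

On a typical path the reported $T_n$ increase with $n$, matching the requirement $T_n \uparrow \infty$, and the stopped path $X_{t \wedge T_n}$ stays within $\pm n$ apart from the small overshoot introduced by the time discretization.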
Corollary 1 If $X$ is a continuous martingale and $\varphi$ is a convex function, then $\varphi(X_t)$ is a continuous local submartingale.

Proof: By Theorem 4, we can let $T_n = \inf\{t : |X_t| > n\}$ be a sequence of stopping times that reduces $X_t$. By definition of $X_t^{T_n}$, we therefore have $|X_t^{T_n}| \leq n$, so that $E[|X_t^{T_n}|] \leq n$ for $t \geq 0$.

Now, since $|X_t^{T_n}| \leq n$, $\varphi(X_t^{T_n})$ takes values in $\varphi([-n, n])$. Since $\varphi$ is convex, it is continuous, so that $\varphi([-n, n])$ is bounded. Hence, $|\varphi(X_t^{T_n})| \leq M$ for some $0 < M < \infty$, and so $E[|\varphi(X_t^{T_n})|] \leq M$. So, by Jensen's inequality, for $s < t$:

$$E[\varphi(X_t^{T_n}) \mid \mathcal{F}_{T_n \wedge s}] \geq \varphi\left(E[X_t^{T_n} \mid \mathcal{F}_{T_n \wedge s}]\right) = \varphi(X_s^{T_n}).$$

Thus, $\varphi(X_t^{T_n})$ is a submartingale, so that $\varphi(X_t)$ is a local submartingale.

In the proof of Theorem 4, we used the fact that $X_t^{T_n}$ is a martingale w.r.t. $\{\mathcal{F}_{t \wedge T_n}; t \geq 0\}$, as per the definition of a continuous local martingale. In general, we have:

Theorem 5 Let $S$ be a stopping time.