An Introduction to Markov Processes and Their Applications in Mathematical Economics

Umut Çetin

Contents
1. Markov Property
2. Brief review of martingale theory
3. Feller Processes
4. Infinitesimal generators
5. Martingale Problems and Stochastic Differential Equations
6. Linear continuous Markov processes

6. Linear continuous Markov processes

In this section we focus on one-dimensional continuous Markov processes on the real line. Our aim is to better understand their extended generators and transition functions, and to construct a diffusion process from a Brownian motion by a change of time and space. We will deal with a Markov process $X$ whose state space $E$ is an interval in $\mathbb{R}$ with endpoints $l$ and $r$, which may be closed, open, semi-open, bounded or unbounded. The lifetime is denoted by $\zeta$ as usual. The following are the standing assumptions on $X$ throughout the section:

(1) $X$ is continuous on $[0, \zeta)$;
(2) $X$ has the strong Markov property;
(3) $X$ is regular; i.e. if $T_y := \inf\{t > 0 : X_t = y\}$, then $P^x(T_y < \infty) > 0$ for any $x \in \operatorname{int}(E)$ and $y \in E$.

The last assumption, the regularity of $X$, is very much like the concept of irreducibility for Markov chains. Indeed, when $X$ is regular, its state space cannot be decomposed into smaller sets from which $X$ cannot exit.

For any interval $I = (a, b)$ such that $[a, b] \subset E$, we denote by $\sigma_I$ the exit time of $I$. Note that for $x \in I$, $\sigma_I = T_a \wedge T_b$, $P^x$-a.s., and for $x \notin I$, $\sigma_I = 0$, $P^x$-a.s. We also put $m_I(x) := E^x[\sigma_I]$.

Proposition 6.1. $m_I$ is bounded on $I$. In particular, $\sigma_I$ is finite almost surely.

Proof. Let $y$ be a fixed point in $I$. Since $X$ is regular, we can find $\alpha < 1$ and $t > 0$ such that
$$\max\{P^y(T_a > t),\, P^y(T_b > t)\} = \alpha.$$
If $y < x < b$, then
$$P^x(\sigma_I > t) \le P^x(T_b > t) \le P^y(T_b > t) \le \alpha.$$
Applying the same reasoning to $a < x < y$, we thus get
$$\sup_{x \in I} P^x(\sigma_I > t) \le \alpha < 1.$$
Now, since $\sigma_I = u + \sigma_I \circ \theta_u$ on $[\sigma_I > u]$, we have
$$P^x(\sigma_I > nt) = P^x\big([\sigma_I > (n-1)t] \cap [(n-1)t + \sigma_I \circ \theta_{(n-1)t} > nt]\big).$$
Thus, in view of the Markov property,
$$P^x(\sigma_I > nt) = E^x\Big[\mathbf{1}_{[\sigma_I > (n-1)t]}\, E^{X_{(n-1)t}}\big[\mathbf{1}_{[\sigma_I > t]}\big]\Big].$$
However, on $[\sigma_I > (n-1)t]$ we have $X_{(n-1)t} \in I$, thus $E^{X_{(n-1)t}}\big[\mathbf{1}_{[\sigma_I > t]}\big] \le \alpha$, so that
$$P^x(\sigma_I > nt) \le \alpha\, P^x(\sigma_I > (n-1)t),$$
and consequently $P^x(\sigma_I > nt) \le \alpha^n$ for every $x \in I$. Therefore,
$$\sup_{x \in I} E^x[\sigma_I] = \sup_{x \in I} \int_0^\infty P^x(\sigma_I > s)\,ds = \sup_{x \in I} \sum_{n=0}^\infty \int_{nt}^{(n+1)t} P^x(\sigma_I > s)\,ds \le \sum_{n=0}^\infty t\, \alpha^n = \frac{t}{1-\alpha}. \qquad \square$$

In view of the above proposition, for any $l \le a < x < b \le r$ we have
$$P^x[T_a < T_b] + P^x[T_b < T_a] = 1.$$
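The simplest process satisfying the standing assumptions is standard Brownian motion, for which $m_I(x) = (x-a)(b-x)$ in closed form and, anticipating Theorem 6.1 below, the scale function can be taken to be $s(x) = x$. The following Python sketch is not part of the notes; all numerical parameters are illustrative assumptions. It estimates the mean exit time and the exit probabilities by Euler simulation and compares them with these closed forms.

```python
import numpy as np

# Euler Monte Carlo for standard Brownian motion started at x0 in I = (a, b).
# Parameters are illustrative assumptions of this sketch.
rng = np.random.default_rng(0)
a, b, x0 = -1.0, 2.0, 0.5
dt, n_paths = 1e-3, 10_000

x = np.full(n_paths, x0)
t = np.zeros(n_paths)
alive = np.ones(n_paths, dtype=bool)          # paths for which sigma_I has not occurred yet
exit_at_b = np.zeros(n_paths, dtype=bool)

while alive.any():
    n = int(alive.sum())
    x[alive] += np.sqrt(dt) * rng.standard_normal(n)
    t[alive] += dt
    exit_at_b |= alive & (x >= b)             # records the event {T_b < T_a}
    alive &= (x > a) & (x < b)                # by Prop. 6.1 every path eventually exits

print("estimated  E^x[sigma_I]     :", t.mean())
print("closed form (x0-a)*(b-x0)   :", (x0 - a) * (b - x0))
print("estimated  P^x(T_b < T_a)   :", exit_at_b.mean())
print("(x0-a)/(b-a), since s(x) = x:", (x0 - a) / (b - a))
```

Shrinking the step size reduces the bias caused by the discretized path overshooting the boundary between grid times.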
Theorem 6.1. There exists a continuous and strictly increasing function $s$ on $E$ such that for any $a, b, x$ in $E$ with $l \le a < x < b \le r$,
$$P^x(T_b < T_a) = \frac{s(x) - s(a)}{s(b) - s(a)}.$$
Moreover, if $\tilde{s}$ is another function with the same properties, then $\tilde{s} = \alpha s + \beta$ for some $\alpha > 0$.

Proof. We will only give the proof of the formula when $E$ is a closed and bounded interval. For the rest of the proof and the extension to the general case see the proof of Proposition 3.2 in Chap. VII of Revuz & Yor (or better, try it yourself!). First, observe that
$$[T_r < T_l] = [T_r < T_l,\, T_a < T_b] \cup [T_r < T_l,\, T_b < T_a],$$
and that $T_l = T_a + T_l \circ \theta_{T_a}$ and $T_r = T_a + T_r \circ \theta_{T_a}$ on $[T_a < T_b]$. Thus,
$$P^x(T_r < T_l,\, T_a < T_b) = E^x\big[\mathbf{1}_{[T_a < T_b]}\, \mathbf{1}_{[T_r < T_l]} \circ \theta_{T_a}\big].$$
Using the strong Markov property at $T_a$ and that $X_{T_a} = a$ when $T_a < \infty$, we obtain
$$P^x(T_r < T_l,\, T_a < T_b) = P^x(T_a < T_b)\, P^a(T_r < T_l).$$
Similarly,
$$P^x(T_r < T_l,\, T_b < T_a) = P^x(T_b < T_a)\, P^b(T_r < T_l),$$
so that
$$P^x(T_r < T_l) = P^x(T_a < T_b)\, P^a(T_r < T_l) + P^x(T_b < T_a)\, P^b(T_r < T_l).$$
Setting $s(x) = P^x(T_r < T_l)$ and solving for $P^x(T_b < T_a)$, we get the formula in the statement. $\square$

Definition 6.1. The function $s$ in the above result is called the scale function of $X$. Observe that $s$ is unique only up to an affine transformation. We will say that $X$ is on natural scale when $s(x) = x$.

Moreover, if we apply the scale function to $X$, we get the following.

Proposition 6.2. Let $\tilde{X} := s(X)$. Then $\tilde{X}$ satisfies the standing assumptions of this section and is on natural scale.

Let us put $R = T_r \wedge T_l$.

Theorem 6.2. Let $f$ be a locally bounded and increasing Borel function. Then $f$ is a scale function if and only if the stopped process $f(X)^R$ is a local martingale.

Proof. We refer to the proof of Proposition 3.5 in Chap. VII of Revuz & Yor for the proof that $s(X)^R$ is a local martingale. We now give the proof of the converse. Suppose that $f(X)^R$ is a local martingale and let $[a, b]$ be a closed interval in the interior of $E$. Then $f(X)^{T_a \wedge T_b}$ is a bounded martingale. Thus, an application of the Optional Stopping Theorem yields
$$f(x) = f(a)\, P^x(T_a < T_b) + f(b)\, P^x(T_b < T_a).$$
Since $P^x(T_a < T_b) + P^x(T_b < T_a) = 1$, solving for $P^x(T_b < T_a)$ shows that $f$ is a scale function. $\square$

Exercise 6.1. Suppose that $X$ is a diffusion with the infinitesimal generator
$$L = \frac{1}{2}\sigma^2(x)\frac{d^2}{dx^2} + b(x)\frac{d}{dx},$$
where $\sigma$ and $b$ are locally bounded functions on $E$ and $\sigma > 0$. Prove that the scale function is given by
$$s(x) = \int_c^x \exp\left(-\int_c^y 2 b(z)\,\sigma^{-2}(z)\,dz\right) dy,$$
where $c$ is an arbitrary point in the interior of $E$.

Exercise 6.2. A 3-dimensional Bessel process is a Markov process on $(0, \infty)$ satisfying
$$X_t = x + \int_0^t \frac{1}{X_s}\,ds + B_t, \qquad x > 0.$$
Compute its scale function.
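To see Exercise 6.1 and Theorem 6.1 work together numerically, here is a hedged sketch for Brownian motion with constant drift, $dX_t = \mu\,dt + dB_t$, so that $b \equiv \mu$ and $\sigma \equiv 1$; the choice of process and every numerical parameter below are assumptions of this illustration rather than material from the notes. The scale function is evaluated by quadrature directly from the formula of Exercise 6.1, and the resulting hitting probability $(s(x)-s(a))/(s(b)-s(a))$ is compared with a crude Euler Monte Carlo estimate.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative diffusion (an assumption of this sketch): dX_t = mu dt + dB_t,
# i.e. b(x) = mu and sigma(x) = 1 in the notation of Exercise 6.1.
mu = 0.7
a, b, x0, c = -1.0, 1.0, 0.0, 0.0

def scale(x):
    """Exercise 6.1: s(x) = int_c^x exp( - int_c^y 2 b(z) / sigma(z)^2 dz ) dy."""
    inner = lambda y: np.exp(-quad(lambda z: 2.0 * mu, c, y)[0])
    return quad(inner, c, x)[0]

p_scale = (scale(x0) - scale(a)) / (scale(b) - scale(a))   # Theorem 6.1

# Crude Euler Monte Carlo estimate of P^x(T_b < T_a).
rng = np.random.default_rng(1)
dt, n_paths, hits_b = 1e-3, 2_000, 0
for _ in range(n_paths):
    x = x0
    while a < x < b:
        x += mu * dt + np.sqrt(dt) * rng.standard_normal()
    hits_b += x >= b

print("scale-function formula:", p_scale)
print("Monte Carlo estimate  :", hits_b / n_paths)
```

For constant drift the inner integral can of course be done by hand ($s(x)$ is proportional to $1 - e^{-2\mu(x-c)}$), but the quadrature form mirrors the general recipe of Exercise 6.1.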
Since $X$ has continuous paths, $s(X)$ is a continuous local martingale. Thus, it can be viewed as a time-changed Brownian motion. This time change can be written explicitly in terms of a so-called speed measure.

Recall that $m_I(x) = E^x[\sigma_I]$. Now let $J = (c, d)$ be a subinterval of $I$. By the strong Markov property, for any $a < c < x < d < b$ one has
$$m_I(x) = E^x[\sigma_J + \sigma_I \circ \theta_{\sigma_J}] = m_J(x) + \frac{s(d)-s(x)}{s(d)-s(c)}\, m_I(c) + \frac{s(x)-s(c)}{s(d)-s(c)}\, m_I(d) > \frac{s(d)-s(x)}{s(d)-s(c)}\, m_I(c) + \frac{s(x)-s(c)}{s(d)-s(c)}\, m_I(d),$$
which says that $m_I$ is an $s$-concave function. Thus, one can define a Green's function $G_I$ on $E \times E$ by
$$G_I(x, y) = \begin{cases} \dfrac{(s(x)-s(a))(s(b)-s(y))}{s(b)-s(a)}, & \text{if } a \le x \le y \le b;\\[6pt] \dfrac{(s(y)-s(a))(s(b)-s(x))}{s(b)-s(a)}, & \text{if } a \le y \le x \le b;\\[6pt] 0, & \text{otherwise;}\end{cases}$$
so that there exists a measure $\mu_I$ on the interior of $E$ such that
$$m_I(x) = \int G_I(x, y)\, \mu_I(dy).$$
In fact, we have more. The measure $\mu_I$ does not depend on $I$ (see Theorem 3.6 in Chap. VII of Revuz & Yor) and one can write
$$m_I(x) = \int G_I(x, y)\, m(dy).$$
The measure $m$ in the above representation is called the speed measure of $X$. The next two results state some of the basic properties of the speed measure.

Proposition 6.3. For any positive Borel function $f$,
$$E^x\left[\int_0^{T_a \wedge T_b} f(X_s)\,ds\right] = \int G_I(x, y)\, f(y)\, m(dy).$$

Proof. See the proof of Corollary 3.8 in Chap. VII of Revuz & Yor. $\square$

Theorem 6.3. For a bounded $f \in \mathcal{D}_A$ and $x \in \operatorname{int}(E)$,
$$Af(x) = \frac{d}{dm}\frac{d}{ds} f(x)$$
in the sense that
i) the $s$-derivative $\frac{df}{ds}$ exists except possibly on the set $\{x : m(\{x\}) > 0\}$;
ii) if $x_1$ and $x_2$ are two points for which this $s$-derivative exists, then
$$\frac{df}{ds}(x_2) - \frac{df}{ds}(x_1) = \int_{x_1}^{x_2} Af(y)\, m(dy).$$

Proof. If $f \in \mathcal{D}_A$ (the domain of the generator $A$), then $M^f$ is a martingale, where
$$M^f_t = f(X_t) - f(X_0) - \int_0^t Af(X_s)\,ds.$$
Moreover, for any stopping time $T$ with $E^x[T] < \infty$, $(M^f_{t \wedge T})$ is a uniformly integrable martingale since $|M^f_{t \wedge T}| \le 2\|f\| + T\|Af\|$. Thus, if we choose $T = T_a \wedge T_b$, the Optional Stopping Theorem gives
$$E^x[f(X_T)] = f(x) + E^x\left[\int_0^T Af(X_s)\,ds\right].$$
Thus, it follows from the preceding result that
$$f(a)(s(b)-s(x)) + f(b)(s(x)-s(a)) - f(x)(s(b)-s(a)) = (s(b)-s(a)) \int_I G_I(x, y)\, Af(y)\, m(dy). \tag{6.1}$$
It can be directly verified that
$$\frac{f(b)-f(x)}{s(b)-s(x)} - \frac{f(x)-f(a)}{s(x)-s(a)} = \int_I H_I(x, y)\, Af(y)\, m(dy),$$
where
$$H_I(x, y) = \begin{cases} \dfrac{s(y)-s(a)}{s(x)-s(a)}, & \text{if } a < y \le x;\\[6pt] \dfrac{s(b)-s(y)}{s(b)-s(x)}, & \text{if } x \le y < b;\\[6pt] 0, & \text{otherwise.}\end{cases}$$
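For standard Brownian motion the objects above can all be written down: $s(x) = x$, the speed measure is $m(dy) = 2\,dy$, and Theorem 6.3 then reads $\frac{d}{dm}\frac{d}{ds} f = \frac{1}{2} f''$, the familiar generator. The sketch below (illustrative parameters and test function only, not part of the notes) checks Proposition 6.3 in this case by comparing a Monte Carlo estimate of $E^x\big[\int_0^{T_a \wedge T_b} f(X_s)\,ds\big]$ with the quadrature value of $\int_a^b G_I(x, y) f(y)\, m(dy)$.

```python
import numpy as np

# Standard Brownian motion on I = (a, b): s(x) = x and speed measure m(dy) = 2 dy.
# Parameters and the test function f are illustrative assumptions of this sketch.
rng = np.random.default_rng(2)
a, b, x0 = 0.0, 1.0, 0.3
f = lambda y: y * (1.0 - y)                  # a positive Borel function on (a, b)

# Left-hand side of Proposition 6.3 by Euler simulation.
dt, n_paths, acc = 1e-3, 4_000, 0.0
for _ in range(n_paths):
    x = x0
    while a < x < b:
        acc += f(x) * dt                     # accumulates int_0^{T_a ^ T_b} f(X_s) ds
        x += np.sqrt(dt) * rng.standard_normal()
lhs = acc / n_paths

# Right-hand side: int_a^b G_I(x0, y) f(y) m(dy) with m(dy) = 2 dy and s(x) = x.
def G(x, y):
    if a <= x <= y <= b:
        return (x - a) * (b - y) / (b - a)
    if a <= y <= x <= b:
        return (y - a) * (b - x) / (b - a)
    return 0.0

ys = np.linspace(a, b, 2001)
rhs = np.trapz([G(x0, y) * f(y) * 2.0 for y in ys], ys)

print("E^x[ int f(X_s) ds ]  (Monte Carlo):", lhs)
print("int G_I(x,y) f(y) m(dy) (quadrature):", rhs)
```

The two numbers should agree up to Monte Carlo noise and the discretization bias of the Euler scheme.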