Connection Between Martingale Problems and Markov Processes
Skript for the IRTG Summer School Bielefeld, August 2019
Author: Franziska Kühn, IMT Toulouse

Contents

Index of Notation
1 Introduction
2 Martingale Problems
   2.1 Existence of solutions
   2.2 Uniqueness
   2.3 Markov property
3 Connection between SDEs and martingale problems
4 Markov processes
   4.1 Semigroups
   4.2 Infinitesimal generator
   4.3 From Markov processes to martingale problems
Bibliography

Index of Notation

Probability/measure theory
• $\sim$, $\overset{d}{=}$: distributed as, equal in distribution
• $\mathbb{P}_X$: distribution of $X$ w.r.t. $\mathbb{P}$
• $\mathcal{B}(\mathbb{R}^d)$: Borel $\sigma$-algebra on $\mathbb{R}^d$
• $\delta_x$: Dirac measure centered at $x \in \mathbb{R}$

Spaces of functions
• $\mathcal{B}_b(\mathbb{R}^d)$: space of bounded Borel measurable functions $f\colon \mathbb{R}^d \to \mathbb{R}$
• $C_b(\mathbb{R}^d)$: space of bounded continuous functions $f\colon \mathbb{R}^d \to \mathbb{R}$
• $C_\infty(\mathbb{R}^d)$: space of continuous functions $f\colon \mathbb{R}^d \to \mathbb{R}$ vanishing at infinity, $\lim_{|x|\to\infty} |f(x)| = 0$
• $C_c^\infty(\mathbb{R}^d)$: space of smooth functions $f\colon \mathbb{R}^d \to \mathbb{R}$ with compact support
• $D[0,\infty)$: Skorohod space = space of càdlàg functions $f\colon [0,\infty) \to \mathbb{R}^d$
• $\mathcal{D}(A)$: domain of the operator $A$

Analysis
• $\wedge$: minimum
• $\|\cdot\|_\infty$: uniform norm
• càdlàg: finite left limits and right-continuous
• $\operatorname{tr}(A)$: trace of $A$
• $\nabla^2 f$: Hessian of $f$

1 Introduction

Central question: How can stochastic processes be characterized in terms of martingale properties? We start with two simple examples: Brownian motion and the Poisson process.

1.1 Definition A stochastic process $(B_t)_{t\ge 0}$ is a Brownian motion if
• $B_0 = 0$ almost surely,
• $B_{t_1}-B_{t_0}, \dots, B_{t_n}-B_{t_{n-1}}$ are independent for all $0 = t_0 < t_1 < \dots < t_n$ (independent increments),
• $B_t - B_s \sim N(0, (t-s)\,\mathrm{id})$ for all $s \le t$,
• $t \mapsto B_t(\omega)$ is continuous for all $\omega$ (continuous sample paths).

1.2 Theorem (Lévy) Let $(X_t)_{t\ge 0}$ be a 1-dimensional stochastic process with continuous sample paths and $X_0 = 0$. Then $(X_t)_{t\ge 0}$ is a Brownian motion if and only if $(X_t)_{t\ge 0}$ and $(X_t^2 - t)_{t\ge 0}$ are (local) martingales.

A beautiful result: only two martingales are needed to capture all the information about Brownian motion.

Remark The assumption of continuous sample paths is needed (consider $X_t = N_t - t$ for a Poisson process $(N_t)_{t\ge 0}$ with intensity 1).

1.3 Definition A stochastic process $(N_t)_{t\ge 0}$ is a Poisson process with intensity $\lambda > 0$ if
• $N_0 = 0$ almost surely,
• $(N_t)_{t\ge 0}$ has independent increments,
• $N_t - N_s \sim \mathrm{Poi}(\lambda(t-s))$ for all $s \le t$,
• $t \mapsto N_t(\omega)$ is càdlàg (= right-continuous with finite left-hand limits).

1.4 Theorem (Watanabe) Let $(N_t)_{t\ge 0}$ be a counting process, i.e. a process with $N_0 = 0$ which is constant except for jumps of height $+1$. Then $(N_t)_{t\ge 0}$ is a Poisson process with intensity $\lambda$ if and only if $(N_t - \lambda t)_{t\ge 0}$ is a martingale.

Outline of this lecture series:
• set up a general framework for describing processes via martingales (→ martingale problems, §2),
• study the connection between martingale problems and Markov processes (→ §4),
• application: study solutions to stochastic differential equations (→ §3).
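Before setting up the general framework, here is a quick numerical aside (not part of the original Skript; a minimal sketch assuming NumPy is available). It simulates the three martingales appearing in Theorems 1.2 and 1.4, namely $B_t$, $B_t^2 - t$ and $N_t - \lambda t$, and checks the most basic consequence of the martingale property, that $\mathbb{E}(M_t) = \mathbb{E}(M_0) = 0$ for all $t$.

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, T, lam = 50_000, 200, 1.0, 2.0
dt = T / n_steps
t = np.linspace(dt, T, n_steps)          # time grid t_1, ..., t_n

# Brownian motion: cumulative sums of independent N(0, dt) increments.
B = np.cumsum(rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps)), axis=1)

# Poisson process with intensity lam: cumulative sums of Poi(lam*dt) increments.
N = np.cumsum(rng.poisson(lam * dt, (n_paths, n_steps)), axis=1)

# Martingales from Theorems 1.2 and 1.4: sample means should stay near 0 in t.
for name, M in [("B_t", B), ("B_t^2 - t", B**2 - t), ("N_t - lam*t", N - lam * t)]:
    drift = np.abs(M.mean(axis=0)).max()   # worst deviation of mean(M_t) from 0 over the grid
    print(f"max_t |mean({name})| = {drift:.4f}")   # small, of Monte Carlo order
```

A constant expectation is of course only a necessary condition; the full martingale property involves conditioning on $\mathcal{F}_s$. As the remark after Theorem 1.2 points out, it is the continuity of the sample paths that distinguishes Brownian motion from the compensated Poisson process with intensity 1, even though both are martingales.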
2 Martingale Problems

2.1 Definition Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space.
(i) A filtration $(\mathcal{F}_t)_{t\ge 0}$ is a family of sub-$\sigma$-algebras of $\mathcal{F}$ such that $\mathcal{F}_s \subseteq \mathcal{F}_t$ for $s \le t$.
(ii) A stochastic process $(M_t)_{t\ge 0}$ is a martingale (with respect to $(\mathcal{F}_t)_{t\ge 0}$ and $\mathbb{P}$) if
• $(M_t, \mathcal{F}_t)_{t\ge 0}$ is an adapted process, i.e. $M_t$ is $\mathcal{F}_t$-measurable for all $t \ge 0$,
• $M_t \in L^1(\mathbb{P})$ for all $t \ge 0$,
• $\mathbb{E}(M_t \mid \mathcal{F}_s) = M_s$ for all $s \le t$.

Standing assumption: processes have càdlàg sample paths $t \mapsto X_t(\omega) \in \mathbb{R}^d$. Skorohod space:
$$D[0,\infty) := \{ f\colon [0,\infty) \to \mathbb{R}^d;\ f \text{ càdlàg} \}.$$

Fix a linear operator $L\colon \mathcal{D} \to \mathcal{B}_b(\mathbb{R}^d)$ with domain $\mathcal{D} \subseteq \mathcal{B}_b(\mathbb{R}^d)$. Notation: $(L, \mathcal{D})$.

2.2 Definition Let $\mu$ be a probability measure on $\mathbb{R}^d$. A (càdlàg) stochastic process $(X_t)_{t\ge 0}$ on a probability space $(\Omega, \mathcal{F}, \mathbb{P}^\mu)$ is a solution to the $(L, \mathcal{D})$-martingale problem with initial distribution $\mu$ if $\mathbb{P}^\mu(X_0 \in \cdot) = \mu(\cdot)$ and
$$M_t^f := f(X_t) - f(X_0) - \int_0^t Lf(X_s)\, ds$$
is, for any $f \in \mathcal{D}$, a martingale w.r.t. the canonical filtration $\mathcal{F}_t := \sigma(X_s;\, s \le t)$ and $\mathbb{P}^\mu$.

2.3 Remark (i) For $\mu = \delta_x$ we write $\mathbb{P}^x$ instead of $\mathbb{P}^{\delta_x}$. Moreover, $\mathbb{E}^x := \int \cdot \, d\mathbb{P}^x$.
(ii) Sometimes it is convenient to fix the measurable space. One chooses $\Omega = D[0,\infty)$, the Skorohod space, and $X(t,\omega) = \omega(t)$, the canonical process (possible WLOG, cf. Lemma 2.9 below).

Next: existence and uniqueness of solutions. Rule of thumb: existence is much easier to prove than uniqueness.

2.1 Existence of solutions

2.4 Definition A linear operator $(L, \mathcal{D})$ satisfies the positive maximum principle if
$$f \in \mathcal{D},\ f(x_0) = \sup_{x \in \mathbb{R}^d} f(x) \ge 0 \implies Lf(x_0) \le 0.$$

2.5 Lemma Let $(L, \mathcal{D})$ be a linear operator on $C_b(\mathbb{R}^d)$ (i.e. $\mathcal{D} \subseteq C_b(\mathbb{R}^d)$ and $L(\mathcal{D}) \subseteq C_b(\mathbb{R}^d)$). If there exists for any $x \in \mathbb{R}^d$ a solution to the $(L, \mathcal{D})$-martingale problem with initial distribution $\mu = \delta_x$, then $(L, \mathcal{D})$ satisfies the positive maximum principle.

Proof. Fix $f \in \mathcal{D}$ and $x_0 \in \mathbb{R}^d$ with $f(x_0) = \sup_{x \in \mathbb{R}^d} f(x) \ge 0$. Let $(X_t)_{t\ge 0}$ be a solution with $X_0 = x_0$. Then
$$0 \ge \mathbb{E}^{x_0}\big(f(X_t)\big) - f(x_0) = \mathbb{E}^{x_0}\Big( \int_0^t Lf(X_s)\, ds \Big),$$
and therefore it follows from the right-continuity of $s \mapsto \mathbb{E}^{x_0}\big(Lf(X_s)\big)$ that
$$Lf(x_0) = \lim_{t \to 0} \frac{1}{t} \int_0^t \mathbb{E}^{x_0}\big(Lf(X_s)\big)\, ds \le 0.$$

Denote by $C_\infty(\mathbb{R}^d)$ the space of continuous functions vanishing at infinity, $\lim_{|x|\to\infty} f(x) = 0$.

2.6 Theorem ([4]) Suppose that
(i) $\mathcal{D} \subseteq C_\infty(\mathbb{R}^d)$ is dense in $C_\infty(\mathbb{R}^d)$ and $L\colon \mathcal{D} \to C_\infty(\mathbb{R}^d)$,
(ii) $L$ satisfies the positive maximum principle on $\mathcal{D}$,
(iii) there exists a sequence $(f_n)_{n \in \mathbb{N}} \subseteq \mathcal{D}$ such that
$$\lim_{n\to\infty} f_n(x) = 1 \quad \text{and} \quad \lim_{n\to\infty} Lf_n(x) = 0$$
for all $x \in \mathbb{R}^d$, and
$$\sup_{n \ge 1}\big( \|f_n\|_\infty + \|Lf_n\|_\infty \big) < \infty.$$
Then there exists for every probability measure $\mu$ a solution to the $(L, \mathcal{D})$-martingale problem with initial distribution $\mu$.

2.7 Remark If (iii) fails to hold, then there exists a solution which might explode in finite time.

DIY Define an operator $L$ on the smooth functions with compact support $C_c^\infty(\mathbb{R}^d)$ by
$$Lf(x) := b(x) \cdot \nabla f(x) + \frac{1}{2} \operatorname{tr}\big(Q(x) \nabla^2 f(x)\big) + \int_{y \ne 0} \big( f(x+y) - f(x) - y \cdot \nabla f(x)\, \mathbb{1}_{(0,1)}(|y|) \big)\, \nu(x, dy),$$
where for each $x \in \mathbb{R}^d$, $b(x) \in \mathbb{R}^d$, $Q(x) \in \mathbb{R}^{d \times d}$ is positive semidefinite and $\nu(x, dy)$ is a measure satisfying $\int \min\{1, |y|^2\}\, \nu(x, dy) < \infty$. Show that $L$ satisfies the positive maximum principle. (Extra: Find sufficient conditions on $b$, $Q$, $\nu$ such that Theorem 2.6 is applicable.)

2.2 Uniqueness

Next: introduce a notion of uniqueness. What is the proper notion in this context?

2.8 Definition Let $(X_t)_{t\ge 0}$ be a stochastic process on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$. If $(\tilde X_t)_{t\ge 0}$ is another stochastic process (possibly on a different probability space $(\tilde\Omega, \tilde{\mathcal{F}}, \tilde{\mathbb{P}})$), then $(X_t)_{t\ge 0}$ equals $(\tilde X_t)_{t\ge 0}$ in distribution if both processes have the same finite-dimensional distributions, i.e.
$$\mathbb{P}(X_{t_1} \in B_1, \dots, X_{t_n} \in B_n) = \tilde{\mathbb{P}}(\tilde X_{t_1} \in B_1, \dots, \tilde X_{t_n} \in B_n)$$
for any measurable sets $B_i$ and any times $t_i$. Notation: $(X_t)_{t\ge 0} \overset{d}{=} (\tilde X_t)_{t\ge 0}$.

Mind: In general, $X_t \overset{d}{=} \tilde X_t$ for all $t \ge 0$ does not imply $(X_t)_{t\ge 0} \overset{d}{=} (\tilde X_t)_{t\ge 0}$. Consider for instance a Brownian motion $(X_t)_{t\ge 0}$ and $\tilde X_t := \sqrt{t}\, X_1$.
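This last point is easy to check numerically. The following sketch (an illustrative aside, not part of the Skript; it assumes NumPy) samples $(X_s, X_t)$ for a Brownian motion and for $\tilde X_u := \sqrt{u}\, X_1$ at two fixed times $s < t$: the one-dimensional marginals agree (both are $N(0, u)$ at time $u$), but the covariances of the pairs differ, so the finite-dimensional distributions do not coincide.

```python
import numpy as np

rng = np.random.default_rng(1)
n, s, t = 200_000, 0.25, 1.0

# Brownian motion at times s < t: X_s ~ N(0, s), X_t = X_s + independent N(0, t - s).
X_s = rng.normal(0.0, np.sqrt(s), n)
X_t = X_s + rng.normal(0.0, np.sqrt(t - s), n)

# Comparison process: tilde(X)_u = sqrt(u) * X_1 with X_1 ~ N(0, 1).
X1 = rng.normal(0.0, 1.0, n)
Y_s, Y_t = np.sqrt(s) * X1, np.sqrt(t) * X1

print("Var(X_s), Var(Y_s):", X_s.var(), Y_s.var())    # both close to s = 0.25
print("Cov(X_s, X_t):", np.cov(X_s, X_t)[0, 1])        # close to min(s, t) = 0.25
print("Cov(Y_s, Y_t):", np.cov(Y_s, Y_t)[0, 1])        # close to sqrt(s * t) = 0.5
```

For Brownian motion $\operatorname{Cov}(X_s, X_t) = s \wedge t$, whereas $\operatorname{Cov}(\sqrt{s}\,X_1, \sqrt{t}\,X_1) = \sqrt{st}$; with $s = 0.25$ and $t = 1$ this is $0.25$ versus $0.5$. This is exactly why uniqueness for martingale problems is formulated in terms of finite-dimensional distributions.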
2.9 Lemma Let $(X_t)_{t\ge 0}$ be a solution to the $(L, \mathcal{D})$-martingale problem with initial distribution $\mu$. If $(Y_t)_{t\ge 0}$ is another càdlàg process such that $(X_t)_{t\ge 0} \overset{d}{=} (Y_t)_{t\ge 0}$, then $(Y_t)_{t\ge 0}$ is a solution to the $(L, \mathcal{D})$-martingale problem with initial distribution $\mu$.

Idea of proof. A càdlàg process $(Z_t)_{t\ge 0}$ (with initial distribution $\mu$) solves the $(L, \mathcal{D})$-martingale problem (with initial distribution $\mu$) iff
$$\mathbb{E}^\mu\bigg( \Big( f(Z_t) - f(Z_s) - \int_s^t Lf(Z_r)\, dr \Big) \prod_{j=1}^k h_j(Z_{t_j}) \bigg) = 0$$
for any $t_1 < \dots < t_k \le s \le t$ and $h_j \in \mathcal{B}_b(\mathbb{R}^d)$. This is an assertion on the finite-dimensional distributions! (Put $Z = X$ and then replace $X$ by $Y$.)

Lemma 2.9 motivates the following definition.

2.10 Definition (i) Uniqueness holds for the $(L, \mathcal{D})$-martingale problem with initial distribution $\mu$ if any two solutions have the same finite-dimensional distributions.
(ii) The $(L, \mathcal{D})$-martingale problem is well posed if for any initial distribution $\mu$ there exists a unique solution to the $(L, \mathcal{D})$-martingale problem with initial distribution $\mu$.

Next result: uniqueness of the finite-dimensional distributions follows from uniqueness of the one-dimensional distributions. This is reminiscent of Markov processes.

2.11 Proposition ([4, Theorem 4.4.2]) Assume that for any initial distribution $\mu$ and any two solutions $(X_t)_{t\ge 0}$ and $(Y_t)_{t\ge 0}$ to the $(L, \mathcal{D})$-martingale problem with initial distribution $\mu$ it holds that
$$X_t \overset{d}{=} Y_t \quad \text{for all } t \ge 0.$$
Then uniqueness holds for any initial distribution $\mu$, i.e. any two solutions with the same initial distribution have the same finite-dimensional distributions.
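To make Definition 2.2 concrete, here is a final numerical aside (again not part of the Skript; a minimal sketch assuming NumPy, with a simple Euler-type time discretization). It takes the classical example in which one-dimensional Brownian motion solves the martingale problem for $Lf = \frac{1}{2} f''$, picks one smooth test function, and checks the necessary condition $\mathbb{E}^{x_0}(M_T^f) = 0$; this illustrates the definition, it does not prove anything.

```python
import numpy as np

rng = np.random.default_rng(2)
n_paths, n_steps, T, x0 = 100_000, 400, 1.0, 0.3
dt = T / n_steps

def f(x):
    # smooth test function; not compactly supported, but f, f', f'' are bounded,
    # which is enough for the martingale property of M_t^f under Brownian motion
    return np.exp(-x**2)

def Lf(x):
    # Lf = (1/2) f'' for f(x) = exp(-x^2), i.e. (2x^2 - 1) exp(-x^2)
    return (2 * x**2 - 1) * np.exp(-x**2)

X = np.full(n_paths, x0)          # X_0 = x0 on every path
integral = np.zeros(n_paths)      # left-endpoint approximation of int_0^T Lf(X_s) ds
f0 = f(X)

for _ in range(n_steps):
    integral += Lf(X) * dt
    X += rng.normal(0.0, np.sqrt(dt), n_paths)   # exact Brownian increment

M_T = f(X) - f0 - integral        # M_T^f from Definition 2.2
# should be close to 0, up to Monte Carlo and Riemann-sum discretization error
print("mean(M_T^f) =", M_T.mean(), "+/-", M_T.std() / np.sqrt(n_paths))
```

The same template can be used to probe other candidate generators, e.g. the Lévy-type operator from the DIY exercise, although checking the expectation at a single time is of course far weaker than the full martingale property required in Definition 2.2.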