C. Geiss and S. Geiss: Markov Processes

Markov processes

C. Geiss and S. Geiss

September 18, 2020

Contents

1 Introduction
2 Definition of a Markov process
3 Existence of Markov processes
4 Strong Markov processes
  4.1 Stopping times and optional times
  4.2 Strong Markov property
  4.3 Lévy processes are strong Markov
  4.4 Right-continuous filtrations
5 The semigroup/infinitesimal generator approach
  5.1 Contraction semigroups
  5.2 Infinitesimal generator
  5.3 Martingales and Dynkin's formula
6 Weak solutions of SDEs and martingale problems
7 Feller processes
  7.1 Feller semigroups, Feller transition functions and Feller processes
  7.2 Càdlàg modifications of Feller processes
A Appendix

1 Introduction

Why should one study Markov processes?

- Markov processes are quite general: a Brownian motion is a Lévy process, Lévy processes are Feller processes, Feller processes are Hunt processes, and the class of Markov processes comprises all of them.
- Solutions of certain SDEs are Markov processes.
- There exist many useful relations between Markov processes and
  - martingale problems,
  - diffusions,
  - second order differential and integral operators,
  - Dirichlet forms.

2 Definition of a Markov process

Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a complete probability space and $(E, r)$ a complete separable metric space. By $(E, \mathcal{E})$ we denote a measurable space, and let
$$\mathbb{T} \subseteq \mathbb{R} \cup \{\infty\} \cup \{-\infty\}.$$
We call $X = \{X_t,\ t \in \mathbb{T}\}$ a stochastic process if $X_t : (\Omega, \mathcal{F}) \to (E, \mathcal{E})$ for all $t \in \mathbb{T}$. The map $t \mapsto X_t(\omega)$ is called a path of $X$. We say that $\mathbb{F} = \{\mathcal{F}_t,\ t \in \mathbb{T}\}$ is a filtration if $\mathcal{F}_t \subseteq \mathcal{F}$ is a sub-$\sigma$-algebra for any $t \in \mathbb{T}$ and $\mathcal{F}_s \subseteq \mathcal{F}_t$ holds for $s \le t$. The process $X$ is adapted to $\mathbb{F}$ if and only if (by definition) $X_t$ is $\mathcal{F}_t$-measurable for all $t \in \mathbb{T}$. Obviously, $X$ is always adapted to its natural filtration $\mathbb{F}^X = \{\mathcal{F}^X_t,\ t \in \mathbb{T}\}$ given by $\mathcal{F}^X_t = \sigma(X_s;\ s \le t,\ s \in \mathbb{T})$.

Definition 2.1 (Markov process).
The stochastic process $X$ is a Markov process w.r.t. $\mathbb{F}$ if and only if (by definition)

(1) $X$ is adapted to $\mathbb{F}$;
(2) for all $t \in \mathbb{T}$:
$$\mathbb{P}(A \cap B \mid X_t) = \mathbb{P}(A \mid X_t)\,\mathbb{P}(B \mid X_t) \quad \text{a.s.}$$
whenever $A \in \mathcal{F}_t$ and $B \in \sigma(X_s;\ s \ge t)$.

(In other words: for all $t \in \mathbb{T}$ the $\sigma$-algebras $\mathcal{F}_t$ and $\sigma(X_s;\ s \ge t,\ s \in \mathbb{T})$ are conditionally independent given $X_t$.)

Remark 2.2. (1) Recall that we define conditional probability using conditional expectation: $\mathbb{P}(C \mid X_t) := \mathbb{P}(C \mid \sigma(X_t)) = \mathbb{E}[\mathbf{1}_C \mid \sigma(X_t)]$.
(2) If $X$ is a Markov process w.r.t. $\mathbb{F}$, then $X$ is a Markov process w.r.t. $\mathbb{G} = \{\mathcal{G}_t,\ t \in \mathbb{T}\}$ with $\mathcal{G}_t = \sigma(X_s;\ s \le t,\ s \in \mathbb{T})$.
(3) If $X$ is a Markov process w.r.t. its natural filtration, the Markov property is preserved if one reverses the order in $\mathbb{T}$.

Theorem 2.3. Let $X$ be $\mathbb{F}$-adapted. TFAE:

(i) $X$ is a Markov process w.r.t. $\mathbb{F}$.
(ii) For each $t \in \mathbb{T}$ and each bounded $\sigma(X_s;\ s \ge t,\ s \in \mathbb{T})$-measurable $Y$ one has
$$\mathbb{E}[Y \mid \mathcal{F}_t] = \mathbb{E}[Y \mid X_t]. \qquad (1)$$
(iii) If $s, t \in \mathbb{T}$ and $t \le s$, then
$$\mathbb{E}[f(X_s) \mid \mathcal{F}_t] = \mathbb{E}[f(X_s) \mid X_t] \qquad (2)$$
for all bounded $f : (E, \mathcal{E}) \to (\mathbb{R}, \mathcal{B}(\mathbb{R}))$.

Proof. (i) $\Longrightarrow$ (ii): Suppose (i) holds. The Monotone Class Theorem for functions (Theorem A.1) implies that it suffices to show (1) for $Y = \mathbf{1}_B$ where $B \in \sigma(X_s;\ s \ge t,\ s \in \mathbb{T})$. For $A \in \mathcal{F}_t$ we have
$$\begin{aligned}
\mathbb{E}\big(\mathbb{E}[Y \mid \mathcal{F}_t]\,\mathbf{1}_A\big)
&= \mathbb{E}\,\mathbf{1}_A \mathbf{1}_B = \mathbb{P}(A \cap B) = \mathbb{E}\,\mathbb{P}(A \cap B \mid X_t) \\
&= \mathbb{E}\,\mathbb{P}(A \mid X_t)\,\mathbb{P}(B \mid X_t) \\
&= \mathbb{E}\,\mathbb{E}[\mathbf{1}_A \mid X_t]\,\mathbb{P}(B \mid X_t) \\
&= \mathbb{E}\,\mathbf{1}_A\,\mathbb{P}(B \mid X_t) \\
&= \mathbb{E}\big(\mathbb{E}[Y \mid X_t]\,\mathbf{1}_A\big),
\end{aligned}$$
which implies (ii).

(ii) $\Longrightarrow$ (i): Assume (ii) holds. If $A \in \mathcal{F}_t$ and $B \in \sigma(X_s;\ s \ge t,\ s \in \mathbb{T})$, then
$$\begin{aligned}
\mathbb{P}(A \cap B \mid X_t)
&= \mathbb{E}[\mathbf{1}_{A \cap B} \mid X_t] \\
&= \mathbb{E}\big[\mathbb{E}[\mathbf{1}_{A \cap B} \mid \mathcal{F}_t] \mid X_t\big] \\
&= \mathbb{E}\big[\mathbf{1}_A\,\mathbb{E}[\mathbf{1}_B \mid \mathcal{F}_t] \mid X_t\big] \\
&= \mathbb{E}[\mathbf{1}_A \mid X_t]\,\mathbb{E}[\mathbf{1}_B \mid X_t],
\end{aligned}$$
which implies (i).

(ii) $\Longleftrightarrow$ (iii): The implication (ii) $\Longrightarrow$ (iii) is trivial. Assume that (iii) holds. We want to use the Monotone Class Theorem for functions. Let
$$\mathcal{H} := \{Y :\ Y \text{ is bounded and } \sigma(X_s;\ s \ge t,\ s \in \mathbb{T})\text{-measurable such that (1) holds}\}.$$
Then $\mathcal{H}$ is a vector space containing the constants and closed under bounded and monotone limits. We want that
$$\mathcal{H} = \{Y :\ Y \text{ is bounded and } \sigma(X_s;\ s \ge t,\ s \in \mathbb{T})\text{-measurable}\}.$$
It is enough to show that
$$Y = \prod_{i=1}^{n} f_i(X_{s_i}) \in \mathcal{H} \qquad (3)$$
for bounded $f_i : (E, \mathcal{E}) \to (\mathbb{R}, \mathcal{B}(\mathbb{R}))$ and $t \le s_1 < \dots < s_n$ ($n \in \mathbb{N}^*$).
(Notice that then especially $\mathbf{1}_A \in \mathcal{H}$ for any $A \in \mathcal{A}$ with
$$\mathcal{A} = \big\{\{\omega \in \Omega :\ X_{s_1} \in I_1, \dots, X_{s_n} \in I_n\} :\ I_k \in \mathcal{B}(\mathbb{R}),\ s_k \in \mathbb{T},\ s_k \ge t,\ n \in \mathbb{N}^*\big\}$$
and $\sigma(\mathcal{A}) = \sigma(X_s;\ s \ge t,\ s \in \mathbb{T})$.) We show (3) by induction on $n$.

$n = 1$: This is assertion (iii).

$n > 1$:
$$\begin{aligned}
\mathbb{E}[Y \mid \mathcal{F}_t]
&= \mathbb{E}\big[\mathbb{E}[Y \mid \mathcal{F}_{s_{n-1}}] \mid \mathcal{F}_t\big] \\
&= \mathbb{E}\Big[\prod_{i=1}^{n-1} f_i(X_{s_i})\,\mathbb{E}[f_n(X_{s_n}) \mid \mathcal{F}_{s_{n-1}}] \,\Big|\, \mathcal{F}_t\Big] \\
&= \mathbb{E}\Big[\prod_{i=1}^{n-1} f_i(X_{s_i})\,\mathbb{E}[f_n(X_{s_n}) \mid X_{s_{n-1}}] \,\Big|\, \mathcal{F}_t\Big].
\end{aligned}$$
By the Factorization Lemma (Lemma A.2) there exists an $h : (E, \mathcal{E}) \to (\mathbb{R}, \mathcal{B}(\mathbb{R}))$ such that $\mathbb{E}[f_n(X_{s_n}) \mid X_{s_{n-1}}] = h(X_{s_{n-1}})$. By the induction assumption,
$$\mathbb{E}\Big[\prod_{i=1}^{n-1} f_i(X_{s_i})\,h(X_{s_{n-1}}) \,\Big|\, \mathcal{F}_t\Big] = \mathbb{E}\Big[\prod_{i=1}^{n-1} f_i(X_{s_i})\,h(X_{s_{n-1}}) \,\Big|\, X_t\Big].$$
By the tower property, since $\sigma(X_t) \subseteq \mathcal{F}_{s_{n-1}}$,
$$\begin{aligned}
\mathbb{E}\Big[\prod_{i=1}^{n-1} f_i(X_{s_i})\,h(X_{s_{n-1}}) \,\Big|\, X_t\Big]
&= \mathbb{E}\Big[\prod_{i=1}^{n-1} f_i(X_{s_i})\,\mathbb{E}[f_n(X_{s_n}) \mid \mathcal{F}_{s_{n-1}}] \,\Big|\, X_t\Big] \\
&= \mathbb{E}\Big[\mathbb{E}\Big[\prod_{i=1}^{n-1} f_i(X_{s_i})\,f_n(X_{s_n}) \,\Big|\, \mathcal{F}_{s_{n-1}}\Big] \,\Big|\, X_t\Big] \\
&= \mathbb{E}\Big[\prod_{i=1}^{n} f_i(X_{s_i}) \,\Big|\, X_t\Big].
\end{aligned}$$

Definition 2.4 (transition function). Let $s, t \in \mathbb{T} \subseteq [0, \infty)$.

(1) The map $P_{t,s}(x, A)$, $0 \le t < s < \infty$, $x \in E$, $A \in \mathcal{E}$, is called a Markov transition function on $(E, \mathcal{E})$, provided that
(i) $A \mapsto P_{t,s}(x, A)$ is a probability measure on $(E, \mathcal{E})$ for each $(t, s, x)$;
(ii) $x \mapsto P_{t,s}(x, A)$ is $\mathcal{E}$-measurable for each $(t, s, A)$;
(iii) $P_{t,t}(x, A) = \delta_x(A)$;
(iv) if $0 \le t < s < u$, then the Chapman-Kolmogorov equation
$$P_{t,u}(x, A) = \int_E P_{s,u}(y, A)\,P_{t,s}(x, dy)$$
holds for all $x \in E$ and $A \in \mathcal{E}$.

(2) The Markov transition function $P_{t,s}(x, A)$ is homogeneous if and only if (by definition) there exists a map $\hat{P}_t(x, A)$ with $P_{t,s}(x, A) = \hat{P}_{s-t}(x, A)$ for all $0 \le t \le s$, $x \in E$, $A \in \mathcal{E}$.

(3) Let $X$ be adapted to $\mathbb{F}$ and let $P_{t,s}(x, A)$, with $0 \le t \le s$, $x \in E$, $A \in \mathcal{E}$, be a Markov transition function. We say that $X$ is a Markov process w.r.t. $\mathbb{F}$ having $P_{t,s}(x, A)$ as transition function if
$$\mathbb{E}[f(X_s) \mid \mathcal{F}_t] = \int_E f(y)\,P_{t,s}(X_t, dy) \qquad (4)$$
for all $0 \le t \le s$ and all bounded $f : (E, \mathcal{E}) \to (\mathbb{R}, \mathcal{B}(\mathbb{R}))$.

(4) Let $\mu$ be a probability measure on $(E, \mathcal{E})$ such that $\mu(A) = \mathbb{P}(X_0 \in A)$. Then $\mu$ is called the initial distribution of $X$.

Remark 2.5. (1) There exist Markov processes which do not possess transition functions (see [4], Remark 1.11, page 446).
(2) A Markov transition function for a Markov process is not necessarily unique.
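For intuition, the defining properties of a transition function can be checked numerically on a finite state space: a homogeneous transition function is then a family of powers of a stochastic matrix, and the Chapman-Kolmogorov equation becomes matrix multiplication. A minimal sketch (the matrix `Q` and the state space `{0, 1, 2}` are illustrative choices, not taken from the notes):

```python
import numpy as np

# Homogeneous transition function on the finite state space E = {0, 1, 2}:
# P_{t,s}(x, {y}) is the (x, y)-entry of Q^(s-t), where Q is a stochastic
# matrix (rows sum to 1).  Q itself is an arbitrary illustrative choice.
Q = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

def P(t, s):
    """Transition function P_{t,s} as a matrix; depends on s - t only."""
    return np.linalg.matrix_power(Q, s - t)

# Property (iii): P_{t,t}(x, .) = delta_x, i.e. the 0-step matrix is the identity.
assert np.allclose(P(4, 4), np.eye(3))

# Property (iv), Chapman-Kolmogorov: the integral over y becomes a sum,
# P_{t,u}(x, A) = sum_y P_{t,s}(x, {y}) P_{s,u}(y, A),
# i.e. matrix multiplication P_{t,u} = P_{t,s} P_{s,u} for t < s < u.
t, s, u = 1, 3, 6
assert np.allclose(P(t, u), P(t, s) @ P(s, u))
print("Chapman-Kolmogorov holds for t < s < u =", (t, s, u))
```

In matrix language, (iii) is $Q^0 = I$ and (iv) is the semigroup identity $Q^{u-t} = Q^{s-t} Q^{u-s}$, which is why homogeneous transition functions lead naturally to the semigroup point of view of Section 5.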
Using the Markov property, one obtains the finite-dimensional distributions of $X$: for $0 \le t_1 < t_2 < \dots < t_n$ and bounded $f : (E^n, \mathcal{E}^{\otimes n}) \to (\mathbb{R}, \mathcal{B}(\mathbb{R}))$ it holds that
$$\mathbb{E} f(X_{t_1}, \dots, X_{t_n}) = \int_E \mu(dx_0) \int_E P_{0,t_1}(x_0, dx_1) \cdots \int_E P_{t_{n-1},t_n}(x_{n-1}, dx_n)\,f(x_1, \dots, x_n).$$

3 Existence of Markov processes

Given a distribution $\mu$ and a Markov transition function $\{P_{t,s}(x, A)\}$, does there always exist a Markov process with initial distribution $\mu$ and transition function $\{P_{t,s}(x, A)\}$?

Definition 3.1. For a measurable space $(E, \mathcal{E})$ and an arbitrary index set $\mathbb{T}$ define
$$\Omega := E^{\mathbb{T}}, \qquad \mathcal{F} := \mathcal{E}^{\mathbb{T}} := \sigma(X_t;\ t \in \mathbb{T}),$$
where $X_t : \Omega \to E$ is the coordinate map $X_t(\omega) = \omega(t)$. For a finite subset $J = \{t_1, \dots, t_n\} \subseteq \mathbb{T}$ we use the projections $\pi_J : \Omega \to E^J$,
$$\pi_J \omega = (\omega(t_1), \dots, \omega(t_n)) \in E^J, \qquad \pi_J X = (X_{t_1}, \dots, X_{t_n}).$$

(1) Let $\mathrm{Fin}(\mathbb{T}) := \{J \subseteq \mathbb{T} :\ 0 < |J| < \infty\}$. Then
$$\{P_J :\ P_J \text{ is a probability measure on } (E^J, \mathcal{E}^J),\ J \in \mathrm{Fin}(\mathbb{T})\}$$
is called the set of finite-dimensional distributions of $X$.

(2) The set of probability measures $\{P_J :\ J \in \mathrm{Fin}(\mathbb{T})\}$ is called Kolmogorov consistent (or compatible, or projective) provided that
$$P_J = P_K \circ (\pi_J|_{E^K})^{-1} \quad \text{for all } J \subseteq K,\ J, K \in \mathrm{Fin}(\mathbb{T}).$$
(Here it is implicitly assumed that
$$P_{t_{\sigma(1)}, \dots, t_{\sigma(n)}}(A_{\sigma(1)} \times \dots \times A_{\sigma(n)}) = P_{t_1, \dots, t_n}(A_1 \times \dots \times A_n)$$
for any permutation $\sigma : \{1, \dots, n\} \to \{1, \dots, n\}$.)

Theorem 3.2 (Kolmogorov's extension theorem, Daniell-Kolmogorov theorem). Let $E$ be a complete, separable metric space and $\mathcal{E} = \mathcal{B}(E)$. Let $\mathbb{T}$ be a set. Suppose that for each $J \in \mathrm{Fin}(\mathbb{T})$ there exists a probability measure $P_J$ on $(E^J, \mathcal{E}^J)$ and that $\{P_J;\ J \in \mathrm{Fin}(\mathbb{T})\}$ is Kolmogorov consistent. Then there exists a unique probability measure $P$ on $(E^{\mathbb{T}}, \mathcal{E}^{\mathbb{T}})$ such that
$$P_J = P \circ \pi_J^{-1} \quad \text{on } (E^J, \mathcal{E}^J).$$

Proof: see, for example, Theorem 2.2 in Chapter 2 of [8].
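The iterated-integral formula for the finite-dimensional distributions can also be made concrete on a finite state space, where each integral becomes a finite sum. A minimal sketch (the one-step matrix `Q`, the initial distribution `mu`, the times and the function `f` are all illustrative choices): it checks that the iterated sum agrees with brute-force enumeration over all full paths $(X_0, \dots, X_4)$.

```python
import numpy as np
from itertools import product

# Finite state space E = {0, 1}; one-step matrix Q and initial distribution mu
# are arbitrary illustrative choices.
Q = np.array([[0.7, 0.3],
              [0.4, 0.6]])
mu = np.array([0.5, 0.5])

def P(t, s):
    """Homogeneous transition function as a matrix: P_{t,s} = Q^(s-t)."""
    return np.linalg.matrix_power(Q, s - t)

t1, t2, t3 = 1, 3, 4
f = lambda x1, x2, x3: float(x1 + 2 * x2 * x3)   # a bounded f on E^3

# (a) Iterated-sum version of the finite-dimensional distribution formula:
# E f(X_{t1}, X_{t2}, X_{t3}) =
#   sum mu(x0) P_{0,t1}(x0,x1) P_{t1,t2}(x1,x2) P_{t2,t3}(x2,x3) f(x1,x2,x3)
lhs = sum(mu[x0] * P(0, t1)[x0, x1] * P(t1, t2)[x1, x2] * P(t2, t3)[x2, x3]
          * f(x1, x2, x3)
          for x0, x1, x2, x3 in product(range(2), repeat=4))

# (b) Brute force over full paths (X_0, ..., X_4) with one-step probabilities:
rhs = sum(mu[path[0]]
          * np.prod([Q[path[k], path[k + 1]] for k in range(4)])
          * f(path[t1], path[t2], path[t3])
          for path in product(range(2), repeat=5))

assert abs(lhs - rhs) < 1e-12
print("E f(X_1, X_3, X_4) =", lhs)
```

The agreement of (a) and (b) is exactly the Chapman-Kolmogorov equation at work: summing out the intermediate times $0, 2$ compresses the full path law into the multi-step kernels $P_{0,t_1}, P_{t_1,t_2}, P_{t_2,t_3}$.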
