Connection Between Martingale Problems and Markov Processes


IMT Toulouse
Skript: Connection between Martingale Problems and Markov Processes
Author: Franziska Kühn
IRTG Summer School Bielefeld, August 2019

Contents
  Index of Notation
  1 Introduction
  2 Martingale Problems
    2.1 Existence of solutions
    2.2 Uniqueness
    2.3 Markov property
  3 Connection between SDEs and martingale problems
  4 Markov processes
    4.1 Semigroups
    4.2 Infinitesimal generator
    4.3 From Markov processes to martingale problems
  Bibliography

Index of Notation

Probability/measure theory
  ∼, =^d       distributed as / equality in distribution
  P_X          distribution of X w.r.t. P
  B(R^d)       Borel σ-algebra on R^d
  δ_x          Dirac measure centered at x ∈ R^d

Spaces of functions
  B_b(R^d)     space of bounded Borel measurable functions f: R^d → R
  C_b(R^d)     space of bounded continuous functions f: R^d → R
  C_∞(R^d)     space of continuous functions f: R^d → R vanishing at infinity, i.e. lim_{|x|→∞} |f(x)| = 0
  C_c^∞(R^d)   space of smooth functions f: R^d → R with compact support
  D[0,∞)       Skorohod space = space of càdlàg functions f: [0,∞) → R^d

Analysis
  ∧            minimum
  ‖·‖_∞        uniform norm
  D(A)         domain of operator A
  càdlàg       right-continuous with finite left limits
  tr(A)        trace of A
  ∇²f          Hessian of f

1 Introduction

Central question: How to characterize stochastic processes in terms of martingale properties? Start with two simple examples: Brownian motion and Poisson process.

1.1 Definition. A stochastic process (B_t)_{t≥0} is a Brownian motion if
  • B_0 = 0 almost surely,
  • B_{t_1} − B_{t_0}, ..., B_{t_n} − B_{t_{n−1}} are independent for all 0 = t_0 < t_1 < ... < t_n (independent increments),
  • B_t − B_s ∼ N(0, (t − s) id) for all s ≤ t,
  • t ↦ B_t(ω) is continuous for all ω (continuous sample paths).

1.2 Theorem (Lévy). Let (X_t)_{t≥0} be a 1-dim.
stochastic process with continuous sample paths and X_0 = 0. Then (X_t)_{t≥0} is a Brownian motion if, and only if, (X_t)_{t≥0} and (X_t² − t)_{t≥0} are (local) martingales.

Beautiful result! Only 2 martingales are needed to describe all the information about Brownian motion.

Remark. The assumption on the continuity of sample paths is needed (consider X_t = N_t − t for a Poisson process (N_t)_{t≥0} with intensity 1).

1.3 Definition. A stochastic process (N_t)_{t≥0} is a Poisson process with intensity λ > 0 if
  • N_0 = 0 almost surely,
  • (N_t)_{t≥0} has independent increments,
  • N_t − N_s ∼ Poi(λ(t − s)) for all s ≤ t,
  • t ↦ N_t(ω) is càdlàg (= right-continuous with finite left-hand limits).

1.4 Theorem (Watanabe). Let (N_t)_{t≥0} be a counting process, i.e. a process with N_0 = 0 which is constant except for jumps of height +1. Then (N_t)_{t≥0} is a Poisson process with intensity λ if and only if (N_t − λt)_{t≥0} is a martingale.

Outline of this lecture series:
  • set up a general framework to describe processes via martingales (→ martingale problems, §2),
  • study the connection between martingale problems and Markov processes,
  • application: study solutions to stochastic differential equations (§3).

2 Martingale Problems

2.1 Definition. Let (Ω, F, P) be a probability space.
  (i) A filtration (F_t)_{t≥0} is a family of sub-σ-algebras of F such that F_s ⊆ F_t for s ≤ t.
  (ii) A stochastic process (M_t)_{t≥0} is a martingale (with respect to (F_t)_{t≥0} and P) if
    • (M_t, F_t)_{t≥0} is an adapted process, i.e. M_t is F_t-measurable for all t ≥ 0,
    • M_t ∈ L¹(P) for all t ≥ 0,
    • E(M_t | F_s) = M_s for all s ≤ t.

Standing assumption: processes have càdlàg sample paths t ↦ X_t(ω) ∈ R^d. Skorohod space:

    D[0,∞) := {f: [0,∞) → R^d; f càdlàg}.

Fix a linear operator L: D → B_b(R^d) with domain D ⊆ B_b(R^d). Notation: (L, D).

2.2 Definition. Let µ be a probability measure on R^d.
A (càdlàg) stochastic process (X_t)_{t≥0} on a probability space (Ω, F, P^µ) is a solution to the (L, D)-martingale problem with initial distribution µ if P^µ(X_0 ∈ ·) = µ(·) and

    M_t^f := f(X_t) − f(X_0) − ∫_0^t Lf(X_s) ds

is for any f ∈ D a martingale w.r.t. the canonical filtration F_t := σ(X_s; s ≤ t) and P^µ.

2.3 Remark. (i) For µ = δ_x we write P^x instead of P^{δ_x}. Moreover, E^x := ∫ dP^x.
  (ii) Sometimes it is convenient to fix the measurable space. One chooses Ω = D[0,∞), the Skorohod space, and X(t, ω) = ω(t) the canonical process (possible WLOG, cf. Lemma 2.9 below).

Next: existence and uniqueness of solutions. Rule of thumb: existence is much easier to prove than uniqueness.

2.1 Existence of solutions

2.4 Definition. A linear operator (L, D) satisfies the positive maximum principle if

    f ∈ D, f(x_0) = sup_{x ∈ R^d} f(x) ≥ 0  ⟹  Lf(x_0) ≤ 0.

2.5 Lemma. Let (L, D) be a linear operator on C_b(R^d) (i.e. D ⊆ C_b(R^d) and L(D) ⊆ C_b(R^d)). If there exists for any x ∈ R^d a solution to the (L, D)-martingale problem with initial distribution µ = δ_x, then (L, D) satisfies the positive maximum principle.

Proof. Fix f ∈ D and x_0 ∈ R^d with f(x_0) = sup_{x ∈ R^d} f(x) ≥ 0. Let (X_t)_{t≥0} be a solution with X_0 = x_0. Then

    0 ≥ E^{x_0}(f(X_t)) − f(x_0) = E^{x_0}( ∫_0^t Lf(X_s) ds ),

and therefore it follows from the right-continuity of s ↦ E^{x_0}(Lf(X_s)) that

    Lf(x_0) = lim_{t→0} (1/t) ∫_0^t E^{x_0}(Lf(X_s)) ds ≤ 0.

Denote by C_∞(R^d) the space of continuous functions vanishing at infinity, lim_{|x|→∞} f(x) = 0.

2.6 Theorem ([4]). Suppose that
  (i) D ⊆ C_∞(R^d) is dense in C_∞(R^d) and L: D → C_∞(R^d),
  (ii) L satisfies the positive maximum principle on D,
  (iii) there exists a sequence (f_n)_{n∈N} ⊆ D such that lim_{n→∞} f_n(x) = 1 and lim_{n→∞} Lf_n(x) = 0 for all x ∈ R^d, and

    sup_{n≥1} (‖f_n‖_∞ + ‖Lf_n‖_∞) < ∞.

Then there exists for every probability measure µ a solution to the (L, D)-martingale problem with initial distribution µ.
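To make Definition 2.2 concrete: by Lévy's Theorem 1.2, Brownian motion solves the martingale problem for L = ½Δ. For the single test function f(x) = x² one has Lf ≡ 1 and hence M_t^f = B_t² − t. The following Monte Carlo sanity check is only a sketch — the grid size, path count and the choice f(x) = x² (which is not compactly supported, but suffices for illustration) are ours, not from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, T = 50_000, 100, 1.0
dt = T / n_steps

# simulate Brownian paths on a grid via independent N(0, dt) increments
dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B = np.cumsum(dB, axis=1)
t_grid = dt * np.arange(1, n_steps + 1)

# for f(x) = x^2 and L = (1/2) d^2/dx^2 we get Lf = 1, hence
# M_t^f = f(B_t) - f(B_0) - int_0^t Lf(B_s) ds = B_t^2 - t
M = B**2 - t_grid

# martingale => E[M_t] = E[M_0] = 0 for every t (up to Monte Carlo error)
print(np.abs(M.mean(axis=0)).max())
```

The printed value is small (pure Monte Carlo noise), consistent with (M_t^f) being a mean-zero martingale.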
2.7 Remark. If (iii) fails to hold, then a solution may explode in finite time.

DIY. Define an operator L on the smooth functions with compact support C_c^∞(R^d) by

    Lf(x) := b(x)·∇f(x) + ½ tr(Q(x)∇²f(x)) + ∫_{y≠0} ( f(x+y) − f(x) − y·∇f(x) 1_{(0,1)}(|y|) ) ν(x, dy),

where for each x ∈ R^d, b(x) ∈ R^d, Q(x) ∈ R^{d×d} is positive semidefinite and ν(x, dy) is a measure satisfying ∫ min{1, |y|²} ν(x, dy) < ∞. Show that L satisfies the positive maximum principle. (Extra: find sufficient conditions on b, Q, ν such that Theorem 2.6 is applicable.)

2.2 Uniqueness

Next: introduce the notion of uniqueness. What is the proper notion in this context?

2.8 Definition. Let (X_t)_{t≥0} be a stochastic process on a probability space (Ω, F, P). If (X̃_t)_{t≥0} is another stochastic process (possibly on a different probability space (Ω̃, F̃, P̃)), then (X̃_t)_{t≥0} equals in distribution (X_t)_{t≥0} if both processes have the same finite-dimensional distributions, i.e.

    P(X_{t_1} ∈ B_1, ..., X_{t_n} ∈ B_n) = P̃(X̃_{t_1} ∈ B_1, ..., X̃_{t_n} ∈ B_n)

for any measurable sets B_i and any times t_i. Notation: (X_t)_{t≥0} =^d (X̃_t)_{t≥0}.

Mind: In general, X_t =^d X̃_t for all t ≥ 0 does not imply (X_t)_{t≥0} =^d (X̃_t)_{t≥0}. Consider for instance a Brownian motion (X_t)_{t≥0} and X̃_t := √t X_1.

2.9 Lemma. Let (X_t)_{t≥0} be a solution to the (L, D)-martingale problem with initial distribution µ. If (Y_t)_{t≥0} is another càdlàg process such that (X_t)_{t≥0} =^d (Y_t)_{t≥0}, then (Y_t)_{t≥0} is a solution to the (L, D)-martingale problem with initial distribution µ.

Idea of proof. A càdlàg process (Z_t)_{t≥0} (with initial distribution µ) solves the (L, D)-martingale problem (with initial distribution µ) iff

    E^µ( ( f(Z_t) − f(Z_s) − ∫_s^t Lf(Z_r) dr ) ∏_{j=1}^k h_j(Z_{t_j}) ) = 0

for any t_1 < ... < t_k ≤ s ≤ t and h_j ∈ B_b(R^d). This is an assertion on the fdd's! (Put Z = X and then replace X by Y.)
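The "Mind" example above can be checked numerically: X̃_t := √t X_1 has the same one-dimensional marginals N(0, t) as Brownian motion, but Cov(X_s, X_t) = s ∧ t, whereas Cov(X̃_s, X̃_t) = √(st), so the finite-dimensional distributions differ. A short simulation sketch (the sample size and the times s = 0.5, t = 1 are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n, s, t = 200_000, 0.5, 1.0

# Brownian motion at times s < t: X_t = X_s + independent increment
Xs = rng.normal(0.0, np.sqrt(s), n)
Xt = Xs + rng.normal(0.0, np.sqrt(t - s), n)

# comparison process X~_u := sqrt(u) * X_1 with X_1 ~ N(0, 1)
Z = rng.normal(0.0, 1.0, n)
Ys, Yt = np.sqrt(s) * Z, np.sqrt(t) * Z

# one-dimensional marginals agree: both variances close to s = 0.5 ...
print(Xs.var(), Ys.var())
# ... but the joint laws differ:
print(np.mean(Xs * Xt))   # Cov(X_s, X_t) = s = 0.5
print(np.mean(Ys * Yt))   # Cov(X~_s, X~_t) = sqrt(s*t) = 0.707...
```

The marginal variances match while the covariances do not — exactly the gap between one-dimensional and finite-dimensional distributions.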
Lemma 2.9 motivates the following definition.

2.10 Definition. (i) Uniqueness holds for the (L, D)-martingale problem with initial distribution µ if any two solutions have the same finite-dimensional distributions.
  (ii) The (L, D)-martingale problem is well posed if for any initial distribution µ there exists a unique solution to the (L, D)-martingale problem with initial distribution µ.

Next result: uniqueness of the finite-dimensional distributions follows from the uniqueness of the one-dimensional distributions. This reminds us of Markov processes.

2.11 Proposition ([4, Theorem 4.4.2]). Assume that for any initial distribution µ and any two solutions (X_t)_{t≥0} and (Y_t)_{t≥0} to the (L, D)-martingale problem with initial distribution µ it holds that

    X_t =^d Y_t for all t ≥ 0.

Then uniqueness holds for any initial distribution µ, i.e. any two solutions have the same finite-dimensional distributions.
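A concrete instance of such uniqueness statements is Watanabe's Theorem 1.4, read together with the Remark after Lévy's theorem: the compensated Poisson process X_t = N_t − t (intensity 1) makes both (X_t) and (X_t² − t) martingales, yet it is not a Brownian motion because its paths jump. A grid-based simulation sketch (discretization and path count are illustrative assumptions) checks the two expectations:

```python
import numpy as np

rng = np.random.default_rng(2)
T, n_paths, n_steps = 1.0, 50_000, 100
dt = T / n_steps

# Poisson process with intensity 1 sampled on a grid:
# increments over [t, t+dt] are independent Poisson(dt) variables
dN = rng.poisson(dt, size=(n_paths, n_steps))
N = np.cumsum(dN, axis=1)
t = dt * np.arange(1, n_steps + 1)

X = N - t  # compensated Poisson process X_t = N_t - t

# both Levy martingale conditions hold on average ...
print(np.abs(X.mean(axis=0)).max())            # E[X_t] = 0
print(np.abs((X**2 - t).mean(axis=0)).max())   # E[X_t^2 - t] = 0
# ... but X has jumps, so the continuity assumption in
# Levy's theorem cannot be dropped.
```

Both printed values are Monte Carlo noise around zero, even though X is certainly not a Brownian motion.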