Martingale Problems and Stochastic Equations for Markov Processes


1. Basics of stochastic processes
2. Markov processes and generators
3. Martingale problems
4. Existence of solutions and forward equations
5. Stochastic integrals for Poisson random measures
6. Weak and strong solutions of stochastic equations
7. Stochastic equations for Markov processes in $\mathbb{R}^d$
8. Convergence for Markov processes characterized by martingale problems
9. Convergence for Markov processes characterized by stochastic differential equations
10. Martingale problems for conditional distributions
11. Equivalence of stochastic equations and martingale problems
12. Genealogies and ordered representations of measure-valued processes
13. Poisson representations
14. Stochastic partial differential equations
15. Information and conditional expectation
16. Technical lemmas
17. Exercises
18. Stochastic analysis exercises
19. References

http://www.math.wisc.edu/~kurtz/FrankLect.htm

1. Basics of stochastic processes

• Filtrations
• Stopping times
• Martingales
• Optional sampling theorem
• Doob's inequalities
• Stochastic integrals
• Local martingales
• Semimartingales
• Computing quadratic variations
• Covariation
• Itô's formula

Conventions and caveats

State spaces are always complete, separable metric spaces (sometimes called Polish spaces), usually denoted $(E, r)$.

All probability spaces are complete.

All identities involving conditional expectations (or conditional probabilities) only hold almost surely (even when I don't say so).

If the filtration $\{\mathcal{F}_t\}$ involved is obvious, I will say adapted rather than $\{\mathcal{F}_t\}$-adapted, stopping time rather than $\{\mathcal{F}_t\}$-stopping time, etc.

All processes are cadlag (right continuous with left limits at each $t > 0$), unless otherwise noted.

A process is real-valued if that is the only way the formula makes sense.

References

Kurtz, Lecture Notes for Math 735, http://www.math.wisc.edu/~kurtz/m735.htm

Ethier and Kurtz, Markov Processes: Characterization and Convergence

Protter, Stochastic Integration and Differential Equations, Second Edition

Filtrations

$(\Omega, \mathcal{F}, P)$ a probability space. Available information is modeled by a sub-$\sigma$-algebra of $\mathcal{F}$: $\mathcal{F}_t$ is the information available at time $t$, and $\{\mathcal{F}_t\}$ is a filtration, that is, $t < s$ implies $\mathcal{F}_t \subset \mathcal{F}_s$.

$\{\mathcal{F}_t\}$ is complete if $\mathcal{F}_0$ contains all subsets of sets of probability zero.

A stochastic process $X$ is adapted to $\{\mathcal{F}_t\}$ if $X(t)$ is $\mathcal{F}_t$-measurable for each $t \ge 0$.

An $E$-valued stochastic process $X$ adapted to $\{\mathcal{F}_t\}$ is $\{\mathcal{F}_t\}$-Markov if
$$E[f(X(t+r)) \mid \mathcal{F}_t] = E[f(X(t+r)) \mid X(t)], \qquad t, r \ge 0,\ f \in B(E).$$
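The defining identity says that, given the current state, the past carries no extra predictive information. A minimal Monte Carlo sketch (not from the notes; the lazy random walk and all names are illustrative) estimates $E[f(X(3)) \mid X(1), X(2)]$ and $E[f(X(3)) \mid X(2)]$ and finds them equal up to sampling error:

```python
import numpy as np

rng = np.random.default_rng(0)

# A lazy random walk on Z: i.i.d. steps uniform on {-1, 0, 1}, X(0) = 0.
n_paths = 200_000
steps = rng.choice([-1, 0, 1], size=(n_paths, 3))
X = np.cumsum(steps, axis=1)          # columns are X(1), X(2), X(3)
x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]

f = lambda x: x ** 2                  # a test function f

# Condition on the history (X(1), X(2)) = (0, 1) versus on X(2) = 1 alone.
hist = (x1 == 0) & (x2 == 1)
curr = (x2 == 1)
print(f(x3)[hist].mean())             # ~ 5/3
print(f(x3)[curr].mean())             # ~ 5/3: the extra history is irrelevant
```

Both estimates approximate $E[(1+S)^2] = 5/3$ for a step $S$ uniform on $\{-1,0,1\}$, as the Markov property requires.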
Measurability for stochastic processes

A stochastic process is an indexed family of random variables, but if the index set is $[0,\infty)$, then we may want to know more about $X(t,\omega)$ than that it is a measurable function of $\omega$ for each $t$. For example, for an $\mathbb{R}$-valued process $X$, when are
$$\int_a^b X(s,\omega)\,ds \quad\text{and}\quad X(\tau(\omega),\omega)$$
random variables?

$X$ is measurable if $(t,\omega) \in [0,\infty)\times\Omega \mapsto X(t,\omega) \in E$ is $\mathcal{B}([0,\infty))\times\mathcal{F}$-measurable.

Lemma 1.1 If $X$ is measurable and $\int_a^b |X(s,\omega)|\,ds < \infty$, then $\int_a^b X(s,\omega)\,ds$ is a random variable. If, in addition, $\tau$ is a nonnegative random variable, then $X(\tau(\omega),\omega)$ is a random variable.

Proof. The first part is a standard result for measurable functions on a product space. Verify the result for $X(s,\omega) = \mathbf{1}_A(s)\mathbf{1}_B(\omega)$, $A \in \mathcal{B}[0,\infty)$, $B \in \mathcal{F}$, and apply the Dynkin class theorem to extend the result to $\mathbf{1}_C$, $C \in \mathcal{B}[0,\infty)\times\mathcal{F}$.

If $\tau$ is a nonnegative random variable, then $\omega \in \Omega \mapsto (\tau(\omega),\omega) \in [0,\infty)\times\Omega$ is measurable. Consequently, $X(\tau(\omega),\omega)$ is the composition of two measurable functions.

Measurability continued

A stochastic process $X$ is $\{\mathcal{F}_t\}$-adapted if for all $t \ge 0$, $X(t)$ is $\mathcal{F}_t$-measurable.

If $X$ is measurable and adapted, the restriction of $X$ to $[0,t]\times\Omega$ is $\mathcal{B}[0,t]\times\mathcal{F}$-measurable, but it may not be $\mathcal{B}[0,t]\times\mathcal{F}_t$-measurable.

$X$ is progressive if for each $t \ge 0$, $(s,\omega) \in [0,t]\times\Omega \mapsto X(s,\omega) \in E$ is $\mathcal{B}[0,t]\times\mathcal{F}_t$-measurable.

Let
$$\mathcal{W} = \{A \in \mathcal{B}[0,\infty)\times\mathcal{F} : A \cap ([0,t]\times\Omega) \in \mathcal{B}[0,t]\times\mathcal{F}_t,\ t \ge 0\}.$$
Then $\mathcal{W}$ is a $\sigma$-algebra and $X$ is progressive if and only if $(s,\omega) \mapsto X(s,\omega)$ is $\mathcal{W}$-measurable.

Since pointwise limits of measurable functions are measurable, pointwise limits of progressive processes are progressive.

Stopping times

Let $\{\mathcal{F}_t\}$ be a filtration. $\tau$ is an $\{\mathcal{F}_t\}$-stopping time if and only if $\{\tau \le t\} \in \mathcal{F}_t$ for each $t \ge 0$.

If $\tau$ is a stopping time, $\mathcal{F}_\tau \equiv \{A \in \mathcal{F} : A \cap \{\tau \le t\} \in \mathcal{F}_t,\ t \ge 0\}$.

If $\tau_1$ and $\tau_2$ are stopping times with $\tau_1 \le \tau_2$, then $\mathcal{F}_{\tau_1} \subset \mathcal{F}_{\tau_2}$.

If $\tau_1$ and $\tau_2$ are stopping times, then $\tau_1$ and $\tau_1 \wedge \tau_2$ are $\mathcal{F}_{\tau_1}$-measurable.

A process observed at a stopping time

If $X$ is measurable and $\tau$ is a stopping time, then $X(\tau(\omega),\omega)$ is a random variable.

Lemma 1.2 If $\tau$ is a stopping time and $X$ is progressive, then $X(\tau)$ is $\mathcal{F}_\tau$-measurable.

Proof. $\omega \in \Omega \mapsto (\tau(\omega)\wedge t, \omega) \in [0,t]\times\Omega$ is measurable as a mapping from $(\Omega, \mathcal{F}_t)$ to $([0,t]\times\Omega, \mathcal{B}[0,t]\times\mathcal{F}_t)$. Consequently, $\omega \mapsto X(\tau(\omega)\wedge t, \omega)$ is $\mathcal{F}_t$-measurable, and
$$\{X(\tau) \in A\} \cap \{\tau \le t\} = \{X(\tau\wedge t) \in A\} \cap \{\tau \le t\} \in \mathcal{F}_t.$$

Right continuous processes

Most of the processes you know are either continuous (e.g., Brownian motion) or right continuous (e.g., Poisson process).

Lemma 1.3 If $X$ is right continuous and adapted, then $X$ is progressive.

Proof. If $X$ is adapted, then
$$(s,\omega) \in [0,t]\times\Omega \mapsto Y_n(s,\omega) \equiv X\Big(\frac{[ns]+1}{n}\wedge t,\ \omega\Big) = \sum_k X\Big(\frac{k+1}{n}\wedge t,\ \omega\Big)\mathbf{1}_{[\frac{k}{n},\frac{k+1}{n})}(s)$$
is $\mathcal{B}[0,t]\times\mathcal{F}_t$-measurable. By the right continuity of $X$, $Y_n(s,\omega) \to X(s,\omega)$ pointwise on $[0,t]\times\Omega$, so $(s,\omega) \in [0,t]\times\Omega \mapsto X(s,\omega)$ is $\mathcal{B}[0,t]\times\mathcal{F}_t$-measurable and $X$ is progressive.

Examples and properties

Define $\mathcal{F}_{t+} \equiv \bigcap_{s>t}\mathcal{F}_s$. $\{\mathcal{F}_t\}$ is right continuous if $\mathcal{F}_t = \mathcal{F}_{t+}$ for all $t \ge 0$.

If $\{\mathcal{F}_t\}$ is right continuous, then $\tau$ is a stopping time if and only if $\{\tau < t\} \in \mathcal{F}_t$ for all $t > 0$.

Let $X$ be cadlag and adapted. If $K \subset E$ is closed, $\tau_K^h = \inf\{t : X(t) \in K \text{ or } X(t-) \in K\}$ is a stopping time, but $\inf\{t : X(t) \in K\}$ may not be; however, if $\{\mathcal{F}_t\}$ is right continuous and complete, then for any $B \in \mathcal{B}(E)$, $\tau_B = \inf\{t : X(t) \in B\}$ is an $\{\mathcal{F}_t\}$-stopping time. This result is a special case of the debut theorem, a very technical result from set theory. Note that
$$\{\omega : \tau_B(\omega) < t\} = \{\omega : \exists\, s < t \text{ with } X(s,\omega) \in B\} = \mathrm{proj}_\Omega\{(s,\omega) : X(s,\omega) \in B,\ s < t\}.$$
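The debut theorem is purely existential, but on a discretized path the first entry time is easy to compute. Here is a minimal sketch (not from the notes; it assumes a Brownian path simulated on a uniform grid, and the function name and parameters are illustrative) approximating $\tau_B$ for $B = [1, \infty)$:

```python
import numpy as np

rng = np.random.default_rng(1)

def first_entry_time(path, dt, level):
    """Grid approximation of tau_B = inf{t : X(t) >= level} for one path."""
    hits = np.flatnonzero(path >= level)
    return hits[0] * dt if hits.size else np.inf  # inf if B is never entered

# Discretized standard Brownian motion on [0, 1].
n = 100_000
dt = 1.0 / n
W = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])

print(first_entry_time(W, dt, level=1.0))
```

Since this $B$ is closed and the path is continuous, refining the grid drives the approximation down to the true hitting time; for general Borel $B$ no such recipe exists, which is why the debut theorem is needed.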
Piecewise constant approximations

Let $\epsilon > 0$, $\tau_0^\epsilon = 0$, and
$$\tau_{i+1}^\epsilon = \inf\{t > \tau_i^\epsilon : r(X(t), X(\tau_i^\epsilon)) \vee r(X(t-), X(\tau_i^\epsilon)) \ge \epsilon\}.$$
Define $X^\epsilon(t) = X(\tau_i^\epsilon)$ for $\tau_i^\epsilon \le t < \tau_{i+1}^\epsilon$. Then $r(X(t), X^\epsilon(t)) \le \epsilon$.

If $X$ is adapted to $\{\mathcal{F}_t\}$, then the $\tau_i^\epsilon$ are $\{\mathcal{F}_t\}$-stopping times and $X^\epsilon$ is $\{\mathcal{F}_t\}$-adapted. See Exercise 4.

Martingales

An $\mathbb{R}$-valued stochastic process $M$ adapted to $\{\mathcal{F}_t\}$ is an $\{\mathcal{F}_t\}$-martingale if
$$E[M(t+r) \mid \mathcal{F}_t] = M(t), \qquad t, r \ge 0.$$
Every martingale has finite quadratic variation
$$[M]_t = \lim \sum_i (M(t\wedge t_{i+1}) - M(t\wedge t_i))^2,$$
where $0 = t_0 < t_1 < \cdots$, $t_i \to \infty$, and the limit is in probability as $\max_i(t_{i+1} - t_i) \to 0$. More precisely, for $\epsilon > 0$ and $t_0 > 0$,
$$\lim_{\max_i(t_{i+1}-t_i)\to 0} P\Big\{\sup_{t \le t_0}\Big|[M]_t - \sum_i (M(t\wedge t_{i+1}) - M(t\wedge t_i))^2\Big| > \epsilon\Big\} = 0.$$
For standard Brownian motion $W$, $[W]_t = t$.

Optional sampling theorem

A real-valued process $X$ is a submartingale if $E[|X(t)|] < \infty$, $t \ge 0$, and
$$E[X(t+s) \mid \mathcal{F}_t] \ge X(t), \qquad t, s \ge 0.$$
If $\tau_1$ and $\tau_2$ are stopping times, then
$$E[X(t\wedge\tau_2) \mid \mathcal{F}_{\tau_1}] \ge X(t\wedge\tau_1\wedge\tau_2).$$
If $\tau_2$ is finite a.s., $E[|X(\tau_2)|] < \infty$, and $\lim_{t\to\infty} E[|X(t)|\mathbf{1}_{\{\tau_2 > t\}}] = 0$, then
$$E[X(\tau_2) \mid \mathcal{F}_{\tau_1}] \ge X(\tau_1\wedge\tau_2).$$
Of course, if $X$ is a martingale,
$$E[X(t\wedge\tau_2) \mid \mathcal{F}_{\tau_1}] = X(t\wedge\tau_1\wedge\tau_2).$$

Square integrable martingales

Let $M$ be a martingale satisfying $E[M(t)^2] < \infty$. Then
$$M(t)^2 - [M]_t$$
is a martingale. In particular, for $t > s$,
$$E[(M(t) - M(s))^2] = E[[M]_t - [M]_s].$$

Doob's inequalities

Let $X$ be a submartingale. Then, for $x > 0$,
$$P\{\sup_{s\le t} X(s) \ge x\} \le x^{-1}E[X(t)^+],$$
and if in addition $X$ is nonnegative and $p > 1$,
$$E[\sup_{s\le t} X(s)^p] \le \Big(\frac{p}{p-1}\Big)^p E[X(t)^p].$$
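The identity $[W]_t = t$ can be checked numerically: partition sums of squared increments of a simulated Brownian path should approach $t$ as the mesh shrinks. A minimal sketch (grid sizes and names are illustrative, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate W on a fine grid over [0, 1]; W[k] approximates W(k/n).
t, n = 1.0, 2**20
dW = rng.normal(0.0, np.sqrt(t / n), n)
W = np.concatenate([[0.0], np.cumsum(dW)])

# Partition sums sum_i (W(t_{i+1}) - W(t_i))^2 over m subintervals.
for m in (2**4, 2**10, 2**16):
    idx = np.linspace(0, n, m + 1).astype(int)
    print(m, (np.diff(W[idx]) ** 2).sum())  # -> t = 1 as the mesh -> 0
```

Each increment over a subinterval of length $t/m$ is $N(0, t/m)$, so the sum of squares has mean $t$ and variance $2t^2/m$, which is why the printed values concentrate at $1$ as $m$ grows.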