Continuous Martingales and Stochastic Calculus

Total Pages: 16

File Type: PDF, Size: 1020 KB

Continuous martingales and stochastic calculus
Alison Etheridge
March 11, 2018

Contents

1 Introduction 3
2 An overview of Gaussian variables and processes 5
  2.1 Gaussian variables 5
  2.2 Gaussian vectors and spaces 7
  2.3 Gaussian spaces 9
  2.4 Gaussian processes 10
  2.5 Constructing distributions on (R^[0,∞), B(R^[0,∞))) 11
3 Brownian Motion and First Properties 12
  3.1 Definition of Brownian motion 12
  3.2 Wiener Measure 18
  3.3 Extensions and first properties 18
  3.4 Basic properties of Brownian sample paths 19
4 Filtrations and stopping times 23
  4.1 Stopping times 24
5 (Sub/super-)Martingales in continuous time 27
  5.1 Definitions 27
  5.2 Doob's maximal inequalities 28
  5.3 Convergence and regularisation theorems 30
  5.4 Martingale convergence and optional stopping 33
6 Continuous semimartingales 36
  6.1 Functions of finite variation 36
  6.2 Processes of finite variation 38
  6.3 Continuous local martingales 40
  6.4 Quadratic variation of a continuous local martingale 41
  6.5 Continuous semimartingales 47
7 Stochastic Integration 48
  7.1 Stochastic integral w.r.t. L²-bounded martingales 48
  7.2 Intrinsic characterisation of stochastic integrals using the quadratic co-variation 51
  7.3 Extensions: stochastic integration with respect to continuous semimartingales 54
  7.4 Itô's formula and its applications 57
A The Monotone Class Lemma / Dynkin's π–λ Theorem 62
B Review of some basic probability 63
  B.1 Convergence of random variables 63
  B.2 Convergence in probability 63
  B.3 Uniform integrability 64
  B.4 The convergence theorems 64
  B.5 Information and independence 66
  B.6 Conditional expectation 66
C Convergence of backwards discrete time supermartingales 67
D A very short primer in functional analysis 67
E Normed vector spaces 67
F Banach spaces 69

These notes are based heavily on notes by Jan Obłój from last year's course, and the book by Jean-François Le Gall, Brownian motion, martingales, and stochastic calculus, Springer 2016. The first five chapters of that book cover everything in the course (and more). Other useful references (in no particular order) include:

1. D. Revuz and M. Yor, Continuous martingales and Brownian motion, Springer (Revised 3rd ed.), 2001. Chapters 0-4.
2. I. Karatzas and S. Shreve, Brownian motion and stochastic calculus, Springer (2nd ed.), 1991. Chapters 1-3.
3. R. Durrett, Stochastic Calculus: A practical introduction, CRC Press, 1996. Sections 1.1-2.10.
4. F. Klebaner, Introduction to Stochastic Calculus with Applications, 3rd edition, Imperial College Press, 2012. Chapters 1, 2, 3.1-3.11, 4.1-4.5, 7.1-7.8, 8.1-8.7.
5. J. M. Steele, Stochastic Calculus and Financial Applications, Springer, 2010. Chapters 3-8.
6. B. Oksendal, Stochastic Differential Equations: An introduction with applications, 6th edition, Springer (Universitext), 2007. Chapters 1-3.
7. S. Shreve, Stochastic calculus for finance, Vol 2: Continuous-time models, Springer Finance, Springer-Verlag, New York, 2004. Chapters 3-4.

The appendices gather together some useful results that we take as known.

1 Introduction

Our topic is part of the huge field devoted to the study of stochastic processes. Since first year, you've had the notion of a random variable, X say, on a probability space (Ω, F, P) and taking values in a state space (E, E). X : Ω → E is just a measurable mapping, so that for each e ∈ E, X⁻¹(e) ∈ F and so, in particular, we can assign a probability to the event that X ∈ e.
Often (E, E) is just (R, B(R)) (where B(R) is the Borel sets on R), and this just says that for each x ∈ R we can assign a probability to the event {X ≤ x}.

Definition 1.1 A stochastic process, indexed by some set T, is a collection of random variables {X_t}_{t∈T}, defined on a common probability space (Ω, F, P) and taking values in a common state space (E, E).

For us, T will generally be either [0,∞) or [0,T], and we think of X_t as a random quantity that evolves with time.

When we model deterministic quantities that evolve with (continuous) time, we often appeal to ordinary differential equations as models. In this course we develop the 'calculus' necessary to develop an analogous theory of stochastic (ordinary) differential equations. An ordinary differential equation might take the form

    dX(t) = a(t, X(t)) dt,

for a suitably nice function a. A stochastic equation is often formally written as

    dX(t) = a(t, X(t)) dt + b(t, X(t)) dB_t,

where the second term on the right models 'noise' or fluctuations. Here (B_t)_{t≥0} is an object that we call Brownian motion. We shall consider what appear to be more general driving noises, but the punchline of the course is that under rather general conditions they can all be built from Brownian motion. Indeed, if we added possible (random) 'jumps' in X(t) at times given by a Poisson process, we'd capture essentially the most general theory. We are not going to allow jumps, so we'll be thinking of settings in which our stochastic equation has a continuous solution t ↦ X_t, and Brownian motion will be a fundamental object. It requires some measure-theoretic niceties to make sense of all this.

Definition 1.2 The mapping t ↦ X_t(ω), for a fixed ω ∈ Ω, represents a realisation of our stochastic process, called a sample path or trajectory. We shall assume that

    (t, ω) ↦ X_t(ω) : ([0,∞) × Ω, B([0,∞)) ⊗ F) → (R, B(R))

is measurable (i.e. for all A ∈ B(R), {(t, ω) : X_t(ω) ∈ A} is in the product σ-algebra B([0,∞)) ⊗ F). Our stochastic process is then said to be measurable.

Definition 1.3 Let X, Y be two stochastic processes defined on a common probability space (Ω, F, P).
i. We say that X is a modification of Y if, for all t ≥ 0, we have X_t = Y_t a.s.;
ii. We say that X and Y are indistinguishable if P[X_t = Y_t for all 0 ≤ t < ∞] = 1.

If X and Y are modifications of one another then, in particular, they have the same finite-dimensional distributions,

    P[(X_{t_1}, ..., X_{t_n}) ∈ A] = P[(Y_{t_1}, ..., Y_{t_n}) ∈ A]

for all measurable sets A, but indistinguishability is a much stronger property. Indistinguishability takes the sample path as the basic object of study, so that we could think of (X_t(ω), t ≥ 0) (the path) as a random variable taking values in the space E^[0,∞) (of all possible paths). We are abusing notation a little by identifying the sample space Ω with the state space E^[0,∞). This state space then has to be endowed with a σ-algebra of measurable sets. For definiteness, we take real-valued processes, so E = R.

Definition 1.4 An n-dimensional cylinder set in R^[0,∞) is a set of the form

    C = {ω ∈ R^[0,∞) : (ω(t_1), ..., ω(t_n)) ∈ A}

for some 0 ≤ t_1 < t_2 < ... < t_n and A ∈ B(R^n).

Let C be the family of all finite-dimensional cylinder sets and B(R^[0,∞)) the σ-algebra it generates. This is small enough to be able to build probability measures on B(R^[0,∞)) using Carathéodory's Theorem (see B8.1). On the other hand, B(R^[0,∞)) only contains events which can be defined using at most countably many coordinates.
In particular, the event

    {ω ∈ R^[0,∞) : t ↦ ω(t) is continuous}

is not B(R^[0,∞))-measurable. We will have to do some work to show that many processes can be assumed to be continuous, or right continuous. The sample paths are then fully described by their values at times t ∈ Q, which will greatly simplify the study of quantities of interest such as sup_{0≤s≤t} |X_s| or τ_0(ω) = inf{t ≥ 0 : X_t(ω) > 0}.

A monotone class argument (see Appendix A) will tell us that a probability measure on B(R^[0,∞)) is characterised by its finite-dimensional distributions, so if we can take continuous paths, then we only need to find the probabilities of cylinder sets to characterise the distribution of the process.

2 An overview of Gaussian variables and processes

Brownian motion is a special example of a Gaussian process - or at least a version of one that is assumed to have continuous sample paths. In this section we give an overview of Gaussian variables and processes, but in the next we shall give a direct construction of Brownian motion, due to Lévy, from which continuity of sample paths is an immediate consequence.

2.1 Gaussian variables

Definition 2.1 A random variable X is called a centred standard Gaussian, or standard normal, if its distribution has density

    p_X(x) = (1/√(2π)) e^{−x²/2},  x ∈ R,

with respect to Lebesgue measure. We write X ~ N(0,1).

It is elementary to calculate its Laplace transform:

    E[e^{λX}] = e^{λ²/2},  λ ∈ R,

or, extending to complex values, the characteristic function

    E[e^{iξX}] = e^{−ξ²/2},  ξ ∈ R.

We say Y has Gaussian (or normal) distribution with mean μ and variance σ², written Y ~ N(μ, σ²), if Y = σX + μ where X ~ N(0,1). Then

    E[e^{iξY}] = exp(iμξ − σ²ξ²/2),  ξ ∈ R,

and if σ > 0, the density on R is

    p_Y(x) = (1/(√(2π) σ)) e^{−(x−μ)²/(2σ²)},  x ∈ R.

We think of a constant 'random' variable as being a degenerate Gaussian.
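The identities above are easy to test numerically. A rough Monte Carlo sketch (the sample size, seed and the values of μ, σ, λ are arbitrary assumptions, chosen only for illustration) checks the mean and variance of Y = σX + μ and the Laplace transform E[e^{λX}] = e^{λ²/2}:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10**6

# Standard normal samples and the transformed variable Y = sigma*X + mu.
X = rng.standard_normal(n)
mu, sigma = 1.5, 2.0
Y = sigma * X + mu

print(Y.mean(), Y.var())              # approximately mu and sigma^2

# Laplace transform of X: E[exp(lambda*X)] should be close to exp(lambda^2 / 2).
lam = 0.7
print(np.exp(lam * X).mean(), np.exp(lam**2 / 2))
```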
Recommended publications
  • MAST31801 Mathematical Finance II, Spring 2019. Problem Sheet 8 (4-8.4.2019)
    UH | Master program Mathematics and Statistics | MAST31801 Mathematical Finance II, Spring 2019. Problem sheet 8 (4-8.4.2019)

    1. Show that a local martingale (X_t : t ≥ 0) in a filtration F which is bounded from below is a supermartingale. Hint: use localization together with the Fatou lemma.

    2. A non-negative local martingale X_t ≥ 0 is a martingale if and only if E(X_t) = E(X_0) for all t.

    3. Recall the Ito formula: for f(x, s) ∈ C^{2,1} and a continuous semimartingale X_t with quadratic covariation ⟨X⟩_t,

        f(X_t, t) − f(X_0, 0) − (1/2) ∫_0^t (∂²f/∂x²)(X_s, s) d⟨X⟩_s − ∫_0^t (∂f/∂s)(X_s, s) ds = ∫_0^t (∂f/∂x)(X_s, s) dX_s,

    where on the right side we have the Ito integral. We can also understand the Ito integral as a limit in probability of Riemann sums

        Σ_{t_k^n ∈ Π_n} (∂f/∂x)(X(t_{k−1}^n), t_{k−1}^n) [X(t_k^n ∧ t) − X(t_{k−1}^n ∧ t)]

    along a sequence of partitions Π_n with Δ(Π_n) → 0. Recalling that the Brownian motion W_t has quadratic variation ⟨W⟩_t = t, write the following Ito integrals explicitly by using the Ito formula:
    (a) ∫_0^t W_s^n dW_s, n ∈ ℕ.
    (b) ∫_0^t exp(σW_s) σ dW_s
    (c) ∫_0^t exp(σW_s − σ²s/2) σ dW_s
    (d) ∫_0^t sin(σW_s) dW_s
    (e) ∫_0^t cos(σW_s) dW_s

    4. These Ito integrals, as stochastic processes, are semimartingales. Compute the quadratic variation of the Ito integrals above. Hint: recall that the finite variation part of a semimartingale has zero quadratic variation, and the quadratic variation is bilinear.
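    For instance, item 3(a) with n = 1 gives ∫_0^t W_s dW_s = (W_t² − t)/2, and the left-endpoint Riemann sums above converge to the same quantity. A short numerical sketch (the step count and seed are arbitrary assumptions):

```python
import numpy as np

# Check: the Riemann sums  sum_k W_{t_{k-1}} (W_{t_k} - W_{t_{k-1}})
# should approximate the Ito integral  int_0^t W_s dW_s = (W_t^2 - t) / 2.

rng = np.random.default_rng(1)
t, n = 1.0, 200_000
dt = t / n
W = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])

riemann = np.sum(W[:-1] * np.diff(W))     # left-endpoint (Ito) sums
print(riemann, (W[-1]**2 - t) / 2)        # the two numbers should be close
```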
  • CONFORMAL INVARIANCE AND 2d STATISTICAL PHYSICS
    CONFORMAL INVARIANCE AND 2d STATISTICAL PHYSICS. GREGORY F. LAWLER

    Abstract. A number of two-dimensional models in statistical physics are conjectured to have scaling limits at criticality that are in some sense conformally invariant. In the last ten years, the rigorous understanding of such limits has increased significantly. I give an introduction to the models and one of the major new mathematical structures, the Schramm-Loewner Evolution (SLE).

    1. Critical Phenomena

    Critical phenomena in statistical physics refers to the study of systems at or near the point at which a phase transition occurs. There are many models of such phenomena. We will discuss some discrete equilibrium models that are defined on a lattice. These are measures on paths or configurations where configurations are weighted by their energy with a preference for paths of smaller energy. These measures depend on at least one parameter. A standard parameter in physics is β = c/T, where c is a fixed constant, T stands for temperature, and the measure given to a configuration of energy E is e^{−βE}. Phase transitions occur at critical values of the temperature corresponding to, e.g., the transition from a gaseous to liquid state. We use β for the parameter, although for understanding the literature it is useful to know that large values of β correspond to "low temperature" and small values of β correspond to "high temperature". Small β (high temperature) systems have weaker correlations than large β (low temperature) systems. In a number of models, there is a critical β_c such that qualitatively the system has three regimes: β < β_c (high temperature), β = β_c (critical) and β > β_c (low temperature).
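    The weighting e^{−βE} can be made concrete with a toy computation; the energy values and the two β values below are arbitrary assumptions, chosen only to show how large β concentrates the measure on low-energy configurations while small β spreads it out:

```python
import numpy as np

# Toy Boltzmann-type weighting: configurations with energies E get probability
# proportional to exp(-beta * E). Energies and beta values are illustrative.

energies = np.array([0.0, 1.0, 2.0, 5.0])

for beta in [0.1, 5.0]:                      # "high temperature" vs "low temperature"
    weights = np.exp(-beta * energies)
    probs = weights / weights.sum()          # normalised measure on 4 configurations
    print(beta, np.round(probs, 3))
# Small beta: mass nearly uniform; large beta: mass concentrates on the lowest energy.
```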
  • Superprocesses and McKean-Vlasov Equations with Creation of Mass
    Superprocesses and McKean-Vlasov equations with creation of mass. L. Overbeck, Department of Statistics, University of California, Berkeley, 367 Evans Hall, Berkeley, CA 94720, U.S.A. (Supported by a Fellowship of the Deutsche Forschungsgemeinschaft. On leave from the Universität Bonn, Institut für Angewandte Mathematik, Wegelerstr. 6, 53115 Bonn, Germany.)

    Abstract. Weak solutions of McKean-Vlasov equations with creation of mass are given in terms of superprocesses. The solutions can be approximated by a sequence of non-interacting superprocesses or by the mean-field of multitype superprocesses with mean-field interaction. The latter approximation is associated with a propagation of chaos statement for weakly interacting multitype superprocesses.

    Running title: Superprocesses and McKean-Vlasov equations.

    1 Introduction

    Superprocesses are useful in solving nonlinear partial differential equations of the type Δf = f^{1+β}, β ∈ (0, 1], cf. [Dy]. We now change the point of view and show how they provide stochastic solutions of nonlinear partial differential equations of McKean-Vlasov type, i.e. we want to find weak solutions of

        ∂ρ_t/∂t = Σ_{i,j=1}^d ∂²/(∂x_i ∂x_j) (a_{ij}(x, ρ_t) ρ_t) + Σ_{i=1}^d ∂/∂x_i (b_i(x, ρ_t) ρ_t) + δ(x, ρ_t) ρ_t.   (1.1)

    A weak solution ρ = (ρ_s)_{s≤t} ∈ C([0,T]; M(ℝ^d)) satisfies

        ⟨ρ_t, f⟩ = ⟨ρ_0, f⟩ + ∫_0^t ⟨ρ_s, Σ_{i,j} a_{ij} ∂²f/(∂x_i ∂x_j) + δ f + Σ_i b_i ∂f/∂x_i⟩ ds.

    Equation (1.1) generalizes McKean-Vlasov equations of two different types.
  • A Stochastic Processes and Martingales
    A Stochastic Processes and Martingales

    A.1 Stochastic Processes

    Let I be either ℕ or ℝ₊. A stochastic process on I with state space E is a family of E-valued random variables X = {X_t : t ∈ I}. We only consider examples where E is a Polish space. Suppose for the moment that I = ℝ₊. A stochastic process is called cadlag if its paths t → X_t are right-continuous (a.s.) and its left limits exist at all points. In this book we assume that every stochastic process is cadlag. We say a process is continuous if its paths are continuous. The above conditions are meant to hold with probability 1 and not to hold pathwise.

    A.2 Filtration and Stopping Times

    The information available at time t is expressed by a σ-subalgebra F_t ⊂ F. An increasing family of σ-algebras {F_t : t ∈ I} is called a filtration. If I = ℝ₊, we call a filtration right-continuous if F_{t+} := ∩_{s>t} F_s = F_t. If not stated otherwise, we assume that all filtrations in this book are right-continuous. In many books it is also assumed that the filtration is complete, i.e., F_0 contains all P-null sets. We do not assume this here because we want to be able to change the measure in Chapter 4. Because the changed measure and P will be singular, it would not be possible to extend the new measure to the whole σ-algebra F. A stochastic process X is called F_t-adapted if X_t is F_t-measurable for all t. If it is clear which filtration is used, we just call the process adapted. The natural filtration {F_t^X} is the smallest right-continuous filtration such that X is adapted.
  • Derivatives of Self-Intersection Local Times
    Derivatives of self-intersection local times. Jay Rosen, Department of Mathematics, College of Staten Island, CUNY, Staten Island, NY 10314. e-mail: [email protected] (This research was supported, in part, by grants from the National Science Foundation and PSC-CUNY.)

    Summary. We show that the renormalized self-intersection local time γ_t(x) for both the Brownian motion and symmetric stable process in ℝ¹ is differentiable in the spatial variable and that γ_t'(0) can be characterized as the continuous process of zero quadratic variation in the decomposition of a natural Dirichlet process. This Dirichlet process is the potential of a random Schwartz distribution. Analogous results for fractional derivatives of self-intersection local times in ℝ¹ and ℝ² are also discussed.

    1 Introduction

    In their study of the intrinsic Brownian local time sheet and stochastic area integrals for Brownian motion, [14, 15, 16], Rogers and Walsh were led to analyze the functional

        A(t, B_t) = ∫_0^t 1_{[0,∞)}(B_t − B_s) ds   (1)

    where B_t is a 1-dimensional Brownian motion. They showed that A(t, B_t) is not a semimartingale, and in fact showed that

        A(t, B_t) − ∫_0^t L_s^{B_s} dB_s   (2)

    has finite non-zero 4/3-variation. Here L_s^x is the local time at x, which is formally L_s^x = ∫_0^s δ(B_r − x) dr, where δ(x) is Dirac's 'δ-function'. A formal application of Ito's lemma, using (d/dx) 1_{[0,∞)}(x) = δ(x) and (d²/dx²) 1_{[0,∞)}(x) = δ'(x), yields

        A(t, B_t) − ∫_0^t L_s^{B_s} dB_s = (1/2) t + ∫_0^t ∫_0^s δ'(B_s − B_r) dr ds.   (3)
  • Skorohod Stochastic Integration with Respect to Non-Adapted Processes on Wiener Space Nicolas Privault
    Skorohod stochastic integration with respect to non-adapted processes on Wiener space. Nicolas Privault, Équipe d'Analyse et Probabilités, Université d'Evry-Val d'Essonne, Boulevard des Coquibus, 91025 Evry Cedex, France. e-mail: [email protected]

    Abstract. We define a Skorohod type anticipative stochastic integral that extends the Itô integral not only with respect to the Wiener process, but also with respect to a wide class of stochastic processes satisfying certain homogeneity and smoothness conditions, without requirements relative to filtrations such as adaptedness. Using this integral, a change of variable formula that extends the classical and Skorohod Itô formulas is obtained.

    Key words: Itô calculus, Malliavin calculus, Skorohod integral.
    Mathematics Subject Classification (1991): 60H05, 60H07.

    1 Introduction

    The Skorohod integral, defined by creation on Fock space, is an extension of the Itô integral with respect to the Wiener or Poisson processes, depending on the probabilistic interpretation chosen for the Fock space. This means that it coincides with the Itô integral on square-integrable adapted processes. It can also extend the stochastic integral with respect to certain normal martingales, cf. [10], but it always acts with respect to the underlying process relative to a given probabilistic interpretation. The Skorohod integral is also linked to the Stratonovich, forward and backward integrals, and allows to extend the Itô formula to a class of Skorohod integral processes which contains non-adapted processes that have a certain structure. In this paper we introduce an extended Skorohod stochastic integral on Wiener space that can act with respect to processes other than Brownian motion.
  • Lecture 19 Semimartingales
    Lecture 19: Semimartingales. Course: Theory of Probability II. Term: Spring 2015. Instructor: Gordan Zitkovic

    Continuous local martingales

    While tailor-made for the L²-theory of stochastic integration, martingales in M_0^{2,c} do not constitute a large enough class to be ultimately useful in stochastic analysis. It turns out that even the class of all martingales is too small. When we restrict ourselves to processes with continuous paths, a naturally stable family turns out to be the class of so-called local martingales.

    Definition 19.1 (Continuous local martingales). A continuous adapted stochastic process {M_t}_{t∈[0,∞)} is called a continuous local martingale if there exists a sequence {τ_n}_{n∈ℕ} of stopping times such that
    1. τ_1 ≤ τ_2 ≤ ... and τ_n → ∞, a.s., and
    2. {M_t^{τ_n}}_{t∈[0,∞)} is a uniformly integrable martingale for each n ∈ ℕ.
    In that case, the sequence {τ_n}_{n∈ℕ} is called the localizing sequence for (or is said to reduce) {M_t}_{t∈[0,∞)}. The set of all continuous local martingales M with M_0 = 0 is denoted by M_0^{loc,c}.

    Remark 19.2.
    1. There is a nice theory of local martingales which are not necessarily continuous (RCLL), but, in these notes, we will focus solely on the continuous case. In particular, a "martingale" or a "local martingale" will always be assumed to be continuous.
    2. While we will only consider local martingales with M_0 = 0 in these notes, this assumption is not standard, so we don't put it into the definition of a local martingale.
    3.
  • Local Time and Tanaka Formula for G-Brownian Motion
    Local time and Tanaka formula for the G-Brownian motion. Qian Lin (1: School of Mathematics, Shandong University, Jinan 250100, China; 2: Laboratoire de Mathématiques, CNRS UMR 6205, Université de Bretagne Occidentale, 6, avenue Victor Le Gorgeu, CS 93837, 29238 Brest cedex 3, France. Corresponding address: Institute of Mathematical Economics, Bielefeld University, Postfach 100131, 33501 Bielefeld, Germany. Email address: [email protected])

    Abstract. In this paper, we study the notion of local time and obtain the Tanaka formula for the G-Brownian motion. Moreover, the joint continuity of the local time of the G-Brownian motion is obtained and its quadratic variation is proven. As an application, we generalize Itô's formula with respect to the G-Brownian motion to convex functions.

    Keywords: G-expectation; G-Brownian motion; local time; Tanaka formula; quadratic variation.

    1 Introduction

    The objective of the present paper is to study the local time as well as the Tanaka formula for the G-Brownian motion. Motivated by uncertainty problems, risk measures and superhedging in finance, Peng has introduced a new notion of nonlinear expectation, the so-called G-expectation (see [10], [12], [13], [15], [16]), which is associated with the following nonlinear heat equation

        ∂u(t,x)/∂t = G(∂²u(t,x)/∂x²),  (t,x) ∈ [0,∞) × ℝ,   u(0,x) = ϕ(x),

    where, for given parameters 0 ≤ σ ≤ σ̄, the sublinear function G is defined as follows:

        G(α) = (1/2)(σ̄² α⁺ − σ² α⁻),  α ∈ ℝ.

    The G-expectation represents a special case of general nonlinear expectations Ê whose importance stems from the fact that they are related to risk measures ρ in finance by the relation Ê[X] = ρ(−X), where X runs over the class of contingent claims.
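    For concreteness, the sublinear function G above can be written down directly; a small sketch (the parameter values are illustrative assumptions) also checks the sublinearity G(α + β) ≤ G(α) + G(β) on a few points:

```python
# G(alpha) = (1/2) * (sigma_bar^2 * alpha^+ - sigma_low^2 * alpha^-), with 0 <= sigma_low <= sigma_bar.
# Parameter values below are illustrative, not taken from the paper.

def G(alpha, sigma_low=0.5, sigma_bar=1.0):
    pos, neg = max(alpha, 0.0), max(-alpha, 0.0)
    return 0.5 * (sigma_bar**2 * pos - sigma_low**2 * neg)

# Sublinearity check: G(a + b) <= G(a) + G(b).
for a, b in [(1.0, -0.3), (-2.0, 0.5), (0.7, 0.7)]:
    assert G(a + b) <= G(a) + G(b) + 1e-12

print(G(1.0), G(-1.0))
```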
  • Lecture III: Review of Classic Quadratic Variation Results and Relevance to Statistical Inference in Finance
    Lecture III: Review of Classic Quadratic Variation Results and Relevance to Statistical Inference in Finance. Christopher P. Calderon, Rice University / Numerica Corporation, Research Scientist

    Outline
    I. Refresher on Some Unique Properties of Brownian Motion
    II. Stochastic Integration and Quadratic Variation of SDEs
    III. Demonstration of How Results Help Understand "Realized Variation" Discretization Error and Idea Behind "Two Scale Realized Volatility Estimator"

    Part I: Refresher on Some Unique Properties of Brownian Motion

    For Items I & II, draw heavily from:
    Protter, P. (2004) Stochastic Integration and Differential Equations, Springer-Verlag, Berlin.
    Steele, J.M. (2001) Stochastic Calculus and Financial Applications, Springer, New York.
    For Item III, highlight results from:
    Barndorff-Nielsen, O. & Shephard, N. (2003) Bernoulli 9, 243-265.
    Zhang, L., Mykland, P. & Aït-Sahalia, Y. (2005) JASA 100, 1394-1411.

    Assuming some basic familiarity with Brownian motion (B.M.):
    - Stationary independent increments
    - Increments are Gaussian: B_t − B_s ~ N(0, t − s)
    - Paths are continuous but "rough" / "jittery" (not classically differentiable)
    - Paths are of infinite variation, so "funny" integrals are used in stochastic integration, e.g. ∫_0^t B_s dB_s = (1/2)(B_t² − t)

    The material in Parts I and II are "classic" fundamental and established results, but set the stage to understand basics in Part III. The references listed at the beginning provide a detailed mathematical account of the material summarized briefly here.

    A classic result worth reflecting on:

        lim_{N→∞} Σ_{i=1}^N (B_{t_i} − B_{t_{i−1}})² = T,   with t_i = iT/N,

    implies something not intuitive to many unfamiliar with Brownian motion.

    [Figure: discretely sampled Brownian motion paths - fix the terminal time T and refine the mesh by increasing N.]
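    The classic result quoted above is easy to reproduce numerically: the realized quadratic variation of a discretely sampled Brownian path concentrates around T as the mesh is refined. A small sketch (T, the mesh sizes and the seed are arbitrary assumptions):

```python
import numpy as np

# Realized quadratic variation of a discretely sampled Brownian path:
#   sum_i (B_{t_i} - B_{t_{i-1}})^2  ->  T   as the mesh is refined.

rng = np.random.default_rng(2)
T = 2.0
for n in [10, 100, 1_000, 10_000, 100_000]:
    increments = rng.normal(0.0, np.sqrt(T / n), n)   # B_{t_i} - B_{t_{i-1}}
    print(n, np.sum(increments**2))                   # should approach T = 2.0
```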
  • Locally Feller Processes and Martingale Local Problems
    Locally Feller processes and martingale local problems. Mihai Gradinaru and Tristan Haugomat, Institut de Recherche Mathématique de Rennes, Université de Rennes 1, Campus de Beaulieu, 35042 Rennes Cedex, France. {Mihai.Gradinaru,[email protected]

    Abstract: This paper is devoted to the study of a certain type of martingale problems associated to general operators corresponding to processes which have finite lifetime. We analyse several properties and in particular the weak convergence of sequences of solutions for an appropriate Skorokhod topology setting. We point out the Feller-type features of the associated solutions to this type of martingale problem. Then localisation theorems for well-posed martingale problems or for corresponding generators are proved.

    Key words: martingale problem, Feller processes, weak convergence of probability measures, Skorokhod topology, generators, localisation
    MSC2010 Subject Classification: Primary 60J25; Secondary 60G44, 60J35, 60B10, 60J75, 47D07

    1 Introduction

    The theory of Lévy-type processes stays an active domain of research during the last two decades. Heuristically, a Lévy-type process X with symbol q : ℝ^d × ℝ^d → ℂ is a Markov process which behaves locally like a Lévy process with characteristic exponent q(a, ·), in a neighbourhood of each point a ∈ ℝ^d. One associates to a Lévy-type process the pseudo-differential operator L given by, for f ∈ C_c^∞(ℝ^d),

        Lf(a) := −∫_{ℝ^d} e^{ia·α} q(a, α) f̂(α) dα,   where   f̂(α) := (2π)^{−d} ∫_{ℝ^d} e^{−ia·α} f(a) da.

    Does a sequence X^{(n)} of Lévy-type processes, having symbols q_n, converge toward some process, when the sequence of symbols q_n converges to a symbol q? What can we say about the sequence X^{(n)} when the corresponding sequence of pseudo-differential operators L_n converges to an operator L? What could be the appropriate setting when one wants to approximate a Lévy-type process by a family of discrete Markov chains? This is the kind of question which naturally appears when we study Lévy-type processes.
  • Notes on Brownian Motion II: Introduction to Stochastic Integration
    Copyright © 2014 by Karl Sigman. IEOR 6712: Notes on Brownian Motion II: Introduction to stochastic integration

    1.1 Functions of bounded variation

    A real-valued function f = f(t) on [0,∞) is said to be of bounded variation if the y-axis distance (e.g., up-down distance) it travels in any finite interval of time is finite. As we shall soon see, the paths of Brownian motion do not have this property, leading us to conclude, among other things, that in fact the paths are not differentiable. If f is differentiable, then during an infinitesimal time interval of length dt, near t, the y-axis distance traversed is |df(t)| = |f'(t)| dt, and so the total distance traveled over an interval [a,b] is ∫_a^b |f'(t)| dt, as we know from calculus. To make this precise, consider an interval [a,b] and a partition Π of the interval of the form a = t_0 < t_1 < ... < t_n = b. Define the variation over [a,b] with respect to Π by

        V_Π(f)[a,b] = Σ_{k=1}^n |f(t_k) − f(t_{k−1})|.   (1)

    As the partition gets finer and finer, this quantity approaches the total variation of f over [a,b]: letting |Π| = max_{1≤k≤n} {t_k − t_{k−1}} denote the maximum interval length of a partition, the total variation of f over [a,b] is defined by

        V(f)[a,b] = lim_{|Π|→0} V_Π(f)[a,b].   (2)

    If V(f)[a,b] < ∞, then f is said to be of bounded variation over [a,b].
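    A quick numerical illustration of definitions (1)-(2): for a smooth function the variation sums V_Π(f)[a,b] stabilise at ∫_a^b |f'(t)| dt, while for a simulated Brownian path they keep growing as the partition is refined. The interval, mesh sizes and smooth test function below are arbitrary assumptions:

```python
import numpy as np

# Variation sums V_Pi(f)[a,b] = sum_k |f(t_k) - f(t_{k-1})| over refining partitions.

def variation(values):
    return np.sum(np.abs(np.diff(values)))

rng = np.random.default_rng(3)
a, b = 0.0, 1.0
for n in [10, 100, 1_000, 10_000]:
    t = np.linspace(a, b, n + 1)
    smooth = np.sin(2 * np.pi * t)                       # f(t) = sin(2*pi*t): variation -> 4
    brownian = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(1 / n), n))])
    print(n, variation(smooth), variation(brownian))     # second column grows like sqrt(n)
```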
  • MIT15 070JF13 Lec14.Pdf
    MASSACHUSETTS INSTITUTE OF TECHNOLOGY, 6.265/15.070J, Fall 2013. Lecture 14, 10/28/2013: Introduction to Ito calculus.

    Content.
    1. Spaces L², M², M^{2,c}.
    2. Quadratic variation property of continuous martingales.

    1 Doob-Kolmogorov inequality. Continuous time version

    Let us establish the following continuous time version of the Doob-Kolmogorov inequality. We use RCLL as abbreviation for right-continuous function with left limits.

    Proposition 1. Suppose X_t ≥ 0 is a RCLL sub-martingale. Then for every T, x ≥ 0,

        P(sup_{0≤t≤T} X_t ≥ x) ≤ E[X_T²] / x².

    Proof. Consider any sequence of partitions Π_n = {0 = t_0^n < t_1^n < ... < t_{N_n}^n = T} such that Δ(Π_n) = max_j |t_{j+1}^n − t_j^n| → 0. Additionally, suppose that the sequence Π_n is nested, in the sense that for every n_1 ≤ n_2, every point in Π_{n_1} is also a point in Π_{n_2}. Let X_t^n = X_{t_j^n} where j = max{i : t_i^n ≤ t}. Then X_t^n is a sub-martingale adapted to the same filtration (notice that this would not be the case if we instead chose right ends of the intervals). By the discrete version of the D-K inequality (see previous lectures), we have

        P(max_{j≤N_n} X_{t_j^n} ≥ x) = P(sup_{t≤T} X_t^n ≥ x) ≤ E[X_T²] / x².

    By RCLL, we have sup_{t≤T} X_t^n → sup_{t≤T} X_t a.s. Indeed, fix ε > 0 and find t_0 = t_0(ω) such that X_{t_0} ≥ sup_{t≤T} X_t − ε. Find n large enough and j = j(n) such that t_{j(n)−1}^n ≤ t_0 ≤ t_{j(n)}^n.
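    Proposition 1 can be illustrated by simulation, for example with the non-negative submartingale X_t = B_t², for which E[X_T²] = E[B_T⁴] = 3T². The following Monte Carlo sketch (T, x, path count and step count are arbitrary assumptions) compares the empirical probability with the bound:

```python
import numpy as np

# Monte Carlo illustration of  P(sup_{t<=T} X_t >= x) <= E[X_T^2] / x^2
# for the submartingale X_t = B_t^2, so the bound equals 3*T^2 / x^2.

rng = np.random.default_rng(4)
T, x, n_paths, n_steps = 1.0, 2.0, 20_000, 1_000
dt = T / n_steps

B = np.cumsum(rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps)), axis=1)
sup_X = np.max(B**2, axis=1)                  # sup over the discrete grid of X_t = B_t^2

print(np.mean(sup_X >= x))                    # empirical probability (well below the bound)
print(3 * T**2 / x**2)                        # Doob-Kolmogorov upper bound = 0.75
```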