A Stochastic Processes and Martingales


A.1 Stochastic Processes

Let $I$ be either $\mathbb{N}$ or $\mathbb{R}_+$. A stochastic process on $I$ with state space $E$ is a family of $E$-valued random variables $X = \{X_t : t \in I\}$. We only consider examples where $E$ is a Polish space.

Suppose for the moment that $I = \mathbb{R}_+$. A stochastic process is called càdlàg if its paths $t \mapsto X_t$ are right-continuous (a.s.) and its left limits exist at all points. In this book we assume that every stochastic process is càdlàg. We say a process is continuous if its paths are continuous. The above conditions are meant to hold with probability 1, not pathwise.

A.2 Filtration and Stopping Times

The information available at time $t$ is expressed by a $\sigma$-subalgebra $\mathcal{F}_t \subset \mathcal{F}$. An increasing family of $\sigma$-algebras $\{\mathcal{F}_t : t \in I\}$ is called a filtration. If $I = \mathbb{R}_+$, we call a filtration right-continuous if $\mathcal{F}_{t+} := \bigcap_{s > t} \mathcal{F}_s = \mathcal{F}_t$. If not stated otherwise, we assume that all filtrations in this book are right-continuous. In many books it is also assumed that the filtration is complete, i.e., that $\mathcal{F}_0$ contains all $\mathbb{P}$-null sets. We do not assume this here because we want to be able to change the measure in Chapter 4. Because the changed measure and $\mathbb{P}$ will be singular, it would not be possible to extend the new measure to the whole $\sigma$-algebra $\mathcal{F}$.

A stochastic process $X$ is called $\mathcal{F}_t$-adapted if $X_t$ is $\mathcal{F}_t$-measurable for all $t$. If it is clear which filtration is used, we just call the process adapted. The natural filtration $\{\mathcal{F}_t^X\}$ is the smallest right-continuous filtration such that $X$ is adapted. If we consider a process $X$, we always work with the natural filtration unless stated otherwise.

Suppose that $I = \mathbb{N}$. A stochastic process $X$ is called predictable if $\{X_{t+1}\}$ is $\mathcal{F}_t$-adapted and $X_0$ is not stochastic. Suppose that $I = \mathbb{R}_+$. Let $\mathcal{F}_{t-} = \bigvee_{s < t} \mathcal{F}_s$ be the smallest $\sigma$-algebra containing all $\mathcal{F}_s$ for $s < t$. We say that a stochastic process $X$ is predictable if $X$ is adapted to $\{\mathcal{F}_{t-}\}$.
Note that if $X$ is adapted, then $\{X_{t-}\}$ (with $X_{0-} = X_0$ deterministic) is predictable. Indeed,

$$\{X_{t-} \ge a\} = \bigcap_{n \in \mathbb{N}^*} \bigcup_{s \in \mathbb{Q} \cap [0,t)} \bigcap_{v \in \mathbb{Q} \cap (s,t)} \{X_v > a - n^{-1}\} .$$

Because $\{X_v > a - n^{-1}\} \in \mathcal{F}_v \subset \mathcal{F}_{t-}$, we have $\{X_{t-} \ge a\} \in \mathcal{F}_{t-}$.

A random variable $T \in I \cup \{\infty\}$ is called an $\mathcal{F}_t$-stopping time if $\{T \le t\} \in \mathcal{F}_t$ for all $t \in I$. Note that deterministic times are stopping times because $\{s \le t\} \in \{\emptyset, \Omega\}$. If $X$ is an adapted stochastic process and $T$ is a stopping time, then the stopped process $\{X_{T \wedge t}\}$ is also adapted.

Let $A \subset E$ be a Borel set. Define the first entrance times

$$T_A = \inf\{t \in I : X_t \in A\} , \qquad T_A^* = \inf\{t \in I : X_t \in A \text{ or } X_{t-} \in A\} .$$

The first entrance times are important examples of stopping times. If $A$ is open, then $T_A$ is a stopping time. If $A$ is closed, then $T_A^*$ is a stopping time. If $A$ is compact or $X$ is (pathwise) continuous, then $T_A$ is a stopping time.

In many examples one needs the information up to a stopping time. For a stopping time $T$ we define the $\sigma$-algebra

$$\mathcal{F}_T = \{A \in \mathcal{F} : A \cap \{T \le t\} \in \mathcal{F}_t \text{ for all } t \in I\} .$$

The $\sigma$-algebra $\mathcal{F}_T$ contains all events observable until the stopping time $T$.

A.3 Martingales

An important concept for stochastic processes is the notion of a fair game, called a martingale. Now let $E = \mathbb{R}$. An adapted stochastic process $M$ is called an $\mathcal{F}_t$-submartingale if $\mathbb{E}[|M_t|] < \infty$ for all $t \in I$ and

$$\mathbb{E}[M_{t+s} \mid \mathcal{F}_t] \ge M_t \quad \text{for all } t, s \in I. \tag{A.1}$$

If the inequality (A.1) is reversed, the process $M$ is an $\mathcal{F}_t$-supermartingale. A stochastic process that is both a sub- and a supermartingale is called a martingale.

If diffusion processes are involved, it often turns out that the notion of a martingale is too strong. For example, stochastic integrals with respect to Brownian motion are martingales if one stops them when reaching a finite level. This is very close to a martingale. We therefore say that a stochastic process $M$ is an $\mathcal{F}_t$-local martingale if there exists a sequence of $\mathcal{F}_t$-stopping times $\{T_n : n \in \mathbb{N}\}$ with $\lim_{n \to \infty} T_n = \infty$ such that the stopped processes $\{M_{T_n \wedge t}\}$ are $\mathcal{F}_t$-martingales.
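The interplay between stopping and the martingale property can be illustrated numerically. The following is a minimal Monte Carlo sketch (not from the text; the walk, the level $5$, the horizon, and the sample size are illustrative choices): we stop a simple symmetric random walk, which is a martingale with $M_0 = 0$, at the first entrance time of $A = \{5, 6, \ldots\}$ and check that the stopped process still has mean $0$.

```python
import random

def stopped_walk_mean(level=5, t_max=50, n_paths=20000, seed=1):
    """Monte Carlo estimate of E[X_{T ∧ t_max}] for a simple symmetric
    random walk X stopped at T = T_A, the first entrance time of
    A = [level, ∞).  Since X is a martingale, the mean should stay at 0."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        x, stopped = 0, False
        for _ in range(t_max):
            if not stopped and x >= level:  # first entrance: freeze the path
                stopped = True
            if not stopped:
                x += rng.choice((-1, 1))    # fair ±1 step
        total += x
    return total / n_paths

print(stopped_walk_mean())  # close to 0 up to Monte Carlo error
```

The estimate is $0$ up to sampling noise of a few hundredths, consistent with the fact that stopping an adapted martingale at a stopping time preserves the martingale property.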
Such a sequence of stopping times is called a localisation sequence. A localisation sequence can always be chosen to be increasing and such that $\{M_{T_n \wedge t}\}$ is a uniformly integrable martingale for each $n$.

The following results make martingales an important tool. The proofs can be found in [57] or [152].

Proposition A.1 (Convergence theorem). Let $M$ be a submartingale, and suppose that

$$\sup_{t \ge 0} \mathbb{E}[(M_t)^+] < \infty .$$

Then there exists a random variable $M_\infty$ such that $\lim_{t \to \infty} M_t = M_\infty$ and $\mathbb{E}[|M_\infty|] < \infty$.

It should be noted that one cannot necessarily interchange the limit and integration. There are many examples where $M_0 = \lim_{t \to \infty} \mathbb{E}[M_t] \ne \mathbb{E}[M_\infty]$.

Proposition A.2 (Stopping theorem). Let $M$ be a submartingale, $t \ge 0$, and $T_1, T_2$ be stopping times. Then

$$\mathbb{E}[M_{T_1 \wedge t} \mid \mathcal{F}_{T_2}] \ge M_{T_1 \wedge T_2 \wedge t} .$$

In particular, the stopped process $\{M_{T_1 \wedge t}\}$ is a submartingale.

When dealing with local martingales, the following result is often helpful.

Lemma A.3. Let $M$ be a positive local martingale. Then $M$ is a supermartingale.

Proof. Let $\{T_n\}$ be a localisation sequence. By Fatou's lemma,

$$\mathbb{E}[M_{t+s} \mid \mathcal{F}_t] = \mathbb{E}\bigl[\lim_{n \to \infty} M_{T_n \wedge (t+s)} \bigm| \mathcal{F}_t\bigr] \le \lim_{n \to \infty} \mathbb{E}[M_{T_n \wedge (t+s)} \mid \mathcal{F}_t] = \lim_{n \to \infty} M_{T_n \wedge t} = M_t .$$

Thus, $M$ is a supermartingale.

A.4 Poisson Processes

An increasing stochastic process $N$ with values in $\mathbb{N}$ and with $N_0 = 0$ is called a point process. If the jumps are of one unit only, i.e., $N_t - N_{t-} \in \{0, 1\}$ for all $t$, then we call the point process simple.

A special case of a simple point process is the homogeneous Poisson process. This process has some special properties that make it easy to handle. It can be defined via one of the following equivalent conditions. A proof can be found in [152].

Proposition A.4. Let $N$ be a point process with occurrence times $0 = T_0 < T_1 < \cdots$ and let $\lambda > 0$. The following conditions are equivalent:

i) $N$ has stationary and independent increments such that

$$\mathbb{P}[N_h = 0] = 1 - \lambda h + o(h) , \qquad \mathbb{P}[N_h = 1] = \lambda h + o(h) ,$$

as $h \downarrow 0$.
ii) $N$ has independent increments and $N_t$ is Poisson-distributed with parameter $\lambda t$ for each $t > 0$.

iii) The interarrival times $\{T_k - T_{k-1} : k \ge 1\}$ are independent and exponentially distributed with parameter $\lambda$.

iv) For each $t > 0$, $N_t$ is Poisson-distributed with parameter $\lambda t$, and given $\{N_t = n\}$ the occurrence points $T_1, \ldots, T_n$ have the same distribution as the order statistics of $n$ independent random variables uniformly distributed on $(0, t)$.

v) $N$ has independent increments, $\mathbb{E}[N_1] = \lambda$, and given $\{N_t = n\}$ the occurrence points $T_1, \ldots, T_n$ have the same distribution as the order statistics of $n$ independent random variables uniformly distributed on $(0, t)$.

vi) $N$ has stationary and independent increments such that $\mathbb{E}[N_1] = \lambda$ and $\mathbb{P}[N_h \ge 2] = o(h)$ as $h \downarrow 0$.

We call $\lambda$ the rate of the Poisson process.

Let $\{Y_i\}$ be a sequence of iid random variables independent of $N$. The process $\{\sum_{i=1}^{N_t} Y_i\}$ is called a compound Poisson process. The Poisson process is just the special case with $Y_i = 1$. Like the Poisson process, the compound Poisson process has independent and stationary increments. This leads to the following martingales. If $\mathbb{E}[|Y_i|] < \infty$, then

$$\sum_{i=1}^{N_t} Y_i - \lambda \mathbb{E}[Y_i] t$$

is a martingale. If $\mathbb{E}[Y_i^2] < \infty$, then

$$\Bigl( \sum_{i=1}^{N_t} Y_i - \lambda \mathbb{E}[Y_i] t \Bigr)^2 - \lambda \mathbb{E}[Y_i^2] t$$

is a martingale. If the moment-generating function $\mathbb{E}[\exp\{r Y_i\}]$ exists, then the process

$$\exp\Bigl\{ r \sum_{i=1}^{N_t} Y_i - \lambda \mathbb{E}[\exp\{r Y_i\} - 1] t \Bigr\}$$

is a martingale. The martingale property can easily be verified from the independent and stationary increments property.

A.5 Brownian Motion

A (càdlàg) stochastic process $W$ is called a standard Brownian motion if it has independent increments, $W_0 = 0$, and $W_t$ is normally distributed with mean value $0$ and variance $t$. Wiener [186] proved that Brownian motion exists. A process $\{X_t = mt + \sigma W_t\}$ is called an $(m, \sigma^2)$ Brownian motion. It turns out that a Brownian motion also has stationary increments and that its paths are continuous (a.s.).
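The first of these martingales can be checked by simulation. Below is a minimal sketch (not from the text; the rate $\lambda = 2$, horizon $t = 5$, $\mathrm{Exp}(1)$ jump sizes, and sample size are illustrative assumptions): the Poisson arrivals are generated from iid exponential interarrival times, as in condition iii), and the compensated value $\sum_{i \le N_t} Y_i - \lambda \mathbb{E}[Y_i] t$ should average out to $0$.

```python
import random

def compound_poisson_check(lam=2.0, t=5.0, n_paths=20000, seed=7):
    """Simulate a compound Poisson process via iid Exp(lam) interarrival
    times with iid Exp(1) jumps Y_i, and return the Monte Carlo mean of
    the compensated value  sum_{i<=N_t} Y_i - lam * E[Y] * t,
    which should be close to 0 by the martingale property."""
    rng = random.Random(seed)
    mean_y = 1.0                              # E[Y_i] for Exp(1) jumps
    total = 0.0
    for _ in range(n_paths):
        s = rng.expovariate(lam)              # first occurrence time T_1
        value = 0.0
        while s <= t:                         # occurrences up to time t
            value += rng.expovariate(1.0)     # jump Y_i ~ Exp(1)
            s += rng.expovariate(lam)         # next interarrival time
        total += value - lam * mean_y * t
    return total / n_paths

print(compound_poisson_check())  # close to 0 up to Monte Carlo error
```

The sampling error here is of order $\sqrt{\lambda t \, \mathbb{E}[Y_i^2] / n}$, a few hundredths with these parameters.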
It is easy to verify that a standard Brownian motion is a martingale.

A Brownian motion fluctuates rapidly. If we let $T = \inf\{t : X_t < 0\}$, then $T = 0$. The Brownian motion therefore reaches strictly positive and strictly negative values immediately. The reason is that the Brownian motion has unbounded variation; see the definition ahead. One can also show that the sample path of a Brownian motion is nowhere differentiable.

Brownian motion has bounded quadratic variation. Namely, for $t > 0$,

$$\lim_{n \to \infty} \sum_{i=1}^{n} (W_{t_i} - W_{t_{i-1}})^2 = t ,$$

where $0 = t_0 < t_1 < \cdots < t_n = t$ is a partition and $\sup_k (t_k - t_{k-1})$ has to converge to zero with $n$. Indeed,

$$\mathbb{E}\Bigl[ \sum_{i=1}^{n} (W_{t_i} - W_{t_{i-1}})^2 \Bigr] = t$$

and

$$\mathrm{Var}\Bigl[ \sum_{i=1}^{n} (W_{t_i} - W_{t_{i-1}})^2 \Bigr] = \sum_{i=1}^{n} \mathrm{Var}[(W_{t_i} - W_{t_{i-1}})^2] = \sum_{i=1}^{n} 2 (t_i - t_{i-1})^2 .$$

Thus, the variance converges to zero with $n$.
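The quadratic variation limit is easy to see numerically. The following sketch (not from the text; the uniform partition of $[0, 1]$ with $n = 100{,}000$ steps is an illustrative choice) exploits that over a uniform partition the increments $W_{t_i} - W_{t_{i-1}}$ are iid $N(0, t/n)$, so the sum of squared increments can be sampled directly; it should be close to $t$, with variance $2t^2/n$ as computed above.

```python
import math
import random

def quadratic_variation(t=1.0, n=100000, seed=3):
    """Sample sum_{i=1}^n (W_{t_i} - W_{t_{i-1}})^2 over the uniform
    partition of [0, t]: the increments are iid N(0, t/n), and the sum
    should be close to t for large n (variance 2 t^2 / n)."""
    rng = random.Random(seed)
    sd = math.sqrt(t / n)                      # std. dev. of one increment
    return sum(rng.gauss(0.0, sd) ** 2 for _ in range(n))

print(quadratic_variation())  # close to 1.0 for t = 1
```

With $n = 100{,}000$ the standard deviation of the sample is about $0.0045$, so the output agrees with $t = 1$ to roughly two decimal places.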