Martingale Theory


CHAPTER 1. Martingale Theory

We review basic facts from martingale theory. We start with discrete-time parameter martingales and proceed to explain what modifications are needed in order to extend the results from discrete time to continuous time. The Doob-Meyer decomposition theorem for continuous semimartingales is stated, but the proof is omitted. At the end of the chapter we discuss the quadratic variation process of a local martingale, a key concept in martingale-theory-based stochastic analysis.

1. Conditional expectation and conditional probability

In this section we review basic properties of conditional expectation. Let $(\Omega, \mathcal{F}, P)$ be a probability space and $\mathcal{G}$ a $\sigma$-algebra of measurable events contained in $\mathcal{F}$. Suppose that $X \in L^1(\Omega, \mathcal{F}, P)$ is an integrable random variable. There exists a unique random variable $Y$ with the following two properties:

(1) $Y \in L^1(\Omega, \mathcal{G}, P)$, i.e., $Y$ is measurable with respect to the $\sigma$-algebra $\mathcal{G}$ and is integrable;

(2) for any $C \in \mathcal{G}$, we have
$$E\{X; C\} = E\{Y; C\}.$$

This random variable $Y$ is called the conditional expectation of $X$ with respect to $\mathcal{G}$ and is denoted by $E\{X|\mathcal{G}\}$.

The existence and uniqueness of conditional expectation is an easy consequence of the Radon-Nikodym theorem in real analysis. Define two measures on $(\Omega, \mathcal{G})$ by
$$\mu\{C\} = E\{X; C\}, \qquad \nu\{C\} = P\{C\}, \qquad C \in \mathcal{G}.$$
It is clear that $\mu$ is absolutely continuous with respect to $\nu$. The conditional expectation $E\{X|\mathcal{G}\}$ is precisely the Radon-Nikodym derivative $d\mu/d\nu$.

If $Y$ is another random variable, then we denote $E\{X|\sigma(Y)\}$ simply by $E\{X|Y\}$. Here $\sigma(Y)$ is the $\sigma$-algebra generated by $Y$. The conditional probability of an event $A$ is defined by
$$P\{A|\mathcal{G}\} = E\{I_A|\mathcal{G}\}.$$

The following two examples are helpful.

EXAMPLE 1.1. Suppose that the $\sigma$-algebra $\mathcal{G}$ is generated by a partition:
$$\Omega = \bigcup_{i=1}^{\infty} A_i, \qquad A_i \cap A_j = \emptyset \ \text{ if } i \neq j,$$
and $P\{A_i\} > 0$.
Then $E\{X|\mathcal{G}\}$ is constant on each $A_i$ and is equal to the average of $X$ on $A_i$, i.e.,
$$E\{X|\mathcal{G}\}(\omega) = \frac{1}{P\{A_i\}} \int_{A_i} X \, dP, \qquad \omega \in A_i.$$

EXAMPLE 1.2. The conditional expectation $E\{X|Y\}$ is measurable with respect to $\sigma(Y)$. By a well-known result, it must be a Borel function $f(Y)$ of $Y$. We usually write symbolically
$$f(y) = E\{X|Y = y\}.$$
Suppose that $(X, Y)$ has a joint density function $p(x, y)$ on $\mathbb{R}^2$. Then we can take
$$f(y) = \frac{\int_{\mathbb{R}} x \, p(x, y) \, dx}{\int_{\mathbb{R}} p(x, y) \, dx}.$$

The following three properties of conditional expectation are often used.

(1) If $X \in \mathcal{G}$, then $E\{X|\mathcal{G}\} = X$; more generally, if $X \in \mathcal{G}$, then
$$E\{XY|\mathcal{G}\} = X \, E\{Y|\mathcal{G}\}.$$

(2) If $\mathcal{G}_1 \subseteq \mathcal{G}_2$, then
$$E\{X|\mathcal{G}_1|\mathcal{G}_2\} = E\{X|\mathcal{G}_2|\mathcal{G}_1\} = E\{X|\mathcal{G}_1\}.$$

(3) If $X$ is independent of $\mathcal{G}$, then $E\{X|\mathcal{G}\} = E\{X\}$.

The monotone convergence theorem, the dominated convergence theorem, and Fatou's lemma are three basic convergence theorems in Lebesgue integration theory. They still hold when the usual expectation is replaced by the conditional expectation with respect to an arbitrary $\sigma$-algebra.

2. Martingales

An increasing sequence $\mathcal{F}_* = \{\mathcal{F}_n, n \in \mathbb{Z}_+\}$ of $\sigma$-algebras on $\Omega$ is called a filtration (of $\sigma$-algebras). The quadruple $(\Omega, \mathcal{F}_*, \mathcal{F}, P)$ with $\mathcal{F}_n \subseteq \mathcal{F}$ is a filtered probability space. We usually assume that
$$\mathcal{F} = \mathcal{F}_{\infty} \stackrel{def}{=} \bigvee_{n=0}^{\infty} \mathcal{F}_n,$$
the smallest $\sigma$-algebra containing all $\mathcal{F}_n$. Intuitively $\mathcal{F}_n$ represents the information of an evolving random system under consideration accumulated up to time $n$.

A sequence of random variables $X = \{X_n\}$ is said to be adapted to the filtration $\mathcal{F}_*$ if $X_n$ is measurable with respect to $\mathcal{F}_n$ for all $n$. The filtration $\mathcal{F}_*^X$ generated by the sequence $X$ is
$$\mathcal{F}_n^X = \sigma\{X_i, \, i \le n\}.$$
It is the smallest filtration to which the sequence $X$ is adapted.

DEFINITION 2.1. A sequence of integrable random variables $X = \{X_n, n \in \mathbb{Z}_+\}$ on a filtered probability space $(\Omega, \mathcal{F}_*, P)$ is called a martingale with respect to $\mathcal{F}_*$ if $X$ is adapted to $\mathcal{F}_*$ and
$$E\{X_n|\mathcal{F}_{n-1}\} = X_{n-1} \quad \text{for all } n.$$
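The defining property can be checked numerically. The sketch below (an illustration, not part of the text) takes the simple symmetric random walk $S_n = X_1 + \cdots + X_n$ with i.i.d. $\pm 1$ steps, a martingale, and verifies the equivalent identity $E\{S_n; C\} = E\{S_{n-1}; C\}$ for the $\mathcal{F}_{n-1}$-measurable event $C = \{S_{n-1} > 0\}$ by exact enumeration of all $2^n$ equally likely sign paths:

```python
from itertools import product

# Exact check of the martingale property of the simple symmetric random
# walk S_n with i.i.d. +/-1 steps.  The property E{S_n | F_{n-1}} = S_{n-1}
# is equivalent to E{S_n; C} = E{S_{n-1}; C} for every C in F_{n-1}; we
# test the F_{n-1}-measurable event C = {S_{n-1} > 0}.  All 2**n sign
# paths are equally likely, so comparing integer path sums is exact.

def martingale_check(n):
    lhs = 0  # 2**n * E{S_n; C}
    rhs = 0  # 2**n * E{S_{n-1}; C}
    for steps in product((1, -1), repeat=n):
        s_prev = sum(steps[: n - 1])   # S_{n-1}
        s_n = s_prev + steps[-1]       # S_n
        if s_prev > 0:                 # indicator of the event C
            lhs += s_n
            rhs += s_prev
    return lhs, rhs

lhs, rhs = martingale_check(8)
```

The two sums agree exactly, since the increment $X_n$ is independent of $C$ and has mean zero; any other event determined by the first $n-1$ steps gives the same agreement.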
It is called a submartingale or supermartingale if in the above relation $=$ is replaced by $\ge$ or $\le$, respectively.

If the reference filtration is not explicitly mentioned, it is usually understood from the context what it should be. In many situations we simply take the reference filtration to be the one generated by the sequence itself.

REMARK 2.2. If $X$ is a martingale with respect to some filtration $\mathcal{F}_*$, then it is also a martingale with respect to its own filtration $\mathcal{F}_*^X$. The definition of a supermartingale is rather unfortunate, for then $EX_n \le EX_{n-1}$, that is, the sequence of expected values is decreasing.

Intuitively a martingale represents a fair game. The defining property of a martingale can be written as
$$E\{X_n - X_{n-1}|\mathcal{F}_{n-1}\} = 0.$$
We can regard the difference $X_n - X_{n-1}$ as a gambler's gain at the $n$th play of a game. The above equation says that even after applying all the information and knowledge he has accumulated up to time $n-1$, his expected gain is still zero.

EXAMPLE 2.3. If $\{X_n\}$ is a sequence of independent and integrable random variables with mean zero, then the partial sum $S_n = X_1 + X_2 + \cdots + X_n$ is a martingale.

EXAMPLE 2.4. (1) If $\{X_n\}$ is a martingale and $f: \mathbb{R} \to \mathbb{R}$ a convex function such that each $f(X_n)$ is integrable, then $\{f(X_n)\}$ is a submartingale. (2) If $\{X_n\}$ is a submartingale and $f: \mathbb{R} \to \mathbb{R}$ a convex and increasing function such that each $f(X_n)$ is integrable, then $\{f(X_n)\}$ is a submartingale.

EXAMPLE 2.5. (Martingale transform) Let $M$ be a martingale and $Z = \{Z_n\}$ adapted. Suppose further that each $Z_n$ is uniformly bounded. Define
$$N_n = \sum_{i=1}^{n} Z_{i-1}(M_i - M_{i-1}).$$
Then $N$ is a martingale. Note that in the general summand, the multiplicative factor $Z_{i-1}$ is measurable with respect to the left time point of the martingale difference $M_i - M_{i-1}$.

EXAMPLE 2.6. (Reverse martingale) Suppose that $\{X_n\}$ is a sequence of i.i.d. integrable random variables and
$$Z_n = \frac{X_1 + X_2 + \cdots + X_n}{n}.$$
Then $\{Z_n\}$ is a reverse martingale, which means that
$$E\{Z_n|\mathcal{G}_{n+1}\} = Z_{n+1}, \qquad \text{where } \mathcal{G}_n = \sigma\{S_i, \, i \ge n\}.$$
If we write $W_n = Z_{-n}$ and $\mathcal{H}_n = \mathcal{G}_{-n}$ for $n = -1, -2, \ldots$, then we can write
$$E\{W_n|\mathcal{H}_{n-1}\} = W_{n-1}.$$

3. Basic properties

Suppose that $X = \{X_n\}$ is a submartingale. We have $EX_n \le EX_{n+1}$; thus on average a submartingale is increasing. On the other hand, for a martingale we have $EX_n = EX_{n+1}$, which shows that it is purely noise. The Doob decomposition theorem claims that a submartingale can be decomposed uniquely into the sum of a martingale and an increasing sequence. The following example shows that the uniqueness question for the decomposition is not an entirely trivial matter.

EXAMPLE 3.1. Consider $S_n$, the sum of a sequence of independent and square integrable random variables with mean zero. Then $S_n^2$ is a submartingale. We have obviously
$$S_n^2 = (S_n^2 - ES_n^2) + ES_n^2.$$
From $ES_n^2 = \sum_{i=1}^{n} EX_i^2$ it is easy to verify that $M_n = S_n^2 - ES_n^2$ is a martingale. Therefore the above identity is a decomposition of the submartingale $S_n^2$ into the sum of a martingale and an increasing process. On the other hand,
$$S_n^2 = 2 \sum_{i=1}^{n} S_{i-1} X_i + \sum_{i=1}^{n} X_i^2.$$
The first sum on the right side is a martingale (in the form of a martingale transform). Thus the above gives another such decomposition. In general these two decompositions are different.

The above example shows that in order to have a unique decomposition we need further restrictions.

DEFINITION 3.2. A sequence $Z = \{Z_n\}$ is said to be predictable with respect to a filtration $\mathcal{F}_*$ if $Z_n \in \mathcal{F}_{n-1}$ for all $n \ge 1$.

THEOREM 3.3. (Doob decomposition) Let $X$ be a submartingale. Then there is a unique increasing predictable process $Z$ with $Z_0 = 0$ and a martingale $M$ such that $X_n = M_n + Z_n$.

PROOF. Suppose that we have such a decomposition. Conditioning on $\mathcal{F}_{n-1}$ in
$$Z_n - Z_{n-1} = X_n - X_{n-1} - (M_n - M_{n-1}),$$
we have
$$Z_n - Z_{n-1} = E\{X_n - X_{n-1}|\mathcal{F}_{n-1}\}.$$
The right side is nonnegative if $X$ is a submartingale.
This shows that a Doob decomposition, if it exists, must be unique. It is now clear how to proceed to show the existence. We define
$$Z_n = \sum_{i=1}^{n} E\{X_i - X_{i-1}|\mathcal{F}_{i-1}\}.$$
It is clear that $Z$ is increasing, predictable, and $Z_0 = 0$. Define $M_n = X_n - Z_n$. We have
$$M_n - M_{n-1} = X_n - X_{n-1} - E\{X_n - X_{n-1}|\mathcal{F}_{n-1}\},$$
from which it is easy to see that $E\{M_n - M_{n-1}|\mathcal{F}_{n-1}\} = 0$. This shows that $M$ is a martingale.

4. Optional sampling theorem

The concept of a martingale derives much of its power from the optional sampling theorem we will discuss in this section. Most interesting events concerning a random sequence occur not at a fixed constant time, but at a random time.
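The constructive proof can be made concrete. In the setting of Example 3.1 with i.i.d. $\pm 1$ steps we have $EX_i^2 = 1$, so the predictable increasing part is $Z_n = n$ and the martingale part is $M_n = S_n^2 - n$. The sketch below (an illustration, not part of the text) verifies $E\{M_n|\mathcal{F}_{n-1}\} = M_{n-1}$ exactly by averaging over the two equally likely continuations of every length-$(n-1)$ prefix:

```python
from itertools import product

# Doob decomposition of Example 3.1 for the simple symmetric random walk:
# S_n^2 is a submartingale, its predictable increasing part is Z_n = n
# (since E{X_i^2} = 1), and M_n = S_n^2 - n is the martingale part.
# We verify E{M_n | F_{n-1}} = M_{n-1} exactly: for every length-(n-1)
# prefix of +/-1 steps, the sum of M_n over the two equally likely next
# steps must equal 2 * M_{n-1}.

def doob_check(n):
    for prefix in product((1, -1), repeat=n - 1):
        s = sum(prefix)                    # S_{n-1}
        m_prev = s * s - (n - 1)           # M_{n-1}
        # sum of M_n over the two continuations of this prefix
        m_sum = ((s + 1) ** 2 - n) + ((s - 1) ** 2 - n)
        if m_sum != 2 * m_prev:
            return False
    return True

ok = doob_check(8)
```

The check passes for every prefix because $(s+1)^2 + (s-1)^2 = 2s^2 + 2$, which is precisely the algebraic content of the martingale property of $M$.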