Stochastic Calculus: an Introduction with Applications

Total Pages: 16

File Type: pdf, Size: 1020 KB

Stochastic Calculus: An Introduction with Applications
Gregory F. Lawler
© 2014 Gregory F. Lawler. All rights reserved.

Contents

1 Martingales in discrete time
  1.1 Conditional expectation
  1.2 Martingales
  1.3 Optional sampling theorem
  1.4 Martingale convergence theorem
  1.5 Square integrable martingales
  1.6 Integrals with respect to random walk
  1.7 A maximal inequality
  1.8 Exercises
2 Brownian motion
  2.1 Limits of sums of independent variables
  2.2 Multivariate normal distribution
  2.3 Limits of random walks
  2.4 Brownian motion
  2.5 Construction of Brownian motion
  2.6 Understanding Brownian motion
    2.6.1 Brownian motion as a continuous martingale
    2.6.2 Brownian motion as a Markov process
    2.6.3 Brownian motion as a Gaussian process
    2.6.4 Brownian motion as a self-similar process
  2.7 Computations for Brownian motion
  2.8 Quadratic variation
  2.9 Multidimensional Brownian motion
  2.10 Heat equation and generator
    2.10.1 One dimension
    2.10.2 Expected value at a future time
  2.11 Exercises
3 Stochastic integration
  3.1 What is stochastic calculus?
  3.2 Stochastic integral
    3.2.1 Review of Riemann integration
    3.2.2 Integration of simple processes
    3.2.3 Integration of continuous processes
  3.3 Itô's formula
  3.4 More versions of Itô's formula
  3.5 Diffusions
  3.6 Covariation and the product rule
  3.7 Several Brownian motions
  3.8 Exercises
4 More stochastic calculus
  4.1 Martingales and local martingales
  4.2 An example: the Bessel process
  4.3 Feynman-Kac formula
  4.4 Binomial approximations
  4.5 Continuous martingales
  4.6 Exercises
5 Change of measure and Girsanov theorem
  5.1 Absolutely continuous measures
  5.2 Giving drift to a Brownian motion
  5.3 Girsanov theorem
  5.4 Black-Scholes formula
  5.5 Martingale approach to Black-Scholes equation
  5.6 Martingale approach to pricing
  5.7 Martingale representation theorem
  5.8 Exercises
6 Jump processes
  6.1 Lévy processes
  6.2 Poisson process
  6.3 Compound Poisson process
  6.4 Integration with respect to compound Poisson processes
  6.5 Change of measure
  6.6 Generalized Poisson processes I
  6.7 Generalized Poisson processes II
  6.8 The Lévy-Khinchin characterization
  6.9 Integration with respect to Lévy processes
  6.10 Symmetric stable process
  6.11 Exercises
7 Fractional Brownian motion
  7.1 Definition
  7.2 Stochastic integral representation
  7.3 Simulation
8 Harmonic functions
  8.1 Dirichlet problem
  8.2 h-processes
  8.3 Time changes
  8.4 Complex Brownian motion
  8.5 Exercises

Introductory comments

This is an introduction to stochastic calculus. I will assume that the reader has had a post-calculus course in probability or statistics. For much of these notes this is all that is needed, but to have a deep understanding of the subject, one needs to know measure theory and probability from that perspective. My goal is to include discussion for readers with that background as well. I also assume that the reader can write simple computer programs either using a language like C++ or by using software such as Matlab or Mathematica.

More advanced mathematical comments that can be skipped by the reader will be indented with a different font. Comments here will assume that the reader knows the language of measure-theoretic probability theory.

We will discuss some of the applications to finance, but our main focus will be on the mathematics. Financial mathematics is a kind of applied mathematics, and I will start by making some comments about the use of mathematics in "the real world". The general paradigm is as follows.
• A mathematical model is made of some real world phenomenon. Usually this model requires simplification and does not precisely describe the real situation. One hopes that models are robust in the sense that if the model is not very far from reality then its predictions will also be close to accurate.
• The model consists of mathematical assumptions about the real world.
• Given these assumptions, one does mathematical analysis to see what they imply. The analysis can be of various types:
  - Rigorous derivations of consequences.
  - Derivations that are plausible but are not mathematically rigorous.
  - Approximations of the mathematical model which lead to tractable calculations.
  - Numerical calculations on a computer.
  - For models that include randomness, Monte Carlo simulations using a (pseudo) random number generator.
• If the mathematical analysis is successful, it will make predictions about the real world. These are then checked.
• If the predictions are bad, there are two possible reasons:
  - The mathematical analysis was faulty.
  - The model does not sufficiently reflect reality.

The user of mathematics does not always need to know the details of the mathematical analysis, but it is critical to understand the assumptions in the model. No matter how precise or sophisticated the analysis is, if the assumptions are bad, one cannot expect a good answer.

Chapter 1. Martingales in discrete time

A martingale is a mathematical model of a fair game. To understand the definition, we need to define conditional expectation. The concept of conditional expectation will permeate this book.

1.1 Conditional expectation

If X is a random variable, then its expectation, E[X], can be thought of as the best guess for X given no information about the result of the trial. A conditional expectation can be considered as the best guess given some but not total information.
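As a quick numerical illustration of "best guess" in the mean-square sense, here is a minimal Python sketch (Python rather than the C++/Matlab/Mathematica mentioned above is our choice, and the toy model Y = X + Z is our assumption, not from the notes): the constant guess E[Y] incurs mean squared error Var(Y), while guessing E[Y | X] = X cuts the error down to Var(Z).

```python
import random

random.seed(0)
n = 100_000

# Toy model (our assumption): Y = X + Z with X, Z independent N(0,1).
# With no information the best guess for Y is E[Y]; given X it is X.
xs = [random.gauss(0, 1) for _ in range(n)]
ys = [x + random.gauss(0, 1) for x in xs]

ey = sum(ys) / n  # estimate of E[Y]

mse_blind = sum((y - ey) ** 2 for y in ys) / n            # guess E[Y], ignore X
mse_cond = sum((y - x) ** 2 for x, y in zip(xs, ys)) / n  # guess E[Y | X] = X

print(mse_blind, mse_cond)  # roughly 2 vs 1: using X halves the error
```

The gap between the two errors is exactly the variance explained by the information in X, which is the intuition the formal definition below makes precise.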
Let X_1, X_2, ... be random variables which we think of as a time series with the data arriving one at a time. At time n we have viewed the values X_1, ..., X_n. If Y is another random variable, then E(Y | X_1, ..., X_n) is the best guess for Y given X_1, ..., X_n. We will assume that Y is an integrable random variable, which means E[|Y|] < ∞. To save some space we will write F_n for "the information contained in X_1, ..., X_n" and E[Y | F_n] for E[Y | X_1, ..., X_n]. We view F_0 as no information. The best guess should satisfy the following properties.

• If we have no information, then the best guess is the expected value. In other words, E[Y | F_0] = E[Y].
• The conditional expectation E[Y | F_n] should only use the information available at time n. In other words, it should be a function of X_1, ..., X_n,

  E[Y | F_n] = φ(X_1, ..., X_n).

We say that E[Y | F_n] is F_n-measurable.

The definitions in the last paragraph are certainly vague. We can use measure theory to be precise. We assume that the random variables Y, X_1, X_2, ... are defined on a probability space (Ω, F, P). Here F is a σ-algebra or σ-field of subsets of Ω, that is, a collection of subsets satisfying

• ∅ ∈ F;
• A ∈ F implies that Ω \ A ∈ F;
• A_1, A_2, ... ∈ F implies that ∪_{n=1}^∞ A_n ∈ F.

The information F_n is the smallest sub-σ-algebra G of F such that X_1, ..., X_n are G-measurable. The latter statement means that for all t ∈ ℝ, the event {X_j ≤ t} ∈ F_n. The "no information" σ-algebra F_0 is the trivial σ-algebra containing only ∅ and Ω.

The definition of conditional expectation is a little tricky, so let us try to motivate it by considering an example from undergraduate probability courses. Suppose that (X, Y) have a joint density

  f(x, y), 0 < x, y < ∞,

with marginal densities

  f(x) = ∫_{−∞}^{∞} f(x, y) dy,   g(y) = ∫_{−∞}^{∞} f(x, y) dx.

The conditional density f(y|x) is defined by

  f(y|x) = f(x, y) / f(x).

This is well defined provided that f(x) > 0, and if f(x) = 0, then x is an "impossible" value for X to take.
We can write

  E[Y | X = x] = ∫_{−∞}^{∞} y f(y | x) dy.

We can use this as the definition of conditional expectation in this case,

  E[Y | X] = ∫_{−∞}^{∞} y f(y | X) dy = [∫_{−∞}^{∞} y f(X, y) dy] / f(X).

Note that E[Y | X] is a random variable which is determined by the value of the random variable X. Since it is a random variable, we can take its expectation:

  E[E[Y | X]] = ∫_{−∞}^{∞} E[Y | X = x] f(x) dx
              = ∫_{−∞}^{∞} ∫_{−∞}^{∞} y f(y | x) dy f(x) dx
              = ∫_{−∞}^{∞} ∫_{−∞}^{∞} y f(x, y) dy dx
              = E[Y].

This calculation illustrates a basic property of conditional expectation. Suppose we are interested in the value of a random variable Y and we are going to be given data X_1, ..., X_n. Once we observe the data, we make our best prediction E[Y | X_1, ..., X_n]. If we average our best prediction given X_1, ..., X_n over all the possible values of X_1, ..., X_n, we get the best prediction of Y. In other words,

  E[Y] = E[E[Y | F_n]].

More generally, suppose that A is an F_n-measurable event, that is to say, if we observe the data X_1, ..., X_n, then we know whether or not A has occurred.
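The identity E[E[Y | X]] = E[Y] (the tower property) is easy to check by simulation. A minimal Python sketch, where the particular model — X uniform on (0, 1] and, given X = x, Y exponential with mean x — is our own choice for illustration:

```python
import random

random.seed(1)
n = 200_000

# Given X = x, Y ~ Exponential with mean x, so E[Y | X] = X and
# E[E[Y | X]] = E[X] = 1/2, which must equal E[Y].
total_y = 0.0
total_cond = 0.0
for _ in range(n):
    x = 1.0 - random.random()        # uniform on (0, 1], avoids division by 0
    y = random.expovariate(1.0 / x)  # rate 1/x, so mean of Y given X = x is x
    total_y += y
    total_cond += x                  # the best prediction E[Y | X] = X

print(total_y / n, total_cond / n)  # both close to 0.5
```

Averaging the raw Y samples and averaging the predictions E[Y | X] give the same number, which is exactly the averaging-over-the-data statement in the text.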
Recommended publications
  • Persistent Random Walks. II. Functional Scaling Limits
Persistent random walks. II. Functional Scaling Limits. Peggy Cénac¹, Arnaud Le Ny², Basile de Loynes³, Yoann Offret¹. ¹Institut de Mathématiques de Bourgogne (IMB) - UMR CNRS 5584, Université de Bourgogne Franche-Comté, 21000 Dijon, France. ²Laboratoire d'Analyse et de Mathématiques Appliquées (LAMA) - UMR CNRS 8050, Université Paris Est Créteil, 94010 Créteil Cedex, France. ³Ensai - Université de Bretagne-Loire, Campus de Ker-Lann, Rue Blaise Pascal, BP 37203, 35172 BRUZ cedex, France. Abstract: We give a complete and unified description – under some stability assumptions – of the functional scaling limits associated with some persistent random walks for which the recurrent or transient type is studied in [1]. As a result, we highlight a phase transition phenomenon with respect to the memory. It turns out that the limit process is either Markovian or not according to – to put it in a nutshell – the rate of decrease of the distribution tails corresponding to the persistent times. In the memoryless situation, the limits are classical strictly stable Lévy processes of infinite variation. However, we point out that the description of the critical Cauchy case fills a lacuna, even in the closely related context of Directionally Reinforced Random Walks (DRRWs), for which it has not been considered yet. Besides, we need to introduce some relevant generalized drift – extending the classical one – in order to study the critical case, but also the situation when the limit is no longer Markovian. It appears to be, in full generality, a drift in mean for the Persistent Random Walk (PRW). The limit processes, keeping some memory – given by some variable length Markov chain – of the underlying PRW, are called arcsine Lamperti anomalous diffusions due to their marginal distributions, which are computed explicitly here.
  • MAST31801 Mathematical Finance II, Spring 2019. Problem Sheet 8 (4-8.4.2019)
UH | Master program Mathematics and Statistics | MAST31801 Mathematical Finance II, Spring 2019. Problem sheet 8 (4-8.4.2019)

1. Show that a local martingale (X_t : t ≥ 0) in a filtration F which is bounded from below is a supermartingale. Hint: use localization together with the Fatou lemma.

2. A non-negative local martingale X_t ≥ 0 is a martingale if and only if E(X_t) = E(X_0) for all t.

3. Recall the Itô formula: for f(x, s) ∈ C^{2,1} and a continuous semimartingale X_t with quadratic covariation ⟨X⟩_t,

  f(X_t, t) − f(X_0, 0) − (1/2) ∫_0^t ∂²f/∂x² (X_s, s) d⟨X⟩_s − ∫_0^t ∂f/∂s (X_s, s) ds = ∫_0^t ∂f/∂x (X_s, s) dX_s,

where on the right side we have the Itô integral. We can also understand the Itô integral as the limit in probability of the Riemann sums

  Σ_{t_k ∈ Π_n} ∂f/∂x (X(t_{k−1}), t_{k−1}) [X(t_k ∧ t) − X(t_{k−1} ∧ t)]

among a sequence of partitions Π_n with Δ(Π_n) → 0. Recalling that the Brownian motion W_t has quadratic variation ⟨W⟩_t = t, write the following Itô integrals explicitly by using the Itô formula:

  (a) ∫_0^t W_s^n dW_s, n ∈ ℕ
  (b) ∫_0^t exp(σW_s) σ dW_s
  (c) ∫_0^t exp(σW_s − σ²s/2) σ dW_s
  (d) ∫_0^t sin(σW_s) dW_s
  (e) ∫_0^t cos(σW_s) dW_s

4. These Itô integrals as stochastic processes are semimartingales. Compute the quadratic variation of the Itô integrals above. Hint: recall that the finite variation part of a semimartingale has zero quadratic variation, and the quadratic variation is bilinear.
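As a numerical sanity check on exercise 3(a) with n = 1: Itô's formula with f(x) = x²/2 gives ∫_0^t W_s dW_s = (W_t² − t)/2, and the left-point Riemann sums above converge to it. A minimal Python sketch (not part of the problem sheet; the step size and seed are our choices):

```python
import random

random.seed(2)
t, n = 1.0, 10_000
dt = t / n

# One Brownian path on [0, t] and its left-point Riemann sum for ∫ W dW.
w = 0.0        # current value of the path
ito_sum = 0.0  # accumulated Riemann sum
for _ in range(n):
    dw = random.gauss(0.0, dt ** 0.5)
    ito_sum += w * dw  # integrand evaluated at the LEFT endpoint (Itô convention)
    w += dw

# Itô's formula with f(x) = x²/2 predicts ∫₀ᵗ W dW = (W_t² − t)/2.
print(ito_sum, (w * w - t) / 2)  # the two numbers nearly agree
```

The −t/2 correction is exactly the quadratic-variation term ⟨W⟩_t = t from exercise 3; with the left-endpoint convention the discrepancy between the two printed numbers shrinks like the fluctuation of Σ(ΔW)² around t.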
  • Perturbed Bessel Processes Séminaire De Probabilités (Strasbourg), Tome 32 (1998), P
SÉMINAIRE DE PROBABILITÉS (STRASBOURG). R. A. Doney, Jonathan Warren, Marc Yor. Perturbed Bessel processes. Séminaire de probabilités (Strasbourg), tome 32 (1998), p. 237-249. <http://www.numdam.org/item?id=SPS_1998__32__237_0> © Springer-Verlag, Berlin Heidelberg New York, 1998, all rights reserved. Article digitized as part of the program "Numérisation de documents anciens mathématiques", http://www.numdam.org/

Perturbed Bessel Processes. R. A. Doney, J. Warren, and M. Yor. There has been some interest in the literature in Brownian motion perturbed at its maximum; that is, a process (X_t; t ≥ 0) satisfying

  X_t = B_t + α M_t,   (0.1)

where M_t = sup_{s≤t} X_s and (B_t; t ≥ 0) is Brownian motion issuing from zero. The parameter α must satisfy α < 1. For example, arc-sine laws and Ray-Knight theorems have been obtained for this process; see Carmona, Petit and Yor [3], Werner [16], and Doney [7]. Our initial aim was to identify a process which could be considered as the process X conditioned to stay positive. This new process behaves like the Bessel process of dimension three except when at its maximum, and we call it a perturbed three-dimensional Bessel process. We establish Ray-Knight theorems for the local times of this process, up to a first passage time and up to infinity (see Theorem 2.3), and observe that these descriptions coincide with those of the local times of two processes that have been considered in Yor [18].
  • Lectures on Lévy Processes, Stochastic Calculus and Financial
Lectures on Lévy Processes, Stochastic Calculus and Financial Applications, Ovronnaz September 2005. David Applebaum, Probability and Statistics Department, University of Sheffield, Hicks Building, Hounsfield Road, Sheffield, England, S3 7RH. e-mail: D.Applebaum@sheffield.ac.uk

Introduction. A Lévy process is essentially a stochastic process with stationary and independent increments. The basic theory was developed, principally by Paul Lévy, in the 1930s. In the past 15 years there has been a renaissance of interest and a plethora of books, articles and conferences. Why? There are both theoretical and practical reasons.

Theoretical:
• There are many interesting examples - Brownian motion, simple and compound Poisson processes, α-stable processes, subordinated processes, financial processes, relativistic process, Riemann zeta process...
• Lévy processes are the simplest generic class of processes which have (a.s.) continuous paths interspersed with random jumps of arbitrary size occurring at random times.
• Lévy processes comprise a natural subclass of semimartingales and of Markov processes of Feller type.
• Noise. Lévy processes are a good model of "noise" in random dynamical systems.

  Input + Noise = Output

Attempts to describe this differentially lead to stochastic calculus. A large class of Markov processes can be built as solutions of stochastic differential equations driven by Lévy noise. Lévy-driven stochastic partial differential equations are beginning to be studied with some intensity.
• Robust structure. Most applications utilise Lévy processes taking values in Euclidean space, but this can be replaced by a Hilbert space, a Banach space (these are important for spdes), a locally compact group, a manifold. Quantised versions are non-commutative Lévy processes on quantum groups.
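Of the examples listed above, the compound Poisson process is the easiest jump process to simulate: jumps of i.i.d. size arrive at the times of a Poisson process. A minimal Python sketch, where the rate, horizon, and uniform jump law are our own illustrative choices (not from the lectures):

```python
import random

random.seed(3)

def compound_poisson(t, lam, jump):
    """One sample of X_t = sum of the jumps that arrived by time t.

    Jump times come from exponential inter-arrival times with rate `lam`;
    `jump` draws a single i.i.d. jump size.
    """
    n = 0
    s = random.expovariate(lam)
    while s <= t:          # count Poisson arrivals in [0, t]
        n += 1
        s += random.expovariate(lam)
    return sum(jump() for _ in range(n))

# Sanity check: E[X_t] = lam * t * E[J]; here lam = 2, t = 1, J ~ U(0,1)
# so E[J] = 1/2 and the sample mean should be near 1.0.
samples = [compound_poisson(1.0, 2.0, random.random) for _ in range(50_000)]
print(sum(samples) / len(samples))  # close to 1.0
```

The same arrival-time loop, with the deterministic drift and a Brownian part added, is the standard way to simulate the jump-diffusion members of the Lévy family.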
  • Lectures on Multiparameter Processes
    Lecture Notes on Multiparameter Processes: Ecole Polytechnique Fed´ erale´ de Lausanne, Switzerland Davar Khoshnevisan Department of Mathematics University of Utah Salt Lake City, UT 84112–0090 [email protected] http://www.math.utah.edu/˜davar April–June 2001 ii Contents Preface vi 1 Examples from Markov chains 1 2 Examples from Percolation on Trees and Brownian Motion 7 3ProvingLevy’s´ Theorem and Introducing Martingales 13 4 Preliminaries on Ortho-Martingales 19 5 Ortho-Martingales and Intersections of Walks and Brownian Motion 25 6 Intersections of Brownian Motion, Multiparameter Martingales 35 7 Capacity, Energy and Dimension 43 8 Frostman’s Theorem, Hausdorff Dimension and Brownian Motion 49 9 Potential Theory of Brownian Motion and Stable Processes 55 10 Brownian Sheet and Kahane’s Problem 65 Bibliography 71 iii iv Preface These are the notes for a one-semester course based on ten lectures given at the Ecole Polytechnique Fed´ erale´ de Lausanne, April–June 2001. My goal has been to illustrate, in some detail, some of the salient features of the theory of multiparameter processes and in particular, Cairoli’s theory of multiparameter mar- tingales. In order to get to the heart of the matter, and develop a kind of intuition at the same time, I have chosen the simplest topics of random walks, Brownian motions, etc. to highlight the methods. The full theory can be found in Multi-Parameter Processes: An Introduction to Random Fields (henceforth, referred to as MPP) which is to be published by Springer-Verlag, although these lectures also contain material not covered in the mentioned book.
  • A New Approach for Dynamic Stochastic Fractal Search with Fuzzy Logic for Parameter Adaptation
fractal and fractional. Article. A New Approach for Dynamic Stochastic Fractal Search with Fuzzy Logic for Parameter Adaptation. Marylu L. Lagunes, Oscar Castillo *, Fevrier Valdez, Jose Soria and Patricia Melin. Tijuana Institute of Technology, Calzada Tecnologico s/n, Fracc. Tomas Aquino, Tijuana 22379, Mexico; [email protected] (M.L.L.); [email protected] (F.V.); [email protected] (J.S.); [email protected] (P.M.). * Correspondence: [email protected]

Abstract: Stochastic fractal search (SFS) is a novel method inspired by the process of stochastic growth in nature and the use of the fractal mathematical concept. Considering the chaotic stochastic diffusion property, an improved dynamic stochastic fractal search (DSFS) optimization algorithm is presented. The DSFS algorithm was tested with benchmark functions, such as the multimodal, hybrid, and composite functions, to evaluate the performance of the algorithm with dynamic parameter adaptation with type-1 and type-2 fuzzy inference models. The main contribution of the article is the utilization of fuzzy logic in the adaptation of the diffusion parameter in a dynamic fashion. This parameter is in charge of creating new fractal particles, and the diversity and iteration are the input information used in the fuzzy system to control the values of diffusion.

Keywords: fractal search; fuzzy logic; parameter adaptation; CEC 2017

Citation: Lagunes, M.L.; Castillo, O.; Valdez, F.; Soria, J.; Melin, P. A New Approach for Dynamic Stochastic Fractal Search with Fuzzy Logic for Parameter Adaptation. Fractal Fract. 2021, 5, 33.

1. Introduction. Metaheuristic algorithms are applied to optimization problems due to their characteristics that help in searching for the global optimum, while simple heuristics are mostly
  • Introduction to Stochastic Processes - Lecture Notes (With 33 Illustrations)
Introduction to Stochastic Processes - Lecture Notes (with 33 illustrations). Gordan Žitković, Department of Mathematics, The University of Texas at Austin

Contents

1 Probability review
  1.1 Random variables
  1.2 Countable sets
  1.3 Discrete random variables
  1.4 Expectation
  1.5 Events and probability
  1.6 Dependence and independence
  1.7 Conditional probability
  1.8 Examples
2 Mathematica in 15 min
  2.1 Basic Syntax
  2.2 Numerical Approximation
  2.3 Expression Manipulation
  2.4 Lists and Functions
  2.5 Linear Algebra
  2.6 Predefined Constants
  2.7 Calculus
  2.8 Solving Equations
  2.9 Graphics
  2.10 Probability Distributions and Simulation
  2.11 Help Commands
  2.12 Common Mistakes
3 Stochastic Processes
  3.1 The canonical probability space
  3.2 Constructing the Random Walk
  3.3 Simulation
    3.3.1 Random number generation
    3.3.2 Simulation of Random Variables
  3.4 Monte Carlo Integration
4 The Simple Random Walk
  4.1 Construction
  4.2 The maximum
5 Generating functions
  5.1 Definition and first properties
  5.2 Convolution and moments
  5.3 Random sums and Wald's identity
6 Random walks - advanced methods
  6.1 Stopping times
  6.2 Wald's identity II
  6.3 The distribution of the first hitting time T1
    6.3.1 A recursive formula
    6.3.2 Generating-function approach
  • Patterns in Random Walks and Brownian Motion
Patterns in Random Walks and Brownian Motion. Jim Pitman and Wenpin Tang

Abstract. We ask if it is possible to find some particular continuous paths of unit length in linear Brownian motion. Beginning with a discrete version of the problem, we derive the asymptotics of the expected waiting time for several interesting patterns. These suggest corresponding results on the existence/non-existence of continuous paths embedded in Brownian motion. With further effort we are able to prove some of these existence and non-existence results by various stochastic analysis arguments. A list of open problems is presented. AMS 2010 Mathematics Subject Classification: 60C05, 60G17, 60J65.

1 Introduction and Main Results

We are interested in the question of embedding some continuous-time stochastic processes (Z_u; 0 ≤ u ≤ 1) into a Brownian path (B_t; t ≥ 0), without time-change or scaling, just by a random translation of origin in space-time. More precisely, we ask the following:

Question 1. Given some distribution of a process Z with continuous paths, does there exist a random time T such that (B_{T+u} − B_T; 0 ≤ u ≤ 1) has the same distribution as (Z_u; 0 ≤ u ≤ 1)? The question of whether external randomization is allowed to construct such a random time T is of no importance here. In fact, we can simply ignore Brownian

J. Pitman • W. Tang, Department of Statistics, University of California, 367 Evans Hall, Berkeley, CA 94720-3860, USA. e-mail: [email protected]; [email protected] © Springer International Publishing Switzerland 2015
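The discrete waiting-time question the abstract starts from can be played with directly. For fair coin flips, a classical fact is that the expected wait is 6 flips for the pattern HH but only 4 for HT, because HH overlaps with itself and HT does not. A minimal Python sketch (our own warm-up, not taken from the paper) checks this by simulation:

```python
import random

def wait_for(pattern, rng):
    """Flip a fair coin until `pattern` (e.g. "HH") appears; return the flip count."""
    recent = ""
    flips = 0
    while not recent.endswith(pattern):
        recent += rng.choice("HT")
        flips += 1
    return flips

rng = random.Random(5)
n = 50_000
avg_hh = sum(wait_for("HH", rng) for _ in range(n)) / n
avg_ht = sum(wait_for("HT", rng) for _ in range(n)) / n
print(avg_hh, avg_ht)  # close to 6 and 4: self-overlap makes HH slower
```

The asymmetry between the two patterns is the simplest instance of the overlap effects that drive the waiting-time asymptotics studied in the paper.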
  • Derivatives of Self-Intersection Local Times
Derivatives of self-intersection local times. Jay Rosen*, Department of Mathematics, College of Staten Island, CUNY, Staten Island, NY 10314. e-mail: [email protected]

Summary. We show that the renormalized self-intersection local time γ_t(x) for both the Brownian motion and symmetric stable process in R¹ is differentiable in the spatial variable and that γ'_t(0) can be characterized as the continuous process of zero quadratic variation in the decomposition of a natural Dirichlet process. This Dirichlet process is the potential of a random Schwartz distribution. Analogous results for fractional derivatives of self-intersection local times in R¹ and R² are also discussed.

1 Introduction

In their study of the intrinsic Brownian local time sheet and stochastic area integrals for Brownian motion [14, 15, 16], Rogers and Walsh were led to analyze the functional

  A(t, B_t) = ∫_0^t 1_{[0,∞)}(B_t − B_s) ds   (1)

where B_t is a 1-dimensional Brownian motion. They showed that A(t, B_t) is not a semimartingale, and in fact showed that

  A(t, B_t) − ∫_0^t L_s^{B_s} dB_s   (2)

has finite non-zero 4/3-variation. Here L_s^x is the local time at x, which is formally L_s^x = ∫_0^s δ(B_r − x) dr, where δ(x) is Dirac's 'δ-function'. A formal application of Itô's lemma, using (d/dx) 1_{[0,∞)}(x) = δ(x) and (d²/dx²) 1_{[0,∞)}(x) = δ'(x), yields

  A(t, B_t) − ∫_0^t L_s^{B_s} dB_s = t/2 + ∫_0^t ∫_0^s δ'(B_s − B_r) dr ds   (3)

* This research was supported, in part, by grants from the National Science Foundation and PSC-CUNY.
  • Skorohod Stochastic Integration with Respect to Non-Adapted Processes on Wiener Space Nicolas Privault
Skorohod stochastic integration with respect to non-adapted processes on Wiener space. Nicolas Privault. Équipe d'Analyse et Probabilités, Université d'Évry-Val d'Essonne, Boulevard des Coquibus, 91025 Évry Cedex, France. e-mail: [email protected]

Abstract. We define a Skorohod-type anticipative stochastic integral that extends the Itô integral not only with respect to the Wiener process, but also with respect to a wide class of stochastic processes satisfying certain homogeneity and smoothness conditions, without requirements relative to filtrations such as adaptedness. Using this integral, a change of variable formula that extends the classical and Skorohod Itô formulas is obtained.

Key words: Itô calculus, Malliavin calculus, Skorohod integral. Mathematics Subject Classification (1991): 60H05, 60H07.

1 Introduction

The Skorohod integral, defined by creation on Fock space, is an extension of the Itô integral with respect to the Wiener or Poisson processes, depending on the probabilistic interpretation chosen for the Fock space. This means that it coincides with the Itô integral on square-integrable adapted processes. It can also extend the stochastic integral with respect to certain normal martingales, cf. [10], but it always acts with respect to the underlying process relative to a given probabilistic interpretation. The Skorohod integral is also linked to the Stratonovich, forward and backward integrals, and allows one to extend the Itô formula to a class of Skorohod integral processes which contains non-adapted processes that have a certain structure. In this paper we introduce an extended Skorohod stochastic integral on Wiener space that can act with respect to processes other than Brownian motion.
  • 1 Stochastic Processes and Their Classification
1 STOCHASTIC PROCESSES AND THEIR CLASSIFICATION

1.1 DEFINITION AND EXAMPLES

Definition 1. A stochastic process or random process is a collection of random variables ordered by an index set.

☛ Example 1. Random variables X_0, X_1, X_2, ... form a stochastic process ordered by the discrete index set {0, 1, 2, ...}. Notation: {X_n : n = 0, 1, 2, ...}.

☛ Example 2. Stochastic process {Y_t : t ≥ 0} with continuous index set {t : t ≥ 0}.

The indices n and t are often referred to as "time", so that X_n is a discrete-time process and Y_t is a continuous-time process. Convention: the index set of a stochastic process is always infinite. The range (possible values) of the random variables in a stochastic process is called the state space of the process. We consider both discrete-state and continuous-state processes. Further examples:

☛ Example 3. {X_n : n = 0, 1, 2, ...}, where the state space of X_n is {0, 1, 2, 3, 4} representing which of four types of transactions a person submits to an on-line database service, and time n corresponds to the number of transactions submitted.

☛ Example 4. {X_n : n = 0, 1, 2, ...}, where the state space of X_n is {1, 2} representing whether an electronic component is acceptable or defective, and time n corresponds to the number of components produced.

☛ Example 5. {Y_t : t ≥ 0}, where the state space of Y_t is {0, 1, 2, ...} representing the number of accidents that have occurred at an intersection, and time t corresponds to weeks.

☛ Example 6. {Y_t : t ≥ 0}, where the state space of Y_t is {0, 1, 2, ..., s} representing the number of copies of a software product in inventory, and time t corresponds to days.
  • Introduction to Lévy Processes
Introduction to Lévy processes. Graduate lecture, 22 January 2004. Matthias Winkel, Departmental lecturer (Institute of Actuaries and Aon lecturer in Statistics)

1. Random walks and continuous-time limits
2. Examples
3. Classification and construction of Lévy processes
4. Examples
5. Poisson point processes and simulation

1. Random walks and continuous-time limits

Definition 1. Let Y_k, k ≥ 1, be i.i.d. Then

  S_n = Σ_{k=1}^n Y_k, n ∈ ℕ,

is called a random walk. [Figure: a sample random walk path.] Random walks have stationary and independent increments Y_k = S_k − S_{k−1}, k ≥ 1. Stationarity means the Y_k have identical distribution.

Definition 2. A right-continuous process X_t, t ∈ ℝ_+, with stationary independent increments is called a Lévy process.

What are S_n, n ≥ 0, and X_t, t ≥ 0? Stochastic processes; mathematical objects, well-defined, with many nice properties that can be studied. If you don't like this, think of a model for a stock price evolving with time. There are also many other applications. If you worry about negative values, think of logs of prices.

What does Definition 2 mean? Increments X_{t_k} − X_{t_{k−1}}, k = 1, ..., n, are independent and X_{t_k} − X_{t_{k−1}} ~ X_{t_k − t_{k−1}}, k = 1, ..., n, for all 0 = t_0 < ... < t_n. Right-continuity refers to the sample paths (realisations).

Can we obtain Lévy processes from random walks? What happens e.g. if we let the time unit tend to zero, i.e. take a more and more remote look at our random walk? If we focus at a fixed time, 1 say, and speed up the process so as to make n steps per time unit, we know what happens; the answer is given by the Central Limit Theorem:

Theorem 1 (Lindeberg-Lévy). If σ² = Var(Y_1) < ∞, then

  (S_n − E(S_n)) / √n → Z ~ N(0, σ²) in distribution, as n → ∞.
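Theorem 1 is easy to probe numerically: the scaled walk (S_n − E(S_n))/√n should have mean near 0 and variance near σ². A minimal Python sketch (uniform steps on (−1, 1), so σ² = 1/3, is our own choice of example):

```python
import random

random.seed(4)

# Scaled random walk (S_n - E[S_n]) / sqrt(n) for centred uniform steps,
# where E[S_n] = 0 and sigma^2 = Var(Y_1) = 1/3.
def scaled_walk(n):
    s = sum(random.uniform(-1.0, 1.0) for _ in range(n))
    return s / n ** 0.5

vals = [scaled_walk(1_000) for _ in range(20_000)]
mean = sum(vals) / len(vals)
var = sum(v * v for v in vals) / len(vals) - mean * mean
print(mean, var)  # mean near 0, variance near 1/3
```

The same rescaling with time running continuously (n steps per unit of time) is exactly the passage from random walks to Brownian motion, the prototypical Lévy process of the lecture.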