Stochastic Calculus

Xi Geng

Contents

1 Review of probability theory
   1.1 Conditional expectations
   1.2 Uniform integrability
   1.3 The Borel-Cantelli lemma
   1.4 The law of large numbers and the central limit theorem
   1.5 Weak convergence of probability measures
2 Generalities on continuous time stochastic processes
   2.1 Basic definitions
   2.2 Construction of stochastic processes: Kolmogorov's extension theorem
   2.3 Kolmogorov's continuity theorem
   2.4 Filtrations and stopping times
3 Continuous time martingales
   3.1 Basic properties and the martingale transform: discrete stochastic integration
   3.2 The martingale convergence theorems
   3.3 Doob's optional sampling theorems
   3.4 Doob's martingale inequalities
   3.5 The continuous time analogue
   3.6 The Doob-Meyer decomposition
4 Brownian motion
   4.1 Basic properties
   4.2 The strong Markov property and the reflection principle
   4.3 The Skorokhod embedding theorem and the Donsker invariance principle
   4.4 Passage time distributions
   4.5 Sample path properties: an overview
5 Stochastic integration
   5.1 L²-bounded martingales and the bracket process
   5.2 Stochastic integrals
   5.3 Itô's formula
   5.4 The Burkholder-Davis-Gundy inequalities
   5.5 Lévy's characterization of Brownian motion
   5.6 Continuous local martingales as time-changed Brownian motions
   5.7 Continuous local martingales as Itô's integrals
   5.8 The Cameron-Martin-Girsanov transformation
   5.9 Local times for continuous semimartingales
6 Stochastic differential equations
   6.1 Itô's theory of stochastic differential equations
   6.2 Different notions of solutions and the Yamada-Watanabe theorem
   6.3 Existence of weak solutions
   6.4 Pathwise uniqueness results
   6.5 A comparison theorem for one-dimensional SDEs
   6.6 Two useful techniques: transformation of drift and time-change
   6.7 Examples: linear SDEs and Bessel processes
   6.8 Itô's diffusion processes and partial differential equations

1 Review of probability theory

In this section, we review several aspects of probability theory that are important for our study. Most proofs are contained in standard textbooks and hence will be omitted.

Recall that a probability space is a triple (Ω, F, P) which consists of a non-empty set Ω, a σ-algebra F over Ω, and a probability measure P on F. A random variable over (Ω, F, P) is a real-valued F-measurable function. For 1 ≤ p < ∞, L^p(Ω, F, P) denotes the Banach space of (equivalence classes of) random variables X satisfying E[|X|^p] < ∞.

The following are a few conventions that we will be using in the course.

• A P-null set is a subset of some F-measurable set with zero probability.

• A property is said to hold almost surely (a.s.), or with probability one, if it holds outside an F-measurable set with zero probability; equivalently, the set on which it does not hold is a P-null set.

1.1 Conditional expectations

A fundamental concept in the study of martingale theory and stochastic calculus is the conditional expectation.

Definition 1.1.
Let (Ω, F, P) be a probability space, and let G be a sub-σ-algebra of F. Given an integrable random variable X ∈ L¹(Ω, F, P), the conditional expectation of X given G is the unique G-measurable and integrable random variable Y such that

   ∫_A Y dP = ∫_A X dP   for all A ∈ G.   (1.1)

It is denoted by E[X|G].

The existence of the conditional expectation is a standard application of the Radon-Nikodym theorem, and the uniqueness follows from an easy measure-theoretic argument.

Here we recall a geometric construction of the conditional expectation. We start with the Hilbert space L²(Ω, F, P). Since G ⊆ F, the Hilbert space L²(Ω, G, P) can be regarded as a closed subspace of L²(Ω, F, P). Given X ∈ L²(Ω, F, P), let Y be the orthogonal projection of X onto L²(Ω, G, P). Then Y satisfies the characterizing property (1.1) of the conditional expectation. If X is a non-negative integrable random variable, we consider X_n = X ∧ n ∈ L²(Ω, F, P) and let Y_n be the orthogonal projection of X_n onto L²(Ω, G, P). It follows that Y_n is non-negative and increasing in n. Its pointwise limit, denoted by Y, is a non-negative, G-measurable and integrable random variable which satisfies (1.1). The general case is treated by writing X = X⁺ − X⁻ and using linearity. We leave it as an exercise to provide the details of the construction.

The conditional expectation satisfies the following basic properties.

(1) X ↦ E[X|G] is linear.
(2) If X ≤ Y, then E[X|G] ≤ E[Y|G]. In particular, |E[X|G]| ≤ E[|X| | G].
(3) If X and ZX are both integrable, and Z is G-measurable, then E[ZX|G] = Z·E[X|G].
(4) If G₁ ⊂ G₂ are sub-σ-algebras of F, then E[E[X|G₂] | G₁] = E[X|G₁].
(5) If X and G are independent, then E[X|G] = E[X].

In addition, we have the following Jensen's inequality: if φ is a convex function on ℝ, and both X and φ(X) are integrable, then

   φ(E[X|G]) ≤ E[φ(X)|G].   (1.2)

Applying this to the function φ(x) = |x|^p for p ≥ 1, we see immediately that the conditional expectation is a contraction operator on L^p(Ω, F, P).

The convergence theorems (the monotone convergence theorem, Fatou's lemma, and the dominated convergence theorem) also hold for the conditional expectation, stated in the obvious way.

For every measurable set A ∈ F, P(A|G) := E[1_A | G] is the conditional probability of A given G. However, P(A|G) is only defined up to a null set which depends on A, and in general there does not exist a universal null set outside which A ↦ P(A|G) can be regarded as a probability measure. The resolution of this issue leads to the notion of regular conditional probabilities, which play an important role in the study of Markov processes and stochastic differential equations.

Definition 1.2. Let (Ω, F, P) be a probability space and let G be a sub-σ-algebra of F. A system {p(ω, A) : ω ∈ Ω, A ∈ F} is called a regular conditional probability given G if it satisfies the following conditions:

(1) for every ω ∈ Ω, A ↦ p(ω, A) is a probability measure on (Ω, F);
(2) for every A ∈ F, ω ↦ p(ω, A) is G-measurable;
(3) for every A ∈ F and B ∈ G,

   P(A ∩ B) = ∫_B p(ω, A) P(dω).

The third condition tells us that for every A ∈ F, p(·, A) is a version of P(A|G). It follows that for every integrable random variable X, ω ↦ ∫_Ω X(ω′) p(ω, dω′) is almost surely well defined and is a version of E[X|G].
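In the discrete setting where G is generated by a finite partition of Ω, everything above can be checked by hand: E[X|G] is constant on each cell of the partition and equals the P-weighted average of X over that cell, and p(ω, ·) = P(· | cell containing ω) is a regular conditional probability in the sense of Definition 1.2. The following Python sketch is purely illustrative (the uniform sample space, the two partitions and all variable names are our own choices, not part of the notes, and NumPy is assumed to be available); it verifies the defining property (1.1), the tower property (4) and Jensen's inequality (1.2) with φ(x) = |x|.

```python
import numpy as np

rng = np.random.default_rng(0)

# Finite sample space Omega = {0, 1, ..., n-1} with the uniform probability measure.
n = 12
P = np.full(n, 1.0 / n)          # P({omega}) for each omega
X = rng.normal(size=n)           # a random variable X : Omega -> R

# Each sigma-algebra is generated by a partition of Omega; labels[omega] = index of its cell.
labels_G2 = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3])   # finer partition G2
labels_G1 = labels_G2 // 2                                    # coarser partition, G1 contained in G2

def cond_exp(X, P, labels):
    """E[X | sigma(partition)]: on each cell A the value is (1/P(A)) * sum_{omega in A} X(omega) P({omega})."""
    Y = np.empty_like(X)
    for cell in np.unique(labels):
        A = labels == cell
        Y[A] = np.sum(X[A] * P[A]) / np.sum(P[A])
    return Y

Y = cond_exp(X, P, labels_G2)

# Defining property (1.1): the integrals of Y and of X agree over every cell, hence over every A in G2.
for cell in np.unique(labels_G2):
    A = labels_G2 == cell
    assert np.isclose(np.sum(Y[A] * P[A]), np.sum(X[A] * P[A]))

# Tower property (4): E[ E[X|G2] | G1 ] = E[X|G1].
assert np.allclose(cond_exp(Y, P, labels_G1), cond_exp(X, P, labels_G1))

# Jensen's inequality (1.2) with phi(x) = |x|: |E[X|G2]| <= E[|X| | G2] pointwise.
assert np.all(np.abs(Y) <= cond_exp(np.abs(X), P, labels_G2) + 1e-12)

print("Properties (1.1), (4) and (1.2) verified on the finite example.")
```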
In many situations, we are interested in the conditional distribution of a random variable taking values in another measurable space. Suppose that {p(ω, A) : ω ∈ Ω, A ∈ F} is a regular conditional probability on (Ω, F, P) given G. Let X be a measurable map from (Ω, F) to some measurable space (E, ℰ). We can define

   Q(ω, Γ) = p(ω, X⁻¹(Γ)),   ω ∈ Ω, Γ ∈ ℰ.

Then the system {Q(ω, Γ) : ω ∈ Ω, Γ ∈ ℰ} satisfies:

(1)′ for every ω ∈ Ω, Γ ↦ Q(ω, Γ) is a probability measure on (E, ℰ);
(2)′ for every Γ ∈ ℰ, ω ↦ Q(ω, Γ) is G-measurable;
(3)′ for every Γ ∈ ℰ and B ∈ G,

   P({X ∈ Γ} ∩ B) = ∫_B Q(ω, Γ) P(dω).

In particular, we see that Q(·, Γ) is a version of P(X ∈ Γ | G) for every Γ ∈ ℰ. The system {Q(ω, Γ) : ω ∈ Ω, Γ ∈ ℰ} is called a regular conditional distribution of X given G.

It is a deep result in measure theory that if E is a complete and separable metric space and ℰ is the σ-algebra generated by the open sets of E, then a regular conditional distribution of X given G exists. In particular, if (Ω, F) is a complete and separable metric space equipped with its Borel σ-algebra, by considering the identity map we know that a regular conditional probability given G exists. In this course we will mainly be interested in complete and separable metric spaces.

Sometimes we also consider conditional expectations given some random variable X. Let X be as before, and let P^X be the law of X on (E, ℰ). Similarly to Definition 1.2, a system {p(x, A) : x ∈ E, A ∈ F} is called a regular conditional probability given X if it satisfies:

(1)″ for every x ∈ E, A ↦ p(x, A) is a probability measure on (Ω, F);
(2)″ for every A ∈ F, x ↦ p(x, A) is ℰ-measurable;
(3)″ for every A ∈ F and Γ ∈ ℰ,

   P(A ∩ {X ∈ Γ}) = ∫_Γ p(x, A) P^X(dx).

In particular, p(·, A) gives a version of P(A | X = ·). If (Ω, F) is a complete and separable metric space, then a regular conditional probability given X exists.

1.2 Uniform integrability

Now we review an important concept which is closely related to the study of L¹-convergence.

Definition 1.3. A family {X_t : t ∈ T} of integrable random variables over a probability space (Ω, F, P) is called uniformly integrable if

   lim_{λ→∞} sup_{t∈T} ∫_{{|X_t| > λ}} |X_t| dP = 0.

Uniform integrability can be characterized by the following two properties.
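Before turning to that characterization, here is a quick numerical illustration of Definition 1.3 (a sketch of our own, assuming NumPy; the two example families are not taken from the notes). We estimate sup_{t} ∫_{|X_t|>λ} |X_t| dP by Monte Carlo for increasing λ. For an identically distributed family the suprema decay to 0, whereas for the family X_n = n·1_{U < 1/n} with U uniform on (0, 1), which is bounded in L¹ but not uniformly integrable, the tail integral equals 1 whenever n > λ.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000                      # Monte Carlo sample size per random variable

def tail_integral(samples, lam):
    """Monte Carlo estimate of E[ |X| 1_{|X| > lam} ]."""
    x = np.abs(samples)
    return np.mean(np.where(x > lam, x, 0.0))

# Family 1: X_t ~ N(0,1) for every t -- identically distributed, hence uniformly integrable.
normal_family = [rng.normal(size=N) for _ in range(5)]

# Family 2: X_n = n on the event {U < 1/n} and 0 otherwise.  E[|X_n|] = 1 for every n, so the
# infinite family {X_n : n >= 1} is bounded in L^1 but not uniformly integrable: the tail
# integral E[|X_n| 1_{|X_n| > lam}] equals 1 whenever n > lam.  We sample a few of its members.
def spike(n, size):
    U = rng.uniform(size=size)
    return n * (U < 1.0 / n)

spike_family = [spike(n, N) for n in (2, 5, 10, 50, 200)]

for lam in (1.0, 5.0, 20.0, 100.0):
    ui_sup = max(tail_integral(x, lam) for x in normal_family)
    non_ui_sup = max(tail_integral(x, lam) for x in spike_family)
    print(f"lambda = {lam:6.1f}   UI family sup ~ {ui_sup:.4f}   non-UI family sup ~ {non_ui_sup:.4f}")
# The UI suprema decay to 0 as lambda grows; the non-UI column stays near 1 for every lambda
# below the largest n sampled (and would stay at 1 for all lambda over the whole family {X_n : n >= 1}).
```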