Variance and Covariance Processes

Prakash Balachandran
Department of Mathematics, Duke University
May 26, 2008

These notes are based on Durrett's Stochastic Calculus, Revuz and Yor's Continuous Martingales and Brownian Motion, Karatzas and Shreve's Brownian Motion and Stochastic Calculus, and Kuo's Introduction to Stochastic Calculus.

1 Motivation

In this section, we motivate the construction of variance and covariance processes for continuous local martingales, which, as we shall see, is crucial in the construction of stochastic integrals with respect to continuous local martingales.

Unless otherwise specified, we fix a Brownian motion $B_t$ and a filtration $\{\mathcal{F}_t\}$ such that:

1. For each $t$, $B_t$ is $\mathcal{F}_t$-measurable.
2. For any $s \le t$, the random variable $B_t - B_s$ is independent of the $\sigma$-field $\mathcal{F}_s$.

Recall that for any Brownian motion, $\langle B \rangle_t = t$, where $\langle B \rangle_t$ is the quadratic variation of $B_t$. This immediately implies condition (2) in the following definition.

Definition: Define $L^2_{ad}([a,b] \times \Omega)$ to be the space of all stochastic processes $f(t,\omega)$, $a \le t \le b$, $\omega \in \Omega$, such that:

1. $f(t,\omega)$ is adapted to the filtration $\{\mathcal{F}_t\}$.
2. $\int_a^b E[|f(t)|^2]\,dt = \int_a^b E[|f(t)|^2]\,d\langle B \rangle_t < \infty$.

Also recall that when constructing a theory of integration with respect to a Brownian motion, we begin by constructing the stochastic integral
$$\int_a^b f(t)\,dB_t$$
for $f \in L^2_{ad}([a,b] \times \Omega)$.

Now, we want a more general formalism for integrating a class of processes with respect to a general martingale, one that reduces to the above in the case of Brownian motion.

Definition: Let $\mathcal{G}_t$ be a right-continuous filtration. We let $\mathcal{L}$ denote the collection of all jointly measurable stochastic processes $X(t,\omega)$ such that:

1. $X_t$ is adapted with respect to $\mathcal{G}_t$.
2. Almost all sample paths of $X_t$ are left continuous.

Furthermore, we define $\mathcal{P}$ to be the smallest $\sigma$-field of subsets of $[a,b] \times \Omega$ with respect to which all the stochastic processes in $\mathcal{L}$ are measurable. A stochastic process $Y(t,\omega)$ that is $\mathcal{P}$-measurable is said to be predictable.

The motivation for the definition of a predictable process comes from the following argument: if $Y_t$ is a predictable process, then almost all of its values at time $t$ can be determined [with certainty] from the information available strictly before time $t$, since left continuity of $Y_t$ implies that for almost every $\omega \in \Omega$ and any sequence $t_n \uparrow t$ as $n \to \infty$:
$$\lim_{n \to \infty} Y_{t_n}(\omega) = Y_t(\omega).$$

Now, we have the following theorem [a version of which we shall prove in the next section for continuous local martingales]:

Theorem 1 (Doob-Meyer) Let $M_t$, $a \le t \le b$, be a right-continuous, square integrable martingale with left-hand limits. Then there exists a unique decomposition
$$(M_t)^2 = L_t + A_t, \qquad a \le t \le b,$$
where $L_t$ is a right-continuous martingale with left-hand limits, and $A_t$ is a predictable, right-continuous, increasing process such that $A_a \equiv 0$ and $E[A_t] < \infty$ for all $a \le t \le b$.

The above theorem certainly applies to the square integrable process $B_t$.

Claim 1 In the case $M_t = B_t$ in Doob-Meyer, $A_t = \langle B \rangle_t = t$.

Proof of Claim 1: Without loss of generality, we may take $a = 0$ and $b = t_0$. Define $P_t = (B_t)^2 - t$. Then, for $0 \le s \le t \le t_0$:
$$E[(B_t)^2 \mid \mathcal{F}_s] = E[(B_t - B_s + B_s)^2 \mid \mathcal{F}_s] = E[(B_t - B_s)^2 + 2B_s(B_t - B_s) + (B_s)^2 \mid \mathcal{F}_s]$$
$$= E[(B_t - B_s)^2] + 2B_s E[B_t - B_s] + (B_s)^2 = t - s + (B_s)^2,$$
so that
$$E[P_t \mid \mathcal{F}_s] = E[(B_t)^2 - t \mid \mathcal{F}_s] = (B_s)^2 - s = P_s.$$
Thus $P_t = (B_t)^2 - t$ is a martingale, so that $(B_t)^2 = P_t + t$. Clearly, $t$ satisfies all the conditions that $A_t$ must satisfy in Doob-Meyer, so by uniqueness of $A_t$, $A_t = t = \langle B \rangle_t$. $\square$
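As a quick numerical illustration of the identity $\langle B \rangle_t = t$ used above, here is a minimal sketch (not part of the original notes; the function names, discretization, and parameter choices are my own) that simulates Brownian increments and sums their squares along progressively finer partitions of $[0,t]$. The sums concentrate around $t$ as the mesh shrinks.

```python
import numpy as np

def quadratic_variation(t=1.0, n_steps=2**14, seed=0):
    """Sum of squared increments of a simulated Brownian path on [0, t]."""
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    # Brownian increments: independent N(0, dt) random variables.
    dB = rng.normal(0.0, np.sqrt(dt), size=n_steps)
    return np.sum(dB**2)

if __name__ == "__main__":
    t = 1.0
    for k in (6, 10, 14):
        # Average over a few independent paths to reduce sampling noise.
        qv = np.mean([quadratic_variation(t, 2**k, seed=s) for s in range(20)])
        print(f"mesh 2^-{k}: average sum of squared increments = {qv:.4f} (target {t})")
```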
So, another way of viewing the integral with respect to the martingale $M_t$ and the filtration $\mathcal{G}_t$ is the following. First, we look for the unique process $\langle M \rangle_t$ (guaranteed by Doob-Meyer) such that
$$L_t = (M_t)^2 - \langle M \rangle_t$$
is a martingale. Then, we make the following definition.

Definition: Define $L^2_{pred}([a,b]_{\langle M \rangle} \times \Omega)$ to be the space of all stochastic processes $f(t,\omega)$, $a \le t \le b$, $\omega \in \Omega$, such that:

1. $f(t,\omega)$ is predictable with respect to $\{\mathcal{G}_t\}$.
2. $\int_a^b E[|f(t)|^2]\,d\langle M \rangle_t < \infty$.

Then, we proceed to construct the integral
$$\int_a^b f(t)\,dM_t$$
for $f \in L^2_{pred}([a,b]_{\langle M \rangle} \times \Omega)$.

It is clear that in the case $M_t = B_t$ and $\mathcal{G}_t = \mathcal{F}_t$, the above formulation coincides with the original construction of the stochastic integral with respect to $B_t$ reviewed at the beginning of this section.

For right-continuous, square integrable martingales $M_t$ with left-hand limits, at least, this procedure works. In the case where $M_t$ is a continuous local martingale, we do the same thing. However, it is not immediately clear:

1. that we have a version of Doob-Meyer for continuous local martingales;
2. how the construction of the integral is affected by the stopping times $T_n$ that reduce $M_t$, if at all.

In the next section, we deal with the first problem. Then, we proceed to remedy the second.

2 Variance and Covariance Processes

We take $\mathcal{L}$ and $\mathcal{P}$ as defined in Section 1.

Theorem 2 If $X_t$ is a continuous local martingale, then we define the variance process $\langle X \rangle_t$ to be the unique continuous predictable increasing process $A_t$ that has $A_0 \equiv 0$ and makes $X_t^2 - A_t$ a local martingale.

Definition: If $X$ and $Y$ are two continuous local martingales, we let
$$\langle X, Y \rangle_t = \frac{1}{4}\left(\langle X + Y \rangle_t - \langle X - Y \rangle_t\right).$$
We call $\langle X, Y \rangle_t$ the covariance of $X$ and $Y$.

Based on the discussion in the first section, it is clear why we are interested in variance processes. It is convenient to define covariance processes since they are very useful and have quite nice properties, such as:

Theorem 3 $\langle \cdot, \cdot \rangle_t$ is a symmetric bilinear form on the class of continuous local martingales.

We might prove it this time around; if not, hopefully next time. Two questions I am still pondering are:

1. Can you turn this into an inner product?
2. If so, how can you characterize the class of processes that is the completion of this space?
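To make the polarization definition concrete, here is a small sketch (my own illustration, not from the notes; all names and parameters are hypothetical) computing the discrete-time analogue of $\langle X, Y \rangle$ for two simulated paths. The polarization formula $\frac{1}{4}(\langle X+Y \rangle - \langle X-Y \rangle)$, applied to sums of squared increments, agrees exactly with the sum of products of increments $\sum \Delta X \, \Delta Y$; this algebraic identity is what lies behind the symmetry and bilinearity asserted in Theorem 3.

```python
import numpy as np

def discrete_qv(path):
    """Sum of squared increments: discrete analogue of the variance process."""
    return np.sum(np.diff(path) ** 2)

def discrete_cov_polarization(x, y):
    """Discrete covariation via the polarization identity (1/4)(<X+Y> - <X-Y>)."""
    return 0.25 * (discrete_qv(x + y) - discrete_qv(x - y))

def discrete_cov_direct(x, y):
    """Discrete covariation as the sum of products of increments."""
    return np.sum(np.diff(x) * np.diff(y))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, dt = 10_000, 1e-4  # n * dt = 1, so paths live on [0, 1]
    # Two correlated Brownian paths built from common and independent increments.
    dW1 = rng.normal(0, np.sqrt(dt), n)
    dW2 = 0.6 * dW1 + 0.8 * rng.normal(0, np.sqrt(dt), n)
    X = np.concatenate([[0.0], np.cumsum(dW1)])
    Y = np.concatenate([[0.0], np.cumsum(dW2)])
    print(discrete_cov_polarization(X, Y))  # equals the direct sum exactly
    print(discrete_cov_direct(X, Y))        # approx 0.6 * t = 0.6 here
```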
The proof of Theorem 2 is long, but it is instructive to go through it, since it develops techniques that will be useful later.

In order to proceed, recall that any predictable discrete-time martingale is constant [why?]. There is an analogous result in continuous time, and we use it to prove the uniqueness statement in Theorem 2:

Theorem 4 Any continuous local martingale $X_t$ that is predictable and locally of bounded variation is constant (in time).

Proof: By subtracting $X_0$, we may assume that $X_0 \equiv 0$. Thus, we wish to show that $X_t \equiv 0$ for all $t > 0$ almost surely.

Let $V_t(\omega) = \sup_{\pi \in \Pi^t} T_\pi(\omega)$ be the variation of $X_s(\omega)$ on $[0,t]$, where $\Pi^t$ denotes the set of all (finite) partitions of $[0,t]$, $\pi = \{0 = t_0 < t_1 < \cdots < t_N = t\}$, and where for a given partition of this sort,
$$T_\pi(\omega) = \sum_{m=1}^{N} |X_{t_m}(\omega) - X_{t_{m-1}}(\omega)|.$$

Lemma 1 For almost all $\omega \in \Omega$, $t \mapsto V_t(\omega)$ is continuous.

Proof of Lemma: First, notice that for any $\omega \in \Omega$, $t \mapsto V_t(\omega)$ is increasing: for $s < t$, $[0,s] \subset [0,t]$, so any finite partition $\pi = \{0 = t_0 < t_1 < \cdots < t_N = s\}$ of $[0,s]$ gives a finite partition $\pi' = \{0 = t_0 < t_1 < \cdots < t_N = s < t_{N+1} = t\}$ of $[0,t]$. Thus, for any finite partition $\pi$ of $[0,s]$, $T_\pi(\omega) \le T_{\pi'}(\omega)$, where $\pi'$ is a finite partition of $[0,t]$, so that
$$T_\pi(\omega) \le T_{\pi'}(\omega) \le \sup_{\pi \in \Pi^t} T_\pi(\omega) = V_t(\omega)$$
$$\Rightarrow \quad V_s(\omega) = \sup_{\pi \in \Pi^s} T_\pi(\omega) \le \sup_{\pi \in \Pi^t} T_\pi(\omega) = V_t(\omega).$$
Since $\omega$ was arbitrary, this is true for all $\omega \in \Omega$.

Thus, to show that $t \mapsto V_t$ is continuous a.s., it suffices to show that for almost all $\omega \in \Omega$, $t \mapsto V_t(\omega)$ has no discontinuities (of the first kind).

Claim 2 For any $\omega \in \Omega$, $V_u(\omega) = V_s(\omega) + V_s^u(\omega)$, where $V_s^u(\omega)$ is the variation of $X_t(\omega)$ on $[s,u]$.

Proof of Claim: Take any two partitions $\{s = t_0 < t_1 < \cdots < t_N = u\}$ and $\{0 = t_{-N'} < t_{-N'+1} < \cdots < t_0 = s\}$. Then:
$$\sum_{m=-N'+1}^{0} |X_{t_m}(\omega) - X_{t_{m-1}}(\omega)| + \sum_{m=1}^{N} |X_{t_m}(\omega) - X_{t_{m-1}}(\omega)| \le V_s(\omega) + V_s^u(\omega).$$
Now, the left-hand side is $T_\pi(\omega)$ for $\pi = \{0 = t_{-N'} < \cdots < t_N = u\}$. Thus, $V_u(\omega) \le V_s(\omega) + V_s^u(\omega)$.

For the other inequality, note that $\{0 = t_{-N'} < \cdots < t_0 = s < \cdots < t_N = u\}$ is a partition of $[0,u]$. Thus:
$$V_u(\omega) \ge \sum_{m=-N'+1}^{N} |X_{t_m}(\omega) - X_{t_{m-1}}(\omega)| = \sum_{m=-N'+1}^{0} |X_{t_m}(\omega) - X_{t_{m-1}}(\omega)| + \sum_{m=1}^{N} |X_{t_m}(\omega) - X_{t_{m-1}}(\omega)|.$$
Now, fixing one of the partitions on the right-hand side, we may take the supremum over the other, and then take the supremum over the remaining term. Thus $V_u(\omega) \ge V_s(\omega) + V_s^u(\omega)$, so that $V_u(\omega) = V_s(\omega) + V_s^u(\omega)$. $\square$

Now, by hypothesis, $X_s$ is of locally bounded variation. So, there exists a sequence of stopping times $T_n \uparrow \infty$ a.s. such that $X_s^{T_n}(\omega)$ is of bounded variation in time. Let
$$A = \{\omega \in \Omega : T_n(\omega) \uparrow \infty\}.$$
By definition, $P[A] = 1$. Now, let $\omega \in A$ be fixed, and suppose that $s \mapsto V_s(\omega)$ has a discontinuity at $t$. Choosing $n$ large enough so that $T_n(\omega) > t$, there exist $s_0 \le t < u_0$ such that $X_s(\omega)$ is of bounded variation on $[s_0, u_0]$.
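As an aside illustrating why Theorem 4 has content, the following sketch (my own, not part of the notes; names and parameters are illustrative only) computes the discrete variation $T_\pi$ of a simulated Brownian path along refining dyadic partitions and contrasts it with a smooth path. The Brownian variation grows without bound as the mesh shrinks, while the smooth path's variation stabilizes, consistent with the fact that a nonconstant continuous martingale cannot have paths of bounded variation.

```python
import numpy as np

def discrete_variation(path):
    """T_pi: sum of absolute increments of a path sampled along a partition."""
    return np.sum(np.abs(np.diff(path)))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    n_max = 2**16
    t_grid = np.linspace(0.0, 1.0, n_max + 1)
    # One Brownian path on the finest grid, and a smooth (bounded-variation) path.
    brownian = np.concatenate(
        [[0.0], np.cumsum(rng.normal(0, np.sqrt(1.0 / n_max), n_max))]
    )
    smooth = np.sin(2 * np.pi * t_grid)
    for k in (8, 12, 16):
        step = n_max // 2**k  # subsample to a dyadic partition with 2^k intervals
        print(f"2^{k} intervals: "
              f"BM variation = {discrete_variation(brownian[::step]):8.2f}, "
              f"smooth variation = {discrete_variation(smooth[::step]):.4f}")
```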