Variance and Covariance Processes
Prakash Balachandran
Department of Mathematics, Duke University
May 26, 2008

These notes are based on Durrett's Stochastic Calculus, Revuz and Yor's Continuous Martingales and Brownian Motion, Karatzas and Shreve's Brownian Motion and Stochastic Calculus, and Kuo's Introduction to Stochastic Calculus.

1 Motivation

In this section, we motivate the construction of variance and covariance processes for continuous local martingales, which, as we shall see, is crucial in the construction of stochastic integrals with respect to continuous local martingales.

In this section, unless otherwise specified, we fix a Brownian motion $B_t$ and a filtration $\{\mathcal{F}_t\}$ such that:

1. For each $t$, $B_t$ is $\mathcal{F}_t$-measurable.

2. For any $s \le t$, the random variable $B_t - B_s$ is independent of the $\sigma$-field $\mathcal{F}_s$.

Recall that for any Brownian motion, $\langle B \rangle_t = t$, where $\langle B \rangle_t$ is the quadratic variation of $B_t$. This immediately implies condition (2) in the following

Definition: Define $L^2_{ad}([a,b] \times \Omega)$ to be the space of all stochastic processes $f(t, \omega)$, $a \le t \le b$, $\omega \in \Omega$, such that:

1. $f(t, \omega)$ is adapted to the filtration $\{\mathcal{F}_t\}$.

2. $\int_a^b E[|f(t)|^2] \, dt = \int_a^b E[|f(t)|^2] \, d\langle B \rangle_t < \infty$.

Also recall that when constructing a theory of integration with respect to a Brownian motion, we begin by constructing the stochastic integral
$$\int_a^b f(t) \, dB_t$$
for $f \in L^2_{ad}([a,b] \times \Omega)$.

Now, we want a more general formalism for integrating a class of processes with respect to a generalized martingale which, in the case of Brownian motion, reduces to the above.

Definition: Let $\mathcal{G}_t$ be a right-continuous filtration. We let $\mathcal{L}$ denote the collection of all jointly measurable stochastic processes $X(t, \omega)$ such that:

1. $X_t$ is adapted with respect to $\mathcal{G}_t$.

2. Almost all sample paths of $X_t$ are left continuous.

Furthermore, we define $\mathcal{P}$ to be the smallest $\sigma$-field of subsets of $[a,b] \times \Omega$ with respect to which all the stochastic processes in $\mathcal{L}$ are measurable. A stochastic process $Y(t, \omega)$ that is $\mathcal{P}$-measurable is said to be predictable.
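As a quick numerical illustration of the fact recalled above that $\langle B \rangle_t = t$, here is a minimal sketch (my own, not part of the notes; it assumes NumPy is available, and all variable names are illustrative): the sum of squared increments of a simulated Brownian path over a fine partition of $[0, t]$ should be close to $t$.

```python
import numpy as np

# Illustrative sketch: the quadratic variation of Brownian motion on [0, t]
# is approximated by the sum of squared increments over a fine partition,
# and that sum should be close to t itself.
rng = np.random.default_rng(0)

t, n = 2.0, 200_000                        # horizon and number of partition points
dt = t / n
# Brownian increments: independent N(0, dt) random variables
dB = rng.normal(0.0, np.sqrt(dt), size=n)

quadratic_variation = np.sum(dB ** 2)      # sum of squared increments
print(quadratic_variation)                 # should be close to t = 2.0
```

Refining the partition (increasing `n`) tightens the approximation, which is exactly the sense in which $\langle B \rangle_t = t$ holds.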
The motivation for the definition of a predictable process comes from the following argument: if $Y_t$ is a predictable process, then almost all of its values at time $t$ can be determined [with certainty] from the information available strictly before time $t$, since left continuity of the process $Y_t$ implies that for almost every $\omega \in \Omega$ and any sequence $t_n \uparrow t$ as $n \to \infty$:
$$\lim_{n \to \infty} Y_{t_n}(\omega) = Y_t(\omega).$$

Now, we have the following theorem [a version of which we shall prove in the next section for continuous local martingales]:

Theorem 1 (Doob-Meyer): Let $M_t$, $a \le t \le b$, be a right-continuous, square-integrable martingale with left-hand limits. Then there exists a unique decomposition
$$M_t^2 = L_t + A_t, \quad a \le t \le b,$$
where $L_t$ is a right-continuous martingale with left-hand limits, and $A_t$ is a predictable, right-continuous, increasing process such that $A_a \equiv 0$ and $E[A_t] < \infty$ for all $a \le t \le b$.

The above theorem certainly applies to the square-integrable process $B_t$.

Claim 1: In the case $M_t = B_t$ in Doob-Meyer, $A_t = \langle B \rangle_t = t$.

Proof of Claim 1: WLOG, we may take $a = 0$ and $b = t_0$. Define $P_t = B_t^2 - t$. Then, for $0 \le s \le t \le t_0$:
$$E[B_t^2 \mid \mathcal{F}_s] = E[(B_t - B_s + B_s)^2 \mid \mathcal{F}_s] = E[(B_t - B_s)^2 + 2B_s(B_t - B_s) + B_s^2 \mid \mathcal{F}_s]$$
$$= E[(B_t - B_s)^2] + 2B_s E[B_t - B_s] + B_s^2 = t - s + B_s^2$$
$$\Rightarrow E[P_t \mid \mathcal{F}_s] = E[B_t^2 - t \mid \mathcal{F}_s] = B_s^2 - s = P_s.$$
Thus, $P_t = B_t^2 - t$ is a martingale, so that $B_t^2 = P_t + t$. Clearly, $t$ satisfies all the conditions that $A_t$ must satisfy in Doob-Meyer, so that by uniqueness of $A_t$, $A_t = t = \langle B \rangle_t$. ∎

So, another way of viewing the integral with respect to the martingale $M_t$ and the filtration $\mathcal{G}_t$ is the following: first, we look for the unique process $\langle M \rangle_t$ (guaranteed by Doob-Meyer) such that
$$L_t = M_t^2 - \langle M \rangle_t$$
is a martingale. Then, we make the

Definition: Define $L^2_{pred}([a,b]_{\langle M \rangle} \times \Omega)$ to be the space of all stochastic processes $f(t, \omega)$, $a \le t \le b$, $\omega \in \Omega$, such that:

1. $f(t, \omega)$ is predictable with respect to $\{\mathcal{G}_t\}$.

2. $\int_a^b E[|f(t)|^2] \, d\langle M \rangle_t < \infty$.
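Claim 1 can be sanity-checked by Monte Carlo. The following sketch (again my own illustration assuming NumPy, not part of the notes) verifies two numerical consequences of $P_t = B_t^2 - t$ being a martingale: $E[B_t^2] = t$, and $E[P_t - P_s] = 0$ for $s \le t$.

```python
import numpy as np

rng = np.random.default_rng(1)
s, t, n_paths = 1.0, 3.0, 1_000_000

# Sample B_s ~ N(0, s) and build B_t = B_s + (independent increment),
# mirroring the decomposition B_t = (B_t - B_s) + B_s used in the proof.
B_s = rng.normal(0.0, np.sqrt(s), size=n_paths)
B_t = B_s + rng.normal(0.0, np.sqrt(t - s), size=n_paths)

mean_Bt_sq = np.mean(B_t ** 2)                      # should be close to t = 3.0
mean_P_diff = np.mean((B_t**2 - t) - (B_s**2 - s))  # should be close to 0.0
```

The second quantity is the Monte Carlo estimate of $E[P_t - P_s]$; the martingale property actually gives the stronger conditional statement $E[P_t \mid \mathcal{F}_s] = P_s$, of which this is the unconditional consequence.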
Then, we proceed to construct the integral
$$\int_a^b f(t) \, dM_t$$
for $f \in L^2_{pred}([a,b]_{\langle M \rangle} \times \Omega)$.

It is clear that in the case $M_t = B_t$ and $\mathcal{G}_t = \mathcal{F}_t$, the above formulation coincides with the original construction of the stochastic integral with respect to $B_t$ reviewed at the beginning of this section.

For right-continuous, square-integrable martingales $M_t$ with left-hand limits, at least, this procedure works. In the case where $M_t$ is a continuous local martingale, we do the same thing. However, it is not immediately clear:

1. that we have a version of Doob-Meyer for continuous local martingales;

2. how the construction of the integral is affected by the stopping times $T_n$ that reduce $M_t$, if at all.

In the next section, we deal with the first problem. Then, we proceed to remedy the second.

2 Variance and Covariance Processes

We take $\mathcal{L}$ and $\mathcal{P}$ as defined in section 1.

Theorem 2: If $X_t$ is a continuous local martingale, then we may define the variance process $\langle X \rangle_t$ to be the unique continuous, predictable, increasing process $A_t$ that has $A_0 \equiv 0$ and makes $X_t^2 - A_t$ a local martingale.

Definition: If $X$ and $Y$ are two continuous local martingales, we let
$$\langle X, Y \rangle_t = \frac{1}{4}\left( \langle X + Y \rangle_t - \langle X - Y \rangle_t \right).$$
We call $\langle X, Y \rangle_t$ the covariance of $X$ and $Y$.

Based on the discussion in the first section, it is clear why we are interested in variance processes. It is convenient to define covariance processes since they are very useful and have quite nice properties, such as:

Theorem 3: $\langle \cdot, \cdot \rangle_t$ is a symmetric bilinear form on the class of continuous local martingales.

We might prove it this time around. If not, hopefully next time. Two questions I am still pondering are:

1. Can you turn this into an inner product?

2. If so, how can you characterize the class of processes that is the completion of this space?

The proof of Theorem 2 is long, but it is instructive to go through it, since it develops techniques that will be useful later. In order to proceed, recall that any predictable discrete-time martingale is constant [why?].
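To get a feel for the polarization formula defining $\langle X, Y \rangle_t$, the following sketch (illustrative, assuming NumPy; not from the notes) builds two correlated Brownian motions with correlation $\rho$, for which $\langle X, Y \rangle_t = \rho t$, and compares the polarized discrete quadratic variations against the direct sum of products of increments.

```python
import numpy as np

rng = np.random.default_rng(2)
t, n, rho = 1.5, 200_000, 0.6
dt = t / n

# Increments of two correlated Brownian motions: corr(dX, dY) = rho.
dW1 = rng.normal(0.0, np.sqrt(dt), size=n)
dW2 = rng.normal(0.0, np.sqrt(dt), size=n)
dX = dW1
dY = rho * dW1 + np.sqrt(1.0 - rho**2) * dW2

def qv(d):
    """Discrete quadratic variation: sum of squared increments."""
    return np.sum(d ** 2)

# Polarization: <X,Y> = ( <X+Y> - <X-Y> ) / 4 ...
cov_polarized = (qv(dX + dY) - qv(dX - dY)) / 4.0
# ... which, at the level of discrete sums, is exactly sum(dX * dY),
# and should approximate rho * t = 0.9.
cov_direct = np.sum(dX * dY)
```

The identity $(a+b)^2 - (a-b)^2 = 4ab$ makes the two discrete estimates agree term by term, which is precisely why the polarization definition is the natural one.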
There is a result analogous to this in continuous time, and we use it to prove the uniqueness statement in Theorem 2:

Theorem 4: Any continuous local martingale $X_t$ that is predictable and locally of bounded variation is constant (in time).

Proof: By subtracting $X_0$, we may assume that $X_0 \equiv 0$. Thus, we wish to show that $X_t \equiv 0$ for all $t > 0$ almost surely.

Let $V_t(\omega) = \sup_{\pi \in \Pi^t} T_\pi(\omega)$ be the variation of $X_s(\omega)$ on $[0, t]$, where $\Pi^t$ denotes the set of all (finite) partitions of $[0, t]$, $\pi = \{0 = t_0 < t_1 < \cdots < t_N = t\}$, and where for a given partition of this sort,
$$T_\pi(\omega) = \sum_{m=1}^{N} |X_{t_m}(\omega) - X_{t_{m-1}}(\omega)|.$$

Lemma 1: For almost all $\omega \in \Omega$, $t \mapsto V_t(\omega)$ is continuous.

Proof of Lemma: First, notice that for any $\omega \in \Omega$, $t \mapsto V_t(\omega)$ is increasing: for $s < t$, $[0, s] \subset [0, t]$, so that any finite partition $\pi = \{0 = t_0 < t_1 < \cdots < t_N = s\}$ of $[0, s]$ gives a finite partition $\pi' = \{0 = t_0 < t_1 < \cdots < t_N = s < t_{N+1} = t\}$ of $[0, t]$. Thus, for any finite partition $\pi$ of $[0, s]$,
$$T_\pi(\omega) \le T_{\pi'}(\omega) \le \sup_{\pi \in \Pi^t} T_\pi(\omega) = V_t(\omega)$$
$$\Rightarrow V_s(\omega) = \sup_{\pi \in \Pi^s} T_\pi(\omega) \le \sup_{\pi \in \Pi^t} T_\pi(\omega) = V_t(\omega).$$
Since $\omega$ was arbitrary, this is true for all $\omega \in \Omega$. Thus, to show that $t \mapsto V_t$ is continuous a.s., it suffices to show that for almost all $\omega \in \Omega$, $t \mapsto V_t(\omega)$ has no discontinuities (of the first kind).

Claim 2: For any $\omega \in \Omega$, $V_u(\omega) = V_s(\omega) + V_s^u(\omega)$, where $V_s^u(\omega)$ is the variation of $X_t(\omega)$ on $[s, u]$.

Proof of Claim: Take any two partitions $\{s = t_0 < t_1 < \cdots < t_N = u\}$ and $\{0 = t_{-N'} < t_{-N'+1} < \cdots < t_0 = s\}$. Then:
$$\sum_{m=-N'+1}^{0} |X_{t_m}(\omega) - X_{t_{m-1}}(\omega)| + \sum_{m=1}^{N} |X_{t_m}(\omega) - X_{t_{m-1}}(\omega)| \le V_s(\omega) + V_s^u(\omega).$$
Now, the LHS is $T_\pi(\omega)$ for $\pi = \{0 = t_{-N'} < \cdots < t_N = u\}$; since refining an arbitrary partition of $[0, u]$ by adding the point $s$ can only increase $T_\pi$, it follows that $V_u(\omega) \le V_s(\omega) + V_s^u(\omega)$.

For the other inequality, note that $\{0 = t_{-N'} < \cdots < t_0 = s < \cdots < t_N = u\}$ is a partition of $[0, u]$. Thus:
$$V_u(\omega) \ge \sum_{m=-N'+1}^{N} |X_{t_m}(\omega) - X_{t_{m-1}}(\omega)| = \sum_{m=-N'+1}^{0} |X_{t_m}(\omega) - X_{t_{m-1}}(\omega)| + \sum_{m=1}^{N} |X_{t_m}(\omega) - X_{t_{m-1}}(\omega)|.$$
Now, fixing one of the partitions on the RHS, we may take the supremum over the other sum, and then proceed to take the supremum over the remaining term. Thus $V_u(\omega) \ge V_s(\omega) + V_s^u(\omega)$, so that $V_u(\omega) = V_s(\omega) + V_s^u(\omega)$. ∎

Now, by hypothesis, $X_s$ is of locally bounded variation. So, there exists a sequence of stopping times $T_n \uparrow \infty$ a.s. such that $X_s^{T_n}(\omega)$ is of bounded variation in time. Let
$$A = \{\omega \in \Omega : T_n(\omega) \uparrow \infty\}.$$
By definition, $P[A] = 1$. Now, let $\omega \in A$ be fixed, and suppose that $s \mapsto V_s(\omega)$ has a discontinuity at $t$. Choosing $n$ large enough so that $T_n(\omega) > t$, there exist $s_0 \le t < u_0$ such that $X_s(\omega)$ is of bounded variation on $[s_0, u_0]$.
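The additivity $V_u(\omega) = V_s(\omega) + V_s^u(\omega)$ established in Claim 2 is easy to see for discretely sampled paths, where the supremum over partitions is attained by the finest partition through the sample points. A small sketch (my own illustration, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(3)

# A sample path observed at n+1 points; for the polygonal path through
# these points, the total variation is the sum of absolute increments.
n = 1_000
X = np.cumsum(rng.normal(size=n + 1))
k = 400                                    # index playing the role of time s

V_u = np.sum(np.abs(np.diff(X)))           # variation over the whole of [0, u]
V_s = np.sum(np.abs(np.diff(X[:k + 1])))   # variation over [0, s]
V_s_u = np.sum(np.abs(np.diff(X[k:])))     # variation over [s, u]
# V_u should equal V_s + V_s_u up to floating-point rounding.
```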