MASSACHUSETTS INSTITUTE OF TECHNOLOGY
6.265/15.070J Fall 2013
Lecture 14, 10/28/2013

Introduction to Ito calculus.

Content.
1. Spaces L_2, M_2, M_{2,c}.
2. Quadratic variation property of continuous martingales.

1 Doob-Kolmogorov inequality. Continuous time version

Let us establish the following continuous time version of the Doob-Kolmogorov inequality. We use RCLL as an abbreviation for "right-continuous with left limits".

Proposition 1. Suppose X_t ≥ 0 is an RCLL sub-martingale. Then for every T, x ≥ 0,

    P(sup_{0≤t≤T} X_t ≥ x) ≤ E[X_T^2] / x^2.

Proof. Consider any sequence of partitions Π_n = {0 = t_0^n < t_1^n < ... < t_{N_n}^n = T} such that Δ(Π_n) = max_j |t_{j+1}^n − t_j^n| → 0. Additionally, suppose that the sequence Π_n is nested, in the sense that for every n_1 ≤ n_2 every point of Π_{n_1} is also a point of Π_{n_2}. Let X_t^n = X_{t_j^n}, where j = max{i : t_i^n ≤ t}. Then X_t^n is a sub-martingale adapted to the same filtration (notice that this would not be the case if we instead chose the right ends of the intervals). By the discrete version of the D-K inequality (see previous lectures), we have

    P(max_{j≤N_n} X_{t_j^n} ≥ x) = P(sup_{t≤T} X_t^n ≥ x) ≤ E[X_T^2] / x^2.

By the RCLL property, we have sup_{t≤T} X_t^n → sup_{t≤T} X_t a.s. Indeed, fix ε > 0 and find t_0 = t_0(ω) such that X_{t_0} ≥ sup_{t≤T} X_t − ε. Find n large enough and j = j(n) such that t_{j(n)−1}^n ≤ t_0 ≤ t_{j(n)}^n. Then t_{j(n)}^n → t_0 as n → ∞. By right-continuity of X, X_{t_{j(n)}^n} → X_{t_0}. This implies that for sufficiently large n, sup_{t≤T} X_t^n ≥ X_{t_{j(n)}^n} ≥ X_{t_0} − 2ε, and the a.s. convergence is established. On the other hand, since the sequence Π_n is nested, the sequence sup_{t≤T} X_t^n is non-decreasing in n. By continuity of probabilities, we obtain P(sup_{t≤T} X_t^n ≥ x) → P(sup_{t≤T} X_t ≥ x), and the claim follows by passing to the limit in the bound above.
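As a quick numerical illustration of Proposition 1 (a sketch, not part of the notes' argument): take the nonnegative RCLL sub-martingale X_t = |B_t| for a standard Brownian motion B on [0, T], simulated on a uniform grid, and compare the empirical frequency of {sup_{t≤T} X_t ≥ x} with the bound E[X_T^2]/x^2 = T/x^2. The grid size, sample size, threshold and seed below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

T, x = 1.0, 2.0                      # horizon and threshold
n_steps, n_paths = 1_000, 20_000     # grid resolution and Monte Carlo sample size
dt = T / n_steps

# Brownian paths on a uniform grid: cumulative sums of N(0, dt) increments.
increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B = np.cumsum(increments, axis=1)

# X_t = |B_t| is a nonnegative sub-martingale; estimate P(sup_{t<=T} X_t >= x).
running_sup = np.abs(B).max(axis=1)
empirical = np.mean(running_sup >= x)

bound = T / x**2                     # E[X_T^2] / x^2 = E[B_T^2] / x^2 = T / x^2

print(f"empirical P(sup X_t >= x) ~ {empirical:.4f}")
print(f"Doob-Kolmogorov bound       {bound:.4f}")
```

On typical runs the empirical frequency sits well below the bound (the inequality is not tight for this example); note also that the maximum over the grid only approximates the true supremum from below.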
2 Stochastic processes and martingales

Consider a probability space (Ω, F, P) and a filtration (F_t, t ∈ R_+). We assume that all zero-measure events are "added" to F_0. Namely, for every A ⊂ Ω such that A ⊂ A' for some A' ∈ F with P(A') = 0, the set A also belongs to F_0. A filtration is called right-continuous if F_t = ∩_{ε>0} F_{t+ε}. From now on we consider exclusively right-continuous filtrations.

A stochastic process X_t adapted to this filtration is a measurable function X : Ω × [0, ∞) → R such that X_t ∈ F_t for every t. Denote by L_2 the space of processes such that the Riemann integral ∫_0^T X_t(ω) dt exists a.s. and moreover E[∫_0^T X_t^2 dt] < ∞ for every T > 0. This implies P(ω : ∫_0^T |X_t(ω)| dt < ∞, ∀ T) = 1.

Let M_2 consist of square integrable right-continuous martingales with left limits (RCLL). Namely, E[X_t^2] < ∞ for every X ∈ M_2 and every t ≥ 0. Finally, M_{2,c} ⊂ M_2 is the further subset consisting of a.s. continuous processes. For each T > 0 we define a norm on M_2 by ‖X‖ = ‖X‖_T = (E[X_T^2])^{1/2}. Applying the sub-martingale property of X_t^2, we have E[X_{T_1}^2] ≤ E[X_{T_2}^2] for every T_1 ≤ T_2.

A stochastic process Y_t is called a version of X_t if for every t ∈ R_+, P(X_t = Y_t) = 1. Notice that this is weaker than saying P(X_t = Y_t, ∀ t) = 1.

Proposition 2. Suppose (X_t, F_t) is a submartingale and t → E[X_t] is a continuous function. Then there exists a version Y_t of X_t which is RCLL.

We skip the proof of this fact.

Proposition 3. M_2 is a complete metric space with respect to ‖·‖, and M_{2,c} is a closed subspace of M_2.

Proof. We need to show that if X^{(n)} ∈ M_2 is Cauchy, then there exists X ∈ M_2 with ‖X^{(n)} − X‖ → 0.

Assume X^{(n)} is Cauchy. Fix t ≤ T. Since X^{(n)} − X^{(m)} is a martingale as well, E[(X_t^{(n)} − X_t^{(m)})^2] ≤ E[(X_T^{(n)} − X_T^{(m)})^2]. Thus (X_t^{(n)}) is Cauchy as well. We know that the space L_2 of random variables with finite second moment is closed. Thus for each t there exists a random variable X_t such that E[(X_t^{(n)} − X_t)^2] → 0 as n → ∞. We claim that since X_t^{(n)} ∈ F_t and X^{(n)} is RCLL, the process (X_t, t ≥ 0) is adapted to F_t as well (exercise). Let us show it is a martingale. First, E[|X_t|] < ∞, since in fact E[X_t^2] < ∞. Fix s < t and A ∈ F_s. Since each X^{(n)} is a martingale, E[X_t^{(n)} 1(A)] = E[X_s^{(n)} 1(A)]. We have

    E[X_t 1(A)] − E[X_s 1(A)] = E[(X_t − X_t^{(n)}) 1(A)] − E[(X_s − X_s^{(n)}) 1(A)].

Moreover, E[|X_t − X_t^{(n)}| 1(A)] ≤ E[|X_t − X_t^{(n)}|] ≤ (E[(X_t − X_t^{(n)})^2])^{1/2} → 0 as n → ∞. A similar statement holds for s. Since the left-hand side does not depend on n, we conclude E[X_t 1(A)] = E[X_s 1(A)], implying E[X_t | F_s] = X_s; namely, X_t is a martingale. Since E[X_t] = E[X_0] is constant and therefore continuous as a function of t, there exists a version of X_t which is RCLL. For simplicity we denote it by X_t as well. We have constructed a process X_t ∈ M_2 such that E[(X_t^{(n)} − X_t)^2] → 0 for all t ≤ T. This proves completeness of M_2.

Now we deal with closedness of M_{2,c}. Since X_t^{(n)} − X_t is a martingale, (X_t^{(n)} − X_t)^2 is a submartingale. Since X^{(n)} ∈ M_2 and X is RCLL, the process (X_t^{(n)} − X_t)^2 is RCLL, so the submartingale inequality applies. Fix ε > 0. By the submartingale inequality we have

    P(sup_{t≤T} |X_t^{(n)} − X_t| > ε) ≤ (1/ε^2) E[(X_T^{(n)} − X_T)^2] → 0

as n → ∞. Then we can choose a subsequence n_k such that

    P(sup_{t≤T} |X_t^{(n_k)} − X_t| > 1/k) ≤ 1/2^k.

Since 1/2^k is summable, by the Borel-Cantelli Lemma we have sup_{t≤T} |X_t^{(n_k)} − X_t| → 0 almost surely: P({ω ∈ Ω : sup_{t≤T} |X_t^{(n_k)}(ω) − X_t(ω)| → 0}) = 1. Recall that a uniform limit of continuous functions is continuous as well (first lecture). Thus X_t is continuous a.s. As a result, X_t ∈ M_{2,c} and M_{2,c} is closed.

3 Doob-Meyer decomposition and quadratic variation of processes in M_{2,c}

Consider a Brownian motion B_t adapted to a filtration F_t. Suppose this filtration makes B_t a strong Markov process (for example, F_t is generated by B itself). Recall that both B_t and B_t^2 − t are martingales, and also B ∈ M_{2,c}. Finally, recall that the quadratic variation of B over any interval [0, t] is t. There is a generalization of these observations to processes in M_{2,c}. For this we need to recall the following result.

Theorem 1 (Doob-Meyer decomposition). Suppose (X_t, F_t) is a continuous non-negative sub-martingale. Then there exist a continuous martingale M_t and an a.s. non-decreasing continuous process A_t with A_0 = 0, both adapted to F_t, such that X_t = A_t + M_t. The decomposition is unique in the almost sure sense.

The proof of this theorem is skipped. It is obtained by an appropriate discretization and passing to limits; we established the discrete version of this result earlier. See [1] for details.

Now suppose X_t ∈ M_{2,c}. Then X_t^2 is a continuous non-negative submartingale and thus the Doob-Meyer theorem applies. The part A_t in the unique decomposition of X_t^2 is called the quadratic variation of X_t (we will shortly justify this name) and is denoted ⟨X_t⟩.

Theorem 2. Suppose X_t ∈ M_{2,c}. Then for every t > 0 the following convergence in probability takes place:

    Σ_{0≤j≤n−1} (X_{t_{j+1}} − X_{t_j})^2 → ⟨X_t⟩  as Δ(Π_n) → 0,

where the limit is over all partitions Π_n = {0 = t_0 < t_1 < ··· < t_n = t} and Δ(Π_n) = max_j |t_j − t_{j−1}|.
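Before turning to the proof, here is a quick numerical illustration of Theorem 2 in the one case computed so far, X = B with ⟨B_t⟩ = t. This is a sketch, not part of the notes' argument: it assumes nested dyadic partitions of [0, t] and a single Brownian path simulated on a fine uniform grid; the grid size and seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

t = 2.0                                       # target: <B_t> = t
N = 2**16                                     # finest grid size
dB = rng.normal(0.0, np.sqrt(t / N), size=N)  # Brownian increments on the fine grid
B = np.concatenate(([0.0], np.cumsum(dB)))    # one Brownian path sampled at N + 1 points

# Nested dyadic partitions Pi_n of [0, t]: keep every (N / n)-th grid point.
for n in [2**4, 2**8, 2**12, 2**16]:
    idx = np.arange(0, N + 1, N // n)
    incr = np.diff(B[idx])                    # B_{t_{j+1}} - B_{t_j} along the partition
    print(f"n = {n:6d}   sum of squared increments = {np.sum(incr**2):.4f}   (target t = {t})")
```

As the mesh Δ(Π_n) shrinks, the sums of squared increments stabilize around t, while, as the proof below quantifies, the corresponding sums of fourth powers of increments vanish.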
Proof of Theorem 2. Fix s < t and let X ∈ M_{2,c}. We have

    E[(X_t − X_s)^2 − (⟨X_t⟩ − ⟨X_s⟩) | F_s]
      = E[X_t^2 − 2 X_t X_s + X_s^2 − (⟨X_t⟩ − ⟨X_s⟩) | F_s]
      = E[X_t^2 | F_s] − 2 X_s E[X_t | F_s] + X_s^2 − E[⟨X_t⟩ | F_s] + ⟨X_s⟩
      = E[X_t^2 − ⟨X_t⟩ | F_s] − X_s^2 + ⟨X_s⟩ = 0,

since X_t^2 − ⟨X_t⟩ is a martingale. Thus for every s < t ≤ u < v, by conditioning first on F_u and using the tower property we obtain

    E[((X_t − X_s)^2 − (⟨X_t⟩ − ⟨X_s⟩)) ((X_v − X_u)^2 − (⟨X_v⟩ − ⟨X_u⟩))] = 0.   (1)

The proof of the following lemma is an application of various "carefully placed" tower properties and is omitted. See [1], Lemma 1.5.9, for details.

Lemma 1. Suppose X ∈ M_2 satisfies |X_s| ≤ M a.s. for all s ≤ t. Then for every partition 0 = t_0 ≤ ··· ≤ t_n = t,

    E[(Σ_j (X_{t_{j+1}} − X_{t_j})^2)^2] ≤ 6 M^4.

Lemma 2. Suppose X ∈ M_2 satisfies |X_s| ≤ M a.s. for all s ≤ t. Then

    lim_{Δ(Π_n)→0} E[Σ_j (X_{t_{j+1}} − X_{t_j})^4] = 0,

where Π_n = {0 = t_0 < ··· < t_n = t} and Δ(Π_n) = max_j |t_{j+1} − t_j|.

Proof. We have

    Σ_j (X_{t_{j+1}} − X_{t_j})^4 ≤ Σ_j (X_{t_{j+1}} − X_{t_j})^2 · sup{|X_r − X_s|^2 : |r − s| ≤ Δ(Π_n)}.

Applying the Cauchy-Schwarz inequality and Lemma 1, we obtain

    (E[Σ_j (X_{t_{j+1}} − X_{t_j})^4])^2 ≤ E[(Σ_j (X_{t_{j+1}} − X_{t_j})^2)^2] · E[sup{|X_r − X_s|^4 : |r − s| ≤ Δ(Π_n)}]
      ≤ 6 M^4 · E[sup{|X_r − X_s|^4 : |r − s| ≤ Δ(Π_n)}].

Now X(ω) is a.s. continuous, hence uniformly continuous on [0, t], so sup{|X_r − X_s|^4 : |r − s| ≤ Δ(Π_n)} → 0 a.s. as Δ(Π_n) → 0. Since this quantity is also bounded by (2M)^4, the bounded convergence theorem gives E[sup{|X_r − X_s|^4 : |r − s| ≤ Δ(Π_n)}] → 0, and the lemma follows.
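Lemma 2 is the reason fourth-order terms drop out in the limit, and its conclusion is easy to see numerically. As a rough illustration only (Brownian motion is not bounded, so it does not literally satisfy the hypothesis |X_s| ≤ M; the point is the scaling): for a uniform partition of [0, t] into n pieces, E[Σ_j (B_{t_{j+1}} − B_{t_j})^4] = 3t^2/n, which vanishes as the mesh shrinks. The sketch below estimates it by Monte Carlo; sample sizes and seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

t = 1.0
n_paths = 1_000

for n in [10, 100, 1_000, 10_000]:
    dt = t / n
    dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n))   # increments over a uniform partition
    fourth_power_sum = np.sum(dB**4, axis=1)                # sum_j (B_{t_{j+1}} - B_{t_j})^4 per path
    print(f"n = {n:6d}   Monte Carlo mean = {fourth_power_sum.mean():.5f}"
          f"   exact E[sum] = 3 t^2 / n = {3 * t**2 / n:.5f}")
```

The Monte Carlo means track 3t^2/n closely, shrinking by roughly a factor of ten each time the partition is refined tenfold.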