Strong approximation of stochastic processes using random walks
Balázs Székely Supervisor: Tamás Szabados
Institute of Mathematics Budapest University of Technology and Economics 2004
Contents

1 Introduction 1

2 Approximation of continuous martingales 3
  2.1 Random walks and the Wiener process 3
  2.2 Approximation of continuous martingales 7
    2.2.1 Approximation of the quadratic variation process 8
    2.2.2 Strong approximation of continuous martingales 10
  2.3 Symmetrically evolving martingales 14
    2.3.1 Distributional characterization of Ocone martingales 14
    2.3.2 Properties of Ocone martingales 19
    2.3.3 Some examples of Ocone and non-Ocone martingales 21
  2.4 Approximation of local time, excursion, meander and bridge 25
    2.4.1 Approximation of local time 26
    2.4.2 Excursion, bridge and meander: Marchal's construction 34
  2.5 Pathwise stochastic integration 35
    2.5.1 Karandikar's integral approximation 35
    2.5.2 Discretization applied to the integrator 39
    2.5.3 The non-cadlag case 43

3 Approximation of the exponential functional of Brownian motion 46
  3.1 Introduction to the exp functional of Brownian motion 46
  3.2 The distribution of the discrete exponential functional 47
    3.2.1 Review of some fractal notions 48
    3.2.2 The non-overlapping case 49
    3.2.3 The general case 50
  3.3 The moments of the discrete exponential functional 51
    3.3.1 Permutations with given descent set 52
  3.4 Approximation of the exponential functional of Brownian motion 56
    3.4.1 The moments of the exponential functional of BM 58
    3.4.2 Properties of the exponential functional process 61
Chapter 1
Introduction
The main concept of this thesis is to draw attention to the fact that, both for theoretical and practical reasons, it is useful to search for strong (i.e. pathwise, almost sure) approximations of stochastic processes by simple random walks (RWs). The prototype of such efforts was the construction of Brownian motion (BM) as an almost sure limit of simple RW paths, given by Frank Knight in 1962 [15]. Later this construction was simplified and somewhat improved by Pál Révész [22] and then by Tamás Szabados [28]. This sort of result states that one can find a sequence of time and space scaled random walks (Bm) such that it converges to a Brownian motion B almost surely uniformly on every compact interval as m goes to infinity:
sup_{t∈[0,T]} |Bm(t) − B(t)| → 0 a.s.

Besides the theoretical value of discrete approximations, in some applications the discrete model can be more natural than the continuous one. The method provides a general tool for proving statements about continuous time stochastic processes: first one proves the discrete version of the statement, and then takes a limit to obtain the continuous version. Of course, we cannot predict which part of this procedure will be easier. In this thesis, we present both theoretical results, e.g. approximation of continuous martingales, and also show examples in which the discrete approximation method can be carried out naturally. During our research some additional statements turned out to be true which are not closely related to our present subject. However, we present some of them because we think that the reader may find them interesting.

This study consists of two main parts, Chapter 2 and Chapter 3. Chapter 2 is mainly based on the paper Strong approximation of continuous local martingales by simple random walks and some recent unpublished results. In Chapter 3, we present our results on the so-called exponential functional of Brownian motion, which were discussed in the papers An exponential functional of random walks and Moments of an exponential functional of random walks and permutations with given descent sets.

Chapter 2 discusses a generalization of the result by Knight to continuous local martingales M. We will show that the quadratic variation process hM,Mi can be almost surely uniformly approximated by discrete quadratic variation processes Nm which are based on stopping times of a Skorohod-type embedding of nested simple RWs into M. This corresponds to an earlier similar result by Karandikar [12]. In Section 2.5 we present some of his related results, for instance the discrete approximation of certain stochastic integrals.
Theorems 2 and 3 give an approximation of M by a nested sequence of RWs Bm, time-changed by hM,Mi and Nm, respectively. The approximations almost surely uniformly converge on bounded intervals.
It is important to note that the DDS Brownian motion W and the quadratic variation hM,Mi are not independent in general, and neither are the approximating RW Bm and the discrete quadratic variation Nm. Since this could be a hindrance both in theory and in applications, a necessary and sufficient condition is given for the independence in Theorem 4. Namely, W and hM,Mi are independent if and only if M has symmetric increments given the past. This is a reformulation of an earlier result by Ocone and Dubins–Émery–Yor. We also present some recent theorems on the properties of Ocone martingales which provide examples of Ocone martingales. These investigations on this special family of martingales can be found in Section 2.3. The question naturally arises whether we can extend our discrete approximation method to approximate stochastic integrals with respect to continuous martingales. In Section 2.5 it turns out that the construction works for cadlag integrands. To get an approximation result for a larger class of integrands, e.g. of the form f'−(Mt), where f'− is the left derivative of the difference of two convex functions, we prove that we can approximate the Brownian local time in the same manner as we do with martingales. This work is done in Section 2.4.
Chapter 3 focuses on a certain application of the discrete approximation of Brownian motion. Here, we investigate the exponential functional of Brownian motion
I_ν = ∫_0^∞ exp(B(t) − νt) dt,

and mainly its discrete version, the exponential functional of random walks. The properties of the discrete exponential functional are rather different from the continuous one: typically its distribution is singular w.r.t. Lebesgue measure, all of its positive integer moments are finite, and they characterize the distribution. On the other hand, using suitable random walk approximations to Brownian motion, the resulting discrete exponential functionals converge a.s. to the exponential functional of Brownian motion, hence their limit distribution is the same as in the continuous case, namely, that of the reciprocal of a gamma random variable, which is absolutely continuous w.r.t. Lebesgue measure. This way we also give a new, elementary proof of an earlier result by Dufresne and Yor. Beyond these results we have found a recursion for certain moments in the expansion of the moments of the discrete approximation.
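As a quick illustration of the discrete point of view, the following Python sketch estimates I_ν by a Riemann sum along a scaled simple RW path. The horizon, the step parameters and the function name are illustrative assumptions, not the exact scheme analysed in Chapter 3; the sanity check uses E[e^{B(t)}] = e^{t/2}, so that E[I_ν] = 1/(ν − 1/2) for ν > 1/2 by Fubini's theorem.

```python
import numpy as np

def exp_functional(m, nu, horizon, rng):
    """Riemann-sum estimate of I_nu with a scaled simple RW in place of B.

    The walk has step 2**-m in space and 4**-m in time; the integral is
    truncated at `horizon` (negligible tail for the parameters below).
    """
    dt = 4.0 ** -m
    n = int(horizon / dt)
    B = 2.0 ** -m * np.concatenate(([0.0], np.cumsum(rng.choice([-1, 1], size=n))))
    t = np.arange(n + 1) * dt
    return float(np.sum(np.exp(B - nu * t)) * dt)

rng = np.random.default_rng(11)
samples = [exp_functional(5, 2.0, 30.0, rng) for _ in range(200)]
mean = float(np.mean(samples))   # should be near E[I_2] = 1/(2 - 1/2) = 2/3
```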
In this thesis we denote the known results by alphabetical numbering and our own results by Arabic numbering.

Chapter 2
Approximation of continuous martingales
2.1 Random walks and the Wiener process
A main tool of this thesis is an elementary construction of the Wiener process (= BM). The specific construction we are going to use in the sequel, taken from [28], is based on a nested sequence of simple random walks that uniformly converges to the Wiener process on bounded intervals with probability 1. This will be called the RW construction in the sequel. One of our intentions in this chapter is to extend the underlying twist and shrink algorithm to continuous local martingales. We summarize the major steps of the RW construction here, see [29] as well. We start with an infinite matrix of i.i.d. random variables Xm(k), P{Xm(k) = ±1} = 1/2 (m ≥ 0, k ≥ 1), defined on the same underlying probability space (Ω, F, P). Each row of this matrix is a basis of an approximation of the Wiener process with a dyadic step size ∆t = 2^{−2m} in time and a corresponding step size ∆x = 2^{−m} in space, illustrated by the next table.
Table 2.1: The starting setting for the RW construction of BM
∆t       ∆x       i.i.d. sequence                RW
1        1        X0(1), X0(2), X0(3), ...      S0(n) = Σ_{k=1}^n X0(k)
2^{−2}   2^{−1}   X1(1), X1(2), X1(3), ...      S1(n) = Σ_{k=1}^n X1(k)
2^{−4}   2^{−2}   X2(1), X2(2), X2(3), ...      S2(n) = Σ_{k=1}^n X2(k)
...      ...      ...                            ...
The second step of the construction is twisting. From the independent random walks we want to create dependent ones so that after shrinking temporal and spatial step sizes, each consecutive RW becomes a refinement of the previous one. Since the spatial unit will be halved at each consecutive row, we define stopping times by Tm(0) = 0, and for k ≥ 0,
Tm(k + 1) = min{n : n > Tm(k), |Sm(n) − Sm(Tm(k))| = 2} (m ≥ 1). (2.1)

These are the random time instants when a RW visits even integers, different from the previous one. After shrinking the spatial unit by half, a suitable modification of this RW will visit the same integers in the same order as the previous RW. We operate here on each point ω ∈ Ω of the sample
space separately, i.e. we fix a sample path of each RW. We define twisted RWs S̃m recursively for k = 1, 2, ... using S̃m−1, starting with S̃0(n) = S0(n) (n ≥ 0). With each fixed m we proceed for k = 0, 1, 2, ... successively, and for every n in the corresponding bridge, Tm(k) < n ≤ Tm(k + 1). Any bridge is flipped if its sign differs from the desired one:

X̃m(n) = Xm(n) if Sm(Tm(k + 1)) − Sm(Tm(k)) = 2X̃m−1(k + 1), and X̃m(n) = −Xm(n) otherwise,

and then S̃m(n) = S̃m(n − 1) + X̃m(n). Then S̃m(n) (n ≥ 0) is still a simple symmetric random walk [28, Lemma 1]. The twisted RWs have the desired refinement property:
(1/2) S̃m(Tm(k)) = S̃m−1(k) (m ≥ 1, k ≥ 0).
The last step of the RW construction is shrinking. The sample paths of S̃m(n) (n ≥ 0) can be extended to continuous functions by linear interpolation; this way one gets S̃m(t) (t ≥ 0) for real t. Then we define the mth approximating RW by

B̃m(t) = 2^{−m} S̃m(t 2^{2m}).
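The twist-and-shrink steps described above can be sketched in Python as follows. The function names, the generous fixed length of each fresh i.i.d. row (only bridges completed within it are kept), and the seeded generator are implementation assumptions made for this sketch.

```python
import numpy as np

def twist(prev_steps, new_steps):
    """One twisting step (2.1): flip each +-2 bridge of the fresh walk whose
    endpoint increment disagrees in sign with the corresponding step of the
    previous twisted walk.  Returns the twisted steps and the bridge-end
    indices T_m(1), T_m(2), ... (incomplete final bridges are discarded)."""
    twisted, ends = [], []
    k, anchor, pos = 0, 0, 0   # anchor = S_m(T_m(k)) on the untwisted walk
    bridge = []
    for x in new_steps:
        if k >= len(prev_steps):
            break
        bridge.append(x)
        pos += x
        if abs(pos - anchor) == 2:                    # a bridge is completed
            if pos - anchor == 2 * prev_steps[k]:
                twisted.extend(bridge)                # sign already matches
            else:
                twisted.extend(-b for b in bridge)    # flip the whole bridge
            ends.append(len(twisted))
            anchor, bridge, k = pos, [], k + 1
    return np.array(twisted), ends

def rw_construction(levels, n0, rng):
    """Nested shrunken walks B~_0, ..., B~_{levels-1}: walks[m][k] is the
    value of B~_m at time k * 4**(-m)."""
    steps = rng.choice([-1, 1], size=n0)              # row 0 needs no twist
    walks = [np.concatenate(([0.0], np.cumsum(steps)))]
    for m in range(1, levels):
        # fresh i.i.d. row, taken generously long (a bridge lasts 4 steps
        # on average); only completed bridges are kept
        fresh = rng.choice([-1, 1], size=12 * len(steps))
        steps, _ = twist(steps, fresh)
        walks.append(np.concatenate(([0.0], np.cumsum(steps))) * 2.0 ** -m)
    return walks

walks = rw_construction(4, 64, np.random.default_rng(1))
```

By construction, the twisted walk at each bridge end equals twice the previous walk, which is exactly the refinement property displayed above.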
Using the definition of Tm and B̃m we also get the general refinement property

B̃m+1(Tm+1(k) 2^{−2(m+1)}) = B̃m(k 2^{−2m}) (m ≥ 0, k ≥ 0). (2.2)
Note that a refinement takes the same dyadic values in the same order as the previous shrunken walk, but there is a time lag in general:

Tm+1(k) 2^{−2(m+1)} − k 2^{−2m} ≠ 0. (2.3)

Next we quote some important facts from [28] about the above RW construction that will be used in the sequel. These will be stated in somewhat stronger forms, but they can be read easily from the proofs in the cited reference, cf. Lemmas 2–4 and Theorem 3 there.
Lemma A. Suppose that X1, X2, ..., XN is an i.i.d. sequence of random variables, E(Xk) = 0, Var(Xk) = 1, and their moment generating function is finite in a neighborhood of 0. Let Sj = X1 + ··· + Xj, 1 ≤ j ≤ N. Then for any C > 1 and N ≥ N0(C) one has

P{ sup_{1≤j≤N} |Sj| ≥ (2CN log N)^{1/2} } ≤ 2N^{1−C}.

We mention that this basic fact, which appears in the above-mentioned reference [28], essentially depends on a large deviation theorem.
We have a more convenient result in a special case of Hoeffding's inequality, cf. [10]. Let X1, X2, ... be a sequence of bounded i.i.d. random variables such that bi ≤ Xi ≤ ai, and let Sn = Σ_{i=1}^n Xi. Then by Hoeffding's inequality, for any x > 0 we have

P{ |Sn − E(Sn)| ≥ x ( (1/4) Σ_{i=1}^n (ai − bi)^2 )^{1/2} } ≤ 2 e^{−x²/2}.
If E(Xi) = 0 and bi = −ai here, then (1/4) Σ_{i=1}^n (ai − bi)^2 = Σ_{i=1}^n ai^2 = Var(Sn) if and only if Xi = ai X'i, where P{X'i = ±1} = 1/2, 1 ≤ i ≤ n.
Thus if S = Σ_r ar X'r, where not all ar are zero and Var(S) = Σ_r ar^2 < ∞, we get

P{ |S| ≥ x (Var(S))^{1/2} } ≤ 2 e^{−x²/2} (x ≥ 0). (2.4)
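Inequality (2.4) lends itself to a quick Monte Carlo sanity check; the coefficients a_r below are arbitrary illustrative choices.

```python
import numpy as np

# Empirical check of (2.4): P{|S| >= x * Var(S)**0.5} <= 2*exp(-x**2/2)
# for S = sum_r a_r X'_r with i.i.d. signs X'_r, P{X'_r = +-1} = 1/2.
rng = np.random.default_rng(42)
a = np.array([0.5, 1.0, 2.0, 0.25, 1.5])       # arbitrary fixed coefficients
var_S = float(np.sum(a ** 2))                  # Var(S) = sum_r a_r**2
signs = rng.choice([-1, 1], size=(100_000, a.size))
S = signs @ a                                  # 100000 independent copies of S
x = 2.0
freq = float(np.mean(np.abs(S) >= x * np.sqrt(var_S)))
bound = 2.0 * np.exp(-x ** 2 / 2.0)            # right-hand side of (2.4)
```

The empirical tail frequency stays below the bound, as (2.4) requires.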
The summation above may extend either to finitely many or to countably many terms. Let S1, S2, ..., SN be arbitrary sums of the above type: Sk = Σ_r akr X'kr, P{X'kr = ±1} = 1/2, 1 ≤ k ≤ N, where X'kr and X'ls can be dependent when k ≠ l. Then by inequality (2.4) we obtain the following analog of Lemma A: for any C > 1 and N ≥ 1,

P{ sup_{1≤k≤N} |Sk| ≥ (2C log N)^{1/2} sup_{1≤k≤N} (Var(Sk))^{1/2} }
  ≤ Σ_{k=1}^N P{ |Sk| ≥ (2C log N Var(Sk))^{1/2} } ≤ 2N e^{−C log N} = 2N^{1−C}. (2.5)

Lemma A easily implies that the time lags (2.3) are uniformly small if m is large enough.
Lemma B. For any K > 0, C > 1, and for any m ≥ m0(C), we have
P{ sup_{0≤k2^{−2m}≤K} |Tm+1(k) 2^{−2(m+1)} − k 2^{−2m}| ≥ (3/2)(CK log∗ K)^{1/2} m^{1/2} 2^{−m} } ≤ 2(K 2^{2m})^{1−C},

where log∗ x = max{1, log x}. This lemma and the refinement property (2.2) imply the uniform closeness of two consecutive approximations if m is large enough.
Lemma C. For any K > 0, C > 1, and for any m ≥ m1(C), we have

P{ sup_{0≤k2^{−2m}≤K} |B̃m+1(k 2^{−2m}) − B̃m(k 2^{−2m})| ≥ K∗^{1/4} (log∗ K)^{3/4} m 2^{−m/2} } ≤ 3(K 2^{2m})^{1−C},

where K∗ = max{1, K}. Based on this lemma, it is not difficult to show the following convergence result.
Theorem A. The shrunken RWs B˜m(t)(t ≥ 0, m = 0, 1, 2,... ) almost surely uniformly converge to a Wiener process W (t)(t ≥ 0) on any compact interval [0,K], K > 0. For any K > 0, C ≥ 3/2, and for any m ≥ m2(C), we have
P{ sup_{0≤t≤K} |W(t) − B̃m(t)| ≥ K∗^{1/4} (log∗ K)^{3/4} m 2^{−m/2} } ≤ 6(K 2^{2m})^{1−C}.
Now taking C = 3 in Theorem A and using the Borel–Cantelli lemma, we get
sup_{0≤t≤K} |W(t) − B̃m(t)| < O(1) m 2^{−m/2} a.s. (m → ∞)

and

sup_{0≤t≤K} |W(t) − B̃m(t)| < K^{1/4} (log K)^{3/4} a.s. (K → ∞)
for any m large enough, m ≥ m2(3). Next we are going to study the properties of another nested sequence of random walks, obtained by Skorohod embedding. This sequence is not identical, though asymptotically equivalent, to the above RW construction, cf. [28, Theorem 4]. Given a Wiener process W, first we define the stopping times which yield the Skorohod embedded process Bm(k 2^{−2m}) into W. For every m ≥ 0 let sm(0) = 0 and
sm(k + 1) = inf{s : s > sm(k), |W(s) − W(sm(k))| = 2^{−m}} (k ≥ 0). (2.6)

With these stopping times the embedded process by definition is

Bm(k 2^{−2m}) = W(sm(k)) (m ≥ 0, k ≥ 0). (2.7)
This definition of Bm can be extended to any real t ≥ 0 by pathwise linear interpolation. The next lemma describes some useful facts about the relationship between B̃m and Bm. These follow from [28, Lemmas 5, 7 and Theorem 4], with some minor modifications.
In general, roughly speaking, B̃m is more useful when one wants to generate stochastic processes from scratch, while Bm is more advantageous when one needs a discrete approximation of given processes, as in the case of stochastic integration.
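A minimal sketch of the embedding (2.6)–(2.7) on a discretised Wiener path follows. On a grid the exit is detected with a small overshoot, so the stopping times are approximate; the embedded values are snapped to the exact ±2^{−m} lattice, as they are for the true continuous path. The path parameters are illustrative.

```python
import numpy as np

def skorohod_embed(W, dt, m):
    """Skorohod stopping times (2.6) on a sampled path W (W[i] ~ W(i*dt)):
    successive first times the path has moved +-2**-m from the previous
    stopping value.  Returns the embedded values B_m(k * 4**-m) = W(s_m(k))
    and the (approximate) times s_m(k)."""
    level = 2.0 ** -m
    vals, times = [0.0], [0.0]
    anchor = W[0]
    for i in range(1, len(W)):
        if abs(W[i] - anchor) >= level:
            anchor += level * np.sign(W[i] - anchor)   # snap to the lattice
            vals.append(anchor)
            times.append(i * dt)
    return np.array(vals), np.array(times)

# a finely discretised Wiener path as input (illustrative parameters)
rng = np.random.default_rng(3)
dt = 1e-4
W = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), 100_000))))
vals, times = skorohod_embed(W, dt, 2)
```

Successive embedded values differ by exactly ±2^{−2} here, which is the random-walk property that makes Bm a shrunken simple RW.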
Lemma D. For any C ≥ 3/2, K > 0, take the following subset of the sample space:

Am = { sup_{n>m} sup_{0≤k2^{−2m}≤K} |2^{−2n} Tm,n(k) − k 2^{−2m}| < 6(C K∗ log∗ K)^{1/2} m^{1/2} 2^{−m} }, (2.8)

where Tm,n(k) = Tn ◦ Tn−1 ◦ ··· ◦ Tm(k) for n > m ≥ 0 and k ≥ 0. Then for any m ≥ m3(C),
P{Am^c} ≤ 4(K 2^{2m})^{1−C}.
Moreover, lim_{n→∞} 2^{−2n} Tm,n(k) = tm(k) exists almost surely and on the set Am we have

B̃m(k 2^{−2m}) = W(tm(k)) (0 ≤ k 2^{−2m} ≤ K),

cf. (2.7). Further, on Am except for a zero probability subset, sm(k) = tm(k) and
sup_{0≤k2^{−2m}≤K} |sm(k) − k 2^{−2m}| ≤ 6(C K∗ log∗ K)^{1/2} m^{1/2} 2^{−m} (m ≥ m3(C)). (2.9)
If the Wiener process is built by the RW construction described above using a sequence B̃m (m ≥ 0) of nested RWs and then one constructs the Skorohod embedded RWs Bm (m ≥ 0), it is natural to ask what the approximating properties of the latter are. The answer, described by the next lemma, is that they are essentially the same as those of B̃m, cf. Theorem A.
Lemma 1. For every K > 0, C ≥ 3/2 and m ≥ m3(C) we have
P{ sup_{0≤t≤K} |W(t) − Bm(t)| ≥ K∗^{1/4} (log∗ K)^{3/4} m 2^{−m/2} } ≤ 10(K 2^{2m})^{1−C}.

Proof. By the triangle inequality,
sup_{0≤t≤K} |W(t) − Bm(t)| ≤ sup_{0≤t≤K} |W(t) − B̃m(t)| + sup_{0≤t≤K} |B̃m(t) − Bm(t)|.
By Lemma D and equation (2.7), on the set Am defined by (2.8) we have
B̃m(k 2^{−2m}) = W(sm(k)) = Bm(k 2^{−2m}),
except for a zero probability subset when m ≥ m3(C). Since both B̃m(t) and Bm(t) are obtained by pathwise linear interpolation based on the vertices at k 2^{−2m} ∈ [0, K], they are identical on Am, except for a zero probability subset of it when m ≥ m3(C). Thus
P{ sup_{0≤t≤K} |W(t) − Bm(t)| ≥ K∗^{1/4} (log∗ K)^{3/4} m 2^{−m/2} }
  ≤ P{Am^c} + P{ sup_{0≤t≤K} |W(t) − B̃m(t)| ≥ K∗^{1/4} (log∗ K)^{3/4} m 2^{−m/2} }.

Then by Theorem A and Lemma D we get the statement of the lemma.
2.2 Approximation of continuous martingales
Besides the RW construction of standard Brownian motion, the other main tool applied in this section is a theorem of Dambis (1965) and Dubins–Schwarz (1965) and an extension of it, cf. Theorems B and C below. Briefly speaking, these theorems state that any continuous local martingale (M(t), t ≥ 0) can be transformed into a standard Brownian motion by a time-change. Then, somewhat loosely speaking, the resulting Brownian motion takes on the same values in the same order as M(t); only the corresponding time instants may differ. These and other necessary facts about continuous local martingales will be taken from and discussed in the style of [23] in the sequel.
Below it is supposed that an increasing family of sub-σ-algebras (Ft, t ≥ 0) is given on the probability space (Ω, F, P) and the given continuous local martingale M is adapted to it. In the case of a continuous local martingale M(t) vanishing at 0, its quadratic variation hM,Mit is a process with almost surely continuous and non-decreasing sample paths vanishing at 0. This will be one of the two time-changes we are going to use in the sequel. The other one is a quasi-inverse of the quadratic variation:
Ts = inf{t : hM,Mit > s}, (2.10)

where inf(∅) = ∞ by definition. Then the sample paths of the process Ts are almost surely increasing, but only right-continuous, since such a path has a jump at any value where the quadratic variation has a constant level-stretch. Besides this, Ts may take the value infinity. The duality between the two time-changes is expressed by hM,Mit = inf{s : Ts > t}. Observe that Ts cannot have constant level-stretches, since this would imply jumps for hM,Mit. Also, the continuity of hM,Mit gives that hM,Mi_{Ts} = s (s ≥ 0), while we have only T_{hM,Mit} ≥ t (t ≥ 0) in the opposite direction. It is clear that
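On a sampled non-decreasing path, the quasi-inverse (2.10) can be computed with a right-continuous search; the grid and the sample values below are illustrative, and the level-stretch in the example shows the jump of Ts described above.

```python
import numpy as np

def quasi_inverse(qv, dt, s_grid):
    """T_s = inf{t : <M,M>_t > s} for a sampled non-decreasing path
    qv[i] ~ <M,M>_{i*dt}.  side='right' realises the strict inequality,
    and the infimum of the empty set is +infinity, as in (2.10)."""
    idx = np.searchsorted(qv, s_grid, side='right')
    T = idx.astype(float) * dt
    T[idx == len(qv)] = np.inf
    return T

# a level-stretch of <M,M> between t = 2 and t = 4 gives T a jump at s = 1
qv = np.array([0.0, 0.5, 1.0, 1.0, 1.0, 2.0])
T = quasi_inverse(qv, 1.0, np.array([0.4, 0.9, 1.0, 1.5, 2.0]))
```

Here T jumps from 2 to 5 as s crosses the level 1, and T_s = ∞ once s reaches the terminal value of the quadratic variation.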
hM,Mit < s =⇒ t < Ts, but t < Ts =⇒ hM,Mit ≤ s, (2.11) while
hM,Mit ≤, ≥, > s ⇐⇒ t ≤, ≥, > Ts, (2.12) respectively.
Theorem B. [23, V (1.6), p. 181] If M is a continuous (Ft)-local martingale vanishing at 0 and such that hM,Mi∞ = ∞ a.s., then W(s) = M(Ts) is an (F_{Ts})-Brownian motion and M(t) = W(hM,Mit).
A similar statement holds when hM,Mi∞ < ∞ is possible. Note that on the set {hM,Mi∞ < ∞} the limit M(∞) = lim_{t→∞} M(t) exists with probability 1, cf. [23, IV (1.26), p. 131].
Theorem C. [23, V (1.7), p. 182] If M is a continuous (Ft)-local martingale vanishing at 0 and such that hM,Mi∞ < ∞ with positive probability, then there exists an enlargement (Ω̃, F̃t, P̃) of (Ω, F_{Tt}, P) and a Wiener process β̃ on Ω̃ independent of M such that the process

W(s) = M(Ts) if s < hM,Mi∞,
W(s) = M(∞) + β̃(s − hM,Mi∞) if s ≥ hM,Mi∞
is a standard Brownian motion and M(t) = W(hM,Mit) for t ≥ 0.

From now on, W will always refer to the Wiener process obtained from M by the above time-change, the so-called DDS Wiener process (or DDS Brownian motion) of M. Now Skorohod-type stopping times can be defined for M, similarly as for W in (2.6). For m ≥ 0, let τm(0) = 0 and
τm(k + 1) = inf{t : t > τm(k), |M(t) − M(τm(k))| = 2^{−m}} (k ≥ 0). (2.13)

The (m + 1)st stopping time sequence is a refinement of the mth in the sense that (τm(k))_{k=0}^∞ is a subsequence of (τm+1(j))_{j=0}^∞, so that for any k ≥ 0 there exist j1 and j2 with τm+1(j1) = τm(k) and τm+1(j2) = τm(k + 1), where the difference j2 − j1 ≥ 2 is even.

Lemma 2. With the stopping times defined by (2.13) from a continuous local martingale M one can directly obtain the sequence of shrunken RWs that almost surely converges to the DDS Wiener process W of M, cf. (2.7):
Bm(k 2^{−2m}) = W(sm(k)) = M(τm(k)), sm(k) = hM,Mi_{τm(k)} [but τm(k) ≤ T_{sm(k)}], where for m ≥ 0, the non-negative integer k is taking values (depending on ω) until sm(k) ≤ hM,Mi∞.

Proof. By Theorems B and C it follows that W(hM,Mi_{τm(k)}) = M(τm(k)). This implies that sm(k) ≤ hM,Mi_{τm(k)}. Then consider first the case k = 1. If sm(1) < hM,Mi_{τm(1)} held, then T_{sm(1)} < τm(1) would follow by (2.12), and this would lead to a contradiction because M(T_{sm(1)}) = W(sm(1)) = ±2^{−m}. For values k > 1, induction with a similar argument shows the statement of the lemma.
In the sequel Bm will always denote the sequence of shrunken RWs defined by Lemma 2.
2.2.1 Approximation of the quadratic variation process
Our next objective is to show that the quadratic variation of M can be obtained as an almost sure limit of a point process related to the above stopping times that we will call a discrete quadratic variation process:
Nm(t) = 2^{−2m} #{r : r > 0, τm(r) ≤ t} = 2^{−2m} #{r : r > 0, sm(r) ≤ hM,Mit} (t ≥ 0). (2.14)

Clearly, the paths of Nm are non-decreasing pure jump functions, the jump times being exactly the stopping times τm(k). Moreover, Nm(τm(k)) = k 2^{−2m} and the magnitude of each jump is the constant 2^{−2m} when m is fixed.
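A sketch of computing Nm on a discretised path follows; the grid detection of the exits involves a small overshoot, so this is an approximation of (2.14), not the exact construction, and the example martingale M = 2W (with hM,Mit = 4t) is an illustrative choice.

```python
import numpy as np

def discrete_qv(M, dt, m):
    """Discrete quadratic variation (2.14) on a sampled path M[i] ~ M(i*dt):
    count the Skorohod-type exits of size 2**-m and credit 4**-m per exit.
    Grid sampling slightly delays each detected exit, so N_m is biased a
    little low; this is a sketch, not the exact construction."""
    level = 2.0 ** -m
    N = np.empty(len(M))
    N[0] = 0.0
    anchor, count = M[0], 0
    for i in range(1, len(M)):
        if abs(M[i] - anchor) >= level:
            anchor += level * np.sign(M[i] - anchor)
            count += 1
        N[i] = count * 4.0 ** -m
    return N

# example: M = 2W, so <M,M>_t = 4t
rng = np.random.default_rng(7)
dt = 1e-5
W = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), 200_000))))
M = 2.0 * W
N = discrete_qv(M, dt, 3)
t = np.arange(len(M)) * dt
err = float(np.max(np.abs(N - 4.0 * t)))
```

For this moderate m the path of Nm already tracks hM,Mit = 4t uniformly, in line with the approximation proved below.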
Lemma 3. Let M be a continuous local martingale vanishing at 0, let hM,Mi be the quadratic variation, T be its quasi-inverse (2.10), and Nm be the discrete quadratic variation defined in (2.14). Fix K > 0 and take a sequence am = O(m^{−2−ε} 2^{2m}) K with some ε > 0, where am ≥ K ∨ 1 for any m ≥ 1 (x ∨ y = max(x, y), x ∧ y = min(x, y)).

(a) Then for any C ≥ 3/2 and m ≥ m4(C) we have
P{ sup_{0≤t≤K} |hM,Mit ∧ am − Nm(t ∧ T_{am})| ≥ 12(C am log∗ am)^{1/2} m^{1/2} 2^{−m} } ≤ 3(am 2^{2m})^{1−C}.
(b) Suppose that the quadratic variation satisfies the following tail-condition: a sequence (am) fulfilling the above assumptions can be chosen so that
P{hM,Mit > am} ≤ D(t) m^{−1−ε}, (2.15)

where D(t) is some finite valued function of t ∈ R+. Then for any C ≥ 3/2 and m ≥ m4(C) it follows that

P{ sup_{0≤t≤K} |hM,Mit − Nm(t)| ≥ 12(C am log∗ am)^{1/2} m^{1/2} 2^{−m} } ≤ 3(am 2^{2m})^{1−C} + D(K) m^{−1−ε}.

Proof. The basic idea of the proof is that the Skorohod stopping times of a Wiener process are asymptotically uniformly distributed as shown by (2.9), while the case of a continuous local martingale can be reduced to the former by the DDS representation, cf. Lemma 2. Introduce the abbreviation h_{a,m} = 11.1 (C am log∗ am)^{1/2} m^{1/2} 2^{−m}. Then h_{a,m} = O(m^{−ε/2}) → 0 as m → ∞. We need a truncation here using the sequence am, since the quadratic variation hM,Mit is not a bounded random variable in general. By (2.12) and (2.14),
Nm(t ∧ T_{am}) = 2^{−2m} #{r : r > 0, τm(r) ≤ t ∧ T_{am}} = 2^{−2m} #{r : r > 0, sm(r) ≤ hM,Mit ∧ am}.

On the event

A_{a,m} = { sup_{0≤r2^{−2m}≤2am} |sm(r) − r 2^{−2m}| ≤ h_{a,m} },

if r = ⌊(hM,Mit ∧ am + h_{a,m}) 2^{2m}⌋ + 1, then sm(r) > hM,Mit ∧ am, so sm(r) is not included in Nm(t ∧ T_{am}). Observe here that am + h_{a,m} + 2^{−2m} ≤ 2am if m is large enough, m ≥ m4(C), where we also suppose that m4(C) ≥ m3(C) and m3(C) is defined by Lemma D. This explains why the sup is taken for r 2^{−2m} ≤ 2am in the definition of A_{a,m}. Similarly on A_{a,m}, if r = ⌊(hM,Mit ∧ am − h_{a,m}) 2^{2m}⌋, then sm(r) ≤ hM,Mit ∧ am, so sm(r) must be included in Nm(t ∧ T_{am}). Hence

hM,Mit ∧ am − h_{a,m} − 2^{−2m} ≤ Nm(t ∧ T_{am}) ≤ hM,Mit ∧ am + h_{a,m} + 2^{−2m}, (2.16)

for any t ∈ [0, K] on A_{a,m}. Now 6(2C am log∗(2am))^{1/2} m^{1/2} 2^{−m} ≤ 11.1 (C am log∗ am)^{1/2} m^{1/2} 2^{−m} = h_{a,m}, since log∗(2am) ≤ (1 + log 2) log∗ am. Hence it follows by Lemma D that

P{A_{a,m}^c} ≤ 4(2am 2^{2m})^{1−C} ≤ 3(am 2^{2m})^{1−C},

when C ≥ 3/2 and m ≥ m4(C). Noticing that 0.9 (C am log∗ am)^{1/2} m^{1/2} 2^{−m} > 2^{−2m} for any m ≥ 1, this and (2.16) prove (a). Part (b) follows from (a), the inequality
|hM,Mit − Nm(t)| ≤ |hM,Mit − hM,Mit ∧ am| + |hM,Mit ∧ am − Nm(t ∧ T_{am})| + |Nm(t ∧ T_{am}) − Nm(t)|, (2.17)

and from the following simple relationships between events:
{hM,Mit ∧ am ≠ hM,Mit} = {hM,Mit > am}

and
{Nm(t ∧ T_{am}) ≠ Nm(t)} = {t > T_{am}} = {hM,Mit > am}, cf. (2.12).
We mention that when the quadratic variation hM,Mit is almost surely bounded above by a finite valued function g(t) for t > 0, statements (a) and (b) of Lemma 3 simplify as
P{ sup_{0≤t≤K} |hM,Mit − Nm(t)| ≥ 12(C g∗(K) log∗ g(K))^{1/2} m^{1/2} 2^{−m} } ≤ 3(g(K) 2^{2m})^{1−C}

for any K > 0, C ≥ 3/2 and m ≥ m4(C). The statement of the next theorem corresponds to the main result in Karandikar [12], though the method applied is different and here we give a rate of convergence as well.
Theorem 1. Using the same notations as in Lemma 3, we have
sup_{0≤t≤K} |hM,Mit − Nm(t)| < O(1) m^{1/2} 2^{−m} a.s. (m → ∞)

and

sup_{0≤t≤K} |hM,Mit − Nm(t)| < K^{1/2} (log K)^{1/2} a.s. (K → ∞)

for any m large enough, m ≥ m4(3).

Proof. To show the first statement take e.g. C = 3/2 and am = K log(m + 2) in Lemma 3 (a). Consider the inequality (2.17). Since hM,MiK is finite-valued and am → ∞, if m is large enough, depending on ω, hM,MiK < am holds and then t < T_{am} holds as well by (2.11). These remarks show that the first and the third terms on the right hand side of inequality (2.17) are zero if m is large enough. Further, the statement of Lemma 3 (a) can be applied to the second term. This, with the Borel–Cantelli lemma, proves the first statement. The second statement of the theorem follows similarly from Lemma 3 (a) by the Borel–Cantelli lemma, taking C = 3 and am = K.
2.2.2 Strong approximation of continuous martingales Now we are ready to discuss the strong approximation of continuous local martingales by time-changed random walks.
Lemma 4. Let M be a continuous local martingale vanishing at 0, let hM,Mi be the quadratic variation and T be its quasi-inverse (2.10). Denote by Bm the sequence of shrunken RWs embedded into M by Lemma 2. Fix K > 0 and take a sequence am = O(m^{−7−ε} 2^{2m}) K with some ε > 0, where am ≥ K ∨ 1 for any m ≥ 1.

(a) Then for any C ≥ 3/2 and m ≥ m3(C) we have

P{ sup_{0≤t≤K} |M(t ∧ T_{am}) − Bm(hM,Mit ∧ am)| ≥ am^{1/4} (log∗ am)^{3/4} m 2^{−m/2} } ≤ 10(am 2^{2m})^{1−C}.
(b) Under the tail-condition (2.15), for any C ≥ 3/2 and m ≥ m3(C) it follows that

P{ sup_{0≤t≤K} |M(t) − Bm(hM,Mit)| ≥ am^{1/4} (log∗ am)^{3/4} m 2^{−m/2} } ≤ 10(am 2^{2m})^{1−C} + D(K) m^{−1−ε}.
Proof. First, take the DDS Wiener process W(s) obtained from M(t) by the time-change Ts as described by Theorems B and C. Since below we are going to use W(s) and also the time-change Ts only for arguments s ≤ hM,Mi∞, we can always assume that W(s) = M(Ts) and M(t) = W(hM,Mit), irrespective of whether hM,Mi∞ = ∞ or not. Second, define the nested sequence of shrunken RWs Bm(s) by Lemma 2. Then the quasi-inverse time-change hM,Mit is applied to Bm(s), which gives Bm(hM,Mit); this will be the sequence of time-changed shrunken RWs approximating M(t). Since Ts may have jumps, we get that
sup_{0≤t≤K} |M(t) − Bm(hM,Mit)| ≥ sup_{0≤s≤hM,MiK} |M(Ts) − Bm(hM,Mi_{Ts})| = sup_{0≤s≤hM,MiK} |W(s) − Bm(s)|. (2.18)
Recalling however that the intervals of constancy are the same for M(t) and for hM,Mit [23, IV (1.13), p.125], there is in fact equality in (2.18). To go on, we need a truncation using the sequence am, since the quadratic variation hM,Mit is not a bounded random variable in general. Then (2.18) (with equality as explained above) and (2.12) imply
sup_{0≤t≤K} |M(t ∧ T_{am}) − Bm(hM,Mit ∧ am)|
  = sup_{0≤s≤hM,MiK} |M(Ts ∧ T_{am}) − Bm(hM,Mi_{Ts} ∧ am)|
  = sup_{0≤s≤am∧hM,MiK} |W(s) − Bm(s)|
  ≤ sup_{0≤s≤am} |W(s) − Bm(s)|.
Hence by Lemma 1, with m ≥ m3(C),
P{ sup_{0≤t≤K} |M(t ∧ T_{am}) − Bm(hM,Mit ∧ am)| ≥ am^{1/4} (log∗ am)^{3/4} m 2^{−m/2} }
  ≤ P{ sup_{0≤s≤am} |W(s) − Bm(s)| ≥ am^{1/4} (log∗ am)^{3/4} m 2^{−m/2} } ≤ 10(am 2^{2m})^{1−C}.

This proves (a). To show (b) it is enough to consider the inequality
sup_{0≤t≤K} |M(t) − Bm(hM,Mit)|
  ≤ sup_{0≤t≤K} |M(t) − M(t ∧ T_{am})| + sup_{0≤t≤K} |M(t ∧ T_{am}) − Bm(hM,Mit ∧ am)|
  + sup_{0≤t≤K} |Bm(hM,Mit ∧ am) − Bm(hM,Mit)|.

From this point the proof is similar to the proof of Lemma 3 (b).
Kiefer [14] proved in the Brownian case M = W that using Skorohod embedding one cannot embed a standardized RW into W with convergence rate better than O(1) n^{−1/4} (log n)^{1/2} (log log n)^{1/4}, where n is the number of points used in the approximation. Since the next theorem gives a rate of convergence O(1) n^{−1/4} log n (the number of points used is n = K 2^{2m}), this rate is close to the best one can have with a Skorohod-type embedding. The same remark is valid for Theorem 3 below.
Theorem 2. Applying the same notations as in Lemma 4, we have
sup_{0≤t≤K} |M(t) − Bm(hM,Mit)| < O(1) m 2^{−m/2} a.s. (m → ∞)

and

sup_{0≤t≤K} |M(t) − Bm(hM,Mit)| < K^{1/4} (log K)^{3/4} a.s. (K → ∞)

for any m large enough, m ≥ m3(3).

Proof. The statements follow from Lemma 4 in a similar way as Theorem 1 followed from Lemma 3.
We mention that when M is a continuous local martingale vanishing at 0 and there is a deterministic function f on R+ such that hM,Mit = f(t) a.s., then it follows that M is Gaussian and has independent increments, see [23, V (1.14), p. 186]. Conversely, if M is a continuous Gaussian martingale, then hM,Mit = f(t) a.s., see [23, IV (1.35), p. 133]. In this case the twist and shrink construction of Brownian motion described in Section 2.1 can be extended to a construction of M(t) (or a simulation algorithm in practice). Namely, we have
|M(t) − B̃m(f(t))| ≤ O(1) m 2^{−m/2} a.s. (m → ∞).

Here B̃m(t) = 2^{−m} S̃m(t 2^{2m}) (m ≥ 0) denotes the nested sequence of the RW construction described in Section 2.1.
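For a Gaussian martingale with deterministic quadratic variation f, the time-changed walk B̃m(f(t)) can be simulated as follows. For simplicity a fresh walk is drawn at level m instead of running the full nested construction, and f(t) = t² is an illustrative choice; both are assumptions of this sketch.

```python
import numpy as np

def shrunken_rw(m, horizon, rng):
    """A shrunken walk on the dyadic grid: value 2**-m * S(k) at time
    k * 4**-m, for k * 4**-m <= horizon.  (Fresh walk, not the nested
    twist-and-shrink sequence.)"""
    n = int(horizon * 4 ** m)
    S = np.concatenate(([0.0], np.cumsum(rng.choice([-1, 1], size=n))))
    return 2.0 ** -m * S

def time_changed(vals, m, f, ts):
    """Evaluate B_m(f(t)) by rounding f(t) to the nearest dyadic grid point."""
    idx = np.minimum(np.rint(f(ts) * 4 ** m).astype(int), len(vals) - 1)
    return vals[idx]

rng = np.random.default_rng(5)
m = 8
vals = shrunken_rw(m, 4.0, rng)            # grid covers f(t) up to f(2) = 4
ts = np.linspace(0.0, 2.0, 201)
M = time_changed(vals, m, lambda t: t ** 2, ts)   # simulated martingale path
```

The simulated path takes values on the lattice 2^{−m}·Z, reflecting that it is a time-changed shrunken RW.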
Combining the previous results one can replace hM,Mit by the discrete quadratic variation Nm(t) when approximating M(t) by time-changed shrunken RWs.
Lemma 5. Let M be a continuous local martingale vanishing at 0, let hM,Mi be the quadratic variation, T be its quasi-inverse (2.10) and Nm be the discrete quadratic variation defined by (2.14). Denote by Bm the sequence of shrunken RWs embedded into M by Lemma 2. Fix K > 0 and take a sequence am = O(m^{−7−ε} 2^{2m}) K with some ε > 0, where am ≥ K ∨ 1 for any m ≥ 1.

(a) Then for any C ≥ 3/2 and m ≥ m5(C) we have
P{ sup_{0≤t≤K} |M(t ∧ T_{am}) − Bm(Nm(t ∧ T_{am}))| ≥ 2 am^{1/4} (log∗ am)^{3/4} m 2^{−m/2} } ≤ 14(am 2^{2m})^{1−C}.
(b) Under the tail-condition (2.15), for any C ≥ 3/2 and m ≥ m5(C) it follows that

P{ sup_{0≤t≤K} |M(t) − Bm(Nm(t))| ≥ 2 am^{1/4} (log∗ am)^{3/4} m 2^{−m/2} } ≤ 14(am 2^{2m})^{1−C} + D(K) m^{−1−ε}.

Proof. For proving (a) we use the triangle inequality
sup_{0≤t≤K} |M(t ∧ T_{am}) − Bm(Nm(t ∧ T_{am}))|
  ≤ sup_{0≤t≤K} |M(t ∧ T_{am}) − Bm(hM,Mit ∧ am)| + sup_{0≤t≤K} |Bm(hM,Mit ∧ am) − Bm(Nm(t ∧ T_{am}))|. (2.19)
Since the first term on the right hand side can be estimated by Lemma 4 (a), we have to consider the second term. For m ≥ 1 introduce the abbreviation

L_{a,m} = 13(C am log∗ am)^{1/2} m^{1/2} 2^{−m} ≥ 12(C am log∗ am)^{1/2} m^{1/2} 2^{−m} + 2^{−2m}.
By Lemma 3 (a), with C ≥ 3/2 and m ≥ m4(C) it follows that
sup_{0≤t≤K} |Bm(hM,Mit ∧ am) − Bm(Nm(t ∧ T_{am}))|
  ≤ sup_{0≤t≤K} |Bm(⌊(hM,Mit ∧ am) 2^{2m}⌋ 2^{−2m}) − Bm(Nm(t ∧ T_{am}))| + 2^{−m}
  ≤ sup_{0≤k2^{−2m}≤hM,MiK∧am} sup_{|r−k|2^{−2m}≤L_{a,m}} |Bm(k 2^{−2m}) − Bm(r 2^{−2m})| + 2^{−m}
  ≤ sup_{0≤k2^{−2m}≤am} sup_{0≤r2^{−2m}≤L_{a,m}} |Bm^{(k)}(r 2^{−2m})| + 2^{−m},

except for an event of probability ≤ 3(am 2^{2m})^{1−C}, since the difference of a shrunken RW at two dyadic points equals the value of some shrunken RW Bm^{(k)} at a dyadic point. Then we can apply estimate (2.5) with some C' > 1 to the last expression:

P{ sup_{0≤k2^{−2m}≤am} sup_{0≤r2^{−2m}≤L_{a,m}} |Bm^{(k)}(r 2^{−2m})| ≥ (2C' log N sup_{k,r} Var(Bm^{(k)}(r 2^{−2m})))^{1/2} } ≤ 2N^{1−C'},
where N = ⌊am 2^{2m}⌋ ⌊L_{a,m} 2^{2m}⌋ and sup_{k,r} Var(Bm^{(k)}(r 2^{−2m})) ≤ L_{a,m}. Choose here C' so that 1 − C' = (2/3)(1 − C). Then a simple computation shows that 2N^{1−C'} ≤ (am 2^{2m})^{1−C} and log N ≤ 8m log∗ am, and

(2C' log N sup_{k,r} Var(Bm^{(k)}(r 2^{−2m})))^{1/2} + 2^{−m} ≤ am^{1/4} (log∗ am)^{3/4} m 2^{−m/2}

if m ≥ m5(C) ≥ m4(C). This argument and Lemma 4 (a) applied to (2.19) give (a). Statement (b) again follows from (a) in a similar way as in Lemma 3.
Theorem 3. With the same notations as in Lemma 5, we have

sup_{0≤t≤K} |M(t) − Bm(Nm(t))| < O(1) m 2^{−m/2} a.s. (m → ∞)

and

sup_{0≤t≤K} |M(t) − Bm(Nm(t))| < K^{1/4} (log K)^{3/4} a.s. (K → ∞)

for any m large enough, m ≥ m5(3).

Proof. The statements follow from Lemma 5 in a similar way again as Theorem 1 followed from Lemma 3.

2.3 Symmetrically evolving martingales
It is important both from theoretical and practical (e.g. simulation) points of view that the shrunken RW B_m and the corresponding discrete quadratic variation process N_m be independent when approximating M as in Theorem 3. This leads to the question of independence of the DDS Brownian motion W and the quadratic variation ⟨M,M⟩ in the case of a continuous local martingale M. For, by Lemma 2, B_m depends only on W and, by (2.14), N_m is determined by ⟨M,M⟩ alone. Conversely, if the processes B_m and N_m are independent for any m large enough, then so are W and ⟨M,M⟩ too, by Lemma 1 and 1.

It will turn out from the next theorem that the basic notion in this respect is the symmetry of the increments of M given the past. Thus we will say that a stochastic process M(t) (t ≥ 0) is symmetrically evolving (or has symmetric increments given the past) if for any positive integer n, reals 0 ≤ s < t₁ < ··· < t_n and Borel sets of the line U₁, ..., U_n we have
$$P\{\Gamma\mid\mathcal F^0_s\}=P\{\Gamma^-\mid\mathcal F^0_s\},\qquad(2.20)$$
where Γ = {M(t₁) − M(s) ∈ U₁, ..., M(t_n) − M(s) ∈ U_n}, Γ⁻ is the same, but with each U_j replaced by −U_j, and F⁰_s = σ(M(u), 0 ≤ u ≤ s) is the filtration generated by the past of M. If M(t) has finite expectation for any t ≥ 0, then this condition expresses a very strong martingale property. Condition (2.20) is clearly equivalent to the following one: for arbitrary positive integers n, j, reals 0 ≤ s_j < ··· < s₁ ≤ s < t₁ < ··· < t_n and Borel sets V₁, ..., V_j, U₁, ..., U_n one has
$$P\{\Gamma\cap\Lambda\}=P\{\Gamma^-\cap\Lambda\},\qquad(2.21)$$
where Γ and Γ⁻ are defined above and Λ = {M(s₁) ∈ V₁, ..., M(s_j) ∈ V_j}.

Our Theorem 4 below is basically a reformulation of the Dubins-Émery-Yor theorem of [4]. Their theorem is strongly built on Ocone's Theorem A of [20]. In Ocone's paper it is shown that a continuous local martingale M is conditionally (w.r.t. the σ-algebra generated by ⟨M,M⟩) a Gaussian martingale if and only if it is J-invariant. Here J-invariance means that M and ∫₀ᵗ α dM have the same law for any predictable process α with range in {−1, 1}. In fact, it is proved there too that J-invariance is equivalent to H-invariance, which means that it is enough to consider deterministic integrands of the form α⁽ʳ⁾(t) = I_{[0,r]}(t) − I_{(r,∞)}(t). Moreover, Theorem B there extends the above result to càdlàg local martingales with symmetric jumps. In the sequel, local martingales with these properties are called Ocone martingales. Dubins, Émery and Yor in [4] proved that these conditions are equivalent to the independence of the DDS Brownian motion and the quadratic variation. Further, in this paper and in the paper of Vostrikova and Yor [36], shorter proofs with additional equivalent conditions were given in the case when M is a continuous martingale. In these references the equivalent condition of the independence of the DDS BM and ⟨M,M⟩ explicitly appears. Besides, in [4], the conjecture that a continuous martingale M has the same law as its Lévy transform M̂ = ∫ sgn(M) dM if and only if its DDS BM and ⟨M,M⟩ are independent is proved to be equivalent to the conjecture that the Lévy transform is ergodic.

Below we give a new, long, but elementary proof for any continuous local martingale M that the DDS BM and ⟨M,M⟩ are independent if and only if M is symmetrically evolving, i.e. Ocone. Then, in Subsection 2.3.2 we present some remarkable properties of Ocone martingales and some of our recent results. Here, we also give a couple of examples of martingales being Ocone or non-Ocone.
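Before turning to the characterization, the pair (B_m, N_m) can be made concrete by a small simulation: along a fine discretization of a Brownian path, record the successive times at which the path has moved by ±ε from its value at the previously recorded time (ε standing in for 2^{-m}); the signs of these moves generate the shrunken walk, while the counter of recorded times plays the role of N_m. The following sketch is our own illustrative stand-in, not the exact construction of Lemma 2 and (2.14):

```python
import numpy as np

def embed_walk(path, eps):
    """Record the successive times at which `path` moves by +-eps from its
    value at the previously recorded time, together with the move's sign.
    A discrete stand-in for the stopping times tau_m (eps ~ 2**-m)."""
    idx, signs = [0], []
    anchor = path[0]
    for i in range(1, len(path)):
        if abs(path[i] - anchor) >= eps:
            idx.append(i)
            signs.append(1 if path[i] > anchor else -1)
            anchor = path[i]          # restart the exit clock here
    return np.array(idx), np.array(signs)

rng = np.random.default_rng(0)
dt, n, eps = 1e-5, 200_000, 2.0 ** -4
B = np.concatenate([[0.0], np.cumsum(np.sqrt(dt) * rng.standard_normal(n))])

idx, signs = embed_walk(B, eps)
# N(t) plays the role of N_m: how many recorded times occurred up to t.
# Between two recorded times the path stays within eps of its value at
# the last one, so the embedded skeleton tracks the path uniformly:
N = np.searchsorted(idx, np.arange(len(B)), side="right") - 1
gap = np.max(np.abs(B - B[idx[N]]))
print(f"{len(signs)} steps of +-{eps}, sup gap = {gap:.4f}")
```

By construction the skeleton stays within ε of the path at every grid point, a discrete shadow of the a.s. uniform approximation of Theorem 3.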
2.3.1 Distributional characterization of Ocone martingales
Theorem 4. (a) If the Wiener process W(t) (t ≥ 0) and the non-decreasing, vanishing at 0, continuous stochastic process C(t) (t ≥ 0) are independent, then M(t) = W(C(t)) is a symmetrically evolving continuous local martingale vanishing at 0, with quadratic variation C.

(b) Conversely, if M is a symmetrically evolving continuous local martingale, then its DDS Brownian motion W and its quadratic variation ⟨M,M⟩ are independent processes.
Proof. To prove (a) suppose that W and C are independent. By [23, V (1.5), p. 181], M(t) = W (C(t)) is a continuous local martingale. For simplicity, we will use only three sets in showing that M is symmetrically evolving, i.e. equation (2.21) holds, the generalization being straightforward:
$$\begin{aligned}
&P\{M(s_1)\in V_1,\,M(t_1)-M(s)\in U_1,\,M(t_2)-M(s)\in U_2\}\\
&=\int\!\!\int_{U_1}P\{W(x_1)\in V_1\}\,P\{W(y_2)-W(y_1)\in U_2-u\}\,P\{W(y_1)-W(y_0)\in du\}\\
&\qquad\times P\{C(s_1)\in dx_1,\,C(s)\in dy_0,\,C(t_1)\in dy_1,\,C(t_2)\in dy_2\}\\
&=\int\!\!\int_{U_1}P\{W(x_1)\in V_1\}\,P\{W(y_1)-W(y_2)\in U_2-u\}\,P\{W(y_0)-W(y_1)\in du\}\\
&\qquad\times P\{C(s_1)\in dx_1,\,C(s)\in dy_0,\,C(t_1)\in dy_1,\,C(t_2)\in dy_2\}\\
&=P\{M(s_1)\in V_1,\,M(s)-M(t_1)\in U_1,\,M(s)-M(t_2)\in U_2\},
\end{aligned}$$
using the independence of W and C on the one hand, and the symmetry and independence of the increments of Brownian motion on the other.
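The symmetry just proved can also be observed numerically in the simplest special case C(t) = A·t, with a random speed A independent of W (a toy choice of ours, made only for this illustration): odd statistics of the increment M(t) − M(s), even when coupled to the past through sign(M(s)), vanish, while M itself is visibly non-Gaussian.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
# Toy symmetrically evolving martingale: M(t) = W(A*t) with a random
# speed A independent of the Wiener process W, i.e. C(t) = A*t.
A = rng.exponential(1.0, size=n)
s, t = 1.0, 2.0
M_s = np.sqrt(A * s) * rng.standard_normal(n)        # M(s) = W(C(s))
inc = np.sqrt(A * (t - s)) * rng.standard_normal(n)  # M(t) - M(s)

# Given the past, the increment and its reflection have the same law,
# so odd statistics vanish -- even coupled to the past via sign(M(s)).
odd_coupled = np.mean(np.sign(M_s) * inc)
odd_third = np.mean(inc ** 3)
kurt = np.mean(inc ** 4) / np.mean(inc ** 2) ** 2    # > 3: non-Gaussian
print(odd_coupled, odd_third, kurt)
```

Both odd statistics hover around 0 while the kurtosis is well above 3 (near 6 for this exponential mixture of normals), matching part (a): symmetrically evolving but, unconditionally, far from Gaussian.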
For proving (b) we want to show that the sequences τ_m(k) (k = 1, 2, ...) and M(τ_m(j)) − M(τ_m(j−1)) (j = 1, 2, ...) are independent. Since N_m(t) depends only on the number of stopping times τ_m(k) ≤ t, cf. (2.14), while the shrunken random walk B_m is determined by the steps 2^{-m}X_m(j) = M(τ_m(j)) − M(τ_m(j−1)), cf. Lemma 2, this would imply their independence, and so the independence of W and ⟨M,M⟩ too, by Lemma 1 and 1. For this it is enough to show that with arbitrary integers m ≥ 0, n ≥ 1, 0 ≤ k < n, reals t₁, ..., t_n and δ₁ = ±2^{-m}, ..., δ_n = ±2^{-m} (we fix these parameters for the remaining part of the proof) one has
$$P\{A\cap B_{\le k}\cap B_{>k}\}=P\{A\cap B_{\le k}\cap B_{>k}^-\},\qquad(2.22)$$
where $A_{\le k}=\bigcap_{r=1}^{k}\{\tau_m(r)\le t_r\}$, $A_{>k}$ is similar, but with $r=k+1,\dots,n$, $A=A_{\le k}\cap A_{>k}$, $B_{\le k}=\bigcap_{r=1}^{k}\{M(\tau_m(r))-M(\tau_m(r-1))=\delta_r\}$, $B_{>k}$ is similar, but with $r=k+1,\dots,n$, $B=B_{\le k}\cap B_{>k}$, and finally $B_{>k}^-$ is the same as $B_{>k}$, but with each $\delta_j$ replaced by $-\delta_j$. For, if one can reflect all $\delta_j$'s for $k<j\le n$ without changing the probability, then one has the same probability with arbitrarily changed signs of the $\delta_j$'s too, since any such change can be reduced to a finite sequence of reflections of the above type. Let $B^*$ be similar to $B$, but with arbitrarily changed signs of the $\delta_j$'s. Then, as we said, (2.22) implies that $P\{A\cap B\}=P\{A\cap B^*\}$. Since $P\{B\}=P\{B^*\}$ by Lemma 2, the desired independence follows. We will prove (2.22) in several steps.

Step 1. In condition (2.20) one can replace s by an arbitrary stopping time σ adapted to the filtration (F⁰_s): for any u_j ≥ 0 (1 ≤ j ≤ N),
$$P\{F\mid\mathcal F^0_\sigma\}=P\{F^-\mid\mathcal F^0_\sigma\},\qquad(2.23)$$
where
$$F=\bigcap_{j=1}^{N}\{M(u_j+\sigma)-M(\sigma)\in U_j\},\qquad(2.24)$$
and F⁻ is the same, but with each U_j replaced by −U_j. This is somewhat similar to the optional stopping theorem, see [23, II (3.2), p. 69]. Indeed, for discrete valued stopping times σ the statement is obvious, since then
$$P\{F\mid\mathcal F^0_\sigma\}=\sum_r I\{\sigma=s_r\}\,P\{F\mid\mathcal F^0_{s_r}\},$$
where {s_r} denotes the range of σ, including possibly ∞, and I{S} denotes the indicator of the set S. For every stopping time σ there exists a decreasing sequence of discrete valued stopping times σ_i
almost surely converging to σ. Let us denote the events defined according to (2.24) for σ_i by F_i and F_i⁻, respectively. Further, denote the operators projecting L²(Ω) onto its subspaces of random variables measurable w.r.t. F⁰_{σ_i} and F⁰_σ by P_i and P, respectively. Then
$$\begin{aligned}
\big\|P\{F_i\mid\mathcal F^0_{\sigma_i}\}-P\{F\mid\mathcal F^0_\sigma\}\big\|_2
&=\|P_iI\{F_i\}-PI\{F\}\|_2\\
&\le\|P_i(I\{F_i\}-I\{F\})\|_2+\|P_iI\{F\}-PI\{F\}\|_2\\
&\le\|I\{F_i\}-I\{F\}\|_2+\big\|E(I\{F\}\mid\mathcal F^0_{\sigma_i})-E(I\{F\}\mid\mathcal F^0_\sigma)\big\|_2,
\end{aligned}$$
which goes to 0 as i → ∞. Here we used that E(I{F} | F⁰_{σ_i}) is a bounded, reversed-time martingale converging to E(I{F} | F⁰_σ). Hence for any ε > 0,
$$\big\|P\{F\mid\mathcal F^0_\sigma\}-P\{F^-\mid\mathcal F^0_\sigma\}\big\|_2
\le\big\|P\{F\mid\mathcal F^0_\sigma\}-P\{F_i\mid\mathcal F^0_{\sigma_i}\}\big\|_2
+\big\|P\{F^-\mid\mathcal F^0_\sigma\}-P\{F_i^-\mid\mathcal F^0_{\sigma_i}\}\big\|_2<\varepsilon,$$
if i is large enough. This shows that the left-hand side of the inequality is zero, so (2.23) holds.

Step 2. Then for arbitrary reals 0 ≤ u_j < v_j and Borel sets U_j (1 ≤ j ≤ N) we have
$$P\{G\mid\mathcal F^0_\sigma\}=P\{G^-\mid\mathcal F^0_\sigma\},$$
where
$$G=\bigcap_{j=1}^{N}\{M(v_j+\sigma)-M(u_j+\sigma)\in U_j\},\qquad(2.25)$$
and G⁻ is the same, but with each U_j replaced by −U_j. For simplicity we prove this only for two factors, the general case being similar:
$$\begin{aligned}
&P\big\{M(v_1+\sigma)-M(u_1+\sigma)\in U_1,\,M(v_2+\sigma)-M(u_2+\sigma)\in U_2\mid\mathcal F^0_\sigma\big\}\\
&=\int I\{x_1-x_2\in U_1\}\,I\{x_3-x_4\in U_2\}\,P\big\{M(v_1+\sigma)-M(\sigma)\in dx_1,\,M(u_1+\sigma)-M(\sigma)\in dx_2,\\
&\qquad M(v_2+\sigma)-M(\sigma)\in dx_3,\,M(u_2+\sigma)-M(\sigma)\in dx_4\mid\mathcal F^0_\sigma\big\}\\
&=\int I\{x_1-x_2\in U_1\}\,I\{x_3-x_4\in U_2\}\,P\big\{M(\sigma)-M(v_1+\sigma)\in dx_1,\,M(\sigma)-M(u_1+\sigma)\in dx_2,\\
&\qquad M(\sigma)-M(v_2+\sigma)\in dx_3,\,M(\sigma)-M(u_2+\sigma)\in dx_4\mid\mathcal F^0_\sigma\big\}\\
&=P\big\{M(u_1+\sigma)-M(v_1+\sigma)\in U_1,\,M(u_2+\sigma)-M(v_2+\sigma)\in U_2\mid\mathcal F^0_\sigma\big\}.
\end{aligned}$$
Step 3. Let ∆τm(i) = τm(i) − τm(i − 1) and a ∈ [0, ∞). Consider the event
$$\begin{aligned}
S(a)&=\{\Delta\tau_m(k+1)>a\}
=\Big\{\inf\big\{u>0:|M(u+\tau_m(k))-M(\tau_m(k))|\ge 2^{-m}\big\}>a\Big\}\\
&=\Big\{\sup_{0<u\le a}|M(u+\tau_m(k))-M(\tau_m(k))|<2^{-m}\Big\}.\qquad(2.26)
\end{aligned}$$
Introduce the set of dyadic numbers $D_l=\{r2^{-l}:r\in\mathbb Z\}$ $(l\ge 0)$, and the events
$$S_{j,l}(a)=\bigcap_{q\in D_l,\,0<q\le a}\big\{|M(q+\tau_m(k))-M(\tau_m(k))|\le 2^{-m}-2^{-j}\big\},\qquad(2.27)$$
$$S_j(a)=\bigcap_{l=0}^{\infty}S_{j,l}(a).$$
Since S_{j,l}(a) is increasing with growing j, so is S_j(a). Put
$$S^*(a)=\bigcup_{j=m}^{\infty}S_j(a)=\bigcup_{j=m}^{\infty}\bigcap_{l=0}^{\infty}S_{j,l}(a).$$
We want to show that S*(a) = S(a), where the latter is defined by (2.26). First fix an ω ∈ S(a). (We suppress ω in the notations below.) Then with this ω,
$$\sup_{0<u\le a}|M(u+\tau_m(k))-M(\tau_m(k))|=:s<2^{-m}.$$
If j > m is such that 2^{-j} < 2^{-m} − s, then ω ∈ S_{j,l}(a) for any l ≥ 0. So ω ∈ S_j(a) for each j large enough, consequently ω ∈ S*(a).

Second, fix an ω ∉ S(a). Then there exists a real u₀ ≤ a (depending on ω) so that |M(u₀+τ_m(k)) − M(τ_m(k))| = 2^{-m}. Since the path of M is a continuous function, for any j ≥ m there exist an l ≥ 0 and q ∈ D_l, 0 < q ≤ a, such that |M(q+τ_m(k)) − M(τ_m(k))| > 2^{-m} − 2^{-j}. That is, ω ∉ S_j(a) if j ≥ m, thus ω ∉ S*(a). In other words, we have proved above that
$$\{\Delta\tau_m(k+1)>a\}=S(a)=\lim_{j\to\infty}\lim_{l\to\infty}S_{j,l}(a)
=\lim_{j\to\infty}\lim_{l\to\infty}\bigcap_{q\in D_l,\,0<q\le a}\big\{|M(q+\tau_m(k))-M(\tau_m(k))|\le 2^{-m}-2^{-j}\big\}.$$
Consequently, any event {Δτ_m(k+1) > a} = S(a) can be written in terms of monotone sequences of intersections of finitely many events of the form {|M(q+τ_m(k)) − M(τ_m(k))| ≤ c} (c ≥ 0). Moreover, such an approximation can be applied to {Δτ_m(k+1) ∈ (a,b]} = S(a) \ S(b) as well, with any 0 ≤ a < b.

Step 4. First, Steps 2 and 3 imply that for any a ≥ 0,
$$P\{G\cap S_{j,l}(a)\mid\mathcal F^0_{\tau_m(k)}\}=P\{G^-\cap S_{j,l}(a)\mid\mathcal F^0_{\tau_m(k)}\},$$
because of the absolute value in definition (2.27) of S_{j,l}(a). Throughout Step 4, G and G⁻ are defined according to (2.25) with σ = τ_m(k), but otherwise with arbitrary parameters, possibly different from case to case. Then taking limits as j → ∞ and l → ∞ it follows from Steps 2 and 3 that
$$P\{G\cap\{\Delta\tau_m(k+1)>a_{k+1}\}\mid\mathcal F^0_{\tau_m(k)}\}=P\{G^-\cap\{\Delta\tau_m(k+1)>a_{k+1}\}\mid\mathcal F^0_{\tau_m(k)}\}.$$
We want to extend this symmetry property by induction over i = k+1, ..., n+1. Taking arbitrary
reals $a_i\ge 0$ and integers $l\ge 0$, $r_i>0$ $(k+1\le i\le n+1)$, define the events
$$B_i=\bigcap_{p=k+1}^{i}\{M(\tau_m(p))-M(\tau_m(p-1))=\delta_p\},\qquad
H_i=\bigcap_{p=k+1}^{i}\{\Delta\tau_m(p)>a_p\},$$
$$K_{i,l}(r)=\bigcap_{p=k+1}^{i}\big\{\Delta\tau_m(p)\in\big((r_p-1)2^{-l},\,r_p2^{-l}\big]\big\},$$
$$L_{i,l}(r)=\big\{\delta_i\big(M(r_i2^{-l}+\cdots+r_{k+1}2^{-l}+\tau_m(k))-M(r_{i-1}2^{-l}+\cdots+r_{k+1}2^{-l}+\tau_m(k))\big)>0\big\},$$
and $B_i^-$, $L_{i,l}^-(r)$ similarly, but multiplying each $\delta_p$ by $(-1)$. Suppose that we have already proved that
$$P\{G\cap B_{i-1}\cap H_i\mid\mathcal F^0_{\tau_m(k)}\}=P\{G^-\cap B_{i-1}^-\cap H_i\mid\mathcal F^0_{\tau_m(k)}\},$$
where $B_k=\Omega$. Define the following event as a generalization of (2.27):
$$S_{i,j,l}(a,r)=\bigcap_{q\in D_l,\,0<q\le a}\big\{\big|M(q+r_i2^{-l}+\cdots+r_{k+1}2^{-l}+\tau_m(k))-M(r_i2^{-l}+\cdots+r_{k+1}2^{-l}+\tau_m(k))\big|\le 2^{-m}-2^{-j}\big\}.$$
Then
$$P\Big\{G\cap B_{i-1}\cap H_i\cap\bigcup_{r_{k+1},\dots,r_i=1}^{2^{2l}+1}\big(K_{i,l}(r)\cap L_{i,l}(r)\cap S_{i,j,l}(a_{i+1},r)\big)\,\Big|\,\mathcal F^0_{\tau_m(k)}\Big\}
=P\Big\{G^-\cap B_{i-1}^-\cap H_i\cap\bigcup_{r_{k+1},\dots,r_i=1}^{2^{2l}+1}\big(K_{i,l}(r)\cap L_{i,l}^-(r)\cap S_{i,j,l}(a_{i+1},r)\big)\,\Big|\,\mathcal F^0_{\tau_m(k)}\Big\},$$
where we agree that when $r_p=2^{2l}+1$, the interval $((r_p-1)2^{-l},r_p2^{-l}]$ in the definition of $K_{i,l}(r)$ is replaced by $((r_p-1)2^{-l},\infty]=(2^l,\infty]$. Notice here that the events in $K_{i,l}$ can be written in terms of a difference of events appearing in $H_i$, while the events in $L_{i,l}$ and $S_{i,j,l}$ are both of the type appearing in $G$, though $S_{i,j,l}$ is not affected by reflections because of the absolute values in its definition. Then taking limits as $j\to\infty$ and $l\to\infty$ it follows that
$$P\{G\cap B_i\cap H_{i+1}\mid\mathcal F^0_{\tau_m(k)}\}=P\{G^-\cap B_i^-\cap H_{i+1}\mid\mathcal F^0_{\tau_m(k)}\}.$$
This completes the induction. Comparing the notations introduced in this step with the ones introduced above, observe that $B_{>k}=B_n$ and $H_{>k}=H_n=H_{n+1}$ if $a_{n+1}=0$. Thus one obtains that
$$P\{B_{>k}\cap H_{>k}\mid\mathcal F^0_{\tau_m(k)}\}=P\{B_{>k}^-\cap H_{>k}\mid\mathcal F^0_{\tau_m(k)}\}.$$

Step 5. The result of Step 4 implies that
$$P\{A_{>k}\cap B_{>k}\mid\mathcal F^0_{\tau_m(k)}\}
=\int I\{\tau_m(k)+x_{k+1}\le t_{k+1},\dots,\tau_m(k)+x_{k+1}+\cdots+x_n\le t_n\}
\times P\big\{B_{>k}\cap\{\Delta\tau_m(k+1)\in dx_{k+1},\dots,\Delta\tau_m(n)\in dx_n\}\mid\mathcal F^0_{\tau_m(k)}\big\}$$
$$=\int I\{\tau_m(k)+x_{k+1}\le t_{k+1},\dots
,\tau_m(k)+x_{k+1}+\cdots+x_n\le t_n\}
\times P\big\{B_{>k}^-\cap\{\Delta\tau_m(k+1)\in dx_{k+1},\dots,\Delta\tau_m(n)\in dx_n\}\mid\mathcal F^0_{\tau_m(k)}\big\}
=P\{A_{>k}\cap B_{>k}^-\mid\mathcal F^0_{\tau_m(k)}\}.$$

Step 6. Finally, it follows from Step 5 that
$$\begin{aligned}
P\{A\cap B\}&=P\{A_{\le k}\cap A_{>k}\cap B_{\le k}\cap B_{>k}\}
=E\Big(E\big(I\{A_{\le k}\cap B_{\le k}\}\,I\{A_{>k}\cap B_{>k}\}\mid\mathcal F^0_{\tau_m(k)}\big)\Big)\\
&=E\Big(I\{A_{\le k}\cap B_{\le k}\}\,P\{A_{>k}\cap B_{>k}\mid\mathcal F^0_{\tau_m(k)}\}\Big)
=E\Big(I\{A_{\le k}\cap B_{\le k}\}\,P\{A_{>k}\cap B_{>k}^-\mid\mathcal F^0_{\tau_m(k)}\}\Big)\\
&=P\{A_{\le k}\cap A_{>k}\cap B_{\le k}\cap B_{>k}^-\}.
\end{aligned}$$
This proves (2.22), and so completes the proof of the theorem.

2.3.2 Properties of Ocone martingales

In this subsection we give some equivalent conditions for a martingale to be an Ocone martingale. Originally, Ocone proved the equivalence of conditions (ii)-(iv). The equivalence of parts (i), (v) and (vi) is due to Dubins-Émery-Yor. We remark that Ocone's original setting is more general: he deals with local martingales not only in the continuous but also in the càdlàg case.

Theorem D. Let M be a continuous martingale with natural filtration F = (F_t). The following six statements are equivalent:

(i) the DDS Brownian motion β_M of M and ⟨M⟩ are independent;

(ii) conditionally on ⟨M⟩, M is a Gaussian martingale;

(iii) for every F-predictable process H taking values in {−1, 1}, the two pairs of processes have the same law:
$$\Big(\int H\,dM,\ \langle M\rangle\Big)\ \overset{d}{=}\ (M,\langle M\rangle);$$

(iv) for every deterministic function h of the form I_{[0,a]} − I_{(a,∞)}, the martingale ∫ h dM has the same law as M;

(v) for every F-predictable process H, measurable for the σ-field B(R₊) ⊗ σ(⟨M⟩) and such that ∫₀^∞ H_s² d⟨M⟩_s < ∞ a.s.,
$$E\Big(\exp\Big(i\int_0^\infty H_s\,dM_s\Big)\,\Big|\,\langle M\rangle\Big)=\exp\Big(-\frac12\int_0^\infty H_s^2\,d\langle M\rangle_s\Big);$$

(vi) for every deterministic function h of the form $\sum_{j=1}^n c_jI_{[0,a_j]}$,
$$E\exp\Big(i\int_0^\infty h(s)\,dM_s\Big)=E\exp\Big(-\frac12\int_0^\infty h^2(s)\,d\langle M\rangle_s\Big).$$

Rather interestingly, in the theory of Ocone martingales there is a ten-year-old conjecture originally introduced by Yor [4]. It says that a divergent martingale M is an Ocone martingale if and only if the Lévy transform M̂ = ∫ sign(M) dM of the martingale M has the same law as M.
If it were true, it would mean that one could formally compress the conditions of Theorem D into only one condition. Dubins, Émery and Yor proved that this conjecture is equivalent to another conjecture known since the late 70's. This conjecture is also about the Lévy transformation, but it deals with the Brownian case. If B is a standard Brownian motion started at 0 and L is its local time at 0, then the Lévy characterization says that
$$\int\mathrm{sign}(B)\,dB=|B|-L(B)$$
is also a Brownian motion on the measure space (W, μ), where W is the Wiener space and μ is the Wiener measure. The conjecture says that the transformation T: B → ∫ sign(B) dB is ergodic on the above introduced measure space. A result by Dubins and Smorodinsky [6, 1992] increases the plausibility of this conjecture: they established that the discrete version of the Lévy transformation, acting on the standard symmetric random walk, is ergodic on the corresponding measure space.

To end this subsection, we want to mention an interesting property of the transformation T. The processes TⁿB, n ≥ 0, are pairwise weakly orthogonal in the following sense [23]: two local martingales M and N are said to be weakly orthogonal if EM_sN_t = 0 for every s, t ≥ 0. This condition is equivalent to E⟨M,N⟩_s = 0 for all s ≥ 0. In our present case this means that the martingales TⁿM_t and TᵏM_t are not jointly Gaussian.

Proposition 1. The martingales TⁿW (n ≥ 0) are pairwise weakly orthogonal. More precisely, for all s, t ≥ 0 and non-negative integers n ≠ k we have
$$E\,T^nW_s\,T^kW_t=0.\qquad(2.28)$$

Proof. It is enough to prove the statement for E TⁿW_s TW_t = 0; for simplicity, we will prove it for the case EW_s TW_t = 0. Let us consider both W_s and TW_t on the interval [0,T], 0 ≤ s, t ≤ T. Let us introduce the notations $I_s^t=\int_s^t\mathrm{sign}(W_u)\,dW_u$ and $I(F)_s^t=\sum_{t_k\in F\cap[s,t]}\mathrm{sign}(W_{t_k})(W_{t_{k+1}}-W_{t_k})$ for an arbitrary partition F of [0,T].
First, there exists a sequence of simple processes converging to (sign(W_t), t ≥ 0) in L²(P × λ) on [0,T]. We follow Itô's method, see [17]. Let X = (X_t, 0 ≤ t ≤ T) be an (F_t^W)-adapted process with finite L² norm, and let φ_n(u) = ⌊u2ⁿ/T⌋T2⁻ⁿ. Itô showed that there is a subsequence (n_k) such that X^k = (X_{φ_{n_k}(t−u)+u}, 0 ≤ t ≤ T) → X in L²(P × λ) as k tends to infinity, for Lebesgue-almost every u from [0,1]. For the present case X_t = sign(W_t), it means that for arbitrary s < t, E(I(F_k)_s^t − I_s^t)² → 0 (k → ∞), if the partition F_k contains the points which can be obtained via Itô's method. From this and the Hölder inequality we get the following estimate for all s, t:
$$E\,W_s\big(I(F_k)_0^t-I_0^t\big)\ \le\ \sqrt{E\,W_s^2}\,\sqrt{E\big(I(F_k)_0^t-I_0^t\big)^2}\ \to\ 0\quad(k\to\infty),\qquad(2.29)$$
from which one gets $E\,W_sI(F_k)_0^t\to E\,W_sI_0^t$ as k tends to infinity.

For proving (2.28) we first deal with the case t ≤ s. Using that Brownian motion has independent increments and the fact that E sign(W_{t_k}) = 0, we have
$$\begin{aligned}
E\,W_sI(F_k)_0^t&=\sum_{t_{i+1}\le t,\,t_i\in F_k}E\,W_s\,\mathrm{sign}(W_{t_i})(W_{t_{i+1}}-W_{t_i})\\
&=\sum_{t_{i+1}\le t,\,t_i\in F_k}E\big(W_{t_i}+(W_{t_{i+1}}-W_{t_i})+W_s-W_{t_{i+1}}\big)\,\mathrm{sign}(W_{t_i})(W_{t_{i+1}}-W_{t_i})\\
&=\sum_{t_{i+1}\le t,\,t_i\in F_k}E\,\mathrm{sign}(W_{t_i})(W_{t_{i+1}}-W_{t_i})^2.
\end{aligned}$$
Now, take into consideration that for all 0 ≤ t_i < t_{i+1}, sign(W_{t_i}) and (W_{t_{i+1}} − W_{t_i})² are independent. Hence EW_sI_0^t = 0.

In the case s < t we have the decomposition EW_sI_0^t = EW_sI_0^s + EW_sI_s^t. By the previous paragraph the first term is equal to 0. By (2.29) and by the equation
$$E\,W_sI(F_k)_s^t=\sum_{s\le t_i}E\,W_s\,\mathrm{sign}(W_{t_i})(W_{t_{i+1}}-W_{t_i})=0,$$
which holds since for s ≤ t_i the increment W_{t_{i+1}} − W_{t_i} is independent of W_s sign(W_{t_i}) and has zero mean, the second term vanishes as well.

2.3.3 Some examples of Ocone and non-Ocone martingales

In this subsection we present some remarkable properties of Ocone martingales and some of our recent results. Here, we also give a couple of examples of martingales being Ocone or non-Ocone. First, let us recall some definitions and lemmas which will be used in this subsection.

Definition 1.
A continuous local martingale M such that ⟨M⟩_∞ = ∞ is said to be pure if, calling B its DDS Brownian motion, ⟨M⟩_t is F^B_∞-measurable for every t, or equivalently F^M_∞ = F^B_∞.

Lemma E. (Vostrikova-Yor [36, Proposition]) Suppose that M is an Ocone martingale. Then M enjoys the martingale representation property iff ⟨M⟩ is a deterministic process.

Lemma F. [23, Section 4, Chapter V] A pure local martingale is extremal, so it enjoys the martingale representation property.

Using these lemmas we can prove the following simple result.

Proposition 2. Let M be a continuous Ocone martingale. Suppose ⟨M⟩_∞ = ∞ and M is of the form
$$M_t=x+\int_0^t\sigma(M_s)\,d\beta_s,\qquad(2.30)$$
where σ is a nowhere vanishing function and β is a Brownian motion. Then M is a Gaussian martingale.

Proof. By [23, Chapter V, (1.11) Proposition], if a continuous local martingale M satisfies the differential equation (2.30), then M is measurable with respect to F^B_∞, where B is the DDS Brownian motion of M. So this martingale is pure, which implies that it enjoys the martingale representation property (Lemma F). By Lemma E, M is a martingale with deterministic quadratic variation process. Martingales with this property are Gaussian.

Van Zanten proved the following interesting theorem.

Theorem E. ([34]) Let M be a martingale with bounded jumps and let a_n, b_n be sequences of positive numbers, both increasing to infinity. For each n, define the rescaled martingale Mⁿ by
$$M^n_t=\frac{1}{\sqrt{a_n}}\,M_{b_nt}.$$
Then the following statements hold:

1. If Mⁿ converges weakly to some process N in D(0,∞), then N is necessarily a continuous Ocone martingale.

2. Let N be a continuous Ocone martingale. Then Mⁿ converges to N in D(0,∞) if and only if the quadratic variation sequence ⟨Mⁿ⟩ converges to ⟨N⟩ in D(0,∞).

Example A. Using this theorem he proved that the martingales
$$W^+_t=\int_0^tI_{(0,\infty)}(W_s)\,dW_s,\qquad W^-_t=\int_0^tI_{(-\infty,0)}(W_s)\,dW_s$$
are non-Gaussian Ocone martingales.
Indeed, they are Ocone martingales because of the self-similarity of Brownian motion. They are non-Gaussian since their quadratic variations are non-deterministic, by the cited theorem of Vostrikova-Yor [36, Proposition] used in the proof of Proposition 2.

Example B. ([36]) Let (B_t, C_t), t ≥ 0, be a planar Brownian motion. The process
$$A_t=\frac12\int_0^t(C_s\,dB_s-B_s\,dC_s)\qquad(2.31)$$
is an example of an Ocone martingale. This assertion is a consequence of the following general theorem, which was inspired by Marc Yor.

Theorem 5. Let Φ: Rᵈ → Rᵈ be a regular function. Denote the adjoint of its derivative by Ψ(x) = (Φ′)ᵀ(x), and suppose that the following conditions hold:
$$\Psi(x)\Phi(x)=x\quad\text{and}\quad x\cdot\Phi(x)=0\quad\text{for any }x\in\mathbb R^d,\qquad(2.32)$$
where · stands for the usual scalar product in Rᵈ. If B is a standard d-dimensional Brownian motion, then the martingale
$$M_t=\int_0^t\Phi(B_s)\cdot dB_s\qquad(2.33)$$
is an Ocone martingale.

Corollary 1. If Φ(x) = Ax, where A is a regular matrix, then the conditions above are equivalent to A being orthogonal and anti-symmetric.

Proof. The derivative of the function Φ: x ↦ Ax is A, so Ψ: x ↦ Aᵀx. Hence the first condition in (2.32) is equivalent to AᵀA = Id. The second condition can be written in the form xᵀAx = 0 for all x ∈ Rᵈ; in other words, A is anti-symmetric.

In the light of Corollary 1, the martingale in Example B (2.31) can be written in the form
$$A_t=\int_0^tAB_s\cdot dB_s,\qquad\text{where}\quad A=\begin{pmatrix}0&-1\\1&0\end{pmatrix}$$
is clearly orthogonal and anti-symmetric. We remark that matrices with the required properties (regularity, orthogonality, anti-symmetry) are available in even dimensions. If a matrix in R^{d×d} with d odd possessed the properties above, then it would be a direct sum of a one-dimensional null matrix and a matrix of even dimension with the listed properties.

Proof of Theorem 5. We will use the theorem of Yor and Vostrikova [36, p. 426, Theorem 3′]: Let us consider an (F_t)-martingale (M_t) and another martingale (N_t) which is pure, i.e.
N_t = γ_{⟨N⟩_t}, t ≥ 0, with (⟨N⟩_t, t ≥ 0) measurable with respect to the σ-field F^γ = σ{γ_u, u ≥ 0} of the DDS Brownian motion γ. Suppose that ⟨M⟩ is measurable with respect to F^N. Then (M_t, t ≥ 0) is an Ocone martingale as soon as N and M are orthogonal.

The quadratic variation of the martingale M in (2.33) is ⟨M⟩ = ∫₀· |Φ(B_s)|² ds. The filtration of ⟨M⟩ is that of |Φ(B_t)|² = φ₁²(B_t) + ··· + φ_d²(B_t). Using Itô's formula one gets
$$|\Phi(B_t)|^2=2\sum_{j=1}^d\int_0^t\Big(\sum_{i=1}^d\varphi_i\partial_j\varphi_i\Big)(B_s)\,dB_{j,s}+\frac12\sum_{j=1}^d\int_0^t\sum_{i=1}^d\partial_j^2\varphi_i^2(B_s)\,ds.$$
The first term is equal to
$$2\int_0^t\Psi(B_s)\Phi(B_s)\cdot dB_s=2\sum_{i=1}^d\int_0^tB_{i,s}\,dB_{i,s}=:N_t$$
by using the first condition in (2.32). The second term can be written in the following form:
$$\frac12\int_0^t\sum_{j=1}^d\sum_{i=1}^d\big(\partial_j(2\varphi_i\partial_j\varphi_i)\big)(B_s)\,ds=\int_0^t\sum_{j=1}^d\partial_j\Big(\sum_{i=1}^d\varphi_i\partial_j\varphi_i\Big)(B_s)\,ds=\int_0^t\sum_{j=1}^d\partial_j\pi_j(B_s)\,ds=td,$$
where π_j(x) = x_j, the jth coordinate of the vector x. The second equality follows from the first part of condition (2.32). Summarizing these results, we get that |Φ(B_t)|², and so ⟨M⟩_t, is N_t-measurable.

We recall that N_t is a pure martingale. Consider the following form of ⟨N⟩:
$$\langle N\rangle_t=4\int_0^t\Big(\sum_{i=1}^dB_i^2(s)\Big)ds=4\int_0^t(N_s+sd)\,ds.$$
Let T be the quasi-inverse of ⟨N⟩. Using the previous equation we have
$$t=\langle N\rangle_{T_t}=4\int_0^{T_t}\Big(\sum_{i=1}^dB_i^2(s)\Big)ds=4\int_0^{T_t}(N_s+sd)\,ds.$$
By applying a time change in the integral one gets
$$t=4\int_0^t(N_{T_s}+T_sd)\,dT_s=4\int_0^t(\gamma_s+T_sd)\,dT_s,$$
where γ is the DDS Brownian motion of N. Hence
$$dT_t=\frac{dt}{4(\gamma_t+T_td)},$$
that is, T, and so ⟨M⟩, is γ-measurable. We have used that ⟨N⟩ is strictly increasing and that T is continuous.

We will prove that M and N are orthogonal martingales; this proves our theorem. Using the second condition in (2.32) one finally gets
$$\langle M,N\rangle_t=2\int_0^t\Phi(B_s)\cdot\big(\Psi(B_s)\Phi(B_s)\big)\,ds=2\int_0^tB_s\cdot\Phi(B_s)\,ds=0.$$
We remark that the cited theorem of Vostrikova and Yor can be easily proved by showing that the DDS Brownian motions of M and N, β and γ, are independent processes. These two Brownian motions are independent iff for every f, g ∈ L²(R₊) the following equation is satisfied:
$$E\,\exp\Big(\int_0^\infty f(s)\,d\beta_s-\frac12\int_0^\infty f^2(s)\,ds\Big)\exp\Big(\int_0^\infty g(s)\,d\gamma_s-\frac12\int_0^\infty g^2(s)\,ds\Big)=1,$$
or, in other words, $E\,\mathcal E^{f,\beta}_\infty\mathcal E^{g,\gamma}_\infty=1$, with the notation
$$\mathcal E^{f,\beta}_t=\exp\Big(\int_0^tf(s)\,d\beta_s-\frac12\int_0^tf^2(s)\,ds\Big);$$
$\mathcal E^{g,\gamma}_t$ is defined similarly. Time-changing the exponential martingales $\mathcal E^{f,\beta}$ and $\mathcal E^{g,\gamma}$ with the time-change processes ⟨M⟩ and ⟨N⟩ respectively, we get that we have to prove
$$E\,\mathcal E^{f(\langle M\rangle),M}_\infty\mathcal E^{g(\langle N\rangle),N}_\infty=1,$$
since $E\,\mathcal E^{f(\langle M\rangle),M}_\infty\mathcal E^{g(\langle N\rangle),N}_\infty=E\,\mathcal E^{f,\beta}_\infty\mathcal E^{g,\gamma}_\infty$. This will be done by showing
$$E\,\mathcal E^{f(\langle M\rangle),M}_t\mathcal E^{g(\langle N\rangle),N}_t=1\qquad(2.34)$$
for all t > 0. By using Itô's formula on exponential martingales we get
$$\mathcal E^{f(\langle M\rangle),M}_t\mathcal E^{g(\langle N\rangle),N}_t=1+\int_0^t\mathcal E^{g(\langle N\rangle),N}_s\,d\mathcal E^{f(\langle M\rangle),M}_s+\int_0^t\mathcal E^{f(\langle M\rangle),M}_s\,d\mathcal E^{g(\langle N\rangle),N}_s+\int_0^t\mathcal E^{f(\langle M\rangle),M}_s\mathcal E^{g(\langle N\rangle),N}_s\,f(\langle M\rangle_s)g(\langle N\rangle_s)\,d\langle M,N\rangle_s.$$
The second and third terms are martingales, so their expectation is 0 for all t; the last term is also zero since M and N are orthogonal. Hence the expectation in (2.34) equals 1.

Finally, we show two martingales which are non-Ocone (the examples are taken from [36]).

Example C. Let B be a Brownian motion. The martingale
$$M_t=\int_0^tB_s\,dB_s,\qquad t\ge 0,$$
is non-Ocone: its quadratic variation process $\langle M\rangle_t=\int_0^tB_s^2\,ds$ is non-deterministic, while M is pure, which together would contradict Lemmas E and F if M were Ocone.

Example D. Let (C_s, B_s), s ≥ 0, be a planar Brownian motion. The martingale
$$\pi_t=B_tC_t=\int_0^t(C_s\,dB_s+B_s\,dC_s),\qquad t\ge 0,$$
is non-Ocone. The quadratic variation process of this martingale is $\langle\pi\rangle_t=\int_0^tR_s^2\,ds=\int_0^t(B_s^2+C_s^2)\,ds$, t ≥ 0. The σ-algebra generated by ⟨π⟩ is that of (R_t, t ≥ 0). By the definition of π we have |2B_tC_t| ≤ R_t².
Because of this inequality, conditionally on (R_t, t ≥ 0), (B_tC_t) is bounded, so it cannot be Gaussian.

2.4 Approximation of local time, excursion, meander and bridge

The Brownian motion construction in our focus is especially suitable for obtaining results on the approximation of the local time of Brownian motion. Indeed, if two points of an approximating random walk are at the same altitude, they remain at the same altitude after refinements forever. However, this may be disadvantageous for approximating the other three processes, the excursion, bridge and meander, because the first hit of 0 changes randomly from one approximation to the next one if we look into the excursion.

We also present a different, rather combinatorial construction of BM, excursion, bridge and meander, which is due to Philippe Marchal. His algorithms allow one to generate the excursion, the bridge and the meander directly. Moreover, many self-similarity properties are easily seen from this construction, and one can also recover various distributions of BM from the urn schemes that are embedded in the construction. In his paper he also gave a local time approximating algorithm, but this one is not as natural as in the first three cases.

2.4.1 Approximation of local time

Let L denote the Brownian local time at level 0. Let (B_m) be our usual sequence of scaled random walks that converges to B almost surely uniformly on every compact interval. Let ℓ_m(k) := #{0 ≤ l < k : S̃_m(l) = 0} and L_m(t) := 2^{-m}ℓ_m(⌊t2^{2m}⌋), the local time of the mth approximation at the point 0 up to time t. We will prove that the sequence of local times of the discrete approximations converges almost surely uniformly to the local time of the Brownian motion. The main idea of the proof is the following.
When the random walk, say the mth, hits 0 at time k, the finer random walk, the (m+1)st, has ½(T_{m+1}(k+1) − T_{m+1}(k)) hits of 0 between T_{m+1}(k) and T_{m+1}(k+1) (for the definition of T_{m+1} see (2.1)), which is a geometrically distributed random variable with parameter ½, so we can evaluate the time the Brownian motion spends at 0.

Theorem 6. As n → ∞,
$$P\Big\{\sup_{[0,K]}|L_n(t)-L(t)|\ge CK^{1/4}(\log_* K)^2\,n^2\,2^{-n/2}\Big\}\ \le\ \lambda(C)\big(K^{1/2}2^n\big)^{2-C},\qquad(2.35)$$
with an appropriate C-dependent positive constant λ(C), for appropriately large n and fixed C. A proper choice of the constant C and a proper usage of the Borel-Cantelli lemma give the following corollary.

Corollary 2.
$$\sup_{[0,K]}|L_n(t)-L(t)|<O(1)\,n^2\,2^{-n/2}\quad\text{a.s.}\ (n\to\infty),\qquad
\sup_{[0,K]}|L_n(t)-L(t)|<K^{1/4}(\log K)^2\quad\text{a.s.}\ (K\to\infty).$$

Before the proof of Theorem 6 we recall Lemma B in the following form.

Lemma B. Let
$$A_m=\Big\{\sup_{0\le k2^{-2m}\le K}\big|T_{m+1}(k)2^{-2(m+1)}-k2^{-2m}\big|\ \ge\ \frac32\,(CK\log_* K)^{1/2}\,m^{1/2}\,2^{-m}\Big\},$$
where log_* x = max{1, log x}. Then, for any K > 0, C > 1, and for any m ≥ m₀(C), we have
$$P\{A_m\}\le 2\,(K2^{2m})^{1-C}.$$

We need the following LDP-type result on the convergence of local time.

Lemma 6. Denote by ℓ(k) the local time at 0 of the symmetric random walk until the kth step. For j with √k ≤ j ≤ k, if k → ∞, then
$$P(\ell(2k)=j)\ \le\ \frac{1}{\sqrt{2\pi}}\,\frac{1}{\sqrt k}\,\exp\Big(-\frac12\Big(\frac{j}{\sqrt{2k}}\Big)^2\Big)$$
and
$$P\Big(\frac{\ell(2k)}{\sqrt{2k}}\ge c(\log 2k)^{3/4}\Big)\ <\ \frac{1}{\sqrt{2\pi}}\,\frac{1}{c(\log 2k)^{3/4}}\,e^{-\frac12c^2(\log 2k)^{3/2}}.$$

Proof of Lemma 6. For proving these, we apply a normal approximation to the distribution of ℓ:
$$P(\ell(2k)=j)=2^{-2k+j}\binom{2k-j}{k}\sim\frac{1}{\sqrt{\pi(2k-j)/2}}\exp\Big(-\frac{j^2}{2(2k-j)}\Big)
=\frac{1}{\sqrt{\pi k}}\Big(1-\frac{j}{2k}\Big)^{-1/2}\exp\Big(-\frac12\Big(\frac{j}{\sqrt{2k}}\Big)^2\Big(1-\frac{j}{2k}\Big)^{-1}\Big)\le\frac{1}{\sqrt{2\pi}}\,\frac{1}{\sqrt k}\,\exp\Big(-\frac12\Big(\frac{j}{\sqrt{2k}}\Big)^2\Big)$$
if √k ≤ j ≤ k and k → ∞.
Using this approximation and the asymptotic property of the normal distribution
$$1-\Phi(x)<\frac{1}{x\sqrt{2\pi}}\,e^{-x^2/2},$$
if x is large enough, one can readily write
$$P\Big(\frac{\ell(2k)}{\sqrt{2k}}\ge c(\log 2k)^{3/4}\Big)=\sum_{c(\log 2k)^{3/4}\le j/\sqrt{2k}}P(\ell(2k)=j)\approx 1-\Phi\big(c(\log 2k)^{3/4}\big)<\frac{1}{\sqrt{2\pi}}\,\frac{1}{c(\log 2k)^{3/4}}\,e^{-\frac12c^2(\log 2k)^{3/2}}.$$

Proof of Theorem 6. We will prove that two consecutive approximations are close enough to each other. Namely,
$$P\Big\{\sup_{[0,K]}|L_{m+1}(t)-L_m(t)|\ge CK^{1/4}(\log_* K)^2m^22^{-m/2}\Big\}\le 3\big(K^{1/2}2^m\big)^{2-C}.\qquad(2.36)$$
Therefore, (L_m) is a Cauchy sequence in the topology of uniform convergence on compact sets:
$$P\Big\{\sup_{[0,K]}|L_{n+j}(t)-L_n(t)|\ge CK^{1/4}(\log_* K)^2n^22^{-n/2}\ \text{for some }j\ge 1\Big\}\le\lambda(C)\big(K^{1/2}2^n\big)^{2-C},\qquad(2.37)$$
with an appropriate C-dependent positive constant λ(C), for appropriately large n and fixed C. Hence (2.35) follows.

For proving (2.36) we first apply the triangle inequality:
$$\begin{aligned}
\sup_{[0,K]}|L_{m+1}(t)-L_m(t)|&=\sup_{1\le k\le K2^{2m}}\frac{1}{2^{m+1}}\big|\ell_{m+1}(4k)-2\ell_m(k)\big|\\
&\le\sup_{1\le k\le K2^{2m}}\frac{1}{2^{m+1}}\big|\ell_{m+1}(4k)-\ell_{m+1}(T_{m+1}(k))\big|+\sup_{1\le k\le K2^{2m}}\frac{1}{2^{m+1}}\big|\ell_{m+1}(T_{m+1}(k))-2\ell_m(k)\big|.
\end{aligned}$$
We will estimate
$$p_1=P\Big\{\sup_{1\le k\le K2^{2m}}\frac{1}{2^{m+1}}\big|\ell_{m+1}(4k)-\ell_{m+1}(T_{m+1}(k))\big|\ge CK^{1/4}(\log_* K)^2m^22^{-m/2}\Big\},$$
$$p_2=P\Big\{\sup_{1\le k\le K2^{2m}}\frac{1}{2^{m+1}}\big|\ell_{m+1}(T_{m+1}(k))-2\ell_m(k)\big|\ge CK^{1/4}(\log_* K)^2m^22^{-m/2}\Big\}$$
separately.

Estimation of p₁. Instead of p₁ we will estimate a strictly larger probability:
$$P\Big\{\sup_{1\le k\le K2^{2m}}\frac{1}{2^{m+1}}\big|\ell_{m+1}(4k)-\ell_{m+1}(T_{m+1}(k))\big|\ge C^{1/2}K^{1/4}\log_* K\,m\,2^{-m/2}\Big\}.$$
Since {S(k + min{4k, T_{m+1}(k)}) − S(min{4k, T_{m+1}(k)})}_k has the same law as {S(k)}_k, one can write
$$\begin{aligned}
&P\Big\{\sup_{1\le k\le K2^{2m}}\frac{1}{2^{m+1}}\big|\ell_{m+1}(4k)-\ell_{m+1}(T_{m+1}(k))\big|\ge C^{1/2}K^{1/4}\log_* K\,m\,2^{-m/2}\Big\}\\
&=P\Big\{\sup_{1\le k\le K2^{2m}}\frac{1}{2^{m+1}}\,\ell_{m+1}\big(\min\{4k,T_{m+1}(k)\},\max\{4k,T_{m+1}(k)\}\big)\ge C^{1/2}K^{1/4}\log_* K\,m\,2^{-m/2}\Big\}\\
&=P\Big\{\sup_{1\le k\le K2^{2m}}\frac{1}{2^{m+1}}\,\ell_{m+1}\big(0,|4k-T_{m+1}(k)|\big)\ge C^{1/2}K^{1/4}\log_* K\,m\,2^{-m/2}\Big\}.
\end{aligned}$$
Using Lemma B one has
$$\begin{aligned}
&\le P\Big\{A_m^c;\ \sup_{1\le k\le K2^{2m}}\frac{1}{2^{m+1}}\,\ell_{m+1}\big(0,|4k-T_{m+1}(k)|\big)\ge C^{1/2}K^{1/4}\log_* K\,m\,2^{-m/2}\Big\}+P(A_m)\\
&\le P\Big\{\sup_{1\le k\le K2^{2m}}\frac{1}{2^{m+1}}\,\ell_{m+1}\Big(6\,(CK\log_* K\,m)^{1/2}2^{m}\Big)\ge C^{1/2}K^{1/4}\log_* K\,m\,2^{-m/2}\Big\}+2(K2^{2m})^{1-C}\\
&\le\sum_{k=1}^{K2^{2m}}P\Big\{\ell_{m+1}\Big(6\,(CK\log_* K\,m)^{1/2}2^{m}\Big)\ge 2\,C^{1/2}K^{1/4}\log_* K\,m\,2^{m/2}\Big\}+2(K2^{2m})^{1-C}.\qquad(2.38)
\end{aligned}$$
We want to apply Lemma 6. For this, proceed with the following transformation:
$$P\Big\{\ell_{m+1}\Big(6(CK\log_* K\,m)^{1/2}2^{m}\Big)\ge 2C^{1/2}K^{1/4}\log_* K\,m\,2^{m/2}\Big\}
=P\Bigg\{\frac{\ell_{m+1}\big(6(CK\log_* K\,m)^{1/2}2^{m}\big)}{\big(6(CK\log_* K\,m)^{1/2}2^{m}\big)^{1/2}}\ \ge\ \Big(\frac23\Big)^{1/2}C^{1/4}(\log_* K)^{3/4}m^{3/4}\Bigg\}.$$
Routine calculation shows that, with $c=(2/3)^{1/2}C^{1/4}(2\log_* C)^{-3/4}$,
$$c\Big(\log\big(6(CK\log_* K\,m)^{1/2}2^{m}\big)\Big)^{3/4}\ \le\ c\,\big(2\log_* C\,\log_* K\,m\big)^{3/4}\ =\ \Big(\frac23\Big)^{1/2}C^{1/4}(\log_* K)^{3/4}m^{3/4}.$$
Now, applying Lemma 6 to the last displayed probability we get
$$P\Bigg\{\frac{\ell_{m+1}\big(6(CK\log_* K\,m)^{1/2}2^{m}\big)}{\big(6(CK\log_* K\,m)^{1/2}2^{m}\big)^{1/2}}\ge c\Big(\log\big(6(CK\log_* K\,m)^{1/2}2^{m}\big)\Big)^{3/4}\Bigg\}
<e^{-\frac14c^2(\log C+\log_* K+m)^{3/2}}<e^{-\frac18\frac{C^{1/2}}{\log_*^{3/2}C}(\log C+\log_* K+m)^{3/2}}<e^{-\frac18\frac{C^{1/2}}{\log_*^{3/2}C}(\log C+\log_* K+m^{3/2})}.\qquad(2.39)$$
Henceforth, one can continue the estimation of the first term in (2.38):
$$\sum_{k=1}^{K2^{2m}}P\Big\{\ell_{m+1}\Big(6(CK\log_* K\,m)^{1/2}2^{m}\Big)\ge 2C^{1/2}K^{1/4}\log_* K\,m\,2^{m/2}\Big\}
<K2^{2m}\,e^{-\frac18\frac{C^{1/2}}{\log_*^{3/2}C}(\log C+\log_* K+m^{3/2})}
<e^{\log K+2m\log 2-\frac18\frac{C^{1/2}}{\log_*^{3/2}C}(\log C+\log_* K+m^{3/2})}.\qquad(2.40)$$
Hence, by (2.38) and (2.40), we get
$$p_1\ <\ e^{\log K+2m\log 2-\frac18\frac{C^{1/2}}{\log_*^{3/2}C}(\log C+\log_* K+m^{3/2})}+2(K2^{2m})^{1-C}.\qquad(2.41)$$

Estimation of p₂. Taking conditional expectation with respect to ℓ_m(K2^{2m}), one has the following sum:
$$\mathbb{P}\left(\sup_{1\le k\le K2^{2m}}\frac{1}{2^{m+1}}\left|\ell_{m+1}(T_{m+1}(k))-2\ell_m(k)\right| \ge CK^{1/4}(\log_* K)^2 m^2 2^{-m/2}\right)$$
$$= \sum_{n=1}^{K2^{2m}} \mathbb{P}\left(\sup_{1\le k\le K2^{2m}}\frac{1}{2^{m+1}}\left|\ell_{m+1}(T_{m+1}(k))-2\ell_m(k)\right| \ge CK^{1/4}(\log_* K)^2 m^2 2^{-m/2} \,\middle|\, \ell_m(K2^{2m})=n\right)\cdot \mathbb{P}\left(\ell_m(K2^{2m})=n\right). \tag{2.42}$$
Supposing $\ell_m(K2^{2m})=n$ we have the equality
$$\sup_{1\le k\le K2^{2m}}\frac{1}{2^{m+1}}\left|\ell_{m+1}(T_{m+1}(k))-2\ell_m(k)\right| = \sup_{1\le k\le n}\frac{1}{2^{m+1}}\left|\sum_{i=1}^k \gamma_i - 2k\right|,$$
where $\{\gamma_i\}_i$ is an i.i.d. sequence of geometric variables with parameter $1/2$. Taking this remark into consideration, one can write the sum (2.42) in the following form:
$$\sum_{n=1}^{K2^{2m}} \mathbb{P}\left(\sup_{1\le k\le n}\frac{1}{2^{m+1}}\left|\sum_{i=1}^k\gamma_i-2k\right| \ge CK^{1/4}(\log_* K)^2 m^2 2^{-m/2}\right)\mathbb{P}\left(\ell_m(K2^{2m})=n\right).$$
Dividing this sum into two parts at the level $N = C^{1/2}K^{1/2}(\log_* K)^{3/4}m^{3/4}2^m$, one finds
$$\sum_{n=1}^{K2^{2m}} \mathbb{P}\left(\sup_{1\le k\le n}\frac{1}{2^{m+1}}\left|\sum_{i=1}^k\gamma_i-2k\right| \ge CK^{1/4}(\log_* K)^2 m^2 2^{-m/2}\right)\mathbb{P}\left(\ell_m(K2^{2m})=n\right)$$
$$= \sum_{n=1}^{N} \mathbb{P}\left(\sup_{1\le k\le n}\frac{1}{2^{m+1}}\left|\sum_{i=1}^k\gamma_i-2k\right| \ge CK^{1/4}(\log_* K)^2 m^2 2^{-m/2}\right)\mathbb{P}\left(\ell_m(K2^{2m})=n\right) \tag{2.43}$$
$$+ \sum_{n=N}^{K2^{2m}} \mathbb{P}\left(\sup_{1\le k\le n}\frac{1}{2^{m+1}}\left|\sum_{i=1}^k\gamma_i-2k\right| \ge CK^{1/4}(\log_* K)^2 m^2 2^{-m/2}\right)\mathbb{P}\left(\ell_m(K2^{2m})=n\right). \tag{2.44}$$
Overestimating the sum (2.43) we get
$$\sum_{n=1}^{N} \mathbb{P}\left(\sup_{1\le k\le n}\frac{1}{2^{m+1}}\left|\sum_{i=1}^k\gamma_i-2k\right| \ge CK^{1/4}(\log_* K)^2 m^2 2^{-m/2}\right)\mathbb{P}\left(\ell_m(K2^{2m})=n\right)$$
$$< N\cdot\mathbb{P}\left(\sup_{1\le k\le N}\left|\sum_{i=1}^k\gamma_i-2k\right| \ge CK^{1/4}(\log_* K)^2 m^2 2^{m/2}\right)$$
$$< N\cdot\mathbb{P}\left(\sup_{1\le k\le N}\left|\sum_{i=1}^k\gamma_i-2k\right| \ge 2^{1/2}\left(2CN\log N\right)^{1/2}\right)$$
$$< N\cdot N^{1-C} = N^{2-C} < \left(K^{1/2}(\log_* K)^{3/4}m^{3/4}2^m\right)^{2-C}, \tag{2.45}$$
if $C$ is large enough. We have basically applied Lemma A here. On the one hand, $\mathrm{Var}(\gamma_1-2)=2$; this is the origin of the $2^{1/2}$ multiplier on the right-hand side of the probability in the third row (in Lemma A we have to ensure that the variance of $X_i$ is 1). On the other hand,
$$\left(2CN\log N\right)^{1/2} = \left(2C\cdot C^{1/2}K^{1/2}(\log_* K)^{3/4}m^{3/4}2^m\left(\log C+\tfrac12\log K+\tfrac34\log\log_* K+\tfrac34\log m+m\log 2\right)\right)^{1/2}$$
$$< 2C^{3/4}(\log_* C)^{1/2}K^{1/4}(\log_* K)^2 m^2 2^{m/2} < CK^{1/4}(\log_* K)^2 m^2 2^{m/2}, \tag{2.46}$$
if $C$ is large enough. This yields the second estimate. The third is a pure application of Lemma A. The last can be obtained by omitting the term $C^{1/2}$ from $N$.
Overestimating (2.44), one can write
$$\sum_{n=N}^{K2^{2m}} \mathbb{P}\left(\sup_{1\le k\le n}\frac{1}{2^{m+1}}\left|\sum_{i=1}^k\gamma_i-2k\right| \ge CK^{1/4}(\log_* K)^2 m^2 2^{-m/2}\right)\mathbb{P}\left(\ell_m(K2^{2m})=n\right)$$
$$< \sum_{n=N}^{K2^{2m}}\mathbb{P}\left(\ell_m(K2^{2m})=n\right) = \mathbb{P}\left(\ell_m(K2^{2m}) \ge N\right) = \mathbb{P}\left(\frac{\ell_m(K2^{2m})}{K^{1/2}2^m} \ge C^{1/2}\left(\log_* K\cdot m\right)^{3/4}\right) < e^{-\frac12 C\left(\log_* K\cdot m\right)^{3/2}}, \tag{2.47}$$
by a proper application of Lemma 6. Summarizing (2.45) and (2.47), one has
$$\mathbb{P}\left(\sup_{1\le k\le K2^{2m}}\frac{1}{2^{m+1}}\left|\ell_{m+1}(T_{m+1}(k))-2\ell_m(k)\right| \ge CK^{1/4}(\log_* K)^2m^22^{-m/2}\right)$$
$$\le 2\left(K2^{2m}\right)^{1-C} + \left(K^{1/2}(\log_* K)^{3/4}m^{3/4}2^m\right)^{2-C} + e^{-\frac12 C\left(\log_* K\cdot m\right)^{3/2}}. \tag{2.48}$$
The lines (2.41) and (2.48) together yield (2.36):
$$\mathbb{P}\left(\sup_{[0,K]}|L_{m+1}(t)-L_m(t)| \ge CK^{1/4}(\log_* K)^2m^22^{-m/2}\right)$$
$$\le e^{\log K+2m\log 2-\frac18\frac{C^{1/2}}{\log^{3/2}C}\left(\log^{3/2}C+\log_*^{3/2}K+m^{3/2}\right)} + 4\left(K2^{2m}\right)^{1-C} + \left(K^{1/2}(\log_* K)^{3/4}m^{3/4}2^m\right)^{2-C} + e^{-\frac12 C\left(\log_* K\cdot m\right)^{3/2}} < 3\left(K^{1/2}2^m\right)^{2-C}$$
if $C$ is large enough.

Now, for proving (2.37), we use the following estimate:
$$\sup_{[0,K]}|L_{n+j}(t)-L_n(t)| = \sup_{[0,K]}\left|\sum_{m=n}^{n+j-1}\left(L_{m+1}(t)-L_m(t)\right)\right| \le \sum_{m=n}^{n+j-1}\sup_{[0,K]}|L_{m+1}(t)-L_m(t)| < \sum_{m=n}^{\infty}CK^{1/4}(\log_* K)^2m^22^{-m/2} < c\,CK^{1/4}(\log_* K)^2n^22^{-n/2} \tag{2.49}$$
with a proper constant, say $c=20$, for any $j>1$ after an appropriately large $n$. We omit the constant $c$ from the subsequent expressions because it can be absorbed, e.g., into the estimate (2.46) if $C$ is large enough. Therefore we can conclude that
$$\mathbb{P}\left(\sup_{[0,K]}|L_{n+j}(t)-L_n(t)| \ge CK^{1/4}(\log_* K)^2n^22^{-n/2} \text{ for some } j\ge1\right)$$
$$\le \sum_{m=n}^{\infty}\mathbb{P}\left(\sup_{[0,K]}|L_{m+1}(t)-L_m(t)|\ge CK^{1/4}(\log_* K)^2m^22^{-m/2}\right) \le \sum_{m=n}^{\infty}3\left(K^{1/2}2^m\right)^{2-C} \le \lambda(C)\left(K^{1/2}2^n\right)^{2-C},$$
with an appropriate $C$-dependent positive constant $\lambda(C)$. This last line proves (2.37) and hence (2.35). ∎

The following corollary is necessary for proving a stochastic integral approximation result, Theorem 9, in Section 2.5. Before this global convergence theorem we introduce some notation. Let $L^a(t)$ and $L^a_n(t)$ denote the local time at level $a$ until time $t$ of the standard Brownian motion and of its $n$th approximation, respectively.
For an arbitrary real number $a$, let us define
$$L^a_n(t) = L^{\lceil a\rceil_n}_n(t),\qquad\text{where}\quad \lceil a\rceil_n = \frac{\lceil a2^n\rceil}{2^n}.$$
Moreover,
$$L^{\lceil a\rceil_n}_n(t) = \frac{1}{2^n}\,\ell^{\lceil a2^n\rceil}_n\!\left(\lfloor t2^{2n}\rfloor\right) = \frac{1}{2^n}\,\#\left\{0\le l<\lfloor t2^{2n}\rfloor \mid \tilde S_n(l)=\lceil a2^n\rceil\right\}.$$

Corollary 3.
$$\sup_{a\in\mathbb{R}}\,\sup_{[0,K]}\left|L^a_n(t)-L^a(t)\right| < O(1)\,2^{-\left(\frac12-\varepsilon\right)n}\quad\text{a.s. } (n\to\infty) \tag{2.50}$$
for an arbitrarily small positive $\varepsilon$.

The rate of convergence is worse than in Corollary 2, and we do not say anything about the case when $m$ is fixed and $K$ tends to infinity. The reason is that, to prove the corollary, we use the following statement on the uniform Hölder continuity of the local time: for each fixed $K$ there exists a positive random variable $D_K$ such that
$$\sup_{t\in[0,K]}\left|L^x(t)-L^y(t)\right| \le D_K|x-y|^\alpha \tag{2.51}$$
for every $\alpha<1/2$ (see [23, (1.32) Exercise, Chapter VI]).

Proof of Corollary 3. First, we prove
$$\mathbb{P}\left(\sup_{a=j2^{-n},\,j\in\mathbb{Z}}\,\sup_{[0,K]}\left|L^a_{n+i}(t)-L^a_n(t)\right| \ge CK^{1/4}(\log_* K)^2n^22^{-n/2} \text{ for some } i\ge1\right) \le \lambda(C)\left(K^{1/2}2^n\right)^{4-C}, \tag{2.52}$$
with an appropriate $C$-dependent positive constant. This yields
$$\sup_{a=j2^{-n},\,j\in\mathbb{Z}}\,\sup_{[0,K]}\left|L^a_n(t)-L^a(t)\right| < CK^{1/4}(\log_* K)^2n^22^{-n/2}\quad\text{a.s. } (n\to\infty). \tag{2.53}$$
To prove (2.52) we first apply the following transformation:
$$\sup_{a=j2^{-m},\,j\in\mathbb{Z}}\,\sup_{[0,K]}\left|L^a_{m+1}(t)-L^a_m(t)\right| = \sup_{j\in\mathbb{Z}}\,\sup_{1\le k\le K2^{2m}}\frac{1}{2^{m+1}}\left|\ell^{2j}_{m+1}(4k)-2\ell^j_m(k)\right|.$$
Dividing $\mathbb{Z}$ into two parts one gets
$$\sup_{j\in\mathbb{Z}}\,\sup_{1\le k\le K2^{2m}}\frac{1}{2^{m+1}}\left|\ell^{2j}_{m+1}(4k)-2\ell^j_m(k)\right|$$
$$\le \sup_{|j|\le K2^{2m}}\,\sup_{1\le k\le K2^{2m}}\frac{1}{2^{m+1}}\left|\ell^{2j}_{m+1}(4k)-2\ell^j_m(k)\right| + \sup_{|j|>K2^{2m}}\,\sup_{1\le k\le K2^{2m}}\frac{1}{2^{m+1}}\left|\ell^{2j}_{m+1}(4k)-2\ell^j_m(k)\right|.$$
Here the second term is 0, because in $K2^{2m}$ steps the random walk cannot reach a level farther than $K2^{2m}$ from the origin. Therefore we only have to deal with the first part:
$$\mathbb{P}\left(\sup_{j\in\mathbb{Z}}\,\sup_{1\le k\le K2^{2m}}\frac{1}{2^{m+1}}\left|\ell^{2j}_{m+1}(4k)-2\ell^j_m(k)\right| \ge CK^{1/4}(\log_* K)^2m^22^{-m/2}\right)$$
$$= \mathbb{P}\left(\sup_{|j|\le K2^{2m}}\,\sup_{1\le k\le K2^{2m}}\frac{1}{2^{m+1}}\left|\ell^{2j}_{m+1}(4k)-2\ell^j_m(k)\right| \ge CK^{1/4}(\log_* K)^2m^22^{-m/2}\right)$$
$$\le \sum_{|j|\le K2^{2m}}\mathbb{P}\left(\sup_{1\le k\le K2^{2m}}\frac{1}{2^{m+1}}\left|\ell^{2j}_{m+1}(4k)-2\ell^j_m(k)\right| \ge CK^{1/4}(\log_* K)^2m^22^{-m/2}\right)$$
$$= \sum_{|j|\le K2^{2m}}\mathbb{P}\left(\sup_{\tau_j\le k\le K2^{2m}}\frac{1}{2^{m+1}}\left|\ell^{2j}_{m+1}(4k)-2\ell^j_m(k)\right| \ge CK^{1/4}(\log_* K)^2m^22^{-m/2}\right)$$
$$\le \sum_{|j|\le K2^{2m}}\mathbb{P}\left(\sup_{1\le k\le K2^{2m}}\frac{1}{2^{m+1}}\left|\ell_{m+1}(4k)-2\ell_m(k)\right| \ge CK^{1/4}(\log_* K)^2m^22^{-m/2}\right)$$
$$< 2K2^{2m}\cdot 3\left(K^{1/2}2^m\right)^{2-C} = 6\left(K^{1/2}2^m\right)^{4-C},$$
where $\tau_j$ denotes the first hitting time of level $j$. We used the strong Markov property of the simple random walk and Theorem 6. At this point we apply an argument like (2.49):
$$\sup_{a=j2^{-n},\,j\in\mathbb{Z}}\,\sup_{[0,K]}\left|L^a_{n+i}(t)-L^a_n(t)\right| = \sup_{a=j2^{-n},\,j\in\mathbb{Z}}\,\sup_{[0,K]}\left|\sum_{m=n}^{n+i-1}\left(L^a_{m+1}(t)-L^a_m(t)\right)\right|$$
$$\le \sum_{m=n}^{n+i-1}\sup_{a=j2^{-n},\,j\in\mathbb{Z}}\,\sup_{[0,K]}\left|L^a_{m+1}(t)-L^a_m(t)\right| \le \sum_{m=n}^{n+i-1}\sup_{a=j2^{-m},\,j\in\mathbb{Z}}\,\sup_{[0,K]}\left|L^a_{m+1}(t)-L^a_m(t)\right|$$
$$< \sum_{m=n}^{\infty}CK^{1/4}(\log_* K)^2m^22^{-m/2} < c\,CK^{1/4}(\log_* K)^2n^22^{-n/2},$$
with a proper constant $c>0$ (which can be omitted if $C$ is large enough), for any $i\ge1$ after an appropriately large $n$. So we can conclude that
$$\mathbb{P}\left(\sup_{a=j2^{-n},\,j\in\mathbb{Z}}\,\sup_{[0,K]}\left|L^a_{n+i}(t)-L^a_n(t)\right| \ge CK^{1/4}(\log_* K)^2n^22^{-n/2}\text{ for some } i\ge1\right)$$
$$\le \sum_{m=n}^{\infty}\mathbb{P}\left(\sup_{a=j2^{-m},\,j\in\mathbb{Z}}\,\sup_{[0,K]}\left|L^a_{m+1}(t)-L^a_m(t)\right| \ge CK^{1/4}(\log_* K)^2m^22^{-m/2}\right) \le \sum_{m=n}^{\infty}6\left(K^{1/2}2^m\right)^{4-C} \le \lambda(C)\left(K^{1/2}2^n\right)^{4-C},$$
with an appropriate $C$-dependent positive constant, which proves (2.52).

Now we are ready to prove (2.50). Using (2.51) and (2.53) we get
$$\sup_{a\in\mathbb{R}}\,\sup_{[0,K]}\left|L^a_n(t)-L^a(t)\right| = \sup_{a\in\mathbb{R}}\,\sup_{[0,K]}\left|L^{\lceil a\rceil_n}_n(t)-L^a(t)\right|$$
$$\le \sup_{a=j2^{-n},\,j\in\mathbb{Z}}\,\sup_{[0,K]}\left|L^a_n(t)-L^a(t)\right| + \sup_{a\in\mathbb{R}}\,\sup_{[0,K]}\left|L^{\lceil a\rceil_n}(t)-L^a(t)\right|$$
$$\le c\,CK^{1/4}(\log_* K)^2n^22^{-n/2} + D_K2^{-\left(\frac12-\varepsilon\right)n} \le O(1)\,2^{-\left(\frac12-\varepsilon\right)n}$$
a.s. as $n$ tends to infinity. ∎

2.4.2 Excursion, bridge and meander: Marchal's construction

We present Philippe Marchal's algorithm for approximating the Brownian excursion, bridge and meander. His algorithms generate the excursion, the bridge and the meander directly.
Moreover, many self-similarity properties are easily seen from this construction, and one can also recover various distributions related to BM from the urn schemes embedded in the construction. In his paper he also gave a local time approximating algorithm, but that one is not as natural as the first three cases, so in spite of the originality of that construction we will not present it here.

Theorem F. There exists a family $(S^n,\ n\ge1)$ of random walks on $\mathbb{Z}$ starting at 0 such that for every $n$, $S^n$ respectively:
(1) has length $2n$ and is conditioned to return to 0 at time $2n$;
(2) has length $2n$ and is conditioned to return to 0 at time $2n$ and to stay positive from time 1 to $2n-1$;
(3) has length $n$ and is conditioned to stay positive from time 1 to $n$;
and such that almost surely, for every $t\in[0,1]$, $S^n_{\lfloor 2nt\rfloor}/\sqrt{n}$ (or $S^n_{\lfloor nt\rfloor}/\sqrt{n}$ in the third case) converges to $\tilde B_t$, where $(\tilde B_t,\ 0\le t\le1)$ is respectively (1) a Brownian bridge, (2) a Brownian excursion, (3) a Brownian meander.

Before the construction we have to introduce some notation and definitions. If the meander of a path $P$ is positive, a point $t$ in the meander is visible from the right if $P(t) = \min\{P(n)\mid n\ge t\}$. A positive step followed by a negative step is called a positive hat; a negative hat is defined likewise.

Let us describe a procedure to extend a path $P$. Suppose we are given a time $T$ such that $P(T)>0$. Then lifting the Dyck path before time $T$ means the following. Let $T' = 1+\sup\{n\le T\mid P(n)<P(T)\}$. Form the new path $P'$ by inserting a positive step at time $T'$ and then a negative step at time $T+1$. We remark that if $P(T-1)=P(T)-1$, then $T=T'$, and lifting the Dyck path before time $T$ amounts to inserting a hat at time $T$.

Now we describe how to generate $S^{n+1}$ from $S^n$:

I. Choose a random time $t$ uniformly on $\{0,\ldots,2n-1\}$.
II. If $S^n(t)=0$, insert a positive or negative hat at time $t$ with respective probabilities 1/2, 1/2.
III. If $t$ is in a positive excursion,
  1. with probability 1/2, insert a positive hat at time $t$;
  2. with probability 1/2, lift the Dyck path before time $t$.
IV. If $t$ is in the meander of $S^n$ and this meander is positive,
  1. if $t$ is not visible from the right, or if $t$ is visible from the right and $S^n(t)$ is even, proceed as in III;
  2. if $t$ is visible from the right and $S^n(t)$ is odd,
     (a) with probability 1/2, insert a positive hat at time $t$;
     (b) with probability 1/2, insert two positive steps at time $t$.

Now let us present the algorithms. In case (1) of Theorem F, begin with $S^0$ the empty path. To obtain $S^{n+1}$ from $S^n$, choose a random time $t$ uniformly in $\{0,\ldots,2n\}$ and apply procedure II or III. In case (2), begin with $S^1$ a positive hat. To obtain $S^{n+1}$ from $S^n$, choose a random time $t$ uniformly in $\{1,\ldots,2n\}$ and apply procedure III. In case (3), begin with $S^1$ a positive step. To obtain $S^{2n+1}$ from $S^{2n-1}$, choose a random time $t$ uniformly in $\{1,\ldots,2n\}$ and apply procedure IV. To obtain $S^{2n}$ from $S^{2n-1}$, just add a last random step.

The idea of the proof of these statements is a bijection between Dyck paths and random binary trees. The convergence rate in all cases is $O(1/n^{1/4})$.

2.5 Pathwise stochastic integration

Our objective in this section is to define a sequence of stochastic sums converging to $\int_0^t Y_s\,dM(s)$ with probability 1 on every compact interval, where $M$ is a continuous local martingale and $Y$ is a stochastic process integrable with respect to $M$. It will turn out that one can carry out this procedure for two types of processes. The first is of the form $f'_-(M)$, where $f'_-$ denotes the left-hand derivative of a difference of two convex functions. The other family consists of processes with right-continuous paths and left limits. The main idea is to take a dyadic partition on the spatial axis first, which gives the random stopping times of the Skorohod embedding on the time axis. At this point the procedure is somewhat reminiscent of a possible definition of the Lebesgue integral.
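This spatial-dyadic-partition step can be sketched numerically. The script below is a simplified illustration, not the thesis's exact construction: given a finely sampled Brownian path, it records the successive times at which the path has moved by the dyadic step $2^{-n}$ from its value at the previous embedded time, producing a time-changed simple random walk with steps $\pm2^{-n}$ (the function name and sampling grid are our own choices).

```python
import numpy as np

def skorohod_walk(W, dt, n):
    """Successive times at which the sampled path W (step dt) moves by
    2**-n from its value at the previous embedded time.
    Returns (times, values); the values form a walk with +-2**-n steps."""
    h = 2.0 ** (-n)
    times, values = [0.0], [W[0]]
    anchor = W[0]
    for i in range(1, len(W)):
        if abs(W[i] - anchor) >= h:
            anchor += h * np.sign(W[i] - anchor)  # snap to the dyadic grid
            times.append(i * dt)
            values.append(anchor)
    return np.array(times), np.array(values)

rng = np.random.default_rng(0)
dt = 1e-5
W = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), 200_000))])
t, v = skorohod_walk(W, dt, n=3)
# every embedded step has magnitude exactly 2**-3
assert np.allclose(np.abs(np.diff(v)), 2.0 ** -3)
```

On finer sampling grids and larger $n$, the embedded walk follows the path ever more closely, which is the heuristic content of the Skorohod-embedding step described above.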
So we get an approach to stochastic integrals which is fundamentally different from the usual definition of stochastic integration. A similar approach can be found in Karandikar's papers, and because of this similarity we think that presenting some of his results makes our exposition more complete. The theorems in Subsection 2.5.1 are taken from the above-mentioned paper [13] from 1996, where one can find statements on stochastic integration with respect to Brownian motion and semimartingales, and even a pathwise construction of the solution of an SDE. The main difference between the two approaches is the process to which the discretization is applied: Karandikar applied the discretization to the integrand, while we apply it to the integrator martingale.

2.5.1 Karandikar's integral approximation

Throughout this subsection we fix a complete probability space $(\Omega,\mathcal{F},\mathbb{P})$ and a filtration $(\mathcal{F}_t)$ satisfying the usual conditions. His first theorem is about pathwise stochastic integration with respect to Brownian motion.

Theorem G. Let $(W_t)$ be a Brownian motion adapted to the filtration $(\mathcal{F}_t)$ such that $W_t-W_s$ is independent of $\mathcal{F}_s$ for all $0\le s\le t<\infty$. Let $f$ be an r.c.l.l. adapted process and for $n\ge1$ let $\{\tau^n_i : i\ge0\}$ be defined by $\tau^n_0=0$ and, for $i\ge0$,
$$\tau^n_{i+1} = \inf\left\{t\ge\tau^n_i : |f_t(\cdot)-f_{\tau^n_i}(\cdot)| \ge 2^{-n}\right\}.$$
Let $(Y^n_t)$ be defined as follows: for $\tau^n_k\le t<\tau^n_{k+1}$, $k\ge0$,
$$Y^n_t = \sum_{i=0}^{k-1} f_{\tau^n_i}\left(W_{\tau^n_{i+1}}-W_{\tau^n_i}\right) + f_{\tau^n_k}\left(W_t-W_{\tau^n_k}\right).$$
Then for all $T<\infty$ we have
$$\sup_{0\le t\le T}\left|Y^n_t-\int_0^t f\,dW\right| \to 0 \quad\text{a.s. } (n\to\infty).$$

We present his proof of this theorem because we will use some of its notation and key steps in the proof of Theorem 7.

Proof. Note that $Y^n_t = \int_0^t f^n\,dW$, where $f^n_t = f_{\tau^n_k}$ for $\tau^n_k\le t<\tau^n_{k+1}$, and hence by the choice of $\{\tau^n_i\}$ we have $|f^n_t-f_t| \le 2^{-n}$. Using the standard estimate
$$\mathbb{E}\left(\sup_{0\le t\le T}\left|\int_0^t g\,dW\right|^2\right) \le 4\,\mathbb{E}\int_0^T g^2\,dt \tag{2.54}$$
one gets
$$\mathbb{E}\left(\sup_{0\le t\le T}\left|Y^n_t-\int_0^t f\,dW\right|^2\right) \le 4T\,2^{-2n}. \tag{2.55}$$
Let
$$U_n = \sup_{0\le t\le T}\left|Y^n_t-\int_0^t f\,dW\right|.$$
Then (2.55) implies that $\mathbb{E}U_n \le 2\sqrt{T}\,2^{-n}$, and hence it follows that
$$\mathbb{E}\sum_n U_n = \sum_n \mathbb{E}U_n \le 2\sqrt{T}\sum_n 2^{-n} < \infty.$$
As a consequence, one gets $\sum_n U_n < \infty$ a.s., which gives the required conclusion. ∎

Theorem H. Let $X$ be a semimartingale and let $Z$ be an r.c.l.l. adapted process. For $n\ge1$ let $\{\tau^n_i : i\ge0\}$ be defined by $\tau^n_0=0$ and, for $i\ge0$,
$$\tau^n_{i+1} = \inf\left\{t\ge\tau^n_i : |Z_t(\cdot)-Z_{\tau^n_i}(\cdot)|\ge 2^{-n}\right\}.$$
Let $(Y^n_t)$ be defined as follows: for $\tau^n_k\le t<\tau^n_{k+1}$, $k\ge0$,
$$Y^n_t = \sum_{i=0}^{k-1} Z_{\tau^n_i}\left(X_{\tau^n_{i+1}}-X_{\tau^n_i}\right) + Z_{\tau^n_k}\left(X_t-X_{\tau^n_k}\right).$$
Then for all $T<\infty$ we have
$$\sup_{0\le t\le T}\left|Y^n_t - \int_0^t Z_-\,dX\right| \to 0\quad\text{a.s. } (n\to\infty).$$

Proof. Note that $Y^n_t = \int_0^t Z^n\,dX$, where $Z^n_t = Z_{\tau^n_k}$ for $\tau^n_k\le t<\tau^n_{k+1}$, $k\ge1$, and $Z^n_0=Z_0$. Hence by the choice of $\{\tau^n_i\}$ we have
$$\sup_t\left|Z^n_t - Z_{t-}\right| \le 2^{-n}.$$
The core of the proof is the inequality (2.56), for which we have to introduce some notation. Let $X = M+A$ be a decomposition of the semimartingale $X$, where $M$ is a locally square integrable martingale and $A$ is a process whose paths are of bounded variation on bounded intervals. Define stopping times $\sigma_k$ increasing to $\infty$ such that $C_k = \mathbb{E}\langle M,M\rangle_{\sigma_k} < \infty$. We have the following statement: for a predictable process $f$ and a stopping time $\sigma$,
$$\mathbb{E}\left(\sup_{0\le t\le\sigma}\left|\int_0^t f\,dM\right|^2\right) \le 4\,\mathbb{E}\int_0^\sigma f^2\,d\langle M,M\rangle. \tag{2.56}$$
Plugging in our variables we obtain
$$\mathbb{E}\left(\sup_{0\le t\le\sigma_k}\left|\int_0^t Z^n\,dM - \int_0^t Z_-\,dM\right|^2\right) \le 4\cdot2^{-2n}C_k,$$
and, as in Theorem G, we can conclude that
$$\sup_{0\le t\le\sigma_k}\left|\int_0^t Z^n\,dM-\int_0^t Z_-\,dM\right| \to 0\quad\text{a.s.}$$
for all $k\ge1$. Since $\sigma_k\to\infty$, we get
$$\sup_{0\le t\le T}\left|\int_0^t Z^n\,dM-\int_0^t Z_-\,dM\right| \to 0\quad\text{a.s.} \tag{2.57}$$
for all $T<\infty$. As for the $dA$ integral, uniform convergence of $Z^n$ to $Z_-$ yields
$$\sup_{0\le t\le\sigma_k}\left|\int_0^t Z^n\,dA-\int_0^t Z_-\,dA\right| \to 0. \tag{2.58}$$
Together, (2.57) and (2.58) yield the result. ∎

Finally, we present Karandikar's result on the pathwise approximation of the solution of an SDE.
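Before that, the discretize-the-integrand scheme of Theorems G and H can be illustrated numerically. The sketch below is a simplified illustration on a sampled path, not Karandikar's construction verbatim (the function name, grid, and tolerance handling are our own choices): the integrand value is refreshed each time it has moved by $2^{-n}$, and the resulting Riemann-type sum is compared against the explicit Itô integral $\int_0^T W\,dW = \frac12(W_T^2-T)$.

```python
import numpy as np

def karandikar_integral(f, W, n):
    """Pathwise approximation of int f dW in the spirit of Theorem G:
    refresh the integrand value each time it has moved by 2**-n.
    f and W are paths sampled on the same grid."""
    eps = 2.0 ** (-n)
    total = 0.0
    f_anchor, w_anchor = f[0], W[0]
    for i in range(1, len(W)):
        if abs(f[i] - f_anchor) >= eps:
            total += f_anchor * (W[i] - w_anchor)  # close the block
            f_anchor, w_anchor = f[i], W[i]        # refresh the integrand
    total += f_anchor * (W[-1] - w_anchor)         # final partial block
    return total

rng = np.random.default_rng(1)
T, N = 1.0, 400_000
dW = rng.normal(0.0, np.sqrt(T / N), N)
W = np.concatenate([[0.0], np.cumsum(dW)])
approx = karandikar_integral(W, W, n=8)  # integrand f = W itself
exact = 0.5 * (W[-1] ** 2 - T)           # Ito: int_0^T W dW
assert abs(approx - exact) < 0.05
```

Increasing $n$ tightens the refresh threshold $2^{-n}$ and, together with a finer sampling grid, drives the sum toward the Itô integral, as the almost sure convergence in Theorem G suggests.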
The SDE considered is
$$Z_t = H_t + \int_0^t a(Z)_{s-}\,dX_s,$$
where $X$ is an $\mathbb{R}^d$-valued semimartingale, $H$ is a given adapted r.c.l.l. $\mathbb{R}^d$-valued process, and $a : D([0,\infty),\mathbb{R}^d)\to D([0,\infty),L(d,d))$, where $L(d,d)$ is the space of $d\times d$ matrices. We assume that the functional $a$ satisfies the following Lipschitz condition: for each $T<\infty$ there exists a constant $C_T$ such that
$$\|a(\rho_1)(t)-a(\rho_2)(t)\| \le C_T\sup_{0\le s\le t}\|\rho_1(s)-\rho_2(s)\|$$
for all $\rho_1,\rho_2\in D([0,\infty),\mathbb{R}^d)$ and all $0\le t\le T$. Here $\|\cdot\|$ denotes the Euclidean norm on $\mathbb{R}^d$ and on $L(d,d)$.

For arbitrary fixed $\rho,\eta\in D([0,\infty),\mathbb{R}^d)$ and $n\ge1$ we will define a function $S_n(\eta,\rho)\in D$. To this end, we first define some sequences of variables for these fixed functions. Let $\{u_i : i\ge1\}$ and $\xi^i\in D([0,\infty),\mathbb{R}^d)$ be defined inductively by $u_0=0$ and $\xi^0_t\equiv\eta_0$, and, having defined $u_j$, $\xi^j$ for $j\le i$, let
$$u_{i+1} = \inf\left\{t>u_i : \|\eta(t)-\eta(u_i)+a(\xi^i)(u_i)(\rho(t)-\rho(u_i))\|\ge 2^{-n} \ \text{ or }\ \|a(\xi^i)(t)-a(\xi^i)(u_i)\|\ge 2^{-n}\right\}$$
and
$$\xi^{i+1}(t) = \begin{cases}\xi^i(t) & \text{for } t<u_{i+1},\\ \xi^i(u_i)+\eta(u_{i+1})-\eta(u_i)+a(\xi^i)(u_i)\left(\rho(u_{i+1})-\rho(u_i)\right) & \text{for } t\ge u_{i+1}.\end{cases}$$
By definition, $\xi^i$ is a step function with jumps at $u_1,u_2,\ldots,u_{i+1}$. Now we are able to define the function $S_n(\eta,\rho)$. Let $S_n(\eta,\rho)(0)=\eta(0)$ and, for $u_i<t\le u_{i+1}$, let
$$S_n(\eta,\rho)(t) = \xi^i(u_i)+\eta_t-\eta_{u_i}+a(\xi^i)(u_i)\left(\rho(t)-\rho(u_i)\right).$$
So we have defined a function on $D([0,\infty),\mathbb{R}^d)\times D([0,\infty),\mathbb{R}^d)$. Taking the limit one gets the function
$$S(\eta,\rho) = \lim_{n\to\infty}S_n(\eta,\rho),$$
whenever the limit exists in the topology of uniform convergence on compact subsets.

Theorem I. Let $(X_t)$ be a semimartingale and let $(H_t)$ be an r.c.l.l. adapted process. Then $Z_t(\omega) = S(H_\cdot(\omega),X_\cdot(\omega))(t)$ is the unique solution of the equation
$$Z_t = H_t+\int_0^t a(Z)_{s-}\,dX_s.$$

2.5.2 Discretization applied to the integrator

Theorem 7. Let $(W_t,(\mathcal{F}_t))$ be a Brownian motion with the standard filtration. Let $f$ be an r.c.l.l. adapted process and for $n\ge1$ let $\{s_n(i):i\ge0\}$ be the stopping times defined in (2.6). Let $(I^n_t)$ be defined as follows.
For $s_n(k)\le t<s_n(k+1)$, $k\ge0$,
$$I^n_t = \sum_{i=0}^{k-1} f_{s_n(i)}\left(W_{s_n(i+1)}-W_{s_n(i)}\right)+f_{s_n(k)}\left(W_t-W_{s_n(k)}\right).$$
Then for all $T<\infty$ we have
$$\sup_{0\le t\le T}\left|I^n_t-\int_0^t f\,dW\right| \to 0 \quad\text{a.s. } (n\to\infty). \tag{2.59}$$

Proof. Essentially, the proof is based on the fact that the sequence $\{s_m(k)\}_k$ becomes dense on every compact interval as $m$ goes to infinity.

Fix $T,M>0$ and let $\vartheta^M = \inf\{t\mid 0\le t\le T,\ |f_t|>M\}$. Since $f$ is an r.c.l.l. process, $\max_{[0,T]}|f_t|$ is almost surely finite, so $\mathbb{P}(\vartheta^M=T)\to1$ as $M\to\infty$. Define
$$f^M_t = f_t\,\mathbb{I}\{t<\vartheta^M\},\qquad f^{n,M}_t = f^M_{s_n(k)}\ \text{ for } s_n(k)\le t<s_n(k+1), \tag{2.60}$$
and
$$I^{n,M}_t := \int_0^t f^{n,M}\,dW. \tag{2.61}$$
We will prove that
$$\sup_{0\le t\le T}\left|I^{n,M}_t-\int_0^t f^M\,dW\right| \to 0\quad\text{a.s.}, \tag{2.62}$$
or, equivalently,
$$\sup_{0\le t\le\vartheta^M}\left|I^n_t-\int_0^t f\,dW\right| \to 0\quad\text{a.s.}$$
Hence the property $\mathbb{P}(\vartheta^M=T)\to1$ as $M\to\infty$ yields (2.59). Indeed, the following events are a.s. equal to each other because of this property:
$$\left\{\sup_{0\le t\le T}\left|I^n_t-\int_0^t f\,dW\right|\xrightarrow[n\to\infty]{}0\right\} = \left\{\text{for all } m\in\mathbb{N}:\ \sup_{0\le t\le\vartheta^m}\left|I^n_t-\int_0^t f\,dW\right|\xrightarrow[n\to\infty]{}0\right\} = \bigcap_{m\in\mathbb{N}}\left\{\sup_{0\le t\le\vartheta^m}\left|I^n_t-\int_0^t f\,dW\right|\xrightarrow[n\to\infty]{}0\right\}.$$
Since each term in the last intersection has probability 1, we get (2.59).

For proving (2.62), we prove a somewhat more general statement: we show (2.59) for an arbitrary r.c.l.l. adapted process $f$ such that $\max_{[0,T]}|f_t|\le M$ a.s. In the sequel we suppose $f$ possesses this property. For simplicity we use the notations $f^{n*}_t$ and $I^n_t$ with the following meaning:
$$f^{n*}_t = f_{s_n(k)}\ \text{ for } s_n(k)\le t<s_n(k+1),\qquad I^n_t := \int_0^t f^{n*}\,dW, \tag{2.63}$$
similarly to (2.60) and (2.61).

Using the notation of Theorem G, we first define the set $A_{n,k}$ for non-negative integers $n$ and $k$:
$$A_{n,k} = \left\{\forall i\,(\tau^n_i<T)\ \exists j:\ \tau^n_i\le s_k(j)<\tau^n_{i+1}\wedge T\right\}.$$
Since the sequence $\{s_{n+j}(k)\}_k$ is a refinement of the sequence $\{s_n(k)\}_k$, and $\{s_m(k)\}_k$ becomes dense on every compact interval as $m$ goes to infinity, there exists a positive integer $l(n,k)$ such that for all $l\ge l(n,k)$ the property $A_{n,k}\subset A_{n+1,l}$ holds.
Moreover, for fixed $n$ the probability $\mathbb{P}(A_{n,k})$ tends to 1 as $k\to\infty$. We introduce the following sequences of positive integers $\{k_n\}$ and embedded sets $\{A_n\}$: $k_0=0$, $A_0=\emptyset$,
$$k_{n+1} = \inf\left\{k>k_n \,\middle|\, \mathbb{P}(A_{n+1,k})\ge 1-\frac{2^{-2n}}{C(n)},\ A_n\subset A_{n+1,k}\right\},\qquad A_{n+1}:=A_{n+1,k_{n+1}}, \tag{2.64}$$
where $C(n) = 12\left(\frac43\right)^4 T^2M^4$. By the remarks in the previous paragraph, $\{k_n\}$ and $\{A_n\}$ are well defined and have the following two properties: $A_{n-1}\subset A_n$ and $\mathbb{P}(A_n)\to1$ as $n\to\infty$. Let
$$U_n = \sup_{0\le t\le T}\left|I^n_t-\int_0^t f\,dW\right|.$$
We will prove that $U_{k_n}$ tends to 0 almost surely as $n$ goes to infinity. Using the triangle inequality one gets
$$\mathbb{E}\left(U_{k_n}^2\right) \le \mathbb{E}\left(\sup_{[0,T]}\left|Y^n_t-\int_0^t f\,dW\right|^2\right)+\mathbb{E}\left(\sup_{[0,T]}\left|I^{k_n}_t-Y^n_t\right|^2\right).$$
(For the definition of $Y^n_t$ see Theorem G.) By the definition of $A_n$ one finds that for all $t\le T$,
$$\left|f^n_t-f^{n*}_t\right| \le 2^{-n}\quad\text{on the set } A_n,$$
where $f^n$ is defined in the proof of Theorem G. Using this inequality and (2.54) one gets the following estimate:
$$\mathbb{E}\left(\sup_{[0,T]}\left|I^{k_n}_t-Y^n_t\right|^2\right) \le \mathbb{E}\left(\sup_{[0,T]}\left|I^{k_n}_t-Y^n_t\right|^2\mathbb{I}\{A_n\}\right)+\mathbb{E}\left(\sup_{[0,T]}\left|I^{k_n}_t-Y^n_t\right|^2\mathbb{I}\{A_n^c\}\right)$$
$$\le 4\,\mathbb{E}\int_0^T\left(f^n_t-f^{n*}_t\right)^2dt+\mathbb{P}(A_n^c)\cdot\mathbb{E}\left(\sup_{[0,T]}\left|I^{k_n}_t-Y^n_t\right|^4\right)^{1/2} \le 4T\,2^{-2n}+2^{-2n}. \tag{2.65}$$
The last inequality is valid because of the definition of $A_n$ and the following consideration on the last expectation:
$$\mathbb{E}\left(\sup_{[0,T]}\left|I^{k_n}_t-Y^n_t\right|^4\right) \le \left(\frac43\right)^4\mathbb{E}\left|I^{k_n}_T-Y^n_T\right|^4,$$
because $I^{k_n}_t-Y^n_t$ is a submartingale. A simple calculation on stochastic integrals shows that
$$\mathbb{E}\left|I^{k_n}_T-Y^n_T\right|^4 \le 3\,\mathbb{E}\left(\int_0^T\left(f^{n*}_s-f^n_s\right)^2ds\right)^2.$$
Using the condition $\max_{[0,T]}|f_t|\le M$ a.s., we get that the last expression is smaller than $4T^2M^4$, so
$$\mathbb{E}\left(\sup_{[0,T]}\left|I^{k_n}_t-Y^n_t\right|^4\right) \le 12\left(\frac43\right)^4T^2M^4.$$
Summarizing these estimates, we have
$$\mathbb{E}\left(U_{k_n}^2\right) \le (8T+1)\,2^{-2n}.$$
This implies that $\mathbb{E}U_{k_n} \le \sqrt{8T+1}\,2^{-n}$, and hence it follows that
$$\mathbb{E}\sum_n U_{k_n} = \sum_n\mathbb{E}U_{k_n} \le \sqrt{8T+1}\sum_n 2^{-n} < \infty.$$
As a consequence, one gets $\sum_n U_{k_n}<\infty$ a.s., which gives the required conclusion along the subsequence $\{k_n\}$. Now we will prove that $U_m\to0$ almost surely as $m$ goes to infinity.
Let $n$ be such that $k_n\le m<k_{n+1}$. Then
$$U_m \le U_{k_n}+\sup_{[0,T]}\left|I^{k_n}_t-I^m_t\right|.$$
Since the sequence $\{s_m(j)\}_j$ is a refinement of the sequence $\{s_{k_n}(j)\}_j$, one has $A_{n-1}\subset A_{n,m}$. Therefore
$$\left|f^{k_n}_t-f^m_t\right| \le 2\cdot2^{-n+1}\quad\text{on } A_{n-1},$$
so
$$\sup_{[0,T]}\left|I^{k_n}_t-I^m_t\right| \le T\,2^{-n+2}\quad\text{on } A_{n-1},$$
from which one obtains
$$U_m \le U_{k_n}+T\,2^{-n+2}\quad\text{on } A_{n-1}.$$
By the definition of $\{A_n\}_n$, $\mathbb{P}(A_n)\to1$ and $A_n\subset A_{n+1}$, so we get that $U_m$ tends to 0 a.s., as required. ∎

The following general form of Theorem 7 can be obtained by using the DDS construction.

Theorem 8. Let $M$ be a continuous $(\mathcal{F}_t)$-martingale with quadratic variation such that $\langle M\rangle_\infty=\infty$. Let $Y$ be an r.c.l.l. $(\mathcal{F}_t)$-adapted process and for $n\ge1$ let $\{\tau_n(i):i\ge0\}$ be the stopping times defined in (2.13). Let $(Y^n_t)$ and $(I^n_t)$ be defined as follows: for arbitrary $t$ let $k$ be such that $\tau_n(k)\le t<\tau_n(k+1)$, $k\ge0$, and define
$$Y^n_t := Y_{\tau_n(k)}, \tag{2.66}$$
$$I^n_t = \int_0^t Y^n_s\,dM_s = \sum_{i=0}^{k-1}Y_{\tau_n(i)}\left(M_{\tau_n(i+1)}-M_{\tau_n(i)}\right)+Y_{\tau_n(k)}\left(M_t-M_{\tau_n(k)}\right).$$
Then for all $K<\infty$ we have
$$\sup_{0\le t\le K}\left|I^n_t-\int_0^t Y\,dM\right| \to 0\quad\text{a.s. } (n\to\infty). \tag{2.67}$$

The next important corollary is a simple consequence of this theorem.

Corollary 4. Under the assumptions of Theorem 8 one obtains the following convergence result:
$$\sup_{0\le t\le K}\left|\sum_{\tau_n(i+1)\le t}Y_{\tau_n(i)}\left(M_{\tau_n(i+1)}-M_{\tau_n(i)}\right)-\int_0^t Y\,dM\right| \to 0\quad\text{a.s. } (n\to\infty).$$
This corollary implies that $I^n_t$ can be thought of as a stochastic sum of stochastically weighted, independent, identically distributed, symmetric, $\pm2^{-n}$-valued random variables.

Proof of Theorem 8. We want to apply Theorem 7. To this end, consider the following assertion for an $(\mathcal{F}_t)$-progressive process $Y$:
$$\int_0^t Y_s\,dM_s = \int_0^t Y_{T_{\langle M\rangle_s}}\,dM_s = \int_0^t Y_{T_{\langle M\rangle_s}}\,dB_{\langle M\rangle_s} = \int_0^{\langle M\rangle_t}Y_{T_s}\,dB_s, \tag{2.68}$$
where $T$ is the quasi-inverse of $\langle M\rangle$ and $B$ is the DDS-Brownian motion of $M$. The first equality holds because the intervals of constancy are the same for $M$ and for $\langle M\rangle$, so $M$ is constant on the intervals $[s,T_{\langle M\rangle_s}]$. The third equality follows from Proposition 1.4 in [23, Chapter V]
for $C$-continuous time-changed stochastic integrals:

Lemma G. If $C=(C_u)$ is a time-change and $H$ is an $(\mathcal{F}_t)$-progressive process, then $H_{C_u}$ is $(\mathcal{F}_{C_u})$-progressive, and if $X$ is a $C$-continuous process of finite variation, then
$$\int_{C_0}^{C_t}H_s\,dX_s = \int_0^t\mathbb{I}\{C_u<\infty\}\,H_{C_u}\,dX_{C_u}.$$
Here a process $X$ is said to be $C$-continuous if $X$ is constant on each interval $[C_{u-},C_u]$.

In our present case $B$ is $\langle M\rangle$-continuous and $\langle M\rangle_t<\infty$, so (2.68) follows.

Let $\sigma_k := \inf\{t>0\mid\langle M\rangle_t>k\}$ and denote by $M^{\sigma_k}$ the stopped martingale. Using (2.68) one obtains
$$\sup_{0\le t\le K\wedge\sigma_k}\left|\int_0^t Y^n_s\,dM_s-\int_0^t Y_s\,dM_s\right| = \sup_{0\le t\le K}\left|\int_0^t Y^n_s\,dM^{\sigma_k}_s-\int_0^t Y_s\,dM^{\sigma_k}_s\right|$$
$$= \sup_{0\le t\le\langle M^{\sigma_k}\rangle_K}\left|\int_0^t Y^n_{T_s}\,dB_s-\int_0^t Y_{T_s}\,dB_s\right| \le \sup_{0\le t\le k}\left|\int_0^t Y^n_{T_s}\,dB_s-\int_0^t Y_{T_s}\,dB_s\right|. \tag{2.69}$$
Here $\int_0^{\langle M\rangle_t}Y^n_{T_s}\,dB_s$ is equal to the following sum:
$$\int_0^{\langle M\rangle_t}Y^n_{T_s}\,dB_s = \int_0^t Y^n_s\,dM_s = \sum_{i=0}^{k-1}Y_{\tau_n(i)}\left(M_{\tau_n(i+1)}-M_{\tau_n(i)}\right)+Y_{\tau_n(k)}\left(M_t-M_{\tau_n(k)}\right)$$
$$= \sum_i Y_{T_{\langle M\rangle_{\tau_n(i)}}}\left(B_{\langle M\rangle_{\tau_n(i+1)}}-B_{\langle M\rangle_{\tau_n(i)}}\right)+Y_{T_{\langle M\rangle_{\tau_n(k)}}}\left(B_{\langle M\rangle_t}-B_{\langle M\rangle_{\tau_n(k)}}\right)$$
$$= \sum_i Y_{T_{s_n(i)}}\left(B_{s_n(i+1)}-B_{s_n(i)}\right)+Y_{T_{s_n(k)}}\left(B_{\langle M\rangle_t}-B_{s_n(k)}\right),$$
so
$$\sup_{0\le t\le k}\left|\int_0^t (Y_T)^{n*}_s\,dB_s-\int_0^t Y_{T_s}\,dB_s\right|$$
fulfils the conditions of Theorem 7; therefore it converges to zero as $n$ goes to infinity. Since $\langle M\rangle_\infty=\infty$, $\sigma_k$ also tends to infinity as $k$ tends to infinity. Applying this and the previous conclusion of Theorem 7, we get
$$\sup_{0\le t\le K}\left|\int_0^t Y^n_s\,dM_s-\int_0^t Y_s\,dM_s\right| \to 0\quad\text{a.s.},$$
which was required. ∎

2.5.3 The non-cadlag case

Karandikar's approach provides a method for approximating stochastic integrals whose integrand is a cadlag process. Therefore we cannot apply his method to integrals in which the integrand is the sign function of Brownian motion, $\operatorname{sign}(B_t)$, which is often used as a basis of examples in stochastic analysis. In this case we can carry out our approximating method, which is applicable to a much smaller class of integrands, namely $f'_-(B)$, where $f'_-$ is the left-hand derivative of a difference of two convex functions.
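The discrete counterpart of this situation, a Tanaka-type identity for the simple symmetric random walk, can be checked exactly: with the convention $\operatorname{sign}(0):=0$, the absolute value of the walk equals the signed-step sum plus the number of visits to 0, the discrete analogue of $|B_t| = \int_0^t\operatorname{sign}(B_s)\,dB_s + L^0_t$. The short script below (our own illustration, not taken from the thesis) verifies this identity on a simulated path.

```python
import numpy as np

# Discrete Tanaka identity for the simple symmetric random walk:
#   |S_n| = sum_{k<n} sign(S_k) (S_{k+1} - S_k) + #{k < n : S_k = 0},
# since |S_{k+1}| - |S_k| = sign(S_k) X_{k+1} when S_k != 0, and = 1 when S_k = 0.
rng = np.random.default_rng(42)
steps = rng.choice([-1, 1], size=10_000)
S = np.concatenate([[0], np.cumsum(steps)])

signed_sum = int(np.sum(np.sign(S[:-1]) * steps))  # sign(0) = 0 by convention
visits_to_zero = int(np.sum(S[:-1] == 0))          # discrete local time at 0

assert abs(S[-1]) == signed_sum + visits_to_zero
```

Unlike the Brownian case, the identity here is exact for every path and every $n$; the thesis's scaling then turns the visit count into the approximate local time appearing in (2.73) below it.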
In [28] Tamás Szabados gave an approximation theorem using a discrete Itô formula. That result was valid for integrals of the form $\int f(B)\,dB$ with $f\in C^2$, i.e. for twice continuously differentiable real functions. Using Theorem 3 and the Itô–Tanaka formula [23], one can prove the following more general statement of this kind.

Theorem 9. Let $f$ be a difference of two convex functions and let $M$ be a continuous local martingale such that $\langle M\rangle_\infty=\infty$ almost surely. Then for arbitrary $K>0$,
$$\sup_{t\in[0,K]}\left|\int_0^t f'_-(M_m(s))\,dM_m(s)-\int_0^t f'_-(M(s))\,dM(s)\right| \to 0 \tag{2.70}$$
almost surely as $m$ tends to infinity.

Proof. Basically, we follow the method of the proof of the Itô–Tanaka theorem [23, Theorem 1.5, Ch. VI]. It is enough to prove the formula for a convex $f$. On every compact interval $I$, $f$ is equal to a convex function $g$ such that $g'$ has compact support. Thus, by stopping $M$ and $M_m$ when they first leave a compact set, it suffices to prove the statement when $f''$ has compact support, in which case there are two constants $\alpha_I$ and $\beta_I$ such that
$$f(x) = \alpha_I x+\beta_I+\frac12\int|x-a|\,f''(da). \tag{2.71}$$
First we prove the statement (2.70) for the Brownian case, that is, $M=B$. Here we have to remark that the stopping does not change the convergence rate and the corresponding probabilities in Theorem A, so we have the same convergence result for the stopped processes $B$ and $B_m$. Proper usage of the above equation leads us to the Itô–Tanaka formula for Brownian motion $B$:
$$f(B(t)) = f(B(0))+\int_0^t f'_-(B(s))\,dB(s)+\frac12\int_{\mathbb{R}}L^a_t\,f''(da). \tag{2.72}$$
For the discrete version of this equation we need the following identity:
$$\int_0^t\operatorname{sign}(B_m(s)-a)\,dB_m(s) = \left|B_m(t)-\lceil a\rceil_m\right|-\left|\lceil a\rceil_m\right|-L^{\lceil a\rceil_m}_m(t)+O(2^{-m}) = \left|B_m(t)-a\right|-\left|a\right|-L^a_m(t)+O(2^{-m}), \tag{2.73}$$
where $\lceil a\rceil_m = \frac{\lceil a2^m\rceil}{2^m}$. Here we used the definition $L^a_m(t)=L^{\lceil a\rceil_m}_m(t)$ (see Corollary 3) and that
$$\left|B_m(t)-\lceil a\rceil_m\right|-\left|B_m(t)-a\right| = O(2^{-m}),\quad\text{where } |O(2^{-m})|\le 2^{-m}.$$
Finally, applying the last three displays together with the identity
$$f'_-(x) = \frac12\int_I\operatorname{sign}(x-a)\,f''(da)+\alpha_I,$$
we obtain the equation
$$f(B_m(t)) = \alpha_IB_m(t)+\beta_I+\frac12\int_{\mathbb{R}}\left|B_m(t)-a\right|f''(da)$$
$$= \alpha_I\left(B_m(t)-B_m(0)\right)+f(B_m(0))+\frac12\int_{\mathbb{R}}\left(\int_0^t \operatorname{sign}(B_m(s)-a)\,dB_m(s)+L^a_m(t)+O(2^{-m})\right)f''(da)$$
$$= \alpha_IB_m(t)+\int_0^t f'_-(B_m(s))\,dB_m(s)-\alpha_I\left(B_m(t)-B_m(0)\right)+\frac12\int_{\mathbb{R}}L^a_m(t)\,f''(da)+\frac12\int_{\mathbb{R}}O(2^{-m})\,f''(da)$$
$$= \int_0^t f'_-(B_m(s))\,dB_m(s)+\frac12\int_{\mathbb{R}}L^a_m(t)\,f''(da)+\frac12\int_{\mathbb{R}}O(2^{-m})\,f''(da).$$
Therefore, we can write
$$\sup_{t\in[0,K]}\left|\int_0^t f'_-(B_m(s))\,dB_m(s)-\int_0^t f'_-(B(s))\,dB(s)\right|$$
$$\le \sup_{t\in[0,K]}\left|f(B_m(t))-f(B(t))\right|+\frac12\int_{\mathbb{R}}\sup_{t\in[0,K]}\left|L^{\lceil a\rceil_m}_m(t)-L^a(t)\right|f''(da)+\frac12\int_{\mathbb{R}}\sup_{t\in[0,K]}\left|O(2^{-m})\right|f''(da).$$
On the right-hand side each term converges almost surely to 0 as $m$ tends to infinity: for the first term this is trivial, and the convergence of the second and third terms can be proved using (2.50) and the Lebesgue dominated convergence theorem. So we get
$$\sup_{t\in[0,K]}\left|\int_0^t f'_-(B_m(s))\,dB_m(s)-\int_0^t f'_-(B(s))\,dB(s)\right| \to 0 \tag{2.74}$$
almost surely as $m$ tends to infinity.

Now let us investigate the general case. We reduce it to the Brownian case by using the DDS construction and the scaling property of the local time. Let us introduce the notation $L^a_X(t)$ for the local time of the process $X$ at level $a$ until time $t$. By Exercise 1.27 in [23, Ch. VI] we have $L^a_M(t) = L^a_B(\langle M\rangle_t) = L^a(\langle M\rangle_t)$, where $B$ denotes the DDS-Brownian motion of $M$. A simple consideration shows that the discrete version of this identity is also valid:
$$L^a_{M_m,m}(t) = L^a_{B_m,m}(N_m(t)) = L^{\lceil a\rceil_m}_m(N_m(t)).$$
Let $\sigma_k$ be the first time when $\langle M\rangle$ exceeds the level $k$. We can write the following estimate for the stopped martingale $M^k(t) = M(t\wedge\sigma_k)$:
$$\sup_{t\in[0,K]}\left|\int_0^t f'_-(M^k_m(s))\,dM^k_m(s)-\int_0^t f'_-(M^k(s))\,dM^k(s)\right| \le \sup_{t\in[0,(\langle M\rangle_K\wedge k)+1]}\left|\int_0^t f'_-(B_m(s))\,dB_m(s)-\int_0^t f'_-(B(s))\,dB(s)\right|$$
on the set $A_{a,m}$ (for the precise definition of $A_{a,m}$ see Lemma 3), after an appropriately large $m$.
By (2.74), $M^k$ fulfils (2.70) for every fixed $k$. Taking into consideration that $\langle M\rangle_\infty=\infty$ (so $\sigma_k\to\infty$), (2.70) is valid for $M$ as well. ∎

Chapter 3

Approximation of the exponential functional of Brownian motion

This chapter is devoted to a special application of our discrete approximation method. Here we investigate the exponential functional of Brownian motion,
$$I_\nu = \int_0^\infty\exp\left(B(t)-\nu t\right)dt,$$
and mainly its discrete version, the exponential functional of a random walk.

3.1 Introduction to the exp functional of Brownian motion

Geometric Brownian motion (originally introduced by the economist P. Samuelson in 1965) plays a fundamental role in the Black–Scholes theory of option pricing, modeling the price process of a stock. It can be given explicitly in terms of Brownian motion (BM) $B$ as