Strong approximation of stochastic processes using random walks

Balázs Székely
Supervisor: Tamás Szabados

Institute of Mathematics, Budapest University of Technology and Economics, 2004

Contents

1 Introduction

2 Approximation of continuous martingales
  2.1 Random walks and the Wiener process
  2.2 Approximation of continuous martingales
    2.2.1 Approximation of the quadratic variation process
    2.2.2 Strong approximation of continuous martingales
  2.3 Symmetrically evolving martingales
    2.3.1 Distributional characterization of Ocone martingales
    2.3.2 Properties of Ocone martingales
    2.3.3 Some examples of Ocone and non-Ocone martingales
  2.4 Approximation of local time, excursion, meander and bridge
    2.4.1 Approximation of local time
    2.4.2 Excursion, bridge and meander: Marchal's construction
  2.5 Pathwise stochastic integration
    2.5.1 Karandikar's integral approximation
    2.5.2 Discretization applied to the integrator
    2.5.3 The non-cadlag case

3 Approximation of the exponential functional of Brownian motion
  3.1 Introduction to the exponential functional of Brownian motion
  3.2 The distribution of the discrete exponential functional
    3.2.1 Review of some fractal notions
    3.2.2 The non-overlapping case
    3.2.3 The general case
  3.3 The moments of the discrete exponential functional
    3.3.1 Permutations with given descent set
  3.4 Approximation of the exponential functional of Brownian motion
    3.4.1 The moments of the exponential functional of BM
    3.4.2 Properties of the exponential functional process

Chapter 1

Introduction

The main concept of this thesis is to draw attention to the fact that, both for theoretical and practical reasons, it is useful to search for strong (i.e. pathwise, almost sure) approximations of stochastic processes by simple random walks (RWs). The prototype of such efforts was the construction of Brownian motion (BM) as an almost sure limit of simple RW paths, given by Frank Knight in 1962 [15]. Later this construction was simplified and somewhat improved by Pál Révész [22] and then by Tamás Szabados [28]. This sort of result states that one can find a sequence of time- and space-scaled random walks (B_m) that converges to a Brownian motion B almost surely, uniformly on every compact interval, as m goes to infinity:

    sup_{t∈[0,T]} |B_m(t) − B(t)| → 0.

Besides the theoretical value of discrete approximations, in some applications the discrete model can be more natural than the continuous one. It also provides a general tool for proving statements about continuous-time stochastic processes: first one proves the discrete version of the statement, and then takes a limit to obtain the continuous version. Of course, we cannot predict in advance which part of this procedure will be easier. In this thesis we present both theoretical results, e.g. the approximation of continuous martingales, and examples in which the discrete approximation method can be carried out naturally. During our research some additional statements turned out to be true which are not closely related to our present subject; we present some of them because the reader may find them interesting.

This study consists of two main parts, Chapter 2 and Chapter 3. Chapter 2 is mainly based on the paper "Strong approximation of continuous local martingales by simple random walks" and some recent unpublished results. In Chapter 3 we present our results on the so-called exponential functional of Brownian motion, which were discussed in the papers "An exponential functional of random walks" and "Moments of an exponential functional of random walks and permutations with given descent sets".

Chapter 2 discusses a generalization of the result of Knight to continuous local martingales M. We show that the quadratic variation process ⟨M,M⟩ can be almost surely uniformly approximated by discrete quadratic variation processes N_m, which are based on stopping times of a Skorohod-type embedding of nested simple RWs into M. This corresponds to an earlier similar result by Karandikar [12]. In Section 2.5 we present some of his related results, for instance the discrete approximation of certain stochastic integrals.

Theorems 2 and 3 give an approximation of M by a nested sequence of RWs B_m, time-changed by ⟨M,M⟩ and N_m, respectively. The approximations converge almost surely, uniformly on bounded intervals.


It is important to note that the DDS Brownian motion W and the quadratic variation ⟨M,M⟩ are not independent in general, and neither are the approximating RW B_m and the discrete quadratic variation N_m. Since this can be a hindrance both in theory and in applications, a necessary and sufficient condition for the independence is given in Theorem 4: W and ⟨M,M⟩ are independent if and only if M has symmetric increments given the past. This is a reformulation of earlier results by Ocone and by Dubins, Émery and Yor. We also present some recent theorems on the properties of Ocone martingales which provide examples of Ocone martingales. These investigations of this special family of martingales can be found in Section 2.3.

The question naturally arises whether our discrete approximation method can be extended to approximate stochastic integrals with respect to continuous martingales. In Section 2.5 it turns out that the construction works for cadlag integrands. To get approximation results for a larger class of integrands, e.g. of the form f′_−(M_t), where f′_− is the left derivative of the difference of two convex functions, we prove that the Brownian local time can be approximated in the same manner as the martingales themselves. This work is done in Section 2.4.

Chapter 3 focuses on a certain application of the discrete approximation of Brownian motion. Here, we investigate the exponential functional of Brownian motion

    I_ν = ∫_0^∞ exp(B(t) − νt) dt,

and mainly its discrete version, the exponential functional of random walks. The properties of the discrete exponential functional are rather different from those of the continuous one: typically its distribution is singular w.r.t. Lebesgue measure, all of its positive integer moments are finite, and they characterize the distribution. On the other hand, using suitable random walk approximations of Brownian motion, the resulting discrete exponential functionals converge a.s. to the exponential functional of Brownian motion; hence their limit distribution is the same as in the continuous case, namely that of the reciprocal of a gamma random variable, and so absolutely continuous w.r.t. Lebesgue measure. In this way we also give a new, elementary proof of an earlier result of Dufresne and Yor. Beyond these results, we have found a recursion for certain moments in the expansion of the moments of the discrete approximation.
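For orientation, a plausible discretization of I_ν replaces B by a scaled simple random walk and the integral by a Riemann sum. The Python sketch below (function name and parameters are our own choices) is only an illustration and need not coincide with the exact discrete exponential functional studied in Chapter 3.

    import numpy as np

    rng = np.random.default_rng(1)

    def discrete_exp_functional(m, nu, horizon):
        # Riemann-sum discretization of I_nu on [0, horizon] with a scaled
        # simple RW in place of B; the infinite upper limit is truncated.
        dt = 2.0**(-2*m)                      # time step of the m-th walk
        n = int(horizon / dt)
        steps = rng.choice([-1.0, 1.0], size=n) * 2.0**(-m)
        B = np.concatenate(([0.0], np.cumsum(steps)))
        t = np.arange(n + 1) * dt
        return np.sum(np.exp(B - nu * t)) * dt

    # e.g. discrete_exp_functional(5, 1.0, 50.0) approximates I_1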

In this thesis, known results are denoted by alphabetical numbering and the results of the authors by arabic numbering.

Chapter 2

Approximation of continuous martingales

2.1 Random walks and the Wiener process

A main tool of this thesis is an elementary construction of the Wiener process (= BM). The specific construction we are going to use in the sequel, taken from [28], is based on a nested sequence of simple random walks that converges to the Wiener process uniformly on bounded intervals with probability 1. This will be called the RW construction in the sequel. One of our intentions in this chapter is to extend the underlying "twist and shrink" algorithm to continuous local martingales. We summarize the major steps of the RW construction here; see [29] as well. We start with an infinite matrix of i.i.d. random variables X_m(k), P{X_m(k) = ±1} = 1/2 (m ≥ 0, k ≥ 1), defined on the same underlying probability space (Ω, F, P). Each row of this matrix is the basis of an approximation of the Wiener process with a dyadic step size Δt = 2^{−2m} in time and a corresponding step size Δx = 2^{−m} in space, as illustrated by Table 2.1.

Table 2.1: The starting setting for the RW construction of BM

    Δt       Δx       i.i.d. sequence                  RW
    1        1        X_0(1), X_0(2), X_0(3), ...      S_0(n) = Σ_{k=1}^n X_0(k)
    2^{−2}   2^{−1}   X_1(1), X_1(2), X_1(3), ...      S_1(n) = Σ_{k=1}^n X_1(k)
    2^{−4}   2^{−2}   X_2(1), X_2(2), X_2(3), ...      S_2(n) = Σ_{k=1}^n X_2(k)
    ...      ...      ...                              ...
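As a minimal illustration of Table 2.1 (a sketch only; the function name scaled_rw and the use of NumPy are our own choices), one row of the matrix, already scaled by Δt = 2^{−2m} and Δx = 2^{−m}, can be generated as follows.

    import numpy as np

    rng = np.random.default_rng(0)

    def scaled_rw(m, K=1.0):
        # Row m of Table 2.1 after scaling: time step 2^{-2m}, space step
        # 2^{-m}, evaluated at the grid points of [0, K].
        n = int(K * 2**(2*m))                    # steps needed to cover [0, K]
        X = rng.choice([-1, 1], size=n)          # i.i.d. X_m(1), X_m(2), ...
        S = np.concatenate(([0], np.cumsum(X)))  # S_m(0), S_m(1), ...
        t = np.arange(n + 1) * 2.0**(-2*m)
        return t, 2.0**(-m) * S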

The second step of the construction is twisting. From the independent random walks we want to create dependent ones, so that after shrinking the temporal and spatial step sizes, each consecutive RW becomes a refinement of the previous one. Since the spatial unit will be halved at each consecutive row, we define stopping times by T_m(0) = 0 and, for k ≥ 0,

    T_m(k+1) = min{ n : n > T_m(k), |S_m(n) − S_m(T_m(k))| = 2 }   (m ≥ 1).   (2.1)

These are the random time instants when a RW visits an even integer different from the previous one. After shrinking the spatial unit by half, a suitable modification of this RW will visit the same integers in the same order as the previous RW. We operate here on each point ω ∈ Ω of the sample space separately, i.e. we fix a sample path of each RW. We define twisted RWs S̃_m recursively for k = 1, 2, ... using S̃_{m−1}, starting with S̃_0(n) = S_0(n) (n ≥ 0). With each fixed m we proceed for k = 0, 1, 2, ... successively, and for every n in the corresponding bridge, T_m(k) < n ≤ T_m(k+1). A bridge is flipped if its sign differs from the desired one:

    X̃_m(n) = X_m(n)    if S_m(T_m(k+1)) − S_m(T_m(k)) = 2 X̃_{m−1}(k+1),
    X̃_m(n) = −X_m(n)   otherwise,

and then S̃_m(n) = S̃_m(n−1) + X̃_m(n). Then S̃_m(n) (n ≥ 0) is still a simple symmetric random walk [28, Lemma 1]. The twisted RWs have the desired refinement property:

    (1/2) S̃_m(T_m(k)) = S̃_{m−1}(k)   (m ≥ 1, k ≥ 0).

The last step of the RW construction is shrinking. The sample paths of S̃_m(n) (n ≥ 0) can be extended to continuous functions by linear interpolation; this way one gets S̃_m(t) (t ≥ 0) for real t. Then we define the mth approximating RW by

    B̃_m(t) = 2^{−m} S̃_m(t 2^{2m}).
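The twist step lends itself to a short implementation. In the sketch below (the helper name twist is ours), the stopping times T_m are computed from the untwisted walk S_m, as in (2.1), so flipping a bridge does not affect them.

    import numpy as np

    def twist(X_prev_twisted, X_new):
        # One twist step: X_prev_twisted holds the +-1 increments of the
        # twisted walk S~_{m-1}, X_new fresh i.i.d. +-1 increments of S_m.
        S = np.concatenate(([0], np.cumsum(X_new)))  # original S_m, gives T_m
        Xt = X_new.copy()
        T_k, k = 0, 0                                # current T_m(k) and its k
        for n in range(1, len(S)):
            if abs(S[n] - S[T_k]) == 2:              # n = T_m(k+1): bridge ends
                if S[n] - S[T_k] != 2 * X_prev_twisted[k]:
                    Xt[T_k:n] = -Xt[T_k:n]           # flip the whole bridge
                T_k, k = n, k + 1
                if k == len(X_prev_twisted):
                    break
        return Xt[:T_k]                              # increments of S~_m

    # Shrinking then gives B~_m(k 2^{-2m}) = 2^{-m} * (partial sums of the output).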

Using the definition of T_m and B̃_m we also get the general refinement property

    B̃_{m+1}(T_{m+1}(k) 2^{−2(m+1)}) = B̃_m(k 2^{−2m})   (m ≥ 0, k ≥ 0).   (2.2)

Note that a refinement takes the same dyadic values in the same order as the previous shrunken walk, but in general with a time lag:

    T_{m+1}(k) 2^{−2(m+1)} − k 2^{−2m} ≠ 0.   (2.3)

Next we quote some important facts from [28] about the above RW construction that will be used in the sequel. These are stated in somewhat stronger forms than in [28], but they can be read easily from the proofs there, cf. Lemmas 2-4 and Theorem 3 of that reference.

Lemma A. Suppose that X_1, X_2, ..., X_N is an i.i.d. sequence of random variables, E(X_k) = 0, Var(X_k) = 1, and their moment generating function is finite in a neighborhood of 0. Let S_j = X_1 + ··· + X_j, 1 ≤ j ≤ N. Then for any C > 1 and N ≥ N_0(C) one has

    P{ sup_{1≤j≤N} |S_j| ≥ (2CN log N)^{1/2} } ≤ 2 N^{1−C}.

We mention that this basic fact, which appears in the above-mentioned reference [28], essentially depends on a large deviation theorem.

We have a more convenient result in a special case of Hoeffding's inequality, cf. [10]. Let X_1, X_2, ... be a sequence of bounded i.i.d. random variables such that b_i ≤ X_i ≤ a_i, and let S_n = Σ_{i=1}^n X_i. Then by Hoeffding's inequality, for any x > 0 we have

    P{ |S_n − E(S_n)| ≥ x ( (1/4) Σ_{i=1}^n (a_i − b_i)² )^{1/2} } ≤ 2 e^{−x²/2}.

If E(X_i) = 0 and b_i = −a_i here, then (1/4) Σ_{i=1}^n (a_i − b_i)² = Σ_{i=1}^n a_i² = Var(S_n) if and only if X_i = a_i X_i′, where P{X_i′ = ±1} = 1/2, 1 ≤ i ≤ n.

Thus if S = Σ_r a_r X_r′, where not all a_r are zero and Var(S) = Σ_r a_r² < ∞, we get

    P{ |S| ≥ x (Var(S))^{1/2} } ≤ 2 e^{−x²/2}   (x ≥ 0).   (2.4)

The summation above may extend either to finitely many or to countably many terms. Let S_1, S_2, ..., S_N be arbitrary sums of the above type: S_k = Σ_r a_{kr} X_{kr}′, P{X_{kr}′ = ±1} = 1/2, 1 ≤ k ≤ N, where X_{kr}′ and X_{ls}′ can be dependent when k ≠ l. Then by the inequality (2.4) we obtain the following analog of Lemma A: for any C > 1 and N ≥ 1,

    P{ sup_{1≤k≤N} |S_k| ≥ (2C log N)^{1/2} sup_{1≤k≤N} (Var(S_k))^{1/2} }
        ≤ Σ_{k=1}^N P{ |S_k| ≥ (2C log N Var(S_k))^{1/2} } ≤ 2N e^{−C log N} = 2 N^{1−C}.   (2.5)

Lemma A easily implies that the time lags (2.3) are uniformly small if m is large enough.

Lemma B. For any K > 0, C > 1, and for any m ≥ m_0(C), we have

    P{ sup_{0≤k2^{−2m}≤K} |T_{m+1}(k) 2^{−2(m+1)} − k 2^{−2m}| ≥ (3/2) (CK log* K)^{1/2} m^{1/2} 2^{−m} } ≤ 2 (K 2^{2m})^{1−C},

where log* x = max{1, log x}. This lemma and the refinement property (2.2) imply the uniform closeness of two consecutive approximations if m is large enough.

Lemma C. For any K > 0, C > 1, and for any m ≥ m_1(C), we have

    P{ sup_{0≤k2^{−2m}≤K} |B̃_{m+1}(k 2^{−2m}) − B̃_m(k 2^{−2m})| ≥ K_*^{1/4} (log* K)^{3/4} m 2^{−m/2} } ≤ 3 (K 2^{2m})^{1−C},

where K_* = max{1, K}. Based on this lemma, it is not difficult to show the following convergence result.

Theorem A. The shrunken RWs B̃_m(t) (t ≥ 0, m = 0, 1, 2, ...) almost surely converge to a Wiener process W(t) (t ≥ 0), uniformly on any compact interval [0, K], K > 0. For any K > 0, C ≥ 3/2, and for any m ≥ m_2(C), we have

    P{ sup_{0≤t≤K} |W(t) − B̃_m(t)| ≥ K_*^{1/4} (log* K)^{3/4} m 2^{−m/2} } ≤ 6 (K 2^{2m})^{1−C}.

Now taking C = 3 in Theorem A and using the Borel-Cantelli lemma, we get

    sup_{0≤t≤K} |W(t) − B̃_m(t)| < O(1) m 2^{−m/2}   a.s. (m → ∞)

and

    sup_{0≤t≤K} |W(t) − B̃_m(t)| < K^{1/4} (log K)^{3/4}   a.s. (K → ∞)

for any m large enough, m ≥ m_2(3).

Next we are going to study the properties of another nested sequence of random walks, obtained by Skorohod embedding. This sequence is not identical to, though asymptotically equivalent with, the above RW construction, cf. [28, Theorem 4]. Given a Wiener process W, first we define the stopping times which yield the Skorohod embedded process B_m(k 2^{−2m}) into W. For every m ≥ 0 let s_m(0) = 0 and

    s_m(k+1) = inf { s : s > s_m(k), |W(s) − W(s_m(k))| = 2^{−m} }   (k ≥ 0).   (2.6)

With these stopping times the embedded process is, by definition,

    B_m(k 2^{−2m}) = W(s_m(k))   (m ≥ 0, k ≥ 0).   (2.7)

This definition of B_m can be extended to any real t ≥ 0 by pathwise linear interpolation. The next lemma describes some useful facts about the relationship between B̃_m and B_m. These follow from [28, Lemmas 5, 7 and Theorem 4], with some minor modifications.

Roughly speaking, B̃_m is more useful when one wants to generate stochastic processes from scratch, while B_m is more advantageous when one needs a discrete approximation of a given process, as in the case of stochastic integration.
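On a path sampled on a fine grid, the Skorohod stopping times (2.6) can be approximated by first-crossing detection. The following sketch (our own simplification: exact hitting times are replaced by the first grid time at which the increment reaches 2^{−m}) returns approximations of s_m(k) and of the embedded values (2.7).

    import numpy as np

    def skorohod_embed(W, dt, m):
        # Approximate s_m(k) and B_m(k 2^{-2m}) = W(s_m(k)) from samples
        # W[i] ~ W(i*dt). The embedded value is rounded to the hit level, so
        # consecutive embedded values differ by exactly 2^{-m}.
        eps = 2.0**(-m)
        s, vals, anchor = [0.0], [W[0]], W[0]
        for i in range(1, len(W)):
            if abs(W[i] - anchor) >= eps:
                anchor += eps * np.sign(W[i] - anchor)
                s.append(i * dt)
                vals.append(anchor)
        return np.array(s), np.array(vals)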

Lemma D. For any C ≥ 3/2, K > 0, take the following subset of the sample space:

    A_m = { sup_{n>m} sup_{0≤k2^{−2m}≤K} |2^{−2n} T_{m,n}(k) − k 2^{−2m}| < 6 (C K_* log* K)^{1/2} m^{1/2} 2^{−m} },   (2.8)

where T_{m,n}(k) = T_n ∘ T_{n−1} ∘ ··· ∘ T_m(k) for n > m ≥ 0 and k ≥ 0. Then for any m ≥ m_3(C),

    P{A_m^c} ≤ 4 (K 2^{2m})^{1−C}.

Moreover, lim_{n→∞} 2^{−2n} T_{m,n}(k) = t_m(k) exists almost surely, and on the set A_m we have

    B̃_m(k 2^{−2m}) = W(t_m(k))   (0 ≤ k 2^{−2m} ≤ K),

cf. (2.7). Further, on A_m, except for a zero probability subset, s_m(k) = t_m(k) and

    sup_{0≤k2^{−2m}≤K} |s_m(k) − k 2^{−2m}| ≤ 6 (C K_* log* K)^{1/2} m^{1/2} 2^{−m}   (m ≥ m_3(C)).   (2.9)

If the Wiener process is built by the RW construction described above, using a sequence B̃_m (m ≥ 0) of nested RWs, and then one constructs the Skorohod embedded RWs B_m (m ≥ 0), it is natural to ask what the approximation properties of the latter are. The answer, given by the next lemma, is that they are essentially the same as those of B̃_m, cf. Theorem A.

Lemma 1. For every K > 0, C ≥ 3/2 and m ≥ m_3(C) we have

    P{ sup_{0≤t≤K} |W(t) − B_m(t)| ≥ K_*^{1/4} (log* K)^{3/4} m 2^{−m/2} } ≤ 10 (K 2^{2m})^{1−C}.

Proof. By the triangle inequality,

    sup_{0≤t≤K} |W(t) − B_m(t)| ≤ sup_{0≤t≤K} |W(t) − B̃_m(t)| + sup_{0≤t≤K} |B̃_m(t) − B_m(t)|.

By Lemma D and equation (2.7), on the set A_m defined by (2.8) we have

    B̃_m(k 2^{−2m}) = W(s_m(k)) = B_m(k 2^{−2m}),

except for a zero probability subset, when m ≥ m_3(C). Since both B̃_m(t) and B_m(t) are obtained by pathwise linear interpolation based on the vertices at k 2^{−2m} ∈ [0, K], they are identical on A_m, except for a zero probability subset of it, when m ≥ m_3(C). Thus

    P{ sup_{0≤t≤K} |W(t) − B_m(t)| ≥ K_*^{1/4} (log* K)^{3/4} m 2^{−m/2} }
        ≤ P{A_m^c} + P{ sup_{0≤t≤K} |W(t) − B̃_m(t)| ≥ K_*^{1/4} (log* K)^{3/4} m 2^{−m/2} }.

Then by Theorem A and Lemma D we get the statement of the lemma.

2.2 Approximation of continuous martingales

Besides the RW construction of standard Brownian motion, the other main tool applied in this section is a theorem of Dambis (1965) and Dubins-Schwarz (1965) and an extension of it, cf. Theorems B and C below. Briefly, these theorems state that any continuous local martingale (M(t), t ≥ 0) can be transformed into a standard Brownian motion by a time-change. Somewhat loosely speaking, the resulting Brownian motion then takes the same values in the same order as M(t); only the corresponding time instants may differ. These and other necessary facts about continuous local martingales will be taken from, and discussed in the style of, [23] in the sequel.

Below it is supposed that an increasing family of sub-σ-algebras (F_t, t ≥ 0) is given in the probability space (Ω, F, P) and that the given continuous local martingale M is adapted to it. For a continuous local martingale M(t) vanishing at 0, its quadratic variation ⟨M,M⟩_t is a process with almost surely continuous and non-decreasing sample paths, vanishing at 0. This will be one of the two time-changes we are going to use in the sequel. The other one is a quasi-inverse of the quadratic variation:

    T_s = inf{ t : ⟨M,M⟩_t > s },   (2.10)

where inf(∅) = ∞ by definition. Then the sample paths of the process T_s are almost surely increasing, but only right-continuous, since such a path has a jump at any value where the quadratic variation has a constant level-stretch. Besides, T_s may take the value ∞. The duality between the two time-changes is expressed by ⟨M,M⟩_t = inf{ s : T_s > t }. Observe that T_s cannot have constant level-stretches, since this would imply jumps of ⟨M,M⟩_t. Also, the continuity of ⟨M,M⟩_t gives that ⟨M,M⟩_{T_s} = s (s ≥ 0), while we have only T_{⟨M,M⟩_t} ≥ t (t ≥ 0) in the opposite direction. It is clear that

    ⟨M,M⟩_t < s ⟹ t < T_s,   but   t < T_s ⟹ ⟨M,M⟩_t ≤ s,   (2.11)

while

    ⟨M,M⟩_t ≤, ≥, > s ⟺ t ≤, ≥, > T_s,   (2.12)

respectively.

Theorem B. [23, V (1.6), p. 181] If M is a continuous (F_t)-local martingale vanishing at 0 and such that ⟨M,M⟩_∞ = ∞ a.s., then W(s) = M(T_s) is an (F_{T_s})-Brownian motion and M(t) = W(⟨M,M⟩_t).

A similar statement is true when ⟨M,M⟩_∞ < ∞ is possible. Note that on the set {⟨M,M⟩_∞ < ∞} the limit M(∞) = lim_{t→∞} M(t) exists with probability 1, cf. [23, IV (1.26), p. 131].

Theorem C. [23, V (1.7), p. 182] If M is a continuous (F_t)-local martingale vanishing at 0 and such that ⟨M,M⟩_∞ < ∞ with positive probability, then there exists an enlargement (Ω̃, F̃_t, P̃) of (Ω, F_{T_t}, P) and a Wiener process β̃ on Ω̃ independent of M such that the process

    W(s) = M(T_s)                    if s < ⟨M,M⟩_∞,
    W(s) = M(∞) + β̃(s − ⟨M,M⟩_∞)    if s ≥ ⟨M,M⟩_∞,

is a standard Brownian motion and M(t) = W(⟨M,M⟩_t) for t ≥ 0.

From now on, W will always refer to the Wiener process obtained from M by the above time-change, the so-called DDS Wiener process (or DDS Brownian motion) of M. Now Skorohod-type stopping times can be defined for M, similarly as for W in (2.6). For m ≥ 0, let τ_m(0) = 0 and

    τ_m(k+1) = inf { t : t > τ_m(k), |M(t) − M(τ_m(k))| = 2^{−m} }   (k ≥ 0).   (2.13)

The (m+1)st sequence is a refinement of the mth in the sense that (τ_m(k))_{k=0}^∞ is a subsequence of (τ_{m+1}(j))_{j=0}^∞, so that for any k ≥ 0 there exist j_1 and j_2 with τ_{m+1}(j_1) = τ_m(k) and τ_{m+1}(j_2) = τ_m(k+1), where the difference j_2 − j_1 ≥ 2 is even.

Lemma 2. With the stopping times defined by (2.13) from a continuous local martingale M, one can directly obtain the sequence of shrunken RWs that almost surely converges to the DDS Wiener process W of M, cf. (2.7):

    B_m(k 2^{−2m}) = W(s_m(k)) = M(τ_m(k)),   s_m(k) = ⟨M,M⟩_{τ_m(k)}

[but τ_m(k) ≤ T_{s_m(k)}], where for m ≥ 0, the non-negative integer k is taking values (depending on ω) until s_m(k) ≤ ⟨M,M⟩_∞.

Proof. By Theorems B and C it follows that W(⟨M,M⟩_{τ_m(k)}) = M(τ_m(k)). This implies that s_m(k) ≤ ⟨M,M⟩_{τ_m(k)}. Consider first the case k = 1. If s_m(1) < ⟨M,M⟩_{τ_m(1)} held, then T_{s_m(1)} < τ_m(1) would follow by (2.12), and this would lead to a contradiction because M(T_{s_m(1)}) = W(s_m(1)) = ±2^{−m}. For values k > 1, induction with a similar argument shows the statement of the lemma.

In the sequel B_m will always denote the sequence of shrunken RWs defined by Lemma 2.

2.2.1 Approximation of the quadratic variation process

Our next objective is to show that the quadratic variation of M can be obtained as an almost sure limit of a process related to the above stopping times, which we will call a discrete quadratic variation process:

    N_m(t) = 2^{−2m} #{ r : r > 0, τ_m(r) ≤ t }   (2.14)
           = 2^{−2m} #{ r : r > 0, s_m(r) ≤ ⟨M,M⟩_t }   (t ≥ 0).

Clearly, the paths of N_m are non-decreasing pure jump functions, the jump times being exactly the stopping times τ_m(k). Moreover, N_m(τ_m(k)) = k 2^{−2m}, and the magnitude of each jump is the constant 2^{−2m} when m is fixed.
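Computationally, N_m is a running count of crossing times. The sketch below (names are ours; crossings are detected on a sampling grid rather than exactly) approximates the stopping times (2.13) and evaluates (2.14).

    import numpy as np

    def crossing_times(M, dt, m):
        # Grid approximation of tau_m(1), tau_m(2), ... from samples
        # M[i] ~ M(i*dt): successive times at which the path has moved by
        # 2^{-m} from the level of the previous crossing.
        eps, anchor, taus = 2.0**(-m), M[0], []
        for i in range(1, len(M)):
            if abs(M[i] - anchor) >= eps:
                anchor += eps * np.sign(M[i] - anchor)
                taus.append(i * dt)
        return np.array(taus)

    def discrete_qv(taus, m, t_grid):
        # N_m(t) = 2^{-2m} #{r > 0 : tau_m(r) <= t}, evaluated on t_grid.
        return 2.0**(-2*m) * np.searchsorted(taus, t_grid, side="right")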

Lemma 3. Let M be a continuous local martingale vanishing at 0, let ⟨M,M⟩ be the quadratic variation, T its quasi-inverse (2.10), and N_m the discrete quadratic variation defined in (2.14). Fix K > 0 and take a sequence a_m = O(m^{−2−ε}) 2^{2m} K with some ε > 0, where a_m ≥ K ∨ 1 for any m ≥ 1 (x ∨ y = max(x, y), x ∧ y = min(x, y)).

(a) Then for any C ≥ 3/2 and m ≥ m_4(C) we have

    P{ sup_{0≤t≤K} |⟨M,M⟩_t ∧ a_m − N_m(t ∧ T_{a_m})| ≥ 12 (C a_m log* a_m)^{1/2} m^{1/2} 2^{−m} } ≤ 3 (a_m 2^{2m})^{1−C}.

(b) Suppose that the quadratic variation satisfies the following tail-condition: a sequence (a_m) fulfilling the above assumptions can be chosen so that

    P{ ⟨M,M⟩_t > a_m } ≤ D(t) m^{−1−ε},   (2.15)

where D(t) is some finite valued function of t ∈ R_+. Then for any C ≥ 3/2 and m ≥ m_4(C) it follows that

    P{ sup_{0≤t≤K} |⟨M,M⟩_t − N_m(t)| ≥ 12 (C a_m log* a_m)^{1/2} m^{1/2} 2^{−m} } ≤ 3 (a_m 2^{2m})^{1−C} + D(K) m^{−1−ε}.

Proof. The basic idea of the proof is that the Skorohod stopping times of a Wiener process are asymptotically uniformly distributed, as shown by (2.9), while the case of a continuous local martingale can be reduced to the former by the DDS representation, cf. Lemma 2.

Introduce the abbreviation h_{a,m} = 11.1 (C a_m log* a_m)^{1/2} m^{1/2} 2^{−m}. Then h_{a,m} = O(m^{−ε/2}) → 0 as m → ∞. We need a truncation here using the sequence a_m, since the quadratic variation ⟨M,M⟩_t is not a bounded random variable in general. By (2.12) and (2.14),

    N_m(t ∧ T_{a_m}) = 2^{−2m} #{ r : r > 0, τ_m(r) ≤ t ∧ T_{a_m} } = 2^{−2m} #{ r : r > 0, s_m(r) ≤ ⟨M,M⟩_t ∧ a_m }.

On the event

    A_{a,m} = { sup_{0≤r2^{−2m}≤2a_m} |s_m(r) − r 2^{−2m}| ≤ h_{a,m} },

if r = ⌊(⟨M,M⟩_t ∧ a_m + h_{a,m}) 2^{2m}⌋ + 1, then s_m(r) > ⟨M,M⟩_t ∧ a_m, so s_m(r) is not counted in N_m(t ∧ T_{a_m}). Observe here that a_m + h_{a,m} + 2^{−2m} ≤ 2a_m if m is large enough, m ≥ m_4(C), where we also suppose that m_4(C) ≥ m_3(C), m_3(C) being defined by Lemma D. This explains why the sup is taken for r 2^{−2m} ≤ 2a_m in the definition of A_{a,m}. Similarly, on A_{a,m}, if r = ⌊(⟨M,M⟩_t ∧ a_m − h_{a,m}) 2^{2m}⌋, then s_m(r) ≤ ⟨M,M⟩_t ∧ a_m, so s_m(r) must be counted in N_m(t ∧ T_{a_m}). Hence

    ⟨M,M⟩_t ∧ a_m − h_{a,m} − 2^{−2m} ≤ N_m(t ∧ T_{a_m}) ≤ ⟨M,M⟩_t ∧ a_m + h_{a,m} + 2^{−2m}   (2.16)

for any t ∈ [0, K] on A_{a,m}.

Now 6 (C 2a_m log*(2a_m))^{1/2} m^{1/2} 2^{−m} ≤ 11.1 (C a_m log* a_m)^{1/2} m^{1/2} 2^{−m} = h_{a,m}, since log*(2a_m) ≤ (1 + log 2) log* a_m. Hence it follows by Lemma D that

    P{A_{a,m}^c} ≤ 4 (2a_m 2^{2m})^{1−C} ≤ 3 (a_m 2^{2m})^{1−C}

when C ≥ 3/2 and m ≥ m_4(C). Noticing that 0.9 (C a_m log* a_m)^{1/2} m^{1/2} 2^{−m} > 2^{−2m} for any m ≥ 1, this and (2.16) prove (a).

Part (b) follows from (a), the inequality

    |⟨M,M⟩_t − N_m(t)| ≤ |⟨M,M⟩_t − ⟨M,M⟩_t ∧ a_m| + |⟨M,M⟩_t ∧ a_m − N_m(t ∧ T_{a_m})| + |N_m(t ∧ T_{a_m}) − N_m(t)|,   (2.17)

and from the following simple relationships between events:

    { ⟨M,M⟩_t ∧ a_m ≠ ⟨M,M⟩_t } = { ⟨M,M⟩_t > a_m }

and

    { N_m(t ∧ T_{a_m}) ≠ N_m(t) } = { t > T_{a_m} } = { ⟨M,M⟩_t > a_m },

cf. (2.12).

We mention that when the quadratic variation ⟨M,M⟩_t is almost surely bounded above by a finite valued function g(t) for t > 0, statements (a) and (b) of Lemma 3 simplify as

    P{ sup_{0≤t≤K} |⟨M,M⟩_t − N_m(t)| ≥ 12 (C g_*(K) log* g(K))^{1/2} m^{1/2} 2^{−m} } ≤ 3 (g(K) 2^{2m})^{1−C}

for any K > 0, C ≥ 3/2 and m ≥ m_4(C).

The statement of the next theorem corresponds to the main result in Karandikar [12], though the method applied here is different and we also give a rate of convergence.

Theorem 1. Using the same notations as in Lemma 3, we have

    sup_{0≤t≤K} |⟨M,M⟩_t − N_m(t)| < O(1) m^{1/2} 2^{−m}   a.s. (m → ∞)

and

    sup_{0≤t≤K} |⟨M,M⟩_t − N_m(t)| < K^{1/2} (log K)^{1/2}   a.s. (K → ∞)

for any m large enough, m ≥ m_4(3).

Proof. To show the first statement, take e.g. C = 3/2 and a_m = K log(m+2) in Lemma 3 (a) and consider the inequality (2.17). Since ⟨M,M⟩_K is finite-valued and a_m → ∞, if m is large enough (depending on ω), ⟨M,M⟩_K < a_m holds, and then t < T_{a_m} holds as well by (2.11). These remarks show that the first and the third terms on the right hand side of inequality (2.17) are zero if m is large enough. Further, Lemma 3 (a) can be applied to the second term. This, with the Borel-Cantelli lemma, proves the first statement. The second statement follows similarly from Lemma 3 (a) by the Borel-Cantelli lemma, taking C = 3 and a_m = K.
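As a quick numerical illustration of Theorem 1: for M = W one has ⟨M,M⟩_t = t, so the discrete quadratic variation of a simulated Wiener path should be uniformly close to t. The following self-contained sketch (simulation parameters are arbitrary choices of ours) checks this.

    import numpy as np

    rng = np.random.default_rng(2)
    dt, K, m = 1e-5, 1.0, 4
    W = np.concatenate(([0.0],
                        np.cumsum(rng.normal(0.0, np.sqrt(dt), int(K / dt)))))

    eps, anchor, taus = 2.0**(-m), 0.0, []
    for i in range(1, len(W)):              # crossing times, as sketched above
        if abs(W[i] - anchor) >= eps:
            anchor += eps * np.sign(W[i] - anchor)
            taus.append(i * dt)

    t = np.linspace(0.0, K, 101)
    N_m = 2.0**(-2*m) * np.searchsorted(taus, t, side="right")
    print(np.max(np.abs(N_m - t)))          # a small uniform error, shrinking as m grows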

2.2.2 Strong approximation of continuous martingales

Now we are ready to discuss the strong approximation of continuous local martingales by time-changed random walks.

Lemma 4. Let M be a continuous local martingale vanishing at 0, let ⟨M,M⟩ be the quadratic variation and T its quasi-inverse (2.10). Denote by B_m the sequence of shrunken RWs embedded into M by Lemma 2. Fix K > 0 and take a sequence a_m = O(m^{−7−ε}) 2^{2m} K with some ε > 0, where a_m ≥ K ∨ 1 for any m ≥ 1.

(a) Then for any C ≥ 3/2 and m ≥ m_3(C) we have

    P{ sup_{0≤t≤K} |M(t ∧ T_{a_m}) − B_m(⟨M,M⟩_t ∧ a_m)| ≥ a_m^{1/4} (log* a_m)^{3/4} m 2^{−m/2} } ≤ 10 (a_m 2^{2m})^{1−C}.

(b) Under the tail-condition (2.15), for any C ≥ 3/2 and m ≥ m_3(C) it follows that

    P{ sup_{0≤t≤K} |M(t) − B_m(⟨M,M⟩_t)| ≥ a_m^{1/4} (log* a_m)^{3/4} m 2^{−m/2} } ≤ 10 (a_m 2^{2m})^{1−C} + D(K) m^{−1−ε}.

Proof. First, take the DDS Wiener process W(s) obtained from M(t) by the time-change T_s, as described by Theorems B and C. Since below we are going to use W(s) and the time-change T_s only for arguments s ≤ ⟨M,M⟩_∞, we can always assume that W(s) = M(T_s) and M(t) = W(⟨M,M⟩_t), irrespective of whether ⟨M,M⟩_∞ = ∞ or not. Second, define the nested sequence of shrunken RWs B_m(s) by Lemma 2. Then the quasi-inverse time-change ⟨M,M⟩_t is applied to B_m(s), giving B_m(⟨M,M⟩_t), which will be the sequence of time-changed shrunken RWs approximating M(t). Since T_s may have jumps, we get that

    sup_{0≤t≤K} |M(t) − B_m(⟨M,M⟩_t)| ≥ sup_{0≤s≤⟨M,M⟩_K} |M(T_s) − B_m(⟨M,M⟩_{T_s})| = sup_{0≤s≤⟨M,M⟩_K} |W(s) − B_m(s)|.   (2.18)

Recalling, however, that the intervals of constancy are the same for M(t) and for ⟨M,M⟩_t [23, IV (1.13), p. 125], there is in fact equality in (2.18). To go on, we need a truncation using the sequence a_m, since the quadratic variation ⟨M,M⟩_t is not a bounded random variable in general. Then (2.18) (with equality, as explained above) and (2.12) imply

    sup_{0≤t≤K} |M(t ∧ T_{a_m}) − B_m(⟨M,M⟩_t ∧ a_m)|
        = sup_{0≤s≤⟨M,M⟩_K} |M(T_s ∧ T_{a_m}) − B_m(⟨M,M⟩_{T_s} ∧ a_m)|
        = sup_{0≤s≤a_m∧⟨M,M⟩_K} |W(s) − B_m(s)|
        ≤ sup_{0≤s≤a_m} |W(s) − B_m(s)|.

Hence by Lemma 1, with m ≥ m_3(C),

    P{ sup_{0≤t≤K} |M(t ∧ T_{a_m}) − B_m(⟨M,M⟩_t ∧ a_m)| ≥ a_m^{1/4} (log* a_m)^{3/4} m 2^{−m/2} }
        ≤ P{ sup_{0≤s≤a_m} |W(s) − B_m(s)| ≥ a_m^{1/4} (log* a_m)^{3/4} m 2^{−m/2} } ≤ 10 (a_m 2^{2m})^{1−C}.

This proves (a). To show (b) it is enough to consider the inequality

    sup_{0≤t≤K} |M(t) − B_m(⟨M,M⟩_t)|
        ≤ sup_{0≤t≤K} |M(t) − M(t ∧ T_{a_m})| + sup_{0≤t≤K} |M(t ∧ T_{a_m}) − B_m(⟨M,M⟩_t ∧ a_m)|
        + sup_{0≤t≤K} |B_m(⟨M,M⟩_t ∧ a_m) − B_m(⟨M,M⟩_t)|.

From this point the proof is similar to the proof of Lemma 3 (b).

Kiefer [14] proved in the Brownian case M = W that using Skorohod embedding one cannot embed a standardized RW into W with convergence rate better than O(1) n^{−1/4} (log n)^{1/2} (log log n)^{1/4}, where n is the number of points used in the approximation. Since the next theorem gives a rate of convergence O(1) n^{−1/4} log n (the number of points used is n = K 2^{2m}), this rate is close to the best one can have with a Skorohod-type embedding. The same remark is valid for Theorem 3 below.

Theorem 2. Applying the same notations as in Lemma 4, we have

    sup_{0≤t≤K} |M(t) − B_m(⟨M,M⟩_t)| < O(1) m 2^{−m/2}   a.s. (m → ∞)

and

    sup_{0≤t≤K} |M(t) − B_m(⟨M,M⟩_t)| < K^{1/4} (log K)^{3/4}   a.s. (K → ∞)

for any m large enough, m ≥ m_3(3).

Proof. The statements follow from Lemma 4 in a similar way as Theorem 1 followed from Lemma 3.

We mention that when M is a continuous local martingale vanishing at 0 and there is a deterministic function f on R_+ such that ⟨M,M⟩_t = f(t) a.s., then it follows that M is Gaussian and has independent increments, see [23, V (1.14), p. 186]. Conversely, if M is a continuous Gaussian martingale, then ⟨M,M⟩_t = f(t) a.s., see [23, IV (1.35), p. 133]. In this case the twist and shrink construction of Brownian motion described in Section 2.1 can be extended to a construction of M(t) (or a simulation algorithm in practice). Namely, we have

    |M(t) − B̃_m(f(t))| ≤ O(1) m 2^{−m/2}   a.s. (m → ∞).

Here B̃_m(t) = 2^{−m} S̃_m(t 2^{2m}) (m ≥ 0) denotes the nested sequence of the RW construction described in Section 2.1.
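In practice this amounts to evaluating the shrunken twisted walk at the deterministic times f(t). A minimal sketch, assuming a non-decreasing f with f(0) = 0 and using linear interpolation between dyadic grid points (the function name is ours):

    import numpy as np

    def time_changed_walk(B_tilde, m, f, t_grid):
        # M(t) ~ B~_m(f(t)); B_tilde[k] = B~_m(k 2^{-2m}), and values between
        # grid points come from the same pathwise linear interpolation as in
        # the construction.
        grid = np.arange(len(B_tilde)) * 2.0**(-2*m)
        return np.interp(f(t_grid), grid, B_tilde)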

Combining the previous results, one can replace ⟨M,M⟩_t by the discrete quadratic variation N_m(t) when approximating M(t) by time-changed shrunken RWs.

Lemma 5. Let M be a continuous local martingale vanishing at 0, let ⟨M,M⟩ be the quadratic variation, T its quasi-inverse (2.10), and N_m the discrete quadratic variation defined by (2.14). Denote by B_m the sequence of shrunken RWs embedded into M by Lemma 2. Fix K > 0 and take a sequence a_m = O(m^{−7−ε}) 2^{2m} K with some ε > 0, where a_m ≥ K ∨ 1 for any m ≥ 1.

(a) Then for any C ≥ 3/2 and m ≥ m_5(C) we have

    P{ sup_{0≤t≤K} |M(t ∧ T_{a_m}) − B_m(N_m(t ∧ T_{a_m}))| ≥ 2 a_m^{1/4} (log* a_m)^{3/4} m 2^{−m/2} } ≤ 14 (a_m 2^{2m})^{1−C}.

(b) Under the tail-condition (2.15), for any C ≥ 3/2 and m ≥ m_5(C) it follows that

    P{ sup_{0≤t≤K} |M(t) − B_m(N_m(t))| ≥ 2 a_m^{1/4} (log* a_m)^{3/4} m 2^{−m/2} } ≤ 14 (a_m 2^{2m})^{1−C} + D(K) m^{−1−ε}.

Proof. For proving (a) we use the triangle inequality

    sup_{0≤t≤K} |M(t ∧ T_{a_m}) − B_m(N_m(t ∧ T_{a_m}))|
        ≤ sup_{0≤t≤K} |M(t ∧ T_{a_m}) − B_m(⟨M,M⟩_t ∧ a_m)|
        + sup_{0≤t≤K} |B_m(⟨M,M⟩_t ∧ a_m) − B_m(N_m(t ∧ T_{a_m}))|.   (2.19)

Since the first term on the right hand side can be estimated by Lemma 4 (a), we have to consider the second term. For m ≥ 1, introduce the abbreviation

    L_{a,m} = 13 (C a_m log* a_m)^{1/2} m^{1/2} 2^{−m} ≥ 12 (C a_m log* a_m)^{1/2} m^{1/2} 2^{−m} + 2^{−2m}.

By Lemma 3 (a), with C ≥ 3/2 and m ≥ m_4(C) it follows that

    sup_{0≤t≤K} |B_m(⟨M,M⟩_t ∧ a_m) − B_m(N_m(t ∧ T_{a_m}))|
        ≤ sup_{0≤t≤K} |B_m(⌊(⟨M,M⟩_t ∧ a_m) 2^{2m}⌋ 2^{−2m}) − B_m(N_m(t ∧ T_{a_m}))| + 2^{−m}
        ≤ sup_{0≤k2^{−2m}≤⟨M,M⟩_K∧a_m} sup_{|r−k|2^{−2m}≤L_{a,m}} |B_m(k 2^{−2m}) − B_m(r 2^{−2m})| + 2^{−m}
        ≤ sup_{0≤k2^{−2m}≤a_m} sup_{0≤r2^{−2m}≤L_{a,m}} |B_m^{(k)}(r 2^{−2m})| + 2^{−m},

except for an event of probability ≤ 3 (a_m 2^{2m})^{1−C}, since the difference of a shrunken RW at two dyadic points equals the value of some shrunken RW B_m^{(k)} at a dyadic point. Then we can apply the estimate (2.5) with some C′ > 1 to the last expression:

    P{ sup_{0≤k2^{−2m}≤a_m} sup_{0≤r2^{−2m}≤L_{a,m}} |B_m^{(k)}(r 2^{−2m})| ≥ ( 2C′ log N sup_{k,r} Var(B_m^{(k)}(r 2^{−2m})) )^{1/2} } ≤ 2 N^{1−C′},

where N = ⌊a_m 2^{2m}⌋ ⌊L_{a,m} 2^{2m}⌋ and sup_{k,r} Var(B_m^{(k)}(r 2^{−2m})) ≤ L_{a,m}. Choose here C′ so that 1 − C′ = (2/3)(1 − C). Then a simple computation shows that 2 N^{1−C′} ≤ (a_m 2^{2m})^{1−C}, also log N ≤ 8 m log* C log* a_m, and

    ( 2C′ log N sup_{k,r} Var(B_m^{(k)}(r 2^{−2m})) )^{1/2} + 2^{−m} ≤ a_m^{1/4} (log* a_m)^{3/4} m 2^{−m/2}

if m ≥ m_5(C) ≥ m_4(C). This argument and Lemma 4 (a) applied to (2.19) give (a). Statement (b) again follows from (a), in a similar way as in Lemma 3.

Theorem 3. With the same notations as in Lemma 5, we have

    sup_{0≤t≤K} |M(t) − B_m(N_m(t))| < O(1) m 2^{−m/2}   a.s. (m → ∞)

and

    sup_{0≤t≤K} |M(t) − B_m(N_m(t))| < K^{1/4} (log K)^{3/4}   a.s. (K → ∞)

for any m large enough, m ≥ m_5(3).

Proof. The statements follow from Lemma 5 in a similar way, again, as Theorem 1 followed from Lemma 3.

2.3 Symmetrically evolving martingales

It is important both from theoretical and practical (e.g. simulation) points of view that the shrunken RW B_m and the corresponding discrete quadratic variation process N_m be independent when approximating M as in Theorem 3. This leads to the question of the independence of the DDS Brownian motion W and the quadratic variation ⟨M,M⟩ in the case of a continuous local martingale M. For, by Lemma 2, B_m depends only on W and, by (2.14), N_m is determined by ⟨M,M⟩ alone. Conversely, if the processes B_m and N_m are independent for all m large enough, then so are W and ⟨M,M⟩, by Lemma 1 and Theorem 1.

It will turn out from the next theorem that the basic notion in this respect is the symmetry of the increments of M given the past. Thus we will say that a process M(t) (t ≥ 0) is symmetrically evolving (or has symmetric increments given the past) if for any positive integer n, reals 0 ≤ s < t_1 < ··· < t_n and Borel sets of the line U_1, ..., U_n we have

    P{ Γ | F_s^0 } = P{ Γ^− | F_s^0 },   (2.20)

where Γ = { M(t_1) − M(s) ∈ U_1, ..., M(t_n) − M(s) ∈ U_n }, Γ^− is the same but with each U_j replaced by −U_j, and F_s^0 = σ(M(u), 0 ≤ u ≤ s) is the filtration generated by the past of M. If M(t) has finite expectation for any t ≥ 0, then this condition expresses a very strong martingale property. Condition (2.20) is clearly equivalent to the following one: for arbitrary positive integers n, j, reals 0 ≤ s_j < ··· < s_1 ≤ s < t_1 < ··· < t_n and Borel sets V_1, ..., V_j, U_1, ..., U_n one has

    P{ Γ ∩ Λ } = P{ Γ^− ∩ Λ },   (2.21)

where Γ and Γ^− are defined above and Λ = { M(s_1) ∈ V_1, ..., M(s_j) ∈ V_j }.

Our Theorem 4 below is basically a reformulation of the Dubins-Émery-Yor theorem of [4]. Their theorem is strongly built on Ocone's Theorem A of [20]. In Ocone's paper it is shown that a continuous local martingale M is a conditionally (w.r.t. the sigma-algebra generated by ⟨M,M⟩) Gaussian martingale if and only if it is J-invariant. Here J-invariance means that M and ∫_0^t α dM have the same law for any α with range in {−1, 1}. In fact, it is also proved there that J-invariance is equivalent to H-invariance, which means that it is enough to consider deterministic integrands of the form α^{(r)}(t) = I_{[0,r]}(t) − I_{(r,∞)}(t). Moreover, Theorem B there extends the above result to càdlàg local martingales with symmetric jumps. In the sequel, local martingales with these properties are called Ocone martingales.

Dubins, Émery and Yor in [4] proved that these conditions are equivalent to the independence of the DDS Brownian motion and the quadratic variation. Further, in this paper and in the paper of Vostrikova and Yor [36], shorter proofs with additional equivalent conditions were given in the case when M is a continuous martingale. In these references the equivalent condition of the independence of the DDS BM and ⟨M,M⟩ explicitly appears. Besides, in [4], the conjecture that a continuous martingale M has the same law as its Lévy transform M̂ = ∫ sgn(M) dM if and only if its DDS BM and ⟨M,M⟩ are independent is proved to be equivalent to the conjecture that the Lévy transform is ergodic.

Below we give a new, long, but elementary proof, for any continuous local martingale M, that the DDS BM and ⟨M,M⟩ are independent if and only if M is symmetrically evolving, i.e. Ocone. Then, in Subsection 2.3.2, we present some remarkable properties of Ocone martingales and some of our recent results, and we also give a couple of examples of Ocone and non-Ocone martingales.

2.3.1 Distributional characterization of Ocone martingales

Theorem 4. (a) If the Wiener process W(t) (t ≥ 0) and the non-decreasing, continuous stochastic process C(t) (t ≥ 0), vanishing at 0, are independent, then M(t) = W(C(t)) is a symmetrically evolving continuous local martingale vanishing at 0, with quadratic variation C.

(b) Conversely, if M is a symmetrically evolving continuous local martingale, then its DDS Brownian motion W and its quadratic variation ⟨M,M⟩ are independent processes.

Proof. To prove (a), suppose that W and C are independent. By [23, V (1.5), p. 181], M(t) = W(C(t)) is a continuous local martingale. For simplicity, we will use only three sets in showing that M is symmetrically evolving, i.e. that equation (2.21) holds, the generalization being straightforward:

    P{ M(s_1) ∈ V_1, M(t_1) − M(s) ∈ U_1, M(t_2) − M(s) ∈ U_2 }
        = ∫ ∫_{U_1} P{ W(x_1) ∈ V_1 } P{ W(y_2) − W(y_1) ∈ U_2 − u } P{ W(y_1) − W(y_0) ∈ du }
            × P{ C(s_1) ∈ dx_1, C(s) ∈ dy_0, C(t_1) ∈ dy_1, C(t_2) ∈ dy_2 }
        = ∫ ∫_{U_1} P{ W(x_1) ∈ V_1 } P{ W(y_1) − W(y_2) ∈ U_2 − u } P{ W(y_0) − W(y_1) ∈ du }
            × P{ C(s_1) ∈ dx_1, C(s) ∈ dy_0, C(t_1) ∈ dy_1, C(t_2) ∈ dy_2 }
        = P{ M(s_1) ∈ V_1, M(s) − M(t_1) ∈ U_1, M(s) − M(t_2) ∈ U_2 },

using the independence of W and C on the one hand, and the symmetry and independence of the increments of Brownian motion on the other hand.

For proving (b) we want to show that the sequences τ_m(k) (k = 1, 2, ...) and M(τ_m(j)) − M(τ_m(j−1)) (j = 1, 2, ...) are independent. Since N_m(t) depends only on the number of stopping times τ_m(k) ≤ t, cf. (2.14), while the shrunken random walk B_m is determined by the steps 2^{−m} X_m(j) = M(τ_m(j)) − M(τ_m(j−1)), cf. Lemma 2, this would imply their independence, and so the independence of W and ⟨M,M⟩ too, by Lemma 1 and Theorem 1. For this it is enough to show that for arbitrary integers m ≥ 0, n ≥ 1, 0 ≤ k < n, reals t_1, ..., t_n and δ_1 = ±2^{−m}, ..., δ_n = ±2^{−m} (we fix these parameters for the remaining part of the proof) one has

    P{ A ∩ B_{≤k} ∩ B_{>k} } = P{ A ∩ B_{≤k} ∩ B_{>k}^− },   (2.22)

where A_{≤k} = ∩_{r=1}^k { τ_m(r) ≤ t_r }, A_{>k} is similar but with r = k+1, ..., n, A = A_{≤k} ∩ A_{>k}, B_{≤k} = ∩_{r=1}^k { M(τ_m(r)) − M(τ_m(r−1)) = δ_r }, B_{>k} is similar but with r = k+1, ..., n, B = B_{≤k} ∩ B_{>k}, and finally B_{>k}^− is the same as B_{>k} but with each δ_j replaced by −δ_j. For, if one can reflect all δ_j's for k < j ≤ n without changing the probability, then one has the same probability with arbitrarily changed signs of the δ_j's too, since any such change can be reduced to a finite sequence of reflections of the above type. Let B* be similar to B, but with arbitrarily changed signs of the δ_j's. Then, as we said, (2.22) implies that P{A ∩ B} = P{A ∩ B*}. Since P{B} = P{B*} by Lemma 2, the desired independence follows.

We will prove (2.22) in several steps.

Step 1. In condition (2.20) one can replace s by an arbitrary stopping time σ adapted to the filtration (F_s^0): for any u_j ≥ 0 (1 ≤ j ≤ N),

    P{ F | F_σ^0 } = P{ F^− | F_σ^0 },   (2.23)

where

    F = ∩_{j=1}^N { M(u_j + σ) − M(σ) ∈ U_j },   (2.24)

and F^− is the same but with each U_j replaced by −U_j. This is somewhat similar to the optional stopping theorem, see [23, II (3.2), p. 69]. Indeed, for discrete valued stopping times σ the statement is obvious, since then

    P{ F | F_σ^0 } = Σ_r I{σ = s_r} P{ F | F_{s_r}^0 },

where {s_r} denotes the range of σ, possibly including ∞, and I{S} denotes the indicator of the set S. For every stopping time σ there exists a decreasing sequence of discrete valued stopping times σ_i

almost surely converging to σ. Let us denote the events defined according to (2.24) for σ_i by F_i and F_i^−, respectively. Further, denote the operators projecting L²(Ω) onto its subspaces of random variables measurable w.r.t. F_{σ_i}^0 and F_σ^0 by P_i and P, respectively. Then

    ‖ P{F_i | F_{σ_i}^0} − P{F | F_σ^0} ‖_2 = ‖ P_i I{F_i} − P I{F} ‖_2
        ≤ ‖ P_i (I{F_i} − I{F}) ‖_2 + ‖ P_i I{F} − P I{F} ‖_2
        ≤ ‖ I{F_i} − I{F} ‖_2 + ‖ E(I{F} | F_{σ_i}^0) − E(I{F} | F_σ^0) ‖_2,

which goes to 0 as i → ∞. Here we used that E(I{F} | F_{σ_i}^0) is a bounded, reversed-time martingale converging to E(I{F} | F_σ^0). Hence for any ε > 0,

    ‖ P{F | F_σ^0} − P{F^− | F_σ^0} ‖_2 ≤ ‖ P{F | F_σ^0} − P{F_i | F_{σ_i}^0} ‖_2 + ‖ P{F^− | F_σ^0} − P{F_i^− | F_{σ_i}^0} ‖_2 < ε,

if i is large enough. This shows that the left hand side of the inequality is zero, so (2.23) holds.

Step 2. Then for arbitrary reals 0 ≤ u_j < v_j and Borel sets U_j (1 ≤ j ≤ N) we have

    P{ G | F_σ^0 } = P{ G^− | F_σ^0 },

where

    G = ∩_{j=1}^N { M(v_j + σ) − M(u_j + σ) ∈ U_j },   (2.25)

and G^− is the same but with each U_j replaced by −U_j. For simplicity we prove this only for two factors, the general case being similar:

    P{ M(v_1+σ) − M(u_1+σ) ∈ U_1, M(v_2+σ) − M(u_2+σ) ∈ U_2 | F_σ^0 }
        = ∫ I{x_1 − x_2 ∈ U_1} I{x_3 − x_4 ∈ U_2} P{ M(v_1+σ) − M(σ) ∈ dx_1, M(u_1+σ) − M(σ) ∈ dx_2, M(v_2+σ) − M(σ) ∈ dx_3, M(u_2+σ) − M(σ) ∈ dx_4 | F_σ^0 }
        = ∫ I{x_1 − x_2 ∈ U_1} I{x_3 − x_4 ∈ U_2} P{ M(σ) − M(v_1+σ) ∈ dx_1, M(σ) − M(u_1+σ) ∈ dx_2, M(σ) − M(v_2+σ) ∈ dx_3, M(σ) − M(u_2+σ) ∈ dx_4 | F_σ^0 }
        = P{ M(u_1+σ) − M(v_1+σ) ∈ U_1, M(u_2+σ) − M(v_2+σ) ∈ U_2 | F_σ^0 }.

Step 3. Let Δτ_m(i) = τ_m(i) − τ_m(i−1) and a ∈ [0, ∞). Consider the event

    S(a) = { Δτ_m(k+1) > a }
         = { inf{ u > 0 : |M(u + τ_m(k)) − M(τ_m(k))| ≥ 2^{−m} } > a }
         = { sup_{0<u≤a} |M(u + τ_m(k)) − M(τ_m(k))| < 2^{−m} }.   (2.26)

Introduce the set of dyadic numbers D_l = { r 2^{−l} : r ∈ Z } (l ≥ 0), and the events

    S_{j,l}(a) = ∩_{q∈D_l, 0<q≤a} { |M(q + τ_m(k)) − M(τ_m(k))| ≤ 2^{−m} − 2^{−j} }   (2.27)

and

    S_j(a) = ∩_{l=0}^∞ S_{j,l}(a).

Since S_{j,l}(a) is increasing with growing j, so is S_j(a). Put

    S*(a) = ∪_{j=m}^∞ S_j(a) = ∪_{j=m}^∞ ∩_{l=0}^∞ S_{j,l}(a).

We want to show that S*(a) = S(a), where the latter is defined by (2.26). First fix an ω ∈ S(a). (We suppress ω in the notations below.) Then with this ω,

    sup_{0<u≤a} |M(u + τ_m(k)) − M(τ_m(k))| =: s < 2^{−m}.

If j > m is such that 2^{−j} < 2^{−m} − s, then ω ∈ S_{j,l}(a) for any l ≥ 0. So ω ∈ S_j(a) for each j large enough; consequently, ω ∈ S*(a).

Second, fix an ω ∉ S(a). Then there exists a real u_0 ≤ a (depending on ω) so that |M(u_0 + τ_m(k)) − M(τ_m(k))| = 2^{−m}. Since the path of M is a continuous function, for any j ≥ m there exist an l ≥ 0 and a q ∈ D_l, 0 < q ≤ a, such that |M(q + τ_m(k)) − M(τ_m(k))| > 2^{−m} − 2^{−j}. That is, ω ∉ S_j(a) if j ≥ m, thus ω ∉ S*(a). In other words, we have proved above that

    { Δτ_m(k+1) > a } = S(a) = lim_{j→∞} lim_{l→∞} S_{j,l}(a)
        = lim_{j→∞} lim_{l→∞} ∩_{q∈D_l, 0<q≤a} { |M(q + τ_m(k)) − M(τ_m(k))| ≤ 2^{−m} − 2^{−j} }.

Consequently, any event { Δτ_m(k+1) > a } = S(a) can be written in terms of monotone sequences of intersections of finitely many events of the form

    { |M(q + τ_m(k)) − M(τ_m(k))| ≤ c }   (c ≥ 0).

Moreover, such an approximation can be applied to { Δτ_m(k+1) ∈ (a, b] } = S(a) \ S(b) as well, with any 0 ≤ a < b.

Step 4. First, Steps 2 and 3 imply that for any a ≥ 0,

    P{ G ∩ S_{j,l}(a) | F_{τ_m(k)}^0 } = P{ G^− ∩ S_{j,l}(a) | F_{τ_m(k)}^0 },

because of the absolute value in the definition (2.27) of S_{j,l}(a). Throughout Step 4, G and G^− are defined according to (2.25) with σ = τ_m(k), but otherwise with arbitrary parameters, possibly different from case to case. Then taking limits as j → ∞ and l → ∞, it follows from Steps 2 and 3 that

    P{ G ∩ {Δτ_m(k+1) > a_{k+1}} | F_{τ_m(k)}^0 } = P{ G^− ∩ {Δτ_m(k+1) > a_{k+1}} | F_{τ_m(k)}^0 }.

We want to extend this symmetry property by induction over i = k+1, ..., n+1. Taking arbitrary

reals a_i ≥ 0 and integers l ≥ 0, r_i > 0 (k+1 ≤ i ≤ n+1), define the events

    B_i = ∩_{p=k+1}^i { M(τ_m(p)) − M(τ_m(p−1)) = δ_p },
    H_i = ∩_{p=k+1}^i { Δτ_m(p) > a_p },
    K_{i,l}(r) = ∩_{p=k+1}^i { Δτ_m(p) ∈ ((r_p − 1) 2^{−l}, r_p 2^{−l}] },
    L_{i,l}(r) = { δ_i ( M(r_i 2^{−l} + ··· + r_{k+1} 2^{−l} + τ_m(k)) − M(r_{i−1} 2^{−l} + ··· + r_{k+1} 2^{−l} + τ_m(k)) ) > 0 },

and B_i^−, L_{i,l}^−(r) similarly, but multiplying each δ_p by (−1). Suppose that we have already proved that

    P{ G ∩ B_{i−1} ∩ H_i | F_{τ_m(k)}^0 } = P{ G^− ∩ B_{i−1}^− ∩ H_i | F_{τ_m(k)}^0 },

where B_k = Ω. Define the following event as a generalization of (2.27):

    S_{i,j,l}(a, r) = ∩_{q∈D_l, 0<q≤a} { |M(q + r_i 2^{−l} + ··· + r_{k+1} 2^{−l} + τ_m(k)) − M(r_i 2^{−l} + ··· + r_{k+1} 2^{−l} + τ_m(k))| ≤ 2^{−m} − 2^{−j} }.

Then, similarly as above, it follows that

    P{ G ∩ B_{i−1} ∩ H_i ∩ ∪_{r_{k+1},...,r_i=1}^{2^{2l}+1} ( K_{i,l}(r) ∩ L_{i,l}(r) ∩ S_{i,j,l}(a_{i+1}, r) ) | F_{τ_m(k)}^0 }
        = P{ G^− ∩ B_{i−1}^− ∩ H_i ∩ ∪_{r_{k+1},...,r_i=1}^{2^{2l}+1} ( K_{i,l}(r) ∩ L_{i,l}^−(r) ∩ S_{i,j,l}(a_{i+1}, r) ) | F_{τ_m(k)}^0 },

where we agree that when r_p = 2^{2l} + 1, the interval ((r_p − 1) 2^{−l}, r_p 2^{−l}] in the definition of K_{i,l}(r) is replaced by ((r_p − 1) 2^{−l}, ∞] = (2^l, ∞]. Notice here that the events in K_{i,l} can be written in terms of differences of events appearing in H_i, while the events in L_{i,l} and S_{i,j,l} are both of the type appearing in G, though S_{i,j,l} is not affected by reflections because of the absolute values in its definition. Then taking limits as j → ∞ and l → ∞ it follows that

    P{ G ∩ B_i ∩ H_{i+1} | F_{τ_m(k)}^0 } = P{ G^− ∩ B_i^− ∩ H_{i+1} | F_{τ_m(k)}^0 }.

This completes the induction. Comparing the notations introduced in this step with the ones introduced above, observe that B_{>k} = B_n and H_{>k} = H_n = H_{n+1} if a_{n+1} = 0. Thus one obtains that

    P{ B_{>k} ∩ H_{>k} | F_{τ_m(k)}^0 } = P{ B_{>k}^− ∩ H_{>k} | F_{τ_m(k)}^0 }.

Step 5. The result of Step 4 implies that

    P{ A_{>k} ∩ B_{>k} | F_{τ_m(k)}^0 }
        = ∫ I{ τ_m(k) + x_{k+1} ≤ t_{k+1}, ..., τ_m(k) + x_{k+1} + ··· + x_n ≤ t_n }
            × P{ B_{>k} ∩ {Δτ_m(k+1) ∈ dx_{k+1}, ..., Δτ_m(n) ∈ dx_n} | F_{τ_m(k)}^0 }
        = ∫ I{ τ_m(k) + x_{k+1} ≤ t_{k+1}, ..., τ_m(k) + x_{k+1} + ··· + x_n ≤ t_n }
            × P{ B_{>k}^− ∩ {Δτ_m(k+1) ∈ dx_{k+1}, ..., Δτ_m(n) ∈ dx_n} | F_{τ_m(k)}^0 }
        = P{ A_{>k} ∩ B_{>k}^− | F_{τ_m(k)}^0 }.

Step 6. Finally, it follows from Step 5 that

    P{A ∩ B} = P{ A_{≤k} ∩ A_{>k} ∩ B_{≤k} ∩ B_{>k} }
        = E( E( I{A_{≤k} ∩ B_{≤k}} I{A_{>k} ∩ B_{>k}} | F_{τ_m(k)}^0 ) )
        = E( I{A_{≤k} ∩ B_{≤k}} P{ A_{>k} ∩ B_{>k} | F_{τ_m(k)}^0 } )
        = E( I{A_{≤k} ∩ B_{≤k}} P{ A_{>k} ∩ B_{>k}^− | F_{τ_m(k)}^0 } )
        = P{ A_{≤k} ∩ A_{>k} ∩ B_{≤k} ∩ B_{>k}^− }.

This proves (2.22), and so completes the proof of the theorem.

2.3.2 Properties of Ocone martingales

In this subsection we give some equivalent conditions for a martingale to be an Ocone martingale. Originally, Ocone proved the equivalence of conditions (ii)-(iv); the equivalence of parts (i), (v) and (vi) is due to Dubins, Émery and Yor. We remark that Ocone's original setting is more general: he deals with local martingales not only in the continuous but also in the cadlag case.

Theorem D. Let M be a continuous martingale with natural filtration F = (F_t). The following six statements are equivalent:

(i) the DDS Brownian motion β_M of M and ⟨M⟩ are independent;

(ii) conditionally on ⟨M⟩, M is a Gaussian martingale;

(iii) for every F-predictable process H taking values in {−1, 1}, the two pairs of processes have the same law:

    ( ∫ H dM, ⟨M⟩ ) =^d ( M, ⟨M⟩ );

(iv) for every deterministic function h of the form I_{[0,a]} − I_{(a,∞)}, the martingale ∫ h dM has the same law as M;

(v) for every F-predictable process H, measurable for the σ-field B(R_+) ⊗ σ(⟨M⟩) and such that ∫_0^∞ H_s² d⟨M⟩_s < ∞ a.s.,

    E( exp( i ∫_0^∞ H_s dM_s ) | ⟨M⟩ ) = exp( −(1/2) ∫_0^∞ H_s² d⟨M⟩_s );

(vi) for every deterministic function h of the form Σ_{j=1}^n c_j I_{[0,a_j]},

    E( exp( i ∫_0^∞ h(s) dM_s ) ) = E( exp( −(1/2) ∫_0^∞ h²(s) d⟨M⟩_s ) ).

Rather interestingly, in the theory of Ocone martingales there is a ten-year-old conjecture, originally introduced by Yor [4]. It says that a divergent martingale M is an Ocone martingale if and only if the Lévy transform M̂ = ∫ sign(M) dM of M has the same law as M. If it were true, one could formally compress the conditions of Theorem D into a single condition. Dubins, Émery and Yor proved that this conjecture is equivalent to another conjecture, known since the late 70's. That conjecture is also about the Lévy transformation, but it deals with the Brownian case. If B is a standard Brownian motion started at 0 and L is its local time at 0, then the Lévy characterization says that

    ∫ sign(B) dB = |B| − L(B)

is also a Brownian motion on the measure space (W, µ), where W is the Wiener space and µ is the Wiener measure. The conjecture says that the transformation T : B → ∫ sign(B) dB is ergodic on this measure space. A result of Dubins and Smorodinsky [6, 1992] increases the plausibility of this conjecture: they established that the discrete version of the Lévy transformation, acting on the standard symmetric random walk, is ergodic on the corresponding measure space.

To end this subsection, we mention an interesting property of the transformation T. The processes T^n B, n ≥ 0, are pairwise weakly orthogonal in the following sense [23]: two local martingales M and N are said to be weakly orthogonal if E(M_s N_t) = 0 for every s, t ≥ 0. This condition is equivalent to E⟨M,N⟩_s = 0 for all s ≥ 0. In our present case this means that the martingales T^n M_t and T^k M_t are not jointly Gaussian.

Proposition 1. The martingales T^n W (n ≥ 0) are pairwise weakly orthogonal. More precisely, for all s, t ≥ 0 and non-negative integers n ≠ k we have

    E( T^n W_s · T^k W_t ) = 0.   (2.28)

Proof. It is enough to prove the statement for E(W_s T^n W_t) = 0. For simplicity, we will prove it in the case E(W_s T W_t) = 0. Let us consider both W_s and T W_t on the interval [0, T], 0 ≤ s, t ≤ T.

Introduce the notations I_s^t = ∫_s^t sign(W_u) dW_u and I(F)_s^t = Σ_{t_k∈F∩[s,t]} sign(W_{t_k})(W_{t_{k+1}} − W_{t_k}) for an arbitrary partition F of [0, T]. First, there exists a sequence of simple processes converging to (sign(W_t), t ≥ 0) in L²(P × λ) on [0, T]. We follow Itô's method, see [17]. Let X = (X_t, 0 ≤ t ≤ T) be an (F_t^W)-adapted process with finite L² norm, and let ϕ_n(u) = ⌊u 2^n / T⌋ T 2^{−n}. Itô showed that there is a subsequence (n_k) such that X^k = (X_{ϕ_{n_k}(t−u)+u}, 0 ≤ t ≤ T) → X in L²(P × λ) as k tends to infinity, for Lebesgue almost every u from [0, T]. In the present case X_t = sign(W_t), this means that for arbitrary s < t, E(I(F_k)_s^t − I_s^t)² → 0 (k → ∞) if the partition F_k contains the points obtained via Itô's method. From this and the Hölder inequality we get the following estimate for all s, t:

    E( W_s (I(F_k)_0^t − I_0^t) ) ≤ ( E W_s² )^{1/2} ( E (I(F_k)_0^t − I_0^t)² )^{1/2} → 0   (k → ∞),   (2.29)

from which one gets E(W_s I(F_k)_0^t) → E(W_s I_0^t) as k tends to infinity.

For proving (2.28) we first deal with the case t ≤ s. Using that the Brownian motion has independent increments and the fact that E sign(W_{t_k}) = 0, we have

    E( W_s I(F_k)_0^t ) = Σ_{t_{i+1}≤t, t_i∈F_k} E( W_s sign(W_{t_i})(W_{t_{i+1}} − W_{t_i}) )
        = Σ_{t_{i+1}≤t, t_i∈F_k} E( ( W_{t_i} + (W_{t_{i+1}} − W_{t_i}) + (W_s − W_{t_{i+1}}) ) sign(W_{t_i})(W_{t_{i+1}} − W_{t_i}) )
        = Σ_{t_{i+1}≤t, t_i∈F_k} E( sign(W_{t_i})(W_{t_{i+1}} − W_{t_i})² ).

Now take into consideration that for all 0 ≤ t_i < t_{i+1}, sign(W_{t_i}) and (W_{t_{i+1}} − W_{t_i})² are independent. Hence E(W_s I_0^t) = 0.

In the case s < t we have the decomposition E(W_s I_0^t) = E(W_s I_0^s) + E(W_s I_s^t). By the previous paragraph the first term is equal to 0. By (2.29) and by the equation

    E( W_s I(F_k)_s^t ) = Σ_{s≤t_i, t_{i+1}≤t, t_i∈F_k} E( W_s sign(W_{t_i})(W_{t_{i+1}} − W_{t_i}) ) = 0,

which holds since W_{t_{i+1}} − W_{t_i} is independent of W_s sign(W_{t_i}) and has zero expectation, we get E(W_s I_s^t) = 0 as well. This completes the proof.

2.3.3 Some examples of Ocone and non-Ocone martingales

In this subsection we present some remarkable properties of Ocone martingales, together with a couple of examples of Ocone and non-Ocone martingales. First, let us recall some definitions and lemmas which will be used below.

Definition 1. A continuous local martingale M such that ⟨M⟩_∞ = ∞ is said to be pure if, calling B its DDS Brownian motion, ⟨M⟩_t is F_∞^B-measurable for every t, or equivalently

    F_∞^M = F_∞^B.

Lemma E. (Vostrikova-Yor [36, Proposition]) Suppose that M is an Ocone martingale. Then M enjoys the martingale representation property iff ⟨M⟩ is a deterministic process.

Lemma F. [23, Section 4, Chapter V] A pure local martingale is extremal, so it enjoys the martingale representation property.

Using these lemmas we can prove the following simple result.

Proposition 2. Let M be a continuous Ocone martingale. Suppose ⟨M⟩_∞ = ∞ and that M is of the form

    M_t = x + ∫_0^t σ(M_s) dβ_s,   (2.30)

where σ is a nowhere vanishing function and β is a Brownian motion. Then M is a Gaussian martingale.

Proof. By [23, Chapter V, (1.11) Proposition], if a continuous local martingale M satisfies the differential equation (2.30), then F_∞^M is measurable with respect to F_∞^B, where B is the DDS Brownian motion of M. So this martingale is pure, which implies that it enjoys the martingale representation property (Lemma F). By Lemma E, M is then a martingale with deterministic quadratic variation process, and martingales with this property are Gaussian.

Van Zanten proved the following interesting theorem.

Theorem E. ([34]) Let M be a martingale with bounded jumps and let a_n, b_n be sequences of positive numbers, both increasing to infinity. For each n, define the rescaled martingale M^n by

    M_t^n = (1/√a_n) M_{b_n t}.

Then the following statements hold:

1. If M^n converges weakly to some process N in D(0, ∞), then N is necessarily a continuous Ocone martingale.

2. Let N be a continuous Ocone martingale. Then M^n converges to N in D(0, ∞) if and only if the quadratic variation sequence ⟨M^n⟩ converges to ⟨N⟩ in D(0, ∞).

Example A. Using this theorem he proved that the martingales

    W_t^+ = ∫_0^t I_{(0,∞)}(W_s) dW_s,    W_t^− = ∫_0^t I_{(−∞,0)}(W_s) dW_s

are non-Gaussian Ocone martingales. Indeed, they are Ocone martingales because of the self-similarity of the Brownian motion, and they are non-Gaussian since their quadratic variations are not deterministic, cf. the Vostrikova-Yor proposition [36] cited in the proof of Proposition 2.

Example B. ([36]) Let (B_t, C_t), t ≥ 0, be a planar Brownian motion. The process

    A_t = (1/2) ∫_0^t (C_s dB_s − B_s dC_s)   (2.31)

is an example of an Ocone martingale. This assertion is a consequence of the following general theorem, which was inspired by Marc Yor.

Theorem 5. Let Φ : R^d → R^d be a regular function. Denote the adjoint of its derivative by Ψ(x) = (Φ′)^T(x) and suppose that the following conditions hold:

    Ψ(x)Φ(x) = x   and   x · Φ(x) = 0   for any x ∈ R^d,   (2.32)

where · stands for the usual scalar product in R^d. If B is a standard d-dimensional Brownian motion, then the martingale

    M_t = ∫_0^t Φ(B_s) · dB_s   (2.33)

is an Ocone martingale.

Corollary 1. If Φ(x) = Ax, where A is a regular matrix, then the conditions above are equivalent to A being orthogonal and anti-symmetric.

Proof. The derivative of the function Φ : x ↦ Ax is A, so Ψ(x) = A^T x. Hence the first condition in (2.32) is equivalent to A^T A = Id. The second condition can be written in the form x^T A x = 0 for all x ∈ R^d; in other words, A is anti-symmetric.

In the light of Corollary 1, the martingale of Example B (2.31) can be written in the following form:

    A_t = ∫_0^t A B_s · dB_s,   where

    A = ( 0  −1 )
        ( 1   0 )

is clearly orthogonal and anti-symmetric.
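The two conditions of Corollary 1 for this matrix are also easy to verify mechanically; the following trivial check (illustration only) confirms orthogonality and anti-symmetry.

    import numpy as np

    A = np.array([[0.0, -1.0], [1.0, 0.0]])
    assert np.allclose(A.T @ A, np.eye(2))  # orthogonal: Psi(x)Phi(x) = A^T A x = x
    assert np.allclose(A.T, -A)             # anti-symmetric: x . (A x) = 0 for all x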

We remark that matrices with the required properties (regularity, orthogonality, anti-symmetry) are available in even dimensions. If a matrix in R^{d×d} with d odd possessed the properties above, then it would be a direct sum of a one-dimensional null matrix and an even-dimensional matrix with the listed properties.

Proof of Theorem 5. We will use the theorem of Vostrikova and Yor [36, p. 426, Theorem 3']: Let (M_t) be an (F_t)-martingale and (N_t) another martingale which is pure, i.e. N_t = γ_{⟨N⟩_t}, t ≥ 0, with (⟨N⟩_t, t ≥ 0) measurable with respect to the σ-field F^γ = σ{γ_u, u ≥ 0} of its DDS Brownian motion γ. Suppose that ⟨M⟩ is measurable with respect to F^N. Then (M_t, t ≥ 0) is an Ocone martingale as soon as N and M are orthogonal.

The quadratic variation of the martingale M in (2.33) is ⟨M⟩ = ∫_0^· |Φ(B_s)|² ds. The filtration of ⟨M⟩ is that of |Φ(B_t)|² = φ_1²(B_t) + ··· + φ_d²(B_t). Using Itô's formula one gets

    |Φ(B_t)|² = 2 Σ_{i=1}^d ∫_0^t Σ_{j=1}^d (φ_i ∂_j φ_i)(B_s) dB_{j,s} + (1/2) Σ_{i=1}^d ∫_0^t Σ_{j=1}^d ∂_j² φ_i²(B_s) ds.

The first term is equal to

    2 ∫_0^t ( Ψ(B_s)Φ(B_s) ) · dB_s = 2 Σ_{i=1}^d ∫_0^t B_{i,s} dB_{i,s} =: N_t,

by using the first condition in (2.32). The second term can be written in the following form:

    (1/2) ∫_0^t Σ_{i=1}^d Σ_{j=1}^d ( ∂_j(2 φ_i ∂_j φ_i) )(B_s) ds = ∫_0^t Σ_{j=1}^d ∂_j( Σ_{i=1}^d φ_i ∂_j φ_i )(B_s) ds
        = ∫_0^t ( Σ_{j=1}^d ∂_j π_j )(B_s) ds = t d,

where π_j(x) = x_j is the jth coordinate of the vector x. The second equality follows from the first part of condition (2.32). Summarizing these results, we get that |Φ(B_t)|², and so ⟨M⟩_t, is N_t-measurable.

We recall that N_t is a pure martingale. Consider the following form of ⟨N⟩:

    ⟨N⟩_t = 4 ∫_0^t ( Σ_{i=1}^d B_i²(s) ) ds = 4 ∫_0^t (N_s + s d) ds.

Let T be the quasi-inverse of ⟨N⟩. Using the previous equation we have

    t = ⟨N⟩_{T_t} = 4 ∫_0^{T_t} ( Σ_{i=1}^d B_i²(s) ) ds = 4 ∫_0^{T_t} (N_s + s d) ds.

By applying a time-change to the integral one gets

    t = 4 ∫_0^t (N_{T_s} + T_s d) dT_s = 4 ∫_0^t (γ_s + T_s d) dT_s,

where γ is the DDS Brownian motion of N. Hence

    dT_t = dt / ( 4 (γ_t + T_t d) ),

that is, T, and so ⟨M⟩, is γ-measurable. We have used that ⟨N⟩ is strictly increasing and that T is continuous. It remains to prove that M and N are orthogonal martingales; this proves our theorem. Using the second condition in (2.32) one finally gets

    ⟨M,N⟩_t = ∫_0^t Φ(B_s) · ( Ψ(B_s)Φ(B_s) ) ds = 0.

We remark that the cited theorem of Vostrikova and Yor can easily be proved by showing that the DDS Brownian motions of M and N, β and γ, are independent processes. These two Brownian motions are independent iff the following equation is satisfied for every f, g ∈ L²(R_+):

    E( exp( ∫_0^∞ f(s) dβ_s − (1/2) ∫_0^∞ f²(s) ds ) exp( ∫_0^∞ g(s) dγ_s − (1/2) ∫_0^∞ g²(s) ds ) ) = 1,

or in other words,

    E( E_∞^{f,β} · E_∞^{g,γ} ) = 1,

with the notation

    E_t^{f,β} = exp( ∫_0^t f(s) dβ_s − (1/2) ∫_0^t f²(s) ds ),

and E_t^{g,γ} defined similarly. Time-changing the exponential martingales E_t^{f,β} and E_t^{g,γ} with the time-change processes ⟨M⟩ and ⟨N⟩, respectively, we get that it suffices to prove

    E( E_∞^{f(⟨M⟩),M} · E_∞^{g(⟨N⟩),N} ) = 1,

since

    E( E_∞^{f(⟨M⟩),M} · E_∞^{g(⟨N⟩),N} ) = E( E_∞^{f,β} · E_∞^{g,γ} ).

This will be done by showing

    E( E_t^{f(⟨M⟩),M} · E_t^{g(⟨N⟩),N} ) = 1   (2.34)

for all t > 0. By using Itô's formula on exponential martingales we get

    E_t^{f(⟨M⟩),M} · E_t^{g(⟨N⟩),N} = 1 + ∫_0^t E_s^{g(⟨N⟩),N} dE_s^{f(⟨M⟩),M} + ∫_0^t E_s^{f(⟨M⟩),M} dE_s^{g(⟨N⟩),N}
        + (1/2) ∫_0^t E_s^{f(⟨M⟩),M} E_s^{g(⟨N⟩),N} f(⟨M⟩_s) g(⟨N⟩_s) d⟨M,N⟩_s.

The second and the third terms are martingales, so their expectation is 0 for all t; the last term is also zero, since M and N are orthogonal. Hence the expectation in (2.34) equals 1.

Finally, we show two martingales which are non-Ocone martingales (The examples are taken from [36]).

Example C. Let B be a Brownian motion. The martingale

Z t Mt = Bs dBs, t ≥ 0 0 is non-Ocone. Its quadratic variation process R t 2 is non deterministic which would hMit = 0 Bs ds contradict that M is pure (see Lemma E and F).

Example D. Let (Cs,Bs), s ≥ 0 be a planar Brownian motion. The martingale

Z t πt = BtCt = (Cs dBs + Bs dCs) t ≥ 0 0 is non-Ocone. The quadratic variation process of this martingale is R t 2 R t 2 2 , . The hπit = 0 Rs ds = 0 (Bs + Cs ) ds t ≥ 0 -algebra generated by is that of . By the denition of we have 2. Since σ hπi (Rt, t ≥ 0) π |2BtCt| ≤ Rt this inequality, conditionally on (Rt, t ≥ 0) (BtCt) is bounded so it can not be Gaussian.

2.4 Approximation of local time, excursion, meander and bridge

The Brownian motion construction in our focus is especially suitable for obtaining results on the approximation of local time of the Brownian motion. Indeed, if two points of an approximating random walk are at the same altitude, they will remain at the same altitude after renements forever. However, this may be disadvantageous for approximating the other three processes, the excursion, bridge and meander because the rst 0 hit changes randomly from one approximation to the next one if we look into the excursion. We also present a dierent, rather combinatorial, construction for BM, excursion, bridge, meander which is due to Phillipe Marchal. His algorithms allow to generate directly the excursion, the bridge and the meander. Moreover, many self-similarity properties are easily seen from this construction, and one can also recover various distributions of BM from the urn schemes that are embedded in the construction. In his paper he also gave a local time approximating algorithm but this one is not so natural as in the rst three cases CHAPTER 2. APPROXIMATION OF LOCAL TIME AND EXCURSION 26

2.4.1 Approximation of local time

Let L denote the Brownian local time at level 0. Let (Bm) be our usual sequence of scaled random walks that converges to B almost surely uniformly on every compact interval. Let `m(k) := #{0 ≤ and 1  2m  the local time of the th approximation at point 0 up l < k | Sem(l) = 0} Lm(t) := 2m `m t2 c m to time t. We will prove that the sequence of the local time of the discrete approximations is almost surely uniformly converges to the local time of the Brownian motion. The main idea of the proof is the following. When the random walk, say the mth, hits 0 at time k the ner random walk, the (m + 1)th, has 1 0 hits between and (for the denition see 2 (Tm+1(k + 1) − Tm+1(k)) Tm+1(k) Tm+1(k + 1) Tm+1 (2.1)) which is geometrically distributed random variable with parameter 1 so we can evaluate the 2 time the Brownian motion spends at 0.

Theorem 6. As n → ∞ !  2−C 1/4 2 2 −n/2 1/2 n (2.35) P sup |Ln(t) − L(t)| ≥ CK (log∗ K) n 2 ≤ λ(C) K 2 , [0,K] with appropriate C dependent positive constant λ(C) after an appropriately large n and xed C.

Proper choice of the constant C and proper usage of the Borel-Cantelli lemma gives the following corollary:

Corollary 2.

2 − n sup |Ln(t) − L(t)| < O(1)n 2 2 a.s. (n → ∞) [0,K] 1 2 sup |Ln(t) − L(t)| < K 4 (log K) a.s. (K → ∞) [0,K] Before the proof of Theorem 6 we recall Lemma B in the following form.

Lemma B. Let

( 1 )   2 −2(m+1) −2m 3 1 −m Am = sup |Tm+1(k)2 − k2 | ≥ CK log ∗K m 2 2 . 0≤k2−2m≤K 2 where log ∗x = max{1, log x}. Then, for any K > 0, C > 1, and for any m ≥ m0(C), we have

2m 1−C P {Am} ≤ 2(K2 ) .

We need the following LDP type result on the convergence of local time.

Lemma 6. Denote by the local time of the symmetric random walk at 0 until the th step. For √ `(k) k j: k ≤ j ≤ k if k → ∞ then ! 1 1 1  j 2 P (`(2k) = j) ≤ √ √ exp − √ 2π k 2 k and   `(2k) 3/4 1 1 − 1 c2(log 2k)3/2 P √ ≥ c(log 2k) < √ e 2 . 2k 2π c(log 2k)3/4 CHAPTER 2. APPROXIMATION OF LOCAL TIME AND EXCURSION 27

Proof of Lemma 6. For proving these, we apply normal approximation for the distribution of `:

2   2k−(2k−j)   2k − j 1 1 2 P (`(2k) = j) = 2−2k+j ∼ exp −  k 2 p  2k−j  π(2k − j)/2 2 ! ! 1 1  j −1/2 1  j 2  j −1 1 1 1  j 2 = √ · 1 − exp − √ · 1 − ≤ √ √ exp − √ 2 πk 2k 2 k 2k 2π k 2 k √ if k ≤ j ≤ k and k → ∞. Using this approximation and the asymptotic property of the normal distribution 1 2 1 − Φ(x) < √ e−x /2, x 2π if x is large enough, one can readily write

`(2k)  X    P √ ≥ c(log 2k)3/4 = P (`(2k) = j) ≈ 1 − Φ c(log 2k)3/4 2k j c(log 2k)3/4≤ √ 2k

1 1 − 1 c2(log 2k)3/2 < √ e 2 . 2π c(log 2k)3/4

Proof of Theorem 6. We will prove that two consecutive approximations are close enough to each other. Namely !  2−C 1/4 2 2 −m/2 1/2 m (2.36) P sup |Lm+1(t) − Lm(t)| ≥ CK (log∗ K) m 2 ≤ 3 K 2 . [0,K]

Therefore, {Lm} is a Cauchy sequence in the topology of the uniform convergence on compact sets: !  2−C 1/4 2 2 −n/2 for some 1/2 n P sup |Ln+j(t) − Ln(t)| ≥ CK (log∗ K) n 2 j ≥ 1 ≤ λ(C) K 2 , [0,K] (2.37) with appropriate C dependent positive constant λ(C) after an appropriately large n and xed C. Hence, (2.35) follows.

For proving (2.36) we rst apply a triangle inequality

1 sup |Lm+1(t) − Lm(t)| = sup m+1 |`m+1(4k) − 2`m(k)| [0,K] 1≤k≤K22m 2 1 1 ≤ sup m+1 |`m+1(4k) − `m+1(Tm+1(k))| + sup m+1 |`m+1(Tm+1(k)) − 2`m(k)| . 1≤k≤K22m 2 1≤k≤K22m 2 We will estimate ! 1 1/4 2 2 −m/2 p1 = P sup m+1 |`m+1(4k) − `m+1(Tm+1(k))| ≥ CK (log∗ K) m 2 1≤k≤K22m 2 ! 1 1/4 2 2 −m/2 p2 = P sup m+1 |`m+1(Tm+1(k)) − 2`m(k)| ≥ CK (log∗ K) m 2 1≤k≤K22m 2 CHAPTER 2. APPROXIMATION OF LOCAL TIME AND EXCURSION 28 separately.

Estimation of p1

Instead of p1 we will estimate a strictly larger probability ! 1 1/2 1/4 −m/2 P sup m+1 |`m+1(4k) − `m+1(Tm+1(k))| ≥ C K log∗ Km2 . 1≤k≤K22m 2

Since {S(k + min {4k, Tm+1(k)}) − S(min {4k, Tm+1(k)})}k has the same law as {S(k)}k one can write

! 1 1/2 1/4 −m/2 P sup m+1 |`m+1(4k) − `m+1(Tm+1(k))| ≥ C K log∗ Km2 1≤k≤K22m 2 ! 1 1/2 1/4 −m/2 = P sup m+1 `m+1 (min {4k, Tm+1(k)} , max {4k, Tm+1(k)}) ≥ C K log∗ Km2 1≤k≤K22m 2 ! 1 1/2 1/4 −m/2 = P sup m+1 `m+1 (0, |4k − Tm+1(k)|) ≥ C K log∗ Km2 1≤k≤K22m 2

Using Lemma B one one has

! 1 1/2 1/4 −m/2 P sup m+1 `m+1 (0, |4k − Tm+1(k)|) ≥ C K log∗ Km2 1≤k≤K22m 2 ! c 1 1/2 1/4 −m/2 ≤ P Am; sup m+1 `m+1 (0, |4k − Tm+1(k)|) ≥ C K log∗ Km2 + P(Am) 1≤k≤K22m 2 1 ! !   2 1 2 1 2 m 1/2 1/4 −m/2 2m 1−C ≤ P sup m+1 `m+1 CK log ∗K m 2 ≥ C K log∗ Km2 + 2(K2 ) 1≤k≤K22m 2 3

K22m 1 ! !   2 X 2 1 m 1/2 1/4 m/2 2m 1−C ≤ P ` CK log K m 2 2 ≥ C K log Km2 + 2(K2 ) . (2.38) m+1 3 ∗ ∗ k=1 We want to apply Lemma 6. For, proceed with the following transformation:

1 ! !   2 2 1 m 1/2 1/4 m/2 P ` CK log K m 2 2 ≥ C K log Km2 m+1 3 ∗ ∗

 1 1   2  2 m  `m+1 CK log ∗K m 2 2 1/2 1/4 m/2 3 C K log∗ Km2 = P  ≥  2 1/4 1/4 m/2 2 1/4 1/4 m/2 3 CK log ∗K m 2 3 CK log ∗K m 2

 1 1   2  2 m  `m+1 CK log ∗K m 2 2  1/4 3 3 1/4 3/4 3/4 = P  ≥ C (log∗ K) m  . 2 1/4 1/4 m/2 2 3 CK log ∗K m 2 CHAPTER 2. APPROXIMATION OF LOCAL TIME AND EXCURSION 29

Routine calculation shows that

1 !!3/4   2 −1 3/4 2 1 m c2 (log C + log K + m) < c log CK log K m 2 2 ∗ 3 ∗

31/4 ≤ c (2 log C log Km)3/4 = C1/4(log K)3/4m3/4, ∗ ∗ 2 ∗ where 31/4 1/4 −3/4. Now, using the last two displayed lines then applying Lemma 6 we c = 2 C (log∗ C) get

 1 1   2  2 m  `m+1 CK log ∗K m 2 2  1/4 3 3 1/4 3/4 3/4 P  ≥ C (log∗ K) m  2 1/4 1/4 m/2 2 3 CK log ∗K m 2

 1 1   2  2 m 1 !!3/4 `m+1 CK log ∗K m 2 2   2 3 2 1 m < P  ≥ c log CK log ∗K m 2 2  2 1/4 1/4 m/2 3 3 CK log ∗K m 2

1/2 3/2 1/2 3/2 3/2 3/2 1 2 3/2 1 C 1 C − c (log C+log K+m) − 8 3/2 (log C+log∗ K+m) − 8 3/2 (log C+log∗ K+m ) < e 4 ∗ < e log C < e log C . (2.39)

Henceforth, one can continue the estimation of the rst term in (2.38):

K22m 1 ! !   2 X 2 1 m 1/2 1/4 m/2 P ` CK log K m 2 2 ≥ C K log Km2 m+1 3 ∗ ∗ k=1

1 C1/2 3/2 3/2 3/2 1 C1/2 3/2 3/2 3/2 2m − 8 3/2 (log C+log∗ K+m ) log K log 2+2m log 2− 8 3/2 (log C+log∗ K+m ) < K2 e log C < e log C . (2.40)

Hence, by (2.38) and (2.40), we get

! 1 1/2 1/4 −m/2 p1 < P sup m+1 |`m+1(4k) − `m+1(Tm+1(k))| ≥ C K log∗ Km2 1≤k≤K22m 2

1 C1/2 3/2 log K log 2+2m log 2− 8 3/2 (log C+log∗ K+m) 2m 1−C ≤ e log C + 2(K2 ) . (2.41)

Estimation of the probability p2

2m Taking conditional expectation with respect to `m(K2 ) one has the following sum

! 1 1/4 2 2 −m/2 P sup m+1 |`m+1(Tm+1(k)) − 2`m(k)| ≥ CK (log∗ K) m 2 1≤k≤K22m 2

K22m " # 1 X 1/4 2 2 −m/2 2m = P sup m+1 |`m+1(Tm+1(k)) − 2`m(k)| ≥ CK (log∗ K) m 2 `m(K2 ) = n 2m 2 n=1 1≤k≤K2 2m  · P `m(K2 ) = n . (2.42) CHAPTER 2. APPROXIMATION OF LOCAL TIME AND EXCURSION 30

2m Supposing `m(K2 ) = n we have the equality:

1 1 k X sup m+1 |`m+1(Tm+1(k)) − 2`m(k)| = sup m+1 γi − 2k , 2m 2 2 1≤k≤K2 1≤k≤n i=1 where {γi}i is an i.i.d sequence of geometrical variables with parameter 1/2. Taking into consideration this remark one can write the sum (2.42) in he following form:

K22m k ! X 1 X P sup γ − 2k ≥ CK1/4(log K)2m22−m/2 P ` (K22m) = n . 2m+1 i ∗ m n=1 1≤k≤n i=1

Divide this sum into two parts at level 1/2 1/2 3/4 3/4 m one nds: N = C K (log∗ K) m 2

K22m k ! X 1 X P sup γ − 2k ≥ CK1/4(log K)2m22−m/2 P ` (K22m) = n 2m+1 i ∗ m n=1 1≤k≤n i=1 N k ! X 1 X = P sup γ − 2k ≥ CK1/4(log K)2m22−m/2 P ` (K22m) = n (2.43) 2m+1 i ∗ m n=1 1≤k≤n i=1 K22m k ! X 1 X + P sup γ − 2k ≥ CK1/4(log K)2m22−m/2 P ` (K22m) = n . (2.44) 2m+1 i ∗ m n=N 1≤k≤n i=1 Overestimating the sum (2.43) we get

N k ! X 1 X P sup γ − 2k ≥ CK1/4(log K)2m22−m/2 P ` (K22m) = n 2m+1 i ∗ m n=1 1≤k≤n i=1 k !

X 1/4 2 2 m/2 < N · P sup γi − 2k ≥ CK (log∗ K) m 2 1≤k≤N i=1 k !

X 1/2 1/2 < N · P sup γi − 2k ≥ 2 (2CN log N) 1≤k≤N i=1  2−C 1+1−C 1/2 3/4 3/4 m (2.45) < N < K (log∗ K) m 2 , if C is large enough. We have basically applied Lemma A here. 1/2 On one hand, Var(γ1 −2) = 2. This is the origin of the 2 multiplier at the right in the probability in the third row. (In Lemma A we have to ensure that the variance of Xi is 1.) On the other hand,

(2CN log N)1/2  1 1 3 3 1/2 = 2CC1/2K1/2(log K)3/4m3/42m log C + log K + log log K + log m + m log 2 ∗ 2 2 4 ∗ 4 3/4 1/2 1/4 2 2 m/2 1/4 2 2 m/2 (2.46) < 2C (log C) K (log∗ K) m 2 < CK (log∗ K) m 2 , if C is large enough. This yields the second estimate. The third is a pure application of Lemma A. The last can be obtained by omitting the term C1/2 from N. CHAPTER 2. APPROXIMATION OF LOCAL TIME AND EXCURSION 31

Overestimating (2.44) one can write

K22m k ! X 1 X P sup γ − 2k ≥ CK1/4(log K)2m22−m/2 P ` (K22m) = n 2m+1 i ∗ m n=N 1≤k≤n i=1 2m K2  2m  X `m(K2 ) 3/4 < P ` (K22m) = n = P ` (K22m) ≥ N = P ≥ C1/2 (log Km) m m K1/22m ∗ n=N − 1 C(log Km)3/2 < e 2 ∗ , (2.47) by a proper application of Lemma 6. Summarizing (2.45) and (2.47) one has

! 1 1/4 2 2 −m/2 P sup m+1 |`m+1(Tm+1(k)) − 2`m(k)| ≥ CK (log∗ K) m 2 1≤k≤K22m 2

 2−C 1 3/2 2m 1−C 1/2 3/4 3/4 m − 2 C(log∗ Km) (2.48) ≤ 2(K2 ) + K (log∗ K) m 2 + e .

The lines (2.41) and (2.48) together yield (2.36):

! 1/4 2 2 −m/2 P sup |Lm+1(t) − Lm(t)| ≥ CK (log∗ K) m 2 [0,K]

1/2 log K log 2+2m log 2− 1 C (log3/2 C+log3/2 K+m3/2) ≤ e 8 log3/2 C ∗ 2−C 2−C  1/2 3/4 3/4 m  1/2 m + K (log∗ K) m 2 < 3 K 2 if C is large enough. Now, for proving (2.37) we use the following estimation

n+j

X sup |Ln+j(t) − Ln(t)| = sup Lm+1(t) − Lm(t) [0,K] [0,K] m=n n+j ∞ X X 1/4 2 2 −m/2 1/4 2 2 −n/2 (2.49) ≤ sup |Lm+1(t) − Lm(t)| < CK (log∗ K) m 2 < c CK (log∗ K) n 2 m=n [0,K] m=n with proper constant, say c = 20 for any j > 1 after an appropriately large n. We omit the constant c from the latter expressions because it can be built in e.g. the estimate (2.46) if C is large enough. Therefore, we can conclude that

! 1/4 2 2 −n/2 for some P sup |Ln+j(t) − Ln(t)| ≥ CK (log∗ K) n 2 j ≥ 1 [0,K] ∞ ! X 1/4 2 2 −m/2 ≤ P sup |Lm+1(t) − Lm(t)| ≥ CK (log∗ K) m 2 m=n [0,K] ∞ X  2−C  2−C ≤ 3 K1/22m ≤ λ(C) K1/22n , m=n with appropriate C dependent positive constant λ(C). This last line proves (2.37) and (2.35). CHAPTER 2. APPROXIMATION OF LOCAL TIME AND EXCURSION 32

The following corollary is necessary for proving an stochastic integral approximation result Theorem 9 in Section 2.5. Before this global convergence theorem we introduce some notation. Let a and a respec- L (t) Ln(t) tively be the local time of the standard Brownian motion and its nth approximation respectively at n level until time . For arbitrary real number let us dene a daen where da2 e . a t a Ln(t) = Ln (t) daen = 2n n More, daen 1 da2 e 2m 1 2m n Ln (t) = 2m `m (bt2 c) = 2m #{0 ≤ l < bt2 c | Sem(l) = da2 e} Corollary 3. 1 a a −( 2 −ε)n a.s. (2.50) sup sup |Ln(t) − L (t)| < O(1)2 (n → ∞) a∈R [0,K] for an arbitrary small positive ε. The rate of the convergence is worse than in Corollary 2 and we does not say anything about the case when m is xed and K tends to innity. The reason is that we use the following statement on the uniformly Hölder continuity of the local time for proving the corollary.

For all xed K there exist a positive random variable DK such that

x y α sup |L (t) − L (t)| ≤ DK |x − y| (2.51) t∈[0,K] for every α < 1/2 (see [23, (1.32) Exercise, Chapter VI]). Proof of Corollary 3. First, we prove

 

 a a 1/4 2 2 −n/2 for some  P  sup sup |Ln+i(t) − Ln(t)| ≥ CK (log∗ K) n 2 i ≥ 1  a = j2−2n [0,K]  j ∈ Z  4−C ≤ λ(C) K1/22n , (2.52)

for all xed j with appropriate C dependent positive constant. This yields

a a 1/4 2 2 −n/2 a.s. (2.53) sup sup |Ln(t) − L (t)| < C K (log∗ K) n 2 (n → ∞) a = j2−2n [0,K] j ∈ Z To prove (2.52) we rst apply the following transformation:

1 sup sup |La (t) − La (t)| = sup sup `2j (4k) − 2`j (k) . m+1 m m+1 m+1 m a = j2−2m [0,K] j∈Z 1≤k≤K22m 2 j ∈ Z

Dividing Z into two parts one gets

1 sup sup `2j (4k) − 2`j (k) m+1 m+1 m j∈Z 1≤k≤K22m 2 1 1 ≤ sup sup `2j (4k) − 2`j (k) + sup sup `2j (4k) − 2`j (k) . m+1 m+1 m m+1 m+1 m |j|≤K22m 1≤k≤K22m 2 |j|>K22m 1≤k≤K22m 2 CHAPTER 2. APPROXIMATION OF LOCAL TIME AND EXCURSION 33

Here, the second term is 0 because under K22m steps the random walk can not reach the level further than K22m from the origin. Therefore, we only have to deal with rst part. ! 1 P sup sup `2j (4k) − 2`j (k) ≥ CK1/4(log K)2m22−m/2 m+1 m+1 m ∗ j∈Z 1≤k≤K22m 2 ! 1 = P sup sup `2j (4k) − 2`j (k) ≥ CK1/4(log K)2m22−m/2 m+1 m+1 m ∗ |j|≤K22m 1≤k≤K22m 2 ! X 1 ≤ P sup `2j (4k) − 2`j (k) ≥ CK1/4(log K)2m22−m/2 m+1 m+1 m ∗ 1≤k≤K22m 2 |j|≤K22m ! X 1 = P sup `2j (4k) − 2`j (k) ≥ CK1/4(log K)2m22−m/2 m+1 m+1 m ∗ τ ≤k≤K22m 2 |j|≤K22m j ! X 1 1/4 2 2 −m/2 ≤ P sup m+1 |`m+1(4k) − 2`m(k)| ≥ CK (log∗ K) m 2 1≤k≤K22m 2 |j|≤K22m  2−C  4−C < 2K22m3 K1/22m = 6 K1/22m , where τj denotes the rst hitting time of the level j. We used the strong of the standard random walk and Theorem (6). At this point we apply a (2.49) like argument:

n+j a a X a a sup sup |Ln+j(t) − Ln(t)| = sup sup Lm+1(t) − Lm(t) −2n −2n a = j2 [0,K] a = j2 [0,K] m=n j ∈ Z j ∈ Z n+j n+j X a a X a a ≤ sup sup |Lm+1(t) − Lm(t)| ≤ sup sup |Lm+1(t) − Lm(t)| −2n −2m m=n a = j2 [0,K] m=n a = j2 [0,K] j ∈ Z j ∈ Z ∞ X 1/4 2 2 −m/2 1/4 2 2 −n/2 < CK (log∗ K) m 2 < c CK (log∗ K) n 2 m=n with proper constant c > 0 , which can be omitted if C is large enough, for any j > 1 after an appropriately large n. So we can conclude that  

 a a 1/4 2 2 −n/2 for some  P  sup sup |Ln+j(t) − Ln(t)| ≥ CK (log∗ K) n 2 j ≥ 1  a = j2−2m [0,K]  j ∈ Z   ∞ X  a a 1/4 2 2 −m/2 ≤ P  sup sup |Lm+1(t) − Lm(t)| ≥ CK (log∗ K) m 2  −2m m=n  a = j2 [0,K]  j ∈ Z ∞ X  4−C  4−C ≤ 6 K1/22m ≤ λ(C) K1/22n , m=n with appropriate C dependent positive constant which proves (2.52). CHAPTER 2. APPROXIMATION OF LOCAL TIME AND EXCURSION 34

Now, we are ready to prove (2.50). By using (2.51) and (2.53) we get

a a daen a sup sup |Ln(t) − L (t)| = sup sup Ln (t) − L (t) a∈R [0,K] a∈R [0,K]

a daen daen a ≤ sup sup Ln(t) − L (t) + sup sup L (t) − L (t) a = j2−2n [0,K] a∈R [0,K] j ∈ Z 1 1 1/4 2 2 −n/2 −( 2 −ε)n −( 2 −ε)n ≤ c CK (log∗ K) n 2 + DK 2 ≤ O(1)2 a.s. as n tends to innity.

2.4.2 Excursion, bridge and meander  Marchal's construction We present Phillipe Marchal's algorithm for approximating the , bridge and mean- der. His algorithms allow to generate directly the excursion, the bridge and the meander. Moreover, many self-similarity properties are easily seen from this construction, and one can also recover various distributions of BM from the urn schemes that are embedded in the construction. In his paper he also gave a local time approximating algorithm but this one is not so natural as in the rst three cases. In spite of the originality of that construction we will not present it here.

Theorem F. There exists a family (Sn, n ≥ 1) of random walks on Z starting at 0 where for every n, Snrespectively: (1) has length 2n and is conditioned to return to 0 at time 2n, (2) has length 2n and conditioned to return to 0 at time 2n and to stay positive from time 1 to 2n − 1, (3) has length and conditioned to stay positive from time to , n √ 1 n −√1 and such that almost surely, for every , n (or n in the third case) converges t ∈ [0, 1] Sb2ntc/ n Sbntc/ n to Bet where (Bet, 0 ≤ t ≤ 1) is respectively: (1) a , (2) a Brownian excursion, (3) a Brownian meander. Before the construction we have to introduce some notation and denition. If the meander of a path P is positive, a point t in the meander is visible from the right if

P(t) = min{P(n)| n ≥ t }. A positive step followed by a negative step is called positive hat. A negative step is dened likewise. Let us describe a procedure to extend a path P. Suppose we are given a time T such that P(T ) > 0. Then lifting the Dyck path before time T means the following. Let

T 0 = 1 + sup{n ≤ T | P(n) < P(T )}.

Then form the new path P0 by inserting a positive step at time T 0 and then a negative step at time T + 1. We remark the if P(T − 1) = P(T ) − 1, then T = T 0 and lifting the Dyck path before time T amounts to inserting a hat at time T . Now, we describe that how to generate Sn+1 from Sn: I. Choose a random time t uniformly on {0,..., 2n − 1}. II. If Sn(t) = 0, insert a positive or negative hat at time t with respective 1/21/2. III. If t is in a positive excursion, CHAPTER 2. PATHWISE STOCHASTIC INTEGRATION 35

1 with probability 1/2 insert a positive hat at time t 2 with probability 1/2 lift the Dyck path before time t. IV. If t is in the meander of Sn and if this meander is positive, 1 If t is visible from the right, or if t is visible from the right and Sn(t) is even, proceed as in III. 2 If t is visible from the right and Sn(t) is odd, a with probability 1/2 insert at time t a positive hat, b with probability 1/2 insert at time t two positive steps.

Now, let us present the algorithms. In case (1) of F, begin with S0 the empty path. To obtain Sn+1 from Sn, choose a random time t uniformly in {0,..., 2n} and apply procedure II or III. In case (2), begin with S1 a positive hat. To obtain Sn+1 from Sn, choose a random time t uniformly in {1,..., 2n} and apply procedure III. In case (3), begin with S1 a positive step. To obtain S2n+1 from S2n−1, choose a random time t uniformly in {1,..., 2n} and apply procedure IV. To obtain S2n from S2n−1, just add a last random step.

The idea of the proof of these statements is a bijection made between Dyck paths and random binary trees. The convergence rate in all case is O(1/n1/4)

2.5 Pathwise stochastic integration

Our objective in this section is to dene a sequence of stochastic sums converging to R t 0 Ys dM(s) with probability 1 on every compact interval, where M is a continuous local martingale and Y is an integrable stochastic process with respect to M. It will turn out that one can carry out this procedure for two types of processes. The rst is of the form 0 where 0 denotes the left-hand side derivative f−(M) f− of the dierence of two convex functions. The other family of processes is that of with right continuous paths and left limit. The main idea is that one can take rst a dyadic partition on the spatial axis that gives the random stopping times of the Skorohod embedding on the time axis. At this point the procedure somewhat reminds a possible denition of Lebesgue integral. So we get an approach of stochastic integrals which is basically dierent from the usual denition of stochastic integration. Similar approach can be found in Karandikar's papers. Because of this similarity we think that showing some of his results could make our presentation more complete. The theorems in Subsection 2.5.1 are taken from the above mentioned paper [13] from 1996. Here, we could nd statements on stochastic integration w.r.t Brownian motion and and even pathwise construction of solution of SDE. The main dierence of the two approaches is for which process the discretisation is applied. Karandikar applied the discretisation to the integrand while we apply it to the integrator martin- gale.

2.5.1 Karandikar's integral approximation

Throughout this subsection, we x a complete probability space (Ω, F, P) and ltration (Ft) satisfying the usual conditions. His rst theorem is about pathwise stochastic integration w.r.t. Brownian motion is CHAPTER 2. PATHWISE STOCHASTIC INTEGRATION 36

Theorem G. Let (Wt) be Brownian motion adapted to the ltration (Ft) such that Wt − Ws is independent of Fs for all 0 ≤ s ≤ t < ∞. Let f be an r.c.l.l. adapted process and for n ≥ 1, let n be dened by n and for {τi : i ≥ 0} τ0 = 0 i ≥ i

n n −n τ = inf{t ≥ τ : |f (·) − f n (·)| ≥ 2 }. i+1 i t τi Let n be dened as follows. For n n , (Yt ) τk ≤ t < τk+1, k ≥ 0

k−1 n X Y = f n (W n − W n ) + f n (W − W n ). t τk τi+1 τi τk t τk i=0 Then, for all T < ∞ we have

Z t n a.s. sup Yt − f dW → 0 (n → ∞). 0≤t≤T 0 We present his proof of this theorem because we will use some notations and key steps of that in the proof of Theorem 7.

Proof. Note that n R t n , where Yt = 0 f dW

n n n f = f n for τ ≤ t < τ t τk k k+1 and hence by the choice of n we have {τi }

n −n |ft − ft| ≤ 2 . Using the standard estimate

2 Z t Z T 2 (2.54) E sup g dW ≤ 4E g dt 0≤t≤T 0 0 one gets Z t 2 n −2n (2.55) E sup Yt − f dW ≤ 4T 2 . 0≤t≤T 0 Let Z t n Un = sup Yt − f dW . 0≤t≤T 0 √ −n Then (2.55) implies that EUn ≤ 2 T 2 and hence it follows that √ X X X −n E Un = EUn ≤ 2 T 2 < ∞. n n n As a consequence, one gets X Un < ∞ a.s. n which gives the required conclusion. CHAPTER 2. PATHWISE STOCHASTIC INTEGRATION 37

Theorem H. Let X be a and let Z be an r.c.l.l. adapted process. For n ≥ 1, let n be dened by n and for {τi : i ≥ 0} τo = 0 i ≥ i

n n −n τ = inf{t ≥ τ : |Z (·) − Z n (·)| ≥ 2 }. i+1 i t τi Let n be dened as follows. For n n , (Yt ) τk ≥ t < τk+1, k ≥ 0

k−1 n X Y = Z n (X n − X n ) + Z n (X − X n ). t τk τi+1 τi τk t τk i=0 Then, for all T < ∞ we have Z t n a.s. sup Yt − Z dX → 0 (n → ∞). 0≤t≤T 0

Proof. Note that n R t n , where Yt = 0 Z− dX n n n Z = fZ n for τ ≤ t < τ t τk k k+1 for and n . Hence by the choice of n we have k ≥ 1 Z0 = Z0 {τi }

n −n sup |Zt − Zt−| ≤ 2 . t The core of the proof is the inequality (2.56). For this we have to introduce some notations. Let X = M + A be a decomposition of a semimartingale X where M is a locally square integrable martingale and A is a process whose paths are of bounded variation an bounded intervals. Dene stopping times increasing to such that . We have the following statement: σk ∞ Ck = EhM,Miσk < ∞ For a predictable process f and stopping time σ we have

Z t 2 Z σ 2 E sup f dM ≤ 4E f dhM,Mi . (2.56) 0≤t≤σ 0 0 Plugging in our variables we obtain

Z t Z t 2 n −2n E sup Z dM − Z− dM ≤ 2 Ck 0≤t≤σk 0 0 and, as in Theorem G we can conclude that

Z t Z t n sup Z dM − Z− dM → 0 a.s. 0≤t≤σk 0 0 for all k ≥ 1. Since σk → ∞, we get Z t Z t n sup Z dM − Z− dM → 0 a.s. (2.57) 0≤t≤T 0 0

for all T < ∞. As for the dA integral, uniform convergence of Zn to Z− yields Z t Z t n sup Z dA − Z− dA → 0. (2.58) 0≤t≤σk 0 0 Together, (2.5.1) and (2.58) yield the result. CHAPTER 2. PATHWISE STOCHASTIC INTEGRATION 38

Finally we present Karandikar's result on pathwise approximation of the solution of a SDE. The SDE considered is Z t Zt = Ht + a(Z)s− dXs, 0 where X is an Rd valued semimartingale, H is a given adapted r.c.l.l. Rd valued process and

d a : D([0, ∞), R ) → D([0, ∞),L(d, d)), where L(d, d) is the space of d × d matrices. We assume that the functional a satises the following Lipschitz condition. For each T < ∞ there exists a constant CT such that

ka(ρ1)(t) − a(ρ2)(t)k ≤ CT sup kρ1(s) − ρ2(s)k 0≤s≤t for all d and for all . Here denotes the Euclidean norm on d and ρ1, ρ2 ∈ D([0, ∞), R ) 0 ≤ t ≤ T k · k R on L(d, d). For arbitrary xed d and we will dene a function . For, we ρ, η ∈ D([0, ∞), R ) n ≥ 1 Sn(η, ρ) ∈ D rst dene some sequence of some variables for these xed functions. Let and i d be dened inductively by {ui : i ≥ 1} ξ ∈ D([0, ∞), R ) and 0 u0 = 0 ξt ≡ η0

j and having dened uj, ξ for j ≤ i, let

i −n ui+1 = inf{t > ui : kη(t) − η(ui) + a(ξ )(ui)(ρ(t) − ρ(ui))k ≥ 2 i i −n or ka(ξ )(t) − a(ξ )(ui)k ≥ 2 } and  i i+1 ξ (t) for t < ui+1 ξ (t) = i i . ξ (ui) + η(ui+1) − η(ui) + a(ξ )(ui)(ρ(ui+1) − ρ(ui)) for t ≥ ui+1 i By denition ξ is a step function with jumps at u1, u2, . . . , ui+1. Now, we are able to dene the function Sn(η, ρ). Let Sn(η, ρ)(0) = η(0) and for ui < t ≤ ui+1 let

i i Sn(η, ρ)(t) = ξ (ui) + ηt − ηui + a(ξ )(ρ(t) − ρ(ui)).

So have dened a function on D([0, ∞), Rd) × D([0, ∞), Rd). Taking limit one gets the function

S(η, ρ) = lim Sn(η, ρ) n→∞ whenever the limit exists in the topology of uniform convergence of compact subsets.

Theorem I. Let (Xt) be a semimartingale and let (Ht) be an r.c.l.l. adapted process. Then

Zt(ω) = S(H·(ω),X·(ω))(t) is the unique solution to the equation

Z t Zt = Ht + a(Z)s− dXs, 0 CHAPTER 2. PATHWISE STOCHASTIC INTEGRATION 39

2.5.2 Discretization applied to the integrator

Theorem 7. Let (Wt, (Ft)) be a Brownian motion with the standard ltration. Let f be an r.c.l.l. adapted process and for n ≥ 1 let {sn(i): i ≥ 0} be the stopping times dened in (2.6). Let n be dened as follows. For , (It ) sn(k) ≤ t < sn(k + 1), k ≥ 0

k−1 n X It = fsn(i)(Wsn(i+1) − Wsn(i)) + fsn(k)(Wt − Wsn(k)). i=0 Then, for all T < ∞ we have

Z t n a.s. (2.59) sup It − f dW → 0 (n → ∞). 0≤t≤T 0

Proof. Essentially, the proof is based on the fact that the sequence {sm(k)}k is getting dense on every compact interval as m goes to innity. M Fix T, M > 0 and let ϑ = inf{t | 0 ≤ t ≤ T, |ft| > M}. Since f is r.c.l.l. process, max[0,T ] |ft| is almost surely nite so P(ϑM = T ) → 1 as M → ∞. Dene

M M ft = ft I{t < ϑ }

f n,M = f M for s (k) ≤ t < s (k + 1) (2.60) t sn(k) n n and Z t n,M n,M (2.61) It := f dW. 0 We will prove that Z t n,M M (2.62) sup It − f dW → 0 a.s. 0≤t≤T 0 or equivalently Z t n sup It − f dW → 0 a.s. 0≤t≤ϑM 0 Hence, the property P(ϑM = T ) → 1 as M → ∞ yields (2.59). Indeed, the following events are a.s. equal to each other because of the property P(ϑM = T ) → 1 as M → ∞:

 Z t   Z t  n for all n sup It − f dW −→ 0 a.s. = m ∈ N : sup It − f dW −→ 0 a.s. 0≤t≤T 0 n→∞ 0≤t≤ϑm 0 n→∞  Z t  \ n = sup It − f dW −→ 0 a.s. . 0≤t≤ϑm 0 n→∞ m∈N Since each terms in the last section have probability 1, we get (2.59). For proving (2.62), we prove a somewhat more general statement. Namely, we show (2.59) for arbitrary f r.c.l.l. adapted process such that max[0,T ] |ft| ≤ M a.s. In the sequel, we suppose f possesses this property. For simplicity we use the notations n∗ and n with the following meaning: ft It Z t n∗ for and n n∗ (2.63) ft = fsn(k) sn(k) ≤ t < sn(k + 1) It := f dW 0 similar to (2.60) and (2.61). CHAPTER 2. PATHWISE STOCHASTIC INTEGRATION 40

Using the notations of Theorem G we rst dene the set An,k for non-negative integers n and k:

 n n n An,k = ∀i(τi < T ) ∃j : τi ≤ sk(j) < τi+1 ∧ T .

Since the sequence {sn+j(k)}k is a renement of the sequence {sn(k)}k and {sm(k)}k is getting dense on every compact interval as n goes to innity, there exists a positive integer l(n, k) such that for all l ≥ l(n, k) the property An,k ⊂ An+1,l holds. Moreover, for xed n the probability P(An,k) tends to 1 as k → ∞. We introduce the following sequences of positive integers {kn} and embedded sets {An}:

k0 = 0, A0 = ∅, ( ) 2−2n k = inf k > k P(A ) ≥ 1 − , A ⊂ A , A := A , (2.64) n+1 n n+1,k n n+1,k n+1 n+1,kn+1 C(n) where 4 4 2 4. By the remarks in the previous paragraph and are well C(n) = 12 3 T M {kn} {An} dened and has the following two properties An−1 ⊂ An and P(An) → ∞ as n → ∞.

Let Z t n Un = sup It − f dW . 0≤t≤T 0 We will prove that tends to 0 almost surely as goes to innity. Ukn n Using the triangle-inequality one gets

! 2! ! Z t 2 E sup |U |2 ≤ E sup Y n − f dW + E sup Ikn − Y n . kn t t t [0,T ] [0,T ] 0 [0,T ]

(For the denition of n see Theorem G.) By the denition of one nds that for all Yt An t ≤ T n n∗ −n on the set |ft − ft | ≤ 2 An where n is dened in the proof of Theorem G. Using this inequality and (2.54) one gets the following ft estimate: ! ! ! 2 2 2 kn n kn n kn n c E sup It − Yt ≤ E sup It − Yt I{An} + E sup It − Yt I{An} [0,T ] [0,T ] [0,T ] ! Z T 4 n n∗ 2 c kn n ≤ 4E (ft − ft ) dt + P(An) · E sup It − Yt 0 [0,T ] ≤ 4T 2−2n + 2−2n. (2.65)

The last inequality is valid because of the denition of An and the following consideration on the last expectation. ! 4 44  4 kn n kn n E sup It − Yt ≤ E IT − YT [0,T ] 3  4 because kn n is submartingale. Simple calculation on stochastic integrals shows that It − Yt

!2  4 Z T kn n n∗ n 2 E IT − YT ≤ 3E (fs − fs ) ds . 0 CHAPTER 2. PATHWISE STOCHASTIC INTEGRATION 41

4 4 Using the condition max[0,T ] |ft| ≤ M a.s. we get that the last expression is smaller than 4T M so we get ! 4 44 kn n 2 4 E sup It − Yt ≤ 12 T M . [0,T ] 3 Summarizing these estimation we have ! 2 −2n E sup |Ukn | ≤ (8T + 1)2 . [0,T ] √ This implies that −n and hence it follows that EUkn ≤ 8T + 1 2 √ X X X −n E Ukn = EUkn ≤ 8T + 1 2 < ∞. n n n As a consequence, one gets X a.s. Ukn < ∞ n which gives the required conclusion.

Now, we will prove that Um → 0 almost surely as m goes to innity. Let n be such that kn ≤ m < k . Then n+1 U ≤ U + sup Ikn − Im . m kn t t [0,T ] Since the sequence is a renement of the sequence one has . Therefore, {sm(j)}j {skn (j)}j An−1 ⊂ An,m

kn m −n+1 on the set ft − ft ≤ 2 · 2 An−1 so

kn m −n+2 on sup It − It ≤ T 2 An−1 [0,T ] from which one obtains −n+2 on Um ≤ Ukn + T 2 An−1.

By the denition of {An}n, P(An) → 1 and An ⊂ An+1 so we get Um tends to 0 a.e. as it was required.

The following general form of Theorem 7 can be obtained by using the DDS-construction.

Theorem 8. Let M be continuous (Ft)-martingale with quadratic variation such that hMi∞ = ∞. Let Y be an r.c.l.l. (Ft)-adapted process and for n ≥ 1 let {τn(i): i ≥ 0} be the stopping times dened in (2.13). Let n and n be dened as follows. For arbitrary let be such that (Yt ) (It ) t k τn(k) ≤ t < τn(k + 1), k ≥ 0 let us dene n (2.66) Yt := Yτn(k) Z t k−1 n n X It = Ys dMs = Yτn(i)(Mτn(i+1) − Mτn(i)) + Yτn(k)(Mt − Mτn(k)). 0 i=0 Then, for all K < ∞ we have Z t n a.s. (2.67) sup It − Y dM → 0 (n → ∞). 0≤t≤K 0 CHAPTER 2. PATHWISE STOCHASTIC INTEGRATION 42

The next important corollary is a simple consequence of this theorem. Corollary 4. Under the assumption of Theorem 8 one can obtain the following convergence result:

Z t X a.s. sup Yτn(i)(Mτn(i+1) − Mτn(i)) − Y dM → 0 (n → ∞). 0≤t≤K 0 τn(i+1)≤t This corollary implies that n can be thought of as a stochastic sum of stochastically weighted, It independent, identically distributed, symmetrical and 1 valued random variables. ± 2m

Proof. We want to apply Theorem 7. For, consider the following assertion for an (Ft)-progressive process Y : Z t Z t Z t Z hMit Y dM = Y dM = Y dB = Y dB (2.68) s s ThMis s ThMis hMis Ts s 0 0 0 0 where T is the quasi-inverse of hMi and B is the DDS-Brownian motion of M. The rst equality holds because the intervals of constancy are the same for and for so on the intervals M hMi [s, ThMis ] M is constant. The third equality follows from Proposition 1.4 in [23, Chapter V.] for C-continuous time-changed stochastic integrals: Lemma G. If is a time-change and is an -progressive process, then is - C = (Cu) H (Ft) HCu (FCu ) progressive and if X is a C-continuous process of nite variation, then

Z Ct Z t

Hs dXs = I {Cu < ∞} HCu dXCu . C0 0

Here a process X is said to be C-continuous if X is constant on each interval [Cu−,Cu].

In our present case B is hMi-continuous and hMi∞ < ∞ so (2.68) follows. σ Let σk := inf{t > 0 | hMit > k} and denote by M k the stopped martingale. Using (2.68) one obtains

Z t Z t Z t Z t n n σk σk sup Ys dMs − Ys dMs = sup Ys dMs − Ys dMs 0≤t≤K∧σk 0 0 0≤t≤K 0 0 Z t Z t Z t Z t = sup Y n dB − Y dB ≤ sup Y n dB − Y dB . (2.69) Ts s Ts s Ts s Ts s σ 0≤t≤hM k iK 0 0 0≤t≤k 0 0

Here, R hMit Y n dB is equal to the following sum: 0 Ts s

k−1 Z hMit Z t X Y n dB = Y n dM = Y (M − M ) + Y (M − M ) Ts s s s τn(i) τn(i+1) τn(i) τn(k) t τn(k) 0 0 i=0 X = YThMi (BhMiτ (i+1) − BhMiτ (i) ) + YThMi (BhMit − BhMiτ (k) ) τn(i) n n τn(k) n τn(i+1)

X Y (B − B ) + Y (B − B ) Tsn(i) sn(i+1) sn(i) Tsn(i) hMit sn(k)

hMisn(i+1)

Z t Z t n∗ sup (YT )s dBs − YTs dBs 0≤t≤k 0 0 which fulls the condition of Theorem 7. Therefore, it converges to zero as n goes to innity. Since hMi∞ = ∞, σk also tends to innity as k tends to innity. Applying this and the previous conclusion of Theorem 7 we get

Z t Z t n a.s. sup Ys dMs − Ys dMs → 0, 0≤t≤K 0 0 which was required.

2.5.3 The non cadlag-case Karandikar's approach provides a method for approximating stochastic integrals where the integrand is cadlag process. Therefore, we cannot apply his method for integrals in which the integrand is the sign function of the Brownian motion sign(Bt) which is often used as a basis of examples in stochastic analysis. In this case we can carry out our approximating method which is applicable on a much smaller class of integrands, namely for 0 where 0 is the derivative of the dierence of two f−(B) f− convex functions. In [28] Tamás Szabados gave an approximation theorem using discrete Itô formula. This result was valid for integrals like R f(B) dB for f ∈ C2, i.e. for two times continuously dierentiable real functions. Using Theorem 3 and Itô-Tanaka formula [23] one could prove the following more general statement of this kind.

Theorem 9. Let f be a dierence of two convex functions and let M be a continuous local martingale such that hMi∞ = ∞ almost surely. Then for arbitrary K > 0 Z t Z t 0 0 (2.70) sup f−(Mm(s)) dMm(s) − f−(M(s)) dM(s) → 0 t∈[0,K] 0 0 almost surely as m tends to innity. Proof. Basically, we follow the method of the proof of Itô-Tanaka theorem [23, Theorem 1.5, Ch VI]. It is enough to prove the formula for a convex f. On every compact interval I f is equal to a convex 0 function g such that g has compact support. Thus by stopping M and Mm when they rst leave a compact set, it suces to prove the statement when f 00 has compact support in which case there are two constants αI and βI such that 1 Z f(x) = α x + β + |x − a|f 00( da). (2.71) I I 2 First, we prove the statement (2.70) for the Brownian case, that is, M = B. Here, we have to remark that the stopping does not change the convergence rate and the corresponding probabilities in Theorem

A so we have the same convergence result for the stopped processes B and Bm. Proper usage of the above equation leads us to the Itô-Tanaka formula for Brownian motion B

Z t 1 Z f(B(t)) = f(B(0)) + f 0 (B(s)) dB(s) + Laf 00( da). (2.72) − 2 t 0 R CHAPTER 2. PATHWISE STOCHASTIC INTEGRATION 44

For having the discrete version of this equation we need the following equation

Z t sign(Bm(s) − a) dBm(s) 0 daem −m a −m (2.73) = |Bm(t) − daem| − daem − Lm (t) + O(2 ) = |Bm(t) − a| − a − Lm(t) + O(2 ).

m where da2 e . Here, we used the denition of a daem (see Corollary 3)and that daem = 2m Lm(t) = Lm (t) −m −m −m |Bm(t) − daem| − |Bm(t) − a| = O(2 ) where |O(2 )| ≤ 2 . Finally, applying the last three and the following statement Z 0 1 00 f−(x) = sign(x − a)f ( da) + αI 2 I we obtain the equation

1 Z f(B (t)) = α B (t) + β + |B (t) − a| f 00( da) m I m I 2 m R Z 1 Z t  = α (B (t) − B (0)) + f(B (0)) + sign(B (s) − a) dB (s) + La (t) + O(2−m) f 00( da) I m m m 2 m m m R 0 Z t Z Z 0 a 00 −m 00 = αI Bm(t) + f−(Bm(s)) dBm(s) − αI (Bm(t) − Bm(0)) + Lm(t)f ( da) + O(2 )f ( da) 0 R R Z t Z Z 0 a 00 −m 00 = f−(Bm(s)) dBm(s) + Lm(t)f ( da) + O(2 )f ( da). 0 R R Therefore, we can write

Z t Z t 0 0 sup | f−(Bm(s)) dBm(s) − f−(B(s)) dB(s)| t∈[0,K] 0 0 Z daem a 00 ≤ sup |f(Bm(t)) − f(B(t))| + sup |Lm (t) − L (t)|f ( da) t∈[0,K] R t∈[0,K] Z + sup O(2−m)f 00( da). R t∈[0,K] On the right hand side each terms converge almost surely to 0 as m tends to innity. For the rs term this is trivial, the convergence of the second and the third term can be proved using (2.50) and the Lebesgue theorem. So we get

Z t Z t 0 0 (2.74) sup f−(Bm(s)) dBm(s) − f−(B(s)) dB(s) → 0 t∈[0,K] 0 0 almost surely as m tends to innity. Now, let us investigate the general case. We reduce it to the Brownian case by using DDS con- struction and the scaling property of the local time. Let us introduce the notation a . It denotes the local time of the process at level till time LX (t) X a . By Exercise 1.27 in [23, Ch VI] we have a a a where denotes the t LM (t) = LB(hMit) = L (hMit) B DDS-Brownian motion of M. Simple consideration shows that the discrete version of this identity is also valid: La (t) = La (N (t)) = Ldaem (N (t)) Mm, m Bm, m m m m CHAPTER 2. PATHWISE STOCHASTIC INTEGRATION 45

Let σk be the rst time when hMi exceeds the level k. We can write the following estimate for k the stopped martingale M (t) = M(t ∧ σk):

Z t Z t 0 k k 0 k k sup f−(Mm(s)) dMm(s) − f−(M (s)) dM (s) t∈[0,K] 0 0 Z t Z t 0 0 ≤ sup | f−(Bm(s)) dBm(s) − f−(B(s)) dB(s)| t∈[0,(hM,MiK ∧k)+1] 0 0

on the set Aa,m (for the precise denition of Aa,m see Lemma 3) after an appropriately large m. k Since (2.74), M fulls (2.70) for all xed k. Taking into consideration that hMi∞ = ∞ (so σk → ∞), (2.70) is valid for M as well. Chapter 3

Approximation of the exponential functional of Brownian motion

This chapter is devoted to studying a special application of our discrete approximation method. Here, we investigate the exponential functional of Brownian motion Z ∞ Iν = exp (B(t) − νt) dt 0 and mainly its discrete version the exponential functional of random walk.

3.1 Introduction to the exp functional of Brownian motion

The geometric Brownian motion (originally introduced by the economist P. Samuelson in 1965) plays a fundamental role in the BlackScholes theory of option pricing, modeling the price process of a stock. It can be explicitly given in terms of Brownian motion (BM) B as

2   S(t) = S0 exp σB(t) + µ − σ /2 t , t ≥ 0. In the case of Asian options one is interested in the average price process

1 Z t A(t) = S(u) du, t ≥ 0. t 0 The following interesting result is true for the distribution of a closely related, widely investigated exponential functional of BM:

Z ∞ 2 I = exp(B(t) − νt) dt =d (ν > 0). (3.1) 0 Z2ν

d Here Z2ν is a gamma distributed random variable with index 2ν and parameter 1, while = denotes equality in distribution. This result was proved by [7] using discrete approximations with gamma distributed random variables and also by [38], using rather ingenious stochastic analysis tools. For more background information see [39] and [2]. As a consequence, the pth integer moment of I is nite i p < 2ν and Γ(2ν − p) E(Ip) = 2p . (3.2) Γ(2ν)

46 CHAPTER 3. THE DISTRIBUTION OF THE DISCRETE EXP. FUNCTIONAL 47

On the other hand, all negative integer moments, also given by (3.2), are nite and they characterize the distribution of I. The situation is much nicer when BM with negative drift is replaced in the model by the negative of a subordinator (αt, t ≥ 0), that is, by the negative of a non-decreasing process with stationary and independent increments, starting from the origin. Then, as was shown by [1], all positive integer moments of R ∞ are nite: J = 0 exp(−αt) dt p! 1 E(J p) = , Φ(λ) = − log E(exp(−λα )), (3.3) Φ(1) ··· Φ(p) t t and in this case the positive integer moments characterize the distribution of J . To achieve a similar favorable situation in the BM case, at least in an approximate sense, it is a natural idea to use a simple, symmetric random walk (RW) as an approximation, with a large enough negative drift. Besides, in some applications a discrete model could be more natural than a continuous one. It seems important that, as we shall see below, the discrete case is rather dierent from the continuous case in many respects. So let ∞ be an i.i.d. sequence with 1 and , Pk . (Xj)j=1 P(X1 = ±1) = 2 S0 = 0 Sk = j=1 Xj (k ≥ 1) Introduce the following approximation of I:

∞ X Y = exp(Sk − kν) = 1 + ξ1 + ξ1ξ2 + ··· , ξj = exp(Xj − ν), (3.4) k=0 where ν > 0. Later, we apply proper scaling to get a real approximation of I. In this paper we investigate the properties of Y type random variables which, in this simple case, will be called the discrete exponential functional of the given RW, or shortly, the discrete exponential functional.

In Section 3.2, below it turns out that the distribution of Y is singular w.r.t. Lebesgue measure if ν > 1. Here, we prove a more general result according to the singularity of the distribution of Y if ξj above has more general distribution. In Section 3.3 we determine the moments of the discrete exponential functional in order to work out a (3.2) and a (3.3) type equations for the discrete case. Dealing with the moments, beyond these results we have found a recursion of certain moments in the expansion of the moments of the discrete approximation. We will describe this recursion in Subsection 3.3.1. Finally, in Section 3.4 section we use a nested sequence of RWs to obtain a.s. converging approxi- mations of I, and this way an elementary proof of result (3.1) of Dufresne and Yor as well.

3.2 The distribution of the discrete exponential functional

Let us start with a natural generalization: ∞ be i.i.d., . Consider rst the nite polynomial (ξj)j=1 ξj > 0

Yn = 1 + ξ1 + ξ1ξ2 + ··· + ξ1 ··· ξn (3.5)

= 1 + ξ1(1 + ξ2 + ξ2ξ3 + ··· + ξ2 ··· ξn)(n ≥ 1),

Y0 = 1. This implies the following equality in distribution:

d Yn = 1 + ξYn−1, (3.6)

d where ξ = ξ1, and ξ is independent of Yn−1. Since Yn % Y = 1 + ξ1 + ξ1ξ2 + ··· a.s., we get the basic self-similarity of Y in distribution: Y =d 1 + ξY, (3.7) CHAPTER 3. THE DISTRIBUTION OF THE DISCRETE EXP. FUNCTIONAL 48 where ξ is independent of Y . We remark that innite polynomials similar to Y were studied by [35] and many others. There some of the ideas discussed below have already appeared. A standard application of the strong gives a condition for having an a.s. nite limit Y here, see Theorem 1 in [33]. Namely, when E(| log ξj|) < ∞, one has Yn % Y < ∞ a.s. if and only if E(log ξj) < 0. In the special case when Y is dened as in (3.4), but Sn is the partial sum of an arbitrary i.i.d. sequence ∞ with zero expectation, a.s. i the drift added is negative: . Hence this (Xj)j=1 Y < ∞ ν > 0 condition is always assumed in our basic example (simple, symmetric RW). Next we want to show that self-similarity (3.7) implies a simple functional equation for the dis- tribution function F (y) = P(Y ≤ y), y ∈ R. For a modest generalization of our basic case, let us introduce some notations.

3.2.1 Review of some fractal notion

In (3.5) let ξj take the positive values γ1 < ··· < γN , and let pi = P(ξ = γi). (In our basic case , −1−ν , 1−ν , 1 .) Consider the following similarity transformations: N = 2 γ1 = e γ2 = e p1 = p2 = 2 Ti(x) = γix + 1 (1 ≤ i ≤ N). When

N X E(log ξ) = pi log γi < 0 i=1 holds, by (3.7) we have PN PN P(Y ≤ y) = P(1 + ξY ≤ y) = i=1 pi P(1 + γiY ≤ y|ξ = γi) = i=1 pi P(1 + γiY ≤ y). Thus one obtains the following functional equation for the distribution function of Y :

N X −1 (3.8) F (y) = pi F (Ti (y)). i=1

An important special case is when γN < 1 (in the basic case: ν > 1). Then by (3.5), Y is a bounded random variable. Moreover, each mapping Ti is a contraction, having a unique xpoint −1 yi = (1 − γi) , 0 < y1 < ··· < yN < ∞. Since each Ti is an increasing function, Ti(yj) < Ti(yk) if j < k. Also, Tj(yi) < Tk(yi) if j < k. Then it follows that each Ti maps the fundamental interval I = [y1, yN ] into itself. Clearly, I contains the range of Y too. In the case γN < 1 it is useful to rephrase the given problem in the language of fractal theory, see e.g. [8]. Let us introduce the symbolic space Σ = {i = (i1, i2,... ): ij = 1,...,N}, endowed with the countable power of the discrete measure (p1, . . . , pN ), denoted by P. By (3.5), (3.9) Yn = 1 + γi1 (1 + γi2 (··· (1 + γin ))) = (Ti1 ◦ Ti2 ◦ · · · ◦ Tin )(1), with probability . Thus the canonical projection , pi1 pi2 ··· pin Π:Σ → I Π(i) = limk→∞(Ti1 ◦ · · · ◦ maps onto the range of . The attractor of the Tik )(1) = limk→∞(1 + γi1 + ··· + γi1 . . . γik ) Σ Y Λ iterated function scheme of similarity transformations (T1,...,TN ) is dened as

N \ [ [ (3.10) Λ = (Ti1 ◦ · · · ◦ Tik )(I), Λ = Ti(Λ).

k≥0 1≤i1,...,ik≤N i=1

Then Λ is a non-empty, compact, self-similar set. In (3.10) the fundamental interval I = [y1, yN ] can be replaced by any interval J which is mapped into itself by each Ti, e.g. by J = [0, yN ] 3 1. Thus range(Y ) ⊂ Λ, cf. (3.9). The converse is also true, since for any y ∈ Λ and for any  > 0 there is an and a large enough such that and the length i ∈ Σ k y ∈ (Ti1 ◦ · · · ◦ Tik )(J) |(Ti1 ◦ · · · ◦ Tik )(J)| = CHAPTER 3. THE DISTRIBUTION OF THE DISCRETE EXP. FUNCTIONAL 49

. Hence, by (3.9), range , that is, range . Also, the distribution of on γi1 ··· γik |J| <  y ∈ (Y ) Λ = (Y ) Y −1 the real line, which will be denoted by PY , is simply P ◦ Π . We are going to use the notations

(3.11) Ii1...ik = [yi1...ik1, yi1...ikN ] = (Ti1 ◦ · · · ◦ Tik )(I), as well, where and −1. The length yi1...ikl = (Ti1 ◦ · · · ◦ Tik )(yl)(l = 1,...,N) ij = 1,...N yl = (1 − γl) of such an interval is , where . |Ii1...ik | = γi1 ··· γik |I| |I| = yN − y1

3.2.2 The non-overlapping case

Returning to the distribution of Y in the basic case, consider rst when the intervals I1 = T1(I) = −1 [y11, y12] = [y1, y12] and I2 = T2(I) = [y21, y22] = [y21, y2] do not overlap, where y12 = 1 + γ1(1 − γ2) , −1 −1 y21 = 1 + γ2(1 − γ1) . Thus there is no overlap i y12 < y21, i.e., ν > log(e + e ) ≈ 1.127. Since F (y) = 0 if y < y1 and F (y) = 1 if y ≥ y2, in this non-overlapping case (3.8) simplies as

 1 F (T −1(y)) if y ∈ [y , y ),  2 1 1 12 1 if (3.12) F (y) = 2 y ∈ [y12, y21),  1 1 −1 if 2 + 2 F (T2 (y)) y ∈ [y21, y2).

By the similarities given by T1 and T2, applied to (3.12), one obtains that F has constant value 1 over the interval and constant value 3 on . Continuing this way by induction 4 [y112, y121) 4 [y212, y221) one gets that F has constant dyadic values over such plateau intervals:

k −k−1 X −j (3.13) F (y) = 2 + (ij − 1)2 , y ∈ [yi1...ik12, yi1...ik21), ij = 1, 2. j=1

2 The sum of the lengths of these plateaus is |I| (1 − (γ1 + γ2)) (1 + (γ1 + γ2) +(γ1 + γ2) + ··· ), so add up to |I|. Hence the attractor Λ (the range of Y ), i.e., the set of points of increase of F , has zero Lebesgue measure. So it is a Cantor-type set: an uncountable, perfect set of Lebesgue measure zero.

The distribution function F is clearly a continuous singular function. For, if y0 ∈ Λ and  > 0 is given, take so that −k . By the construction of , there exists an interval . Let the k 2 <  Λ Ii1...ik 3 y0 left endpoint of the left neighbor plateau of be (or ), and the right endpoint of the right Ii1...ik η1 −∞ neighbor plateau be η2 (or ∞). If δ = min(y0 − η1, η2 − y0) > 0, then for any y such that |y − y0| < δ −k one has |F (y) − F (y0)| ≤ 2 <  by (3.13). It is not dicult to see, cf. [9], that in general, any solution of (3.7) has either absolutely continuous or continuous singular distribution. We mention that standard results of fractal theory, see Theorem 9.3 in [8], imply that the Hausdor dimension of equals the (unique) solution of the equation s s . Solving this equation for s Λ γ1 + γ2 = 1 ν, we get ν = s−1 log(es + e−s). Hence the fractal dimension s is a strictly decreasing function of ν > log(e + e−1), tending to 1 as ν → log(e + e−1) and converging to 0 as ν → ∞. Also, the Hausdor measure of Λ is Hs(Λ) = |I|s, where the Hausdor dimension s is the one dened above. It means that

 −1  −1s Hs(Λ) = 1 − e(2 cosh s)−1/s − 1 − e−1(2 cosh s)−1/s .

Thus Hs(Λ) → e2 − e−2 as ν → log(e + e−1) and Hs(Λ) → 0 as ν → ∞. CHAPTER 3. THE DISTRIBUTION OF THE DISCRETE EXP. FUNCTIONAL 50

3.2.3 The general case

Next we are going to show that the distribution of Y is singular w.r.t. Lebesgue measure even in the overlapping case if ν > 1. Again, we consider the slight generalization introduced above. The proof below is based on [26] and on personal communication with K. Simon.

Theorem 10. Let ξ take the values γi (i = 1,...,N), 0 < γ1 < ··· < γN < 1, and let pi = P(ξ = γi). Take an i.i.d. sequence ∞ , d . Then the distribution of is singular (ξj)j=1 ξj = ξ Y = 1 + ξ1 + ξ1ξ2 + ··· w.r.t. Lebesgue measure, if

N N X X −χP = E(log ξ) = pi log γi < pi log pi = −HP. i=1 i=1

This will be called the entropy condition. Here χP is the Lyapunov exponent of the iterated function scheme (T1,...,TN ) corresponding to the Bernoulli measure P. Proof. We are going to use the fractal theoretical approach and notations introduced above. We want to show that

PY (B(x, r)) (D¯PY )(x) = lim sup = ∞ PY a.s., (3.14) r&0 λ(B(x, r)) where B(x, r) denotes the open ball (in the real line) with center at x and radius r and λ is Lebesgue measure. The statement of the theorem easily follows from this. For, take the set E = {x ∈ I : (D¯PY )(x) = ∞}. Then (3.14) implies that PY (E) = 1, while e.g. Theorem 8.6 in [24] shows that the symmetric derivative DPY exists and is nite λ a.e., so λ(E) = 0. Introduce the notation (j) . Thus ak (i) = #{l : il = j, 1 ≤ l ≤ k}

∞ N (j) X Y ak (i) Π(i) = 1 + γj . k=1 j=1 n o By the SLLN, the set −1 (j) has probability for every and so Aj = i ∈ Σ: k ak (i) → pj 1 j = 1,...,N has TN . Let −1 . Then . A = j=1 Aj C = {x ∈ I :Π (x) ∩ A 6= ∅} PY (C) = 1 If , there exists such that and −1 (j) as for all . x ∈ C i ∈ A Π(i) = x k ak (i) → pj k → ∞ j = 1,...,N Fix such an and . Let be the smallest radius such that , where i x rk B(x, rk) ⊃ Ii1...ik i = (i1, . . . , ik,... ) and is dened by (3.11). Ii1...ik The following facts are clear: (a) , moreover, , see (3.10) and (3.11); (b) x ∈ Λ x ∈ Ii1...ik a(j)(i) 1 , where is arbitrary; (c) QN k ; (d) 2 |Ii1...ik | < rk ≤ c|Ii1...ik | c > 1 |Ii1...ik | = |I| j=1 γj PY (B(x, rk)) ≥ a(j)(i) QN k . Using these facts it follows for any that PY (Ii1...ik ) = P(i1, . . . , ik) = j=1 pj k ≥ 1

k  k−1a(j)(i)  QN p k PY (B(x, rk)) −1 j=1 j ≥ (2c|I|)  (j)  . k−1a (i) λ(B(x, rk)) QN k j=1 γj By our assumptions concerning x and i, the ratio on the right hand side converges to

p1 pN p1 pN (p1 ··· pN ) / (γ1 ··· γN ) as k → ∞. The entropy condition of the theorem implies that this latter ratio is larger than 1. Hence (3.14) holds, and this completes the proof. CHAPTER 3. THE MOMENTS OF THE DISCRETE EXP. FUNCTIONAL 51

−1−ν 1−ν Returning to our basic case, consider the entropy condition when γ1 = e , γ2 = e and 1 . The condition holds i , since this is equivalent to 1 1 p1 = p2 = 2 ν > log 2 ≈ 0.693 2 (−1−ν)+ 2 (1−ν) < 1 1 1 1 . Combining this with the condition , this means that the distribution of is 2 log 2 + 2 log 2 γ2 < 1 Y singular w.r.t. Lebesgue measure for any ν > 1. Characterization of the distribution of Y when 0 < ν ≤ 1 remains open. In that case one of the two similarity mappings, T2, is not a contraction anymore, and that situation requires more sophisticated tools than the ones above.

3.3 The moments of the discrete exponential functional

Let us consider rst the general case: ∞ i.i.d., , as at the beginning of the previous section. (ξj)j=1 ξj > 0 Now we turn our attention to the moments of Y = 1 + ξ1 + ξ1ξ2 + ··· . If Yn is dened by (3.5), the equality in law (3.6) implies

p X p E(Y p) = E ((1 + ξY )p) = µ E(Y k ), (3.15) n n−1 k k n−1 k=0

k where p ≥ 0 integer and µk = E(ξ ). As n → ∞, by monotone convergence we obtain

p X p E(Y p) = µ E(Y k). (3.16) k k k=0 In (3.15) and (3.16) both sides are either nite positive, or +∞. Proposition 3. Let ∞ be an i.i.d. sequence, , and . For real, (ξj)j=1 ξj > 0 Y = 1 + ξ1 + ξ1ξ2 + ··· p ≥ 1 p p p 1/p −p E(Y ) < ∞ if and only if µp = E(ξ ) < 1. Then µq < 1 for any 1 ≤ q ≤ p and E(Y ) ≤ (1 − µp ) as well. In this case if p ≥ 1 is an integer, we also have the recursion formula

p−1   p 1 X p k E(Y ) = µkE(Y ). (3.17) 1 − µp k k=0 Proof. First, (3.6) and the simple inequality (1 + x)p ≥ 1 + xp (x ≥ 0, p ≥ 1, real) imply that p p . Suppose that . Since p , taking limit as , one gets E(Yn ) ≥ 1 + µpE(Yn−1) µp ≥ 1 E(Y0 ) = 1 n → ∞ that E(Y p) = ∞. q/p Conversely, suppose that µp < 1. Then by Hölder's (or by Jensen's) inequality, µq ≤ µp < 1 for any 1 ≤ q ≤ p as well. We want to show that E(Y p) is nite. Let us begin by observing that p p j ( ). Hence by the triangle inequality and we get E ((Yj − Yj−1) ) = E ((ξ1 ··· ξj) ) = µp j ≥ 1 Y0 = 1 that n n X 1/p X 1 E(Y p)1/p ≤ 1 + E ((Y − Y )p) = µj/p < , n j j−1 p 1/p j=1 j=0 1 − µp

p 1/p −p for any n ≥ 1 when µp < 1. Taking limit as n → ∞, it follows that E(Y ) ≤ (1 − µp ) < ∞. Finally, the recursion formula (3.17) directly follows from (3.16) when µp < 1, p ≥ 1 integer. There is a nice analogy between the moments of the exponential functional of a subordinator and the moments of Y , compare (3.3) and (3.18). First, the sum of the coecients in the numerator of (3.18) is p!, as can be seen by induction. For, if one explicitly writes down E(Y p), based on the recursion (3.17), taking a common denominator, the numerator of each earlier term except the last one is multiplied by factors 1−µk. In the sum of the coecients of the numerator it means a multiplication CHAPTER 3. THE MOMENTS OF THE DISCRETE EXP. FUNCTIONAL 52

p−1 by zero. On the other hand, in the last term one multiplies the numerator of E(Y ) by pµp−1, which results the sum p! of the coecients by the induction. Second, there is a relationship between the denominators of (3.3) and (3.18) as well. In the special case when is dened as in (3.4), but is the partial sum of an arbitrary i.i.d. sequence ∞ Y Sn (Xj)j=1 −1 λ with zero expectation, Φ(λ) = −n log E (exp(λ(Sn − νn))) = − log E(ξ ), so Φ(k) = − log µk, cor- responding to the factors in the denominator of (3.3). The factors 1 − µk in the denominator of (3.18) are tangents to these. k Finally, let us consider the moments of Y in our basic case. Then µk = E(ξ ) = exp(−kν) cosh(k). k Since cosh(k) < e for any k > 0, it follows that µk < 1 for any k ≥ 1 when ν ≥ 1, therefore all positive integer moments of Y are nite in this case by Proposition 3. In particular, in Section 2 we saw that Y is a bounded random variable if ν > 1, hence the positive integer moments characterize its distribution. On the other hand, when 0 < ν < 1, only nitely many moments of Y are nite. For, −1 µk ≥ 1 if 0 < ν ≤ k log cosh(k) % 1 as k → ∞. For example, even µ1 ≥ 1 (and consequently all E(Y p) = ∞) if 0 < ν < log cosh(1) ≈ 0.43378.

3.3.1 Permutations with given descent set

For integer p ≥ 1 it follows from (3.17) by induction that E(Y p) is a rational function of the moments µ1, . . . , µp:

1 X (p) j E(Y p) = a µj1 ··· µ p−1 , (3.18) (1 − µ ) ··· (1 − µ ) j1,...,jp−1 1 p−1 1 p p−1 (j1,...,jp−1)∈{0,1} where the coecients of the numerator are universal constants, independent of the distribution of ξj. These universal coecients a(p) make a symmetrical, Pascal's triangle-like table if each row is j1...jp−1 p−2 0 listed in the increasing order of the binary numbers jp−12 + ··· + j12 , dened by the multiindices (j1, . . . , jp−1), see the rows p = 1,..., 5:

Table 3.1: The Pascal's triangle-like table of the coecients

1 0 1 1 1 00 01 10 11 1 2 2 1 000 001 010 011 100 101 110 111 1 3 5 3 3 5 3 1 0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011 1100 1101 1110 1111 1 4 9 6 9 16 11 4 4 11 16 9 6 9 4 1

Two natural questions may arise at this point, independently of any background mentioned above. First, suppose that one defines a recursive sequence $(e_p)_{p=1}^{\infty}$ by (3.17), with coefficients $a^{(p)}_{j_1\dots j_{p-1}}$ given by (3.18). Can one attach any direct mathematical meaning to these coefficients $a^{(p)}_{j_1\dots j_{p-1}}$ then? The answer is yes, and rather surprisingly (as was conjectured in [30]), the coefficient $a^{(p)}_{j_1\dots j_{p-1}}$ is equal to the number of permutations $\pi \in S_p$ having a descent $\pi(i) > \pi(i+1)$ exactly where $j_i = 1$, $1 \le i \le p-1$, cf. Theorem 11 below. Second, can one give a direct way to evaluate the coefficients $a^{(p)}_{j_1\dots j_{p-1}}$? The affirmative answer to this question is partly included in the previous answer, since several formulae are known for the number of permutations with given descent sets. However, an apparently new recursion was conjectured in [30], which is analogous to the recursion of binomial coefficients in the ordinary Pascal's triangle. The proof of this is the content of Lemma 8 below.

Lemma 7. Fix a multiindex $(j_1,\dots,j_{p-1}) \in \{0,1\}^{p-1}$. Let S be the set of indices k where $j_k = 1$:

$$S = \{s_1,\dots,s_m\} = \{k : j_k = 1,\ 1 \le k \le p-1\}, \qquad m = \sum_{k=1}^{p-1} j_k. \qquad (3.19)$$

Then the coefficient $a^{(p)}_{j_1\dots j_{p-1}}$ defined by (3.18) can be obtained by the recursion

$$a^{(p)}_{j_1\dots j_{p-1}} = \sum_{k=0}^{p-1}\binom{p}{k}\, j_k(-1)^{j_{k+1}+\cdots+j_{p-1}}\, a^{(k)}_{j_1\dots j_{k-1}} = \sum_{l=0}^{m}\binom{p}{s_l}(-1)^{m-l}\, a^{(s_l)}_{j_1\dots j_{s_l-1}}, \qquad (3.20)$$

where, by definition, $a^{(0)} = 1$, $j_0 = 1$, $s_0 = 0$, and $-1$ raised to an empty sum is 1.

Proof. The second equality in (3.20) is a direct consequence of the definitions above. To show the first equality, substitute (3.18) into (3.17):

$$e_p = \frac{1}{1-\mu_p}\sum_{k=0}^{p-1}\binom{p}{k}\frac{\mu_k}{(1-\mu_1)\cdots(1-\mu_k)} \sum_{(j_1,\dots,j_{k-1})\in\{0,1\}^{k-1}} a^{(k)}_{j_1\dots j_{k-1}}\,\mu_1^{j_1}\cdots\mu_{k-1}^{j_{k-1}}.$$

Here, multiplying by the common denominator and then collecting the coefficients of $\mu_1^{j_1}\cdots\mu_{p-1}^{j_{p-1}}$ for each $(j_1,\dots,j_{p-1})$, we obtain

$$e_p(1-\mu_1)\cdots(1-\mu_p) = \sum_{k=0}^{p-1}\binom{p}{k}\sum_{(j_1,\dots,j_{k-1})\in\{0,1\}^{k-1}} a^{(k)}_{j_1\dots j_{k-1}}\,\mu_1^{j_1}\cdots\mu_{k-1}^{j_{k-1}}\,\mu_k(1-\mu_{k+1})\cdots(1-\mu_{p-1})$$
$$= \sum_{(j_1,\dots,j_{p-1})\in\{0,1\}^{p-1}}\mu_1^{j_1}\cdots\mu_{p-1}^{j_{p-1}} \sum_{k=0}^{p-1}\binom{p}{k}\, a^{(k)}_{j_1\dots j_{k-1}}\, j_k(-1)^{j_{k+1}}\cdots(-1)^{j_{p-1}}.$$
This and (3.18) imply the first equality in (3.20).
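The first recursion in (3.20) can be implemented directly. The sketch below is our illustration (with the conventions $a^{(0)} = 1$, $j_0 = 1$ as above); its output can be checked against the brute-force descent counts of the previous sketch:

```python
from math import comb
from functools import lru_cache
from itertools import product

@lru_cache(maxsize=None)
def a(word):
    # a^{(p)}_{j_1...j_{p-1}} for word = (j_1,...,j_{p-1}), via (3.20)
    p = len(word) + 1
    if p == 1:
        return 1                          # a^{(1)} = 1
    j = (1,) + word                       # prepend j_0 = 1
    total = 0
    for k in range(p):                    # k = 0, ..., p-1
        if j[k] == 0:
            continue
        sign = (-1) ** sum(word[k:])      # (-1)^{j_{k+1}+...+j_{p-1}}
        prev = 1 if k == 0 else a(word[:k - 1])   # a^{(k)}_{j_1...j_{k-1}}
        total += comb(p, k) * sign * prev
    return total

print([a(w) for w in sorted(product((0, 1), repeat=4))])  # row p = 5 of Table 3.1
```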

Now we turn to the proof of the equality of the coefficient $a^{(p)}_{j_1\dots j_{p-1}}$ given by (3.18) and the number of permutations $b^{(p)}(S)$ with descent set S given by (3.19). The descent set of a permutation $\pi \in S_p$ is defined as $D(\pi) = \{i : \pi(i) > \pi(i+1),\ 1 \le i \le p-1\}$. It is known, cf. [27, p. 69], that the number of permutations $\pi \in S_p$ with a given descent set $S = (s_1,\dots,s_m)$, $1 \le s_1 < \cdots < s_m \le p-1$, can be obtained by the following inclusion-exclusion formula:
$$b^{(p)}(S) = b^{(p)}(s_1,\dots,s_m) = \sum_{j=0}^{m}\ \sum_{1\le i_1<\cdots<i_j\le m}(-1)^{m-j}\binom{p}{s_{i_1},\, s_{i_2}-s_{i_1},\, \dots,\, p-s_{i_j}}. \qquad (3.21)$$

Theorem 11. The coefficient $a^{(p)}_{j_1\dots j_{p-1}}$ given by (3.18) is equal to the number of permutations $b^{(p)}(S)$ with descent set S given by (3.19).

Proof. It is enough to show that the numbers $b^{(p)}(S)$ satisfy the same recursion (3.20) as the numbers $a^{(p)}_{j_1\dots j_{p-1}}$ do, that is,

$$b^{(p)}(s_1,\dots,s_m) = \sum_{l=0}^{m}(-1)^{m-l}\binom{p}{s_l}\, b^{(s_l)}(s_1,\dots,s_{l-1}), \qquad (3.22)$$

where, by definition, $s_0 = 0$ and $b^{(s_0)} = b^{(0)} = 1$. To show this, let us substitute the sieve formula (3.21) into the right hand side of (3.22):

$$\sum_{l=0}^{m}(-1)^{m-l}\binom{p}{s_l}\, b^{(s_l)}(s_1,\dots,s_{l-1}) = (-1)^m + \sum_{l=1}^{m}(-1)^{m-l}\binom{p}{s_l}\sum_{j=0}^{l-1}\ \sum_{1\le i_1<\cdots<i_j\le l-1}(-1)^{l-1-j}\binom{s_l}{s_{i_1},\, s_{i_2}-s_{i_1},\, \dots,\, s_l-s_{i_j}}.$$
Since $\binom{p}{s_l}\binom{s_l}{s_{i_1},\,\dots,\, s_l-s_{i_j}} = \binom{p}{s_{i_1},\, s_{i_2}-s_{i_1},\,\dots,\, s_l-s_{i_j},\, p-s_l}$, each term corresponds to the subset $\{s_{i_1},\dots,s_{i_j},s_l\}$ of S with largest element $s_l$, carrying the sign $(-1)^{m-(j+1)}$. Summing over l therefore reproduces exactly the inclusion-exclusion sum (3.21), that is, $b^{(p)}(s_1,\dots,s_m)$. This proves (3.22) and hence the theorem.
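As a numerical cross-check of (3.21), one can evaluate the inclusion-exclusion sum and compare it with the table entries, e.g. $b^{(5)}(\{2,3\}) = a^{(5)}_{0110} = 11$. This is our sketch; the helper names multinomial and b are ours:

```python
from math import factorial
from itertools import combinations

def multinomial(parts):
    # multinomial coefficient: (sum of parts)! / (parts[0]! * parts[1]! * ...)
    out = factorial(sum(parts))
    for c in parts:
        out //= factorial(c)
    return out

def b(p, S):
    # number of permutations in S_p with descent set S, by formula (3.21)
    S, m, total = sorted(S), len(S), 0
    for j in range(m + 1):
        for sub in combinations(S, j):
            cuts = list(sub) + [p]
            parts = [cuts[0]] + [cuts[i] - cuts[i - 1] for i in range(1, len(cuts))]
            total += (-1) ** (m - j) * multinomial(parts)
    return total

print(b(5, {2, 3}))   # 11 = a^{(5)}_{0110} in Table 3.1
```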

Lemma 8. The following recursion holds for any $p \ge 2$ and multiindex $(j_1,\dots,j_{p-1}) \in \{0,1\}^{p-1}$:

$$a^{(p)}_{j_1\dots j_{p-1}} = \sum_{i=1}^{p-1}\delta_i\, a^{(p-1)}_{j_1^{(i)}\dots j_{p-2}^{(i)}} = \sum_{(i_1,\dots,i_{p-2})\in L(j_1,\dots,j_{p-1})} a^{(p-1)}_{i_1\dots i_{p-2}}, \qquad (3.23)$$
where $a^{(1)} = 1$, $\delta_i = |j_i - j_{i-1}|$ for $i \ge 2$, $\delta_1 = 1$, $j_k^{(i)} = j_k$ for $1 \le k \le i-1$, $j_k^{(i)} = j_{k+1}$ for $i \le k \le p-2$, and $L(j_1,\dots,j_{p-1})$ is the set of all distinct binary sequences obtained from $(j_1,\dots,j_{p-1})$ by deleting exactly one digit. For example, $a^{(5)}_{0110} = 11 = a^{(4)}_{110} + a^{(4)}_{010} + a^{(4)}_{011}$.

Proof. First we prove the second equality in (3.23). For this it is enough to show that if the same binary sequence is obtained from $(j_1,\dots,j_{p-1})$ when eliminating either the kth or the lth digit (k < l), then all digits between the kth and the lth (including these two) are uniformly either 0's or 1's (a run of 0's or 1's); therefore, the two recursions given in (3.23) are the same. Consider a multiindex $(j_1,\dots,j_{k-1},j_k,\dots,j_l,j_{l+1},\dots,j_{p-1}) \in \{0,1\}^{p-1}$ and suppose that we get the same binary sequence by deleting $j_k$ and $j_l$, respectively: $(j_1,\dots,j_{k-1},j_{k+1},\dots,j_l,j_{l+1},\dots,j_{p-1}) = (j_1,\dots,j_{k-1},j_k,\dots,j_{l-1},j_{l+1},\dots,j_{p-1})$. Then $j_k = j_{k+1} = \cdots = j_{l-1} = j_l$, so the second equality in (3.23) really holds.

Now it remains to show the first equality in (3.23), that is, the recursion itself.

A combinatorial proof of the recursion. Given a binary sequence $(j_1,\dots,j_{p-1})$, let us remove a single 1 from a run of 1's or a single 0 from a run of 0's. Count the number of permutations in $S_{p-1}$ determined by the resulting multiindex $(i_1,\dots,i_{p-2})$. This number is $a^{(p-1)}_{i_1\dots i_{p-2}}$ by Theorem 11. We want to show that there is a uniquely determined way of adjoining the number p to any such permutation from $S_{p-1}$ to obtain a permutation from $S_p$ corresponding to the original multiindex $(j_1,\dots,j_{p-1})$. If a 0 was deleted from a run of 0's, the number p should be inserted right after the number at the position of the first 1 following the affected run of 0's. (If the given run happens to be the last, p is inserted as the last number.) When a 1 was deleted from a run of 1's, the number p should be inserted right after the number at the position of the last 0 preceding the affected run of 1's. (If the given run happens to be the first, p is inserted as the first number.) Since these insertions are the only ones that reconstruct the original descent set, the recursion is proved.

An algebraic proof of the recursion. We are going to proceed by induction over p. Thus suppose that the recursion holds for all multiindices of length smaller than p-1. First we use the recursion of Lemma 7 for the terms on the right side of (3.23), then we change the order of summation to obtain

$$\sum_{i=1}^{p-1}\delta_i\, a^{(p-1)}_{j_1^{(i)}\dots j_{p-2}^{(i)}} = \sum_{i=1}^{p-1}\delta_i\sum_{k=0}^{p-2}\binom{p-1}{k}\, j_k^{(i)}(-1)^{j_{k+1}^{(i)}+\cdots+j_{p-2}^{(i)}}\, a^{(k)}_{j_1^{(i)}\dots j_{k-1}^{(i)}}$$
$$= \sum_{k=0}^{p-2}\binom{p-1}{k}\left\{ j_{k+1}(-1)^{j_{k+2}+\cdots+j_{p-1}}\sum_{i=1}^{k}\delta_i\, a^{(k)}_{j_1^{(i)}\dots j_{k-1}^{(i)}} + j_k(-1)^{j_{k+1}+\cdots+j_{p-1}}\, a^{(k)}_{j_1\dots j_{k-1}}\sum_{i=k+1}^{p-1}\delta_i(-1)^{j_i}\right\}.$$
Here in the last expression, one may use the induction hypothesis for the first sum. In the second

sum observe that $\delta_i(-1)^{j_i}$ is 1 if $j_{i-1} = 1$ and $j_i = 0$, it is $-1$ if $j_{i-1} = 0$ and $j_i = 1$, and it equals 0 otherwise. Hence we get the identity $j_k\sum_{i=k+1}^{p-1}\delta_i(-1)^{j_i} = j_k(1-j_{p-1})$. Thus one obtains that

$$\sum_{i=1}^{p-1}\delta_i\, a^{(p-1)}_{j_1^{(i)}\dots j_{p-2}^{(i)}} = \sum_{k=0}^{p-2}\binom{p-1}{k}\left[ j_{k+1}(-1)^{j_{k+2}+\cdots+j_{p-1}}\, a^{(k+1)}_{j_1\dots j_k} + (1-j_{p-1})\, j_k(-1)^{j_{k+1}+\cdots+j_{p-1}}\, a^{(k)}_{j_1\dots j_{k-1}}\right]$$
$$= \sum_{k=1}^{p-2}\left[\binom{p-1}{k-1} + (1-j_{p-1})\binom{p-1}{k}\right] j_k(-1)^{j_{k+1}+\cdots+j_{p-1}}\, a^{(k)}_{j_1\dots j_{k-1}} + (1-j_{p-1})(-1)^{j_1+\cdots+j_{p-1}} + \binom{p-1}{p-2}\, j_{p-1}\, a^{(p-1)}_{j_1\dots j_{p-2}}.$$
To rewrite the terms above we use recursion (3.20) in the following case:

$$a^{(p-1)}_{j_1\dots j_{p-2}} = \sum_{k=0}^{p-2}\binom{p-1}{k}\, j_k(-1)^{j_{k+1}+\cdots+j_{p-2}}\, a^{(k)}_{j_1\dots j_{k-1}},$$

plus the identity $-j_{p-1}(-1)^{j_{p-1}} = j_{p-1}$, and the recursion for binomial coefficients:
$$\sum_{i=1}^{p-1}\delta_i\, a^{(p-1)}_{j_1^{(i)}\dots j_{p-2}^{(i)}} = \sum_{k=1}^{p-2}\left[\binom{p-1}{k-1}+\binom{p-1}{k}\right] j_k(-1)^{j_{k+1}+\cdots+j_{p-1}}\, a^{(k)}_{j_1\dots j_{k-1}} - j_{p-1}(-1)^{j_{p-1}}\left( a^{(p-1)}_{j_1\dots j_{p-2}} - (-1)^{j_1+\cdots+j_{p-2}}\right)$$
$$+ (1-j_{p-1})(-1)^{j_1+\cdots+j_{p-1}} + \binom{p-1}{p-2}\, j_{p-1}\, a^{(p-1)}_{j_1\dots j_{p-2}} = \sum_{k=0}^{p-1}\binom{p}{k}\, j_k(-1)^{j_{k+1}+\cdots+j_{p-1}}\, a^{(k)}_{j_1\dots j_{k-1}} = a^{(p)}_{j_1\dots j_{p-1}}.$$
This completes the proof.

The results above imply that Table 3.1 has properties analogous to those of Pascal's triangle: each entry $a^{(p)}_{j_1\dots j_{p-1}}$ is a positive integer; the first and the last entries, $a^{(p)}_{0\dots 0}$ and $a^{(p)}_{1\dots 1}$, are 1; the table has the symmetries $a^{(p)}_{j_1\dots j_{p-1}} = a^{(p)}_{j_{p-1}\dots j_1}$ and $a^{(p)}_{j_1\dots j_{p-1}} = a^{(p)}_{1-j_1,\dots,1-j_{p-1}}$; and the sum of the $2^{p-1}$ entries in the pth row is p!.
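The deletion recursion (3.23) is also easy to verify mechanically; a small sketch of ours, reusing the function a defined after Lemma 7:

```python
from itertools import product

def deletions(word):
    # the set L: all distinct binary words obtained by deleting exactly one digit
    return {word[:i] + word[i + 1:] for i in range(len(word))}

# check (3.23) for every multiindex of length 4 (p = 5), e.g.
# a^{(5)}_{0110} = a^{(4)}_{110} + a^{(4)}_{010} + a^{(4)}_{011} = 3 + 5 + 3 = 11
for w in product((0, 1), repeat=4):
    assert a(w) == sum(a(v) for v in deletions(w))
print(a((0, 1, 1, 0)))   # 11
```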

3.4 Approximation of the exponential functional of Brownian motion

In this section we are going to show that, taking our usual sequence of RWs, the resulting sequence of discrete exponential functionals (3.4) converges almost surely to the corresponding exponential functional I of BM. Based on this, using convergence of moments, we will give an elementary proof of the result (3.1) of Dufresne and Yor. We need a more general result about this approximation. This is a generalization of Lemma C, originally introduced in [28, Lemma 4], and the proof follows easily from the one given there. Namely, for almost every $\omega$ there exists an $m_0(\omega)$ such that for any $m \ge m_0(\omega)$ and for any $K \ge e$, one has

$$\sup_{j\ge 1}\ \sup_{0\le t\le K} |B_{m+j}(t) - B_m(t)| \le K^{1/4}(\log K)^{3/4}\, m\, 2^{-m/2}. \qquad (3.24)$$

Lemma 9. Let $B_m(t) = 2^{-m}\tilde S_m(t2^{2m})$, $t \ge 0$, $m \ge 0$, be a sequence of shrunken simple symmetric RWs that a.s. converges to BM $(B(t), t \ge 0)$, uniformly on bounded intervals. Then for any $\nu > 0$, as $m \to \infty$,

$$Y_m = 2^{-2m}\sum_{k=0}^{\infty}\exp\left(2^{-m}\tilde S_m(k) - \nu k 2^{-2m}\right) \to I = \int_0^{\infty}\exp(B(t) - \nu t)\,dt < \infty \quad \text{a.s.}$$

Proof. The basic idea of the proof is that the sequence of functions $f_m(t,\omega) = \exp(B_m(t) - \nu t)$ converges to $f(t,\omega) = \exp(B(t) - \nu t)$ for $t \in [0,\infty)$ as $m \to \infty$, for almost every $\omega$. If one can find a function $g(t,\omega) \in L^1[0,\infty)$ that dominates each $f_m$ for $m \ge m_0(\omega)$, then their integrals on $[0,\infty)$ also converge to the integral of f, and then we are practically done. First, by (3.24), for a.e. $\omega$ there exists an $m_0 = m_0(\omega)$ so that for any $K \ge e$,

$$\sup_{m\ge m_0}\ \sup_{0\le t\le K} |B_m(t) - B_{m_0}(t)| \le K^{1/4}(\log K)^{3/4} \le K^{1/2}\log K, \qquad (3.25)$$

where we supposed that $m_0$ was chosen large enough so that $m_0 2^{-m_0/2} \le 1$. Second, by the law of the iterated logarithm,

$$\limsup_{t\to\infty}\frac{B_{m_0}(t)}{(2t\log\log t)^{1/2}} = \limsup_{u\to\infty}\frac{\tilde S_{m_0}(u)}{(2u\log\log u)^{1/2}} = 1 \quad \text{a.s.},$$

where $u = t2^{2m_0}$. Hence for a.e. $\omega$, there is a $K_0 = K_0(\nu,\omega)$ such that for any $t \ge K_0$,

$$B_{m_0}(t) \le 2(t\log\log t)^{1/2} \le 2t^{1/2}\log t, \qquad (3.26)$$

where $K_0$ is chosen so large that $3t^{1/2}\log t \le \nu t/2$ for any $t \ge K_0$. Since a.s. any path of $B_{m_0}$ is continuous, it is bounded on the interval $[0,K_0]$. Then by (3.25), we have an upper bound uniform in m: for any $m \ge m_0$ and $t \in [0,K_0]$, $B_m(t) \le M(\omega)$. On the other hand, when $t > K_0$, by (3.26), $B_{m_0}(t) \le 2t^{1/2}\log t$ and so by (3.25), $B_m(t) \le 3t^{1/2}\log t$ for any $m \ge m_0$. Summarizing, the function

$$g(t,\omega) = \begin{cases} e^{M(\omega)} & \text{if } 0 \le t \le K_0(\nu,\omega), \\ e^{-\nu t/2} & \text{if } t > K_0(\nu,\omega), \end{cases}$$
is an integrable function on $[0,\infty)$, dominating $\exp(B_m(t) - \nu t)$ for each $m \ge m_0(\omega)$. This implies that
$$\lim_{m\to\infty}\int_0^{\infty}\exp(B_m(t) - \nu t)\,dt = \int_0^{\infty}\exp(B(t) - \nu t)\,dt < \infty \quad \text{a.s.}$$
Finally, compare $\int_0^{\infty}\exp(B_m(t) - \nu t)\,dt$ to $Y_m = 2^{-2m}\sum_{k=0}^{\infty}\exp(B_m(k2^{-2m}) - \nu k2^{-2m})$ that appears in the statement of the lemma. Applying the uniform domination of $\exp(B_m(t) - \nu t)$ by the function g shown above, both the tail of the integral on the interval $[K_0,\infty)$ and the tail of the sum for $k \ge \lceil K_0 2^{2m}\rceil$ are smaller than $\int_{K_0}^{\infty}\exp(-\nu t/2)\,dt$, thus their difference is uniformly arbitrarily small for any $m \ge m_0$ if $K_0$ is large enough. On the interval $[0,K_0]$ the difference of the integral and the sum (which is a Riemann sum of a continuous function) tends to zero uniformly as $m \to \infty$, since on each subinterval of length $2^{-2m}$, the difference of $B_m(t)$ and $B_m(k2^{-2m})$ is at most $2^{-m}$. This completes the proof of the lemma.
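A minimal Monte Carlo sketch of this approximation (our illustration only: each call below draws an independent RW rather than the nested sequence of the lemma, so it shows the convergence of the mean, cf. (3.32) below, not the pathwise coupling; the truncation horizon is our assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_Y(m, nu, horizon=50.0):
    # one sample of Y_m = 4^{-m} sum_k exp(2^{-m} S_m(k) - nu*k*4^{-m}),
    # truncated at t = horizon; the integrand decays like exp(-nu*t)
    n = int(horizon * 4 ** m)
    steps = rng.choice((-1.0, 1.0), size=n)
    S = np.concatenate(([0.0], np.cumsum(steps)))
    k = np.arange(n + 1)
    return 4.0 ** (-m) * np.exp(2.0 ** (-m) * S - nu * k * 4.0 ** (-m)).sum()

nu = 1.5
for m in (2, 3, 4):
    ys = [sample_Y(m, nu) for _ in range(1000)]
    print(m, np.mean(ys))   # approaches E(I) = 1/(nu - 1/2) = 1 by (3.32)
```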

Next we want to apply the results of the previous sections to $Y_m$. To do this we introduce the following notation. For $m \ge 0$ and $n \ge 1$ let

$$Y_{m,n} = 2^{-2m}\sum_{k=0}^{n}\exp\left(2^{-m}\tilde S_m(k) - \nu k 2^{-2m}\right) = 2^{-2m}\left(1 + \xi_{m1} + \xi_{m1}\xi_{m2} + \cdots + \xi_{m1}\cdots\xi_{mn}\right) \qquad (3.27)$$

and $Y_{m,0} = 2^{-2m}$, where $\xi_{mj} = \exp(2^{-m}\tilde X_m(j) - \nu 2^{-2m})$. Here $\tilde X_m(j) = \tilde S_m(j) - \tilde S_m(j-1)$ ($j = 1,2,\dots$) is an i.i.d. sequence, $P(\tilde X_m(j) = \pm 1) = \frac12$. Then $Y_{m,n} \nearrow Y_m$ as $n \to \infty$, $Y_m < \infty$ a.s. iff $\nu > 0$, and $Y_m$ satisfies the following self-similarity in distribution:
$$Y_m \stackrel{d}{=} 2^{-2m} + \xi_m Y_m \qquad \text{or} \qquad 2^{2m}Y_m \stackrel{d}{=} 1 + \xi_m 2^{2m} Y_m, \qquad (3.28)$$

where $\xi_m$ and $Y_m$ are independent, $\xi_m \stackrel{d}{=} \xi_{mj}$. Using the notations of Section 2, now $\gamma_1 = \exp(-2^{-m} - \nu 2^{-2m})$, $\gamma_2 = \exp(2^{-m} - \nu 2^{-2m})$, $p_1 = p_2 = \frac12$. If $\nu > 2^m$, $\gamma_2 < 1$ holds, so the similarity transformations $T_1$ and $T_2$ are contractions, mapping the interval $I = [(1-\gamma_1)^{-1}, (1-\gamma_2)^{-1}]$ into itself. By Theorem 10, the distribution of $Y_m$ is singular w.r.t. Lebesgue measure if $\nu > 2^{2m}\log 2$ ($m \ge 1$). Moreover, there is no overlap in the ranges of $T_1$ and $T_2$ iff $\nu > 2^{2m}\log(2\cosh(2^{-m}))$. As $m \to \infty$ this means asymptotically that $\nu > 2^{2m}\log 2 + \frac12 + o(1)$.
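The asymptotics of the no-overlap threshold is a one-line computation, since $2^{2m}\log\cosh(2^{-m}) \to \frac12$; a quick numerical check (ours):

```python
import math

for m in range(1, 8):
    threshold = 4 ** m * math.log(2 * math.cosh(2 ** -m))
    print(m, threshold - 4 ** m * math.log(2))   # tends to 1/2
```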

For m ≥ 0 and k integer let

$$\mu_{mk} = E(\xi_m^k) = \exp(-\nu k 2^{-2m})\cosh(k2^{-m}).$$
Since Proposition 3 is applicable to $2^{2m}Y_m$, one obtains that $E(Y_m^p) < \infty$ if and only if $\mu_{mp} < 1$, and then the following recursion is valid for $p \ge 1$ integer:
$$E(Y_m^p) = \frac{1}{1-\mu_{mp}}\sum_{k=0}^{p-1}\binom{p}{k}2^{-2m(p-k)}\mu_{mk}E(Y_m^k). \qquad (3.29)$$
Now using $\cosh(x) < e^x$ ($x > 0$), it follows that $\mu_{mk} < \exp(k2^{-m}(1 - \nu 2^{-m}))$. So $\mu_{mk} < 1$ for any $k \ge 1$ if $\nu \ge 2^m$. If $0 < \nu < 2^m$, only finitely many positive moments are finite, since $\mu_{mk} \ge 1$ when $0 < \nu \le 2^{2m}k^{-1}\log\cosh(k2^{-m}) \to 2^m$ as $k \to \infty$. More importantly,
$$\mu_{mk} < \exp\left(k2^{-2m}\left(\frac{k}{2} - \nu\right)\right) \qquad (k \ge 1), \qquad (3.30)$$
since $\cosh(x) < \exp(x^2/2)$ when $x > 0$ (compare the Taylor series). Thus $\mu_{mk} < 1$ for any $m \ge 0$ if $\nu \ge k/2$. This condition is sharp as $m \to \infty$. For, apply $e^x = 1 + x + o(x)$ and $\cosh(x) = 1 + x^2/2 + o(x^2)$ (as $x \to 0$) to the definition of $\mu_{mk}$. Then
$$\mu_{mk} = 1 + k2^{-2m}\left(\frac{k}{2} - \nu\right) + o(2^{-2m}), \qquad (3.31)$$
for any fixed k as $m \to \infty$. Using these results one can evaluate the moments of the exponential functional.

3.4.1 The moments of the exponential functional of BM

Lemma 10. If p is a positive integer such that $p/2 < \nu$, then

$$\lim_{m\to\infty}E(Y_m^p) = \frac{1}{\prod_{k=1}^{p}\left(\nu - \frac{k}{2}\right)} < \infty. \qquad (3.32)$$
Proof. By (3.30), for any positive integer p such that $p/2 < \nu$ we have $\mu_{mp} < 1$. Since Proposition 3 is valid for $2^{2m}Y_m$, the recursion formula (3.29) holds, and by induction one gets $E(Y_m^p)$ as a rational function of the moments $\mu_{m1},\dots,\mu_{mp}$, similarly to formula (3.18). The argument below formula (3.18) applies here too, showing that the sum of the coefficients in the numerator of this rational function is $p!\,2^{-2mp}$. The extra factor comes from the difference that $Y_m$ is multiplied by $2^{2m}$ here, compare equations (3.17) and (3.29). Since each $\mu_{mk} \to 1$ as $m \to \infty$, it follows that $2^{2mp}$ times the numerator tends to p!. By (3.31), we get that $1 - \mu_{mk} = k2^{-2m}(\nu - k/2) + o(2^{-2m})$ if k is fixed and $m \to \infty$. So $2^{2mp}$ times the denominator of the rational function tends to $p!\prod_{k=1}^{p}(\nu - k/2)$ as $m \to \infty$. This and the limit of the numerator together imply the statement of the lemma.
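The recursion (3.29) can be evaluated numerically and watched converge to the limit (3.32); a brief sketch of ours, valid only while $\mu_{mp} < 1$:

```python
import math

def moments_Ym(m, nu, pmax):
    # E(Y_m^p) for p = 0..pmax via the recursion (3.29); needs mu_{mp} < 1
    mu = [math.exp(-nu * k * 4 ** -m) * math.cosh(k * 2 ** -m) for k in range(pmax + 1)]
    E = [1.0]
    for p in range(1, pmax + 1):
        s = sum(math.comb(p, k) * 4.0 ** (-m * (p - k)) * mu[k] * E[k] for k in range(p))
        E.append(s / (1 - mu[p]))
    return E

nu, p = 3.0, 4
for m in (1, 3, 5, 8):
    print(m, moments_Ym(m, nu, p)[p])
print(1 / math.prod(nu - k / 2 for k in range(1, p + 1)))   # limit (3.32): 0.1333...
```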

Our next objective is to give an asymptotic formula, similar to (3.32), for the negative moments of $Y_m$ as $m \to \infty$.

Lemma 11. For all integers $p \ge 1$, we have
$$\lim_{m\to\infty}E(Y_m^{-p}) = \lim_{m\to\infty}E(Y_m^{-1})\prod_{k=1}^{p-1}\left(\nu + \frac{k}{2}\right), \qquad (3.33)$$
where $\lim_{m\to\infty}E(Y_m^{-1}) < \infty$.

Proof. We want to show (3.33) by establishing a recursion. Introduce the notations $z_{m,k} = E(Y_m^{-k})$ and $\mu_{m,-k} = E(\xi_m^{-k})$ for $k \ge 1$ integer. By (3.27), $0 < Y_m^{-1} < 2^{2m}$, hence all negative moments $z_{m,k}$ of $Y_m$ are finite. The self-similarity equation (3.28) implies that $\xi_m Y_m \stackrel{d}{=} Y_m - 2^{-2m}$ and so

$$\xi_m^{-1}Y_m^{-1} \stackrel{d}{=} \frac{Y_m^{-1}}{1 - 2^{-2m}Y_m^{-1}}, \qquad (3.34)$$
where $\xi_m$ and $Y_m$ are independent. Taking the kth moment ($k \ge 1$ integer) on both sides and applying the Taylor series
$$\frac{x^k}{(1-x)^k} = \sum_{n=k}^{\infty}\binom{n-1}{k-1}x^n, \qquad (3.35)$$
valid for any $|x| < 1$, one obtains

$$\mu_{m,-k}\, z_{m,k} = \sum_{n=k}^{\infty}\binom{n-1}{k-1}2^{-2m(n-k)}z_{m,n},$$
with the notations introduced above. This implies that

$$(\mu_{m,-k} - 1)z_{m,k} - k2^{-2m}z_{m,k+1} = a(m,k), \qquad (3.36)$$
where
$$a(m,k) = \sum_{n=k+2}^{\infty}\binom{n-1}{k-1}2^{-2m(n-k)}z_{m,n} \ge 0.$$
Next we want to give an upper bound for a(m,k) which goes to zero fast enough as $m \to \infty$. Since $\xi_m \ge \gamma_1 = \exp(-2^{-m} - \nu 2^{-2m})$, by (3.27) it follows that

$$Y_m^{-1} \le \left(2^{-2m}\sum_{j=0}^{\infty}\gamma_1^j\right)^{-1} = 2^{2m}(1-\gamma_1) \le 2^{2m}(2^{-m} + \nu 2^{-2m}) \le 2^{m+1}, \qquad (3.37)$$

if $m \ge \log(\nu)/\log(2)$, where we used that $1 - e^{-x} \le x$ for any real x. This implies that $z_{m,r+j} = E(Y_m^{-r-j}) \le E(Y_m^{-r})\sup(Y_m^{-1})^j \le z_{m,r}2^{(m+1)j}$ for $r, j \ge 0$. Substituting this into the definition of a(m,k), one gets that

$$a(m,k) \le z_{m,k+1}\sum_{n=k+2}^{\infty}\binom{n-1}{k-1}2^{-2m(n-k)}2^{(m+1)(n-k-1)} = z_{m,k+1}2^{k(m-1)-m-1}\sum_{n=k+2}^{\infty}\binom{n-1}{k-1}2^{(1-m)n}$$
$$= z_{m,k+1}2^{-m-1}\left[(1-2^{1-m})^{-k} - 1 - k2^{1-m}\right] \le z_{m,k+1}2^{-m-1}\,4k(k+1)2^{-2m} = z_{m,k+1}\,2k(k+1)2^{-3m},$$
if m is large enough, depending on k. Let us substitute this estimate of a(m,k) into (3.36) and express the following ratio:

$$\frac{z_{m,k+1}}{z_{m,k}} = \frac{\mu_{m,-k} - 1}{k2^{-2m}(1 + O(2^{-m}))}.$$

Apply the asymptotics (3.31) here with k replaced by $-k$:

$$\frac{z_{m,k+1}}{z_{m,k}} = \frac{\nu + \frac{k}{2} + o(1)}{1 + O(2^{-m})},$$
as $m \to \infty$. This implies the equality $\lim_{m\to\infty}z_{m,k+1}/z_{m,k} = \nu + k/2$, and thus for any positive integer p,
$$\lim_{m\to\infty}\frac{E(Y_m^{-p})}{E(Y_m^{-1})} = \prod_{k=1}^{p-1}\left(\nu + \frac{k}{2}\right). \qquad (3.38)$$
It remains to show that $E(Y_m^{-1})$ has a finite limit as $m \to \infty$. Writing $Y_m^{-2} = Y_m Y_m^{-3}$ and applying the Cauchy-Schwarz inequality, one obtains $E(Y_m^{-2})^2 \le E(Y_m^2)E(Y_m^{-6})$, or $E(Y_m^{-2}) \le E(Y_m^2)E(Y_m^{-6})/E(Y_m^{-2})$. Suppose first that $\nu > 1$ and take the limit here on the right hand side as $m \to \infty$, applying (3.32) and (3.38). It follows that $\sup_{m\ge 1}E(Y_m^{-2}) < \infty$. Since $E(Y_m^{-2})$ is an increasing function of $\nu$ by its definition, the same is true for any $\nu \in (0,1]$ as well. Since by Lemma 9, $Y_m \to I$ a.s., where each $Y_m$ and also I take values in $(0,\infty)$ a.s., it follows that $Y_m^{-1} \to I^{-1}$ a.s. Then by the uniform $L^2$ boundedness of $Y_m^{-1}$ ($m \ge 0$) shown above, $E(Y_m^{-1}) \to E(I^{-1}) < \infty$ follows as well. This ends the proof of the lemma.

Finally, it turns out that $Y_m^{-1}$ converges to $I^{-1}$ in any $L^p$. This makes it possible to recover the result (3.1) of [7] and [38].

Theorem 12. Let $B_m(t) = 2^{-m}\tilde S_m(t2^{2m})$, $t \ge 0$, $m \ge 0$, be a sequence of shrunken simple symmetric RWs that a.s. converges to BM $(B(t), t \ge 0)$, uniformly on bounded intervals. Take

$$Y_m = 2^{-2m}\sum_{k=0}^{\infty}\exp\left(B_m(k2^{-2m}) - \nu k2^{-2m}\right) = 2^{-2m}(1 + \xi_{m1} + \xi_{m1}\xi_{m2} + \cdots)$$
and
$$I = \int_0^{\infty}\exp(B(t) - \nu t)\,dt$$
when $\nu > 0$. Then the following statements hold true:

(a) $Y_m^{-1}$ converges to $I^{-1}$ in $L^p$ for any real $p \ge 1$ and $\lim_{m\to\infty}E(Y_m^{-p}) = E(I^{-p}) < \infty$;

(b) $I \stackrel{d}{=} 2/Z_{2\nu}$, where $Z_{2\nu}$ is a gamma distributed random variable with index $2\nu$ and parameter 1;

(c) $Y_m$ converges to I in $L^p$ for any integer p such that $1 \le p < 2\nu$ (supposing $\nu > \frac12$) and then $\lim_{m\to\infty}E(Y_m^p) = E(I^p) < \infty$. The same is true for any real q, $1 \le q < p$.

Proof. By Lemma 9, $Y_m \to I$ a.s., where each $Y_m$ and also I take values in $(0,\infty)$ a.s. Hence $Y_m^{-1} \to I^{-1}$ a.s. By Lemma 11, $\lim_{m\to\infty}E(Y_m^{-k}) < \infty$ for any $k \ge 1$ integer, so (a) follows. Thus by (a) and Lemma 11, for any integer $p \ge 1$,

$$a_p = E(I^{-p}) = c\prod_{k=1}^{p-1}\left(\nu + \frac{k}{2}\right) = c\,2^{1-p}(2\nu+1)\cdots(2\nu+p-1) < \infty,$$
where $c = E(I^{-1})$. By a classical result, see [25], a Stieltjes moment problem is determinate, that is, the moments uniquely determine a probability distribution on $[0,\infty)$, if there exist constants $C > 0$ and $R > 0$ such that $a_p \le CR^p(2p)!$ for any $p \ge 1$ integer. In the present case $a_p \le c\,2^{-p}(p+1)!$ when $\nu \le 1$ and $a_p \le c\,(\nu/2)^{p-1}(p+1)!$ when $\nu > 1$, so the moment problem for $I^{-1}$ is determinate and it also follows that $I^{-1}$ has a finite moment generating function in a neighborhood of the origin.

Also, using the moments of the gamma distribution we get

$$b_p = E(2^{-p}Z_{2\nu}^p) = 2^{-p}\frac{\Gamma(2\nu+p)}{\Gamma(2\nu)} = 2^{-p}(2\nu)(2\nu+1)\cdots(2\nu+p-1),$$
for any $p \ge 1$, and $Z_{2\nu}/2$ has a finite moment generating function in a neighborhood of the origin as well. Writing down the two moment generating functions with the help of the moments $a_p$ and $b_p$, respectively, it follows that
$$\frac{c}{\nu}\,E(\exp(uI^{-1})) = E(\exp(uZ_{2\nu}/2))$$
in a neighborhood of the origin. Substituting $u = 0$, one obtains $c = E(I^{-1}) = \nu$, and this proves (b). Finally, again, $Y_m \to I$ a.s. by Lemma 9. If p is an integer such that $1 \le p < 2\nu$, by Lemma 10, using the moments of the gamma distribution, and by (b), we have $\lim_{m\to\infty}E(Y_m^p) = E(2^pZ_{2\nu}^{-p}) = E(I^p)$. This proves (c).
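Statement (b) is easy to probe by simulation (our sketch, reusing sample_Y and rng from the sketch after Lemma 9): by (b), $1/I \stackrel{d}{=} Z_{2\nu}/2$, which has mean $\nu$ and variance $\nu/2$.

```python
import numpy as np

nu, m, N = 1.5, 4, 2000
inv = np.array([1.0 / sample_Y(m, nu) for _ in range(N)])
print(inv.mean(), inv.var())   # roughly nu = 1.5 and nu/2 = 0.75

# compare a few quantiles with those of Z_{2*nu}/2
gamma = rng.gamma(2 * nu, size=N) / 2
print(np.quantile(inv, [0.25, 0.5, 0.75]))
print(np.quantile(gamma, [0.25, 0.5, 0.75]))
```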

3.4.2 Properties of the exponential functional process

Theorem 12 says that the one-dimensional marginal distributions of the process $X(\nu) = 1/I(\nu)$, $0 < \nu$, are gamma distributed with index $2\nu$, like those of a gamma process. In spite of the equality of the one-dimensional marginals, X is completely different from the gamma process, as the next proposition shows.

Proposition 4. The process
$$X(\nu) := \frac{1}{\int_0^{\infty}e^{B_s - \nu s}\,ds} \qquad (3.39)$$
is almost surely continuous, so it is not a gamma process.

Proof. Clearly, it is enough to prove that $I(\cdot)$ is continuous. Fix $\nu_0$ and let $\nu < \nu_0$; then we have
$$I(\nu) - I(\nu_0) = \int_0^{\infty}e^{B_s - \nu s}\left(1 - e^{(\nu - \nu_0)s}\right)ds.$$

Letting $\nu$ tend to $\nu_0$, by the Lebesgue dominated convergence theorem one gets that the integral on the right tends to 0. So $I(\nu) \to I(\nu_0)$ almost surely as $\nu \to \nu_0$, $\nu < \nu_0$. The proof in the case $\nu > \nu_0$ is similar. Therefore, we get that I, and so X, are almost surely continuous.

An interesting consequence of the technique used in the proof of Lemma 11 is the following lemma. It says that every multi-index moment of the process $(I^{-1}(\nu))_{\nu>0}$ can be derived from some other multi-index moments with bigger individual exponents. We remark that this statement seems to be insufficient to determine all multi-index moments. For simplicity, we introduce some notation. Fix a positive integer d and vectors $k \in \mathbb{Z}_+^d$, $\nu \in \mathbb{R}_+^d$. Let

$$M((\nu,k)) = E\left(I(\nu_1)^{-k_1}\cdots I(\nu_d)^{-k_d}\right).$$

Finally, let $(\nu,k)_i = (\nu, (k_1,\dots,k_{i-1},k_i+1,k_{i+1},\dots,k_d))$. For the multi-index moment $M((\nu,k))$ we have the following lemma.

Lemma 12.

$$M((\nu,k)) = \left(\frac{k_1+\cdots+k_d}{2} + \frac{k_1\nu_1+\cdots+k_d\nu_d}{k_1+\cdots+k_d}\right)^{-1}\sum_{i=1}^{d}\frac{k_i}{k_1+\cdots+k_d}M((\nu,k)_i).$$

Proof. Basically, we use the technique introduced in the proof of Lemma 11, from (3.34) to (3.38). By using (3.35), we can determine the multidimensional version of (3.34) for higher moments:

$$\xi_m^{-k_1}(\nu_1)Y_m^{-k_1}(\nu_1)\cdots\xi_m^{-k_d}(\nu_d)Y_m^{-k_d}(\nu_d) \stackrel{d}{=} \prod_{i=1}^{d}\left(\sum_{n_i=k_i}^{\infty}\binom{n_i-1}{k_i-1}Y_m^{-n_i}(\nu_i)2^{-2m(n_i-k_i)}\right)$$
$$= \sum_{n_1=k_1}^{\infty}\cdots\sum_{n_d=k_d}^{\infty}\binom{n_1-1}{k_1-1}\cdots\binom{n_d-1}{k_d-1}\cdot Y_m^{-n_1}(\nu_1)\cdots Y_m^{-n_d}(\nu_d)\cdot 2^{-2m\sum_{i=1}^{d}(n_i-k_i)}. \qquad (3.40)$$

Let $\mu_m((\nu,k)) = E\left(\xi_m^{-k_1}(\nu_1)\cdots\xi_m^{-k_d}(\nu_d)\right)$ and $z_m((\nu,n)) = E\left(Y_m^{-n_1}(\nu_1)\cdots Y_m^{-n_d}(\nu_d)\right)$. Using the definition of $\xi_m(\nu)$ and the following two expansions,
$$\cosh\left(\frac{k_1+\cdots+k_d}{2^m}\right) = 1 + \frac{(k_1+\cdots+k_d)^2}{2}\frac{1}{2^{2m}} + o\left(\frac{1}{2^{2m}}\right),$$
$$e^{(\nu_1k_1+\cdots+\nu_dk_d)\frac{1}{2^{2m}}} = 1 + (\nu_1k_1+\cdots+\nu_dk_d)\frac{1}{2^{2m}} + o\left(\frac{1}{2^{2m}}\right),$$
a simple calculation shows that
$$\mu_m((\nu,k)) = \cosh\left(\frac{k_1+\cdots+k_d}{2^m}\right)e^{(\nu_1k_1+\cdots+\nu_dk_d)\frac{1}{2^{2m}}} = 1 + (k_1+\cdots+k_d)2^{-2m}\left(\frac{k_1+\cdots+k_d}{2} + \frac{k_1\nu_1+\cdots+k_d\nu_d}{k_1+\cdots+k_d}\right) + o(2^{-2m}).$$

Now, taking expectations on both sides of (3.40) and using the independence of $\xi_m(\nu)$ and $Y_m(\nu)$, we get
$$\mu_m((\nu,k))z_m((\nu,k)) = z_m((\nu,k)) + \sum_{i=1}^{d}k_i2^{-2m}z_m((\nu,k)_i) + a_m((\nu,k)), \qquad (3.41)$$
where
$$a_m((\nu,k)) = E\left(\sum_{n-k\ge 2}\binom{n_1-1}{k_1-1}\cdots\binom{n_d-1}{k_d-1}\cdot Y_m^{-n_1}(\nu_1)\cdots Y_m^{-n_d}(\nu_d)\cdot 2^{-2m\sum_{i=1}^{d}(n_i-k_i)}\right),$$
where $n - k = \sum_{i=1}^{d}(n_i - k_i)$; that is, we sum those terms whose vector n is either bigger than k in at least two entries, or the difference is at least 2 in at least one entry. Using the estimate (3.37), the remark and the estimation after it, one can obtain

$$a_m((\nu,k)) \le \sum_{i=1}^{d}z_m((\nu,k)_i)\left[(1-2^{1-m})^{-(k_1+\cdots+k_d-k_i)}\,2k(k+1)2^{-3m} + \sum_{j=1}^{d}(1-2^{1-m})^{-(k_1+\cdots+k_d-k_i-k_j)}\,2^{-2m}k_j2^{1-m}\right] \le C_k\,2^{-3m}\sum_{i=1}^{d}z_m((\nu,k)_i)$$
for an appropriate constant $C_k$. From (3.41), we have

$$z_m((\nu,k)) = \frac{k_12^{-2m}}{\mu_m((\nu,k)) - 1}z_m((\nu,k)_1) + \cdots + \frac{k_d2^{-2m}}{\mu_m((\nu,k)) - 1}z_m((\nu,k)_d) + O(2^{-m}).$$

Applying the convergence results of Theorem 12, that is, $z_m((\nu,k)) \to M((\nu,k))$ as m tends to infinity, we get

$$M((\nu,k)) = \left(\frac{k_1+\cdots+k_d}{2} + \frac{k_1\nu_1+\cdots+k_d\nu_d}{k_1+\cdots+k_d}\right)^{-1}\sum_{i=1}^{d}\frac{k_i}{k_1+\cdots+k_d}M((\nu,k)_i),$$
as required.

Bibliography

[1] Carmona, P., Petit, F. and Yor, M. (1997) On the distribution and asymptotic results for exponential functionals of Lévy processes. In: Yor, M., ed. Exponential functionals and principal values related to Brownian motion, pp. 73-121. Biblioteca de la Revista Matemática Iberoamericana.

[2] Csörgő, M. (1999) Random walking around financial mathematics. In: Révész, P. and Tóth, B., eds. Random walks, Bolyai Society Mathematical Studies, 9, pp. 59-111, J. Bolyai Math. Soc., Budapest.

[3] Dambis, K.E. (1965) On the decomposition of continuous martingales. Theor. Prob. Appl. 10, 401-410.

[4] Dubins, L. E., Émery, M. and Yor, M. (1993) On the Lévy transformation of Brownian motions and continuous martingales. In: Séminaire de Probabilités, XXVII, Lecture Notes in Math., 1557, pp. 122-132, Springer, Berlin.

[5] Dubins, L. and Schwarz, G. (1965) On continuous martingales. Proc. Nat. Acad. Sci. USA 53, 913-916.

[6] Dubins, L. E. and Smorodinsky, M. (1992) The Modified, Discrete, Lévy-Transformation is Bernoulli. In: Séminaire de Probabilités, XXVI, Lecture Notes in Math., 1526, Springer.

[7] Dufresne, D. (1990) The distribution of a perpetuity, with applications to risk theory and pension funding. Scand. Actuarial J., 39-79.

[8] Falconer, K. (1990) Fractal geometry. Mathematical foundations and applications. Wiley, Chichester.

[9] Grincevičius, A. K. (1974) On the continuity of the distribution of a sum of dependent variables connected with independent walks on lines. Theory Prob. Appl. 19, 163-168.

[10] Hoeffding, W. (1963) Probability inequalities for sums of bounded random variables. J. Amer. Statist. Assoc. 58, 13-30.

[11] Jacod, J. and Shiryaev, A. N. (1987) Limit Theorems for Stochastic Processes. Springer-Verlag, Berlin.

[12] Karandikar, R.L. (1983) On the quadratic variation process of continuous martingales. Illinois J. Math. 27, 178-181.

[13] Karandikar, R. L. (1995) On pathwise stochastic integration. Stochastic Process. Appl. 57, 11-18.

[14] Kiefer, J. (1969) On the deviation in the Skorokhod-Strassen approximation scheme. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete 13, 321-332.


[15] Knight, F.B. (1962) On the random walk and Brownian motion. Trans. Amer. Math. Soc. 103, 218-228.

[16] Knuth, D. E. (1973) The Art of Computer Programming. Vol. 3. Sorting and searching. Addison-Wesley, Reading, Mass.

[17] Liptser, R. S. and Shiryaev, A. N. (1977) Statistics of Random Processes I: General Theory. Springer-Verlag, New York.

[18] MacMahon, P. A. (1915) Combinatory Analysis. Vol. 1. Cambridge Univ. Press, Cambridge, England.

[19] Marchal, P. (2003) Constructing a sequence of random walks strongly converging to Brownian motion, Preprint

[20] Ocone, D. L. (1993) A symmetry characterization of conditionally independent increment martingales. In: D. Nualart and M. Sanz-Solé, eds. Barcelona Seminar on Stochastic Analysis, Progress in Probability, 32, pp. 147-167, Birkhäuser, Basel.

[21] Pitman, J. (2003) Combinatorial Stochastic Processes, Preprint

[22] Révész, P. (1990) Random Walk in Random and Non-Random Environments. World Scientic, Singapore.

[23] Revuz, D. and Yor, M. (1999) Continuous Martingales and Brownian Motion. Third edition, Springer, Berlin.

[24] Rudin, W. (1970) Real and complex analysis. McGraw-Hill, New York.

[25] Simon, B. (1998) The classical moment problem as a self-adjoint finite difference operator. Adv. Math. 137, 82-203.

[26] Simon, K., Solomyak, B., and Urbański, M. (2001) Invariant measures for parabolic IFS with overlaps and random continued fractions. Trans. Amer. Math. Soc. 353, 5145-5164.

[27] Stanley, R. (1986) Enumerative Combinatorics. Vol. 1. Wadsworth and Brooks/Cole Mathematics Series, Monterey, Calif.

[28] Szabados, T. (1996) An elementary introduction to the Wiener process and stochastic integrals. Studia Sci. Math. Hung. 31, 249-297.

[29] Szabados, T. (2001) Strong approximation of fractional Brownian motion by moving averages of simple random walks. Stochastic Process. Appl. 92, 31-60.

[30] Szabados, T. and Székely, B. (2003) An exponential functional of random walks. J. Appl. Prob. 40, 413-426.

[31] Szabados, T. and Székely, B. (2004) Moments of an exponential functional of random walks and permutations with given descent sets. Periodica Math. Hung., to appear.

[32] Székely, B. and Szabados, T. (2004) Strong approximation of continuous local martingales by simple random walks. Studia Sci. Math. Hung. 41, 101-126.

[33] Székely, G. J. (1975) On the polynomials of independent random variables. In: Limit theorems of probability theory (Keszthely, 1974), Colloq. Math. Soc. János Bolyai, 11, pp. 365-371, North-Holland, Amsterdam.

[34] Van Zanten, H. (2002) Continuous Ocone martingales as weak limits of rescaled martingales. Elect. Comm. in Probab. 7, 205-212.

[35] Vervaat, W. (1979) On a stochastic difference equation and a representation of non-negative infinitely divisible random variables. Adv. Appl. Prob. 11, 750-783.

[36] Vostrikova, L. and Yor, M. (2000) Some invariance properties (of the laws) of Ocone's martingales. In: Séminaire de Probabilités, XXXIV, Lecture Notes in Math., 1729, pp. 417-431, Springer, Berlin.

[37] Walters, P. (1982) An Introduction to Ergodic Theory. Springer-Verlag, New York.

[38] Yor, M. (1992) On certain exponential functionals of real-valued Brownian motion. J. Appl. Prob., 29, 202-208.

[39] Yor, M. (2001) Exponential functionals of Brownian motion and related processes. Springer, Berlin.

[40] Zabrocki, M. (2001) Integer sequence A060351. On-line encyclopedia of integer sequences. http://www.research.att.com/~njas/sequences/Seis.html

DECLARATION

I, the undersigned, Székely Balázs, declare that I prepared this doctoral dissertation myself and used only the sources given. Every part that I have taken from another source, either verbatim or rephrased with the same content, is clearly marked with a reference to the source.

Budapest, April 29, 2004. Székely Balázs