
Part III — Stochastic Calculus and Applications

Based on lectures by R. Bauerschmidt Notes taken by Dexter Chua Lent 2018

These notes are not endorsed by the lecturers, and I have modified them (often significantly) after lectures. They are nowhere near accurate representations of what was actually lectured, and in particular, all errors are almost surely mine.

– Brownian motion. Existence and sample path properties.

– Stochastic calculus for continuous processes. Martingales, local martingales, semi-martingales, quadratic and cross-variation, Itô's isometry, definition of the stochastic integral, Kunita–Watanabe theorem, and Itô's formula.

– Applications to Brownian motion and martingales. Lévy characterization of Brownian motion, Dubins–Schwartz theorem, martingale representation, Girsanov theorem, conformal invariance of planar Brownian motion, and Dirichlet problems.

– Stochastic differential equations. Strong and weak solutions, notions of existence and uniqueness, Yamada–Watanabe theorem, strong Markov property, and relation to second order partial differential equations.

Pre-requisites

Knowledge of measure theoretic probability as taught in Part III Advanced Probability will be assumed, in particular familiarity with discrete-time martingales and Brownian motion.


Contents

0 Introduction

1 The Lebesgue–Stieltjes integral

2 Semi-martingales
2.1 Finite variation processes
2.2 Local martingale
2.3 Square integrable martingales
2.4 Quadratic variation
2.5 Covariation
2.6 Semi-martingale

3 The stochastic integral
3.1 Simple processes
3.2 Itô isometry
3.3 Extension to local martingales
3.4 Extension to semi-martingales
3.5 Itô formula
3.6 The Lévy characterization
3.7 Girsanov's theorem

4 Stochastic differential equations
4.1 Existence and uniqueness of solutions
4.2 Examples of stochastic differential equations
4.3 Representations of solutions to PDEs

Index


0 Introduction

Ordinary differential equations are central in many applications. The simplest class of equations tends to look like
$$\dot{x}(t) = F(x(t)).$$
Stochastic differential equations are differential equations where we make the function $F$ "random". There are many ways of doing so, and the simplest way is to write it as
$$\dot{x}(t) = F(x(t)) + \eta(t),$$
where $\eta$ is a random function. For example, when modeling noisy physical systems, our physical bodies will be subject to random noise. What should we expect the function $\eta$ to be like? We might expect that for $|t - s| \gg 0$, the variables $\eta(t)$ and $\eta(s)$ are "essentially" independent. If we are interested in physical systems, then this is a rather reasonable assumption, since random noise is random! In practice, we work with the idealization, where we claim that $\eta(t)$ and $\eta(s)$ are independent for $t \neq s$. Such an $\eta$ exists, and is known as white noise. However, it is not a function, but just a Schwartz distribution. To understand the simplest case, we set $F = 0$. We then have the equation

$$\dot{x} = \eta.$$

We can write this in integral form as

$$x(t) = x(0) + \int_0^t \eta(s)\,ds.$$
To make sense of this integral, the function $\eta$ should at least be a signed measure. Unfortunately, white noise isn't. This is bad news. We ignore this issue for a little bit, and proceed as if it made sense. If the equation held, then for any $0 = t_0 < t_1 < \cdots$, the increments
$$x(t_i) - x(t_{i-1}) = \int_{t_{i-1}}^{t_i} \eta(s)\,ds$$
should be independent, and moreover their variance should scale linearly with $|t_i - t_{i-1}|$. So maybe this $x$ should be a Brownian motion!

Formalizing these ideas will take up a large portion of the course, and the work isn't always pleasant. Then why should we be interested in this continuous problem, as opposed to what we obtain when we discretize time? It turns out in some sense the continuous problem is easier. When we learn measure theory, there is a lot of work put into constructing the Lebesgue measure, as opposed to the sum, which we can just define. However, what we end up with is much easier — it's easier to integrate $\frac{1}{x^3}$ than to sum $\sum_{n=1}^\infty \frac{1}{n^3}$. Similarly, once we have set up the machinery of stochastic calculus, we have a powerful tool to do explicit computations, which is usually harder in the discrete world.

Another reason to study stochastic calculus is that a lot of continuous time processes can be described as solutions to stochastic differential equations. Compare this with the fact that functions such as trigonometric and Bessel functions are described as solutions to ordinary differential equations!
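The scaling heuristic above is easy to check numerically. Here is a minimal sketch (assuming NumPy; the interval lengths and sample count are arbitrary illustrative choices): increments over disjoint intervals are modeled as independent centered Gaussians whose variance equals the interval length.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 200_000

# Increments of the would-be process x over disjoint intervals are modeled
# as independent centered Gaussians with variance equal to the interval length.
dt = 0.25
inc_short = rng.normal(0.0, np.sqrt(dt), n_samples)      # interval of length dt
inc_long = rng.normal(0.0, np.sqrt(2 * dt), n_samples)   # interval of length 2*dt

# The empirical variances scale linearly with the interval length.
print(inc_short.var(), inc_long.var())
```

With this many samples, the empirical variances land close to $0.25$ and $0.5$, matching the linear scaling of the variance in the interval length.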


There are two ways to approach stochastic calculus, namely via the Itô integral and the Stratonovich integral. We will mostly focus on the Itô integral, which is more useful for our purposes. In particular, the Itô integral tends to give us martingales, which is useful. To give a flavour of the construction of the Itô integral, we consider the simpler scenario of the Wiener integral.

Definition (Gaussian space). Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space. Then a subspace $S \subseteq L^2(\Omega, \mathcal{F}, \mathbb{P})$ is called a Gaussian space if it is a closed linear subspace and every $X \in S$ is a centered Gaussian random variable.

An important construction is

Proposition. Let $H$ be any separable Hilbert space. Then there is a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ with a Gaussian subspace $S \subseteq L^2(\Omega, \mathcal{F}, \mathbb{P})$ and an isometry $I : H \to S$. In other words, for any $f \in H$, there is a corresponding random variable $I(f) \sim N(0, (f, f)_H)$. Moreover, $I(\alpha f + \beta g) = \alpha I(f) + \beta I(g)$ and $(f, g)_H = \mathbb{E}[I(f) I(g)]$.

Proof. By separability, we can pick a Hilbert space basis $(e_i)_{i=1}^\infty$ of $H$. Let $(\Omega, \mathcal{F}, \mathbb{P})$ be any probability space that carries an infinite independent sequence of standard Gaussian random variables $X_i \sim N(0, 1)$. Then send $e_i$ to $X_i$, extend by linearity and continuity, and take $S$ to be the image.

In particular, we can take $H = L^2(\mathbb{R}_+)$.

Definition (Gaussian white noise). A Gaussian white noise on $\mathbb{R}_+$ is an isometry $WN$ from $L^2(\mathbb{R}_+)$ into some Gaussian space. For $A \subseteq \mathbb{R}_+$, we write $WN(A) = WN(1_A)$.

Proposition.

– For $A \subseteq \mathbb{R}_+$ with $|A| < \infty$, $WN(A) \sim N(0, |A|)$.

– For disjoint $A, B \subseteq \mathbb{R}_+$, the variables $WN(A)$ and $WN(B)$ are independent.

– If $A = \bigcup_{i=1}^\infty A_i$ for disjoint sets $A_i \subseteq \mathbb{R}_+$, with $|A| < \infty$, $|A_i| < \infty$, then
$$WN(A) = \sum_{i=1}^\infty WN(A_i) \quad \text{in } L^2 \text{ and a.s.}$$

Proof. Only the last point requires proof. Observe that the partial sum
$$M_n = \sum_{i=1}^n WN(A_i)$$
is a martingale, and is bounded in $L^2$ as well, since
$$\mathbb{E} M_n^2 = \sum_{i=1}^n \mathbb{E}\, WN(A_i)^2 = \sum_{i=1}^n |A_i| \le |A|.$$
So we are done by the martingale convergence theorem. The limit is indeed $WN(A)$ because $1_A = \sum_{i=1}^\infty 1_{A_i}$.


The point of the proposition is that WN really looks like a random measure on R+, except it is not. We only have convergence almost surely above, which means we have convergence on a set of measure 1. However, the set depends on which A and Ai we pick. For things to actually work out well, we must have a fixed set of measure 1 for which convergence holds for all A and Ai. But perhaps we can ignore this problem, and try to proceed. We define

Bt = WN([0, t])

for t ≥ 0.

Exercise. This $B_t$ is a standard Brownian motion, except for the continuity requirement. In other words, for any $t_1, t_2, \ldots, t_n$, the vector $(B_{t_i})_{i=1}^n$ is jointly Gaussian with $\mathbb{E}[B_s B_t] = s \wedge t$ for $s, t \ge 0$.

Moreover, $B_0 = 0$ a.s., $B_t - B_s$ is independent of $\sigma(B_r : r \le s)$, and $B_t - B_s \sim N(0, t - s)$ for $t \ge s$.

In fact, by picking a good basis of $L^2(\mathbb{R}_+)$, we can make $B_t$ continuous.

We can now try to define some stochastic integral. If $f \in L^2(\mathbb{R}_+)$ is a step function,
$$f = \sum_{i=1}^n f_i 1_{[s_i, t_i]}$$
with $s_i < t_i$, then
$$WN(f) = \sum_{i=1}^n f_i (B_{t_i} - B_{s_i}).$$
This motivates the notation
$$WN(f) = \int f(s)\,dB_s.$$

However, extending this to a function that is not a step function would be problematic.
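The isometry property $\mathbb{E}[WN(f)^2] = (f, f)_{L^2}$ can be sanity-checked on a step function by simulation. A minimal sketch (assuming NumPy; the particular step function is an arbitrary illustration, and only the Brownian values at its breakpoints are needed):

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples = 200_000

# Brownian motion at the breakpoints of the step function on (0, 1]:
# B_{1/2} ~ N(0, 1/2) and B_1 - B_{1/2} ~ N(0, 1/2), independent.
B_half = rng.normal(0.0, np.sqrt(0.5), n_samples)
B_one = B_half + rng.normal(0.0, np.sqrt(0.5), n_samples)

# f = 2 on (0, 1/2] and f = -1 on (1/2, 1]; WN(f) = sum_i f_i (B_{t_i} - B_{s_i}).
wn_f = 2.0 * B_half - 1.0 * (B_one - B_half)

# Isometry: E[WN(f)^2] = integral of f^2 = 4 * (1/2) + 1 * (1/2) = 2.5.
print(wn_f.var())
```

The empirical variance comes out close to $2.5$, matching $\int_0^1 f(s)^2\,ds$.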


1 The Lebesgue–Stieltjes integral

In calculus, we are able to perform more exciting integrals than simply $\int_0^1 h(x)\,dx$. In particular, if $h, a : [0, 1] \to \mathbb{R}$ are $C^1$ functions, we can perform integrals of the form
$$\int_0^1 h(x)\,da(x).$$
For them, it is easy to make sense of what this means — it's simply
$$\int_0^1 h(x)\,da = \int_0^1 h(x) a'(x)\,dx.$$
In our world, we wouldn't expect our functions to be differentiable, so this is not a great definition. One reasonable strategy to make sense of this is to come up with a measure that should equal "$da$".

An immediate difficulty we encounter is that $a'(x)$ need not be positive all the time. So for example, $\int_0^1 1\,da$ could be a negative number, which one wouldn't expect for a usual measure! Thus, we are naturally led to think about signed measures.

From now on, we always use the Borel $\sigma$-algebra on $[0, T]$ unless otherwise specified.

Definition (Signed measure). A signed measure on $[0, T]$ is a difference $\mu = \mu_+ - \mu_-$ of two positive measures on $[0, T]$ of disjoint support. The decomposition $\mu = \mu_+ - \mu_-$ is called the Hahn decomposition.

In general, given two measures µ1 and µ2 with not necessarily disjoint supports, we may still want to talk about µ1 − µ2.

Theorem. For any two finite measures µ1, µ2, there is a signed measure µ with µ(A) = µ1(A) − µ2(A).

If $\mu_1$ and $\mu_2$ are given by densities $f_1, f_2$, then we can simply decompose $\mu$ as $(f_1 - f_2)^+\,dt - (f_1 - f_2)^-\,dt$, where $(\,\cdot\,)^+$ and $(\,\cdot\,)^-$ denote the positive and negative parts respectively. In general, they need not be given by densities with respect to $dx$, but they are always given by densities with respect to some other measure.

Proof. Let $\nu = \mu_1 + \mu_2$. By Radon–Nikodym, there are positive functions $f_1, f_2$ such that $\mu_i(dt) = f_i(t)\,\nu(dt)$. Then
$$(\mu_1 - \mu_2)(dt) = (f_1 - f_2)^+(t)\,\nu(dt) - (f_1 - f_2)^-(t)\,\nu(dt).$$

Definition (Total variation). The total variation of a signed measure $\mu = \mu_+ - \mu_-$ is $|\mu| = \mu_+ + \mu_-$.

We now want to figure out how we can go from a function to a signed measure. Let's think about how one would attempt to define $\int_0^1 h(x)\,da$ as a Riemann sum. A natural attempt would be to write something like
$$\int_0^t h(s)\,da(s) = \lim_{m \to \infty} \sum_{i=1}^{n_m} h(t_{i-1}^{(m)}) \big(a(t_i^{(m)}) - a(t_{i-1}^{(m)})\big)$$

6 1 The Lebesgue–Stieltjes integral III Stochastic Calculus and Applications

for any sequence of subdivisions $0 = t_0^{(m)} < \cdots < t_{n_m}^{(m)} = t$ of $[0, t]$ with $\max_i |t_i^{(m)} - t_{i-1}^{(m)}| \to 0$.

In particular, since we want the integral of $h = 1$ to be well-behaved, the sum $\sum \big(a(t_i^{(m)}) - a(t_{i-1}^{(m)})\big)$ must be well-behaved. This leads to the notion of

Definition (Total variation). The total variation of a function $a : [0, T] \to \mathbb{R}$ is
$$V_a(t) = |a(0)| + \sup\left\{ \sum_{i=1}^n |a(t_i) - a(t_{i-1})| : 0 = t_0 < t_1 < \cdots < t_n = t \right\}.$$
We say $a$ has bounded variation if $V_a(T) < \infty$. In this case, we write $a \in BV$.

We include the $|a(0)|$ term because we want to pretend $a$ is defined on all of $\mathbb{R}$ with $a(t) = 0$ for $t < 0$. We also define
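On a fixed grid, refining a partition only increases the sum of absolute increments, so the supremum in the definition is attained by the full grid. Here is a minimal sketch computing $V_a$ this way (the sampled function is the jump example appearing later in this section):

```python
def total_variation(a_values):
    # V_a = |a(0)| + sup over partitions of sum |a(t_i) - a(t_{i-1})|.
    # On a fixed grid, using every grid point attains the supremum,
    # since refining a partition only increases the sum.
    v = abs(a_values[0])
    for prev, cur in zip(a_values, a_values[1:]):
        v += abs(cur - prev)
    return v

# a(t) = 1 for t < 1/2 and 0 for t >= 1/2, sampled on a fine grid of [0, 1].
n = 1000
a = [1.0 if i / n < 0.5 else 0.0 for i in range(n + 1)]
print(total_variation(a))  # 2.0: the |a(0)| = 1 term plus a jump of size 1
```

For functions with unbounded variation, this grid approximation would instead diverge as the grid is refined.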

Definition (Càdlàg). A function $a : [0, T] \to \mathbb{R}$ is càdlàg if it is right-continuous and has left-limits.

The following theorem is then clear:

Theorem. There is a bijection
$$\{\text{signed measures on } [0, T]\} \longleftrightarrow \{\text{càdlàg functions of bounded variation } a : [0, T] \to \mathbb{R}\}$$
that sends a signed measure $\mu$ to $a(t) = \mu([0, t])$. To construct the inverse, given $a$, we define
$$a_\pm = \tfrac{1}{2}(V_a \pm a).$$

Then $a_\pm$ are both positive, and $a = a_+ - a_-$. We can then define $\mu_\pm$ by
$$\mu_\pm([0, t]) = a_\pm(t) - a_\pm(0), \qquad \mu = \mu_+ - \mu_-.$$
Moreover, $V_a(t) = |\mu|([0, t])$.

Example. Let $a : [0, 1] \to \mathbb{R}$ be given by
$$a(t) = \begin{cases} 1 & t < \frac{1}{2} \\ 0 & t \ge \frac{1}{2} \end{cases}.$$
This is càdlàg, and its total variation is $V_a(1) = 2$. The associated signed measure is $\mu = \delta_0 - \delta_{1/2}$, and the total variation measure is
$$|\mu| = \delta_0 + \delta_{1/2}.$$

We are now ready to define the Lebesgue–Stieltjes integral.


Definition (Lebesgue–Stieltjes integral). Let $a : [0, T] \to \mathbb{R}$ be càdlàg of bounded variation and let $\mu$ be the associated signed measure. Then for $h \in L^1([0, T], |\mu|)$, the Lebesgue–Stieltjes integral is defined by
$$\int_s^t h(r)\,da(r) = \int_{(s, t]} h(r)\,\mu(dr),$$
where $0 \le s \le t \le T$, and
$$\int_s^t h(r)\,|da(r)| = \int_{(s, t]} h(r)\,|\mu|(dr).$$
We also write
$$h \cdot a(t) = \int_0^t h(r)\,da(r).$$
To let $T = \infty$, we need the following notation:

Definition (Finite variation). A càdlàg function $a : [0, \infty) \to \mathbb{R}$ is of finite variation if $a|_{[0, T]} \in BV[0, T]$ for all $T > 0$.

Fact. Let $a : [0, T] \to \mathbb{R}$ be càdlàg and BV, and $h \in L^1([0, T], |da|)$. Then
$$\left|\int_0^T h(s)\,da(s)\right| \le \int_0^T |h(s)|\,|da(s)|,$$
and the function $h \cdot a : [0, T] \to \mathbb{R}$ is càdlàg and BV with associated signed measure $h(s)\,da(s)$. Moreover, $|h(s)\,da(s)| = |h(s)|\,|da(s)|$.

We can, unsurprisingly, characterize the Lebesgue–Stieltjes integral by a Riemann sum:

Proposition. Let $a$ be càdlàg and BV on $[0, t]$, and $h$ bounded and left-continuous. Then
$$\int_0^t h(s)\,da(s) = \lim_{m \to \infty} \sum_{i=1}^{n_m} h(t_{i-1}^{(m)}) \big(a(t_i^{(m)}) - a(t_{i-1}^{(m)})\big),$$
$$\int_0^t h(s)\,|da(s)| = \lim_{m \to \infty} \sum_{i=1}^{n_m} h(t_{i-1}^{(m)}) \big|a(t_i^{(m)}) - a(t_{i-1}^{(m)})\big|$$
for any sequence of subdivisions $0 = t_0^{(m)} < \cdots < t_{n_m}^{(m)} = t$ of $[0, t]$ with $\max_i |t_i^{(m)} - t_{i-1}^{(m)}| \to 0$.

Proof. We approximate $h$ by $h_m$ defined by
$$h_m(0) = 0, \qquad h_m(s) = h(t_{i-1}^{(m)}) \quad \text{for } s \in (t_{i-1}^{(m)}, t_i^{(m)}].$$
Then by left-continuity, we have
$$h(s) = \lim_{m \to \infty} h_m(s),$$
and moreover
$$\lim_{m \to \infty} \sum_{i=1}^{n_m} h(t_{i-1}^{(m)}) \big(a(t_i^{(m)}) - a(t_{i-1}^{(m)})\big) = \lim_{m \to \infty} \int_{(0, t]} h_m(s)\,\mu(ds) = \int_{(0, t]} h(s)\,\mu(ds)$$
by the dominated convergence theorem. The statement about $|da(s)|$ is left as an exercise.
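The left-point Riemann sum in the proposition is straightforward to try numerically. A sketch (the specific $h$ and $a$ are illustrative choices, with $a$ the jump example from earlier, whose associated measure on $(0, 1]$ is $-\delta_{1/2}$):

```python
import math

def stieltjes_sum(h, a, grid):
    # Left-point Riemann-Stieltjes sum: sum h(t_{i-1}) (a(t_i) - a(t_{i-1})).
    # By the proposition, this converges to the Lebesgue-Stieltjes integral
    # for h bounded left-continuous and a cadlag of bounded variation.
    return sum(h(s) * (a(u) - a(s)) for s, u in zip(grid, grid[1:]))

# a(t) = 1 for t < 1/2, 0 for t >= 1/2; on (0, 1] the measure da is -delta_{1/2},
# so the integral of h against da over (0, 1] should be -h(1/2).
a = lambda t: 1.0 if t < 0.5 else 0.0
h = math.cos

grid = [i / 10_000 for i in range(10_001)]
approx = stieltjes_sum(h, a, grid)
print(approx, -math.cos(0.5))  # the two agree up to the mesh size
```

Only the subinterval containing the jump contributes, and the left-point evaluation of $h$ converges to $h(1/2)$ as the mesh shrinks, which is exactly why left-continuity of $h$ is needed.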


2 Semi-martingales

The title of the chapter is "semi-martingales", but we will not even meet the definition of a semi-martingale till the end of the chapter. The reason is that a semi-martingale is essentially defined to be the sum of a (local) martingale and a finite variation process, and understanding semi-martingales mostly involves understanding the two parts separately. Thus, for most of the chapter, we will be studying local martingales (finite variation processes are rather more boring), and at the end we will put them together to say a word or two about semi-martingales.

From now on, $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \ge 0}, \mathbb{P})$ will be a filtered probability space. Recall the following definition:

Definition (Càdlàg adapted process). A càdlàg adapted process is a map $X : \Omega \times [0, \infty) \to \mathbb{R}$ such that

(i) $X$ is càdlàg, i.e. $X(\omega, \cdot) : [0, \infty) \to \mathbb{R}$ is càdlàg for all $\omega \in \Omega$;

(ii) $X$ is adapted, i.e. $X_t = X(\cdot, t)$ is $\mathcal{F}_t$-measurable for every $t \ge 0$.

Notation. We will write $X \in \mathcal{G}$ to denote that a random variable $X$ is measurable with respect to a $\sigma$-algebra $\mathcal{G}$.

2.1 Finite variation processes

The definition of a finite variation function extends immediately to a finite variation process.

Definition (Finite variation process). A finite variation process is a càdlàg adapted process $A$ such that $A(\omega, \cdot) : [0, \infty) \to \mathbb{R}$ has finite variation for all $\omega \in \Omega$. The total variation process $V$ of a finite variation process $A$ is
$$V_t = \int_0^t |dA_s|.$$

Proposition. The total variation process $V$ of a finite variation process $A$ is also càdlàg, finite variation and adapted, and it is also increasing.

Proof. We only have to check that it is adapted. But that follows directly from our previous expression of the integral as the limit of a sum. Indeed, let $0 = t_0^{(m)} < t_1^{(m)} < \cdots < t_{n_m}^{(m)} = t$ be a (nested) sequence of subdivisions of $[0, t]$ with $\max_i |t_i^{(m)} - t_{i-1}^{(m)}| \to 0$. We have seen
$$V_t = \lim_{m \to \infty} \sum_{i=1}^{n_m} \big|A_{t_i^{(m)}} - A_{t_{i-1}^{(m)}}\big| + |A_0| \in \mathcal{F}_t.$$

Definition ($(H \cdot A)_t$). Let $A$ be a finite variation process and $H$ a process such that for all $\omega \in \Omega$ and $t \ge 0$,
$$\int_0^t |H_s(\omega)|\,|dA_s(\omega)| < \infty.$$
Then define a process $((H \cdot A)_t)_{t \ge 0}$ by
$$(H \cdot A)_t = \int_0^t H_s\,dA_s.$$


For the process H · A to be adapted, we need a condition.

Definition (Previsible process). A process $H : \Omega \times [0, \infty) \to \mathbb{R}$ is previsible if it is measurable with respect to the previsible $\sigma$-algebra $\mathcal{P}$ generated by the sets $E \times (s, t]$, where $E \in \mathcal{F}_s$ and $s < t$. We call the generating set $\Pi$.

Very roughly, the idea is that a previsible event is one where whenever it happens, you know it a finite (though possibly arbitrarily small) time before.

Definition (Simple process). A process $H : \Omega \times [0, \infty) \to \mathbb{R}$ is simple, written $H \in \mathcal{E}$, if
$$H(\omega, t) = \sum_{i=1}^n H_{i-1}(\omega) 1_{(t_{i-1}, t_i]}(t)$$
for random variables $H_{i-1} \in \mathcal{F}_{t_{i-1}}$ and $0 = t_0 < \cdots < t_n$.

Fact. Simple processes and their limits are previsible.

Fact. Let $X$ be a càdlàg adapted process. Then $H_t = X_{t-}$ defines a left-continuous process and is previsible. In particular, continuous processes are previsible.

Proof. Since $X$ is càdlàg adapted, it is clear that $H$ is left-continuous and adapted. Since $H$ is left-continuous, it is approximated by simple processes. Indeed, let
$$H_t^n = \sum_{i=1}^{n 2^n} \big(H_{(i-1)2^{-n}} \wedge n\big) 1_{((i-1)2^{-n},\, i 2^{-n}]}(t) \in \mathcal{E}.$$
Then $H_t^n \to H_t$ for all $t$ by left-continuity, and previsibility follows.

Exercise. Let $H$ be previsible. Then

Ht ∈ Ft− = σ(Fs : s < t).

Example. Brownian motion is previsible (since it is continuous).

Example. A Poisson process $(N_t)$ is not previsible, since $N_t \notin \mathcal{F}_{t-}$.

Proposition. Let $A$ be a finite variation process, and $H$ previsible such that

$$\int_0^t |H(\omega, s)|\,|dA(\omega, s)| < \infty \quad \text{for all } (\omega, t) \in \Omega \times [0, \infty).$$
Then $H \cdot A$ is a finite variation process.

Proof. The finite variation and càdlàg parts follow directly from the deterministic versions. We only have to check that $H \cdot A$ is adapted, i.e. $(H \cdot A)(\cdot, t) \in \mathcal{F}_t$ for all $t \ge 0$.

First, $H \cdot A$ is adapted if $H(\omega, s) = 1_{(u, v]}(s) 1_E(\omega)$ for some $u < v$ and $E \in \mathcal{F}_u$, since
$$(H \cdot A)(\omega, t) = 1_E(\omega)\big(A(\omega, t \wedge v) - A(\omega, t \wedge u)\big) \in \mathcal{F}_t.$$

Thus, $H \cdot A$ is adapted for $H = 1_F$ when $F \in \Pi$. Clearly, $\Pi$ is a $\pi$-system, i.e. it is closed under intersections and non-empty, and by definition it generates the


previsible $\sigma$-algebra $\mathcal{P}$. So to extend the adaptedness of $H \cdot A$ to all previsible $H$, we use the monotone class theorem. We let
$$\mathcal{V} = \{H : \Omega \times [0, \infty) \to \mathbb{R} : H \cdot A \text{ is adapted}\}.$$
Then

(i) $1 \in \mathcal{V}$;

(ii) $1_F \in \mathcal{V}$ for all $F \in \Pi$;

(iii) $\mathcal{V}$ is closed under monotone limits.

So $\mathcal{V}$ contains all bounded $\mathcal{P}$-measurable functions.

So the conclusion is that if $A$ is a finite variation process, then as long as reasonable finiteness conditions are satisfied, we can integrate functions against $dA$. Moreover, this integral was easy to define, and it obeys all expected properties such as dominated convergence, since ultimately, it is just an integral in the usual measure-theoretic sense. This crucially depends on the fact that $A$ is a finite variation process. However, in our motivating example, we wanted to take $A$ to be Brownian motion, which is not of finite variation. The work we will do in this chapter and the next is to come up with a stochastic integral where we let $A$ be a martingale instead. The heuristic idea is that while martingales can vary wildly, the martingale property implies there will be some large cancellation between the up and down movements, which leads to the possibility of a well-defined stochastic integral.

2.2 Local martingale

From now on, we assume that $(\Omega, \mathcal{F}, (\mathcal{F}_t)_t, \mathbb{P})$ satisfies the usual conditions, namely that

(i) $\mathcal{F}_0$ contains all $\mathbb{P}$-null sets;

(ii) $(\mathcal{F}_t)_t$ is right-continuous, i.e. $\mathcal{F}_t = \mathcal{F}_{t+} = \bigcap_{s > t} \mathcal{F}_s$ for all $t \ge 0$.

We recall some of the properties of continuous martingales.

Theorem (Optional stopping theorem). Let $X$ be a càdlàg adapted integrable process. Then the following are equivalent:

(i) $X$ is a martingale, i.e. $X_t \in L^1$ for every $t$, and
$$\mathbb{E}(X_t \mid \mathcal{F}_s) = X_s \quad \text{for all } t > s.$$

(ii) The stopped process $X^T = (X_t^T) = (X_{T \wedge t})$ is a martingale for all stopping times $T$.

(iii) For all stopping times $T, S$ with $T$ bounded, $X_T \in L^1$ and $\mathbb{E}(X_T \mid \mathcal{F}_S) = X_{T \wedge S}$ almost surely.

(iv) For all bounded stopping times $T$, $X_T \in L^1$ and $\mathbb{E}(X_T) = \mathbb{E}(X_0)$.


For $X$ uniformly integrable, (iii) and (iv) hold for all stopping times.

In practice, most of our results will be first proven for bounded martingales, or perhaps square integrable ones. The point is that the square-integrable martingales form a Hilbert space, and Hilbert space techniques can help us say something useful about these martingales. To get something about a general martingale $M$, we can apply a cutoff $T_n = \inf\{t > 0 : M_t \ge n\}$, and then $M^{T_n}$ will be a martingale for all $n$. We can then take the limit $n \to \infty$ to recover something about the martingale itself. But if we are doing this, we might as well weaken the martingale condition a bit — we only need the $M^{T_n}$ to be martingales.

Of course, we aren't doing this just for fun. In general, martingales will not always be closed under the operations we are interested in, but local (or maybe semi-) martingales will be. In general, we define

Definition (Local martingale). A càdlàg adapted process $X$ is a local martingale if there exists a sequence of stopping times $T_n$ such that $T_n \to \infty$ almost surely, and $X^{T_n}$ is a martingale for every $n$. We say the sequence $(T_n)$ reduces $X$.

Example.

(i) Every martingale is a local martingale, since by the optional stopping theorem, we can take $T_n = n$.

(ii) Let $(B_t)$ be a standard 3d Brownian motion on $\mathbb{R}^3$. Then
$$(X_t)_{t \ge 1} = \left(\frac{1}{|B_t|}\right)_{t \ge 1}$$
is a local martingale but not a martingale. To see this, first note that
$$\sup_{t \ge 1} \mathbb{E} X_t^2 < \infty, \qquad \mathbb{E} X_t \to 0.$$
Since $\mathbb{E} X_t \to 0$ and $X_t \ge 0$, we know $X$ cannot be a martingale. However, we can check that it is a local martingale. Recall that for any $f \in C_b^2$,
$$M_t^f = f(B_t) - f(B_1) - \frac{1}{2} \int_1^t \Delta f(B_s)\,ds$$
is a martingale. Moreover, $\Delta \frac{1}{|x|} = 0$ for all $x \neq 0$. Thus, if $\frac{1}{|x|}$ didn't have a singularity at $0$, this would have told us $X_t$ is a martingale. Thus, we are safe if we try to bound $|B_s|$ away from zero. Let
$$T_n = \inf\left\{t \ge 1 : |B_t| < \frac{1}{n}\right\},$$
and pick $f_n \in C_b^2$ such that $f_n(x) = \frac{1}{|x|}$ for $|x| \ge \frac{1}{n}$. Then $X_t^{T_n} - X_1 = M_{t \wedge T_n}^{f_n}$. So $X^{T_n}$ is a martingale.

It remains to show that $T_n \to \infty$, and this follows from the fact that $\mathbb{E} X_t \to 0$.
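The key fact used in the example, that $\frac{1}{|x|}$ is harmonic away from the origin in $\mathbb{R}^3$, can be sanity-checked by finite differences. A small sketch (the evaluation point and step size are arbitrary choices):

```python
def f(x, y, z):
    # f(x) = 1/|x| on R^3 minus the origin.
    return 1.0 / (x * x + y * y + z * z) ** 0.5

def laplacian(g, p, h=1e-3):
    # Central second differences in each coordinate, summed.
    total = 0.0
    for i in range(3):
        plus = list(p); plus[i] += h
        minus = list(p); minus[i] -= h
        total += (g(*plus) + g(*minus) - 2.0 * g(*p)) / h**2
    return total

# Away from the singularity the Laplacian vanishes, which is what makes
# 1/|B_t| a local martingale as long as B stays away from 0.
print(laplacian(f, (1.0, 2.0, 2.0)))  # ≈ 0
```

The numerical Laplacian is zero up to discretization error; near the origin the singularity destroys this, which is exactly why the stopping times $T_n$ are needed.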


Proposition. Let X be a local martingale and Xt ≥ 0 for all t. Then X is a supermartingale.

Proof. Let $(T_n)$ be a reducing sequence for $X$. Then, by the conditional version of Fatou's lemma,
$$\mathbb{E}(X_t \mid \mathcal{F}_s) = \mathbb{E}\left(\liminf_{n \to \infty} X_{t \wedge T_n} \,\Big|\, \mathcal{F}_s\right) \le \liminf_{n \to \infty} \mathbb{E}(X_{t \wedge T_n} \mid \mathcal{F}_s) = \liminf_{n \to \infty} X_{s \wedge T_n} = X_s.$$

Recall the following result from Advanced Probability:

Proposition. Let $X \in L^1(\Omega, \mathcal{F}, \mathbb{P})$. Then the set
$$\chi = \{\mathbb{E}(X \mid \mathcal{G}) : \mathcal{G} \subseteq \mathcal{F} \text{ a sub-}\sigma\text{-algebra}\}$$
is uniformly integrable, i.e.
$$\sup_{Y \in \chi} \mathbb{E}\big(|Y| 1_{|Y| > \lambda}\big) \to 0 \quad \text{as } \lambda \to \infty.$$

Recall also the following important result about uniformly integrable random variables:

Theorem (Vitali theorem). $X_n \to X$ in $L^1$ if and only if $(X_n)$ is uniformly integrable and $X_n \to X$ in probability.

With these, we can state the following characterization of martingales in terms of local martingales:

Proposition. The following are equivalent:

(i) $X$ is a martingale.

(ii) $X$ is a local martingale, and for all $t \ge 0$, the set
$$\chi_t = \{X_T : T \text{ is a stopping time with } T \le t\}$$

is uniformly integrable.

Proof.

– (i) ⇒ (ii): Let $X$ be a martingale. Then by the optional stopping theorem, $X_T = \mathbb{E}(X_t \mid \mathcal{F}_T)$ for any stopping time $T \le t$. So $\chi_t$ is uniformly integrable by the previous proposition.

– (ii) ⇒ (i): Let $X$ be a local martingale with reducing sequence $(T_n)$, and assume that the sets $\chi_t$ are uniformly integrable for all $t \ge 0$. By the optional stopping theorem, it suffices to show that $\mathbb{E}(X_T) = \mathbb{E}(X_0)$ for any bounded stopping time $T$. So let $T$ be a bounded stopping time, say $T \le t$. Then
$$\mathbb{E}(X_0) = \mathbb{E}(X_0^{T_n}) = \mathbb{E}(X_T^{T_n}) = \mathbb{E}(X_{T \wedge T_n})$$


for all $n$. Now $T \wedge T_n$ is a stopping time $\le t$, so $\{X_{T \wedge T_n} : n \in \mathbb{N}\}$ is uniformly integrable by assumption. Moreover, $T_n \wedge T \to T$ almost surely as $n \to \infty$, hence $X_{T \wedge T_n} \to X_T$ in probability. Hence by Vitali, the convergence is in $L^1$. So $\mathbb{E}(X_T) = \mathbb{E}(X_0)$.

Corollary. Let $X$ be a local martingale. If there is $Z \in L^1$ such that $|X_t| \le Z$ for all $t$, then $X$ is a martingale. In particular, every bounded local martingale is a martingale.

The definition of a local martingale does not give us control over what the reducing sequence $(T_n)$ is. In particular, it is not necessarily true that $X^{T_n}$ will be bounded, which is a helpful property to have. Fortunately, we have the following proposition:

Proposition. Let $X$ be a continuous local martingale with $X_0 = 0$. Define
$$S_n = \inf\{t \ge 0 : |X_t| = n\}.$$
Then $S_n$ is a stopping time, $S_n \to \infty$, and $X^{S_n}$ is a bounded martingale. In particular, $(S_n)$ reduces $X$.

Proof. It is clear that $S_n$ is a stopping time, since (if it is not clear)
$$\{S_n \le t\} = \bigcap_{k \in \mathbb{N}} \left\{\sup_{s \le t} |X_s| > n - \frac{1}{k}\right\} = \bigcap_{k \in \mathbb{N}} \bigcup_{s \le t} \left\{|X_s| > n - \frac{1}{k}\right\} \in \mathcal{F}_t.$$
It is also clear that $S_n \to \infty$, since
$$\sup_{s \le t} |X_s| \le n \iff S_n \ge t,$$
and by continuity and compactness, $\sup_{s \le t} |X_s|$ is finite for every $(\omega, t)$.

Finally, we show that $X^{S_n}$ is a martingale. By the optional stopping theorem, $X^{T_n \wedge S_n}$ is a martingale, so $X^{S_n}$ is a local martingale. But it is also bounded by $n$. So it is a martingale.

An important and useful theorem is the following:

Theorem. Let $X$ be a continuous local martingale with $X_0 = 0$. If $X$ is also a finite variation process, then $X_t = 0$ for all $t$.

This rules out interpreting $\int H_s\,dX_s$ as a Lebesgue–Stieltjes integral for $X$ a non-zero continuous local martingale. In particular, we cannot take $X$ to be Brownian motion. Instead, we have to develop a new theory of integration for continuous local martingales, namely the Itô integral.

On the other hand, this theorem is very useful. We will later want to define the stochastic integral with respect to the sum of a continuous local martingale and a finite variation process, which is the appropriate generality for our theorems to make good sense. This theorem tells us there is a unique way to decompose a process as a sum of a finite variation process and a continuous local martingale (if it can be done). So we can simply define this stochastic integral by using the Lebesgue–Stieltjes integral on the finite variation part and the Itô integral on the continuous local martingale part.


Proof. Let $X$ be a finite-variation continuous local martingale with $X_0 = 0$. Since $X$ is finite variation, we can define the total variation process $(V_t)$ corresponding to $X$, and let
$$S_n = \inf\{t \ge 0 : V_t \ge n\} = \inf\left\{t \ge 0 : \int_0^t |dX_s| \ge n\right\}.$$
Then $S_n$ is a stopping time, and $S_n \to \infty$ since $X$ is assumed to be finite variation. Moreover, by optional stopping, $X^{S_n}$ is a local martingale, and is also bounded, since
$$|X_t^{S_n}| \le \int_0^{t \wedge S_n} |dX_s| \le n.$$
So $X^{S_n}$ is in fact a martingale.

We claim its $L^2$-norm vanishes. Let $0 = t_0 < t_1 < \cdots < t_k = t$ be a subdivision of $[0, t]$. Using the fact that $X^{S_n}$ is a martingale and has orthogonal increments, we can write
$$\mathbb{E}\big((X_t^{S_n})^2\big) = \sum_{i=1}^k \mathbb{E}\big((X_{t_i}^{S_n} - X_{t_{i-1}}^{S_n})^2\big).$$

Observe that XSn is finite variation, but the right-hand side is summing the square of the variation, which ought to vanish when we take the limit max |ti − ti−1| → 0. Indeed, we can compute

$$\begin{aligned}
\mathbb{E}\big((X_t^{S_n})^2\big) &= \sum_{i=1}^k \mathbb{E}\big((X_{t_i}^{S_n} - X_{t_{i-1}}^{S_n})^2\big) \\
&\le \mathbb{E}\left(\max_{1 \le i \le k} |X_{t_i}^{S_n} - X_{t_{i-1}}^{S_n}| \sum_{i=1}^k |X_{t_i}^{S_n} - X_{t_{i-1}}^{S_n}|\right) \\
&\le \mathbb{E}\left(\max_{1 \le i \le k} |X_{t_i}^{S_n} - X_{t_{i-1}}^{S_n}| \, V_{t \wedge S_n}\right) \\
&\le \mathbb{E}\left(\max_{1 \le i \le k} |X_{t_i}^{S_n} - X_{t_{i-1}}^{S_n}| \cdot n\right).
\end{aligned}$$

Of course, the first factor is also bounded by the total variation. Moreover, we can make further subdivisions so that the mesh size tends to zero, and then the first factor vanishes in the limit by continuity. So by dominated convergence, we must have $\mathbb{E}\big((X_t^{S_n})^2\big) = 0$. So $X_t^{S_n} = 0$ almost surely for all $n$. So $X_t = 0$ for all $t$ almost surely.
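The theorem is consistent with what a simulation of Brownian motion shows: along refining dyadic meshes, the sum of absolute increments blows up (so Brownian motion is not of finite variation), while the sum of squared increments stabilizes. A quick sketch (assuming NumPy; the mesh levels are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
level = 18
dB = rng.normal(0.0, np.sqrt(1.0 / 2**level), 2**level)
B = np.concatenate([[0.0], np.cumsum(dB)])   # one BM path on [0, 1], mesh 2^-18

tv = {}
for k in (10, 14, 18):
    inc = np.diff(B[:: 2 ** (level - k)])    # increments over the coarser mesh 2^-k
    tv[k] = np.abs(inc).sum()                # total variation estimate: keeps growing
    print(k, tv[k], (inc ** 2).sum())        # squared sum stays near 1
```

The absolute sums grow roughly like $2^{k/2}$ with the mesh level $k$, while the squared sums hover near $1$, foreshadowing quadratic variation in the next sections.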

2.3 Square integrable martingales

As previously discussed, we will want to use Hilbert space machinery to construct the Itô integral. The rough idea is to define the Itô integral with respect to a fixed martingale on simple processes via a (finite) Riemann sum, and then by calculating appropriate bounds on how this affects the norm, we can extend this to all processes by continuity, and this requires our space to be Hilbert. The interesting spaces are defined as follows:


Definition ($\mathcal{M}^2$). Let
$$\mathcal{M}^2 = \left\{X : \Omega \times [0, \infty) \to \mathbb{R} : X \text{ is a càdlàg martingale with } \sup_{t \ge 0} \mathbb{E}(X_t^2) < \infty\right\},$$
$$\mathcal{M}_c^2 = \left\{X \in \mathcal{M}^2 : X(\omega, \cdot) \text{ is continuous for every } \omega \in \Omega\right\}.$$
We define an inner product on $\mathcal{M}^2$ by
$$(X, Y)_{\mathcal{M}^2} = \mathbb{E}(X_\infty Y_\infty),$$
which in particular induces a norm
$$\|X\|_{\mathcal{M}^2} = \mathbb{E}(X_\infty^2)^{1/2}.$$
We will prove this is indeed an inner product soon. Here recall that for $X \in \mathcal{M}^2$, the martingale convergence theorem implies $X_t \to X_\infty$ almost surely and in $L^2$.

Our goal will be to prove that these spaces are indeed Hilbert spaces. First observe that if $X \in \mathcal{M}^2$, then $(X_t^2)_{t \ge 0}$ is a submartingale by Jensen, so $t \mapsto \mathbb{E} X_t^2$ is increasing, and
$$\mathbb{E} X_\infty^2 = \sup_{t \ge 0} \mathbb{E} X_t^2.$$
All the magic that lets us prove they are Hilbert spaces is Doob's inequality.

Theorem (Doob's inequality). Let $X \in \mathcal{M}^2$. Then
$$\mathbb{E}\left(\sup_{t \ge 0} X_t^2\right) \le 4 \mathbb{E}(X_\infty^2).$$

So once we control the limit $X_\infty$, we control the whole path. This is why the definition of the norm makes sense, and in particular we know $\|X\|_{\mathcal{M}^2} = 0$ implies that $X = 0$.
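Doob's $L^2$ inequality can be illustrated empirically with a simple random walk martingale. A sketch (assuming NumPy; the walk and horizon are arbitrary stand-ins for $X$, with the finite horizon playing the role of $\infty$):

```python
import numpy as np

rng = np.random.default_rng(2)
n_paths, n_steps = 50_000, 200

# X_k = sum of k independent +-1 steps: a martingale with E[X_N^2] = N.
steps = rng.choice([-1.0, 1.0], size=(n_paths, n_steps))
X = np.cumsum(steps, axis=1)

lhs = (X ** 2).max(axis=1).mean()      # E[ sup_k X_k^2 ]
rhs = 4.0 * (X[:, -1] ** 2).mean()     # 4 E[ X_N^2 ], approximately 4 * n_steps
print(lhs, rhs)  # lhs <= rhs, as Doob's inequality predicts
```

Empirically the supremum term is well below the Doob bound; the constant $4$ is sharp only for extremal martingales.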

Theorem. $\mathcal{M}^2$ is a Hilbert space and $\mathcal{M}_c^2$ is a closed subspace.

Proof. We need to check that $\mathcal{M}^2$ is complete. Thus let $(X^n) \subseteq \mathcal{M}^2$ be a Cauchy sequence, i.e.
$$\mathbb{E}\big((X_\infty^n - X_\infty^m)^2\big) \to 0 \quad \text{as } n, m \to \infty.$$
By passing to a subsequence, we may assume that
$$\mathbb{E}\big((X_\infty^n - X_\infty^{n-1})^2\big) \le 2^{-n}.$$
First note that
$$\begin{aligned}
\mathbb{E}\left(\sum_n \sup_{t \ge 0} |X_t^n - X_t^{n-1}|\right) &\le \sum_n \mathbb{E}\left(\sup_{t \ge 0} |X_t^n - X_t^{n-1}|^2\right)^{1/2} &&\text{(Cauchy–Schwarz)} \\
&\le \sum_n 2\, \mathbb{E}\big(|X_\infty^n - X_\infty^{n-1}|^2\big)^{1/2} &&\text{(Doob's)} \\
&\le 2 \sum_n 2^{-n/2} < \infty.
\end{aligned}$$


So
$$\sum_{n=1}^\infty \sup_{t \ge 0} |X_t^n - X_t^{n-1}| < \infty \quad \text{a.s.} \tag{∗}$$
So on this event, $(X^n)$ is a Cauchy sequence in the space $(D[0, \infty), \|\cdot\|_\infty)$ of càdlàg functions. So there is some $X(\omega, \cdot) \in D[0, \infty)$ such that
$$\|X^n(\omega, \cdot) - X(\omega, \cdot)\|_\infty \to 0 \quad \text{for almost all } \omega,$$
and we set $X = 0$ outside this almost sure event $(∗)$. We now claim that
$$\mathbb{E}\left(\sup_{t \ge 0} |X_t^n - X_t|^2\right) \to 0 \quad \text{as } n \to \infty.$$
We can just compute
$$\begin{aligned}
\mathbb{E}\left(\sup_t |X_t^n - X_t|^2\right) &= \mathbb{E}\left(\lim_{m \to \infty} \sup_t |X_t^n - X_t^m|^2\right) \\
&\le \liminf_{m \to \infty} \mathbb{E}\left(\sup_t |X_t^n - X_t^m|^2\right) &&\text{(Fatou)} \\
&\le \liminf_{m \to \infty} 4\, \mathbb{E}\big((X_\infty^n - X_\infty^m)^2\big), &&\text{(Doob's)}
\end{aligned}$$
and this goes to $0$ in the limit $n \to \infty$ as well.

We finally have to check that $X$ is indeed a martingale. We use the triangle inequality to write
$$\begin{aligned}
\|\mathbb{E}(X_t \mid \mathcal{F}_s) - X_s\|_{L^2} &\le \|\mathbb{E}(X_t - X_t^n \mid \mathcal{F}_s)\|_{L^2} + \|X_s^n - X_s\|_{L^2} \\
&\le \mathbb{E}\big(\mathbb{E}((X_t - X_t^n)^2 \mid \mathcal{F}_s)\big)^{1/2} + \|X_s^n - X_s\|_{L^2} \\
&= \|X_t - X_t^n\|_{L^2} + \|X_s^n - X_s\|_{L^2} \\
&\le 2\, \mathbb{E}\left(\sup_t |X_t - X_t^n|^2\right)^{1/2} \to 0
\end{aligned}$$
as $n \to \infty$. But the left-hand side does not depend on $n$. So it must vanish. So $X \in \mathcal{M}^2$.

We could have done exactly the same with continuous martingales, so the second part follows.

2.4 Quadratic variation

Physicists are used to dropping all terms above first order. It turns out that Brownian motion, and continuous local martingales in general, oscillate so wildly that second order terms become important. We first make the following definition:

Definition (Uniformly on compact sets in probability). For a sequence of processes $(X^n)$ and a process $X$, we say that $X^n \to X$ u.c.p. iff
$$\mathbb{P}\left(\sup_{s \in [0, t]} |X_s^n - X_s| > \varepsilon\right) \to 0 \quad \text{as } n \to \infty \text{ for all } t > 0,\ \varepsilon > 0.$$


Theorem. Let $M$ be a continuous local martingale with $M_0 = 0$. Then there exists a unique (up to indistinguishability) continuous adapted increasing process $(\langle M \rangle_t)_{t \ge 0}$ such that $\langle M \rangle_0 = 0$ and $M_t^2 - \langle M \rangle_t$ is a continuous local martingale. Moreover,
$$\langle M \rangle_t = \lim_{n \to \infty} \langle M \rangle_t^{(n)}, \qquad \langle M \rangle_t^{(n)} = \sum_{i=1}^{\lceil 2^n t \rceil} (M_{i 2^{-n}} - M_{(i-1) 2^{-n}})^2,$$
where the limit is in the u.c.p. sense.

Definition (Quadratic variation). $\langle M \rangle$ is called the quadratic variation of $M$.

It is probably more useful to understand $\langle M \rangle_t$ in terms of the explicit formula, and the fact that $M_t^2 - \langle M \rangle_t$ is a continuous local martingale is a convenient property.
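The dyadic formula is easy to see numerically for Brownian motion, where the quadratic variation is $\langle B \rangle_t = t$. A sketch (assuming NumPy; the dyadic level $n = 16$ is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(3)
t, n = 1.0, 16                              # dyadic level n, mesh 2^-n
n_steps = 2 ** n

# One Brownian path sampled on the dyadic grid of [0, t]: only the
# increments are needed for the quadratic variation sum.
dB = rng.normal(0.0, np.sqrt(t / n_steps), n_steps)

# <B>_t^(n) = sum of squared increments over the grid; should be close to t.
qv = (dB ** 2).sum()
print(qv)  # ≈ 1.0
```

The sum of $2^n$ squared increments, each with mean $t 2^{-n}$, concentrates tightly around $t$ as $n$ grows.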

Example. Let $B$ be a standard Brownian motion. Then $B_t^2 - t$ is a martingale. Thus, $\langle B \rangle_t = t$.

The proof is long and mechanical, but not hard. All the magic happened when we used the magical Doob's inequality to show that $\mathcal{M}_c^2$ and $\mathcal{M}^2$ are Hilbert spaces.

Proof. To show uniqueness, we use that finite variation and local martingale are incompatible. Suppose $(A_t)$ and $(\tilde{A}_t)$ obey the conditions for $\langle M \rangle$. Then $A_t - \tilde{A}_t = (M_t^2 - \tilde{A}_t) - (M_t^2 - A_t)$ is a continuous adapted local martingale starting at $0$. Moreover, both $A_t$ and $\tilde{A}_t$ are increasing, hence have finite variation. So $A - \tilde{A} = 0$ almost surely.

To show existence, we need to show that the limit exists and has the right property. We do this in steps.

Claim. The result holds if $M$ is in fact bounded.

Suppose $|M(\omega, t)| \le C$ for all $(\omega, t)$. Then $M \in \mathcal{M}_c^2$. Fix $T > 0$ deterministic. Let
$$X_t^n = \sum_{i=1}^{\lceil 2^n T \rceil} M_{(i-1) 2^{-n}} (M_{i 2^{-n} \wedge t} - M_{(i-1) 2^{-n} \wedge t}).$$
This is defined so that
$$\langle M \rangle_{k 2^{-n}}^{(n)} = M_{k 2^{-n}}^2 - 2 X_{k 2^{-n}}^n.$$
This reduces the study of $\langle M \rangle^{(n)}$ to that of $X_{k 2^{-n}}^n$.

We check that $(X_t^n)$ is a Cauchy sequence in $\mathcal{M}_c^2$. The fact that it is a martingale is an immediate computation. To show it is Cauchy, for $n \ge m$, we calculate

d2nT e n m X X∞ − X∞ = (M(i−1)2−n − Mb(i−1)2m−nc2−m )(Mi2−n − M(i−1)2−n ). i=1


We now take the expectation of the square to get
$$\mathbb{E}(X^n_\infty - X^m_\infty)^2 = \mathbb{E}\left( \sum_{i=1}^{\lceil 2^n T \rceil} (M_{(i-1) 2^{-n}} - M_{\lfloor (i-1) 2^{m-n} \rfloor 2^{-m}})^2 (M_{i 2^{-n}} - M_{(i-1) 2^{-n}})^2 \right)$$
$$\leq \mathbb{E}\left( \sup_{|s-t| \leq 2^{-m}} |M_t - M_s|^2 \sum_{i=1}^{\lceil 2^n T \rceil} (M_{i 2^{-n}} - M_{(i-1) 2^{-n}})^2 \right) = \mathbb{E}\left( \sup_{|s-t| \leq 2^{-m}} |M_t - M_s|^2 \, \langle M \rangle^{(n)}_T \right)$$
$$\leq \left( \mathbb{E} \sup_{|s-t| \leq 2^{-m}} |M_t - M_s|^4 \right)^{1/2} \left( \mathbb{E} \big( \langle M \rangle^{(n)}_T \big)^2 \right)^{1/2} \quad \text{(Cauchy–Schwarz)}$$
We shall show that the second factor is bounded, while the first factor tends to zero as $m \to \infty$. Neither is surprising — the first term vanishing in the limit corresponds to $M$ being continuous, and the second term is bounded since $M$ itself is bounded.

To show that the first term tends to zero, we note that
$$|M_t - M_s|^4 \leq 16 C^4,$$
and moreover
$$\sup_{|s-t| \leq 2^{-m}} |M_t - M_s| \to 0 \quad \text{as } m \to \infty$$
by uniform continuity. So we are done by the dominated convergence theorem.

To show the second term is bounded, we compute (writing $N = \lceil 2^n T \rceil$)
$$\mathbb{E}\big( \langle M \rangle^{(n)}_T \big)^2 = \mathbb{E}\left( \sum_{i=1}^N (M_{i 2^{-n}} - M_{(i-1) 2^{-n}})^2 \right)^2$$
$$= \sum_{i=1}^N \mathbb{E}(M_{i 2^{-n}} - M_{(i-1) 2^{-n}})^4 + 2 \sum_{i=1}^N \mathbb{E}\left( (M_{i 2^{-n}} - M_{(i-1) 2^{-n}})^2 \sum_{k=i+1}^N (M_{k 2^{-n}} - M_{(k-1) 2^{-n}})^2 \right).$$
We use the martingale property and orthogonality of increments to rearrange the off-diagonal term as
$$2 \, \mathbb{E}\big( (M_{i 2^{-n}} - M_{(i-1) 2^{-n}})^2 (M_{N 2^{-n}} - M_{i 2^{-n}})^2 \big).$$
Taking some sups, we get
$$\mathbb{E}\big( \langle M \rangle^{(n)}_T \big)^2 \leq 12 C^2 \, \mathbb{E}\left( \sum_{i=1}^N (M_{i 2^{-n}} - M_{(i-1) 2^{-n}})^2 \right) = 12 C^2 \, \mathbb{E}(M_{N 2^{-n}} - M_0)^2 \leq 12 C^2 \cdot 4 C^2.$$


So done.

So we now have $X^n \to X$ in $\mathcal{M}_c^2$ for some $X \in \mathcal{M}_c^2$. In particular, we have
$$\sup_t |X^n_t - X_t| \to 0 \quad \text{in } L^2.$$
So we know that
$$\sup_t |X^n_t - X_t| \to 0$$
almost surely along a subsequence $\Lambda$. Let $N \subseteq \Omega$ be the event on which this convergence fails. We define
$$A^{(T)}_t = \begin{cases} M_t^2 - 2 X_t & \omega \in \Omega \setminus N \\ 0 & \omega \in N \end{cases}.$$
Then $A^{(T)}$ is continuous, adapted since $M^2$ and $X$ are, and $(M^2_{t \wedge T} - A^{(T)}_{t \wedge T})_t$ is a martingale since $X$ is. Finally, $A^{(T)}$ is increasing since $M^2_t - 2X^n_t$ is increasing on $2^{-n}\mathbb{Z} \cap [0, T]$ and the limit is uniform. So this $A^{(T)}$ basically satisfies all the properties we want $\langle M \rangle_t$ to satisfy, except that we still have the cutoff time $T$.

We next observe that for any $T \geq 1$, $A^{(T)}_{t \wedge T} = A^{(T+1)}_{t \wedge T}$ for all $t$ almost surely. This essentially follows from the same uniqueness argument as at the beginning of the proof. Thus, there is a process $(\langle M \rangle_t)_{t \geq 0}$ such that
$$\langle M \rangle_t = A^{(T)}_t$$
for all $t \in [0, T]$ and $T \in \mathbb{N}$, almost surely. Then this is the desired process. So we have constructed $\langle M \rangle$ in the case where $M$ is bounded.

Claim. $\langle M \rangle^{(n)} \to \langle M \rangle$ u.c.p.

Recall that
$$\langle M \rangle^{(n)}_t = M^2_{2^{-n} \lfloor 2^n t \rfloor} - 2 X^n_{2^{-n} \lfloor 2^n t \rfloor}.$$
We also know that
$$\sup_{t \leq T} |X^n_t - X_t| \to 0$$
in $L^2$, hence also in probability. So we have
$$|\langle M \rangle^{(n)}_t - \langle M \rangle_t| \leq \sup_{t \leq T} |M^2_{2^{-n} \lfloor 2^n t \rfloor} - M^2_t| + 2 \sup_{t \leq T} |X^n_{2^{-n} \lfloor 2^n t \rfloor} - X_{2^{-n} \lfloor 2^n t \rfloor}| + 2 \sup_{t \leq T} |X_{2^{-n} \lfloor 2^n t \rfloor} - X_t|.$$
The first and last terms tend to $0$ in probability since $M$ and $X$ are uniformly continuous on $[0, T]$, and the second term converges to zero by the previous assertion. So we are done.

Claim. The theorem holds for $M$ any continuous local martingale.

We let $T_n = \inf\{t \geq 0 : |M_t| \geq n\}$. Then $(T_n)$ reduces $M$ and $M^{T_n}$ is a bounded continuous martingale. We set $A^n = \langle M^{T_n} \rangle$.


Then $(A^n_t)$ and $(A^{n+1}_{t \wedge T_n})$ are indistinguishable for $t < T_n$ by the uniqueness argument. Thus there is a process $\langle M \rangle$ such that $\langle M \rangle_{t \wedge T_n}$ and $A^n_t$ are indistinguishable for all $n$. Clearly, $\langle M \rangle$ is increasing since the $A^n$ are, and $M^2_{t \wedge T_n} - \langle M \rangle_{t \wedge T_n}$ is a martingale for every $n$, so $M^2_t - \langle M \rangle_t$ is a continuous local martingale.

Claim. $\langle M \rangle^{(n)} \to \langle M \rangle$ u.c.p.

We have seen $\langle M^{T_k} \rangle^{(n)} \to \langle M^{T_k} \rangle$ u.c.p. for every $k$. So
$$\mathbb{P}\left( \sup_{t \leq T} |\langle M \rangle^{(n)}_t - \langle M \rangle_t| > \varepsilon \right) \leq \mathbb{P}(T_k < T) + \mathbb{P}\left( \sup_{t \leq T} |\langle M^{T_k} \rangle^{(n)}_t - \langle M^{T_k} \rangle_t| > \varepsilon \right).$$
So we can first pick $k$ large enough that the first term is small, then pick $n$ large enough that the second is small.

There are a few easy consequences of this theorem.

Fact. Let $M$ be a continuous local martingale, and let $T$ be a stopping time. Then almost surely, for all $t \geq 0$,
$$\langle M^T \rangle_t = \langle M \rangle_{t \wedge T}.$$

Proof. Since $M^2_t - \langle M \rangle_t$ is a continuous local martingale, so is $M^2_{t \wedge T} - \langle M \rangle_{t \wedge T} = (M^T)^2_t - \langle M \rangle_{t \wedge T}$. So we are done by uniqueness.

Fact. Let $M$ be a continuous local martingale with $M_0 = 0$. Then $M = 0$ iff $\langle M \rangle = 0$.

Proof. If $M = 0$, then $\langle M \rangle = 0$. Conversely, if $\langle M \rangle = 0$, then $M^2$ is a non-negative continuous local martingale, hence a supermartingale. Thus $\mathbb{E} M_t^2 \leq \mathbb{E} M_0^2 = 0$.

Proposition. Let $M \in \mathcal{M}_c^2$. Then $M^2 - \langle M \rangle$ is a uniformly integrable martingale, and
$$\|M - M_0\|_{\mathcal{M}^2} = (\mathbb{E} \langle M \rangle_\infty)^{1/2}.$$

Proof. We will show that $\langle M \rangle_\infty \in L^1$. This then implies
$$|M_t^2 - \langle M \rangle_t| \leq \sup_{t \geq 0} M_t^2 + \langle M \rangle_\infty.$$
The right-hand side is in $L^1$, and since $M^2 - \langle M \rangle$ is a local martingale, this implies that it is in fact a uniformly integrable martingale.

To show $\langle M \rangle_\infty \in L^1$, we let
$$S_n = \inf\{t \geq 0 : \langle M \rangle_t \geq n\}.$$
Then $S_n \to \infty$, $S_n$ is a stopping time, and moreover $\langle M \rangle_{t \wedge S_n} \leq n$. So we have
$$M^2_{t \wedge S_n} - \langle M \rangle_{t \wedge S_n} \leq n + \sup_{t \geq 0} M_t^2,$$
and the second term is in $L^1$. So $M^2_{t \wedge S_n} - \langle M \rangle_{t \wedge S_n}$ is a true martingale. So
$$\mathbb{E} M^2_{t \wedge S_n} - \mathbb{E} M_0^2 = \mathbb{E} \langle M \rangle_{t \wedge S_n}.$$
Taking the limit $t \to \infty$, we know $\mathbb{E} M^2_{t \wedge S_n} \to \mathbb{E} M^2_{S_n}$ by dominated convergence. Since $\langle M \rangle_{t \wedge S_n}$ is increasing, we also have $\mathbb{E} \langle M \rangle_{t \wedge S_n} \to \mathbb{E} \langle M \rangle_{S_n}$ by monotone convergence. We can then take $n \to \infty$, and by the same justification, we have
$$\mathbb{E} \langle M \rangle_\infty \leq \mathbb{E} M_\infty^2 - \mathbb{E} M_0^2 = \mathbb{E}(M_\infty - M_0)^2 < \infty.$$

2.5 Covariation

We know $\mathcal{M}_c^2$ not only has a norm, but also an inner product. This can also be reflected in the bracket via the polarization identity, and it is natural to define

Definition (Covariation). Let $M, N$ be two continuous local martingales. Define the covariation (or simply the bracket) between $M$ and $N$ to be the process
$$\langle M, N \rangle_t = \frac{1}{4} \big( \langle M + N \rangle_t - \langle M - N \rangle_t \big).$$

Then if in fact $M, N \in \mathcal{M}_c^2$, putting $t = \infty$ gives the inner product.

Proposition.
(i) $\langle M, N \rangle$ is the unique (up to indistinguishability) finite variation process such that $M_t N_t - \langle M, N \rangle_t$ is a continuous local martingale.
(ii) The mapping $(M, N) \mapsto \langle M, N \rangle$ is bilinear and symmetric.
(iii)
$$\langle M, N \rangle_t = \lim_{n \to \infty} \langle M, N \rangle^{(n)}_t \quad \text{u.c.p.}, \qquad \langle M, N \rangle^{(n)}_t = \sum_{i=1}^{\lceil 2^n t \rceil} (M_{i 2^{-n}} - M_{(i-1) 2^{-n}})(N_{i 2^{-n}} - N_{(i-1) 2^{-n}}).$$

(iv) For every stopping time T ,

T T T hM ,N it = hM ,Nit = hM,Nit∧T .

2 (v) If M,N ∈ Mc , then MtNt − hM,Nit is a uniformly integrable martingale, and hM − M0,N − N0iM2 = EhM,Ni∞. Example. Let B,B0 be two independent Brownian motions (with respect to the same filtration). Then hB,B0i = 0.

Proof. Assume B = B0 = 0. Then X = √1 (B ± B0) are Brownian motions, 0 0 ± 2 and so hX±i = t. So their difference vanishes. An important result about the covariation is the following Cauchy–Schwarz like inequality:


Proposition (Kunita–Watanabe). Let $M, N$ be continuous local martingales and let $H, K$ be two (previsible) processes. Then almost surely
$$\int_0^\infty |H_s| |K_s| \, |\mathrm{d}\langle M, N \rangle_s| \leq \left( \int_0^\infty H_s^2 \, \mathrm{d}\langle M \rangle_s \right)^{1/2} \left( \int_0^\infty K_s^2 \, \mathrm{d}\langle N \rangle_s \right)^{1/2}.$$
In fact, this is just Cauchy–Schwarz. All we have to do is to take approximations, take limits, and make sure everything works out well.

Proof. For convenience, we write
$$\langle M, N \rangle_s^t = \langle M, N \rangle_t - \langle M, N \rangle_s.$$

Claim. For all $0 \leq s \leq t$, we have
$$|\langle M, N \rangle_s^t| \leq \sqrt{\langle M, M \rangle_s^t} \sqrt{\langle N, N \rangle_s^t}.$$

By continuity, we can assume that $s, t$ are dyadic rationals. Then
$$|\langle M, N \rangle_s^t| = \lim_{n \to \infty} \left| \sum_{i = 2^n s + 1}^{2^n t} (M_{i 2^{-n}} - M_{(i-1) 2^{-n}})(N_{i 2^{-n}} - N_{(i-1) 2^{-n}}) \right|$$
$$\leq \lim_{n \to \infty} \left( \sum_{i = 2^n s + 1}^{2^n t} (M_{i 2^{-n}} - M_{(i-1) 2^{-n}})^2 \right)^{1/2} \left( \sum_{i = 2^n s + 1}^{2^n t} (N_{i 2^{-n}} - N_{(i-1) 2^{-n}})^2 \right)^{1/2} \quad \text{(Cauchy–Schwarz)}$$
$$= \big( \langle M, M \rangle_s^t \big)^{1/2} \big( \langle N, N \rangle_s^t \big)^{1/2},$$
where all equalities hold u.c.p.

Claim. For all $0 \leq s < t$, we have

$$\int_s^t |\mathrm{d}\langle M, N \rangle_u| \leq \sqrt{\langle M, M \rangle_s^t} \sqrt{\langle N, N \rangle_s^t}.$$

Indeed, for any subdivision $s = t_0 < t_1 < \cdots < t_n = t$, we have
$$\sum_{i=1}^n |\langle M, N \rangle_{t_{i-1}}^{t_i}| \leq \sum_{i=1}^n \sqrt{\langle M, M \rangle_{t_{i-1}}^{t_i}} \sqrt{\langle N, N \rangle_{t_{i-1}}^{t_i}} \leq \left( \sum_{i=1}^n \langle M, M \rangle_{t_{i-1}}^{t_i} \right)^{1/2} \left( \sum_{i=1}^n \langle N, N \rangle_{t_{i-1}}^{t_i} \right)^{1/2} \quad \text{(Cauchy–Schwarz)}$$
Taking the supremum over all subdivisions, the claim follows.

Claim. For all bounded Borel sets $B \subseteq [0, \infty)$, we have
$$\int_B |\mathrm{d}\langle M, N \rangle_u| \leq \sqrt{\int_B \mathrm{d}\langle M \rangle_u} \sqrt{\int_B \mathrm{d}\langle N \rangle_u}.$$


We already know this is true if $B$ is an interval. If $B$ is a finite union of intervals, we apply Cauchy–Schwarz. By a monotone class argument, we can extend to all Borel sets.

Claim. The theorem holds for
$$H = \sum_{\ell=1}^n h_\ell 1_{B_\ell}, \qquad K = \sum_{\ell=1}^n k_\ell 1_{B_\ell}$$
for pairwise disjoint bounded Borel sets $B_\ell \subseteq [0, \infty)$.

We have
$$\int |H_s K_s| \, |\mathrm{d}\langle M, N \rangle_s| \leq \sum_{\ell=1}^n |h_\ell k_\ell| \int_{B_\ell} |\mathrm{d}\langle M, N \rangle_s| \leq \sum_{\ell=1}^n |h_\ell k_\ell| \left( \int_{B_\ell} \mathrm{d}\langle M \rangle_s \right)^{1/2} \left( \int_{B_\ell} \mathrm{d}\langle N \rangle_s \right)^{1/2}$$
$$\leq \left( \sum_{\ell=1}^n h_\ell^2 \int_{B_\ell} \mathrm{d}\langle M \rangle_s \right)^{1/2} \left( \sum_{\ell=1}^n k_\ell^2 \int_{B_\ell} \mathrm{d}\langle N \rangle_s \right)^{1/2}.$$
To finish the proof, approximate general $H$ and $K$ by step functions and take the limit.

2.6 Semi-martingale

Definition (Semi-martingale). A (continuous) adapted process $X$ is a (continuous) semi-martingale if $X = X_0 + M + A$, where $X_0 \in \mathcal{F}_0$, $M$ is a continuous local martingale with $M_0 = 0$, and $A$ is a continuous finite variation process with $A_0 = 0$. This decomposition is unique up to indistinguishability.

Definition (Quadratic variation). Let $X = X_0 + M + A$ and $X' = X'_0 + M' + A'$ be (continuous) semi-martingales. Set
$$\langle X \rangle = \langle M \rangle, \qquad \langle X, X' \rangle = \langle M, M' \rangle.$$
This definition makes sense because continuous finite variation processes have vanishing quadratic variation.

Exercise. We have
$$\langle X, Y \rangle^{(n)}_t = \sum_{i=1}^{\lceil 2^n t \rceil} (X_{i 2^{-n}} - X_{(i-1) 2^{-n}})(Y_{i 2^{-n}} - Y_{(i-1) 2^{-n}}) \to \langle X, Y \rangle_t \quad \text{u.c.p.}$$
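The exercise, and the earlier example that $\langle B, B' \rangle = 0$ for independent Brownian motions, can be checked numerically. A small sketch (not part of the notes; Python with numpy, all names hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 2**14  # dyadic grid of mesh 2^{-14} on [0, 1]
dB = rng.normal(0.0, np.sqrt(1.0 / N), size=(2, N))  # two independent BMs

# Dyadic covariation: sum_i (B_{i2^-n} - B_{(i-1)2^-n})(B'_{i2^-n} - B'_{(i-1)2^-n})
cov = np.sum(dB[0] * dB[1])   # should be near <B, B'>_1 = 0
var = np.sum(dB[0] ** 2)      # should be near <B>_1 = 1
print(cov, var)
```

The covariation sum has mean $0$ and standard deviation of order $2^{-n/2}$, so it is visibly small next to the quadratic variation, which concentrates at $1$.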


3 The stochastic integral

3.1 Simple processes

We now have all the background required to define the stochastic integral, and we can start constructing it. As in the case of the Lebesgue integral, we first define it for simple processes, and then extend to general processes by taking a limit. Recall that we have

Definition (Simple process). The space of simple processes $\mathcal{E}$ consists of functions $H : \Omega \times [0, \infty) \to \mathbb{R}$ that can be written as
$$H_t(\omega) = \sum_{i=1}^n H_{i-1}(\omega) 1_{(t_{i-1}, t_i]}(t)$$
for some $0 \leq t_0 \leq t_1 \leq \cdots \leq t_n$ and bounded random variables $H_{i-1} \in \mathcal{F}_{t_{i-1}}$.

Definition ($H \cdot M$). For $M \in \mathcal{M}_c^2$ and $H \in \mathcal{E}$, we set
$$\int_0^t H \, \mathrm{d}M = (H \cdot M)_t = \sum_{i=1}^n H_{i-1} (M_{t_i \wedge t} - M_{t_{i-1} \wedge t}).$$
If $M$ were of finite variation, this would coincide with what we have previously seen. Recall that for the Lebesgue integral, extending this definition to general functions required results like monotone convergence. Here we need some similar results that put bounds on how large the integral can be. In fact, we get something better than a bound.
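The definition of $H \cdot M$ for a simple process is just a finite sum, so it is easy to write down in code. A minimal sketch (not from the notes; Python, all names hypothetical), using a deterministic path for $M$ purely to make the arithmetic checkable:

```python
def simple_integral(H, t_grid, M, t):
    """(H.M)_t = sum_i H[i-1] * (M(t_i ^ t) - M(t_{i-1} ^ t)) for a simple
    process H = sum_i H[i-1] 1_{(t_grid[i-1], t_grid[i]]}; M is a path."""
    total = 0.0
    for i in range(1, len(t_grid)):
        total += H[i - 1] * (M(min(t_grid[i], t)) - M(min(t_grid[i - 1], t)))
    return total

# Hypothetical example with the finite variation path M_t = t:
# H = 1 on (0, 1], H = 2 on (1, 2]; then (H.M)_{1.5} = 1*1 + 2*0.5 = 2.
val = simple_integral([1.0, 2.0], [0.0, 1.0, 2.0], lambda s: s, 1.5)
print(val)  # 2.0
```

The stopping at $t$ via $t_i \wedge t$ is what makes $(H \cdot M)_t$ a process rather than a single number.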

Proposition. If $M \in \mathcal{M}_c^2$ and $H \in \mathcal{E}$, then $H \cdot M \in \mathcal{M}_c^2$ and
$$\|H \cdot M\|^2_{\mathcal{M}^2} = \mathbb{E}\left( \int_0^\infty H_s^2 \, \mathrm{d}\langle M \rangle_s \right). \qquad (*)$$

Proof. We first show that $H \cdot M \in \mathcal{M}_c^2$. By linearity, we only have to check it for
$$X^i_t = H_{i-1} (M_{t_i \wedge t} - M_{t_{i-1} \wedge t}).$$
We have to check that $\mathbb{E}(X^i_t \mid \mathcal{F}_s) = 0$ for all $t > s$, and the only non-trivial case is when $t > t_{i-1}$:
$$\mathbb{E}(X^i_t \mid \mathcal{F}_s) = H_{i-1} \mathbb{E}(M_{t_i \wedge t} - M_{t_{i-1} \wedge t} \mid \mathcal{F}_s) = 0.$$
We also check that
$$\|X^i\|_{\mathcal{M}^2} \leq 2 \|H\|_\infty \|M\|_{\mathcal{M}^2}.$$
So it is bounded, and $H \cdot M \in \mathcal{M}_c^2$.

To prove $(*)$, we note that the $X^i$ are orthogonal and that
$$\langle X^i \rangle_t = H_{i-1}^2 (\langle M \rangle_{t_i \wedge t} - \langle M \rangle_{t_{i-1} \wedge t}).$$
So we have
$$\langle H \cdot M, H \cdot M \rangle = \sum_i \langle X^i, X^i \rangle = \sum_i H_{i-1}^2 (\langle M \rangle_{t_i \wedge t} - \langle M \rangle_{t_{i-1} \wedge t}) = \int_0^t H_s^2 \, \mathrm{d}\langle M \rangle_s.$$


In particular,
$$\|H \cdot M\|^2_{\mathcal{M}^2} = \mathbb{E} \langle H \cdot M \rangle_\infty = \mathbb{E}\left( \int_0^\infty H_s^2 \, \mathrm{d}\langle M \rangle_s \right).$$

Proposition. Let $M \in \mathcal{M}_c^2$ and $H \in \mathcal{E}$. Then
$$\langle H \cdot M, N \rangle = H \cdot \langle M, N \rangle$$
for all $N \in \mathcal{M}_c^2$. In other words, the stochastic integral commutes with the bracket.

Proof. Write $H \cdot M = \sum_i X^i$ with $X^i_t = H_{i-1}(M_{t_i \wedge t} - M_{t_{i-1} \wedge t})$ as before. Then
$$\langle X^i, N \rangle_t = H_{i-1} \langle M^{t_i} - M^{t_{i-1}}, N \rangle_t = H_{i-1} (\langle M, N \rangle_{t_i \wedge t} - \langle M, N \rangle_{t_{i-1} \wedge t}).$$

3.2 Itô isometry

We now try to extend the above definition to something more general than simple processes.

Definition ($L^2(M)$). Let $M \in \mathcal{M}_c^2$. Define $L^2(M)$ to be the space of (equivalence classes of) previsible $H : \Omega \times [0, \infty) \to \mathbb{R}$ such that
$$\|H\|_{L^2(M)} = \|H\|_M = \left( \mathbb{E} \int_0^\infty H_s^2 \, \mathrm{d}\langle M \rangle_s \right)^{1/2} < \infty.$$
For $H, K \in L^2(M)$, we set
$$(H, K)_{L^2(M)} = \mathbb{E}\left( \int_0^\infty H_s K_s \, \mathrm{d}\langle M \rangle_s \right).$$
In fact, $L^2(M)$ is equal to $L^2(\Omega \times [0, \infty), \mathcal{P}, \mathrm{d}\mathbb{P} \, \mathrm{d}\langle M \rangle)$, where $\mathcal{P}$ is the previsible $\sigma$-algebra, and in particular is a Hilbert space.

Proposition. Let $M \in \mathcal{M}_c^2$. Then $\mathcal{E}$ is dense in $L^2(M)$.

Proof. Since $L^2(M)$ is a Hilbert space, it suffices to show that if $(K, H) = 0$ for all $H \in \mathcal{E}$, then $K = 0$. So assume that $(K, H) = 0$ for all $H \in \mathcal{E}$, and let
$$X_t = \int_0^t K_s \, \mathrm{d}\langle M \rangle_s.$$
Then $X$ is a well-defined finite variation process, and $X_t \in L^1$ for all $t$. It suffices to show that $X_t = 0$ for all $t$, and we shall do so by showing that $X$ is a continuous martingale (a continuous martingale of finite variation starting at $0$ vanishes identically).

Let $0 \leq s < t$ and $F \in \mathcal{F}_s$ bounded. We let $H = F 1_{(s,t]} \in \mathcal{E}$. By assumption, we know
$$0 = (K, H) = \mathbb{E}\left( F \int_s^t K_u \, \mathrm{d}\langle M \rangle_u \right) = \mathbb{E}(F (X_t - X_s)).$$
Since this holds for all bounded $\mathcal{F}_s$-measurable $F$, we have shown that
$$\mathbb{E}(X_t \mid \mathcal{F}_s) = X_s.$$
So $X$ is a (continuous) martingale, and we are done.


Theorem. Let $M \in \mathcal{M}_c^2$. Then
(i) The map $H \in \mathcal{E} \mapsto H \cdot M \in \mathcal{M}_c^2$ extends uniquely to an isometry $L^2(M) \to \mathcal{M}_c^2$, called the Itô isometry.
(ii) For $H \in L^2(M)$, $H \cdot M$ is the unique martingale in $\mathcal{M}_c^2$ such that
$$\langle H \cdot M, N \rangle = H \cdot \langle M, N \rangle$$
for all $N \in \mathcal{M}_c^2$, where the integral on the left-hand side is the stochastic integral (as above) and on the right-hand side a finite variation integral.
(iii) If $T$ is a stopping time, then $(1_{[0,T]} H) \cdot M = (H \cdot M)^T = H \cdot M^T$.

Definition (Stochastic integral). $H \cdot M$ is the stochastic integral of $H$ with respect to $M$, and we also write
$$(H \cdot M)_t = \int_0^t H_s \, \mathrm{d}M_s.$$

It is important that the integral of a martingale is still a martingale. After proving Itô's formula, we will use this fact to show that a lot of things are in fact martingales in a rather systematic manner. For example, it will be rather effortless to show that $B_t^2 - t$ is a martingale when $B_t$ is a standard Brownian motion.

Proof.
(i) We have already shown that this map is an isometry when restricted to $\mathcal{E}$. So extend by completeness of $\mathcal{M}_c^2$ and denseness of $\mathcal{E}$.
(ii) Again the equation to show is known for simple $H$, and we want to show it is preserved under taking limits. Suppose $H^n \to H$ in $L^2(M)$ with $H^n \in \mathcal{E}$. Then $H^n \cdot M \to H \cdot M$ in $\mathcal{M}_c^2$. We want to show that
$$\langle H \cdot M, N \rangle_\infty = \lim_{n \to \infty} \langle H^n \cdot M, N \rangle_\infty \quad \text{in } L^1,$$
$$H \cdot \langle M, N \rangle = \lim_{n \to \infty} H^n \cdot \langle M, N \rangle \quad \text{in } L^1$$
for all $N \in \mathcal{M}_c^2$. To show the first, we use the Kunita–Watanabe inequality to get
$$\mathbb{E} |\langle H \cdot M - H^n \cdot M, N \rangle_\infty| \leq \big( \mathbb{E} \langle H \cdot M - H^n \cdot M \rangle_\infty \big)^{1/2} \big( \mathbb{E} \langle N \rangle_\infty \big)^{1/2},$$
and the first factor is $\|H \cdot M - H^n \cdot M\|_{\mathcal{M}^2} \to 0$, while the second is finite since $N \in \mathcal{M}_c^2$. The second follows from
$$\mathbb{E} \big| ((H - H^n) \cdot \langle M, N \rangle)_\infty \big| \leq \|H - H^n\|_{L^2(M)} \|N\|_{\mathcal{M}^2} \to 0.$$
So we know that $\langle H \cdot M, N \rangle_\infty = (H \cdot \langle M, N \rangle)_\infty$. We can then replace $N$ by the stopped process $N^t$ to get $\langle H \cdot M, N \rangle_t = (H \cdot \langle M, N \rangle)_t$.
To see uniqueness, suppose $X \in \mathcal{M}_c^2$ is another such martingale. Then we have $\langle X - H \cdot M, N \rangle = 0$ for all $N$. Take $N = X - H \cdot M$, and then we are done.


(iii) For $N \in \mathcal{M}_c^2$, we have
$$\langle (H \cdot M)^T, N \rangle_t = \langle H \cdot M, N \rangle_{t \wedge T} = (H \cdot \langle M, N \rangle)_{t \wedge T} = ((H 1_{[0,T]}) \cdot \langle M, N \rangle)_t$$
for every $N$. So we have shown that
$$(H \cdot M)^T = (1_{[0,T]} H) \cdot M$$
by (ii). To prove the second equality, we compute
$$\langle H \cdot M^T, N \rangle_t = (H \cdot \langle M^T, N \rangle)_t = (H \cdot \langle M, N \rangle)_{t \wedge T} = ((H 1_{[0,T]}) \cdot \langle M, N \rangle)_t.$$

Note that (ii) can be written as
$$\left\langle \int_0^{(-)} H_s \, \mathrm{d}M_s, \, N \right\rangle_t = \int_0^t H_s \, \mathrm{d}\langle M, N \rangle_s.$$

Corollary. $\langle H \cdot M, K \cdot N \rangle = H \cdot (K \cdot \langle M, N \rangle) = (HK) \cdot \langle M, N \rangle$. In other words,
$$\left\langle \int_0^{(-)} H_s \, \mathrm{d}M_s, \, \int_0^{(-)} K_s \, \mathrm{d}N_s \right\rangle_t = \int_0^t H_s K_s \, \mathrm{d}\langle M, N \rangle_s.$$

Corollary. Since $H \cdot M$ and $(H \cdot M)(K \cdot N) - \langle H \cdot M, K \cdot N \rangle$ are martingales starting at $0$, we have
$$\mathbb{E}\left( \int_0^t H_s \, \mathrm{d}M_s \right) = 0,$$
$$\mathbb{E}\left( \int_0^t H_s \, \mathrm{d}M_s \int_0^t K_s \, \mathrm{d}N_s \right) = \mathbb{E}\left( \int_0^t H_s K_s \, \mathrm{d}\langle M, N \rangle_s \right).$$

Corollary. Let $H \in L^2(M)$. Then $HK \in L^2(M)$ iff $K \in L^2(H \cdot M)$, in which case
$$(KH) \cdot M = K \cdot (H \cdot M).$$

Proof. We have
$$\mathbb{E}\left( \int_0^\infty K_s^2 H_s^2 \, \mathrm{d}\langle M \rangle_s \right) = \mathbb{E}\left( \int_0^\infty K_s^2 \, \mathrm{d}\langle H \cdot M \rangle_s \right),$$
so $\|K\|_{L^2(H \cdot M)} = \|HK\|_{L^2(M)}$. For $N \in \mathcal{M}_c^2$, we have
$$\langle (KH) \cdot M, N \rangle_t = ((KH) \cdot \langle M, N \rangle)_t = (K \cdot (H \cdot \langle M, N \rangle))_t = (K \cdot \langle H \cdot M, N \rangle)_t.$$

3.3 Extension to local martingales

We have now defined the stochastic integral for continuous martingales. We next go through some formalities to extend this to local martingales, and ultimately to semi-martingales. We are not doing this just for fun. Rather, when we later prove results like Itô's formula, even when we put in continuous (local) martingales, we usually end up with some semi-martingales. So it is useful to be able to deal with semi-martingales in general.


Definition ($L^2_{\mathrm{bc}}(M)$). Let $L^2_{\mathrm{bc}}(M)$ be the space of previsible $H$ such that
$$\int_0^t H_s^2 \, \mathrm{d}\langle M \rangle_s < \infty \quad \text{a.s.}$$
for all finite $t > 0$.

Theorem. Let $M$ be a continuous local martingale.
(i) For every $H \in L^2_{\mathrm{bc}}(M)$, there is a unique continuous local martingale $H \cdot M$ with $(H \cdot M)_0 = 0$ and $\langle H \cdot M, N \rangle = H \cdot \langle M, N \rangle$ for all continuous local martingales $N$.
(ii) If $T$ is a stopping time, then
$$(1_{[0,T]} H) \cdot M = (H \cdot M)^T = H \cdot M^T.$$
(iii) If $H \in L^2_{\mathrm{bc}}(M)$ and $K$ is previsible, then $K \in L^2_{\mathrm{bc}}(H \cdot M)$ iff $HK \in L^2_{\mathrm{bc}}(M)$, and then
$$K \cdot (H \cdot M) = (KH) \cdot M.$$
(iv) Finally, if $M \in \mathcal{M}_c^2$ and $H \in L^2(M)$, then this definition agrees with the previous one.

Proof. Assume $M_0 = 0$, and that $\int_0^t H_s^2 \, \mathrm{d}\langle M \rangle_s < \infty$ for all $\omega \in \Omega$ (by setting $H = 0$ when this fails). Set
$$S_n = \inf\left\{ t \geq 0 : \int_0^t (1 + H_s^2) \, \mathrm{d}\langle M \rangle_s \geq n \right\}.$$
These $S_n$ are stopping times that tend to infinity. Then
$$\langle M^{S_n}, M^{S_n} \rangle_t = \langle M, M \rangle_{t \wedge S_n} \leq n.$$
So $M^{S_n} \in \mathcal{M}_c^2$. Also,
$$\int_0^\infty H_s^2 \, \mathrm{d}\langle M^{S_n} \rangle_s = \int_0^{S_n} H_s^2 \, \mathrm{d}\langle M \rangle_s \leq n.$$
So $H \in L^2(M^{S_n})$, and we have already defined what $H \cdot M^{S_n}$ is. Now notice that
$$H \cdot M^{S_n} = (H \cdot M^{S_m})^{S_n} \quad \text{for } m \geq n.$$
So it makes sense to define
$$H \cdot M = \lim_{n \to \infty} H \cdot M^{S_n}.$$
This is the unique process such that $(H \cdot M)^{S_n} = H \cdot M^{S_n}$. We see that $H \cdot M$ is a continuous adapted local martingale with reducing sequence $(S_n)$.

Claim. $\langle H \cdot M, N \rangle = H \cdot \langle M, N \rangle$.

Indeed, assume that $N_0 = 0$. Set $S'_n = \inf\{t \geq 0 : |N_t| \geq n\}$ and $T_n = S_n \wedge S'_n$. Observe that $N^{S'_n} \in \mathcal{M}_c^2$. Then
$$\langle H \cdot M, N \rangle^{T_n} = \langle H \cdot M^{S_n}, N^{S'_n} \rangle^{T_n} = \big( H \cdot \langle M^{S_n}, N^{S'_n} \rangle \big)^{T_n} = \big( H \cdot \langle M, N \rangle \big)^{T_n}.$$
Taking the limit $n \to \infty$ gives the desired result. The proofs of the other claims are the same as before, since they only use the characterizing property $\langle H \cdot M, N \rangle = H \cdot \langle M, N \rangle$.


3.4 Extension to semi-martingales

Definition (Locally bounded previsible process). A previsible process $H$ is locally bounded if for all $t \geq 0$, we have
$$\sup_{s \leq t} |H_s| < \infty \quad \text{a.s.}$$

Fact.
(i) Any adapted continuous process is locally bounded.
(ii) If $H$ is locally bounded and $A$ is a finite variation process, then for all $t \geq 0$, we have
$$\int_0^t |H_s| \, |\mathrm{d}A_s| < \infty \quad \text{a.s.}$$

Now if $X = X_0 + M + A$ is a semi-martingale, where $X_0 \in \mathcal{F}_0$, $M$ is a continuous local martingale and $A$ is a finite variation process, we want to define $\int H_s \, \mathrm{d}X_s$. We already know what it means to integrate with respect to $\mathrm{d}M_s$ and $\mathrm{d}A_s$, using the Itô integral and the finite variation integral respectively, and $X_0$ doesn't change, so we can ignore it.

Definition (Stochastic integral). Let $X = X_0 + M + A$ be a continuous semi-martingale, and $H$ a locally bounded previsible process. Then the stochastic integral $H \cdot X$ is the continuous semi-martingale defined by
$$H \cdot X = H \cdot M + H \cdot A,$$
and we write
$$(H \cdot X)_t = \int_0^t H_s \, \mathrm{d}X_s.$$

Proposition.
(i) $(H, X) \mapsto H \cdot X$ is bilinear.
(ii) $H \cdot (K \cdot X) = (HK) \cdot X$ if $H$ and $K$ are locally bounded.
(iii) $(H \cdot X)^T = (H 1_{[0,T]}) \cdot X = H \cdot X^T$ for every stopping time $T$.
(iv) If $X$ is a continuous local martingale (resp. a finite variation process), then so is $H \cdot X$.
(v) If $H = \sum_{i=1}^n H_{i-1} 1_{(t_{i-1}, t_i]}$ and $H_{i-1} \in \mathcal{F}_{t_{i-1}}$ (not necessarily bounded), then
$$(H \cdot X)_t = \sum_{i=1}^n H_{i-1} (X_{t_i \wedge t} - X_{t_{i-1} \wedge t}).$$

Proof. (i) to (iv) follow from the analogous properties for $H \cdot M$ and $H \cdot A$. The last part is also true by definition if the $H_i$ are uniformly bounded. If $H_i$ is not bounded, then the finite variation part is still fine, since for each fixed $\omega \in \Omega$, $H_i(\omega)$ is a fixed number. For the martingale part, set
$$T_n = \inf\{t \geq 0 : |H_t| \geq n\}.$$
Then the $T_n$ are stopping times, $T_n \to \infty$, and $H 1_{[0, T_n]} \in \mathcal{E}$. Thus
$$(H \cdot M)_{t \wedge T_n} = \big( (H 1_{[0,T_n]}) \cdot M \big)_t = \sum_{i=1}^n H_{i-1} (M_{t_i \wedge t \wedge T_n} - M_{t_{i-1} \wedge t \wedge T_n}).$$
Then take the limit $n \to \infty$.

Before we get to Itô's formula, we need a few more useful properties:

Proposition (Stochastic dominated convergence theorem). Let $X = X_0 + M + A$ be a continuous semi-martingale. Let $H, H^n$ be previsible and locally bounded, and let $K$ be previsible and non-negative. Let $t > 0$. Suppose
(i) $H^n_s \to H_s$ as $n \to \infty$ for all $s \in [0, t]$;
(ii) $|H^n_s| \leq K_s$ for all $s \in [0, t]$ and $n \in \mathbb{N}$;
(iii) $\int_0^t K_s^2 \, \mathrm{d}\langle M \rangle_s < \infty$ and $\int_0^t K_s \, |\mathrm{d}A_s| < \infty$ (note that both conditions hold if $K$ is locally bounded).

Then
$$\int_0^t H^n_s \, \mathrm{d}X_s \to \int_0^t H_s \, \mathrm{d}X_s \quad \text{in probability}.$$

Proof. For the finite variation part, the convergence follows from the usual dominated convergence theorem. For the martingale part, we set
$$T_m = \inf\left\{ t \geq 0 : \int_0^t K_s^2 \, \mathrm{d}\langle M \rangle_s \geq m \right\}.$$
So we have
$$\mathbb{E}\left( \left( \int_0^{T_m \wedge t} H^n_s \, \mathrm{d}M_s - \int_0^{T_m \wedge t} H_s \, \mathrm{d}M_s \right)^2 \right) \leq \mathbb{E}\left( \int_0^{T_m \wedge t} (H^n_s - H_s)^2 \, \mathrm{d}\langle M \rangle_s \right) \to 0,$$
using the usual dominated convergence theorem, since $\int_0^{T_m \wedge t} K_s^2 \, \mathrm{d}\langle M \rangle_s \leq m$. Since $T_m \wedge t = t$ eventually as $m \to \infty$ almost surely, hence in probability, we are done.

Proposition. Let $X$ be a continuous semi-martingale, and let $H$ be an adapted bounded left-continuous process. Then for every sequence of subdivisions $0 = t_0^{(m)} < t_1^{(m)} < \cdots < t_{n_m}^{(m)} = t$ of $[0, t]$ with $\max_i |t_i^{(m)} - t_{i-1}^{(m)}| \to 0$, we have
$$\int_0^t H_s \, \mathrm{d}X_s = \lim_{m \to \infty} \sum_{i=1}^{n_m} H_{t_{i-1}^{(m)}} \big( X_{t_i^{(m)}} - X_{t_{i-1}^{(m)}} \big)$$
in probability.

Proof. We have already proved this for the Lebesgue–Stieltjes integral, and all we used was dominated convergence. So the same proof works, using the stochastic dominated convergence theorem instead.


3.5 Itô formula

We now prove the stochastic analogues of the product rule and the chain rule, i.e. integration by parts and Itô's formula. Compared to the world of usual integrals, the difference is that the quadratic variation, i.e. the "second-order terms", will crop up quite a lot, since they are no longer negligible.

Theorem (Integration by parts). Let $X, Y$ be continuous semi-martingales. Then almost surely,
$$X_t Y_t - X_0 Y_0 = \int_0^t X_s \, \mathrm{d}Y_s + \int_0^t Y_s \, \mathrm{d}X_s + \langle X, Y \rangle_t.$$
The last term is called the Itô correction. Note that if $X, Y$ are martingales, then the first two terms on the right are martingales, but the last is not. So we are forced to think about semi-martingales. Observe that in the case of finite variation integrals, we don't have the correction.

Proof. We have
$$X_t Y_t - X_s Y_s = X_s (Y_t - Y_s) + (X_t - X_s) Y_s + (X_t - X_s)(Y_t - Y_s).$$
When doing usual calculus, we can drop the last term, because it is second order. However, the quadratic variation of martingales is in general non-zero, and so we must keep track of it. We have
$$X_{k 2^{-n}} Y_{k 2^{-n}} - X_0 Y_0 = \sum_{i=1}^k \big( X_{i 2^{-n}} Y_{i 2^{-n}} - X_{(i-1) 2^{-n}} Y_{(i-1) 2^{-n}} \big)$$
$$= \sum_{i=1}^k \Big( X_{(i-1) 2^{-n}} (Y_{i 2^{-n}} - Y_{(i-1) 2^{-n}}) + Y_{(i-1) 2^{-n}} (X_{i 2^{-n}} - X_{(i-1) 2^{-n}}) + (X_{i 2^{-n}} - X_{(i-1) 2^{-n}})(Y_{i 2^{-n}} - Y_{(i-1) 2^{-n}}) \Big).$$
Taking the limit $n \to \infty$ with $k 2^{-n}$ fixed, we see that the formula holds for $t$ a dyadic rational. Then by continuity, it holds for all $t$.

The really useful formula is the following:

Theorem (Itô's formula). Let $X^1, \ldots, X^p$ be continuous semi-martingales, and let $f : \mathbb{R}^p \to \mathbb{R}$ be $C^2$. Then, writing $X = (X^1, \ldots, X^p)$, we have, almost surely,
$$f(X_t) = f(X_0) + \sum_{i=1}^p \int_0^t \frac{\partial f}{\partial x^i}(X_s) \, \mathrm{d}X^i_s + \frac{1}{2} \sum_{i,j=1}^p \int_0^t \frac{\partial^2 f}{\partial x^i \partial x^j}(X_s) \, \mathrm{d}\langle X^i, X^j \rangle_s.$$
In particular, $f(X)$ is a semi-martingale.

The proof is long but not hard. We first do it for polynomials by explicit computation, and then use Weierstrass approximation to extend it to more general functions.


Proof.
Claim. Itô's formula holds when $f$ is a polynomial.

It clearly does when $f$ is a constant! We then proceed by induction. Suppose Itô's formula holds for some $f$. Then we apply integration by parts to
$$g(x) = x^k f(x),$$
where $x^k$ denotes the $k$th component of $x$. Then we have
$$g(X_t) = g(X_0) + \int_0^t X^k_s \, \mathrm{d}f(X_s) + \int_0^t f(X_s) \, \mathrm{d}X^k_s + \langle X^k, f(X) \rangle_t.$$
We now apply Itô's formula for $f$ to write
$$\int_0^t X^k_s \, \mathrm{d}f(X_s) = \sum_{i=1}^p \int_0^t X^k_s \frac{\partial f}{\partial x^i}(X_s) \, \mathrm{d}X^i_s + \frac{1}{2} \sum_{i,j=1}^p \int_0^t X^k_s \frac{\partial^2 f}{\partial x^i \partial x^j}(X_s) \, \mathrm{d}\langle X^i, X^j \rangle_s.$$
We also have
$$\langle X^k, f(X) \rangle_t = \sum_{i=1}^p \int_0^t \frac{\partial f}{\partial x^i}(X_s) \, \mathrm{d}\langle X^k, X^i \rangle_s.$$
So we have
$$g(X_t) = g(X_0) + \sum_{i=1}^p \int_0^t \frac{\partial g}{\partial x^i}(X_s) \, \mathrm{d}X^i_s + \frac{1}{2} \sum_{i,j=1}^p \int_0^t \frac{\partial^2 g}{\partial x^i \partial x^j}(X_s) \, \mathrm{d}\langle X^i, X^j \rangle_s.$$
So by induction, Itô's formula holds for all polynomials.

Claim. Itô's formula holds for all $f \in C^2$ if $|X_t(\omega)| \leq n$ and $\int_0^t |\mathrm{d}A_s| \leq n$ for all $(t, \omega)$.

By the Weierstrass approximation theorem, there are polynomials $p_k$ such that
$$\sup_{|x| \leq n} \left( |f(x) - p_k(x)| + \max_i \left| \frac{\partial f}{\partial x^i} - \frac{\partial p_k}{\partial x^i} \right| + \max_{i,j} \left| \frac{\partial^2 f}{\partial x^i \partial x^j} - \frac{\partial^2 p_k}{\partial x^i \partial x^j} \right| \right) \leq \frac{1}{k}.$$
By taking limits in probability, we have
$$f(X_t) - f(X_0) = \lim_{k \to \infty} \big( p_k(X_t) - p_k(X_0) \big),$$
$$\int_0^t \frac{\partial f}{\partial x^i}(X_s) \, \mathrm{d}X^i_s = \lim_{k \to \infty} \int_0^t \frac{\partial p_k}{\partial x^i}(X_s) \, \mathrm{d}X^i_s$$
by the stochastic dominated convergence theorem, and by regular dominated convergence, we have
$$\int_0^t \frac{\partial^2 f}{\partial x^i \partial x^j}(X_s) \, \mathrm{d}\langle X^i, X^j \rangle_s = \lim_{k \to \infty} \int_0^t \frac{\partial^2 p_k}{\partial x^i \partial x^j}(X_s) \, \mathrm{d}\langle X^i, X^j \rangle_s.$$


Claim. Itô's formula holds for all $X$.

Let
$$T_n = \inf\left\{ t \geq 0 : |X_t| \geq n \text{ or } \int_0^t |\mathrm{d}A_s| \geq n \right\}.$$
Then by the previous claim, we have
$$f(X^{T_n}_t) = f(X_0) + \sum_{i=1}^p \int_0^t \frac{\partial f}{\partial x^i}(X^{T_n}_s) \, \mathrm{d}(X^i)^{T_n}_s + \frac{1}{2} \sum_{i,j} \int_0^t \frac{\partial^2 f}{\partial x^i \partial x^j}(X^{T_n}_s) \, \mathrm{d}\langle (X^i)^{T_n}, (X^j)^{T_n} \rangle_s$$
$$= f(X_0) + \sum_{i=1}^p \int_0^{t \wedge T_n} \frac{\partial f}{\partial x^i}(X_s) \, \mathrm{d}X^i_s + \frac{1}{2} \sum_{i,j} \int_0^{t \wedge T_n} \frac{\partial^2 f}{\partial x^i \partial x^j}(X_s) \, \mathrm{d}\langle X^i, X^j \rangle_s.$$
Then take $T_n \to \infty$.

Example. Let $B$ be a standard Brownian motion with $B_0 = 0$, and $f(x) = x^2$. Then
$$B_t^2 = 2 \int_0^t B_s \, \mathrm{d}B_s + t.$$
In other words,
$$B_t^2 - t = 2 \int_0^t B_s \, \mathrm{d}B_s.$$
In particular, this is a continuous local martingale.
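This identity is easy to test numerically: the left-endpoint Riemann sums defining $2\int_0^1 B_s \, \mathrm{d}B_s$ should reproduce $B_1^2 - 1$. A sketch (not part of the notes; Python, names hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 2**16
dB = rng.normal(0.0, np.sqrt(1.0 / N), size=N)
B = np.concatenate([[0.0], np.cumsum(dB)])

# Left-endpoint (Ito) Riemann sum for 2 * int_0^1 B_s dB_s.
ito = 2.0 * np.sum(B[:-1] * dB)
print(ito, B[-1] ** 2 - 1.0)  # the two agree up to discretization error
```

Using right endpoints instead would give approximately $B_1^2 + 1$, since the sum of squared increments contributes with the opposite sign — one way to see why the choice of left endpoints matters in the Itô theory.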

Example. Let $B = (B^1, \ldots, B^d)$ be a $d$-dimensional Brownian motion. Then we apply Itô's formula to the semi-martingale $X = (t, B^1, \ldots, B^d)$, and find that
$$f(t, B_t) - f(0, B_0) - \int_0^t \left( \frac{\partial}{\partial s} + \frac{1}{2} \Delta \right) f(s, B_s) \, \mathrm{d}s = \sum_{i=1}^d \int_0^t \frac{\partial}{\partial x^i} f(s, B_s) \, \mathrm{d}B^i_s$$
is a continuous local martingale.

There are some syntactic tricks that make stochastic integrals easier to manipulate, namely working in differential form. We can state Itô's formula in differential form:
$$\mathrm{d}f(X_t) = \sum_{i=1}^p \frac{\partial f}{\partial x^i} \, \mathrm{d}X^i + \frac{1}{2} \sum_{i,j=1}^p \frac{\partial^2 f}{\partial x^i \partial x^j} \, \mathrm{d}\langle X^i, X^j \rangle,$$
which we can think of as the chain rule. For example, in the case of Brownian motion, we have
$$\mathrm{d}f(B_t) = f'(B_t) \, \mathrm{d}B_t + \frac{1}{2} f''(B_t) \, \mathrm{d}t.$$
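The one-dimensional chain rule above can be sanity-checked by simulation, e.g. with $f = \sin$: the sum $\int_0^1 \cos(B_s) \, \mathrm{d}B_s - \frac{1}{2} \int_0^1 \sin(B_s) \, \mathrm{d}s$ should reproduce $\sin(B_1)$. A sketch (not part of the notes; Python, names hypothetical):

```python
import numpy as np

rng = np.random.default_rng(5)
N = 2**16
dt = 1.0 / N
dB = rng.normal(0.0, np.sqrt(dt), size=N)
B = np.concatenate([[0.0], np.cumsum(dB)])

# Ito: sin(B_1) = int_0^1 cos(B_s) dB_s - (1/2) int_0^1 sin(B_s) ds
stoch = np.sum(np.cos(B[:-1]) * dB)          # left-endpoint Ito sum
drift = -0.5 * np.sum(np.sin(B[:-1])) * dt   # ordinary Riemann sum
print(np.sin(B[-1]), stoch + drift)          # close for large N
```

Dropping the drift term (i.e. using the classical chain rule) leaves a visible discrepancy of order $\frac{1}{2}\int_0^1 \sin(B_s)\,\mathrm{d}s$, which is exactly the Itô correction.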


Formally, one expands $f$ using that "$(\mathrm{d}t)^2 = 0$" but "$(\mathrm{d}B)^2 = \mathrm{d}t$". The following formal rules hold:
$$Z_t - Z_0 = \int_0^t H_s \, \mathrm{d}X_s \iff \mathrm{d}Z_t = H_t \, \mathrm{d}X_t,$$
$$Z_t = \langle X, Y \rangle_t = \int_0^t \mathrm{d}\langle X, Y \rangle_s \iff \mathrm{d}Z_t = \mathrm{d}X_t \, \mathrm{d}Y_t.$$
Then we have rules such as
$$H_t (K_t \, \mathrm{d}X_t) = (H_t K_t) \, \mathrm{d}X_t,$$
$$H_t (\mathrm{d}X_t \, \mathrm{d}Y_t) = (H_t \, \mathrm{d}X_t) \, \mathrm{d}Y_t,$$
$$\mathrm{d}(X_t Y_t) = X_t \, \mathrm{d}Y_t + Y_t \, \mathrm{d}X_t + \mathrm{d}X_t \, \mathrm{d}Y_t,$$
$$\mathrm{d}f(X_t) = f'(X_t) \, \mathrm{d}X_t + \frac{1}{2} f''(X_t) \, \mathrm{d}X_t \, \mathrm{d}X_t.$$

3.6 The Lévy characterization

A more major application of the stochastic integral is the following convenient characterization of Brownian motion:

Theorem (Lévy's characterization of Brownian motion). Let $(X^1, \ldots, X^d)$ be continuous local martingales. Suppose that $X_0 = 0$ and that $\langle X^i, X^j \rangle_t = \delta_{ij} t$ for all $i, j = 1, \ldots, d$ and $t \geq 0$. Then $(X^1, \ldots, X^d)$ is a standard $d$-dimensional Brownian motion.

This might seem like a rather artificial condition, but it turns out to be quite useful in practice (though less so in this course). The point is that we know $\langle H \cdot M \rangle_t = (H^2 \cdot \langle M \rangle)_t$, and in particular if we are integrating things with respect to Brownian motions of some sort, we know $\langle B \rangle_t = t$, and so we are left with some explicit, familiar integral to do.

Proof. Let $0 \leq s < t$. It suffices to check that $X_t - X_s$ is independent of $\mathcal{F}_s$ and $X_t - X_s \sim N(0, (t - s) I)$.

Claim. $\mathbb{E}\big( e^{i \theta \cdot (X_t - X_s)} \mid \mathcal{F}_s \big) = e^{-\frac{1}{2} |\theta|^2 (t - s)}$ for all $\theta \in \mathbb{R}^d$ and $s < t$.

This is sufficient, since the right-hand side is independent of $\mathcal{F}_s$, hence so is the left-hand side, and the Fourier transform characterizes the distribution.

To check this, for $\theta \in \mathbb{R}^d$, we define
$$Y_t = \theta \cdot X_t = \sum_{i=1}^d \theta^i X^i_t.$$
Then $Y$ is a continuous local martingale, and we have
$$\langle Y \rangle_t = \langle Y, Y \rangle_t = \sum_{j,k=1}^d \theta^j \theta^k \langle X^j, X^k \rangle_t = |\theta|^2 t$$
by assumption. Let
$$Z_t = e^{i Y_t + \frac{1}{2} \langle Y \rangle_t} = e^{i \theta \cdot X_t + \frac{1}{2} |\theta|^2 t}.$$


By Itô's formula with $X = i Y + \frac{1}{2} \langle Y \rangle$ and $f(x) = e^x$, we get
$$\mathrm{d}Z_t = Z_t \left( i \, \mathrm{d}Y_t - \frac{1}{2} \, \mathrm{d}\langle Y \rangle_t + \frac{1}{2} \, \mathrm{d}\langle Y \rangle_t \right) = i Z_t \, \mathrm{d}Y_t.$$
So $Z$ is a continuous local martingale. Moreover, since $Z$ is bounded on bounded intervals of $t$, we know $Z$ is in fact a martingale, and $Z_0 = 1$. Then by the definition of a martingale, we have
$$\mathbb{E}(Z_t \mid \mathcal{F}_s) = Z_s,$$
and unwrapping the definition of $Z_t$ shows that the result follows.

In general, the quadratic variation of a process doesn't have to be linear in $t$. It turns out that if the quadratic variation increases to infinity, then the martingale is still a Brownian motion up to a reparametrization of time.

Theorem (Dubins–Schwarz). Let $M$ be a continuous local martingale with $M_0 = 0$ and $\langle M \rangle_\infty = \infty$. Let
$$T_s = \inf\{t \geq 0 : \langle M \rangle_t > s\},$$
the right-continuous inverse of $\langle M \rangle_t$. Let $B_s = M_{T_s}$ and $\mathcal{G}_s = \mathcal{F}_{T_s}$. Then $T_s$ is an $(\mathcal{F}_t)$-stopping time, $\langle M \rangle_{T_s} = s$ for all $s \geq 0$, $B$ is a $(\mathcal{G}_s)$-Brownian motion, and
$$M_t = B_{\langle M \rangle_t}.$$

Proof. Since $\langle M \rangle$ is continuous and adapted, and $\langle M \rangle_\infty = \infty$, we know each $T_s$ is a stopping time and $T_s < \infty$ for all $s \geq 0$.

Claim. $(\mathcal{G}_s)$ is a filtration obeying the usual conditions, and $\mathcal{G}_\infty = \mathcal{F}_\infty$.

Indeed, if $A \in \mathcal{G}_s$ and $s < t$, then
$$A \cap \{T_t \leq u\} = A \cap \{T_s \leq u\} \cap \{T_t \leq u\} \in \mathcal{F}_u,$$
using that $A \cap \{T_s \leq u\} \in \mathcal{F}_u$ since $A \in \mathcal{G}_s$. Right-continuity then follows from that of $(\mathcal{F}_t)$ and the right-continuity of $s \mapsto T_s$.

Claim. $B$ is adapted to $(\mathcal{G}_s)$.

In general, if $X$ is càdlàg and $T$ is a stopping time, then $X_T 1_{\{T < \infty\}} \in \mathcal{F}_T$. Apply this with $X = M$, $T = T_s$ and $\mathcal{F}_T = \mathcal{G}_s$. Thus $B_s \in \mathcal{G}_s$.

Claim. $B$ is continuous.

Here there is actually something to verify, because $s \mapsto T_s$ is only right-continuous, not necessarily continuous. Thus, we only know that $B_s$ is right-continuous, and we have to check that it is left-continuous.

Now $B$ is left-continuous at $s$ iff $B_s = B_{s-}$, iff $M_{T_s} = M_{T_{s-}}$. Now we have
$$T_{s-} = \inf\{t \geq 0 : \langle M \rangle_t \geq s\}.$$
If $T_s = T_{s-}$, then there is nothing to show. Thus, we may assume $T_s > T_{s-}$. Then we have $\langle M \rangle_{T_s} = \langle M \rangle_{T_{s-}}$. Since $\langle M \rangle_t$ is increasing, this means $\langle M \rangle$ is constant on $[T_{s-}, T_s]$. We will later prove:


Lemma. $M$ is constant on $[a, b]$ iff $\langle M \rangle$ is constant on $[a, b]$.

So we know that if $T_s > T_{s-}$, then $M_{T_s} = M_{T_{s-}}$. So $B$ is left-continuous. We then have to show that $B$ is a martingale.

Claim. $(M^2 - \langle M \rangle)^{T_s}$ is a uniformly integrable martingale.

To see this, observe that $\langle M^{T_s} \rangle_\infty = \langle M \rangle_{T_s} = s$, and so $M^{T_s}$ is a bounded martingale in $\mathcal{M}_c^2$. So $(M^2 - \langle M \rangle)^{T_s}$ is a uniformly integrable martingale.

We now apply the optional stopping theorem, which tells us, for $s < t$,
$$\mathbb{E}(B_t \mid \mathcal{G}_s) = \mathbb{E}\big( M^{T_t}_\infty \mid \mathcal{F}_{T_s} \big) = M_{T_s} = B_s.$$
So $B$ is a martingale. Moreover,
$$\mathbb{E}(B_t^2 - t \mid \mathcal{G}_s) = \mathbb{E}\big( (M^2 - \langle M \rangle)^{T_t}_\infty \mid \mathcal{F}_{T_s} \big) = M_{T_s}^2 - \langle M \rangle_{T_s} = B_s^2 - s.$$
So $B_t^2 - t$ is a martingale, and by the characterizing property of the quadratic variation, $\langle B \rangle_t = t$. So by Lévy's criterion, $B$ is a Brownian motion in one dimension.
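The content of the theorem can be visualized numerically: time-change a martingale $M_t = \int_0^t \sigma_s \, \mathrm{d}B_s$ by the inverse of its quadratic variation and check that the result has the Gaussian variances of a Brownian motion. A sketch (not part of the notes; Python, names hypothetical; the deterministic volatility $\sigma$ is an arbitrary illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(3)
paths, N, T = 2000, 2**12, 4.0
dt = T / N
t = np.linspace(0.0, T, N + 1)
sigma = 1.0 + 0.5 * np.sin(t[:-1])  # deterministic volatility (arbitrary choice)

dB = rng.normal(0.0, np.sqrt(dt), size=(paths, N))
M = np.concatenate([np.zeros((paths, 1)), np.cumsum(sigma * dB, axis=1)], axis=1)
qv = np.concatenate([[0.0], np.cumsum(sigma**2) * dt])  # <M>_t, same for all paths

# Time change B_s = M_{T_s}, with T_s = inf{t : <M>_t > s}, on a grid of s values.
s_grid = np.array([0.5, 1.0, 1.5, 2.0])
idx = np.searchsorted(qv, s_grid)
B_s = M[:, idx]
print(np.var(B_s, axis=0))  # should be close to s_grid, as B is a Brownian motion
```

Here the quadratic variation is deterministic, so the time change is the same for every path; for a random $\sigma$ one would invert $\langle M \rangle$ path by path.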

The theorem is only true for martingales in one dimension. In two dimensions, this need not be true, because the time changes needed for the horizontal and vertical components may not agree. However, on the example sheet, we see that the holomorphic image of a Brownian motion is still a Brownian motion up to a time change.

Lemma. $M$ is constant on $[a, b]$ iff $\langle M \rangle$ is constant on $[a, b]$.

Proof. It is clear that if $M$ is constant, then so is $\langle M \rangle$. To prove the converse, by continuity, it suffices to prove that for any fixed $a < b$,
$$\{M_t = M_a \text{ for all } t \in [a, b]\} \supseteq \{\langle M \rangle_b = \langle M \rangle_a\}$$
almost surely. We set $N_t = M_t - M_{t \wedge a}$. Then $\langle N \rangle_t = \langle M \rangle_t - \langle M \rangle_{t \wedge a}$. Define
$$T_\varepsilon = \inf\{t \geq 0 : \langle N \rangle_t \geq \varepsilon\}.$$
Then since $N^2 - \langle N \rangle$ is a local martingale, we know that
$$\mathbb{E}(N^2_{t \wedge T_\varepsilon}) = \mathbb{E}(\langle N \rangle_{t \wedge T_\varepsilon}) \leq \varepsilon.$$
Now observe that on the event $\{\langle M \rangle_b = \langle M \rangle_a\}$, we have $\langle N \rangle_b = 0$, and hence $T_\varepsilon \geq b$. So for $t \in [a, b]$, we have
$$\mathbb{E}\big( 1_{\{\langle M \rangle_b = \langle M \rangle_a\}} N_t^2 \big) = \mathbb{E}\big( 1_{\{\langle M \rangle_b = \langle M \rangle_a\}} N^2_{t \wedge T_\varepsilon} \big) \leq \mathbb{E}(\langle N \rangle_{t \wedge T_\varepsilon}) \leq \varepsilon.$$
Since $\varepsilon > 0$ was arbitrary, $N_t = 0$ almost surely on this event.

3.7 Girsanov's theorem

Girsanov's theorem tells us what happens to our (semi-)martingales when we change the measure on our space. We first look at a simple example, where we perform a shift.


Example. Let X ∼ N(0,C) be an n-dimensional centered Gaussian with n −1 positive definite covariance C = (Cij)i,j=1. Put M = C . Then for any function f, we have  1/2 Z M − 1 (x,Mx) Ef(X) = det f(x)e 2 dx. 2π n R Now fix an a ∈ Rn. The distribution of X + a then satisfies  1/2 Z M − 1 (x−a,M(x−a)) Ef(X + a) = det f(x)e 2 dx = E[Zf(X)], 2π n R where − 1 (a,Ma)+(x,Ma) Z = Z(x) = e 2 . Thus, if P denotes the distribution of X, then the measure Q with d Q = Z dP is that of N(a, C) vector. Example. We can extend the above example to Brownian motion. Let B be a Brownian motion with B0 = 0, and h : [0, ∞) → R a deterministic function. We then want to understand the distribution of Bt + h. Fix a finite sequence of times 0 = t0 < t1 < ··· < tn. Then we know that n (Bti )i=1 is a centered Gaussian random variable. Thus, if f(B) = f(Bt1 ,...,Btn ) is a function, then

$$\mathbb{E}(f(B)) = c \int_{\mathbb{R}^n} f(x) \exp\left(-\frac{1}{2} \sum_{i=1}^n \frac{(x_i - x_{i-1})^2}{t_i - t_{i-1}}\right) dx_1 \cdots dx_n.$$
Thus, after a shift, we get

$$\mathbb{E}(f(B + h)) = \mathbb{E}(Z f(B)),$$
$$Z = \exp\left(-\frac{1}{2} \sum_{i=1}^n \frac{(h_{t_i} - h_{t_{i-1}})^2}{t_i - t_{i-1}} + \sum_{i=1}^n \frac{(h_{t_i} - h_{t_{i-1}})(B_{t_i} - B_{t_{i-1}})}{t_i - t_{i-1}}\right).$$
In general, we are interested in what happens when we change the measure by an exponential:

Definition (Stochastic exponential). Let $M$ be a continuous local martingale. Then the stochastic exponential (or Doléans–Dade exponential) of $M$ is
$$\mathcal{E}(M)_t = e^{M_t - \frac{1}{2}\langle M \rangle_t}.$$
The point of introducing the quadratic variation term is the following:

Proposition. Let $M$ be a continuous local martingale with $M_0 = 0$. Then $\mathcal{E}(M) = Z$ satisfies
$$dZ_t = Z_t\, dM_t, \quad \text{i.e.} \quad Z_t = 1 + \int_0^t Z_s\, dM_s.$$
In particular, $\mathcal{E}(M)$ is a continuous local martingale. Moreover, if $\langle M \rangle$ is uniformly bounded, then $\mathcal{E}(M)$ is a uniformly integrable martingale.


There is a more general condition for the final property, namely Novikov’s condition, but we will not go into that.

Proof. By Itô's formula with $X = M - \frac{1}{2}\langle M \rangle$, we have
$$dZ_t = Z_t\, d\left(M_t - \frac{1}{2}\langle M \rangle_t\right) + \frac{1}{2} Z_t\, d\langle M \rangle_t = Z_t\, dM_t.$$
Since $M$ is a continuous local martingale, so is $\int Z_s\, dM_s$. So $Z$ is a continuous local martingale.

Now suppose $\langle M \rangle_\infty \leq b < \infty$. Then
$$\mathbb{P}\left(\sup_{t \geq 0} M_t \geq a\right) = \mathbb{P}\left(\sup_{t \geq 0} M_t \geq a,\ \langle M \rangle_\infty \leq b\right) \leq e^{-a^2/2b},$$
where the final inequality is an exercise on the third example sheet, and is true for general continuous local martingales. So we get
$$\mathbb{E}\left[\exp\left(\sup_t M_t\right)\right] = \int_0^\infty \mathbb{P}(\exp(\sup M_t) \geq \lambda)\, d\lambda = \int_0^\infty \mathbb{P}(\sup M_t \geq \log \lambda)\, d\lambda \leq 1 + \int_1^\infty e^{-(\log \lambda)^2/2b}\, d\lambda < \infty.$$
Since $\langle M \rangle \geq 0$, we know that
$$\sup_{t \geq 0} \mathcal{E}(M)_t \leq \exp\left(\sup_{t \geq 0} M_t\right).$$
So $\mathcal{E}(M)$ is dominated by an integrable random variable, hence is a uniformly integrable martingale.
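The martingale property of the stochastic exponential is easy to check by simulation. Taking $M = B$ a Brownian motion on $[0, 1]$ (so $\langle M \rangle_t = t$), the process $\mathcal{E}(B)_t = e^{B_t - t/2}$ should have mean $1$ at every time. A minimal numerical sketch; the grid size and sample count are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)

n_paths, n_steps, T = 100_000, 200, 1.0
dt = T / n_steps

# simulate Brownian paths B on [0, 1]
dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B = np.cumsum(dB, axis=1)
t = dt * np.arange(1, n_steps + 1)

# stochastic exponential E(B)_t = exp(B_t - t/2), since <B>_t = t
E = np.exp(B - 0.5 * t)

# E[E(B)_t] should be 1 for every t: a martingale started at 1
means = E.mean(axis=0)
print(means[[49, 99, 199]])   # all close to 1
```

Note that simulating $e^{B_t}$ without the $-t/2$ correction would instead give mean $e^{t/2}$, which is exactly the drift the quadratic variation term removes.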

Theorem (Girsanov's theorem). Let $M$ be a continuous local martingale with $M_0 = 0$. Suppose that $\mathcal{E}(M)$ is a uniformly integrable martingale. Define a new probability measure
$$d\mathbb{Q} = \mathcal{E}(M)_\infty\, d\mathbb{P}.$$
Let $X$ be a continuous local martingale with respect to $\mathbb{P}$. Then $X - \langle X, M \rangle$ is a continuous local martingale with respect to $\mathbb{Q}$.

Proof. Define the stopping time
$$T_n = \inf\{t \geq 0 : |X_t - \langle X, M \rangle_t| \geq n\};$$
then $\mathbb{P}(T_n \to \infty) = 1$ by continuity. Since $\mathbb{Q}$ is absolutely continuous with respect to $\mathbb{P}$, we know that $\mathbb{Q}(T_n \to \infty) = 1$. Thus it suffices to show that $X^{T_n} - \langle X^{T_n}, M \rangle$ is a continuous martingale for any $n$. Let
$$Y = X^{T_n} - \langle X^{T_n}, M \rangle, \quad Z = \mathcal{E}(M).$$

Claim. ZY is a continuous local martingale with respect to P.


We use the product rule to compute
$$d(ZY)_t = Y_t\, dZ_t + Z_t\, dY_t + d\langle Y, Z \rangle_t = Y_t Z_t\, dM_t + Z_t\left(dX^{T_n}_t - d\langle X^{T_n}, M \rangle_t\right) + Z_t\, d\langle M, X^{T_n} \rangle_t = Y_t Z_t\, dM_t + Z_t\, dX^{T_n}_t.$$
So we see that $ZY$ is a stochastic integral with respect to a continuous local martingale. Thus $ZY$ is a continuous local martingale.

Claim. $ZY$ is uniformly integrable.

Since $Z$ is a uniformly integrable martingale, $\{Z_T : T \text{ a stopping time}\}$ is uniformly integrable. Since $Y$ is bounded, $\{Z_T Y_T : T \text{ a stopping time}\}$ is also uniformly integrable. So $ZY$ is a true martingale (with respect to $\mathbb{P}$).

Claim. $Y$ is a martingale with respect to $\mathbb{Q}$.

By Bayes' rule for conditional expectations, using that $ZY$ and $Z$ are uniformly integrable martingales,
$$\mathbb{E}_{\mathbb{Q}}(Y_t - Y_s \mid \mathcal{F}_s) = Z_s^{-1}\, \mathbb{E}_{\mathbb{P}}(Z_\infty Y_t - Z_\infty Y_s \mid \mathcal{F}_s) = Z_s^{-1}\, \mathbb{E}_{\mathbb{P}}(Z_t Y_t - Z_s Y_s \mid \mathcal{F}_s) = 0.$$

Note that the quadratic variation does not change under the change of measure, since
$$\langle X - \langle X, M \rangle \rangle_t = \langle X \rangle_t = \lim_{n \to \infty} \sum_{i=1}^{\lfloor 2^n t \rfloor} (X_{i 2^{-n}} - X_{(i-1) 2^{-n}})^2 \quad \text{a.s.}$$
along a subsequence, and almost sure limits are unaffected by passing to the equivalent measure $\mathbb{Q}$.
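A simple numerical illustration of Girsanov's theorem: take $M_t = \theta B_{t \wedge 1}$ for a constant $\theta$, so $\mathcal{E}(M)_\infty = \exp(\theta B_1 - \theta^2/2)$. Under $d\mathbb{Q} = \mathcal{E}(M)_\infty\, d\mathbb{P}$, the process $B_t - \theta t$ is a Brownian motion, so $B_1$ should be $N(\theta, 1)$ under $\mathbb{Q}$. A sketch, with the arbitrary choice $\theta = 0.7$:

```python
import numpy as np

rng = np.random.default_rng(2)
theta, n = 0.7, 500_000

# B_1 under P, and the Girsanov density Z = E(theta * B)_1
B1 = rng.normal(0.0, 1.0, size=n)
Z = np.exp(theta * B1 - 0.5 * theta**2)

# Q-expectations are P-expectations weighted by Z
mean_Q = np.mean(Z * B1)                  # should be theta
var_Q = np.mean(Z * (B1 - mean_Q) ** 2)   # should be 1
print(mean_Q, var_Q)
```

In other words, reweighting by the stochastic exponential adds drift $\theta$ without changing the quadratic variation, exactly as the theorem and the note above predict.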


4 Stochastic differential equations

4.1 Existence and uniqueness of solutions

After all this work, we can return to the problem we described in the introduction. We wanted to make sense of equations of the form

$$\dot{x}(t) = F(x(t)) + \eta(t),$$
where $\eta(t)$ is Gaussian white noise. We can now interpret this equation as saying

$$dX_t = F(X_t)\, dt + dB_t,$$

or equivalently, in integral form,

$$X_t - X_0 = \int_0^t F(X_s)\, ds + B_t.$$
In general, we can make the following definition:

Definition (Stochastic differential equation). Let $d, m \in \mathbb{N}$, and let $b : \mathbb{R}_+ \times \mathbb{R}^d \to \mathbb{R}^d$, $\sigma : \mathbb{R}_+ \times \mathbb{R}^d \to \mathbb{R}^{d \times m}$ be locally bounded (and measurable). A solution to the stochastic differential equation $E(\sigma, b)$ given by

dXt = b(t, Xt) dt + σ(t, Xt) dBt

consists of

(i) a filtered probability space (Ω, F, (Ft), P) obeying the usual conditions;

(ii) an m-dimensional Brownian motion B with B0 = 0; and

(iii) an $(\mathcal{F}_t)$-adapted continuous process $X$ with values in $\mathbb{R}^d$ such that
$$X_t = X_0 + \int_0^t \sigma(s, X_s)\, dB_s + \int_0^t b(s, X_s)\, ds.$$
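In practice, solutions to such equations are approximated by the Euler–Maruyama scheme, which replaces the integrals in (iii) by left-endpoint Riemann sums over a grid, with $dB$ replaced by Gaussian increments of variance $\Delta t$. A minimal sketch for $d = m = 1$; the coefficients $b(t, x) = -x$, $\sigma(t, x) = 1$ are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)

def euler_maruyama(b, sigma, x0, T, n_steps, rng):
    """Approximate X_t = X_0 + int sigma(s, X) dB + int b(s, X) ds on a grid."""
    dt = T / n_steps
    X = np.empty(n_steps + 1)
    X[0] = x0
    for k in range(n_steps):
        t = k * dt
        dB = rng.normal(0.0, np.sqrt(dt))   # Brownian increment
        X[k + 1] = X[k] + b(t, X[k]) * dt + sigma(t, X[k]) * dB
    return X

path = euler_maruyama(lambda t, x: -x, lambda t, x: 1.0,
                      x0=2.0, T=5.0, n_steps=5000, rng=rng)
print(path[0], path[-1])
```

This scheme converges (strongly, with rate $1/2$) under the Lipschitz conditions introduced below, which is one practical reason to care about them.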

If $X_0 = x \in \mathbb{R}^d$, then we say $X$ is a (weak) solution to $E_x(\sigma, b)$. It is a strong solution if it is adapted with respect to the canonical filtration of $B$.

Our goal is to prove existence and uniqueness of solutions to a general class of SDEs. There can be several notions of uniqueness:

Definition (Uniqueness of solutions). For the stochastic differential equation $E(\sigma, b)$, we say there is

– uniqueness in law if for every $x \in \mathbb{R}^d$, all solutions to $E_x(\sigma, b)$ have the same distribution.

– pathwise uniqueness if, when $(\Omega, \mathcal{F}, (\mathcal{F}_t), \mathbb{P})$ and $B$ are fixed, any two solutions $X, X'$ with $X_0 = X_0'$ are indistinguishable.

These two notions are not equivalent, as the following example shows:


Example (Tanaka). Consider the stochastic differential equation
$$dX_t = \mathrm{sgn}(X_t)\, dB_t, \quad X_0 = x,$$
where
$$\mathrm{sgn}(x) = \begin{cases} +1 & x > 0 \\ -1 & x \leq 0 \end{cases}.$$
This has a weak solution which is unique in law, but pathwise uniqueness fails.

To see the existence of solutions, let $X$ be a one-dimensional Brownian motion with $X_0 = x$, and set
$$B_t = \int_0^t \mathrm{sgn}(X_s)\, dX_s,$$
which is well-defined because $\mathrm{sgn}(X_s)$ is previsible and left-continuous. Then we have
$$x + \int_0^t \mathrm{sgn}(X_s)\, dB_s = x + \int_0^t \mathrm{sgn}(X_s)^2\, dX_s = x + X_t - X_0 = X_t.$$
So it remains to show that $B$ is a Brownian motion. We already know that $B$ is a continuous local martingale, so by Lévy's characterization, it suffices to show its quadratic variation is $t$. We simply compute
$$\langle B \rangle_t = \int_0^t \mathrm{sgn}(X_s)^2\, d\langle X \rangle_s = t.$$
So there is weak existence. The same argument shows that any solution is a Brownian motion, so we have uniqueness in law.

Finally, observe that if $x = 0$ and $X$ is a solution, then $-X$ is also a solution with the same Brownian motion. Indeed,
$$-X_t = -\int_0^t \mathrm{sgn}(X_s)\, dB_s = \int_0^t \mathrm{sgn}(-X_s)\, dB_s + 2\int_0^t 1_{X_s = 0}\, dB_s,$$
where the second term vanishes, since it is a continuous local martingale with quadratic variation $\int_0^t 1_{X_s = 0}\, ds = 0$. So pathwise uniqueness does not hold.

In the other direction, however, it turns out pathwise uniqueness implies uniqueness in law.
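The algebra behind the Tanaka example is transparent in a discretization: defining $\Delta B_k = \mathrm{sgn}(X_k)\,\Delta X_k$ from a Brownian path $X$, both $X$ and $-X$ reconstruct themselves from the same $B$, since $\mathrm{sgn}(X_k)^2 = 1$. The only grid point where $X$ is exactly zero is $k = 0$, and it contributes precisely the correction term $2 \cdot 1_{X = 0}\, dB$. A sketch:

```python
import numpy as np

rng = np.random.default_rng(4)
sgn = lambda x: np.where(x > 0, 1.0, -1.0)   # sgn(0) = -1, as in the notes

# Brownian path X with X_0 = 0 on a grid of mesh dt
n, dt = 10_000, 1e-4
dX = rng.normal(0.0, np.sqrt(dt), size=n)
X = np.concatenate([[0.0], np.cumsum(dX)])

# driving noise: dB_k = sgn(X_k) dX_k (left-endpoint discretization)
dB = sgn(X[:-1]) * dX

# X solves dX = sgn(X) dB exactly on the grid, since sgn(X)^2 = 1
X_rec = np.cumsum(sgn(X[:-1]) * dB)
err_X = np.max(np.abs(X[1:] - X_rec))

# -X solves the same equation up to the 2 * 1_{X=0} dB term, felt only at
# k = 0, the single grid point where X is exactly zero
negX_rec = np.cumsum(sgn(-X[:-1]) * dB)
err_negX = np.max(np.abs(-X[1:] + 2 * dX[0] - negX_rec))

print(err_X, err_negX)   # both are zero up to floating point
```

So with the same driving noise $B$ we get two distinct solutions, which is exactly the failure of pathwise uniqueness.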

Theorem (Yamada–Watanabe). Assume weak existence and pathwise uniqueness hold. Then

(i) Uniqueness in law holds.

(ii) For every $(\Omega, \mathcal{F}, (\mathcal{F}_t), \mathbb{P})$ and $B$ and any $x \in \mathbb{R}^d$, there is a unique strong solution to $E_x(\sigma, b)$.

We will not prove this, since we will not actually need it. The key theorem we are now heading for is the existence and uniqueness of solutions to SDEs under reasonable conditions. As in the case of ODEs, we need the following Lipschitz condition:


Definition (Lipschitz coefficients). The coefficients $b : \mathbb{R}_+ \times \mathbb{R}^d \to \mathbb{R}^d$, $\sigma : \mathbb{R}_+ \times \mathbb{R}^d \to \mathbb{R}^{d \times m}$ are Lipschitz in $x$ if there exists a constant $K > 0$ such that for all $t \geq 0$ and $x, y \in \mathbb{R}^d$, we have
$$|b(t, x) - b(t, y)| \leq K|x - y|, \quad |\sigma(t, x) - \sigma(t, y)| \leq K|x - y|.$$

Theorem. Assume $b, \sigma$ are Lipschitz in $x$. Then there is pathwise uniqueness for $E(\sigma, b)$, and for every $(\Omega, \mathcal{F}, (\mathcal{F}_t), \mathbb{P})$ satisfying the usual conditions, every $(\mathcal{F}_t)$-Brownian motion $B$, and every $x \in \mathbb{R}^d$, there exists a (unique) strong solution to $E_x(\sigma, b)$.

Proof. To simplify notation, we assume $m = d = 1$.

We first prove pathwise uniqueness. Suppose $X, X'$ are two solutions with $X_0 = X_0'$. We will show that $\mathbb{E}[(X_t - X_t')^2] = 0$. To control our variables, define for each $n$ the stopping time
$$S = \inf\{t \geq 0 : |X_t| \geq n \text{ or } |X_t'| \geq n\}.$$
By continuity, $S \to \infty$ as $n \to \infty$. We also fix a deterministic time $T > 0$. Then whenever $t \in [0, T]$, we can bound, using the inequality $(a + b)^2 \leq 2a^2 + 2b^2$,

$$\mathbb{E}\big((X_{t \wedge S} - X'_{t \wedge S})^2\big) \leq 2\,\mathbb{E}\left[\left(\int_0^{t \wedge S} (\sigma(s, X_s) - \sigma(s, X'_s))\, dB_s\right)^2\right] + 2\,\mathbb{E}\left[\left(\int_0^{t \wedge S} (b(s, X_s) - b(s, X'_s))\, ds\right)^2\right].$$

We can apply the Lipschitz bound (and Cauchy–Schwarz) to the second term immediately, while we can simplify the first term using the (corollary of the) Itô isometry:

$$\mathbb{E}\left[\left(\int_0^{t \wedge S} (\sigma(s, X_s) - \sigma(s, X'_s))\, dB_s\right)^2\right] = \mathbb{E}\left[\int_0^{t \wedge S} (\sigma(s, X_s) - \sigma(s, X'_s))^2\, ds\right].$$

So using the Lipschitz bound, we have
$$\mathbb{E}\big((X_{t \wedge S} - X'_{t \wedge S})^2\big) \leq 2K^2(1 + T)\,\mathbb{E}\left[\int_0^{t \wedge S} |X_s - X'_s|^2\, ds\right] \leq 2K^2(1 + T) \int_0^t \mathbb{E}\big(|X_{s \wedge S} - X'_{s \wedge S}|^2\big)\, ds.$$

We now use Grönwall's lemma:

Lemma. Let $h(t)$ be a function such that
$$h(t) \leq c \int_0^t h(s)\, ds$$
for some constant $c$. Then $h(t) \leq h(0)e^{ct}$.


Applying this to
$$h(t) = \mathbb{E}\big((X_{t \wedge S} - X'_{t \wedge S})^2\big),$$
we deduce that $h(t) \leq h(0)e^{ct} = 0$. So we know that
$$\mathbb{E}\big(|X_{t \wedge S} - X'_{t \wedge S}|^2\big) = 0$$
for every $t \in [0, T]$. Taking $n \to \infty$ and $T \to \infty$ gives pathwise uniqueness.

We next prove existence of solutions. We fix $(\Omega, \mathcal{F}, (\mathcal{F}_t), \mathbb{P})$ and $B$, and define

$$F(X)_t = X_0 + \int_0^t \sigma(s, X_s)\, dB_s + \int_0^t b(s, X_s)\, ds.$$

Then $X$ is a solution to $E_x(\sigma, b)$ iff $F(X) = X$ and $X_0 = x$. To find a fixed point, we use Picard iteration. We fix $T > 0$, and define the $T$-norm of a continuous adapted process $X$ as
$$\|X\|_T = \mathbb{E}\left(\sup_{t \leq T} |X_t|^2\right)^{1/2}.$$

In particular, if $X$ is a martingale, then this is the same as the norm on the space of $L^2$-bounded martingales, by Doob's inequality. Then
$$\mathcal{B} = \{X : \Omega \times [0, T] \to \mathbb{R} : \|X\|_T < \infty\}$$
is a Banach space.

Claim. $\|F(0)\|_T < \infty$, and
$$\|F(X) - F(Y)\|_T^2 \leq (2T + 8)K^2 \int_0^T \|X - Y\|_t^2\, dt.$$

We first see how this claim implies the theorem. First observe that the claim implies $F$ indeed maps $\mathcal{B}$ into itself. We can then define a sequence of processes $X^i$ by
$$X^0_t = x, \quad X^{i+1} = F(X^i).$$
Then, writing $C_T = (2T + 8)K^2$, we have
$$\|X^{i+1} - X^i\|_T^2 \leq C_T \int_0^T \|X^i - X^{i-1}\|_t^2\, dt \leq \cdots \leq \frac{C_T^i T^i}{i!} \|X^1 - X^0\|_T^2.$$
So we find that
$$\sum_{i=1}^\infty \|X^i - X^{i-1}\|_T < \infty$$
for all $T$. So $X^i$ converges almost surely and uniformly on $[0, T]$ to some $X$, and $F(X) = X$. We then take $T \to \infty$ and we are done.

To prove the claim, we write

$$\|F(0)\|_T \leq |X_0| + \sup_{t \leq T}\left|\int_0^t b(s, 0)\, ds\right| + \left\|\int_0^t \sigma(s, 0)\, dB_s\right\|_T.$$


The first two terms are constant, and we can bound the last by Doob's inequality and the Itô isometry:
$$\left\|\int_0^t \sigma(s, 0)\, dB_s\right\|_T^2 \leq 4\,\mathbb{E}\left[\left(\int_0^T \sigma(s, 0)\, dB_s\right)^2\right] = 4\int_0^T \sigma(s, 0)^2\, ds.$$
To prove the second part, we use
$$\|F(X) - F(Y)\|_T^2 \leq 2\,\mathbb{E}\left(\sup_{t \leq T}\left|\int_0^t (b(s, X_s) - b(s, Y_s))\, ds\right|^2\right) + 2\,\mathbb{E}\left(\sup_{t \leq T}\left|\int_0^t (\sigma(s, X_s) - \sigma(s, Y_s))\, dB_s\right|^2\right).$$
We can bound the first term with Cauchy–Schwarz by
$$T\,\mathbb{E}\left(\int_0^T |b(s, X_s) - b(s, Y_s)|^2\, ds\right) \leq TK^2 \int_0^T \|X - Y\|_t^2\, dt,$$
and the second term with Doob's inequality by
$$4\,\mathbb{E}\left(\int_0^T |\sigma(s, X_s) - \sigma(s, Y_s)|^2\, ds\right) \leq 4K^2 \int_0^T \|X - Y\|_t^2\, dt.$$
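The Picard iteration in the proof can be carried out numerically on a fixed Brownian path: discretize the map $F$ on a grid and iterate it. A sketch with the arbitrary (Lipschitz) choices $b(x) = -x$ and $\sigma(x) = \frac{1}{2}\cos x$:

```python
import numpy as np

rng = np.random.default_rng(5)

T, n = 0.25, 500
dt = T / n
dB = rng.normal(0.0, np.sqrt(dt), size=n)   # one fixed Brownian path

b = lambda x: -x                     # drift, Lipschitz with K = 1
sigma = lambda x: 0.5 * np.cos(x)    # diffusion, Lipschitz with K = 1/2

def F(X):
    """Discretized Picard map F(X)_t = x + int sigma(X) dB + int b(X) ds."""
    incr = sigma(X[:-1]) * dB + b(X[:-1]) * dt
    return np.concatenate([[X[0]], X[0] + np.cumsum(incr)])

X = np.full(n + 1, 1.0)              # X^0 = x = 1
diffs = []
for _ in range(20):
    X_new = F(X)
    diffs.append(np.max(np.abs(X_new - X)))
    X = X_new
print(diffs[0], diffs[-1])           # successive iterates converge rapidly
```

The factorial term $C_T^i T^i / i!$ in the proof is visible here as the very fast decay of the successive differences.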

4.2 Examples of stochastic differential equations

Example (The Ornstein–Uhlenbeck process). Let $\lambda > 0$. Then the Ornstein–Uhlenbeck process is the solution to
$$dX_t = -\lambda X_t\, dt + dB_t.$$
The solution exists by the previous theorem, but we can also find one explicitly. By Itô's formula applied to $e^{\lambda t} X_t$, we get
$$d(e^{\lambda t} X_t) = e^{\lambda t}\, dX_t + \lambda e^{\lambda t} X_t\, dt = e^{\lambda t}\, dB_t.$$
So we find that
$$X_t = e^{-\lambda t} X_0 + \int_0^t e^{-\lambda(t - s)}\, dB_s.$$
Observe that the integrand is deterministic, so we can in fact interpret this as a Wiener integral.

Fact. If $X_0 = x \in \mathbb{R}$ is fixed, then $(X_t)$ is a Gaussian process, i.e. $(X_{t_i})_{i=1}^n$ is jointly Gaussian for all $t_1 < \cdots < t_n$. Any Gaussian process is determined by its mean and covariance, and in this case we have
$$\mathbb{E} X_t = e^{-\lambda t} x, \quad \mathrm{cov}(X_t, X_s) = \frac{1}{2\lambda}\left(e^{-\lambda|t - s|} - e^{-\lambda|t + s|}\right).$$

Proof. We only have to compute the covariance. By the Itô isometry, we have
$$\mathbb{E}\big((X_t - \mathbb{E}X_t)(X_s - \mathbb{E}X_s)\big) = \mathbb{E}\left(\int_0^t e^{-\lambda(t - u)}\, dB_u \int_0^s e^{-\lambda(s - u)}\, dB_u\right) = e^{-\lambda(t + s)} \int_0^{t \wedge s} e^{2\lambda u}\, du.$$


In particular,
$$X_t \sim N\left(e^{-\lambda t} x,\ \frac{1 - e^{-2\lambda t}}{2\lambda}\right) \to N\left(0, \frac{1}{2\lambda}\right) \quad \text{as } t \to \infty.$$

Fact. If $X_0 \sim N(0, \frac{1}{2\lambda})$ independently of $B$, then $(X_t)$ is a centered Gaussian process with stationary covariance, i.e. the covariance depends only on the time difference:
$$\mathrm{cov}(X_t, X_s) = \frac{1}{2\lambda} e^{-\lambda|t - s|}.$$

The difference from the deterministic-start case is the contribution of the random initial condition: the term $e^{-\lambda t} X_0$ now has covariance $e^{-\lambda(t + s)}/(2\lambda)$, which exactly cancels the $-e^{-\lambda|t + s|}$ term. This is a very nice example where we can explicitly understand the long-time behaviour of the SDE. In general, this is non-trivial.
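Both facts are easy to check by simulation. Rather than an Euler scheme, one can use the exact Gaussian transition $X_{t+\Delta} = e^{-\lambda\Delta} X_t + \sqrt{(1 - e^{-2\lambda\Delta})/(2\lambda)}\, \xi$ with $\xi \sim N(0,1)$, which follows from the explicit solution above. A sketch with the arbitrary choice $\lambda = 1/2$:

```python
import numpy as np

rng = np.random.default_rng(6)

lam, dt, n_steps, n_paths = 0.5, 0.1, 100, 200_000

# start in the stationary distribution N(0, 1/(2*lam))
X = rng.normal(0.0, np.sqrt(1 / (2 * lam)), size=n_paths)
X0 = X.copy()

# exact OU transition: X_{t+dt} = exp(-lam*dt) X_t + noise of the right variance
a = np.exp(-lam * dt)
s = np.sqrt((1 - np.exp(-2 * lam * dt)) / (2 * lam))
for _ in range(n_steps):
    X = a * X + s * rng.normal(size=n_paths)

t = n_steps * dt                 # t = 10
var_X = np.var(X)                # should stay 1/(2*lam) = 1
cov_0t = np.mean(X0 * X)         # should be exp(-lam*t)/(2*lam)
print(var_X, cov_0t, np.exp(-lam * t) / (2 * lam))
```

The variance staying put is exactly the stationarity of $N(0, \frac{1}{2\lambda})$, and the decaying covariance matches the stationary covariance formula.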

Dyson Brownian motion

Let $\mathcal{H}_N$ be the inner product space of real symmetric $N \times N$ matrices with inner product $\langle H, K \rangle = N \operatorname{Tr}(HK)$ for $H, K \in \mathcal{H}_N$. Let $H^1, \ldots, H^{\dim(\mathcal{H}_N)}$ be an orthonormal basis of $\mathcal{H}_N$.

Definition (Gaussian orthogonal ensemble). The Gaussian orthogonal ensemble $\mathrm{GOE}_N$ is the standard Gaussian measure on $\mathcal{H}_N$, i.e. $H \sim \mathrm{GOE}_N$ if
$$H = \sum_{i=1}^{\dim \mathcal{H}_N} X^i H^i,$$
where the $X^i$ are iid standard normals.

We now replace each $X^i$ by an Ornstein–Uhlenbeck process with $\lambda = \frac{1}{2}$. Then $\mathrm{GOE}_N$ is invariant under this dynamics.

Theorem. The eigenvalues $\lambda^1_t \leq \cdots \leq \lambda^N_t$ satisfy
$$d\lambda^i_t = \left(-\frac{\lambda^i_t}{2} + \frac{1}{N}\sum_{j \neq i} \frac{1}{\lambda^i_t - \lambda^j_t}\right) dt + \sqrt{\frac{2}{N\beta}}\, dB^i_t.$$
Here $\beta = 1$; if we replace symmetric matrices by Hermitian ones, we get $\beta = 2$, and if we replace them by symplectic ones, we get $\beta = 4$.

This follows from Itô's formula and formulas for the derivatives of eigenvalues.

Example (Geometric Brownian motion). Fix $\sigma > 0$ and $r \in \mathbb{R}$. Then geometric Brownian motion is the solution to
$$dX_t = \sigma X_t\, dB_t + r X_t\, dt.$$
Applying Itô's formula to $\log X_t$, we find that
$$X_t = X_0 \exp\left(\sigma B_t + \left(r - \frac{\sigma^2}{2}\right) t\right).$$
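From the closed form one can check, for instance, that $\mathbb{E} X_t = X_0 e^{rt}$: the Itô correction $-\sigma^2/2$ is exactly compensated by the mean of the lognormal factor. A Monte Carlo sketch, with arbitrary parameter choices:

```python
import numpy as np

rng = np.random.default_rng(7)

sigma, r, x0, t = 0.3, 0.05, 1.0, 2.0

# closed form X_t = x0 * exp(sigma * B_t + (r - sigma^2/2) * t)
B_t = rng.normal(0.0, np.sqrt(t), size=500_000)
X_t = x0 * np.exp(sigma * B_t + (r - 0.5 * sigma**2) * t)

mean_est = X_t.mean()
mean_exact = x0 * np.exp(r * t)
print(mean_est, mean_exact)
```

Simulating $x_0 e^{\sigma B_t + rt}$ instead, i.e. forgetting the correction, would overshoot the mean by a factor $e^{\sigma^2 t / 2}$.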


Example (). Let B = (B1,...,Bd) be a d-dimensional Brownian motion. Then Xt = |Bt| satisfies the stochastic differential equation d − 1 dXt = dt + dBt 2Xt

if t < inf{t ≥ 0 : Xt = 0}.

4.3 Representations of solutions to PDEs

Recall that in Advanced Probability, we learnt that we can represent the solution to Laplace's equation via Brownian motion: if $D$ is a suitably nice domain and $g : \partial D \to \mathbb{R}$ is a function, then the solution to Laplace's equation on $D$ with boundary condition $g$ is given by
$$u(x) = \mathbb{E}_x[g(B_T)],$$
where $T$ is the first hitting time of the boundary $\partial D$.

A similar statement is that if we want to solve the heat equation
$$\frac{\partial u}{\partial t} = \nabla^2 u$$
with initial condition $u(x, 0) = u_0(x)$, then we can write the solution as
$$u(x, t) = \mathbb{E}_x[u_0(\sqrt{2} B_t)].$$
This is just a fancy way to say that the Green's function for the heat equation is a Gaussian, but it is a good way to think about it nevertheless.

In general, we would like to associate PDEs to certain stochastic processes. Recall that a stochastic differential equation is generally of the form

$$dX_t = b(X_t)\, dt + \sigma(X_t)\, dB_t$$

for some $b : \mathbb{R}^d \to \mathbb{R}^d$ and $\sigma : \mathbb{R}^d \to \mathbb{R}^{d \times m}$ which are measurable and locally bounded. Here we assume these functions have no time dependence. We can then associate to this a differential operator $L$ defined by
$$L = \frac{1}{2} \sum_{i,j} a_{ij}\, \partial_i \partial_j + \sum_i b_i\, \partial_i, \quad \text{where } a = \sigma\sigma^T.$$

Example. If $b = 0$ and $\sigma = \sqrt{2}\, I$, then $L = \Delta$ is the standard Laplacian.

The basic computation is the following result, which is a standard application of Itô's formula:

Proposition. Let $x \in \mathbb{R}^d$, and let $X$ be a solution to $E_x(\sigma, b)$. Then for every $f : \mathbb{R}_+ \times \mathbb{R}^d \to \mathbb{R}$ that is $C^1$ in $\mathbb{R}_+$ and $C^2$ in $\mathbb{R}^d$, the process
$$M^f_t = f(t, X_t) - f(0, X_0) - \int_0^t \left(\frac{\partial}{\partial s} + L\right) f(s, X_s)\, ds$$
is a continuous local martingale.


We first apply this to the Dirichlet–Poisson problem, which is essentially to solve $-Lu = f$. To be precise, let $U \subseteq \mathbb{R}^d$ be non-empty, bounded and open, $f \in C_b(U)$ and $g \in C_b(\partial U)$. We then want to find a $u \in C^2(\bar{U}) = C^2(U) \cap C(\bar{U})$ such that
$$-Lu(x) = f(x) \quad \text{for } x \in U,$$
$$u(x) = g(x) \quad \text{for } x \in \partial U.$$
If $f = 0$, this is called the Dirichlet problem; if $g = 0$, this is called the Poisson problem.

We will have to impose the following technical condition on $a$:

Definition (Uniformly elliptic). We say $a : \bar{U} \to \mathbb{R}^{d \times d}$ is uniformly elliptic if there is a constant $c > 0$ such that for all $\xi \in \mathbb{R}^d$ and $x \in \bar{U}$, we have
$$\xi^T a(x) \xi \geq c|\xi|^2.$$
If $a$ is symmetric (which it is in our case), this is the same as asking for the smallest eigenvalue of $a$ to be bounded away from $0$.

It would be very nice if we could write down a solution to the Dirichlet–Poisson problem using a solution to $E_x(\sigma, b)$, and then simply check that it works. We can indeed do that, but it takes a bit more time than we have. Instead, we shall prove a slightly weaker result: if we happen to have a solution, it must be given by our formula involving the SDE. So we first note the following theorem without proof:

Theorem. Assume $U$ has a smooth boundary (or satisfies the exterior cone condition), $a, b$ are Hölder continuous and $a$ is uniformly elliptic. Then for every Hölder continuous $f : \bar{U} \to \mathbb{R}$ and any continuous $g : \partial U \to \mathbb{R}$, the Dirichlet–Poisson problem has a solution.

The main theorem is the following:

Theorem. Let $\sigma$ and $b$ be bounded measurable and $\sigma\sigma^T$ uniformly elliptic, and let $U \subseteq \mathbb{R}^d$ be as above. Let $u$ be a solution to the Dirichlet–Poisson problem and $X$ a solution to $E_x(\sigma, b)$ for some $x \in \mathbb{R}^d$. Define the stopping time
$$T_U = \inf\{t \geq 0 : X_t \notin U\}.$$
Then $\mathbb{E} T_U < \infty$ and
$$u(x) = \mathbb{E}_x\left(g(X_{T_U}) + \int_0^{T_U} f(X_s)\, ds\right).$$
In particular, the solution to the PDE is unique.
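This representation makes the Dirichlet problem amenable to Monte Carlo: for $L = \frac{1}{2}\Delta$ (i.e. $\sigma = I$, $b = 0$) and $f = 0$, we have $u(x) = \mathbb{E}_x[g(B_{T_U})]$. On the unit disk with boundary data $g(x_1, x_2) = x_1$, the harmonic extension is $u(x) = x_1$, which we can check against simulated exit positions of Brownian paths. A sketch (the starting point, step size and path count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(8)

def dirichlet_mc(x0, g, n_paths=20_000, dt=1e-3):
    """Estimate u(x0) = E_x0[g(B_T)], T the exit time of the unit disk."""
    x = np.tile(np.asarray(x0, float), (n_paths, 1))
    alive = np.ones(n_paths, dtype=bool)
    while alive.any():
        m = alive.sum()
        x[alive] += rng.normal(0.0, np.sqrt(dt), size=(m, 2))
        alive &= (x * x).sum(axis=1) < 1.0   # still inside the disk?
    x /= np.linalg.norm(x, axis=1, keepdims=True)  # project exits to the circle
    return np.mean(g(x))

# boundary data g(x1, x2) = x1; its harmonic extension is u(x) = x1
est = dirichlet_mc((0.3, 0.4), lambda y: y[:, 0])
print(est)   # close to u(0.3, 0.4) = 0.3
```

The small bias from overshooting the boundary by one step is of order $\sqrt{\Delta t}$, which is why the exit points are projected back onto the circle.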

Proof. Our previous proposition applies to functions defined on all of $\mathbb{R}^d$, while $u$ is defined only on $\bar{U}$. So we set
$$U_n = \left\{x \in U : \operatorname{dist}(x, \partial U) > \frac{1}{n}\right\}, \quad T_n = \inf\{t \geq 0 : X_t \notin U_n\},$$
and pick $u_n \in C^2_b(\mathbb{R}^d)$ such that $u|_{U_n} = u_n|_{U_n}$. Recalling our previous notation, let
$$M^n_t = (M^{u_n})^{T_n}_t = u_n(X_{t \wedge T_n}) - u_n(X_0) - \int_0^{t \wedge T_n} L u_n(X_s)\, ds.$$


By the proposition, this is a continuous local martingale; it is moreover bounded, hence a true martingale. Thus for $x \in U$ and $n$ large enough, the martingale property implies
$$u(x) = u_n(x) = \mathbb{E}\left(u(X_{t \wedge T_n}) - \int_0^{t \wedge T_n} Lu(X_s)\, ds\right) = \mathbb{E}\left(u(X_{t \wedge T_n}) + \int_0^{t \wedge T_n} f(X_s)\, ds\right).$$

We would be done if we could take $n \to \infty$. To do so, we first show that $\mathbb{E}[T_U] < \infty$. Note that this does not depend on $f$ and $g$. So we can take $f = 1$ and $g = 0$, and let $v$ be a solution of the corresponding problem. Then we have
$$\mathbb{E}(t \wedge T_n) = \mathbb{E}\left(-\int_0^{t \wedge T_n} Lv(X_s)\, ds\right) = v(x) - \mathbb{E}(v(X_{t \wedge T_n})).$$
Since $v$ is bounded, by dominated/monotone convergence, we can take the limit to get $\mathbb{E}(T_U) < \infty$.

Thus, we know that $t \wedge T_n \to T_U$ as $t \to \infty$ and $n \to \infty$. Since
$$\mathbb{E}\left(\int_0^{T_U} |f(X_s)|\, ds\right) \leq \|f\|_\infty\, \mathbb{E}[T_U] < \infty,$$
the dominated convergence theorem tells us
$$\mathbb{E}\left(\int_0^{t \wedge T_n} f(X_s)\, ds\right) \to \mathbb{E}\left(\int_0^{T_U} f(X_s)\, ds\right).$$

Since $u$ is continuous on $\bar{U}$, we also have
$$\mathbb{E}(u(X_{t \wedge T_n})) \to \mathbb{E}(u(X_{T_U})) = \mathbb{E}(g(X_{T_U})).$$

We can use SDEs to solve the Cauchy problem for parabolic equations as well, just like the heat equation. The problem is as follows: for $f \in C^2_b(\mathbb{R}^d)$, we want to find $u : \mathbb{R}_+ \times \mathbb{R}^d \to \mathbb{R}$ that is $C^1$ in $\mathbb{R}_+$ and $C^2$ in $\mathbb{R}^d$ such that
$$\frac{\partial u}{\partial t} = Lu \quad \text{on } \mathbb{R}_+ \times \mathbb{R}^d,$$
$$u(0, \cdot) = f \quad \text{on } \mathbb{R}^d.$$
Again we will need the following theorem:

Theorem. For every $f \in C^2_b(\mathbb{R}^d)$, there exists a solution to the Cauchy problem.

Theorem. Let $u$ be a solution to the Cauchy problem. Let $X$ be a solution to $E_x(\sigma, b)$ for $x \in \mathbb{R}^d$ and $0 \leq s \leq t$. Then
$$\mathbb{E}_x(f(X_t) \mid \mathcal{F}_s) = u(t - s, X_s).$$
In particular, $u(t, x) = \mathbb{E}_x(f(X_t))$.


In particular, this implies Xt is a continuous Markov process.

Proof. The martingale of the proposition involves $\frac{\partial}{\partial t} + L$, but the heat equation involves $\frac{\partial}{\partial t} - L$. So we set $g(s, x) = u(t - s, x)$. Then
$$\left(\frac{\partial}{\partial s} + L\right) g(s, x) = -\frac{\partial}{\partial t} u(t - s, x) + L u(t - s, x) = 0.$$
So $g(s, X_s) - g(0, X_0)$ is a martingale (boundedness is an exercise), and hence
$$u(t - s, X_s) = g(s, X_s) = \mathbb{E}(g(t, X_t) \mid \mathcal{F}_s) = \mathbb{E}(u(0, X_t) \mid \mathcal{F}_s) = \mathbb{E}(f(X_t) \mid \mathcal{F}_s).$$

There is a generalization, the Feynman–Kac formula.

Theorem (Feynman–Kac formula). Let $f \in C^2_b(\mathbb{R}^d)$ and $V \in C_b(\mathbb{R}^d)$, and suppose that $u : \mathbb{R}_+ \times \mathbb{R}^d \to \mathbb{R}$ satisfies
$$\frac{\partial u}{\partial t} = Lu + Vu \quad \text{on } \mathbb{R}_+ \times \mathbb{R}^d,$$
$$u(0, \cdot) = f \quad \text{on } \mathbb{R}^d,$$
where $Vu = V(x)u(x)$ is given by multiplication. Let $X$ be a solution to $E_x(\sigma, b)$. Then for all $t > 0$ and $x \in \mathbb{R}^d$,
$$u(t, x) = \mathbb{E}_x\left(f(X_t) \exp\left(\int_0^t V(X_s)\, ds\right)\right).$$
If $L$ is the Laplacian, then this is related to the Schrödinger equation, which is why Feynman was thinking about this.
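For constant $V$ and $L = \frac{1}{2}\Delta$ (so $X$ is a Brownian motion started at $x$), the formula can be checked directly: with $f(x) = x^2$ one has $\mathbb{E}_x[f(B_t)] = x^2 + t$, so $u(t, x) = e^{Vt}(x^2 + t)$. A Monte Carlo sketch under these assumptions, with arbitrary parameter values:

```python
import numpy as np

rng = np.random.default_rng(9)

V, t, x = -0.3, 1.5, 0.7   # constant potential, time, starting point

# X is a Brownian motion started at x; V constant makes the exponential
# weight exp(int_0^t V(X_s) ds) = exp(V*t) exact, no path needed
B_t = x + rng.normal(0.0, np.sqrt(t), size=400_000)
mc = np.mean(B_t**2 * np.exp(V * t))

exact = np.exp(V * t) * (x**2 + t)
print(mc, exact)   # Monte Carlo estimate vs closed form
```

For non-constant $V$ one would simulate full paths and approximate the integral $\int_0^t V(X_s)\, ds$ along each path, e.g. by a Riemann sum.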


Index

$(H \cdot A)_t$, 9
$H \cdot X$, 30
$L^2(M)$, 26
$L^2_{bc}(M)$, 29
$M^2$, 16
$X^T$, 11
$\mathcal{E}$, 10

Bessel process, 47
bounded variation, 7
bracket, 22
BV, 7
càdlàg, 7
càdlàg adapted process, 9
Cauchy problem, 49
covariation, 22
Dirichlet problem, 48
Dirichlet–Poisson problem, 48
Doléans–Dade exponential, 38
Doob's inequality, 16
Dubins–Schwarz theorem, 36
Dyson Brownian motion, 46
Feynman–Kac formula, 50
finite variation, 8
finite variation process, 9
Gaussian orthogonal ensemble, 46
Gaussian space, 4
Gaussian white noise, 4
geometric Brownian motion, 46
Girsanov's theorem, 39
Hahn decomposition, 6
integration by parts, 32
Itô correction, 32
Itô isometry, 27
Itô's formula, 32
Kunita–Watanabe, 23
Lebesgue–Stieltjes integral, 8
Lévy's characterization of Brownian motion, 35
Lipschitz coefficients, 43
local martingale, 12
locally bounded previsible process, 30
Ornstein–Uhlenbeck process, 45
pathwise uniqueness, 41
Poisson problem, 48
previsible process, 10
previsible σ-algebra, 10
quadratic variation, 18
reduces, 12
semi-martingale, 24
signed measure, 6
simple process, 10, 25
stochastic differential equation, 41
stochastic dominated convergence theorem, 31
stochastic exponential, 38
stochastic integral, 27, 30
stopped process, 11
strong solution, 41
Tanaka equation, 42
total variation, 6, 7
total variation process, 9
u.c.p., 17
uniformly elliptic, 48
uniformly on compact sets in probability, 17
uniqueness in law, 41
usual conditions, 11
Vitali theorem, 13
weak solution, 41
Yamada–Watanabe, 42
