
A Summary of the Cameron-Martin-Girsanov-Meyer Theorem(s)
Sergey Lototsky, USC

Cameron-Martin

Let W = W(t), t ∈ [0,T], be a standard Brownian motion on a probability space (Ω, F, P), and let µ be the measure generated by W on the space C([0,T]) of continuous functions on [0,T]:

  µ(A) = P(W ∈ A),  A ∈ B(C([0,T])).

Given a function h ∈ C([0,T]), define the measure µ_h on C([0,T]) by

  µ_h(A) = P(W + h ∈ A),  A ∈ B(C([0,T])).

Then the measure µ_h is absolutely continuous with respect to the measure µ if and only if the function h has the form h(t) = ∫_0^t h′(s) ds for some function h′ ∈ L_2((0,T)); in that case,

  (dµ_h/dµ)(x) = exp( I_{h′}(x) − (1/2) ∫_0^T |h′(t)|² dt ),  x ∈ C([0,T]).   (1)

The mapping x ↦ I_{h′}(x) is known as the Cameron-Martin functional or the Paley-Wiener integral. The original reference: Cameron, R. H.; Martin, W. T. (1944). "Transformations of Wiener Integrals under Translations". Annals of Mathematics. 45 (2): 386–396.
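Formula (1) lends itself to a quick Monte Carlo sanity check. For the shift h(t) = t (so h′ ≡ 1), the Paley-Wiener integral I_{h′}(W) = ∫_0^T h′(t) dW(t) reduces to W(T), and integrating a functional f against µ_h should give the same answer as integrating f times the density against µ. A minimal sketch, assuming a uniform time grid; the functional f, the grid size, and the seed are arbitrary illustrative choices:

```python
# Illustrative Monte Carlo check of the Cameron-Martin formula (1).
# For h(t) = t (so h'(t) = 1), I_{h'}(W) = W(T) and the density is
# exp( W(T) - T/2 ), so  E f(W + h) = E[ f(W) exp( W(T) - T/2 ) ].
import numpy as np

rng = np.random.default_rng(0)
T, n_steps, n_paths = 1.0, 200, 200_000
dt = T / n_steps

# Brownian increments and paths on the grid t_k = k*dt.
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.cumsum(dW, axis=1)

# Test functional: running maximum of the path (an arbitrary choice).
f = lambda paths: paths.max(axis=1)

h = np.linspace(dt, T, n_steps)          # h(t_k) = t_k on the grid
lhs = f(W + h).mean()                    # integral of f against mu_h

# Radon-Nikodym density from (1): exp( I_{h'}(W) - (1/2) int |h'|^2 dt ).
density = np.exp(W[:, -1] - 0.5 * T)
rhs = (f(W) * density).mean()            # E[ f(W) dmu_h/dmu ]

print(lhs, rhs)  # the two estimates agree up to Monte Carlo error
```

On the discrete grid the identity is exact in distribution, so the two printed numbers differ only by sampling noise.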

Girsanov

Let W = W(t), t ∈ [0,T], be a standard Brownian motion on a stochastic basis (Ω, F, {F_t}_{t≥0}, P), and let h = h(t), t ≥ 0, be an adapted process satisfying

  P( ∫_0^T h²(t) dt < ∞ ) = 1.

Define the process

  Z(t) = exp( ∫_0^t h(s) dW(s) − (1/2) ∫_0^t h²(s) ds ),  t ∈ [0,T].

If

  E Z(T) = 1,   (2)

then the process W̃(t) = W(t) − ∫_0^t h(s) ds is a standard Wiener process on the stochastic basis (Ω, F, {F_t}_{t≥0}, P̃), with dP̃ = Z(T) dP. The original reference: Girsanov, I. V. (1960). "On transforming a certain class of stochastic processes by absolutely continuous substitution of measures". Theory of Probability and Its Applications. 5 (3): 285–301.

Note that dZ(t) = h(t)Z(t) dW(t), Z(0) = 1, so that Z is a non-negative local martingale (and thus a supermartingale). Condition (2) means that Z is, in fact, a martingale. Novikov's condition

  E exp( (1/2) ∫_0^T h²(s) ds ) < ∞   (3)

is sufficient for (2) to hold; on the one hand, (3) is far from necessary, but, on the other hand, (3) cannot, in general, be relaxed even as far as the factor 1/2. A detailed discussion is in Section 6.2 of Statistics of Random Processes by Liptser and Shiryaev.
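For a bounded adapted integrand, Novikov's condition (3) holds automatically, and condition (2) can be observed numerically: the Euler discretization of the stochastic exponential Z is an exact discrete-time martingale, so the sample mean of Z(T) should be close to 1. A small sketch under these assumptions; the integrand h(t) = cos(W(t)) and all numerical parameters are illustrative choices:

```python
# Illustrative check of (2) for a bounded adapted integrand h(t) = cos(W(t)):
# each discrete factor exp( h dW - h^2 dt / 2 ) has conditional mean 1,
# so the sample mean of Z(T) estimates E Z(T) = 1 without bias.
import numpy as np

rng = np.random.default_rng(1)
T, n_steps, n_paths = 1.0, 200, 200_000
dt = T / n_steps

W = np.zeros(n_paths)
logZ = np.zeros(n_paths)
for _ in range(n_steps):
    h = np.cos(W)                        # adapted: uses the past of W only
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    logZ += h * dW - 0.5 * h**2 * dt     # log of the stochastic exponential
    W += dW

Z_T = np.exp(logZ)
print(Z_T.mean())  # close to 1: Z is a true martingale, not just a supermartingale
```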

An SODE version of Girsanov by Liptser and Shiryaev

Let W = W(t), t ∈ [0,T], be a standard Brownian motion on a probability space (Ω, F, P) and let b = b(t,x), σ = σ(t,x), h = h(t,x) be non-random functions such that each of the following equations

  dX = b(t,X) dt + σ(t,X) dW(t),
  dY = B(t,Y) dt + σ(t,Y) dW(t),  where B(t,x) = b(t,x) + h(t,x)σ(t,x),


has a unique strong solution on [0,T] for some non-random T < ∞. For 0 ≤ t ≤ T, define the process

  Z(t) = exp( ∫_0^t [b(s,Y(s)) − B(s,Y(s))] / σ²(s,Y(s)) dY(s) − (1/2) ∫_0^t [b²(s,Y(s)) − B²(s,Y(s))] / σ²(s,Y(s)) ds ).

If X(0) = Y(0) and Φ is a bounded functional on C([0,T]), then

  E Φ(X) = E[ Φ(Y) Z(T) ].

Note that, if we solve for dW in terms of dY, then

  Z(t) = exp( −∫_0^t h(s,Y(s)) dW(s) − (1/2) ∫_0^t h²(s,Y(s)) ds ),  t ∈ [0,T].

If we assume that dP̃ = Z(T) dP defines a new probability measure, then, by Girsanov, W̃(t) = W(t) + ∫_0^t h(s,Y(s)) ds is a standard Brownian motion on (Ω, F, P̃). On the other hand,

  dY = B(t,Y) dt + σ(t,Y) dW = b(t,Y) dt + σ(t,Y)(h(t,Y) dt + dW) = b(t,Y) dt + σ(t,Y) dW̃,

that is, the equation satisfied by X on (Ω, F, P) is the same as the equation satisfied by Y on (Ω, F, P̃), and so E Φ(X) = Ẽ Φ(Y). The details are in Chapter 7 of Statistics of Random Processes by Liptser and Shiryaev, in particular, Theorem 19.
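The identity E Φ(X) = E[Φ(Y) Z(T)] can also be checked by simulation. A minimal Euler-Maruyama sketch, assuming σ ≡ 1; the coefficients b(x) = −x and h(x) = sin(x), and the functional Φ(path) = path value at T, are arbitrary illustrative choices, and Z is computed in the dW-form above:

```python
# Illustrative Monte Carlo check of E Phi(X) = E[ Phi(Y) Z(T) ] via
# Euler-Maruyama, with sigma = 1, b(x) = -x, h(x) = sin(x),
# and Phi(path) = value of the path at time T.
import numpy as np

rng = np.random.default_rng(2)
T, n_steps, n_paths = 1.0, 200, 200_000
dt = T / n_steps

b = lambda x: -x          # drift of X
h = lambda x: np.sin(x)   # B = b + h * sigma = b + h, since sigma = 1

X = np.zeros(n_paths)     # X(0) = Y(0) = 0
Y = np.zeros(n_paths)
logZ = np.zeros(n_paths)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    X += b(X) * dt + dW
    # dW-form of the density, evaluated along Y before the update (adapted):
    logZ += -h(Y) * dW - 0.5 * h(Y) ** 2 * dt
    Y += (b(Y) + h(Y)) * dt + dW

lhs = X.mean()                       # E Phi(X)
rhs = (Y * np.exp(logZ)).mean()      # E[ Phi(Y) Z(T) ]
print(lhs, rhs)  # agree up to Monte Carlo error
```

In the discrete scheme the reweighted law of Y coincides exactly with the law of X, so the gap between the two estimates is pure sampling noise.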

Meyer

Let F = (Ω, F, {F_t}_{t≥0}, P) be a stochastic basis with the usual assumptions, let Z̃ be a non-negative random variable with EZ̃ = 1, and let the martingale Z = Z(t) be the right-continuous version of E(Z̃|F_t). Define the probability measure P̃ by dP̃ = Z̃ dP. If M is a local martingale on F = (Ω, F, {F_t}_{t≥0}, P) and the measures P and P̃ are equivalent, then the process M̃ = M̃(t) defined by

  M̃(t) = M(t) − ∫_0^t d[Z,M](s) / Z(s)

is a local martingale on (Ω, F, {F_t}_{t≥0}, P̃).

For a cadlag semimartingale X, the process t ↦ [X,X](t) is the quadratic variation of X:

  [X,X](t) = ⟨X^c⟩_t + Σ_{s≤t} (X(s) − X(s−))²;

it is the "corrector" in the Itô formula applied to X²:

  X²(t) = X²(0) + 2 ∫_0^t X(s−) dX(s) + [X,X](t);

similarly, [X,Y] = (1/4)( [X+Y, X+Y] − [X−Y, X−Y] ). If X is a continuous local martingale, then [X,X] = ⟨X⟩; if X(t) = ∫_0^t f(s) dW(s) and Y(t) = ∫_0^t g(s) dW(s), then [X,Y](t) = ⟨X,Y⟩_t = ∫_0^t f(s)g(s) ds. In particular, in the setting of the Girsanov theorem, M = W, dZ = hZ dW, so that d[Z,M](t) = h(t)Z(t) dt.
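The bracket formula [X,Y](t) = ∫_0^t f(s)g(s) ds can be seen numerically: the realized covariation, i.e., the sum of products of increments of X and Y along a partition, approximates the bracket. A short sketch, with the arbitrary illustrative choices f(s) = s and g(s) = cos(s):

```python
# Illustrative check that [X,Y](T) = int_0^T f(s) g(s) ds for
# X(t) = int_0^t f dW and Y(t) = int_0^t g dW, by comparing the
# average realized covariation with a Riemann sum for the bracket.
import numpy as np

rng = np.random.default_rng(3)
T, n_steps, n_paths = 1.0, 2000, 2000
dt = T / n_steps
s = np.arange(n_steps) * dt               # left endpoints of the grid

f, g = s, np.cos(s)
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
dX, dY = f * dW, g * dW                   # increments of the two integrals

realized = (dX * dY).sum(axis=1).mean()   # average realized covariation
exact = np.sum(f * g) * dt                # Riemann sum for int_0^T f g ds
print(realized, exact)  # agree up to Monte Carlo error
```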

The people

William Ted Martin (1911–2004): MIT math department chair, 1947–1968.

Robert Horton Cameron (1908–1989): supervised 35 Ph.D. students in 30+ years at the University of Minnesota; one of the students was M. Donsker.

Igor Vladimirovich Girsanov (1934–1967): introduced the concept of a strong solution of a stochastic differential equation.

Paul-André Meyer (1934–2003): until 1952, his last name was Meyerowitz; Probabilités et Potentiel, joint with Claude Dellacherie, consists of five volumes; played the violin, viola, and the flute.