
1 Introduction

The content of these notes is also covered by chapter 4 section A of [1].

2 White noise

In applications infinitesimal Wiener increments are represented as
\[
dw_t = \eta_t\, dt\,.
\]
The process $\eta_t$ is referred to as white noise. Consistency of the definition requires $\eta_t$ to be a generalized stochastic process with the following properties.

• Zero average:
\[
\langle \eta_t \rangle = 0\,.
\]
Namely, we must have
\[
0 = \langle w_{t+dt} - w_t \rangle = \langle \eta_t \rangle\, dt\,.
\]

• $\delta$-correlation: at any instant of time
\[
\langle \eta_t \eta_s \rangle = \sigma^2\, \delta^{(1)}(t-s)\,, \qquad \sigma^2 > 0\,,
\]
as follows from the identification
\[
\frac{d}{dt}\frac{d}{ds}\, \langle w_t w_s \rangle = \langle \eta_t \eta_s \rangle \tag{2.1}
\]
Namely, writing the correlation of the Wiener process in terms of the Heaviside step function
\[
\langle w_t w_s \rangle = \sigma^2 \bigl[\, s\, H(t-s) + t\, H(s-t) \,\bigr] \tag{2.2}
\]
and observing
\[
\frac{d}{dt} H(t-s) = \delta^{(1)}(t-s)
\]
we obtain, upon inserting (2.2) into (2.1),
\[
\frac{d}{dt}\frac{d}{ds}\, \langle w_t w_s \rangle
= \sigma^2\, \frac{d}{dt}\bigl[\, H(t-s) - s\, \delta^{(1)}(t-s) \,\bigr]
+ \sigma^2\, \frac{d}{ds}\bigl[\, H(s-t) - t\, \delta^{(1)}(s-t) \,\bigr]\,.
\]
By construction the $\delta$ is an even function of its argument: the right hand side can be couched into the form
\[
\frac{d}{dt}\frac{d}{ds}\, \langle w_t w_s \rangle
= \sigma^2 \Bigl[\, 2\, \delta^{(1)}(t-s) + (t-s)\, \frac{d}{dt}\delta^{(1)}(t-s) \,\Bigr]\,.
\]
In order to interpret the meaning of the derivative, we integrate the right hand side over a smooth test function $f$:
\[
\int_{s-\varepsilon}^{s+\varepsilon} dt\, f(t)\,(t-s)\, \frac{d}{dt}\delta^{(1)}(t-s)
= -\int_{s-\varepsilon}^{s+\varepsilon} dt\, \frac{df}{dt}(t)\,(t-s)\, \delta^{(1)}(t-s)
- \int_{s-\varepsilon}^{s+\varepsilon} dt\, f(t)\, \delta^{(1)}(t-s)
= -\int_{s-\varepsilon}^{s+\varepsilon} dt\, f(t)\, \delta^{(1)}(t-s)
= -f(s) \tag{2.3}
\]

We conclude that
\[
\frac{d}{dt}\frac{d}{ds}\, \langle w_t w_s \rangle = \sigma^2\, \delta^{(1)}(t-s)\,,
\]
the identity determining the value of the white-noise correlation.
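The discrete picture behind these manipulations can be checked numerically: drawing Wiener increments as independent $N(0, \sigma^2 dt)$ variables, the sample mean vanishes, the variance matches $\sigma^2 dt$, and increments at distinct times are uncorrelated (the discrete counterpart of $\delta$-correlation). A minimal sketch assuming NumPy; all parameter values are illustrative choices, not taken from the notes:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2 = 1.0          # sigma^2 in <eta_t eta_s> = sigma^2 delta(t - s)
dt = 1e-3
n_paths, n_steps = 5000, 1000

# Wiener increments dw_t = eta_t dt; equivalently dw ~ N(0, sigma^2 dt)
dw = rng.normal(0.0, np.sqrt(sigma2 * dt), size=(n_paths, n_steps))

mean_dw = dw.mean()                       # should vanish
var_ratio = dw.var() / (sigma2 * dt)      # should be close to 1
# discrete delta-correlation: increments at distinct times are uncorrelated
cross = np.mean(dw[:, 10] * dw[:, 20]) / (sigma2 * dt)
print(mean_dw, var_ratio, cross)
```

The normalization by $\sigma^2 dt$ makes the equal-time value order one while the distinct-time correlation stays at the Monte Carlo noise level.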

3 The Paley-Wiener-Zygmund integral

Definition 3.1 (Paley-Wiener-Zygmund integral). Let
\[
g : [0,T] \to \mathbb{R}
\]
be a continuously differentiable function such that
\[
g(0) = g(T) = 0\,.
\]
The random variable
\[
G_T = \int_0^T dw_t\, g(t)
\]
is defined as
\[
\int_0^T dw_t\, g(t) := -\int_0^T dt\, w_t\, \frac{dg}{dt}(t)\,.
\]
The Paley-Wiener-Zygmund integral can thus be handled using standard Lebesgue-Stieltjes integration theory. We can prove

Proposition 3.1.

i. $\langle G_T \rangle = 0$

ii. $\langle G_T^2 \rangle = \int_0^T dt\, g^2(t)$

Proof. i. follows by exchanging the order of integration and averaging.

ii. By definition we have
\[
\langle G_T^2 \rangle
= \int_0^T dt \int_0^T ds\, \frac{dg}{dt}(t)\, \frac{dg}{ds}(s)\, \langle w_t w_s \rangle
= \int_0^T dt \int_0^T ds\, g(t)\, g(s)\, \frac{d}{dt}\frac{d}{ds}\, \langle w_t w_s \rangle\,.
\]
If we now use the calculation of section 2, we get
\[
\langle G_T^2 \rangle
= \int_0^T dt \int_0^T ds\, g(t)\, g(s)\, \delta^{(1)}(t-s)
= \int_0^T dt\, g^2(t)
\]
(for $\sigma^2 = 1$), which proves the claim.
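Proposition 3.1 lends itself to a Monte Carlo check: sampling discretized Wiener paths and evaluating $G_T = -\int_0^T w_t\, g'(t)\, dt$ by a Riemann sum should reproduce $\langle G_T \rangle = 0$ and $\langle G_T^2 \rangle = \int_0^T g^2$. A sketch assuming NumPy; the choice $g(t) = \sin(\pi t/T)$ is a hypothetical example satisfying $g(0) = g(T) = 0$, not one from the notes:

```python
import numpy as np

rng = np.random.default_rng(1)
T, n_steps, n_paths = 1.0, 1000, 4000
t = np.linspace(0.0, T, n_steps + 1)
dt = t[1] - t[0]

# an illustrative g with g(0) = g(T) = 0, and its derivative
g = np.sin(np.pi * t / T)
g_prime = (np.pi / T) * np.cos(np.pi * t / T)

# Wiener paths with w_0 = 0, built from independent N(0, dt) increments
incr = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
w = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(incr, axis=1)], axis=1)

# Paley-Wiener-Zygmund: G_T = -int_0^T w_t g'(t) dt (left Riemann sum)
G = -np.sum(w[:, :-1] * g_prime[:-1], axis=1) * dt

mean_G = G.mean()                 # Proposition 3.1 i: should vanish
var_G = G.var()                   # Proposition 3.1 ii: should equal int g^2
target = np.sum(g[:-1] ** 2) * dt # int_0^T g^2 dt = T/2 for this g
print(mean_G, var_G, target)
```

For this $g$ the target variance is $T/2$, and the sample variance matches it within Monte Carlo error.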

4 Gaussian and δ-correlation

Gaussian statistics and $\delta$-correlation imply that $\eta_t$ is independent of $\eta_s$ for any $t \neq s$. The claim is verified by inspection of the characteristic function of the white noise. The Paley-Wiener-Zygmund integral allows us to write, for some smooth $g$ chosen as in section 3,

\[
\Bigl\langle e^{\imath \lambda \int_0^T dt\, \eta_t\, g(t)} \Bigr\rangle
\equiv \Bigl\langle e^{\imath \lambda \int_0^T dw_t\, g(t)} \Bigr\rangle
= \Bigl\langle e^{-\imath \lambda \int_0^T dt\, w_t\, g'(t)} \Bigr\rangle\,.
\]

The leftmost integral can be computed using e.g. the Karhunen-Loève representation of the Wiener process:

\[
\Bigl\langle e^{-\imath \lambda \int_0^T dt\, w_t\, g'(t)} \Bigr\rangle
= \Bigl\langle e^{-\imath \lambda \sum_n c_n \int_0^T dt\, \psi_n(t)\, g'(t)} \Bigr\rangle\,.
\]
Here we used the shorthand notation
\[
g' := \frac{dg}{dt}\,.
\]
Randomness is stored in the $\{c_n\}_{n=0}^{\infty}$, which form a sequence of independent Gaussian random variables with zero average and with the variance of $c_n$ equal to the $n$-th eigenvalue $r_n$ of the integral operator defined by the kernel

\[
R(t,s) = \langle w_t w_s \rangle\,.
\]

Thus we obtain

\[
\Bigl\langle e^{-\imath \lambda \int_0^T dt\, w_t\, g'(t)} \Bigr\rangle
= e^{-\frac{\lambda^2}{2} \sum_{n=0}^{\infty} \int_0^T dt \int_0^T ds\, r_n\, \psi_n(t)\, \psi_n(s)\, g'(t)\, g'(s)}
= e^{-\frac{\lambda^2}{2} \int_0^T dt \int_0^T ds\, R(t,s)\, g'(t)\, g'(s)}
\]
since by construction

\[
R(t,s) = \sum_{n=0}^{\infty} r_n\, \psi_n(t)\, \psi_n(s)\,.
\]
Proposition 3.1 then guarantees

\[
\Bigl\langle e^{\imath \lambda \int_0^T dt\, \eta_t\, g(t)} \Bigr\rangle
\equiv \Bigl\langle e^{\imath \lambda \int_0^T dw_t\, g(t)} \Bigr\rangle
= e^{-\frac{\lambda^2}{2} \int_0^T dt\, g^2(t)}\,.
\]
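The Karhunen-Loève ingredients entering this computation can themselves be checked numerically. For the Wiener process on $[0,T]$ with $\sigma^2 = 1$ the eigenfunctions and eigenvalues are the standard ones, $\psi_n(t) = \sqrt{2/T}\, \sin\bigl((n+\tfrac12)\pi t/T\bigr)$ and $r_n = \bigl(T/((n+\tfrac12)\pi)\bigr)^2$ (explicit formulas not derived in these notes); sampling $c_n$ with $\langle c_n^2 \rangle = r_n$ should reproduce $R(t,s) = \min(t,s)$. A sketch assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(2)
T, n_modes, n_paths = 1.0, 200, 8000
t = np.linspace(0.0, T, 101)

# Karhunen-Loeve basis for the Wiener process on [0, T] (sigma^2 = 1):
# psi_n(t) = sqrt(2/T) sin((n + 1/2) pi t / T),  r_n = (T / ((n + 1/2) pi))^2
freq = (np.arange(n_modes) + 0.5) * np.pi / T
psi = np.sqrt(2.0 / T) * np.sin(np.outer(freq, t))   # shape (n_modes, n_t)
r = 1.0 / freq ** 2                                  # eigenvalues of R

# independent Gaussian coefficients with <c_n^2> = r_n
c = rng.normal(0.0, np.sqrt(r), size=(n_paths, n_modes))
w = c @ psi                                          # sampled paths

# the covariance R(t, s) = <w_t w_s> = min(t, s) is reproduced
cov_est = np.mean(w[:, 30] * w[:, 70])
cov_exact = min(t[30], t[70])                        # = 0.3 here
print(cov_est, cov_exact)
```

Truncating at 200 modes leaves a bias well below the Monte Carlo noise.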

We can read the result in two ways.

• Characteristic function of
\[
G_T := \int_0^T dw_t\, g(t)\,.
\]
We have just proved that

\[
\bigl\langle e^{\imath \lambda G_T} \bigr\rangle = e^{-\frac{\lambda^2}{2} \langle G_T^2 \rangle}\,,
\]

i.e. that $G_T$ has a Gaussian distribution.

• "Characteristic function" of the white noise. Let us set $\lambda$ equal to unity and inspect

\[
\Bigl\langle e^{\imath \int_0^T dt\, \eta_t\, g(t)} \Bigr\rangle
= e^{-\frac{1}{2} \int_0^T dt\, g^2(t)} \tag{4.1}
\]

Interpreting the integral as a sum,
\[
\int_0^T dt\, \eta_t\, g(t) \sim \sum_k dt\, \eta_{t_k}\, g(t_k)\,,
\]
were the $\eta_{t_k}$ a collection of Gaussian random variables with correlation
\[
\langle \eta_{t_k} \eta_{t_l} \rangle = C_{kl}
\]

we would write

\[
\Bigl\langle e^{\imath \sum_k dt\, \eta_{t_k} g(t_k)} \Bigr\rangle
= e^{-\frac{1}{2} \sum_k dt \sum_l dt\, g(t_k)\, C_{kl}\, g(t_l)}\,.
\]

Contrasting this latter result with (4.1) it is tempting to identify

\[
\sum_k dt \sum_l dt\, g(t_k)\, C_{kl}\, g(t_l)
\;\xrightarrow{\,dt \downarrow 0\,}\;
\int_0^T dt \int_0^T ds\, g(t)\, C(t-s)\, g(s)
\]
and

\[
C(t-s) = \delta^{(1)}(t-s)\,.
\]

This is in agreement with the claim that white noise is "Gaussian" with zero average and Dirac-$\delta$ covariance.
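Equation (4.1) can also be verified by Monte Carlo at $\lambda = 1$: averaging $e^{\imath G_T}$ over sampled paths, with the integral understood in the Paley-Wiener-Zygmund sense, should match $e^{-\frac{1}{2}\int_0^T g^2}$. A sketch assuming NumPy, with $g(t) = \sin(\pi t/T)$ an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(3)
T, n_steps, n_paths = 1.0, 500, 20000
t = np.linspace(0.0, T, n_steps + 1)
dt = t[1] - t[0]
g = np.sin(np.pi * t / T)                    # illustrative g, g(0) = g(T) = 0
g_prime = (np.pi / T) * np.cos(np.pi * t / T)

incr = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
w = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(incr, axis=1)], axis=1)
# int_0^T dt eta_t g(t) understood in the Paley-Wiener-Zygmund sense
G = -np.sum(w[:, :-1] * g_prime[:-1], axis=1) * dt

lhs = np.mean(np.exp(1j * G))                # Monte Carlo average, lambda = 1
rhs = np.exp(-0.5 * np.sum(g[:-1] ** 2) * dt)
print(abs(lhs - rhs))
```

The imaginary part of the sample average is pure Monte Carlo noise, consistent with the right-hand side of (4.1) being real.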

From (4.1) we can also read off all the moments of the white noise. In order to do so we need to take functional derivatives with respect to $g(t)$ (see appendix A; in practice, treat the function argument as an index and replace Kronecker $\delta$'s with $\delta$-functions). For any $0 < s < T$,

\[
\frac{\delta}{\delta g(s)} \Bigl\langle e^{\imath \int_0^T dt\, g(t)\, \eta_t} \Bigr\rangle
= \frac{\delta}{\delta g(s)}\, e^{-\frac{1}{2} \int_0^T dt\, g^2(t)}
= -e^{-\frac{1}{2} \int_0^T dt\, g^2(t)} \int_0^T dt\, g(t)\, \delta^{(1)}(t-s)
= -g(s)\, e^{-\frac{1}{2} \int_0^T dt\, g^2(t)}\,,
\]
which, evaluated at $g = 0$, implies

\[
\langle \eta_s \rangle = 0\,.
\]

Furthermore, for $0 < s,\, u < T$,

\[
\frac{\delta^2}{\delta g(s)\, \delta g(u)} \Bigl\langle e^{\imath \int_0^T dt\, g(t)\, \eta_t} \Bigr\rangle
= \frac{\delta}{\delta g(u)} \Bigl[ -g(s)\, e^{-\frac{1}{2} \int_0^T dt\, g^2(t)} \Bigr]
= -\delta^{(1)}(u-s)\, e^{-\frac{1}{2} \int_0^T dt\, g^2(t)}
+ g(s)\, g(u)\, e^{-\frac{1}{2} \int_0^T dt\, g^2(t)}\,,
\]
which, evaluated at $g = 0$, implies

\[
\langle \eta_s \eta_u \rangle = \delta^{(1)}(u-s)\,,
\]
which recovers, for $\sigma^2 = 1$, the result obtained in section 2.

A Functional derivative (for practical purposes)

Consider a functional space of continuous/smooth functions $\phi$ (possibly also satisfying certain boundary conditions) and a functional $F[\phi]$. We define the functional derivative of $F$, denoted $\delta F/\delta\phi(x)$, as the distribution such that for all test functions $f$:
\[
\int_{\mathbb{R}^d} d^d x\, f(x)\, \frac{\delta F[\phi]}{\delta \phi(x)}
:= \frac{d}{d\varepsilon} F[\phi + \varepsilon f] \Big|_{\varepsilon = 0}\,.
\]
Thus if
\[
F[\phi] = \int_{\mathbb{R}^d} d^d x\, \phi^2(x) \tag{A.1}
\]
the definition yields

\[
\int_{\mathbb{R}^d} d^d x\, f(x)\, \frac{\delta F[\phi]}{\delta \phi(x)}
= 2 \int_{\mathbb{R}^d} d^d x\, \phi(x)\, f(x)\,.
\]
Alternatively, we can define

\[
\frac{\delta F[\phi]}{\delta \phi(x)} = \lim_{\varepsilon \to 0} \frac{F[\varphi_x] - F[\phi]}{\varepsilon}
\]
with $\varphi_x$ specified by

\[
\varphi_x(y) = \phi(y) + \varepsilon\, \delta^{(d)}(x-y)\,.
\]

For the example (A.1) this means

\[
\frac{\delta F[\phi]}{\delta \phi(x)} = 2\, \phi(x)\,.
\]
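The defining pairing $\int f\, \delta F/\delta\phi = \frac{d}{d\varepsilon} F[\phi + \varepsilon f]|_{\varepsilon=0}$ can be checked numerically for the example (A.1) in $d = 1$ by discretizing on a grid: the finite-difference directional derivative should match $\int 2\phi f$. A sketch assuming NumPy; $\phi$ and $f$ are arbitrary illustrative choices:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 2001)
dx = x[1] - x[0]
phi = np.sin(2.0 * np.pi * x)              # illustrative phi
f = np.exp(-((x - 0.25) ** 2) / 0.01)      # smooth, localized test function

def F(p):
    return np.sum(p ** 2) * dx             # F[phi] = int phi^2 dx, as in (A.1)

eps = 1e-6
directional = (F(phi + eps * f) - F(phi)) / eps   # d/d(eps) F[phi + eps f]
pairing = np.sum(2.0 * phi * f) * dx              # int (dF/dphi) f = int 2 phi f
print(directional, pairing)
```

The residual difference is of order $\varepsilon \int f^2$, the second-order term in $\varepsilon$ that the one-sided difference does not cancel.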

References

[1] L. C. Evans, An Introduction to Stochastic Differential Equations, lecture notes, http://math.berkeley.edu/~evans/
