4.7 Brownian Bridge

4.7.1 Gaussian Processes

Definition 4.7.1. A Gaussian process X(t), t ≥ 0, is a stochastic process that has the property that, for arbitrary times 0 < t1 < t2 < … < tn, the random variables X(t1), X(t2), …, X(tn) are jointly normally distributed.

• The joint distribution of a set of jointly normal random variables is determined by their means and covariances.

• For a Gaussian process, the joint distribution of X(t1), X(t2), …, X(tn) is therefore determined by the means and covariances of these random variables.
• We denote the mean of X(t) by m(t) and, for s ≥ 0, t ≥ 0, the covariance of X(s) and X(t) by c(s,t); i.e.,

  m(t) = E[X(t)],   c(s,t) = E[(X(s) − m(s))(X(t) − m(t))].
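• The finite-dimensional distributions of a Gaussian process can therefore be sampled directly from m and c. The following minimal sketch (our own illustration; the function name and parameter choices are not from the text) draws samples of (X(t1), …, X(tn)) as a multivariate normal vector:

```python
import numpy as np

def sample_gaussian_process(m, c, times, n_paths, seed=None):
    # (X(t1), ..., X(tn)) is multivariate normal with mean vector m(t_i)
    # and covariance matrix c(t_i, t_j); sample it directly.
    rng = np.random.default_rng(seed)
    mean = np.array([m(t) for t in times])
    cov = np.array([[c(s, t) for t in times] for s in times])
    return rng.multivariate_normal(mean, cov, size=n_paths)

# Brownian motion will turn out to have m(t) = 0 and c(s,t) = min(s,t)
# (Example 4.7.2 below).
samples = sample_gaussian_process(lambda t: 0.0, min,
                                  times=np.linspace(0.1, 1.0, 10), n_paths=5)
print(samples.shape)  # (5, 10)
```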

Example 4.7.2 (Brownian motion)

Brownian motion W(t) is a Gaussian process.

• For 0 < t1 < t2 < … < tn, the increments (with t0 = 0, so W(t0) = 0)

  I1 = W(t1), I2 = W(t2) − W(t1), …, In = W(tn) − W(tn−1)

are independent and normally distributed: increments over nonoverlapping time intervals are independent, and W(t) − W(s) ~ N(0, t − s).

• Writing

  W(t1) = I1, W(t2) = I1 + I2, …, W(tn) = I1 + ⋯ + In,

we see that the random variables W(t1), W(t2), …, W(tn) are jointly normally distributed: independent normal random variables are jointly normal, and linear combinations of jointly normal random variables are jointly normal. Hence Brownian motion is a Gaussian process.

• The mean function for Brownian motion is m(t) = E[W(t)] = 0.


• We may compute the covariance by letting 0 ≤ s ≤ t be given and noting that, because the mean function is zero, c(s,t) = E[W(s)W(t)] − E[W(s)]E[W(t)] = E[W(s)W(t)]. Then

  c(s,t) = E[W(s)W(t)] = E[W(s)(W(t) − W(s) + W(s))] = E[W(s)(W(t) − W(s))] + E[W²(s)].

• Because W(s) and W(t) − W(s) are independent and both have mean zero, we see that E[W(s)(W(t) − W(s))] = 0.


c(s,t) = EW (s)W (t)= EW (s)(W (t) −W (s) +W (s)) = EW (s)(W (t) −W (s))+ EW 2 (s) • The other term, E[W 2(s)], is the of W(s), which is s. E[W2(s)]-(E[W(s)])2


• We conclude that c(s,t) = s when 0 ≤ s ≤ t.
• Reversing the roles of s and t, we conclude that c(s,t) = t when 0 ≤ t ≤ s.
• In general, the covariance function for Brownian motion is c(s,t) = s ∧ t, where s ∧ t denotes the minimum of s and t.
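• As a quick illustration (ours, not the text's), the formula c(s,t) = s ∧ t is easy to confirm by Monte Carlo, simulating W(t) as W(s) plus an independent increment:

```python
import numpy as np

rng = np.random.default_rng(0)
s, t, n = 0.4, 1.0, 1_000_000
W_s = rng.normal(0.0, np.sqrt(s), n)            # W(s) ~ N(0, s)
W_t = W_s + rng.normal(0.0, np.sqrt(t - s), n)  # independent increment over [s, t]
print(np.cov(W_s, W_t)[0, 1])                   # ≈ 0.4 = min(s, t)
```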

Example 4.7.3 (Itô integral of a deterministic integrand)

• Let ∆(t) be a nonrandom function of time, and define

  I(t) = ∫₀ᵗ ∆(s) dW(s),

where W(t) is a Brownian motion. Then I(t) is a Gaussian process, as we now show.

• In the proof of Theorem 4.4.9, we showed that, for fixed u ∈ ℝ, the process

  M_u(t) = exp{ u I(t) − ½ u² ∫₀ᵗ ∆²(s) ds }

is a martingale.

• Then

  1 = M_u(0) = E[M_u(t)] = exp{ −½ u² ∫₀ᵗ ∆²(s) ds } · E[e^{u I(t)}],

and we thus obtain the moment-generating function formula

  E[e^{u I(t)}] = exp{ ½ u² ∫₀ᵗ ∆²(s) ds },

i.e., I(t) ~ N(0, ∫₀ᵗ ∆²(s) ds): normal with mean zero and variance ∫₀ᵗ ∆²(s) ds.

• We have shown that I(t) is normally distributed, but verification that the process is Gaussian requires more.

• We must verify that, for 0 < t1 < t2 < … < tn, the random variables I(t1), I(t2), …, I(tn) are jointly normally distributed.

• It turns out that the increments

  I(t1) − I(0) = I(t1), I(t2) − I(t1), …, I(tn) − I(tn−1)

are normally distributed and independent (shown below), and from this the joint normality of I(t1), I(t2), …, I(tn) follows by the same argument as used in Example 4.7.2 for Brownian motion: writing

  I(t1) = I(t1) − I(0),  I(t2) = (I(t2) − I(t1)) + (I(t1) − I(0)),  …,

each I(tj) is a linear combination of the increments; independent normal random variables are jointly normal, and linear combinations of jointly normal random variables are jointly normal.

• Next, we show that, for 0 < t1 < t2, the two increments I(t1) − I(0) = I(t1) and I(t2) − I(t1) are normally distributed and independent.
• The argument we provide can be iterated to prove this result for any number of increments.

• For fixed u2 ∈ ℝ, the martingale property of M_{u2} implies that

  M_{u2}(t1) = E[ M_{u2}(t2) | F(t1) ].

• Now let u1 ∈ ℝ be fixed. Because M_{u1}(t1)/M_{u2}(t1) is F(t1)-measurable, we may multiply the equation above by this quotient to obtain

  M_{u1}(t1) = E[ M_{u1}(t1) M_{u2}(t2) / M_{u2}(t1) | F(t1) ]
             = E[ exp{ u1 I(t1) + u2 (I(t2) − I(t1)) − ½ u1² ∫₀^{t1} ∆²(s) ds − ½ u2² ∫_{t1}^{t2} ∆²(s) ds } | F(t1) ].

• We now take expectations:

  1 = M_{u1}(0) = E[M_{u1}(t1)]
    = E[ exp{ u1 I(t1) + u2 (I(t2) − I(t1)) − ½ u1² ∫₀^{t1} ∆²(s) ds − ½ u2² ∫_{t1}^{t2} ∆²(s) ds } ]
    = E[ exp{ u1 I(t1) + u2 (I(t2) − I(t1)) } ] · exp{ −½ u1² ∫₀^{t1} ∆²(s) ds − ½ u2² ∫_{t1}^{t2} ∆²(s) ds },

where we have used the fact that ∆²(s) is nonrandom to take the integrals of ∆²(s) outside the expectation on the right-hand side.

• This leads to the moment-generating function formula

  E[ exp{ u1 I(t1) + u2 (I(t2) − I(t1)) } ] = exp{ ½ u1² ∫₀^{t1} ∆²(s) ds } · exp{ ½ u2² ∫_{t1}^{t2} ∆²(s) ds }.

• The right-hand side is the product of
  – the moment-generating function of a normal random variable with mean zero and variance ∫₀^{t1} ∆²(s) ds, and
  – the moment-generating function of a normal random variable with mean zero and variance ∫_{t1}^{t2} ∆²(s) ds.

• It follows that I(t1) ~ N(0, ∫₀^{t1} ∆²(s) ds) and I(t2) − I(t1) ~ N(0, ∫_{t1}^{t2} ∆²(s) ds), and because their joint moment-generating function factors into this product of moment-generating functions, they must be independent.
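• A numerical sketch of these conclusions (our own check, with the illustrative choice ∆(s) = s, so that ∫₀ᵗ ∆²(s) ds = t³/3): approximate the Itô integral by left-endpoint sums and verify the variances of the two increments and their near-zero correlation.

```python
import numpy as np

rng = np.random.default_rng(1)
t1, t2, n_steps, n_paths = 0.5, 1.0, 500, 20_000

dt1 = t1 / n_steps
s1 = np.arange(n_steps) * dt1                  # left endpoints in [0, t1)
dW1 = rng.normal(0.0, np.sqrt(dt1), (n_paths, n_steps))
I_t1 = (s1 * dW1).sum(axis=1)                  # ≈ I(t1) = ∫₀^{t1} s dW(s)

dt2 = (t2 - t1) / n_steps
s2 = t1 + np.arange(n_steps) * dt2             # left endpoints in [t1, t2)
dW2 = rng.normal(0.0, np.sqrt(dt2), (n_paths, n_steps))
inc = (s2 * dW2).sum(axis=1)                   # ≈ I(t2) − I(t1)

print(I_t1.var(), t1**3 / 3)                   # both ≈ 0.0417
print(inc.var(), (t2**3 - t1**3) / 3)          # both ≈ 0.2917
print(np.corrcoef(I_t1, inc)[0, 1])            # ≈ 0
```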

• We have, using the independence of I(t1) and I(t2) − I(t1),

  c(t1,t2) = E[I(t1)I(t2)] = E[I(t1)(I(t2) − I(t1) + I(t1))]
           = E[I(t1)(I(t2) − I(t1))] + E[I²(t1)]
           = E[I(t1)] · E[I(t2) − I(t1)] + ∫₀^{t1} ∆²(s) ds
           = ∫₀^{t1} ∆²(s) ds.

• For the general case where s ≥ 0 and t ≥ 0 and we do not know the relationship between s and t, we have the covariance formula

  c(s,t) = ∫₀^{s∧t} ∆²(u) du.

4.7.2 Brownian Bridge as a Gaussian Process

Definition 4.7.4. Let W(t) be a Brownian motion. Fix T > 0. We define the Brownian bridge from 0 to 0 on [0,T] to be the process

  X(t) = W(t) − (t/T) W(T),  0 ≤ t ≤ T.   (4.7.2)

• The process X(t) satisfies X(0) = X(T) = 0: at t = 0, X(0) = W(0) − 0 = 0, and at t = T, X(T) = W(T) − W(T) = 0.
• Because W(T) enters the definition of X(t) for 0 ≤ t ≤ T, the Brownian bridge X(t) is not adapted to the filtration F(t) generated by W(t).

• For 0 < t1 < t2 < … < tn < T, the random variables

  X(t1) = W(t1) − (t1/T) W(T), …, X(tn) = W(tn) − (tn/T) W(T)

are jointly normal because W(t1), …, W(tn), W(T) are jointly normal and linear combinations of jointly normal random variables are jointly normal (Example 4.7.2).
• Hence, the Brownian bridge from 0 to 0 is a Gaussian process.
• Its mean function is easily seen to be

  m(t) = E[X(t)] = E[W(t)] − (t/T) E[W(T)] = 0.

• For s, t ∈ (0,T), we compute the covariance function c(s,t) = E[X(s)X(t)] − E[X(s)]E[X(t)] = E[X(s)X(t)]:

  c(s,t) = E[ (W(s) − (s/T) W(T)) (W(t) − (t/T) W(T)) ]
         = E[W(s)W(t)] − (t/T) E[W(s)W(T)] − (s/T) E[W(t)W(T)] + (st/T²) E[W²(T)]
         = s ∧ t − st/T − st/T + st/T
         = s ∧ t − st/T.   (4.7.3)

Definition 4.7.5. Let W(t) be a Brownian motion. Fix T > 0, a ∈ ℝ, and b ∈ ℝ. We define the Brownian bridge from a to b on [0,T] to be the process

  X^{a→b}(t) = a + (b − a)t/T + X(t),  0 ≤ t ≤ T,

where X(t) = X^{0→0}(t) is the Brownian bridge from 0 to 0.
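• A minimal simulation sketch of Definitions 4.7.4 and 4.7.5 (our own illustration; parameter values are arbitrary): build X^{a→b} from a sampled Brownian path and check the endpoint pinning and the covariance computed above.

```python
import numpy as np

def brownian_bridge_paths(a, b, T, n_steps, n_paths, seed=None):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    t = np.linspace(0.0, T, n_steps + 1)
    dW = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
    W = np.hstack([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)])
    X0 = W - (t / T) * W[:, -1:]               # bridge from 0 to 0, eq. (4.7.2)
    return t, a + (b - a) * t / T + X0         # bridge from a to b, Def. 4.7.5

T = 1.0
t, X = brownian_bridge_paths(a=1.0, b=2.0, T=T, n_steps=200, n_paths=50_000, seed=2)
print(X[:, 0].std(), X[:, -1].std())           # both 0.0: X(0) = a, X(T) = b exactly
i, j = 60, 140                                 # s = 0.3, t = 0.7
print(np.cov(X[:, i], X[:, j])[0, 1],          # ≈ s∧t − st/T = 0.09
      min(t[i], t[j]) - t[i] * t[j] / T)
```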

• Adding a nonrandom function to a Gaussian process gives us another Gaussian process, so X^{a→b} is also a Gaussian process.
• The mean function is affected:

  m^{a→b}(t) = E[X^{a→b}(t)] = a + (b − a)t/T.

• However, the covariance function is not affected, because X^{a→b}(t) − m^{a→b}(t) = X(t):

  c^{a→b}(s,t) = E[ (X^{a→b}(s) − m^{a→b}(s)) (X^{a→b}(t) − m^{a→b}(t)) ] = E[X(s)X(t)] = s ∧ t − st/T.

4.7.3 Brownian Bridge as a Scaled Stochastic Integral

• We cannot write the Brownian bridge as a stochastic integral of a deterministic integrand: the variance of such an integral, ∫₀ᵗ ∆²(s) ds, is nondecreasing in t, whereas the variance of the Brownian bridge,

  E[X²(t)] = c(t,t) = t − t²/T = t(T − t)/T,

increases for 0 ≤ t ≤ T/2 and then decreases for T/2 ≤ t ≤ T (the first-order condition 1 − 2t/T = 0 gives t = T/2, and the second-order condition −2/T < 0 confirms a maximum there).

• We can, however, obtain a process with the same distribution as the Brownian bridge from 0 to 0 as a scaled stochastic integral.

• Consider

  Y(t) = (T − t) ∫₀ᵗ 1/(T − u) dW(u),  0 ≤ t < T.

• The integral

  I(t) = ∫₀ᵗ 1/(T − u) dW(u)

is a Gaussian process of the type discussed in Example 4.7.3, provided t < T so that the integrand is defined.


• For 0 < t1 < t2 < … < tn < T, the random variables

Y(t1) = (T - t1)I(t1), Y(t2) = (T - t2)I(t2),…,Y(tn) = (T - tn)I(tn)

are jointly normal because I(t1), I(t2), …, I(tn) are jointly normal and linear combinations of jointly normal random variables are jointly normal.
• In particular, Y is a Gaussian process.
• The mean and covariance functions of I are

  m_I(t) = 0,
  c_I(s,t) = ∫₀^{s∧t} 1/(T − u)² du = 1/(T − s ∧ t) − 1/T  for all s, t ∈ [0,T)

(substituting v = T − u, dv = −du, the integrand becomes v⁻², with antiderivative −v⁻¹).

• This means that the mean function for Y is m_Y(t) = (T − t) m_I(t) = 0.

• To compute the covariance function for Y, we assume for the moment that 0 ≤ s ≤ t < T, so that

  c_I(s,t) = 1/(T − s) − 1/T = s/(T(T − s)).

Then

  c_Y(s,t) = E[(T − s)(T − t) I(s) I(t)] = (T − s)(T − t) · s/(T(T − s)) = (T − t)s/T = s − st/T.

• If we had taken 0 ≤ t ≤ s < T, the roles of s and t would have been reversed. In general,

  c_Y(s,t) = s ∧ t − st/T,  s, t ∈ [0,T).

• This is the same covariance formula (4.7.3) we obtained for the Brownian bridge.
• Because the mean and covariance functions for a Gaussian process completely determine the distribution of the process, we conclude that the process Y has the same distribution as the Brownian bridge from 0 to 0 on [0,T].

• In particular,

  E[Y²(t)] = c_Y(t,t) = t(T − t)/T,  0 ≤ t < T.

Theorem 4.7.6. Define the process

  Y(t) = (T − t) ∫₀ᵗ 1/(T − u) dW(u)  for 0 ≤ t < T,
  Y(T) = 0.

Then Y(t) is a continuous Gaussian process on [0,T] with mean and covariance functions

  m_Y(t) = 0,  t ∈ [0,T],
  c_Y(s,t) = s ∧ t − st/T,  s, t ∈ [0,T].

In particular, the process Y(t) has the same distribution as the Brownian bridge from 0 to 0 on [0,T] (Definition 4.7.5).
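• As a sanity check (ours; grid sizes are arbitrary), one can discretize the stochastic integral defining Y and compare the sample covariance of (Y(s), Y(t)) with s ∧ t − st/T:

```python
import numpy as np

rng = np.random.default_rng(3)
T, n_steps, n_paths = 1.0, 400, 40_000
dt = T / n_steps
u = np.arange(n_steps) * dt                    # left endpoints u < T
dW = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
integral = np.cumsum(dW / (T - u), axis=1)     # ≈ ∫₀ᵗ (T − u)⁻¹ dW(u)
t_grid = u + dt                                # right endpoints
Y = (T - t_grid) * integral
i, j = 99, 299                                 # s = 0.25, t = 0.75
s, t = t_grid[i], t_grid[j]
print(np.cov(Y[:, i], Y[:, j])[0, 1], min(s, t) - s * t / T)  # both ≈ 0.0625
```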

• We note that the process Y(t) is adapted to the filtration generated by the Brownian motion W(t).

• We compute the stochastic differential of Y(t), which is

  dY(t) = ( ∫₀ᵗ 1/(T − u) dW(u) ) d(T − t) + (T − t) d( ∫₀ᵗ 1/(T − u) dW(u) )
        = −( ∫₀ᵗ 1/(T − u) dW(u) ) dt + (T − t) · 1/(T − t) dW(t)
        = −( Y(t)/(T − t) ) dt + dW(t).

• If Y(t) is positive as t approaches T, the drift term −(Y(t)/(T − t)) dt becomes large in absolute value and is negative. This drives Y(t) toward zero.
• On the other hand, if Y(t) is negative, the drift term becomes large and positive, and this again drives Y(t) toward zero.
• This strongly suggests, and it is indeed true, that as t → T the process Y(t) converges to zero almost surely.
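• The pinning behavior is easy to see in a simulation. Below is an Euler-Maruyama sketch of dY(t) = −(Y(t)/(T − t)) dt + dW(t) (our own discretization, stopping one step before T, where the drift coefficient blows up):

```python
import numpy as np

rng = np.random.default_rng(4)
T, n_steps, n_paths = 1.0, 1_000, 10_000
dt = T / n_steps
Y = np.zeros(n_paths)                          # Y(0) = 0
for k in range(n_steps - 1):                   # advance from t = k*dt
    t = k * dt
    Y += -(Y / (T - t)) * dt + rng.normal(0.0, np.sqrt(dt), n_paths)
print(np.abs(Y).mean())                        # small: paths are pulled to 0 as t → T
```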

4.7.4 Multidimensional Distribution of the Brownian Bridge

• We fix a ∈ ℝ and b ∈ ℝ and let X^{a→b}(t) denote the Brownian bridge from a to b on [0,T]. We also fix 0 = t0 < t1 < t2 < … < tn < T. In this section, we compute the joint density of X^{a→b}(t1), …, X^{a→b}(tn).
• We recall that the Brownian bridge from a to b has the mean function

  m^{a→b}(t) = a + (b − a)t/T = a(T − t)/T + bt/T

and covariance function

  c(s,t) = s ∧ t − st/T.

When s ≤ t, we may write this as

  c(s,t) = s − st/T = s(T − t)/T,  0 ≤ s ≤ t ≤ T.

• To simplify notation, we set τ_j = T − t_j, so that τ_0 = T.
• We define the random variables

  Z_j = X^{a→b}(t_j)/τ_j − X^{a→b}(t_{j−1})/τ_{j−1},  j = 1, …, n.

• Because X^{a→b}(t1), …, X^{a→b}(tn) are jointly normal and each Z_j is a linear combination of them, Z_1, …, Z_n are jointly normal (linear combinations of jointly normal random variables are jointly normal). We compute E[Z_j], Var(Z_j), and Cov(Z_i, Z_j), with τ_j = T − t_j throughout.

  E[Z_j] = (1/τ_j) E[X^{a→b}(t_j)] − (1/τ_{j−1}) E[X^{a→b}(t_{j−1})]
         = (1/τ_j)( aτ_j/T + bt_j/T ) − (1/τ_{j−1})( aτ_{j−1}/T + bt_{j−1}/T )
         = (b/T)( t_j/τ_j − t_{j−1}/τ_{j−1} )
         = b( t_j(T − t_{j−1}) − t_{j−1}(T − t_j) ) / (T τ_j τ_{j−1})
         = b(t_j − t_{j−1}) / (τ_j τ_{j−1}).

  Var(Z_j) = (1/τ_j²) Var(X^{a→b}(t_j)) − (2/(τ_j τ_{j−1})) Cov(X^{a→b}(t_j), X^{a→b}(t_{j−1})) + (1/τ_{j−1}²) Var(X^{a→b}(t_{j−1}))
           = (1/τ_j²) c(t_j, t_j) − (2/(τ_j τ_{j−1})) c(t_{j−1}, t_j) + (1/τ_{j−1}²) c(t_{j−1}, t_{j−1})
           = t_j/(T τ_j) − 2 t_{j−1}/(T τ_{j−1}) + t_{j−1}/(T τ_{j−1})
           = t_j/(T τ_j) − t_{j−1}/(T τ_{j−1})
           = (t_j − t_{j−1})/(τ_j τ_{j−1}),

where we used c(s,t) = s(T − t)/T for s ≤ t, so that c(t_{j−1}, t_j) = t_{j−1} τ_j / T and c(t_j, t_j) = t_j τ_j / T.

• For i < j,

  Cov(Z_i, Z_j) = (1/(τ_i τ_j)) c(t_i, t_j) − (1/(τ_i τ_{j−1})) c(t_i, t_{j−1}) − (1/(τ_{i−1} τ_j)) c(t_{i−1}, t_j) + (1/(τ_{i−1} τ_{j−1})) c(t_{i−1}, t_{j−1})
              = (1/T)( t_i/τ_i − t_i/τ_i − t_{i−1}/τ_{i−1} + t_{i−1}/τ_{i−1} )
              = 0,

again using c(s,t) = s(T − t)/T for s ≤ t.

• Z_1, …, Z_n are jointly normal, and for jointly normal random variables, zero covariance is equivalent to independence.
• Hence the normal random variables Z_1, …, Z_n are independent.
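• These computations are mechanical but error-prone, so here is a symbolic verification sketch (ours, using sympy), writing t_k for t_{j−1} in the mean and variance checks, and t_k, t_l for t_{i−1}, t_{j−1} in the covariance check:

```python
import sympy as sp

a, b, T = sp.symbols('a b T', positive=True)
ti, tj, tk, tl = sp.symbols('t_i t_j t_k t_l', positive=True)

c = lambda s, t: s - s * t / T                 # c(s,t) for s <= t
m = lambda t: a * (T - t) / T + b * t / T      # mean of the bridge from a to b
tau = lambda t: T - t

# E[Z_j], with tk = t_{j-1} < tj:
mean = m(tj) / tau(tj) - m(tk) / tau(tk)
print(sp.simplify(mean - b * (tj - tk) / (tau(tj) * tau(tk))))   # 0

# Var(Z_j):
var = (c(tj, tj) / tau(tj)**2
       - 2 * c(tk, tj) / (tau(tj) * tau(tk))
       + c(tk, tk) / tau(tk)**2)
print(sp.simplify(var - (tj - tk) / (tau(tj) * tau(tk))))        # 0

# Cov(Z_i, Z_j) for tk = t_{i-1} < ti <= tl = t_{j-1} < tj:
cov = (c(ti, tj) / (tau(ti) * tau(tj))
       - c(ti, tl) / (tau(ti) * tau(tl))
       - c(tk, tj) / (tau(tk) * tau(tj))
       + c(tk, tl) / (tau(tk) * tau(tl)))
print(sp.simplify(cov))                                          # 0
```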

• So we conclude that the normal random variables Z_1, …, Z_n are independent, and we can write down their joint density, which is

  f_{Z_1,…,Z_n}(z_1, …, z_n)
    = ∏_{j=1}^{n} 1/√( 2π (t_j − t_{j−1})/(τ_j τ_{j−1}) ) · exp{ −( z_j − b(t_j − t_{j−1})/(τ_j τ_{j−1}) )² / ( 2 (t_j − t_{j−1})/(τ_j τ_{j−1}) ) }.

• Next we make the change of variables

  z_j = x_j/τ_j − x_{j−1}/τ_{j−1},  j = 1, …, n,

where x_0 = a, to find the joint density of X^{a→b}(t_1), …, X^{a→b}(t_n). We work first on the sum in the exponent to see the effect of this change of variables.

• We have

  (t_j − t_{j−1})/(τ_j τ_{j−1}) = (τ_{j−1} − τ_j)/(τ_j τ_{j−1}) = 1/τ_j − 1/τ_{j−1},

and the sums that appear after expanding the squares in the exponent telescope:

  Σ_{j=1}^{n} ( x_j²/τ_j − x_{j−1}²/τ_{j−1} ) = x_n²/τ_n − x_0²/τ_0,
  Σ_{j=1}^{n} ( 1/τ_j − 1/τ_{j−1} ) = 1/τ_n − 1/τ_0,
  Σ_{j=1}^{n} ( x_j/τ_j − x_{j−1}/τ_{j−1} ) = x_n/τ_n − x_0/τ_0.

• To change a density, we also need to account for the Jacobian of the change of variables. In this case, we have

  ∂z_j/∂x_j = 1/τ_j,  j = 1, …, n,
  ∂z_j/∂x_{j−1} = −1/τ_{j−1},  j = 2, …, n,

and all other partial derivatives are zero. This leads to the Jacobian matrix

  J = [  1/τ_1     0       ⋯        0
        −1/τ_1    1/τ_2    ⋯        0
          ⋮         ⋱        ⋱       ⋮
          0        ⋯    −1/τ_{n−1}  1/τ_n ],

whose determinant is ∏_{j=1}^{n} 1/τ_j.
• Multiplying f_{Z_1,…,Z_n}(z_1, …, z_n) by this determinant and using the change of variables worked out above, we obtain the density for X^{a→b}(t_1), …, X^{a→b}(t_n). In assembling the result, the normalizing constants combine through the telescoping product

  (τ_0/τ_1)(τ_1/τ_2) ⋯ (τ_{n−1}/τ_n) = τ_0/τ_n

and the powers of 2π, and the density can be written compactly in terms of the Brownian transition density (textbook p. 108)

  p(τ, x, y) = (1/√(2πτ)) exp{ −(y − x)²/(2τ) }

as

  f_{X^{a→b}(t_1),…,X^{a→b}(t_n)}(x_1, …, x_n) = ( p(T − t_n, x_n, b) / p(T, a, b) ) ∏_{j=1}^{n} p(t_j − t_{j−1}, x_{j−1}, x_j).   (4.7.6)

4.7.5 Brownian Bridge as a Conditioned Brownian Motion

• The joint density (4.7.6) for X^{a→b}(t_1), …, X^{a→b}(t_n) permits us to give one more interpretation of the Brownian bridge from a to b on [0,T]: it is a Brownian motion W(t) on this time interval, starting at W(0) = a and conditioned to arrive at b at time T (i.e., conditioned on W(T) = b).
• Let 0 = t_0 < t_1 < t_2 < … < t_n < T be given. The joint density of W(t_1), …, W(t_n), W(T) is

  f_{W(t_1),…,W(t_n),W(T)}(x_1, …, x_n, b) = p(T − t_n, x_n, b) ∏_{j=1}^{n} p(t_j − t_{j−1}, x_{j−1}, x_j),

where W(0) = x_0 = a.
• This is because p(t_1 − t_0, x_0, x_1) = p(t_1, a, x_1) is the density for the Brownian motion going from W(0) = a to W(t_1) = x_1 in the time between t = 0 and t = t_1. Similarly, p(t_2 − t_1, x_1, x_2) is the density for going from W(t_1) = x_1 to W(t_2) = x_2 between time t = t_1 and t = t_2, so the joint density for W(t_1) and W(t_2) is the product p(t_1, a, x_1) p(t_2 − t_1, x_1, x_2), and so on.
• The density of W(t_1), …, W(t_n) conditioned on W(T) = b is thus the quotient of the joint density of W(t_1), …, W(t_n), W(T) by the marginal density of W(T):

  p(T − t_n, x_n, b) ∏_{j=1}^{n} p(t_j − t_{j−1}, x_{j−1}, x_j) / p(T, a, b),

and this is f_{X^{a→b}(t_1),…,X^{a→b}(t_n)}(x_1, …, x_n) of (4.7.6).
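• For n = 1 this conditioning identity is easy to verify numerically (our own check): the quotient p(t, a, x) p(T − t, x, b) / p(T, a, b) should coincide with the N(a + (b − a)t/T, t(T − t)/T) density of X^{a→b}(t) found in Section 4.7.2.

```python
import numpy as np

def p(tau, x, y):
    """Transition density of Brownian motion over a time step tau."""
    return np.exp(-(y - x)**2 / (2 * tau)) / np.sqrt(2 * np.pi * tau)

a, b, T, t = 1.0, 2.0, 1.0, 0.3
x = np.linspace(-1.0, 4.0, 11)
quotient = p(t, a, x) * p(T - t, x, b) / p(T, a, b)

mean = a + (b - a) * t / T                     # m^{a→b}(t)
var = t * (T - t) / T                          # c(t,t)
bridge_pdf = np.exp(-(x - mean)**2 / (2 * var)) / np.sqrt(2 * np.pi * var)
print(np.max(np.abs(quotient - bridge_pdf)))   # ≈ 0 (machine precision)
```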

• Finally, let us define

  M^{a→b}(T) = max_{0 ≤ t ≤ T} X^{a→b}(t)

to be the maximum value obtained by the Brownian bridge from a to b on [0,T]. This random variable has the following distribution, obtained by viewing the bridge as a Brownian motion W(t) on [0,T], starting at W(0) = a and conditioned on W(T) = b, and conditioning the reflection-principle joint density of W(T) and of the maximum of the Brownian motion on [0,T] on the terminal value b.

Corollary 4.7.7. For m ≥ max(a, b),

  P{ M^{a→b}(T) > m } = exp{ −2(m − a)(m − b)/T }.
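• A Monte Carlo sketch of this distribution (ours; note that the maximum over a discrete grid slightly undershoots the maximum of the continuous path):

```python
import numpy as np

rng = np.random.default_rng(5)
a, b, T, n_steps, n_paths = 0.0, 0.5, 1.0, 1_000, 10_000
dt = T / n_steps
t = np.linspace(0.0, T, n_steps + 1)
dW = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
W = np.hstack([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)])
X = a + (b - a) * t / T + (W - (t / T) * W[:, -1:])   # bridge from a to b
M = X.max(axis=1)                                     # discretized maximum
for m in (0.75, 1.0, 1.5):
    # empirical survival probability vs. exp(-2(m-a)(m-b)/T)
    print(m, (M > m).mean(), np.exp(-2 * (m - a) * (m - b) / T))
```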
