Enlargements of Filtrations
Monique Jeanblanc
Jena, June 2010

June 9, 2010

These notes are not complete and should be considered as a preliminary version of a full text. Their aim is to provide support for the lectures given by Monique Jeanblanc and Giorgia Callegaro at University El Manar, Tunis, in March 2010, and later by Monique Jeanblanc in Jena, in June 2010. Sections marked with a ♠ are used only in very specific (but important) cases and can be skipped on a first reading. A large part of these notes can be found in the book Mathematical Methods in Financial Markets, by Jeanblanc, M., Yor, M. and Chesney, M. (Springer), indicated as [3M]. We thank Springer Verlag for allowing us to reproduce part of this volume.

Many examples related to credit risk can be found in Credit Risk Modeling by Bielecki, T.R., Jeanblanc, M. and Rutkowski, M., CSFI Lecture Note Series, Osaka University Press, 2009, indicated as [BJR]. I thank all the participants of the Jena workshop for their remarks, questions and comments.

Chapter 1

Theory of Stochastic Processes

1.1 Background

In this section, we recall some facts from the theory of stochastic processes. Proofs can be found, for example, in Dellacherie [18], Dellacherie and Meyer [22] and Rogers and Williams [68].

As usual, we start with a filtered probability space (Ω, F, F, P) where F = (Ft, t ≥ 0) is a given filtration satisfying the usual conditions, and F = F∞.

1.1.1 Path properties

A process X is continuous if, for almost all ω, the map t → Xt(ω) is continuous. The process is continuous on the right with limits on the left (in short, càdlàg, following the French acronym¹) if, for almost all ω, the map t → Xt(ω) is càdlàg.

Definition 1.1.1 A process X is increasing if X0 = 0, X is right-continuous, and Xs ≤ Xt, a.s. for s ≤ t.

Definition 1.1.2 Let (Ω, F, F, P) be a filtered probability space. The process X is F-adapted if for any t ≥ 0, the random variable Xt is Ft-measurable.

The natural filtration F^X of a process X is the smallest filtration F which satisfies the usual hypotheses and such that X is F-adapted. We shall write in short (with an abuse of notation) F^X_t = σ(Xs, s ≤ t).

Exercise 1.1.3 Write carefully the definition of the filtration F so that the right-continuity assumption is satisfied. Starting from a filtration F0 which is not right-continuous, define the smallest right-continuous filtration F which contains F0. C

1.1.2 Predictable and optional σ-algebra

If τ and ϑ are two stopping times, the stochastic interval ]]ϑ, τ]] is the set {(ω, t) : ϑ(ω) < t ≤ τ(ω)}. The optional σ-algebra O is the σ-algebra generated on Ω × R+ by the stochastic intervals [[τ, ∞[[ where τ is an F-stopping time.

¹In French, continuous on the right is continu à droite, and with limits on the left is admettant des limites à gauche. We shall also use càd for continuous on the right. The use of this acronym comes from P-A. Meyer.


The predictable σ-algebra P is the σ-algebra generated on Ω × R+ by the stochastic intervals ]]ϑ, τ]] where ϑ and τ are two F-stopping times such that ϑ ≤ τ.

A process X is said to be F-predictable (resp. F-optional) if the map (ω, t) → Xt(ω) is P-measurable (resp. O-measurable).

Example 1.1.4 An adapted càg (left-continuous) process is predictable.

Proposition 1.1.5 Let F be a given filtration (called the reference filtration).

• The optional σ-algebra O is the σ-algebra on R+ × Ω generated by càdlàg F-adapted processes (considered as mappings on R+ × Ω).
• The predictable σ-algebra P is the σ-algebra on R+ × Ω generated by the F-adapted càg (or continuous) processes.
The inclusion P ⊂ O holds.

These two σ-algebras P and O are equal if all F-martingales are continuous. In general, O = P ∨ σ(∆M, M describing the set of martingales).

A process is said to be predictable (resp. optional) if it is measurable with respect to the predictable (resp. optional) σ-field. If X is a predictable (resp. optional) process and T a stopping time, then the stopped process X^T is also predictable (resp. optional). Every process which is càg and adapted is predictable; every process which is càd and adapted is optional. If X is a càdlàg adapted process, then (Xt−, t ≥ 0) is a predictable process.

A stopping time T is predictable if and only if the process (11{t<T}, t ≥ 0) is predictable.

If τ is an F-stopping time, the σ-algebra of events prior to τ, Fτ, and the σ-algebra of events strictly prior to τ, Fτ−, are defined as:

Fτ = {A ∈ F∞ : A ∩ {τ ≤ t} ∈ Ft, ∀t}.

Fτ− is the smallest σ-algebra which contains F0 and all the sets of the form A ∩ {t < τ}, t > 0 for A ∈ Ft.

If X is F-progressively measurable and τ an F-stopping time, then the r.v. Xτ is Fτ-measurable on the set {τ < ∞}.

If τ is a random time (i.e., a non-negative r.v.), the σ-algebras Fτ and Fτ− are defined as

Fτ = σ(Yτ, Y F-optional)

Fτ− = σ(Yτ, Y F-predictable)

Exercise 1.1.6 Give a simple example of a predictable process which is not left-continuous. Hint: Deterministic functions are predictable. C

1.1.3 Doob-Meyer decomposition ♠

An adapted process X is said to be of class (D)² if the collection {XT 11{T<∞}, T a stopping time} is uniformly integrable. If Z is a non-negative supermartingale of class (D), there exists a unique increasing, integrable and predictable process A such that Zt = E(A∞ − At|Ft). In particular, any non-negative supermartingale of class (D) can be written as Zt = Mt − At where M is a uniformly integrable martingale.
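As a minimal worked example (added here; not in the original text): take the deterministic supermartingale Zt = e^{−t}, which is bounded, hence of class (D). Since everything is deterministic, the requirement Zt = E(A∞ − At|Ft) = A∞ − At forces At = 1 − e^{−t}, which is indeed continuous, increasing and predictable, and the martingale part is Mt = Zt + At = 1.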

²Class (D) is in honor of Doob.

1.1.4 Semi-martingales

We recall that an F-adapted process X is an F-semi-martingale if X = M + A where M is an F-local martingale and A an F-adapted process with finite variation. If there exists a decomposition with a process A which is predictable, the decomposition X = M + A, where M is an F-martingale and A an F-predictable process with finite variation, is unique, and X is called a special semi-martingale. If X is continuous, the process A is continuous.

In general, if G = (Gt, t ≥ 0) is a filtration larger than F = (Ft, t ≥ 0), i.e., Ft ⊂ Gt, ∀t ≥ 0 (we shall write F ⊂ G), it is not true that an F-martingale remains a martingale in the filtration G. It is not even true that F-martingales remain G-semi-martingales.

Example 1.1.7 (a) Let Gt = F∞. Then, the only F-martingales which are G-martingales are the constant ones.
(b) ♠ An interesting example is Azéma's martingale µ, defined as follows. Let B be a Brownian motion and gt = sup{s ≤ t : Bs = 0}. The process

µt = (sgn Bt) √(t − gt) , t ≥ 0 ,

is a martingale in its own filtration. This discontinuous F^µ-martingale is not an F^B-martingale; it is not even an F^B-semi-martingale.

We recall an important (but difficult) result due to Stricker [74].

Proposition 1.1.8 Let F and G be two filtrations such that F ⊂ G. If X is a G-semimartingale which is F-adapted, then it is also an F-semimartingale.

Exercise 1.1.9 Let N be a Poisson process (i.e., a process with stationary and independent increments, such that the law of Nt is a Poisson law with parameter λt). Prove that the process M defined as Mt = Nt − λt is a martingale and that the process Mt² − λt = (Nt − λt)² − λt is also a martingale. Prove that, for any θ ∈ [0, 1],

Nt = θ(Nt − λt) + (1 − θ)Nt + θλt is a decomposition of the semi-martingale N. Which one has a predictable finite variation process? C
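The martingale properties above can be checked numerically. The following Python sketch (added for illustration; the parameter values are arbitrary) simulates the terminal value Nt and verifies that the sample means of Mt = Nt − λt and of Mt² − λt are close to zero, as the martingale property (started from 0) requires.

import numpy as np

rng = np.random.default_rng(0)
lam, t, n_paths = 2.0, 3.0, 100000

# N_t has a Poisson law with parameter lam * t
N_t = rng.poisson(lam * t, size=n_paths)
M_t = N_t - lam * t

print("mean of M_t          :", M_t.mean())                   # should be near 0
print("mean of M_t^2 - lam*t:", (M_t ** 2 - lam * t).mean())  # should be near 0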

Exercise 1.1.10 Prove that τ is an H-stopping time, where H is the natural filtration of the process Ht = 11{τ≤t}, and that τ is a G-stopping time, where G = F ∨ H, for any filtration F. C

Exercise 1.1.11 Prove that if M is a G-martingale and is F-adapted, then M is an F-martingale. Prove that, if M is a G-martingale, then M̂ defined as M̂t = E(Mt|Ft) is an F-martingale. C

Exercise 1.1.12 Prove that, if G = F ∨ F̃ where F̃ is independent of F, then any F-martingale remains a G-martingale. Prove that, if F is generated by a Brownian motion W, and if there exists a probability Q equivalent to P such that F̃ is independent of F under Q, then any (P, F)-martingale remains a (P, G)-semi-martingale. C

Stochastic Integration

If X = M + A is a semi-martingale and Y a (bounded) predictable process, we denote by Y ⋆ X the stochastic integral

(Y ⋆ X)t = ∫_0^t Ys dXs = ∫_0^t Ys dMs + ∫_0^t Ys dAs

Integration by Parts ♠

Any semi-martingale X admits a decomposition as a martingale M plus a bounded variation process. The martingale part admits a decomposition M = M^c + M^d where M^c is continuous and M^d a discontinuous martingale. The process M^c is denoted in the literature by X^c (even if this notation is misleading!). The optional Itô formula is

f(Xt) = f(X0) + ∫_0^t f′(Xs−) dXs + ½ ∫_0^t f″(Xs) d⟨X^c⟩s + Σ_{0<s≤t} ( f(Xs) − f(Xs−) − f′(Xs−)∆Xs ) .

If U and V are two finite variation processes, the Stieltjes integration by parts formula can be written as follows:

UtVt = U0V0 + ∫_]0,t] Vs dUs + ∫_]0,t] Us− dVs     (1.1.1)
     = U0V0 + ∫_]0,t] Vs− dUs + ∫_]0,t] Us− dVs + Σ_{s≤t} ∆Us ∆Vs .

As a partial check, one can verify that the jump process of the left-hand side, i.e., UtVt − Ut−Vt−, is equal to the jump process of the right-hand side, i.e., Vt−∆Ut + Ut−∆Vt + ∆Ut∆Vt. The integration by parts formula for semi-martingales is

XtYt = X0Y0 + ∫_0^t Xs− dYs + ∫_0^t Ys− dXs + [X, Y]t .     (1.1.2)

For bounded variation processes, [U, V]t = Σ_{s≤t} ∆Us ∆Vs.
Let X be a continuous local martingale. The predictable quadratic variation process of X is the continuous increasing process ⟨X⟩ such that X² − ⟨X⟩ is a local martingale. Let X and Y be two continuous local martingales. The predictable covariation process is the continuous finite variation process ⟨X, Y⟩ such that XY − ⟨X, Y⟩ is a local martingale.
Let X and Y be two local martingales. The covariation process is the finite variation process [X, Y] such that
(i) XY − [X, Y] is a local martingale,
(ii) ∆[X, Y]t = ∆Xt∆Yt.
The process [X] = [X, X] is non-decreasing. More generally, the predictable covariation process is (if it exists) the predictable finite variation process ⟨X, Y⟩ such that XY − ⟨X, Y⟩ is a local martingale.

Exercise 1.1.13 Prove that if X and Y are continuous, ⟨X, Y⟩ = [X, Y]. Prove that if M is the compensated martingale of a Poisson process N with intensity λ, then [M]t = Nt and ⟨M⟩t = λt. C

1.1.5 Change of probability

Brownian filtration

Let F be a Brownian filtration and L a strictly positive F-martingale with L0 = 1, and define dQ|Ft = Lt dP|Ft. Then,

B̃t := Bt − ∫_0^t (1/Ls) d⟨B, L⟩s

is a (Q, F)-Brownian motion. More generally, if M is an F-martingale,

M̃t := Mt − ∫_0^t (1/Ls) d⟨M, L⟩s

is a (Q, F)-martingale.
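A standard special case (recalled here as an added illustration): take Lt = exp(θBt − ½θ²t) for a constant θ. Then dLt = θLt dBt, so that d⟨B, L⟩s = θLs ds and B̃t = Bt − θt: the change of measure removes the constant drift θ, which is the classical form of Girsanov's theorem.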

General case ♠

More generally, let F be a filtration and L a strictly positive F-martingale with L0 = 1, and define dQ|Ft = Lt dP|Ft. Then, if M is an F-martingale,

M̃t := Mt − ∫_0^t (1/Ls) d[M, L]s

is a (Q, F)-martingale. If the predictable covariation process ⟨M, L⟩ exists,

Mt − ∫_0^t (1/Ls−) d⟨M, L⟩s

is a (Q, F)-martingale.

1.1.6 Dual Predictable Projections

In this section, after recalling some basic facts about optional and predictable projections, we introduce the concept of a dual predictable projection³, which leads to the fundamental notion of predictable compensators. We recommend the survey paper of Nikeghbali [62].

Definitions

Let X be a bounded (or positive) process, and F a given filtration. The optional projection of X is the unique optional process (o)X which satisfies: for any F-stopping time τ

E(Xτ 11{τ<∞}) = E((o)Xτ 11{τ<∞}) .     (1.1.3)

For any F-stopping time τ, let Γ ∈ Fτ and apply the equality (1.1.3) to the stopping time τΓ = τ11Γ + ∞11Γc. We get the reinforced identity:

E(Xτ 11{τ<∞}|Fτ) = (o)Xτ 11{τ<∞} .

In particular, if A is an increasing process, then, for s ≤ t:

E((o)At − (o)As|Fs) = E(At − As|Fs) ≥ 0 .     (1.1.4)

Note that, for any t, E(Xt|Ft) = (o)Xt. However, E(Xt|Ft) is defined only almost surely for each t; thus uncountably many null sets are involved, hence, a priori, (E(Xt|Ft), t ≥ 0) is not a well-defined process, whereas (o)X takes care of this difficulty.

Comment 1.1.14 Let us comment on the difficulty here. If X is an integrable random variable, the quantity E(X|Ft) is defined a.s., i.e., if Xt = E(X|Ft) and X̃t = E(X|Ft), then P(Xt = X̃t) = 1. This means that, for any fixed t, there exists a negligible set Ωt such that Xt(ω) = X̃t(ω) for ω ∉ Ωt. For processes, we introduce the following definition: the process X is a modification of Y if, for any t, P(Xt = Yt) = 1. However, one needs a stronger assumption to be able to compare functionals of the processes. The process X is indistinguishable from (or a version of) Y if {ω : Xt(ω) = Yt(ω), ∀t} is a measurable set and P(Xt = Yt, ∀t) = 1. If X and Y are modifications of each other and are a.s. continuous, they are indistinguishable. A difficult, but important, result (see Dellacherie [17], p. 73) states: Let X and Y be two optional (resp. predictable) processes. If, for every finite (resp. predictable) stopping time τ, Xτ = Yτ a.s., then the processes X and Y are indistinguishable.

³See Dellacherie [18] for the notion of dual optional projection.

Likewise, the predictable projection of X is the unique predictable process (p)X such that for any F-predictable stopping time τ

E(Xτ 11{τ<∞}) = E((p)Xτ 11{τ<∞}) .     (1.1.5)

As above, this identity reinforces as

E(Xτ 11{τ<∞}|Fτ−) = (p)Xτ 11{τ<∞} ,

for any F-predictable stopping time τ (see Section 1.1 for the definition of Fτ−).

Example 1.1.15 Let ϑi, i = 1, 2, be two stopping times such that ϑ1 ≤ ϑ2 and Z a bounded r.v. Let X = Z 11]]ϑ1,ϑ2]]. Then, (o)X = U 11]]ϑ1,ϑ2]] and (p)X = V 11]]ϑ1,ϑ2]], where U (resp. V) is the right-continuous (resp. left-continuous) version of the martingale (E(Z|Ft), t ≥ 0).

Let τ and ϑ be two stopping times such that ϑ ≤ τ and X a positive process. If A is an increasing optional process, then

E(∫_ϑ^τ Xt dAt) = E(∫_ϑ^τ (o)Xt dAt) .

If A is an increasing predictable process, then, since 11]]ϑ,τ]](t) is predictable,

E(∫_ϑ^τ Xt dAt) = E(∫_ϑ^τ (p)Xt dAt) .

The notion of interest in this section is that of dual predictable projection, which we define as follows:

Proposition 1.1.16 Let (At, t ≥ 0) be an integrable increasing process (not necessarily F-adapted). There exists a unique integrable F-predictable increasing process (A^(p)_t, t ≥ 0), called the dual predictable projection of A, such that

E(∫_0^∞ Ys dAs) = E(∫_0^∞ Ys dA^(p)_s)

for any positive F-predictable process Y. In the particular case where At = ∫_0^t as ds, one has

A^(p)_t = ∫_0^t (p)as ds .     (1.1.6)

Proof: See Dellacherie [18], Chapter V, Dellacherie and Meyer [22], Chapter 6, paragraph (73), page 148, or Protter [67], Chapter 3, Section 5. The integrability of (A^(p)_t, t ≥ 0) results from the definition, since for Y = 1 one obtains E(A^(p)_∞) = E(A∞). ¤

This definition extends to the difference of two integrable (for simplicity) increasing processes. The terminology “dual predictable projection” refers to the fact that it is the random measure dAt(ω) which is relevant when performing that operation. Note that the predictable projection of an increasing process is not necessarily increasing, whereas its dual predictable projection is.

If X is bounded and A has integrable variation (not necessarily adapted), then

E((X ⋆ A^(p))∞) = E(((p)X ⋆ A)∞) .

This is equivalent to: for s < t,

E(At − As|Fs) = E(A^(p)_t − A^(p)_s|Fs) .     (1.1.7)

If A is F-adapted (not necessarily predictable), then (At − A^(p)_t, t ≥ 0) is an F-martingale. In that case, A^(p) is also called the predictable compensator of A.

Example 1.1.17 If N is a Poisson process with intensity λ and At = Nt, then A^(p)_t = λt. If X is a Lévy process and f a positive function with compact support which does not contain 0, the predictable compensator of Σ_{s≤t} f(∆Xs) is t⟨ν, f⟩ = t ∫ f(x)ν(dx), where ν is the Lévy measure of X.

More generally, from Proposition 1.1.16 and (1.1.7), the process (o)A − A(p) is a martingale.

Proposition 1.1.18 If A is increasing, the process (o)A is a sub-martingale and A^(p) is the predictable increasing process in the Doob-Meyer decomposition of the sub-martingale (o)A.

In a general setting, the predictable projection of an increasing process A is a sub-martingale whereas the dual predictable projection is an increasing process. The predictable projection and the dual predictable projection of an increasing process A coincide if and only if ((p)At, t ≥ 0) is increasing.

It will also be convenient to introduce the following terminology:

Definition 1.1.19 If τ is a random time, we call the F-predictable compensator associated with τ the F-dual predictable projection A^τ of the increasing process 11{τ≤t}. This dual predictable projection A^τ satisfies

E(Yτ) = E(∫_0^∞ Ys dA^τ_s)     (1.1.8)

for any positive, predictable process Y.

It follows that the process 11{τ≤t} − A^τ_t is an F-martingale.

Lemma 1.1.20 Assume that the random time τ avoids the F-stopping times, i.e., P(τ = ϑ) = 0 for any F-stopping time ϑ, and that all F-martingales are continuous. Then, the F-dual predictable projection of the process Ht := 11{τ≤t}, denoted A^τ, is continuous.

Proof: Indeed, if ϑ is a jump time of A^τ, it is an F-stopping time, hence predictable, and

E(A^τ_ϑ − A^τ_ϑ−) = E(11{τ=ϑ}) = 0 ;

the continuity of A^τ follows. ¤

Lemma 1.1.21 Assume that the random time τ avoids the F-stopping times (condition (A)) and let a^τ be the dual optional projection of 11{τ≤t}. Then a^τ is continuous. Assume that all F-martingales are continuous (condition (C)). Then A^τ = a^τ, and consequently A^τ is continuous. Under conditions (C) and (A), Zt = P(τ > t|Ft) is continuous.

Proof: Recall that under (C) the predictable and optional σ-fields are equal. See Dellacherie and Meyer [22] or Nikeghbali [62]. ¤

Lemma 1.1.22 Let τ be a finite random time such that its associated Azéma supermartingale Z^τ is continuous. Then τ avoids F-stopping times.

Proof: See Coculescu and Nikeghbali. ¤

Proposition 1.1.23 The Doob-Meyer decomposition of the super-martingale Z^τ_t = P(τ > t|Ft) is

Z^τ_t = E(A^τ_∞|Ft) − A^τ_t = µ^τ_t − A^τ_t

where µ^τ_t := E(A^τ_∞|Ft).

Proof: From the definition of the dual predictable projection, for any predictable process Y, one has

E(Yτ) = E(∫_0^∞ Yu dA^τ_u) .

Let t be fixed and Ft ∈ Ft. Then, the process Yu = Ft 11{t<u} is predictable, and

E(Ft 11{t<τ}) = E(Ft (A^τ_∞ − A^τ_t)) .

It follows that E(A^τ_∞|Ft) = Z^τ_t + A^τ_t. ¤
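As an added illustration of this proposition: if τ is a finite random time independent of F∞, with cumulative distribution function F(t) = P(τ ≤ t), then, for any positive predictable Y, E(Yτ) = E(∫_0^∞ Ys dF(s)), so that A^τ_t = F(t) (a deterministic, hence predictable, increasing process), Z^τ_t = P(τ > t) = 1 − F(t) and µ^τ_t = E(A^τ_∞|Ft) = 1.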

Comment 1.1.24 ♠ It can be proved that the martingale

µ^τ_t := E(A^τ_∞|Ft) = A^τ_t + Z^τ_t

is BMO. We recall that a continuous uniformly integrable martingale M belongs to the space BMO if there exists a constant m such that

E(⟨M⟩∞ − ⟨M⟩τ |Fτ) ≤ m

for any stopping time τ. It can be proved (see, e.g., Dellacherie and Meyer [22], Chapter VII) that the space BMO is the dual of H¹, the space of martingales such that E(sup_t |Mt|) < ∞.

Example

We now present an example of computation of a dual predictable projection. Let (Bs, s ≥ 0) be an F-Brownian motion starting from 0 and B^(ν)_s = Bs + νs. Let G^(ν) be the filtration generated by the process (|B^(ν)_s|, s ≥ 0) (which coincides with the one generated by ((B^(ν)_s)², s ≥ 0)). We now compute the decomposition of the semi-martingale (B^(ν))² in the filtration G^(ν) and the dual predictable projection (with respect to G^(ν)) of the finite variation process ∫_0^t B^(ν)_s ds. Itô's lemma provides us with the decomposition of the process (B^(ν))² in the filtration F:

(B^(ν)_t)² = 2∫_0^t B^(ν)_s dBs + 2ν∫_0^t B^(ν)_s ds + t .     (1.1.9)

To obtain the decomposition in the filtration G^(ν), we remark that

E(e^{νBs}|F^{|B|}_s) = cosh(νBs) (= cosh(ν|Bs|)) ,

which leads, thanks to Girsanov's Theorem, to the equality:

E(Bs + νs|F^{|B|}_s) = E(Bs e^{νBs}|F^{|B|}_s) / E(e^{νBs}|F^{|B|}_s) = Bs tanh(νBs) = ψ(νBs)/ν ,

where ψ(x) = x tanh(x). We now come back to equality (1.1.9). Due to (1.1.6), we have just shown that:

the dual predictable projection of 2ν∫_0^t B^(ν)_s ds is 2∫_0^t ψ(νB^(ν)_s) ds .     (1.1.10)

As a consequence,

(B^(ν)_t)² − 2∫_0^t ψ(νB^(ν)_s) ds − t

is a G^(ν)-martingale with increasing process 4∫_0^t (B^(ν)_s)² ds. Hence, there exists a G^(ν)-Brownian motion β such that

(Bt + νt)² = 2∫_0^t |Bs + νs| dβs + 2∫_0^t ψ(ν(Bs + νs)) ds + t .     (1.1.11)

Exercise 1.1.25 Let M be a càdlàg martingale. Prove that its predictable projection is (Mt−, t ≥ 0). C

Exercise 1.1.26 Let X be a measurable process such that E(∫_0^t |Xs| ds) < ∞ and Yt = ∫_0^t Xs ds. Prove that (o)Yt − ∫_0^t (o)Xs ds is an F-martingale (RW). C

Exercise 1.1.27 Prove that if X is bounded and Y is predictable, then (p)(YX) = Y (p)X (RW, p. 349). C

Exercise 1.1.28 Prove that, more generally than (1.1.10), the dual predictable projection of ∫_0^t f(B^(ν)_s) ds is ∫_0^t E(f(B^(ν)_s)|G^(ν)_s) ds and that

E(f(B^(ν)_s)|G^(ν)_s) = ( f(B^(ν)_s) e^{νB^(ν)_s} + f(−B^(ν)_s) e^{−νB^(ν)_s} ) / (2 cosh(νB^(ν)_s)) .
C

Exercise 1.1.29 Prove that, if (αs, s ≥ 0) is an increasing process and X a positive measurable process, then

(∫_0^· Xs dαs)^(p)_t = ∫_0^t (p)Xs dαs .

In particular,

(∫_0^· Xs ds)^(p)_t = ∫_0^t (p)Xs ds .
C

1.1.7 Itô-Kunita-Wentzell formula

We recall here the Itô-Kunita-Wentzell formula (see Kunita [55]). Let Ft(x) be a family of stochastic processes, continuous in (t, x) ∈ R+ × R^d a.s., and satisfying the following conditions:
(i) for each t > 0, x → Ft(x) is C² from R^d to R;
(ii) for each x, (Ft(x), t ≥ 0) is a continuous semimartingale

dFt(x) = Σ_{j=1}^n f^j_t(x) dM^j_t ,

where the M^j are continuous semimartingales and the f^j(x) are stochastic processes continuous in (t, x), such that for every s > 0 the map x → f^j_s(x) is C¹, and for every x, f^j(x) is an adapted process.
Let X = (X¹, ..., X^d) be a continuous semimartingale. Then

Ft(Xt) = F0(X0) + Σ_{j=1}^n ∫_0^t f^j_s(Xs) dM^j_s + Σ_{i=1}^d ∫_0^t (∂Fs/∂xi)(Xs) dX^i_s
 + Σ_{i=1}^d Σ_{j=1}^n ∫_0^t (∂f^j_s/∂xi)(Xs) d⟨M^j, X^i⟩s + ½ Σ_{i,k=1}^d ∫_0^t (∂²Fs/∂xi∂xk)(Xs) d⟨X^k, X^i⟩s .
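As an added sanity check: if Ft(x) = f(x) does not depend on t (so that n = 0 and the terms in dM^j and d⟨M^j, X^i⟩ vanish), the formula reduces to the classical Itô formula

f(Xt) = f(X0) + Σ_{i=1}^d ∫_0^t (∂f/∂xi)(Xs) dX^i_s + ½ Σ_{i,k=1}^d ∫_0^t (∂²f/∂xi∂xk)(Xs) d⟨X^k, X^i⟩s .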

1.2 Some Important Exercises

Exercise 1.2.1 Let B be a Brownian motion, F its natural filtration and Mt = sup_{s≤t} Bs. Prove that, for t < 1, E(f(M1)|Ft) = F(1 − t, Bt, Mt) with

F(s, a, b) = √(2/(πs)) ( f(b) ∫_0^{b−a} e^{−u²/(2s)} du + ∫_b^∞ f(u) exp(−(u − a)²/(2s)) du ) .

Hint: Note that

sup_{s≤1} Bs = sup_{s≤t} Bs ∨ sup_{t≤s≤1} Bs = sup_{s≤t} Bs ∨ (M̂_{1−t} + Bt)

where M̂s = sup_{u≤s} B̂u for B̂u = Bu+t − Bt. C

Exercise 1.2.2 Let ϕ be a C¹ function, B a Brownian motion and Mt = sup_{s≤t} Bs. Prove that the process

ϕ(Mt) − (Mt − Bt)ϕ′(Mt)

is a local martingale. C

R ∞ Hint: Apply Doob’s Theorem to the martingale h(Mt)(Mt − Bt) + du h(u). Mt

Exercise 1.2.3 A Useful Lemma: Doob’s Maximal Identity. Let M be a positive continuous martingale such that M0 = x.

(i) Prove that if lim_{t→∞} Mt = 0, then

P(sup_t Mt > a) = (x/a) ∧ 1     (1.2.1)

and sup_t Mt (law)= x/U, where U is a random variable with a uniform law on [0, 1].
(ii) Conversely, if sup_t Mt (law)= x/U, show that M∞ = 0.

Hint: Apply Doob’s optional sampling theorem to Ta ∧ t and prove, passing to the limit when t goes to infinity, that

x = E(MTa ) = aP(Ta < ∞) = aP(sup Mt ≥ a) . C

Exercise 1.2.4 Let F ⊂ G and dQ|Gt = Lt dP|Gt, where L is continuous. Prove that dQ|Ft = ℓt dP|Ft where ℓt = E(Lt|Ft). Prove that any (G, Q)-martingale can be written as M^P_t − ∫_0^t d⟨M^P, L⟩s/Ls, where M^P is a (G, P)-martingale. C

Exercise 1.2.5 Prove that, for any (bounded) process a (not necessarily adapted)

Mt := E(∫_0^t au du|Ft) − ∫_0^t E(au|Fu) du

is an F-martingale. Extend the result to the case ∫_0^· Xs dαs where (αs, s ≥ 0) is an increasing predictable process and X a positive measurable process. C

Hint: Compute E(Mt − Ms|Fs) = E(∫_s^t au du − ∫_s^t E(au|Fu) du|Fs).

Chapter 2

Enlargement of Filtrations

From the end of the seventies, Barlow, Jeulin and Yor started a systematic study of the problem of enlargement of filtrations: namely, which F-martingales M remain G-semi-martingales and, if it is the case, what is the semi-martingale decomposition of M in G? There are mainly two kinds of enlargement of filtration:
• Initial enlargement of filtrations: in that case, Gt = Ft ∨ σ(L) where L is a r.v. (or, more generally, Gt = Ft ∨ F̃ where F̃ is a σ-algebra);
• Progressive enlargement of filtrations, where Gt = Ft ∨ Ht with H the natural filtration of Ht = 11{τ≤t}, where τ is a random time (or, more generally, Gt = Ft ∨ F̃t where F̃ is another filtration).
Up to now, three lecture notes volumes have been dedicated to this question: Jeulin [46], Jeulin and Yor [50] and Mansuy and Yor [60]. See also related chapters in the books of Protter [67], Dellacherie, Maisonneuve and Meyer [19] and Yor [80]. Some important papers are Brémaud and Yor [13], Barlow [7], Jacod [41, 40] and Jeulin and Yor [47], and the survey paper of Nikeghbali [62]. See also the theses of Amendinger [1], Ankirchner [4] and Song [72], and the papers of Ankirchner et al. [5] and Yoeurp [78].

Very few studies are done in the case Gt = Ft ∨ F̃t. One exception is F̃t = σ(Jt), where Jt = inf_{s≥t} Xs and X is a three-dimensional Bessel process. These results are extensively used in finance to study two specific problems occurring in insider trading: the existence of arbitrages using strategies adapted w.r.t. the large filtration, and the change in price dynamics when an F-martingale is no longer a G-martingale. They are also a cornerstone for the study of default risk. An incomplete list of authors concerned with enlargement of filtration in finance for insider trading is: Ankirchner [5, 4], Amendinger [2], Amendinger et al. [3], Baudoin [8], Corcuera et al. [15], Eyraud-Loisel [30], Florens and Fougère [31], Gasbarra et al. [33], Grorud and Pontier [34], Hillairet [36], Imkeller [37], Karatzas and Pikovsky [51], Wu [77] and Kohatsu-Higa and Øksendal [54]. Di Nunno et al. [24], Imkeller [38], Imkeller et al. [39] and Kohatsu-Higa [52, 53] have introduced Malliavin calculus to study the insider trading problem. We shall not discuss this approach here. Enlargement theory is also used to study asymmetric information, see, e.g., Föllmer et al. [32], and progressive enlargement is an important tool for the study of default in the reduced-form approach, see Bielecki et al. [10, 11, 12], Elliott et al. [28] and Kusuoka [56].
We start with the case where F-martingales remain G-martingales. In that case, there is a complete characterization of when this property holds. Then, we study a particular example: Brownian and Poisson bridges. In the following step, we study initial enlargement, where Gt = Ft ∨ σ(L) for a random variable L. The goal is to give conditions such that F-martingales remain G-semi-martingales and, in that case, to give the G-semi-martingale decomposition of the F-martingales. Then, we answer the same


questions for progressive enlargements, where Gt = Ft ∨ σ(τ ∧ t) for a non-negative random variable τ. Only sufficient conditions are known. The study of a particular and important case, that of last passage times, is presented.

2.1 Immersion of Filtrations

Let F and G be two filtrations such that F ⊂ G. Our aim is to study some conditions which ensure that F-martingales are G-semi-martingales, and one can ask in a first step whether all F-martingales are G-martingales. This last property is equivalent to E(ζ|Ft) = E(ζ|Gt) for any t and any ζ ∈ L¹(F∞).
Let us study a simple example where Gt = Ft ∨ σ(ζ), with ζ ∈ L¹(F∞) and ζ not F0-measurable. Obviously, E(ζ|Gt) = ζ is a G-martingale and E(ζ|Ft) is an F-martingale. However, E(ζ|G0) ≠ E(ζ|F0), and some F-martingales are not G-martingales (as, for example, E(ζ|Ft)).

2.1.1 Definition

The filtration F is said to be immersed in G if any square integrable F-martingale is a G-martingale (Tsirel'son [75], Émery [29]). This is also referred to as the (H) hypothesis by Brémaud and Yor [13], who defined it as:
(H) Every F-square integrable martingale is a G-square integrable martingale.

Proposition 2.1.1 Hypothesis (H) is equivalent to any of the following properties:

(H1) ∀t ≥ 0, the σ-fields F∞ and Gt are conditionally independent given Ft, i.e., ∀t ≥ 0, ∀Gt ∈ L²(Gt), ∀F ∈ L²(F∞), E(GtF|Ft) = E(Gt|Ft)E(F|Ft).

(H2) ∀t ≥ 0, ∀Gt ∈ L¹(Gt), E(Gt|F∞) = E(Gt|Ft).

(H3) ∀t ≥ 0, ∀F ∈ L¹(F∞), E(F|Gt) = E(F|Ft).

In particular, (H) holds if and only if every F-local martingale is a G-local martingale.

Proof:
• (H) ⇒ (H1). Let F ∈ L²(F∞) and assume that hypothesis (H) is satisfied. This implies that the martingale Ft = E(F|Ft) is a G-martingale such that F∞ = F, hence Ft = E(F|Gt). It follows that, for any t and any Gt ∈ L²(Gt):

E(FGt|Ft) = E(GtE(F|Gt)|Ft) = E(GtE(F|Ft)|Ft) = E(Gt|Ft)E(F|Ft) ,

which is exactly (H1).
• (H1) ⇒ (H2). Let F ∈ L²(F∞) and Gt ∈ L²(Gt). Under (H1),

E(FE(Gt|Ft)) = E(E(F|Ft)E(Gt|Ft)) = E(E(FGt|Ft)) = E(FGt) ,

which is (H2).
• (H2) ⇒ (H3). Let F ∈ L²(F∞) and Gt ∈ L²(Gt). If (H2) holds, then it is easy to prove that

E(GtE(F|Ft)) = E(FE(Gt|Ft)) = E(FGt) ,

which implies (H3). The general case follows by approximation.
• Obviously (H3) implies (H). ¤

In particular, under (H), if W is an F-Brownian motion, then it is a G-martingale with bracket t, since such a bracket does not depend on the filtration. Hence, it is a G-Brownian motion. It is important to note that ∫_0^t ψs dWs is then a G-local martingale for any G-adapted process ψ satisfying the usual integrability conditions. A trivial (but useful) example for which (H) is satisfied is G = F ∨ F̃, where F and F̃ are two filtrations such that F∞ is independent of F̃∞.

Exercise 2.1.2 Assume that (H) holds and that W is an F-Brownian motion. Prove that W is a G-Brownian motion without using the bracket.
Hint: either use the fact that W and (W²t − t, t ≥ 0) are F-, hence G-martingales, or the fact that, for any λ, exp(λWt − ½λ²t) is an F-, hence a G-martingale. C

Exercise 2.1.3 Prove that, if F is immersed in G, then, for any t, Ft = Gt ∩ F∞. Show that, if τ is F∞-measurable, immersion holds if and only if τ is an F-stopping time.
Hint: if A ∈ Gt ∩ F∞, then 11A = E(11A|F∞) = E(11A|Ft). C

2.1.2 Change of probability

Of course, the notion of immersion depends strongly on the probability measure and, in particular, is not stable under a change of probability. See Subsection 2.4.6 for a counterexample. We now study in which setup the immersion property is preserved under a change of probability.

Proposition 2.1.4 Assume that the filtration F is immersed in G under P, and let Q be equivalent to P, with Q|Gt = LtP|Gt where L is assumed to be F-adapted. Then, F is immersed in G under Q.

Proof: Let N be an (F, Q)-martingale; then (NtLt, t ≥ 0) is an (F, P)-martingale and, since F is immersed in G under P, (NtLt, t ≥ 0) is a (G, P)-martingale, which implies that N is a (G, Q)-martingale. ¤

Note that, if one defines a change of probability on F with an F-martingale L, one cannot extend this change of probability to G by setting Q|Gt = LtP|Gt, since, in general, L fails to be a G-martingale. In the next proposition, we do not assume that the Radon-Nikodým density is F-adapted.

Proposition 2.1.5 Assume that F is immersed in G under P, and let Q be equivalent to P with Q|Gt = LtP|Gt, where L is a G-martingale, and define ℓt = E(Lt|Ft). Assume that all F-martingales are continuous and that L is continuous. Then, F is immersed in G under Q if and only if the (G, P)-local martingale

∫_0^t dLs/Ls − ∫_0^t dℓs/ℓs =: L(L)t − L(ℓ)t

is orthogonal to the set of all (F, P)-local martingales.

Proof: We prove that any (F, Q)-martingale is a (G, Q)-martingale. Every (F, Q)-martingale M^Q may be written as

M^Q_t = M^P_t − ∫_0^t d⟨M^P, ℓ⟩s/ℓs

where M^P is an (F, P)-martingale. By hypothesis, M^P is a (G, P)-martingale and, from Girsanov's theorem, M^P_t = N^Q_t + ∫_0^t d⟨M^P, L⟩s/Ls, where N^Q is a (G, Q)-martingale. It follows that

M^Q_t = N^Q_t + ∫_0^t d⟨M^P, L⟩s/Ls − ∫_0^t d⟨M^P, ℓ⟩s/ℓs
      = N^Q_t + ∫_0^t d⟨M^P, L(L) − L(ℓ)⟩s .

Thus M^Q is a (G, Q)-martingale if and only if ⟨M^P, L(L) − L(ℓ)⟩ = 0. Here L denotes the stochastic logarithm, the inverse of the stochastic exponential. ¤

Proposition 2.1.6 Let P be a probability measure, and

Q|Gt = LtP|Gt ; Q|Ft = ℓtP|Ft .

Then, hypothesis (H) holds under Q if and only if:

∀T, ∀X ≥ 0, X ∈ FT, ∀t < T :  E(XLT|Gt)/Lt = E(XℓT|Ft)/ℓt .     (2.1.1)

Proof: Note that, for X ∈ FT,

EQ(X|Gt) = (1/Lt) EP(XLT|Gt) , EQ(X|Ft) = (1/ℓt) EP(XℓT|Ft) ,

and that, from the monotone class theorem, (H) holds under Q if and only if, ∀T, ∀X ∈ FT, ∀t ≤ T, one has

EQ(X|Gt) = EQ(X|Ft) .

¤

Comment 2.1.7 The (H) hypothesis was studied by Brémaud and Yor [13] and Mazziotto and Szpirglas [61], and in a financial setting by Kusuoka [56], Elliott et al. [28] and Jeanblanc and Rutkowski [42, 43].

Exercise 2.1.8 Prove that, if F is immersed in G under P and if Q is a probability equivalent to P, then any (Q, F)-semi-martingale is a (Q, G)-semi-martingale. Let

Q|Gt = LtP|Gt ; Q|Ft = ℓtP|Ft ,

and let X be a (Q, F)-martingale. Assuming that F is a Brownian filtration and that L is continuous, prove that

Xt + ∫_0^t ( (1/ℓs) d⟨X, ℓ⟩s − (1/Ls) d⟨X, L⟩s )

is a (G, Q)-martingale. In the general case, prove that

Xt + ∫_0^t (Ls−/Ls) ( (1/ℓs−) d[X, ℓ]s − (1/Ls−) d[X, L]s )

is a (G, Q)-martingale. See Jeulin and Yor [48]. C

Exercise 2.1.9 Assume that F^(L)_t = Ft ∨ σ(L), where L is a random variable. Find under which conditions on L the immersion property holds. C

Exercise 2.1.10 Construct an example where some F-martingales are G-martingales, but not all F-martingales are G-martingales. C

Exercise 2.1.11 Assume that F ⊂ G̃ and that (H) holds for F and G̃. Let τ be a G̃-stopping time. Prove that (H) holds for F and F^τ = F ∨ H, where Ht = σ(τ ∧ t). Let G be such that F ⊂ G ⊂ G̃. Prove that F is immersed in G. C

2.2 Bridges

2.2.1 The

Rather than studying ab initio the general problem of initial enlargement, we discuss an interesting example. Let us start with a Brownian motion (Bt, t ≥ 0) and its natural filtration F^B. Define a new filtration as F^(B1)_t = Ft ∨ σ(B1). In this filtration, the process (Bt, t ≥ 0) is no longer a martingale. It is easy to be convinced of this by looking at the process (E(B1|F^(B1)_t), t ≤ 1): this process is identically equal to B1, not to Bt, hence (Bt, t ≥ 0) is not an F^(B1)-martingale. However, (Bt, t ≥ 0) is an F^(B1)-semi-martingale, as follows from Proposition 2.2.2 below. Before giving this proposition, we recall some facts on the Brownian bridge.

The Brownian bridge (bt, 0 ≤ t ≤ 1) is defined as the conditioned process (Bt, t ≤ 1 | B1 = 0). Note that Bt = (Bt − tB1) + tB1 where, from the Gaussian property, the process (Bt − tB1, t ≤ 1) and the random variable B1 are independent. Hence, (bt, 0 ≤ t ≤ 1) (law)= (Bt − tB1, 0 ≤ t ≤ 1). The Brownian bridge is a Gaussian process, with zero mean and covariance function s(1 − t), s ≤ t. Moreover, it satisfies b0 = b1 = 0. We can represent the Brownian bridge between 0 and y during the time interval [0, 1] as

(Bt − tB1 + ty ; t ≤ 1) .

More generally, the Brownian bridge between x and y during the time interval [0, T] may be expressed as

(x + Bt − (t/T)BT + (t/T)(y − x) ; t ≤ T) ,

where (Bt ; t ≤ T) is a standard BM starting from 0.
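The representation bt = Bt − tB1 lends itself to a direct numerical check. The following Python sketch (added; grid sizes and seed are arbitrary choices) simulates Brownian paths, builds the bridge and compares the empirical covariance with s(1 − t).

import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps = 50000, 100
dt = 1.0 / n_steps
t = np.linspace(dt, 1.0, n_steps)        # grid 0.01, 0.02, ..., 1.0

dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B = np.cumsum(dB, axis=1)                # Brownian paths on [0, 1]
b = B - np.outer(B[:, -1], t)            # bridge: b_t = B_t - t * B_1

i, j = 29, 69                            # s = 0.30, t = 0.70
print("empirical Cov(b_s, b_t):", np.mean(b[:, i] * b[:, j]))
print("theoretical s(1 - t)   :", t[i] * (1.0 - t[j]))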

Exercise 2.2.1 Prove that ∫_0^{t∧1} (B1 − Bs)/(1 − s) ds is convergent.
Prove that, for 0 ≤ s < t ≤ 1, E(Bt − Bs|B1 − Bs) = ((t − s)/(1 − s))(B1 − Bs). C

Decomposition of the BM in the enlarged filtration F(B1)

Proposition 2.2.2 Let F^(B1)_t = ∩_{ε>0} (Ft+ε ∨ σ(B1)). The process

βt = Bt − ∫_0^{t∧1} (B1 − Bs)/(1 − s) ds

is an F^(B1)-martingale, and in fact an F^(B1)-Brownian motion. In other words,

Bt = βt + ∫_0^{t∧1} (B1 − Bs)/(1 − s) ds

is the decomposition of B as an F^(B1)-semi-martingale.

Proof: Note that the definition of F^(B1) is done so as to satisfy the right-continuity assumption. We shall write F^(B1)_s = Fs ∨ σ(B1) = Fs ∨ σ(B1 − Bs). Then, since Fs is independent of (Bs+h − Bs, h ≥ 0), one has

E(Bt − Bs|F^(B1)_s) = E(Bt − Bs|B1 − Bs) .

For t < 1,

E(∫_s^t (B1 − Bu)/(1 − u) du | F^(B1)_s) = ∫_s^t (1/(1 − u)) E(B1 − Bu|B1 − Bs) du
 = ∫_s^t (1/(1 − u)) ( B1 − Bs − E(Bu − Bs|B1 − Bs) ) du
 = ∫_s^t (1/(1 − u)) ( B1 − Bs − ((u − s)/(1 − s))(B1 − Bs) ) du
 = (B1 − Bs) ∫_s^t du/(1 − s) = (B1 − Bs)(t − s)/(1 − s) .

It follows that

E(βt − βs|F^(B1)_s) = 0 ,

hence β is an F^(B1)-martingale; being continuous with bracket ⟨β⟩t = ⟨B⟩t = t (the bracket is not changed by the enlargement), it is an F^(B1)-Brownian motion by Lévy's characterization theorem. ¤
It follows that if M is an F-local martingale such that ∫_0^1 (1/√(1 − s)) d|⟨M, B⟩|s is finite, then

M̂t = Mt − ∫_0^{t∧1} (B1 − Bs)/(1 − s) d⟨M, B⟩s

is an F^(B1)-local martingale.

Comment 2.2.3 The singularity of (B1 − Bt)/(1 − t) at t = 1, i.e., the fact that (B1 − Bt)/(1 − t) is not square integrable between 0 and 1, prevents a Girsanov change of measure transforming the (P, F^(B1))-semi-martingale B into a (Q, F^(B1))-martingale.

We obtain that the standard Brownian bridge b is a solution of the following stochastic equation (take care about the change of notation):

dbt = −(bt/(1 − t)) dt + dWt , 0 ≤ t < 1 ; b0 = 0 .

The solution of the above equation is bt = (1 − t)∫_0^t (1/(1 − s)) dWs, which is a Gaussian process with zero mean and covariance s(1 − t), s ≤ t.
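As an added one-line verification: integration by parts applied to the product bt = (1 − t)∫_0^t dWs/(1 − s) gives dbt = −(∫_0^t dWs/(1 − s)) dt + (1 − t)(1 − t)^{-1} dWt = −(bt/(1 − t)) dt + dWt, so this formula does solve the equation.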

Exercise 2.2.4 Using the notation of Proposition 2.2.2, prove that B1 and β are independent. Check that the projection of β on F^B is equal to B.
Hint: The F^(B1)-Brownian motion β is independent of F^(B1)_0. C

Exercise 2.2.5 Consider the SDE

dXt = −(Xt/(1 − t)) dt + dWt , 0 ≤ t < 1 ; X0 = 0 .

1. Prove that Xt = (1 − t)∫_0^t dWs/(1 − s), 0 ≤ t < 1.

2. Prove that (Xt, t ≥ 0) is a Gaussian process. Compute its expectation and its covariance.

3. Prove that lim_{t→1} Xt = 0.

C

Exercise 2.2.6 (See Jeulin and Yor [49]) Let Xt = ∫_0^t ϕs dBs, where ϕ is predictable and such that ∫_0^t ϕ²s ds < ∞. Prove that the following assertions are equivalent:

1. X is an F^(B1)-semimartingale with decomposition

Xt = ∫_0^t ϕs dβs + ∫_0^{t∧1} ϕs (B1 − Bs)/(1 − s) ds ;

2. ∫_0^1 |ϕs| |B1 − Bs|/(1 − s) ds < ∞ ;

3. ∫_0^1 |ϕs|/√(1 − s) ds < ∞ .
C

2.2.2 Poisson Bridge

Let N be a Poisson process with constant intensity λ, F^N_t = σ(Ns, s ≤ t) its natural filtration and T > 0 a fixed time. The process Mt = Nt − λt is a martingale. Let G*_t = σ(Ns, s ≤ t; NT) be the natural filtration of N enlarged with the terminal value NT of the process N.

Proposition 2.2.7 Assume that λ = 1. The process

ηt = Mt − ∫_0^{t∧T} (MT − Ms)/(T − s) ds

is a G*-martingale with predictable bracket, for t ≤ T,

Λt = ∫_0^t (NT − Ns)/(T − s) ds .

Proof: For 0 < s < t < T,

E(Nt − Ns|G*_s) = E(Nt − Ns|NT − Ns) = ((t − s)/(T − s)) (NT − Ns) ,

where the last equality follows from the fact that, if X and Y are independent with Poisson laws with parameters µ and ν respectively, then

P(X = k|X + Y = n) = (n!/(k!(n − k)!)) α^k (1 − α)^{n−k} , where α = µ/(µ + ν) .

Hence,

E(∫_s^t (NT − Nu)/(T − u) du | G*_s) = ∫_s^t (du/(T − u)) ( NT − Ns − E(Nu − Ns|G*_s) )
 = ∫_s^t (du/(T − u)) ( NT − Ns − ((u − s)/(T − s))(NT − Ns) )
 = (NT − Ns) ∫_s^t du/(T − s) = (NT − Ns)(t − s)/(T − s) .

Therefore,

E(Nt − Ns − ∫_s^t (NT − Nu)/(T − u) du | G*_s) = ((t − s)/(T − s))(NT − Ns) − ((t − s)/(T − s))(NT − Ns) = 0 ,

and the result follows. ¤
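Proposition 2.2.7 can be illustrated numerically, using the standard fact that, conditionally on NT = n, the n jump times of N are i.i.d. uniform on [0, T]. The Python sketch below (added; the parameters are arbitrary) checks that the sample mean of ηt is close to zero, as E(ηt|G*_0) = η0 = 0 requires, since NT is G*_0-measurable.

import numpy as np

rng = np.random.default_rng(2)
T, n, n_paths, n_grid = 1.0, 5, 20000, 400
t_eval = 0.6
ds = t_eval / n_grid
s = np.arange(n_grid) * ds                   # left endpoints of [0, t_eval)

# conditionally on N_T = n, jump times are n i.i.d. uniforms on [0, T]
jumps = rng.uniform(0.0, T, size=(n_paths, n))

N_s = (jumps[:, None, :] <= s[None, :, None]).sum(axis=2)  # N on the grid
N_t = (jumps <= t_eval).sum(axis=1)

M_s = N_s - s[None, :]                       # M_u = N_u - u (lambda = 1)
M_T = n - T
integral = ((M_T - M_s) / (T - s[None, :])).sum(axis=1) * ds
eta = (N_t - t_eval) - integral
print("sample mean of eta_t (should be ~ 0):", eta.mean())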

Comment 2.2.8 Poisson bridges are studied in Jeulin and Yor [49]. This kind of enlargement of filtration is used for modelling insider trading in Elliott and Jeanblanc [27], Grorud and Pontier [34] and Kohatsu-Higa and Øksendal [54].

Exercise 2.2.9 Prove that, for any enlargement of filtration, the compensated martingale M remains a semi-martingale. Hint: M has bounded variation. C

Exercise 2.2.10 Prove that any F^N-martingale is a G*-semimartingale. C

Exercise 2.2.11 Prove that

ηt = Nt − ∫_0^{t∧T} (NT − Ns)/(T − s) ds − (t − T)⁺ ,

and that

⟨η⟩t = ∫_0^{t∧T} (NT − Ns)/(T − s) ds + (t − T)⁺ .

Therefore, (ηt, t ≥ 0) is a compensated G*-Poisson process, time-changed by ∫_0^t (NT − Ns)/(T − s) ds, i.e., ηt = M̃(∫_0^t (NT − Ns)/(T − s) ds), where (M̃(t), t ≥ 0) is a compensated Poisson process. C

Exercise 2.2.12 A process X fulfills the harness property if

E( (Xt − Xs)/(t − s) | Fs0], [T ) = (XT − Xs0)/(T − s0)

for s0 ≤ s < t ≤ T, where Fs0], [T = σ(Xu, u ≤ s0, u ≥ T). Prove that a process with the harness property satisfies

E(Xt | Fs], [T) = ((T − t)/(T − s)) Xs + ((t − s)/(T − s)) XT ,

and conversely. Prove that, if X satisfies the harness property, then, for any fixed T,

M^T_t = Xt − ∫_0^t (XT − Xu)/(T − u) du , t < T ,

is an Ft], [T-martingale, and conversely. See [3M] for more comments. C

2.2.3 Insider trading

Brownian Bridge

We consider a first example. Let dSt = St(µdt + σdBt), where µ and σ are constants, be the price of a risky asset. Assume that the riskless asset has a constant interest rate r. The wealth of an agent is

dXt = rXtdt + π̂t(dSt − rStdt) = rXtdt + πtσXt(dBt + θdt) , X0 = x ,

where θ = (µ − r)/σ and πt = π̂tSt/Xt is assumed to be an F^B-adapted process. Here π̂ is the number of shares of the risky asset, and π the proportion of wealth invested in the risky asset. It follows that

ln(X^{π,x}_T) = ln x + ∫_0^T (r − ½π²sσ² + θπsσ) ds + ∫_0^T σπs dBs .

Then,

E(ln(X^{π,x}_T)) = ln x + ∫_0^T E(r − ½π²sσ² + θπsσ) ds .

The solution of max_π E(ln(X^{π,x}_T)) is πs = θ/σ, and

sup_π E(ln(X^{π,x}_T)) = ln x + T(r + ½θ²) .

Note that, if the coefficients r, µ and σ are F-adapted, the same computation leads to

sup_π E(ln(X^{π,x}_T)) = ln x + E(∫_0^T (rt + ½θ²t) dt) ,

where θt = (µt − rt)/σt. We now enlarge the filtration with S1 (or, equivalently, with B1: since σ > 0, the filtrations of S and B coincide). In the enlarged filtration, setting, for t < 1, αt = (B1 − Bt)/(1 − t), the dynamics of S are

dSt = St((µ + σαt)dt + σdβt) ,

and the dynamics of the wealth are

dXt = rXtdt + πtσXt(dβt + θ̃tdt) , X0 = x ,

with θ̃t = (µ − r)/σ + αt = θ + αt. The solution of max_π E(ln(X^{π,x}_T)) is πs = θ̃s/σ. Then, for T < 1,

ln(X^{π,x,*}_T) = ln x + ∫_0^T (r + ½θ̃²s) ds + ∫_0^T σπs dβs ,
E(ln(X^{π,x,*}_T)) = ln x + ∫_0^T (r + ½(θ² + E(α²s) + 2θE(αs))) ds = ln x + (r + ½θ²)T + ½∫_0^T E(α²s) ds ,

where we have used the fact that E(αt) = 0 (if the coefficients r, µ and σ are F-adapted, αt is orthogonal to Ft, hence E(αtθt) = 0). Let

V^F(x) = max{E(ln(X^{π,x}_T)) ; π is F-admissible} ,
V^G(x) = max{E(ln(X^{π,x}_T)) ; π is G-admissible} .

Then V^G(x) = V^F(x) + ½E(∫_0^T α²s ds) = V^F(x) − ½ ln(1 − T). If T = 1, the value function is infinite: there is an arbitrage opportunity, and there does not exist an e.m.m. such that the discounted price process (e^{−rt}St, t ≤ 1) is a G-martingale. However, for any ε ∈ ]0, 1], there exists a uniformly integrable G-martingale L defined as

dLt = −((µ − r + σαt)/σ) Lt dβt , t ≤ 1 − ε , L0 = 1 ,

such that, setting dQ|Gt = LtdP|Gt, the process (e^{−rt}St, t ≤ 1 − ε) is a (Q, G)-martingale. This is the main point in the theory of insider trading, where the knowledge of the terminal value of the underlying asset creates an arbitrage opportunity, which becomes effective at time 1.
It is important to mention that, in both cases, the discounted wealth of the investor satisfies Xte^{−rt} = x + ∫_0^t π̂s d(Sse^{−rs}). The insider has a larger class of portfolios and, in order to give a meaning to the stochastic integral for processes which are not adapted with respect to the semi-martingale S, one has to give the decomposition of this semi-martingale in the larger filtration.
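For the reader's convenience, here is the added computation behind V^G(x) − V^F(x): since B1 − Bs is centered Gaussian with variance 1 − s and αs = (B1 − Bs)/(1 − s), one has E(α²s) = (1 − s)/(1 − s)² = 1/(1 − s), hence ½∫_0^T E(α²s) ds = ½∫_0^T ds/(1 − s) = −½ ln(1 − T), which is the additional utility of the insider and blows up as T → 1.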

Exercise 2.2.13 Prove carefully that there does not exist any e.m.m. in the enlarged filtration. Make precise the arbitrage opportunity. C

Poisson Bridge

We suppose that the interest rate is null and that the risky asset has dynamics

dSt = St−(µdt + σdWt + φdMt) ,

where M is the compensated martingale of a standard Poisson process. Let (Xt, t ≥ 0) be the wealth of an uninformed agent whose portfolio is described by (πt), the proportion of wealth invested in the asset S at time t. Then

dXt = πtXt−(µdt + σdWt + φdMt) (2.2.1)

Then,

Xt = x exp( ∫_0^t πs(µ − φλ) ds + ∫_0^t σπs dWs − ½∫_0^t σ²π²s ds + ∫_0^t ln(1 + πsφ) dNs ) .

Assuming that the stochastic integrals with respect to W and M are martingales,

E(ln(XT)) = ln(x) + ∫_0^T E(µπs − ½σ²π²s + λ(ln(1 + φπs) − φπs)) ds .

Our aim is to solve

V(x) = sup_π E(ln(X^{x,π}_T)) .

We can then maximize the quantity under the integral sign for each (s, ω). The maximum attainable wealth for the uninformed agent is obtained using the constant strategy π̃ for which

π̃µ + λ[ln(1 + π̃φ) − π̃φ] − ½π̃²σ² = sup_π ( πµ + λ[ln(1 + πφ) − πφ] − ½π²σ² ) .

Hence,

π̃ = (1/(2σ²φ)) ( µφ − φ²λ − σ² ± √((µφ − φ²λ − σ²)² + 4σ²φµ) ) .

The quantity under the square root is equal to (µφ − φ²λ + σ²)² + 4σ²φ²λ and is non-negative. The sign to be used depends on the sign of quantities related to the parameters. The optimal π̃ is the only root such that 1 + φπ̃ > 0. Solving equation (2.2.1), it can be proved that the optimal wealth is X̃t = x(L̃t)^{−1}, where

dL̃t = L̃t−( −σπ̃dWt + (1/(1 + φπ̃) − 1)dMt )

is the Radon-Nikodým density of an equivalent martingale measure. In this incomplete market, we thus obtain the utility equivalent martingale measure defined by Davis [16] and the duality approach (see Kramkov and Schachermayer).
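The quadratic solved by π̃ can be made explicit (added step): differentiating πµ + λ[ln(1 + πφ) − πφ] − ½π²σ² with respect to π gives the first-order condition µ + λφ/(1 + πφ) − λφ − πσ² = 0; multiplying by (1 + πφ) and simplifying yields σ²φπ² − (µφ − φ²λ − σ²)π − µ = 0, whose roots are exactly the expression for π̃ above.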

We assume that the informed agent knows NT from time 0. Therefore, his wealth evolves according to the dynamics

dX*_t = π*_tX*_t−( (µ + φ(Λt − λ))dt + σdWt + φdM*_t ) ,

where, with a slight abuse of notation, Λt := (NT − Nt)/(T − t) denotes the G*-intensity of N, i.e., the density of the compensator given in Proposition 2.2.7. Exactly the same computations as above can be carried out. In fact, these only require changing µ to µ + φ(Λt − λ) and the intensity of the jumps from λ to Λt. The optimal portfolio π* is now such that µ − λφ + φΛs/(1 + π*φ) − π*σ² = 0, and is given by

π*_s = (1/(2σ²φ)) ( µφ − φ²λ − σ² ± √((µφ − φ²λ + σ²)² + 4σ²φ²Λs) ) .

The optimal wealth is X*_t = x(L*_t)^{−1}, where

dL*_t = L*_t−( −σπ*_s dWt + (1/(1 + φπ*_s) − 1) dM*_t ) .

Whereas the optimal portfolio of the uninformed agent is constant, the optimal portfolio of the informed agent is time-varying and jumps as soon as a jump occurs in the prices. The informed agent must maximize at each (s, ω) the quantity

πµ + Λs(ω) ln(1 + πφ) − λπφ − ½π²σ² .

Consequently,

sup_π ( πµ + Λs ln(1 + πφ) − λπφ − ½π²σ² ) ≥ π̃µ + Λs ln(1 + π̃φ) − λπ̃φ − ½π̃²σ² .

Now, E[Λs] = λ, so

sup_π E(ln X*_T) = ln x + sup_π ∫_0^T E(πµ + Λs ln(1 + πφ) − λπφ − ½π²σ²) ds
 ≥ ln x + ∫_0^T (π̃µ + λ ln(1 + π̃φ) − λπ̃φ − ½π̃²σ²) ds = E(ln X̃T) .

Therefore, the maximum expected (logarithmic) wealth of the informed agent is greater than that of the uninformed agent. This is obvious, because the informed agent can use any strategy available to the uninformed agent.

Exercise 2.2.14 Solve the same problem for a power utility function. C

2.3 Initial Enlargement

Let F be the Brownian filtration generated by B. We consider F^(L)_t = Ft ∨ σ(L), where L is a real-valued random variable. More precisely, in order to satisfy the usual hypotheses, redefine

F^(L)_t = ∩_{ε>0} (Ft+ε ∨ σ(L)) .

Proposition 2.3.1 One has:
(i) Every F^(L)-optional process Y is of the form Yt(ω) = yt(ω, L(ω)) for some F ⊗ B(R^d)-optional process (yt(ω, u), t ≥ 0).
(ii) Every F^(L)-predictable process Y is of the form Yt(ω) = yt(ω, L(ω)), where (t, ω, u) → yt(ω, u) is a P(F) ⊗ B(R^d)-measurable function.

We shall now simply write yt(L) for yt(ω, L(ω)).

We recall that there exists a family of regular conditional distributions Pt(ω, dx) such that Pt(·,A) is a version of E(11{L∈A}|Ft) and for any ω, Pt(ω, ·) is a probability on R.

2.3.1 Jacod’s criterion

In what follows, for y(u) a family of martingales and X a martingale, we shall write ⟨y(L), X⟩ for ⟨y(u), X⟩|_{u=L}.

Proposition 2.3.2 (Jacod’s Criterion.) Suppose that, for each t < T , Pt(ω, dx) <<ν(dx) where ν is the law of L and T is a fixed horizon T ≤ ∞. Then, every F-semi-martingale (Xt, t < T ) is also an F(L)-semi-martingale. Moreover, if Pt(ω, dx) = pt(ω, x)ν(dx) and if X is an F-martingale, the process

Z t dhp·(L),Xis Xet = Xt − , t < T 0 ps− (L) is an F(L)-martingale. In other words, the decomposition of the F(L)-semi-martingale X is

Z t dhp·(L),Xis Xt = Xet + . 0 ps− (L)

Proof: In a first step, we show that, for any θ, the process p(θ) is an F-martingale. One has to show that, for ζs ∈ bFs and s < t

E(pt(θ)ζs) = E(ps(θ)ζs)

This follows from

E(E(11{L>θ}|Ft)ζs) = E(E(11{L>θ}|Fs)ζs) .

In a second step, we assume that F-martingales are continuous and that X and p are square integrable; in that case, ⟨p·(L), X⟩ exists. Let Fs be a bounded Fs-measurable random variable and h : R+ → R a bounded Borel function. Then the variable Fsh(L) is F^(L)_s-measurable and, if a decomposition of the form Xt = X̃t + ∫_0^t dKu(L) holds, the martingale property of X̃ should imply that E(Fsh(L)(X̃t − X̃s)) = 0, hence

E(Fsh(L)(Xt − Xs)) = E(Fsh(L) ∫_s^t dKu(L)) .

We can write:

E(Fsh(L)(Xt − Xs)) = E( Fs(Xt − Xs) ∫_{−∞}^∞ h(θ)pt(θ)ν(dθ) )
 = ∫_R h(θ) E( Fs(Xtpt(θ) − Xsps(θ)) ) ν(dθ)
 = ∫_R h(θ) E( Fs ∫_s^t d⟨X, p(θ)⟩v ) ν(dθ) ,

where the first equality comes from conditioning w.r.t. Ft, the second from the martingale property of p(θ), and the third from the fact that both X and p(θ) are square-integrable F-martingales. Moreover:

E(Fsh(L) ∫_s^t dKv(L)) = E( Fs ∫_R h(θ) (∫_s^t dKv(θ)) pt(θ)ν(dθ) )
 = ∫_R h(θ) E( Fs ∫_s^t pv(θ)dKv(θ) ) ν(dθ) ,

where the first equality comes from the definition of p, and the second from the martingale property of p(θ). By equalization of these two quantities, we obtain that it is necessary to have dKu(θ) = d⟨X, p(θ)⟩u / pu(θ). ¤
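An added sanity check on this criterion: if L is independent of F∞, then Pt(ω, dx) = ν(dx), so pt(x) = 1 for all t and the correction term ∫_0^t d⟨p·(L), X⟩s/ps−(L) vanishes; every F-martingale remains an F^(L)-martingale, in agreement with the independent-enlargement example of Section 2.1.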

The stability of initial times under a change of probability is rather obvious.

Regularity Conditions ♠

One of the major difficulties is to prove the existence of a universal càdlàg martingale version of the family of densities. Fortunately, results of Jacod [40] or Stricker and Yor [73] help us to solve this technical problem. See also [1] for a detailed discussion. We emphasize that these results are the most important part of enlargement of filtration theory. Jacod ([40], Lemme 1.8 and 1.10) establishes the existence of a universal càdlàg version of the density process in the following sense: there exists a non-negative function pt(ω, θ), càdlàg in t, optional w.r.t. the filtration F̂ on Ω̂ = Ω × R+, generated by Ft ⊗ B(R+), such that

• For any θ, p·(θ) is an F-martingale; moreover, denoting ζ^θ = inf{t : pt−(θ) = 0}, one has p·(θ) > 0 and p−(θ) > 0 on [0, ζ^θ), and p·(θ) = 0 on [ζ^θ, ∞).

• For any bounded family (Yt(ω, θ), t ≥ 0) measurable w.r.t. P(F) ⊗ B(R+), the F-predictable projection of the process Yt(ω, L(ω)) is the process (p)Yt = ∫ pt−(θ)Yt(θ)ν(dθ).

• Let m be a continuous F-local martingale. There exists a P(F̂)-measurable function k such that

⟨p(θ), m⟩ = (k(θ)p(θ)−) ⋆ ⟨m⟩ , P-a.s. on ∩n {t ≤ R^θ_n} ,

where R^θ_n = inf{t : pt−(θ) ≤ 1/n}.

• Let m be a local F-martingale. There exist a predictable increasing process A and a P(F̂)-measurable function k such that

⟨p(θ), m⟩t = ∫_0^t ks(θ)ps−(θ) dAs .

If m is locally square integrable, one can choose A = ⟨m⟩.

Lemma 2.3.3 The process p(L) does not vanish on [0,T [.

Corollary 2.3.4 Let Z be a random variable taking only a countable number of values. Then every F-semimartingale is an F^(Z)-semimartingale.

Proof: If we denote by η the law of Z,

η(dx) = Σ_{k=1}^∞ P(Z = xk) δ_{xk}(dx) ,

where δ_{xk}(dx) is the Dirac measure at xk, then Pt(ω, dx) is absolutely continuous with respect to η, with Radon-Nikodým density

pt(x) = Σ_{k=1}^∞ ( P(Z = xk|Ft)/P(Z = xk) ) 11{x=xk} .

Now the result follows from Jacod's theorem. ¤

Exercise 2.3.5 Assume that F is a Brownian filtration. Prove that E(∫_0^t d⟨p·(L), X⟩s/ps−(L) | Ft) is an F-martingale.
Answer: Note that the result is obvious from the decomposition theorem: indeed, taking expectations w.r.t. Ft on both sides of

X̃t = Xt − ∫_0^t d⟨p·(L), X⟩s / ps−(L)

leads to

E(X̃t|Ft) = Xt − E(∫_0^t d⟨p·(L), X⟩s/ps−(L) | Ft) ,

and E(X̃t|Ft) is an F-martingale (it is the optional projection of an F^(L)-martingale, see Exercise 1.1.11). Our aim is to check this directly. Writing dpt(u) = pt(u)σt(u)dWt and dXt = xtdWt, we note that P(L ∈ R|Fs) = ∫_R ps(u)ν(du) = 1 implies that

∫_R ps(u)ν(du) = ∫_R p0(u)ν(du) + ∫_0^s dWv ∫_R σv(u)pv(u)ν(du) = 1 + ∫_0^s dWv ∫_R σv(u)pv(u)ν(du) ,

hence ∫_R σs(u)ps(u)ν(du) = 0. The process E(∫_0^t d⟨p·(L), X⟩s/ps−(L) | Ft) is then equal to a martingale plus ∫_0^t E(σs(L)xs|Fs) ds = ∫_0^t ds xs ∫_R σs(u)ps(u)ν(du) = 0. C

Exercise 2.3.6 Prove that if there exists a probability Q* equivalent to P such that, under Q*, the r.v. L is independent of F∞, then every (P, F)-semi-martingale X is also a (P, F^(L))-semi-martingale. C

2.3.2 Yor’s Method

We follow here Yor [80]. We assume that F is a Brownian filtration. For a bounded Borel function f, let (λt(f), t ≥ 0) be the continuous version of the martingale (E(f(L)|Ft), t ≥ 0). There exists a predictable kernel λt(dx) such that

λt(f) = ∫_R λt(dx)f(x) .

From the predictable representation property applied to the martingale λt(f), there exists a predictable process λ̂(f) such that

λt(f) = E(f(L)) + ∫_0^t λ̂s(f) dBs .

Proposition 2.3.7 We assume that there exists a predictable kernel λ̂t(dx) such that

dt a.s., λ̂t(f) = ∫_R λ̂t(dx)f(x) .

Assume furthermore that, dt × dP a.s., the measure λ̂t(dx) is absolutely continuous with respect to λt(dx):

λ̂t(dx) = ρ(t, x)λt(dx) .

Then, if X is an F-martingale, there exists an F^(L)-martingale X̂ such that

Xt = X̂t + ∫_0^t ρ(s, L) d⟨X, B⟩s .

Sketch of the proof: Let X be an F-martingale, f a given bounded Borel function and Ft = E(f(L)|Ft). From the hypothesis

Ft = E(f(L)) + ∫_0^t λ̂s(f) dBs .

Then, for As ∈ Fs, s < t:

E(11As f(L)(Xt − Xs)) = E(11As(FtXt − FsXs)) = E(11As(⟨F, X⟩t − ⟨F, X⟩s))
 = E( 11As ∫_s^t d⟨X, B⟩u λ̂u(f) )
 = E( 11As ∫_s^t d⟨X, B⟩u ∫_R λu(dx)f(x)ρ(u, x) ) .

Therefore, Vt = ∫_0^t ρ(u, L) d⟨X, B⟩u satisfies

E(11As f(L)(Xt − Xs)) = E(11As f(L)(Vt − Vs)) .

It follows that, for any Gs ∈ F^(L)_s,

E(11Gs (Xt − Xs)) = E(11Gs (Vt − Vs)) ,

hence (Xt − Vt, t ≥ 0) is an F^(L)-martingale. ¤

Let us write the result of Proposition 2.3.7 in terms of Jacod's criterion. If λt(dx) = pt(x)ν(dx), then

λt(f) = ∫ pt(x)f(x)ν(dx) .

Hence,

d⟨λ·(f), B⟩t = λ̂t(f) dt = ∫ f(x) d⟨p·(x), B⟩t ν(dx)

and

λ̂t(dx) dt = d⟨p·(x), B⟩t ν(dx) = ( d⟨p·(x), B⟩t / pt(x) ) pt(x)ν(dx) ,

therefore

λ̂t(dx) dt = ( d⟨p·(x), B⟩t / pt(x) ) λt(dx) .

In the case where λt(dx) = Φ(t, x)dx, with Φ > 0, it is possible to find ψ such that

Φ(t, x) = Φ(0, x) exp( ∫_0^t ψ(s, x)dBs − ½∫_0^t ψ²(s, x)ds )

and it follows that λ̂t(dx) = ψ(t, x)λt(dx). Then, if X is an F-martingale of the form Xt = x + ∫_0^t xs dBs, the process (Xt − ∫_0^t ds xs ψ(s, L), t ≥ 0) is an F^(L)-martingale.

2.3.3 Examples

We now give some examples, taken from Mansuy and Yor [60], in a Brownian set-up, for which we use the preceding results. Here, B is a standard Brownian motion.

► Enlargement with B1. We compare the results obtained in Subsection 2.2.1 and the method presented in Subsection 2.3.2. Let L = B1. From the Markov property,

E(g(B1)|Ft) = E(g(B1 − Bt + Bt)|Ft) = Fg(Bt, 1 − t)

where Fg(y, 1 − t) = ∫ g(x)P(1 − t; y, x)dx and P(s; y, x) = (1/√(2πs)) exp(−(x − y)²/(2s)). It follows that λt(dx) = (1/√(2π(1 − t))) exp(−(x − Bt)²/(2(1 − t))) dx. Then

λt(dx) = pt(x)P(B1 ∈ dx) = pt(x) (1/√(2π)) e^{−x²/2} dx

with

pt(x) = (1/√(1 − t)) exp( −(x − Bt)²/(2(1 − t)) + x²/2 ) .

From Itˆo’sformula, x − B d p (x) = p (x) t dB . t t t 1 − t t

(This can be considered as a partial check of the martingale property of (pt(x), t ≥ 0).) It follows that d⟨p(x), B⟩t = pt(x) ((x − Bt)/(1 − t)) dt, hence

Bt = B̃t + ∫_0^t (B1 − Bs)/(1 − s) ds .

µ 2 ¶ b x − Bt 1 (x − Bt) λt(dx) = p exp − dx . 1 − t 2π(1 − t) 2(1 − t)

► Enlargement with M^B = sup_{s≤1} Bs. From Exercise 1.2.1,

E(f(M^B)|Ft) = F(1 − t, Bt, M^B_t)

where M^B_t = sup_{s≤t} Bs, with

F(s, a, b) = √(2/(πs)) ( f(b) ∫_0^{b−a} e^{−u²/(2s)} du + ∫_b^∞ f(u) e^{−(u−a)²/(2s)} du )

and, denoting by δy the Dirac measure at y,

λt(dy) = √(2/(π(1 − t))) { δy(M^B_t) ∫_0^{M^B_t−Bt} exp(−u²/(2(1 − t))) du + 11{y>M^B_t} exp(−(y − Bt)²/(2(1 − t))) dy } .

Hence, by applying Itô's formula,

λ̂t(dy) = √(2/(π(1 − t))) { δy(M^B_t) exp( −(M^B_t − Bt)²/(2(1 − t)) ) + 11{y>M^B_t} ((y − Bt)/(1 − t)) exp( −(y − Bt)²/(2(1 − t)) ) } .

It follows that

ρ(t, x) = 11{x>M^B_t} (x − Bt)/(1 − t) + 11{M^B_t=x} (1/√(1 − t)) (Φ′/Φ)((x − Bt)/√(1 − t))

with Φ(x) = √(2/π) ∫_0^x e^{−u²/2} du.

► Enlargement with Z := ∫_0^∞ ϕ(s)dBs, where ∫_0^∞ ϕ²(s)ds < ∞, for a deterministic function ϕ. The above method applies step by step: it is easy to compute λt(dx), since, conditionally on Ft, Z is Gaussian with mean mt = ∫_0^t ϕ(s) dBs and variance σ²t = ∫_t^∞ ϕ²(s) ds. The absolute continuity requirement is satisfied with:

ρ(x, s) = ϕ(s) (x − ms)/σ²s .

But here, we have to impose an extra integrability condition. For example, if we assume that

∫_0^t |ϕ(s)|/σs ds < ∞ ,

then B is an F^(Z)-semimartingale with canonical decomposition:

Bt = B̃t + ∫_0^t ds (ϕ(s)/σ²s) ( ∫_s^∞ ϕ(u) dBu ) ,

As a particular case, we may take Z = Bt0 , for some fixed t0 and we recover the results for the Brownian bridge. More examples can be found in Jeulin [46] and Mansuy and Yor [60].

2.3.4 Change of Probability

We assume that Pt, the conditional law of L given Ft, is equivalent to ν for any t > 0, so that the hypotheses of Proposition 2.3.2 hold, and that p does not vanish (on the support of ν).

(L) Lemma 2.3.8 The process 1/pt(L), t ≥ 0 is an F -martingale. Let, for any t, dQ = 1/pt(L)dP (L) on Ft . Under Q, the r.v. L is independent of F∞. Moreover,

Q|Ft = P|Ft , Q|σ(L) = P|σ(L)

(L) Proof: Let Zt(L) = 1/pt(L). In a first step, we prove that Z is an F -martingale. Indeed, (L) E(Zt(L)|Fs ) = Zs(L) if (and only if) E(Zt(L)h(L)As) = E(Zs(L)h(L)As) for any (bounded) Borel function h and any Fs-measurable (bounded) random variable As. From definition of p, one has Z Z

E(Zt(L)h(L)As) = E( Zt(x)h(x)pt(x)ν(dx)As) = E( h(x)ν(dx)As) Z R R

= h(x)ν(dx)E(As) = E(As) E(h(L)) R

The particular case t = s leads to E(Zs(L)h(L)As) = E(h(L)) E(As), hence E(Zs(L)h(L)As) = E(Zt(L)h(L)As). Note that, since p0(x) = 1, one has E(1/pt(L)) = 1/p0(L) = 1. Now, we prove the required independence. From the above,

EQ(h(L)As) = EP(Zs(L)h(L)As) = EP(h(L)) EP(As)

For h = 1 (resp. At = 1), one obtains EQ(As) = EP(As) (resp. EQ(h(L)) = EP(h(L))) and we are done. ¤ This fact appears in Song [72] and plays an important rˆolein Grorud and Pontier [35].

L L Lemma 2.3.9 Let m be a (P, F) martingale. The process m defined by mt = mt/pt(L) is a (L) L (P, F )-martingale and satisfies E(mt |Ft) = mt.

(L) Proof: To establish the martingale property, it suffices to check that for s < t and A ∈ Fs , one L L has E(mt 11A) = E(ms 11A), which is equivalent to EQ(mt11A) = EQ(ms11A). The last equality follows from the fact that the (F, P) martingale m is also a (F, Q) martingale (indeed P and Q coincide on F), hence a (F(L), Q) martingale (by independence of L and F under Q. Bayes criteria shows that mL) (L) is a (P, F )-martingale. Noting that E(1/pt(L)|Ft) = 1 (take As = 1 and h = 1 in the preceeding proof), the equality L E(mt |Ft) = mtE(1/pt(L)|Ft) = mt ends the proof. ¤ Of course, the reverse holds true: if there exists a probability equivalent to P such that, under Q, (L) the r.v. L is independent to F∞, then (P, F)-martingales are (P, F )-semi martingales (WHY?).

(L) Exercise 2.3.10 Prove that (Yt(L), t ≥ 0) is a (P, F )-martingale if and only if Yt(x)pt(x) is a family of F-martingales. C 32 CHAPTER 2. ENLARGEMENT OF FILTRATIONS

Exercise 2.3.11 Let F be a Brownian filtration. Prove that, if X is a square integrable (P, F(L))- martingale, then, there exists a function h and a process ψ such that

Z t Xt = h(L) + ψs(L)dBs 0 C

Equivalent Martingale Measures

We consider a financial market, where some assets, with prices S are traded, as well as a riskless asset S0. The dynamics of S under the so-called historical probability are assumed to be semi-martingales (WHY?). We denote by F the filtration generated by prices and by QF the set of probability measures, equivalent to P on F such that the discounted prices Se are F-(local) martingales. We now enlarge the filtration and consider prices as F(L) semi-martingales (assuming that it is the case). We denote by Q(L) the set of probability measures, equivalent to P on F(L) such that the discounted prices Se are F(L)-(local) martingales. We assume that there exists a probability Q equivalent to P such that, under Q, the r.v. L is (L) (L) independent to F∞. Then, Q is equal to the set of probability measures, equivalent to Q on F such that the discounted prices Se are F(L)-(local) martingales. Let P∗ ∈ QF and L its RN density ∗ ∗ (i.e., dP |Ft = LtdP|Ft . The probability Q defined as

∗ 1 dQ |F (L) = LtdQ|F (L) = Lt dP|F(L) t t pt(τ) t

(L) belongs to QF . Indeed, Se being a (P∗, F)-martingale, SLe is a (P, F)-martingale, hence SL/pe (L) is a (P, F(L))-martingale and Se is a (Q∗, F(L))-martingale. The probability Qb defined as

b 1 dQ|F (L) = h(τ)Lt dP|F(L) t pt(τ) t

F(L) where EP(h(τ)) = 1 belongs also to Q . ∗ F(L) ∗ ∗ Conversely, if Q ∈ Q , with RN density L , then Q defined as dQ|Ft = E(Lt |Ft)dP|Ft belongs to QF. Indeed (if the interest rate is null), SL∗ is a (Q∗, F(L))-martingale, hence S`∗ is a ∗ (L) ∗ ∗ ∗ ∗ (Q , F )-martingale, where `t = E(Lt |Ft). It follows that S` is a (Q , F)-martingale.

2.3.5 An Example from Shrinkage ♠

Let W = (Wt)te0 be a Brownian motion defined on the probability space (Ω, G, P), and τ be a random time, independent of W and such that P(τ > t) = e−λt, for all t ≥ 0 and some λ > 0 fixed. We define X = (Xt)t≥0 as the process µ ¶ ³ σ2 ´ X = x exp a + b − t − b(t − τ)+ + σ W t 2 t where a, and x, σ, b are some given strictly positive constants. It is easily shown that the process X solves the stochastic differential equation ¡ ¢ dXt = Xt a + b 11{τ>t} dt + Xt σ dWt .

Let us take as F = (F ) the natural filtration of the process X, that is, F = σ(X | 0 ≤ s ≤ t) t t≥0 R t sR W t t for t ≥ 0 (note that F is smaller than F ∨ σ(τ)). From Xt = x + 0 (a + b 11{τ>s})ds + 0 Xs σ dWs it follows that (from Exercise 1.2.5) 2.3. INITIAL ENLARGEMENT 33

¡ ¢ dXt = Xt a + b Gt dt + dmart

Identifying the brackets, one has dmart = XtσdW¯ t where W¯ is a martingale with bracket t, hence is a BM. It follows that the process X admits the following representation in its own filtration ¡ ¢ dXt = Xt a + b Gt dt + Xt σ dW¯ t .

Here, G = (Gt)t≥0 is the Az´emasupermartingale given by Gt = P(τ > t | Ft) and the innovation process W¯ = (W¯ t)t≥0 defined by Z t ¡ ¢ ¯ b Wt = Wt + 11{τ>s} − Gs ds σ 0 is a standard F-Brownian motion. It is easily shown using the arguments based on the notion of strong solutions of stochastic differential equations (see, e.g. [58, Chapter IV]) that the natural filtration of W¯ coincides with F. It follows from [58, Chapter IX] that the process G solves the stochastic differential equation b dG = −λG dt + G (1 − G ) dW¯ . (2.3.1) t t σ t t t

λt Observe that the process n = (nt)t≥0 with nt = e Gt admits the representation b dn = d(eλt G ) = eλt G (1 − G ) dW¯ t t σ t t t and thus, n is an F-martingale (to establish the true martingale property, note that the process (Gt(1−Gt))t≥0 is bounded). The equality (2.3.1) provides the (additive) Doob-Meyer decomposition λt −λt of the supermartingale G, while Gt = (Gt e ) e gives its multiplicative decomposition. It follows from these decompositions that the F-intensity rate of τ is λ, so that, the process M = (Mt)t≥0 with Mt = 11τ≤t − λ(t ∧ τ) is a G-martingale. It follows from the definition of the conditional survival probability process G and the fact that λt (Gt e )t≥0 is a martingale that the expression

λu −λu λ(t−u) P(τ > u | Ft) = E[P(τ > u | Fu) | Ft] = E[Gu e | Ft] e = Gt e holds for 0 ≤ t < u. From the standard arguments in [71, Chapter IV, Section 4], resulting from the application of Bayes’ formula, we obtain that the conditional survival probability process can be expressed as

Yu∧t Zu∧t −λu P(τ > u | Ft) = 1 − + e Yt Yt for all t, u ≥ 0. Here, the process Y = (Yt)t≥0 is defined by

Z t −λs −λt Yt = Zs λe ds + Zt e (2.3.2) 0 and the process Z = (Zt)t≥0 is given by µ ¶ b ³ X 2a + b − σ2 ´ Z = exp ln t − t . (2.3.3) t σ2 x 2

Moreover, by standard computations, we see that b dZ = Z (bG dt + σ dW ) t σ2 t t t 34 CHAPTER 2. ENLARGEMENT OF FILTRATIONS

λt and hence, the process 1/Y = (1/Yt)t≥0, or its equivalent (e Gt/Zt)t≥0, admits the representation

³ ´ ³ λt ´ 1 e Gt b Gt d = d = − dW¯ t Yt Zt σ Yt and thus, it is an F-local martingale (in fact, an F-martingale since, for any u ≥ 0, one has

1 eλu = P(τ > u | Ft) Yt Zu for u > t). Hence, for each u ≥ 0 fixed, it can be checked that the process Z/Y = (Zt/Yt)t≥0 defines λt an F-martingale, as it must be (this process being equal to (e Gt)t≥0). We also get the representation

Z ∞ P(τ > u | Ft) = αt(v) η(dv) u

for t, u ≥ 0, and hence, the density of τ is given by

Zu∧t αt(u) = Yt where η(du) ≡ P(τ ∈ du) = λe−λu du. In particular, from (2.3.2), we have

Z ∞ αt(u) η(du) = 1 0 as expected.

2.4 Progressive Enlargement

We now consider a different case of enlargement, more precisely we assume that τ is a finite random time, i.e., a finite non-negative random variable, and we denote

τ Gt = Ft := ∩²>0 {Ft+² ∨ σ(τ ∧ (t + ²))} .

We define the right-continuous process H as

Ht = 11{τ≤t} .

We denote by H = (Ht, t ≥ 0) its natural filtration. With this notation, G = H ∨ F. Note that τ is an H-stopping time, hence a G-stopping time. (In fact, H is the smallest filtration making τ a stopping time). For a general random time τ, it is not true that F-martingales are Fτ -semi-martingales. Here is an example: due to the separability of the Brownian filtration, there exists a bounded random τ τ variable τ such that F∞ = σ(τ). Hence, Fτ+t = F∞, ∀t so that the F -martingales are constant after τ. Consequently, F-martingales are not Fτ -semi-martingales. τ We shall denote Zt = P(τ > t|Ft) and, if there is no ambiguity, we shall also use the notation Gt = P(τ > t|Ft). The process P(τ > t|Ft) is a super-martingale. Therefore, it admits a Doob-Meyer decomposition.

Lemma 2.4.1 (i) Let τ be a positive random time and

τ τ P(τ > t|Ft) = µt − At 2.4. PROGRESSIVE ENLARGEMENT 35

τ the Doob-Meyer decomposition of the super-martingale Gt = P(τ > t|Ft) (the process A is the F-predictable compensator of H, see Definition ??). Then, for any F-predictable positive process Y ,

µZ ∞ ¶ τ E(Yτ ) = E YudAu . 0

(ii) Ã ! Ã ! Z T Z T τ τ E(Yτ 11t<τ≤T |Ft) = E YudAu|Ft = −E YudZu |Ft t t

Proof: For any process c`adl`ag Y of the form Yu = ys11]s,t](u) with ys ∈ bFs, one has

E(Yτ ) = E(ys11]s,t](τ)) = E(ys(At − As)) . The result follows from MCT. See also Proposition 1.1.23. ¤

Proposition 2.4.2 Any Gt measurable random variable admits a decomposition as yt11t<τ +yt(τ)11τ≤t where yt is Ft-measurable, and, for any u ≤ t , yt(u) is Ft-measurable. Any G-predictable process Y admits a decomposition as Yt = yt11t≤τ + yt(τ)11τ

Proof: See Pham [65]. ¤

Proposition 2.4.3 The process µτ is a BMO martingale

Proof: From Doob-Meyer decomposition, since G is bounded, µ is a square integrable martingale.¤

2.4.1 General Facts

A Larger Filtration

We also introduce the filtration Gτ defined as

τ e e Gt = {A ∈ F∞ ∨ σ(τ): ∃At ∈ Ft : A ∩ {τ > t} = At ∩ {τ > t}}

τ τ τ Note that, on t < τ, one has Gt = Ft = Ft and Gt = F∞ ∨ σ(τ) on τ < t.

Lemma 2.4.4 Let ϑ be a Gτ -stopping time, then, there exists an F-stopping time ϑe such that ϑ ∧ τ = ϑe ∧ τ τ Let X ∈ F∞. A c`adl`agversion of E(X|Gt ) is 1 11t<τ E(X11t<τ |Ft) + X11t≥τ P(τ > t|Ft)

τ If Mt = E(X|Gt ), then the process Mt− is

1 p (X11]]0,τ]]) + X11]]τ,∞]] Gt− 36 CHAPTER 2. ENLARGEMENT OF FILTRATIONS

Conditional Expectations

Proposition 2.4.5 For any G-predictable process Y , there exists an F-predictable process y such that Yt11{t≤τ} = yt11{t≤τ}. Under the condition ∀t, P(τ ≤ t|Ft) < 1, the process (yt, t ≥ 0) is unique.

Proof: We refer to Dellacherie [23] and Dellacherie et al. [19], page 186. The process y may be recovered as the ratio of the F-predictable projections of Yt11{t≤τ} and 11{t≤τ}. ¤

Lemma 2.4.6 Key Lemma: Let X ∈ FT be an integrable r.v. Then, for any t ≤ T ,

E(XGT |Ft) E(X11{τ

Proof: On the set {t < τ}, any Gt measurable random variable is equal to an Ft-measurable random variable, therefore E(X11{τ

E(Yt11{t<τ}|Ft) where yt is Ft-measurable. Taking conditional expectation w.r.t. Ft, we get yt = . P(t < τ|Ft) (it can be proved that P(t < τ|Ft) does not vanishes on the set {t < τ}.) ¤

Exercise 2.4.7 Prove that τ {τ > t} ⊂ {Zt > 0} =: At (2.4.1) (where the inclusion is up to a negligible set. c Hint: P(At ∩ {τ > t}) = 0 C

2.4.2 Before τ

In this section, we assume that F-martingales are continuous and that τ avoids F-stopping times. It is proved in Yor [79] that if X is an F-martingale then the processes Xt∧τ and Xt(1 − Ht) are Fτ semi-martingales. Furthermore, the decompositions of the F-martingales in the filtration Fτ are known up to time τ (Jeulin and Yor [47]).

Proposition 2.4.8 Every F-martingale M stopped at time τ is an Fτ -semi-martingale with canon- ical decomposition Z t∧τ τ f dhM, µ is Mt∧τ = Mt + τ , 0 Zs− where Mf is an Fτ -local martingale.

τ Proof: Let Ys be an Fs -measurable random variable. There exists an Fs-measurable random variable ys such that Ys11{s≤τ} = ys11{s≤τ}, hence, if M is an F-martingale, for s < t,

E(Ys(Mt∧τ − Ms∧τ )) = E(Ys11{s<τ}(Mt∧τ − Ms∧τ ))

= E(ys11{s<τ}(Mt∧τ − Ms∧τ )) ¡ ¢ = E ys(11{s<τ≤t}(Mτ − Ms) + 11{t<τ}(Mt − Ms))

From the definition of Zτ (see also Definition ??),

µ Z t ¶ ¡ ¢ τ E ys11{s<τ≤t}Mτ = −E ys MudZu . s 2.4. PROGRESSIVE ENLARGEMENT 37

From integration by parts formula

Z t Z t Z t τ τ τ τ τ τ τ τ τ MudZu −MsZs +Zt Mt = Zu dMu +hM,Z it −hM,Z is = Zu dMu +hM, µ it −hM, µ is . s s s We have also

¡ ¢ τ τ E ys(11{s<τ≤t}Ms = E (ysMs(Zs − Zt )) ¡ ¢ τ E ys11{t<τ}(Mt − Ms)) = E (ysZt (Mt − Ms))) hence, from the martingale property of M

τ τ E(Ys(Mt∧τ − Ms∧τ )) = E(ys(hM, µ it − hM, µ is)) µ Z t τ ¶ µ Z t τ ¶ dhM, µ iu τ dhM, µ iu = E ys τ Zu = E ys τ E(11{u<τ}|Fu) s Zu s Zu µ Z t τ ¶ µ Z t∧τ τ ¶ dhM, µ iu dhM, µ iu = E ys τ 11{u<τ} = E ys τ . s Zu s∧τ Zu The result follows. ¤ See Jeulin for the general case.

2.4.3 Basic Results

Proposition 2.4.9 The process Z t∧τ τ dAu Mt = Ht − 0 Gu− is a G-martingale. For any bounded G-predictable process Y , the process

Z t∧τ Ys τ Yt11τ≤t − τ dAs 0 Zs− is a G-martingale. The process Lt = (1 − Ht)/Gt is a G -martingale.

Proof: We give the proof in the case where G is continuous. Let s < t. We proceed in two steps, τ τ using the Doob-Meyer decomposition of G as Gt = µt − At . First step: we prove 1 τ τ E(Ht − Hs|Gs) = 11s<τ E(At − As |Fs) Gs Indeed, 1 E(Ht − Hs|Gs) = P(s < τ ≤ t|Gs) = 11s<τ E(Gs − Gt|Fs) Gs 1 τ τ = 11s<τ E(At − As |Fs) . Gs In a second step, we prove that

Z t∧τ τ dAu 1 τ τ E( |Gs) = 11s<τ E(At − As |Fs) s∧τ Gu Gs One has

Z t∧τ τ Z t∧τ τ Z t∧τ τ dAu dAu 1 dAu E( |Gs) = 11s<τ E( |Gs) = 11s<τ E(11s<τ |Fs) s∧τ Gu s Gu Gs s Gu 38 CHAPTER 2. ENLARGEMENT OF FILTRATIONS

R τ t dAu Setting Kt = , s Gu

Z t∧τ τ Z ∞ dAu E(11s<τ |Fs) = E(11s<τ Kt∧τ |Fs) = −E( Kt∧udGu|Fs) s Gu s Z t Z t Z t τ = −E( KudGu + KtGt|Fs) = E( GudKu|Fs) = E( dAu|Fs) s s s where we have used integration by parts formula to obtain

Z t Z t Z t − KudGu + KtGt = KsGt + GudKu = GudKu s s s

−Λt Note that, if Gt = NtDt = Nte is the multiplicative decomposition of G, then Ht − Λt∧τ is a martingale. ¤

Lemma 2.4.10 The process λ satisfies

1 P(t < τ < t + h|Ft) λt = lim . h→0 h P(t < τ|Ft)

Proof: The martingale property of M implies that

Z t+h E(11t<τ

It follows that, by projection on Ft

Z t+h P(t < τ < t + h|Ft) = λsP(s < τ|Ft)ds . t ¤ The reverse is know as Aven lemma [6].

Lemma 2.4.11 Let (Ω, Gt, P) be a filtered probability space and N be a counting process. Assume that E(Nt) < ∞ for any t. Let (hn, n ≥ 1) be a sequence of real numbers converging to 0, and

(n) 1 Yt = E(Nt+hn − Nt|Gt) hn

Assume that there exists λt and yt non-negative G-adapted processes such that

(n) (i) For any t, lim Yt = λt

(ii) For any t, there exists for almost all ω an n0 = n0(t, ω) such that

(n) |Ys − λs(ω)| ≤ ys(ω) , s ≤ t, n ≥ n0(t, ω)

R t (iii) 0 ysds < ∞, ∀t

R t Then, Nt − 0 λsds is a G-martingale.

at Definition 2.4.12 In the case where the process dAu = audu, the process λt = is called the Gt− e F-intensity of τ, and λt = 11t<τ λt is the G-intensity, and the process Z t∧τ Z t Z t e Ht − λsds = Ht − (1 − Hs)λsds = Ht − λsds 0 0 0 is a G-martingale. 2.4. PROGRESSIVE ENLARGEMENT 39

The intensity process is the basic tool to model default risk.

Exercise 2.4.13 Prove that if X is a (square-integrable) F-martingale, XL is a G -martingale, where L is defined in Proposition 2.4.9. Hint: See [BJR] for proofs. C

R ∞ Exercise 2.4.14 Let τ be a random time with density f, and G(s) = P(τ > s) = s f(u)du. Prove R t∧τ f(s) that the process Mt = Ht − 0 G(s) ds is a H-martingale. C

Restricting the information

Suppose from now on that Fet ⊂ Ft and define the σ-algebra Get = Fet ∨ Ht and the associated hazard process Γet = − ln(Get) with

Get = P(t < τ|Fet) = E(Gt|Fet) .

Let Ft = Zt + At be the F-Doob-Meyer decomposition of the F-submartingale F and assume R t that A is absolutely continuous with respect to Lebesgue’s measure: At = 0 asds. The process Aet = E(At|Fet) is a Fe-submartingale and its Fe-Doob-Meyer decomposition is

Aet = zet + αet .

Hence, setting Zet = E(Zt|Fet), the sub-martingale

Fet = P(t ≥ τ|Fet) = E(Ft|Fet) admits a Fe-Doob-Meyer decomposition as

Fet = Zet + zet + αet R e t e where Zt + zet is the martingale part. From Exercise 1.2.5, αet = 0 E(as|Fs)ds. It follows that

Z t∧τ αes Ht − ds 0 1 − Fes is a Ge-martingale and that the Fe-intensity of τ is equal to E(as|Fes)/Ges, and not ”as one could think” to E(as/Gs|Fes). Note that even if (H) hypothesis holds between Fe and F, this proof can not be simplified since Fe is increasing but not Fe-predictable (there is no raison for Fet to have an intensity).

This result can be directly proved thanks to Bremaud’s following result (a consequence of Exercise R R t e t e e e 1.2.5 ): if Ht − 0 λsds is a G-martingale, then Ht − 0 E(λs|Gs)ds is a G-martingale. Since

e 11{s≤τ} e E(11{s≤τ}λs|Gs) = E(11{s≤τ}λs|Fs) Ges 11{s≤τ} 11{s≤τ} = E(Gsλs|Fes) = E(as|Fes) Ges Ges R t∧τ e e e it follows that Ht − 0 E(as|Fs)/Gsds is a G-martingale, and we are done. 40 CHAPTER 2. ENLARGEMENT OF FILTRATIONS

Survival Process

Lemma 2.4.15 The super-martingale G admits a multiplicative decomposition as Gt = NtDt where D is a decreasing process and N a local martingale.

Proof: In the proof, we assume that G is continuous. Assuming that the supermartingale G = µ−A admits a multiplicative decomposition Gt = DtNt where N is a local-martingale and D a decreasing process, we obtain that dGt = NtdDt + DtdNt is a Doob-Meyer decomposition of G. Hence, 1 1 dNt = dµt, dDt = −Dt dAt , Dt Gt therefore, Z t 1 Dt = exp − dAs 0 Gs

Λt Conversely, if G admits the Doob-MeyerR decomposition µ − A, then Gt = NtDt with Dt = e , t 1 Λ being the intensity process Λt = − dAs. ¤ 0 Gs

Exercise 2.4.16 We consider, as in the paper of Biagini, Lee and Rheinlander a mortality bond, a R τ∧T financial instrument with payoff Y = 0 Gsds. We assume that G is continuous. 1. Compute, in the case r = 0, the price Y of the mortality bond (it will be convenient to R t T 2 introduce mt = E( 0 Gsds|Ft). Is the process m a (P, F) martingale? a (P, G)-martingale? 2. Determine α, β and γ so that 1 1 dYt = αtdMt + βt(dmt − dhm, Zit) + γt(dZt − dhZit) Gt Gt

3. Determine the price D(t, T ) of a defaultable zero-coupon bond with maturity T , i.e. a financial asset with terminal payoff 11T <τ . Give the dynamics of this price. 4. We now assume that F is a Brownian filtration, and that

dSt = St(bdt + σdWt) Explain how one can hedge the mortality bond.

Hint: From the key lemma, ÃZ ! Z Z τ∧T 1 τ∧T τ E Gsds|Gt = 11t<τ E(11t<τ Gsds|Ft) + 11τ≤t Gsds 0 Gt 0 0 and from the second part of key lemma

Z τ∧T Z ∞ Z u∧T I := E(11t<τ Gsds|Ft) = E( dFu Gsds|Ft) 0 t 0 Z T Z u Z ∞ Z T = E( dFu Gsds|Ft) + E( dFu Gsds|Ft) t 0 T 0 Z T Z u Z T = E( dFu Gsds|Ft) + E(GT Gsds|Ft) t 0 0 R t From integration by parts formula, setting ζt = 0 Gsds Z T Z T GT ζT = Gtζt + ζsdGs + Gsdζs t t 2.4. PROGRESSIVE ENLARGEMENT 41 hence Z T Z t 2 2 I = Gtζt + E( Gsds) = Gtζt − Gsds + mt t 0 and

µZ t µ Z t ¶¶ Z τ 1 2 It := E(I|Gt) = 11t<τ Gsds + mt − Gsds + 11τ≤t Gudu 0 Gt 0 0 Z τ Z t Z s = it11t<τ + 11τ≤t Gudu = it11t<τ + dHs Gudu 0 0 0

Differentiating this expression leads to

Z t µ Z t ¶ µ ¶ 1 2 1 dIt = ( Gsds − it)dHt + (1 − Ht) 2 mt − Gsds −dZt + dAt + dhZit 0 Gt 0 Gt µ Z t ¶ 1 1 1 2 +(1 − Ht) 2 dhm, Git + dmt − 2 (mt − Gsds)dhm, Git Gt Gt Gt 0

dAt After some simplifications, and using the fact that dMt = dHt − (1 − Ht) Gt

Z t Z t 1 2 1 1 1 2 1 dIt = (mt− Gsds)dMt+(1−Ht) (dmt− dhm, Zit)−(1−Ht) 2 (mt− Gsds)(dZt− dhZit) Gt 0 Gt Gt Gt 0 Gt C

2.4.4 Immersion Setting

Characterization of Immersion

Let us first investigate the case where the (H) hypothesis holds.

Lemma 2.4.17 In the progressive enlargement setting, (H) holds between F and G if and only if one of the following equivalent conditions holds:

(i) ∀(t, s), s ≤ t, P(τ ≤ s|F ) = P(τ ≤ s|F ), ∞ t (2.4.2) (ii) ∀t, P(τ ≤ t|F∞) = P(τ ≤ t|Ft).

Proof: If (ii) holds, then (i) holds too. If (i) holds, F∞ and σ(t ∧ τ) are conditionally independent given Ft. The property follows. This result can also be found in Dellacherie and Meyer [21]. ¤

Note that, if (H) holds, then (ii) implies that the process P(τ ≤ t|Ft) is increasing.

Exercise 2.4.18 Prove that if H and F are immersed in G, and if any F martingale is continuous, then τ and F∞ are independent. C

Exercise 2.4.19 Assume that immersion property holds and let yt(u) be a family of F-martingales. Prove that, for t > s,

11τ≤sE(yt(τ)|Gs) = 11τ≤sys(τ)

C 42 CHAPTER 2. ENLARGEMENT OF FILTRATIONS

Norros’s lemma

Proposition 2.4.20 Assume that G = (Gt)t≥0 is continuous and G∞ = 0 and let Λ be the increas- ing predictable process such that Mt = Ht − Λt∧τ is a martingale. Then the following conclusions hold:

(i) the variable Λτ has standard exponential law (with parameter 1);

(ii) if F is immersed in G, then the variable Λτ is independent of F∞.

Proof: (i) Consider the process X = (Xt, t ≥ 0), defined by:

Ht −zΛt∧τ Xt = (1 + z) e for all t ≥ 0 and any z > 0 fixed. Then, applying the integration by parts formula, we get:

−zΛt∧τ dXt = z e dMt . (2.4.3)

Hence, by virtue of the assumption that z > 0, it follows from (2.4.3) that X is also a G-martingale, so that: £ ¯ ¤ Ht −zΛt∧τ Hs −zΛs∧τ E (1 + z) e ¯ Gs = (1 + z) e (2.4.4) holds for all 0 ≤ s ≤ t. In view of the implied by z > 0 uniform integrability of X, we may let t go to infinity in (2.4.4). Setting s equal to zero in (2.4.4), we therefore obtain: £ ¤ E (1 + z) e−zΛτ = 1 .

This means that the Laplace transform of Λτ is the same as one of a standard exponential variable and thus proves the claim. (ii) Recall that, under immersion property, G is decreasing and dΛ = dG/G. Applying the change-of-variable formula, we get: Z t 11 −zΛt∧τ −zΛs (τ>s) e = 1 + z e dGs (2.4.5) 0 Gs for all t ≥ 0 and any z > 0 fixed. Then, taking conditional expectations under Ft from both parts of expression (2.4.5) and applying Fubini’s theorem, we obtain from the immersion of F in G that: Z · ¸ £ ¯ ¤ t 11 ¯ −zΛt∧τ −zΛs (τ>s) ¯ E e ¯ Ft = 1 + z E e ¯ Ft dGs (2.4.6) 0 Gs Z t P(τ > s | F ) −zΛs t = 1 + z e dGs 0 Gs Z t −zΛs = 1 + z e dGs 0 for all t ≥ 0. Hence, using the fact that Λt = − ln Gt, we see from (2.4.6) that: £ ¤ z ¡ ¢ E e−zΛt∧τ | F = 1 + (G )1+z − (G )1+z t 1 + z t 0 holds for all t ≥ 0. Letting t go to infinity and using the assumption G0 = 1, as well as the fact that G∞ = 0 (P-a.s.), we therefore obtain: £ ¤ 1 E e−zΛτ | F = ∞ 1 + z that signifies the desired assertion. ¤ 2.4. PROGRESSIVE ENLARGEMENT 43

G-martingales versus F martingales

Proposition 2.4.21 Assume that F is immersed in G under P. Let ZG be a G-adapted, P-integrable process given by the formula G Zt = Zt11τ>t + Zt(τ)11τ≤t, ∀ t ∈ R+, (2.4.7) where: (i) the projection of ZG onto F, which is defined by F G Zt := EP(Zt |Ft) = Zt P(τ > t|Ft) + EP(Zt(τ)11τ≤t|Ft), is a (P, F)-martingale, (ii) for any fixed u ∈ R+, the process (Zt(u), t ∈ [u, ∞)) is a (P, F)-martingale. Then the process ZG is a (P, G)-martingale.

Proof: Let us take s < t. Then G EP(Zt |Gs) = EP(Zt11τ>t|Gs) + EP(Zt(τ)11s<τ≤t|Gs) + EP(Zt(τ)11τ≤s|Gs) 1 = 11s<τ (EP(ZtGt|Fs) + EP(Zt(τ)11s<τ≤t|Fs)) + EP(Zt(τ)11τ≤s|Gs) Gs On the one hand, EP(Zt(τ)11τ≤s|Gs) = 11τ≤sZs(τ) (2.4.8)

Indeed, it suffices to prove the previous equality for Zt(u) = h(u)Xt where X is an F-martingale. In that case,

EP(Xth(τ)11τ≤s|Gs) = 11τ≤sh(τ)EP(Xt|Gs) = 11τ≤sh(τ)EP(Xt|Fs) = 11τ≤sh(τ)Xs = 11τ≤sZs(τ) In the other hand, from (i)

EP(ZtGt + Zt(τ)11τ≤t|Fs) = ZsGs + EP(Zs(τ)11τ≤s|Fs) It follows that

G 1 EP(Zt |Gs) = 11s<τ (ZsGs + EP((Zs(τ) − Zt(τ))11τ≤s|Fs)) + 11τ≤sZs(τ) Gs It remains to check that EP((Zt(τ) − Zs(τ))11τ≤s|Fs) = 0 which follows from

EP(Zt(τ)11τ≤s|Fs) = EP(Zt(τ)11τ≤s|Gs|Fs) = EP(Zs(τ)11τ≤s|Fs) where we have used (2.4.8). ¤

Exercise 2.4.22 (A different proof of Norros’ result) Suppose that

−Γt P(τ ≤ t|F∞) = 1 − e where Γ is an arbitrary continuous strictly increasing F-adapted process. Prove, using the inverse of Γ that the random variable Γτ is independent of F∞, with exponential law of parameter 1. Hint: Suppose that −Γt P (τ ≤ t|F∞) = 1 − e where Γ is an arbitrary continuous strictly increasing F-adapted process. Let us set Θ : = Γτ . Then

{t < Θ} = {t < Γτ } = {Ct < τ}, where C is the right inverse of Γ, so that ΓCt = t. Therefore

−ΓC −u P (Θ > u|F∞) = e u = e . We have thus established the required properties, namely, the probability law of Θ and its indepen- dence of the σ-field F∞. Furthermore, τ = inf{t :Γt > Γτ } = inf{t :Γt > Θ}. C 44 CHAPTER 2. ENLARGEMENT OF FILTRATIONS

2.4.5 Cox processes

We present here an particular case where immersion property holds. We deal here with the basic and important model for credit risk. Let (Ω, G, P) be a probability space endowed with a filtration F. A nonnegative F-adapted process λ is given. We assume that there exists, on the space (Ω, G, P), a random variable Θ, independent of −t F∞, with an exponential law: P(Θ ≥ t) = e . We define the default time τ as the first time when R t the increasing process Λt = 0 λs ds is above the random level Θ, i.e.,

τ = inf {t ≥ 0 : Λt ≥ Θ}.

In particular, using the increasing property of Λ, one gets {τ > s} = {Λs < Θ}. We assume that Λt < ∞, ∀t, Λ∞ = ∞, hence τ is a real-valued r.v..

Comment 2.4.23 (i) In order to construct the r.v. Θ, one needs to enlarge the probability space as follows. Let (Ωˆ, Fˆ, Pˆ) be an auxiliary probability space with a r.v. Θ with exponential law. We ∗ ∗ ∗ introduce the product probability space (Ω , G , Q ) = (Ω × Ωˆ, F∞ ⊗ Fˆ, P ⊗ Pˆ). (ii) Another construction for the default time τ is to choose τ = inf {t ≥ 0 : N˜ = 1}, where R Λt t ˜ Λt = 0 λs ds and N is a Poisson process with intensity 1, independent of the filtration F. This second method is in fact equivalent to the first. Cox processes are used in a great number of studies (see, e.g., [57]). (iii) We shall see that, in many cases, one can restrict attention to Cox processes models.

Conditional Expectations

Lemma 2.4.24 The conditional distribution function of τ given the σ-field Ft is for t ≥ s ³ ´ P(τ > s|Ft) = exp − Λs .

Proof: The proof follows from the equality {τ > s} = {Λs < Θ}. From the independence assump- tion and the Ft-measurability of Λs for s ≤ t, we obtain ³ ¯ ´ ³ ´ ¯ P(τ > s|Ft) = P Λs < Θ ¯ Ft = exp − Λs .

In particular, we have P(τ ≤ t|Ft) = P(τ ≤ t|F∞), (2.4.9) and, for t ≥ s, P(τ > s|Ft) = P(τ > s|Fs). Let us notice that the process Ft = P(τ ≤ t|Ft) is here an increasing process, as the right-hand side of 2.4.9 is. ¤

Corollary 2.4.25 In the Cox process modeling, immersion property holds for F and G.

Remark 2.4.26 If the process λ is not non-negative, we get,

{τ > s} = {sup Λu < Θ} , u≤s hence for s < t P(τ > s|Ft) = exp(− sup Λu) . u≤s More generally, some authors define the default time as

τ = inf {t ≥ 0 : Xt ≥ Θ} where X is a given F-semi-martingale. Then, for s ≤ t

P(τ > s|Ft) = exp(− sup Xu) . u≤s 2.4. PROGRESSIVE ENLARGEMENT 45

Exercise 2.4.27 Prove that τ is independent of F∞ if and only if λ is a deterministic function. C

Exercise 2.4.28 Compute P(τ > t|Ft) in the case when Θ is a non negative random variable, independent of F∞, with cumulative distribution function F . C

Exercise 2.4.29 Prove that, if P(τ > t|Ft) is continuous and strictly decreasing, then there exists Θ independent of F∞ such that τ = inf{t :Λt > Θ}. C

Exercise 2.4.30 Write the Doob-Meyer and the multiplicative decomposition of G. C

Exercise 2.4.31 Show how one can compute P(τ > t|Ft) when

τ = inf{t : Xt > Θ} where X is an F-adapted process, not necessarily increasing, and Θ independent of F∞. Does immersion property still holds? Same questions if Θ is not independent of F∞. C

2.4.6 Successive Enlargements

i i Proposition 2.4.32 Let τ1 < τ2 a.s. , H be the filtration generated by the default process Ht = 1 2 11τi≤t, and G = F ∨ H ∨ H . Then, the two following assertions are equivalent: (i) F is immersed in G (ii) F is immersed in F ∨ H1 and F ∨ H1 is immersed in G.

Proof: (this result was obtained by Ehlers and Sch¨onbucher [25], we give here a slightly different proof.) The only fact to check is that if F is immersed in G, then F ∨ H1 is immersed in G, or that

1 1 P(τ2 > t|Ft ∨ Ht ) = P(τ2 > t|F∞ ∨ H∞)

This is equivalent to, for any h, and any A∞ ∈ F∞

1 E(A∞h(τ1)11τ2>t) = E(A∞h(τ1)P(τ2 > t|Ft ∨ Ht ))

We spilt this equality in two parts. The first equality

1 E(A∞h(τ1)11τ1>t11τ2>t) = E(A∞h(τ1)11τ1>tP(τ2 > t|Ft ∨ Ht ))

1 is obvious since 11τ1>t11τ2>t = 11τ1>t and 11τ1>tP(τ2 > t|Ft ∨ Ht ) = 11τ1>t. Since F is immersed in G, 1 one has E(A∞|Gt) = E(A∞|Ft) and it follows (WHY?) that E(A∞|Gt) = E(A∞|Ft ∨ Ht ), therefore

E(A∞h(τ1)11τ2>t≥τ1 ) = E(E(A∞|Gt)h(τ1)11τ2>t≥τ1 ) 1 = E(E(A∞|Ft ∨ Ht )h(τ1)11τ2>t≥τ1 ) 1 1 = E(E(A∞|Ft ∨ Ht )E(h(τ1)11τ2>t≥τ1 |Ft ∨ Ht )) 1 = E(A∞E(h(τ1)11τ2>t≥τ1 |Ft ∨ Ht ))

Lemma 2.4.33 Norros Lemma. 1 n Let τi, i = 1, ··· , n be n finite-valued random times and Gt = Ht ∨ · · · ∨ Ht . Assume that

(i) P (τi = τj) = 0, ∀i 6= j

i i i i (ii) there exists continuous processes Λ such that Mt = Ht − Λt∧τi are G-martingales

i then, the r.v’s Λτi are independent with exponential law. 46 CHAPTER 2. ENLARGEMENT OF FILTRATIONS

i i i H −µiΛt∧τ Proof: For any µi > −1 the processes Lt = (1 + µi) t e i , solution of

i i i dLt = Lt− µidMt are uniformly integrable martingales. Moreover, these martingales have no common jumps, and are Q i −µiΛτ orthogonal. Hence E( i(1 + µi)e i ) = 1, which implies

Y i Y −µiΛτ −1 E( e i ) = (1 + µi) i i hence the independence property. ¤ AJOUTER F

As an application, let us study the particular case of Poisson process. Let τ1 and τ2 are the two first jumps of a Poisson process, we have ½ e−λt for s < t G(t, s) = e−λs(1 + λ(s − t)) for s > t with partial derivatives ½ ½ −λe−λt for t > s 0 for t > s ∂ G(t, s) = , ∂ G(t, s) = 1 −λe−λs for s > t 2 −λ2e−λs(s − t) for s > t and ½ ½ 1 for t > s 0 for t > s h(t, s) = t , ∂1h(t, s) = 1 s for s > t s for s > t ½ ½ 0 for t > s 0 for t > s k(t, s) = , ∂ k(t, s) = 1 − e−λ(s−t) for s > t 2 λe−λ(s−t) for s > t

Then, one obtains U1 = τ1 et U2 = τ2 − τ1

Martingales associated with a default time in the filtration generated by two default processes

We present the computation of the martingales associated with the times τi in different filtrations. In particular, we shall obtain the computation of the intensities in various filtrations.

We have established that, if F is a given reference filtration and if Gt = P(τ > t|Ft) is the Az´ema R t supermartingale admitting a Doob-Meyer decomposition Gt = Zt − 0 asds, then the process

Z t∧τ as Ht − ds , 0 Gs is a G-martingale, where G = F ∨ H and Ht = σ(t ∧ τ). We now consider the case where two default times τi, i = 1; 2 are given. We assume that

G(t, s) := P(τ1 > t, τ2 > s)

i i is twice differentiable and does not vanish. We denote by Ht = σ(Hs, s ≤ t) the filtration generated by he default process Hi and by H = H1 ∨ H2 the filtration generated by the two default processes. Our aim is to give the compensator of H1 in the filtration H1 and in the filtration H. We apply the general result to the case F = H2 and H = H1. Let

1|2 2 Gt = P(τ1 > t|Ht ) 2.4. PROGRESSIVE ENLARGEMENT 47 be the Az´emasupermartingale of τ in the filtration H2, with Doob-Meyer decomposition G1|2 = R 1 t 1|2 t 1|2 1|2 2 Zt − 0 as ds where Z is a H -martingale. Then, the process Z t∧τ1 a1|2 H1 − s ds t 1|2 0 Gs

R 1|2 1|2 t∧τ1 as 1 is a G-martingale. The process At = 0 1|2 ds is the G-adapted compensator of H . The same Gs methodology can be applied for the compensator of H2.

Proposition 2.4.34 The process M 1,G defined as

Z t∧τ1∧τ2 Z t∧τ1 1,G 1 ∂1G(s, s) f(s, τ2) Mt := Ht + ds + ds 0 G(s, s) t∧τ1∧τ2 ∂2G(s, τ2) is a G-martingale.

Proof: Some easy computation enables us to write

1|2 2 2 2 P(τ1 > t, τ2 > t) Gt = P(τ1 > t|Ht ) = Ht P(τ1 > t|τ2) + (1 − Ht ) P(τ2 > t) 2 2 = Ht h(t, τ2) + (1 − Ht )ψ(t) (2.4.10) where ∂ G(t, v) h(t, v) = 2 ; ψ(t) = G(t, t)/G(0, t). ∂2G(0, v)

Function t → ψ(t) and process t → h(t, τ2) are continuous and of finite variation, hence integration by parts rule leads to

1|2 2 2 2 0 2 dGt = h(t, τ2)dHt + Ht ∂1h(t, τ2)dt + (1 − Ht )ψ (t)dt − ψ(t)dHt 2 ¡ 2 2 0 ¢ = (h(t, τ2) − ψ(t)) dH + H ∂1h(t, τ2) + (1 − H )ψ (t) dt µ t ¶ t t ∂2G(t, τ2) G(t, t) 2 ¡ 2 2 0 ¢ = − dHt + Ht ∂1h(t, τ2) + (1 − Ht )ψ (t) dt ∂2G(0, τ2) G(0, t) From the computation of the Stieljes integral, we can write

Z T µ ¶ µ ¶ G(t, t) ∂2G(t, τ2) 2 G(τ2, τ2) ∂2G(τ2, τ2) − dHt = − 1{τ2≤T } 0 G(0, t) ∂2G(0, τ2) G(0, τ2) ∂2G(0, τ2) Z T µ ¶ G(t, t) ∂2G(t, t) 2 = − dHt 0 G(0, t) ∂2G(0, t) and substitute it in the expression of dG1|2 : µ ¶ 1|2 ∂2G(t, t) G(t, t) 2 ¡ 2 2 0 ¢ dGt = − dHt + Ht ∂1h(t, τ2) + (1 − Ht )ψ (t) dt ∂2G(0, t) G(0, t) We now use that ¡ ¢ ∂ G(0, t) dH2 = dM 2 − 1 − H2 2 dt t t t G(0, t) where M 2 is a H2-martingale, and we get the H2− Doob-Meyer decomposition of G1|2 : µ ¶ µ ¶ 1|2 ∂2G(t, t) G(t, t) 2 ¡ 2¢ G(t, t) ∂2G(t, t) ∂2G(0, t) dGt = − dMt − 1 − Ht − dt ∂2G(0, t) G(0, t) G(0, t) ∂2G(0, t) G(0, t) ¡ 2 2 0 ¢ + Ht ∂1h(t, τ2) + (1 − Ht )ψ (t) dt 48 CHAPTER 2. ENLARGEMENT OF FILTRATIONS and from µ ¶ ∂ G(t, t) G(t, t) ∂ G(0, t) ∂ G(t, t) ψ0(t) = 2 − 2 + 1 ∂2G(0, t) G(0, t) G(0, t) G(0, t) we conclude µ ¶ µ ¶ 1|2 G(t, t) ∂2G(t, t) 2 2 (1) 2 ∂1G(t, t) dGt = − dMt + Ht ∂1h (t, τ2) + (1 − Ht ) dt G(0, t) ∂2G(0, t) G(0, t)

From (2.4.10), the process G1|2 has a single jump of size ∂2G(t,t) − G(t,t) . From (2.4.10), ∂2G(0,t) G(0,t)

G(t, t) G1|2 = = ψ(t) t G(0, t)

0 on the set τ2 > t, and its bounded variation part is ψ (t). The hazard process has a non null martingale part, except if G(t,t) = ∂2G(t,t) (this is the case if the default are independent). Hence, G(0,t) ∂2G(0,t) (H) hypothesis is not satisfied in a general setting between Hi and G. It follows that the intensity ∂1G(s,s) ∂1h(s,τ2) of τ1 in the G-filtration is on the set {t < τ2 ∧ τ1} and on the set {τ2 < t < τ1}. It G(s,s) h(s,τ2) can be proved that the intensity of τ1 ∧ τ2 is ∂ G(s, s) ∂ G(s, s) g(t) 1 + 2 = G(s, s) G(s, s) G(t, t) where

i 1 2 Exercise 2.4.35 Prove that H , i = 1, 2 is immersed in H ∨ H if and only if τi, i = 1, 2 are independent. C

Kusuoka counter example

Kusuoka [56] presents a counter-example of the stability of H hypothesis under a change of proba- bility, based on two independent random times τ and τ given on some probability space (Ω, G, P) 1 2 R 1 1 t∧τ1 and admiting a density w.r.t. Lebesgue’s measure. The process Mt = Ht − 0 λ1(u) du, where 1 i Ht = 11{t≥τ1} and λi is the deterministic intensity function of τi under P, is a (P, H ) and a (P, G)- P(τi∈ds martingale (Recall that λi(s)ds = ) . Let us set dQ | G = Lt dP | G , where P(τi>s t t

Z t 1 Lt = 1 + Lu−κu dMu 0 for some G-predictable process κ satisfying κ > 1 (WHY?), where G = H1 ∨ H2. We set F = H1 and H = H2. Manifestly, immersion hypothesis holds under P. Let

Z t∧τ1 c1 1 b Mt = Ht − λ1(u) du 0 Z t∧τ1 f1 1 Mt = Ht − λ1(u)(1 + κu) du 0 b where λ(u)du = Q(τ1 ∈ du)/Q(τ1 > u) is deterministic. It is easy to see that, under Q, the process Mc1 is a (Q, H1)-martingale and Mf1 is a (Q, G) martingale. The process Mc1 is not a (Q, G)- martingale (WHY?), hence, immersion does not hold under Q.

2 Exercise 2.4.36 Compute Q(τ1 > t|Ht ). C 2.4. PROGRESSIVE ENLARGEMENT 49

2.4.7 Ordered times

Assume that τi, i = 1, . . . , n are n random times admitting a density process with respect to a (k) reference filtration F. Let σi, i = 1, . . . , n be the sequence of ordered random times and G = F ∨ H(1) · · · ∨ H(k) where H(i) = (H(i) = σ(t ∧ σ ), t ≥ 0). The G(k)-intensity of σ is the positive t i R k (k) k (k) t k (k) G -adapted process λ such that (Mt := 11{σk≤t} − 0 λs ds, t ≥ 0) is a G -martingale. The (k) (k) (k) k G -martingale M is stopped at σk and the G -intensity of σk satisfies λt = 0 on {t ≥ σk}. (k) (n) The following lemma shows the G -intensity of σk coincides with its G -intensity.

(k) (n) Lemma 2.4.37 For any k, a G -martingale stopped at σk is a G -martingale.

(k) Proof: Let Y be a G martingale stopped at σk, i.e. Yt = Yt∧σk for any t. Then it is also a (G(k) ) -martingale. Since G(k) = G(n) , we shall prove that the (G(n) ) -martingale Y is also t∧σk t≥0 t∧σk t∧σk t∧σk t≥0 a G(n)-martingale. (n) Let A ∈ Gt . For any T > t, we have E[YT 11A] = E[YT 11A∩{σk≥t}] + E[YT 11A∩{σk

E[Yt11A∩{σk≥t}]. On the other hand, since Y is stopped at σk, E[YT 11A∩{σk

The following is a familiar result in the literature.

(k) k Proposition 2.4.38 Assume that the G -intensity λ of σk exists for all k ∈ Θ. Then the loss intensity is the sum of the intensities of σk, i.e. Xn λL = λk, a.s.. (2.4.11) k=1 R Proof: Since (11 − t λkds, t ≥ 0) is a G(k)-martingale stopped at σ , it is a G(n)-martingale. {σk≤t} 0 s R k t Pn k (n) L Pn k We have by taking the sum that (Lt − 0 k=1 λs ds, t ≥ 0) is a G -martingale. So λt = k=1 λt for all t ≥ 0. ¤

2.4.8 Pseudo-stopping Times ♠

τ As we have mentioned, if (H) holds, the process (Zt , t ≥ 0) is a decreasing process. The converse is not true. The decreasing property of Zτ is closely related with the definition of pseudo-stopping times, a notion developed from D. Williams example (see Example 2.4.41 below).

Definition 2.4.39 A random time τ is a pseudo-stopping time if, for any bounded F-martingale m, E(mτ ) = m0 .

Proposition 2.4.40 The random time τ is a pseudo-stopping time if and only if one of the following equivalent properties holds:

τ • For any local F-martingale m, the process (mt∧τ , t ≥ 0) is a local F -martingale,

τ • A∞ = 1,

τ • µt = 1, ∀t ≥ 0, • The process Zτ is a decreasing F-predictable process. 50 CHAPTER 2. ENLARGEMENT OF FILTRATIONS

Proof: The implication d ⇒ a is a consequence of Jeulin lemma. The implication a ⇒ b follows from the properties of the compensator Aτ : indeed

Z ∞ τ τ E(mτ) = E( mudAu) = E(m∞A∞) = m0 0

τ implies that A∞ = 1 We refer to Nikeghbali and Yor [63]. ¤

Example 2.4.41 The first example of a pseudo-stopping time was given by Williams [76]. Let B be a Brownian motion and define the stopping time T1 = inf{t : Bt = 1} and the random time ϑ = sup{t < T1 : Bt = 0}. Set B τ = sup{s < θ : Bs = Ms } B where Ms is the running maximum of the Brownian motion. Then, τ is a pseudo-stopping time. Note that E(Bτ ) is not equal to 0; this illustrates the fact we cannot take any martingale in Definition

2.4.39. The martingale (Bt∧T1 , t ≥ 0) is neither bounded, nor uniformly integrable. In fact, since B the maximum Mθ (=Bτ ) is uniformly distributed on [0, 1], one has E(Bτ ) = 1/2.

2.4.9 Honest Times

There exists an interesting class of random times τ such that F-martingales are Fτ -semi-martingales.

Definition 2.4.42 A random time τ is honest if τ is equal to an Ft-measurable random variable on τ < t.

Example 2.4.43 (i) Let t fixed and gt = sup{s < t : Bs = 0} and set τ = g1. Then, g1 = gt on {g1 < t}. ∗ ∗ (ii) Let X be an adapted continuous process and X = sup Xs,Xt = sups≤t Xs. The random time

∗ τ = inf{s : Xs = X }

∗ is honest. Indeed, on the set {τ < t}, one has τ inf{s : Xs = Xs }.

Proposition 2.4.44 ( Jeulin [46] ) A random time τ is honest if it is the end of a predictable set, i.e., τ(ω) = sup{t :(t, ω) ∈ Γ}, where Γ is an F-predictable set.

In particular, an honest time is F∞-measurable. If X is a transient diffusion, the last passage time Λa (see Proposition 3.1.1) is honest.

Lemma 2.4.45 The process Y is Fτ -predictable if and only if there exist two F predictable processes y and ye such that Yt = yt11t≤τ + yet11t>τ . 1 τ Let X ∈ L . Then a c`adl`agversion of the martingale Xt = E [X|Ft ] is given by: 1 1 Xt = τ E [ξ1t<τ |Ft] 1t<τ + τ E [ξ1t≥τ |Ft] 1t≥τ . Zt 1 − Zt

Every Fτ optional process decomposes as

L11[0,τ[ + J11[τ] + K11]τ,∞[, where L and K are F-optional processes and where J is a F progressively measurable process. 2.4. PROGRESSIVE ENLARGEMENT 51

See Jeulin [46] for a proof.

Lemma 2.4.46 (Az´ema)Let τ be an honest time which avoids F-stopping times. Then: τ (i) A∞ has an exponential law with parameter 1. τ τ (ii) The measure dAt is carried by {t : Zt = 1} (iii) τ = sup{t : 1 − Zt = 1} τ τ (iv)A∞ = Aτ

Proposition 2.4.47 Let τ be honest. We assume that τ avoids stopping times. Then, if M is an F-local martingale, there exists an Fτ -local martingale Mf such that Z t∧τ τ Z τ∨t τ f dhM, µ is dhM, µ is Mt = Mt + τ − τ . 0 Zs− τ 1 − Zs−

1 τ Proof: Let M be an F-martingale which belongs to H and Gs ∈ Fs . We define a G-predictable process Y as Yu = 11Gs 11]s,t](u). For s < t, one has, using the decomposition of G-predictable processes: µZ ∞ ¶

E(11Gs (Mt − Ms)) = E YudMu 0 µZ τ ¶ µZ ∞ ¶ = E yudMu + E yeudMu . 0 τ R t ¡R ∞ ¢ Noting that 0 yeudMu is a martingale yields E 0 yeudMu = 0, µZ τ ¶

E(11Gs (Mt − Ms)) = E (yu − yeu)dMu 0 µZ ∞ Z v ¶ τ = E dAv (yu − yeu)dMu . 0 0 R t By integration by parts, setting Nt = 0 (yu − yeu)dMu, we get µZ ∞ ¶ τ τ τ E(11Gs (Mt − Ms)) = E(N∞A∞) = E(N∞µ∞) = E (yu − yeu)dhM, µ iu . 0 Now, it remains to note that µZ ∞ µ τ τ ¶¶ dhM, µ iu dhM, µ iu E Yu 11{u≤τ} − 11{u>τ} 0 Zu− 1 − Zu− µZ ∞ µ τ τ ¶¶ dhM, µ iu dhM, µ iu = E yu 11{u≤τ} − yeu 11{u>τ} 0 Zu− 1 − Zu− µZ ∞ ¶ τ τ = E (yudhM, µ iu − yeudhM, µ iu) 0 µZ ∞ ¶ τ = E (yu − yeu) dhM, µ iu 0 to conclude the result in the case M ∈ H1. The general result follows by localization. ¤

Example 2.4.48 Let W be a Brownian motion, and τ = g1, the last time when the BM reaches g1 0 before time 1, i.e., τ = sup{t ≤ 1 : Wt = 0}. Using the computation of Z in the following Subsection 3.3 and applying Proposition 2.4.47, we obtain the decomposition of the Brownian motion in the enlarged filtration Z t 0 µ ¶ f Φ |Ws| sgn(Ws) Wt = Wt − 11[0,τ](s) √ √ ds 0 1 − Φ 1 − s 1 − s Z t 0 µ ¶ Φ |Ws| +11{τ≤t} sgn(W1) √ ds τ Φ 1 − s 52 CHAPTER 2. ENLARGEMENT OF FILTRATIONS

q R 2 x 2 where Φ(x) = π 0 exp(−u /2)du.

Comment 2.4.49 Note that the random time τ presented in Subsection 3.4 is not the end of a predictable set, hence, is not honest. However, F-martingales are semi-martingales in the progressive enlarged filtration: it suffices to note that F-martingales are semi-martingales in the filtration initially enlarged with W1.

Exercise 2.4.50 Prove that any F-stopping time is honest C

Exercise 2.4.51 Prove that Z t∧τ τ Z τ∨t τ dhM, µ is dhM, µ is E( τ − τ Ft) 0 Zs− τ 1 − Zs− is an F-martingale. C

2.4.10 An example (Nikeghbali and Yor ♠

This section is a part of [64]).

Definition 2.4.52 An F-local martingale N belongs to the class (C0), if it is strictly positive, with no positive jumps, and limt→∞ Nt = 0.

Let N be a local martingale which belongs to the class (C0), with N0 = x. Let St = sups≤t Ns. We consider:

g = sup {t ≥ 0 : Nt = S∞} = sup {t ≥ 0 : St − Nt = 0} . (2.4.12)

Lemma 2.4.53 [Doob’s maximal identity] For any a > 0, we have: ³x´ P (S > a) = ∧ 1. (2.4.13) ∞ a x In particular, is a uniform random variable on (0, 1). S∞ ϑ For any stopping time ϑ, denoting S = supu≥ϑ Nu : µ ¶ ¡ ¢ N P Sϑ > a|F = ϑ ∧ 1, (2.4.14) ϑ a N Hence ϑ is also a uniform random variable on (0, 1), independent of F . Sϑ ϑ Proof: The first part was done in Exercise 1.2.3. The second part is an application of the first one for the martingale Nϑ+t, t ≥ 0) and the filtration (Fϑ+t, t ≥ 0). ¤ Without loss of generality, we restrict our attention to the case x = 1.

Proposition 2.4.54 The submartingale Gt ≡ P (g > t | Ft) admits a multiplicative decomposition Nt as Gt = , t ≥ 0. St Proof: We have the following equalities between sets {g > t} = {∃ u > t : S = N } = {∃ u > t : S ≤ N } ½ u ¾ u t u t = sup Nu ≥ St = {S ≥ Ss}. u≥t

Nt Hence, from (2.4.14), we get: P (g > t | Ft) = .¤ St 2.4. PROGRESSIVE ENLARGEMENT 53

Proposition 2.4.55 Let N be a local martingale such that its supremum process S is continuous (this is the case if N is in the class C0). Let f be a locally bounded Borel function and define R x F (x) = 0 dyf (y). Then, Xt := F (St) − f (St)(St − Nt) is a local martingale and:

Z t F (St) − f (St)(St − Nt) = f (Ss) dNs + F (S0) , (2.4.15) 0

Proof: In the case of a Brownian motion (i.e., N = B), this was done in Exercise 1.2.2. In the case of continuous martingales, if F is C2,

Z t Z t F (St) − f (St)(St − Nt) = F (St) − f (Ss) dSs + f (Ss) dNs 0 0 Z t 0 + (Ss − Ns)f (Ss)dSs 0

R t The last integral is null, because dS is carried by {S − N = 0} and 0 f (Ss) dSs = F (St) − F (S0). For the general case, we refer the reader to [64]. ¤ σ(S∞) Let us define the new filtration Ft . Since g = inf {t : Nt = S∞} ;, the random variable g is an Fσ(S∞)-stopping time. Consequently: g σ(S∞) Ft ⊂ Ft .

Proposition 2.4.56 For any Borel bounded or positive function f, we have: µ ¶ µ ¶ Z Nt/St Nt Nt E (f (S∞) |Ft) = f (St) 1 − + dxf St 0 x

Proof: In the following, U is a random variable, which follows the standard uniform law and which is independent of Ft.

¡ ¡ t¢ ¢ E (f (S∞) |Ft) = E f St ∨ S |Ft ¡ ¢ ¡ ¡ t¢ ¢ t t = E f (St) 11{St≥S }|Ft + E f S 11{St

µ ¶ Z ∞ Nt f (y) E (f (S∞) |Ft) = f (St) 1 − + Nt dy 2 St St y µ ¶ Z ∞ Nt f (y) = f (St) 1 − + Nt dy 2 St St y Z ∞ µZ ∞ ¶ f (y) f (y) f (St) = St dy 2 − (St − Nt) dy 2 − . St y St y St Hence,

E (f (S∞) |Ft) = H (1) + H (St) − h (St)(St − Nt) , 54 CHAPTER 2. ENLARGEMENT OF FILTRATIONS with Z ∞ f (y) H (x) = x dy 2 , x y and Z Z ∞ f (y) f (x) ∞ dy h (x) = hf (x) ≡ dy 2 − = 2 (f (y) − f (x)) . x y x x y

Moreover, from the Az´ema-Yor type formula 2.4.15, we have the following representation of E (f (S∞) |Ft) as a stochastic integral:

Z t E (f (S∞) |Ft) = E (f (S∞)) + h (Ss) dNs. 0 ³ ´ ˙ Moreover, there exist two families of random measures (λt (dx))t≥0 and λt (dx) , with t≥0 µ ¶ N dx λ (dx) = 1 − t δ (dx) + N 11 t St t {x>St} 2 St x 1 dx λ˙ (dx) = − δ (dx) + 11 , t St {x>St} 2 St x such that Z

E (f (S∞) |Ft) = λt (f) = λt (dx) f (x) Z ˙ ˙ λt (f) = λt (dx) f (x) .

˙ Finally, we notice that there is an absolute continuity relationship between λt (dx) and λt (dx); more precisely, ˙ λt (dx) = λt (dx) ρ (x, t) , with −1 1 ρ (x, t) = 11{St=x} + 11{St

Theorem¡ 2.4.57 Let¢ N be a local martingale in the class C0 (recall N0 = 1). Then, the pair of filtrations F, Fσ(S∞) satisfies the (H0) hypothesis and every F local martingale X is an Fσ(S∞) semimartingale with canonical decomposition:

Z t Z t e dhX,Nis dhX,Nis Xt = Xt + 11{g>s} − 11{g≤s} , 0 Ns− 0 S∞ − Ns− where Xe is a Fσ(S∞)-local martingale.

1 Proof: We can first assume that X is in H ; the general case follows by localization. Let Ks be an Fs measurable set, and take t > s. Then, for any bounded test function f, λt (f) is a bounded martingale, hence in BMO, and we have:

E (11Ks f (S∞)(Xt − Xs)) = E (11Ks (λt (f) Xt − λs (f) Xs))

= E (11Ks (hλ (f) ,Xit − hλ (f) ,Xis)) µ µZ t ¶¶ ˙ = E 11Ks λu (f) dhX,Niu s µ µZ t Z ¶¶

= E 11Ks λu (dx) ρ (x, u) f (x) dhX,Niu s µ µZ t ¶¶

= E 11Ks dhX,Niuρ (S∞, u) . s 2.4. PROGRESSIVE ENLARGEMENT 55

But we also have: −1 1 ρ (S∞, t) = 11{St=S∞} + 11{St

It now suffices to use the fact that S is constant after g and g is the first time when S∞ = St, or in other words:

11{S∞>St} = 11{g>t}, and 11{S∞=St} = 11{g≤t}.

Corollary 2.4.58 The pair of filtrations (F, Fg) satisfies the (H0) hypothesis. Moreover, every F- local martingale X decomposes as:

Z t Z t e dhX,Nis dhX,Nis Xt = Xt + 11{g>s} − 11{g≤s} , 0 Ns 0 S∞ − Ns where Xe is an Fg-local martingale.

Proof: Let X be an F-martingale which is in H1; the general case follows by localization.

Z t Z t e dhX,Nis dhX,Nis Xt = Xt + 11{g>s} − 11{g≤s} , 0 Ns 0 S∞ − Ns where Xe denotes an Fσ(S∞) martingale. Thus, Xe, which is equal to:

µZ t Z t ¶ dhX,Nis dhX,Nis Xt − 11{g>s} − 11{g≤s} , , 0 Ns 0 S∞ − Ns

g g σ(S∞) g is F adapted (recall that Ft ⊂ Ft ), and hence it is an F -martingale. These results extend to honest times:

Theorem 2.4.59 Let L be an honest time. Then, under the conditions (CA), the supermartingale Zt = P (L > t | Ft) admits the following additive and multiplicative representations: there exists a continuous and nonnegative local martingale N, with N0 = 1 and limt→∞ Nt = 0, such that:

Nt Zt = P (L > t | Ft) = St

Zt = Mt − At. where these two representations are related as follows:

µZ t Z t ¶ dMs 1 d < M >s Nt = exp − 2 ,St = exp (At); 0 Zs 2 0 Zs Z t dNs Mt = 1 + = E (log S∞ | Ft) ,At = log St. 0 Ss

Remark 2.4.60 From the Doob-Meyer (additive) decomposition of Z, we have 1 − Zt = (1 − mt) + ln St. From Skorokhod’s reflection lemma (See [3M]) we deduce that

ln St = sup ms − 1 s≤t

λ2 Exercise 2.4.61 Prove that exp(λBt − 2 t) belongs to (C0). C 56 CHAPTER 2. ENLARGEMENT OF FILTRATIONS

2.5 Initial Times

We assume that τ is a random time satisfying Jacod’s criteria (see Proposition 2.3.2)

Z ∞ P(τ > θ|Ft) = pt(u)ν(du) θ where ν is the law of τ. Moreover, in this section, we assume that p is strictly positive. We call initial times the random variables that enjoy that property.

2.5.1 Progressive enlargement with initial times

As seen in Proposition 2.3.2, if τ s an initial time and X is an F-martingale, it is a F(τ) = F∨σ(τ)-semi martingale with decomposition Z ¯ t ¯ e dhX, p(θ)iu ¯ Xt = Xt + ¯ 0 pu−(θ) θ=τ where Xe is an F ∨ σ(τ)-martingale Since X is a F(τ)-semimartingale, and is G = Fτ adapted, it is a G-semimartingale. We are now looking for its representation. By projection on G, it follows that µZ ¯ ¶ t ¯ e dhX, p(θ)is ¯ Xt = E(Xt|Gt) + E ¯ |Gt 0 ps− (θ) θ=τ

The process E(Xet|Gt) is a G-martingale.

Proposition 2.5.1 Let X be a F-martingale. Then, it is a G-semi-martingale with decomposition Z Z ¯ t∧τ t ¯ b dhX,Gis dhX, p(θ)is ¯ Xt = Xt + + ¯ 0 Gs− t∧τ ps− (θ) θ=τ where Xb is a G-martingale.

Proof: We give the proof in the case where F is a Brownian filtration. Then, for any θ, there exists two predictable processes γ(θ) and x such that

dXt = xtdWt

dtpt(θ) = pt(θ)γt(θ)dWt R ¯ R t dhX,p(θ)is ¯ t and dhX, p(θ)it = xtpt(θ)γt(θ)dt. It follows that 0 p (θ) ¯ = 0 xsγs(τ)ds. We write s− θ=τ µZ t ¶ µZ t∧τ ¶ Z t dhX, p(θ)is dhX, p(θ)is E |θ=τ |Gt = E xsγs(θ)ds|θ=τ |Gt + 0 ps− (θ) 0 t∧τ ps− (θ) From Exercise 1.2.5,

µZ t∧τ ¶ Z t E xsγs(τ)ds|Gt = mt + xsE(11s<τ γs(τ)|Gs)ds 0 0 where m is a G-martingale. From key lemma Z 1 1 ∞ E(11 γ (τ)|G ) = 11 E(11 γ (τ)|F ) = 11 E( γ (u)p (u)ν(du)|F ) s<τ s s s<τ G <τ s s s<τ G s s s s Z s s 1 ∞ = 11s<τ γs(u)ps(u)ν(du) Gs s 2.5. INITIAL TIMES 57

Now, we compute dhX,Gi. In order to compute this bracket, one needs the martingale part of G R ∞ From Gt = t pt(u)ν(du), one deduces that

µZ ∞ ¶ dGt = γt(u)pt(u)ν(du) dWt − pt(t)ν(dt) t The last property can be obtained from Itˆo-Kunita-Wentcell (this can also be checked by hands: t indeed it is straightforward to check that Gt+ ∈0 ps(S)ν(dS) is a martingale). It follows that R ∞ dhX,Git = xt( t γt(u)pt(u)ν(du))dt, hence the result.

Proposition 2.5.2 A c`adl`agprocess Y G is a G-martingale if (and only if) there exist an F-adapted + c`adl`agprocess y and an Ft ⊗ B(R )-optional process yt(.) such that

G Yt = yt11{τ>t} + yt(τ)11{τ≤t} and that G • E(Yt |Ft) is an F-local martingale; • For nay θ, (yt(θ)pt(θ), t ≥ θ) is an F-martingale.

1 Proof: We prove only the ’if’ part. Let Q defined as dQ| (τ) = dP| (τ) . It follows that Ft pt(τ) Ft 1 dQ|G = dP|G (or, equivalently, dP|G = LtdQ|G where EQ(pt(τ)|Gt)) . The process Y is a (P, G) t Lt t t t martingale if and only if YL is a (Q, G)- martingale where

Lt = EQ(pt(τ)|Gt) = 11t<τ `t + 11τ≤tpt(τ)

1 (with `t = Q(t<τ) EQ(11t<τ pt(τ)|Ft)) so that YtLt = 11t<τ yt`t + 11τ≤tpt(τ)yt(τ). Under Q, τ and F∞ are independent, and obviously F is immersed in G. From Proposition 2.4.21, the process YL is a (G, Q) martingale if pt(u)yt(u), t ≥ u is, for any u, an (F, Q)-martingale and if EQ(YtLt|Ft) is an (F, Q)-martingale. We now note that, from Bayes rule, EQ(YtLt|Ft) = EQ(Lt|Ft)EP(Yt|Ft) and that

Z ∞ EQ(Lt|Ft) = EQ(pt(τ)|Ft) = pt(u)ν(du) = 1 0 therefore, EQ(YtLt|Ft) = EP(Yt|Ft). Note that

Z t G EP(Yt |Ft) = ytGt + yt(s)pt(s)ν(ds), t ≥ 0 0 and that Z t Z t yt(s)pt(s)ds − ys(s)ps(s)ν(ds), t ≥ 0 0 0 is a martingale (WHY?) so that the condition EP(Yt|Ft) is a martingale is equivalent to

Z t ytGt + ys(s)ps(s)ν(ds), t ≥ 0 0 is a martingale. ¤

Theorem 2.5.3 Let ZG = z 11 + z (τ)11 be a positive G-martingale with ZG = 1 and let R t t {τ>t} t {τ≤t} 0 F t Zt = ztGt + 0 zt(u)pt(u)ν(du) be its F projection. G Let Q be the probability measure defined on Gt by dQ = Zt dP. Q zt(θ) Then, for t ≥ θ pt (θ) = pt(θ) F , and: Zt Q zt (i) the Q-conditional survival process is defined by Gt = Gt F Zt 58 CHAPTER 2. ENLARGEMENT OF FILTRATIONS

F,Q F zt(t) (ii) the (F, Q)-intensity process is λt = λt , dt- a.s.; zt− (iii) LF,Q is the (F, Q)-local martingale

Z t F,Q F zt F,Q F Lt = Lt F exp (λs − λs )ds Zt 0

Proof: From change of probability Z ∞ 1 G 1 1 Q(τ > θ|Ft) = G EP(Zt 11τ>θ|Ft) = F EP(zt(τ)11τ>θ|Ft) = F zt(u)pt(u)ν(du) E(Zt |Ft) Zt Zt θ The form of the survival process follows immediately from the fact that Z ∞ ztGt + zt(u)pt(u)ν(du) t is a martingale (or by an immediate application of Bayes formula). The form of the intensity is obvious. The form of L is obtained from Bayes formula. ¤

Exercise 2.5.4 Prove that the change of probability measure generated by the two processes

F −1 pθ(θ) zt = (Lt ) , zt(θ) = pt(θ) provides a model where the immersion property holds true, and where the intensity processes does not change C

Exercise 2.5.5 Check that Z Z ¯ t∧τ t ¯ dhX,Gis dhX, p(θ)is ¯ E( − ¯ |Ft) 0 Gs− t∧τ ps− (θ) θ=τ is an F-martingale. Check that that Z ¯ t ¯ dhX, p(θ)is ¯ E( ¯ |Gt) 0 ps− (θ) θ=τ is a G martingale. C

R t Exercise 2.5.6 Let λ be a positive F-adapted process and Λt = 0 λsds and Θ be a strictly positive R ∞ random variable such that there exists a family γt(u) such that P(Θ > θ|Ft) = θ γt(u)du. Let τ = inf{t > 0 : Λt ≥ Θ}.Prove that the density of τ is given by

pt(θ) = λθγt(Λθ) if t ≥ θ and pt(θ) = E[λθγθ(Λθ)|Ft] if t < θ. Conversely, if we are given a density p, prove that it is possible to construct a threshold Θ such that τ has p as density. C

Application: Defaultable Zero-Coupon Bonds

A defaultable zero-coupon with maturity T associated with the default time τ is an asset which pays one monetary unit at time T if (and only if) the default has not occurred before T . We assume that P is the pricing measure. By definition, the risk-neutral price under P of the T -maturity defaultable zero-coupon bond with zero recovery equals, for every t ∈ [0,T ], ¡ ¢ −ΛT P(τ > T | Ft) EP NT e | Ft D(t, T ) := P(τ > T | Gt) = 11{τ>t} = 11{τ>t} (2.5.1) Gt Gt 2.5. INITIAL TIMES 59 where N is the martingalepart in the multiplicative decomposition of G (see Proposition ). Using (2.5.1), we obtain 1 −ΛT D(t, T ) := P(τ > T | Gt) = 11{τ>t} −Λ EP(NT e | Ft) Nte t

However, using a change of probability, one can get rid of the martingale part N, assuming that there exists p such that Z ∞ P(τ > θ|Ft) = pt(u)du θ Let P∗ be defined as ∗ ∗ dP |Gt = Zt dP|Gt where Z∗ is the (P, G)-martingale defined as

N ∗ −Λτ t Zt = 11{t<τ} + 11{t≥τ}λτ e pt(τ) Note that ∗ dP |Ft = NtdP|Ft = NtdP|Ft ∗ and that P and P coincide on Gτ . Indeed, Z t N ∗ −Λu t EP(Zt |Ft) = Gt + λue pt(u)η(du) 0 pt(u) Z t −Λt −Λu −Λt −Λt = Nte + Nt λue η(du) = Nte + Nt(1 − e ) 0 Then, for t > θ, 1 1 N ∗ ∗ −Λτ t P (θ < τ|Ft) = EP(Zt 11θ<τ |Ft) = EP(11t<τ + 11{t≥τ>θ}λτ e |Ft) Nt Nt pt(τ) µ Z ¶ 1 t N −Λt −Λu t = Nte + λue pt(u)du Nt θ pt(u) 1 ¡ ¢ −Λt −Λθ −Λt −Λθ = Nte + Nt(e − e ) = e Nt which proves that immersion holds true under P∗, and the intensity of τ is the same under P and P∗. It follows that 1 ∗ ∗ −ΛT EP(X11{T <τ}|Gt) = E (X11{T <τ}|Gt) = 11{t<τ} E (e X|Ft) e−Λt Note that, if the intensity is the same under P andP∗, its dynamics under P∗ will involve a change ∗ of driving process, since P and P do not coincide on F∞. Let us now study the pricing of a recovery. Let Z be an F-predictable bounded process. Z 1 T EP(Zτ 11{t<τ≤T }|Gt) = 11{t<τ} EP(− ZudGu|Ft) Gt t Z 1 T −Λu = 11{t<τ} EP( ZuNuλue du|Ft) Gt t ∗ = E (Zτ 11{t<τ≤T }|Gt) Z 1 T ∗ −Λu = 11{t<τ} E ( Zuλue du|Ft) −Λt e t 60 CHAPTER 2. ENLARGEMENT OF FILTRATIONS

The problem is more difficult for pricing a recovery paid at maturity, i.e. for X ∈ FT 1 ¡ ¢ −ΛT EP(X11τT |Gt) = EP(X|Gt) − 11{τ>t} −Λ EP XNT e | Ft Nte t 1 ¡ ¢ ∗ −ΛT = EP(X|Gt) − 11{τ>t} E Xe | Ft e−Λt Since immersion holds true under P∗ 1 ¡ ¢ ∗ ∗ ∗ −ΛT E (X11τt} E XNT e | Ft e−Λt 1 ¡ ¢ ∗ ∗ −ΛT = E (X|Ft) − 11{τ>t} E XNT e | Ft e−Λt

∗ If both quantities EP(X11τ

2.5.2 Survival Process ♠

Proposition 2.5.7 Let (Ω, F, P) be a given filtered probability space. Assume that N is a (P, F)- −Λt local martingale and Λ an F-adapted increasing process such that 0 < Nte < 1 for t > 0 and −Λ0 N0 = 1 = e . Then, there exists (on an extended probability space) a probability Q which satisfies −Λt Q|Ft = P|Ft ) and a random time τ such that Q(τ > t|Ft) = Nte .

R t Proof: We give the proof when F is a Brownian filtration and Λt = 0 λudu. For the general case, see [45, 44]. We shall construct, using a change of probability, a conditional probability Gt(u) which −Λt admits the given survival process (i.e. Gt(t) = Gt = Nte ). From the conditional probability, one can deduce a density process, hence one can construct a random time admitting tGt(u) as conditional probability. We are looking for conditional probabilities with a particular form (the idea is linked −Λt with the results obtained in Subsection 2.3.5). Let us start with a model in which P(τ > t|Ft) = e , R t −Λt where Λt = 0 λsds and let N be an F-local martingale such that 0 ≤ Nte ≤ 1. The goal is to prove that there exists a G-martingale L such that, setting dQ = LdP

(i) $Q|_{\mathcal{F}_\infty} = P|_{\mathcal{F}_\infty}$;
(ii) $Q(\tau > t \mid \mathcal{F}_t) = N_t e^{-\Lambda_t}$.

The $\mathbb{G}$-adapted process $L$ given by
$$L_t = \ell_t \mathbf{1}_{\{t<\tau\}} + \ell_t(\tau)\,\mathbf{1}_{\{\tau\le t\}}$$
is a martingale if, for any $u$, $(\ell_t(u),\, t \ge u)$ is a martingale and if $(E(L_t \mid \mathcal{F}_t),\, t \ge 0)$ is an $\mathbb{F}$-martingale. Then, (i) is satisfied if
$$1 = E(L_t \mid \mathcal{F}_t) = \ell_t e^{-\Lambda_t} + \int_0^t \ell_t(u)\,\lambda_u e^{-\Lambda_u}\, du,$$
and (ii) implies that $\ell = N$ and $\ell_t(t) = \ell_t$. We are now reduced to finding a family of martingales $(\ell_t(u),\, t \ge u)$ such that
$$\ell_u(u) = N_u, \qquad 1 = N_t e^{-\Lambda_t} + \int_0^t \ell_t(u)\,\lambda_u e^{-\Lambda_u}\, du.$$
We restrict our attention to families $\ell$ of the form

$$\ell_t(u) = X_t Y_u, \qquad t \ge u,$$
where $X$ is a martingale such that

$$X_t Y_t = N_t, \qquad 1 = N_t e^{-\Lambda_t} + X_t \int_0^t Y_u\,\lambda_u e^{-\Lambda_u}\, du.$$

It is easy to show that
$$Y_t = Y_0 + \int_0^t e^{\Lambda_u}\, d\Big(\frac{1}{X_u}\Big).$$
In the Brownian filtration case, there exists a process $\nu$ such that $dN_t = \nu_t N_t\, dW_t$, and the positive martingale $X$ is of the form $dX_t = x_t X_t\, dW_t$. Then, using the fact that integration by parts implies
$$d(X_t Y_t) = Y_t\, dX_t - \frac{1}{X_t}\, e^{\Lambda_t}\, dX_t = x_t\big(X_t Y_t - e^{\Lambda_t}\big)\, dW_t = dN_t\,,$$
we are led to choose
$$x_t = \frac{\nu_t G_t}{G_t - 1}\,.$$

2.5.3 Generalized Cox construction

In this subsection, we generalize the Cox construction to a setting where the threshold $\Theta$ is no longer independent of $\mathcal{F}_\infty$. Instead, we assume that $\Theta$ admits a non-trivial density with respect to $\mathbb{F}$. Clearly, the H-hypothesis is no longer satisfied, and we shall compute the density of $\tau$ in this case.

Let $\lambda$ be a strictly positive $\mathbb{F}$-adapted process, and $\Lambda_t = \int_0^t \lambda_s\, ds$. Let $\Theta$ be a strictly positive random variable whose conditional distribution w.r.t. $\mathbb{F}$ admits a density w.r.t. the Lebesgue measure, i.e., there exists a family of $\mathcal{F}_t \otimes \mathcal{B}(\mathbb{R}^+)$-measurable functions $\gamma_t(u)$ such that $P(\Theta > \theta \mid \mathcal{F}_t) = \int_\theta^\infty \gamma_t(u)\, du$. Let $\tau = \inf\{t > 0 : \Lambda_t \ge \Theta\}$. We say that a random time $\tau$ constructed in such a setting is given by a generalized Cox construction.
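As an illustration, here is a minimal simulation sketch of a generalized Cox construction, entirely our own: conditionally on $\mathbb{F}$, the threshold $\Theta$ is taken exponential with an $\mathcal{F}$-measurable rate $c$ (a toy choice), so it is not independent of $\mathbb{F}$. The construction is then checked through $P(\tau > \theta) = E\big[e^{-c\Lambda_\theta}\big]$.

```python
import numpy as np

# Sketch (our toy choice, not from the text): a generalized Cox construction in
# which Theta is, conditionally on F, exponential with rate c = 1 + W_{T/2}^2,
# hence not independent of F, and immersion fails.  We check the construction
# by comparing the empirical survival function of tau with
# P(tau > theta) = E[exp(-c * Lambda_theta)].
rng = np.random.default_rng(1)
n_paths, n_steps, T = 50_000, 200, 2.0
dt = T / n_steps

dW = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
W = np.cumsum(dW, axis=1)
lam = 0.5 * np.exp(W)                  # a strictly positive F-adapted intensity
Lam = np.cumsum(lam * dt, axis=1)      # Lambda_t

c = 1.0 + W[:, n_steps // 2] ** 2      # F-measurable rate for Theta
Theta = rng.exponential(1.0 / c)       # Theta | F ~ Exp(c)
hit = Lam >= Theta[:, None]
tau = np.where(hit.any(axis=1), dt * (1 + hit.argmax(axis=1)), np.inf)

for theta in (0.25, 0.5, 1.0):
    k = int(theta / dt) - 1
    print(theta, (tau > theta).mean(), np.exp(-c * Lam[:, k]).mean())
```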

Proposition 2.5.8 Let τ be given by a generalized Cox construction. Then τ admits the density

$$p_t(\theta) = \lambda_\theta\,\gamma_t(\Lambda_\theta) \ \text{ if } t \ge \theta, \qquad p_t(\theta) = E\big[\lambda_\theta\,\gamma_\theta(\Lambda_\theta) \mid \mathcal{F}_t\big] \ \text{ if } t < \theta. \qquad (2.5.2)$$

Proof: By definition and by the fact that $\Lambda$ is strictly increasing and absolutely continuous, we have, for $t \ge \theta$,
$$P(\tau > \theta \mid \mathcal{F}_t) = P(\Theta > \Lambda_\theta \mid \mathcal{F}_t) = \int_{\Lambda_\theta}^\infty \gamma_t(u)\, du = \int_\theta^\infty \gamma_t(\Lambda_u)\, d\Lambda_u = \int_\theta^\infty \gamma_t(\Lambda_u)\,\lambda_u\, du,$$
which implies $p_t(\theta) = \lambda_\theta\,\gamma_t(\Lambda_\theta)$. The martingale property of $p$ gives the whole density. □

Conversely, if we are given a density $p$, and hence an associated process $\Lambda_t = \int_0^t \frac{p_s(s)}{G_s}\, ds$ with $\lambda_s = \frac{p_s(s)}{G_s}$, then it is possible to construct a random time $\tau$ by a generalized Cox construction, that is, to find a threshold $\Theta$ such that $\tau$ has $p$ as density. We denote by $\Lambda^{-1}$ the inverse of the strictly increasing process $\Lambda$.

Proposition 2.5.9 Let $\tau$ be a random time with density $(p_t(\theta),\, t \ge 0,\, \theta \ge 0)$. We let $\Lambda_t = \int_0^t \frac{p_s(s)}{G_s}\, ds$ and $\Theta = \Lambda_\tau$. Then $\Theta$ has a density $\gamma$ with respect to $\mathbb{F}$ given by
$$\gamma_t(\theta) = E\Big[\, p_{t\vee\Lambda_\theta^{-1}}\big(\Lambda_\theta^{-1}\big)\,\frac{1}{\lambda_{\Lambda_\theta^{-1}}} \,\Big|\, \mathcal{F}_t\Big]. \qquad (2.5.3)$$

Obviously τ = inf{t > 0 : Λt ≥ Θ}. In particular, if the H-hypothesis holds, then Θ follows the exponential distribution with parameter 1 and is independent of F.

Proof: We set Θ = Λτ and compute the density of Θ w.r.t. F

$$\begin{aligned} P(\Theta > \theta \mid \mathcal{F}_t) &= P(\Lambda_\tau > \theta \mid \mathcal{F}_t) = P(\tau > t,\, \Lambda_\tau > \theta \mid \mathcal{F}_t) + P(\tau \le t,\, \Lambda_\tau > \theta \mid \mathcal{F}_t) \\ &= E\Big[-\int_t^\infty \mathbf{1}_{\{\Lambda_u>\theta\}}\, dG_u \,\Big|\, \mathcal{F}_t\Big] + \int_0^t \mathbf{1}_{\{\Lambda_u>\theta\}}\, p_t(u)\, du \qquad (2.5.4) \\ &= E\Big[\int_t^\infty \mathbf{1}_{\{\Lambda_u>\theta\}}\, p_u(u)\, du \,\Big|\, \mathcal{F}_t\Big] + \int_0^t \mathbf{1}_{\{\Lambda_u>\theta\}}\, p_t(u)\, du, \end{aligned}$$

where the last equality comes from the fact that $(G_t + \int_0^t p_u(u)\, du,\, t \ge 0)$ is an $\mathbb{F}$-martingale. Note that, since the process $\Lambda$ is continuous and strictly increasing, so is its inverse. Hence

$$\begin{aligned} E\Big[\int_\theta^\infty p_{\Lambda_s^{-1}\vee t}\big(\Lambda_s^{-1}\big)\,\frac{1}{\lambda_{\Lambda_s^{-1}}}\, ds \,\Big|\, \mathcal{F}_t\Big] &= E\Big[\int_{\Lambda_\theta^{-1}}^\infty p_{s\vee t}(s)\,\frac{1}{\lambda_s}\, d\Lambda_s \,\Big|\, \mathcal{F}_t\Big] \\ &= E\Big[\int_0^\infty \mathbf{1}_{\{s>\Lambda_\theta^{-1}\}}\, p_{s\vee t}(s)\, ds \,\Big|\, \mathcal{F}_t\Big] = E\Big[\int_0^\infty \mathbf{1}_{\{\Lambda_s>\theta\}}\, p_{s\vee t}(s)\, ds \,\Big|\, \mathcal{F}_t\Big], \end{aligned}$$
which equals $P(\Theta > \theta \mid \mathcal{F}_t)$ by comparison with (2.5.4). So the density of $\Theta$ w.r.t. $\mathbb{F}$ is given by (2.5.3). In the particular case where the H-hypothesis holds,

$$p_t(u) = p_u(u) = \lambda_u G_u = \lambda_u e^{-\Lambda_u} \quad \text{for } t \ge u,$$
and (2.5.4) equals

$$P(\Theta > \theta \mid \mathcal{F}_t) = E\Big[\int_0^\infty \mathbf{1}_{\{\Lambda_u>\theta\}}\, p_u(u)\, du \,\Big|\, \mathcal{F}_t\Big] = E\Big[\int_0^\infty \mathbf{1}_{\{\Lambda_u>\theta\}}\,\lambda_u e^{-\Lambda_u}\, du \,\Big|\, \mathcal{F}_t\Big] = E\Big[\int_0^\infty \mathbf{1}_{\{x>\theta\}}\, e^{-x}\, dx \,\Big|\, \mathcal{F}_t\Big] = e^{-\theta},$$
which completes the proof. □

2.5.4 Carthagese Filtrations

In Carthage, one can find three levels of different civilisations. This has led us to make use of three levels of filtrations, and of a change of measure. We consider the three filtrations

$$\mathbb{F} \subset \mathbb{G} = \mathbb{F} \vee \mathbb{H} \subset \mathbb{F}^{(\tau)} = \mathbb{F} \vee \sigma(\tau),$$
and we assume that the $\mathbb{F}$-conditional law of $\tau$ is equivalent to the law of $\tau$. This framework will allow us to give new (and simple) proofs of some results on enlargement of filtrations. We refer the reader to [14] for details.

Semi-martingale properties

We denote by $Q$ the probability measure defined on $\mathcal{F}_t^{(\tau)}$ as
$$dQ|_{\mathcal{F}_t^{(\tau)}} = \frac{1}{p_t(\tau)}\, dP|_{\mathcal{F}_t^{(\tau)}} = L_t\, dP|_{\mathcal{F}_t^{(\tau)}},$$

where $L_t = \frac{1}{p_t(\tau)}$ is a $(P, \mathbb{F}^{(\tau)})$-martingale. Hence

$$dQ|_{\mathcal{G}_t} = \ell_t\, dP|_{\mathcal{G}_t},$$
where $\ell$ is the $(P,\mathbb{G})$-martingale
$$\ell_t = E_P(L_t \mid \mathcal{G}_t) = \mathbf{1}_{\{t<\tau\}}\,\frac{1}{G_t} \int_t^\infty \nu(du) + \mathbf{1}_{\{\tau\le t\}}\,\frac{1}{p_t(\tau)}\,,$$
where $G_t = P(\tau > t \mid \mathcal{F}_t) = \int_t^\infty p_t(u)\,\nu(du)$, and, obviously,

$$dP|_{\mathcal{F}_t^{(\tau)}} = p_t(\tau)\, dQ|_{\mathcal{F}_t^{(\tau)}} = \Lambda_t\, dQ|_{\mathcal{F}_t^{(\tau)}},$$
where $\Lambda = 1/L$ is a $(Q, \mathbb{F}^{(\tau)})$-martingale, and

$$dP|_{\mathcal{G}_t} = E_Q(p_t(\tau) \mid \mathcal{G}_t)\, dQ|_{\mathcal{G}_t} = \lambda_t\, dQ|_{\mathcal{G}_t},$$
with
$$\lambda_t = E_Q(p_t(\tau) \mid \mathcal{G}_t) = \mathbf{1}_{\{t<\tau\}}\,\frac{1}{G(t)}\, E_Q\Big(\int_t^\infty p_t(u)\,\nu(du) \,\Big|\, \mathcal{G}_t\Big) + \mathbf{1}_{\{\tau\le t\}}\, p_t(\tau) = \mathbf{1}_{\{t<\tau\}}\,\frac{G_t}{G(t)} + \mathbf{1}_{\{\tau\le t\}}\, p_t(\tau) = \frac{1}{\ell_t}$$
a $(Q,\mathbb{G})$-martingale, where

$$G(t) = Q(\tau > t \mid \mathcal{F}_t) = Q(\tau > t) = P(\tau > t) = \int_t^\infty \nu(du)\,.$$

Assume now that $\nu$ is absolutely continuous with respect to the Lebesgue measure, $\nu(du) = \nu(u)\, du$ (with a slight abuse of notation). We know, from the general study, that $M_t^P := H_t - \int_0^{t\wedge\tau} \lambda_s^P\, ds$ with $\lambda_t^P = \frac{p_t(t)\,\nu(t)}{G_t}$ is a $(P,\mathbb{G})$-martingale, and that $M_t^Q := H_t - \int_0^{t\wedge\tau} \lambda_s^Q\, ds$ with $\lambda_t^Q = \frac{\nu(t)}{G(t)}$ is a $(Q,\mathbb{G})$-martingale.

The predictable increasing process $A$ such that $H_t - A_{t\wedge\tau}$ is a $(Q, \mathbb{F}^{(\tau)})$-martingale is $H$ itself (indeed, $\tau$ is $\mathbb{F}^{(\tau)}$-predictable).

Let now $m$ be a $(P,\mathbb{F})$-martingale. Then it is a $(Q,\mathbb{F})$-martingale, hence a $(Q,\mathbb{G})$- and a $(Q,\mathbb{F}^{(\tau)})$-martingale. It follows that $m\Lambda$ is a $(P,\mathbb{F}^{(\tau)})$-martingale and $m\lambda$ a $(P,\mathbb{G})$-martingale. Writing $m = m\Lambda/\Lambda$ and $m = m\lambda/\lambda$ and making use of the integration by parts formula leads to the decomposition of $m$ as an $\mathbb{F}^{(\tau)}$- and as a $\mathbb{G}$-semi-martingale.

Representation theorem

Using the same methodology, one can check that, if there exists a (possibly multi-dimensional) martingale $\mu$ with the predictable representation property in $(P,\mathbb{F})$, then the pair $(\widehat\mu, M)$ possesses the predictable representation property in $(P,\mathbb{G})$, where $\widehat\mu$ is the martingale part of the $\mathbb{G}$-semi-martingale $\mu$.

Chapter 3

Last Passage Times ♠

We now present the study of the law (and the conditional law) of some last passage times for diffusion processes. In this section, $W$ is a standard Brownian motion and $\mathbb{F}$ is its natural filtration. These random times have been studied in Jeanblanc and Rutkowski [42] as theoretical examples of default times, in Imkeller [37] as examples of insider private information, and, from a purely mathematical point of view, in Pitman and Yor [66] and Salminen [69]. We now show that, in a diffusion setup, the Doob–Meyer decomposition of the Azéma supermartingale may be computed explicitly for some random times $\tau$.

3.1 Last Passage Time of a Transient Diffusion

Proposition 3.1.1 Let $X$ be a transient homogeneous diffusion such that $X_t \to +\infty$ when $t \to \infty$, let $s$ be a scale function such that $s(+\infty) = 0$ (hence, $s(x) < 0$ for $x \in \mathbb{R}$), and let $\Lambda_y = \sup\{t : X_t = y\}$ be the last time that $X$ hits $y$. Then,
$$P_x(\Lambda_y > t \mid \mathcal{F}_t) = \frac{s(X_t)}{s(y)} \wedge 1\,.$$

Proof: We follow Pitman and Yor [66] and Yor [80], p. 48, and use that, under the hypotheses of the proposition, one can choose a scale function such that $s(x) < 0$ and $s(+\infty) = 0$ (see Sharpe [70]). Observe that
$$P_x\big(\Lambda_y > t \mid \mathcal{F}_t\big) = P_x\Big(\inf_{u\ge t} X_u < y \,\Big|\, \mathcal{F}_t\Big) = P_x\Big(\sup_{u\ge t}\big(-s(X_u)\big) > -s(y) \,\Big|\, \mathcal{F}_t\Big) = P_{X_t}\Big(\sup_{u\ge 0}\big(-s(X_u)\big) > -s(y)\Big) = \frac{s(X_t)}{s(y)} \wedge 1,$$
where we have used the Markov property of $X$, and the fact that if $M$ is a continuous local martingale with $M_0 = 1$, $M_t \ge 0$, and $\lim_{t\to\infty} M_t = 0$, then

$$\sup_{t\ge0} M_t \stackrel{\text{law}}{=} \frac{1}{U}\,,$$
where $U$ has a uniform law on $[0,1]$ (see Exercise 1.2.3). □
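The identity $\sup_{t\ge0} M_t \stackrel{\text{law}}{=} 1/U$ is easy to test numerically. Here is a small sketch of ours, taking for $M$ the exponential martingale $M_t = \exp(W_t - t/2)$ (our choice; any positive continuous local martingale vanishing at infinity would do): the variable $1/\sup M$ should be approximately uniform on $[0,1]$.

```python
import numpy as np

# Check sup_t M_t =(law) 1/U for M_t = exp(W_t - t/2) (a positive continuous
# martingale with M_0 = 1 and M_t -> 0).  The grid sup slightly underestimates
# the true sup, and the horizon T is a finite proxy for infinity.
rng = np.random.default_rng(2)
n_paths, n_steps, T = 50_000, 2_500, 50.0
dt = T / n_steps

W = np.zeros(n_paths)
sup_M = np.ones(n_paths)            # includes M_0 = 1
for k in range(1, n_steps + 1):
    W += rng.normal(0.0, np.sqrt(dt), n_paths)
    np.maximum(sup_M, np.exp(W - 0.5 * dt * k), out=sup_M)

u = 1.0 / sup_M                     # should be ~ Uniform([0,1])
print(np.quantile(u, [0.1, 0.25, 0.5, 0.75, 0.9]))
```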

Lemma 3.1.2 The $\mathbb{F}^X$-predictable compensator $A$ associated with the random time $\Lambda_y$ is the process $A$ defined as $A_t = -\frac{1}{2s(y)}\, L_t^{s(y)}(Y)$, where $L^{s(y)}(Y)$ is the local time at level $s(y)$ of the continuous martingale $Y = s(X)$.


Proof: From $x \wedge y = x - (x - y)^+$, Proposition 3.1.1 and Tanaka's formula, it follows that

$$\frac{s(X_t)}{s(y)} \wedge 1 = M_t + \frac{1}{2s(y)}\, L_t^{s(y)}(Y) = M_t + \frac{1}{s(y)}\,\ell_t^{y}(X),$$
where $M$ is a martingale. The required result is then easily obtained. □

We deduce the law of the last passage time (here $p^{(m)}$ denotes the transition density of $X$ with respect to its speed measure $m$):
$$P_x(\Lambda_y > t) = \Big(\frac{s(x)}{s(y)} \wedge 1\Big) + \frac{1}{s(y)}\, E_x\big(\ell_t^y(X)\big) = \Big(\frac{s(x)}{s(y)} \wedge 1\Big) + \frac{1}{s(y)} \int_0^t du\; p_u^{(m)}(x,y)\,.$$
Hence, for $x < y$,
$$P_x(\Lambda_y \in dt) = -\frac{dt}{s(y)}\, p_t^{(m)}(x,y) = -\frac{dt}{s(y)\, m(y)}\, p_t(x,y) = -\frac{\sigma^2(y)\, s'(y)}{2\, s(y)}\, p_t(x,y)\, dt\,. \qquad (3.1.1)$$

For $x > y$, we have to add a mass at the point $0$ equal to
$$1 - \Big(\frac{s(x)}{s(y)} \wedge 1\Big) = 1 - \frac{s(x)}{s(y)} = P_x(T_y = \infty)\,.$$

Example 3.1.3 Last Passage Time for a Transient Bessel Process: For a Bessel process of dimension δ > 2 and index ν (see [3M] Chapter 6), starting from 0,

$$P_0^\delta(\Lambda_a < t) = P_0^\delta\Big(\inf_{u\ge t} R_u > a\Big) = P_0^\delta\Big(\sup_{u\ge t} R_u^{-2\nu} < a^{-2\nu}\Big) = P_0^\delta\Big(\frac{R_t^{-2\nu}}{U} < a^{-2\nu}\Big) = P_0^\delta\big(a^{2\nu} < U R_t^{2\nu}\big) = P^\delta\Big(\frac{a^2}{R_1^2\, U^{1/\nu}} < t\Big)\,.$$

Thus, the r.v. $\Lambda_a = \frac{a^2}{R_1^2\, U^{1/\nu}}$ is distributed as $\frac{a^2}{2\gamma(\nu+1)\,\beta_{\nu,1}} \stackrel{\text{law}}{=} \frac{a^2}{2\gamma(\nu)}$, where $\gamma(\nu)$ is a gamma variable with parameter $\nu$:
$$P(\gamma(\nu) \in dt) = \mathbf{1}_{\{t\ge0\}}\,\frac{t^{\nu-1}e^{-t}}{\Gamma(\nu)}\, dt\,.$$
Hence,
$$P_0^\delta(\Lambda_a \in dt) = \mathbf{1}_{\{t\ge0\}}\,\frac{1}{t\,\Gamma(\nu)}\Big(\frac{a^2}{2t}\Big)^{\nu} e^{-a^2/(2t)}\, dt\,. \qquad (3.1.2)$$
We might also find this result directly from the general formula (3.1.1).
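For $\delta = 3$ (so $\nu = 1/2$, and $2\gamma(1/2) \stackrel{\text{law}}{=} N^2$ with $N$ standard Gaussian), the law (3.1.2) reads $\Lambda_a \stackrel{\text{law}}{=} a^2/N^2$, which can be checked by a rough simulation of ours, using that a BES(3) process started at $0$ is the norm of a three-dimensional Brownian motion:

```python
import numpy as np

# Rough check of (3.1.2) for delta = 3, nu = 1/2: Lambda_a =(law) a^2/N^2.
# BES(3) is simulated as |B| for a 3-d Brownian motion B; the horizon T
# truncates the (heavy) right tail, so only central quantiles are compared.
rng = np.random.default_rng(3)
n_paths, n_steps, T, a = 10_000, 5_000, 100.0, 1.0
dt = T / n_steps

B = np.zeros((n_paths, 3))
last = np.zeros(n_paths)                    # last grid time with R <= a
for k in range(1, n_steps + 1):
    B += rng.normal(0.0, np.sqrt(dt), (n_paths, 3))
    R = np.linalg.norm(B, axis=1)
    last = np.where(R <= a, k * dt, last)   # R <= a  =>  level a is hit later

ref = a**2 / rng.normal(size=n_paths) ** 2  # a^2 / N^2
for q in (0.25, 0.5, 0.75):
    print(q, np.quantile(last, q), np.quantile(ref, q))
```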

Proposition 3.1.4 For H a positive predictable process

$$E_x\big(H_{\Lambda_y} \mid \Lambda_y = t\big) = E_x\big(H_t \mid X_t = y\big)$$
and, for $y > x$,
$$E_x\big(H_{\Lambda_y}\big) = \int_0^\infty P_x(\Lambda_y \in dt)\; E_x\big(H_t \mid X_t = y\big)\,.$$
In the case $x > y$,
$$E_x\big(H_{\Lambda_y}\big) = H_0\Big(1 - \frac{s(x)}{s(y)}\Big) + \int_0^\infty P_x(\Lambda_y \in dt)\; E_x\big(H_t \mid X_t = y\big)\,.$$

Proof: We have shown in Proposition 3.1.1 that

$$P_x(\Lambda_y > t \mid \mathcal{F}_t) = \frac{s(X_t)}{s(y)} \wedge 1\,.$$

From the Itô–Tanaka formula,

$$\frac{s(X_t)}{s(y)} \wedge 1 = \frac{s(x)}{s(y)} \wedge 1 + \int_0^t \mathbf{1}_{\{X_u>y\}}\, d\,\frac{s(X_u)}{s(y)} + \frac{1}{2s(y)}\, L_t^{s(y)}(s(X))\,.$$
It follows, using Lemma 2.4.1, that

$$E_x\big(H_{\Lambda_y}\big) = -\frac{1}{2s(y)}\, E_x\Big(\int_0^\infty H_u\, d_u L_u^{s(y)}(s(X))\Big) = -\frac{1}{2s(y)}\, E_x\Big(\int_0^\infty E_x\big(H_u \mid X_u = y\big)\, d_u L_u^{s(y)}(s(X))\Big)\,.$$

Therefore, replacing $H_u$ by $H_u\, g(u)$, we get

$$E_x\big(H_{\Lambda_y}\, g(\Lambda_y)\big) = -\frac{1}{2s(y)}\, E_x\Big(\int_0^\infty g(u)\, E_x\big(H_u \mid X_u = y\big)\, d_u L_u^{s(y)}(s(X))\Big)\,. \qquad (3.1.3)$$
Consequently, from (3.1.3), we obtain
$$P_x(\Lambda_y \in du) = -\frac{1}{2s(y)}\, d_u E_x\big(L_u^{s(y)}(s(X))\big)\,, \qquad E_x\big(H_{\Lambda_y} \mid \Lambda_y = t\big) = E_x\big(H_t \mid X_t = y\big)\,.$$

□

Exercise 3.1.5 Let $X$ be a drifted Brownian motion with positive drift $\nu$ and let $\Lambda_y^{(\nu)}$ be its last passage time at level $y$. Prove that
$$P_x\big(\Lambda_y^{(\nu)} \in dt\big) = \frac{\nu}{\sqrt{2\pi t}}\, \exp\Big(-\frac{1}{2t}\,(x - y + \nu t)^2\Big)\, dt\,,$$
and
$$P_x\big(\Lambda_y^{(\nu)} = 0\big) = \begin{cases} 1 - e^{-2\nu(x-y)}, & \text{for } x > y,\\ 0, & \text{for } x < y. \end{cases}$$
Prove, using time inversion, that, for $x = 0$,
$$\Lambda_y^{(\nu)} \stackrel{\text{law}}{=} \frac{1}{T_\nu^{(y)}}\,, \qquad \text{where } T_a^{(b)} = \inf\{t : B_t + bt = a\}\,.$$
See Madan et al. [59]. C
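The time-inversion identity in this exercise can be checked numerically: the first passage time $T_a^{(b)}$ is inverse-Gaussian with mean $a/b$ and shape $a^2$ (a standard fact), so samples of $1/T_\nu^{(y)}$ can be drawn exactly and compared with the closed-form density of $\Lambda_y^{(\nu)}$. A sketch of ours:

```python
import numpy as np
from scipy import stats

# Check Lambda_y^{(nu)} =(law) 1/T_nu^{(y)} for x = 0, y > 0.  T_nu^{(y)} is
# inverse-Gaussian with mean nu/y and shape nu^2; in scipy's parameterization
# invgauss(mu, scale) has mean mu*scale and shape scale.
rng = np.random.default_rng(4)
nu, y, n = 2.0, 1.0, 500_000

T = stats.invgauss.rvs(mu=(nu / y) / nu**2, scale=nu**2, size=n, random_state=rng)
samples = 1.0 / T

# Closed-form CDF of Lambda_y^{(nu)} by crude quadrature of the exercise's density.
grid = np.linspace(1e-4, 20.0, 200_001)
pdf = nu / np.sqrt(2 * np.pi * grid) * np.exp(-((y - nu * grid) ** 2) / (2 * grid))
cdf = np.cumsum(pdf) * (grid[1] - grid[0])
for t in (0.25, 0.5, 1.0, 2.0):
    k = np.searchsorted(grid, t)
    print(t, (samples <= t).mean(), cdf[k])  # empirical vs closed form
```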

3.2 Last Passage Time Before Hitting a Level

Let $X_t = x + \sigma W_t$, where the initial value $x$ is positive and $\sigma$ is a positive constant. We consider, for $0 < a < x$, the last passage time at the level $a$ before hitting the level $0$, given as
$$g_{T_0}^a(X) = \sup\{t \le T_0 : X_t = a\}, \qquad \text{where } T_0 = T_0(X) = \inf\{t \ge 0 : X_t = 0\}\,.$$

(In a financial setting, $T_0$ can be interpreted as the time of bankruptcy.) Then, setting $\alpha = (a-x)/\sigma$, $T_{-x/\sigma}(W) = \inf\{t : W_t = -x/\sigma\}$ and $d_t^\alpha(W) = \inf\{s \ge t : W_s = \alpha\}$,

$$P_x\big(g_{T_0}^a(X) \le t \mid \mathcal{F}_t\big) = P\big(d_t^\alpha(W) > T_{-x/\sigma}(W) \mid \mathcal{F}_t\big)$$
on the set $\{t < T_{-x/\sigma}(W)\}$. It is easy to prove that

$$P\big(d_t^\alpha(W) < T_{-x/\sigma}(W) \mid \mathcal{F}_t\big) = \Psi\big(W_{t\wedge T_{-x/\sigma}(W)},\, \alpha,\, -x/\sigma\big),$$
where the function $\Psi(\cdot, a, b) : \mathbb{R} \to \mathbb{R}$ equals, for $a > b$,
$$\Psi(y, a, b) = P_y\big(T_a(W) < T_b(W)\big) = \begin{cases} (y - b)/(a - b) & \text{for } b < y < a,\\ 1 & \text{for } a < y,\\ 0 & \text{for } y < b. \end{cases}$$

(See Proposition ?? for the computation of Ψ.) Consequently, on the set {T0(X) > t} we have

$$P_x\big(g_{T_0}^a(X) \le t \mid \mathcal{F}_t\big) = \frac{\big(\alpha - W_{t\wedge T_0}\big)^+}{a/\sigma} = \frac{(\alpha - W_t)^+}{a/\sigma} = \frac{(a - X_t)^+}{a}\,. \qquad (3.2.1)$$

As a consequence, applying Tanaka’s formula, we obtain the following result.

Lemma 3.2.1 Let $X_t = x + \sigma W_t$, where $\sigma > 0$. The $\mathbb{F}$-predictable compensator associated with the random time $g_{T_0(X)}^a$ is the process $A$ defined as $A_t = \frac{\sigma}{2a}\, L_{t\wedge T_{-x/\sigma}(W)}^{\alpha}(W)$, where $L^\alpha(W)$ is the local time of the Brownian motion $W$ at level $\alpha = (a-x)/\sigma$.
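The key ingredient above is the gambler's-ruin probability $\Psi(y,a,b) = P_y(T_a < T_b) = (y-b)/(a-b)$. Here is a quick Monte Carlo check of ours (the discrete grid slightly overshoots the barriers, so the agreement is approximate):

```python
import numpy as np

# Monte Carlo check of Psi(y, a, b) = P_y(T_a < T_b) = (y - b)/(a - b) for
# b < y < a: the probability that a Brownian motion started at y reaches a
# before b.
rng = np.random.default_rng(5)
n_paths, dt = 20_000, 1e-4
a, b, y = 1.0, -0.5, 0.2

pos = np.full(n_paths, y)
hit_a_first = np.zeros(n_paths, dtype=bool)
active = np.ones(n_paths, dtype=bool)
while active.any():
    pos[active] += rng.normal(0.0, np.sqrt(dt), active.sum())
    up = active & (pos >= a)
    down = active & (pos <= b)
    hit_a_first |= up
    active &= ~(up | down)

print(hit_a_first.mean(), (y - b) / (a - b))  # should be close
```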

3.3 Last Passage Time Before Maturity

In this subsection, we study the last passage time at level a of a diffusion process X before the fixed horizon (maturity) T . We start with the case where X = W is a Brownian motion starting from 0 and where the level a is null: gT = sup{t ≤ T : Wt = 0} .

Lemma 3.3.1 The $\mathbb{F}$-predictable compensator associated with the random time $g_T$ equals
$$A_t = \sqrt{\frac{2}{\pi}} \int_0^{t\wedge T} \frac{dL_s}{\sqrt{T-s}}\,,$$
where $L$ is the local time at level $0$ of the Brownian motion $W$.

Proof: It suffices to give the proof for $T = 1$, and we work with $t < 1$. Let $G$ be a standard Gaussian variable. Then
$$P\Big(\frac{a^2}{G^2} > 1 - t\Big) = \Phi\Big(\frac{|a|}{\sqrt{1-t}}\Big),$$
where $\Phi(x) = \sqrt{\frac{2}{\pi}} \int_0^x \exp\big(-\frac{u^2}{2}\big)\, du$. For $t < 1$, the set $\{g_1 \le t\}$ is equal to $\{d_t > 1\}$. It follows (see [3M]) that
$$P(g_1 \le t \mid \mathcal{F}_t) = \Phi\Big(\frac{|W_t|}{\sqrt{1-t}}\Big)\,.$$
Then, the Itô–Tanaka formula combined with the identity

$$x\,\Phi'(x) + \Phi''(x) = 0$$
leads to

$$\begin{aligned} P(g_1 \le t \mid \mathcal{F}_t) &= \int_0^t \Phi'\Big(\frac{|W_s|}{\sqrt{1-s}}\Big)\, d\Big(\frac{|W_s|}{\sqrt{1-s}}\Big) + \frac12 \int_0^t \frac{ds}{1-s}\,\Phi''\Big(\frac{|W_s|}{\sqrt{1-s}}\Big) \\ &= \int_0^t \Phi'\Big(\frac{|W_s|}{\sqrt{1-s}}\Big)\,\frac{\operatorname{sgn}(W_s)}{\sqrt{1-s}}\, dW_s + \int_0^t \frac{dL_s}{\sqrt{1-s}}\,\Phi'\Big(\frac{|W_s|}{\sqrt{1-s}}\Big) \\ &= \int_0^t \Phi'\Big(\frac{|W_s|}{\sqrt{1-s}}\Big)\,\frac{\operatorname{sgn}(W_s)}{\sqrt{1-s}}\, dW_s + \sqrt{\frac{2}{\pi}} \int_0^t \frac{dL_s}{\sqrt{1-s}}\,. \end{aligned}$$

It follows that the $\mathbb{F}$-predictable compensator associated with $g_1$ is
$$A_t = \sqrt{\frac{2}{\pi}} \int_0^t \frac{dL_s}{\sqrt{1-s}}\,, \quad (t < 1)\,. \qquad \square$$
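Taking expectations in $P(g_1 \le t \mid \mathcal{F}_t) = \Phi\big(|W_t|/\sqrt{1-t}\big)$ gives $P(g_1 \le t)$, which is the arcsine law $\frac{2}{\pi}\arcsin\sqrt{t}$ for the last zero of $W$ before time $1$ (a classical fact). The following sketch of ours checks both against a simulation:

```python
import numpy as np
from scipy.special import erf

# Check E[Phi(|W_t|/sqrt(1-t))] = P(g_1 <= t) = (2/pi) arcsin(sqrt(t)), where
# g_1 is the last zero of W before time 1.  Note Phi(x) = erf(x/sqrt(2)).
rng = np.random.default_rng(6)
n_paths, n_steps = 50_000, 2_000
dt = 1.0 / n_steps
t_check = 0.3

W = np.zeros(n_paths)
last_zero = np.zeros(n_paths)
sign = np.ones(n_paths)
avg_Phi = None
for k in range(1, n_steps + 1):
    W += rng.normal(0.0, np.sqrt(dt), n_paths)
    crossed = np.sign(W) != sign
    last_zero = np.where(crossed, k * dt, last_zero)  # grid proxy for last zero
    sign = np.where(crossed, np.sign(W), sign)
    if k == int(t_check / dt):
        avg_Phi = erf(np.abs(W) / np.sqrt(2 * (1 - t_check))).mean()

print((last_zero <= t_check).mean(), avg_Phi,
      (2 / np.pi) * np.arcsin(np.sqrt(t_check)))
```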

These results can be extended to the last time before $T$ at which the Brownian motion reaches the level $\alpha$, i.e., $g_T^\alpha = \sup\{t \le T : W_t = \alpha\}$, where we set $\sup(\emptyset) = T$. The predictable compensator associated with $g_T^\alpha$ is
$$A_t = \sqrt{\frac{2}{\pi}} \int_0^{t\wedge T} \frac{dL_s^\alpha}{\sqrt{T-s}}\,,$$
where $L^\alpha$ is the local time of $W$ at level $\alpha$.

We now study the case where Xt = x + µ t + σ Wt, with constant coefficients µ and σ > 0. Let

$$g_1^a(X) = \sup\{t \le 1 : X_t = a\} = \sup\{t \le 1 : \nu t + W_t = \alpha\},$$
where $\nu = \mu/\sigma$ and $\alpha = (a-x)/\sigma$. Setting

$$V_t = \alpha - \nu t - W_t = (a - X_t)/\sigma\,,$$
we obtain, using standard computations (see [3M]),

$$P\big(g_1^a(X) \le t \mid \mathcal{F}_t\big) = \big(1 - e^{\nu V_t}\, H(\nu, |V_t|, 1-t)\big)\,\mathbf{1}_{\{T_0(V)\le t\}}\,,$$
where
$$H(\nu, y, s) = e^{-\nu y}\, \mathcal{N}\Big(\frac{\nu s - y}{\sqrt s}\Big) + e^{\nu y}\, \mathcal{N}\Big(\frac{-\nu s - y}{\sqrt s}\Big)\,.$$

Using Itô's lemma, we obtain the decomposition of $1 - e^{\nu V_t}\, H(\nu, |V_t|, 1-t)$ as a semi-martingale $M_t + C_t$. We note that $C$ increases only on the set $\{t : X_t = a\}$. Indeed, setting $g = g_1^a(X)$, for any predictable process $H$ one has
$$E(H_g) = E\Big(\int_0^\infty dC_s\, H_s\Big);$$
hence, since $X_g = a$,
$$0 = E\big(\mathbf{1}_{\{X_g \ne a\}}\big) = E\Big(\int_0^\infty dC_s\, \mathbf{1}_{\{X_s \ne a\}}\Big)\,.$$
Therefore, $dC_t = \kappa_t\, dL_t^a(X)$ and, since $L^a$ increases only at points such that $X_t = a$ (i.e., $V_t = 0$), one has
$$\kappa_t = H'_x(\nu, 0, 1-t)\,.$$

The martingale part is given by $dM_t = m_t\, dW_t$, where

$$m_t = e^{\nu V_t}\big(\nu\, H(\nu, |V_t|, 1-t) - \operatorname{sgn}(V_t)\, H'_x(\nu, |V_t|, 1-t)\big)\,.$$

Therefore, the predictable compensator associated with $g_1^a(X)$ is
$$A_t = \int_0^t \frac{H'_x(\nu, 0, 1-s)}{e^{\nu V_s}\, H(\nu, 0, 1-s)}\, dL_s^a\,.$$

Exercise 3.3.2 The aim of this exercise is to compute, for $t < T < 1$, the quantity $E\big(h(W_T)\,\mathbf{1}_{\{T < \ldots\}} \mid \mathcal{F}_t\big)$ …

3.4 Absolutely Continuous Compensator

From the preceding computations, the reader might think that the $\mathbb{F}$-predictable compensator is always singular w.r.t. the Lebesgue measure. This is not the case, as we now show. We are indebted to Michel Émery for this example.

Let $W$ be a Brownian motion and let $\tau = \sup\{t \le 1 : W_1 - 2W_t = 0\}$, that is, the last time before $1$ at which the Brownian motion is equal to half of its terminal value at time $1$. Then
$$\{\tau \le t\} = \Big\{\inf_{t\le s\le 1} 2W_s \ge W_1 \ge 0\Big\} \cup \Big\{\sup_{t\le s\le 1} 2W_s \le W_1 \le 0\Big\}\,.$$

▶ The quantity
$$P(\tau \le t,\, W_1 \ge 0 \mid \mathcal{F}_t) = P\Big(\inf_{t\le s\le 1} 2W_s \ge W_1 \ge 0 \,\Big|\, \mathcal{F}_t\Big)$$
can be evaluated using the equalities
$$\Big\{\inf_{t\le s\le 1} W_s \ge \frac{W_1}{2} \ge 0\Big\} = \Big\{\inf_{t\le s\le 1} (W_s - W_t) \ge \frac{W_1}{2} - W_t \ge -W_t\Big\} = \Big\{\inf_{0\le u\le 1-t} \widetilde W_u \ge \frac{\widetilde W_{1-t}}{2} - \frac{W_t}{2} \ge -W_t\Big\},$$
where $(\widetilde W_u = W_{t+u} - W_t,\, u \ge 0)$ is a Brownian motion independent of $\mathcal{F}_t$. It follows that
$$P\Big(\inf_{t\le s\le 1} W_s \ge \frac{W_1}{2} \ge 0 \,\Big|\, \mathcal{F}_t\Big) = \Psi(1-t, W_t)\,,$$
where
$$\Psi(s, x) = P\Big(\inf_{0\le u\le s} \widetilde W_u \ge \frac{\widetilde W_s}{2} - \frac{x}{2} \ge -x\Big) = P\big(2M_s - W_s \le x,\ W_s \le x\big) = P\Big(2M_1 - W_1 \le \frac{x}{\sqrt s},\ W_1 \le \frac{x}{\sqrt s}\Big)\,.$$

▶ The same kind of computation leads to
$$P\Big(\sup_{t\le s\le 1} 2W_s \le W_1 \le 0 \,\Big|\, \mathcal{F}_t\Big) = \Psi(1-t, -W_t)\,.$$
▶ The quantity $\Psi(s,x)$ can now be computed from the joint law of the maximum and of the process at time $1$; however, we prefer to use Pitman's theorem (see [3M]): let $\widetilde U$ be a r.v. uniformly distributed on $[-1, +1]$, independent of $R_1 := 2M_1 - W_1$; then

$$P\big(2M_1 - W_1 \le y,\ W_1 \le y\big) = P\big(R_1 \le y,\ \widetilde U R_1 \le y\big) = \frac12 \int_{-1}^1 P\big(R_1 \le y,\ u R_1 \le y\big)\, du\,.$$

For $y > 0$,
$$\frac12 \int_{-1}^1 P\big(R_1 \le y,\ u R_1 \le y\big)\, du = \frac12 \int_{-1}^1 P(R_1 \le y)\, du = P(R_1 \le y)\,.$$
For $y < 0$,
$$\int_{-1}^1 P\big(R_1 \le y,\ u R_1 \le y\big)\, du = 0\,.$$
Therefore,
$$P(\tau \le t \mid \mathcal{F}_t) = \Psi(1-t, W_t) + \Psi(1-t, -W_t) = \rho\Big(\frac{|W_t|}{\sqrt{1-t}}\Big),$$
where
$$\rho(y) = P(R_1 \le y) = \sqrt{\frac{2}{\pi}} \int_0^y x^2 e^{-x^2/2}\, dx\,.$$

Then $Z_t = P(\tau > t \mid \mathcal{F}_t) = 1 - \rho\big(\frac{|W_t|}{\sqrt{1-t}}\big)$. We can now apply Tanaka's formula to the function $\rho$. Noting that $\rho'(0) = 0$, the contribution to the Doob–Meyer decomposition of $Z$ of the local time of $W$ at level $0$ is $0$. Furthermore, the increasing process $A$ of the Doob–Meyer decomposition of $Z$ is given by
$$dA_t = \Big(\frac12\,\rho''\Big(\frac{|W_t|}{\sqrt{1-t}}\Big)\,\frac{1}{1-t} + \frac12\,\rho'\Big(\frac{|W_t|}{\sqrt{1-t}}\Big)\,\frac{|W_t|}{\sqrt{(1-t)^3}}\Big)\, dt = \sqrt{\frac{2}{\pi}}\,\frac{1}{\sqrt{1-t}}\,\frac{|W_t|}{1-t}\, e^{-W_t^2/(2(1-t))}\, dt\,.$$

We note that $A$ may be obtained as the dual predictable projection on the Brownian filtration of the process $A_s^{(W_1)}$, $s \le 1$, where $\big(A_s^{(x)},\, s \le 1\big)$ is the compensator of $\tau$ under the law $P_{0\to x}^{(1)}$ of the Brownian bridge.
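Here is a short numerical check of ours of the key identity $P(\tau \le t \mid \mathcal{F}_t) = \rho\big(|W_t|/\sqrt{1-t}\big)$: in expectation it must reproduce the distribution of $\tau$, which we estimate by locating the last sign change of $2W_t - W_1$ on a grid.

```python
import numpy as np
from scipy.special import erf

# Check E[rho(|W_t|/sqrt(1-t))] = P(tau <= t) for tau = sup{t<=1: W_1 = 2 W_t}.
# rho is the CDF of the Maxwell density sqrt(2/pi) x^2 e^{-x^2/2}:
# rho(y) = erf(y/sqrt(2)) - sqrt(2/pi) * y * exp(-y^2/2).
def rho(y):
    return erf(y / np.sqrt(2.0)) - np.sqrt(2.0 / np.pi) * y * np.exp(-0.5 * y * y)

rng = np.random.default_rng(7)
n_paths, n_steps = 20_000, 1_000
dt = 1.0 / n_steps

W = np.cumsum(rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps)), axis=1)
D = 2.0 * W - W[:, -1][:, None]                 # D_t = 2 W_t - W_1
flips = np.signbit(D[:, 1:]) != np.signbit(D[:, :-1])
last = flips.shape[1] - 1 - flips[:, ::-1].argmax(axis=1)   # last sign change
tau = dt * np.where(flips.any(axis=1), last + 1, 0)

for t in (0.25, 0.5, 0.75):
    k = int(t / dt) - 1
    print(t, (tau <= t).mean(), rho(np.abs(W[:, k]) / np.sqrt(1 - t)).mean())
```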

3.5 Time When the Supremum is Reached

Let W be a Brownian motion, Mt = sups≤t Ws and let τ be the time when the supremum on the interval [0, 1] is reached, i.e.,

$$\tau = \inf\{t \le 1 : W_t = M_1\} = \sup\{t \le 1 : M_t - W_t = 0\}\,.$$

Let us denote by ζ the positive continuous semimartingale

$$\zeta_t = \frac{M_t - W_t}{\sqrt{1-t}}\,, \qquad t < 1\,.$$
Let $F_t = P(\tau \le t \mid \mathcal{F}_t)$. Since $F_t = \Phi(\zeta_t)$, where $\Phi(x) = \sqrt{\frac{2}{\pi}} \int_0^x \exp\big(-\frac{u^2}{2}\big)\, du$ (see the Exercise in Chapter 4 of [3M]), Itô's formula yields the canonical decomposition of $F$ as follows:
$$F_t = \int_0^t \Phi'(\zeta_u)\, d\zeta_u + \frac12 \int_0^t \Phi''(\zeta_u)\,\frac{du}{1-u} \;\stackrel{\text{(i)}}{=}\; -\int_0^t \Phi'(\zeta_u)\,\frac{dW_u}{\sqrt{1-u}} + \sqrt{\frac{2}{\pi}} \int_0^t \frac{dM_u}{\sqrt{1-u}} \;\stackrel{\text{(ii)}}{=}\; U_t + \widetilde F_t\,,$$
where $U_t = -\int_0^t \Phi'(\zeta_u)\,\frac{dW_u}{\sqrt{1-u}}$ is a martingale and $\widetilde F$ is a predictable increasing process. To obtain (i), we have used that $x\Phi' + \Phi'' = 0$; to obtain (ii), we have used that $\Phi'(0) = \sqrt{2/\pi}$ and also that the process $M$ increases only on the set

{u ∈ [0, t]: Mu = Wu} = {u ∈ [0, t]: ζu = 0}.
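Again, taking expectations gives a numerical check of ours: $E[\Phi(\zeta_t)]$ must equal $P(\tau \le t)$, and by Lévy's arcsine law the time at which $W$ attains its maximum over $[0,1]$ is arcsine distributed.

```python
import numpy as np
from scipy.special import erf

# Check E[Phi(zeta_t)] = P(tau <= t) = (2/pi) arcsin(sqrt(t)), where tau is the
# (a.s. unique) time of the maximum of W over [0,1] and Phi(x) = erf(x/sqrt(2)).
rng = np.random.default_rng(8)
n_paths, n_steps = 20_000, 1_000
dt = 1.0 / n_steps

W = np.cumsum(rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps)), axis=1)
M = np.maximum.accumulate(np.maximum(W, 0.0), axis=1)   # running max, M_0 = 0
tau = np.where(W.max(axis=1) > 0, dt * (1 + W.argmax(axis=1)), 0.0)

for t in (0.2, 0.5, 0.8):
    k = int(t / dt) - 1
    zeta = (M[:, k] - W[:, k]) / np.sqrt(1 - t)
    print(t, (tau <= t).mean(), erf(zeta / np.sqrt(2)).mean(),
          (2 / np.pi) * np.arcsin(np.sqrt(t)))
```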

3.6 Last Passage Times for Particular Martingales

We now study the Azéma supermartingale associated with the random time $L$, a last passage time or the end of a predictable set $\Gamma$, i.e.,

$$L(\omega) = \sup\{t : (t, \omega) \in \Gamma\}\,.$$

Proposition 3.6.1 Let L be the end of a predictable set. Assume that all the F-martingales are continuous and that L avoids the F-stopping times. Then, there exists a continuous and nonnegative local martingale N, with N0 = 1 and limt→∞ Nt = 0, such that:

$$Z_t = P(L > t \mid \mathcal{F}_t) = \frac{N_t}{\Sigma_t}\,, \qquad \text{where } \Sigma_t = \sup_{s\le t} N_s\,.$$
The Doob–Meyer decomposition of $Z$ is

$$Z_t = m_t - A_t\,,$$
and the following relations hold:
$$N_t = \exp\Big(\int_0^t \frac{dm_s}{Z_s} - \frac12 \int_0^t \frac{d\langle m\rangle_s}{Z_s^2}\Big)\,, \qquad \Sigma_t = \exp(A_t)\,, \qquad m_t = 1 + \int_0^t \frac{dN_s}{\Sigma_s} = E\big(\ln \Sigma_\infty \mid \mathcal{F}_t\big)\,.$$

Proof: As recalled previously, the Doob–Meyer decomposition of $Z$ reads $Z_t = m_t - A_t$ with $m$ and $A$ continuous, and $dA_t$ is carried by $\{t : Z_t = 1\}$. Then, for $t < T_0 := \inf\{t : Z_t = 0\}$,
$$-\ln Z_t = -\Big(\int_0^t \frac{dm_s}{Z_s} - \frac12 \int_0^t \frac{d\langle m\rangle_s}{Z_s^2}\Big) + A_t\,.$$
From Skorokhod's reflection lemma we deduce that
$$A_t = \sup_{u\le t}\Big(\int_0^u \frac{dm_s}{Z_s} - \frac12 \int_0^u \frac{d\langle m\rangle_s}{Z_s^2}\Big)\,.$$

Introducing the local martingale N defined by

$$N_t = \exp\Big(\int_0^t \frac{dm_s}{Z_s} - \frac12 \int_0^t \frac{d\langle m\rangle_s}{Z_s^2}\Big)\,,$$
it follows that $Z_t = \frac{N_t}{\Sigma_t}$ and
$$\Sigma_t = \sup_{u\le t} N_u = \exp\Big(\sup_{u\le t}\Big(\int_0^u \frac{dm_s}{Z_s} - \frac12 \int_0^u \frac{d\langle m\rangle_s}{Z_s^2}\Big)\Big) = e^{A_t}\,. \qquad \square$$
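The representation $Z = N/\Sigma$ can be illustrated numerically in the converse direction (our sketch, with our choice $N_t = \exp(W_t - t/2)$): take $L$ to be the time at which $N$ attains its overall maximum, an honest time; then $P(L > t)$ should match $E[N_t/\Sigma_t]$.

```python
import numpy as np

# Illustration of Z_t = N_t / Sigma_t for N_t = exp(W_t - t/2) and L the time
# of the overall maximum of N.  The horizon T is a finite proxy for infinity
# (N is already negligible at T).
rng = np.random.default_rng(9)
n_paths, n_steps, T = 20_000, 5_000, 50.0
dt = T / n_steps
check_steps = {int(c / dt): c for c in (1.0, 5.0, 10.0)}

W = np.zeros(n_paths)
Sigma = np.ones(n_paths)          # running maximum of N, Sigma_0 = N_0 = 1
L = np.zeros(n_paths)             # time of the running maximum
ratios = {}
for k in range(1, n_steps + 1):
    W += rng.normal(0.0, np.sqrt(dt), n_paths)
    N = np.exp(W - 0.5 * dt * k)
    new_max = N > Sigma
    L = np.where(new_max, k * dt, L)
    Sigma = np.maximum(Sigma, N)
    if k in check_steps:
        ratios[check_steps[k]] = (N / Sigma).mean()

for t, r in sorted(ratios.items()):
    print(t, (L > t).mean(), r)   # P(L > t) vs E[N_t / Sigma_t]
```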

The three following exercises are from the work of Bentata and Yor [9].

Exercise 3.6.2 Let $M$ be a positive martingale such that $M_0 = 1$ and $\lim_{t\to\infty} M_t = 0$. Let $a \in [0,1[$ and define $G_a = \sup\{t : M_t = a\}$. Prove that
$$P(G_a \le t \mid \mathcal{F}_t) = \Big(1 - \frac{M_t}{a}\Big)^+\,.$$
Assume that, for every $t > 0$, the law of the r.v. $M_t$ admits a density $(m_t(x),\, x \ge 0)$ such that $(t,x) \to m_t(x)$ may be chosen continuous on $(0,\infty)^2$, that $d\langle M\rangle_t = \sigma_t^2\, dt$, and that there exists a jointly continuous function $(t,x) \to \theta_t(x) = E(\sigma_t^2 \mid M_t = x)$ on $(0,\infty)^2$. Prove that
$$P(G_a \in dt) = \Big(1 - \frac{M_0}{a}\Big)^+ \delta_0(dt) + \mathbf{1}_{\{t>0\}}\,\frac{1}{2a}\,\theta_t(a)\, m_t(a)\, dt\,.$$

Hint: Use Tanaka's formula to prove that the result is equivalent to $d_t E\big(L_t^a(M)\big) = dt\,\theta_t(a)\, m_t(a)$, where $L^a$ is the Tanaka–Meyer local time. C

Exercise 3.6.3 Let B be a Brownian motion and

$$T_a^{(\nu)} = \inf\{t : B_t + \nu t = a\}\,, \qquad G_a^{(\nu)} = \sup\{t : B_t + \nu t = a\}\,.$$
Prove that
$$\big(T_a^{(\nu)},\, G_a^{(\nu)}\big) \stackrel{\text{law}}{=} \Big(\frac{1}{G_\nu^{(a)}},\, \frac{1}{T_\nu^{(a)}}\Big)\,.$$
Give the law of the pair $\big(T_a^{(\nu)}, G_a^{(\nu)}\big)$. C

Exercise 3.6.4 Let $X$ be a transient diffusion such that
$$P_x(T_0 < \infty) = 0\,, \quad x > 0\,, \qquad P_x\big(\lim_{t\to\infty} X_t = \infty\big) = 1\,, \quad x > 0\,,$$
and denote by $s$ the scale function satisfying $s(0^+) = -\infty$, $s(\infty) = 0$. Prove that, for all $x, t > 0$,
$$P_x(G_y \in dt) = \frac{-1}{2s(y)}\, p_t^{(m)}(x,y)\, dt\,,$$
where $p^{(m)}$ is the transition density w.r.t. the speed measure $m$. C

Bibliography

[1] J. Amendinger. Initial enlargement of filtrations and additional information in financial markets. PhD thesis, Technische Universität Berlin, 1999.

[2] J. Amendinger. Martingale representation theorems for initially enlarged filtrations. Stochastic Processes and their Appl., 89:101–116, 2000.

[3] J. Amendinger, P. Imkeller, and M. Schweizer. Additional logarithmic utility of an insider. Stochastic Processes and their Appl., 75:263–286, 1998.

[4] S. Ankirchner. Information and Semimartingales. PhD thesis, Humboldt-Universität Berlin, 2005.

[5] S. Ankirchner, S. Dereich, and P. Imkeller. Enlargement of filtrations and continuous Girsanov-type embeddings. In C. Donati-Martin, M. Émery, A. Rouault, and Ch. Stricker, editors, Séminaire de Probabilités XL, volume 1899 of Lecture Notes in Mathematics. Springer-Verlag, 2007.

[6] T. Aven. A theorem for determining the compensator of a counting process. Scandinavian Journal of Statistics, 12:69–72, 1985.

[7] M.T. Barlow. Study of a filtration expanded to include an honest time. Z. Wahr. Verw. Gebiete, 44:307–323, 1978.

[8] F. Baudoin. Modeling anticipations on financial markets. In Paris-Princeton Lectures on Mathematical Finance 2002, volume 1814 of Lecture Notes in Mathematics, pages 43–92. Springer-Verlag, 2003.

[9] A. Bentata and M. Yor. From Black-Scholes and Dupire formulae to last passage times of local martingales. Preprint, 2008.

[10] T.R. Bielecki, M. Jeanblanc, and M. Rutkowski. Stochastic methods in credit risk modelling, valuation and hedging. In M. Frittelli and W. Runggaldier, editors, CIME-EMS Summer School on Stochastic Methods in Finance, Bressanone, volume 1856 of Lecture Notes in Mathematics. Springer, 2004.

[11] T.R. Bielecki, M. Jeanblanc, and M. Rutkowski. Market completeness under constrained trading. In A.N. Shiryaev, M.R. Grossinho, P.E. Oliveira, and M.L. Esquivel, editors, Stochastic Finance, Proceedings of International Lisbon Conference, pages 83–107. Springer, 2005.

[12] T.R. Bielecki, M. Jeanblanc, and M. Rutkowski. Completeness of a reduced-form credit risk model with discontinuous asset prices. Stochastic Models, 22:661–687, 2006.

[13] P. Brémaud and M. Yor. Changes of filtration and of probability measures. Z. Wahr. Verw. Gebiete, 45:269–295, 1978.

[14] G. Callegaro, M. Jeanblanc, and B. Zargari. Carthagese Filtrations. Preprint, 2010.


[15] J.M. Corcuera, P. Imkeller, A. Kohatsu-Higa, and D. Nualart. Additional utility of insiders with imperfect dynamical information. Finance and Stochastics, 8:437–450, 2004.

[16] M.H.A. Davis. Option Pricing in Incomplete Markets. In M.A.H. Dempster and S.R. Pliska, editors, Mathematics of Derivative Securities, Publication of the Newton Institute, pages 216–227. Cambridge University Press, 1997.

[17] C. Dellacherie. Un exemple de la théorie générale des processus. In P-A. Meyer, editor, Séminaire de Probabilités IV, volume 124 of Lecture Notes in Mathematics, pages 60–70. Springer-Verlag, 1970.

[18] C. Dellacherie. Capacités et processus stochastiques, volume 67 of Ergebnisse. Springer, 1972.

[19] C. Dellacherie, B. Maisonneuve, and P-A. Meyer. Probabilités et Potentiel, chapitres XVII–XXIV, Processus de Markov (fin). Compléments de calcul stochastique. Hermann, Paris, 1992.

[20] C. Dellacherie and P-A. Meyer. Probabilités et Potentiel, chapitres I–IV. Hermann, Paris, 1975. English translation: Probabilities and Potential, chapters I–IV, North-Holland, 1982.

[21] C. Dellacherie and P-A. Meyer. À propos du travail de Yor sur les grossissements des tribus. In C. Dellacherie, P-A. Meyer, and M. Weil, editors, Séminaire de Probabilités XII, volume 649 of Lecture Notes in Mathematics, pages 69–78. Springer-Verlag, 1978.

[22] C. Dellacherie and P-A. Meyer. Probabilités et Potentiel, chapitres V–VIII. Hermann, Paris, 1980. English translation: Probabilities and Potential, chapters V–VIII, North-Holland, 1982.

[23] D.M. Delong. Crossing probabilities for a square root boundary for a Bessel process. Comm. Statist. A—Theory Methods, 10:2197–2213, 1981.

[24] G. Di Nunno, T. Meyer-Brandis, B. Øksendal, and F. Proske. Optimal portfolio for an insider in a market driven by Lévy processes. Quantitative Finance, 6:83–94, 2006.

[25] P. Ehlers and Ph. Schönbucher. Background filtrations and canonical loss processes for top-down models of portfolio credit risk. Finance and Stochastics, 13:79–103, 2009.

[26] R.J. Elliott. Stochastic Calculus and Applications. Springer, Berlin, 1982.

[27] R.J. Elliott and M. Jeanblanc. Incomplete markets and informed agents. Mathematical Methods of Operations Research, 50:475–492, 1998.

[28] R.J. Elliott, M. Jeanblanc, and M. Yor. On models of default risk. Math. Finance, 10:179–196, 2000.

[29] M. Émery. Espaces probabilisés filtrés : de la théorie de Vershik au mouvement Brownien, via des idées de Tsirelson. In Séminaire Bourbaki, 53ième année, volume 282, pages 63–83. Astérisque, 2002.

[30] A. Eyraud-Loisel. BSDE with enlarged filtration: option hedging of an insider trader in a financial market with jumps. Stochastic Processes and their Appl., 115:1745–1763, 2005.

[31] J-P. Florens and D. Fougere. Noncausality in continuous time. Econometrica, 64:1195–1212, 1996.

[32] H. Föllmer, C-T. Wu, and M. Yor. Canonical decomposition of linear transformations of two independent Brownian motions motivated by models of insider trading. Stochastic Processes and their Appl., 84:137–164, 1999.

[33] D. Gasbarra, E. Valkeila, and L. Vostrikova. Enlargement of filtration and additional information in pricing models: a Bayesian approach. In Yu. Kabanov, R. Liptser, and J. Stoyanov, editors, From Stochastic Calculus to Mathematical Finance: The Shiryaev Festschrift, pages 257–286, 2006.

[34] A. Grorud and M. Pontier. Insider trading in a continuous time market model. International Journal of Theoretical and Applied Finance, 1:331–347, 1998.

[35] A. Grorud and M. Pontier. Asymmetrical information and incomplete markets. International Journal of Theoretical and Applied Finance, 4:285–302, 2001.

[36] C. Hillairet. Comparison of insiders' optimal strategies depending on the type of side-information. Stochastic Processes and their Appl., 115:1603–1627, 2005.

[37] P. Imkeller. Random times at which insiders can have free lunches. Stochastics and Stochastics Reports, 74:465–487, 2002.

[38] P. Imkeller. Malliavin's calculus in insider models: additional utility and free lunches. Mathematical Finance, 13:153–169, 2003.

[39] P. Imkeller, M. Pontier, and F. Weisz. Free lunch and arbitrage possibilities in a financial market model with an insider. Stochastic Processes and their Appl., 92:103–130, 2001.

[40] J. Jacod. Calcul stochastique et problèmes de martingales, volume 714 of Lecture Notes in Mathematics. Springer-Verlag, Berlin, 1979.

[41] J. Jacod. Grossissement initial, hypothèse (H′) et théorème de Girsanov. In Séminaire de Calcul Stochastique 1982–83, volume 1118 of Lecture Notes in Mathematics. Springer-Verlag, 1987.

[42] M. Jeanblanc and M. Rutkowski. Modeling default risk: an overview. In Mathematical Finance: Theory and Practice, Fudan University, pages 171–269. Modern Mathematics Series, High Education Press, Beijing, 2000.

[43] M. Jeanblanc and M. Rutkowski. Modeling default risk: mathematical tools. Fixed Income and Credit Risk Modeling and Management, New York University, Stern School of Business, Statistics and Operations Research Department, Workshop, 2000.

[44] M. Jeanblanc and S. Song. Default times with given survival probability and their F-martingale decomposition formula. Working paper, 2010.

[45] M. Jeanblanc and S. Song. Explicit model of default time with given survival process. Working paper, 2010.

[46] Th. Jeulin. Semi-martingales et grossissement d'une filtration, volume 833 of Lecture Notes in Mathematics. Springer-Verlag, 1980.

[47] Th. Jeulin and M. Yor. Grossissement d'une filtration et semi-martingales : formules explicites. In C. Dellacherie, P-A. Meyer, and M. Weil, editors, Séminaire de Probabilités XII, volume 649 of Lecture Notes in Mathematics, pages 78–97. Springer-Verlag, 1978.

[48] Th. Jeulin and M. Yor. Nouveaux résultats sur le grossissement des tribus. Ann. Scient. Éc. Norm. Sup., 11:429–443, 1978.

[49] Th. Jeulin and M. Yor. Inégalité de Hardy, semimartingales et faux-amis. In P-A. Meyer, editor, Séminaire de Probabilités XIII, volume 721 of Lecture Notes in Mathematics, pages 332–359. Springer-Verlag, 1979.

[50] Th. Jeulin and M. Yor, editors. Grossissements de filtrations : exemples et applications, volume 1118 of Lecture Notes in Mathematics. Springer-Verlag, 1985.

[51] I. Karatzas and I. Pikovsky. Anticipative portfolio optimization. Adv. Appl. Prob., 28:1095–1122, 1996.

[52] A. Kohatsu-Higa. Enlargement of filtrations and models for insider trading. In J. Akahori, S. Ogawa, and S. Watanabe, editors, Stochastic Processes and Applications to Mathematical Finance, pages 151–166. World Scientific, 2004.

[53] A. Kohatsu-Higa. Enlargement of filtration. In Paris-Princeton Lectures on Mathematical Finance 2004, volume 1919 of Lecture Notes in Mathematics, pages 103–172. Springer-Verlag, 2007.

[54] A. Kohatsu-Higa and B. Øksendal. Enlargement of filtration and insider trading. Preprint, 2004.

[55] H. Kunita. Some extensions of Itô's formula. In J. Azéma and M. Yor, editors, Séminaire de Probabilités XV, volume 850 of Lecture Notes in Mathematics, pages 118–141. Springer-Verlag, 1981.

[56] S. Kusuoka. A remark on default risk models. Adv. Math. Econ., 1:69–82, 1999.

[57] D. Lando. On Cox processes and credit risky securities. Review of Derivatives Research, 2:99–120, 1998.

[58] R.S. Liptser and A.N. Shiryaev. Statistics of Random Processes. Springer, 2nd printing 2001, 1977.

[59] D. Madan, B. Roynette, and M. Yor. An alternative expression for the Black-Scholes formula in terms of Brownian first and last passage times. Preprint, Université Paris 6, 2008.

[60] R. Mansuy and M. Yor. Random Times and (Enlargement of) Filtrations in a Brownian Setting, volume 1873 of Lecture Notes in Mathematics. Springer, 2006.

[61] G. Mazziotto and J. Szpirglas. Modèle général de filtrage non linéaire et équations différentielles stochastiques associées. Ann. Inst. H. Poincaré, 15:147–173, 1979.

[62] A. Nikeghbali. An essay on the general theory of stochastic processes. Probability Surveys, 3:345–412, 2006.

[63] A. Nikeghbali and M. Yor. A definition and some properties of pseudo-stopping times. The Annals of Probability, 33:1804–1824, 2005.

[64] A. Nikeghbali and M. Yor. Doob's maximal identity, multiplicative decompositions and enlargements of filtrations. In D. Burkholder, editor, Joseph Doob: A Collection of Mathematical Articles in his Memory, volume 50, pages 791–814. Illinois Journal of Mathematics, 2007.

[65] H. Pham. Stochastic control under progressive enlargement of filtrations and applications to multiple defaults risk management. Preprint, 2009.

[66] J.W. Pitman and M. Yor. Bessel processes and infinitely divisible laws. In D. Williams, editor, Stochastic Integrals, LMS Durham Symposium, Lecture Notes 851, pages 285–370. Springer, 1980.

[67] Ph. Protter. Stochastic Integration and Differential Equations. Springer, Berlin, second edition, 2005.

[68] L.C.G. Rogers and D. Williams. Diffusions, Markov Processes and Martingales, Vol. 1: Foundations. Cambridge University Press, Cambridge, second edition, 2000.

[69] P. Salminen. On last exit decomposition of linear diffusions. Studia Sci. Math. Hungar., 33:251–262, 1997.

[70] M.J. Sharpe. Some transformations of diffusion by time reversal. The Annals of Probability, 8:1157–1162, 1980.

[71] A.N. Shiryaev. Optimal Stopping Rules. Springer, Berlin, 1978.

[72] S. Song. Grossissement de filtration et problèmes connexes. Thesis, Paris IV University, 1997.

[73] C. Stricker and M. Yor. Calcul stochastique dépendant d'un paramètre. Z. Wahrsch. Verw. Gebiete, 45(2):109–133, 1978.

[74] Ch. Stricker. Quasi-martingales, martingales locales, semimartingales et filtration naturelle. Z. Wahr. Verw. Gebiete, 39:55–63, 1977.

[75] B. Tsirel’son. Within and beyond the reach of Brownian innovations. In Proceedings of the International Congress of Mathematicians, pages 311–320. Documenta Mathematica, 1998.

[76] D. Williams. A non-stopping time with the optional-stopping property. Bull. London Math. Soc., 34:610–612, 2002.

[77] C-T. Wu. Construction of Brownian motions in enlarged filtrations and their role in mathematical models of insider trading. Thèse de doctorat d'état, Humboldt-Universität Berlin, 1999.

[78] Ch. Yoeurp. Grossissement de filtration et théorème de Girsanov généralisé. In Th. Jeulin and M. Yor, editors, Grossissements de filtrations : exemples et applications, volume 1118. Springer, 1985.

[79] M. Yor. Grossissement d'une filtration et semi-martingales : théorèmes généraux. In C. Dellacherie, P-A. Meyer, and M. Weil, editors, Séminaire de Probabilités XII, volume 649 of Lecture Notes in Mathematics, pages 61–69. Springer-Verlag, 1978.

[80] M. Yor. Some Aspects of Brownian Motion, Part II: Some Recent Martingale Problems. Lectures in Mathematics, ETH Zürich. Birkhäuser, Basel, 1997.
