Malliavin for Markov Chains using Perturbations of Time

Laurent Denis¹ and Tuyet Mai Nguyen² ∗∗
¹ Laboratoire Manceau de Mathématiques (LMM), Université du Maine, Avenue O. Messiaen, 72018 Le Mans cedex, France, [email protected]
² Laboratoire de Mathématiques et Modélisation d'Évry (LaMME), Université d'Évry Val d'Essonne, 91037 Évry cedex, France, [email protected]
October 6, 2015

Abstract. In this article, we develop a Malliavin calculus associated to a continuous-time Markov chain with finite state space. We apply it to obtain a criterion of density for solutions of SDEs driven by the Markov chain, and also to compute greeks.

keywords: Dirichlet form; integration by parts formula; Malliavin calculus; Markov chain; computation of greeks.
Mathematics subject classification: 60H07; 60J10; 60G55; 91B70.

1 Introduction

The aim of this article is to construct a Dirichlet structure associated to a time- homogeneous Markov chain with finite state space in the spirit of Carlen and

∗∗Corresponding author. The research of T. M. Nguyen benefited from the support of the “Chair Markets in Transition” under the aegis of the Louis Bachelier laboratory, a joint initiative of École Polytechnique, Université d'Évry Val d'Essonne and Fédération Bancaire Française.

Pardoux ([8]), who obtained criteria of density for stochastic differential equations (SDEs) driven by a Poisson process. More precisely, we develop a Malliavin calculus on the canonical space by differentiating with respect to the jump times of the Markov chain; the main difficulty is that the jump times of the Markov chains we consider are no longer distributed according to a homogeneous Poisson process. Extensions of Malliavin calculus to the case of SDEs with jumps were proposed soon after the original works and gave rise to an extensive literature. The approach either deals with local operators acting on the size of the jumps (cf. [2], [9], [15] etc.), or acting on the instants of the jumps (cf. [8], [10]), or is based on the Fock space representation of the Poisson space and finite difference operators (cf. [17], [18], [12] etc.). Let us mention that developing a Malliavin calculus or a Dirichlet structure on the Poisson space is not the only way to prove the absolute continuity of the law of the solution of SDEs driven by a Lévy process; see for example [16] or the recent works of Bally and Clément [1], who consider a very general case. In this article we consider a Markov chain and we construct “explicitly” a gradient operator and a local Dirichlet form. With respect to the Malliavin analysis or the integration by parts formula approach, what the Dirichlet forms approach brings is threefold: a) The arguments hold under only Lipschitz hypotheses, e.g. for the density of solutions of stochastic differential equations (cf. [7]); this is due to the celebrated property that contractions operate on Dirichlet forms, so that the Émile Picard iterated scheme may be performed under the Dirichlet norm. b) A general criterion exists, the energy image density property (EID), proved on the Wiener space for the Ornstein-Uhlenbeck form and in several other cases (but still a conjecture in general since 1986, cf. [6]), which provides an efficient tool for obtaining existence of densities.
c) Dirichlet forms are easy to construct in the infinite dimensional frameworks encountered in probability theory (cf. [7] Chap. V), and this yields a theory of error propagation, especially for finance and physics, cf. [3]. Moreover, since the gradient operator may be calculated easily, this permits numerical simulations, for example in order to compute the greeks of an asset in a market with jumps. The plan of the article is as follows. In Section 2, we describe the probabilistic framework and introduce the Markov chain. Then in Section 3, we introduce the derivative in the direction of an element of the Cameron-Martin space and give some basic properties. The next section is devoted to the construction of the local Dirichlet structure and the associated operators, namely the gradient and the divergence, and also to the establishment of an integration by parts formula. In Section 5, we prove that this Dirichlet form satisfies the energy image density (EID) property, which we apply in Section 6 to get a criterion of density for solutions of SDEs involving the Markov chain. In the last section, we show that this

Malliavin calculus may be applied to compute some greeks in finance.

2 Probability Space

Let $C = (C_t)_{t\ge0}$ be a time-homogeneous Markov chain with finite state space $K = \{k_1, k_2, \dots, k_l\}$. The one-step transition probability matrix is $P = (p_{ij})_{1\le i,j\le l}$ and the infinitesimal generator matrix is $\Lambda = (\lambda_{ij})_{1\le i,j\le l}$. We assume that $(C_t)_{t\ge0}$ has a finite number of jumps over any bounded horizon of time, so that we may consider it defined on the canonical probability space $(\Omega, (\mathcal F_t)_{t\ge0}, P)$, where $\Omega$ is the set of $K$-valued right-continuous maps $w : [0,\infty) \to K$ starting at $w(0) = k_{i_0} \in K$ with $i_0 \in \{1,\dots,l\}$, for which there exist sequences $i_1, i_2, \dots \in \{1,\dots,l\}$ and $0 = t_0 < t_1 < t_2 < \cdots$ such that

\[ w(t) = \sum_{n=0}^{\infty} k_{i_n} \mathbf 1_{[t_n, t_{n+1}[}(t). \tag{2.1} \]

The filtration (Ft)t≥0 we consider is the natural filtration of the Markov chain (Ct)t≥0 satisfying the usual hypotheses.

Let $(T_n)_{n\in\mathbb N}$ denote the sequence of successive jump times of $C$ and $Z_n$ the position of $C$ at time $T_n$. More explicitly, for any $n \in \mathbb N^*$, the random variables $T_n$ and $Z_n$ are defined as follows:
\[ T_0 = 0,\quad Z_0 = k_{i_0},\qquad T_n = \inf\{t > T_{n-1} : C_t \ne Z_{n-1}\},\qquad Z_n = C_{T_n}. \]

We have $0 = T_0 < T_1 < T_2 < \cdots$, and $T_n \to \infty$ as $n \to \infty$ almost surely. Two well-known properties are:

1. $P(T_n - T_{n-1} > t \mid Z_{n-1} = k_i) = e^{\lambda_{ii} t}$.

2. $P(Z_n = k_j \mid Z_{n-1} = k_i) = p_{ij} = -\dfrac{\lambda_{ij}}{\lambda_{ii}}$.

Let $u_n = (u_n^i)_{1\le i\le l}$ be the distribution of $Z_n$, i.e., $u_n^i = P(Z_n = k_i)$, $i = 1,\dots,l$. From the second property and the law of total probability we have
\[ u_n^j = \sum_{i=1}^l P(Z_n = k_j \mid Z_{n-1} = k_i)\, P(Z_{n-1} = k_i) = \sum_{i=1}^l p_{ij}\, u_{n-1}^i. \]

Hence $u_n = P u_{n-1} = P^2 u_{n-2} = \cdots = P^n u_0$. The first property means that, conditionally on the position $Z_{n-1} = k_i$ at the jump time $T_{n-1}$, the random time that elapses until the next jump has an exponential law with parameter $-\lambda_{ii} > 0$. Moreover, we have the following proposition, whose proof is obvious:

Proposition 2.0. Conditionally on the positions $Z_1 = k_{i_1}, \dots, Z_{n-1} = k_{i_{n-1}}$,

1. the increments $\tau_1 = T_1,\ \tau_2 = T_2 - T_1,\ \dots,\ \tau_n = T_n - T_{n-1}$ are independent and exponentially distributed with parameters $\lambda_1 = -\lambda_{i_0 i_0}, \dots, \lambda_n = -\lambda_{i_{n-1} i_{n-1}}$;

2. the joint law of $(T_1, T_2, \dots, T_n)$ has the following density function on $\mathbb R^n$:
\[ \prod_{i=1}^n \lambda_i\, e^{-\lambda_i (t_i - t_{i-1})}\, \mathbf 1_{\{0 < t_1 < \cdots < t_n\}}. \]

We can also compute the law of τn = Tn − Tn−1 in the same way as Zn:

\[ P(\tau_n > t) = \sum_{i=1}^l P(T_n - T_{n-1} > t \mid Z_{n-1} = k_i)\, P(Z_{n-1} = k_i) = \sum_{i=1}^l e^{\lambda_{ii} t}\, u_{n-1}^i. \tag{2.2} \]
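These relations are easy to illustrate numerically. The sketch below uses a made-up 3-state generator matrix (an assumption for illustration, not taken from the text): it builds the jump-chain transition matrix via $p_{ij} = -\lambda_{ij}/\lambda_{ii}$, propagates the distribution $u_n$ of $Z_n$ by matrix powers, and evaluates the survival function (2.2).

```python
import numpy as np

# Hypothetical 3-state generator matrix Λ: off-diagonal rates >= 0, rows sum to 0.
Lam = np.array([[-2.0, 1.5, 0.5],
                [1.0, -3.0, 2.0],
                [0.5, 0.5, -1.0]])

# Jump-chain transition matrix: p_ij = -λ_ij / λ_ii for i != j, and p_ii = 0.
P = -Lam / np.diag(Lam)[:, None]
np.fill_diagonal(P, 0.0)

# Distribution of Z_n: u_n = P^n u_0 (written here in row-vector convention).
u0 = np.array([1.0, 0.0, 0.0])            # the chain starts in state k_1
u2 = u0 @ np.linalg.matrix_power(P, 2)
u3 = u0 @ np.linalg.matrix_power(P, 3)

# Survival function of τ_3 from (2.2): P(τ_n > t) = Σ_i e^{λ_ii t} u_{n-1}^i
t = 0.5
surv = float(np.exp(np.diag(Lam) * t) @ u2)
```

The row-vector product `u0 @ P**n` is the same statement as $u_n = P^n u_0$ in the text, written with the opposite matrix convention.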

Let $(N_t)_{t\ge0}$ be the process that counts the number of jumps of the Markov chain $(C_t)_{t\ge0}$ up to time $t$:
\[ N_t = \sum_{n\ge1} \mathbf 1_{\{T_n \le t\}}. \tag{2.3} \]
We now define the process $\lambda$ by

\[ \lambda(t) = \sum_{n\ge0} \sum_{i=1}^l (-\lambda_{ii})\, \mathbf 1_{\{Z_n = k_i\}}\, \mathbf 1_{\{T_n \le t < T_{n+1}\}}, \]

which is nothing but the intensity of $N$. The three processes $(C_t)_{t\ge0}$, $(N_t)_{t\ge0}$ and $(\lambda(t))_{t\ge0}$ have the same jump times. More importantly, the compensated process defined by
\[ \tilde N_t = N_t - \int_0^t \lambda(u)\,du \tag{2.4} \]
is an $\mathcal F_t$-martingale. Indeed, conditionally on the positions $Z_1, Z_2, \dots, Z_{n-1}$, $\int_0^{t\wedge T_n} \lambda(u)\,du$ is the compensator of the process $N_{t\wedge T_n}$. Hence for every $0 \le s \le t$ and $i_1, \dots, i_{n-1}$ in $\{1,\dots,l\}$,

\[ E\Big[N_{t\wedge T_n} - N_{s\wedge T_n} - \int_{s\wedge T_n}^{t\wedge T_n} \lambda(u)\,du \,\Big|\, \mathcal F_s, Z_1 = k_{i_1}, \dots, Z_{n-1} = k_{i_{n-1}}\Big] = 0, \]
which implies
\begin{align*}
&E\Big[N_{t\wedge T_n} - N_{s\wedge T_n} - \int_{s\wedge T_n}^{t\wedge T_n} \lambda(u)\,du \,\Big|\, \mathcal F_s\Big] \\
&= \sum_{1\le i_1,\dots,i_{n-1}\le l} E\Big[N_{t\wedge T_n} - N_{s\wedge T_n} - \int_{s\wedge T_n}^{t\wedge T_n} \lambda(u)\,du \,\Big|\, \mathcal F_s, Z_1 = k_{i_1}, \dots, Z_{n-1} = k_{i_{n-1}}\Big] \\
&\qquad\times P(Z_1 = k_{i_1}, \dots, Z_{n-1} = k_{i_{n-1}}) = 0.
\end{align*}

Letting $n$ tend to $+\infty$, we get $E[N_t - N_s - \int_s^t \lambda(u)\,du \mid \mathcal F_s] = 0$, so the process $(\tilde N_t)_{t\ge0}$ is an $\mathcal F_t$-martingale.
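The compensator is concrete enough to compute by hand on a single path. The minimal sketch below (the jump times, states and rates are illustrative assumptions, not from the text) evaluates the piecewise-constant intensity $\lambda(t) = -\lambda_{ii}$ of the occupied state and the compensated count $\tilde N_t = N_t - \int_0^t \lambda(u)\,du$.

```python
import numpy as np

# One hypothetical trajectory: jumps at T_1 = 0.4 and T_2 = 1.0,
# visiting states whose holding rates -λ_ii are listed in `rates`.
rates = [2.0, 3.0, 1.0]          # -λ_ii for Z_0, Z_1, Z_2
jump_times = [0.4, 1.0]

def intensity(t):
    """λ(t): rate of the state occupied at time t (piecewise constant)."""
    n = sum(1 for T in jump_times if T <= t)   # N_t, the number of jumps so far
    return rates[n]

def compensator(t, step=1e-4):
    """∫_0^t λ(u) du, computed by a left Riemann sum."""
    grid = np.arange(0.0, t, step)
    return float(sum(intensity(u) for u in grid) * step)

N_t = sum(1 for T in jump_times if T <= 1.5)
Ntilde = N_t - compensator(1.5)   # compensated jump count at t = 1.5
```

On this path the compensator at $t = 1.5$ is $2(0.4) + 3(0.6) + 1(0.5) = 3.1$, so $\tilde N_{1.5} = 2 - 3.1 = -1.1$; averaging $\tilde N_t$ over many simulated paths would illustrate the martingale property.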

3 Directional derivation

In this section, we consider the Markov chain $(C_t)$ defined on the filtered probability space $(\Omega, \mathcal F, P, \{\mathcal F_t\}_{0\le t\le T})$, where $\mathcal F = \mathcal F_T$ for a fixed time horizon $0 < T < \infty$. We apply the same approach as in [8] to define the directional derivative, using a reparametrization of time with respect to a function in a Cameron-Martin space. Let $H$ be the closed subspace of $L^2([0,T])$ orthogonal to the constant functions:
\[ H = \Big\{m \in L^2([0,T]) : \int_0^T m(t)\,dt = 0\Big\}. \]
In a natural way, $H$ inherits the Hilbert structure of $L^2([0,T])$ and we denote by $\|\cdot\|_H$ and $\langle\cdot,\cdot\rangle_H$ the norm and the scalar product on it. From now on in this section, we fix a function $m \in H$. The condition $\int_0^T m(t)\,dt = 0$ ensures that the change of intensity that we are about to define simply shifts the jump times without affecting the total number of jumps. For $\varepsilon > 0$ we define
\[ \tilde m_\varepsilon(t) = \begin{cases} -\frac{1}{3\varepsilon} & \text{if } m(t) \le -\frac{1}{3\varepsilon}; \\[2pt] m(t) & \text{if } -\frac{1}{3\varepsilon} \le m(t) \le \frac{1}{3\varepsilon}; \\[2pt] \frac{1}{3\varepsilon} & \text{if } m(t) \ge \frac{1}{3\varepsilon}, \end{cases} \]
and $m_\varepsilon(t) = \tilde m_\varepsilon(t) - \frac{1}{T}\int_0^T \tilde m_\varepsilon(s)\,ds$. Then we have again $\int_0^T m_\varepsilon(t)\,dt = 0$, and $\frac13 \le 1 + \varepsilon m_\varepsilon(t) \le \frac53$ (since $-\frac13 \le \varepsilon \tilde m_\varepsilon(t) \le \frac13$). Moreover, $\|m_\varepsilon - m\|_H \to 0$ as $\varepsilon \to 0$. We define the reparametrization of time with respect to $m$ as follows:
\[ \tau_\varepsilon(t) = t + \varepsilon \int_0^t m_\varepsilon(s)\,ds, \quad t \ge 0. \]

Notice that $\tau_\varepsilon(0) = 0$, $\tau_\varepsilon(T) = T$, and $\tau_\varepsilon'(t) = 1 + \varepsilon m_\varepsilon(t) > 0$, so the number and the order of the jump times between $0$ and $T$ remain unchanged. Let $T_\varepsilon : \Omega \to \Omega$ be the map defined by

\[ (T_\varepsilon(w))(t) = w(\tau_\varepsilon(t)) \quad\text{for all } w \in \Omega, \]

$T_\varepsilon F = F \circ T_\varepsilon$ for every $F \in L^2(\Omega)$, and let $P^\varepsilon$ be the probability measure $P \circ T_\varepsilon^{-1}$. We denote

\[ D_m^0 = \Big\{ F \in L^2(\Omega) : \frac{\partial T_\varepsilon F}{\partial\varepsilon}\Big|_{\varepsilon=0} = \lim_{\varepsilon\to0} \frac{1}{\varepsilon}(T_\varepsilon F - F) \text{ exists in } L^2(\Omega) \Big\}. \]

For $F \in D_m^0$, $D_m F$ is defined as the above limit.

Example 3.0. We now give some examples of random variables whose directional derivatives can be computed directly from the definition.

1. The random variables $Z_n$ do not change under $T_\varepsilon$, so $Z_n \in D_m^0$ and $D_m Z_n = 0$.

2. Let $w \in \Omega$ have the form (2.1). Then

\[ (T_\varepsilon(w))(t) = w(\tau_\varepsilon(t)) = \sum_{n=0}^\infty k_{i_n} \mathbf 1_{[t_n,t_{n+1}[}(\tau_\varepsilon(t)) = \sum_{n=0}^\infty k_{i_n} \mathbf 1_{[\tau_\varepsilon^{-1}(t_n),\, \tau_\varepsilon^{-1}(t_{n+1})[}(t), \tag{3.5} \]
which implies $T_\varepsilon T_n(w) = T_n(T_\varepsilon(w)) = \tau_\varepsilon^{-1}(t_n) = \tau_\varepsilon^{-1}(T_n(w))$. We put $\bar T_n = T_n \wedge T$ and clearly we also have $T_\varepsilon \bar T_n = \tau_\varepsilon^{-1} \circ \bar T_n$. By the mean value theorem, we have
\begin{align*}
\frac{1}{\varepsilon}\big(T_\varepsilon \bar T_n(w) - \bar T_n(w)\big) &= \frac{1}{\varepsilon}\Big(\tau_\varepsilon^{-1}(\bar T_n(w)) - \tau_\varepsilon^{-1}\big(\tau_\varepsilon(\bar T_n(w))\big)\Big) \\
&= \frac{1}{\varepsilon}\,(\tau_\varepsilon^{-1})'(S_\varepsilon(w))\,\big(\bar T_n(w) - \tau_\varepsilon(\bar T_n(w))\big) \\
&= -(\tau_\varepsilon^{-1})'(S_\varepsilon(w)) \int_0^{\bar T_n(w)} m_\varepsilon(s)\,ds,
\end{align*}
where $S_\varepsilon(w)$ is some value between $\bar T_n(w)$ and $\tau_\varepsilon(\bar T_n(w))$. It is clear that for any $s \in [0,T]$, $(\tau_\varepsilon^{-1})'(s)$ is bounded by a deterministic constant which does not depend on $\varepsilon$, and that it converges to $1$ as $\varepsilon$ tends to $0$; hence we easily conclude that $\bar T_n \in D_m^0$, and $D_m \bar T_n = -\int_0^{\bar T_n} m(t)\,dt$.
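The closed form $D_m\bar T_n = -\int_0^{\bar T_n} m$ can be checked with a finite difference: for a small $\varepsilon$, invert the time change $\tau_\varepsilon$ by bisection and compare $(\tau_\varepsilon^{-1}(\bar T_n) - \bar T_n)/\varepsilon$ with the closed form. The choices of $m$, $T$ and the jump time below are illustrative assumptions (for a bounded $m$ and small $\varepsilon$ the truncation is inactive, so $m_\varepsilon = m$).

```python
import math

T, eps = 1.0, 1e-6
m = lambda t: math.sin(2 * math.pi * t / T)       # m ∈ H: ∫_0^T m = 0
M = lambda t: (T / (2 * math.pi)) * (1 - math.cos(2 * math.pi * t / T))  # ∫_0^t m
tau = lambda t: t + eps * M(t)                     # time change τ_ε

def tau_inv(y):
    """Invert τ_ε by bisection (τ_ε is strictly increasing for small ε)."""
    lo, hi = 0.0, T
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if tau(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

Tn = 0.7                                           # a (capped) jump time T̄_n on the path
fd = (tau_inv(Tn) - Tn) / eps                      # (T_ε T̄_n − T̄_n)/ε
exact = -M(Tn)                                     # D_m T̄_n = −∫_0^{T̄_n} m(t) dt
```

Note that `tau(0.0) == 0` and `tau(T) == T`, which is exactly the "endpoints fixed" property guaranteed by $\int_0^T m = 0$.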

3. Since the number of jumps of $(C_t)_{t\in[0,T]}$ does not change under the reparametrization of time, we have $T_\varepsilon N_T = N_T$. Hence $N_T \in D_m^0$, and $D_m N_T = 0$.

Remark .1. If the assumption $\int_0^T m(t)\,dt = 0$ were relaxed, the number of jumps in $[0,T]$ would change, and then $N_T \in D_m^0$ is no longer ensured. Without loss of generality, we may assume that $\int_0^T m(t)\,dt > 0$, which implies $\tau_\varepsilon(T) > T$ for $\varepsilon > 0$. Indeed,
\begin{align*}
E\Big[\frac{1}{\varepsilon^2}(T_\varepsilon N_T - N_T)^2\Big] &= \frac{1}{\varepsilon^2}\, E\big[(T_\varepsilon N_T - N_T)^2\big] \\
&\ge \frac{1}{\varepsilon^2}\, E\big[(T_\varepsilon N_T - N_T)^2 \mid T_\varepsilon N_T - N_T \ge 1\big]\, P(T_\varepsilon N_T - N_T \ge 1) \\
&\ge \frac{1}{\varepsilon^2}\, P(T_\varepsilon N_T - N_T \ge 1).
\end{align*}
From (3.5), we have $T_\varepsilon N_T(w) = N_T(T_\varepsilon(w)) = \sum_{n\ge1} \mathbf 1_{\{\tau_\varepsilon^{-1}(t_n) \le T\}} = \sum_{n\ge1} \mathbf 1_{\{t_n \le \tau_\varepsilon(T)\}}$. In other words, $T_\varepsilon N_T$ is the number of jumps up to $\tau_\varepsilon(T)$, so the inequality $T_\varepsilon N_T - N_T \ge 1$ is equivalent to $T_{N_T+1} < \tau_\varepsilon(T)$, which is ensured whenever $T_{N_T+1} - T_{N_T} < \tau_\varepsilon(T) - T$, since $T_{N_T} < T$. Put $\bar\lambda = \max_{1\le i\le l} \lambda_{ii}$; then from (2.2), and noticing that $\bar\lambda < 0$, we have $P(T_n - T_{n-1} \le t) = 1 - \sum_{i=1}^l e^{\lambda_{ii} t} u_{n-1}^i \ge 1 - e^{\bar\lambda t} > 0$ for every $n \ge 1$, $t > 0$. In summary, we have
\begin{align*}
E\Big[\frac{1}{\varepsilon^2}(T_\varepsilon N_T - N_T)^2\Big] &\ge \frac{1}{\varepsilon^2}\, P\big(T_{N_T+1} - T_{N_T} \le \tau_\varepsilon(T) - T\big) \\
&\ge \frac{1}{\varepsilon^2}\Big(1 - e^{\bar\lambda(\tau_\varepsilon(T)-T)}\Big) \to \infty \ \text{as } \varepsilon \downarrow 0.
\end{align*}
Thus, in this case we would have $N_T \notin D_m^0$. This explains why we need the assumption $\int_0^T m(t)\,dt = 0$ in our construction.

Let us define the set $\mathcal S$ of “smooth” functions. A map $F : \Omega \to \mathbb R$ belongs to $\mathcal S$ if and only if there exist $a \in \mathbb R$, $d \in \mathbb N^*$ and, for any $n \in \{1,\dots,d\}$ and $i_1,\dots,i_n \in \{1,\dots,l\}$, a function $f_n^{i_1,\dots,i_n} : \mathbb R^n \to \mathbb R$ such that

1. $F = a\mathbf 1_{A_0} + \sum_{n=1}^d \sum_{i_1,\dots,i_n \in \{1,\dots,l\}} f_n^{i_1,\dots,i_n}(T_1,\dots,T_n)\, \mathbf 1_{A_n^{i_1,\dots,i_n}}$, where $A_0 = \{N_T = 0\}$ and $A_n^{i_1,\dots,i_n} = \{N_T = n,\, Z_1 = k_{i_1}, \dots, Z_n = k_{i_n}\}$;

2. for any $n \in \{1,\dots,d\}$ and any $i_1,\dots,i_n \in \{1,\dots,l\}$, $f_n^{i_1,\dots,i_n}$ is smooth with bounded derivatives of any order.

It is known that $\mathcal S$ is dense in $L^2(\Omega, \mathcal F, P)$. We recall that for all $i$, $\bar T_i = T_i \wedge T$.

Proposition 3.0. Let $n \in \mathbb N^*$. For every smooth function $f : \mathbb R^n \to \mathbb R$ with bounded derivatives of any order, we have $f(\bar T_1, \bar T_2, \dots, \bar T_n) \in D_m^0$, and
\[ D_m f(\bar T_1, \bar T_2, \dots, \bar T_n) = -\sum_{j=1}^n \frac{\partial f}{\partial t_j}(\bar T_1, \bar T_2, \dots, \bar T_n) \int_0^{\bar T_j} m(t)\,dt. \]

As a consequence, $\mathcal S$ is a subset of $D_m^0$.

Proof. By the definition of $D_m$ we have
\begin{align*}
D_m f(\bar T_1, \bar T_2, \dots, \bar T_n) &= \frac{\partial}{\partial\varepsilon}\, T_\varepsilon f(\bar T_1, \bar T_2, \dots, \bar T_n)\Big|_{\varepsilon=0} = \frac{\partial}{\partial\varepsilon}\, f(T_\varepsilon\bar T_1, T_\varepsilon\bar T_2, \dots, T_\varepsilon\bar T_n)\Big|_{\varepsilon=0} \\
&= \sum_{j=1}^n \frac{\partial f}{\partial t_j}\,\frac{\partial}{\partial\varepsilon}\, T_\varepsilon\bar T_j\Big|_{\varepsilon=0} = \sum_{j=1}^n \frac{\partial f}{\partial t_j}\, D_m\bar T_j = -\sum_{j=1}^n \frac{\partial f}{\partial t_j}(\bar T_1, \bar T_2, \dots, \bar T_n) \int_0^{\bar T_j} m(t)\,dt.
\end{align*}

Proposition 3.0. (Functional calculus properties)

1. If $F, G \in \mathcal S$, then $FG \in \mathcal S$ and $D_m(FG) = (D_m F)G + F(D_m G)$.

2. If $F_1, F_2, \dots, F_n \in \mathcal S$ and $\Phi : \mathbb R^n \to \mathbb R$ is a smooth function, then $\Phi(F_1, F_2, \dots, F_n) \in \mathcal S$ and
\[ D_m \Phi(F_1, F_2, \dots, F_n) = \sum_{j=1}^n \frac{\partial\Phi}{\partial x_j}(F_1, F_2, \dots, F_n)\, D_m F_j. \]

Now we study the absolute continuity of $P^\varepsilon$ with respect to $P$. Let $E^\varepsilon$ denote expectation under the probability $P^\varepsilon$. For every $n \in \mathbb N$, $i_1,\dots,i_n \in \{1,\dots,l\}$, $\lambda_1,\dots,\lambda_n$ as in Proposition 2.0, and every measurable function $f : \mathbb R^n \to \mathbb R$, we have
\begin{align*}
E^\varepsilon[f(T_1,\dots,T_n)\mathbf 1_{\{N_T=n\}} \mid Z_1 = k_{i_1},\dots,Z_n = k_{i_n}] &= E[f(T_\varepsilon T_1,\dots,T_\varepsilon T_n)\mathbf 1_{\{N_T=n\}} \mid Z_1 = k_{i_1},\dots,Z_n = k_{i_n}] \\
&= E[(f\circ\Phi_\varepsilon^{-1})(T_1,\dots,T_n)\mathbf 1_{\{T_n\le T<T_{n+1}\}} \mid Z_1 = k_{i_1},\dots,Z_n = k_{i_n}] \\
&= E[f(T_1,\dots,T_n)\mathbf 1_{\{N_T=n\}}\, p_n \mid Z_1 = k_{i_1},\dots,Z_n = k_{i_n}],
\end{align*}
where
\begin{align*}
\Phi_\varepsilon(u_1,\dots,u_n) &= \Big(u_1 + \varepsilon\int_0^{u_1} m_\varepsilon(t)\,dt,\ \dots,\ u_n + \varepsilon\int_0^{u_n} m_\varepsilon(t)\,dt\Big), \\
\varphi(t_1,\dots,t_n) &= e^{-\lambda_{n+1}(T-t_n)} \prod_{i=1}^n \lambda_i\, e^{-\lambda_i(t_i - t_{i-1})}, \\
p_n &= e^{\varepsilon\big(\lambda_{n+1}\int_0^{T_n} m_\varepsilon\,dt\; -\; \sum_{i=1}^n \lambda_i \int_{T_{i-1}}^{T_i} m_\varepsilon\,dt\big)} \prod_{i=1}^n \big(1 + \varepsilon m_\varepsilon(T_i)\big),
\end{align*}
the last equality following from the change of variables $(t_1,\dots,t_n) = \Phi_\varepsilon(u_1,\dots,u_n)$ in the integral against the conditional density $\varphi$.

Noticing that $\int_0^{T_n} m_\varepsilon\,dt = -\int_{T_n}^T m_\varepsilon\,dt$ and that, conditionally on $N_T = n$,
\[ \sum_{i=1}^n \lambda_i \int_{T_{i-1}}^{T_i} m_\varepsilon\,dt + \lambda_{n+1} \int_{T_n}^T m_\varepsilon\,dt = \int_0^T \lambda(t)\, m_\varepsilon(t)\,dt, \]

we can rewrite $p_n = e^{-\varepsilon\int_0^T \lambda(t) m_\varepsilon(t)\,dt} \prod_{i=1}^n (1 + \varepsilon m_\varepsilon(T_i))$, and so we have proved:

Proposition 3.0. $P^\varepsilon$ is absolutely continuous with respect to $P$, with density
\[ \frac{dP^\varepsilon}{dP} = e^{-\varepsilon\int_0^T \lambda(t) m_\varepsilon(t)\,dt} \prod_{i=1}^{N_T} \big(1 + \varepsilon m_\varepsilon(T_i)\big) := G_\varepsilon. \]

In the case of a standard Poisson process, $\lambda_i = 1$ for every $i$, so $\int_0^T \lambda(t) m_\varepsilon(t)\,dt = 0$. Hence we recover the result obtained in [8] for standard Poisson processes:
\[ \frac{dP^\varepsilon}{dP} = \prod_{i=1}^{N_T} \big(1 + \varepsilon m_\varepsilon(T_i)\big). \]

4 Gradient and divergence

For any h ∈ H, set

\[ \hat\delta(h) = \sum_{i=1}^{N_T} h(T_i) - \lambda(T_{N_T}) \int_{T_{N_T}}^T h(t)\,dt - \sum_{i=1}^{N_T} \lambda(T_{i-1}) \int_{T_{i-1}}^{T_i} h(t)\,dt, \tag{4.6} \]
with the convention that $\hat\delta(h) = 0$ if $N_T = 0$. We can rewrite (4.6) as follows:

\[ \hat\delta(h) = \int_0^T h(t)\,dN_t - \int_0^T \lambda(t)\, h(t)\,dt = \int_0^T h(t)\,d\tilde N_t, \]
where the compensated process $\tilde N$ is defined in (2.4). Moreover, if we assume that $h \in H \cap C^1([0,T])$, then by taking the directional derivative of (4.6) we have

\begin{align*}
D_m\hat\delta(h) &= \sum_{i=1}^{N_T} h'(T_i)\, D_m T_i + \lambda(T_{N_T})\, h(T_{N_T})\, D_m T_{N_T} - \sum_{i=1}^{N_T} \lambda(T_{i-1})\big(h(T_i)\, D_m T_i - h(T_{i-1})\, D_m T_{i-1}\big) \\
&= -\sum_{i=1}^{N_T} h'(T_i) \int_0^{T_i} m(s)\,ds - \sum_{i=0}^{N_T} \lambda(T_i)\, h(T_i) \int_0^{T_i} m(s)\,ds + \sum_{i=1}^{N_T} \lambda(T_{i-1})\, h(T_i) \int_0^{T_i} m(s)\,ds \\
&= -\int_0^T h'(t)\int_0^t m(s)\,ds\,dN_t - \int_0^T \lambda(t)\, h(t) \int_0^t m(s)\,ds\,dN_t + \int_0^T \lambda(t-)\, h(t) \int_0^t m(s)\,ds\,dN_t \\
&= -\int_0^T h'(t)\int_0^t m(s)\,ds\,dN_t - \int_0^T \big(\lambda(t) - \lambda(t-)\big)\, h(t) \int_0^t m(s)\,ds\,dN_t. \tag{4.7}
\end{align*}
By applying the Itô formula to the process $f(t, \lambda(t)) = \lambda(t)\, h(t) \int_0^t m(s)\,ds$ with

$f(0,\lambda(0)) = f(T,\lambda(T)) = 0$, we have

\begin{align*}
0 &= \int_0^T \lambda(t)\Big[h'(t)\int_0^t m(s)\,ds + h(t)\, m(t)\Big]dt + \sum_{n\ge1,\,T_n\le T} \big(\lambda(T_n) - \lambda(T_n-)\big)\, h(T_n) \int_0^{T_n} m(s)\,ds \\
&= \int_0^T \lambda(t)\, h'(t) \int_0^t m(s)\,ds\,dt + \int_0^T \lambda(t)\, h(t)\, m(t)\,dt + \int_0^T \big(\lambda(t) - \lambda(t-)\big)\, h(t) \int_0^t m(s)\,ds\,dN_t. \tag{4.8}
\end{align*}

From (4.7) and (4.8), we obtain

\begin{align*}
D_m\hat\delta(h) &= -\int_0^T h'(t)\int_0^t m(s)\,ds\,dN_t + \int_0^T \lambda(t)\, h'(t) \int_0^t m(s)\,ds\,dt + \int_0^T \lambda(t)\, h(t)\, m(t)\,dt \\
&= -\int_0^T h'(t)\int_0^t m(s)\,ds\,d\tilde N_t + \int_0^T \lambda(t)\, h(t)\, m(t)\,dt.
\end{align*}
By taking expectation, we get

"Z T # ˆ E[Dmδ(h)] = E λ(t)h(t)m(t)dt . (4.9) 0

4.1 Integration by parts formula in Bismut's way

In this section, we establish an integration by parts formula directly from the perturbation of the measure, in Bismut's way (cf. [2]).

Proposition 4.0. For all $F \in D_m^0$ and $m$ in $H$,
\[ E[D_m F] = E[\hat\delta(m)\, F]. \]

Proof. By the definitions of $T_\varepsilon$ and $P^\varepsilon$, we have
\[ E[T_\varepsilon F] = E^\varepsilon[F] = E[G_\varepsilon F]. \tag{4.10} \]

Differentiating both sides of (4.10) with respect to $\varepsilon$ at $\varepsilon = 0$, we obtain

\[ E[D_m F] = E\Big[\frac{\partial G_\varepsilon}{\partial\varepsilon}\Big|_{\varepsilon=0}\, F\Big]. \]

Since we have

\[ \frac{\partial G_\varepsilon}{\partial\varepsilon}\Big|_{\varepsilon=0} = -\int_0^T \lambda(t)\, m(t)\,dt + \sum_{i=1}^{N_T} m(T_i) = \hat\delta(m), \]
the result follows.
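The identity $\partial_\varepsilon G_\varepsilon|_{\varepsilon=0} = \hat\delta(m)$ is easy to check numerically on one fixed path. The path, rates and $m$ below are illustrative assumptions; for a bounded $m$ and small $\varepsilon$ the truncation in $m_\varepsilon$ is inactive and the centering term vanishes, so $m_\varepsilon = m$ on this example.

```python
import math

T = 1.5
jump_times = [0.4, 1.0]                   # T_1, T_2 on a hypothetical path
lam_vals = [2.0, 3.0, 1.0]                # λ(t) between consecutive jumps
m = lambda t: math.cos(2 * math.pi * t / T)                        # m ∈ H
M = lambda t: (T / (2 * math.pi)) * math.sin(2 * math.pi * t / T)  # ∫_0^t m

grid = [0.0] + jump_times + [T]
int_lam_m = sum(lam_vals[i] * (M(grid[i + 1]) - M(grid[i])) for i in range(3))
delta_hat_m = sum(m(Ti) for Ti in jump_times) - int_lam_m          # δ̂(m) on this path

def G(eps):
    """Density G_ε on this path (here m_ε = m, see the lead-in)."""
    prod = 1.0
    for Ti in jump_times:
        prod *= 1.0 + eps * m(Ti)
    return math.exp(-eps * int_lam_m) * prod

eps = 1e-6
fd = (G(eps) - G(0.0)) / eps              # finite-difference ∂_ε G_ε |_{ε=0}
```

Note `G(0.0) == 1`, consistent with $G_0 = dP/dP$.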

Remark .2. We can also retrieve the equality (4.9) by applying Proposition 4.0 to $F = \hat\delta(h)$, as follows:

"Z T Z T # "Z T # ˆ ˆ ˆ ˜ ˜ E[Dmδ(h)] = E[δ(m)δ(h)] = E m(t)dNt h(t)dNt = E λ(t)h(t)m(t)dt . 0 0 0

Corollary 0. For all $F, G \in D_m^0$ such that $FG \in D_m^0$, and $m$ in $H$,
\[ E[G\, D_m F] = E[F(\hat\delta(m)\, G - D_m G)]. \]

Proof. The result is obtained directly by applying Proposition 4.0 to $FG$.

Corollary 0. Let $\psi \in D_m^0$, $\Phi = (\phi_1,\dots,\phi_d)$ with $\phi_i \in D_m^0$, $i = 1,\dots,d$, and $f \in C^1(\mathbb R^d)$. Then
\[ E[\psi\, D_m(f\circ\Phi)] = E\Big[\psi \sum_{i=1}^d \frac{\partial f}{\partial x_i}(\Phi)\, D_m\phi_i\Big] = E[f(\Phi)(\hat\delta(m)\psi - D_m\psi)]. \]

Proof. By applying Proposition 4.0 to $F = \psi f(\Phi)$, we obtain

\[ E[D_m(\psi f(\Phi))] = E[\hat\delta(m)\,\psi f(\Phi)]. \]
But $D_m(\psi f(\Phi)) = (D_m\psi) f(\Phi) + \psi\, D_m f(\Phi) = (D_m\psi) f(\Phi) + \psi \sum_{i=1}^d \frac{\partial f}{\partial x_i}(\Phi)\, D_m\phi_i$. Hence
\[ E\Big[\psi \sum_{i=1}^d \frac{\partial f}{\partial x_i}(\Phi)\, D_m\phi_i\Big] = E[\hat\delta(m)\,\psi f(\Phi)] - E[(D_m\psi) f(\Phi)] = E[f(\Phi)(\hat\delta(m)\psi - D_m\psi)]. \]

Proposition 4.0. (Closability of Dm) For any m ∈ H, Dm is closable.

Proof. To prove that $D_m$ is closable, we have to show that if $(F_i)_{i\in\mathbb N} \subset \mathcal S$ satisfies $F_i \to 0$ and $D_m F_i \to u$ in $L^2(\Omega)$, then $u = 0$ $P$-a.s. From Corollary 0 we have
\[ E[G\, D_m F_i] = E[F_i(\hat\delta(m)\, G - D_m G)]. \]
Letting $i$ tend to infinity, we get $E[Gu] = 0$ for every $G \in \mathcal S$. Since $\mathcal S$ is dense in $L^2(\Omega)$, we deduce $u = 0$ $P$-a.s.

We shall identify $D_m$ with its closed extension and denote by $D_m^{1,2}$ its domain, i.e.,
\[ D_m^{1,2} = \{F \in L^2(\Omega) : \exists (F_i)_{i\in\mathbb N} \subset \mathcal S \text{ s.t. } (F_i)_{i\in\mathbb N} \to F \text{ and } (D_m F_i)_{i\in\mathbb N} \text{ converges in } L^2(\Omega)\}. \]

Obviously, $\mathcal S \subset D_m^{1,2}$. For every $F \in D_m^{1,2}$, we define $D_m F = \lim_{i\to\infty} D_m F_i$. Thanks to Proposition 4.0, this limit does not depend on the choice of the sequence $(F_i)_{i\in\mathbb N}$, so $D_m F$ is well-defined and coincides with the already defined $D_m F$ in case $F \in \mathcal S$. For every $F \in D_m^{1,2}$, we have
\[ \|F\|^2_{D_m^{1,2}} = \|F\|^2_{L^2(\Omega)} + \|D_m F\|^2_{L^2(\Omega)} < +\infty. \]

4.2 The local Dirichlet form

We now define an operator $D$ from $L^2(\Omega)$ into $L^2(\Omega; H)$ which plays the role of a stochastic derivative, in the sense that for every $F$ in its domain and $m \in H$,
\[ D_m F = \langle DF, m\rangle_H = \int_0^T D_t F\, m(t)\,dt. \]

Let $(m_i)_{i\in\mathbb N}$ be an orthonormal basis of the space $H$. Then every function $m \in H$ can be expressed as $m = \sum_{i=1}^{+\infty} \langle m, m_i\rangle_H\, m_i$. We now follow the construction of Bouleau and Hirsch [7]. We set
\[ D^{1,2} = \bigcap_{i=1}^{+\infty} D_{m_i}^{1,2} \cap \Big\{X : \sum_{i=1}^{+\infty} \|D_{m_i} X\|^2_{L^2(\Omega)} < +\infty\Big\}, \quad\text{and}\quad \forall X, Y \in D^{1,2},\ \mathcal E(X,Y) = \sum_{i=1}^{+\infty} E[D_{m_i} X\, D_{m_i} Y]. \]

We denote E(X) := E(X,X) for convenience. The next proposition defines the local Dirichlet form. Concerning the definition of local Dirichlet forms, our reference is [7].

Proposition 4.0. The bilinear form $(D^{1,2}, \mathcal E)$ is a local Dirichlet form admitting a gradient $D$ and a carré du champ $\Gamma$, given respectively by the following formulas:
\[ \forall X \in D^{1,2},\quad DX = \sum_{i=1}^{+\infty} (D_{m_i} X)\, m_i \in L^2(\Omega; H), \qquad \forall X, Y \in D^{1,2},\quad \Gamma[X,Y] = \langle DX, DY\rangle_H. \]

As a consequence, $D^{1,2}$ is a Hilbert space equipped with the norm
\[ \forall X \in D^{1,2},\quad \|X\|^2_{D^{1,2}} = \|X\|^2_{L^2(\Omega)} + \mathcal E(X). \]
Proof. The proof is straightforward and uses the same arguments as the proof of Proposition 4.2.1 in [7], Chapter II. Let us remark that the locality property is a direct consequence of the functional calculus; see Proposition 3.0.

Example 4.0. By using the results of Example 3.0, for any $n \in \mathbb N$ we have

1. $DZ_n = \sum_{i=1}^{+\infty} \langle DZ_n, m_i\rangle_H\, m_i = \sum_{i=1}^{+\infty} (D_{m_i} Z_n)\, m_i = 0$. Similarly, $DN_T = 0$.

2. In the same way,

\begin{align*}
D\bar T_n &= \sum_i \langle D\bar T_n, m_i\rangle_H\, m_i = \sum_i (D_{m_i}\bar T_n)\, m_i \\
&= -\sum_i m_i \int_0^{\bar T_n} m_i(s)\,ds = -\sum_i m_i \int_0^T m_i(s)\, \mathbf 1_{[0,\bar T_n]}(s)\,ds \\
&= \sum_i \Big\langle m_i,\ \frac{\bar T_n}{T} - \mathbf 1_{[0,\bar T_n]} \Big\rangle_H\, m_i = \frac{\bar T_n}{T} - \mathbf 1_{[0,\bar T_n]}.
\end{align*}
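The last expansion can be reproduced numerically with a concrete orthonormal basis of $H$. Using the cosine basis $m_k(t) = \sqrt{2/T}\cos(k\pi t/T)$, $k \ge 1$ (each $m_k$ integrates to $0$ on $[0,T]$), the partial sums $\sum_{k\le K}(D_{m_k}\bar T_n)\,m_k$ converge to $\bar T_n/T - \mathbf 1_{[0,\bar T_n]}$; $T$ and the value of $\bar T_n$ below are illustrative choices.

```python
import numpy as np

T, a = 1.0, 0.7                 # horizon and a fixed value of T̄_n on the path
K = 2000                        # number of basis functions kept
k = np.arange(1, K + 1)

# D_{m_k} T̄_n = −∫_0^{T̄_n} m_k(t) dt  for  m_k(t) = sqrt(2/T) cos(kπt/T)
coef = -np.sqrt(2 / T) * (T / (k * np.pi)) * np.sin(k * np.pi * a / T)

t = np.linspace(0.0, T, 2001)
basis = np.sqrt(2 / T) * np.cos(np.outer(k, np.pi * t / T))   # shape (K, len(t))
partial = coef @ basis                                        # Σ_k (D_{m_k} T̄_n) m_k(t)
target = a / T - (t <= a).astype(float)                       # T̄_n/T − 1_{[0,T̄_n]}(t)

err_at_03 = abs(partial[600] - target[600])                   # t = 0.3, away from the jump at a
```

Convergence is in $L^2$ and, away from the discontinuity at $t = \bar T_n$, pointwise; near the jump the partial sums exhibit the usual Gibbs oscillation.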

Moreover, as a consequence of the functional calculus specific to local Dirichlet forms (see [7], Section I.6), the set of smooth functions $\mathcal S$ is dense in $D^{1,2}$, and if $F \in \mathcal S$, $F = a\mathbf 1_{A_0} + \sum_{n=1}^d \sum_{i_1,\dots,i_n\in\{1,\dots,l\}} f_n^{i_1,\dots,i_n}(T_1,\dots,T_n)\,\mathbf 1_{A_n^{i_1,\dots,i_n}}$, then
\[ D_t F = -\sum_{n=1}^d \sum_{i_1,\dots,i_n\in\{1,\dots,l\}} \sum_{i=1}^n \frac{\partial f_n^{i_1,\dots,i_n}}{\partial t_i}(T_1,\dots,T_n)\Big(\mathbf 1_{[0,T_i]}(t) - \frac{T_i}{T}\Big)\mathbf 1_{A_n^{i_1,\dots,i_n}}. \]

Remark .3. The expression of the gradient operator $D$ above shows that neither the Dirichlet form $(D^{1,2}, \mathcal E)$ nor the gradient $D$ depends on the choice of the orthonormal basis $(m_i)_{i\in\mathbb N}$ of $H$.

4.3 Divergence operator

Let $\delta : L^2(\Omega; H) \to L^2(\Omega)$ be the adjoint operator of $D$. Its domain, $\mathrm{Dom}\,\delta$, is the set of $u \in L^2(\Omega; H)$ such that there exists $c > 0$ satisfying
\[ \Big| E\Big[\int_0^T D_t F\, u_t\,dt\Big] \Big| \le c\,\|F\|_{D^{1,2}}, \quad \forall F \in D^{1,2}. \]
It follows from the properties of $D$ that $\delta$ is also a closed, densely defined operator. We have the integration by parts formula by duality:

"Z T # 1,2 E[δ(u)F ] = E [< u, DF >H] = E utDtF dt , ∀F ∈ D , u ∈ Domδ. 0

Proposition 4.0. For every $u \in D^{1,2} \otimes H$, we have $u \in \mathrm{Dom}\,\delta$ and
\[ \delta(u) = \int_0^T u_t\,d\tilde N_t - \int_0^T D_t u_t\,dt. \]

Proof. First of all, if $u = mG$ with $m \in H$, $G \in D^{1,2}$, then
\[ E\Big[\int_0^T u_t\, D_t F\,dt\Big] = E\Big[G \int_0^T m(t)\, D_t F\,dt\Big] = E[G\, D_m F] = E[F(\hat\delta(m)\, G - D_m G)] \]
for every $F \in D^{1,2}$. From the uniqueness of $\delta$, we have
\[ \delta(u) = \hat\delta(m)\, G - D_m G = G\int_0^T m(t)\,d\tilde N_t - \int_0^T D_t G\, m(t)\,dt = \int_0^T u_t\,d\tilde N_t - \int_0^T D_t u_t\,dt. \]
By linearity, this is true for every function in
\[ \Big\{u \in D^{1,2}\otimes H : u = \sum_{i=1}^n m_i G_i,\ m_i \in H,\ G_i \in D^{1,2}\Big\}. \]
The result of the proposition follows since this set is dense in $D^{1,2}\otimes H$.

Remark .4. If $u \in H$, then $D_s u_s = 0$ for every $s \in [0,T]$, so the divergence operator $\delta$ coincides with the stochastic integral $\hat\delta$ with respect to the compensated process $\tilde N$. From the proof of Proposition 4.0, we retain that

1. $\delta(mG) = \hat\delta(m)\, G - D_m G$ for every $m \in H$, $G \in D^{1,2}$;

2. $E[G\, D_m F] = E[F\,\delta(mG)]$ for every $m \in H$, $F, G \in D^{1,2}$.

Corollary 0. If $u \in L^2(\Omega; H)$ is an adapted process, then
\[ \delta(u) = \int_0^T u_t\,d\tilde N_t. \]

Proof. We first establish the result for an elementary process $u$ in $D^{1,2}\otimes H$ given by
\[ u_t = \sum_{j=1}^q f_j(\bar T_1,\dots,\bar T_n, Z_1,\dots,Z_n)\,\mathbf 1_{]t_{j-1},t_j]}(t), \]
where $q, n \in \mathbb N^*$, $0 = t_0 \le t_1 \le \cdots \le t_q \le T$, and for each $j$, $f_j : \mathbb R^n \times K^n \to \mathbb R$ is infinitely differentiable with bounded derivatives of any order with respect to the variables in $\mathbb R^n$, and such that $f_j(\bar T_1,\dots,\bar T_n, Z_1,\dots,Z_n)$ is $\mathcal F_{t_{j-1}}$-measurable. As a consequence, on the set $\Delta_i^j = \{0 \le s_1 \le \cdots \le s_i \le t_{j-1} \le s_{i+1} \le \cdots \le s_n \le T\}$,
\[ f_j(s_1,\dots,s_n,c_1,\dots,c_n) = f_j(s_1,\dots,s_i,t_{j-1},\dots,t_{j-1},c_1,\dots,c_i,k_1,\dots,k_1) \]
for every $(c_1,\dots,c_n) \in K^n$. Hence for each $k > i$,
\[ \forall (s_1,\dots,s_n) \in \Delta_i^j, \quad \frac{\partial f_j}{\partial s_k}(s_1,\dots,s_n,c_1,\dots,c_n) = 0. \]
As $u$ belongs to $D^{1,2}\otimes H$, one has to keep in mind that $\int_0^T u_t\,dt = 0$. We note also that the set of such elementary adapted processes is dense in the (closed) subspace of adapted processes in $D^{1,2}\otimes H$. We have
\[ D_t u_t = \sum_{j=1}^q \sum_{i=1}^n \frac{\partial f_j}{\partial s_i}(\bar T_1,\dots,\bar T_n, Z_1,\dots,Z_n)\Big(\frac{\bar T_i}{T} - \mathbf 1_{[0,\bar T_i]}(t)\Big)\mathbf 1_{]t_{j-1},t_j]}(t). \]
Since $u$ is adapted, we have for each $(i,j)$
\[ \frac{\partial f_j}{\partial s_i}(\bar T_1,\dots,\bar T_n, Z_1,\dots,Z_n)\,\mathbf 1_{[0,\bar T_i]}(t)\,\mathbf 1_{]t_{j-1},t_j]}(t) = 0. \]
Then, adopting obvious notations, we have for each $i$
\[ \int_0^T \sum_{j=1}^q \frac{\partial f_j}{\partial s_i}(\bar T_1,\dots,\bar T_n, Z_1,\dots,Z_n)\,\frac{\bar T_i}{T}\,\mathbf 1_{]t_{j-1},t_j]}(t)\,dt = \frac{\bar T_i}{T}\,\frac{\partial}{\partial s_i}\int_0^T u_s\,ds = 0, \]
hence $\int_0^T D_t u_t\,dt = 0$, which proves the result in this case. We conclude using a density argument.

The next proposition is an important property of the operator δ, which follows from the fact that D is a derivative.

Proposition 4.0. Let $F \in D^{1,2}$ and $X \in \mathrm{Dom}\,\delta$ be such that $F\delta(X) - \int_0^T D_t F\, X_t\,dt \in L^2(\Omega)$. Then $FX \in \mathrm{Dom}\,\delta$ and
\[ \delta(FX) = F\delta(X) - \int_0^T D_t F\, X_t\,dt. \]
Proof. For every $G \in \mathcal S$, we have
\[ E[\delta(FX)\, G] = E\Big[\int_0^T F X_t\, D_t G\,dt\Big] = E\Big[\int_0^T X_t\big(D_t(GF) - G\, D_t F\big)dt\Big] = E\Big[G\Big(F\delta(X) - \int_0^T D_t F\, X_t\,dt\Big)\Big]. \]

In particular, if $X = m \in H \subset \mathrm{Dom}\,\delta$, then $\delta(m) = \int_0^T m(t)\,d\tilde N_t$ and $\int_0^T D_t F\, m(t)\,dt = D_m F$. Hence we have the simpler formula
\[ \delta(mF) = F\int_0^T m(t)\,d\tilde N_t - D_m F. \]
Remark .5. In this approach, the Clark-Ocone formula does not hold. For example, we have already shown that $D_m N_T = 0$ for all $m \in H$, so $D_t N_T = 0$ for all $t \in [0,T]$, but $N_T$ is not equal to $E[N_T]$.

5 An absolute continuity criterion

Recall that $A_0 = \{N_T = 0\}$ and $A_n^{i_1,\dots,i_n} = \{N_T = n,\, Z_1 = k_{i_1}, \dots, Z_n = k_{i_n}\}$ for all $n \in \mathbb N^*$, $i_1,\dots,i_n \in \{1,\dots,l\}$.

Lemma 5.0. Let $\Delta_n = \{(t_1,\dots,t_n) : 0 < t_1 < \cdots < t_n < T\}$ and $\lambda_1,\dots,\lambda_n$ as in Proposition 2.0. The joint distribution of $(T_1,\dots,T_n)$ conditionally on $A_n^{i_1,\dots,i_n}$ has a density $k_n^{i_1,\dots,i_n} : \mathbb R^n \to \mathbb R_+$,
\[ t = (t_1,\dots,t_n) \mapsto k_n^{i_1,\dots,i_n}(t) = n!\, \mathbf 1_{\Delta_n}(t) \prod_{\substack{i=1 \\ \lambda_{i+1}\ne\lambda_i}}^{n} \frac{(\lambda_{i+1}-\lambda_i)\, e^{t_i(\lambda_{i+1}-\lambda_i)}}{e^{(\lambda_{i+1}-\lambda_i)T} - 1}, \]
with respect to the Lebesgue measure $\nu_n$ on $\mathbb R^n$.

Proof. Let $f$ be a measurable function on $\mathbb R^n$. Then

\begin{align*}
E[f(T_1,\dots,T_n) \mid A_n^{i_1,\dots,i_n}] &= E[\mathbf 1_{A_n^{i_1,\dots,i_n}} f(T_1,\dots,T_n)] / P(A_n^{i_1,\dots,i_n}) \\
&= \frac{E[f(T_1,\dots,T_n)\,\mathbf 1_{\{N_T=n\}}\,\mathbf 1_{\{Z_1=k_{i_1},\dots,Z_n=k_{i_n}\}}]}{E[\mathbf 1_{\{N_T=n\}}\,\mathbf 1_{\{Z_1=k_{i_1},\dots,Z_n=k_{i_n}\}}]} \\
&= \frac{E[f(T_1,\dots,T_n)\,\mathbf 1_{\{N_T=n\}} \mid Z_1=k_{i_1},\dots,Z_n=k_{i_n}]}{E[\mathbf 1_{\{N_T=n\}} \mid Z_1=k_{i_1},\dots,Z_n=k_{i_n}]} \\
&= \frac{E[f(T_1,\dots,T_n)\,\mathbf 1_{\{T_n\le T<T_{n+1}\}} \mid Z_1=k_{i_1},\dots,Z_n=k_{i_n}]}{E[\mathbf 1_{\{T_n\le T<T_{n+1}\}} \mid Z_1=k_{i_1},\dots,Z_n=k_{i_n}]} \\
&= \frac{\displaystyle\iint_{\{0<t_1<\cdots<t_n\le T<t_{n+1}\}} f(t_1,\dots,t_n)\prod_{i=1}^{n+1}\lambda_i\, e^{-\lambda_i(t_i-t_{i-1})}\,dt_1\cdots dt_{n+1}}{\displaystyle\iint_{\{0<t_1<\cdots<t_n\le T<t_{n+1}\}} \prod_{i=1}^{n+1}\lambda_i\, e^{-\lambda_i(t_i-t_{i-1})}\,dt_1\cdots dt_{n+1}},
\end{align*}
and integrating out $t_{n+1}$ and computing the remaining integrals yields the stated density.

Remark .6. The density $k_n^{i_1,\dots,i_n}$ is a positive function of class $C^\infty$ on $\Delta_n$. In particular, in the case where $C$ is a homogeneous Poisson process, we have $\lambda_1 = \lambda_2 = \cdots = \lambda_{n+1}$, so $k_n^{i_1,\dots,i_n}(t_1,\dots,t_n) = n!\,\mathbf 1_{\Delta_n}(t)$, which means that $(T_1,\dots,T_n)$ has uniform distribution on $\Delta_n$ conditionally on $A_n^{i_1,\dots,i_n}$.

Now fix $n \in \mathbb N^*$ and $i_1,\dots,i_n \in \{1,\dots,l\}$. We consider $d_n^{i_1,\dots,i_n}$, the set of $\mathcal B(\mathbb R^n)$-measurable functions $u$ in $L^2(k_n^{i_1,\dots,i_n}\,dt)$ such that for any $i \in \{1,\dots,n\}$ and $\nu_{n-1}$-almost all $\bar t = (t_1,\dots,t_{i-1},t_{i+1},\dots,t_n) \in \mathbb R^{n-1}$, the map $u_{\bar t}^{(i)} = u(t_1,\dots,t_{i-1},\cdot,t_{i+1},\dots,t_n)$ has an absolutely continuous version $\tilde u_{\bar t}^{(i)}$ on $\{t_i : (t_1,\dots,t_n) \in \Delta_n\}$ such that
\[ \sum_{i,j=1}^n \frac{\partial u}{\partial t_i}\frac{\partial u}{\partial t_j}\Big(t_i\wedge t_j - \frac{t_i t_j}{T}\Big) \in L^1(k_n^{i_1,\dots,i_n}\,dt), \]
where $\frac{\partial u}{\partial t_i} = \frac{\partial \tilde u_{\bar t}^{(i)}}{\partial s}$. We consider the following quadratic form on $d_n^{i_1,\dots,i_n}$:
\[ e_n^{i_1,\dots,i_n}(u,v) = \frac12 \int_{\mathbb R^n} \sum_{i,j=1}^n \frac{\partial u}{\partial t_i}\frac{\partial v}{\partial t_j}\Big(t_i\wedge t_j - \frac{t_i t_j}{T}\Big)\, k_n^{i_1,\dots,i_n}\,dt, \quad \forall u, v \in d_n^{i_1,\dots,i_n}. \]

Proposition 5.0.

1. The bilinear form $(d_n^{i_1,\dots,i_n}, e_n^{i_1,\dots,i_n})$ defines a local Dirichlet form on $L^2(k_n^{i_1,\dots,i_n}\,dt)$ which admits a carré du champ operator $\gamma_n$ and a gradient operator $\tilde D^n$ given by
\[ \gamma_n[u,v](t) = \sum_{i,j=1}^n \frac{\partial u}{\partial t_i}\frac{\partial v}{\partial t_j}\Big(t_i\wedge t_j - \frac{t_i t_j}{T}\Big), \qquad \tilde D^n_s u(t) = -\sum_{i=1}^n \frac{\partial u}{\partial t_i}\Big(\mathbf 1_{[0,t_i]}(s) - \frac{t_i}{T}\Big), \]
for all $u, v \in d_n^{i_1,\dots,i_n}$, $t = (t_1,\dots,t_n) \in \mathbb R^n$, $s \in [0,T]$.

2. The structure $(\mathbb R^n, \mathcal B(\mathbb R^n), k_n^{i_1,\dots,i_n}\,dt, d_n^{i_1,\dots,i_n}, \gamma_n)$ satisfies the (EID) property, i.e., for every $d \in \mathbb N^*$ and $u = (u_1,\dots,u_d) \in (d_n^{i_1,\dots,i_n})^d$,
\[ u_*\big[(\det\gamma_n[u])\cdot k_n^{i_1,\dots,i_n}\,\nu_n\big] \ll \nu_d, \]
where $\gamma_n[u]$ denotes the matrix $(\gamma_n(u_i,u_j))_{1\le i,j\le d}$.

Proof. The results are obtained by applying Proposition 1 and Theorem 2 in [4] with $d = d_n^{i_1,\dots,i_n}$, $k = k_n^{i_1,\dots,i_n}$ and $\xi_{ij}(t) = t_i\wedge t_j - \frac{t_i t_j}{T}$, $\forall 1\le i,j\le n$, $t = (t_1,\dots,t_n)$. We only have to prove that $\xi$ is locally elliptic on $\Delta_n$. Indeed, for every $\alpha = (\alpha_1,\dots,\alpha_n) \in \mathbb R^n$ and $t = (t_1,\dots,t_n) \in \Delta_n$, we have
\[ \alpha^*\xi(t)\alpha = \sum_{i,j=1}^n \alpha_i\alpha_j\Big(t_i\wedge t_j - \frac{t_i t_j}{T}\Big) = \int_0^T \Big(\sum_{i=1}^n \alpha_i\Big(\mathbf 1_{[0,t_i]}(s) - \frac{t_i}{T}\Big)\Big)^2\,ds \ge 0. \]
Equality occurs if and only if $\sum_{i=1}^n \alpha_i(\mathbf 1_{[0,t_i]}(s) - \frac{t_i}{T}) = 0$ for all $s \in [0,T]$. Taking $t_n < s < T$, we obtain $\sum_{i=1}^n \alpha_i t_i = 0$, and therefore $\sum_{i=1}^n \alpha_i\mathbf 1_{[0,t_i]}(s) = 0$ for all $s \in [0,T]$. Taking $t_{n-1} < s < t_n$, we obtain $\alpha_n = 0$, so $\sum_{i=1}^{n-1} \alpha_i\mathbf 1_{[0,t_i]}(s) = 0$ for all $s \in [0,T]$. Continuing this process, we finally obtain $\alpha_i = 0$ for all $i = 1,\dots,n$.
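The matrix $\xi_{ij} = t_i\wedge t_j - t_it_j/T$ in this proof is the covariance matrix of a Brownian bridge on $[0,T]$ evaluated at the points $t_i$. A quick numerical sketch (with illustrative points and coefficients) confirms both the integral identity used above and positive definiteness on $\Delta_n$.

```python
import numpy as np

T = 1.0
t = np.array([0.2, 0.5, 0.7, 0.9])                  # 0 < t_1 < ... < t_n < T
xi = np.minimum.outer(t, t) - np.outer(t, t) / T    # ξ_ij = t_i ∧ t_j − t_i t_j / T

# Positive definiteness at distinct interior points.
eig_min = float(np.linalg.eigvalsh(xi).min())

# Integral identity: α*ξα = ∫_0^T (Σ_i α_i (1_{[0,t_i]}(s) − t_i/T))^2 ds
alpha = np.array([1.0, -2.0, 3.0, -1.0])
quad = float(alpha @ xi @ alpha)

ds = T / 200000
s = np.arange(200000) * ds
f = (s[None, :] <= t[:, None]).astype(float) - (t / T)[:, None]  # rows: 1_{[0,t_i]}(s) − t_i/T
integral = float(((alpha @ f) ** 2).sum() * ds)                  # left Riemann sum
```

Since the integrand is a square, the quadratic form is nonnegative by construction, which is exactly the ellipticity argument above.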

Remark .7. Since the density function $k_n^{i_1,\dots,i_n}$ is bounded on $\Delta_n$, both below and above, by positive constants, we have $L^p(k_n^{i_1,\dots,i_n}\,dt) = L^p(dt)$, so the domain $d_n^{i_1,\dots,i_n}$ does not depend on $i_1,\dots,i_n$.

For every measurable random variable $F : \Omega \to \mathbb R$, there exist a constant $a \in \mathbb R$ and measurable functions $f_n^{i_1,\dots,i_n} : \mathbb R^n \to \mathbb R$, $\forall n \in \mathbb N^*$, $i_1,\dots,i_n \in \{1,\dots,l\}$, such that
\[ F = a\mathbf 1_{A_0} + \sum_{n=1}^\infty \sum_{i_1,\dots,i_n\in\{1,\dots,l\}} \mathbf 1_{A_n^{i_1,\dots,i_n}}\, f_n^{i_1,\dots,i_n}(T_1,\dots,T_n) \quad P\text{-a.s.} \tag{5.11} \]

From now on, we will write $\sum_{n,i}$ instead of $\sum_{n=1}^\infty\sum_{i_1,\dots,i_n\in\{1,\dots,l\}}$ for simplicity.

Proposition 5.0. For every measurable random variable $F$ of the form (5.11), we have $F \in D^{1,2}$ if and only if $f_n^{i_1,\dots,i_n} \in d_n^{i_1,\dots,i_n}$, $\forall n \in \mathbb N^*$, $\forall i_1,\dots,i_n \in \{1,\dots,l\}$, and
\[ \sum_{n,i} P(A_n^{i_1,\dots,i_n})\, \|f_n^{i_1,\dots,i_n}\|^2_{d_n^{i_1,\dots,i_n}} < \infty. \]
Moreover, if $F \in D^{1,2}$ then
\[ \|F\|^2_{D^{1,2}} = a^2 P(A_0) + \sum_{n,i} P(A_n^{i_1,\dots,i_n})\, \|f_n^{i_1,\dots,i_n}\|^2_{d_n^{i_1,\dots,i_n}}. \tag{5.12} \]
Proof. For every $F$ of the form (5.11), we have

\begin{align*}
\|F\|^2_{L^2(\Omega)} = E[F^2] &= a^2 P(A_0) + \sum_{n,i} E\Big[\Big(\mathbf 1_{A_n^{i_1,\dots,i_n}}\, f_n^{i_1,\dots,i_n}(T_1,\dots,T_n)\Big)^2\Big] \\
&= a^2 P(A_0) + \sum_{n,i} P(A_n^{i_1,\dots,i_n})\, E\big[f_n^{i_1,\dots,i_n}(T_1,\dots,T_n)^2 \mid A_n^{i_1,\dots,i_n}\big] \\
&= a^2 P(A_0) + \sum_{n,i} P(A_n^{i_1,\dots,i_n}) \int_{\mathbb R^n} f_n^{i_1,\dots,i_n}(t)^2\, k_n^{i_1,\dots,i_n}(t)\,dt \\
&= a^2 P(A_0) + \sum_{n,i} P(A_n^{i_1,\dots,i_n})\, \|f_n^{i_1,\dots,i_n}\|^2_{L^2(k_n^{i_1,\dots,i_n}dt)}.
\end{align*}
Moreover, the Malliavin derivative is
\[ D_s F = \sum_{n,i} \mathbf 1_{A_n^{i_1,\dots,i_n}} \Big( \sum_{i=1}^n \frac{\partial f_n^{i_1,\dots,i_n}}{\partial t_i}(T_1,\dots,T_n)\Big(\frac{T_i}{T} - \mathbf 1_{[0,T_i]}(s)\Big) \Big) = \sum_{n,i} \mathbf 1_{A_n^{i_1,\dots,i_n}}\, \tilde D^n_s f_n^{i_1,\dots,i_n}(T_1,\dots,T_n), \]

which implies
\begin{align*}
\|DF\|^2_{L^2(\Omega;H)} &= \int_0^T E[(D_s F)^2]\,ds = \int_0^T E\Big[\Big(\sum_{n,i}\mathbf 1_{A_n^{i_1,\dots,i_n}}\, \tilde D^n_s f_n^{i_1,\dots,i_n}(T_1,\dots,T_n)\Big)^2\Big]ds \\
&= \sum_{n,i} P(A_n^{i_1,\dots,i_n}) \int_0^T E\big[\big(\tilde D^n_s f_n^{i_1,\dots,i_n}(T_1,\dots,T_n)\big)^2 \mid A_n^{i_1,\dots,i_n}\big]\,ds \\
&= \sum_{n,i} P(A_n^{i_1,\dots,i_n}) \int_0^T \int_{\mathbb R^n} \big(\tilde D^n_s f_n^{i_1,\dots,i_n}(t)\big)^2\, k_n^{i_1,\dots,i_n}(t)\,dt\,ds \\
&= \sum_{n,i} P(A_n^{i_1,\dots,i_n})\, \|\tilde D^n f_n^{i_1,\dots,i_n}\|^2_{L^2((\mathbb R^n,\, k_n^{i_1,\dots,i_n}dt);H)} \\
&= \sum_{n,i} P(A_n^{i_1,\dots,i_n})\, e_n^{i_1,\dots,i_n}[f_n^{i_1,\dots,i_n}].
\end{align*}

Therefore,
\[ \|F\|^2_{L^2(\Omega)} + \|DF\|^2_{L^2(\Omega;H)} = a^2 P(A_0) + \sum_{n,i} P(A_n^{i_1,\dots,i_n})\, \|f_n^{i_1,\dots,i_n}\|^2_{d_n^{i_1,\dots,i_n}}. \]

From here we obtain the condition for $F \in D^{1,2}$. The equation (5.12) is then obvious, since $\|F\|^2_{D^{1,2}} = \|F\|^2_{L^2(\Omega)} + \mathcal E(F) = \|F\|^2_{L^2(\Omega)} + \|DF\|^2_{L^2(\Omega;H)}$.

Remark .8. In summary, we have the following relations between the Dirichlet structure $(\Omega, \mathcal F, P, D^{1,2}, \Gamma)$ and the Dirichlet structures $(\mathbb R^n, \mathcal B(\mathbb R^n), k_n^{i_1,\dots,i_n}dt, d_n^{i_1,\dots,i_n}, \gamma_n)$: for every $F \in D^{1,2}$ having the form (5.11),

1. $\|F\|^2_{L^2(\Omega)} = a^2 P(A_0) + \sum_{n,i} P(A_n^{i_1,\dots,i_n})\,\|f_n^{i_1,\dots,i_n}\|^2_{L^2(k_n^{i_1,\dots,i_n}dt)}$;

2. $D_s F = \sum_{n,i} \mathbf 1_{A_n^{i_1,\dots,i_n}}\, \tilde D^n_s f_n^{i_1,\dots,i_n}(T_1,\dots,T_n)$, $\forall s \in [0,T]$;

3. $\Gamma[F] = \sum_{n,i} \mathbf 1_{A_n^{i_1,\dots,i_n}}\, \gamma_n[f_n^{i_1,\dots,i_n}](T_1,\dots,T_n)$;

4. $\mathcal E(F) = \sum_{n,i} P(A_n^{i_1,\dots,i_n})\, e_n^{i_1,\dots,i_n}[f_n^{i_1,\dots,i_n}]$;

5. $\|F\|^2_{D^{1,2}} = a^2 P(A_0) + \sum_{n,i} P(A_n^{i_1,\dots,i_n})\,\|f_n^{i_1,\dots,i_n}\|^2_{d_n^{i_1,\dots,i_n}}$.

Now let $d \in \mathbb N^*$ and $F = (F_1,\dots,F_d) \in (D^{1,2})^d$; we denote by $\Gamma[F]$ the $d\times d$ symmetric matrix $(\Gamma[F_i,F_j])_{1\le i,j\le d}$.

Theorem 5.1 (Energy image density property). If $F \in (D^{1,2})^d$, then $F_*[(\det\Gamma[F])\cdot P]$ is absolutely continuous with respect to the Lebesgue measure $\nu_d$ on $\mathbb R^d$.

Proof. Let $B \subset \mathbb R^d$ be such that $\nu_d(B) = 0$. We have to prove that $F_*[(\det\Gamma[F])\cdot P](B) = 0$, or equivalently, $E[\mathbf 1_B(F)\det\Gamma[F]] = 0$. Thanks to the previous proposition, we write

\[ F = a\mathbf 1_{A_0} + \sum_{n=1}^\infty \sum_{i_1,\dots,i_n\in\{1,\dots,l\}} \mathbf 1_{A_n^{i_1,\dots,i_n}}\, f_n^{i_1,\dots,i_n}(T_1,\dots,T_n), \]
with $f_n^{i_1,\dots,i_n} \in (d_n^{i_1,\dots,i_n})^d$, $\forall n \in \mathbb N^*$, $\forall i_1,\dots,i_n \in \{1,\dots,l\}$. Then
\begin{align*}
\Gamma[F] &= \sum_{n,i} \mathbf 1_{A_n^{i_1,\dots,i_n}}\,\gamma_n[f_n^{i_1,\dots,i_n}](T_1,\dots,T_n), \\
E[\mathbf 1_B(F)\det\Gamma[F]] &= \sum_{n=1}^\infty \sum_{i_1,\dots,i_n\in\{1,\dots,l\}} P(A_n^{i_1,\dots,i_n}) \int_{\mathbb R^n} \mathbf 1_B \det\gamma_n[f_n^{i_1,\dots,i_n}]\, k_n^{i_1,\dots,i_n}\,dt.
\end{align*}
By applying Part 2 of Proposition 5.0 with $u = f_n^{i_1,\dots,i_n}$, we obtain
\[ \int_{\mathbb R^n} \mathbf 1_B \det\gamma_n[f_n^{i_1,\dots,i_n}]\, k_n^{i_1,\dots,i_n}\,dt = 0, \quad \forall n \in \mathbb N^*. \]

Hence E[1B(F )detΓ[F ]] = 0.

6 Application to SDEs involving the Markov chain

Recalling that the original works on Malliavin calculus aimed to study the existence and smoothness of densities of solutions to stochastic differential equations, this section is devoted to inferring the existence, in some sense, of a density from the non-degeneracy of the Malliavin covariance matrix, and to deducing a simple result for Markov chain driven stochastic differential equations. First notice that no random variable $F$ on $\Omega$ can have a density, nor can the Malliavin variance $\int_0^T |D_t F|^2\,dt$ be a.s. strictly positive. Indeed, from the decomposition
\[ F = a\mathbf 1_{A_0} + \sum_{n=1}^\infty \sum_{i_1,\dots,i_n\in\{1,\dots,l\}} \mathbf 1_{A_n^{i_1,\dots,i_n}}\, f_n^{i_1,\dots,i_n}(T_1,\dots,T_n) \]
we deduce that the law of $F$ has a point mass at $a$ (since $P(F = a) \ge P(N_T = 0) = P(T_1 > T) = e^{-\lambda_1 T} > 0$), and $\int_0^T |D_t F|^2\,dt = 0$ on $\{N_T = 0\}$. Therefore, we shall rather give conditions under which $(\mathbf 1_{\{N_T\ge1\}}P)\circ F^{-1}$ has a density. We now study the regularity of the solution of a stochastic differential equation driven by the Markov chain $C$. Let $d \in \mathbb N^*$ and consider the SDE
\[ X_t = x_0 + \int_0^t f(s, X_s, C_s)\,ds + \int_0^t g(s, X_{s-}, C_s)\,dN_s, \tag{6.13} \]

or, in differential form,
\[ dX_t = f(t, X_t, C_t)\, dt + g(t, X_{t-}, C_t)\, dN_t, \qquad X_0 = x_0, \]

where $x_0 \in \mathbb{R}^d$ is fixed, $(N_t)_{t\geq 0}$ is the cumulative jump process of $(C_t)_{t\geq 0}$ defined by (2.3), and the functions $f, g : \mathbb{R}^+ \times \mathbb{R}^d \times K \to \mathbb{R}^d$ are measurable and satisfy

1. $\forall t \in \mathbb{R}^+$, $k \in K$, the maps $x \mapsto f(t,x,k)$ and $x \mapsto g(t,x,k)$ are $C^1$;

2. $\sup_{(t,x,k)} |\nabla_x f(t,x,k)| + |\nabla_x g(t,x,k)| < +\infty$.

Remark .9. Here, the term $g(s, X_{s-}, C_s)$ is not predictable, but this is not a real problem since $N$ is of finite variation, so that this equation may be solved pathwise. By adapting the proofs in [2], [13], or by using the explicit expression of the solution given below, it is clear that Equation (6.13) admits a unique solution $X$ such that

\[ \forall T > 0, \quad \sup_{t\in[0,T]} |X_t| \in \bigcap_{p\geq 1} L^p(\Omega). \]

We would like to apply Theorem 5.1 to $F = X_T$. For each $k \in K$, let $\{\Phi_{s,t}(x,k),\ t \geq s\}$ denote the deterministic flow defined by

\[ \Phi_{s,t}(x,k) = x + \int_s^t f(u, \Phi_{s,u}(x,k), k)\, du, \]
or, in differential form,
\[ \partial_t \Phi_{s,t}(x,k) = f(t, \Phi_{s,t}(x,k), k), \qquad \Phi_{s,s}(x,k) = x. \tag{6.14} \]
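The flow (6.14) is an ordinary ODE, so it can be approximated by any standard one-step scheme. A minimal sketch in Python, using the explicit Euler method; the drift $f$ below is a toy choice, not from the paper:

```python
import math

def flow(f, s, t, x, k, n_steps=1000):
    """Approximate the deterministic flow Phi_{s,t}(x, k) of (6.14),
    i.e. d/dt Phi = f(t, Phi, k) with Phi_{s,s} = x, by explicit Euler."""
    h = (t - s) / n_steps
    u, y = s, x
    for _ in range(n_steps):
        y = y + h * f(u, y, k)
        u += h
    return y

# Toy example: f(t, x, k) = -x gives Phi_{0,t}(x, k) = x * exp(-t).
approx = flow(lambda t, x, k: -x, 0.0, 1.0, 2.0, k=0)
exact = 2.0 * math.exp(-1.0)
```

A higher-order scheme (e.g. Runge–Kutta) would of course converge faster; Euler is enough to illustrate the object.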

We can see that $\Phi_{s,t}(x,k)$ exists and is unique for every $x \in \mathbb{R}^d$, $k \in K$. On the set $\{N_t = 0\}$, $(X_s)_{0\leq s\leq t}$ is the solution of
\[ dX_s = f(s, X_s, Z_0)\, ds, \qquad X_0 = x_0. \]
Hence $X_s = \Phi_{0,s}(x_0, Z_0)$, $0 \leq s \leq t$, and in particular $X_t = \Phi_{0,t}(x_0, Z_0)$. In case $N_t \geq 1$, we have the following result:

Proposition 6.1. Let $\Psi$ be the map $(t,x,k) \in \mathbb{R}^+ \times \mathbb{R}^d \times K \mapsto \Psi(t,x,k) = x + g(t,x,k)$. Then for any $t \geq 0$ and $i \in \mathbb{N}^*$,

\[ X_t = \Phi_{T_i,t}(\cdot, Z_i) \circ \Psi(T_i, \cdot, Z_i) \circ \cdots \circ \Phi_{T_1,T_2}(\cdot, Z_1) \circ \Psi(T_1, \cdot, Z_1) \circ \Phi_{0,T_1}(x_0, Z_0) \tag{6.15} \]
$\mathbb{P}$-a.e. on the set $\{N_t = i\}$.

Proof. We will prove (6.15) by induction. On the set $\{N_t = 1\}$, the process $(N_t)$ has a single jump, at $T_1$. The equation remains unchanged for $0 \leq s < T_1$, so $X_s = \Phi_{0,s}(x_0, Z_0)$, $0 \leq s < T_1$. For $T_1 \leq s \leq t$, $X_s$ satisfies the equation

\[ dX_s = f(s, X_s, Z_1)\, ds, \qquad X_{T_1} = \Psi(T_1, X_{T_1-}, Z_1). \]

We obtain $X_s = \Phi_{T_1,s}(X_{T_1}, Z_1) = \Phi_{T_1,s}(\Psi(T_1, \Phi_{0,T_1}(x_0, Z_0), Z_1), Z_1)$, $T_1 \leq s \leq t$. In particular,
\[ X_t = \Phi_{T_1,t}(X_{T_1}, Z_1) = \Phi_{T_1,t}(\cdot, Z_1) \circ \Psi(T_1, \cdot, Z_1) \circ \Phi_{0,T_1}(x_0, Z_0). \]

Now suppose that (6.15) holds up to $i$. Consider the set $\{N_t = i+1\}$, i.e., the set of trajectories of the Markov chain having $i+1$ jumps up to time $t$. For $T_{i+1} \leq s \leq t$, $X_s$ satisfies the equation
\[ dX_s = f(s, X_s, Z_{i+1})\, ds, \qquad X_{T_{i+1}} = \Psi(T_{i+1}, X_{T_{i+1}-}, Z_{i+1}). \]

Hence $X_s = \Phi_{T_{i+1},s}(\Psi(T_{i+1}, X_{T_{i+1}-}, Z_{i+1}), Z_{i+1})$. We obtain the formula on $\{N_t = i+1\}$ by using the induction assumption
\[ X_{T_{i+1}-} = \Phi_{T_i,T_{i+1}}(\cdot, Z_i) \circ \Psi(T_i, \cdot, Z_i) \circ \cdots \circ \Phi_{T_1,T_2}(\cdot, Z_1) \circ \Psi(T_1, \cdot, Z_1) \circ \Phi_{0,T_1}(x_0, Z_0). \]
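Formula (6.15) says that on $\{N_t = i\}$ the solution is obtained by alternating the deterministic flow $\Phi$ between consecutive jump times with the jump map $\Psi(t,x,k) = x + g(t,x,k)$. A minimal numerical sketch, assuming toy coefficients and prescribed jump times and states:

```python
def euler_flow(f, s, t, x, k, n=400):
    # Phi_{s,t}(x, k): solve dX = f(u, X, k) du from s to t (explicit Euler)
    h = (t - s) / n
    u, y = s, x
    for _ in range(n):
        y += h * f(u, y, k)
        u += h
    return y

def X_via_composition(f, g, x0, t, jump_times, states):
    """Formula (6.15): run the flow Phi between jumps and apply the jump map
    Psi(T, x, Z) = x + g(T, x, Z) at each jump time T_1 < ... < T_i <= t.
    states = [Z_0, Z_1, ..., Z_i]."""
    x, s, k = x0, 0.0, states[0]
    for T, Z in zip(jump_times, states[1:]):
        x = euler_flow(f, s, T, x, k)      # flow on [s, T) with state k
        x = x + g(T, x, Z)                 # jump map Psi(T, ., Z)
        s, k = T, Z
    return euler_flow(f, s, t, x, k)       # final flow piece on [T_i, t]

# Toy check with f = 0: X_t is just x0 plus the accumulated jumps.
f0 = lambda t, x, k: 0.0
g0 = lambda t, x, k: float(k)              # jump size = current state (assumption)
x_t = X_via_composition(f0, g0, 1.0, 2.0, [0.5, 1.5], [0, 2, 3])
# here x_t = 1.0 + 2 + 3 = 6.0
```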

From Equation (6.14), the process $\partial_t\Phi_{s,t}(x,k)$ satisfies
\[ \partial_t\big(\partial_t\Phi_{s,t}(x,k)\big) = \nabla_x f(t, \Phi_{s,t}(x,k), k)\, \partial_t\Phi_{s,t}(x,k), \qquad \partial_t\Phi_{s,s}(x,k) = f(s,x,k), \]
from which we deduce
\[ \partial_t\Phi_{s,t}(x,k) = f(s,x,k)\exp\left(\int_s^t \nabla_x f(u, \Phi_{s,u}(x,k), k)\, du\right). \]

Similarly, the process $\nabla_x\Phi_{s,t}(x,k)$ satisfies
\[ \partial_t\big(\nabla_x\Phi_{s,t}(x,k)\big) = \nabla_x f(t, \Phi_{s,t}(x,k), k)\, \nabla_x\Phi_{s,t}(x,k), \qquad \nabla_x\Phi_{s,s}(x,k) = Id. \]
Hence,
\[ \nabla_x\Phi_{s,t}(x,k) = \exp\left(\int_s^t \nabla_x f(u, \Phi_{s,u}(x,k), k)\, du\right). \]
We define
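In dimension one, this closed form for $\nabla_x\Phi_{s,t}$ can be checked against a finite difference of the flow itself. A sketch under toy assumptions ($f(t,x) = \sin x$, no dependence on $k$):

```python
import math

def flow(f, s, t, x, n=2000):
    # Euler approximation of Phi_{s,t}(x) for d Phi = f(u, Phi) du
    h = (t - s) / n
    u, y = s, x
    for _ in range(n):
        y += h * f(u, y)
        u += h
    return y

def grad_flow_closed_form(df, f, s, t, x, n=2000):
    # nabla_x Phi_{s,t}(x) = exp( int_s^t df(u, Phi_{s,u}(x)) du )   (d = 1)
    h = (t - s) / n
    u, y, acc = s, x, 0.0
    for _ in range(n):
        acc += h * df(u, y)
        y += h * f(u, y)
        u += h
    return math.exp(acc)

# Toy drift f(t, x) = sin x, so that grad_x f(t, x) = cos x.
f = lambda t, x: math.sin(x)
df = lambda t, x: math.cos(x)
eps = 1e-5
fd = (flow(f, 0.0, 1.0, 0.7 + eps) - flow(f, 0.0, 1.0, 0.7 - eps)) / (2 * eps)
cf = grad_flow_closed_form(df, f, 0.0, 1.0, 0.7)
```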

\[ K_t(x) = \nabla_x X_t(x), \quad \forall t \geq 0, \tag{6.16} \]
\[ K_t^s(x) = (\nabla_x X_t^s)(X_s(x)), \quad \forall t \geq s \geq 0, \tag{6.17} \]
where $(X_t^s(x))_{t\geq s}$ is the solution of
\[ X_t^s = x + \int_s^t f(u, X_u^s, C_u)\, du + \int_s^t g(u, X_{u-}^s, C_u)\, dN_u. \tag{6.18} \]

The process $(K_t)$ defined by (6.16) satisfies the SDE
\[ K_t = Id + \int_0^t \nabla_x f(s, X_s, C_s)\, K_s\, ds + \int_0^t \nabla_x g(s, X_{s-}, C_s)\, K_{s-}\, dN_s. \tag{6.19} \]

Suppose that $\det[Id + \nabla g(\cdot, x, \cdot)] \neq 0$ and let $(\bar{K}_t)$ be the solution of the SDE
\[ \bar{K}_t = Id - \int_0^t \nabla_x f(s, X_s, C_s)\, \bar{K}_s\, ds - \int_0^t (Id + \nabla_x g(s, X_{s-}, C_s))^{-1} \nabla_x g(s, X_{s-}, C_s)\, \bar{K}_{s-}\, dN_s. \tag{6.20} \]

Proposition 6.1. If $\det[Id + \nabla g(\cdot, x, \cdot)] \neq 0$, the processes $(K_t)$ and $(\bar{K}_t)$ defined by (6.16) and (6.20) satisfy $K_t\bar{K}_t = Id$, $\forall t \geq 0$.

Proof. Indeed, the process $Y_t = K_t\bar{K}_t$ satisfies $Y_0 = Id$, and
\[ dY_t = \bar{K}_{t-}\, dK_t + K_{t-}\, d\bar{K}_t + d[K, \bar{K}]_t \]
\[ = Y_{t-}\big(\nabla_x f(t, X_t, C_t)\, dt + \nabla_x g(t, X_{t-}, C_t)\, dN_t\big) + Y_{t-}\big(-\nabla_x f(t, X_t, C_t)\, dt - (Id + \nabla_x g(t, X_{t-}, C_t))^{-1}\nabla_x g(t, X_{t-}, C_t)\, dN_t\big) \]
\[ \quad - Y_{t-}(\nabla_x g(t, X_{t-}, C_t))^2 (Id + \nabla_x g(t, X_{t-}, C_t))^{-1}\, dN_t = 0. \]

From Equations (6.19) and (6.20), we obtain the recurrence property of $K_t$ and $\bar{K}_t$:
\[ K_{T_i} = (Id + \nabla_x g(T_i, X_{T_i-}, Z_i))\, K_{T_i-}, \qquad \bar{K}_{T_i} = (Id + \nabla_x g(T_i, X_{T_i-}, Z_i))^{-1}\, \bar{K}_{T_i-}. \]
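In the scalar case this recurrence says that each jump multiplies $K$ by $(1 + \partial_x g)$ and $\bar{K}$ by its inverse, so the product $K_t\bar{K}_t$ stays equal to the identity, consistently with the proposition. A toy sketch (the values of $\partial_x g$ at the jump times are assumptions; the continuous parts, $\exp(\int \partial_x f)$ and its inverse, cancel in the same way):

```python
# Scalar (d = 1) sketch of the jump recurrence for K and Kbar.
dg_at_jumps = [0.3, -0.5, 0.1]   # toy values of grad_x g at the jump times

K, Kbar = 1.0, 1.0
for dg in dg_at_jumps:
    assert 1.0 + dg != 0.0       # invertibility condition det(Id + grad g) != 0
    K *= (1.0 + dg)              # K_{T_i} = (1 + dg) K_{T_i-}
    Kbar /= (1.0 + dg)           # Kbar_{T_i} = (1 + dg)^{-1} Kbar_{T_i-}
product = K * Kbar               # remains equal to 1 after every jump
```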

Proposition 6.1. If $\det[Id + \nabla g(\cdot, x, \cdot)] \neq 0$, the processes $(K_t)$, $(\bar{K}_t)$, and $(K_t^s)$ defined by (6.16), (6.20), and (6.17) satisfy $K_t^s(x) = K_t(x)\bar{K}_s(x)$, $\forall t \geq s \geq 0$.

Proof. We fix $s \geq 0$. From (6.18), we deduce the SDE satisfied by $K_t^s$:
\[ K_t^s = Id + \int_s^t \nabla_x f(u, X_u^s(X_s(x)), C_u)\, K_u^s\, du + \int_s^t \nabla_x g(u, X_{u-}^s(X_s(x)), C_u)\, K_{u-}^s\, dN_u, \]
or
\[ K_t^s = Id + \int_s^t \nabla_x f(u, X_u, C_u)\, K_u^s\, du + \int_s^t \nabla_x g(u, X_{u-}, C_u)\, K_{u-}^s\, dN_u, \quad \forall t \geq s. \]

In addition, $K_t$ satisfies
\[ K_t = K_s + \int_s^t \nabla_x f(u, X_u, C_u)\, K_u\, du + \int_s^t \nabla_x g(u, X_{u-}, C_u)\, K_{u-}\, dN_u, \quad \forall t \geq s. \tag{6.21} \]
Therefore, $K_t = K_t^s K_s$.

Now we study the relationship between the Malliavin derivative and the derivative of the flow of the solution. In the Brownian case, we have (cf. [5])
\[ D_s X_t = f(s, X_s, C_s)\, K_t^s, \quad \forall\, 0 \leq s \leq t. \]

In our case, the result is as follows:

Proposition 6.1. Let $\varphi : \mathbb{R}^+ \times \mathbb{R}^d \times K \to \mathbb{R}^d$ be defined by: $\forall (t,x,k) \in \mathbb{R}^+ \times \mathbb{R}^d \times K$,
\[ \varphi(t,x,k) = f(t, x + g(t,x,k), k) - (Id + \nabla_x g(t,x,k))\, f(t,x,k) - \partial_t g(t,x,k). \]

Then we have

1. $\displaystyle D_s X_T = -\int_0^T K_T^t\, \varphi(t, X_{t-}, C_t)\left(\frac{t}{T} - 1_{[0,t]}(s)\right) dN_t$.

2. $\displaystyle \Gamma[X_T] = \int_0^T\!\!\int_0^T K_T^t\, \varphi(t, X_{t-}, C_t)\, \varphi^*(u, X_{u-}, C_u)\, (K_T^u)^* \left(u \wedge t - \frac{ut}{T}\right) dN_t\, dN_u$.

Proof. On the set $\{N_T = i\}$, we have, for every $0 \leq j \leq i$,
\[ X_T^{T_j}(x) = \Phi_{T_i,T}(\cdot, Z_i) \circ \Psi(T_i, \cdot, Z_i) \circ \cdots \circ \Psi(T_{j+1}, \cdot, Z_{j+1}) \circ \Phi_{T_j,T_{j+1}}(x, Z_j) = F(\Phi_{T_j,T_{j+1}}(x, Z_j)), \]
where $F = \Phi_{T_i,T}(\cdot, Z_i) \circ \Psi(T_i, \cdot, Z_i) \circ \cdots \circ \Psi(T_{j+1}, \cdot, Z_{j+1})$. We will prove that

\[ \partial_{T_j} X_T^{T_j}(x) = -\nabla_x X_T^{T_j}(x)\, f(T_j, x, Z_j). \tag{6.22} \]

Indeed, the left-hand side equals
\[ -F'(\Phi_{T_j,T_{j+1}}(x, Z_j))\, \partial_t\Phi_{T_j,T_{j+1}}(x, Z_j), \]
and the right-hand side equals
\[ -F'(\Phi_{T_j,T_{j+1}}(x, Z_j))\, \nabla_x\Phi_{T_j,T_{j+1}}(x, Z_j)\, f(T_j, x, Z_j). \]

Equation (6.22) is then obtained by using the identity
\[ \partial_t\Phi_{T_j,T_{j+1}}(x, Z_j) = \nabla_x\Phi_{T_j,T_{j+1}}(x, Z_j)\, f(T_j, x, Z_j). \]

We have

\[ X_{T_j-}(x) = \Phi_{T_{j-1},T_j}(\cdot, Z_{j-1}) \circ \Psi(T_{j-1}, \cdot, Z_{j-1}) \circ \cdots \circ \Psi(T_1, \cdot, Z_1) \circ \Phi_{0,T_1}(x, Z_0), \]
from which we deduce
\[ \partial_{T_j} X_{T_j-}(x) = \partial_t\Phi_{T_{j-1},T_j}(\cdot, Z_{j-1})\big(\Psi(T_{j-1}, \cdot, Z_{j-1}) \circ \cdots \circ \Psi(T_1, \cdot, Z_1) \circ \Phi_{0,T_1}(x, Z_0)\big) = f(T_j, X_{T_j-}(x), Z_j). \]

Moreover, $X_{T_j}(x) = \Psi(T_j, X_{T_j-}(x), Z_j)$, so
\[ \partial_{T_j} X_{T_j}(x) = \partial_t\Psi(T_j, X_{T_j-}(x), Z_j) + \nabla_x\Psi(T_j, X_{T_j-}(x), Z_j)\, \partial_{T_j}X_{T_j-}(x) \]
\[ = \partial_t\Psi(T_j, X_{T_j-}(x), Z_j) + \nabla_x\Psi(T_j, X_{T_j-}(x), Z_j)\, f(T_j, X_{T_j-}(x), Z_j). \]

As $X_T(x) = X_T^{T_j}(X_{T_j}(x))$, by using Equation (6.22) we deduce
\[ \partial_{T_j} X_T(x) = \partial_{T_j}X_T^{T_j}(X_{T_j}(x)) + \nabla_x X_T^{T_j}(X_{T_j}(x))\, \partial_{T_j}X_{T_j}(x) = -K_T^{T_j}(x)\, \varphi(T_j, X_{T_j-}(x), Z_j). \]

Hence,

\[ D_s X_T = \sum_{j=1}^i \frac{\partial X_T}{\partial T_j}\, D_s T_j = -\sum_{j=1}^i K_T^{T_j}\varphi(T_j, X_{T_j-}, Z_j)\, D_s T_j = -\sum_{j=1}^i K_T^{T_j}\varphi(T_j, X_{T_j-}, Z_j)\left(\frac{T_j}{T} - 1_{[0,T_j]}(s)\right) \]
\[ = -\int_0^T K_T^t\varphi(t, X_{t-}, C_t)\left(\frac{t}{T} - 1_{[0,t]}(s)\right) dN_t. \]
And the second statement follows as
\[ \Gamma[X_T] = \int_0^T D_s X_T\, (D_s X_T)^*\, ds = \int_0^T ds \left(\int_0^T K_T^t\varphi(t, X_{t-}, C_t)\left(\frac{t}{T} - 1_{[0,t]}(s)\right) dN_t\right) \left(\int_0^T \varphi^*(u, X_{u-}, C_u)(K_T^u)^*\left(\frac{u}{T} - 1_{[0,u]}(s)\right) dN_u\right) \]
\[ = \int_0^T\!\!\int_0^T K_T^t\varphi(t, X_{t-}, C_t)\, \varphi^*(u, X_{u-}, C_u)(K_T^u)^* \left(\int_0^T \left(\frac{t}{T} - 1_{[0,t]}(s)\right)\left(\frac{u}{T} - 1_{[0,u]}(s)\right) ds\right) dN_t\, dN_u \]
\[ = \int_0^T\!\!\int_0^T K_T^t\varphi(t, X_{t-}, C_t)\, \varphi^*(u, X_{u-}, C_u)(K_T^u)^* \left(u \wedge t - \frac{ut}{T}\right) dN_t\, dN_u. \]
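The last step rests on the identity $\int_0^T (t/T - 1_{[0,t]}(s))(u/T - 1_{[0,u]}(s))\, ds = u \wedge t - ut/T$. In the scalar case this can be checked numerically by comparing the double sum over the jump times with a direct quadrature of $\int_0^T (D_s X_T)^2\, ds$; the jump times and the weights $a_j$ (standing for $K_T^{T_j}\varphi(T_j, X_{T_j-}, Z_j)$) below are toy values:

```python
# Scalar check of the kernel identity used for Gamma[X_T]:
#   int_0^T (t/T - 1_{[0,t]}(s)) (u/T - 1_{[0,u]}(s)) ds = u ^ t - u t / T.
T = 2.0
jump_times = [0.4, 1.1, 1.7]
a = [0.8, -1.2, 0.5]             # toy weights a_j = K_T^{T_j} * phi(...)

# Double-sum form of Gamma[X_T] (statement 2 of the proposition, d = 1):
gamma_sum = sum(
    a[j] * a[k] * (min(jump_times[j], jump_times[k])
                   - jump_times[j] * jump_times[k] / T)
    for j in range(3) for k in range(3))

# Direct midpoint-rule approximation of int_0^T (D_s X_T)^2 ds:
n = 200000
h = T / n
gamma_int = 0.0
for i in range(n):
    s = (i + 0.5) * h
    DsX = -sum(a[j] * (jump_times[j] / T - (1.0 if s <= jump_times[j] else 0.0))
               for j in range(3))
    gamma_int += h * DsX * DsX
```

Note that `gamma_sum` is nonnegative, since the kernel $u \wedge t - ut/T$ is the covariance of a Brownian bridge on $[0, T]$ and hence positive semi-definite.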

Now we can use the criterion of density in $\mathbb{D}^{1,2}$. We define
\[ C = \left\{\det\left(\int_0^T\!\!\int_0^T K_T^t\varphi(t, X_{t-}, C_t)\, \varphi^*(u, X_{u-}, C_u)(K_T^u)^*\left(u \wedge t - \frac{ut}{T}\right) dN_t\, dN_u\right) > 0\right\}. \]

As a consequence of Theorem 5.1, we have

Proposition 6.1 (Existence of density of $X_T$). If $\mathbb{P}(C) > 0$, then the conditional law of $X_T(x)$ given $C$ is absolutely continuous with respect to the Lebesgue measure on $\mathbb{R}^d$.

Now we consider the case $d = 1$. For each subset $A$ of $K$, we denote by $N_t^A$ the number of times the Markov chain $C$ passes through $A$ up to time $t$.

Proposition 6.1. Assume that there exists a subset $A$ of $K$ such that for every $k \in A$ and every $t \in \mathbb{R}^+$, $x \in \mathbb{R}$, $\varphi(t,x,k) \neq 0$. Then the conditional law of $X_T(x)$ given $\{N_T^A \geq 1\}$ is absolutely continuous with respect to the Lebesgue measure on $\mathbb{R}$.

Proof. The result is obtained by using Proposition 6.1 and
\[ \Gamma[X_T] = \int_0^T \left(\int_0^T K_T^t\varphi(t, X_{t-}, C_t)\left(\frac{t}{T} - 1_{[0,t]}(s)\right) dN_t\right)^2 ds \geq 0. \]

By the same argument as in Proposition 5.0, $\Gamma[X_T] = 0$ implies that
\[ K_T^t\, \varphi(t, X_{t-}, C_t) = 0, \quad \text{a.s. } \forall t \in [0,T]. \]
This cannot happen, since $K_T^t \neq 0$ a.s. $\forall t \in [0,T]$ and $\varphi(t, X_{t-}, C_t) \neq 0$ a.s. when $C_t \in A$.

In case the functions $f$ and $g$ do not depend on $t$, we can see that the function $\varphi$ vanishes when $g(x,k) = -x$ and $f(0,k) = 0$. In this case, the solution jumps to $0$ at the first jump time and then stays there. We now give a sufficient condition under which the hypothesis of Proposition 6.1 is satisfied.

Theorem 6.2. A sufficient condition for the measure $(1_{\{N_T\geq 1\}}\mathbb{P})X_T^{-1}$ to be absolutely continuous with respect to the one-dimensional Lebesgue measure is that
\[ |W(g,f)(x,k)| > \frac{1}{2}\|f''(\cdot,k)\|_\infty \|g(\cdot,k)\|_\infty^2, \quad \forall x \in \mathbb{R},\ k \in K, \]
where $W(g,f)(x,k) = g'(x,k)f(x,k) - f'(x,k)g(x,k)$ is the Wronskian of $f$ and $g$, and all the derivatives are taken with respect to $x$.

Proof. By applying the Taylor expansion of $f(\cdot,k)$ at $x$, we have
\[ \varphi(x,k) = f(x + g(x,k), k) - f(x,k) - g'(x,k)f(x,k) = g(x,k)f'(x,k) + \frac{1}{2}f''(\xi_x,k)g^2(x,k) - g'(x,k)f(x,k) = \frac{1}{2}f''(\xi_x,k)g^2(x,k) - W(g,f)(x,k), \]
for some $\xi_x$ between $x$ and $x + g(x,k)$. Hence $|\varphi(x,k)| \geq |W(g,f)(x,k)| - \frac{1}{2}\|f''(\cdot,k)\|_\infty\|g(\cdot,k)\|_\infty^2 > 0$, and Proposition 6.1 applies with $A = K$.
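The sufficient condition of Theorem 6.2 is easy to test numerically for given coefficients. A sketch on a grid, with toy time-independent choices $f(x) = x$, $g(x) = 1$ (for which $W(g,f) \equiv -1$ and $f'' \equiv 0$, so the condition holds):

```python
# Grid check of the sufficient condition of Theorem 6.2 (time-independent f, g):
#   |W(g, f)(x)| > (1/2) * ||f''||_inf * ||g||_inf^2   for all x,
# with W(g, f) = g' f - f' g.  Here f, g (for a fixed state k) are toy choices.
f = lambda x: x          # f' = 1, f'' = 0
g = lambda x: 1.0        # g' = 0, |g| <= 1

def num_d(h_fun, x, eps=1e-5):
    # central finite-difference approximation of the x-derivative
    return (h_fun(x + eps) - h_fun(x - eps)) / (2 * eps)

sup_f2 = 0.0             # ||f''||_inf, known exactly for this toy f
sup_g = 1.0              # ||g||_inf
xs = [i / 10.0 for i in range(-50, 51)]
condition_holds = all(
    abs(num_d(g, x) * f(x) - num_d(f, x) * g(x)) > 0.5 * sup_f2 * sup_g ** 2
    for x in xs)
```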

7 Computation of Greeks

In this section, we use the same technique as the classical Malliavin calculus, in the Markov chain setting, to compute Greeks by means of the integration by parts formula, as done for the Poisson process case in [11]. As usual, for $n \in \mathbb{N} \cup \{+\infty\}$, we denote by $C_b^n(\mathbb{R})$ the set of bounded real functions which are $n$ times differentiable with bounded derivatives.

Lemma 7.2. Let $(a,b)$ be an open interval of $\mathbb{R}$. Let $(F^x)_{x\in(a,b)}$ and $(G^x)_{x\in(a,b)}$ be two families of random variables such that the maps $x \in (a,b) \mapsto F^x \in \mathbb{D}^{1,2}$ and $x \in (a,b) \mapsto G^x \in \mathbb{D}^{1,2}$ are continuously differentiable. Let $m \in H$ satisfy
\[ D_m F^x \neq 0, \quad \text{a.s. on } \{\partial_x F^x \neq 0\},\ x \in (a,b), \]
and be such that $m G^x \partial_x F^x / D_m F^x$ is continuous in $x$ in $\mathrm{Dom}\,\delta$. We have
\[ \frac{\partial}{\partial x}\mathbb{E}[G^x f(F^x)] = \mathbb{E}\left[f(F^x)\,\delta\left(G^x m \frac{\partial_x F^x}{D_m F^x}\right)\right] + \mathbb{E}[\partial_x G^x f(F^x)] \tag{7.23} \]
for any $f \in C_b^1(\mathbb{R})$.

Proof. We have:
\[ \frac{\partial}{\partial x}\mathbb{E}[G^x f(F^x)] = \mathbb{E}[G^x f'(F^x)\partial_x F^x] + \mathbb{E}[\partial_x G^x f(F^x)] = \mathbb{E}\left[G^x \frac{D_m f(F^x)}{D_m F^x}\partial_x F^x\right] + \mathbb{E}[\partial_x G^x f(F^x)] \]
\[ = \mathbb{E}\left[G^x \frac{\partial_x F^x}{D_m F^x} D_m f(F^x)\right] + \mathbb{E}[\partial_x G^x f(F^x)] = \mathbb{E}\left[f(F^x)\,\delta\left(G^x m \frac{\partial_x F^x}{D_m F^x}\right)\right] + \mathbb{E}[\partial_x G^x f(F^x)]. \]
The last equality follows from Remark .4.

In practice, it is interesting to consider the case where the function $f$ is only piecewise continuous. To this end, we introduce the set $T_0 \subset \mathbb{R}$ of points $y \in \mathbb{R}$ such that
\[ \lim_{n\to+\infty} \sup_{x\in(a,b)} \mathbb{P}\left(F^x \in \left(y - \frac{1}{n}, y + \frac{1}{n}\right)\right) = 0, \]
and we put $T = T_0 \cup \{-\infty, +\infty\}$. Let us remark that if, for any $x \in (a,b)$, $F^x$ admits a density which is locally bounded uniformly with respect to $x$, then $T = \mathbb{R}$; see [14] for such an example. Finally, we consider the set $L$ of real functions $f$ of the form

\[ \forall y \in \mathbb{R}, \quad f(y) = \sum_{i=1}^n \Phi_i(y)\, 1_{A_i}(y), \]
where $n \in \mathbb{N}$, and for all $i$, $A_i$ is an interval with endpoints in $T$ and $\Phi_i$ is continuous and bounded.

Proposition 7.2. Assume that the families $(F^x)_{x\in(a,b)}$ and $(G^x)_{x\in(a,b)}$ satisfy the hypotheses of Lemma 7.2. Let $f$ be in $L$. Then the map $x \mapsto \mathbb{E}[G^x f(F^x)]$ is continuously differentiable and relation (7.23) still holds.

Proof. We consider a compact interval $I \subset (a,b)$ and prove the result on $I$. Consider first the case where $f$ is continuous and bounded. Then, for any $n \in \mathbb{N}^*$, we can approximate $f$ by a function $f_n \in C_b^1(\mathbb{R})$ such that

• $|f(y) - f_n(y)| \leq \frac{1}{n}$, $\forall y \in [-n, n]$;

• $|f(y) - f_n(y)| \leq \|f\|_\infty$, $\forall y$ with $|y| \geq n$.

As a consequence of the previous lemma and the continuity hypotheses we made, we know that the map $x \mapsto \mathbb{E}[G^x f_n(F^x)]$ is continuously differentiable on $I$ and we have
\[ \frac{\partial}{\partial x}\mathbb{E}[G^x f_n(F^x)] = \mathbb{E}\left[f_n(F^x)\,\delta\left(G^x m \frac{\partial_x F^x}{D_m F^x}\right)\right] + \mathbb{E}[\partial_x G^x f_n(F^x)]. \]

It remains to prove that $f_n(F^x)$ converges to $f(F^x)$ in $L^2$ uniformly in $x$ on $I$. Indeed, thanks to the Cauchy–Schwarz inequality, this clearly implies the uniform convergence of $\frac{\partial}{\partial x}\mathbb{E}[G^x f_n(F^x)]$ to $\frac{\partial}{\partial x}\mathbb{E}[G^x f(F^x)]$ on $I$. We have, for all $x \in I$,
\[ \mathbb{E}[|f_n(F^x) - f(F^x)|^2] \leq \frac{1}{n^2} + \|f\|_\infty^2\, \mathbb{P}(|F^x| \geq n) \leq \frac{1}{n^2} + \|f\|_\infty^2\, \frac{\sup_{x\in I}\mathbb{E}[|F^x|^2]}{n^2}, \]

hence $\lim_{n\to+\infty}\sup_{x\in I}\mathbb{E}[|f_n(F^x) - f(F^x)|^2] = 0$, which yields the result in this case. Consider now the general case. We just need to consider the case where
\[ f = \Phi\, 1_{[c,+\infty)} \quad \text{or} \quad f = \Phi\, 1_{(c,+\infty)}, \]
with $\Phi$ bounded and continuous and $c \in T_0$. Then we can approximate $f$ by a sequence of functions $(f_n)_n$ in $C_b^0(\mathbb{R})$ converging to $f$ everywhere and such that for any $n \in \mathbb{N}^*$:

• $f_n(y) = f(y)$, $\forall y \in [c + \frac{1}{n}, +\infty)$;

• $|f(y) - f_n(y)| \leq \|f\|_\infty$, $\forall y \in [c - \frac{1}{n}, c + \frac{1}{n}]$;

• $f_n(y) = 0$ if $y \leq c - \frac{1}{n}$.
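One concrete way to build such a sequence $(f_n)$ for $f = \Phi\, 1_{[c,+\infty)}$, assuming $|\Phi| \leq \|f\|_\infty$ near $c$, is to multiply $\Phi$ by a continuous ramp vanishing below $c - 1/n$ and equal to $1$ above $c + 1/n$. A sketch with a toy $\Phi$:

```python
import math

def make_fn(Phi, c, n):
    """Continuous approximation f_n of f = Phi * 1_{[c, +inf)}: Phi times a
    linear ramp that is 0 below c - 1/n and 1 above c + 1/n."""
    def fn(y):
        if y <= c - 1.0 / n:
            r = 0.0
        elif y >= c + 1.0 / n:
            r = 1.0
        else:
            r = (y - (c - 1.0 / n)) * n / 2.0   # linear on [c - 1/n, c + 1/n]
        return Phi(y) * r
    return fn

Phi = lambda y: math.cos(y)                    # bounded continuous factor (toy)
f = lambda y: Phi(y) if y >= 1.0 else 0.0      # f = Phi * 1_{[1, +inf)}
fn = make_fn(Phi, 1.0, 10)

ok_above = all(fn(y) == f(y) for y in [1.2, 2.0, 5.0])   # f_n = f on [c+1/n, inf)
ok_below = all(fn(y) == 0.0 for y in [0.0, 0.5, 0.85])   # f_n = 0 below c-1/n
ok_mid = all(abs(fn(y) - f(y)) <= 1.0 for y in [0.95, 1.0, 1.05])  # <= ||f||_inf
```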

Since $f_n$ is bounded and continuous, we are in the previous case, and here again we just have to prove that $f_n(F^x)$ converges to $f(F^x)$ in $L^2$ uniformly in $x$ on $I$. We have, for all $x \in I$,
\[ \mathbb{E}[|f_n(F^x) - f(F^x)|^2] \leq \|f\|_\infty^2\, \mathbb{P}\left(F^x \in \left(c - \frac{1}{n}, c + \frac{1}{n}\right)\right). \]

Since $c$ belongs to $T_0$, we get
\[ \lim_{n\to+\infty}\sup_{x\in I}\mathbb{E}[|f_n(F^x) - f(F^x)|^2] = 0, \]
and the proof is complete since $I$ is arbitrary.

Thanks to Lemma 7.2 and Proposition 7.2, we can compute the weights for the first and second derivatives with respect to $x$, as done in [11]. The results are slightly different due to the additional term in $\delta(m)$.

References

[1] Bally, V. and E. Clément (2010). Integration by parts formula with respect to jump times for stochastic differential equations. Stochastic Analysis. Springer Verlag.

[2] Bichteler K., J. B. Gravereaux, and J. Jacod (1987). Malliavin Calculus for Processes with Jumps. Gordon and Breach Science Publishers.

[3] Bouleau N. (2003). Error Calculus for Finance and Physics, the Language of Dirichlet Forms. De Gruyter.

[4] Bouleau N. and L. Denis (2009). Energy image density property and the lent particle method for Poisson measures. Journal of Functional Analysis 257(4), 1144–1174.

[5] Bouleau N. and L. Denis (2015). Dirichlet Forms Methods for Poisson Point Measures and Lévy Processes. Springer. Forthcoming.

[6] Bouleau N. and F. Hirsch (1986). Formes de Dirichlet générales et densité des variables aléatoires réelles sur l'espace de Wiener. J. Funct. Analysis 69(2), 229–259.

[7] Bouleau N. and F. Hirsch (1991). Dirichlet Forms and Analysis on Wiener Space. De Gruyter.

[8] Carlen E. A. and E. Pardoux (1990). Differential calculus and integration by parts on Poisson space. Stochastics, Algebra and Analysis in Classical and Quantum Dynamics, 63–73. Kluwer.

[9] Coquio A. (1993). Formes de Dirichlet sur l'espace canonique de Poisson et application aux équations différentielles stochastiques. Ann. Inst. Henri Poincaré 19(1), 1–36.

[10] Denis L. (2000). A criterion of density for solutions of Poisson-driven SDEs. Probab. Theory Relat. Fields 118, 406–426.

[11] El-Khatib Y. and N. Privault (2004). Computations of Greeks in a market with jumps via the Malliavin calculus. Finance and Stochastics 8(2), 161–179.

[12] Ishikawa Y. and H. Kunita (2006). Malliavin calculus on the Wiener–Poisson space and its application to canonical SDE with jumps. Stoch. Processes and their App. 116, 1743–1769.

[13] Jacod J. (1979). Calcul Stochastique et Problèmes de Martingales. Lecture Notes in Math. 714. Springer.

[14] Kawai R. and Takeuchi A. (2011). Greeks formulas for an asset price model with . Vol. 21(4), 723–742.

[15] Ma Z. M. and M. Röckner (2000). Construction of diffusion on configuration spaces. Osaka J. Math. 37, 273–314.

[16] Nourdin I. and T. Simon (2006). On the absolute continuity of Lévy processes with drift. Ann. Probab. 34, 1035–1051.

[17] Nualart D. and J. Vives (1990). Anticipative calculus for the Poisson process based on the Fock space. Sém. Prob. XXIV, Lect. Notes in Math. 1426. Springer.

[18] Picard J. (1996). On the existence of smooth densities for jump processes. Probab. Theory Relat. Fields 105, 481–511.
