arXiv:1508.00048v2 [math.PR] 27 Feb 2020

Functional calculus and martingale representation formula for integer-valued random measures

Pierre Blacque-Florentin & Rama Cont

February 28, 2020

Abstract

We construct a pathwise calculus for functionals of integer-valued measures and use it to derive a martingale representation formula with respect to a large class of integer-valued random measures. Using these results, we extend the framework of Functional Itô Calculus to functionals of integer-valued random measures. We construct a 'stochastic derivative' operator with respect to an integer-valued random measure and show it to be the inverse of the stochastic integral with respect to the compensated measure. This stochastic derivative yields an explicit martingale representation formula for square-integrable martingales. Our results extend beyond the class of Poisson random measures and allow for random and time-dependent compensators.

Keywords: Martingale representation, Functional Itô calculus, integer-valued random measures, jump processes, Malliavin calculus, Poisson random measures, martingales.

Contents

1 Introduction
2 Functionals of integer-valued measures
2.1 Integer-valued measures and non-anticipative functionals
2.2 Simple predictable functionals and compensated integrals
2.3 A pathwise martingale representation formula for multivariate point processes
3 Stochastic derivative and martingale representation formula
3.1 Stochastic derivative with respect to an integer-valued random measure
3.2 Martingale representation formula
3.3 Relation with Malliavin calculus on Poisson space
4 Examples
4.1 Kunita-Watanabe decomposition
4.2 Stochastic exponential for pure-jump Lévy processes
4.3 Application to the Kella-Whitt martingale
4.4 Supremum of a Lévy process
A A density result

1 Introduction

Consider a filtered probability space (Ω, F, (F_t)_{t≥0}, P) whose filtration F = (F_t)_{t≥0} is generated by a continuous R^d-valued martingale X and an integer-valued random measure J on [0,T] × R^d with compensator µ. We say that (X, J, F) has the predictable representation property if for any square-integrable F-martingale M there exist predictable processes φ : [0,T] × Ω → R^d and ψ : [0,T] × R^d_0 × Ω → R such that

M(t) = M(0) + ∫_0^t φ(s)·dX(s) + ∫_0^t ∫_{R^d\{0}} ψ(s,z) (J − µ)(ds dz),    (1)

where the first integral is an Itô stochastic integral and the second is a stochastic integral with respect to the compensated random measure J̃ = J − µ. This property holds, for instance, if the filtration is generated by a multivariate point process [19, 13] or by a Brownian motion and an independent Poisson random measure [21, 14]. The computation of such representations is of interest in many applications, such as the optimal control of processes with jumps, mathematical finance, and more generally the solution of backward stochastic differential equations with jumps [7].

When X = W is a Brownian motion and J is a Poisson random measure, a well-known approach for obtaining such representations is to use the Malliavin calculus on the Wiener-Poisson space [4, 3, 18, 33, 41, 43, 44]. This leads to a 'Clark-Ocone' type representation of the integrands φ, ψ as predictable projections of a (possibly anticipative) process defined as a Malliavin derivative of M on the Wiener-Poisson space:

φ(t) = ^pE[D_t M(T) | F_t],    ψ(t,z) = ^pE[D_{(t,z)} M(T) | F_t],    (2)

where D_t M represents the Malliavin derivative of M on the Wiener space and D_{(t,z)} M a Malliavin derivative on the Poisson space, for which various constructions have been proposed [4, 3, 28, 30, 18, 42, 43]. Jacod, Méléard and Protter [20] use Markov semigroup theory. León et al [28] introduce a quotient operator.

While fundamentally different from the approaches mentioned above, we point out that there exists an approach in which the perturbation consists in shifting the jump times rather than the jump sizes. Originally introduced on the Poisson space by Carlen and Pardoux [6], this approach has the advantage of providing a derivative that satisfies the chain rule, and it has its uses for studying the absolute continuity of solutions of stochastic differential equations. Using the method of León-Tudor [29], this approach is generalised to Lévy processes in León et al [27] (see also [37]). In most of the approaches mentioned above, the jump component is driven by a Lévy process or a Poisson random measure.

An alternative pathwise approach for obtaining martingale representation formulae, based on the Functional Itô calculus, was introduced in [8, 10] in the continuous case: this approach provides a pathwise expression for the integrand and bypasses the predictable projection and the use of anticipative calculi. This approach was developed in [10] in the setting of a Brownian filtration. Our goal in the present work is to extend these results to the case of functionals of càdlàg processes and integer-valued random measures. We first develop a pathwise calculus for functionals of integer-valued measures. We then show that, for a large class of integer-valued random measures, smooth functionals in the sense of this pathwise calculus are dense in the space of square-integrable integrals with respect to the (compensated) random measure.
This property allows us to construct a closure of the pathwise operators and extend the framework of Functional Itô Calculus [17, 9, 8] to functionals of integer-valued random measures. In particular, we construct a 'stochastic derivative' operator with respect to such integer-valued random measures and obtain an explicit martingale representation formula for square-integrable martingales with respect to the filtration of the integer-valued random measure. These results hold well beyond the class of Poisson random measures and allow for random and time-dependent compensators.

Outline Section 2 introduces the space of integer-valued measures and develops a pathwise calculus for functionals of such measures. We present in particular an inversion formula for (compensated) integrals with respect to an integer-valued measure (Proposition 2.10). We use this framework in Section 3 to construct a 'stochastic derivative' operator with respect to a class of integer-valued random measures. This operator is defined as the closure of the pathwise derivative. As a consequence, we obtain a martingale representation formula with respect to a large class of integer-valued random measures. Our assumptions extend beyond Poisson random measures and allow for random and time-dependent compensators. In Section 3.3 we compare our approach with Malliavin calculi for jump processes. Section 4 provides some examples of applications.

Some technical proofs are given in the Appendix.

2 Functionals of integer-valued measures

2.1 Integer-valued measures and non-anticipative functionals

Let B(A) denote the Borel σ-algebra of a set A, and let R^d_0 denote the space R^d without the origin.

Definition 2.1 (Space of σ-finite integer-valued measures). Denote by M([0,T] × R^d_0) the space of σ-finite simple integer-valued measures on [0,T] × R^d_0. For a measure j : B([0,T] × R^d_0) → N ∪ {+∞}, we say that

j ∈ M([0,T] × R^d_0)  if and only if  j(·) = Σ_{i=0}^∞ δ_{(t_i,z_i)}(·) and j is finite on compacts,

with (t_i)_{i∈N} ∈ [0,T]^N not necessarily distinct nor ordered, and (z_i)_{i∈N} ∈ (R^d_0)^N. For convenience, we denote M([0,T] × R^d_0) by M_T throughout this article. We equip this space with a σ-algebra F such that the mapping j ↦ j(A) is measurable for all A ∈ B([0,T] × R^d_0), the Borel σ-algebra on [0,T] × R^d_0.

For (t, j) ∈ [0,T] × M([0,T] × R^d_0), we denote

j_t(·) := j(· ∩ ([0,t] × R^d_0))    (resp. j_{t−}(·) := j(· ∩ ([0,t) × R^d_0))),

the restriction of j to [0,t] × R^d_0 (resp. [0,t) × R^d_0).

In this section, we write D_T := D([0,T], R^d) for the space of càdlàg functions from [0,T] to R^d.

Definition 2.2 (Stopped trajectory). For x ∈ D_T, we write

x_t(u) = x(u) if u ≤ t,    x_t(u) = x(t) if u > t.

Note that x_t is an element of D_T.

Definition 2.3 (Space Ω of processes and canonical process). We identify the space of processes

Ω := D_T × M_T,

equipped with the σ-algebra F defined as the σ-algebra on the product space such that for every j ∈ M_T and A ∈ B(R^d_0), j(A) is measurable. We define the canonical process (X, J) as follows: for any ω := (x, j) ∈ Ω,

(X, J) : [0,T] × B(R^d_0) × Ω → R^d × R,    (t, A, ω) ↦ (x(t), j_t(A)).

The filtration generated by the canonical process is given by F^0 := (F^0_t)_{t∈[0,T]}, the increasing sequence of σ-algebras

F^0_t := σ((X(s))_{s∈[0,t]}, (T_k, Z_k)_{T_k ≤ t}),

where the (T_k, Z_k) denote the jump times and sizes of the measure J.

Now we define non-anticipative functionals on this space.

Definition 2.4 (Non-anticipative functional). A non-anticipative functional F is a map

F : [0,T] × Ω → R

such that:

1. F(t, x, j) = F(t, x_t, j_t);
2. F is measurable with respect to the product σ-algebra B([0,T]) × F;
3. for all t ∈ [0,T], F(t, ·) is F_t-measurable.

We denote by O the space of such functionals.

Definition 2.5 (Predictable functional). A predictable functional F is a non-anticipative functional such that

F(t, x, j) = F(t, x_t, j_t) = F(t, x_{t−}, j_{t−}),

and so F(t, ·) is F_{t−}-measurable. We write P for the space of predictable functional processes, and we have P ⊂ O.

Definition 2.6 (Functional fields). A non-anticipative functional field Ψ is a map

Ψ : [0,T] × R^d_0 × Ω → R

such that Ψ(t, z, x, j) = Ψ(t, z, x_t, j_t), Ψ is measurable with respect to the product σ-algebra B([0,T] × R^d_0) × F, and for all (t,z) ∈ [0,T] × R^d_0, Ψ(t, z, ·) is F_t-measurable. We denote by O_f the space of such functional fields. Similarly, we call predictable functional field any Ψ ∈ O_f such that

Ψ(t, z, x, j) = Ψ(t, z, x_{t−}, j_{t−}),

and we denote by P_f the space of such predictable functional fields.

Example 1 (Integral functionals). Given a measurable function g : [0,T] × R^d_0 → R with compact support in [0,T] × R^d_0, the integral functional

(t, x, j) ↦ F_g(t, x, j) = ∫_0^t ∫_{R^d_0} g(s,z) j(ds dz)

defines a non-anticipative functional.
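For instance, writing j = Σ_i δ_{(t_i,z_i)}, the integral reduces to the sum

F_g(t, x, j) = Σ_{i : t_i ≤ t} g(t_i, z_i),

which is finite since g has compact support in [0,T] × R^d_0 and j is finite on compacts.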

Definition 2.7 (Vertical perturbation in x). For x ∈ D_T and t ∈ [0,T], the vertical perturbation of the stopped path x_t in the direction e_i (the i-th canonical basis vector of R^d) and with size h ∈ R is the path

x_t^{he_i} := x_t + h e_i 1_{[t,T]}.

Definition 2.8 (Vertical derivative in x). A non-anticipative functional F is said to be vertically differentiable in x at (t, x, j) if

lim_{h→0} (1/h) ( F(t, x_t^{he_i}, j_t) − F(t, x_t, j_t) )

is well defined for all t and all i. It is called the vertical derivative of F at t with respect to x, and is denoted ∇_x F(t, x_t, j_t).

A key object in our study is the following functional derivative with respect to the jump component J.

Definition 2.9. For a non-anticipative functional F and (t,z) ∈ (0,T] × R^d_0, we define the difference operator

∇_{(t,z)} F(t, x, j) = F(t, x_t, j_{t−} + δ_{(t,z)}) − F(t, x_t, j_{t−}).    (3)

Then the operator

∇_J : O → P_f,    F ↦ ∇_J F,

defined by (∇_J F)(t, z, x, j) = ∇_{(t,z)} F(t, x, j) = F(·, x, j_{t−} + δ_{(t,z)}) − F(·, x, j_{t−}), maps a non-anticipative functional F to a predictable functional field denoted ∇_J F.
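As an illustration, for the integral functional F_g of Example 1, adding an atom at (t,z) adds the term g(t,z) to the sum, so that

∇_{(t,z)} F_g(t, x, j) = g(t,z);

similarly, for the counting functional F(t, x, j) = j([0,t] × A), one gets ∇_{(t,z)} F(t, x, j) = 1_A(z).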

The following proposition summarises key properties of the pathwise operators ∇_X and ∇_J.

Proposition 2.10 (Properties of the pathwise derivative). The following hold true:

1. If F is a predictable functional (in the sense of Definition 2.5), then

∇_X F = 0 and ∇_J F = 0.

2. Compensated integrals: let F be a non-anticipative functional of the form

F(t, x_t, j_t) = ∫_0^t ∫_{R^d_0} ψ(s, y, x_{s−}, j_{s−}) (j − µ)(ds dy),

where ψ : [0,T] × R^d_0 × Ω → R is an optional random function with compact support in [0,T] × R^d_0 and µ is a predictable measure satisfying Definition 2.11. Then

∇_J F(t, z, x_t, j_t) = ψ(t, z, x_{t−}, j_{t−})    (4)

and ∇_X F = 0.

Proof. 1. If F is a vertically differentiable predictable functional then, using Definition 2.5, for all (t, x, j) ∈ [0,T] × Ω, there exists a predictable functional φ such that

F(t, x, j) = φ(t, x_{t−}, j_{t−}).

So

∇_x F(t, x, j) = lim_{h→0} (1/h) (F(t, x_{t−} + h 1_{[t,∞)}, j_{t−}) − F(t, x_{t−}, j_{t−}))
= lim_{h→0} (1/h) (φ(t, (x_· + h 1_{[t,∞)}(·))_{t−}, j_{t−}) − φ(t, x_{t−}, j_{t−}))
= lim_{h→0} (1/h) (φ(t, x_{t−}, j_{t−}) − φ(t, x_{t−}, j_{t−})) = 0,

since a perturbation at t does not affect the stopped path: (x_· + h 1_{[t,∞)}(·))_{t−} = x_{t−}.

2. Similarly, if F is a predictable functional then, using Definition 2.5, for all (t, x, j) ∈ [0,T] × Ω there exists a predictable functional ψ such that

F(t, x, j) = ψ(t, x_{t−}, j_{t−}).

So, for all z ∈ R^d_0,

∇_{(t,z)} F(t, x, j) = ψ(t, x_{t−}, (j_{t−} + δ_{(t,z)}(·))_{t−}) − ψ(t, x_{t−}, j_{t−})
= ψ(t, x_{t−}, j_{t−}) − ψ(t, x_{t−}, j_{t−}) = 0,

since (j_{t−} + δ_{(t,z)})_{t−} = j_{t−}.

3. Since ψ has compact support in [0,T] × R^d_0, 0 ∉ supp(ψ), and so j can only have a finite number of jumps on the support of ψ. Moreover, since µ is σ-finite, the integral of ψ with respect to j − µ is finite. Using the pathwise predictability of ψ(s, y, x_{s−}, j_{s−}) and µ, we have

F(t, x_{t−}, j_{t−} + δ_{(t,z)}) = F(t, x_{t−}, j_{t−}) + ψ(t, z, x_{t−}, j_{t−}),    (5)

so for all t and z, ∇_{(t,z)} F(t, x, j) = ψ(t, z, x_{t−}, j_{t−}).

Furthermore, F is predictable in x, and so ∇_X F = 0 by part 1 of the current proposition.

So ∇_J is an operator that maps a non-anticipative functional to a predictable functional field and, if F is the functional in Proposition 2.10, then ∇_J F recovers the predictable version of the integrand. In particular, if ψ belongs to the set of predictable functional fields with compact support (in the sense that for all x and j, z ↦ ψ(t, z, x, j) vanishes on a neighbourhood of zero; we denote this space P_fc), then for all (t, z, x, j), ψ(t, z, x_t, j_t) = ψ(t, z, x_{t−}, j_{t−}), and ∇_J is the inverse of the integral operator

P_fc → O,    ψ ↦ ∫_0^· ∫_{R^d_0} ψ(s, z, j_{s−}) (j − µ)(ds dz).

2.2 Simple predictable functionals and compensated integrals

Definition 2.11 (σ-finite predictable measure). We call σ-finite predictable measure any σ-finite measure

µ : B([0,T] × R^d_0) × Ω → R_+

which satisfies, for A ∈ B([0,t] × R^d_0),

µ(A, x, j) = µ(A, x_{t−}, j_{t−}).

Definition 2.12 (Stopping time). A stopping time τ is a non-anticipative mapping

τ : Ω → [0,T]

such that for any t ∈ [0,T],

1_{τ(x,j) ≤ t} = 1_{τ(x_t, j_t) ≤ t}.

Moreover, τ is a predictable stopping time if

1_{τ(x,j) ≤ t} = 1_{τ(x_{t−}, j_{t−}) ≤ t}.

Example 2. For all ε > 0, Z ∈ B(R^d_0) with 0 ∉ Z̄ (the closure of Z), and a σ-finite predictable measure µ,

τ^ε(x, j, Z) = inf{ t ∈ [0,T] : µ([0,t] × Z, x, j) ≥ ε } ∧ T

is a stopping time. If µ has no atom with respect to time, τ^ε is also predictable.

For convenience later on, we write, for α ≥ 1,

C^α_t(j) = j([0,t] × { 1/α < |z| ≤ α }).    (6)
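For instance, if µ is deterministic and time-homogeneous, µ(dt dz) = ν(dz) dt, then µ([0,t] × Z, x, j) = t ν(Z) and

τ^ε(x, j, Z) = (ε / ν(Z)) ∧ T,

a deterministic (hence predictable) time, while C^α_t(j) simply counts the atoms of j up to time t with sizes in (1/α, α].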

2.3 A pathwise martingale representation formula for multivariate point processes

We are now able to state the martingale representation formula for the case of multivariate point processes, i.e. when the jump measure has finite activity.

Consider now a probability measure P on the canonical space under which J is a multivariate point process on R^d_0, i.e. an integer-valued measure such that J([0,t] × R^d_0) < ∞ for all (t, ω). Moreover, let

F_t = F_0 ∨ σ((T_k, Z_k) | T_k ≤ t)

be the (completed) natural filtration of J, where the (T_k, Z_k) denote respectively the jump times and jump sizes of J. In such a setting, the martingale representation theorem holds: as shown by Jacod [19, Prop 54], a right-continuous process M is an (F_t)_{t∈[0,T]} local martingale if and only if there exists a predictable Ψ such that

M(t) = M(0) + ∫_0^t ∫_{R^d_0} Ψ(s,z) J̃(ds dz)

and

∫_0^t ∫_{R^d_0} |Ψ(s,z)| J(ds dz) < ∞ a.s. for t ∈ [0,T].

In this setting, the pathwise operator ∇_J recovers the integrand:

∀(t,z) ∈ [0,T) × R^d_0,    ∇_J M(t,z) = Ψ(t,z)    P-a.s.
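As a minimal illustration, let J be the jump measure of a Poisson process N with intensity λ (so that µ(ds dz) = λ ds δ_1(dz)) and M(t) = N(t) − λt, for which Ψ ≡ 1. Viewing M as the functional F(t, j) = j([0,t] × R_0) − λt, one checks directly that

∇_{(t,z)} F(t, j) = (j_{t−} + δ_{(t,z)})([0,t] × R_0) − j_{t−}([0,t] × R_0) = 1 = Ψ(t,z).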

3 Stochastic derivative and martingale representation formula

In the previous subsection, we were able to provide a pathwise martingale representation formula in the special case of multivariate point processes. We now aim at obtaining a martingale representation formula in the more general case, where the filtration is generated by a measure with infinite activity and a continuous martingale.

We now equip the space (Ω, F) with a probability measure P such that X defines a continuous martingale and J a jump measure such that ∫_0^T ∫_{R^d_0} |z|² J(ds dz) < ∞ a.s., with (predictable) compensator µ atomless in time, i.e. µ({s} × dy, ω) = 0 for any s ∈ [0,T]. Moreover, we complete the filtration F^0 by the P-null sets and write F := (F_t)_{t∈[0,T]} for the completed filtration. We denote by J̃ = J − µ the compensated random measure and introduce the following spaces:

L²_P([X], µ) = { (φ, ψ) : φ : [0,T] × Ω → R^d and ψ : [0,T] × R^d_0 × Ω → R^d both predictable, and
‖(φ, ψ)‖²_{L²_P([X],µ)} := E[ ∫_0^T φ²(s) d[X](s) + ∫_0^T ∫_{R^d_0} ψ²(s,y) µ(ds dy) ] < ∞ }

and

I²_P([X], µ) = { Y = ∫_0^· φ(s) dX(s) + ∫_0^· ∫_{R^d_0} ψ(s,y) J̃(ds dy) : (φ, ψ) ∈ L²_P([X], µ) },

equipped with the norm

‖Y‖²_{I²_P([X],µ)} = E[ |Y(T)|² ].

Then we can write

L²_P([X], µ) = L²_P([X]) ⊕ L²_P(µ)  and  I²_P([X], µ) = I²_P([X]) ⊕ I²_P(µ),

with

L²_P([X]) := { φ : [0,T] × Ω → R predictable : ‖φ‖²_{L²_P([X])} := E[ ∫_0^T φ²(t) d[X](t) ] < ∞ }

and

I²_P([X]) := { Y : [0,T] × Ω → R : Y(t) = ∫_0^t φ(s) dX(s), φ ∈ L²_P([X]) },

equipped with the norm ‖Y‖²_{I²_P([X])} := E[ |Y(T)|² ], as well as

L²_P(µ) := { ψ : [0,T] × R^d_0 × Ω → R^d predictable : E[ ∫_0^T ∫_{R^d_0} ψ(s,y)² µ(ds dy) ] < ∞ },

equipped with the norm ‖ψ‖²_{L²_P(µ)} := E[ ∫_0^T ∫_{R^d_0} ψ(t,z)² µ(dt dz) ],

and

I²_P(µ) := { Y : [0,T] × Ω → R : Y(t) = ∫_0^t ∫_{R^d_0} ψ(s,y) J̃(ds dy), ψ ∈ L²_P(µ) },

equipped with the norm ‖Y‖²_{I²_P(µ)} := E[ |Y(T)|² ].

The (Itô) stochastic integral operator with respect to (X, J̃) is defined as

I_{X,J̃} : L²_P([X], µ) → I²_P([X], µ),    (φ, ψ) ↦ ∫_0^· φ(s) dX(s) + ∫_0^· ∫_{R^d_0} ψ(s,y) J̃(ds dy).

We will now construct a stochastic derivative, the 'adjoint' operator of this stochastic integral operator, as the closure of the pathwise operator (∇_X, ∇_J) defined above.
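Note that I_{X,J̃} is an isometry: by the Itô isometries for X and for J̃, and the orthogonality of the continuous and compensated-jump integrals,

E[ | ∫_0^T φ(s) dX(s) + ∫_0^T ∫_{R^d_0} ψ(s,y) J̃(ds dy) |² ] = E[ ∫_0^T φ²(s) d[X](s) + ∫_0^T ∫_{R^d_0} ψ²(s,y) µ(ds dy) ],

i.e. ‖I_{X,J̃}(φ, ψ)‖_{I²_P([X],µ)} = ‖(φ, ψ)‖_{L²_P([X],µ)}; this is the isometry invoked repeatedly below.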

3.1 Stochastic derivative with respect to an integer-valued random measure

Consider the following space of simple predictable functionals.

Definition 3.1 (Couples of simple functionals on the product space). We consider the space S of couples (φ, ψ) with

φ : [0,T] × Ω → R^d,    (7)
ψ : [0,T] × R^d_0 × Ω → R^d,    (8)

such that:

1. φ has the form

φ(t, x_t, j_t) = Σ_{i=0}^I φ_i(x_{τ_i}, j_{τ_i}) 1_{(τ_i, τ_{i+1}]}(t),

with the τ_i predictable stopping times.

2. Moreover, there exist I grids 0 ≤ t^i_1 ≤ t^i_2 ≤ ··· ≤ t^i_{n(i)} = T such that for any i ∈ 1..I,

φ_i(x_{τ_i}, j_{τ_i}) = f_i((x(t^i_u), C^{ε_v}_{t^i_u}(j))_{u∈1..U, v∈1..V}),

where f_i : R^U × R^{U×V} → R is Borel-measurable, 0 ≤ t^i_1 ≤ ··· ≤ t^i_U ≤ τ_i and 1 ≤ ε_1 ≤ ··· ≤ ε_V. (C^ε_t is defined as in Equation (6).)

3. ψ has the form

ψ(t, z, j_t) = Σ_{m=0}^M Σ_{k=1}^K ψ_{mk}(x_{κ_m}, j_{κ_m}) 1_{(κ_m, κ_{m+1}]}(t) 1_{A_k}(z),

with A_k ∈ B(R^d), 0 ∉ Ā_k (the closure of A_k), and the κ_m predictable stopping times (allowed to depend on the Z_k).

4. Moreover, there exist M grids 0 ≤ t^m_1 ≤ t^m_2 ≤ ··· ≤ t^m_{n(m)} = T such that for any m ∈ 1..M,

ψ_{mk}(x_{κ_m}, j_{κ_m}) = g_{mk}((x(t^m_p), C^{ε_q}_{t^m_p}(j))_{p∈1..P, q∈1..Q}),

where g_{mk} : R^P × R^{P×Q} → R is Borel-measurable, 0 ≤ t^m_1 ≤ ··· ≤ t^m_P ≤ κ_m and 1 ≤ ε_1 ≤ ··· ≤ ε_Q.

We denote by I_{X,J̃}(S) the set of processes that are stochastic integrals of elements of S.

Lemma 3.2. The set of random variables

f((X(t_i), C^{k_j}_{t_i}(J))_{i∈1..n, j∈1..p}),    (9)

with f : R^n × R^{n×p} → R bounded, and the C as in Equation (6), is dense in L²(F_T, P) (the space of random variables with finite second moment).

Proof. Let (t_i)_{i∈N} be a dense subset of [0,T], and (k_j)_{j∈N} a dense subset of [1, ∞). Letting (T_k, Z_k) denote the jump times and sizes of the jump measure J, define

F^{0,n,m} = σ( X(t_i), (T_k, Z_k) : t_i ≤ t_n, T_k ≤ t_n, 1/k_m < |Z_k| ≤ k_m ),

the (non-completed) filtration generated by the (X(t_i), C^{k_l}_{t_i}(J_t))_{i∈1,...,n, l∈1,...,m}. We denote by F^{n,m} the completion of F^{0,n,m}. One has

F^{n,m} ⊂ F^{n,m+1}
∩                ∩
F^{n+1,m} ⊂ F^{n+1,m+1}.

Moreover, F_T is the smallest σ-algebra containing all the F^{n,m}. For g ∈ L²(F_T, P),

g = E[g | F^0_T]    P-a.s.

The martingale convergence theorem yields

g = lim_{m→∞} lim_{n→∞} E[g | F^{n,m}].

Moreover, for each n and m, there exists an F^{0,n,m}-measurable random variable h_{nm} (i.e. a random variable measurable with respect to the non-completed σ-algebra) such that, P-a.s.,

E[g | F^{n,m}] = h_{nm}

(see e.g. Lemma 1.2 in Crauel [12]). Finally, the Doob-Dynkin lemma ([22], p. 7) applied to h_{nm} yields that for each couple (n,m) there exists a Borel-measurable g_{nm} : R^n × R^{n×m} → R with

E[g | F^{n,m}] = g_{nm}((X(t_i), C^{k_j}_{t_i}(J))_{i∈1..n, j∈1..m}),    P-a.s.

Hence random variables of the form (9) are dense in L²(F_T, P).

Lemma 3.3. The set {(∇_X Y, ∇_J Y) : Y ∈ I_{X,J̃}(S)} is dense in L²_P([X], µ).

Proof. Let (φ, ψ) be some element of S, and consider a continuous process Y^c of the form

Y^c(t) = ∫_0^t φ(s) dX(s).

Notice that the integral is well defined in a pathwise sense, as it is just a Riemann sum:

Y^c(t, ω) = F(t, X_t, J_t),  with

F(t, x_t, j_t) = ∫_0^t φ(s, x_{s−}, j_{s−}) dx(s) = Σ_{i=1}^I φ_i(x_{τ_i}, j_{τ_i}) (x(τ_{i+1} ∧ t) − x(τ_i ∧ t)).

Hence ∇_X F(t, x_t, j_t) = φ(t, x_t, j_t). So these processes have the form

∇_X Y^c(t) = φ((X(t_i), C^{k_j}_{t_i}(J))_{i∈1..n, j∈1..m}) 1_{t>t_n},

so such ∇_X Y^c define a total set in L²_P([X]) (see Cont-Fournié [10] and Lemma 3.2). Similarly, the processes of the form

Y^d = ∫_0^t ∫_{R^d_0} ψ(s, y, J_{s−}) J̃(ds dy)

satisfy

(∇_J Y^d)(t,z) = ∇_{(t,z)} G(t, X_t, J_t) = ψ(t, z, J_{t−}),

with G the following regular functional representation of Y^d:

G(t, x_t, j_t) = ∫_0^t ∫_{R^d_0} ψ(s, y, j_{s−}) (j(ds dy) − µ(ds dy, j_{s−})).

Moreover, such ψ are dense in L²_P(µ) (by Section A and Lemma 3.2).

Remark 3.4. Notice that ∇_X Y^d ≡ 0. Similarly, ∇_J Y^c ≡ 0.

Corollary 3.5. The space I_{X,J̃}(S) is dense in I²_P([X], µ).

Proof. This follows from the previous lemma, as the stochastic integral

I_{X,J̃} : L²_P([X], µ) → I²_P([X], µ),    (φ, ψ) ↦ ∫_0^· φ(s) dX(s) + ∫_0^· ∫_{R^d_0} ψ(s,y) J̃(ds dy)

defines a bijective isometry between L²_P([X], µ) and I²_P([X], µ).

Theorem 3.6 (Extension of (∇_X, ∇_J) to I²_P([X], µ)). The operator (∇_X, ∇_J) : I_{X,J̃}(S) → L²_P([X], µ) is closable in I²_P([X], µ) and its closure ∇^P_{X,J} defines a bijective isometry between these two spaces:

∇^P_{X,J} : I²_P([X], µ) → L²_P([X], µ),
Y = ∫_0^· φ(s) dX(s) + ∫_0^· ∫_{R^d_0} ψ(s,y) J̃(ds dy)  ↦  (∇_X Y, ∇_J Y) = (φ, ψ).

In particular, ∇^P_{X,J} is the adjoint of the stochastic integral

I_{X,J̃} : L²_P([X], µ) → I²_P([X], µ),    (φ, ψ) ↦ ∫_0^· φ(s) dX(s) + ∫_0^· ∫_{R^d_0} ψ(s,y) J̃(ds dy),

in the sense that for all (φ, ψ) ∈ L²_P([X], µ) and all Y ∈ I²_P([X], µ):

⟨Y, I_{X,J̃}(φ, ψ)⟩_{I²_P([X],µ)} := E[ Y(T) ( ∫_0^T φ(s) dX(s) + ∫_0^T ∫_{R^d_0} ψ(s,y) J̃(ds dy) ) ]
= E[ ∫_0^T ∇_X Y(s) φ(s) d[X](s) + ∫_0^T ∫_{R^d_0} ∇_J Y(s,y) ψ(s,y) µ(ds dy) ]
=: ⟨∇^P_{X,J} Y, (φ, ψ)⟩_{L²_P([X],µ)}.

Proof. By definition of I²_P([X], µ), we know that there exists (φ, ψ) ∈ L²_P([X], µ) such that

Y(t) = ∫_0^t φ(s) dX(s) + ∫_0^t ∫_{R^d_0} ψ(s,y) J̃(ds dy).

Then, by the Itô isometry on I²_P([X], µ), with Z ∈ I_{X,J̃}(S),

E[Y(T) Z(T)] = E[ ∫_0^T φ(s) ∇_X Z(s) d[X](s) + ∫_0^T ∫_{R^d_0} ψ(s,y) ∇_J Z(s,y) µ(ds dy) ]    (10)
= ⟨(φ, ψ), (∇_X Z, ∇_J Z)⟩_{L²_P([X],µ)}.

Moreover, (10) uniquely characterises φ, dP × d[X]-a.e., and ψ, dP × dµ-a.e. For if (η, ρ) is any other solution of (10), then

⟨Y − I_{X,J̃}(η, ρ), Z⟩_{I²_P([X],µ)} = E[ (Y − I_{X,J̃}(η, ρ))(T) Z(T) ] = 0

for all Z ∈ I_{X,J̃}(S). Hence Y − I_{X,J̃}(η, ρ) = 0 P-a.s. in I²_P([X], µ) by density of I_{X,J̃}(S) in I²_P([X], µ). So φ = η, d[X] × dP-a.e., and ψ = ρ, dµ × dP-a.e., and so (φ, ψ) is essentially unique.

Now, for any Y ∈ I²_P([X], µ), let (Y^n)_{n∈N} be a sequence of I_{X,J̃}(S) that converges to Y in I²_P([X], µ). Then

0 = lim_{n→∞} E[ |Y^n(T) − Y(T)|² ]
= lim_{n→∞} E[ | ∫_0^T (∇_X Y^n(t) − φ(t)) dX(t) + ∫_0^T ∫_{R^d_0} (∇_J Y^n(t,z) − ψ(t,z)) J̃(dt dz) |² ]
= lim_{n→∞} E[ ∫_0^T ∫_{R^d_0} |∇_J Y^n(t,z) − ψ(t,z)|² µ(dt dz) ]
+ lim_{n→∞} E[ ∫_0^T |∇_X Y^n(t) − φ(t)|² d[X](t) ]
+ lim_{n→∞} 2 E[ ( ∫_0^T (∇_X Y^n(t) − φ(t)) dX(t) ) · ( ∫_0^T ∫_{R^d_0} (∇_J Y^n(t,z) − ψ(t,z)) J̃(dt dz) ) ],

and the last term is zero by the orthogonality of the continuous and compensated-jump integrals. Consequently, for any approximating sequence (Y^n) converging to Y, (∇_X Y^n, ∇_J Y^n) converges to (φ, ψ). So (∇_X, ∇_J) is closable and we can write (φ, ψ) = (∇^P_X Y, ∇^P_J Y).

3.2 Martingale representation formula

It is known (Jacod-Shiryaev [21], Lemma 4.24, p. 185) that every local martingale (and so any square-integrable martingale) can be written as

Y(t) = Y(0) + ∫_0^t φ(s) dX(s) + ∫_{[0,t]×R^d_0} ψ(s,z) J̃(ds dz) + N(t),    (11)

with N a local martingale orthogonal to X and µ.

Theorem 3.7. Assume that the square-integrable martingale Y is such that N is zero in its above decomposition. Then

Y(t) = Y(0) + ∫_0^t ∇^P_X Y(s) dX(s) + ∫_0^t ∫_{R^d_0} (∇^P_J Y)(s,z) J̃(ds dz),

where ∇^P_J is the closure in L²_P(µ) of the pathwise operator ∇_J introduced in Proposition 2.10 for functionals with a regular functional representation.

Proof. We have that the martingale Y(t) − Y(0) is the sum of two stochastic integrals. So, P-a.s.,

Y(t) − Y(0) = ∫_0^t ∇^P_X (Y − Y(0))(s) dX(s) + ∫_0^t ∫_{R^d_0} ∇^P_J (Y − Y(0))(s,z) J̃(ds dz).

Moreover, Y(0) is a constant, and so ∇^P_X Y(0) = 0 and ∇^P_J Y(0) = 0. The linearity of ∇^P allows us to write

Y(t) = Y(0) + ∫_0^t ∇^P_X Y(s) dX(s) + ∫_0^t ∫_{R^d_0} ∇^P_J Y(s,z) J̃(ds dz).

If the filtration F has the predictable representation property, i.e. every martingale has N ≡ 0 in its decomposition, then we immediately obtain the following corollary.

Corollary 3.8 (Martingale representation formula). Assume that F has the predictable representation property. Then, for any square-integrable F-martingale Y,

Y(t) = Y(0) + ∫_0^t ∇^P_X Y(s) dX(s) + ∫_0^t ∫_{R^d_0} (∇^P_J Y)(s,z) J̃(ds dz),

where ∇^P_J is the closure in L²_P(µ) of the pathwise operator ∇_J introduced in Proposition 2.10 for functionals with a regular functional representation.

Remark 3.9. When taking the closure in L²_P(µ) of ∇_J, one loses the pathwise interpretation. Note that this approach treats the continuous and jump parts in a similar fashion. If the filtration is generated by a continuous martingale X and a jump measure J (with compensator µ), then one can construct the following martingale-generating measure dS on [0,T] × R^d:

dS(t,z) = 1_{z=0} dX(t) + z J̃(dt dz),

and the martingale representation formula can then be rewritten, for any square-integrable martingale, as

Y(t) = Y(0) + ∫_0^t ∫_{R^d} DY(s,z) dS(s,z),

where D = ∇_X 1_{z=0} + (1/z) ∇_{(t,z)} 1_{z≠0}. So D is the limiting quotient operator as z → 0 of the operator (1/z) ∇_{(t,z)}, and the continuous and jump integrands are treated in a similar way.

3.3 Relation with Malliavin calculus on Poisson space

We now compare our functional approach with previous approaches based on Malliavin calculus on Poisson space [4, 15, 26, 30, 34, 35].

As mentioned in the introduction, another approach to Malliavin calculus with jumps is via chaos expansions on the Poisson space (see e.g. Øksendal et al [15]). Here one decomposes a random variable satisfying some integrability conditions as a sum of iterated integrals:

F = Σ_{n=0}^∞ I_n(f_n),

where the I_n are iterated integrals of a (symmetric) function f_n with respect to a Poisson process. The Malliavin derivative is then defined as the operator that brings this decomposition down by one level:

D : D^{1,2} → L²(λ × ν × P),    F ↦ D_{t,z} F := Σ_{n=1}^∞ n I_{n−1}(f_n(·, t, z)),

with D^{1,2} the classical Malliavin-Sobolev space appearing in the Malliavin literature. It was in fact already noted that in the finite-activity case a pathwise interpretation of the chaos expansion operator is possible, as mentioned in Last-Penrose [26]. Løkka [30] extends the results of Nualart-Vives [34] from the Poisson to the Lévy case (see also Petrou [35] for financial applications), showing that this chaos expansion approach extends to the pure-jump Lévy setting and is equivalent to the Picard operator, which consists in putting an extra weight locally on the jump measure; define the annihilation and creation operators ε^− and ε^+ on measures by

ε^−_{t,z} m(A) = m(A ∩ {(t,z)}^c),
ε^+_{t,z} m(A) = ε^−_{t,z} m(A) + 1_A(t,z).

Then, for functionals defined on Ω^g, the space of general measures on [0,T] × R^d, the "Malliavin-type" operator is defined as

D̃_{t,z} : Ω^g × [0,T] × R^d → R,    (F, t, z) ↦ F ∘ ε^+_{t,z} − F.

D̃ is closable in L²([0,T] × Ω), and D̃ = D on D^{1,2}. This "addition of mass" approach through creation and annihilation operators appears in other approaches such as the "lent-particle method" [5].

Alternatively, the chaos expansion operator can be associated with an equivalent perturbation operator that takes the form of a quotient operator rather than a finite-difference one. This is typically the case in the work of León et al [28] or Solé, Utzet and Vives [41], who work in the framework of Lévy processes: for a path ω = ((t_1, z_1), ..., (t_k, z_k), ...), the operator D̂ is defined on the space of square-integrable F such that E[∫ (DF)² ν(dz)] < ∞ as

D̂_{t,z} F(ω) = ( F(ω_{(t,z)}) − F(ω) ) / z,

with ω_{(t,z)} = ((t_1, z_1), ..., (t_k, z_k), (t,z), ...). It is shown that if D̂F is square-integrable, then DF is also well defined and coincides with D̂F.

These approaches, consisting in adding mass to the measure, look in some respects similar to the one we have taken here. There is, however, a fundamental difference: in our pathwise approach, we directly perturb the predictable projection of the process, rather than taking the predictable projection of the perturbed process. Moreover, there is no need to restrict the setting to a Poisson or Lévy space. The relationship between ∇^P and the different operators D, D̃ or D̂ can be summarised as follows; it is a jump counterpart to the Cont-Fournié lifting theorem [10].

Since all the operators defined above give rise to the following type of martingale representation:

Y(t) = Y(0) + ∫_0^t ∫_{R^d_0} ^pE[D_{s,y} Y(T) | F_s] (J − µ)(ds dy)
= Y(0) + ∫_0^t ∫_{R^d_0} ^pE[D̃_{s,y} Y(T) | F_s] (J − µ)(ds dy)
= Y(0) + ∫_0^t ∫_{R^d_0} ^pE[D̂_{s,y} Y(T) | F_s] (J − µ)(ds dy),

the above can be restated as:

Proposition 3.10.

∇^P Y(s,y) = ^pE[D_{s,y} Y(T) | F_s] = ^pE[D̃_{s,y} Y(T) | F_s] = ^pE[D̂_{s,y} Y(T) | F_s],    dP × dµ-a.e.

In other words, ∇^P is the predictable projection of any of the aforementioned Malliavin operators that yield a Clark-Ocone formula. This translates into the following commutative diagram:

I²_P  --(∇^P)-->  L²_P(µ)
↑ (^pE[·|F_s])_{s∈[0,T]}              ↑ (^pE[·|F_s])_{s∈[0,T]}
D^{1,2}  --(D_{t,z})_{t∈[0,T], z∈R^d_0}-->  L²([0,T] × R^d × Ω).

We of course have a similar diagram for D̃ and D̂, with D^{1,2} replaced by their corresponding domains in the bottom-left corner.

4 Examples

4.1 Kunita-Watanabe decomposition

We consider a probability space (Ω, F, P) on which a Brownian motion W and a jump measure J (with compensator µ such that µ(dt dz) = ν(dz) dt) generate the filtration F. We write

X(t) := σW(t) + ∫_0^t ∫_{R_0} z J̃(ds dz),

with J̃ the compensated jump measure. The Kunita-Watanabe decomposition (see e.g. [11]) states that for such a martingale X and a square-integrable Y, there exists a unique Ỹ with

1. Ỹ(·) = E[Y] + ∫_0^· ψ(s) dX(s),
2. E[(Y − Ỹ)M] = 0 for all M = ∫_0^· ξ(s) dX(s).

One can then compute the Kunita-Watanabe decomposition, as in [2], extending it from the Malliavin space D^{1,2} to the whole of I²_P([X], µ). We have

Y(t) = E[Y] + ∫_0^t ∇^P_W Y(s) σ dW(s) + ∫_0^t ∫_{R_0} ∇^P_J Y(s,z) J̃(ds dz)

and

Ỹ(t) = E[Y] + ∫_0^t ψ(s) σ dW(s) + ∫_0^t ∫_{R_0} ψ(s) z J̃(ds dz).

Hence

Y^o := Y − Ỹ = ∫_0^t (∇^P_W Y(s) − ψ(s)) σ dW(s) + ∫_0^t ∫_{R_0} (∇^P_J Y(s,z) − z ψ(s)) J̃(ds dz).

The orthogonality condition entails that for all square-integrable M(·) := ∫_0^· ξ(s) dX(s):

E[Y^o M] = E[ ∫_0^t (∇^P_W Y(s) − ψ(s)) ξ(s) σ² ds + ∫_0^t ∫_{R_0} (∇^P_J Y(s,z) − z ψ(s)) · z ξ(s) µ(ds dz) ]
= E[ ∫_0^t ξ(s) ( (∇^P_W Y(s) − ψ(s)) σ² + ∫_{R_0} z (∇^P_J Y(s,z) − z ψ(s)) ν(dz) ) ds ],

using that ∫_0^t ξ(s) dX(s) is a martingale, the Itô isometries and the orthogonality relations between continuous and pure-jump parts. This implies that

(∇^P_W Y(s) − ψ(s)) σ² + ∫_{R_0} z (∇^P_J Y(s,z) − z ψ(s)) ν(dz) = 0,

and so that

ψ(s) = ( σ² ∇^P_W Y(s) + ∫_{R_0} z ∇^P_J Y(s,z) ν(dz) ) · ( σ² + ∫_{R_0} |z|² ν(dz) )^{−1}.

4.2 Stochastic exponential for pure-jump Lévy processes

In this example, we show how one can recover the SDE satisfied by a stochastic exponential that is a martingale. Consider a probability space (Ω, F, P), where a jump measure J such that ∫_0^T ∫_{R^d_0} |z|² J(ds dz) < ∞ a.s., with absolutely continuous compensator µ, generates the filtration (F_t)_{t∈[0,T]}. As in the previous sections, we write J̃ for the compensated jump measure J − µ. Then we can write the stochastic exponential:

E_t = e^{∫_0^t ∫_{R^d_0} z (J−µ)(ds dz)} Π_{s∈[0,t]} ( 1 + ∫_{R^d_0} z J({s} × dz) ) e^{−∫_{R^d_0} z J({s} × dz)}.

Let us introduce the truncated Doléans-Dade process:

E^n_t = e^{∫_0^t ∫_{(1/n,∞)^d} z (J−µ)(ds dz)} Π_{s∈[0,t]} ( 1 + ∫_{(1/n,∞)^d} z J({s} × dz) ) e^{−∫_{(1/n,∞)^d} z J({s} × dz)}.

Notice that the functional

F^n(t, j_t) = e^{∫_0^t ∫_{(1/n,∞)^d} z (j(ds dz) − µ(ds dz, j_s))} Π_{s∈[0,t]} ( 1 + ∫_{(1/n,∞)^d} z j({s} × dz) ) e^{−∫_{(1/n,∞)^d} z j({s} × dz)}

is well defined, since the compensated integral is simply a Lebesgue-Stieltjes integral. It is straightforward to compute that (∇_J F^n)(t, z, j_t) = z F^n(t, j_{t−}). Now, since E^n_t tends to E_t in I²_P(µ), we also have that

∇_J E^n → ∇^P_J E  as n → ∞

in L²_P(µ). So

∇_J E^n_t = z F^n(t, J_{t−}) → z E_{t−}  as n → ∞

in the L²_P(µ) sense. So, by uniqueness of the integrand in the martingale representation formula,

E_t = 1 + ∫_0^t E_{s−} dX(s),

with X(t) = ∫_0^t ∫_{R^d_0} z J̃(ds dz) a purely discontinuous Lévy martingale. We recover the classical SDE satisfied by E, the stochastic exponential of the martingale X.
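As a concrete check, take d = 1 and ν(dz) = λ δ_1(dz), so that X(t) = N(t) − λt for a Poisson process N with intensity λ. Then

E_t = e^{N(t)−λt} · 2^{N(t)} e^{−N(t)} = 2^{N(t)} e^{−λt},

and indeed ∇_{(t,1)} E_t = 2^{N(t−)+1} e^{−λt} − 2^{N(t−)} e^{−λt} = 2^{N(t−)} e^{−λt} = E_{t−}, consistent with dE_t = E_{t−} dX(t).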

4.3 Application to the Kella-Whitt martingale

Consider, on a probability space (Ω, F, P), a spectrally negative Lévy process

X(t) := γt + ∫_0^t ∫_{(−∞,0)} z J̃(ds dz),

where J is a Poisson random measure with compensator

µ(dt dy) = ν(dy) dt

and J̃ = J − µ. We assume the Lévy measure satisfies ∫_{(−∞,0)} e^{2αx} ν(dx) < ∞, and denote X̄(t) = sup_{s∈[0,t]} X(s).

The Kella-Whitt martingale M associated with X, introduced in [24], appears in queueing theory and in the modelling of storage processes (see Kella-Boxma [23] and Kyprianou [25]). It is defined as

M(t) = ψ(α) ∫_0^t e^{−αZ(s)} ds + 1 − e^{−αZ(t)} − α X̄(t),

where Z(t) := X̄(t) − X(t), α > 0 and

ψ(α) = γα + ∫_{(−∞,0)} (e^{−αx} − 1 − αx 1_{|x|<1}) ν(dx).

Then M is a martingale (see e.g. [25, Ch. 3, Sec. 5]). Moreover,

[M, M]_t = Σ_{0≤s≤t, ΔX(s)≠0} e^{−2αZ(s−)} (1 − e^{αΔX(s)})².

By the hypotheses on ν, this quantity has finite expectation and, using Protter ([36], Cor. 3, p. 73), M is a square-integrable martingale. An application of Corollary 3.8 yields an integral representation formula for this martingale:

Theorem 4.1 (Poisson integral representation of the Kella-Whitt martingale). The Kella-Whitt martingale M has the representation

M(t) = E[M(t)] + ∫_0^t ∫_{(−∞,0)} e^{−α(X̄(s)−X(s))} (1 − e^{αy}) J̃(ds dy).

Proof. Denote

ψ^n(α) = γα + ∫_{(−∞,−1/n)} (e^{−αx} − 1 − αx 1_{|x|<1}) ν(dx)

and

X^n(t) = γt + ∫_0^t ∫_{(−∞,−1/n)} z J̃(ds dz),

and write

M^n(t) := ψ^n(α) ∫_0^t e^{−α(X̄^n(s)−X^n(s))} ds + 1 − e^{−α(X̄^n(t)−X^n(t))} − α X̄^n(t).

So M^n is M without the small jumps. Then M^n is a square-integrable martingale, and M^n converges to M in I²_P(µ), i.e.

E[ |M^n(t) − M(t)|² ] → 0 as n → ∞.

Noticing that M^n has finite variation and is therefore well defined as a pathwise integral, and that, X being spectrally negative, it never reaches its maximum when it jumps, we compute

(∇M^n)(t,z) = e^{−α(X̄^n(t−)−X^n(t−))} (1 − e^{αz}),

which, by the martingale representation formula, yields

M^n(t) = ∫_0^t ∫_{(−∞,0)} e^{−α(X̄^n(s−)−X^n(s−))} (1 − e^{αz}) J̃(ds dz).

To continue: when using functional Itô calculus, pathwise computations, when available, are fairly straightforward. But the price one has to pay is to justify the convergence in I²_P(µ) of an approximating martingale sequence to the desired one. This is the focus of the rest of this example. We have:

E[|M(T) − M^n(T)|²] ≤ C E[ (ψ(α) − ψ^n(α))² ( ∫_0^T e^{−α(X̄(s)−X(s))} ds )² ]    (12)
+ C E[ (ψ^n(α))² ( ∫_0^T e^{−αX̄(s)} (e^{αX(s)} − e^{αX^n(s)}) ds )² ]    (13)
+ C E[ (ψ^n(α))² ( ∫_0^T e^{αX^n(s)} (e^{−αX̄(s)} − e^{−αX̄^n(s)}) ds )² ]    (14)
+ C E[ e^{−2αX̄(T)} (e^{αX(T)} − e^{αX^n(T)})² ]    (15)
+ C E[ e^{2αX^n(T)} (e^{−αX̄(T)} − e^{−αX̄^n(T)})² ]    (16)
+ C α² E[ (X̄(T) − X̄^n(T))² ].    (17)

The constant C comes from expanding the square and using the inequality 2ab ≤ a² + b² on the cross terms. We shall now show that all terms on the right-hand side tend to zero. Results in Dia [16] on small-jump truncation approximations prove useful here, and we use several of them.

Term (17) tends to zero: the proof relies on noticing that the residual X(t) − X^n(t) is a martingale and using Doob's martingale inequality for the sup (see Dia [16], proof of Proposition 2.10).

In term (12), notice that the integrand is always less than 1. Hence

E[ (ψ(α) − ψ^n(α))² ( ∫_0^T e^{−α(X̄(s)−X(s))} ds )² ] ≤ T² E[ (ψ(α) − ψ^n(α))² ] = T² ( ∫_{(−1/n,0)} (e^{−αx} − 1 − αx 1_{|x|<1}) ν(dx) )²,

and the integral is deterministic and tends to zero as n → ∞, so this term tends to zero.

Taking term (15),

E[ e^{−2αX̄(T)} (e^{αX(T)} − e^{αX^n(T)})² ] ≤ e^{−2αX̄(0)} E[ (e^{αX(T)} − e^{αX^n(T)})² ]
= e^{−2αX̄(0)} E[ e^{2αX(T)} + e^{2αX^n(T)} − 2 e^{αX^n(T)} e^{αX(T)} ].

Moreover, by Proposition 2.2 in Dia [16], e^{2αX^n(T)} converges to e^{2αX(T)} in the following norm:

‖e^{2αX(T)} − e^{2αX^n(T)}‖_{L¹} := E[ |e^{2αX(T)} − e^{2αX^n(T)}| ] → 0 as n → ∞.

Also

−E[ e^{αX^n(T)} e^{αX(T)} ] ≤ −E[ e^{αX^n(T)} ] E[ e^{αX(T)} ],

and by the same proposition again, e^{αX^n(T)} → e^{αX(T)} in L¹. Hence, the nonnegative term (15) is bounded from above by a quantity tending to zero.

Concerning term (16),

E[ e^{2αX^n(T)} (e^{−αX̄(T)} − e^{−αX̄^n(T)})² ] ≤ E[ e^{4αX^n(T)} ]^{1/2} E[ (e^{−αX̄(T)} − e^{−αX̄^n(T)})⁴ ]^{1/2}
= e^{2ψ^n(α)T} E[ (e^{−αX̄(T)} − e^{−αX̄^n(T)})⁴ ]^{1/2},

by Cauchy-Schwarz and the definition of the characteristic exponent. Moreover,

E[ (e^{−αX̄(T)} − e^{−αX̄^n(T)})⁴ ]
= E[ e^{−4αX̄(T)} − 4 e^{−3αX̄(T)−αX̄^n(T)} + 6 e^{−2αX̄(T)−2αX̄^n(T)} − 4 e^{−αX̄(T)−3αX̄^n(T)} + e^{−4αX̄^n(T)} ].

Also,

−4 E[ e^{−3αX̄(T)−αX̄^n(T)} ]  and  −4 E[ e^{−αX̄(T)−3αX̄^n(T)} ]

both tend to −4 E[ e^{−4αX̄(T)} ], using Proposition 2.2 in Dia [16] once more. Finally,

6 E[ e^{−2αX̄(T)−2αX̄^n(T)} ] ≤ 3 ( E[ e^{−4αX̄(T)} ] + E[ e^{−4αX̄^n(T)} ] ),

which tends to 6 E[ e^{−4αX̄(T)} ]. Summing up, lim sup_n E[ (e^{−αX̄(T)} − e^{−αX̄^n(T)})⁴ ] ≤ (1 − 4 + 6 − 4 + 1) E[ e^{−4αX̄(T)} ] = 0, and term (16) tends to zero.

Regarding (13), by the mean value theorem,

E[ (ψ^n(α))² ( ∫_0^T e^{−αX̄(s)} (e^{αX(s)} − e^{αX^n(s)}) ds )² ] = (ψ^n(α))² E[ T² ( e^{−αX̄(t₀)} (e^{αX(t₀)} − e^{αX^n(t₀)}) )² ]

for some t₀ in [0,T], and we conclude by the same argument as in term (15). Using the mean value theorem on term (14) in a similar fashion and proceeding as in (16), we conclude that M^n → M in I²_P(µ).

Notice in passing that ∇M^n converges in L²_P(µ) to

∇^P M(t,z) := e^{−α(X̄(t)−X(t))} (1 − e^{αz});

the proof follows exactly the same lines as for terms (13) and (14). This yields the following martingale representation formula for the Kella-Whitt martingale:

M(t) = E[M(t)] + ∫_0^t ∫_{(−∞,0)} e^{−α(X̄(s)−X(s))} (1 − e^{αy}) J̃(ds dy).

4.4 Supremum of a Lévy process

We illustrate the above results by deriving a representation formula for the supremum of a Lévy process. Such representations were obtained by Shiryaev and Yor [40]; their proof relies on the Itô formula. More recently, Rémillard and Renaud [38] provided a derivation of Shiryaev and Yor's result using Malliavin calculus.

Let us introduce the filtered probability space (Ω, F, F, P), where the filtration is generated by a Brownian motion W and a Poisson measure J with compensator µ. Let us then consider X, the square-integrable Lévy process defined by

X(t) = X(0) + µt + σW(t) + ∫_0^t ∫_{(−1,0)∪(0,1)} z J̃(ds dz) + ∫_0^t ∫_{|z|≥1} z J(ds dz).

For T > 0, we are interested in finding a martingale representation for its supremum at T, denoted X̄(T) := sup_{0≤s≤T} X_s.

Theorem 4.2. Let F̄_t(u) = P(X̄(t) > u) denote the tail distribution function of X̄(t). Then

X̄(T) = E[X̄(T)] + ∫_0^T ∇^P_W P(t) dW(t) + ∫_0^T ∫_{R_0} ∇^P_J P(t,z) J̃(dt dz),

with

∇^P_J P(t,z) = ∫_{X̄(t)−X(t)−z}^{X̄(t)−X(t)} F̄_{T−t}(u) du,    ∇^P_W P(t) = σ F̄_{T−t}(X̄(t) − X(t)).

To prove the above theorem, we consider the process

P(t) = E[ X̄(T) | F_t ].

We start from the same point as Shiryaev-Yor [40] and Rémillard-Renaud [38], namely the identity

P(t) = X̄(t) + ∫_{X̄(t)−X(t)}^∞ F̄_{T−t}(u) du.

As in the previous example, we focus first on the computations with a process X^n that corresponds to X with all the jumps of size less than 1/n truncated:

X^n(t) = X^n(0) + µt + σW(t) + ∫_0^t ∫_{1/n<|z|<1} z J̃(ds dz) + ∫_0^t ∫_{|z|≥1} z J(ds dz),

and introduce

P^n(t) := E[ X̄^n(T) | F_t ] = X̄^n(t) + ∫_{X̄^n(t)−X^n(t)}^∞ F̄^n_{T−t}(u) du,

where F̄^n_{T−t}(u) = P(X̄^n(T−t) > u).

One sees that P^n has a functional representation that is not vertically differentiable at the points where X^n reaches its supremum, because the supremum itself is not vertically differentiable at these points. To remedy this, we introduce the Laplace softsup approximation defined below.

Lemma 4.3. For a càdlàg function f, the associated Laplace softsup

L^a(f,t) := (1/a) log( ∫_0^t e^{af(s)} ds )

satisfies

lim_{a→∞} L^a(f,t) = sup_{0≤s≤t} f(s).

Proof. This result can be found for continuous functions in [31, Lemma 7.30]. The proof is similar in the càdlàg case. We have

(1/a) log( ∫_0^t e^{af(s)} ds ) ≤ (1/a) log( t sup_{0≤s≤t} e^{af(s)} ) = sup_{0≤s≤t} f(s) + log(t)/a,    (18)

using the continuity of the exponential and logarithm functions. Letting a → ∞ yields the "≤" inequality.

For the converse inequality, let us consider two cases. In the first case, assume that the supremum is attained at a certain point, i.e. there exists t₀ such that f(t₀) = max_{0≤s≤t} f(s). Then, by right-continuity of f, for any ε > 0 there exists δ > 0 such that, for r ∈ (t₀, t₀+δ), f(r) ≥ f(t₀) − ε. So

(1/a) log( ∫_0^t e^{af(s)} ds ) ≥ (1/a) log( ∫_{t₀}^{t₀+δ} e^{af(s)} ds ) ≥ (1/a) log( ∫_{t₀}^{t₀+δ} e^{a(f(t₀)−ε)} ds ) = f(t₀) − ε + log(δ)/a.

Taking the limit a → ∞ yields the result, as ε is arbitrary. In the second case, where the sup is not attained, the càdlàg property of f entails that there exists t₁ such that

sup_{0≤s≤t} f(s) = lim_{u→t₁, u<t₁} f(u) =: f(t₁−).

Then, for any ε, there exists δ > 0 such that f(r) ≥ f(t₁−) − ε for r ∈ (t₁−δ, t₁), since f has left limits. Using the same computations as in the first case, this yields the result.

Lemma 4.4. For a Lévy process X and any t > 0, one has

lim_{a→∞} E[ |L^a(X,t) − X̄(t)|² ] = 0.

Proof. We have

E[ |L^a(X,t) − X̄(t)|² ] = E[ |L^a(X,t)|² ] + E[ |X̄(t)|² ] − 2 E[ L^a(X,t) X̄(t) ].

Moreover, for a > 1, we have

L^a(X,T) ≤ X̄(T) + log(T)/a ≤ X̄(T) + (log(T))⁺

by Equation (18), entailing, by dominated convergence,

lim_{a→∞} E[ L^a(X,T) X̄(T) ] = E[ X̄(T)² ]

and

lim_{a→∞} E[ |L^a(X,T)|² ] = E[ |X̄(T)|² ],

thus yielding the convergence.
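As a quick check of Lemma 4.3: for the constant path f ≡ c, one gets L^a(f,t) = (1/a) log(t e^{ac}) = c + log(t)/a, so the upper bound (18) is attained and L^a(f,t) → c = sup_{0≤s≤t} f(s) as a → ∞.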

Going back to P^n, we introduce the process Y^{a,n}(t):

Y^{a,n}(t) := L^a(X^n, t) + ∫_{L^a(X^n,t)−X^n(t)}^∞ F̄^n_{T−t}(u) du.

Lemma 4.5. For any t, Y^{a,n}(t) is square-integrable, and

lim_{a→∞} E[ |Y^{a,n}(t) − P^n(t)|² ] = 0.

Proof. The square-integrability of Y^{a,n} stems from the previous lemma, using that L^a(X^n, t) ≤ X̄^n(t) + (log(T))⁺ for a > 1. Now,

E[ |Y^{a,n}(t) − P^n(t)|² ] ≤ 2 E[ |L^a(X^n,t) − X̄^n(t)|² ] + 2 E[ | ∫_{L^a(X^n,t)−X^n(t)}^{X̄^n(t)−X^n(t)} F̄^n_{T−t}(u) du |² ].

We get the inequality

E[ | ∫_{L^a(X^n,t)−X^n(t)}^{X̄^n(t)−X^n(t)} F̄^n_{T−t}(u) du |² ] ≤ E[ |L^a(X^n,t) − X̄^n(t)|² ]

by bounding the integrand by 1 on the left-hand side. So

E[ |Y^{a,n}(t) − P^n(t)|² ] ≤ 4 E[ |L^a(X^n,t) − X̄^n(t)|² ].

Taking the limit a → ∞ and using Lemma 4.4 yields

lim_{a→∞} E[ |Y^{a,n}(t) − P^n(t)|² ] = 0.

For fixed t ∈ [0,T], let us now introduce the following family of square-integrable martingales:

Z^{a,n}(s,t) := E[ Y^{a,n}(t) | F_s ],    s ∈ [0,t].

Lemma 4.6.

lim_{a→∞} E[ |Z^{a,n}(s,t) − E[P^n(t)|F_s]|² ] = 0.

Proof. We have

E[ |Z^{a,n}(s,t) − E[P^n(t)|F_s]|² ] = E[ |E[Y^{a,n}(t) − P^n(t)|F_s]|² ],

which, by Jensen's inequality,

≤ E[ E[ |Y^{a,n}(t) − P^n(t)|² | F_s ] ] = E[ |Y^{a,n}(t) − P^n(t)|² ],

which converges to zero by Lemma 4.5.

In particular, this entails that for every t, lim_{a→∞} Z^{a,n}(t,t) = P^n(t). Let us now compute the functional Itô operators of Z^{a,n}(t,t). At time s = t, Z^{a,n}(t,t) = Y^{a,n}(t), and Y^{a,n} has a pathwise functional representation, allowing us to compute the operators explicitly:

∇_J(Z^{a,n})(t,t,z) = ∫_{L^a(X^n,t)−X^n(t)−z}^{L^a(X^n,t)−X^n(t)} F̄^n_{T−t}(u) du,

and

∇_W(Z^{a,n})(t,t) = lim_{h→0} (1/h) ∫_{L^a(X^n,t)−X^n(t)−σh}^{L^a(X^n,t)−X^n(t)} F̄^n_{T−t}(u) du = σ F̄^n_{T−t}(L^a(X^n,t) − X^n(t)).

In a way similar to Lemma 4.5, we can show that

lim_{a→∞} ∇_J(Z^{a,n})(t,t,z) = ∫_{X̄^n(t)−X^n(t)−z}^{X̄^n(t)−X^n(t)} F̄^n_{T−t}(u) du =: ∇^P_J P^n(t,z),

lim_{a→∞} ∇_W(Z^{a,n})(t,t) = σ F̄^n_{T−t}(X̄^n(t) − X^n(t)) =: ∇^P_W P^n(t),

and these quantities must equate to ∇^P_J P^n and ∇^P_W P^n respectively. We can conclude that, at time T,

P^n(T) = X̄^n(T) = E[P^n(T)] + ∫_0^T ∇^P_W P^n(t) dW(t) + ∫_0^T ∫_{(−∞,−1/n)∪(1/n,∞)} ∇^P_J P^n(t,z) J̃(dt dz),

with the integrands defined as above.

In case the Lévy process has finite activity, we are done, as for n large enough, P^n = P. Otherwise, all that remains to do is to remove the truncation of the small jumps. In this case, however, this is straightforward, as

E[ |P^n(T) − P(T)|² ] = E[ |X̄^n(T) − X̄(T)|² ],

which tends to zero, using the result of Dia [16], p. 11. This yields the convergence of ∇^P_W P^n and ∇^P_J P^n to ∇^P_W P and ∇^P_J P, which we need to compute. As X^n → X a.s., we also have that X̄^n → X̄ a.s., and ∇^P_J P is computable in a straightforward way:

∇^P_J P(t,z) = ∫_{X̄(t)−X(t)−z}^{X̄(t)−X(t)} F̄_{T−t}(u) du.

Regarding ∇^P_W P: if σ = 0 then ∇^P_W P ≡ 0. Assuming σ ≠ 0 from now on, a good candidate is σ F̄_{T−t}(X̄(t) − X(t)). Let us investigate the convergence:

|F̄_{T−t}(X̄(t)−X(t)) − F̄^n_{T−t}(X̄^n(t)−X^n(t))|²
≤ 2 |F̄_{T−t}(X̄(t)−X(t)) − F̄_{T−t}(X̄^n(t)−X^n(t))|² + 2 |F̄_{T−t}(X̄^n(t)−X^n(t)) − F̄^n_{T−t}(X̄^n(t)−X^n(t))|².

Rewriting the first term of the right-hand side, we have

∫_0^T |F̄_{T−t}(X̄(t)−X(t)) − F̄_{T−t}(X̄^n(t)−X^n(t))|² dt
= ∫_0^T |P(X̄(T−t) > X̄(t)−X(t)) − P(X̄(T−t) > X̄^n(t)−X^n(t))|² dt,

which converges to zero by dominated convergence, since X^n → X a.s. As for the second term:

∫_0^T |F̄_{T−t}(X̄^n(t)−X^n(t)) − F̄^n_{T−t}(X̄^n(t)−X^n(t))|² dt
≤ ∫_0^T sup_{x∈R} |P(X̄(T−t) > x) − P(X̄^n(T−t) > x)|² dt,

and this quantity tends to zero by dominated convergence and using Proposition 2.10 in Dia [16].

We thus obtain the following representation of the supremum of a Lévy process:

P(T) = X̄(T) = E[P(T)] + ∫_0^T ∇^P_W P(t) dW(t) + ∫_0^T ∫_{R_0} ∇^P_J P(t,z) J̃(dt dz),

with

∇^P_J P(t,z) = ∫_{X̄(t)−X(t)−z}^{X̄(t)−X(t)} F̄_{T−t}(u) du,    ∇^P_W P(t) = σ F̄_{T−t}(X̄(t) − X(t)).

A A density result

In this section, we prove that the simple random fields are dense in L²_P(µ) for a fairly general measure µ. This is a common result in the continuous case [39] that extends to Lévy compensators using some isometry properties between Hilbert spaces [1]. We extend this result to the case of random and/or time-inhomogeneous compensators. We make the following assumption on the compensator.

Assumption 1. The compensator

µ : B([0,T] × R^d_0) × Ω → R

has the form µ(dt, dz, ω) = ν(t, dz, ω) dt, where µ is finite on every Borel set A × B ∈ B([0,T] × R^d_0) with A ∈ B([0,T]) and 0 ∉ B̄ (B̄ denoting the closure of B).

Under this assumption, one has the following theorem.

Theorem A.1. Let R be the vector space of random fields of the form

ψ(t,z,ω) = Σ_{i,k=1}^{n,m} ψ_{ik}(ω) 1_{(t^n_i(Z_k,ω), t^n_{i+1}(Z_k,ω)]}(t) 1_{Z_k}(z),

where

• K ⊂ R^d_0 is a Borel set such that 0 ∉ K̄,
• the Z_k ⊂ R^d_0 are disjoint Borel sets such that 0 ∉ Z̄_k,
• ψ_{ik} is F_{t^n_i}-measurable, and
• the (t^n_i)_{i=0}^{2^{2n}} are finite stopping times given by

t^n_i(K,ω) = inf{ t ∈ [0,T] : µ([0,t] × K, ω) ≥ i 2^{−n} } ∧ T ∧ inf{ t ∈ [0,T] : µ([0,t] × K, ω) ≥ 2^n }.

Then R is dense in L²_P(µ).

We shall write t^n_i for t^n_i(ω, Z) to alleviate the notation when there is no risk of confusion.

Proof: Consider a predictable random field f : [0,T] × R^d_0 × Ω → R^d, f ∈ L²_P(µ). Let us first assume that f is bounded. Let (Z_j)_{j∈N} be a sequence of sets of R^d_0 with 0 ∉ Z̄_j, such that for all j and all ω ∈ Ω,

µ([0,T] × Z_j) < ∞. This is possible because of the σ-finiteness of µ stemming from Assumption 1.

Remark A.2. Notice that if µ is a predictable measure, then the stopping times defined above are also predictable.

For m, n ≥ 1, define

A_{nm}(f)(t,z) = Σ_{i=1}^{2^{2n}−1} Σ_{k=1}^m 1_{(t^n_i(Z_k,ω), t^n_{i+1}(Z_k,ω)]}(t) 1_{z∈Z_k} ( ∫_{(t^n_{i−1}(Z_k,ω), t^n_i(Z_k,ω)]×Z_k} f(s,y,ω) µ(ds dy, ω) ) / µ((t^n_{i−1}(Z_k,ω), t^n_i(Z_k,ω)] × Z_k, ω),

with the convention that 0/0 = 0, which occurs when t^n_{i−1} = t^n_i.

Lemma A.3. Let f ∈ L²_P(µ), n, m ≥ 1. Then A_{nm}(f) ∈ R and

1. ‖A_{mn}(f)‖_∞ ≤ ‖f‖_∞;
2. ‖A_{mn}(f)‖_{L²_P(µ)} ≤ ‖f‖_{L²_P(µ)}.

Proof. 1. For all ω, there are three cases to be considered:

(a) for z ∉ ∪_{k=1}^m Z_k, A_{mn}(f)(t,z) = 0;
(b) for t ∈ (t^n_{2^{2n}} ∧ T, T], A_{mn}(f)(t,z) = 0 ≤ |f|;
(c) otherwise, for all ω,

A_{mn}(f)(t,z,ω) = ( ∫_{(t_{i*−1}, t_{i*}]×Z_{k*}} f(s,y,ω) µ(ds dy, ω) ) / µ((t_{i*−1}, t_{i*}] × Z_{k*}, ω),

for a unique couple (i*, k*) with i* ∈ 0..2^{2n}−1 and k* ∈ 1..m such that (t,z) ∈ (t_{i*}, t_{i*+1}] × Z_{k*}. That is, A_{mn}(f) is the average of f over (t_{i*−1}, t_{i*}] × Z_{k*}. So, by the very definition of the average, there exists a point (t₀, z₀) ∈ (t_{i*−1}, t_{i*}] × Z_{k*} such that the value of A_{mn}(f) is lower than the value of |f| at that point.

This implies that ‖A_{mn}(f)‖_∞ ≤ ‖f‖_∞.

2. To alleviate the notation, let us write

c_{ik} = ( ∫_{(t^n_{i−1}, t^n_i]×Z_k} f(s,y,ω) µ(ds dy, ω) ) / µ((t^n_{i−1}, t^n_i] × Z_k, ω).

Thus, for all ω,

c²_{ik} ≤ ( ∫_{(t^n_{i−1}, t^n_i]×Z_k} f²(s,y,ω) µ(ds dy, ω) ) / µ((t^n_{i−1}, t^n_i] × Z_k, ω)

by Cauchy-Schwarz. Since all the (t^n_{i−1}(Z_k,ω), t^n_i(Z_k,ω)] × Z_k are disjoint,

A_{mn}(f)² = Σ_{i=1}^{2^{2n}−1} Σ_{k=1}^m c²_{ik} 1_{(t_i, t_{i+1}]}(t) 1_{Z_k}(z).

Integrating,

E[ ∫_{[0,T]×R^d_0} A_{mn}(f)²(s,y,ω) µ(ds dy, ω) ] ≤ E[ Σ_{i=1}^{2^{2n}−1} Σ_{j=1}^m ( µ((t^n_i, t^n_{i+1}] × Z_j) / µ((t^n_{i−1}, t^n_i] × Z_j) ) ∫_{(t^n_{i−1}, t^n_i]×Z_j} f²(s,y,ω) µ(ds dy, ω) ].

Now, we have two cases to consider:

1. for a given k, if t^n_{2^{2n}}(Z_k, ω) < T, then by construction (µ being atomless in time) each increment satisfies µ((t^n_i, t^n_{i+1}] × Z_k, ω) = µ((t^n_{i−1}, t^n_i] × Z_k, ω) = 2^{−n}, so the ratios above all equal 1;

2. otherwise, there exists an index i*(k) such that t^n_{i*(k)} < T = t^n_{i*(k)+1}; for i < i*(k) the increments equal 2^{−n} and the ratios equal 1, the ratio at i = i*(k) is bounded by 1 since µ((t^n_{i*(k)}, T] × Z_k, ω) ≤ 2^{−n}, and the remaining terms vanish by the convention 0/0 = 0.

In any case, we obtain

‖A_{mn}(f)‖²_{L²_P(µ)} ≤ E[ Σ_{i=1}^{2^{2n}−1} Σ_{j=1}^m ∫_{(t^n_{i−1}, t^n_i]×Z_j} f²(s,y,ω) µ(ds dy, ω) ] ≤ ‖f‖²_{L²_P(µ)}.

Remark A.4. We can see why we needed to define the t^n_i as a random partition: it is so that the operator A_{mn} defines a contraction in L²_P(µ). In fact, this is actually the only reason; in the case where µ is a Lévy compensator, for example, the fact that µ is time-homogeneous and independent of ω would allow taking the partition deterministic and Lebesgue-equidistant in time. If µ is deterministic but time-inhomogeneous, then t^n_i may be chosen deterministic and µ-equidistant.

We are now ready to prove:

Lemma A.5. For a bounded function f,

lim_{m,n→∞} ‖A_{mn}(f) − f‖_{L²_P(µ)} = 0.

Proof: We start by introducing the operator

B_{mn}(f)(t,z) := Σ_{i=1}^{2^{2n}} Σ_{k=1}^m 1_{(t^n_{i−1}, t^n_i]}(t) 1_{Z_k}(z) ( ∫_{(t^n_{i−1}, t^n_i]×Z_k} f(s,y,ω) µ(ds dy, ω) ) / µ((t^n_{i−1}(Z_k,ω), t^n_i(Z_k,ω)] × Z_k, ω).

Notice that B_{mn}(f) is not a simple predictable process: it is actually anticipative. However,

(B_{mn}(f)(·, ∗, ω))_{n≥0} is a martingale,

in the sense made precise by the following lemma.

Lemma A.6. Fix ω and define a probability space (Ω′_{ω,m}, F′_{ω,m}, Q_{ω,m}) with

Ω′_{ω,m} = {ω} × [0,T] × ∪_{k=1}^m Z_k,
F′_{ω,m} = { (ω, A × B) : A ∈ B([0,T]), B ∈ B(∪_{k=1}^m Z_k) },
Q_{ω,m}(A × B) = µ(A × B, ω) / µ([0,T] × ∪_{k=1}^m Z_k, ω),

which we equip with the smallest filtration (F^m_n)_{n∈N} that makes the functions

(t,z) ↦ Σ_{i=1}^{2^{2n}} c_i 1_{(t^n_{i−1}, t^n_i]}(t) 1_{Z_k}(z)

measurable for all k in 1..m. Then (B_{mn}(f)(·, ∗, ω))_{n≥0} is an ((F^m_n)_{n∈N}, Q_{ω,m})-martingale.

Proof. Notice that in general f is not F′_{ω,m}-measurable. However, f 1_{∪_{k=1}^m Z_k}(z) is. Then, since conditional expectation is just an orthogonal projection,

E[ f(t,z) 1_{∪_{k=1}^m Z_k}(z) | F^m_n ](t,z) = Σ_{i=1}^{2^{2n}−1} Σ_{k=1}^m 1_{(t^n_{i−1}, t^n_i]}(t) 1_{Z_k}(z) ( ∫_{(t^n_{i−1}, t^n_i]×Z_k} f(s,y,ω) µ(ds dy, ω) ) / µ((t^n_{i−1}, t^n_i] × Z_k, ω) = B_{mn}(f)(t,z).

So (B_{mn}(f))_{n≥0} is indeed an F^m_n-martingale.

Lemma A.7.

∀f ∈ L²_P(µ),    lim_{n→∞} ‖B_{mn}(f) − f 1_{∪_{k=1}^m Z_k}(·)‖_{L²_P(µ)} = 0.

Proof. For fixed ω, {B_{mn}(f)(·, ∗, ω), n ≥ 0} being a martingale (by Lemma A.6) allows us to apply the L²(Q)-bounded martingale convergence theorem to conclude that B_{mn}(f)(·, ∗, ω) converges in L²(Q), except perhaps on a µ-null set. We denote this limit B_{m∞}(f). Moreover, since f is bounded, dominated convergence gives

lim_{n→∞} ∫_{A×B} B_{mn}(f)(s,y,ω) µ(ds dy, ω) = ∫_{A×B} B_{m∞}(f)(s,y,ω) µ(ds dy, ω)

for all A × B ∈ B([0,T] × ∪_{k=1}^m Z_k). By definition of the operator B_{mn}(f), we know that

∫_{A×B} B_{mn}(f)(s,y,ω) µ(ds dy, ω) = ∫_{A×B} f(s,y,ω) µ(ds dy, ω)

for all A × B such that (ω, A × B) ∈ F^m_a (a ∈ N) and all n ≥ a. Lévy's zero-one law gives B_{m∞} = f 1_{∪_{k=1}^m Z_k}, dµ(·, ω)-a.e. Finally, by dominated convergence,

lim_{n→∞} ∫_{[0,T]×∪_{k=1}^m Z_k} |B_{mn}(f)(s,y,ω) − f(s,y,ω) 1_{∪_{k=1}^m Z_k}(z)|² µ(ds dy, ω) = 0.

The result follows by applying a dominated convergence criterion in ω.

Lemma A.8. Let f be bounded and L²_P(µ)-integrable. For any m and any fixed l,

lim_{n→∞} A_{mn}(B_{ml}(f))(t,z,ω) = B_{ml}(f)(t,z,ω),    dP × dµ-a.e.

Proof. For that purpose, we expand A_{mn}(B_{ml}(f)), with n ≥ l:

A_{mn}(B_{ml}(f))(t,z) = Σ_{i=1}^{2^{2n}−1} Σ_{k=1}^m Σ_{p=1}^{2^{2l}} Σ_{q=1}^m ( 1_{(t^n_i, t^n_{i+1}]}(t) 1_{Z_k}(z) / ( µ((t^n_{i−1}, t^n_i] × Z_k) · µ((t^l_{p−1}, t^l_p] × Z_q) ) ) · µ( ((t^n_{i−1}, t^n_i] × Z_k) ∩ ((t^l_{p−1}, t^l_p] × Z_q) ) · ∫_{(t^l_{p−1}, t^l_p]×Z_q} f(s,y,ω) µ(ds dy, ω).    (19)

We now note two things: first, for q ≠ k,

µ( ((t^n_{i−1}, t^n_i] × Z_k) ∩ ((t^l_{p−1}, t^l_p] × Z_q) ) = 0.

Moreover, recall that the time grid is refining. This means that, since n ≥ l, the (t^l_p)_{1≤p≤2^{2l}} form a subset of the (t^n_i)_{1≤i≤2^{2n}}. So

µ( ((t^n_{i−1}, t^n_i] × Z_k) ∩ ((t^l_{p−1}, t^l_p] × Z_k) )

is either equal to µ((t^n_{i−1}, t^n_i] × Z_k) or zero. So (19) equals

Σ_{i=1}^{2^{2n}−1} Σ_{k=1}^m Σ_{p=1}^{2^{2l}} 1_{(t^n_i, t^n_{i+1}]}(t) 1_{Z_k}(z) 1_{{(t^n_{i−1}, t^n_i] ⊂ (t^l_{p−1}, t^l_p]}} ( ∫_{(t^l_{p−1}, t^l_p]×Z_k} f(s,y,ω) µ(ds dy, ω) ) / µ((t^l_{p−1}, t^l_p] × Z_k).

By the above, this is almost B_{ml}(f): away from the level-l grid points, A_{mn}(B_{ml}(f)) coincides with B_{ml}(f), while on the (vanishing, as n → ∞) bands following the grid points it equals B_{ml}(f) evaluated one level-n time step earlier. We note two things. First,

lim_{n→∞} A_{mn}(B_{ml}(f))(t,z,ω) = B_{ml}(f)(t,z,ω)    (20)

for all (t,z,ω) such that t ≠ t^l_i, i ∈ 0..2^{2l}−1. Second, for n ≥ l,

|A_{mn}(B_{ml}(f))(t,z,ω)| ≤ |B_{ml}(f)(t,z,ω)| + |B_{ml}(f)(t − 2^{−n}T, z, ω)|.    (21)

By the triangle inequality, we have

|A_{mn}(B_{ml}(f))(t,z,ω) − B_{ml}(f)(t,z,ω)| ≤ |A_{mn}(B_{ml}(f))(t,z,ω)| + |B_{ml}(f)(t,z,ω)|.

Squaring both sides, integrating with respect to P × µ, and using Equation (21), we see that the left-hand side is dominated by a fixed integrable function. By dominated convergence once more, we finally obtain

lim_{n→∞} ‖A_{mn}(B_{ml}(f)) − B_{ml}(f)‖_{L²_P(µ)} = 0.    (22)

Proof (of Lemma A.5). For f bounded, we are now ready to prove that lim_{m,n→∞} ‖A_{mn}(f) − f‖_{L²_P(µ)} = 0. We have

‖A_{mn}(f) − f‖_{L²_P(µ)} ≤ ‖A_{mn}(f − f 1_{·∈∪_{k=1}^m Z_k})‖_{L²_P(µ)} + ‖A_{mn}(f 1_{·∈∪_{k=1}^m Z_k} − B_{ml}(f))‖_{L²_P(µ)} + ‖A_{mn}(B_{ml}(f)) − f‖_{L²_P(µ)}
≤ ‖f − f 1_{·∈∪_{k=1}^m Z_k}‖_{L²_P(µ)} + ‖f 1_{·∈∪_{k=1}^m Z_k} − B_{ml}(f)‖_{L²_P(µ)} + ‖A_{mn}(B_{ml}(f)) − f‖_{L²_P(µ)},

as A_{mn} is a contraction. So, by Lemmas A.7 and A.8, for all l,

lim sup_{n→∞} ‖A_{mn}(f) − f‖_{L²_P(µ)} ≤ 2 ‖f − f 1_{·∈∪_{k=1}^m Z_k}‖_{L²_P(µ)} + 2 ‖f 1_{·∈∪_{k=1}^m Z_k} − B_{ml}(f)‖_{L²_P(µ)}.

Now, since l is arbitrary and B_{ml}(f) → f 1_{·∈∪_{k=1}^m Z_k} as l → ∞, the previous expression yields

lim sup_{n→∞} ‖A_{mn}(f) − f‖_{L²_P(µ)} ≤ 2 ‖f − f 1_{·∈∪_{k=1}^m Z_k}‖_{L²_P(µ)}.

This holds for any m. Letting m → ∞, dominated convergence gives

‖f − f 1_{·∈∪_{k=1}^m Z_k}‖_{L²_P(µ)} → 0,

which concludes the proof for f bounded.

Lemma A.5 shows that any bounded f ∈ L²_P(µ) may be approximated by elements of R. To finish the proof of Theorem A.1, we consider an unbounded f. We can approximate f by the bounded functions

f^n(s,y,ω) = f(s,y,ω) 1_{|f(s,y,ω)|≤n}.

We now prove that f^n converges to f pointwise (P ∘ µ)-a.e.:

P ∘ µ( ∪_{ε∈R⁺∩Q} ∩_{n₀∈N*} ∪_{n≥n₀} { (t,z,ω) : |f^n(t,z,ω) − f(t,z,ω)| > ε } )
= P ∘ µ( ∩_{n₀∈N} ∪_{n≥n₀} { (t,z,ω) : |f(t,z,ω)| > n } )
= P ∘ µ( ∩_{n₀≥1} ∪_{n≥n₀} { (t,z,ω) : |f(t,z,ω)| > n } ).

On the other hand,

Σ_{n≥1} (P ∘ µ)( |f(t,z,ω)| > n ) ≤ Σ_{n=1}^∞ ‖f‖²_{L²_P(µ)} / n²,

using the Chebyshev-Markov inequality. In particular, the series converges. This implies that

inf_{N≥1} Σ_{n≥N} (P ∘ µ)( |f(t,z,ω)| > n ) = 0.

Hence

P ∘ µ( ∩_{n₀≥1} ∪_{n≥n₀} { |f(t,z,ω)| > n } ) ≤ inf_{N≥1} (P ∘ µ)( ∪_{n≥N} { |f(t,z,ω)| > n } ) ≤ inf_{N≥1} Σ_{n≥N} (P ∘ µ)( |f(t,z,ω)| > n ) = 0.

So f^n → f, (P ∘ µ)-a.e., and by dominated convergence f^n → f in L²_P(µ), which completes the proof of Theorem A.1.

Remark A.9. The proof also applies to the case where µ is a point mass at zero, i.e.

m(ds dy, ω) = δ_0(dy) n({s}) ds,

which then makes it possible to show the density of processes of the form

φ(t,ω) = Σ_{i=0}^I φ_i(ω) 1_{(t_i(ω), t_{i+1}(ω)]}(t)

in L²_P(Leb([0,T])), with φ_i F_{t_i}-measurable and the t_i some stopping times.

References

[1] D. Applebaum, Lévy Processes and Stochastic Calculus, Cambridge Studies in Advanced Mathematics, Cambridge University Press, 2004.
[2] F. Benth, G. Di Nunno, A. Løkka, B. Øksendal, and F. Proske, Explicit representation of the minimal variance portfolio in markets driven by Lévy processes, Mathematical Finance, 13 (2003), pp. 55–72.
[3] K. Bichteler, J. Gravereaux, and J. Jacod, Malliavin Calculus for Processes with Jumps, Stochastics Monographs, Gordon and Breach Science Publishers, 1987.
[4] J.-M. Bismut, Calcul des variations stochastique et processus de sauts, Probability Theory and Related Fields, 63 (1983), pp. 147–235.
[5] N. Bouleau and L. Denis, Dirichlet forms for Poisson measures and Lévy processes: the lent particle method, in Stochastic Analysis with Financial Applications, A. Kohatsu-Higa, N. Privault, and S.-J. Sheu, eds., vol. 65 of Progress in Probability, Springer Basel, 2011, pp. 3–20.
[6] E. Carlen and E. Pardoux, Differential calculus and integration by parts on Poisson space, in Stochastics, Algebra and Analysis in Classical and Quantum Dynamics, S. Albeverio, P. Blanchard, and D. Testard, eds., vol. 59 of Mathematics and Its Applications, Springer Netherlands, 1990, pp. 63–73.
[7] F. Confortola, M. Fuhrman, and J. Jacod, Backward stochastic differential equation driven by a marked point process: an elementary approach with an application to optimal control, Annals of Applied Probability, 26 (2016), pp. 1743–1773.
[8] R. Cont, Functional Itô calculus and functional Kolmogorov equations, in Stochastic Integration by Parts and Functional Itô Calculus (Barcelona Summer School in Stochastic Analysis, 2012), Advanced Courses in Mathematics - CRM Barcelona, Birkhäuser, 2012, pp. 115–204.
[9] R. Cont and D. Fournié, A functional extension of the Itô formula, Comptes Rendus Mathématique de l'Académie des Sciences, 348 (2010), pp. 57–61.
[10] R. Cont and D. Fournié, Functional Itô calculus and stochastic integral representation of martingales, Annals of Probability, 41 (2013), pp. 109–133.
[11] R. Cont and P. Tankov, Financial Modelling with Jump Processes, Chapman & Hall/CRC Financial Mathematics Series, Chapman & Hall/CRC, 2004.
[12] H. Crauel, Random Probability Measures on Polish Spaces, Stochastics Monographs, CRC Press, 2003.
[13] M. H. Davis, The representation of martingales of jump processes, SIAM Journal on Control and Optimization, 14 (1976), pp. 623–638.
[14] M. H. A. Davis, Martingale representation and all that, in Advances in Control, Communication Networks, and Transportation Systems, Systems Control Found. Appl., Birkhäuser Boston, Boston, MA, 2005, pp. 57–68.
[15] G. Di Nunno, B. Øksendal, and F. Proske, Malliavin Calculus for Lévy Processes with Applications to Finance, Springer, 2009.
[16] E. Dia, Error bounds for small jumps of Lévy processes, Advances in Applied Probability, 45 (2013), pp. 86–105.
[17] B. Dupire, Functional Itô calculus, technical report, Bloomberg L.P., 2009.
[18] Y. Ishikawa, Stochastic Calculus of Variations for Jump Processes, vol. 54 of De Gruyter Studies in Mathematics, De Gruyter, 2013.
[19] J. Jacod, Multivariate point processes: predictable projection, Radon-Nikodym derivatives, representation of martingales, Z. Wahrscheinlichkeitstheorie verw. Gebiete, 31 (1975), pp. 235–253.
[20] J. Jacod, S. Méléard, and P. Protter, Explicit form and robustness of martingale representations, Annals of Probability, 28 (2000), pp. 1747–1780.
[21] J. Jacod and A. Shiryaev, Limit Theorems for Stochastic Processes, vol. 288, Springer Science & Business Media, 2013.
[22] O. Kallenberg, Foundations of Modern Probability, Applied Probability, Springer, 2002.
[23] O. Kella and O. Boxma, Useful martingales for stochastic storage processes with Lévy-type input, Journal of Applied Probability, 50 (2013), pp. 439–449.
[24] O. Kella and W. Whitt, Useful martingales for stochastic storage processes with Lévy input, Journal of Applied Probability, (1992), pp. 396–403.
[25] A. Kyprianou, Fluctuations of Lévy Processes with Applications: Introductory Lectures, Universitext, Springer, 2014.
[26] G. Last and M. Penrose, Martingale representation for Poisson processes with applications to minimal variance hedging, Stochastic Processes and their Applications, 121 (2011), pp. 1588–1606.
[27] J. León, J. Solé, F. Utzet, and J. Vives, Local Malliavin calculus for Lévy processes and applications, Stochastics: An International Journal of Probability and Stochastic Processes, 86 (2014), pp. 551–572.
[28] J. A. León, J. L. Solé, F. Utzet, and J. Vives, On Lévy processes, Malliavin calculus and market models with jumps, Finance and Stochastics, 6 (2002), pp. 197–225.
[29] J. A. León and C. Tudor, A chaos approach to the anticipating calculus for the Poisson process, Stochastics and Stochastics Reports, 62 (1998), pp. 217–250.
[30] A. Løkka, Martingale representation of functionals of Lévy processes, Stochastic Analysis and Applications, 22 (2004), pp. 867–892.
[31] P. Mörters and Y. Peres, Brownian Motion, vol. 30, Cambridge University Press, 2010.
[32] R. Navarro and F. Viens, analysis for the canonical Lévy process, Communications on Stochastic Analysis, 9 (2013), pp. 553–577.
[33] D. Nualart, The Malliavin Calculus and Related Topics, 2nd ed., Probability and Its Applications, Springer, 2006.
[34] D. Nualart and J. Vives, Anticipative calculus for the Poisson process based on the Fock space, in Séminaire de Probabilités XXIV 1988/89, J. Azéma, M. Yor, and P.-A. Meyer, eds., vol. 1426 of Lecture Notes in Mathematics, Springer Berlin Heidelberg, 1990, pp. 155–165.
[35] E. Petrou, Malliavin calculus in Lévy spaces and applications to finance, Electronic Journal of Probability, 13 (2008), pp. 852–879.
[36] P. Protter, Stochastic Integration and Differential Equations, 2nd ed., Applications of Mathematics, Springer, 2004.
[37] B. Rajeev, Martingale representation for functionals of Lévy processes, Sankhya, 77 (2015), pp. 277–299.
[38] B. Rémillard and J.-F. Renaud, A martingale representation for the maximum of a Lévy process, Communications on Stochastic Analysis, 5 (2011), pp. 683–688.
[39] D. Revuz and M. Yor, Continuous Martingales and Brownian Motion, Grundlehren der mathematischen Wissenschaften, Springer, 1999.
[40] A. Shiryaev and M. Yor, On the problem of stochastic integral representations of functionals of the Brownian motion. I, Theory of Probability & Its Applications, 48 (2004), pp. 304–313.
[41] J. Solé, F. Utzet, and J. Vives, Canonical Lévy process and Malliavin calculus, Stochastic Processes and their Applications, 117 (2007), pp. 165–187.
[42] R. Suzuki, A Clark-Ocone type formula under change of measure for Lévy processes with L² Lévy measure, Communications on Stochastic Analysis, 7 (2013), pp. 383–407.
[43] L. Wu, Construction de l'opérateur de Malliavin sur l'espace de Poisson (Construction of the Malliavin operator on the Poisson space), in Séminaire de Probabilités XXI, Lecture Notes in Mathematics 1247, 1987, pp. 100–113.
[44] X. Zhang, Clark-Ocone formula and variational representation for Poisson functionals, Annals of Probability, 37 (2009), pp. 506–529.
