Malliavin calculus for complex Asian options in a jump diffusion setting

Lucia Caramellino and Valerio Marchisio Dept. of Mathematics, University of Rome-Tor Vergata

Abstract. We deal with a unifying Malliavin calculus in a jump diffusion context and develop an integration by parts formula, which gives the starting point of our work. The results are then applied to study representation formulas for the delta of complex Asian options, that is, options written on both components of the pair $Z_t=(X_t,Y_t)$, $X_t$ standing for a jump diffusion process and $Y_t=\int_0^t X_r\,dr$. Several examples are detailed and equipped with numerical studies.
Keywords: Malliavin calculus, Asian options, Monte Carlo methods.
2000 MSC: 60H07, 65C05, 91B28.
Corresponding author: Lucia Caramellino, Dip. di Matematica, Università di Roma-Tor Vergata, Via della Ricerca Scientifica 1, I-00133 Roma, Italy; [email protected]

Contents

1 Introduction

2 Malliavin calculus
2.1 Malliavin calculus in the jump noise direction
2.2 Malliavin calculus in the Gaussian direction
2.3 Malliavin derivatives for jump diffusions
2.4 Unifying Malliavin calculus

3 Sensitivity analysis for complex Asian options
3.1 Gaussian direction
3.2 Jump amplitudes direction
3.3 Jump times direction
3.4 Joint Gaussian and jump amplitudes direction
3.5 Joint jump amplitudes and times direction

4 Examples and numerical results
4.1 Black-Scholes-Merton model
4.2 A jump diffusion stochastic model
4.3 Ornstein-Uhlenbeck model with jumps
4.4 CIR model with jumps

5 Figures

References

1 Introduction

On a probability space $(\Omega,\mathcal F,\mathbb P)$, let a standard Brownian motion $W$ and a compound Poisson process $N$ be given. As for the latter, the jump times are given by a sequence $\{T_i\}_{i\ge1}$ of positive r.v.'s such that $T_1, T_2-T_1, T_3-T_2,\dots$ are i.i.d. with exponential law of parameter $\lambda>0$, and we denote by $J$ the associated Poisson process. As for the jump amplitudes, they are modelled by a sequence $\{\Delta_i\}_{i\ge1}$ of i.i.d. r.v.'s which are supposed to have a probability density function (pdf). For the sake of simplicity, we assume to be in a one-dimensional framework (the general case gives no technical difficulties but makes the notations much more complicated). Let $b=b(t,x)$, $\sigma=\sigma(t,x)$ and $c=c(t,a,x)$ denote functions smooth enough to guarantee the existence and uniqueness of the jump diffusion process

$X_t = x + \int_0^t b(r,X_r)\,dr + \int_0^t \sigma(r,X_r)\,dW_r + \sum_{j=1}^{J_t} c(T_j,\Delta_j,X_{T_j^-}), \qquad t\le T.$

We define the pair $Z_t=(X_t,Y_t)$, $t\in[0,T]$, in which

$Y_t = \int_0^t X_r\,dr.$

This is our setting and we deal with the "sensitivity problem", i.e. the study of

$\partial_\vartheta E(f(X_T,Y_T))$, where $\vartheta$ denotes a parameter on which the pair $(X_T,Y_T)$ depends. Here, we consider $\vartheta=x$: $\partial_x E(f(X_T,Y_T))$ gives the dependence of $E(f(X_T,Y_T))$ on the initial datum $x$. Such a quantity plays a crucial role in Finance, since it allows one to hedge the (European) option with payoff $f(X_T,Y_T)$. The standard Asian option case, when $f(X_T,Y_T)\equiv f(Y_T)$, has been widely studied. But in the literature there are very few theoretical results, typically difficult to use in practice, regarding the pair $(X_T,Y_T)$, which is needed to handle the so-called "complex" Asian options. Notice that this is not a trivial complication: the pair solves a degenerate jump diffusion equation and some standard generalizations cannot be applied. The main technical tool used here to tackle this problem is Malliavin calculus and, in this framework, to our knowledge only Benhamou [5] considers both the components $X$ and $Y$. As for the generalities on Malliavin calculus, there are many references, e.g. the classical books of Nualart [18] or of Bichteler, Gravereaux and Jacod [8], even if we mostly use the approach by Bally [2] and Bally, Bavouzet-Morel and Messaoud [3]; for a general guidance in Finance, let us cite also Malliavin and Thalmaier [17]. As for the application to the sensitivity problem, the literature is by now really huge. For example, the seminal papers are definitely Fournié, Lasry, Lebuchoux, Lions and Touzi [14] and Fournié, Lasry, Lebuchoux and Lions [15]; a Gaussian Malliavin calculus in the presence of jumps has been developed by Davis and Johansson [10] and Forster, Lütkebohmert and Teichmann [12]; as for the jump noise, we recall the recent papers by Bavouzet-Morel and Messaoud [4], El-Khatib and Privault [11], Privault and Wei [19], [20] and Vives, León, Utzet and Solé [21]. Here, in Section 2 we develop an integration by parts formula for functionals of the Brownian motion, the jump times and the jump amplitudes which allows us to get rid

of the derivative of the payoff, i.e. to state a formula of the type

$\partial_x E\big(f(X_T,Y_T)1_{\{J_T=n\}}\big) = E\big(f(X_T,Y_T)\Theta_T 1_{\{J_T=n\}}\big) + E\big([f(X_T,Y_T),w]_\pi 1_{\{J_T=n\}}\big)$   (1)

where $\Theta_T$ represents a so-called "Malliavin weight", which is typically given by the Skorohod integral of a suitable process, and $[\cdot,\cdot]_\pi$ stands for a "border term operator", acting on the border points of the support of the conditional law of the jump times and amplitudes given that $\{J_T=n\}$, the subscript $\pi$ being linked to a set of weights connected to a scalar product allowing one to develop the Malliavin calculus in the direction of the jump noise. For practical purposes, (1) is particularly useful for setting up Monte Carlo methods for the computation of the Delta. The standard techniques (as in the references already quoted) allow one to determine the weight $\Theta_T$ but, also because of the degeneracy of the pair $(X_t,Y_t)$, the expression unfortunately turns out to be heavy and really difficult to use in practice. Therefore, in Section 3 we use the generalized integration by parts formula developed in Section 2 in order to reduce the complicated structure of $\Theta_T$. The main idea has been to make use of some auxiliary processes giving a triangular generalized Malliavin covariance matrix, bringing significant simplifications and making $\Theta_T$ easy to implement in practice. Another idea has concerned the treatment of the jump times. Usually, the above briefly mentioned $\pi_i$'s are chosen in order to nullify the border term operator. Here, the case $\pi_i=1$ for any $i$ has been considered for the sake of simplification but, at the same time, the border term operator w.r.t. the jump times is then non null and has to be taken into account in practice. And actually, the resulting weights become clean and really well suited to practical use. In Section 4, we deal with a number of models of interest in Finance, each of them equipped with a numerical study on the Monte Carlo computation of the Delta Greek for some options, including complex Asian types.
The comparison with closed form solutions and/or with previously studied methods shows the good behaviour of our Malliavin weights, which is especially significant with regard to the choice of a non null border term operator.

2 Malliavin calculus

We first introduce the notations and the main concepts of Poisson and Gaussian Malliavin calculus, allowing us to obtain the integration by parts formula we are interested in.

2.1 Malliavin calculus in the jump noise direction

This section follows the paper of Bally, Bavouzet-Morel and Messaoud [3], where all proofs can be found. Let $(\Omega,\mathcal F,\mathbb P)$ denote the underlying probability space. Let $\{\Delta_i\}_{i\ge1}$ denote a sequence of i.i.d. absolutely continuous r.v.'s on $\mathbb R$ (the jump amplitudes). From now on, $g$ denotes their common pdf.

Assumption 2.1. There exist $-\infty\le a_1<b_1<a_2<b_2<\cdots<a_k<b_k\le+\infty$ such that $g>0$ on $I=\bigcup_{j=1}^k(a_j,b_j)$ and $g\equiv0$ otherwise. Moreover, $\log g$ is continuously differentiable on $I$ and both $\Delta_i$ and $\log g(\Delta_i)$ belong to $L^p$ for any $p$.

Let {Ti}i≥1 be a sequence of positive r.v.’s (the jump-times) such that T1,T2 −T1,T3 − T2,... are i.i.d. r.v.’s exponentially distributed with parameter λ > 0. We set T0 = 0.

We denote by $J_t$ the associated counting process, i.e. $J_t=\sum_{i\ge1}1_{\{T_i\le t\}}$, and set $\mathcal J_t=\sigma\{J_s\,;\,s\le t\}$. For a fixed $t>0$, the conditional law of $T_1,\dots,T_n$ given that $\{J_t=n\}$ has a density on $\mathbb R^n$, given by the uniform law on the simplex $\{(t_1,\dots,t_n)\,:\,0<t_1<\cdots<t_n<t\}$ (see e.g. Bertoin [7]) and, for $i=1,\dots,n$, the conditional law of $T_i$ given $\{J_t=n\}$ and the remaining jump times is uniform on $[T_{i-1},T_{i+1}]$, with pdf

$p_i(\omega,r) = \frac{1}{T_{i+1}(\omega)-T_{i-1}(\omega)}\,1_{\{T_{i-1}(\omega)<r<T_{i+1}(\omega)\}}, \qquad \omega\in\{J_t=n\}$   (2)

(with the conventions $T_0=0$ and $T_{n+1}=t$).
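For numerical purposes, the conditional law just described can be sampled directly: given $\{J_t=n\}$, the vector $(T_1,\dots,T_n)$ has the law of the order statistics of $n$ i.i.d. uniform r.v.'s on $[0,t]$. A minimal sketch (the function names are ours, for illustration only):

```python
import random

def sample_jump_times(n, t, rng=random):
    """Sample (T_1, ..., T_n) from the conditional law given {J_t = n}:
    the order statistics of n i.i.d. uniform random variables on [0, t]."""
    return sorted(rng.uniform(0.0, t) for _ in range(n))

def marginal_interval(times, i, t):
    """Support (T_{i-1}, T_{i+1}) of the conditional marginal of T_i
    (1-based index i), with the conventions T_0 = 0 and T_{n+1} = t."""
    lo = times[i - 2] if i >= 2 else 0.0
    hi = times[i] if i < len(times) else t
    return lo, hi
```

Such a sampler, together with a draw of the amplitudes $\Delta_1,\dots,\Delta_n$, is the basic building block of Monte Carlo experiments conditioning on the number of jumps.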

Assumption 2.2. {Ti}i≥1 and {∆i}i≥1 are independent.

For our purposes, we add a standard Brownian motion $W$, independent of $\{T_i\}_{i\ge1}$ and $\{\Delta_i\}_{i\ge1}$, and denote by $(\mathcal W_s)_{s\ge0}$ the associated filtration, augmented by the $\mathbb P$-null sets.

The differential operators regarding the jump noise will work on the event $\{J_t=n\}$, so we fix once and for all $t>0$ and $n\ge1$. We start by introducing some functions $\pi_j$ that play the role of weights appearing in a suitable scalar product, allowing us to get a duality relationship between the Malliavin derivative and the Skorohod integral we are going to define. For $i=1,\dots,n$, set $\mathcal G_i=\mathcal W_t\vee\sigma(\{1_{\{J_t=n\}},\Delta_1,\dots,\Delta_n,T_1,\dots,T_n\}\setminus\{\Delta_i\})$. Let $\pi_i:\Omega\times\mathbb R\to\mathbb R_+$ be a $\mathcal G_i\times\mathcal B(\mathbb R)$-measurable function such that $\pi_i(\omega,y)=0$ when $y\notin I$ and $\pi_i(\omega,\cdot)\in C^1(I)$ a.s. Similarly, set $\mathcal G_{i+n}=\mathcal W_t\vee\sigma(\{1_{\{J_t=n\}},\Delta_1,\dots,\Delta_n,T_1,\dots,T_n\}\setminus\{T_i\})$, and let $\pi_{i+n}:\Omega\times\mathbb R_+\to\mathbb R_+$ be a $\mathcal G_{i+n}\times\mathcal B(\mathbb R_+)$-measurable function such that a.s. $\pi_{i+n}(\omega,r)=0$ when $r\notin(T_{i-1}(\omega),T_{i+1}(\omega))$ and $\pi_{i+n}(\omega,\cdot)\in C^1((T_{i-1}(\omega),T_{i+1}(\omega)))$ (as for $\pi_{2n}$, we use the convention $T_{n+1}=t$). Later on, we need the following

Assumption 2.3. For $i=1,\dots,n$, $\pi_i(\omega,\Delta_i),\pi_{i+n}(\omega,T_i)\in L^p(\Omega)$ for any $p$ and there exists $\delta>0$ such that $\partial_y\pi_i(\omega,\Delta_i)1_{\{J_t=n\}},\ \partial_r\pi_{i+n}(\omega,T_i)1_{\{J_t=n\}}\in L^{1+\delta}(\Omega)$.

We can now pass to discuss the functional spaces we will work on. For $f:\Omega\times\mathbb R^n\times\mathbb R_+^n\to\mathbb R$, let us set the following operators: for $i=1,\dots,n$,

$T_i(f)(\omega,y)=f\big(\omega,\Delta_1,\dots,\Delta_{i-1},y,\Delta_{i+1},\dots,\Delta_n,T_1,\dots,T_n\big)$   (3)
$T_{i+n}(f)(\omega,r)=f\big(\omega,\Delta_1,\dots,\Delta_n,T_1,\dots,T_{i-1},r,T_{i+1},\dots,T_n\big)$   (4)

Then, we denote by $C^k_{n,t}(\Delta)$ and $C^k_{n,t}(T)$ the sets of the $\mathcal W_t\times\mathcal B(\mathbb R^n)\times\mathcal B(\mathbb R_+^n)$-measurable functions $f$ such that a.s., for any $i=1,\dots,n$, $T_i(f)(\omega,\cdot)\in C^k(I)$ and $T_{i+n}(f)(\omega,\cdot)\in C^k((T_{i-1}(\omega),T_{i+1}(\omega)))$ respectively (recall that $I$ is defined in Assumption 2.1 and in $T_{2n}$ we use the convention $T_{n+1}=t$). We also set $C^k_{n,t}=C^k_{n,t}(\Delta)\cap C^k_{n,t}(T)$. We can now define the simple functionals and the simple processes. The sets $S^k_{n,t}(\Delta)$ and $S^k_{n,t}(T)$ of the simple functionals of length $n$ up to time $t$ w.r.t. the jump amplitudes and the jump times are given by the r.v.'s on $\mathbb R$ of the form

$F=f(\omega,\Delta_1,\dots,\Delta_n,T_1,\dots,T_n)$, for $f\in C^k_{n,t}(\Delta)$ and $f\in C^k_{n,t}(T)$ respectively.

$S^k_{n,t}=S^k_{n,t}(\Delta)\cap S^k_{n,t}(T)$ denotes the set of the simple functionals of length $n$ up to time $t$. Furthermore, the sets $P^k_{n,t}(\Delta)$ and $P^k_{n,t}(T)$ of the simple processes of length $n$ up to time $t$ w.r.t. the jump amplitudes and the jump times are given by the r.v.'s on

$\mathbb R^n$ of the form $U^\Delta=(U_1,\dots,U_n)$ and $U^T=(U_{n+1},\dots,U_{2n})$ respectively, where, for $i=1,\dots,n$,

$U_i=u_i(\omega,\Delta_1,\dots,\Delta_n,T_1,\dots,T_n)$, with $u_i\in C^k_{n,t}(\Delta)$,
$U_{i+n}=u_{i+n}(\omega,\Delta_1,\dots,\Delta_n,T_1,\dots,T_n)$, with $u_{i+n}\in C^k_{n,t}(T)$.

The set $P^k_{n,t}$ of the simple processes of length $n$ up to time $t$ is given by the r.v.'s on $\mathbb R^{2n}$ of the form $U=(U_1,\dots,U_{2n})$, with $U^\Delta=(U_1,\dots,U_n)\in P^k_{n,t}(\Delta)$ and $U^T=(U_{n+1},\dots,U_{2n})\in P^k_{n,t}(T)$. The Malliavin derivative w.r.t. the jump amplitudes and the jump times is defined on the set $\{J_t=n\}$ through

$D^\Delta\equiv(D_1,\dots,D_n):S^1_{n,t}(\Delta)\to P^0_{n,t}(\Delta)$
$D^T\equiv(D_{n+1},\dots,D_{2n}):S^1_{n,t}(T)\to P^0_{n,t}(T)$

respectively, where, for $F=f(\omega,\Delta_1,\dots,\Delta_n,T_1,\dots,T_n)$,

$D_iF=\partial_{\Delta_i}f(\omega,\Delta_1,\dots,\Delta_n,T_1,\dots,T_n)\,1_{\{\Delta_i\in I\}},$
$D_{i+n}F=\partial_{T_i}f(\omega,\Delta_1,\dots,\Delta_n,T_1,\dots,T_n)\,1_{\{T_i\in(T_{i-1},T_{i+1})\}},$

for $i=1,\dots,n$. On $S^1_{n,t}$, the joint Malliavin derivative is set as $D=(D^\Delta,D^T)$, so that $D:S^1_{n,t}\to P^0_{n,t}$. Finally, the class $\mathcal D^k_{n,t}(\Delta)$, $\mathcal D^k_{n,t}(T)$ and $\mathcal D^k_{n,t}$ is given by the r.v.'s $F$ such that $F1_{\{J_t=n\}}=F_{n,t}1_{\{J_t=n\}}$ for some $F_{n,t}\in S^k_{n,t}(\Delta)$, $S^k_{n,t}(T)$ and $S^k_{n,t}$, respectively. For such $F$, we write

$D_jF=D_jF_{n,t}$ on the set $\{J_t=n\}$.

Now, the Skorohod integral w.r.t. the jump amplitudes and the jump times is defined on the set $\{J_t=n\}$ through

$\delta^\Delta=\sum_{i=1}^n\delta_i:P^1_{n,t}(\Delta)\to S^0_{n,t}(\Delta)$
$\delta^T=\sum_{i=1}^n\delta_{i+n}:P^1_{n,t}(T)\to S^0_{n,t}(T)$

respectively, where, as $U_j=u_j(\omega,\Delta_1,\dots,\Delta_n,T_1,\dots,T_n)$,

$\delta_i(U_i)=-\Big(\partial_{\Delta_i}(\pi_iu_i)(\omega,\Delta_1,\dots,\Delta_n,T_1,\dots,T_n)+(\pi_iu_i)(\omega,\Delta_1,\dots,\Delta_n,T_1,\dots,T_n)\,\partial_{\Delta_i}\ln g(\Delta_i)\Big),$
$\delta_{i+n}(U_{i+n})=-\partial_{T_i}(\pi_{i+n}u_{i+n})(\omega,\Delta_1,\dots,\Delta_n,T_1,\dots,T_n),$

for $i=1,\dots,n$. For $U=(U^\Delta,U^T)\in P^1_{n,t}$, the joint Skorohod integral of $U$ is set as $\delta(U)=\delta^\Delta(U^\Delta)+\delta^T(U^T)$, so that $\delta:P^1_{n,t}\to S^0_{n,t}$. Finally, the class $\mathcal I^k_{n,t}(\Delta)$, $\mathcal I^k_{n,t}(T)$ and $\mathcal I^k_{n,t}$ is given by the processes $U$ such that $U1_{\{J_t=n\}}=U_{n,t}1_{\{J_t=n\}}$ for some $U_{n,t}\in P^k_{n,t}(\Delta)$, $P^k_{n,t}(T)$ and $P^k_{n,t}$ respectively. For such $U$, we write

$\delta_j(U)=\delta_j(U_{n,t})$ on the set $\{J_t=n\}$.

As an immediate consequence of the (standard) integration by parts formula on R, a duality relationship between the “Malliavin derivative” and the “Skorohod integral”

holds if a suitable scalar product and border term operator are defined (see Proposition 2.5 below). Let us see. For $U,V\in P^k_{n,t}(\Delta)$, $P^k_{n,t}(T)$, $P^k_{n,t}$, we define

$\langle U,V\rangle^\Delta_\pi=\sum_{i=1}^n\pi_iU_iV_i, \qquad \langle U,V\rangle^T_\pi=\sum_{i=1}^n\pi_{i+n}U_{i+n}V_{i+n}$

and $\langle U,V\rangle_\pi=\langle U^\Delta,V^\Delta\rangle^\Delta_\pi+\langle U^T,V^T\rangle^T_\pi$ respectively (in the latter, we have set $U=(U^\Delta,U^T)$ and similarly for $V$). Notice that they are all (random) scalar products (on the proper space) when $\omega\in\{J_t=n\}$, whenever the $\pi_i$'s are all non null. Now, for $(F,U)\in S^0_{n,t}(\Delta)\times P^0_{n,t}(\Delta)$, $S^0_{n,t}(T)\times P^0_{n,t}(T)$ and $S^0_{n,t}\times P^0_{n,t}$, the border term operator is defined on the set $\{J_t=n\}$ as

$[F,U]^\Delta_\pi=\sum_{i=1}^n\sum_{j=1}^k\Big[\big(T_i(fu_i)\,\pi_ig\big)(\omega,b_j^-)-\big(T_i(fu_i)\,\pi_ig\big)(\omega,a_j^+)\Big]$
$[F,U]^T_\pi=\sum_{i=1}^n\Big[\big(T_{i+n}(fu_{i+n})\,\pi_{i+n}p_i\big)(\omega,T_{i+1}^-)-\big(T_{i+n}(fu_{i+n})\,\pi_{i+n}p_i\big)(\omega,T_{i-1}^+)\Big]$
$[F,U]_\pi=[F,U^\Delta]^\Delta_\pi+[F,U^T]^T_\pi$   (5)

respectively, where $f$ and $u_i$, $i=1,\dots,2n$, denote as usual the functions representing $F$ and $U_i$, $i=1,\dots,2n$, respectively, the $T_i$'s being defined in (3) and (4). There are cases in which the border term operators are null, for example if, for any $i=1,\dots,2n$, $\pi_i(\omega,y)=0$ on $\partial I_i$ (here $I_i=I$ for $i\le n$ and $I_i=(T_{i-1},T_{i+1})$ for $i=n+1,\dots,2n$). The property $[\cdot,\cdot]_\pi\equiv0$ gives a duality formula similar to the classical one. Such a case is widely studied in Bally, Bavouzet-Morel and Messaoud [3]. For practical purposes, they assume $g\equiv0$ on $\partial I$ and take $\pi_i=1$ for any $i=1,\dots,n$, so that $[\cdot,\cdot]^\Delta_\pi\equiv0$. Moreover, they set

$\pi_{i+n}(\omega,r)=(T_{i+1}(\omega)-r)^\gamma(r-T_{i-1}(\omega))^\gamma\,1_{\{r\in(T_{i-1}(\omega),T_{i+1}(\omega))\}}, \qquad i=1,\dots,n,$

Remark 2.4. Later on, we take $\pi_i=1$ for any $i$ and $1+\Delta_i=e^Z$ with $Z\sim N(m,\varrho^2)$, so that

$g(y)=\frac{1}{(1+y)\sqrt{2\pi\varrho^2}}\exp\Big(-\frac{(\log(1+y)-m)^2}{2\varrho^2}\Big)\,1_{\{y>-1\}}.$

Here $g=0$ on $\partial I$, giving $[\cdot,\cdot]^\Delta_\pi=0$, and $\Delta_i,\partial_y\ln g(\Delta_i)\in L^p$ for any $p\ge1$, because

$\partial_{\Delta_i}\ln g(\Delta_i)=\frac{m-\varrho^2-\ln(1+\Delta_i)}{(1+\Delta_i)\varrho^2}\stackrel{\mathcal L}{=}\frac{m-\varrho^2-Z}{\varrho^2}\,e^{-Z}.$
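The density $g$ and its logarithmic derivative above are straightforward to implement, and the closed form can be cross-checked against a finite difference; a minimal sketch (function names are ours):

```python
import math

def g(y, m, rho):
    """pdf of Delta with 1 + Delta = exp(Z), Z ~ N(m, rho^2), supported on (-1, inf)."""
    if y <= -1.0:
        return 0.0
    z = math.log(1.0 + y)
    return math.exp(-(z - m) ** 2 / (2.0 * rho ** 2)) / ((1.0 + y) * math.sqrt(2.0 * math.pi) * rho)

def dlog_g(y, m, rho):
    """Closed form of d/dy log g(y) = (m - rho^2 - log(1+y)) / ((1+y) rho^2)."""
    return (m - rho ** 2 - math.log(1.0 + y)) / ((1.0 + y) * rho ** 2)
```

A two-sided finite difference of $\log g$ agrees with `dlog_g` up to the discretization error.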

But in general, $[F,U^T]^T_\pi$ is non null and given by

$[F,U^T]^T_\pi=\sum_{i=1}^n\frac{T_{i+n}(f)(\omega,T_{i+1}^-)-T_{i+n}(f)(\omega,T_{i-1}^+)}{T_{i+1}-T_{i-1}}.$

By elementary arguments one has

Proposition 2.5. For any $F\in\mathcal D^1_{n,t}$ and $U\in\mathcal I^1_{n,t}$ such that

$E\Big(\big[|D_iF\,U_i\,\pi_i|+|F\,\delta_i(U_i)|\big]1_{\{J_t=n\}}\Big)<\infty$ for any $i=1,\dots,2n$,

one has

$E(\langle DF,U\rangle_\pi 1_{\{J_t=n\}})=E(F\,\delta(U)1_{\{J_t=n\}})+E([F,U]_\pi 1_{\{J_t=n\}}).$

Let us here recall some of the main properties (chain rule and Skorohod integral of a special product) we are going to use.
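In the simplest one-dimensional situation the duality of Proposition 2.5 can be verified by plain Monte Carlo. Take one amplitude with $\pi_1=1$, $U_1=1$ and an exponential law, $g(y)=e^{-y}$ on $I=(0,\infty)$: then $\delta_1(U_1)=-(0+\partial_y\ln g)=1$, the border term equals $-f(0)g(0^+)=-f(0)$, and the duality reads $E(f'(\Delta_1))=E(f(\Delta_1))-f(0)$. A sketch (equality holds only up to statistical error; names are ours):

```python
import math
import random

def duality_gap(f, fprime, n_samples=200_000, seed=7):
    """Monte Carlo check of E(f'(D)) = E(f(D)) - f(0) for D ~ Exp(1),
    i.e. the duality E<DF,U>_pi = E(F delta(U)) + E([F,U]_pi) with
    pi = 1, U = 1 and g(y) = exp(-y) on I = (0, inf): here
    delta(U) = -d/dy log g = 1 and the border term is -f(0) g(0+) = -f(0)."""
    rng = random.Random(seed)
    lhs = rhs = 0.0
    for _ in range(n_samples):
        d = rng.expovariate(1.0)
        lhs += fprime(d)
        rhs += f(d)
    return lhs / n_samples - (rhs / n_samples - f(0.0))
```

For instance, with $f=\cos$ both sides equal $-1/2$ and the gap vanishes up to the Monte Carlo error.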

Property 2.6. [CR] For any continuously differentiable $\phi:\mathbb R^d\to\mathbb R$ and $F_1,\dots,F_d\in\mathcal D^1_{n,t}$, one has $\phi(F_1,\dots,F_d)\in\mathcal D^1_{n,t}$ and

$D\phi(F_1,\dots,F_d)=\sum_{k=1}^d\partial_{x_k}\phi(F_1,\dots,F_d)\,DF_k$ on the set $\{J_t=n\}$.

[SP] If $F\in\mathcal D^1_{n,t}$ and $U\in\mathcal I^1_{n,t}$ then $FU\in\mathcal I^1_{n,t}$ and

$\delta(FU)=F\,\delta(U)-\langle DF,U\rangle_\pi$ on the set $\{J_t=n\}$.

In the following, we will need higher order Malliavin derivatives. They are naturally set as follows: $F\in\mathcal D^2_{n,t}$ if $F\in\mathcal D^1_{n,t}$ and $D_iF\in\mathcal D^1_{n,t}$ for any $i$. Similarly, $\mathcal D^\ell_{n,t}$ for any $\ell\ge1$ is defined. The following version of the Malliavin integration by parts formula will be widely used:

Property 2.7. For $d\ge1$, let $F=(F^1,\dots,F^d)$ and $v=(v^1,\dots,v^d)$, with $F^j\in\mathcal D^2_{n,t}$ and $v^j\in\mathcal I^1_{n,t}$, $j=1,\dots,d$. On the set $\{J_t=n\}$, let $\gamma_{DF,v}$ be the following $d\times d$ random matrix

$\gamma^{i,j}_{DF,v}=\langle DF^j,v^i\rangle_\pi, \qquad i,j=1,\dots,d$   (6)

and suppose there exists its inverse matrix $\hat\gamma_{DF,v}$, whose entries all belong to $\mathcal D^1_{n,t}$. Let $\phi\in C^1_p(\mathbb R^d)$ and $G=(G^1,\dots,G^d)$, with $G^j\in S^1_{n,t}$ for any $j$, be such that

$\sum_i\Big(E\big(|D_i\phi(F)(G\hat\gamma_{DF,v})^i\pi_i|\,1_{\{J_t=n\}}\big)+E\big(|\phi(F)\,\delta_i((G\hat\gamma_{DF,v})^i)|\,1_{\{J_t=n\}}\big)\Big)<\infty.$   (7)

Then,

$E\Big(\sum_{i=1}^d\partial_{x_i}\phi(F)\,G^i\,1_{\{J_t=n\}}\Big)=E\big(\phi(F)\,\delta(w)1_{\{J_t=n\}}\big)+E\big([\phi(F),w]_\pi 1_{\{J_t=n\}}\big),$

where $w=\sum_{i=1}^d(G\hat\gamma_{DF,v})^iv^i$.

The (non-symmetric) matrix $\gamma_{DF,v}$ defined in (6) is a generalization of the classical Malliavin covariance matrix $\sigma_F$: on the set $\{J_t=n\}$,

$\sigma_F=\gamma_{DF,DF}=\big(\langle DF^i,DF^j\rangle_\pi\big)_{i,j=1,\dots,d}.$

The proof of Property 2.7 is straightforward and reduces to the application of the duality formula (Proposition 2.5) to $\phi(F)$ and $w=\sum_i(G\hat\gamma_{DF,v})^iv^i$.

Remark 2.8. In the framework of Remark 2.4 and assuming that $F$, $G$, the $v^j_i$'s and all their Malliavin derivatives belong to $L^p$ for any $p$, a sufficient condition allowing one to get (7) is that all the entries of $\hat\gamma_{DF,v}$ are $p$-integrable on the set $\{J_t=n\}$, for any $p$. This holds if

$1_{\{J_t=n\}}\,|\det\gamma_{DF,v}|^{-1}\in L^p$, for any $p\ge1$.

When $v=DF$, it reduces to the classical Malliavin non-degeneracy condition.

Remark 2.9. Properties 2.6 and 2.7 continue to hold if one works separately with the jump amplitudes or the jump times, i.e. with $\mathcal D^\ell_{n,t}$, $\mathcal I^1_{n,t}$, $D$, $\delta$, $[\cdot,\cdot]_\pi$ replaced by $\mathcal D^\ell_{n,t}(\Delta)$, $\mathcal I^1_{n,t}(\Delta)$, $D^\Delta$, $\delta^\Delta$, $[\cdot,\cdot]^\Delta_\pi$ or by $\mathcal D^\ell_{n,t}(T)$, $\mathcal I^1_{n,t}(T)$, $D^T$, $\delta^T$, $[\cdot,\cdot]^T_\pi$ respectively.

2.2 Malliavin calculus in the Gaussian direction

Let $W$ denote a Brownian motion on $(\Omega,\mathcal F,\mathbb P)$ and let $(\mathcal W_t)_t$ be the augmentation w.r.t. $\mathbb P$ of the filtration generated by $W$. Let us briefly recall how the standard Gaussian Malliavin calculus works. We follow here the approach of Bally as in [2]. Set $T$ as the time horizon and, for $N\in\mathbb N$,

$\Delta^NW=(\Delta^N_0W,\dots,\Delta^N_{\lfloor2^NT\rfloor-1}W)$ where $\Delta^N_kW=W_{(k+1)/2^N}-W_{k/2^N}$, $k=0,1,\dots,\lfloor2^NT\rfloor-1$.

The set SN of the simple functionals of order N is given by the r.v.’s

F = f(∆N W ).

where $f\in C^\infty_b(\mathbb R^{\lfloor2^NT\rfloor},\mathbb R)$. One has $S_N\subset S_{N+1}$. Setting $S=\bigcup_{N\ge1}S_N$, one has $S\subset L^p(\Omega)$ for any $p$. The class $P_N$ of the simple processes of order $N$ is given by the random processes

$u_s=\sum_{k=0}^{\lfloor2^NT\rfloor-1}U_k\,1_{\{s\in[k/2^N,(k+1)/2^N)\}},$

where $U_0,\dots,U_{\lfloor2^NT\rfloor-1}\in S_N$. Notice that $u\in P_N$ is not necessarily adapted and, if it is, then the $U_k$'s depend on $\Delta^N_iW$ only for $i\le k-1$. Again, $P_N\subset P_{N+1}$ and $P\subset L^p(H)$ for any $p$, where

$H=\{$processes $u\,:\,u\in L^2([0,T])$ a.s.$\}$ and $L^p(H)=\{u\in H\,:\,E(\|u\|^p_{L^2([0,T])})<\infty\}.$

Setting, for $u\in L^p(H)$,

$\|u\|^p_{L^p(H)}=E\big(\|u\|^p_{L^2[0,T]}\big)=E\Big(\big(\int_0^T|u_s|^2\,ds\big)^{p/2}\Big),$

$L^p(H)$ is a Banach space. In particular, $L^2(H)$ is a Hilbert space, with inner product

$\langle u,v\rangle_{L^2(H)}=E(\langle u,v\rangle_{L^2[0,T]})=E\Big(\int_0^Tu_sv_s\,ds\Big).$

We can now pass to define the Malliavin derivative and the Skorohod integral. For $F\in S$, the Malliavin derivative is defined as

$D_sF=\sum_{k=0}^{\lfloor2^NT\rfloor-1}\partial_{x_k}f(\Delta^NW)\,1_{\{s\in[k/2^N,(k+1)/2^N)\}}$

and, for $u\in P$, the Skorohod integral is given by

$\delta(u)=\sum_{k=0}^{\lfloor2^NT\rfloor-1}\Big(U_k\Delta^N_kW-\frac{1}{2^N}\,\partial_{x_k}u_k(\Delta^NW)\Big),$

$N$ being such that $F\in S_N$ and $u\in P_N$ ($f$ and $u_k$ being the $C^\infty_b$ functions representing $F$ and $U_k$ respectively). Since $D$ and $\delta$ are really independent of $N$, they are well defined, and $\delta(u)=\int_0^Tu_s\,dW_s$ (the standard Ito integral) whenever $u$ is adapted. Therefore, for any $p$,

D : S ⊂ Lp(Ω) → P ⊂ Lp(H) and δ : P ⊂ Lp(H) → S ⊂ Lp(Ω) (8)

Moreover, the following duality property immediately follows:

for any $F\in S$ and $u\in P$, one has $E(\langle DF,u\rangle_{L^2[0,T]})=E(F\delta(u))$   (9)

which allows one to extend the (unbounded, linear) operators in (8) with $p=2$. In fact, from (9) it follows that both $D$ and $\delta$ are closable. So, by setting for $F\in S$ and $u\in P$

$\|F\|_{1,2}=\|F\|_{L^2(\Omega)}+\|DF\|_{L^2(H)}$ and $\|u\|_{\delta,2}=\|u\|_{L^2(H)}+\|\delta(u)\|_{L^2(\Omega)},$

one can define the set of the Malliavin differentiable r.v.'s and the set of the Skorohod integrable processes as

$\mathbb D^{1,2}=\overline S^{\,\|\cdot\|_{1,2}}$ and $\mathrm{Dom}_2(\delta)=\overline P^{\,\|\cdot\|_{\delta,2}}$

respectively. Such spaces are both Hilbert spaces, with inner products given by

$\langle F,G\rangle_{1,2}=\langle F,G\rangle_{L^2(\Omega)}+\langle DF,DG\rangle_{L^2(H)},\qquad F,G\in\mathbb D^{1,2},$

$\langle u,v\rangle_{\delta,2}=\langle u,v\rangle_{L^2(H)}+\langle\delta(u),\delta(v)\rangle_{L^2(\Omega)},\qquad u,v\in\mathrm{Dom}_2(\delta).$

Now, by construction the duality relationship in (9) continues to hold between $\mathbb D^{1,2}$ and $\mathrm{Dom}_2(\delta)$. Since $\mathbb D^{1,2}$ and $\mathrm{Dom}_2(\delta)$ are not algebras, for a generic $p$ one defines

$\|F\|_{1,p}=\|F\|_{L^p(\Omega)}+\|DF\|_{L^p(H)}$ and $\|u\|_{\delta,p}=\|u\|_{L^p(H)}+\|\delta(u)\|_{L^p(\Omega)}$

for $F\in S$ and $u\in P$ and, as a consequence,

$\mathbb D^{1,p}=\overline S^{\,\|\cdot\|_{1,p}}$ and $\mathrm{Dom}_p(\delta)=\overline P^{\,\|\cdot\|_{\delta,p}}.$

The algebra properties now hold on $\mathbb D^{1,\infty}=\cap_p\mathbb D^{1,p}$ and $\mathrm{Dom}_\infty(\delta)=\cap_p\mathrm{Dom}_p(\delta)$. Higher order Malliavin derivatives can also be defined. For $m\ge2$, one defines $\mathbb D^{m,p}$ as the set of the random variables $F$ such that, for any $(t_1,\dots,t_{m-1})\in[0,T]^{m-1}$, $D_{t_{m-1}}\cdots D_{t_1}F\in\mathbb D^{1,p}$. For $F\in\mathbb D^{m,p}$, one sets

$D^m_{(t_1,\dots,t_m)}F=D_{t_m}D_{t_{m-1}}\cdots D_{t_1}F,\qquad(t_1,\dots,t_m)\in[0,T]^m,$

so that $D^m:\mathbb D^{m,p}\to L^p(\Omega\times[0,T]^m)$. On $\mathbb D^{m,p}$, one considers

$\|F\|_{m,p}=\|F\|_{L^p(\Omega)}+\sum_{k=1}^m\|D^kF\|_{L^p(\Omega\times[0,T]^k)}.$

Again, $\mathbb D^{m,p}=\overline S^{\,\|\cdot\|_{m,p}}$ and $\mathbb D^{m,p}$ is a Banach space; for $p=2$, $\mathbb D^{m,2}$ is a Hilbert space with

$\langle F,G\rangle_{m,2}=\langle F,G\rangle_{L^2(\Omega)}+\sum_{k=1}^m\langle D^kF,D^kG\rangle_{L^2(\Omega\times[0,T]^k)}.$

A lot of further interesting and useful facts turn out to be true, and we will soon refer to some of them, specialized to our context. The forthcoming ideas to generalize the Gaussian Malliavin calculus to objects depending also on random sources independent of the Brownian motion have already been used e.g. by Davis and Johansson [10] or by Forster, Lütkebohmert and Teichmann [12].
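As a concrete illustration of the construction, the duality (9) can be checked by simulation on the coarsest simple functional: take $F=f(W_T)$ with $f(x)=x^2$ and $u_s=g(W_T)1_{[0,T)}(s)$ with $g(x)=x$, so that $\langle DF,u\rangle_{L^2[0,T]}=T\,f'(W_T)g(W_T)$ and $\delta(u)=W_T^2-T$; both sides of (9) then equal $2T^2$. A Monte Carlo sketch (names are ours):

```python
import math
import random

def duality_gap_gaussian(T=1.0, n_samples=400_000, seed=11):
    """Check E(<DF,u>_{L^2[0,T]}) = E(F delta(u)) for F = f(W_T), f(x) = x^2,
    and u_s = g(W_T) on [0,T) with g(x) = x:
    LHS = T * E(f'(W_T) g(W_T)) = 2 T^2,   delta(u) = W_T^2 - T,
    RHS = E(W_T^2 (W_T^2 - T)) = 2 T^2."""
    rng = random.Random(seed)
    lhs = rhs = 0.0
    for _ in range(n_samples):
        w = rng.gauss(0.0, math.sqrt(T))
        lhs += T * (2.0 * w) * w          # T * f'(W_T) * g(W_T)
        rhs += (w * w) * (w * w - T)      # f(W_T) * delta(u)
    return lhs / n_samples - rhs / n_samples
```

The returned gap is zero up to the Monte Carlo error.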

Let $\{T_i\}_{i\ge1}$ and $\{\Delta_i\}_{i\ge1}$ denote the jump times and amplitudes respectively, as before, and suppose they are both independent of the Brownian motion. Let us also set, for $s\ge0$,

$G_s=\sigma(J_u\,;\,u\le s)\vee\sigma(\Delta_j1_{\{j\le J_s\}}\,;\,j\ge1)$

(recall that $J_s=\sum_{j\ge1}1_{\{T_j\le s\}}$). Therefore, $G_s$ gives all the information coming out from the jump times and amplitudes noise up to time $s$. Set $n\ge1$, $t>0$,

$z_{n,t}=(y_1,\dots,y_n,t_1,\dots,t_n)\in\mathbb R^n\times[0,t]^n$ and $Z_{n,t}=(\Delta_1,\dots,\Delta_n,T_1,\dots,T_n).$

Definition 2.10. [Generalized Gaussian Malliavin derivative] Let $f:\Omega\times\mathbb R^n\times[0,t]^n\to\mathbb R$ be $\mathcal W_t\times\mathcal B(\mathbb R^n)\times\mathcal B([0,t]^n)$-measurable and such that $\omega\mapsto f(\omega,z_{n,t})\in\mathbb D^{1,p}$ for any $z_{n,t}\in\mathbb R^n\times[0,t]^n$. On the set $\{J_t=n\}$, set

$F=f(\cdot,z_{n,t})\big|_{z_{n,t}=Z_{n,t}}$ and $D_0F=Df(\cdot,z_{n,t})\big|_{z_{n,t}=Z_{n,t}}.$

The class $\mathbb D^{1,p}_{n,t}$ is defined as the set of the random variables $F$ as above and such that

$F1_{\{J_t=n\}}\in L^p(\Omega)$ and $D_0F1_{\{J_t=n\}}\in L^p(H).$

For $F\in\mathbb D^{1,p}_{n,t}$, $D_0F$ will stand for the Malliavin derivative in the Gaussian direction on the set $\{J_t=n\}$. We similarly define $\mathbb D^{k,p}_{n,t}$ and set $\mathbb D^{k,\infty}_{n,t}=\cap_p\mathbb D^{k,p}_{n,t}$.

Example 2.11. As an example, $W_{T_1}\in\mathbb D^{1,\infty}_{n,t}$ for any $n\ge1$ and $t>0$. In fact,

$W_{T_1}=f(\omega,T_1)$ with $f(\omega,s)=W_s(\omega)$. Then,

$D_{0,u}W_{T_1}=D_uW_s\big|_{s=T_1}=1_{\{u\le s\}}\big|_{s=T_1}=1_{\{u\le T_1\}}.$

Definition 2.12. [Generalized Skorohod integral] Similarly, let $u=u(\omega,z_{n,t})$ be such that $u(\cdot,z_{n,t})\in\mathrm{Dom}_p(\delta)$ for any $z_{n,t}$, and on the set $\{J_t=n\}$ set

$u=u(\cdot,z_{n,t})\big|_{z_{n,t}=Z_{n,t}}$ and $\delta_0(u)=\delta\big(u(\cdot,z_{n,t})\big)\big|_{z_{n,t}=Z_{n,t}}.$

The class $\mathrm{Dom}_{n,t,p}(\delta_0)$ is defined as the set of the random processes $u$ as above and such that

$u1_{\{J_t=n\}}\in L^p(H)$ and $\delta_0(u)1_{\{J_t=n\}}\in L^p(\Omega).$

For $u\in\mathrm{Dom}_{n,t,p}(\delta_0)$, $\delta_0(u)$ will stand for the Skorohod integral in the Gaussian direction on $\{J_t=n\}$. We set $\mathrm{Dom}_{n,t,\infty}(\delta_0)=\cap_p\mathrm{Dom}_{n,t,p}(\delta_0)$.

On the set $\{J_t=n\}$, the duality relationship between $D_0$ and $\delta_0$ holds as well:

Proposition 2.13. For any $F\in\mathbb D^{1,2}_{n,t}$ and $u\in\mathrm{Dom}_{n,t,2}(\delta_0)$ one has

$E(\langle D_0F,u\rangle_{L^2[0,T]}1_{\{J_t=n\}})=E(F\delta_0(u)1_{\{J_t=n\}}).$

The proof is straightforward and by using similar arguments, the classical properties of the Malliavin differential operators can be extended. For example,

Property 2.14. [CR] For any $\phi\in C^1_b(\mathbb R^d,\mathbb R)$ and $F_1,\dots,F_d\in\mathbb D^{1,2}_{n,t}$, one has

$D_0\phi(F_1,\dots,F_d)=\sum_{i=1}^d\partial_{x_i}\phi(F_1,\dots,F_d)\,D_0F_i$ on the set $\{J_t=n\}$.

Moreover, the statement holds if $F_1,\dots,F_d\in\mathbb D^{1,\infty}_{n,t}$ and $\phi\in C^1_p(\mathbb R^d,\mathbb R)$.

[SP] If $F\in\mathbb D^{1,2}_{n,t}$ and $u\in\mathrm{Dom}_{n,t,2}(\delta_0)$ are such that $Fu\in\mathrm{Dom}_{n,t,2}(\delta_0)$ then

$\delta_0(Fu)=F\delta_0(u)-\int_0^tD_{0,s}F\,u_s\,ds$ on the set $\{J_t=n\}$.

The following Malliavin integration by parts formula will be strongly used.

Property 2.15. For $d\ge1$, let $F=(F^1,\dots,F^d)$ and $v=(v^1,\dots,v^d)$, with $F^j\in\mathbb D^{1,2}_{n,t}$ and $v^j\in\mathrm{Dom}_{n,t,2}(\delta_0)$, $j=1,\dots,d$. On the set $\{J_t=n\}$, let $\gamma_{D_0F,v}$ be the following $d\times d$ random matrix

$\gamma^{i,j}_{D_0F,v}=\langle D_0F^j,v^i\rangle_{L^2([0,t])},\qquad i,j=1,\dots,d$   (10)

and suppose there exists its inverse $\hat\gamma_{D_0F,v}$, whose entries all belong to $\mathbb D^{1,2}_{n,t}$. Let $\phi\in C^1_b(\mathbb R^d)$ and $G=(G^1,\dots,G^d)$ be such that $w\in\mathrm{Dom}_{n,t,2}(\delta_0)$, where

$w=\sum_{i=1}^d(G\hat\gamma_{D_0F,v})^iv^i.$

Then, one has

$E\Big(\sum_{i=1}^d\partial_{x_i}\phi(F)\,G^i\,1_{\{J_t=n\}}\Big)=E\big(\phi(F)\,\delta_0(w)1_{\{J_t=n\}}\big).$

The case $v=D_0F$ gives the Malliavin covariance matrix $\sigma_F$ on $\{J_t=n\}$, that is

$\sigma_F=\gamma_{D_0F,D_0F}=\big(\langle D_0F^j,D_0F^i\rangle_{L^2([0,T])}\big)_{i,j=1,\dots,d}$, on $\{J_t=n\}$.

Remark 2.16. The proof of Property 2.15 reduces to the application of the duality formula between $D_0F^i$ and $\sum_i(G\hat\gamma_{D_0F,v})^iv^i$. Therefore, one should have $(G\hat\gamma_{D_0F,v})^iv^i\in\mathrm{Dom}_{n,t,2}(\delta_0)$ for any $i=1,\dots,d$. Since it is difficult to give mild sufficient conditions, one generally asks that $F^i\in\mathbb D^{2,\infty}_{n,t}$, $v^i\in\mathrm{Dom}_{n,t,\infty}(\delta_0)$ and $G^i\in\mathbb D^{1,\infty}_{n,t}$ for all $i=1,\dots,d$, and finally

$E\big(1_{\{J_t=n\}}|\det\gamma_{D_0F,v}|^{-p}\big)<\infty$ for any $p$,

which is linked to the classical "non-degeneracy condition" on the Malliavin covariance matrix. In this case, one can require $\phi\in C^1_p$ instead of $\phi\in C^1_b$ (see [CR] of Property 2.14).

2.3 Malliavin derivatives for jump diffusions

We consider the case of a jump diffusion:

$X_t=x+\int_0^tb(r,X_r)\,dr+\int_0^t\sigma(r,X_r)\,dW_r+\sum_{i=1}^{J_t}c(T_i,\Delta_i,X_{T_i^-})$   (11)

and we ask for the following

Assumption 2.17. (a) $(t,x)\mapsto c(t,a,x),\sigma(t,x),b(t,x)$ are continuous and $x\mapsto c(t,a,x),\sigma(t,x),b(t,x)$ are twice differentiable, with bounded derivatives of first and second order, and have linear growth with respect to $x$, uniformly with respect to $t$ and $a$.
(b) There exists $\eta>0$ such that, for any $r,a,x$,

$|1+\partial_xc(r,a,x)|\ge\eta.$

Under (a) of Assumption 2.17, both $X$ and its first variation process $\xi=\partial_xX$ are well defined, the latter being given by

$\xi_t=1+\int_0^t\partial_xb(r,X_r)\xi_r\,dr+\int_0^t\partial_x\sigma(r,X_r)\xi_r\,dW_r+\sum_{i=1}^{J_t}\partial_xc(T_i,\Delta_i,X_{T_i^-})\,\xi_{T_i^-}.$   (12)

Moreover, Assumption 2.17 ensures that the process $\hat\xi_t=\xi_t^{-1}$ solves

$\hat\xi_t=1-\int_0^t\hat\xi_r\big(\partial_xb-(\partial_x\sigma)^2\big)(r,X_r)\,dr-\int_0^t\hat\xi_r\partial_x\sigma(r,X_r)\,dW_r-\sum_{i=1}^{J_t}\hat\xi_{T_i^-}\,\frac{\partial_xc(T_i,\Delta_i,X_{T_i^-})}{1+\partial_xc(T_i,\Delta_i,X_{T_i^-})}.$

Finally, under Assumption 2.17, $X_t$, $\xi_t$ and $\hat\xi_t$ all belong to $L^p$ for any $p$. If $\sigma$ is non null, general Malliavin derivative formulas cannot be stated in the direction of the jump times, as follows from the next
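For the numerical studies it is useful to note that $X$ and $\xi$ in (11)-(12) can be simulated jointly by an Euler scheme, applying the jumps at the sampled jump times. A sketch for a Merton-type specification $b(t,x)=\mu x$, $\sigma(t,x)=\sigma x$, $c(t,a,x)=ax$ (our illustrative choice, for which $\xi_t=X_t/x$ exactly, a property the scheme preserves):

```python
import math
import random

def euler_jump_diffusion(x0, mu, sigma, jump_times, jump_amps, T, n_steps, seed=3):
    """Euler scheme for dX = mu*X dt + sigma*X dW plus jumps a_i * X(T_i-),
    together with its first variation xi = dX/dx (here xi_t = X_t / x0,
    since the dynamics are linear in the initial condition)."""
    rng = random.Random(seed)
    h = T / n_steps
    x, xi = x0, 1.0
    jumps = sorted(zip(jump_times, jump_amps))
    j = 0
    for k in range(n_steps):
        t_next = (k + 1) * h
        dw = rng.gauss(0.0, math.sqrt(h))
        x += mu * x * h + sigma * x * dw
        xi += mu * xi * h + sigma * xi * dw
        while j < len(jumps) and jumps[j][0] <= t_next:  # apply jumps in (t, t_next]
            a = jumps[j][1]
            x += a * x        # X_{T_i} = (1 + a) X_{T_i-}
            xi += a * xi
            j += 1
    return x, xi
```

Since every update multiplies $X$ and $\xi$ by the same factor, the identity $\xi_T=X_T/x$ is preserved exactly, which gives a cheap internal consistency check of the implementation.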

Example 2.18. $W_{T_1}$ is not Malliavin differentiable in the direction of the jump times. But the (generalized) Gaussian Malliavin derivative exists (see Example 2.11).

In order to summarize the Malliavin derivatives for a general jump diffusion X as in (11), let us consider the following further hypotheses.

Assumption 2.19. There exists $\eta>0$ such that, for any $r,a,x$,

|∂ac(r, a, x)| ≥ η.

Assumption 2.20. There exists η > 0 such that, for any r, a, x,

$|q(r,a,x)|\ge\eta$, where

$q(r,a,x)=\big(\partial_rc+b\,(1+\partial_xc)\big)(r,a,x)-b(r,x+c(r,a,x)).$   (13)
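As a quick illustration of (13) (our worked example, not taken from the models of Section 4 verbatim): consider additive jumps $c(r,a,x)=a$ with a mean-reverting drift $b(t,x)=-\kappa x$, in the spirit of the Ornstein-Uhlenbeck model with jumps of Section 4.3. Then $\partial_rc=0$, $\partial_xc=0$ and

$q(r,a,x)=-\kappa x-\big(-\kappa(x+a)\big)=\kappa a,$

so Assumption 2.20 holds as soon as $\kappa>0$ and the amplitude law is supported away from $0$. By contrast, for multiplicative jumps $c(r,a,x)=ax$ with linear drift $b(t,x)=\mu x$ one gets $q(r,a,x)=\mu x(1+a)-\mu(x+ax)=0$, and the jump-times direction degenerates.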

Then, the Malliavin derivatives related to jump diffusions are given by the following

Property 2.21. Suppose Assumption 2.17 holds. Set $t>0$ and $n\ge1$.
[JA] Under Assumption 2.19, $X_{T_p}$, $X_r$ and $\int_0^rX_u\,du$, with $p=1,\dots,n$ (and the convention $T_{n+1}\equiv t$), are Malliavin differentiable w.r.t. the jump amplitudes and, for $i=1,\dots,n$,

$D_iX_{T_p}=1_{\{T_i\le T_p\}}\,\xi_{T_p}\xi_{T_i}^{-1}\,\partial_ac(T_i,\Delta_i,X_{T_i^-}),$
$D_iX_r=1_{\{T_i<r\}}\,\xi_r\xi_{T_i}^{-1}\,\partial_ac(T_i,\Delta_i,X_{T_i^-}),$
$D_i\int_0^rX_u\,du=1_{\{T_i<r\}}\int_{T_i}^r\xi_u\,du\;\xi_{T_i}^{-1}\,\partial_ac(T_i,\Delta_i,X_{T_i^-}).$

[JT] If $\sigma\equiv0$, under Assumption 2.20 the same functionals are Malliavin differentiable w.r.t. the jump times and, for $i=1,\dots,n$,

$D_{i+n}X_{T_p}=1_{\{T_i\le T_p\}}\,\xi_{T_p}\xi_{T_i}^{-1}\,q(T_i,\Delta_i,X_{T_i^-}),$
$D_{i+n}X_r=1_{\{T_i<r\}}\,\xi_r\xi_{T_i}^{-1}\,q(T_i,\Delta_i,X_{T_i^-}),$
$D_{i+n}\int_0^rX_u\,du=1_{\{T_i<r\}}\int_{T_i}^r\xi_u\,du\;\xi_{T_i}^{-1}\,q(T_i,\Delta_i,X_{T_i^-}),$

$q$ being defined in (13).
[W] In the Gaussian direction,

$D_{0,s}X_{T_p}=\xi_{T_p}\xi_s^{-1}\sigma(s,X_s)1_{\{s\le T_p\}},$
$D_{0,s}X_r=\xi_r\xi_s^{-1}\sigma(s,X_s)1_{\{s\le r\}},$
$D_{0,s}\int_0^rX_u\,du=\int_0^rD_{0,s}X_u\,du=\int_s^r\xi_u\,du\;\xi_s^{-1}\sigma(s,X_s).$

The proof of [JA] and [JT] of Property 2.21 can be found in Bally, Bavouzet-Morel and Messaoud [3]. Actually, their representation does not involve the process $\hat\xi$, but its introduction is straightforward. Concerning [W], it is standard, see e.g. Nualart [18].

2.4 Unifying Malliavin calculus

Jump diffusions show that sometimes Malliavin derivatives cannot be taken along all the directions of the noise. So, we propose here a sort of "unifying Malliavin calculus" in order to write down the integration by parts formula in its widest generality. Let $n\ge1$ and $t>0$ be fixed. Let $F$ be a r.v. and $u$ an $\mathbb R^{2n+1}$-valued random process. We will write $u=(u_0,U^\Delta,U^T)$, with $u_0$ a random process, $U^\Delta=(U_1,\dots,U_n)$ and $U^T=(U_{n+1},\dots,U_{2n})$. Suppose that $F$ is Malliavin differentiable in some directions. We then set its unified Malliavin derivative on $\{J_t=n\}$ as

$\tilde DF=(\tilde D_0F,\tilde D_1F,\dots,\tilde D_nF,\tilde D_{n+1}F,\dots,\tilde D_{2n}F)$

with $\tilde D_iF=D_iF$ if $F$ belongs to the space where $D_i$ is defined, and $\tilde D_iF=0$ otherwise, for $i=0,1,\dots,2n$. Similarly, if a process $u$ is Skorohod integrable in some direction, we set its unified Skorohod operator on $\{J_t=n\}$ as

$\tilde\delta(u)=\tilde\delta_0(u_0)+\sum_{i=1}^{2n}\tilde\delta_i(U_i)$

with $\tilde\delta_i=\delta_i$ if $u$ is Skorohod integrable along the direction $i$, being $0$ otherwise. As an example, consider the jump diffusion case. In view of Section 2.3, we can apply the definition of the unified Malliavin and Skorohod operators on the set $\{J_t=n\}$ to many functionals of the process. But, if $\sigma$ is non null, in general we have $\tilde D=(\tilde D_0,\tilde D_1,\dots,\tilde D_n,0,\dots,0)$.

Definition 2.22. [Compatibility in the Malliavin sense] We say that a random variable $F$ and a random process $u=(u_0,U^\Delta,U^T)$ are "compatible in the Malliavin sense on $\{J_t=n\}$" (shortly: $(n,t)$-compatible) if

$F\in\mathbb D^{1,2}_{n,t}\iff u_0\in\mathrm{Dom}_{n,t,2}(\delta_0)$
$F\in\mathcal D^1_{n,t}(\Delta)\iff U^\Delta=(U_1,\dots,U_n)\in\mathcal I^1_{n,t}(\Delta)$
$F\in\mathcal D^1_{n,t}(T)\iff U^T=(U_{n+1},\dots,U_{2n})\in\mathcal I^1_{n,t}(T)$

If $F$ and $u$ are $(n,t)$-compatible, on the set $\{J_t=n\}$ we put

$\langle\tilde DF,u\rangle=\int_0^t\tilde D_{0,s}F\,u_0(s)\,ds+\sum_{i=1}^{2n}\pi_i\,\tilde D_iF\,U_i$ and $\widetilde{[F,u]}_\pi=\widetilde{[F,U^\Delta]}^\Delta_\pi+\widetilde{[F,U^T]}^T_\pi,$

where $\widetilde{[F,U^\Delta]}^\Delta_\pi=[F,U^\Delta]^\Delta_\pi$ if $F\in\mathcal D^1_{n,t}(\Delta)$, being $0$ otherwise, and similarly, $\widetilde{[F,U^T]}^T_\pi=[F,U^T]^T_\pi$ if $F\in\mathcal D^1_{n,t}(T)$, being $0$ otherwise. The compatibility in the Malliavin sense means, roughly speaking, that we can perform the Malliavin calculus in the same directions for both $F$ and $u$. In fact, it is immediate to check that

Proposition 2.23. For any $(n,t)$-compatible $F$ and $u$ such that

$E\big(\big[|\tilde D_iF\,U_i\,\pi_i|+|F\tilde\delta_i(U_i)|\big]1_{\{J_t=n\}}\big)<\infty$ for any $i=1,\dots,2n$,

the following duality relation holds:

$E(\langle\tilde DF,u\rangle1_{\{J_t=n\}})=E(F\tilde\delta(u)1_{\{J_t=n\}})+E(\widetilde{[F,u]}_\pi1_{\{J_t=n\}}).$

The duality can fail if the compatibility property does not hold. As an example, take $n=1$, $t>0$, $F=W_{T_1}^2$ and $u_0=0$, $U_{n+1}=T_1$, all the other $U_i$'s being null. Here, $\langle\tilde DF,u\rangle=0$ but $E(F\tilde\delta(u)1_{\{J_t=1\}})+E(\widetilde{[F,u]}_\pi1_{\{J_t=1\}})\ne0$. In fact, since $\tilde\delta(u)=-1$ and $\widetilde{[F,u]}_\pi=W_t^2/t$, one has

$E\big(F\tilde\delta(u)1_{\{J_t=1\}}\big)+E\big(\widetilde{[F,u]}_\pi1_{\{J_t=1\}}\big)=E\Big(\Big(-W_{T_1}^2+\frac{W_t^2}{t}\Big)1_{\{J_t=1\}}\Big)=P(J_t=1)\,E\big(1-T_1\,\big|\,J_t=1\big),$

which is non null in general. As an extension, we have

Property 2.24. [CR] Let $\phi\in C^1_b(\mathbb R^d,\mathbb R)$ and let $F_1,\dots,F_d$ be r.v.'s. Then

$\tilde D\phi(F_1,\dots,F_d)=\sum_{i=1}^d\partial_{x_i}\phi(F_1,\dots,F_d)\,\tilde DF_i$ on the set $\{J_t=n\}$.

[SP] Let $F$ and $u$ be $(n,t)$-compatible and such that $Fu_0\in\mathrm{Dom}_{n,t,2}(\delta_0)$ if $u_0\in\mathrm{Dom}_{n,t,2}(\delta_0)$. Then

$\tilde\delta(Fu)=F\tilde\delta(u)-\langle\tilde DF,u\rangle$ on the set $\{J_t=n\}$.

Moreover,

Property 2.25. [Integration by parts formula] Let $d\ge1$, $F=(F^1,\dots,F^d)$ and $v=(v^1,\dots,v^d)$ be such that $F^i$ and $v^j$ are $(n,t)$-compatible for any $i,j=1,\dots,d$. On the set $\{J_t=n\}$, suppose there exists the inverse $\hat\gamma_{\tilde DF,v}$ of the $d\times d$ matrix

$\gamma^{i,j}_{\tilde DF,v}=\langle\tilde DF^j1_{\{J_t=n\}},v^i\rangle,\qquad i,j=1,\dots,d.$

Suppose moreover that (7) holds with $D_i$ replaced by $\tilde D_i$, $i=1,\dots,2n$, and that the hypotheses in Remark 2.16 are true with $D_0$ replaced by $\tilde D_0$. Let now $G=(G^1,\dots,G^d)$ and $\phi\in C^1_p$ be such that

$w=\sum_{i=1}^d(G\hat\gamma_{\tilde DF,v})^iv^i$

and $\phi(F)$ are $(n,t)$-compatible. Then,

$E\Big(\sum_{i=1}^d\partial_{x_i}\phi(F)\,G^i\,1_{\{J_t=n\}}\Big)=E\big(\phi(F)\,\tilde\delta(w)1_{\{J_t=n\}}\big)+E\big(\widetilde{[\phi(F),w]}_\pi1_{\{J_t=n\}}\big).$

3 Sensitivity analysis for complex Asian options

We study here representation formulas for the Delta of complex Asian options, that is, options written on

$Z_T=(X_T,Y_T)=\Big(X_T,\int_0^TX_s\,ds\Big)$

where $T>0$ denotes the maturity and $X$ is a jump diffusion process as in (11). We could also handle the case $Y_t=\int_0^tf(X_s)\,ds$ for some function $f$ but, for the sake of simplicity, we take $f(x)=x$. In the following, $\xi$ denotes the first variation process of $X$. We stress that we have in mind the dependence on both $X$ and $Y$. The single-component case has already been widely studied in the literature, while ours appears only sparsely. The aim is then to represent the derivative $\partial_xE(\phi(X_T,Y_T))$, that is the initial-datum sensitivity, in terms of an expectation which does not involve the derivative of the payoff function $\phi$. We first state the following

Definition 3.1. We say that a jump diffusion $X$ satisfies the standard hypotheses if
• Assumption 2.17 and either 2.19 or 2.20 hold in the pure jump case (i.e. $\sigma\equiv0$);
• Assumption 2.17 holds in the jump diffusion case.

Roughly speaking, Definition 3.1 ensures the existence of the Malliavin derivative in some direction of the noise. Here is our main result on the sensitivity w.r.t. the initial condition:

Theorem 3.2. Let $X$ satisfy the standard hypotheses and let the payoff function $\phi$ have polynomial growth. Let $v=(v^1,v^2)$ be such that $F=Z_T=(X_T,Y_T)$, $v$ and $G=(\xi_T,\int_0^T\xi_u\,du)$ satisfy the hypotheses in Property 2.25, with $n\ge1$ and $t=T$. Then, with the convention $1_{\{J_T=n\}}\equiv1$ in the pure diffusion case (i.e. when $c\equiv0$), one has
\[
\partial_x\mathbb{E}\big(\phi(Z_T)1_{\{J_T=n\}}\big) = \mathbb{E}\big(\phi(Z_T)\tilde\delta(w)1_{\{J_T=n\}}\big) + \mathbb{E}\big(\widehat{[\phi(Z_T),w]}_\pi 1_{\{J_T=n\}}\big) \tag{14}
\]
where
\[
w = \xi_T\sum_{j=1}^2\hat\gamma^{1j}_{Z_T,v}v^j + \int_0^T\xi_r\,dr\,\sum_{j=1}^2\hat\gamma^{2j}_{Z_T,v}v^j \tag{15}
\]
and $\hat\gamma_{Z_T,v} = \gamma^{-1}_{Z_T,v}$ on the set $\{J_T=n\}$, being $\gamma^{ij}_{Z_T,v} = \langle\tilde DZ^j_T,v^i\rangle$, $i,j=1,2$.

Proof. The proof immediately derives from the integration by parts formula (Property 2.25). We can suppose that $\phi\in C^1_b$: the general case can be developed by standard density arguments using mollifiers. Then, one has
\[
\partial_x\mathbb{E}\big(\phi(Z_T)1_{\{J_T=n\}}\big) = \mathbb{E}\big(\partial_{z_1}\phi(Z_T)\,\xi_T\,1_{\{J_T=n\}}\big) + \mathbb{E}\Big(\partial_{z_2}\phi(Z_T)\int_0^T\xi_r\,dr\,1_{\{J_T=n\}}\Big).
\]
Therefore, taking $G=(\xi_T,\int_0^T\xi_r\,dr)$ in Property 2.25, the statement holds. $\Box$

Let us briefly discuss the hypotheses required in Theorem 3.2. First, recall that $\xi_T$ and $\int_0^T\xi_r\,dr$ are Malliavin differentiable in some direction and their Malliavin derivatives are $p$-integrable, for any $p$. Moreover, case by case we need that $w$ as in (15) satisfies:
• if $\sigma\equiv0$, $w^\Delta=(w_1,\dots,w_n)\in\mathcal{I}^1_{n,T}(\Delta)$ and/or $w^T=(w_{n+1},\dots,w_{2n})\in\mathcal{I}^1_{n,T}(T)$;
• if $\sigma\ne0$, $w_0\in\mathrm{Dom}_{n,T,2}(\delta)$ and/or $w^\Delta\in\mathcal{I}^1_{n,T}(\Delta)$.
Moreover, we will allow the border term operator in (14) to be non null only in the direction of the jump times (see Remark 2.4).

For practical purposes, Theorem 3.2 gives the Delta:
\[
\begin{aligned}
\partial_x\mathbb{E}\big(\phi(Z_T)\big) &= \partial_x\mathbb{E}\big(\phi(Z_T)1_{\{J_T=0\}}\big) + \sum_{n\ge1}\partial_x\mathbb{E}\big(\phi(Z_T)1_{\{J_T=n\}}\big) \\
&= \partial_x\mathbb{E}\big(\phi(Z_T)1_{\{J_T=0\}}\big) + \sum_{n\ge1}\Big(\mathbb{E}\big(\phi(Z_T)\tilde\delta(w)1_{\{J_T=n\}}\big) + \mathbb{E}\big(\widehat{[\phi(Z_T),w]}_\pi1_{\{J_T=n\}}\big)\Big).
\end{aligned}
\]
Let us summarize some remarks on the cases $n=0$ and $n=1$, which play a special role in the next numerical applications.

Remark 3.3. On the set $\{J_T=0\}$, if $\sigma\equiv0$ then $\{Z_t\}_{t\le T}$ is deterministic; otherwise it is a pure diffusion and then
\[
\partial_x\mathbb{E}\big(\phi(X_T,Y_T)1_{\{J_T=0\}}\big) = \mathbb{E}\big(\phi(X_T,Y_T)\,\tilde\delta_0(w)\,1_{\{J_T=0\}}\big).
\]

Suppose now to observe $\{J_T=1\}$ and to be interested just in the direction of the jump amplitude $\Delta_1$ or of the jump time $T_1$ (the two cases are similar, so we consider the first one). Then it immediately follows that $\det\gamma_{\tilde DZ_T,v}=0$ for any $v$, and Theorem 3.2 fails. Therefore, in this case one is forced either to add a noise or to write down both $X$ and $Y$ as functions of $\Delta_1$, which reduces the problem to a one-dimensional one.

Our next step is to discuss the choice of $v$ in Theorem 3.2. The first natural choice is $v=\tilde DZ$, giving the standard Malliavin covariance matrix. Unfortunately, the resulting weight is not simple to use in practice: the generalized covariance matrix $\gamma_{\tilde DZ,v}$ has been introduced mainly to handle the complicated structure of the Malliavin covariance matrix. As an example, consider the pure diffusion case: the non degeneracy condition is not obvious and also quite hard to prove, unless the Hörmander condition is satisfied by the pair (see e.g. Nualart [18]). And in fact, in Benhamou [6] the Malliavin covariance matrix is not used at all and actually his weight can be reproduced by a suitable choice of $v$ (see next Remark 3.6).

We propose now some choices for $v$ giving a triangular generalized Malliavin covariance matrix, which allows significant simplifications of the weights. From now on, we suppose to have fixed the maturity time $T$, at which $n$ jumps have been observed, i.e. we work on the set $\{J_T=n\}$. We set
\[
\beta_t = \int_0^t\xi_u\,du = \partial_xY_t
\]
and we use the notations

\[
\partial_ac_i = \partial_ac(T_i,\Delta_i,X_{T_i^-}) \quad\text{and}\quad q_i = q(T_i,\Delta_i,X_{T_i^-}).
\]
As for the Malliavin derivatives we are going to use, we have (see Property 2.21)
\[
\begin{aligned}
&D_{0,u}X_T = \xi_T\xi_u^{-1}\sigma(X_u), && D_{0,u}Y_T = (\beta_T-\beta_u)\,\xi_u^{-1}\sigma(X_u), \\
&D_iX_T = \xi_T\xi_{T_i}^{-1}\partial_ac_i, && D_iY_T = (\beta_T-\beta_{T_i})\,\xi_{T_i}^{-1}\partial_ac_i, \quad i=1,\dots,n, \\
&D_{i+n}X_T = \xi_T\xi_{T_i}^{-1}q_i, && D_{i+n}Y_T = (\beta_T-\beta_{T_i})\,\xi_{T_i}^{-1}q_i, \quad i=1,\dots,n,
\end{aligned}
\]
holding on the set $\{J_T=n\}$. We recall also Remark 2.4: we assume $\pi_i=1$ and that the pdf $g$ of the jump amplitudes is continuous on $\bar I$ and null on $\partial I$, giving $[\cdot,\cdot]^\Delta_\pi\equiv0$.

The following classes of functions will be used to simplify the structure of $\gamma_{\tilde DZ,v}$:
\[
\mathcal{A}^W_T = \Big\{a:[0,T]\to\mathbb{R}\ \text{such that}\ \int_0^Ta_u\,du = 0\Big\} \tag{16}
\]
\[
\mathcal{A}^J_n = \Big\{a\in\mathbb{R}^n\ \text{such that}\ \sum_{i=1}^na_i = 0\Big\} \tag{17}
\]
From now on, all the equalities among processes, Malliavin derivatives and Skorohod integrals have to be intended on the set $\{J_T=n\}$.
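The constraints defining (16)-(17) are easy to check numerically. The following sketch (our own illustration; all function names are ours) builds the concrete choices used later in Remarks 3.5 and 3.10 and verifies the zero-integral and zero-sum conditions, together with the positivity of the partial sums/primitives required by Propositions 3.4 and 3.7.

```python
import math

T = 5.0  # maturity used throughout the paper's experiments

def a_linear(s):
    # a_s = T/2 - s (Remark 3.5), an element of A^W_T
    return T / 2.0 - s

def a_sine(s):
    # a_s = sin(2*pi*s/T) (Remark 3.5), an element of A^W_T
    return math.sin(2.0 * math.pi * s / T)

def a_jump(n):
    # a_i = 1 for i < n, a_n = 1 - n (Remark 3.10): an element of A^J_n
    # with positive partial sums A_k = k for k = 1, ..., n-1
    return [1.0] * (n - 1) + [1.0 - n]

def integral(f, lo, hi, steps=100000):
    # simple trapezoidal rule, accurate enough to verify the constraints
    step = (hi - lo) / steps
    total = 0.5 * (f(lo) + f(hi))
    for i in range(1, steps):
        total += f(lo + i * step)
    return total * step
```

For instance, `integral(a_linear, 0.0, T)` vanishes (membership in $\mathcal{A}^W_T$), while `integral(a_linear, 0.0, u)` is positive for $0<u<T$ (the condition $A_u\ge0$ of Proposition 3.4).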

3.1 Gaussian direction

Here we can set $\tilde D = (D_0,0,\dots,0)$, $\tilde\delta = \delta_0$ and we choose, for some $a\in\mathcal{A}^W_T$,
\[
v^1_{0,u} = \xi_u\sigma^{-1}(X_u)\,a_u, \qquad v^2_{0,u} = \xi_u\sigma^{-1}(X_u),
\]
and $v^1_i=v^2_i=0$ otherwise. So the matrix $\gamma_{\tilde DZ_T,v}$ is
\[
\gamma_{\tilde DZ_T,v} = \begin{pmatrix} 0 & -\int_0^T\beta_ua_u\,du \\ T\,\xi_T & T\,\beta_T - \int_0^T\beta_u\,du \end{pmatrix}
\]
and the process $w$ as in (15) is then given by
\[
w_{0,u} = \frac{\xi_u\sigma^{-1}(X_u)}{T}\Big(1 - \frac{\int_0^T\beta_r\,dr}{\int_0^T\beta_ra_r\,dr}\,a_u\Big)
\]
and $w_i=0$ for $i=1,\dots,2n$. It remains to prove that $1_{\{J_T=n\}}|\det\gamma_{\tilde DZ_T,v}|^{-1}\in L^p$ for any $p$. One has
\[
1_{\{J_T=n\}}\,|\det\gamma_{\tilde DZ_T,v}|^{-1} = 1_{\{J_T=n\}}\,T^{-1}\,\xi_T^{-1}\,\Big|\int_0^T\beta_ua_u\,du\Big|^{-1}.
\]
Now, $\xi_T^{-1}=\hat\xi_T\in L^q$ for any $q$. Concerning the latter term, one has

Proposition 3.4. If $A_u=\int_0^ua_s\,ds\ge0$, one has
\[
1_{\{J_T=n\}}\Big|\int_0^T\beta_ua_u\,du\Big|^{-1}\in L^p \quad\text{for any }p.
\]
Proof. Since $A_T=0$, an integration by parts gives $\int_0^T\beta_ua_u\,du = -\int_0^TA_u\xi_u\,du$. By the Cauchy-Schwarz inequality,
\[
\int_0^T\sqrt{A_u}\,du \le \Big(\int_0^TA_u\xi_u\,du\Big)^{1/2}\Big(\int_0^T\xi_u^{-1}\,du\Big)^{1/2},
\]
so that
\[
\Big(\int_0^TA_u\xi_u\,du\Big)^{-1} \le c_A^{-2}\int_0^T\hat\xi_u\,du,
\]
where $\hat\xi=\xi^{-1}$ and $c_A=\int_0^T\sqrt{A_u}\,du>0$. Therefore,
\[
\Big(\int_0^TA_u\xi_u\,du\Big)^{-p} \le c_A^{-2p}\Big(\int_0^T\hat\xi_u\,du\Big)^{p} \le d_{A,p}\int_0^T\hat\xi_u^{\,p}\,du
\]
for some $d_{A,p}>0$. Since $\sup_{u\in[0,T]}\hat\xi_u\in L^p$ for any $p\ge1$, the statement holds. $\Box$

Remark 3.5. In practice we will take, for $s\in[0,T]$, either $a_s = T/2-s$ or $a_s = \sin\big(2\pi\tfrac{s}{T}\big)$. In each case, $A_s=\int_0^sa_u\,du\ge0$ for any $s\in[0,T]$.

Remark 3.6. The weight proposed by Benhamou in [5] is given by (in our notation) $\delta_0(w^{Ben})$, where
\[
w^{Ben}_u = \xi_u\sigma^{-1}(X_u)\,\frac{\beta_T^2 + 2\xi_u\big(\int_0^Tr\xi_r\,dr - T\beta_T\big)}{\beta_T\big(2\int_0^Tr\xi_r\,dr - T\beta_T\big)}.
\]
Tedious but straightforward computations allow to show that $w^{Ben}$ can be found through
\[
v^1_{0,u} = \xi_u\sigma^{-1}(X_u), \qquad v^2_{0,u} = \xi_u^2\,\sigma^{-1}(X_u),
\]
and $v^1_i=v^2_i=0$ otherwise.

3.2 Jump amplitudes direction

Here we consider $\tilde D = (0,D_1,\dots,D_n,0,\dots,0)$, $\tilde\delta = \sum_{i=1}^n\delta_i$ and, for $a\in\mathcal{A}^J_n$, we set
\[
v^1_i = \xi_{T_i}(\partial_ac_i)^{-1}a_i, \qquad v^2_i = \xi_{T_i}(\partial_ac_i)^{-1}, \quad i=1,\dots,n,
\]
and $v^1_i=v^2_i=0$ otherwise. The matrix $\gamma_{\tilde DZ_T,v}$ is then
\[
\gamma_{\tilde DZ_T,v} = \begin{pmatrix} 0 & -\sum_{i=1}^n\beta_{T_i}a_i \\ n\,\xi_T & n\,\beta_T - \sum_{i=1}^n\beta_{T_i} \end{pmatrix}
\]
and the process $w$ as in (15) is
\[
w_i = \frac{\xi_{T_i}(\partial_ac_i)^{-1}}{n}\Big(1 - \frac{\sum_{j=1}^n\beta_{T_j}}{\sum_{j=1}^n\beta_{T_j}a_j}\,a_i\Big), \quad i=1,\dots,n,
\]
and $w_i\equiv0$ otherwise. Concerning the integrability of $|\det\gamma_{\tilde DZ_T,v}|^{-p}$ on the set $\{J_T=n\}$, one has
\[
1_{\{J_T=n\}}\,|\det\gamma_{\tilde DZ_T,v}|^{-1} = 1_{\{J_T=n\}}\,n^{-1}\,\xi_T^{-1}\,\Big|\sum_{i=1}^n\beta_{T_i}a_i\Big|^{-1}.
\]
Now, $\xi_T^{-1}=\hat\xi_T\in L^q$ for any $q$. Concerning the latter term, one has

Proposition 3.7. Suppose that $A_k=\sum_{i=1}^ka_i>0$ for any $k=1,\dots,n-1$. Then $1_{\{J_T=n\}}\big|\sum_{i=1}^n\beta_{T_i}a_i\big|^{-1}\in L^p$ for $p<n/2$.

Let us give a discussion before proving Proposition 3.7.

Remark 3.8. The really important fact is that (7) holds in the direction of the jump amplitudes. Now, it is simple to see that it reduces to asking for the following: one needs that $1_{\{J_T=n\}}|\det\gamma_{\tilde DZ_T,v}|^{-1}\in L^p$ for some $p>2$. Now, this holds if $1_{\{J_T=n\}}\big|\sum_{i=1}^n\beta_{T_i}a_i\big|^{-1}\in L^{2+\delta}$ for some $\delta>0$. By Proposition 3.7, this happens whenever $n>4$. However, the above formulas will be implemented for any $n$, and we will see that they work efficiently if the intensity $\lambda$ is not too small.

In order to prove Proposition 3.7, we give the following simple

Lemma 3.9. Let $V$ be a r.v. Then $|V|^{-1}\in L^p$ if and only if
\[
\int_0^1\frac{\mathbb{P}(|V|<\xi)}{\xi^{p+1}}\,d\xi < +\infty.
\]
Proof. The statement immediately follows from the fact that
\[
\mathbb{E}\big(|V|^{-p}\big) = \int_0^\infty\mathbb{P}\big(|V|^{-p}>v\big)\,dv = p\int_0^\infty\frac{\mathbb{P}(|V|<\xi)}{\xi^{p+1}}\,d\xi. \qquad\Box
\]

Proof of Proposition 3.7. Setting $A_{J_s}=\sum_{i=1}^{J_s}a_i\ge0$, one has
\[
\sum_{i=1}^n\beta_{T_i}a_i = \sum_{i=1}^n\int_0^{T_i}\xi_s\,ds\,a_i = \int_0^T\xi_s\sum_{i=1}^na_i1_{s<T_i}\,ds = -\int_0^T\xi_sA_{J_s}\,ds,
\]
since $\sum_{i=1}^na_i=0$. By the Cauchy-Schwarz argument already used in the proof of Proposition 3.4, the problem then reduces to the negative moments of $\int_0^T\sqrt{A_{J_s}}\,ds$, up to a Hölder correction of order $\delta>0$. First, one has (recall that $A_{J_s}=0$ on $[0,T_1)$ and on $[T_n,T]$)
\[
\int_0^T\sqrt{A_{J_s}}\,ds = \sum_{i=1}^{n-1}\sqrt{A_i}\,(T_{i+1}-T_i) \ge c_A\sum_{i=1}^{n-1}(T_{i+1}-T_i) = c_A(T_n-T_1),
\]
where $c_A=\min\{\sqrt{A_i}\,;\,i=1,\dots,n-1\}>0$. Therefore, the problem reduces to studying when $1_{\{J_T=n\}}(T_n-T_1)^{-1}\in L^{2p+\delta}$. Conditionally on $\{J_T=n\}$, the probability density function of $(T_1,T_n)$ is
\[
p_{T_1,T_n}(t_1,t_n) = \frac{n(n-1)}{T^n}\,(t_n-t_1)^{n-2}\,1_{\{0<t_1<t_n<T\}}
\]
and, since $\mathbb{E}(V1_A)=\mathbb{E}(V\,|\,A)\,\mathbb{P}(A)$, the statement follows. $\Box$
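The identity behind Lemma 3.9 can be sanity-checked numerically. The sketch below (our own illustration) takes $V\sim\mathrm{Uniform}(0,1)$ and $p=1/2$, for which $\mathbb{E}(V^{-1/2})=1/(1-p)=2$ in closed form, and evaluates the right-hand side $p\int_0^\infty\mathbb{P}(|V|<\xi)\,\xi^{-p-1}\,d\xi$ by a midpoint rule on $(0,1)$ plus a closed-form tail.

```python
# Check of Lemma 3.9's identity E(|V|^{-p}) = p * int_0^inf P(|V|<xi) xi^{-p-1} dxi
# for V ~ Uniform(0,1) and p = 1/2.
p = 0.5
lhs = 1.0 / (1.0 - p)  # E(V^{-p}) = 1/(1-p) for V ~ U(0,1), p < 1

# On (0,1), P(V < xi) = xi, so the integrand is p * xi^{-p} (midpoint rule);
# on (1,inf), P(V < xi) = 1 and p * int_1^inf xi^{-p-1} dxi = 1 exactly.
steps = 200000
h = 1.0 / steps
piece_01 = p * sum(h * ((i + 0.5) * h) ** (-p) for i in range(steps))
piece_1inf = 1.0
rhs = piece_01 + piece_1inf
```

The integrable singularity of $\xi^{-p}$ at the origin is handled well enough by the midpoint rule for a two-decimal agreement, which is all this check needs.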

Remark 3.10. In practice, we will take ai = 1 for any i < n and an = 1 − n, so that Ak = k > 0 for any k = 1, . . . n − 1.

3.3 Jump times direction

In this case we consider a pure jump process. Then $\tilde D = (0,\dots,0,D_{n+1},\dots,D_{2n})$, $\tilde\delta = \sum_{i=1}^n\delta_{i+n}$ and, for $a\in\mathcal{A}^J_n$, we set
\[
v^1_{i+n} = \xi_{T_i}(q_i)^{-1}a_i, \qquad v^2_{i+n} = \xi_{T_i}(q_i)^{-1}, \quad i=1,\dots,n,
\]
and $v^1_i=v^2_i=0$ otherwise. Then
\[
\gamma_{\tilde DZ_T,v} = \begin{pmatrix} 0 & -\sum_{i=1}^n\beta_{T_i}a_i \\ n\,\xi_T & n\,\beta_T - \sum_{i=1}^n\beta_{T_i} \end{pmatrix},
\]
giving
\[
w_{i+n} = \frac{\xi_{T_i}(q_i)^{-1}}{n}\Big(1 - \frac{\sum_{j=1}^n\beta_{T_j}}{\sum_{j=1}^n\beta_{T_j}a_j}\,a_i\Big), \quad i=1,\dots,n,
\]
and $w_i\equiv0$ otherwise. The similarity with the jump amplitudes case allows to develop the same arguments as in Section 3.2 (see Proposition 3.7 and Remark 3.8), so that the same integrability properties follow: if $A_k=\sum_{i=1}^ka_i>0$ for any $k\le n-1$, we can apply Proposition 3.7 and then $1_{\{J_T=n\}}|\det\gamma_{\tilde DZ_T,v}|^{-1}\in L^p$ for some $p>2$ whenever $n>4$.

Remark 3.11. For our numerical purposes, we will take $a_i=1$ for $i<n$ and $a_n=1-n$. Therefore, $A_k>0$ for any $k=1,\dots,n-1$. In this case, the border term operator is non null and by (5) one has
\[
[f(X_T,Y_T),w]_\pi = \sum_{i=1}^n\frac{f^+_i(X_T,Y_T)\,w^+_i - f^-_i(X_T,Y_T)\,w^-_i}{T_{i+1}-T_{i-1}} \tag{18}
\]
where
\[
f^\pm_i(X_T,Y_T) = \lim_{t\to T_{i\pm1}^{\mp}}\bar f(T_1,\dots,T_{i-1},t,T_{i+1},\dots,T_n,\omega), \qquad
w^\pm_i = \lim_{t\to T_{i\pm1}^{\mp}}\bar w_i(T_1,\dots,T_{i-1},t,T_{i+1},\dots,T_n,\omega),
\]
being $\bar f$ and $\bar w_i$ functions such that, on the set $\{J_T=n\}$,
\[
f(X_T,Y_T)\equiv\bar f(\omega,T_1,\dots,T_n) \quad\text{and}\quad w_i\equiv\bar w_i(\omega,T_1,\dots,T_n),
\]
respectively, the symbol $\omega$ denoting generically the dependence on the other random sources (Brownian motion and jump amplitudes).

3.4 Joint Gaussian and jump amplitudes direction

Here, $\tilde D = (D_0,D_1,\dots,D_n,0,\dots,0)$ and $\tilde\delta = \delta_0 + \sum_{i=1}^n\delta_i$. We define the process $v=(v^1,v^2)$ as a mixture of the ones seen in Sections 3.1 and 3.2. First, for $a\in\mathcal{A}^W_T$ we set
\[
v^1_{0,u} = \xi_u\sigma^{-1}(X_u)\,a_u, \qquad v^2_i = \xi_{T_i}(\partial_ac_i)^{-1}, \quad i=1,\dots,n,
\]
and $v^1_i=v^2_i=0$ otherwise. Therefore
\[
\gamma_{\tilde DZ_T,v} = \begin{pmatrix} 0 & -\int_0^T\beta_ua_u\,du \\ n\,\xi_T & n\,\beta_T - \sum_{i=1}^n\beta_{T_i} \end{pmatrix}
\]
and
\[
w_{0,u} = -\frac{\sum_{i=1}^n\beta_{T_i}}{n\int_0^T\beta_ra_r\,dr}\,\xi_u\sigma^{-1}(X_u)\,a_u, \qquad w_i = \frac{1}{n}\,\xi_{T_i}(\partial_ac_i)^{-1}, \quad i=1,\dots,n,
\]
and $w_i\equiv0$ otherwise. As a different mixture, for $a\in\mathcal{A}^J_n$ we take
\[
v^1_{0,u} = \xi_u\sigma^{-1}(X_u), \qquad v^2_i = \xi_{T_i}(\partial_ac_i)^{-1}a_i, \quad i=1,\dots,n,
\]
and $v^1_i=v^2_i=0$ otherwise. Here,
\[
\gamma_{\tilde DZ_T,v} = \begin{pmatrix} T\,\xi_T & T\,\beta_T - \int_0^T\beta_u\,du \\ 0 & -\sum_{i=1}^n\beta_{T_i}a_i \end{pmatrix}
\]
and
\[
w_{0,u} = \frac{1}{T}\,\xi_u\sigma^{-1}(X_u), \qquad w_i = -\frac{\int_0^T\beta_u\,du}{T\sum_{j=1}^n\beta_{T_j}a_j}\,\xi_{T_i}(\partial_ac_i)^{-1}a_i, \quad i=1,\dots,n,
\]
$w_i\equiv0$ otherwise. As for the $p$-integrability properties, we refer to Section 3.1 (see Proposition 3.4) and Section 3.2 (see Proposition 3.7 and Remark 3.8).

3.5 Joint jump amplitudes and times direction

Here we take $\tilde D = (0,D_1,\dots,D_{2n})$ and $\tilde\delta = \sum_{i=1}^n(\delta_i+\delta_{i+n})$. Again, we take $v$ as a mixture of the ones seen in Sections 3.2 and 3.3. For $a\in\mathcal{A}^J_n$, we first set
\[
v^1_i = \xi_{T_i}(\partial_ac_i)^{-1}a_i, \qquad v^2_{i+n} = \xi_{T_i}(q_i)^{-1}, \quad i=1,\dots,n,
\]
with $v^1=v^2=0$ otherwise. Then
\[
\gamma_{\tilde DZ_T,v} = \begin{pmatrix} 0 & -\sum_{i=1}^n\beta_{T_i}a_i \\ n\,\xi_T & n\,\beta_T - \sum_{i=1}^n\beta_{T_i} \end{pmatrix}
\]
so that $w_0\equiv0$ and, for $i=1,\dots,n$,
\[
w_i = -\frac{\sum_{j=1}^n\beta_{T_j}}{n\sum_{j=1}^n\beta_{T_j}a_j}\,\xi_{T_i}(\partial_ac_i)^{-1}a_i, \qquad w_{i+n} = \frac{1}{n}\,\xi_{T_i}(q_i)^{-1}.
\]
As a second choice, for $a\in\mathcal{A}^J_n$ we take
\[
v^1_i = \xi_{T_i}(\partial_ac_i)^{-1}, \qquad v^2_{i+n} = \xi_{T_i}(q_i)^{-1}a_i, \quad i=1,\dots,n,
\]
with $v^1=v^2=0$ otherwise. Then
\[
\gamma_{\tilde DZ_T,v} = \begin{pmatrix} n\,\xi_T & n\,\beta_T - \sum_{i=1}^n\beta_{T_i} \\ 0 & -\sum_{i=1}^n\beta_{T_i}a_i \end{pmatrix},
\]
which gives $w_0\equiv0$ and, for $i=1,\dots,n$,
\[
w_i = \frac{1}{n}\,\xi_{T_i}(\partial_ac_i)^{-1}, \qquad w_{i+n} = -\frac{\sum_{j=1}^n\beta_{T_j}}{n\sum_{j=1}^n\beta_{T_j}a_j}\,\xi_{T_i}(q_i)^{-1}a_i.
\]
As for the non degeneracy problem, we refer to Section 3.2 (see Proposition 3.7 and Remark 3.8) and Section 3.3. We finally recall that in this case the border term operator has to be considered as well and is given by (18).

4 Examples and numerical results

Here we consider some examples of financial interest and apply what has been developed up to now. In some cases we prefer a different, specific choice for the process $v$, for the sake of further simplification. The models are discussed one by one, analyzing the different directions of the differential calculus. We also present several numerical experiments in order to compare the different Malliavin approaches and the finite difference method for the Monte Carlo estimation of the Delta. We look at different types of options: floating Asian call/put options, standard and Asian call/put options, standard and Asian digital options. Here are the Asian-type payoffs:

− floating Asian call and put: $f(X_T,Y_T)=(X_T-Y_T/T)_+$ and $f(X_T,Y_T)=(Y_T/T-X_T)_+$ respectively;
− Asian digital, call and put option: $f(X_T,Y_T)=1_{Y_T/T>K}$, $f(X_T,Y_T)=(Y_T/T-K)_+$ and $f(X_T,Y_T)=(K-Y_T/T)_+$ respectively.
Let us summarize some facts concerning the forthcoming numerical experiments, whose figures are postponed to Section 5.
• When dealing with $n=0$ and $n=1$, we refer to Remark 3.3.
• A benchmark value is considered, obtained using the finite difference method with 250,000 simulations.
• Sometimes we need to discretize the time interval in order to simulate the processes. In that case, we split the time interval into 100 subintervals.
• Some parameters are defined once for all, the others being declared case by case. More precisely, we set
  [jump amplitudes] $1+\Delta_i = e^{m+\varrho N_i}$, with $m=0$, $\varrho=0.05$ and $N_i\sim N(0,1)$;
  [Poisson intensity] $\lambda=5$ unless specified (we also test small values of $\lambda$);
  [maturity time] $T=5$;
  [starting underlying asset price] $x=100$.
• The functions $a\in\mathcal{A}^W_T$ and $a\in\mathcal{A}^J_n$ are set as in Remarks 3.5, 3.10 and 3.11.
As usually done when dealing with a Malliavin Monte Carlo estimator, we consider a variance reduction technique based on localization functions, built as a natural generalization of the ones in Bavouzet-Morel and Messaoud [4]. When floating Asian call/put options are considered, we use the following functions:
\[
G_\varepsilon(s,t) = \begin{cases} 0 & \text{for } s\le t-\varepsilon, \\[2pt] \dfrac{\big(s-(t-\varepsilon)\big)^2}{4\varepsilon} & \text{for } s\in[t-\varepsilon,t+\varepsilon], \\[2pt] s-t & \text{for } s\ge t+\varepsilon, \end{cases}
\]
and
\[
F_\varepsilon(s,t) = (s-t)_+ - G_\varepsilon(s,t), \qquad B^1_\varepsilon(s,t) = \partial_sG_\varepsilon(s,t), \qquad B^2_\varepsilon(s,t) = \partial_tG_\varepsilon(s,t),
\]
so that
\[
\partial_x\mathbb{E}\Big(\big(X_T-\tfrac{Y_T}{T}\big)_+1_{\{J_T=n\}}\Big) = \partial_x\mathbb{E}\Big(G_\varepsilon\big(X_T,\tfrac{Y_T}{T}\big)1_{\{J_T=n\}}\Big) + \partial_x\mathbb{E}\Big(F_\varepsilon\big(X_T,\tfrac{Y_T}{T}\big)1_{\{J_T=n\}}\Big).
\]
The first term on the r.h.s. will be treated as
\[
\mathbb{E}\Big(\partial_xG_\varepsilon\big(X_T,\tfrac{Y_T}{T}\big)\Big) = \mathbb{E}\Big(B^1_\varepsilon\big(X_T,\tfrac{Y_T}{T}\big)\,\xi_T + B^2_\varepsilon\big(X_T,\tfrac{Y_T}{T}\big)\,\frac{\beta_T}{T}\Big).
\]

As for the second one, we apply our weights. A similar idea is used for digital options: given $\varepsilon>0$, we set
\[
G_\varepsilon(s) = \begin{cases} 0 & \text{for } s\le K-\varepsilon, \\[2pt] \dfrac12\Big(\dfrac{s-K+\varepsilon}{\varepsilon}\Big)^3 & \text{for } s\in[K-\varepsilon,K], \\[2pt] \dfrac12\Big(\dfrac{s-K-\varepsilon}{\varepsilon}\Big)^3 + 1 & \text{for } s\in[K,K+\varepsilon], \\[2pt] 1 & \text{for } s\ge K+\varepsilon, \end{cases}
\]
and
\[
F_\varepsilon(s) = 1_{s>K} - G_\varepsilon(s), \qquad B_\varepsilon(s) = G'_\varepsilon(s).
\]
For $U_T=X_T$ or $U_T=Y_T/T$, one has
\[
\partial_x\mathbb{E}\big(1_{U_T>K}\big) = \partial_x\mathbb{E}\big(G_\varepsilon(U_T)\big) + \partial_x\mathbb{E}\big(F_\varepsilon(U_T)\big).
\]
Now, the first term on the r.h.s. will be considered as
\[
\mathbb{E}\big(\partial_xG_\varepsilon(U_T)\big) = \mathbb{E}\big(B_\varepsilon(U_T)\,\partial_xU_T\big),
\]
while for the second we use the Malliavin weights.
We finally add some remarks about the border term operator (18), appearing when dealing with the jump times noise (a case in which $(w_{n+1},\dots,w_{2n})\ne0$). Setting
\[
f(X_T,Y_T)\equiv\bar f(T_1,\dots,T_n,T,\omega), \qquad w_{i+n}\equiv\bar w_{i+n}(T_1,\dots,T_n,\omega), \quad i=1,\dots,n,
\]
$\omega$ denoting the dependence on the Gaussian and jump amplitude noises, we need to compute
\[
f^\pm_i = \lim_{t\to T_{i\pm1}^{\mp}}\bar f(T_1,\dots,T_{i-1},t,T_{i+1},\dots,T_n,\omega), \qquad
w^\pm_i = \lim_{t\to T_{i\pm1}^{\mp}}\bar w_{i+n}(T_1,\dots,T_{i-1},t,T_{i+1},\dots,T_n,\omega), \quad i=1,\dots,n, \tag{19}
\]
where, as usual, we set $T_0=0$ and $T_{n+1}=T$. Then,
\[
[f(X_T,Y_T),w]_\pi = \sum_{i=1}^n\frac{f^+_iw^+_i - f^-_iw^-_i}{T_{i+1}-T_{i-1}}, \qquad f^\pm_i \text{ and } w^\pm_i \text{ given in (19).} \tag{20}
\]
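For reference, the localization functions above can be coded directly; the sketch below is our reconstruction (the displayed definitions are partly garbled in the source), with `eps` and `K` chosen arbitrarily for testing. It checks that each $G_\varepsilon$ glues continuously to the original payoff outside the smoothing band, so that $F_\varepsilon$ is supported near the kink.

```python
# Hedged sketch (our own illustration) of the localization functions of Section 4:
# G_eps is a C^1 smoothing of the payoff's kink, F_eps = payoff - G_eps is the
# remainder to which the Malliavin weight is applied.
eps, K = 0.5, 100.0

def G_float(s, t):
    # smooths (s - t)_+ on [t - eps, t + eps]
    if s <= t - eps:
        return 0.0
    if s <= t + eps:
        return (s - (t - eps)) ** 2 / (4.0 * eps)
    return s - t

def F_float(s, t):
    return max(s - t, 0.0) - G_float(s, t)

def G_dig(s):
    # smooths the indicator 1_{s > K} on [K - eps, K + eps]
    if s <= K - eps:
        return 0.0
    if s <= K:
        return 0.5 * ((s - K + eps) / eps) ** 3
    if s <= K + eps:
        return 0.5 * ((s - K - eps) / eps) ** 3 + 1.0
    return 1.0

def F_dig(s):
    return (1.0 if s > K else 0.0) - G_dig(s)
```

At the band edges, $G_\varepsilon$ matches both the value and the first derivative of the payoff, which is exactly what makes the $B_\varepsilon$ terms above well defined.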

4.1 Black-Scholes-Merton model

We consider the process $Z_t=(X_t,Y_t)$ where, for $b,\alpha\in\mathbb{R}$ and $\sigma>0$,
\[
X_t = x + \int_0^tbX_r\,dr + \int_0^t\sigma X_r\,dW_r + \alpha\sum_{i=1}^{J_t}\Delta_iX_{T_i^-},
\]
i.e. $X_t = x\exp\big((b-\tfrac12\sigma^2)t + \sigma W_t\big)\prod_{i=1}^{J_t}(1+\alpha\Delta_i)$. Now, fix the maturity time $T$ and the set $\{J_T=n\}$. Notice that there is no direct dependence on the jump times, so that the associated direction will not be considered. Here, the main equalities we will use are the following:
\[
\begin{aligned}
&\xi_t = \frac{X_t}{x}, \qquad \beta_t = \frac{Y_t}{x}, \\
&\tilde D_{0,u}X_t = \sigma X_t\,1_{u\le t}, \qquad \tilde D_{0,u}Y_t = \sigma(Y_t-Y_u)\,1_{u\le t}, \\
&\tilde D_iX_t = \frac{\alpha X_t}{1+\alpha\Delta_i}\,1_{T_i\le t}, \qquad \tilde D_iY_t = \frac{\alpha(Y_t-Y_{T_i})}{1+\alpha\Delta_i}\,1_{T_i\le t}, \qquad \partial_ac_i = \alpha X_{T_i^-} = \frac{\alpha X_{T_i}}{1+\alpha\Delta_i}.
\end{aligned}
\tag{21}
\]
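Since the Black-Scholes-Merton dynamics with jumps admits the closed form above, $X_T$ can be simulated exactly, with no time discretization. The sketch below (our own illustration, with the parameters of the forthcoming experiments) does so and compares the Monte Carlo mean of $X_T$ with its analytic value $x\,e^{bT+\lambda T\alpha(e^{m+\varrho^2/2}-1)}$, which follows from the independence of the Brownian, Poisson and amplitude noises.

```python
import math
import random

random.seed(12345)

# Parameters of Section 4 (Merton experiments); 1 + Delta_i = exp(m + rho*N_i).
x, b, sigma, T = 100.0, 0.1, 0.2, 5.0
lam, m, rho, alpha = 5.0, 0.0, 0.05, 1.0

def poisson(mu):
    # Knuth's multiplicative method, adequate for moderate mu
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def sample_XT():
    # exact simulation of X_T via the closed form
    WT = random.gauss(0.0, math.sqrt(T))
    jumps = 1.0
    for _ in range(poisson(lam * T)):
        delta = math.exp(m + rho * random.gauss(0.0, 1.0)) - 1.0
        jumps *= 1.0 + alpha * delta
    return x * math.exp((b - 0.5 * sigma ** 2) * T + sigma * WT) * jumps

n_mc = 50000
mc_mean = sum(sample_XT() for _ in range(n_mc)) / n_mc
# E[X_T] = x * exp(bT) * exp(lam*T*(E[1+alpha*Delta]-1))
exact_mean = x * math.exp(b * T + lam * T * alpha * (math.exp(m + rho ** 2 / 2.0) - 1.0))
```

This exact sampler is what the Malliavin and finite difference estimators of the next paragraphs are built on.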

Gaussian direction

The result in Section 3.1 gives
\[
w_{0,u} = \frac{1}{\sigma xT}\Big(1 - \frac{\int_0^TY_r\,dr}{\int_0^TY_ra_r\,dr}\,a_u\Big), \quad u\in[0,T],
\]
where $a\in\mathcal{A}^W_T$. So, by Property 2.14 we can write
\[
\tilde\delta_0(w_0) = \frac{1}{\sigma xT}\Big[W_T - \frac{\int_0^TY_r\,dr}{\int_0^TY_ra_r\,dr}\int_0^Ta_u\,dW_u + \int_0^T\tilde D_{0,u}\Big(\frac{\int_0^TY_r\,dr}{\int_0^TY_ra_r\,dr}\Big)a_u\,du\Big].
\]
Again by Property 2.14 and the rules in (21), we have
\[
\tilde D_{0,u}\Big(\frac{\int_0^TY_r\,dr}{\int_0^TY_ra_r\,dr}\Big) = \frac{\tilde D_{0,u}\big(\int_0^TY_r\,dr\big)}{\int_0^TY_ra_r\,dr} - \int_0^TY_r\,dr\cdot\frac{\tilde D_{0,u}\big(\int_0^TY_ra_r\,dr\big)}{\big(\int_0^TY_ra_r\,dr\big)^2}
= \frac{\sigma\int_u^T(Y_r-Y_u)\,dr}{\int_0^TY_ra_r\,dr} - \int_0^TY_r\,dr\cdot\frac{\sigma\int_u^T(Y_r-Y_u)a_r\,dr}{\big(\int_0^TY_ra_r\,dr\big)^2}.
\]
Therefore,
\[
\begin{aligned}
\tilde\delta(w) = \frac{1}{\sigma xT}\bigg[W_T &- \frac{\int_0^TY_r\,dr}{\int_0^TY_ra_r\,dr}\int_0^Ta_u\,dW_u + \frac{\sigma}{\int_0^TY_ra_r\,dr}\int_0^T\!\!\int_u^T(Y_r-Y_u)\,a_u\,dr\,du \\
&- \frac{\sigma\int_0^TY_r\,dr}{\big(\int_0^TY_ra_r\,dr\big)^2}\int_0^T\!\!\int_u^T(Y_r-Y_u)\,a_ra_u\,dr\,du\bigg].
\end{aligned}
\tag{22}
\]

Remark 4.1. The weight by Benhamou [5] is (see Remark 3.6) $\delta_0(w^{Ben}_u)$, where
\[
w^{Ben}_u = \frac{1}{\sigma x}\,\frac{Y_T^2 + 2X_u\big(\int_0^TrX_r\,dr - TY_T\big)}{Y_T\big(2\int_0^TrX_r\,dr - TY_T\big)}.
\]
By using Property 2.14, one obtains
\[
\begin{aligned}
\tilde\delta(w^{Ben}_u) = \frac{1}{\sigma x}\bigg\{ &\frac{Y_TW_T}{2\int_0^TrX_r\,dr - TY_T} - \int_0^T\frac{\sigma(Y_T-Y_u)}{2\int_0^TrX_r\,dr - TY_T}\,du \\
&+ \sigma Y_T\int_0^T\frac{2\int_u^TrX_r\,dr - T(Y_T-Y_u)}{\big(2\int_0^TrX_r\,dr - TY_T\big)^2}\,du + \frac{2\big(\int_0^TrX_r\,dr - TY_T\big)}{Y_T\big(2\int_0^TrX_r\,dr - TY_T\big)}\int_0^TX_u\,dW_u \\
&- 2\sigma\int_0^T\frac{\int_u^TrX_r\,dr - T(Y_T-Y_u)}{Y_T\big(2\int_0^TrX_r\,dr - TY_T\big)}\,X_u\,du \\
&+ 2\sigma\Big(\int_0^TrX_r\,dr - TY_T\Big)\int_0^T\frac{(Y_T-Y_u)\big(2\int_0^TrX_r\,dr - TY_T\big) + Y_T\big(2\int_u^TrX_r\,dr - T(Y_T-Y_u)\big)}{\Big(Y_T\big(2\int_0^TrX_r\,dr - TY_T\big)\Big)^2}\,X_u\,du\bigg\}.
\end{aligned}
\tag{23}
\]
It is clear that (23) has a complicated structure, confirming the utility, for practical purposes, of the introduction of the directions $v$.

Jump amplitudes direction

Following Section 3.2 and using (21), we have
\[
w_i = \frac{1+\alpha\Delta_i}{\alpha nx}\Big(1 - \frac{\sum_{j=1}^nY_{T_j}}{\sum_{j=1}^nY_{T_j}a_j}\,a_i\Big), \quad i=1,\dots,n,
\]
where $a\in\mathcal{A}^J_n$, and $w_i=0$ otherwise. By Property 2.6, we can write, for $i=1,\dots,n$,
\[
\tilde\delta_i(w_i) = \frac{1}{\alpha nx}\Big[\Big(1-\frac{\sum_{j=1}^nY_{T_j}}{\sum_{j=1}^nY_{T_j}a_j}\,a_i\Big)\tilde\delta_i(1+\alpha\Delta_i) + \tilde D_i\Big(\frac{\sum_{j=1}^nY_{T_j}}{\sum_{j=1}^nY_{T_j}a_j}\,a_i\Big)(1+\alpha\Delta_i)\Big].
\]
Now, recalling (21), the second term on the r.h.s. can be simplified and we obtain
\[
\begin{aligned}
\tilde\delta(w) = \frac{1}{\alpha nx}\bigg[&\sum_{i=1}^n\Big(1-\frac{\sum_{j=1}^nY_{T_j}}{\sum_{j=1}^nY_{T_j}a_j}\,a_i\Big)\tilde\delta_i(1+\alpha\Delta_i) + \frac{\alpha}{\sum_{j=1}^nY_{T_j}a_j}\sum_{i=1}^n\sum_{j=i+1}^n(Y_{T_j}-Y_{T_i})\,a_i \\
&- \frac{\alpha\sum_{j=1}^nY_{T_j}}{\big(\sum_{j=1}^nY_{T_j}a_j\big)^2}\sum_{i=1}^n\sum_{j=i+1}^n(Y_{T_j}-Y_{T_i})\,a_ja_i\bigg],
\end{aligned}
\tag{24}
\]
where we recall that $\tilde\delta_i(1+\alpha\Delta_i) = -\big(\alpha + (1+\alpha\Delta_i)\partial_{\Delta_i}\log g(\Delta_i)\big)$. When $1+\alpha\Delta_i = e^Z$ with $Z\sim N(m,\varrho^2)$, one has (see Remark 2.4)
\[
\tilde\delta_i(1+\alpha\Delta_i) = -\frac{m + (\alpha-1)\varrho^2 - \log(1+\alpha\Delta_i)}{\varrho^2}, \quad i=1,\dots,n. \tag{25}
\]
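Formula (25) can be cross-checked against the generic expression $\tilde\delta_i(1+\alpha\Delta_i)=-\big(\alpha+(1+\alpha\Delta_i)\partial_{\Delta_i}\log g(\Delta_i)\big)$. The sketch below (our own illustration) does so numerically at $\alpha=1$, the value used in the experiments, with $1+\Delta$ lognormal as in Section 4 and $\partial_\Delta\log g$ computed by central differences.

```python
import math

# Check (25) at alpha = 1, where 1 + Delta = exp(m + rho*N), N ~ N(0,1).
m, rho, alpha = 0.0, 0.05, 1.0

def log_g(d):
    # log-density of Delta when 1 + Delta is lognormal(m, rho^2)
    z = math.log(1.0 + d)
    return -(z - m) ** 2 / (2.0 * rho ** 2) - z - math.log(rho * math.sqrt(2.0 * math.pi))

def skorohod_closed(d):
    # the closed form (25)
    return -(m + (alpha - 1.0) * rho ** 2 - math.log(1.0 + alpha * d)) / rho ** 2

def skorohod_generic(d, h=1e-6):
    # -(alpha + (1 + alpha*d) * d/dDelta log g(Delta)), derivative by central difference
    dlg = (log_g(d + h) - log_g(d - h)) / (2.0 * h)
    return -(alpha + (1.0 + alpha * d) * dlg)
```

The two expressions agree to high accuracy over the typical range of the amplitudes ($\varrho=0.05$, so $\Delta$ is concentrated near 0).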

Joint Gaussian and jump amplitudes direction

Following Section 3.4, we firstly have
\[
w_{0,u} = -\frac{1}{n\sigma x}\cdot\frac{\sum_{j=1}^nY_{T_j}}{\int_0^TY_ra_r\,dr}\,a_u \quad\text{and}\quad w_i = \frac{1+\alpha\Delta_i}{\alpha nx}, \quad 1\le i\le n,
\]
where $a\in\mathcal{A}^W_T$. Using (21) and Property 2.14, we obtain
\[
\begin{aligned}
\tilde\delta(w) = \frac{1}{n\sigma x}\bigg[&-\frac{\sum_{j=1}^nY_{T_j}}{\int_0^TY_ra_r\,dr}\int_0^Ta_u\,dW_u + \frac{\sigma}{\int_0^TY_ra_r\,dr}\int_0^Ta_u\sum_{j=1}^n(Y_{T_j}-Y_u)1_{T_j>u}\,du \\
&- \frac{\sigma\sum_{j=1}^nY_{T_j}}{\big(\int_0^TY_ra_r\,dr\big)^2}\int_0^Ta_u\Big(\int_u^Ta_r(Y_r-Y_u)\,dr\Big)du\bigg] - \frac{1}{\alpha nx}\sum_{i=1}^n\big(\alpha+(1+\alpha\Delta_i)\partial_{\Delta_i}\log g(\Delta_i)\big).
\end{aligned}
\tag{26}
\]
The second possible weight, as discussed in Section 3.4, comes from
\[
w_{0,u} = \frac{1}{\sigma Tx} \quad\text{and}\quad w_i = -\frac{\int_0^TY_u\,du}{\alpha Tx\sum_{j=1}^nY_{T_j}a_j}\,a_i(1+\alpha\Delta_i), \quad 1\le i\le n,
\]
which gives
\[
\begin{aligned}
\tilde\delta(w) = \frac{W_T}{\sigma Tx} + \frac{1}{\alpha Tx}\sum_{i=1}^n\bigg[&\frac{\int_0^TY_u\,du}{\sum_{j=1}^nY_{T_j}a_j}\big(\alpha+(1+\alpha\Delta_i)\partial_{\Delta_i}\log g(\Delta_i)\big) \\
&+ \alpha\Big(\frac{\int_{T_i}^T(Y_u-Y_{T_i})\,du}{\sum_{j=1}^nY_{T_j}a_j} - \frac{\sum_{j=i+1}^n(Y_{T_j}-Y_{T_i})\,a_j}{\big(\sum_{j=1}^nY_{T_j}a_j\big)^2}\int_0^TY_u\,du\Big)\bigg]a_i.
\end{aligned}
\tag{27}
\]

Numerical experiments

We set $\alpha=1$, $b=0.1$ and $\sigma=0.2$. We first consider the case of a floating call or put option. Figure 1 shows a comparison among different Malliavin weights and the finite difference method. One can notice that the use of both the Gaussian and the jump amplitudes noise gives results generally better than the ones obtained by using the noises separately. But the use of the localization function sensibly improves the results and also unifies them w.r.t. the different choices of the noise (see Figure 2). Figure 3 gives an example of the rate of improvement. Moreover, Table 1 shows that the results with localization are also very consistent with the "exact Monte Carlo Delta", which estimates the Delta through the Monte Carlo price divided by the initial value $x$. In fact, in the Merton model, due to the special form of the payoff, it is easy to see that Delta $=$ price$/x$.

MC sim. exact MC DF JA W JA and W

20000 0.862895 0.863427 0.862905 0.863654 0.864142 40000 0.839891 0.840280 0.840981 0.840961 0.840279 60000 0.858012 0.857856 0.857685 0.856851 0.856760 80000 0.866198 0.866429 0.866416 0.866496 0.866236 100000 0.846873 0.846675 0.845960 0.847124 0.847096 120000 0.848036 0.848036 0.847720 0.848024 0.848097

Table 1: Comparison of the localized Malliavin Delta with the finite difference method and the exact Monte Carlo.
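The identity Delta $=$ price$/x$ used for the "exact Monte Carlo Delta" follows from the payoff being positively homogeneous of degree one in $x$ in the Merton model; with common random numbers, a central finite difference reproduces it pathwise, up to rounding. The sketch below (our own illustration, on a 100-step discretization of the floating Asian call with the paper's parameters) makes this explicit.

```python
import math
import random

random.seed(7)

# Merton model parameters of Section 4; 1 + Delta = exp(m + rho*N).
b, sigma, lam, m, rho, alpha = 0.1, 0.2, 5.0, 0.0, 0.05, 1.0
T, nsteps = 5.0, 100
dt = T / nsteps

def poisson(mu):
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def draws():
    # one path's random input: (gaussian increment, multiplicative jump factor) per step
    out = []
    for _ in range(nsteps):
        jf = 1.0
        for _ in range(poisson(lam * dt)):
            jf *= 1.0 + alpha * (math.exp(m + rho * random.gauss(0.0, 1.0)) - 1.0)
        out.append((random.gauss(0.0, 1.0), jf))
    return out

def payoff(x0, d):
    # floating Asian call (X_T - Y_T/T)_+ along one discretized path:
    # every X_t scales linearly with x0, hence so does the payoff
    X, Y = x0, 0.0
    for z, jf in d:
        Y += X * dt
        X *= math.exp((b - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * z) * jf
    return max(X - Y / T, 0.0)

n_mc, h = 5000, 0.01
price = fd = 0.0
for _ in range(n_mc):
    d = draws()                       # common random numbers for all three evaluations
    price += payoff(100.0, d)
    fd += (payoff(100.0 * (1 + h), d) - payoff(100.0 * (1 - h), d)) / (200.0 * h)
price /= n_mc
fd /= n_mc
```

Because of homogeneity, the common-random-number finite difference agrees with price$/x$ to machine precision, independently of the Monte Carlo sample size.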

The choice λ = 5 and T = 5 assures a (quite) large number of jumps (in mean 25). We then tested what happens for small values of λ and we found that the weight obtained through the lonely jump noise does give poor results (see also Bavouzet-Morel and Messaoud [4]). In fact, there is not sufficient noise from the jump amplitudes and the Gaussian direction has to be introduced to run a good Malliavin method. Figure 4 tests λ = 0.5 and shows this loss of accuracy. Let us also add that the desired non degeneracy for the generalized Malliavin covariance matrix is not guaranteed (see Remark 3.8). The use of the localization along the jump amplitudes direction does not give significant improvements (see Figure 5), while the results are really good if the Brownian direction is taken into account (see Figure 6). Finally, the weight from Benhamou (see Remark 3.6) is tested and compared with ours, in the pure diffusion case (α = 0). The results are good and comparable (Figure 7) and practically overlap if the localization is considered (Figure 8). The only difference is in that our weight is simpler to implement. We then tested digital options, a case in which Malliavin methods usually work better than finite differences, even without any localizations. Figures 9 and 10 refer to a

27 standard digital option and an Asian digital one respectively, with K = 100. The introduction of the localization gives further improvements, as shown in Figure 11. Finally, Figure 12 does not include the results from the finite difference for the sake of a comparison of the different Malliavin weights.

4.2 A jump diffusion model

We consider here a stochastic volatility model, in which the underlying process Xt is a Black-Scholes-Merton model and the volatility is an Ornstein-Uhlenbeck one, written on a correlated Brownian motion. For a fixed correlation constant ρ ∈ (−1, 1), we set ∫ ∫ t t (√ ) ∑Jt − 2 1 2 Xt = x + µXrdr + σrXr 1 ρ dWr + ρ dWr + α ∆iXTi− ∫ 0 0 ∫ i=1 t t − 2 (28) σt = y + k(θ σr)dr + βdWr ∫ 0 0 t Yt = Xrdr 0 where µ, k, θ, β, α ∈ R and W = (W 1,W 2) is a standard 2-dimensional Brownian motion. Notice that both X and σ can be explicitly written: ( ∫ ∫ ) J t 1 t √ ∏t − 2 − 2 1 2 Xt = x exp (µ σr )dr + σr( 1 ρ dWr + ρdWr ) (1 + α∆i) 0 2 0 ∫ ∫ i=1 t t −k(t−r) −k(t−r) 2 σt = y + kθ e dr + β e dWr 0 0 Again, we are interested in computing the sensitivity of an option price written on the pair Z = (X,Y ) with respect to x, the initial value of the process X. We will follow a Malliavin differential calculus w.r.t. W 1 and the jump noises. Now, since σ satisfies an autonomous stochastic differential equation driven by W 2 only, it is independent on the noise from W 1 and from the jump part. Therefore, we can use arguments similar to the ones developed in Section 4.1 for the Black-Scholes-Merton model. And again, the Gaussian (from W 1) or/and the jump amplitudes noise will be taken into account. Setting from now on (unless specified) D˜0,u as the Gaussian Malliavin derivative w.r.t. W 1, the following rules hold: X Y ξ = t , β = t t x t x 2 2 D˜0,uXt = χ σuXt1u≤t, D˜0,uYt = χ σu(Yt − Yu)1u≤t X Y − Y X ˜ t ˜ t Ti Ti DiXt = 1Ti≤t, DiYt = 1Ti≤t, ∂aci = αXTi− = α 1 + α∆i 1 + α∆i 1 + α∆i √ (29) where χ2 = 1 − ρ2 Let the maturity T and the number n ≥ 1 of jumps up to time T be fixed.

Gaussian direction ∈ A W The process w0 can be found as in the Black-Scholes-Merton case: for a T , ∫ ( T ) 1 0 Yrdr w0,u = √ 1 − ∫ · au , u ∈ [0,T ] − 2 T σuxT 1 ρ 0 Yrardr

28 and wi = 0 for i = 1,..., 2n. Following the same steps of Section 4.1, we obtain ∫ [ ∫ T ∫ 1 T 1 Y dr T a ˜ √ 1 − ∫ 0 r u 1 δ(w) = dWu T dWu + xT 1 − ρ2 0 σu Y a dr 0 σu √ ∫ 0 ∫ r r 1 − ρ2 T T ∫ − + T au (Yr Yu)drdu+ (30) Yrardr 0 u √0 ∫ ∫ ∫ − 2 T T T ] 1 ρ 0 Yrdr − ∫ au ar(Yr − Yu) drdu T 2 ( 0 Yrardr) 0 u

Jump amplitudes direction ∈ A J Again similarly to Section 4.1, for a n one has ∑ ( n Y ) 1 − ∑ j=1 Tj wi = 1 n ai (1 + α∆i), i = 1, . . . , n αnx j=1 YTj aj and wi = 0 otherwise. Finally, we obtain

∑ [ ( n )∑n 1 YT δ˜(w) = − 1 − ∑ j=1 j a ˜(α + (1 + α∆ )∂ log g(∆ ))+ αn x n Y a i i ∆i i j=1 Tj j i=1 ∑n ∑n ∑ 1 − + n (YTj YTi )+ (31) j=1 YTj aj ∑ 1=i j=i+1 n ∑n ∑n ] YT − ∑ j=1 j a (Y − Y )a ( n Y a )2 i Tj Ti j j=1 Tj j i=1 j=i+1

Joint Gaussian and jump amplitudes direction

Following Section 4.1, we have two possible choices for w. In the first case we have ∑ n 1 YT a 1 − √ · ∫ j=1 j · u · ≤ ≤ w0,u = T and wi = (1 + α∆i), 1 i n − 2 σu αnx nx 1 ρ 0 Yrardr

∈ A W where a T , giving

∑ ∫ [ n T 1 YT a δ˜(w) = √ − ∫ j=1 j u dW 1+ 2 T u nx 1 − ρ Yrardr 0 σu √0 ∫ 1 − ρ2 T ∑n ∫ − + T au (YTj Yu) 1Tj >u du+ 0 Yrardr 0 j=1 √ ∑ ∫ ∫ (32) 2 n T T ] 1 − ρ YT − ( ∫ j=1) j · − T 2 au ar(Yr Yu)dr du + 0 u 0 Yrardr 1 ∑n + (α + (1 + α∆ )∂ log g (∆ )) αnx i ∆i i i=1

29 ∈ A J In the second case we have, for a n , ∫ T 1 0 Yudu w0,u = √ and wi = − ∑ ai(1 + α∆i), 1 ≤ i ≤ n 2 n T x 1 − ρ σu αT x j=1 YTj aj so that ∫ T ˜ √ 1 1 1 δ(w) = dWu + 1 − ρ2T x 0 σu ∫ [ T ∑n Y du 1 ∑ 0 u − (α + (1 + α∆i)∂∆ log g(∆i))+ (33) αT x n Y a i i=1 j=1 Tj j ∫ ∑ ∫ T n T ] (Yu − YT )du (YT − YT )aj − T∑i i j=∑i+1 j i n + n 2 Yudu ai j=1 YTj aj ( j=1 YTj aj) 0

Numerical experiments

We set σ = 0.5, r = 0.1, k = 0.6, θ = 0.6, β = 0.05, α = 1. and ρ = 0.5. Figure 13, referring to a floating Asian , shows that the the Malliavin methods, in all directions (as for the Gaussian one, w.r.t. the Brownian motion W 1) give results comparable with the finite difference method. But if a digital option is taken into account, as in Figure 14 and 15, the Malliavin weights are really useful in practice.

4.3 Ornstein-Uhlenbeck model with jumps

Here we consider ∫ ∫ t t ∑Jt Xt = x − b · (Xr − θ)dr + σdWr + α∆i (34) 0 0 i=1

−bt i.e. the (affine) jump∫ version of the∑ Ornstein Uhlembeck process. Here, Xt = xe + − − t − − − bt bt br Jt b(t Ti) θ(1 e ) + σe 0 e dWr + α i=1 e ∆i. This closed form solution shows such a model as a special case in which the jump time direction can be considered for any σ. The following rules will be considered:

1 − e−bt ξ = e−bt, β = t t b −b(t−u) σ −b(t−u) D˜ X = σe 1 ≤ , D˜ Y = (1 − e )1 ≤ 0,u t u t 0,u t b u t −b(t−T ) α −b(t−T ) D˜ X = α e i 1 ≤ , D˜ Y = (1 − e i )1 ≤ , ∂ c = α i t Ti t i t b Ti t a i − − − − ˜ b(t Ti) ˜ − b(t Ti) Di+nXt = αb e ∆i1Ti≤t, Di+nYt = α∆i (1 e )1Ti≤t, qi = αb∆i (35) Concerning the border term operator, it can be useful to notice that for any continuous ± function f, the quantities fi , i = 1, . . . , n, in (19) are given by ( ) ± − α − − f = f X + αe bT (ebTi±1 − ebTi )∆ ,Y − (e bT − e bTi )(ebTi±1 − ebTi−1 )∆ i T i T b i

30 Gaussian direction ∈ A W Referring to Section 3.1, by (35) one has, for some a T , ( ) e−bu 1 − T b − e−bT w0,u = 1 − ∫ au σT T −br b 0 e ardr The associated Skorohod integral agrees with the Ito one and the final weight is ∫ ∫ ( T − − −bT T ) 1 −bu 1 T b e −bu δ˜(w) ≡ δ˜0(w0) = e dWu − ∫ e audWu (36) σT T −br 0 b 0 e ardr 0

Jump amplitudes direction ∈ A J From Section 3.2 and (35), for some a n , we have for i = 1, . . . , n, ∑ − −bT ( − n bTj ) e i n j=1 e wi = 1 + ∑ ai n −bTj αn j=1 e aj and wi = 0 otherwise, so that w is independent of the jump amplitudes and the final weight is easy to derive: ∑ − ∑n ( − n bTj ) 1 − n j=1 e ˜ − bTi ∑ δ(w) = e 1 + n − ai ∂∆i log g(∆i) (37) αn e bTj a i=1 j=1 j

Jump times direction ∈ A J From Section 3.3 and (35, we obtain, for a n , ∑ − −bT ( n bTj − ) e i j=1 e n wi+n = 1 − ∑ ai n −bTj bαn∆i j=1 e aj ≡ ˜ − for i = 1, . . . , n, and wi 0 otherwise. Since δi+n(wi+n) = ∂Ti wi+n, by using Property 2.6 we have ∑ n − ( n −bT ∑ bTi e j ˜ 1 e − ∑ j=1 δ(w) = 1 n −bT ai+ αn ∆i e j aj i=1 j=1 ∑ n − ) 1 e bTj −bTi j=1 2 −bTi −∑ aie + ∑ a e n −bTj n −bTj 2 i j=1 e aj ( j=1 e aj) (38) Concerning the border term operator, it is given in (20) where here ∑ − ( n −bT −bT ± −bT ) bTi±1 e j + e i 1 − e i − n ± e ai − ∑ j=1 w = 1 n − . (39) i bTj −bTi±1 −bTi±1 − −bTi α b n ∆i j=1 e aj + e ai + (e e )ai

Joint Gaussian and jump amplitudes direction ∈ A W Following Section 3.4, the first process w here becomes, for some a T , ∑ n − 1 e bTj − n 1 j=1 −bu −bTi w0,u = · ∫ · e au and wi = · e , 1 ≤ i ≤ n nσ T −br nα 0 e ardr

31 and wi = 0 otherwise. The final weight is immediate to write down: ∑ − ∫ n bTj − T ∑n e n − 1 − ˜ j=1 bu bTi δ(w) = ∫ · e audWu − · e ∂∆ log g(∆i). (40) T −br nα i nσ 0 e ardr 0 i=1 As for the second case, we have

1 1 bT + e−bT − 1 −bu −bTi w0,u = e and wi = · ∑ · e ai, 1 ≤ i ≤ n n −bTj T σ T σb j=1 e aj ∈ A J for some a n , with wi = 0 otherwise, and we obtain ∫ T −bT ∑n 1 − 1 bT + e − 1 − ˜ bu − ∑ · bTi δ(w) = e dWu n − e ai∂∆i log g(∆i). (41) T σ T σb e bTj a 0 j=1 j i=1

Joint jump amplitudes and times direction As already remarked, because of the special form of the Ornstien-Uhlembeck process, ̸ ∈ A J we can assume σ = 0. By using (35), Section 3.5 gives firstly, for a n , ∑ n −bT − 1 e j − n 1 e bTi j=1 −bTi wi = ∑ · e ai, wi+n = n −bTj αn j=1 e aj nαb ∆i as i = 1, . . . , n, and w0 ≡ 0. Then, ∑ − − n bTj ∑n ∑n −bT n j=1 e − 1 be i ˜ ∑ bTi δ(w) = n − e ai∂∆i log g(∆i) + . (42) nα e bTj a nαb ∆ j=1 j i=1 i=1 i and in the border border term (20) one has

−bT ± ± 1 e i 1 wi+n = (43) αbn ∆i ∈ A J Concerning the second w as in Section 3.5, for a fixed a n , one has for i = 1, . . . , n, ∑ n −bT − 1 1 n − e j e bTi a −bTi j=1 i wi = e , wi+n = ∑ · . n −bTj nα nαb j=1 e aj ∆i and w0 ≡ 0. Its Skorohod integral is then given by − ∑n e bTi δ˜(w) = − ∂ log g(∆ )+ nα ∆i i i=1 ∑ n − ( n −bT − ∑ bTi n − e j bTi e ai ∑ j=1 − ∑ e + n −bT n −bT + (44) nα∆i e j aj e j aj i=1 j=1 ∑ j=1 − − n bTj ) n j=1 e − − ∑ e bTi n −bTj 2 ( j=1 e aj) and in the border border term (20) one has ∑ n −bT −bT −bT ± − n − e j + e i − e i 1 bTi±1 ± 1 ∑ j=1 · e ai w = n − (45) i bTj −bTi − −bTi±1 nαb j=1 e aj + (e e )ai ∆i

32 Joint Gaussian and jump times direction

We introduce here the process following process v:

−bu −bTi 1 −1 e au 2 −1 e v0,u = ξuσ (Xu)au = , vi+n = ξTi (qi) = i = 1, . . . , n σ αb∆i

1 2 and vi = vj = 0 otherwise. Here one has  ∫  1 T −br 0 e ar dr  ( ∑b 0 )  γDZ,v˜ = − − − bT 1 n bTj − bT ne b j=1 e ne so that ∑ n −bT − n − e j e bTi ∫ j=1 −bu w0,u = T e au, wi+n = i = 1, . . . , n −br αbn∆i nσ 0 e ardr and wi = 0 otherwise. Then, ∑ ∫ ( n −bTj ) ∑n − n − e T 1 e bTi ˜ ∫ j=1 −bu δ(w) = T e audWu + −br αn ∆i nσ 0 e ardr 0 i=1

Since the components n+1, …, 2n of w are identical to those of the first case of the joint jump amplitudes and times direction, (43) has to be used in the border term part (20).

Joint Gaussian, jump amplitudes and times direction

We propose here the following choice:

\[
v^1_{0,u} = \xi_u\,\sigma^{-1}(X_u)\,a_u = \frac{e^{-bu}a_u}{\sigma},
\qquad
v^2_i = \tfrac12\,\xi_{T_i}(\partial_a c_i)^{-1} = \frac{e^{-bT_i}}{2\alpha},
\qquad
v^2_{i+n} = \tfrac12\,\xi_{T_i}\,q_i^{-1} = \frac{e^{-bT_i}}{2\alpha b\,\Delta_i},
\qquad i=1,\dots,n,
\]

and $v^1_i = v^2_j = 0$ otherwise. Here,
\[
\gamma_{\tilde DZ,v} = \begin{pmatrix} \int_0^T e^{-br}a_r\,dr & 0 \\[4pt] \frac{1}{2b}\Big(\sum_{j=1}^n e^{-bT_j}-n e^{-bT}\Big) & n e^{-bT}\end{pmatrix},
\]
so that we have
\[
w_{0,u} = \frac{n-\sum_{j=1}^n e^{-bT_j}}{n\sigma\int_0^T e^{-br}a_r\,dr}\,e^{-bu}a_u,
\qquad
w_i = \frac{e^{-bT_i}}{2\alpha n},
\qquad
w_{i+n} = \frac{e^{-bT_i}}{2\alpha b n\,\Delta_i},
\qquad i=1,\dots,n.
\]

Finally,
\[
\tilde\delta(w) = \frac{n-\sum_{j=1}^n e^{-bT_j}}{n\sigma\int_0^T e^{-br}a_r\,dr}\int_0^T e^{-bu}a_u\,dW_u
- \frac{1}{2n\alpha}\sum_{i=1}^n e^{-bT_i}\,\partial_{\Delta_i}\log g(\Delta_i)
+ \frac{1}{2\alpha n}\sum_{i=1}^n \frac{e^{-bT_i}}{\Delta_i}.
\]

Numerical experiments

We set r = 0.4, θ = 5, α = 10, σ = 5. We first tested floating Asian options. Figure 16 and Figure 17 report the outcomes for a single direction of noise and a joint one, respectively (to avoid confusion we have not inserted the benchmark value). High accuracy is achieved when the jump times are taken into account (and the border term operator is non null), a fact which differs from what is found in Bally, Bavouzet-Morel and Messaoud [3]. For these options (and the ones we are going to study), the numerical results using the jump time direction outperform the others, independently of the choice of the parameters. This could be explained as follows. In the Ornstein-Uhlenbeck model, the first variation process is simply ξ_t = e^{−bt}. Therefore, the Malliavin weight for the Delta depends only weakly on the Gaussian and jump amplitude noises, while the dependence on the jump times is really strong. In Figure 18 the results from the localization are reported and confirm the usual improvements. As for standard digital options, Figure 19 shows the comparison between the different directions of Malliavin calculus and the finite difference. The high variance of the outcomes from the finite difference method does not allow one to compare the different Malliavin estimators, which are clearer in Figure 20, where the finite difference results are removed. Again, Figure 21 shows that the (non localized) jump times direction performs better than the localized joint Gaussian and jump amplitudes direction. Figures 22 and 23 show the behavior of the joint directions and the localized weights.

We also tested the weights in the case of standard call options with σ = 0, in order to compare our results with those in Bally, Bavouzet-Morel and Messaoud [3]. Figure 24 shows that the use of the jump time direction gives results which are really consistent with the ones from the finite difference, and we know the latter are precise for call options. Also, the variance of the Malliavin weight from the jump times noise developed here is much smaller than the one coming from Bally, Bavouzet-Morel and Messaoud [3]. Table 2 gives such a comparison, as α varies.
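To make the σ = 0 test concrete, the following sketch estimates the delta of a standard call in the jump-amplitudes direction, conditionally on J_T = n jumps (given J_T = n, the jump times are ordered uniforms on [0, T]). It assumes mean-reverting dynamics of the form dX_t = (θ − bX_t)dt plus jumps of size α∆_i with ∆_i ∼ N(0, 1), for which the weight reduces to (1/(nα)) Σ_i e^{−bT_i}∆_i; the parameter values below are illustrative, not the paper's.

```python
import numpy as np

# Illustrative parameters (not the paper's): OU-with-jumps, sigma = 0, so
# X_T = x e^{-bT} + (theta/b)(1 - e^{-bT}) + alpha * sum_i Delta_i e^{-b(T - T_i)}.
b, theta, alpha, T, x0, K = 0.5, 5.0, 10.0, 1.0, 5.0, 5.0
n, M = 3, 400_000                  # work conditionally on J_T = n jumps
rng = np.random.default_rng(0)

Tj = np.sort(rng.uniform(0.0, T, size=(M, n)), axis=1)   # jump times | J_T = n
D = rng.standard_normal((M, n))                          # amplitudes ~ N(0, 1)

def X_T(x):
    return (x * np.exp(-b * T) + (theta / b) * (1.0 - np.exp(-b * T))
            + alpha * np.sum(D * np.exp(-b * (T - Tj)), axis=1))

payoff = lambda y: np.maximum(y - K, 0.0)

# Malliavin weight in the jump-amplitudes direction (Gaussian amplitudes):
weight = np.sum(np.exp(-b * Tj) * D, axis=1) / (n * alpha)
delta_malliavin = np.mean(payoff(X_T(x0)) * weight)

# Finite-difference benchmark with common random numbers
h = 1e-3
delta_fd = np.mean(payoff(X_T(x0 + h)) - payoff(X_T(x0 - h))) / (2.0 * h)
```

Both estimators target the same conditional delta e^{−bT} E[1_{X_T > K}], and on a run of this size they agree to the Monte Carlo accuracy.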

α         variance from ours   variance from BBM
15.8114   0.000180             0.028512
16.6667   0.000222             0.041721
17.6777   0.000277             0.040069
18.8982   0.000341             0.041013
20.4124   0.000422             0.043306
22.3607   0.000535             0.040048
25.0000   0.000685             0.040713
28.8675   0.000902             0.036272
35.3553   0.001224             0.034315
50.0000   0.001749             0.033329

Table 2: Comparison between the variance from our weight and from the weight by Bally, Bavouzet-Morel and Messaoud w.r.t. the jump times. Here, ∆i ∼ N(0, 1).

4.4 CIR model with jumps

We study here the model
\[
X_t = x + \int_0^t(\nu-\eta X_r)\,dr + \sigma\int_0^t\sqrt{X_r}\,dW_r + \alpha\sum_{i=1}^{J_t}(1+\Delta_i), \tag{46}
\]
that is, the (affine) jump version of the CIR process. Let us first recall some important facts concerning the pure diffusion version.

Proposition 4.2. Suppose here that X is a pure diffusion:
\[
X_t = x + \int_0^t(\nu-\eta X_r)\,dr + \sigma\int_0^t\sqrt{X_r}\,dW_r.
\]
Then the following statements hold.
(i) Set τ the first time at which the process X hits 0. If 2ν ≥ σ² > 0 then P(τ = +∞) = 1 and X_t ∈ L^p for any p.
(ii) If 2ν > pσ² then X_t^{−1} ∈ L^p and moreover
\[
\mathbb{E}\Big[\Big(\frac{1}{X_t}\Big)^{p}\Big] \le C_p\,\frac{e^{\eta t}}{\sqrt{x}\,L(t)^{p-1}},
\]
where C_p denotes a suitable positive constant and $L(t) = \frac{\sigma^2}{4\eta}\,(1-e^{-\eta t})$.
(iii) If 2ν > σ² > 0 then X_t ∈ D^{1,∞}.
(iv) If 4ν > 3pσ² > 0 then X_t ∈ D^{2,p}.

Proof. (i) is a well known result (see e.g. Lamberton and Lapeyre [16] and the references quoted therein). (ii) is proved in Lemma 4.1 of Alòs and Ewald [1]. As for (iii) and (iv), they can be easily proved as an immediate consequence of the results developed by Alòs and Ewald in [1]. □

As for the jump diffusion case, we always assume that

\[
2\nu \ge \sigma^2 > 0 \qquad\text{and}\qquad 1+\Delta_i > 0,
\]
implying that the solution to (46) never hits 0. We can refer to the first variation process ξ and its Gaussian Malliavin derivative $\tilde D_{0,u}\xi = \chi^u$ as the solution to
\[
\xi_t = 1 - \int_0^t \eta\,\xi_r\,dr + \int_0^t \frac{\sigma\xi_r}{2\sqrt{X_r}}\,dW_r,\qquad t\ge 0,
\]
\[
\chi^u_t = \frac{\sigma\xi_u}{2\sqrt{X_u}} - \int_0^t \eta\,\chi^u_r\,dr
+ \frac{\sigma}{2}\int_0^t\Big(\frac{\chi^u_r}{\sqrt{X_r}} - \frac{\xi_r\,\tilde D_{0,u}X_r}{2X_r^{3/2}}\Big)\,dW_r,\qquad t\ge u, \tag{47}
\]
respectively. As for the existence and uniqueness of the solution to Equation (47), see again Alòs and Ewald [1]. Now we can consider the different directions of the differential calculus, having fixed the set {J_T = n}.
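In practice ξ (and similarly χ) is obtained by discretizing (47) alongside X. A minimal Euler sketch for the pair (X, ξ) in the pure diffusion case might read as follows; the parameter values are illustrative (they satisfy 2ν ≥ σ² > 0), and the small floor under the square root is a numerical safeguard of the sketch, not part of the model.

```python
import numpy as np

# Joint Euler scheme for the pure-diffusion CIR process X and its first
# variation xi, following the xi-equation in (47).
nu, eta, sigma, T, x0 = 0.6, 0.6, 0.2, 1.0, 1.0
n_steps, M = 400, 20_000
dt = T / n_steps
rng = np.random.default_rng(2)

X = np.full(M, x0)
xi = np.ones(M)                        # first variation starts at 1
for _ in range(n_steps):
    dW = rng.standard_normal(M) * np.sqrt(dt)
    Xp = np.maximum(X, 1e-12)          # guard the square root near 0
    xi = xi - eta * xi * dt + sigma * xi / (2.0 * np.sqrt(Xp)) * dW
    X = X + (nu - eta * Xp) * dt + sigma * np.sqrt(Xp) * dW
```

Since the stochastic integral in the ξ-equation is a martingale, E[ξ_t] = e^{−ηt}, which gives a cheap consistency check for the discretization.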

Gaussian direction

Following the definition in Section 3.1 we have that
\[
w_{0,u} = \frac{\xi_u}{\sigma T\sqrt{X_u}}\Big(1 - \frac{\int_0^T\beta_r\,dr}{\int_0^T\beta_r a_r\,dr}\,a_u\Big),
\]
where $a\in\mathcal A_T^W$. For simplicity, we set
\[
\Upsilon_1 = \frac{\int_0^T\beta_r\,dr}{\int_0^T\beta_r a_r\,dr},
\]
so that by Property 2.14 the Skorohod integral is
\[
\tilde\delta_0(w_0) = \frac{1}{\sigma T}\Big\{\int_0^T\frac{\xi_u}{\sqrt{X_u}}\,dW_u
- \Upsilon_1\int_0^T\frac{\xi_u a_u}{\sqrt{X_u}}\,dW_u
+ \int_0^T\frac{\xi_u a_u}{\sqrt{X_u}}\cdot\tilde D_{0,u}\Upsilon_1\,du\Big\}.
\]
In view of numerical applications, one needs to handle $\tilde D_{0,u}\Upsilon_1$. Since $\beta_t = \int_0^t \xi_r\,dr$, the derivative $\tilde D_{0,u}\xi_t := \chi^u_t$ will appear, and we numerically approximate it through (47).

Jump amplitudes direction

As developed in Section 3.2, we have, for i = 1, ..., n,
\[
w_i = \frac{\xi_{T_i}}{n}\Big(1 - \frac{\sum_{j=1}^n\beta_{T_j}}{\sum_{j=1}^n\beta_{T_j}a_j}\,a_i\Big),
\]
where $a\in\mathcal A_n^J$. Now, setting
\[
\Upsilon_2 = \frac{\sum_{j=1}^n\beta_{T_j}}{\sum_{j=1}^n\beta_{T_j}a_j},
\]
by Property 2.6 we have
\[
\sum_{i=1}^n\tilde\delta_i(w_i) = \frac{1}{n}\Big\{\sum_{i=1}^n\tilde\delta_i(\xi_{T_i})
- \Upsilon_2\sum_{i=1}^n\tilde\delta_i(\xi_{T_i}a_i)
+ \sum_{i=1}^n\xi_{T_i}a_i\cdot\tilde D_i\Upsilon_2\Big\}.
\]

The Malliavin derivative of Υ2 will be treated in practice as in the previous Section, using the same properties of the calculus.

Joint Gaussian and jump amplitudes direction

Following Section 3.4, we have
\[
w_{0,u} = -\frac{\sum_{i=1}^n\beta_{T_i}}{n\int_0^T\beta_r a_r\,dr}\cdot\frac{\xi_u a_u}{\sigma\sqrt{X_u}}
\qquad\text{and}\qquad
w_i = \frac{\xi_{T_i}}{n},\quad i=1,\dots,n.
\]
Setting for simplicity
\[
\Upsilon_3 = \frac{\sum_{i=1}^n\beta_{T_i}}{n\int_0^T\beta_r a_r\,dr},
\]
we finally obtain
\[
\tilde\delta(w) = -\Big\{\Upsilon_3\int_0^T\frac{\xi_u a_u}{\sigma\sqrt{X_u}}\,dW_u
- \int_0^T\frac{\xi_u a_u}{\sigma\sqrt{X_u}}\cdot(\tilde D_{0,u}\Upsilon_3)\,du\Big\}
+ \frac{1}{n}\sum_{i=1}^n\tilde\delta_i(\xi_{T_i}).
\]

Numerical experiments

We set σ = 0.2, ν = 0.6, η = 0.6, α = 0.6. We recall that here no closed form formulas are available, so that all the processes have to be discretized. Figures 25 and 26 refer to floating Asian call options and do not report the benchmark value because it overlaps the finite difference values. They show that the use of the localization is crucial and gives significant results. In particular, the results from the Gaussian direction are really stable, maybe because the Malliavin weights do not depend explicitly on the jump amplitudes. As for Figure 28, some parameters are slightly different in order to get values not too close to zero (r = 0.1, ν = 0.1 and T = 1); the figure shows the efficiency of the Malliavin approach even without using the localization.
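Since no closed form is available, the jump diffusion (46) itself must be simulated by discretization. A minimal Euler sketch with full truncation could look as follows; the jump intensity λ, the Bernoulli approximation of the Poisson increments, and the amplitude law ∆_i ∼ U(−1/2, 1/2) (chosen so that 1 + ∆_i > 0) are illustrative assumptions of the sketch, not specifications from the paper.

```python
import numpy as np

# Euler scheme with full truncation for the jump-CIR dynamics (46).
nu, eta, sigma, alpha = 0.6, 0.6, 0.2, 0.6
lam, T, x0 = 1.0, 1.0, 1.0            # lam = assumed jump intensity
n_steps, M = 200, 50_000
dt = T / n_steps
rng = np.random.default_rng(1)

X = np.full(M, x0)
for _ in range(n_steps):
    Xp = np.maximum(X, 0.0)           # full truncation keeps the sqrt real
    dW = rng.standard_normal(M) * np.sqrt(dt)
    X = X + (nu - eta * Xp) * dt + sigma * np.sqrt(Xp) * dW
    # Bernoulli approximation of the compound Poisson increment on [t, t+dt]:
    jumps = rng.random(M) < lam * dt
    X = X + jumps * alpha * (1.0 + rng.uniform(-0.5, 0.5, size=M))
```

A quick consistency check: with these (assumed) parameters, E[X_T] = x e^{−ηT} + ((ν + λα)/η)(1 − e^{−ηT}) ≈ 1.45, which the scheme reproduces up to discretization and Monte Carlo error.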

5 Figures

[Plot: Floating Asian call; curves: W Direction, A Direction, AW Joint Direction, Finite Difference, Benchmark; x-axis: MC Simulations.]

Figure 1: Black-Scholes-Merton model

[Plot: Floating Asian call; curves: loc. W Direction, loc. A Direction, loc. AW Joint Direction, Finite Difference, Benchmark; x-axis: MC Simulations.]

Figure 2: Black-Scholes-Merton model with localization

[Plot: Floating Asian call; curves: loc. AW Joint Direction, AW Joint Direction, Benchmark; x-axis: MC Simulations.]

Figure 3: Black-Scholes-Merton model. Comparison between localized and non localized weights

[Plot: Floating Asian call; curves: W Direction, A Direction, AW Joint Direction, Finite Difference, Benchmark; x-axis: MC Simulations.]

Figure 4: Black-Scholes-Merton model with a small value for λ (λ = 0.5).

[Plot: Floating Asian call; curves: loc. A Direction, A Direction, Benchmark; x-axis: MC Simulations.]

Figure 5: Black-Scholes-Merton model with localization and a small value for λ (λ = 0.5).

[Plot: Floating Asian call; curves: loc. W Direction, loc. AW Joint Direction, Finite Difference, Benchmark; x-axis: MC Simulations.]

Figure 6: Black-Scholes-Merton model with localization and a small value for λ (λ = 0.5). All weights except for the jump amplitudes one alone.

[Plot: curves: W direction, Benhamou W direction, Finite difference; x-axis: MC simulations.]

Figure 7: Black-Scholes model. Floating option.

[Plot: curves: W direction, Benhamou W direction, Finite difference, Benchmark; x-axis: MC simulations.]

Figure 8: Black-Scholes model. Floating option with localization.

[Plot: Standard Digital; curves: W Direction, A Direction, AW Direction, Finite Difference, Benchmark; x-axis: MC Simulations.]

Figure 9: Black-Scholes-Merton model.

[Plot: Asian Digital; curves: W Direction, A Direction, AW Direction, Finite Difference, Benchmark; x-axis: MC Simulations.]

Figure 10: Black-Scholes-Merton model.

[Plot: Digital; curves: loc. W Direction, loc. A Direction, loc. AW Joint Direction, Finite Difference, Benchmark; x-axis: MC Simulations.]

Figure 11: Black-Scholes-Merton model with localization.

[Plot: Digital; curves: loc. W Direction, loc. A Direction, loc. AW Joint Direction, Benchmark; x-axis: MC Simulations.]

Figure 12: Black-Scholes-Merton model with localization, without the finite difference method.

[Plot: Floating Asian Call; curves: W Direction, A Direction, Joint AW Direction, Finite Difference, Benchmark; x-axis: MC Simulations.]

Figure 13: SVM model. The "W" direction is w.r.t. W^1.

[Plot: Standard Digital; curves: W Direction, A Direction, Joint AW Direction, Finite Difference, Benchmark; x-axis: MC Simulations.]

Figure 14: SVM model.

[Plot: Asian Digital; curves: W Direction, A Direction, Joint AW Direction, Finite Difference, Benchmark; x-axis: MC Simulations.]

Figure 15: SVM model.

[Plot: curves: W direction, A direction, T direction, Finite difference; x-axis: MC simulations.]

Figure 16: OU model. Floating Asian option.

[Plot: curves: Joint AW direction, Joint WT direction, Joint AT direction, Joint AWT direction, Finite difference; x-axis: MC simulations.]

Figure 17: OU model. Floating Asian option.

[Plot: curves: Loc. W direction, Loc. Joint AW direction, Loc. Joint AWT direction, Finite difference, Benchmark; x-axis: MC simulations.]

Figure 18: OU model. Floating Asian option.

[Plot: curves: W direction, A direction, T direction, Finite difference, Benchmark; x-axis: MC simulations.]

Figure 19: OU model. Standard digital option.

[Plot: curves: W direction, A direction, T direction, Benchmark; x-axis: MC simulations.]

Figure 20: OU model. Standard digital option.

[Plot: curves: T direction, Loc. Joint AW direction, Benchmark; x-axis: MC simulations.]

Figure 21: OU model. Standard digital option.

[Plot: curves: Joint AW direction, Joint WT direction, Joint AT direction, Joint AWT direction, Finite difference, Benchmark; x-axis: MC simulations.]

Figure 22: OU model. Standard digital option.

[Plot: curves: Loc. W direction, Loc. Joint AW direction, Loc. Joint AWT direction, Finite difference, Benchmark; x-axis: MC simulations.]

Figure 23: OU model. Digital option.

[Plot: Standard Call; curves: T Direction, Finite Difference, Benchmark; x-axis: MC Simulations.]

Figure 24: OU model. Standard call option

[Plot: Floating Asian Call; curves: W Direction, A Direction, Joint AW Direction, Finite Difference; x-axis: MC Simulations.]

Figure 25: CIR model. Floating Asian call option

[Plot: Floating Asian Call; curves: Loc. W Direction, Loc. A Direction, Loc. Joint AW Direction, Finite Difference; x-axis: MC Simulations.]

Figure 26: CIR model. Floating Asian call option

[Plot: Floating Asian Call; curves: W Direction, Loc. W Direction, Finite Difference; x-axis: MC Simulations.]

Figure 27: CIR model. Floating Asian call option

[Plot: Digital Option; curves: W direction, A direction, Joint AW direction, Finite difference, Benchmark; x-axis: MC simulations.]

Figure 28: CIR model. Digital option.

References

[1] E. Alòs, C.O. Ewald. Malliavin differentiability of the Heston volatility and applications to option pricing. Advances in Applied Probability, 40, 144-162, 2008.

[2] V. Bally. Introduction to Malliavin Calculus. Lecture Notes, available at http://perso-math.univ-mlv.fr/users/bally.vlad/, 2007.

[3] V. Bally, M.P. Bavouzet-Morel, M. Messaoud. Integration by parts formula for locally smooth laws and applications to sensitivity computations. Annals of Applied Probability, 17, 33-66, 2007.

[4] M.P. Bavouzet-Morel, M. Messaoud. Computation of Greeks using Malliavin's calculus in jump type market models. Electronic Journal of Probability, 11, 276-300, 2006.

[5] E. Benhamou. An application of Malliavin calculus to continuous time Asian option Greeks. Available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=265284, 2000.

[6] E. Benhamou. Optimal Malliavin weighting function for the computation of the Greeks. Mathematical Finance, 13, 37-53, 2003.

[7] J. Bertoin. Lévy processes. Cambridge University Press, 1996.

[8] K. Bichteler, J.-B. Gravereaux, J. Jacod. Malliavin calculus for processes with jumps. Gordon and Breach Science Publishers, New York, 1987.

[9] N. Chen, P. Glasserman. Malliavin Greeks without Malliavin calculus. Stochastic Processes and their Applications, 117, 1689-1723, 2007.

[10] M.H.A. Davis, M. Johansson. Malliavin Monte Carlo Greeks for jump diffusions. Stochastic Processes and their Applications, 116, 101-129, 2006.

[11] Y. El-Khatib, N. Privault. Computations of Greeks in a market with jumps via the Malliavin calculus. Finance and Stochastics, 8, 161-179, 2004.

[12] B. Forster, E. Lütkebohmert, J. Teichmann. Calculation of Greeks for jump diffusions. Preprint arXiv:math.PR/0509016, 2005.

[13] B. Forster, E. Lütkebohmert, J. Teichmann. Absolutely continuous laws of jump diffusions in finite and infinite dimensions with applications to mathematical finance. SIAM Journal on Mathematical Analysis, 40, 2132-2153, 2009.

[14] E. Fournié, J.M. Lasry, J. Lebuchoux, P.L. Lions, N. Touzi. Applications of Malliavin calculus to Monte Carlo methods in finance. Finance and Stochastics, 3, 391-412, 1999.

[15] E. Fournié, J.M. Lasry, J. Lebuchoux, P.L. Lions. Applications of Malliavin calculus to Monte Carlo methods in finance. II. Finance and Stochastics, 5, 201-236, 2001.

[16] D. Lamberton, B. Lapeyre. Introduction to stochastic calculus applied to finance. Chapman & Hall, London, 1996.

[17] P. Malliavin, A. Thalmaier. Stochastic calculus of variations in mathematical finance. Springer-Verlag, Berlin, 2006.

[18] D. Nualart. The Malliavin Calculus and Related Topics. Springer-Verlag, 1995.

[19] N. Privault, X. Wei. A Malliavin calculus approach to sensitivity analysis in insurance. Insurance: Mathematics and Economics, 35, 679-690, 2004.

[20] N. Privault, X. Wei. Integration by parts for point processes and Monte Carlo estimation. Journal of Applied Probability, 44 , 806-823, 2007.

[21] J. Vives, J.A. León, F. Utzet, J.L. Solé. On Lévy processes, Malliavin calculus and market models with jumps. Finance and Stochastics, 6, 197-225, 2002.
