4.3 Arbitrage
4.4 Malliavin Differentiability of the CEV-Type Heston Model (Logarithmic Price)
4.5 Malliavin Differentiability of the CEV-Type Heston Model (Actual Price)
4.6 Delta and Rho
5 Conclusion
1 Introduction
Malliavin calculus is an infinite-dimensional differential calculus on the Wiener space, originally developed to give a probabilistic proof of Hörmander's theorem. It has since become a tool in mathematical finance. In 1999, Fournié et al. [1] gave a new method for the efficient computation of Greeks, the sensitivities of a derivative price to changes in the parameters of the model under consideration, by using the integration by parts formula of Malliavin calculus. Following their work, more general and efficient applications to the computation of Greeks have been introduced by many authors (see [2], [3], [4]). These methods are most often applied to tractable models, typified by the Black-Scholes model.
In the Black-Scholes model, the underlying asset S_t is assumed to follow the stochastic differential equation dS_t = rS_t dt + σS_t dW_t, where r and σ denote the risk-free interest rate and the volatility, respectively. The Black-Scholes model is the standard in business: it admits analytic solutions for well-known options, so prices of derivatives and risk parameters (Greeks) can be computed quickly, which makes it easy to evaluate large numbers of deals and whole portfolios and to manage risk. However, the model has a defect: it assumes that the volatility is constant. In the actual financial market, volatility is observed to fluctuate, and since the Black-Scholes model does not allow for this fluctuation, option prices computed with it tend to be underestimated.

Hence, more accurate models have been developed. One of them is the stochastic volatility model; one merit of such models is that, even if prices of derivatives such as European options are not given for every strike and maturity, we can still capture the volatility term structure. In particular, the Heston model, introduced in [5], is one of the most popular stochastic volatility models. It assumes that the underlying asset S_t and the volatility ν_t follow the stochastic differential equations

dS_t = S_t ( r dt + √ν_t dB_t ),   (1.1)
dν_t = κ(µ − ν_t) dt + θ ν_t^{1/2} dW_t,   (1.2)

where B_t and W_t denote correlated Brownian motions. In equation (1.2), κ, µ and θ denote, respectively, the rate of mean reversion (percentage drift), the long-run mean (equilibrium level) and the volatility of volatility. This volatility model is called the Cox-Ingersoll-Ross model; it is more complicated than the Black-Scholes model, and an analytic solution has not been obtained. However, even this model cannot capture the fluctuation of volatility accurately. In 2006 (see [6]), Andersen
and Piterbarg generalized the Heston model. They extended the volatility process of (1.2) to
dν_t = κ(µ − ν_t) dt + θ ν_t^γ dW_t,   γ ∈ [1/2, 1].   (1.3)
This model is called the constant elasticity of variance model (we will often shorten it to the CEV model). Naturally, in the case γ ∈ (1/2, 1), the volatility model (1.3) is more complicated than the volatility model (1.2).

Here, consider the European call option and let φ be a payoff function. Then we can compute the option price by the formula V(x) = E[ e^{−rT} φ(S_T) ]. Beyond the price itself, the computation of Greeks is very important in risk management. A Greek is given by ∂V(x)/∂α, where α is one of the parameters needed to compute the price, such as the initial price, the risk-free interest rate, the volatility, the maturity, etc. Most financial institutions calculate Greeks by finite-difference methods, but these have drawbacks: for instance, the results depend on the approximation parameters. Above all, these methods require the payoff function φ to be differentiable, whereas in business one often considers payoff functions such as φ(x) = (x − K)^+ or φ(x) = 1_{x≥K}. Here we need Malliavin calculus. In 1999, Fournié et al. [1] gave new methods for Greeks: in short, they calculated Greeks by the formula ∂V(x)/∂α = E[ e^{−rT} φ(S_T) · (weight) ]. This formula holds even when φ is merely of polynomial growth; in exchange, we need the Malliavin differentiability of S_t.
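As an illustration of the finite-difference approach mentioned above, the following sketch estimates the Black-Scholes delta by a central difference; the closed-form call price stands in for a generic pricing routine, and all parameter values are illustrative assumptions, not taken from the text.

```python
import math

# Bump-and-revalue (central finite difference) delta in the Black-Scholes
# model. The closed-form price plays the role of a generic pricing routine.

def bs_call(x, K, r, sigma, T):
    """Black-Scholes price of a European call with spot x."""
    d1 = (math.log(x / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return x * N(d1) - K * math.exp(-r * T) * N(d2)

x, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0

h = 0.01  # bump size: the estimate depends on this approximation parameter
fd_delta = (bs_call(x + h, K, r, sigma, T) - bs_call(x - h, K, r, sigma, T)) / (2 * h)

# analytic delta N(d1) for comparison
d1 = (math.log(x / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
analytic_delta = 0.5 * (1.0 + math.erf(d1 / math.sqrt(2.0)))
print(fd_delta, analytic_delta)
```

With a smooth pricing function the two values agree closely, but the estimate still depends on the bump size h; for nonsmooth payoffs priced by Monte Carlo, the finite-difference estimator degrades much further, which is the drawback noted above.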
The solution X_t of a stochastic differential equation with Lipschitz continuous coefficients is known to be Malliavin differentiable, so one easily verifies that the Black-Scholes model is Malliavin differentiable. However, the diffusion coefficient x^γ, γ ∈ [1/2, 1), is neither differentiable at x = 0 nor Lipschitz continuous, and so it is not immediate whether the CEV-type Heston model is Malliavin differentiable. In [7], Alòs and Ewald proved that the volatility process (1.2), that is, the case γ = 1/2 of (1.3), is Malliavin differentiable and gave an explicit expression for the derivative. However, in the case γ ∈ (1/2, 1), the Malliavin differentiability cannot be proved in exactly the same way.

In this paper we concentrate on the case γ ∈ (1/2, 1); that is, we extend the results in [7] and give an explicit expression for the derivative. Moreover, we consider the CEV-type Heston model and give formulas for computing Greeks.
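To fix ideas, the model (1.1) with volatility dynamics (1.3) can be simulated by a simple Euler scheme. The sketch below uses the full-truncation scheme (one common way of handling the non-Lipschitz coefficient ν^γ in simulation, not the method of this paper); all parameter values are illustrative assumptions.

```python
import numpy as np

# Full-truncation Euler simulation of the CEV-type Heston model:
#   dS_t = S_t (r dt + sqrt(nu_t) dB_t),  dnu_t = kappa (mu - nu_t) dt + theta nu_t^gamma dW_t,
# with corr(dB, dW) = rho. The scheme replaces nu by nu^+ = max(nu, 0) wherever
# a root or power is taken. Parameter values are purely illustrative.

rng = np.random.default_rng(0)

S0, nu0 = 100.0, 0.04
r, kappa, mu, theta, gamma, rho = 0.05, 1.5, 0.04, 0.3, 0.75, -0.7
T, n_steps, n_paths = 1.0, 200, 10_000
dt = T / n_steps

S = np.full(n_paths, S0)
nu = np.full(n_paths, nu0)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)          # drives the volatility
    dB = rho * dW + np.sqrt(1 - rho**2) * rng.normal(0.0, np.sqrt(dt), n_paths)
    nu_p = np.maximum(nu, 0.0)                          # full truncation
    S = S * np.exp((r - 0.5 * nu_p) * dt + np.sqrt(nu_p) * dB)   # log-Euler step
    nu = nu + kappa * (mu - nu_p) * dt + theta * nu_p**gamma * dW

print(S.mean())   # should be close to S0 * exp(r T)
```

The log-Euler step keeps S positive by construction, while the discretized variance process may dip below zero between steps, which is exactly why schemes such as full truncation are needed for γ-powers of ν.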
2 Summary of Malliavin Calculus
We give a short introduction to Malliavin calculus on the Wiener space. For further details, refer to [8].
2.1 Malliavin Derivative
We consider a Brownian motion {W(t, ω)}_{t∈[0,T]} (in the sequel, we often denote W(t, ω) by W_t) on a complete filtered probability space (Ω, F, P; (F_t)), where (F_t) is the filtration generated by W_t, and the Hilbert space H := L²([0,T]). Fixing ω, we can regard ω(t) := W(t, ω) as an element of C([0,T]). Then the Itô integral of h ∈ H is constructed as ∫_0^T h(t) dW(t, ω) = ∫_0^T h(t) dω(t) on C([0,T]). We denote by
C_p^∞(R^n) the set of infinitely continuously differentiable functions f : R^n → R such that f and all of its partial derivatives have polynomial growth. Let S be the space of smooth random variables expressed as
F (ω) = f(W (h1),...,W (hn)), (2.1)
where f ∈ C_p^∞(R^n), W(h) := ∫_0^T h(t) dW_t, and h_1, ..., h_n ∈ H, n ≥ 1. We denote by C_0^∞(R^n) the set of infinitely continuously differentiable functions f : R^n → R with compact support, and by C_b^∞(R^n) the set of infinitely continuously differentiable functions f : R^n → R such that f and all of its partial derivatives are bounded. Denote by S_0 and S_b, respectively, the spaces of smooth random variables of the form (2.1) with f ∈ C_0^∞(R^n) and f ∈ C_b^∞(R^n). We have S_0 ⊂ S_b ⊂ S, and S_0 is a linear subspace of L^p(Ω), dense in L^p(Ω) for every p > 0. We use the notation ∂_i = ∂/∂x_i in the sequel. We now define the derivative operator D, the so-called Malliavin derivative operator.
Definition 2.1 (Malliavin derivative). The Malliavin derivative DtF of a smooth random variable expressed as (2.1) is defined as the H-valued random variable given by
D_tF = Σ_{i=1}^{n} ∂_i f(W(h_1), ..., W(h_n)) h_i(t).   (2.2)
We sometimes omit the subscript t.
Since S is dense in L^p(Ω), we will define the Malliavin derivative of a general F ∈ L^p(Ω) by taking limits. For this we need that the Malliavin derivative operator D : L^p(Ω) → L^p(Ω; H) is closable. Refer to [8] for the proofs of the following results.
Lemma 2.1. For F, G ∈ S and h ∈ H, we have E[ G⟨DF, h⟩_H ] = −E[ F⟨DG, h⟩_H ] + E[ F G W(h) ].
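Lemma 2.1 can be checked numerically in a simple case: take h ≡ 1 on [0,T], F = f(W_T) and G = g(W_T), so that W(h) = W_T, ⟨DF, h⟩_H = T f′(W_T) and ⟨DG, h⟩_H = T g′(W_T). The sketch below is our own illustration; the concrete choices of f, g and the quadrature are assumptions, not taken from the text.

```python
import numpy as np

# Numerical check of the integration-by-parts formula (Lemma 2.1):
#   E[G <DF,h>_H] = -E[F <DG,h>_H] + E[F G W(h)]
# with h = 1 on [0,T], F = f(W_T), G = g(W_T), so that W(h) = W_T and
# <DF,h>_H = T f'(W_T). Expectations over W_T ~ N(0,T) are evaluated by
# Gauss-Hermite quadrature (substitution z = x*sqrt(2T)).

T = 2.0
f, df = np.sin, np.cos                      # f and f'
g, dg = lambda x: x**2, lambda x: 2 * x     # g and g'

nodes, weights = np.polynomial.hermite.hermgauss(80)
x = nodes * np.sqrt(2 * T)                  # nodes transformed to N(0,T)
w = weights / np.sqrt(np.pi)

E = lambda vals: np.sum(w * vals)           # E[phi(W_T)] for W_T ~ N(0,T)

lhs = E(g(x) * T * df(x))                   # E[G <DF,h>_H]
rhs = -E(f(x) * T * dg(x)) + E(f(x) * g(x) * x)
print(abs(lhs - rhs))
```

Both sides agree to quadrature precision, which is a direct (finite-dimensional) instance of Gaussian integration by parts.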
Lemma 2.2. For any p ≥ 1, the Malliavin derivative operator D : Lp(Ω) → Lp(Ω; H) is closable.
For any p ≥ 1, we denote by D^{1,p} the domain of D in L^p(Ω); it is the closure of S with respect to the norm
‖F‖_{1,p} = ( E[ |F|^p ] + E[ ‖DF‖_H^p ] )^{1/p} = ( E[ |F|^p ] + E[ ( ∫_0^T |D_tF|² dt )^{p/2} ] )^{1/p}.   (2.3)
Note that D^{1,2} is a Hilbert space with the scalar product ⟨F, G⟩_{1,2} = E[ FG ] + E[ ⟨DF, DG⟩_H ]. Moreover, the Malliavin derivative {D_tF}_{t∈[0,T]} can be regarded as a stochastic process defined almost everywhere with respect to the measure P × λ, where λ denotes the Lebesgue measure on [0,T]. Indeed, we can observe
‖DF‖²_{L²(Ω;H)} = E[ ∫_0^T (D_tF)² dt ] = ∫_0^T E[ (D_tF)² ] dt = ‖D·F‖²_{L²(Ω×[0,T])}.   (2.4)
The following result will become a very important tool.
Lemma 2.3. Suppose that a sequence {F_n} ⊂ D^{1,2} with sup_n E[ ‖DF_n‖²_H ] < ∞ converges to F in L²(Ω). Then F belongs to D^{1,2}, and the sequence {DF_n} converges to DF in the weak topology of L²(Ω; H).
Similarly, we define the k-th Malliavin derivative of F, {D^k_{t_1,...,t_k} F : t_i ∈ [0,T]}, as an Ω × [0,T]^k-measurable stochastic process defined P × λ^k-almost everywhere; the operator D^k is closable from S into L^p(Ω; H^{⊗k}) for any p ≥ 1 and k ≥ 1. As with the Malliavin derivative D, the closability of D^k allows us to define the domain D^{k,p} of the operator D^k in L^p(Ω) as the completion of S with respect to the norm
‖F‖_{k,p} = ( E[ |F|^p ] + Σ_{i=1}^{k} E[ ‖D^i F‖^p_{H^{⊗i}} ] )^{1/p}.   (2.5)

Moreover, we define D^{1,∞} := ∩_{p∈N} D^{1,p}. We now state the chain rule; refer to [8, Proposition 1.2.4] for details.
Lemma 2.4. For p ≥ 1, let F = (F_1, ..., F_n) with F_i ∈ D^{1,p}, and let ψ : R^n → R be a Lipschitz function with bounded partial derivatives. Then ψ(F) ∈ D^{1,p} and
D_t ψ(F) = Σ_{i=1}^{n} ∂_i ψ(F) D_tF_i.   (2.6)
2.2 Skorohod Integral
For p, q > 1 satisfying 1/p + 1/q = 1, the adjoint D* of the closable operator D defined on L^p(Ω) is again closed, with domain contained in L^q(Ω; H). We focus on the case p = q = 2 and define the divergence operator δ = D*, the so-called Skorohod integral, as the adjoint of the operator D:
δ : L²(Ω; H) ≅ L²(Ω × [0,T]) → L²(Ω).   (2.7)
Definition 2.2 (Skorohod integral). Let u ∈ L²(Ω; H). If for all F ∈ D^{1,2} we have
|E[ ⟨DF, u⟩_H ]| ≤ c ‖F‖_{L²(Ω)},   (2.8)

where c is some constant depending on u, then u is said to belong to the domain Dom(δ). Moreover, if u ∈ Dom(δ), then δ(u) is the element of L²(Ω) characterized by the duality relation E[ F δ(u) ] = E[ ⟨DF, u⟩_H ] for all F ∈ D^{1,2}.
We can get the following results.
Lemma 2.5. Let F ∈ D^{1,2} and u ∈ Dom(δ) satisfy Fu ∈ L²(Ω; H). Then Fu belongs to Dom(δ) and δ(Fu) = F δ(u) − ⟨DF, u⟩_H.
Lemma 2.6. Let u ∈ L²(Ω; H) be an F_t-adapted stochastic process. Then u ∈ Dom(δ) and δ(u) = ∫_0^T u_t dW_t.
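Lemma 2.6 together with the duality relation of Definition 2.2 can be illustrated by Monte Carlo. For the adapted process u_t = W_t we have δ(u) = ∫_0^T W_t dW_t = (W_T² − T)/2, and taking F = W_T² (so D_tF = 2W_T) the duality E[ F δ(u) ] = E[ ⟨DF, u⟩_H ] reads E[ W_T²(W_T² − T)/2 ] = E[ 2W_T ∫_0^T W_t dt ], both sides being equal to T². The discretization parameters below are illustrative assumptions.

```python
import numpy as np

# Monte Carlo illustration of Lemma 2.6 and the duality relation, with
# u_t = W_t (adapted, so delta(u) is the Ito integral) and F = W_T^2.

rng = np.random.default_rng(1)
T, n_steps, n_paths = 1.0, 200, 50_000
dt = T / n_steps

dW = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
W = np.cumsum(dW, axis=1)                 # W at grid points t_1, ..., t_n
W_T = W[:, -1]

# left-point Riemann sum for the Ito integral int_0^T W_t dW_t
W_left = np.hstack([np.zeros((n_paths, 1)), W[:, :-1]])
ito_integral = np.sum(W_left * dW, axis=1)

time_integral = np.sum(W, axis=1) * dt    # crude quadrature of int_0^T W_t dt

lhs = np.mean(W_T**2 * ito_integral)      # E[F delta(u)]
rhs = np.mean(2 * W_T * time_integral)    # E[<DF, u>_H]
print(lhs, rhs)                           # both approximately T^2 = 1
```

The agreement of both estimates with T² is a sanity check on the identification of δ with the Itô integral for adapted integrands.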
We give one of the well-known properties of δ, which expresses the relationship between the Malliavin derivative and the Skorohod integral. Denote by D^{1,2}(H) the class of processes u ∈ L²(Ω; H) ≅ L²(Ω × [0,T]) such that u(t) ∈ D^{1,2} for almost all t and there exists a measurable version of the two-parameter process D_s u_t satisfying E[ ∫_0^T ∫_0^T (D_s u_t)² λ(ds) λ(dt) ] < ∞.
Lemma 2.7. Let u ∈ D^{1,2}(H) be such that D_t u ∈ Dom(δ) for almost all t and δ(D_t u) ∈ L²(Ω; H). Then δ(u) belongs to D^{1,2} and
Dt(δ(u)) = u(t) + δ(Dtu). (2.9)
The following result is applied to calculate Greeks. For further details, refer to [8, Chapter 6].
Lemma 2.8. Let F, G ∈ D^{1,2}. Suppose that a random variable u(t, ·) ∈ H satisfies ⟨DF, u⟩_H ≠ 0 a.s. and Gu(⟨DF, u⟩_H)^{−1} ∈ Dom(δ). Then for any continuously differentiable function f with bounded derivatives, we have E[ f′(F) G ] = E[ f(F) H(F, G) ], where H(F, G) = δ( Gu(⟨DF, u⟩_H)^{−1} ).
2.3 Malliavin Calculus for Stochastic Differential Equations
Consider T > 0 and Ω = C_0([0,T]; R^m). Let {W_t}_{t∈[0,T]} be the m-dimensional Brownian motion on the filtered probability space (Ω, F, P; F_t), where P is the m-dimensional Wiener measure and F is the completion of the Borel σ-field of Ω with respect to P. Then H = L²([0,T]; R^m) is the underlying Hilbert space. We consider the solution {X_t}_{t∈[0,T]} of the following n-dimensional stochastic differential equation, for i = 1, ..., n:
dX_t^i = b^i(X_t) dt + Σ_{j=1}^{m} σ_j^i(X_t) dW_t^j,   X_0^i = x^i,   (2.10)
where b : R^n → R^n and σ : R^n → R^{n×m} satisfy the following: there is a positive constant K < ∞ such that
|b(x) − b(y)| + |σ(x) − σ(y)| ≤ K|x − y|, for all x, y ∈ Rn, (2.11) |b(x)| + |σ(x)| ≤ K(1 + |x|), for all x ∈ Rn. (2.12)
Here σ_j denotes the j-th column of the matrix σ = (σ_j^i). We have the following result on existence and uniqueness; refer to [8, Lemma 2.2.1] for details.
Theorem 2.1. There is a unique n-dimensional, continuous, F_t-adapted stochastic process {X_t}_{t∈[0,T]} satisfying the stochastic differential equation (2.10), and it satisfies E[ sup_{0≤t≤T} |X_t|^p ] < ∞ for all p ≥ 2.
When the coefficients are Lipschitz continuous, the solution X_t^i belongs to D^{1,∞}.
Theorem 2.2. Assume that the coefficients of the stochastic differential equation (2.10) are Lipschitz continuous. Then the solution X_t^i belongs to D^{1,∞} for all t ∈ [0,T] and i = 1, ..., n, and satisfies
sup_{0≤r≤t} E[ sup_{r≤s≤T} |D_r^j X_s^i|^p ] < ∞.   (2.13)
j i Moreover the derivative DrXt satisfies the following
D_r^j X_t^i = σ_j^i(X_r) + Σ_{k=1}^{n} Σ_{l=1}^{m} ∫_r^t ∂_k σ_l^i(X_s) D_r^j X_s^k dW_s^l + Σ_{k=1}^{n} ∫_r^t ∂_k b^i(X_s) D_r^j X_s^k ds,   (2.14)
for r ≤ t a.e., and D_r^j X_t^i = 0 for r > t a.e. Here D^j denotes the Malliavin derivative with respect to W^j.
Let Xt be the solution of the following stochastic differential equation
dXt = b(Xt) dt + σ(Xt) dWt,X0 = x, (2.15)
where W_t denotes a 1-dimensional Brownian motion. Assume that X_t ∈ D^{1,2}. We let Y_t be the first variation of X_t, that is, Y_t = ∂X_t/∂x. One easily checks that Y_t satisfies the following:
0 0 dYt = b (Xt)Yt dt + σ (Xt)Yt dWt,Y0 = 1. (2.16)
Considering this as a stochastic differential equation for Yt, we can have the following solution
Y_t = exp( ∫_0^t ( b′(X_s) − (1/2)(σ′(X_s))² ) ds + ∫_0^t σ′(X_s) dW_s ).   (2.17)
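The explicit formula (2.17) can be checked numerically: simulate X_t by an Euler scheme, evaluate Y_T through the exponential formula, and compare with a central finite difference of X_T in the initial condition along the same Brownian path. The coefficients b and σ below are illustrative smooth, Lipschitz choices, not taken from the text.

```python
import numpy as np

# Numerical check of the first-variation formula (2.17) for
#   dX_t = b(X_t) dt + sigma(X_t) dW_t,  Y_t = dX_t/dx.

rng = np.random.default_rng(2)

b  = lambda x: 1.0 - x                       # b(x),  b'(x) = -1
db = lambda x: -1.0 + 0.0 * x
sig  = lambda x: 0.3 * np.sqrt(1.0 + x**2)   # sigma(x) > 0, Lipschitz
dsig = lambda x: 0.3 * x / np.sqrt(1.0 + x**2)

x0, T, n = 1.0, 1.0, 5_000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), n)         # one fixed Brownian path

def euler(x):
    path = np.empty(n + 1)
    path[0] = x
    for i in range(n):
        path[i + 1] = path[i] + b(path[i]) * dt + sig(path[i]) * dW[i]
    return path

X = euler(x0)
# Y_T from (2.17), using left-point values of X on the grid
expo = np.sum((db(X[:-1]) - 0.5 * dsig(X[:-1])**2) * dt + dsig(X[:-1]) * dW)
Y_T = np.exp(expo)

# pathwise central finite difference in the initial condition
h = 1e-4
fd = (euler(x0 + h)[-1] - euler(x0 - h)[-1]) / (2 * h)
print(Y_T, fd)
```

Both quantities approximate the same pathwise derivative, so they agree up to discretization error.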
The following results will also be useful to calculate Greeks later.
Lemma 2.9. Under the above conditions, we have Y_t = D_sX_t σ^{−1}(X_s) Y_s for s ≤ t.

Let {a(t)}_{t∈[0,T]} be a continuous function in H such that ∫_0^T a(t) dt = 1.

Lemma 2.10. Under the above conditions, we have Y_T = ∫_0^T a(t) D_tX_T σ^{−1}(X_t) Y_t dt.
Theorem 2.3. For any ψ : R → R of polynomial growth, we have ∂/∂x E[ ψ(X_T) ] = E[ ψ(X_T) π ], where π = ∫_0^T a(t) σ^{−1}(X_t) Y_t dW_t.
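In the Black-Scholes case dX_t = rX_t dt + σX_t dW_t, Theorem 2.3 can be made fully explicit (this worked special case is our own illustration): σ(X_t) = σX_t and Y_t = X_t/x give σ^{−1}(X_t)Y_t = 1/(σx), so with a(t) = 1/T the weight is π = W_T/(σxT) and ∂/∂x E[ (X_T − K)^+ ] = E[ (X_T − K)^+ W_T/(σxT) ]. A Monte Carlo sketch with illustrative parameters:

```python
import math
import numpy as np

# Monte Carlo delta via the Malliavin weight of Theorem 2.3 in the
# Black-Scholes special case, with a(t) = 1/T and pi = W_T / (sigma x T).
# The payoff (x - K)^+ is not differentiable, yet no pathwise derivative
# of the payoff is needed.

rng = np.random.default_rng(3)
x, K, r, sigma, T = 1.0, 1.0, 0.05, 0.2, 1.0
n_paths = 1_000_000

W_T = rng.normal(0.0, math.sqrt(T), n_paths)
X_T = x * np.exp((r - 0.5 * sigma**2) * T + sigma * W_T)   # exact simulation

payoff = np.maximum(X_T - K, 0.0)
weight = W_T / (sigma * x * T)                             # Malliavin weight pi
mc_delta = np.mean(payoff * weight)

# closed-form benchmark: d/dx E[(X_T - K)^+] = e^{rT} N(d1) (undiscounted)
d1 = (math.log(x / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
exact = math.exp(r * T) * 0.5 * (1.0 + math.erf(d1 / math.sqrt(2.0)))
print(mc_delta, exact)
```

Note that the expectation here is undiscounted, matching the statement of Theorem 2.3; multiplying by e^{−rT} recovers the usual delta.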
For the more general case, an analogous result holds. Let X_t denote the solution of the following n-dimensional stochastic differential equation, of the same form as (2.10):
dX_t = b(X_t) dt + σ(X_t) dW_t,   X_0 = x,   (2.18)

where W_t denotes an m-dimensional Brownian motion. For the sake of simplicity, we assume that n = m.
Theorem 2.4. Suppose that the diffusion coefficient σ is invertible and that E[ ∫_0^T |σ^{−1}(X_t) Y_t|^{2+ε} dt ] < ∞ for some ε > 0, where Y denotes the first variation process, that is, Y_t^{ji} = ∂_i X_t^j. Let G ∈ D^{1,∞} be a random variable which does not depend on the initial condition x. Then for every measurable function φ with polynomial growth we have ∂_i E[ φ(X_T) G ] = E[ φ(X_T) π_i(G) ], where a(t) is an F_t-adapted process satisfying ∫_0^T a(t) dt = 1 and
π_i(G) = Σ_{k=1}^{n} δ^k( G a(t) (σ^{−1}(X_t) Y_t)^{ki} )
= Σ_{k=1}^{n} ( G ∫_0^T a(t) (σ^{−1}(X_t) Y_t)^{ki} dW_t^k − ∫_0^T D_t^k G · a(t) (σ^{−1}(X_t) Y_t)^{ki} dt ),   (2.19)
and δ^k denotes the adjoint of the Malliavin derivative with respect to the Brownian motion W_t^k.
The following theorem, introduced in [9], is useful. From now on, we denote by ∂_t the first derivative with respect to t, by ∂_x the first derivative with respect to x, and by ∂_{xx} the second derivative with respect to x.
Theorem 2.5. Consider a stochastic process Xt satisfying the 1-dimensional stochastic differential equation
dXt = µ(t, Xt) dt + σ(t, Xt) dWt, (2.20)
where W_t denotes a Brownian motion and the coefficients µ(t, x) ∈ C^1([0,T] × R) and σ(t, x) ∈ C^2([0,T] × R) satisfy the linear growth condition and the Lipschitz condition. Moreover, we assume that σ is positive and bounded away from 0, and that µ(t, 0) and σ(t, 0) are bounded for all t ∈ [0,T]. Then X_t belongs to D^{1,2} and the derivative is given by
D_rX_t = σ(t, X_t) exp( ∫_r^t [ ∂_xµ − (µ ∂_xσ)/σ − (1/2)(∂_{xx}σ)σ − (∂_tσ)/σ ](s, X_s) ds ),   (2.21)

for r ≤ t, and D_rX_t = 0 for r > t.
Proof. We omit the proof; for further details, refer to [9, Theorem 2.1].
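As a quick sanity check of (2.21) (our own verification, not taken from [9]), consider the Black-Scholes dynamics µ(t, x) = rx, σ(t, x) = σ̄x:

```latex
% Black-Scholes case: \mu(t,x) = r x, \ \sigma(t,x) = \bar\sigma x
\partial_x \mu = r, \qquad
\frac{\mu\,\partial_x\sigma}{\sigma} = \frac{r x \cdot \bar\sigma}{\bar\sigma x} = r, \qquad
\partial_{xx}\sigma = 0, \qquad \partial_t\sigma = 0,
```

so the integrand in (2.21) vanishes identically and D_rX_t = σ̄X_t, recovering the classical expression for the Malliavin derivative of geometric Brownian motion.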
3 Mean-Reverting CEV Model
Following the construction in [7], we will now prove that the mean-reverting constant elasticity of vari- ance model is Malliavin differentiable. The mean-reverting CEV model follows the stochastic differential equation
dν_t = κ(µ − ν_t) dt + θ ν_t^γ dW_t,   γ ∈ (1/2, 1),   (3.1)
with ν_0 = ν > 0 and where µ, κ, θ > 0. In [7], Alòs and Ewald proved the Malliavin differentiability in the case γ = 1/2 of the equation (3.1). In that case, the function x^{1/2} is neither continuously differentiable at 0 nor Lipschitz continuous, and they circumvented the resulting problems by suitable transformations and approximations. In the case γ ∈ (1/2, 1), however, there are additional complications. Following [7], we extend their results.
3.1 Existence and Uniqueness
We now prove that the solution to (3.1) not only exists and is unique, but is also positive a.s.
Lemma 3.1. There exists a unique strong solution to (3.1), and it satisfies P( ν_t ≥ 0 for all t ≥ 0 ) = 1. Moreover, let τ = inf{ t ≥ 0 : ν_t = 0 or ν_t = ∞ } with inf ∅ = ∞. Then we have P( τ = ∞ ) = 1.
Proof. Instead of (3.1), consider the following
dv_t = κ(µ − v_t) dt + θ |v_t|^γ dW_t,   γ ∈ (1/2, 1).   (3.2)
If we can show that the unique strong solution of (3.2) is positive a.s., then (3.2) coincides with (3.1). The existence of a non-explosive weak solution to (3.2) follows from the continuity and the sub-linear growth of the drift and diffusion coefficients (see [10, Proposition 5.3.20, Corollary 5.3.23]). Moreover, from [10, Proposition 5.2.13], pathwise uniqueness holds for (3.2); together with weak existence this yields a unique strong solution.
We now prove the second claim. Let τ_v = inf{ t ≥ 0 : v_t = 0 or v_t = ∞ } with inf ∅ = ∞. In order to use [10, Theorem 5.5.29], we verify that, for a fixed number c ∈ R, lim_{x→0} p(x) = −∞, where p(x) is defined as p(x) = ∫_c^x exp( −2 ∫_c^y κ(µ − z)/(θ² z^{2γ}) dz ) dy. Since we already know that the solution v_t of (3.2) does not explode to ∞, proving this limit lets us conclude that P( τ_v = ∞ ) = 1, that is, P( τ = ∞ ) = 1. We may assume without loss of generality that x < 1, and we take c = 1. Then we have
−2 ∫_1^y κ(µ − z)/(θ² z^{2γ}) dz = −2 ∫_1^y ( κµ/(θ² z^{2γ}) − κ/(θ² z^{2γ−1}) ) dz
= (2κµ/(θ²(2γ − 1))) ( 1/y^{2γ−1} − 1 ) − (2κ/(θ²(2 − 2γ))) ( 1 − y^{2−2γ} )
≥ (2κµ/(θ²(2γ − 1))) ( 1/y^{2γ−1} − 1 ) − 2κ/(θ²(2 − 2γ)).   (3.3)
Letting w = y^{−1}, we can estimate p(x). By the last inequality, there exists a constant C > 0 such that

p(x) ≤ −C ∫_x^1 exp( (2κµ/(θ²(2γ − 1))) · 1/y^{2γ−1} ) dy
= −C ∫_1^{1/x} (1/w²) exp( (2κµ/(θ²(2γ − 1))) w^{2γ−1} ) dw → −∞,   (3.4)

as x → 0, since 2γ − 1 > 0 makes the integrand grow without bound.
3.2 L^p-Integrability
Consider the stochastic differential equation
dν_t = b(ν_t) dt + θ ν_t^γ dW_t,   (3.5)