Purdue University Purdue e-Pubs

Open Access Dissertations Theses and Dissertations

January 2015 Malliavin in the Canonical Levy Process: Theory and Financial Applications. Rolando Dangnanan Navarro Purdue University

Follow this and additional works at: https://docs.lib.purdue.edu/open_access_dissertations

Recommended Citation Navarro, Rolando Dangnanan, "Malliavin Calculus in the Canonical Levy Process: White Noise Theory and Financial Applications." (2015). Open Access Dissertations. 1422. https://docs.lib.purdue.edu/open_access_dissertations/1422

This document has been made available through Purdue e-Pubs, a service of the Purdue University Libraries. Please contact [email protected] for additional information. Graduate School Form 30 Updated 1/15/2015

PURDUE UNIVERSITY GRADUATE SCHOOL Thesis/Dissertation Acceptance

This is to certify that the thesis/dissertation prepared

By Rolando D. Navarro, Jr.

Entitled Malliavin Calculus in the Canonical Levy Process: White Noise Theory and Financial Applications

For the degree of Doctor of Philosophy

Is approved by the final examining committee:

Frederi Viens Chair Jonathon Peterson

Michael Levine

Jose Figueroa-Lopez

To the best of my knowledge and as understood by the student in the Thesis/Dissertation Agreement, Publication Delay, and Certification Disclaimer (Graduate School Form 32), this thesis/dissertation adheres to the provisions of Purdue University’s “Policy of Integrity in Research” and the use of copyright material.

Approved by Major Professor(s): Frederi Viens

Approved by: Jun Xie 11/24/2015 Head of the Departmental Graduate Program Date MALLIAVIN CALCULUS IN THE CANONICAL LEVY´ PROCESS:

WHITE NOISE THEORY AND FINANCIAL APPLICATIONS

A Dissertation

Submitted to the Faculty

of

Purdue University

by

Rolando D. Navarro, Jr.

In Partial Fulfillment of the

Requirements for the Degree

of

Doctor of Philosophy

December 2015

Purdue University

West Lafayette, Indiana ii

”Stay hungry, stay foolish!” Steve Jobs (1955-2011) iii

ACKNOWLEDGMENTS

I would like to express my deepest gratitude to the following who made my journey towards the completion of my Ph.D. dissertation possible. The Almighty Father for giving me all the strength and endurance to accomplish this noble endeavor. My parents for their unyielding encouragement for me to pursue graduate studies in Purdue and well as for imparting their invaluable foresight on what it takes to be successful in life and to my relatives in New York City, Lynn Terrell, Edmundo Navarro, and Araceli Galvan Navarro who kept me home away from home. My adviser, Dr. Frederi Viens for the intellectual stimulation and insightful sug- gestions selflessly provided to me during the course of this research as well as for expanding my horizon in research opportunities in mathematical finance. Also, I would like to thank Dr. Michael Levine, Dr. Jose Figueroa-Lopez, and Dr. Jonathon Peterson for their invaluable comments for my dissertation. My academic sibling, Dr. Richard Eden for providing me his meticulously written notes on Malliavin calculus. Without his mentoring and fruitful discussions during my earlier stages in my stay here Purdue, this thesis would not have gone this far. My Boilermaker friends from PQFC and Statistics Department especially to Lin Yang Cheng, Berend Coster, Xiaoguang Wang, Tian Qiu, Jeffrey Nisen, Yao Tang, and Yudong Cao for sharing their individual and collective aspirations so that through hard work and gritty determination, we can achieve our dreams in the exciting world of Quantitative Finance. My brethren in Church of Christ Iglesia ni Cristo of the Locale of Indianapolis especially to the Gumasing family: Bro. Garry, Sis. Carol, and Bro. Paolo who help me remained steadfast to the faith. You are all awesome! 23rd of November 2015, West Lafayette, IN iv

TABLE OF CONTENTS

Page ABSTRACT ...... vii 1 Introduction ...... 1 1.1 Motivation ...... 1 1.2 Overview of the Dissertation ...... 4 1.3 Main Results ...... 5 2 Preliminaries ...... 7 2.1 L´evyProcesses ...... 7 2.2 Moment Inequalities ...... 11 2.3 Geometric L´evyProcesses ...... 12 2.4 Stochastic Differential Equations ...... 14 2.5 Canonical L´evySpace ...... 15 2.6 Iterated L´evy-ItˆoIntegral ...... 16 2.7 Skorohod ...... 18 2.8 Predictable Process ...... 21 3 Canonical L´evyWhite Noise Processes ...... 22 3.1 Construction of Canonical L´evyWhite Noise Process ...... 22 3.2 Construction of Alternative Chaos Expansion for Canonical L´evyprocesses ...... 26 3.3 Alternative Chaos Expansion for Canonical L´evyprocesses ..... 35 3.4 Stochastic Test and Distribution Function ...... 41 3.4.1 The spaces G and G∗ ...... 42 3.4.2 Kontratiev and Hida spaces ...... 43 3.5 White Noise Processes from Canonical L´evyProcesses ...... 44 3.6 Wick Product and Hermite Transform ...... 49 v

Page

3.7 Stochastic Derivative ...... 52 3.8 Generalized Expectation and Generalized Conditional Expectation . 58 3.9 Skorohod Integration on G∗ ...... 63 3.10 Clark-Ocone Theorem in L2(P ) ...... 71 3.11 Multivariate Extension ...... 79 3.11.1 Notations ...... 80 3.11.2 Chaos Expansion ...... 81 3.11.3 Stochastic Test and Distribution Functions ...... 82 3.11.4 Wick Product ...... 84 3.11.5 Stochastic Derivatives ...... 84 3.11.6 Generalized Conditional Expectation ...... 85 3.11.7 Skorohod Integration on G∗ ...... 86 3.11.8 Clark Ocone Theorem in L2(P ) ...... 88 4 Clark-Ocone Theorem Under The Change of Measure and Mean-Variance Hedging ...... 89 4.1 Girsanov Theorem for L´evyProcesses ...... 89 4.2 Clark-Ocone Theorem in L2(P ) ∩ L2(Q) ...... 92 4.3 Mean Variance Hedging ...... 104 4.3.1 Financial Modeling Under a L´evyMarket ...... 104 4.3.2 Quadratic Hedging ...... 109 4.3.3 Geometric L´evyProcesses ...... 115 4.3.4 Minimal Martingale Measure ...... 121 4.3.5 The Bates Model ...... 126 5 Donsker Delta and Its Applications to Finance ...... 137 5.1 Donsker Delta ...... 137

5.2 Evaluation of E[Dt,zg(Y (T ))|Ft] ...... 138

5.2.1 Case I: E[Dt,0g(Y (T ))|Ft] ...... 139

5.2.2 Case II: E[Dt,zg(Y (T ))|Ft], z 6= 0 ...... 141 vi

Page 5.3 Examples ...... 143 5.3.1 Merton Model ...... 143 5.3.2 Continuous Case ...... 146 6 Evaluating Greeks In Exotic Options ...... 149 6.1 Preliminaries ...... 149 6.2 Markovian Property of the Payoff ...... 153 6.3 Malliavin Derivatives of the Supremum and Infimum ...... 154 6.4 Some Important Identities ...... 165 6.5 Delta ...... 166 6.6 Gamma ...... 170 6.7 Construction of Dominating Processes ...... 173 6.7.1 Continuous-Time Monitoring ...... 174 6.7.2 Discrete-Time Monitoring ...... 176 6.8 Example: Merton Model ...... 178 6.8.1 Continuous Monitoring ...... 178 6.8.2 Discrete Monitoring ...... 179 REFERENCES ...... 181 A Wiener and Poisson Chaos Expansions ...... 186 A.1 Hermite Polynomial and Hermite Function ...... 186 A.2 Wiener Chaos Expansions ...... 187 A.3 Poisson Chaos Expansions ...... 188 VITA ...... 191 vii

ABSTRACT

Navarro, Rolando, Jr. D. PhD, Purdue University, December 2015. Malliavin Calcu- lus in the Canonical L´evyProcess: White Noise Theory and Financial Applications. Major Professor: Frederi G. Viens.

We constructed a white noise theory for the Canonical L´evyprocess by Sol´e, Utzet, and Vives. The construction is based on the alternative construction of the chaos expansion of square integrable random variable. Then, we showed a Clark- Ocone theorem in L2(P ) and under the change of measure. The result from the Clark-Ocone theorem was used for the mean-variance hedging problem and applied it to stochastic volatility models such as the Barndorff-Nielsen and Shepard model model and the Bates model. A Donsker Delta approach is employed on a Binary option to solve the mean-variance hedging problem. Finally, we are able to derive the Delta and Gamma for a barrier and lookback options for an exp-L´evyprocess using the methodology of Bernis, Gobet, and Kohatsu-Higa by employing a dominating process. 1

1. INTRODUCTION

1.1 Motivation

Financial modeling of risky assets is assumed to follow the classical Black-Scholes- Merton model where the log-returns risky asset follows a normal distribution. How- ever, stylized facts suggests that the Black-Scholes-Merton model is inadequate. There is a growing interest that suggests that financial modeling under a L´evyprocess is bet- ter suited in capturing market behavior. This includes skewness and long-tailed dis- tribution of the asset returns, presence of jumps, and implied volatility smile [21], [74]. The classical Canonical space for a L´evyprocess is constucted from the σ-field of cylinder sets and a probaility measure using the Kolmogorov extension theorem [73], [7]. However, Sol´e,Utzet and Vives [77] has formulated another construction of the Canonical space for the L´evy process to be able to obtain an interpretation the Malliavin derivative for the L´evyprocess Dt,z. The derivative Dt,0 is associated with the Malliavin derivative with respect to the while Dt,z, z 6= 0 is the Malliavin derivative with respect to the pure that has a form of an increment quotient. We shall refer to the Canonical L´evyprocess to the Canonical space constructed by Sol´e,Utzet, and Vives. [77]. White noise theory was first introduced by Hida for Wiener process which has origins in quantum physics [45]. Subsequently, white noise theory was extended in the pure jump L´evyprocess [1], [64], [24]. This was done by incorporating generalized function spaces related to L2(P ) in a natural way [46]. This includes the dual spaces (G, G∗) and the Hida dual spaces ((S), (S)∗) with the following inclusions: (S) ⊂ G ⊂ L2(P ) ⊂ G∗ ⊂ (S)∗ [27]. We extend this theory for the Canonical L´evyspace by first deriving an alternative chaos expansion of square integrable random variable and give 2 some important characterizations such as the Wick-Skorohod identity, then prove the Clark-Ocone theorem for L2(P ). The Clark-Ocone theorem is the explicit representation of the Itˆorepresentation theorem in terms of the Malliavin derivative. The univariate version of the Clark-

Ocone theorem in D1,2 for the Canonical L´evyprocess can be stated as follows:

1,2 Theorem 1.1.1 [78] Let F ∈ D be FT -measurable, then Z F = E[F ] + E[Dt,zF |Ft− ]M(dt, dz) (1.1) [0,T ]×R where M is independent measure given by (2.53).

The Clark-Ocone representation can be weakened to a representation for F ∈ L2(P ) using white noise analysis with the same form (1.1). However, the Malliavin derivative Dt,z and the expectation E will be generalized to a stochastic gradient and generalized expectation respectively. An example of a contingent claim F that is not in D1,2 but belong to L2(P ) is a binary option. We will evaluate the generalized conditional expectation E[Dt,zF |Ft− ] using the Donsker Delta of an Itˆo-L´evyprocess [26]. Under the change of equivalent measure Q ∼ P , Ocone [63] and Huenhe [49], we were able to derive the Clark-Ocone theorem under the change in measure under

D1,2 for the Wiener and Pure Jump L´evyprocesses. Suzuki has further extended this representation for the Canonical L´evyprocesses [80]. Using white noise theory, the Clark-Ocone theorem under the change of measure was proven by Okur in the Wiener case [66], pure-jump L´evycase [67], and the combination of Wiener and pure jump L´evycase [67]. Let u(t) and θ(t, z) be the drift terms for the Wiener process W (t) and pure jump process N˜(dt, dz) such that dW Q = dW (t) + u(t)dt is a Q-Brownian motion and N˜ Q(dt, dz) = N˜(dt, dz) + θ(t, z)ν(dz)dt is a Q-compensated Poisson random measure. The L´evyprocess is in general an incomplete model. Hence, the Q measure is not unique. Nevertheless, there are some ways of finding drift parameters to obtain a unique equivalent measure Q by some 3

selection criterion such as the F¨ollmer-Schweizer minimal measure and the minimal martingale measure [7]. Okur [67] assumed that u(t) and θ(t, z) is either deterministic or driven by Brow- nian and compensated Poisson random measure respectively. However, this model is in general not adequate to obtain a Clark-Ocone theorem for stochastic volatility models. Hence, we will generalize the drift vectors u(t) and θ(t, z) to be driven pos- sibly by multivariate independent Wiener and Poisson noise sources. One example is the BNS model with drift under the minimal martingale measure. In this model, the drift parameter u(t) is driven by a compensated Poisson random measure. Another example is the Bates model which is driven by another independent Wiener process. As an application to financial modeling in L´evyprocesses, following the method- ology of Benth, et al., [15], the hedging portfolio by minimizing the quadratic hedging error under the martingale measure can be expressed in terms of the representation of the Clark-Ocone theorem. Another financial application of Malliavin calculus considered in the study is the evaluation of the sensitivities or so-called Greeks for exotic options under the exp- L´evyprocess. The Greeks are used in risk-management to hedge against changes in the parameters on the option price. For a L´evyprocess, a closed form of Greeks is in general not available. However, there are numerical methods in evaluating Greeks such as finite-difference, likelihood ratio, and pathwise approach [34]. Greeks using Malliavin calculus was first derived by Fournie, et al., [31]. One advantage of using the Malliavin calculus approach is it doesn’t require the density function in contrast to the likelihood ratio approach. Moreover, Bernis, Gobet, and Kohatsu-Higa was able to extend Fournie’s result for a class of exotic options which includes barrier and lookback options using a dominating process [16], [38] for a discrete and continuous monitoring case. We extend their result for an exp-L´evy process and find a suitable dominating process for discrete and continuous monitoring case. 4

1.2 Overview of the Dissertation

The dissertation is organized as follows. Chapter 2 presents a background review of the of L´evyprocess, then we discuss the Malliavin calculus for the Canonical L´evyprocesses. We present the white noise theory for Canonical L´evyprocess in Chapter 3. First, we present the construction of the Canonical L´evywhite noise process. Then, we show the alternative chaos expansion of a square-integrable random variable under the Canonical L´evyprocesses and introduce the white noise L´evy process and L´evy white noise field. From this framework, we extend the white noise theory for a Canonical L´evyprocess to prove a Clark-Ocone theorem for L2(P ). Finally, we shall present the multivariate extensions. For readers interested in the financial applications of the Canonical L´evyprocess, readers can proceed immediately to Chapter 3.11 for an overview of important defi- nitions and characterizations of the white noise theory extended on the multivariate version on the first reading. Likewise, for those who are interested in the charac- terization of the white noise theory for the Canonical L´evyprocesses, we invite the reader to explore Chapter 3 on its entirety. We derive a Clark-Ocone formula under the change of equivalent measure Q ∼ P in Chapter 4. Then, we shall present an application to mean-variance hedging portfolio under the martingale measure Q. Specific applications are presented for geometric L´evyprocesses and for the stochastic volatility model such as the BNS model, and the Bates model (Heston volatility with jumps). We present the Donsker Delta approach in Chapter 5 in evaluating the generalized conditional expectation E[Dt,zF |Ft− ] and apply the technique for the binary option. In Chapter 6, we derive the Delta and Gamma for a barrier and lookback options for an exp-L´evyprocess using the methodology of Bernis, Gobet, and Kohatsu-Higa by employing suitable dominating processes. 5

1.3 Main Results

We briefly discuss the main results and contributions of this dissertation. Chapter 3 - Canonical L´evyWhite Noise Processes

• We have shown the alternative chaos expansion for F ∈ L2(P ) the Canonical L´evyprocess in Proposition 3.3.4. The proof of the the chaos expansion uses the chaotic representation property in Theorem 3.2.4 by Nualart and Schoutens [61]. From the results of Sol´e,Utzet, and Vives in Theorems 3.3.1-3.3.3, [78], we are able to construct the alternative chaos expansion Canonical L´evyprocess. In addition, we have shown the isometry relation in Theorem 3.3.1 for this chaos expansion.

This alternative chaos expansion for the Canonical L´evyprocess is new. From this expansion, we characterize the white noise theory using some family of fuc- tion spaces of stochastic test functions and distribution functions. This charac- terization is an extension of the Wiener case [44] and the Poisson case [64], [27].

• Let X(t) be the square-integrable L´evyprocess given by (2.49). Then, in Chap- ter 3.5 we introduce the white noise L´evyprocess X˙ (t) and show that X˙ (t) is the derivative of X(t) in (S)∗. Also, we introduce the L´evywhite noise field M˙ (t, x) and we show the Radon-Nikodym derivative relation M(dt, dx) = M˙ (t, x)µ(dt, dx) in (S)∗ in (3.163).

• The concepts of white noise theory to the Canonical L´evyspace is presented in Chapter 3.4 - 3.11. Moreover, these concepts has a parallel analog Wiener and Poisson cases [27], [24], [66], [67].

– Closability of the stochastic derivative Dt,z (Theorem 3.7.1).

2 ∗ – F ∈ L (P ) implies Dt,zF ∈ G (Theorem 3.7.2)

– Fundamental Theorem of stochastic calculus in G∗ (Theorem 3.9.1)

– Wick-Skorohod identity (Theorem 3.9.5). 6

– Clark-Ocone theorem for Wick polynomials (Theorem 3.10.3) and L2(P ) (Theorem 3.10.5)

• We give a multivariate extension to the white theory for Canonical L´evyspace (Chapter 3.11).

Chapter 4 - Clark-Ocone Theorem Under The Change of Measure and Mean- Variance Hedging

• We show Clark-Ocone theorem under the change of measure (Theorem 4.2.2)

2 2 2 for F ∈ L (P ) ∩ L (Q) is FT -measurable and FZ(T ) ∈ L (P ).

• We show the mean-variance hedging portfolio with partial information under the martingale measure (Theorem 4.3.1). Furthermore, we give some specific examples of finding the mean-variance hedging portfolio in the following models:

– Geomteric L´evyProcesses (Chapter 4.3.3)

– BNS model (Chapter 4.3.4)

– Bates model (Chapter 4.3.5)

Chapter 5 - Donsker Delta And Its Application to Finance

• The generalized conditional expectation E[Dt,zF |Ft− ] is evaluated using the Donsker Delta approach (Chapter 5.2).

• From this result, we give an example of evaluating the mean-variance hedging porfolio for a binary option under the Merton model (Chapter 5.3.1) and the continuous case (Chapter 5.3.2).

Chapter 6 - Evaluating Greeks In Exotic Options

• We derive the Delta (Theorem 6.5.1) and Gamma (Theorem 6.6.1) for a barrier and lookback options for an exp-L´evyprocess using the methodology of Bernis, Gobet, and Kohatsu-Higa. A suitable dominating process was constructed for continuous and discrete monitoring case (Chapter 6.7). 7

2. PRELIMINARIES

2.1 L´evyProcesses

We present some background on L´evyprocesses [7], [27], [73], [78]. Let (Ω, F,P ) be a complete probability space. A L´evyprocess X = {X(t): t ≥ 0} is a with the following properties:

1. X(0) = 0,P − a.s.,

2. X(t) has independent increments,

3. X(t) has stationary increments,

p 4. X(t) is stochastically continuous, that is, X(s) → X(t), as s → t.

The Poisson random measure also known as the jump measure N :Ω×[0,T ]×R0 → N0 is a counting measure defined as X N(A) = 1{s:(s,∆X(s))∈A},A ∈ B([0,T ] × R0) (2.1) s∈(0,t]

− where R0 = R\0 and ∆X(t) = X(t) − X(t ) is the jump of X at time t. The L´evy measure ν of X is defined as the expectation of N as follows:   X ν(B) = E[N((0, 1] × B)] = E  1{s:∆X(s)∈B} ,B ∈ B(R0). (2.2) s∈(0,1] The L´evymeasure is a σ-finite measure and satisfies Z (1 ∧ z2)ν(dz) < ∞. (2.3) R0 The compensated Poisson random measure also known as the compensated jump ˜ measure N :Ω × [0,T ] × R0 → R is given by

N˜(dt, dz) = N(dt, dz) − dtν(dz). (2.4) 8

The L´evyprocess X(t) has integral representation known as the L´evy-Itˆodecom- position theorem.

Theorem 2.1.1 L´evy-Itˆodecomposition theorem

Let X(t) ∈ R be a L´evyprocess, then there exists a triplet (a, σ2, ν) such that for all t ≥ 0 Z Z X(t) = at + σW (t) + zN(ds, dz) + zN˜(ds, dz). (2.5) [0,t]×{|z|≥1} [0,t]×{|z|<1}

The triplet (a, σ2, ν) is known as the L´evytriplet or the characteristic triplet. Like- wise, we can write the L´evyprocess representation as follows: Z X(t) = bt + σW (t) + zN˜(ds, dz) (2.6) [0,t]×R0 where Z b = a + zν(dz). (2.7) |z|≥1 The characteristic function of the L´evyprocess is given by the L´evyKhintchine for- mula [27].

Theorem 2.1.2 L´evyKhintchine formula

Let X(t) ∈ R be a L´evyprocess in law then a necessary and sufficient condition that its characteristic function is given as

E[exp(iuX(t))] = exp(Ψ(u)t) (2.8) where Ψ(u) is the characteristic exponent given by

1 Z Ψ(u) = iau − σ2u2 + (exp(iuz) − 1 − iuz1 )ν(dz) (2.9) 2 {|z|<1} R0

2 where α ∈ R, σ ≤ 0 are constants and ν = ν(dz), z ∈ R0 is σ-finite measure in

B(R0) satisfying Z (1 ∧ z2)ν(dz) < ∞. (2.10) R0 9

From the L´evy-Itˆorepresentation theorem, it is natural to consider an Itˆo-L´evypro- cess of the form of Z t Z t Z X(t) = x + α(s) + β(s)dW (s) + γ(s, z)N˜(ds, dz). (2.11) 0 0 [0,t]×R0 In short-hand SDE form, we have the following: Z dX(t) = α(t)dt + β(t)dW (t) + γ(t, z)N˜(dt, dz),X(0) = x. (2.12) R0

If the coefficients α(s), β, and γ(s, x) > −1 are predictable for all (s, x) ∈ R+ × R0 such that Z  Z  |µ(s)| + σ2(s) + θ2(s, z)ν(dz) ds < ∞, P a.s. (2.13) R R0 Then the stochastic in (2.11) are . Furthermore, if we impose the following square-integrablilty condition Z  Z   E |µ(s)| + σ2(s) + θ2(s, z)ν(dz) ds < ∞. (2.14) R R0 Then the stochastic integrals in (2.11) are martingales. Now, we present Itˆo’slemma for Itˆo-L´evyproceses .

Theorem 2.1.3 [27] Let X(t) be an Itˆo-L´evyprocess given by (2.12) and let F ∈

1,2 C (R+ × R; R) and define Y (t) = F (t, X(t)), then Y (t)is a Itˆo-Lˆevyprocess with SDE

dY (t) ∂ ∂ 1 ∂2 = F (t, X(t))dt + F (t, X(t)) (α(t)dt + β(t)dW (t)) + F (t, X(t))β2(t)dt ∂t ∂x 2 ∂x2 Z  ∂  + F (t, X(t) + γ(t, z) − F (t, X(t)) − F (t, X(t))γ(t, z) ν(dz)dt ∂x R0 Z + F (t, X(t) + γ(t, z) − F (t, X(t)−) N˜(dt, dz). (2.15) R0 T Extending the Itˆo-Lˆevyin the multidimensional case, X(t) = (X1(t), ··· ,Xn(t)) , we have following form Z dX(t) = α(t)dt + β(t)dW (t) + γ(t, z)N˜(ds, dz),X(0) = x. (2.16) R0 10

where α(t) ∈ Rn, β(t) ∈ Rn×d, and γ(t, z) ∈ Rn×l are predicable processes, W (t) = T (W1(t), ··· ,Wd(t)) is a vector of d-dimensional independent Wiener process and T ˜  ˜ ˜  N(dt, dz) = N1(dt, dz1), ··· , N1(dt, dz1) is a vector of l-dimensional independent

compensated Poisson random measures. That is, the SDE for Xi(t) is given as follows: d Z X ˜ dXi(t) =αi(t) + βij(t)dWj(t) + γij(t, zj)Nj(dt, dzj) j=1 R0

Xi(0) =xi, i ∈ {1, ··· , n}. (2.17)

Then, we have the following Itˆo’slemma for the multidimensional case.

Theorem 2.1.4 [27] Let X(t) be a -Itˆo-L´evyprocess given by (2.17) and let F ∈

1,2 n C (R+ × R ; R) and define Y (t) = F (t, X(t)), then Y (t) is a Itˆo-Lˆevyprocess with SDE

dY (t) n ∂ X ∂ = F (t, X(t))dt + F (t, X(t))α (t)dt ∂t ∂x i i=1 i n d n d X X ∂ 1 X X ∂2 + β (t)dW (t) + F (t, X(t))(ββT ) (t)dt ∂x ij j 2 ∂x ∂x ij i=1 j=1 i i=1 j=1 i i l n ! X Z X ∂ + F (t, X(t) + γj(t, z)) − F (t, X(t)) − F (t, X(t))γ (t, z ) ν(dz )dt ∂x ij j j j=1 R0 i=1 i l Z X j −  ˜ + F (t, X(t) + γ (t, z)) − F (t, X(t )) Nj(dt, dzj). (2.18) j=1 R0 where γj is the jth column of γ.

The following theorem states criterion for a L´evyprocess concerning to the vari- ation process and moments.

2 Theorem 2.1.5 [73] Let X = {X(t)}t≥0 with characteristic triplet (a, σ , ν).

(i) X has a finite variation process iff Z σ = 0, |z|ν(dz) < ∞. (2.19) |z|<1 11

(ii) X has a finite nth absolute moment, where n ∈ N, that is, Z E[|X(t)|n] < ∞, ∀t > 0 ⇔ |z|ν(dz). (2.20) |z|≥1

(ii) X has a finite exponential moment E[euX(t)] where u ∈ R and ∀t > 0 iff Z euzν(dz) < ∞. (2.21) |z|≥1 In this case,

E[euX(t)] = etΨ(−iu). (2.22)

2.2 Moment Inequalities

We introduce some moment inequalities that will be useful in finding upper bound of moments for both continuous and pure jump case [7], [53]. Let F be a square- integrable, adapted process. Denote the following Wiener integral as follows: Z t M(t) = F (s)dW (s). (2.23) 0 Then M is a square-intergrable martingale. From the Burkholder’s inequality, fol- lowed by Doob’s martingale inequality, then for any p ≥ 2, there exists Cp > 0 such that " # p p/2 E sup |M(s)| ≤ CpE [M,M]t . (2.24) s∈[0,t] On the other hand, let H be a predictable process and denote the following compen- sated Poisson integral as follows: Z I(t) = H(s, z)N˜(ds, dz) (2.25) [0,t]×A

where A ∈ B(R0). Then for p ≥ 2, there exists Dp > 0 such that " # E sup |I(t)|p s∈[0,t] "Z p/2# Z ! 2 p ≤Dp E |H(s, z)| ν(dz)ds + E |H(s, z)| ν(dz)ds . [0,t]×A [0,t]×A (2.26) 12

2.3 Geometric L´evy Processes

We let Ss, s ∈ [0,T ] be a risky-asset (i.e. stock) price process modeled as geometric L´evyProcess of the form  Z  dS(s) =S(s) µ(s)ds + σ(s)dW (s) + θ(s, z)N˜(ds, dz) , s ∈ [t, T ], R0 S(t) =z. (2.27)

We denote Y (s) = log S(s), s ∈ [0,T ] (2.28)

to be the log-returns. Then, from Itˆo’s lemma by taking f(t, x) = log x, we obtain, a Itˆo-L´evyprocess of the form of Z dY (s) =α(s)ds + β(s)dW (s) + γ(s, z)N˜(ds, dz), s ∈ [t, T ], R0 Y (t) =y (2.29)

where

 σ2(s) Z α(s) = µ(s) − + [log(1 + θ(s, z)) − θ(s, z)]ν(dz)ds, 2 R0 β(s) =σ(s),

γ(s, z) = log(1 + θ(s, z)),

y = log x (2.30)

where y is a constant, α(s), β(s), and γ(s, z) > −1 are deterministic for all (s, z) ∈

[0,T ] × R0 such that Z T  Z  |α(s)| + β2(s) + γ2(s, z)ν(dz) ds < ∞. (2.31) 0 R0 Then we have the following conditional characteristic function as stated in the fol- lowing lemma. 13

Lemma 2.3.1  Z T  u2β(s) E[ exp(iuY (T ))|Ft] = exp iuY (t) + α(s) − ds t 2 Z  + (exp(iuγ(s, z)) − 1 − iuγ(s, z)) ν(dz)ds (2.32) [t,T ]×R0 Proof We let F = F (s, x) = exp(iuY (s)). (2.33)

Then, from Itˆo’slemma for Fs = F (s, Y (s))  Z  ˜ dFs = Fs a(s)ds + b(s)dW (s) + c(s, z)N(ds, dz) (2.34) R0 where  u2β(s) Z a(s) = α(s) − + (exp(iuγ(s, z)) − 1 − iuγ(s, z)) ν(dz), 2 R0 b(s) =iuβ(s),

c(s, z) = eiu(s,z) − 1 . (2.35)

Integrating (2.34) we get Z T Z T Z ˜ FT = Ft + a(s)Fsds + b(s)FsdW (s) + c(s, z)N(ds, dz). (2.36) t t [t,T ]×R0

Taking the conditional expectation with respect to Ft gives us Z T E[FT |Ft] = Ft + a(s)E[Fs|Ft]ds. (2.37) t

We let m(s) = E[Fs|Ft], then by differentiating the above equation, we obtain the following ODE

dm(s) =a(s)m(s), s ∈ [t, T ],

m(t) =Ft (2.38)

Solving the ODE gives us Z s  m(s) = Ft exp a(u)du . (2.39) t Hence, we finally obtain the desired result. 14

2.4 Stochastic Differential Equations

We present the conditions for the existence of the strong solutions for L´evypro- cesses namely: Lipschitz and growth conditions. Let X be a cadlag with the following SDE Z dX(t) = α(t, X(t))dt + β(t, X(t))dW (t) + γ(t, X(t), z)N˜(ds, dz) (2.40) R0 where α, β : R+ ×R → R be jointly measurable and Ft-adapted, γ : R+ ×R×R0 → R be jointly measurable and Ft predictable. We say that the SDE in (2.40) has a strong solution if its X(t) pathwise unique Ft adapted solution. To ensure a strong solution, the following conditions should be satisfied [76]:

(i) Growth conditions :

|α(t, x)| ≤c(t)(1 + |x|), Z |β(t, x)|2 + |γ(t, x, z)|2ν(dz) ≤c(t)(1 + |x|2) (2.41) R0 where c(t) ≥ 0 is some deterministic function such that Z T C(T ) ≡ c(t)dt < ∞ ∀T > 0, (2.42) 0 (ii) Lipschitz conditions:

|α(t, x) − α(t, y)| ≤c(t)|x − y|, Z 2 2 2 2 |β(t, x) − β(t, y)| + |γ(t, x, z) − γ(t, y, z)| ν(dx) ≤ K1|x − y| ≤c(t)|x − y| R0 (2.43)

(iii) Initial conditions:

2 X(0) ∈ F0,E[X (0)] < ∞. (2.44)

The existence of the strong solution implies that " # E sup X2(t) ≤ k(T ) < ∞ (2.45) t∈[0,T ] where k(T ) depends on T and C(T ) only. 15

2.5 Canonical L´evy Space

The usual Canonical L´evyspace is constructed from the set of cadlag functions with the σ-field generated by the cylinders and with the measure given by the Kol- mogorov extension theorem [73]. The alternative construction of the Canonical L´evy space by Sol´e,Utzet, and Vives [77] was constructed to provide a probabilistic inter- pretation of the Malliavin derivative Dt,x. In their construction, the gradient operator becomes the sum of a derivative and increment quotient operators [5]. Consider the Canonical L´evyProcess

(Ω, F,P ) = (ΩW × ΩJ , FW ⊗ FJ ,PW ⊗ PJ ) (2.46) where (ΩW , FW ,PW ) is the Canonical Wiener space and (ΩJ , FJ ,PJ ) is the Canonical jump L´evyspace. If X(t) is a L´evyProcess with triplet (a, σ2, ν). From the L´evy-Itˆo decomposition [27], X(t) can be expressed as follows: Z X(t) = bt + σW (t) + zN˜(ds, dz) (2.47) [0,t]×R0 where Z b = a + zν(dz). (2.48) |z|≥1 Let X(t) be a centered, square-integrable L´evyprocess, then X(t) can be written as follows: Z X(t) = σW (t) + zN˜(ds, dz). (2.49) [0,t]×R0 Its characteristic function is given by  1 Z   E (exp(iuX(t))] = exp − σ2u2 + (exp(iuz) − 1 − iuz)ν(dz) t . (2.50) 2 R0 From the the moment theorem [7], for p ≥ 1, Z E[|X(t)|p] < ∞ ⇔ |z|pν(dz) < ∞. (2.51) |z|≥1 Hence, from the square-integrable assumption of X(t), (2.51) and (2.3) implies Z z2ν(dz) < ∞. (2.52) R0 16

Itˆo[50] has extended the centered square-integrable L´evyprocess X to an inde- pendent measure M on (R+ × R, B(R+ × R)) can be constructed as follows Z Z M(E) = σ dW (t) + zdN˜(dt, dz) (2.53) 0 E0 E

0 where E ∈ B(R+ × R), E0 = {t ∈ R+ :(t, 0) ∈ E} and E = E \ E0. Then for

E1,E2 ∈ B(R+ × R) such that µ(E1) < ∞, µ(E2) < ∞

E[M(E1)M(E2)] = µ(E1 ∩ E2) (2.54)

where µ is a measure on ([0,T ] × R, B([0,T ] × R) and Z Z 2 2 µ(E) = σ dt + z dν(z)dt, E ∈ B([0,T ] × R). (2.55) 0 E0 E In differential form, we have

2 2 µ(dt, dz) = σ dδ0(z)dt + z (1 − δ0(z))dν(z)dt = λ(dt)η(dz) (2.56)

where λ(dt) = dt is the Lebesgue measure and

2 2 η(dz) = σ dδ0(z)dt + z (1 − δ0(x))dν(z). (2.57)

2.6 Iterated L´evy-ItˆoIntegral

Let f ∈ L2(µn) = L2((λ × η)n) = L2(([0,T ] × R)n) be a deterministic function such that Z Z 2 2 ||f||L2(µn) = ··· |f ((t1, z1) ··· (tn, zn)) | µ(dt1, dz1) ··· µ(dtn, dzn) < ∞. [0,T ]×R [0,T ]×R (2.58)

∧ The symmetrization of f denoted by f over the pairs (t1, x1), ··· , (tn, xn) is given by

1 X f ∧((t , z ), ··· (t , z )) = f((t , z ), ··· (t , z ) (2.59) 1 1 n n n! σ(1) σ(1) σ(n) σ(n) σ∈Sn where σ = (σ(1), ··· , σ(n)) is a permutation of {1, ··· , n} and Sn is the set of permutations of {1, ··· , n}. 17

Denote Sn = {(t1, z1), ··· (tn, zn) : 0 < t1, ··· tn < T, xi ∈ R, i ∈ {1, ··· n}}. For 2 n f ∈ L (µ ) define the n-fold iterated integral by over Sn as follows: Z Z Z Jn(f) ≡ ··· f ((t1, z1) ··· (tn, zn)) − − [0,T ]×R [0,tn )×R [0,t2 )×R

M(dt1, dz1) ··· M(dtn−1, dzn−1)M(dtn, dzn). (2.60)

Also, for f ∈ L2(µn), define the n-fold iterated integral over ([0,T ] × R)n as follows: Z Z Z In(f) ≡ ··· f ((t1, z1) ··· (tn, zn)) [0,T ]×R [0,T ]×R [0,T ]×R

M(dt1, dz1) ··· M(dtn−1, dzn−1)M(dtn, dzn). (2.61)

2 n 2 n Denote Ls(µ ) be the subspace of symmetric functions in L (µ ). Then, for f ∈ 2 n Ls(µ ), we have the following identity:

In(f) = n!Jn(f). (2.62)

The integrated integral In has the following properties [78]:

1. Symmetrization

∧ 2 n In(f) = In(f ), f ∈ L (µ ), (2.63)

2. Linearity

2 n In(af + bg) = aIn(f) + bIn(g), f, g ∈ L (µ ), a, b ∈ R, (2.64)

3. Isometry

∧ ∧ 2 n 2 m E[In(f)Im(g)] = n! hf , g iL2(µn) δmn, f ∈ L (µ ), g ∈ L (µ ). (2.65)

Itˆohas has shown the following chaos expansion for the L´evyspace.

Theorem 2.6.1 [50] Let F ∈ L2(P ), then F has chaos expansion given by ∞ X F = In(fn) (2.66) n=0 2 n where we set I0(f0) = E[F ]. The chaos expansion is unique if fn ∈ Ls(µ ) for all n ∈ N. Furthermore, we have the following isometry relation: ∞ 2 X 2 2 n ||F ||L2(P ) = n!||fn||L2(µn), fn ∈ Ls(µ ). (2.67) n=0 18

2.7 Skorohod Integral

Definition 2.7.1 [77], [78] Let F ∈ L2(P × µ) with chaos expansion of the form of

∞ X F (t, z) = In(fn(·, (t, z))) (2.68) n=0 such that ∞ X ˜ 2 (n + 1)!||f||L2(µn+1) < ∞ (2.69) n=0

˜ 2 n+1 where fn ∈ Ls(µ ) . Then we define the Skorohod integral of F with respect to M as follows: Z ∞ X ˜ δ(F ) = F (t, x)M(δt, dx) = In+1(fn). (2.70) R+×R n=0 We say that F is Skorohod integrable if it converges in L2(P ), that is,

∞ 2 X ˜ 2 ||δ(F )|| = (n + 1)!||fn||L2(µn+1) < ∞. (2.71) n=0 Definition 2.7.2 [77], [78], [80] Let F ∈ L2(P ) with chaos expansion of the form of (2.66). Denote the set D1,2 ≡ Dom D is the set F ∈ L2(P ) such that ∞ X 2 nn!||fn||L2(µn) < ∞. (2.72) n=1

For F ∈ Dom D, the Malliavin derivative DF :Ω × [0,T ] × R → R defined by ∞ X Dt,zF = nIn−1 (fn(·, (t, z))) . (2.73) n=1 with convergence in L2(P × µ). Moreover, we have the following:

∞ 2 X 2 ||Dt,zF ||L2(P ×µ) = nn!||fn||L2(µn) < ∞. (2.74) n=1 Dom D is a Hilbert space with scalar product of F,G ∈ Dom D Z  < F, G >= E[FG] + E Dt,zFDt,zGµ(dt, dz) (2.75) [0,T ]×R and D is a closed operator from Dom D to L2(P × µ). 19

For f : ([0,T ] × R)n → R, we have the following decomposition: Z Z fdµ⊗n = σ2 f(·, (t, 0))dtdµ⊗n−1 ([0,T ]×R)n [0,T ]×([0,T ]×R)n−1 Z + f(·, (t, z))z2ν(dz)dtdµ⊗n−1 (2.76) n−1 [0,T ]×R0×([0,T ]×R) We define the spaces DomD0 and DomD1 as follows.

Definition 2.7.3 [77], [78], [80] Let F ∈ L2(P ) with chaos expansion of the form of (2.66).

(i) DomD0 is the set of F ∈ L2(P ) such that σ > 0,

∞ Z T X 2 2 nn! ||fn(·, (t, 0))||L2(µn−1)σ dt < ∞. (2.77) n=1 0

For F ∈ Dom D0, we define

∞ X Dt,0F = nIn−1 (fn(·, (t, 0))) (2.78) n=1

with convergence in L2(P × λ).

(ii) DomD1 is the set of F ∈ L2(P ) such that ν 6= 0 and

∞ Z X 2 2 nn! ||fn(·, (t, z))||L2(µn−1)z ν(dz)dt < ∞. (2.79) n=1 [0,T ]×R0

For F ∈ Dom D1, we define

∞ X Dt,zF = nIn−1 (fn(·, (t, z))) , z 6= 0 (2.80) n=1

with convergence in L2(P × z2ν(dz)dt).

Remark 2.7.1 If σ > 0 and ν 6= 0, then Dom D = Dom D0∩ Dom D1 ⊂ L2(P ). 20

Theorem 2.7.2 [80] Chain Rule

1,2 1 n Let F = (F1, ··· ,Fn), Fi ∈ D for i ∈ {1, ··· , n} and ϕ ∈ C (R , R). Suppose that

(i) ϕ(F ) ∈ L2(P ),

Pn ∂ϕ(F ) 2 (ii) Dt,0F ∈ L (P × λ), k=1 ∂xk

ϕ(F1+zDt,zF1,··· ,Fn+zDt,zFn)−ϕ(F1,··· ,Fn) 2 2 (iii) z ∈ L (P × z ν(dz)dt), then ϕ(F ) ∈ D1,2 and n X ∂ϕ(F ) Dt,zϕ(F ) = Dt,0Fk1{z=0} ∂xk k=1 ϕ(F + zD F , ··· ,F + zD F ) − ϕ(F , ··· ,F ) = + 1 t,z 1 n t,z n 1 n 1 . (2.81) z {z6=0}

Definition 2.7.4 [77], [78] The space L1,2 Let F ∈ L2(P ×µ) with chaos expansion of the form of (2.66). such that F (t, z) ∈ D1,2 for all (t, z) ∈ [0,T ] × R µ-a.e., DF ∈ L2(P × µ⊗2). Then the chaos expansion of F is equivalent to ∞ X ˆ 2 nn!||fn||L2(µn+1) < ∞. (2.82) n=1

Remark 2.7.3 The above chaos expansion implies L1,2 ⊂ D1,2.

We state some of the important characterization of L1,2

(i) Let F,G ∈ L1,2, then Z  E[δ(F )δ(G)] = E F (t, z)G(t, z)µ(dt, dz) [0,T ]×R Z  + E Dt,zF (s, x)Dt,xG(s, x)µ(ds, dx)µ(dt, dz) . (2.83) ([0,T ]×R)2

1,2 (ii) Let F ∈ L such that Dt,zF ∈ Dom δ for all (t, z) ∈ [0,T ] × R µ-a.e. Then δ(F ) ∈ D1,2 and

Dt,zδ(F ) = F (t, z) + δ(Dt,zF ) (2.84)

for all (t, z) ∈ [0,T ] × R µ-a.e. 21

2.8 Predictable Process

Definition 2.8.1 [27] Predictable Process A predictable process is a stochastic process measurable with respect to the σ-field generated by

A × (s, t] × B,A ∈ Fs, 0 ≤ s < t, B ∈ B(R0). (2.85)

Note: Any measurable F-adapted and left-continuous (with respect to t) process is predictable [27].

We shall present some important theorems related to predictable process. The first theorem is the isometry relation prsented by the following theorem.

Theorem 2.8.1 [7], [78] Let F and G be µ-square integrable predictable processes, then Z Z  E F (t, z)M(dt, dz) F (t, z)M(dt, dz) [0,T ]×R [0,T ]×R Z  =E F (t, z)G(t, z)µ(dt, dz) . (2.86) [0,T ]×R Theorem 2.8.2 [78] Let F be µ-square integrable predictable processes, then Z Z T Z F (t, z)M(dt, dz) = σ F (t, 0)dW (t) + zF (t, z)N˜(dt, dz). (2.87) [0,T ]×R 0 [0,T ]×R0 Theorem 2.8.3 [78] Let F ∈ L2(P × µ) be a predictable processes. Then F ∈ Dom(δ) and Z δ(F ) = F (t, z)M(dt, dz). (2.88) [0,T ]×R Finally, we have the Clark-Ocone theorem in D1,2 stated as follows.

1,2 Theorem 2.8.4 [78] Let F ∈ D be FT -measurable, then Z F = E[F ] + E[Dt,zF |Ft− ]M(dt, dz) (2.89) [0,T ]×R where M is independent measure given by (2.53). 22

3. CANONICAL LEVY´ WHITE NOISE PROCESSES

3.1 Construction of Canonical L´evyWhite Noise Process

We construct the Canonical L´evywhite noise process [56] using a parallel proce- dure in deriving Wiener and Poisson white noise process [46]. Let S ≡ S(R) be the Schwartz space of test functions which consists of rapidly decreasing smooth functions

f ∈ C∞(R) such that α (β) ||f||α,β = sup |x f (x)| < ∞. (3.1) x∈R

In addition, S(R) is a Fr´echet space with respect to the seminorm kfkα,β. Its dual S0 ≡ S0(R) is the Schwartz space of tempered distribution functions endowed with a weak* topology. The action of ω ∈ S0(R) on φ ∈ S(R) given by the mapping w : S(R) × S0(R) → R w(φ, ω) =< ω, φ > . (3.2)

Moreover, we have the following inclusions:

2 0 S(R) ⊂ L (P ) ⊂ S (R). (3.3)

We construct the Canonical L´evywhite noise process on the Ω = S0(R) using the Bochner-Minlos theorem which is stated as follows:

Theorem 3.1.1 A necessary and sufficient condition for the existence of a probability

measure P on S0(R) such that Z g(φ) = E[ei<ω,φ>] = ei<ω,φ>dP (ω) (3.4) S0(R) satisfies the following conditions:

a.) g(0) = 1, 23

0 b.) g is positive definite, that is, for φi ∈ S (R) and ci ∈ C such that c = T (c1 ··· cn) 6= 0n, i ∈ {1, ··· , n}, ∀n ∈ N,

n n X X cicjg(φi − φj) > 0, (3.5) i=1 j=1

c.) g is continuous in Fr´echetTopology.

In our construction, we let Z  g(φ) = exp Ψ(φ(y))dy (3.6) R where σ2u2 Z Ψ(u) = − + (eiuz − iuz − 1)ν(dz). (3.7) 2 R0 Claim The functional g satisfies the Bochner-Minlos theorem.

Proof We can express g as the product

g(φ) = f(φ)h(φ) (3.8)

where  σ2 Z   σ2  f(φ) = exp − |φ(y)|2dy = exp − kφk2 (3.9) L2(R) 2 R 2 Z Z  h(φ) = exp (eiφ(y)z − iφ(y)z − 1)ν(dz)dy . (3.10) R R0 Then, f and h satisfies the Bochner-Minlos theorem corresponding to the Wiener and the compensated Poisson case respectively [46]. Clearly, g satisfies conditions (a) and (c) of the Bochner-Minlos condition. It is suffice to check (b) to prove our assertion. Define the following n × n matrices:

Gn = {g(φi − φj)},Fn = {f(φi − φj)},Hn = {h(φi − φj)}. (3.11)

From (3.8) and (3.11),

Gn = {g(φi − φj)}i,j∈{1,···n} = {f(φi − φj)h(φi − φj) = Fn Hn. (3.12) 24 where denotes the Hadamard product. Since f and h are positive definite, so does the matrices Fn and Hn is also positive definite for all n ∈ N. By the Schur’s product theorem [47] implies Gn is positive definite. Since this holds for all n ∈ N, then g is positive definite. Thus, this proves our assertion.

By taking φ(y) = tϕ(y) with t ∈ R fixed then from (3.7), we obtain

E[eit<ω,ϕ>]  σ2t2 Z Z Z  = exp − |ϕ(y)|2dy + (exp(itzϕ(y)) − itzϕ(y) − 1)ν(dz)dy . (3.13) 2 R R R0

Claim Let ϕ ∈ S(R), then

E[< ω, ϕ >] =0, (3.14) Z E[< ω, ϕ >2] =ζ ϕ(y)dy (3.15) R where Z ζ = σ2 + z2ν(dz). (3.16) R0

∞ Proof By density argument, it is suffice to show the identity for ϕ ∈ C0 (R). Let the L´evydensity ν ∈ [−r, r]\{0} for some r > 0. Then by expanding the terms in (3.13) by Taylor expansion, we obtain:

∞ Xintn E[< ω, ϕ >n] n! n=0 ∞ ∞ ! !n X 1 Z σ2t2ϕ2(y) Z X iktkzkϕk(y) = − + ν(dz) dy . (3.17) n! 2 k! n=0 R R0 k=2

Collecting the t and t2 coefficients yields the desired result.

We extend the definition of < ω, φ > from φ ∈ S(R) to L2(R). Since S(R) is dense 2 2 in L (R), then for ϕ ∈ L (R) arbitrary, there exists ϕn ∈ S(R) such that ϕn → ϕ in L2(R). By completeness of L2(R), as m, n → ∞

| < ω, ϕn > − < ω, ϕm > | = | < ω, ϕn − ϕm > | → 0. (3.18) 25

Hence, {< ω, ϕn >: n ∈ N} is a Cauchy sequence in R and its limit is < ω, ϕ >. ˜ 2 Then, define X(t, ω) ≡< ω, χ[0,t] > where χ[0,t] ∈ L (R) as follows:  1, s ∈ [0, t], t ≥ 0   χ[0,t] = −1, s ∈ [−t, 0), t < 0 (3.19)   0, otherwise.

Taking the characteristic function of X˜(t) yields

˜ E[ exp(iuX(t))] = E[exp(iu < ω, χ[0,t] >)] "Z σ2u2χ2 (y) Z ! # = exp − [0,t] + exp(iuz)χ (y)) − iuzχ (y) − 1 ν(dz) dy 2 [0,t] [0,t] R R0  σ2u2 Z   = exp − + (exp(iuz) − iuz − 1) ν(dz) t . (3.20) 2 R0 By the L´evy-Khinchine theorem, X(t) is a L´evyprocess and there exists a c`adl`ag modification of X˜(t), say X(t) which is a L´evyprocess [7]. The smoothed white noise process for the Canonical L´evyprocess is given by: Z 2 < ω, φ >= φ(t)dX(t, ω), ω ∈ Ω, φ ∈ L (R) (3.21) R where X(t) has the following representation:

Z t Z X(t) = σ dW (t) + zN˜(ds, dz). (3.22) 0 [0,t]×R0

We define the filtered probability space (Ω, F, {Ft}t≥0,P ) for the white noise 0 X X Canonical L´evyprocess where F = B(S (R)) and Ft = Ft ∨ N where Ft = σ{X(s): s ∈ [0, t]} is the σ-field generated by X up to time t and N are the P -null sets. 26

3.2 Construction of Alternative Chaos Expansion for Canonical L´evy processes

Nualart-Schoutens Chaos Decomposition

We assume that the L´evymeasure ν satisfies the so-called Nualart-Schoutens assumption [61]: for all ε > 0 there exists λ > 0 such that Z exp(λ|z|)ν(dz) < ∞. (3.23) R0\(−ε,ε) This assumption covers some important classes in L´evyprocesses such as the nor- mal (Gaussian), Poisson, gamma, negative binomial, and Meixner processes. This assumption implies the following implications:

1. The absolute moments are greater than or equal to 2 with respect to ν is finite, that is, for all p ≥ 2, Z |z|pν(dz) < ∞ (3.24) R0 and thus, X(t) has moments of all orders for all p ≥ 2.

2. The characteristic function E[exp(iuX(t))] is analytic in the neighborhood of

zero and the polynomials are dense in L2(R,P ◦ X(t)−1).

Power Jump Processes

Definition 3.2.1 Power Jump Processes X(i) = {X(i)(t): t ≥ 0}, i ∈ N  X  ∆(X(t))i, i > 1,  X(i)(t) = s∈(0,t] (3.25)  X(t), i = 1.

Then, from the representation in (2.47) we can express X(i) in integral form follows: Z  i  z N(ds, dz), i > 1 (i)  [0,t]×R0 X (t) = Z (3.26)  i ˜ bt + σW (t) + z N(ds, dz), i = 1. [0,t]×R0 27

The power jump processes X(i) is also a L´evyprocesses. In general,

X X(t) 6= ∆X(s) (3.27) s∈(0,t]

and the equality only holds for a pure jump processes (σ = 0) with bounded variation. Taking the expectation yields

(i) E[X (t)] = mi(t) (3.28) where Z  i  x ν(dx), i > 1 mi = R0 (3.29) b, i = 1.

Definition 3.2.2 Compensated Power Jump Processes Y (i) = {Y (i)(t): t ≥ 0}, i ∈ N given by Y (i)(t) = X(i)(t) − E X(i)(t) . (3.30)

Y (i) is referred to as the Teugels martingale of order i, and it is a normal martingale. Alternatively, we can express Y (i) in integral form follows: Z i ˜  z N(ds, dx), i > 1 (i)  [0,t]×R0 Y (t) = Z (3.31)  i ˜ σW (t) + z N(ds, dz), i = 1. [0,t]×R0 Moreover, the quadratic covariation and the predicatable covariation processes for the Teugels martingales Y (i) are as follows: Z (i) (j) 2 i+j [Y ,Y ]t =σ t1{i=j=1} + z N(ds, dz) [0,t]×R0 2 (i+j) =σ t1{i=j=1} + X (t), (3.32)

 Z  (i) (j) 2 i+j < Y ,Y >t= σ t1{i=j=1}t + z ν(dz) t R0 2 =(σ 1{i=j=1} + mi+j)t. (3.33) 28

The Spaces S1 and S2 [61]

Let S1 be the space of real polynomials in R+, that is, ( n ) X k−1 S1 = ckz : ck ∈ R, z ∈ R+, k ∈ {1, ··· n}, n ∈ N (3.34) k=0 endowed with the inner product << ·, · >>1 given by Z 2 2 << P, Q >>1=σ P (0)Q(0) + P (z)Q(z)z ν(dz) R0 Z = P (z)Q(z)η(dz) =< P, Q >L2(η) (3.35) R

where P,Q ∈ S1. Note that Z i−1 j−1 2 i+j << z , z >>1=σ 1{i=j=1} + z ν(dx) R0 2 =σ 1{i=j=1} + mi+j. (3.36)

2 Let {pi(z)}i∈N be the orthogonalization of {1, z, z , ···} in S1. From the Gram- Schmidt orthogonality procedure, we have the following:

p1(z) =1,

i−1 i−1 i i−1 X << pj(z), z >>1 X j−1 pi(z) =z − 2 pj(z) = aijz (3.37) j=1 kpj(z)k1 j=1 where  i−1 R i−1 << p (z), z >> pj(z)z η(dz)  j 1 R − 2 = R 2 , j ∈ {1, ··· , i − 1},  kp (z)k p (z)η(dz) aij = j 1 R j (3.38)  1, j = i.

Example For i = 2, p2(z) = a21 + a22z, where a22 = 1 and R zη(dz) R z3ν(dx) a = − R = − R0 . 21 R η(dz) σ2 + R z2ν(dx) R R0

On the other hand, let S2 be the space of linear transformations of Teugels martingales of the L´evyProcesses, that is,

( n ) X (k) S2 = ckY : ck ∈ R, k ∈ {1, ··· n}, n ∈ N (3.39) k=0 29

endowed with the inner product << ·, · >>2 given by

(i) (j) (i) (j) 2 << Y ,Y >>2= E[[Y ,Y ]1] = σ 1{i=j=1} + mi+j. (3.40)

i−1 (i) Then x ↔ Y is an isometry between S1 and S2. (i) (1) (2) (3) Let {H }i∈N be the orthogonalization of {Y ,Y ,Y , ···} in S2. Then, (i) {H }i∈N are strongly orthogonal martingales. From the Gram-Schmidt orthogo- nality procedure, we have the following:

H(1) =Y (1),

i−1 (j) (i) i−1 X << H ,Y >>2 X H(1) =Y (i) − = a∗ Y (i) (3.41) ||H(j)||2 ij j=1 2 j=1

where  << H(j),Y (i) >> E[[H(j),Y (i)] ]  2 1 − (j) 2 = − (j) , j ∈ {1, ··· , i − 1} ∗ ||H ||2 E[[H ]1] aij = (3.42)  1, j = i.

Lemma 3.2.1 The Gram-Schmidt coefficients in S1 and S2 coincide, that is,

∗ aij = aij, j ∈ {1, ··· , i}, i ∈ N. (3.43)

Proof We shall prove this lemma by induction. Base step: Since

p1(x) = a11 = 1 (3.44)

(1) ∗ (1) (1) H = a11Y = Y . (3.45)

Hence,

∗ a11 = a11 = 1. (3.46)

∗ Inductive step: Suppose that aij = aij, j ∈ {1, ··· , i}. Then, from the Gram- Schmidt procedure,

i+1 X j−1 pi+1(z) = ai+1,jz (3.47) j=1 30

where  i  << pj(z), z >>1 − 2 , j ∈ {1, ··· , i}, kpj(z)k ai+1,j = 1 (3.48)  1, j = i + 1. On the other hand,

i+1 (i+1) X ∗ (i) H = ai+1,jY (3.49) j=1

where  << H(j),Y (i+1) >>  2 − (j) 2 , j ∈ {1, ··· , i}, ∗ ||H ||2 ai+1,j = (3.50)  1, j = i + 1.

From the isometry relation between S1 and S2 and by induction hypothesis, we obtain

j i X k−1 i << pj(z), z >>1= ajk << z , z >>1 k=1 j X ∗ (k) (i+1) (j) (i+1) = ajk << Y ,Y >>2=<< H ,Y >>2, (3.51) k=1

j j 2 X X k−1 l−1 kpj(z)k1 = ajkajl << x , x >>1 k=1 l=1 j j X X ∗ ∗ (k) (l) (j) 2 = ajkajl << Y ,Y >>2= H 2 (3.52) k=1 l=1 then from (3.48), (3.50), (3.51), (3.52), yields

∗ ai+1,j = ai+1,j, j ∈ {1, ··· , i + 1}. (3.53)

Since H(i) are pairwise strongly orthogonal martingales which forms a linear i∈N combination of Y (j), j ∈ {1, ··· , i} of the form of

i (i) X ∗ (j) H = aijY (t). (3.54) j=1 31

For i 6= j, the product H(i)H(j) and the quadratic covariation process [H(i),H(j)] are both uniformly integrable martingales [69]. Since H(i),H(i) is a predictable covariation process such that H(i)H(j) − H(i),H(i) is a martingale, then

(i) (j) H ,H t = 0, i 6= j. (3.55)

Moreover, we have following quadratic covariation and predictable covariation process for H(i) i∈N

i j (i) (j) X X ∗ ∗ (i) (j) [H ,H ]t = aikajl[Y ,Y ]t k=1 l=1 i j 2 X X ∗ ∗ (k+l) =σ t + aikajlX , (3.56) k=1 l=1

i j (i) (j) X X ∗ ∗ (i) (j) < H ,H >t= aikajl < Y ,Y >t k=1 l=1 i j ! X X ∗ ∗ 2 = aikajlmk+l + σ tδij = qitδij (3.57) k=1 l=1 where i i 2 X X ∗ ∗ qi = σ + aikailmk+l. (3.58) k=1 l=1 Theorem 3.2.2 [54] Z < pi(x), pj(x) >L2(η)= pi(z)pj(z)η(dz) = qiδij (3.59) R

2 where qi = kpikL2(η) is given by (3.58).

Proof Since Z << pi(z), pj(z) >>1= pi(z)pj(z)η(dz). (3.60) R

From the isometry relation of S1 and S2, we obtain

k−1 l−1 (k) (l) z , z 1 = Y ,Y 2 . (3.61) 32

Then, from the preceding lemma, since the Gram-Schmidt coefficients in S1 and S2 coincide then, we have the following:

** i j ++ X k−1 X k−1 << pi(z), pj(z) >>1= aikz , ajlz k=1 k=1 1 ** i j ++ X ∗ (k) X ∗ (l) (i) (j) = aikY , ajlY =<< H ,H >>2 . (3.62) k=1 k=1 2

For i 6= j, H(i) and H(j) are strongly orthogonal, then the quadratic covariation process [H(i),H(j)] is a martingale hence,

(i) (j) (i) (j) (i) (j) << H ,H >>2= E[[H ,H ]1] = E[[H ,H ]0] = 0. (3.63)

On the other hand, for i = j, since

(k) (l) 2 (k+l) [Y ,Y ]t = σ t1{k=l=1} + X (t) (3.64) then

" i i # (i) (i) X ∗ (l) X ∗ (l) [H ,H ]t = aikY , ailY k=1 l=1 t i i X X ∗ ∗ 2 (k+l)  = aikaik σ t1{k=l=1} + X (t) k=1 l=1 i i 2 X X ∗ ∗ (k+l) =σ t + aikaikX (t). (3.65) k=1 l=1 Finally, taking the expectation at t = 1 gives us

i i (i) (i) 2 X X ∗ ∗ << pi(z), pi(z) >>1= E[[H ,H ]1] = σ + aikailmk+l = qi. (3.66) k=1 l=1 33

Chaotic and Predictable Representation Properties

Denote the following multiple integral for f ∈ L2([0,T ]n) with respect to the orthogonal martinagles H(i)’s:

(i1,··· ,in) Jn (f) − − Z T Z tn Z t2 (i1) (in−1) (in) = ··· f(t1, ··· tn−1, tn)dH (t1) ··· dH (tn−1)dH (tn). 0 0 0 (3.67)

Leon et al., [54] has shown orthogonality relationship between different multi- indices (i1, ··· , in) stated in the following theorem.

Theorem 3.2.3 [54] Let f ∈ L2([0,T ]n) and g ∈ L2([0,T ]m), then  Z  qi1 ··· qin f(t1, ··· , tn)g(t1, ··· , tn)dt1 ··· dtn   Σn E[J (i1,··· ,in)(f)J (j1,··· ,jm)(g)] = n n for n = m, (i1, ··· , in) = (j1, ··· , jn),   0, otherwise (3.68) where

Σn = {(t1, ··· , tn) : 0 < t1 < ··· tn ≤ T } (3.69) is the positive simplex of [0,T ]n.

Proof We prove the theorem by induction as well as the identity

(i) (j) < H ,H >t= qitδij. (3.70)

(i1,··· ,in) Note that Jn can be written recursively as follows:

Z T (i1,··· ,in) (in) Jn (f) = αndH (tn), 0 Z T (j1,··· ,jm) (im) Jn (g) = βmdH (tm) (3.71) 0 34

where − Z tk (ik−1) αk = αk−1dH (tk−1), k ∈ {2, ··· n}, α1 = f(t1, ··· , tn), 0 − Z tk (ik−1) βk = βk−1dH (tk−1), k ∈ {2, ··· m}, β1 = g(t1, ··· , tm). (3.72) 0 Case I: (m = n). Z T (i1,··· ,in) (j1,··· ,jn) E[Jn (f)Jn (g)] = qin δin,jn E[αnβn]dtn. (3.73) 0 Then, the desired result is obtained by induction. Case II: (m 6= n). Without loss of generality, we assume that m < n. Then, by induction

(i1,··· ,in) (j1,··· ,jm) E[Jn (f)Jn (g)] = qin−m+1 ··· qin δin−m+1,j1 ··· δin,jm · − − Z T Z tn−1 Z tn−m+2 ··· E[αn−m+1β1]dtn−m+2 ··· dtn−1dtn. (3.74) 0 0 0 Since

E[αn−m+1β1] " − − # Z tn−m+1 Z t2 (i1) (in−m+2) =E ··· f(t1, ··· , tn)dH (t1)dH (tn−m+2)g(t1, ··· , tm) 0 0 " − − # Z tn−m+1 Z t2 (i1) (in−m+2) =E ··· f(t1, ··· , tn)g(t1, ··· , tm)dH (t1)dH (tn−m+2) 0 0

=0 (3.75) then, we obtain the desired result.

Nualart and Schoutens [61] have shown the for every F ∈ L2(P ) can be represented in terms of the iterated integrals in terms of H(i).

Theorem 3.2.4 Chaotic Representation Property (CRP) [61] Every random variable F ∈ L2(P ) has a representation of the form of ∞ X X (j1,··· ,jn) F =E[F ] + Jn (fj1,··· ,jn ) n=1 j1,··· ,jn≥1 (3.76) 35

As a corollary to the CRP, they have shown a predictable representation in terms of in terms of H(i).

Theorem 3.2.5 Predictable Representation Property (PRP) [61] Every random variable F ∈ L2(P ) has a representation of the form of ∞ X Z T F = E[F ] + φ(n)(s)dH(n)(s) (3.77) n=1 0 where φ(j)(s) is a predictable process.

3.3 Alternative Chaos Expansion for Canonical L´evyprocesses

We present some important results all based in Sol´e,et al., [78] which is crucial in finding the alternative chaos expansion for the Canonical L´evyspace.

Theorem 3.3.1 [78] Let g = {g(t): t ∈ [0,T ]} be a predictable process such that Z T  E g2(t)dt < ∞. (3.78) 0

Then, g(t)pi(x) is integrable with respect to M and Z T Z (i) g(t)dH (t) = g(t)pi(x)M(dt, dx). (3.79) 0 [0,T ]×R

Proof For i = 1, p1(x) = 1 and Z t Z H(1) = Y (1) = σW (t) + zN˜(ds, dz). (3.80) 0 R0 Then Z T Z T  Z  g(t)dH(1)(t) = g(t) σdW (t) + zN˜(dt, dz) 0 0 R0 Z = g(t)p1(z)M(dt, dz). (3.81) [0,T ]×R For i > 1, since g(t) is predictable and so is g(t)zi. From Itˆoisometry, we obtain. "Z 2# Z  E g(t)ziN˜(ds, dz) =E g2(t)z2idtν(dz) [0,T ]×R0 [0,T ]×R0 Z Z T  = z2ν(dx) · E g2(t)dt < ∞ (3.82) [0,T ] 0 36

Hence, g(t)zi is square-integrable with respect to N˜ and thus, integrable with respect to N˜. In addition, with the square-integrability condition in (3.78) implies integra- bility with respect to W and thus to M. From the integral form of the compensated power jump process Y (j) for j > 1, Z T Z Z g(t)dY (j)(t) = g(t)zjN˜(dt, dz) = g(t)xj−1M(dt, dz). (3.83) 0 [0,T ]×R0 [0,T ]×R Hence, from the preceding equation, we finally obtain Z T i Z T (i) X (j) g(t)dH (t) = aij g(t)dY (t) 0 j=1 0 Z i X i−1 = g(t) aijz M(dt, dz) [0,T ]×R j=1 Z = g(t)pi(z)M(dt, dz). (3.84) [0,T ]×R

Corollary 3.3.2 [78] H(i)(t) can be expressed as the follows: Z (i) H (t) = pi(z)M(dt, dz). (3.85) [0,T ]×R Proof Take g = 1 from the preceding theorem.

Theorem 3.3.3 [78] Let f ∈ L2([0,T ]n), then

(j1,··· ,jn) Jn (f) = In(f(t1, ··· , tn)1Σn (t1, ··· , tn)pj1 (z1) ··· pjn (zn)). (3.86)

Proof Using the preceding corollary, we have as follows:

(j1,··· ,jn) Jn (f)

− − Z T Z tn Z t2 (j1) (jn−1) (jn) = ··· f(t1, ··· , tn)dH (t1) ··· dH (tn−1)dH (tn) 0 0 0 − − Z T Z Z tn Z Z t2 Z

= ··· f(t1, ··· , tn)pj1 (x1) ··· pjn (zn)M(dt1, dz1) ··· M(dtn, dzn) 0 0 0 Z R R R

= f(t1, ··· , tn)1Σn (t1, ··· , tn)pj1 (z1) ··· pjn (zn)M(dt1, dz1) ··· M(dtn, dzn) ([0,T ]×R)n

=In(f(t1, ··· , tn)1Σn (t1, ··· , tn)pj1 (z1) ··· pjn (zn)). (3.87) 37

We follow Benth, et al., [15] approach in comparing the relationship between Itˆo’s chaos expansion and the CRP. However, their approach was only limited with respect to chaos expansion with respect to the iterated integral of the compensated Poisson random measure N˜. With their result, Di Nunno was able to derive the alternative expansion in the Poisson case [24]. A discussion of the alternative chaos expansion for the Wiener and Poisson case is referred to the Appendix. With the results of the preceding section, we are able to use this relationship for the Canonical L´evycase. From Itˆo’schaos expansion [50] of the Canonical L´evy process we have ∞ X 2 n F = E[F ] + In(fn) fn ∈ Ls(µ ). (3.88) n=1 On the other hand, from Nualart-Schoutens CRP and from the preceding theorem 3.86, we obtain

∞ X X (j1,··· ,jn) F − E[F ] = Jn (fj1,··· ,jn (t1, ··· , tn)) n=1 j1,··· ,jn≥1 ∞ X X = In (fj1,··· ,jn (t1, ··· , tn)pj1 (z1) ··· pjn (zn)1Σn (t1, ··· , tn))

n=1 j1,··· ,jn≥1 ∞ ! X X = In fj1,··· ,jn (t1, ··· , tn)pj1 (z1) ··· pjn (zn)1Σn (t1, ··· , tn) .

n=1 j1,··· ,jn≥1 (3.89)

We let

X gn((t1, z1), ··· (tn, zn)) = fj1,··· ,jn (t1, ··· , tn)pj1 (z1) ··· pjn (zn)1Σn (t1, ··· , tn).

j1,··· ,jn≥1 (3.90) Then, by the uniqueness of the chaos expansion, we obtain

∧ fn = gn , ∀n ∈ N. (3.91)

Since

2 n fj1,··· ,jn (t1, ··· , tn)1Σn (t1, ··· , tn) ∈ L (λ ) (3.92) 38 and the Hermite functions1 {e } forms an orthonormal basis in L2(λ), then we n n∈N can express (3.92) as follows:

X (j ,··· ,j ) f (t , ··· , t ) = γ 1 n e (t ) ··· e (t ). (3.93) j1,··· ,jn 1 n i1,··· ,in i1 1 in n i1,··· ,in≥1

We let pi(z) πi(z) = , i ∈ N (3.94) kpikL2(η) 2 then, from (3.59), {πi}i∈N are orthonormal basis functions in L (η). Denote

c(j1,··· ,jn) = kp k · · · kp k γ(j1,··· ,jn). (3.95) i1,··· ,in i1 L2(η) in L2(η) i1,··· ,in

Hence, we can express (3.90) in terms of orthonormal basis functions in L2(µn) as follows:

X X (j ,··· ,j ) g ((t , z ), ··· , (t , z )) = c 1 n e (t )π (z ) ··· e (t )π (z ). n 1 1 n n i1,··· ,in i1 1 j1 1 in n jn n i1,··· ,in≥1 j1,··· ,jn≥1 (3.96) Since the symmetrization operator is linear operator, that is, (af + bg)∧ = af ∧ + bg∧ where a, b, ∈ R, then

X X (j ,··· ,j ) ∧ g∧((t , z ), ··· (t , x )) = c 1 n (e (t )π (z ) ··· e (t )π (z )) . n 1 1 n n i1,··· ,in i1 1 j1 1 in n jn n i1,··· ,in≥1 j1,··· ,jn≥1 (3.97)

2 n We want to express (3.97) in terms of orthogonal functions in Ls(µ ). We shall adapt the same trick Di Nunno applied in the Poisson case using the Cantor diagonalization technique [24]. Denote the Cantor diagonalization mapping κ : N × N as follows: (i + j − 2)(i + j − 1) κ(i, j) = j + . (3.98) 2

Let k = κ(i, j) and

δk(t, z) = ei(t)πj(z) (3.99)

1 See Appendix for a discussion of the Hermite function en and its relationship to the Hermite polynomial hn. 39

2 then, {δk}k∈N is an orthonormal basis in L (µ). Then from (3.97) and (3.99)

∧ gn ((t1, z1), ··· , (tn, zn)) X X (j ,··· ,j ) ∧ = c 1 n δ (t , z ) ··· δ (t , z ) . (3.100) i1,··· ,in κ(i1,j1) 1 1 κ(in,jn) n n i1,··· ,in≥1 j1,··· ,jn≥1 Denote the following multi-indices given by

α = (α1, α2, ··· ), αi ∈ N0, i ∈ N (3.101) with compact support and I by the set of α in (3.101). Also, we denote the following:

Index(α) = max{i : αi 6= 0} (3.102)

m m X Y |α| = αi, α! = αi, m = Index(α). (3.103) i=1 i=1 Suppose that m = Index(α) and n = |α|, define the following tensor product as follows:

⊗α δ ((t1, z1) ··· (tn, zn))

⊗α1 ⊗αm =δ1 ⊗ · · · ⊗ δm ((t1, z1) ··· (tn, zn))

=δ1(t1, z1) ··· δ1(tα1 , zα1 )δ2(tα1+1, zα1+1) ··· δ2(tα1+α2 , zα1+α2 )

··· δm(tn−αm+1, zn−αm+1) ··· δm(tn, xn) (3.104)

⊗0 with the convention δi = 1, i ∈ {1, ··· , m}. Also, we denote the symmetrized tensor product as follows:

⊗ˆ α ⊗α ∧ δ ((t1, z1) ··· (tn, zn)) = δ ((t1, z1) ··· (tn, zn))

ˆ ˆ ⊗α1 ˆ ˆ ⊗αm =δ1 ⊗ · · · ⊗δm ((t1, z1) ··· (tn, zn)). (3.105)

Now, since

∧ ⊗ˆ α gn ∈ span{δ : |α| = n, α ∈ I} (3.106)

∧ then gn has of the form of ∧ X ⊗ˆ α fn = gn = cαδ . (3.107) |α|=n 40

⊗ˆ α By taking I0(δ ) = 1 and c0 = E[F ], then by (3.88), (3.91), and (3.107) we obtain

∞ ∞   X ∧ X X ⊗ˆ α X  ⊗ˆ α F = E[F ] + In(gn ) = E[F ] + In  cαδ  = cαI|α| δ . n=1 n=1 |α|=n α∈I

We denote

 ⊗ˆ α Kα = I|α| δ (3.108) then we have the following chaos expansion stated in the theorem below.

Theorem 3.3.4 Let F ∈ L2(P ), then it has a unique chaos expansion of the form of

X F = cαKα. (3.109) α∈I Moreover, we shall establish an isometry relation to this chaos expansion by prov- ing the following lemma.

Lemma 3.3.5

E[KαKβ] = α! · 1{α=β} (3.110)

Proof We let

mα = Index(α), nα = |α|, mβ = Index(β), nβ = |β|. (3.111)

Then, by isometry (2.65),

h  ⊗ˆ α  ⊗ˆ βi E[KαKβ] =E Inα δ Inβ δ

D ⊗ˆ α ⊗ˆ βE =nα! δ , δ δnαnβ Z ⊗ˆ α ⊗ˆ β ⊗n =nα! δ δ dµ δnαnβ . (3.112) ([0,T ]×R)nα

For nα 6= nβ, the (3.112) vanishes. Throughout the remainder of the proof, it suffice to evaluate for the case n = nα = nβ. Denote m = mα, and consider the tensor product in (3.104). There are n! terms in the symmetrization on δ⊗ˆ α as well as δ⊗ˆ β with each term of these symmetrized tensor product has a factor of 1/n!. Since {δk}k∈N forms an orthonomal basis in L2(µ), then for α 6= β, (3.112) vanishes. 41

Consider the case α = β, for each n! terms in Kα, one can get a non-zero expec-

tation term with a product on a term in Kβ by permuting the terms in (3.104) by

permuting the first α1 terms, then permuting the next α2 terms, and so forth and

finally, permuting the last αm terms. There are α! = α1! ··· αm! possible combinations 2 in this procedure each with the weight of one by orthonormality of δk{k∈N} in L (µ). Hence, we obtain 1 E[ ] = n! · · n! · α! · 1 = α! · 1 . (3.113) KαKβ (n!)2 {α=β} {α=β}

Proposition 3.3.1 (Isometry) Let F ∈ L2(P ), with a chaos expansion of the form of (3.109), then

2 X 2 kF kL2(P ) = cαα!. (3.114) α∈I Proof From the preceding lemma, " # 2 2 X X X X X 2 kF kL2(P ) = E[F ] = E cαKα cβKβ = cαcβE[KαKβ] = cαα!. α∈I β∈I α∈I β∈I α∈I (3.115)

3.4 Stochastic Test and Distribution Function

Consider the following formal expansion

X F = cαKα. (3.116) α∈I If the following growth condition holds,

X 2 cαα! < ∞, (3.117) α∈I then F ∈ L2(P ). We relax this growth condition to obtain a family of generalized function spaces of stochastic test functions and stochastic distribution functions that relates to L2(P ) naturally [46]. 42

3.4.1 The spaces G and G∗

The stochastic test function G and the stochastic distribution function G∗ was first investigated by Pothoff and Timpel in the Wiener case [68]. A parallel definition was carried out by Di Nunno [24] in the Poisson case. We extend these definitions for the Canonical L´evyspace.

Definition 3.4.1 (i) Let Gq, q ∈ R be the space of formal expansion X F = In(fn) (3.118) n∈N0 such that !1/2 X 2 2qn kF k = n! kf k 2 n e < ∞. (3.119) Gq n L (µ ) n∈N0 For every q ∈ R, Gq is a Hilbert space with inner product

X 2qn 2 n < X, Y >Gr = n! < fn, gn >L (µ ) e (3.120) n∈N0 where F and G has the following formal sum:

X X F = In(fn),G = In(gn). (3.121) n∈N0 n∈N0 Define the stochastic test function G as

\ G = Gq (3.122) q>0 endowed with the projective topology, that is, as n → ∞

F → F on G ⇔ kF − F k → 0 ∀q > 0. (3.123) n n Gq

(ii) Define the stochastic distribution function G∗ as

∗ [ G = G−q (3.124) q>0 endowed with the inductive topology, that is, as n → ∞

G → G on G∗ ⇔ ∃q > 0 such that kG − Gk → 0. (3.125) n n G−q 43

Note that G∗ is a dual of G. Let F ∈ G and G ∈ G∗ with the formal expansion of F and G of the form of (3.121). The action of G on F is given by: X < G, F >G,G∗ = n! < fn, gn >L2(µn) . (3.126) n∈N0

Also, we can express the Gq-norm, q ∈ R in terms of the chaos expansion (3.109) as follows: 2 X kF k = c α!e2qn. (3.127) Gq α α∈I

3.4.2 Kontratiev and Hida spaces

Similarly, we extend the definitions of Kontratiev and Hida space [46] to the Canonical L´evyspace. We let α ∈ I and suppose that Index(α) = m, then we denote

m αk Y α k (2N) = (2j) j (3.128) j=1 where k ∈ Z. In particular, if α = ε(m) = (0, ··· , 0, 1, 0, ··· ), that is, ε(m) is a multi-index with all zeros except for the m-th component which contains one, then

(m) (2N)ε k = (2m)k.

Definition 3.4.2 (i) Let p ∈ [0, 1]. Suppose that F has a formal expansion of the

form (3.121). Then, F belongs to the space (S)p,q, q ∈ R if

2 X 2 1+p αq kF kp,q = aα(α!) (2N) < ∞. (3.129) α∈I

Define the Kontratiev test function (S)p as \ (S)p = (S)p,q (3.130) q>0 endowed with the projective topology.

(ii) Let q ∈ R, then the formal expansion X G = bαKα (3.131) α∈I 44

belongs to the space (S)−q if

2 X 2 1−p −αq kGk−p,−q = bα(α!) (2N) < ∞. (3.132) α∈I Define the Kontratiev distribution function (S)∗ as [ (S)−p = (S)−p,−q (3.133) q>0 endowed with the inductive topology.

Note that (S)−p is a dual of (S)p. The action of G ∈ (S)−p on F ∈ (S)p, with the formal expansion of F and G of the form (3.121) is given by X < G, F >= α!aαbα. (3.134) α∈I The Hida spaces are the special cases of the Kondratiev spaces. The Hida test ∗ ∗ function (S) and Hida distribution function (S) is given by (S) = (S)0 and (S) =

(S)−0 respectively. From the above definitions, we have the following inclusions for p ∈ [0, 1]:

2 ∗ (S)1 ⊂ (S)p ⊂ (S)0 ⊂ G ⊂ L (P ) ⊂ G ⊂ (S)−0 ⊂ (S)−p ⊂ (S)−1. (3.135)

3.5 White Noise Processes from Canonical L´evyProcesses

We extend the concept of white noise processes in the Canonical L´evyspace. Consider the chaos expansion of X(t) in (2.49) ! X X X(t) =I1(1) = I1 h1, eiiL2(λ) h1, πjiL2(ν) ei(s)πj(z) i∈N j∈N X X Z t Z = ei(s)ds πj(s)η(dz)I1 (ei(s)πj(z)) . (3.136) 0 i∈N j∈N R Now since

⊗εκ(i,j) Kεκ(i,j) =I1(δ ) = I1 (ei(s)πj(z)) Z t Z = ei(s)πj(z)dsη(dz) (3.137) 0 R 45

2 and {πj}j∈N is orthonormal with respect to L (η), then Z Z Z πj(z)η(dz) = π1(z)πj(z)η(dz) = η(dz) · 1{j=1} = ζ1{j=1} (3.138) R R R where Z ζ = σ2 + z2ν(dz). (3.139) R0 Alternatively, we can write X(t) as follows:

X Z t X(t) = ζ ei(s)dsKεκ(i,1) . (3.140) 0 i∈N Lemma 3.5.1 For i, j ∈ N, κ(i, j) ≥ i. (3.141)

Proof Since 1 κ(i, j) = j + (i + j − 2)(i + j − 1) (3.142) 2 then 1 κ(i, j) − i = i2 + (2j − 5)i + (j2 − j + 2) . (3.143) 2

Let j ∈ N and consider the quadratic equation in i as follows:

2 2 Fj(i) = i + (2j − 5)i + (j − j + 2). (3.144)

To prove the lemma, it suffice to show that Fj(i) ≥ 0 for all (i, j) ∈ N × N.

• Case I: (j > 1) The discriminant in this case is

∆ = (2j − 5)2 − 4(j2 − j + 2) = −16j + 17 < 0. (3.145)

Since Fj is concave upwards then, Fj(i) > 0 for all i ∈ N.

• Case II: (j = 1) In this case, we have

2 F1(i) = i − 3i + 2 = (i − 1)(i − 2). (3.146)

and it is concave upwards. Thus, for all i ∈ N, F1(i) ≥ 0. 46

Definition 3.5.1 White Noise L´evyProcess X˙ (t) Z ˙ X X X X(t) = ei(t) πj(z)η(dz)Kεκ(i,j) = ζ ei(t)Kεκ(i,1) . (3.147) i∈N j∈N R i∈N Lemma 3.5.2 X˙ (t) ∈ (S)∗. (3.148)

−1/12 Proof Since κ(i, 1) ≥ i and supt∈R |en(t)| = O(n ) [46], then,

2 κ(i,1) ˙ 2 X κ(i,1) 2 −ε q X(t) =ζ ε !ei (t)(2N) −q i∈N 2 X 2 −q =ζ ei (t)(2κ(i, 1)) i∈N 2 X 2 −q ≤ζ ei (t)(2i) . (3.149) i∈N Hence, the series converges for q ≥ 2 and thus proves our claim.

Lemma 3.5.3 dX(t) X˙ (t) = in (S)∗. (3.150) dt That is, there exists q > 0 such that

2 X(t + h) − X(t) − X˙ (t) → 0 as h → 0. (3.151) h −q

Proof Note that from (3.140) and (3.147), we have the following:

Z t+h X(t + h) − X(t) ˙ X 1 − X(t) =ζ (ei(s) − ei(t))dsKεκ(i,1) h h t i∈N X =ζ ai(h)Kεκ(i,1) (3.152) i∈N where 1 Z t+h ai(h) = (ei(s) − ei(t))ds. (3.153) h t 47

−1/12 Since supt∈R |en(t)| = O(n ) then, supi∈N |ai(h)| < ∞ for all h ∈ [0, 1]. Further- more, since κ(i, 1) ≥ 1 then,

2 X(t + h) − X(t) ˙ 2 X (i,1) 2 −q − X(t) =ζ ε !|ai(h)| (2κ(i, 1)) h −q i∈N 2 X 2 −q ≤ζ |ai(h)| (2i) . (3.154) i∈N

Now, since ai(h) → 0 as h → 0 for all i ∈ N then, for all q ≥ 2 from the dominated convergence theorem,

X 2 −q |ai(h)| (2i) → 0 as h → 0. (3.155) i∈N Hence, from the bounded convergence theorem, we obtain

2 X(t + h) − X(t) − X˙ (t) → 0 as h → 0. (3.156) h −q

Definition 3.5.2 L´evyWhite Noise M˙ (t, z)

˙ X X M(t, z) = ei(t)πj(z)Kεκ(i,j) . (3.157) i∈N j∈N

Lemma 3.5.4 For i, j ∈ N, pij ≤ κ(i, j). (3.158)

(i+j−2)(i+j−1) Proof For i, j ∈ N, i+j −2 ≥ 0, hence κ(i, j) = j + 2 ≥ j. On the other √ hand, since κ(i, j) ≥ i. Hence, from the above arguments, we have ij ≤ κ(i, j).

Lemma 3.5.5 M˙ (t, z) ∈ (S)∗ µ − a.e.

Proof Since 2 ˙ X X 2 2 −q M(t, z) = ei (t)πj (z)(2κ(i, j)) . (3.159) −q i∈N j∈N 48

Then, from the preceding lemma and by orthonormality of {ei}i∈N and {πj}j∈N is orthonormal with respect to L2(λ) and L2(η) respectively, then Z 2 Z ˙ X X 2 2 p −q M(t, z) µ(dt, dz) ≤ ei (t)πj (z)(2 ij) dtη(dz) × −q × R+ R R+ R i∈N j∈N √ Z Z X −q 2 X p −q 2 = ( 2i) ei (t)dt ( 2j) πj (z)η(dz) i∈N R+ j∈N R X √ X = ( 2i)−q (p2j)−q. (3.160) i∈N j∈N Hence, the above equation converges for q > 2, thus proves our claim.

Remark 3.5.6 Radon-Nikodym Interpretation of the L´evyWhite Noise Field

Let t ∈ R+ and A ∈ B(R), then Z t Z M(t, A) = M(ds, dz). (3.161) 0 A Likewise, we can express M(t, A) as follows: Z t Z t Z M(t, A) =σ dW (s) + zN˜(ds, dz) 0 0 A

=I1(1[0,t](s)1A(z)) ! X X =I1 1[0,t], ei L2(λ) h1A, πjiL2(ν) ei(s)πj(z) i∈N j∈N X X = 1[0,t], ei L2(λ) h1A, πjiL2(ν) I1 (ei(s)πj(z)) i∈N j∈N X X Z t Z = ei(s)ds πj(z)η(dz)Kεκ(i,j) 0 A i∈N j∈N Z t Z X X = ei(s)πj(z)µ(ds, dz)Kεκ(i,j) 0 A i∈N j∈N Z t Z = M˙ (t, z)µ(ds, dz). (3.162) 0 A Hence, from (3.161) and (3.162), M˙ (s, z) as the Radon-Nikodym derivative in (S)∗ is as follows: M(dt, dz) = M˙ (t, z)µ(dt, dz). (3.163) 49

3.6 Wick Product and Hermite Transform

The Wick Product was a first introduced by Wick in 1950 as a renormalization tool in quantum field theory and its application in stochastic analysis was introduced by Hida and Ikeda in 1965 [46]. We state some of its properties which are similar to the Wiener and Poisson white noise theory.

Definition 3.6.1 Wick Product P P Let F = α∈I aαKα ∈ (S)−1 and G = β∈I bβKβ ∈ (S)−1, then the Wick Product of X and Y denoted by X  Y is defined as

X X X X X  Y = aαbβKα+β = aαbβKγ. (3.164) α∈I β∈I γ∈I α+β=γ

We define the Wick powers of X ∈ (S)−1 as follows:

n (n−1) 0 X = X  X, n ∈ N,X = 1. (3.165)

Moreover, if f : C → C is entire, given by the following expansion:

∞ X n f(z) = anz . (3.166) n=0

 then, we define the following Wick version f (X), X ∈ (S)−1 given as

∞  X n f (X) = anX . (3.167) n=0

Moreover, we define the the Wick exponential of X ∈ (S)−1 denoted as

∞ X Xn exp(X) = . (3.168) n! n=0

2 2 whenever it is convergent in (S)−1. Let β ∈ L (R) and γ ∈ L (R × R0) deterministic, then we have the following Wick exponentials with respect to the Wiener and the compensated Poisson random measure respectively [27]:

Z  Z 1 Z  exp β(t)dW (t) = exp β(t)dW (t) − β2(t)dt , (3.169) 2 R+ R+ R+ 50

Z  exp γ(t, z)N˜(dt, dz) R+×R0 Z Z  = exp (log(1 + γ(t, z)) − γ(t, z))ν(dz)dt + log(1 + γ(t, z))N˜(dt, dz) . R+×R0 R+×R0 (3.170)

Some of the important properties of the Wick Product [46].

∗ ∗ 1. The Wick product is closed in the following spaces: (S)−1,(S) ,(S), (S)1, G , G∗ However, the Wick product is in general, not closed in L2(P ).

2. If either X or Y are deterministic, then

X  Y = X · Y. (3.171)

3. Let X, Y , and Z ∈ (S)−1, then,

X  Y =Y  X,

(X  Y )  Z =X  (Y  Z),

X  (Y + Z) =(X  Y ) + (X  Z) (3.172)

that is, the commutative, associative, and distributive law holds respectively.

4. Wick algebra follows the same rules as ordinary algebra. For example,

(X + Y )2 =X2 + 2X  Y + Y 2,

exp(X + Y ) = exp(X)  exp(Y ). (3.173)

5. Expectation Properties

(a) Let X, Y , X  Y ∈ L1(P ), then

E[X  Y ] = E[X] · E[Y ]. (3.174)

Note: Independence of X and Y is not required. 51

(b) Let X ∈ L1(P ), then

E[exp X] = exp(E[X]). (3.175)

6. Wick Chain Rule: Let X(·): R → (S)−1 continuously differentiable, and let f : C → C be entire such that f(R) ⊂ R, then d d f (X(t)) = (f 0)(X(t))  X(t) (3.176) dt dt

in (S)−1.

P Definition 3.6.2 [46] Let F = α∈I aαHα ∈ (S)−1, then the Hermite Transform denoted by HF or F˜ is defined by

˜ X α HF (z) = F (z) = aαz ∈ C. (3.177) α∈I

N α α1 α2 αn 0 where z = (z1, z2, ··· , zn, ··· ) ∈ C , and z = z1 z2 ··· zn ··· , α ∈ I, and zj ∈ 1, ∀j ∈ N.

Some of the important properties of the Hermite transform which will enable us to manipulate the Wick product is stated below.

Theorem 3.6.1 [46] Let F,G ∈ (S)−1, then

H(F  G)(z) = HF (z) ·HG(z). (3.178)

In addition, let f : C → C, be an entire function such that f(R) ⊂ R, and f (F ) ∈

(S)−1, then H(f (F ))(z) = f(HF (z)). (3.179)

whenever it converges in C. 52

Lemma 3.6.2 [46] Suppose X(t, ω) and F (t, ω) are (S)−1 processes such that

dX˜(t,z) ˜ (i) dt = F (t, z), ∀t ∈ (a, b), z ∈ Kq(δ)

˜ (ii) F (t, z) is a bounded function of (t, z) ∈ (a, b) × Kq(δ), continuous in t ∈ (a, b)

for each z ∈ Kq(δ). where

N X α qα 2 Kq(δ) = {z ∈ C : |z| (2N) < δ }. (3.180) α6==0

Then X(t, ω) is a differentiable (S)−1 process and dX(t, z) = F (t, z), ∀t ∈ (a, b). (3.181) dt

3.7 Stochastic Derivative

Consider the formal sum

X X F = In(fn) = cαKα (3.182) n∈N0 α∈I where

 ⊗ˆ α Kα = I|α| δ , (3.183)

X ⊗ˆ α fn = cαδ . (3.184) |α|=n If F ∈ D1,2, then we have the Malliavin derivative in F ∈ D1,2 as follows, ∞ X Dt,zF = nIn−1(fn(., (t, z))). (3.185) n=1

Let us relax the D1,2 case and let us define a stochastic derivative in F ∈ G∗ with ∗ the same form as (3.185). This is well-defined if Dt,zF converges in G . We employ this same strategy of Øksendal and Proske [64] in taking the stochastic derivative in F ∈ G∗ in the Poisson case. Similarly, we can define a stochastic derivative in ∗ ∗ F ∈ (S) whenever Dt,zF converges in (S) . In the Wiener case, the stochastic derivative corresponds to the Hida-Malliavin derivative [27]. 53

From (3.184), we have as follows:

X ⊗ˆ α fn(., (t, z)) = cαδ (., (t, z)). (3.186) |α|=n

T Let p = Index(α), then αi = 0 for i > p and εi = (0, ··· , 1, ··· , 0) , a unit vector with a 1 in the ith component and zero otherwise. Then, δ⊗ˆ α(., t, z) can be computed as follows:

p ˆ 1 X ˆ ˆ δ⊗α(., t, z) = α |α − ε |!δ⊗(α−εi)δ⊗εi (t, z) |α|! i i i=1 1 X ˆ ˆ = α δ⊗(α−εi)δ⊗εi (t, z). (3.187) |α| i i∈N Then, from (3.185), (3.186), and (3.187), we obtain the stochastic derivative

∞   X X X ⊗ˆ (α−εi) ⊗ˆ εi Dt,zF = In n cα αiδ δ (t, z) n=1 |α|=n i∈N ∞   X X X ⊗ˆ (α−εi) ⊗ˆ εi = cααiIn δ δ (t, z) n=1 |α|=n i∈N

X X ⊗ˆ εi = cααiKα−εi δ (t, z). (3.188) α∈I i∈N

Note that if F ∈ D1,2, then the Malliavin derivative in (3.185) and the stochastic derivative in (3.188) coincide. Since κ is bijective, then for any i ∈ N, ∃(k, m) ∈ N×N such that i = κ(k, m). Hence, we can also express (3.188) as follows:

X X X ⊗ˆ εκ(k,m) Dt,zF = cαακ(k,m)Kα−εκ(k,m) δ (t, z) (3.189) α∈I k∈N m∈N X X X = cαακ(k,m)Kα−εκ(k,m) ek(t)πm(z) (3.190) α∈I k∈N m∈N X X X = cβ+εκ(k,m) (βκ(k,m) + 1)Kβek(t)πm(z). (3.191) β∈I k∈N m∈N 54

Theorem 3.7.1 Closability of Stochastic Derivatives

∗ Let Fm,F ∈ G such that as m → ∞

∗ (i) Fm → F in G ,

∗ (ii) Dt,zFm converges in G

∗ Then, Dt,zFm → Dt,zF in G .

Proof We follow a parallel arguments in showing closability in the D1,2 case [27]. Consider the formal expansion

X F = cαKα, (3.192) α∈I X m Fm = cα Kα (3.193) α∈I ∗ such that Fm → F in G , then there exists q > 0 such that

2 X m 2 −2q|α| ||Fm − F ||G∗ = α!|cα − cα| e → 0. (3.194) α∈I m Hence, cα → cα. Since the stochastic derivative of Dj,t,zF is given as

X X X m Dt,zFm = cα ακ(k,l)Kα−εκ(k,l) ek(t)πl(z). (3.195) α∈I k∈N l∈N ∗ and since Dt,zFm converges in G then by the Cauchy criterion, there exists r > 0 such that

2 X X X m n 2 2 −2r|α−κ(k,l)|! ||Dt,zFm − Dt,zFn||G−r = |cα − cα| ακ(k,l)(α − κ(k,l))!e → 0. α∈I k∈N l∈N (3.196)

From Fatou’s lemma,

X X X m 2 2 −2r|α−κ(k,l)!| lim |cα − cα| ακ(k,l)(α − κ(k,l))!e m→∞ α∈I k∈N l∈N

X X X m n 2 2 −2r|α−κ(k,l)|! = lim lim inf |cα − cα| ακ(k,l)(α − κ(k,l))!e m→∞ n→∞ α∈I k∈N l∈N

X X X m n 2 2 −2r|α−κ(k,l)|! ≤ lim lim inf |cα − cα| ακ(k,l)(α − κ(k,l))!e = 0. (3.197) m→∞ n→∞ α∈I k∈N l∈N 55

Hence,

2 ||Dt,zFm − Dt,zF ||G−r → 0. (3.198)

So therefore,

∗ Dt,zFm → Dt,zF ∈ G−r ⊂ G . (3.199)

Theorem 3.7.2 Let ∞ X ∗ F = In(fn) ∈ G (3.200) n=0 2 n ∗ where fn ∈ Ls(µ ). Then, Dt,zF ∈ G , µ a.e. given by

X Dt,zF = nIn(fn−1(·, (t, z))). (3.201) n=1

Proof We follow parallel arguments of [67] in the Poisson case. Since F ∈ L2(P ) and define its partial sum as

m X Fm = In(fn) (3.202) n=0

∗ then Fm → F in G as m → ∞. Pick q > 0 be arbitrary, then

∞ 2 X 2 −2qn 2 n ||Fm − F ||G−q = n!||fn||L (µ )e → 0. (3.203) n=m+1

Since q > 0 is arbitrary, then F ∈ G∗. Note that

∞ 2 X 2 −2q(n−1) 2 n−1 ||Dt,zFm − Dt,zF ||G−q = nn!||fn(·, (t, z))||L (µ )e . (3.204) n=m+1 56

Integrating both sides and as m → ∞ yields Z 2 ||Dt,zFm − Dt,zF ||G−q µ(dt, dz) [0,t]×R Z ∞ X 2 −2q(n−1) = nn!||fn(·, (t, z))||L2(µn−1)e µ(dt, dz) [0,t]×R n=m+1 ∞ Z X 2 −2q(n−1) . = nn! ||fn(·, (t, z))||L2(µn−1)e µ(dt, dz) n=m+1 [0,t]×R ∞ X 2 −2q(n−1) = nn!||fn||L2(µn)e n=m+1 ∞ X 2 ≤K n!||fn||L2(µn) → 0 (3.205) n=m+1

for some K > 0. Thus, verifies our claim.

We define the counterpart of Dom D, Dom D0, and Dom D1 in the space G∗ denoted by G, G0, and G1 respectively.

Definition 3.7.1 Let F ∈ G∗ with chaos expansion of the form (3.182). G is the set F ∈ G∗ such that there exists q > 0 such that there exists q > 0 such that

∞ X 2 −2q(n−1) nn!||fn||L2(µn)e < ∞. (3.206) n=1

For F ∈ G, the stochastic derivative DF :Ω × [0,T ] × R → R defined by

∞ X Dt,zF = nIn−1 (fn(·, (t, z))) . (3.207) n=1 with convergence in G∗ × L2(µ). Moreover, we have the following: Z 2 2 ||D F || 2 = ||D F || 2 µ(dt, dz) t,z G−q L (µ) t,z ×L (µ) [0,T ]×R ∞ X 2 −2q(n−1) = nn!||fn||L2(µn)e < ∞. (3.208) n=1

Definition 3.7.2 Let F ∈ G∗ with chaos expansion of the form (3.182). 57

(i) G0 is the set of F ∈ G∗ such that σ > 0 and there exists q > 0 such that

∞ Z T X 2 −2q(n−1) 2 nn! ||fn(·, (t, 0))||L2(µn−1)e σ dt < ∞. (3.209) n=1 0

For F ∈ G0, we define

∞ X Dt,0F = nIn−1 (fn(·, (t, 0))) . (3.210) n=1

with convergence in G∗ × L2(λ). Moreover, we have the following:

Z T 2 2 2 ||D F || 2 = ||D F || σ dt t,0 G−q×L ([0,T ]) t,0 G−q 0 ∞ Z T X 2 −2q(n−1) 2 = nn! ||fn(·, (t, 0))||L2(µn−1)e σ dt < ∞. n=1 0 (3.211)

(ii) G1 is the set of F ∈ G∗ such that ν 6= 0 and there exists q > 0 such that

∞ Z X 2 −2q(n−1) 2 nn! ||fn(·, (t, z))||L2(µn−1)e z ν(dz)dt < ∞. (3.212) n=1 [0,T ]×R0

For F ∈ G1, we define

∞ X Dt,zF = nIn−1 (fn(·, (t, z))) , z 6= 0. (3.213) n=1

with convergence in G∗ × L2(z2ν(dz)dt). Moreover, we have the following: Z 2 2 2 ||D F || 2 = ||D F || z ν(dz)dt t,z G−q×L (R0) t,z G−q [0,T ]×R ∞ Z X 2 −2q(n−1) 2 = nn! ||fn(·, (t, z))||L2(µn−1)e z ν(dz)dt < ∞. n=1 [0,T ]×R0 (3.214)

If σ > 0 and ν 6= 0, then G = G0 ∩ G1 ⊂ G∗.

We state a chain rule in G. The proof of this chain rule is analogous to Dom D case [36], [80] by weakening the assumption from L2(P ) to G∗. 58

Theorem 3.7.3 Chain Rule

1 n Let F = (F1, ··· ,Fn), Fi ∈ G for i ∈ {1, ··· , n} and ϕ ∈ C (R ; R). Suppose that

(i) ϕ(F ) ∈ G∗,

(ii) there exists q0 > 0 such that

n X ∂ϕ(F ) 2 Dt,0 ∈ L (λ), ∂xk k=1 G−q0

(iii) there exists q1 > 0 such that

ϕ(F1 + zDt,zF1, ··· ,Fn + zDt,zFn) − ϕ(F1, ··· ,Fn) 2 2 ∈ L (z ν(dz)dt). z G−q1

Then, ϕ(F ) ∈ G and

n X ∂ϕ(F ) Dt,zϕ(F ) = Dt,0Fk1{z=0} ∂xk k=1 ϕ(F + zD F , ··· ,F + zD F ) − ϕ(F , ··· ,F ) + 1 t,z 1 n t,z n 1 n 1 . (3.215) z {z6=0} Corollary 3.7.4 Product Rule Let F,G ∈ L2(P ) and suppose that F 2,G2,FG ∈ G∗, then

Dt,z(FG) = FDt,zG + GDt,zF + zDt,zFDt,zG. (3.216)

Lastly, we state the chain rule under a Wick polynomial g that is entire.

Theorem 3.7.5 [25] Let F ∈ (S)∗ and g : C → C be entire, then

 0  Dt,zg (F ) =(g ) (F )  Dt,zF. (3.217)

3.8 Generalized Expectation and Generalized Conditional Expectation

Definition 3.8.1 Generalized Expectation and Generalized Conditional Expectation in (S)∗ P ∗ Let F = α∈I cαKα ∈ (S) , we define the generalized expectation E[F ] is given by

E[F ] = c0. (3.218) 59

We define generalized conditional expectation E[F |FA] with respect to FA, A ∈ B(R+) is given by X E[F |FA] = cαE[Kα|FA] (3.219) α∈I whenever it converges in (S)∗.

Remark 3.8.1 If F ∈ L2(P ) ⊂ (G)∗, then the generalized expectation coincides with the usual conditional expectation.

Theorem 3.8.2 [27] Properties of conditional expectation in (S)∗

∗ (i) Suppose that F , G, E[F |Ft], and E[G|Ft] belongs to (S) , then

E[F  G|FA] = E[F |FA]  E[G|FA]. (3.220)

In addition, if F , G, ∈ L1(P ), then

E[F  G] = E[F ] · E[G]. (3.221)

∗   ∗ (ii.) Let F ∈ (S) , and suppose that exp (F ), E[F |Ft], exp (E[F |FA]) ∈ (S) then

  E[exp F |FA] = exp (E[F |FA]). (3.222)

In addition, if F ∈ L1(P ), then

E[exp F ] = exp(E[F ]). (3.223)

Theorem 3.8.3 [27] Suppose that u(s, x) is Skorohod integrable and E[u(s, x)|Ft] ∈ ∗ (S) for all (s, x) ∈ R+ × R, then Z  Z

E u(s, x)M(δs, dx) Ft = E [u(s, x)| Ft] M(δs, dx), R+×R [0,t]×R Z 

E u(s, x)M(δs, dx) Ft =0. (3.224) (t,∞)×R 60

Definition 3.8.2 Generalized Expectation and Generalized Conditional Expectation in G∗ P∞ ∗ ∗ Let F = n=0 In(fn) ∈ G , we define the generalized expectation E[F ] in G is given by

E[F ] = I0(f0) (3.225) and we define the conditional expectation of F with respect to A ∈ B([0,T ]) is given by ∞ X ⊗n E[F |FA] = In(fn1A ). (3.226) n=0

The generalized conditional expectation in G∗ is more tractable to handle com- pared to the generalized conditional expectation in (S)∗.

Remark 3.8.4 If F ∈ L2(P ) ⊂ G∗, then the generalized expectation coincides with the usual conditional expectation.

Lemma 3.8.5 [14], [27] Basic properties of conditional expectation in G∗

(i) Closure under G∗

∗ ∗ Let F ∈ G and A ∈ B([0,T ]), then E[F |FA] ∈ G and for some q > 0.

kE[F |F ]k ≤ ||F || . (3.227) A G−q G−q

(ii) Closure under L2(P )

∗ 2 Let F ∈ G and A ∈ B([0,T ]), then E[F |FA] ∈ L (P ) and

kE[F |FA]kL2(P ) ≤ ||F ||L2(P ). (3.228)

(iii) Linearity

Let F,G ∈ G∗, a, b ∈ R, and A ∈ B([0,T ]), then

E[aF + bG|FA] = aE[F |FA] + bE[G|FA]. (3.229) 61

(iv) Tower Property Let F ∈ G∗, and A, B ∈ B([0,T ]) such that A ⊂ B, then

E[E[F |FA]FB] = E[F |FA] = E[E[F |FB]FA]. (3.230)

Proof We denote the following formal expansions:

∞ ∞ X X F = In(fn),G = In(gn) (3.231) n=0 n=0

2 n where fn, gn ∈ Ls(µ ) for all n ∈ N0.

∗ (i) Since F ∈ G , then ||F ||G−q < ∞ for some q > 0. Hence,

∞ 2 X 2 2 kE[F |F ]k = n! f 1⊗n e−2qn ≤ kF k < ∞. (3.232) A G−q n A L2(µn) G−q n=0

2 (ii) Since F ∈ L (P ), then ||F ||L2(P ) < ∞. Hence,

∞ 2 X ⊗n 2 kE[F |FA]kF ∈L2(P ) = n! fn1A L2(µn) ≤ kF kL2(P ) < ∞. (3.233) n=0

(iii) Using Cauchy-Schwartz inequality, we can show that aF + bG ∈ G∗. Then, we have the following expansion

∞ X ⊗n E[aF + bG|FA] = In((afn + bgn)1A ) n=0 ∞ ∞ X ⊗n X ⊗n =a In(fn1A ) + b In(gn1A ) = aE[F |FA] + bE[G|FA]. n=0 n=0 (3.234)

(iv) Since A ⊂ B, then 1A1B = 1B1A = 1A∩B = 1A. Hence,

∞ ∞ ∞ X ⊗n ⊗n X ⊗n ⊗n X ⊗n E[E[F |FA]FB] = In(fn1A 1B ) == In(fn1B 1A ) = In(fn1A ). n=0 n=0 n=0 (3.235) That is,

E[E[F |FA]FB] = E[F |FA] = E[E[F |FB]FA]. (3.236) 62

Theorem 3.8.6 [27], [66] Let F ∈ G∗ and A ∈ B([0,T ]), then

Dt,zE[F |FA] = E[Dt,zF |FA]1A(t). (3.237)

Proof

∞ ∞ X ⊗n X ⊗(n−1) Dt,zE[F |FA] = Dt,z In(fn1A ) = In−1(fn1A )1A = E[Dt,zF |FA]1A(t). n=0 n=1 (3.238)

∗ Corollary 3.8.7 [27] Let u : R+ × R → G be an F-predictable process, then

(i) Dt,zu(s, x) is F-predictable process for all (t, z) ∈ R+ × R,

(ii) Dt,zu(s, x) = 0, for s < t, z ∈ R.

Proof The assertion holds in (i) and (ii) by applying previous theorem

Dt,zu(s, x) = E[u(s, x)|Fs− ]1[0,s)(t) = E[u(s, x)|Fs− ]1{t>s}. (3.239)

Theorem 3.8.8 [27] Properties of conditional expectation in G∗

(i) Let F,G ∈ G∗, and A ∈ B([0,T ]), then

E[F  G|FA] = E[F |FA]  E[G|FA]. (3.240)

(ii) Let F , exp F ∈ G∗, and A ∈ B([0,T ]) then

  E[exp F |FA] = exp (E[F |FA]). (3.241) 63

3.9 Skorohod Integration on G∗

Definition 3.9.1 Skorohod Integral in G∗

∗ Let u : R+ × R → G with the formal expansion given by

∞ X u(t, z) = In(fn(·, t, z)) (3.242) n=0 such that

∞ X ˜ 2 −2q(n+1) (n + 1)!||f||L2(µn+1)e < ∞ (3.243) n=0

˜ 2 n+1 where fn ∈ Ls(µ ), and for some q > 0. Then we define the Skorohod integral of u with respect to M as follows: Z ∞ X ˜ δ(u) = u(t, x)M(δt, dx) = In+1(fn). (3.244) R+×R n=0 We say that u is Skorohod integrable if δ(u) ∈ G∗, that is, there exists some q > 0 such that

∞ 2 X 2 −2q(n+1) ˜ 2 n+1 ||δ(u)||G−q = (n + 1)!||fn||L (µ )e < ∞. (3.245) n=0 Theorem 3.9.1 Fundamental Theorem of Stochastic Calculus

∗ Let u : R+ × R → G be a random field satisfying the following conditions:

(i) u ∈ L2(P × µ),

(ii) Dt,zu is Skorohod integrable for all (t, z) ∈ [0,T ] × R,

∗ ∗ (iii) Dt,zδ(u) ∈ G and δ(Dt,zu) ∈ G , for all (t, z) ∈ [0,T ]×R and there exists q > 0 such that Z 2 ||Dt,zδ(u)||G−q µ(dt, dz) < ∞, (3.246) [0,T ]×R

Z 2 ||δ(Dt,zu)||G−q µ(dt, dz) < ∞. (3.247) [0,T ]×R 64

Then,

Dt,z (δ(u)) = u(t, z) + δ(Dt,zu), (3.248) that is, Z Z Dt,z u(s, x)M(δs, dx) = u(t, z) + Dt,zu(s, x)M(δs, dx). (3.249) [0,T ]×R [0,T ]×R Proof First, suppose the base case where u(s, x) has of the form

u(s, x) = In(fn(·, (s, x))) (3.250) then

˜ δ(u) = In+1(fn) (3.251) where

1 f˜ = f˜ ((t , z ), ··· , (t , z )) = [f (·, (t , z )) + ··· f (·, (t , z ))] . n n 1 1 n+1 n+1 n + 1 n 1 1 n n+1 n+1 (3.252)

Since

1 f˜ (·, (t, z)) = [f (·, (t , z ), (t, z)) + ··· f (·, (t , z ), (t, z)) + f (·, ·, (t, z))] n n + 1 n 1 1 n n n n (3.253) then

˜ Dt,zδ(u) =(n + 1)In(f(·, (t, z)))

=In(fn(·, (t1, z1), (t, z))) + ··· In(fn(·, (tn, zn), (t, z))) + In(fn(·, ·, (t, z)))

=In(fn(·, (t1, z1), (t, z))) + ··· In(fn(·, (tn, zn), (t, z))) + u(t, z) (3.254) and also,

Dt,zu(s, x) = nIn−1(fn(·, (t, z), (s, x))). (3.255) 65

Then from (ii), its Skorohod integral is given as Z δ(Dt,zu) = Dt,zu(s, x)M(δs, dx) [0,t]×R Z = nIn−1(fn(·, (t, z), (s, x))M(δs, dx) [0,t]×R ˜ =nIn(fn(·, (t, z), ·)) (3.256) where 1 f˜ (·, (t, z), ·) = [f (·, (t, z), (t , z )) + ··· + f (·, (t, z), (t , z ))] (3.257) n n n 1 1 n n n is the symmetrization with respect to (t1, z1), ··· , (tn, zn). Hence, from (3.256) and (3.257) yields

δ(Dt,zu) =In(fn(·, (t1, z1), (t, z))) + ··· In(fn(·, (tn, zn), (t, z))) (3.258)

Then from (3.254) and (3.258) yields (3.248). On the other hand, consider the general case of u(s, x) has of the form

∞ X u(s, x) = In(fn(·, (s, x))). (3.259) n=0 Then, we have the following:

∞ X ˜ δ(u) = In+1(fn), (3.260) n=0 ∞ X ˜ Dt,zδ(u) = (n + 1)In(fn(·, (t, z))). (3.261) n=0 Consider the partial sum

m X um(s, x) = In(fn(·, (s, x))). (3.262) n=0 From (i) and by isometry, Z ∞ 2 X 2 ||u||L2(P ×µ) = ||fn(·, (t, z))||L2(µn)µ(dt, dz) [0,T ]×R n=0 ∞ X 2 = ||fn||L2(µn+1) < ∞. (3.263) n=0 66

So therefore, we have the following convergence as m → ∞, ∞ 2 X 2 ||u − um||L2(P ×µ) = ||fn||L2(µn+1) → 0. (3.264) n=m+1

2 Hence, um → u in L (P × µ). Applying the result from base case, we obtain

Dt,z(δ(um)) = δ(Dt,zum) + um(t, z). (3.265)

To show (3.248), we need to show the following as m → ∞,

Dt,z(δ(um)) →u(t, z) + δ(Dt,zu), (3.266)

Dt,z(δ(um)) →Dt,z(δ(u)) (3.267) in G∗ × L2(µ). To show (3.266), we have we have the following:

∞ X Dt,zu(s, x) = nIn−1(fn(·, (t, z), (s, x))) n=1 ∞ ∞ X ˜ X ˜ δ(Dt,zu(s, x)) = nIn(fn(·, (t, z), ·)) = In(nfn(·, (t, z), ·)). (3.268) n=1 n=0 where the last equation is from (ii). Then from (iii), there exists q > 0 such that Z 2 ||δ(Dt,zu)||G−q µ(ds, dx) [0,T ]×R Z ∞ X ˜ 2 −2qn = n!||nfn(·, (t, z), ·)||L2(µn)e µ(dt, dz) [0,T ]×R n=0 ∞ Z X 2 ˜ 2 −2qn = n!n ||fn(·, (t, z), ·)||L2(µn)µ(dt, dz)e n=0 [0,T ]×R ∞ X 2 ˜ 2 −2qn = n!n ||fn||L2(µn+1)e < ∞. (3.269) n=0 Hence, as m → ∞, we obtain Z ∞ 2 X 2 2 −2qn ˜ 2 n+1 ||δ(Dt,zu) − δ(Dt,zum)||G−q µ(ds, dx) = n!n ||fn||L (µ )e → 0. [0,T ]×R n=0 (3.270)

This implies as m → ∞,

∗ 2 δ(Dt,zum) → δ(Dt,zu), G × L (µ). (3.271) 67

From (3.265), we have the following:

∗ 2 Dt,z(δ(um)) → u(t, z) + δ(Dt,zu), G × L (µ). (3.272)

On the other hand, to show (3.267), note that

˜ (n + 1)f(·, (t, z)) =fn(·, (t1, z1), (t, z)) + ··· + fn(·, (tn, zn), (t, z)) + fn(·, ·, (t, z)) ˜ =nf(·, (t, z), ·) + fn(·, ·, (t, z)) (3.273)

then we have,

n 1 f˜(·, (t, z)) = f˜ (·, (t, z), ·) + f (·, ·, (t, z)). (3.274) n + 1 n n + 1 n

From the parallelogram inequality

2 2 2n 2 2 2 ||f˜ (·, (t, z))|| 2 n ≤ ||f˜(·, (t, z), ·)|| 2 n + ||f (·, ·, (t, z))|| 2 n . n L (µ ) (n + 1)2 L (µ ) (n + 1)2 n L (µ ) (3.275)

Hence,

˜ 2 ||fn||L2(µn+1) Z ˜ 2 = ||fn(·, (t, z))||L2(µn)µ(dt, dz) [0,T ]×R Z  2  2n ˜ 2 2 2 ≤ 2 ||fn(·, (t, z), ·)||L2(µn) + 2 ||fn(·, ·, (t, z))||L2(µn) µ(dt, dz) [0,T ]×R (n + 1) (n + 1) 2 2n ˜ 2 2 2 = ||f || 2 n+1 + ||f || 2 n+1 . (3.276) (n + 1)2 n L (µ ) (n + 1)2 n L (µ )

So therefore, Z 2 ||Dt,zδ(u)||G−q µ(dt, dz) [0,T ]×R ∞ ∞ X 2 ˜ 2 −2qn X 2 −2qn ≤ 2 n n!||fn||L2(µn+1)e + 2 n!||fn||L2(µn+1)e n=0 n=0 Z 2 2 2 ≤ 2 ||Dt,zδ(u)||G−q µ(dt, dz) + 2||u||L (P ×µ) < ∞. (3.277) [0,T ]×R 68

The last term is finite from (i) and (iii). Then finally, we have the following expression as m → ∞, Z 2 ||Dt,zδ(u) − Dt,zδ(um)||G−q µ(dt, dz) [0,T ]×R ∞ ∞ X 2 ˜ −2qn X −2qn ≤ 2 n n!||fn||L2(µn+1)e + 2 n!||fn||L2(µn+1)e → 0. (3.278) n=m+1 n=m+1

Hence, as m → ∞

∗ 2 Dt,z(δ(um)) → u(t, z) + Dt,z(δ(u)), G × L (µ). (3.279)

Finally, the limits of the integral in (3.249) is a consequence of (3.272).

The special case of the theorem if u is predictable then by applying Corollary 3.8.7, we have the following corollary.

Corollary 3.9.2 Let u satisfies the conditions of the preceding theorem and in addi- tion, suppose that it is also predictable, then we have the following identity: Z Z Dt,z u(s, x)M(ds, dx) = u(t, z) + Dt,zu(s, x)M(ds, dx). (3.280) [0,T ]×R [t,T ]×R Corollary 3.9.3 Let u satisfies the conditions of the preceding corollary, then Z Z −1 Dt,z u(s, 0)dW (s) =σ u(t, 0)1{z=0} + Dt,zu(s, 0)dW (s), (3.281) [0,T ] [t,T ] Z Z ˜ −1 ˜ Dt,z u(s, x)xN(ds, dx) =σ u(t, z)1{z6=0} + Dt,zu(s, x)N(ds, dx). [0,T ]×R0 [t,T ]×R0 (3.282)

Remark 3.9.4 From the corollary, we have the following identities:

(i) For z = 0, Z Z −1 Dt,0 u(s, 0)dW (s) =σ u(t, 0) + Dt,zu(s, 0)dW (s), (3.283) [0,T ] [t,T ] Z Z ˜ ˜ Dt,0 u(s, x)xN(ds, dx) = Dt,zu(s, x)N(ds, dx), (3.284) [0,T ]×R0 [t,T ]×R0 69

(ii) For z 6= 0, Z Z Dt,z u(s, 0)dW (s) = Dt,zu(s, 0)dW (s), (3.285) [0,T ] [t,T ] Z Z ˜ ˜ Dt,z u(s, x)xN(ds, dx) =u(t, z) + Dt,zu(s, x)xN(ds, dx) [0,T ]×R0 [t,T ]×R0 (3.286)

Proof From the independent measure

˜ M(ds, dx) = σdW (t)δ0(x) + xN(ds, dx)(1 − δ0(x)) (3.287)

and from (3.281), we have following: Z Dt,z u(s, x)M(ds, dx) [0,T ]×R Z Z ˜ =σDt,z u(s, 0)dW (s) + Dt,z u(s, x)xN(ds, dx) (3.288) [0,T ] [0,T ]×R0 and Z u(t, z)+ Dt,zu(s, x)M(ds, dx) [t,T ]×R Z = u(t, z)1{z=0} + σ Dt,zu(s, 0)dW (s) [0,T ] Z ˜ +u(t, z)1{z6=0} + Dt,zu(s, x)xN(ds, dx). (3.289) [0,T ]×R0 Separating the continuous and the jump term, we obtain the desired identity.

We extend the concept of (S)∗ integrability [45], [64] in the Canonical L´evyspace.

Definition 3.9.2 (S)∗ integrability

∗ ∗ The random field u : R+ × R is (S) -integrable if the action of u for all F ∈ (S) satisfies

< u, F >∈ L1(µ) (3.290)

The (S)∗-integral denoted by

. Z I = u(t, z)µ(dt, dz) (3.291) R+×R 70 is a unique element in (S)∗ such that Z  Z u(t, z)µ(dt, dz),F = hu(t, z),F i µ(dt, dz). (3.292) R+×R R+×R Theorem 3.9.5 Wick-Skorohod Identity Let u be Skorohod integrable with respect to M, then u(t, z)  M˙ (t, x), for all (t, z) ∈

∗ R+ × R is (S) integrable and Z Z u(t, x)M(δt, dz) = u(t, z)  M˙ (t, z)µ(dt, dz). (3.293) R+×R R+×R Proof Since G ∈ (S)∗, then it remains to show the identity in (3.293). Likewise, since u is Skorohod-integrable with respect to M, then it has a representation of the form ∞ X X u(t, z) = cα(t, z)Kα = In(fn(·, (t, z))) (3.294) α∈I n=0 2 n where fn(·, (t, z)) ∈ Ls(µ ). The right-hand side of (3.293) yields the following: Z u(t, z)  M˙ (dt, dz)µ(dt, dz) R+×R Z X X X = cα(t, x)Kα  ek(t)πm(z)Kκ(k,m) µ(dt, dz) × R+ R α∈I k∈N m∈N X X X Z = cα(t, x)ek(t)πm(z)µ(dt, dx)Kα+κ(k,m) × α∈I k∈N m∈N R+ R X X X = < cα, ekpm >L2(µ) Kα+κ(k,m) . (3.295) α∈I k∈N m∈N Now since

X ⊗ˆ α fn(·, (t, z)) = cα(t, z)δ (3.296) |α|=n

Then, fn(·, (t, z)) has the following orthonormal expansion

X X X ⊗ˆ α fn(·, (t, z)) = < cα, ekπm >L2(µ) δ ek(t)πm(z). (3.297) k∈N m∈N |α|=n 71

Hence, the left-hand side of (3.293) yields the following: Z u(t, z)M(δt, dz) R+×R ∞ Z X = In(fn(·, t, x))M(δt, dz) R+×R n=0   Z ∞ X X X X ⊗ˆ α = In  < cα, ekπm >L2(µ) δ ek(t)πm(z) M(δt, dz) × R+ R n=0 k∈N m∈N |α|=n   ∞ Z X X X X ⊗ˆ α ⊗κ(k,m) = In  < cα, ekπm >L2(µ) δ ⊗ δ  M(δt, dz) × n=0 R+ R k∈N m∈N |α|=n

∞   X X X X ⊗ˆ (α+κ(k,m)) = In+1  < cα, ekπm >L2(µ) δ  n=0 k∈N m∈N |α|=n ∞   X X X X ⊗ˆ (α+κ(k,m)) = < cα, ekπm >L2(µ) In+1 δ n=0 |α|=n k∈N m∈N X X X = < cα, ekpm >L2(µ) Kα+κ(k,m) . (3.298) α∈I k∈N m∈N Finally, form (3.295) and (3.298) gives us the desired identity.

3.10 Clark-Ocone Theorem in L2(P )

With the framework concepts presented for the white noise theory for Canonical L´evyprocesses, our goal is to show a Clark-Ocone theorem in L2(P ) with respect to the independent random measure M. The steps in proving the Clark-Ocone theorem in L2(P ) is similar to the Wiener and Poisson white noise cases [27] by first showing the Clark-Ocone theorem for a Wick polynomial then establish an auxiliary lemma (Lemma 3.10.4) that will prove the Clark-Ocone theorem in L2(P ). Denote the following polynomial:

X α N P (x) = cαx , x ∈ R , cα ∈ R (3.299) α∈I 72

α . α1 α2 0 where x = x1 x2 ··· and xj = 1, j ∈ N. Denote its Wick version of the polynomial T at X = (X1, ··· Xn) by

 X α P (X) = cαX . (3.300) α∈I

∗ Throughout this section, we assume that a process u : R+ → (S) is differentiable in the (S)∗ sense. define the following processes in (S)∗ as follows: Z Xk,m = ek(x)πm(s)M(ds, dx) = Kκ(k,m) , R+×R Z (t) . (t) Xk,m = ek(x)πm(s)M(ds, dx) = Kκ(k,m) . (3.301) [0,t]×R from the relation (3.163) in (S)∗, then from the Wick-Skorohod identity, we have Z ˙ Xk,m = ek(x)πm(s)M(ds, dx)µ(ds, dx), R+×R Z (t) ˙ Xk,m = ek(x)πm(s)M(ds, dx)µ(ds, dx). (3.302) [0,t]×R Then, from the have the following derivative in (S)∗

d X(t) = e (x)L (t) (3.303) dt k,m k m

where Z ˙ Lm(t) = πm(t)M(ds, dx)η(dx). (3.304) R

We let P (x) be a polynomial in Rn, that is, P (x) can be written as follows:

X α n n P (x) = cαx x ∈ R , cα ∈ R, I = N . (3.305) α∈I and let

T X =(Xk1,m1 , ··· ,Xkn,mn ) , X(t) =(X(t) , ··· ,X(t) )T , k1,m1 kn,mn T α =(ακ(k1,m1), ··· , ακ(kn,mn)) (3.306) 73

where ki, mi ∈ N, for all i ∈ {1, ··· , n}. Then, its Wick version of the polynomial at T X = (X1, ··· Xn) by

 X α P (X) = cαX . (3.307) α∈I Moreover, we have the following identities:

α κ(k1,m1) κ(kn,mn) X = (Xk1,m1 )  · · ·  Xk,mn = Kα, α  κ(k1,m1)  κ(kn,mn) X(t) = X(t)  · · ·  X(t) = (t). (3.308) k1,m1 k,mn Kα

If F = P (X) ∈ G∗, then its generalized conditional expectation in G∗ is given by

X (t)α  (t) E[F |Ft] = cα X = P X . (3.309) α∈I

∗ We define the concept of FT measurablity in G in the following definition.

∗ Definition 3.10.1 [27] Let T > 0 be a constant, we say that F ∈ G is FT measur- able if

E[F |FT ] = F. (3.310)

∗ Lemma 3.10.1 [27] F ∈ G is FT measurable iff F can be written as

X (T )α F = cα X . (3.311) α∈I Lemma 3.10.2 Differentiation of the Wick Polynomial

(i)

n X X α−κ(ki,mi) Dt,zP (X) = cαακ(ki,mi)X eki (t)πmi (z), (3.312) i=1 α∈I n  X X (α−κ(ki,mi)) Dt,zP (X) = cαακ(ki,mi)X eki (t)πmi (z). (3.313) i=1 α∈I

(ii)

n  d X  ∂P  P (X(t)) = X(t)  e (t)L (z). (3.314) dt ∂x ki mi i=1 i 74

Proof (i) From the chain rule,

n X ∂P (X) D P (X) = D X , (3.315) t,z ∂x t,z ki,mi i=1 i n X ∂P (X) D P (X) = D X . (3.316) t,z ∂x t,z ki,mi i=1 i Since

∂P (X) X α−κ(ki,mi) = cαακ(ki,mi)X , (3.317) ∂xi α∈I  ∂P (X) X (α−κ(ki,mi)) = cαακ(ki,mi)X (3.318) ∂xi α∈I and Z

Xki,mi = eki (s)πmi (x)M(ds, dx) = I1(eki πmi ) (3.319) R+×R Then,

Dt,zXki,mi = eki (t)πmi (x). (3.320)

Plugging (3.317) − (3.320) into (3.315) − (3.316), yields the desired result.

(ii) From the Wick chain rule and (3.319), we obtain n   d X ∂P d (t) P (X(t)) = X(t)  X dt ∂x dt ki,mi i=1 i n  X  ∂P  = X(t)  e (t)L (z). (3.321) ∂x ki mi i=1 i

To show the Clark-Ocone theorem in L2(P ), we first establish a Clark-Ocone theorem for polynomials.

Theorem 3.10.3 Clark-Ocone Theorem for Polynomials

∗ Let F ∈ G be an FT measurable Wick polynomial of degree n, then Z F =E[F ] + E[Dt,zF |Ft−]M(dt, dz). (3.322) [0,T ]×R 75

Proof Since F Wick polynomial of degree n, then it has of the form

 X α F = P (X) = cαX (3.323) α∈I

n where P (x) is a polynomial in R . Moreover, since F is an an FT measurable, then " #

X (T )α F =E[F |FT ] = E cα X FT α∈I h α i X (T ) = cαE X FT α∈I X (T )α = cα X . (3.324) α∈I

The expansion F and E[Dt,zF |Ft−] consists of finite number of terms. Hence, both processes are Skorohod integrable. Then from the Wick-Skorohod identity and from the preceding lemma, Z E[Dt,zF |Ft−]M(dt, dz) [0,T ]×R " n  # Z X  ∂P  = E (X) e (t)π (z) F  M˙ (t, z)η(dz)dt ∂x ki mi t− [0,T ]×R i=1 i n  Z X  ∂P  = X(t) e (t)π (z)  M˙ (t, z)η(dz)dt ∂x ki mi [0,T ]×R i=1 i n  Z X  ∂P  Z = X(t)  π (z)M˙ (t, z)η(dz)e (t)dt ∂x mi ki [0,T ] i=1 i R n  Z X  ∂P  Z = X(t)  e (t) π (z)M˙ (t, z)η(dz)dt. ∂x ki mi [0,T ] i=1 i R (3.325)

Now, since

Z d e (t) π (z)M˙ (t, z)η(dz)dt = e (t)L (t) = X(t) . (3.326) ki mi ki mi ki,mi R dt 76

Then, from the Wick chain rule and since F is be FT − measurable, we finally obtain Z E[Dt,zF |Ft−]M(dt, dz) [0,T ]×R Z n   X ∂P d (t) = X(t)  X dt ∂x dt ki,mi [0,T ] i=1 i Z d = P  X(t) [0,T ] dt =P  X(T ) − P  X(0)

=E[F |FT ] − E[F |F0] =F − E[F ]. (3.327)

We need the following auxiliary lemma in establishing a Clark-Ocone in L2(P ).

Theorem 3.10.4 Let F ∈ G∗, then we have the following:

∗ ∗ (i) Dt,zF ∈ G , G , µ a.e.,

∗ ∗ (ii) Let Fn ∈ G , ∀ n ∈ N such that Fn → F in G as n → ∞, then there exists a ∗ ∗ sub-sequence Fnk , k ∈ N such that Dt,zFnk → Dt,zF ∈ G as k → ∞, G , µ a.e.

Proof The proof is similar to the proof of Okur [66] in the Wiener case.

(i) Since F ∈ G∗, then it has a formal expansion X F = cαKα (3.328) α∈I and there exists q ∈ R such that

2 X 2 −2q|α| ||F ||G−q = α!cαe . (3.329) α∈I Then X X X Dt,zF = cβ+εκ(k,m) (βκ(k,m) + 1)ek(t)πm(z)Kβ β∈I k∈N m∈N X = gβ(t, z)Kβ (3.330) β∈I 77 where

X X gβ(t, z) = cβ+εκ(k,m) (βκ(k,m) + 1)ek(t)πm(z). (3.331) k∈N m∈N

Since ekpm is an orthonormal basis with respect to µ, then

Z X X |g (t, z)|2η(dz)dt = c2 (β + 1)2. (3.332) β β+εκ(k,m) κ(k,m) × R+ R k∈N m∈N Also, since

X k D F k2 = g (t, z)β!e−2(q+1)|β|. (3.333) t,z G−(q+1) β β∈IN

From the identity (z + 1)e−z ≤ 1 for all z ≥ 0, then we obtain the following expression Z k D F k2 η(dx)dt t,z G−(q+1) R+×R X X X = c2 (β + 1)2β!e−2(q+1)|β| β+εκ(k,m) κ(k,m) β∈I k∈N m∈N X X X = (β + 1)e−2(q+1)|β|c2 (β + ε )! κ(k,m) β+εκ(k,m) κ(k,m) β∈I k∈N m∈N X X X ≤ (|β| + 1)e−2(q+1)|β| c2 (β + ε )! β+εκ(k,m) κ(k,m) β∈I k∈N m∈N X −2(q+1)|β| X 2 = (|β| + 1)e cαα! β∈I |α|=|β|+1 ∞ X X 2 −2(q+1)n = α!cαe n=0 |α|=n+1 ∞ X X 2 −2qn ≤ α!cαe n=0 |α|=n

X 2 −2q|α| = α!cαe α∈I 2 =||F ||G−q . (3.334)

∗ ∗ Hence, Dt,zF ∈ G−(q+1) ⊂ G , G , µ a.e. 78

∗ (ii) Since Fn → F in G as n → ∞, then ∃q ∈ N0 such that k Fn − F kG−q → 0

as n → ∞. Let Gn = Fn − F , then it is suffice to show that there exists a

sub-sequence Gnk such that Dt,xGnk → 0. From our previous result, we obtain Z k D G k2 η(dx)dt ≤k G k2 → 0. t,z n G−(q+1) n G−q R+×R (3.335)

2 Hence, k Dt,zGn kG−(q+1) → 0 in L (η × λ). Thus, there exists a sub-sequence ∗ ∗ k Dt,zGnk kG−(q+1) for k ∈ N such that as k → ∞,Dt,zGnk → 0 in G , G , µ a.e.

Theorem 3.10.5 Clark-Ocone Theorem in L2(P )

2 Let F ∈ L (P ) be FT measurable, then Z F =E[F ] + E[Dt,zF |Ft−]M(dt, dz) (3.336) [0,T ]×R 2 where E[Dt,zF |Ft−] ∈ L (P × µ), (t, z) ∈ [0,T ] × R.

Proof Since F is FT -measurable, then it has chaos expansion of the form X F = cαKα. (3.337) α∈I

Let Fn be the truncation of F such that X Fn = cαKα (3.338) α∈In where In = {α ∈ I : |α| ≤ n, Index(α) ≤ n}. Then, from the Clark-Ocone theorem for polynomials, for at n ∈ N, Z Fn =E[Fn] + E[Dt,zF |Ft−]M(dt, dz). (3.339) [0,T ]×R From Itˆo’srepresentation theorem, there exists a unique predictable process u(t, z),

(t, z) ∈ [0,T ] × R such that Z  E u2(t, z)µ(t, z) < ∞ (3.340) [0,T ]×R 79 and Z F =E[F ] + u(t, z)M(dt, dz). (3.341) [0,T ]×R From the isometry relation (Theorem 2.8.1), we obtain

 2 E |(Fn − E[Fn]) − (F − E[F ])| " Z 2#

=E (E[Dt,zFn|Ft−] − u(t, z))M(dt, dz) [0,T ]×R Z  2 =E |E[Dt,zFn|Ft−] − u(t, z)| µ(dt, dz) . (3.342) [0,T ]×R

2 Then, since Fn → F in L (P ), then the right hand side approaches zero as n → ∞. Thus, we have the following convergence:

2 E[Dt,zFn|Ft− ] → u(t, z),L (P × µ). (3.343)

2 ∗ Now since Fn → F ∈ L (P ) ⊂ G , then from Lemma 3.10.4 then there exists a sub-sequence Fnk , k ∈ N such that

∗ ∗ E[Dt,xFnk |Ft−] → E[Dt,xF |Ft−] ∈ G , k → ∞ G , µ a.e. (3.344)

Taking a further sub-sequence, we have

2 E[Dt,xFnk |Ft−] → u(t, z), k → ∞ L (P ), µ a.e. (3.345)

Thus, it follows that

2 u(t, z) = E[Dt,xF |Ft−],L (P ), µ a.e. (3.346)

3.11 Multivariate Extension

In this section, we provide an overview of extending the white noise frame for the Canonical L´evyprocess in the multivariate setting. We follow a similar framework 80

of [1] and [64] which combines the Gaussian white noise process and pure jump L´evy white noise process as a product σ-field of these processes. Since the arguments of the theorems in the multivariate case is similar to the univariate case, then we shall state the theorems without proof.

3.11.1 Notations

(j) (j) (j) (j) Let, (Ω , F , {Ft }t≥0,P ), j ∈ {1, ··· ,N} be an independent probability space for the white noise Canonical L´evyprocess. Its independent measure Mj is given by Z Z ˜ Mj(E) = σj dWj(t) + zdNj(dt, dz) (3.347) 0 E0 E 0 where E0 = {t ∈ R+ :(t, 0) ∈ E} and E = E \ E0. Then for E1,E2 ∈ B(R+ × R)

such that µj(E1) < ∞, µj(E2) < ∞

E[Mj(E1)Mj(E2)] = µj(E1 ∩ E2) (3.348)

where µj is a measure on ([0,T ] × R, B([0,T ] × R), where Z Z 2 2 µj(E) = σj dt + z dνj(z)dt, E ∈ B([0,T ] × R). (3.349) 0 E0 E In differential form, we have

2 2 µj(dt, dz) = σj dδ0(z)dt + z (1 − δ0(z))dνj(z)dt = λj(dt)ηj(dx) (3.350)

where λj(dt) = dt is the Lebesgue measure and

2 2 ηj(dz) = σj dδ0(z)dt + z (1 − δ0(z))dνj(z). (3.351) 81

Denote (Ω, F, {Ft}t≥0,P ) be a filtered probability space of the multivariate white (j) (j) (j) (j) noise Canonical L´evyprocess which is a product space of (Ω , F , {Ft }t≥0,P ), j ∈ {1, ··· ,N} where

Ω =Ω(1) × · · · × Ω(N),

F =F (1) ⊗ · · · ⊗ F (N),

(1) (N) Ft =Ft ⊗ · · · ⊗ Ft , t ≥ 0 P =P (1) × · · · × P (N). (3.352)

(1) (N) (1) We let the index α = (α , ··· , α ) where αj ∈ I and the index set IN = I × (N) (j) ˙ · · · × I where I = I where j ∈ {1, ··· ,N}. The white noise processes Xj, Xj, ˙ ˙ ˙ Mj, and Mj are defined naturally from X, X, M, and M respectively. Likewise, we have the following Radon-Nikodym relation in (S)∗

˙ Mj(dt, dz) = Mj(t, z)µj(dt, dz). (3.353)

3.11.2 Chaos Expansion

Denote the following notations:

N N X Y |α| = |α(j)|, α! = α(j)!. (3.354) j=1 j=1 Consider the product of the form

N Y (j) (1) (N) Kα(ω) = Kα(j) (ω ), ω = (ω , ··· , ω ) (3.355) j=1

{ } forms an orthogonal basis is L2(P ) with the following relation: Kα α∈IN

E [KαKβ] = α!1{α=β}. (3.356)

2 For F ∈ L (P ), FT -measurable can be uniquely written of the form

X F = cαKα. (3.357) α∈IN 82

for some constants cα ∈ R. In terms of the iterated integral, F can be written as follows: N X Y  F = Inj fj,nj . (3.358) N j=1 n∈N0

T 2 nj where n = (n1, ··· , nN ) ,nj ∈ N0 and fj,nj ∈ Ls(µ ), j ∈ {1, ··· ,N}. From isometry and independence, we have following relation:

N 2 X 2 Y (j) X 2 ||F ||L2(P ) = cα α ! = cαα!. (3.359) α∈IN j=1 α∈IN Alternatively, in terms of the iterated integral:

N N 2 X Y 2 X Y 2 n n ||F ||L2(P ) = nj!||fj,nj ||L2(µ j ) = n! ||fj,nj ||L2(µ j ). (3.360) N j=1 N j=1 n∈N0 n∈N0

3.11.3 Stochastic Test and Distribution Functions

The space G and G∗

Suppose that F has a formal expansion of the form (3.357). Then, F belongs to

the space Gq, q ∈ R if N 2 X 2 Y (j) (j) ||F ||Gq = cα α ! exp(2qα ) α∈IN j=1 X 2 = cαα! exp(2q|α|) < ∞. (3.361)

α∈IN Alternatively, in terms of the iterated integral:

N 2 X Y 2 n ||F ||Gq = nj!||fj,nj ||L2(µ j ) exp(2qnj) N j=1 n∈N0 N X Y 2 n = n! ||fj,nj ||L2(µ j ) · exp(2q|n|) < ∞. (3.362) N j=1 n∈N0 Define the stochastic test function G given by \ G = Gq (3.363) q>0 83 endowed with inductive topology. The stochastic test function G∗ is defined as

[ G = G−q (3.364) q>0 endowed with projective topology. Note that the G∗ is the dual of G. Let G ∈ G and F ∈ G∗ with the following formal expansion:

X X F = cαKα,G = dαKα. (3.365) α∈IN α∈IN Then the action of F on G is given by

X < G, F >G,G∗ = α!cαdα. (3.366)

α∈IN

Kontratiev Spaces and Hida Spaces

Let p ∈ [0, 1]. Suppose that F has a formal expansion of the form (3.357). Then,

F belongs to the space (S)q, q ∈ R if

N 2 X 2 Y (j) 1+p α(j)q kF kp,q = cα (α !) (2N) α∈IN j=1 N X 2 Y (j) α(j)q = cα α !(2N) < ∞. (3.367) α∈IN j=1

Define the Kondratiev test function (S)p as \ (S)p = (S)p,q (3.368) q>0 endowed with the projective topology. The Kondratiev distribution function (S)−p as

∗ [ (S) = (S)−p,−q (3.369) q>0 endowed with the inductive topology. Note that (S)∗ is a dual of (S). Let G ∈ (S) and F ∈ S∗ with the following formal expansion:

X X F = cαKα,G = dαKα. (3.370) α∈IN α∈IN 84

Note that (S)−p is a dual of (S)p. The action of G ∈ (S)−p on F ∈ (S)p, with the formal expansion of F and G of the form (3.121) is given by

X < G, F >= α!cαdα. (3.371)

α∈IN The Hida spaces are the special cases of the Kondratiev spaces. The Hida test ∗ ∗ function (S) and Hida distribution function (S) is given by (S) = (S)0 and (S) =

(S)−0 respectively. From the above definitions, we have the following inclusions for p ∈ [0, 1]:

2 ∗ (S)1 ⊂ (S)p ⊂ (S)0 ⊂ G ⊂ L (P ) ⊂ G ⊂ (S)−0 ⊂ (S)−p ⊂ (S)−1. (3.372)

3.11.4 Wick Product

Definition 3.11.1 Wick Product Let F = P a ∈ (S) and G = P b ∈ (S) , then the Wick Product α∈IN αKα −1 β∈IN βKβ −1 of X and Y denoted by X  Y is defined as

X X X X X  Y = aαbβKα+β = aαbβKγ. (3.373) α∈IN β∈IN γ∈I α+β=γ

3.11.5 Stochastic Derivatives

We extend the stochastic derivative in the multivariate case as follows:

X X ⊗ˆ ε(j) Dj,t,zF = cααiK (j) δ i (t, z). (3.374) α−εi α∈IN i∈N

(j) T th (j) where εi = (0, ··· , εi, ··· , 0) such that εi is the j subvector of εi and a zero vector otherwise. Likewise, we can also express Dj,t,zF as follows: 85

ˆ (j) X X X (j) ⊗εκ(k,m) Dj,t,zF = cαακ(k,m)K (j) δ (t, z) (3.375) α−εκ(k,m) α∈IN k∈N m∈N X X X (j) = cαακ(k,m)K (j) ek(t)πm(z) (3.376) α−εκ(k,m) α∈IN k∈N m∈N X X X (j) = c (j) (βκ(k,m) + 1)Kβek(t)πm(z). (3.377) β+εκ(k,m) β∈IN k∈N m∈N Theorem 3.11.1 Let N X Y 2 F = Ini (fi,ni ) ∈ L (P ). (3.378) N i=1 n∈N0

T 2 ni where n = (n1, ··· , nN ) ,nj ∈ N0 and fi,ni ∈ Ls(µ ), i ∈ {1, ··· ,N}. Then, ∗ Dj,t,zF ∈ G , µ a.e. given by

∞ N X  X Y ∗ Dj,t,zF = njInj fj,nj Ini (fi,ni ) ∈ G . (3.379)

nj =1 N−1 i=1,i6=j n/nj ∈N0 Theorem 3.11.2 Closability of Stochastic Derivatives

∗ Let Fm,F ∈ G such that as m → ∞

∗ (i) Fm → F in G ,

∗ (ii) Dj,t,zFm converges in G for j ∈ {1, ··· ,N}.

∗ Then, Dj,t,zFm → Dj,t,zF in G , j ∈ {1, ··· ,N}.

3.11.6 Generalized Conditional Expectation

Let F has a formal expansion of the form (3.358). The conditional expectation

E[F |FA], A ∈ B([0,T ]) is given as

∞ N X Y  ⊗nj  E[F |FA] = Inj fj,nj 1A . N j=1 n∈N0 If F ∈ G∗, we can easily show, by writing the chaos expansion in terms of the iterated integral, the following properties conditional expectation in G∗ holds in the multivariate case. 86

3.11.7 Skorohod Integration on G∗

Definition 3.11.2 Skorohod Integral in G∗

∗ Let u : R+ × R → G be a random field with the formal expansion given by

N ∞ X Y X  ∗ u(t, z) = Ini (fi,ni ) Inj fj,nj (·, (t, z)) ∈ G . (3.380) N−1 i=1,i6=j nj =0 n/nj ∈N0

T 2 nj where n = (n1, ··· , nN ) ,nj ∈ N0, fj,nj ∈ Ls(µ ), j ∈ {1, ··· ,N} such that for some q > 0,

N X −2q(|n|+1) Y 2 2 ˜ n (n + j)!e ||fi,ni ||L2(µni )||fj,nj ||L2(µ j+1 ) < ∞ (3.381) N i=0,i6=j n/nj ∈N0

Define the Skorohod integral of u with respect to Mj as follows: Z δj(u) = u(t, x)Mj(δt, dx) R+×R N ∞ X Y X  ˜  = Ini (fi,ni ) Inj+1 fj,nj (3.382)

N−1 i=0,i6=j nj =1 n/nj ∈N0 We say that u is Skorohod integrable if δ(u) ∈ G∗, that is, there exists some q > 0 such that ||δ(u)||G−q < ∞. Moreover, we have the following

2 ||δ(u)||G−q N ∞ X Y 2 −2qni X 2 −2q(nj +1) ˜ n = ni!||fi,ni ||L2(µni )e nj+1!||fj,nj ||L2(µ j+1 )e N−1 i=0,i6=j nj =1 n/nj ∈N0 N X −2q(|n|+1) Y 2 2 ˜ n = (n + j)!e ||fi,ni ||L2(µni )||fj,nj ||L2(µ j+1 ) < ∞ (3.383) N i=0,i6=j n/nj ∈N0

T where j = (0, ·, 1, ··· , 0) is a unit vector of length n with one at the jth-component and zero otherwise.

Theorem 3.11.3 Fundamental Theorem of Stochastic Calculus

∗ Let u : R+ × R → G be a random field satisfying the following conditions:

(i) u ∈ L2(P × µ), 87

(ii) Dj,t,zu is Skorohod integrable for all (t, z) ∈ [0,T ] × R,

∗ ∗ (iii) Dj,t,zδ(u) ∈ G and δ(Dt,zu) ∈ G , for all (t, z) ∈ [0,T ] × R and there exists q > 0 such that Z 2 ||Dj,t,zδ(u)||G−q µj(dt, dz) < ∞, (3.384) [0,T ]×R Z 2 ||δ(Dj,t,zu)||G−q µj(dt, dz) < ∞. (3.385) [0,T ]×R Then,

Dj,t,z (δ(u)) = u(t, z) + δ(Dj,t,zu), (3.386) that is, Z Z Dj,t,z u(s, x)Mj(δs, dx) = u(t, z) + Dj,t,zu(s, x)Mj(δs, dx). (3.387) [0,T ]×R [0,T ]×R Corollary 3.11.4 Let u satisfy the conditions of the preceding theorem and in addi- tion, suppose that it is also predictable, then we have the following identity: Z Z Dj,t,z u(s, x)Mj(ds, dx) = u(t, z) + Dj,t,zu(s, x)Mj(ds, dx). (3.388) [0,T ]×R [t,T ]×R Corollary 3.11.5 Let u satisfy the conditions of the preceding corollary, then Z Z −1 Dj,t,z u(s, 0)dWj(s) =σj u(t, 0)1{z=0} + Dj,t,zu(s, 0)dW (s), [0,T ] [t,T ] (3.389) Z Z ˜ −1 ˜ Dj,t,z u(s, x)xNj(ds, dx) =σj u(t, z)1{z6=0} + Dj,t,zu(s, x)Nj(ds, dx). [0,T ]×R0 [t,T ]×R0 (3.390) To conclude this section, we state the Wick-Skorohod theorem in the multivariate case.

Theorem 3.11.6 Wick-Skorohod Theorem ˙ Let u be Skorohod integrable with respect to Mj, then u(t, z)  Mj(t, z), for all (t, z) ∈ ∗ R+ × R is (S) integrable and Z Z ˙ u(t, z)Mj(δt, dz) = u(t, z)  Mj(t, z)µj(dt, dz). (3.391) R+×R R+×R 88

3.11.8 Clark Ocone Theorem in L2(P )

We state the Clark-Ocone theorem in the multivariate case in L2(P ), as follows.

Theorem 3.11.7 Clark-Ocone Theorem in L2(P )

2 Let F ∈ L (P ) be FT measurable, then

N X Z F =E[F ] + E[Dj,t,zF |Ft−]Mj(dt, dz) (3.392) j=1 [0,T ]×R

2 where E[Dj,t,zF |Ft−] ∈ L (P × µ), (t, z) ∈ [0,T ] × R for j ∈ {1, ··· ,N}. 89

4. CLARK-OCONE THEOREM UNDER THE CHANGE OF MEASURE AND MEAN-VARIANCE HEDGING

4.1 Girsanov Theorem for L´evyProcesses

To prove the Clark-Ocone theorem under the change in measure, we shall state the Girsanov theorem to be able to define the equivalent measure Q ∼ P . We state the Girsanov theorem for L´evyprocesses.

Theorem 4.1.1 [27] [65] Girsanov Theorem for L´evyProcesses

Suppose that there exists a predictable processes uj(s), j ∈ {1, ··· , d} and θj(s, x) < 1, j ∈ {1, ··· , l} where (s, x) ∈ [0,T ] × R0 such that Z T 2 uj (s)ds <∞, a.s., (4.1) 0 Z 2 (| log(1 − θj(s, x))| + θj (s, x))νj(dx)ds <∞, a.s., (4.2) [0,T ]×R0 for all j ∈ {1, ··· ,N}. Denote the Doleans-Dade exponential Z(t) for t ∈ [0,T ] by

d X Z t 1 Z t  Z(t) = exp − u (s)dW (s) + u2(s)ds + (4.3) j j 2 j j=1 0 0 l Z ! X  ˜  log(1 − θj(s, x))Nj(ds, dx) + (log(1 − θj(s, x)) + θj(s, x))νj(dx)ds j=1 [0,T ]×R0 d Z t l Z !! X X ˜ =E − uj(s)dWj(s) + θj(s, x)Nj(ds, dx) (4.4) j=1 0 j=1 [0,t]×R0

where E is the stochastic exponential operator. Define a measure Q on FT by

dQ(ω) = Z(T, ω)dP (ω). (4.5) 90

Suppose that a Novikov-type condition is satisfied (to be discussed later), then E[Z(T )] = 1 and W Q and N˜ Q is a Brownian motion and compensated Poisson random measure under Q respectively where

Q dWj (t) =dWj(t) + uj(t)dt, (4.6) ˜ Q ˜ Nj (dt, dz) =Nj(dt, dz) + θj(t, z)νj(dz)dt (4.7)

for all j ∈ {1, ··· ,N}.

Remark 4.1.2 We can write (4.6) and (4.7) in matrix-vector form as follows:

dW Q(t) =dW (t) + u(t)dt, (4.8)

N˜ Q(dt, dz) =N˜(dt, dz) + θ(t, z)ν(dz)dt (4.9)

where

T ˜ ˜ ˜ T W (t) = [W1(t), ··· ,Wd(t)] , N(dt, dz) = [N1(dt, dz), ··· , Nl(dt, dz)] , Q Q Q T ˜ Q ˜ Q ˜ Q T W (t) = [W1 (t), ··· ,Wd (t)] , N (dt, dz) = [N1 (dt, dz), ··· , Nl (dt, dz)] ,

T u(t) = [u1(t), ··· , ud(t)] , θ(t, z) = diag[θ1(t, z), ··· , θl(t, z)],

T ν(dz) =[ν1(dz), ··· , νl(dz)] . (4.10)

Applying the Novikov-type conditions to Z(t) to become a martingale implies the following for all t ∈ [0,T ], the dynamics of Z is given as

dZ(t) =Z(t−)dL(t),

Z(0) =1 (4.11)

where L is a local martingale given by d l Z X X ˜ dL(t) = − uj(t)dWj(t) − θj(t, zj)Nj(dt, dz). (4.12) j=1 j=1 R0 Hence, Z(t) = E(M(t)) with the following continuous and discrete parts

N c X dL (t) = − uj(t)dWj(t), (4.13) j=1 N Z d X ˜ dL (t) = − θj(t, z)Nj(dt, dz). (4.14) j=1 R0 91

The corresponding angle bracket process is given as

d Z t c X 2 < L >t= uj (s)ds, (4.15) j=1 0 l Z d X 2 < L >t= θj (s, z)Nj(ds, dz). (4.16) j=1 [0,T ]×R0 We state two Novikov-type theorems below. The first theorem is attributed to Lepin- gle and Memin [55] and the second theorem is attributed to Protter and Shimbo [70].

Theorem 4.1.3 [55], [65] Let L be a local martingale such that ∆L > −1 and

1 X A(t) = < Lc > + [(1 + ∆L(s))) log(1 + ∆L(s))) − ∆L(s)] (4.17) 2 t s∈(0,t] which has a compensator B = {B(t)}t≥0 such that

E[exp(B(T ))] < ∞. (4.18)

Then, E(M) is a u.i. martingale and E(M) > 0 almost surely.

Applying (4.12) to Theorem 4.1.3, we have as follows:

d l 1 X Z t X Z A(t) = u2(s)ds + ((1 − θ (s, x)) log θ (s, x) + θ (s, z))N (ds, dx) 2 j j j j j j=1 0 j=1 [0,t]×R0 (4.19) then its compensator is given as follows:

d l 1 X Z t X Z B(t) = u2(s)ds + ((1 − θ (s, x)) log θ (s, x) + θ (s, x))ν (dx)ds. 2 j j j j j j=1 0 j=1 [0,t]×R0 (4.20)

Theorem 4.1.4 [70], [65] Let L be a square integrable martingale local such that ∆L > −1. If

 1  E exp < Lc > + < Ld > < ∞ (4.21) 2 T T then E(L) is a uniformly integrable martingale. 92

From the Theorem 4.1.4 for L = Z and from (4.15) − (4.16) we have the following Novikov-type condition

" d l !# 1 X Z T X Z E exp u2(s)ds + θ2(s, x)ν (dx)ds < ∞. (4.22) 2 j j j j=1 0 j=1 [0,T ]×R0

Definition 4.1.1 [27] Generalized Bayes Formula We let Q(dω)Z(T )P (dω), where Z is the Doleans-Dade exponential. Let F,Z(T )F ∈ (G)∗, then we define the Generalized Bayes Formula as follows:

E[Z(T )F |F ] EQ[F |F ] = A ,A ∈ B([0,T ]). (4.23) A Z(t)

Remark 4.1.5 If F,Z(T )F ∈ L2(P ) such that the Novikov condition is satisfied, then Z is a martingale and thus satisfies (4.23) which corresponds to the abstract Bayes rule.

4.2 Clark-Ocone Theorem in L2(P ) ∩ L2(Q)

Before we present Clark-Ocone Theorem in L2(P ) ∩ L2(Q), we shall present an important lemma. For simplicity of presentation, we shall assume that N = d = l. The summation can be adjusted accordingly if d 6= l.

Lemma 4.2.1 Stochastic derivative of Z(T ). Suppose that the assumptions of the Girsanov theorem for L´evyprocesses and the

assumptions of fundamental theorem of stochastic calculus for ui(t) and log(1−θi(t, z)) for i ∈ {1, ··· ,N} are satisfied. Then we have following stochastic derivative for Z(T ).

(i) If z = 0, then

" N Z −1 X Q Dj,t,0Z(T ) = Z(T ) −σj uj(t) − Dj,t,0ui(s)dWi (s) i=1 [t,T ] Z D θ (s, x)  + j,t,0 i N˜ Q(ds, dx) . (4.24) 1 − θ (s, x) i [t,T ]×R0 i 93

(ii) If z 6= 0, then

−1 Dj,t,zZ(T ) = z Z(T ) (exp(zDj,t,z log Z(T )) − 1) (4.25)

where

−1 Dj,t,z log Z(T ) = z log(1 − θj(t, z)) N X  Z T 1 Z T + − D u (s)dW Q(s) − z(D u (s))2ds j,t,z i i 2 j,t,z i i=1 0 t Z  zD θ (s, x) + z−1 log 1 − j,t,z i N˜ Q(ds, dx) 1 − θ (s, x) i [t,T ]×R0 i Z   zD θ (s, x)   + z−1 log 1 − j,t,z i (1 − θ (s, x)) + D θ (s, x) ν (dx)ds . 1 − θ (s, x) i j,t,z i i [t,T ]×R0 i (4.26)

Proof (i) Consider the process (4.3). We let

N X Z t 1 Z t  Y (t) = log Z(t) = − u (s)dW (s) + u2(s)ds + i i 2 i i=1 0 0 N Z X  ˜  log(1 − θi(s, x)Ni(ds, dx) + (log(1 − θi(s, x)) + θi(s, x))νi(dx)ds . i=1 [0,t]×R0 (4.27)

Then for all z ∈ R

N X  Z T 1 Z T D Y (T ) = − − D u (s)dW (s) − D u2(s)ds j,t,z j,t,z i i 2 j,t,z i j=1 0 0 Z ˜ + Dj,t,z log(1 − θi(s, x)Ni(ds, dx) [0,T ]×R0 Z  +Dj,t,z (log(1 − θi(s, x)) + θi(s, x))νi(dx)ds . [0,T ]×R0 (4.28)

From the chain rule,

Dj,t,0Z(T ) = Z(T )Dj,t,0Y (T ). (4.29) 94

Then, we have the following derivatives: Z T Z T −1 Dj,t,0 ui(s)dWi(s) = σj (s)uj(t)1{i=j} + Dj,t,0ui(s)dWi(s), (4.30) 0 t

Z T Z T Z T 2 2 Dj,t,0 ui (s)ds = Dj,t,0ui (s)ds = 2ui(s)Dj,t,0ui(s)ds, (4.31) 0 t t

Z ˜ Dj,t,0 log(1 − θi(s, x)Ni(ds, dx) [0,T ]×R0 Z −1 ˜ = Dj,t,0(x log(1 − θi(s, x))xNi(ds, dx) [t,T ]×R0 Z D θ (s, z ) = j,t,0 i j N˜ (ds, dz ), (4.32) 1 − θ (s, z ) i j [t,T ]×R0 i j

Z Dj,t,0 (log(1 − θi(s, x)) + θi(s, x))νi(dx)ds [0,T ]×R0 Z = (Dj,t,0 log(1 − θi(s, x)) + Dj,t,0θi(s, x))νi(dx)ds [t,T ]×R0 Z  D θ (s, x)  = − j,t,0 i + D θ (s, x) ν (dx)ds. (4.33) 1 − θ (s, x) j,t,0 i i [t,T ]×R0 i Collecting terms yields " N Z −1 X Dj,t,0Z(T ) =Z(T ) −σj uj(t) − Dj,t,0ui(s)(dWi(s) + ui(s)) i=1 [t,T ] Z D θ (s, x)  + j,t,0 i (N˜ (ds, dx) + ν (dx)ds) 1 − θ (s, x) i i [t,T ]×R0 i " N Z −1 X Q =Z(T ) −σj uj(t)1{σj 6=0} − Dj,t,0ui(s)dWi (s) i=1 [t,T ] Z D θ (s, x)  + j,t,0 i N˜ Q(ds, dx) . (4.34) 1 − θ (s, x) i [t,T ]×R0 i

(ii) From the chain rule, we have the following derivatives for z 6= 0,

Dj,t,zZ(T ) =Dj,t,z exp(Y (T ))

−1 =z [exp(Y (T ) + zDj,t,zY (T )) − exp(Y (T ))]

−1 =z Z(T )[exp(zDj,t,zY (T )) − 1], (4.35) 95

2 −1 2 2 Dj,t,zui (s) =z [(ui(s) + zDj,t,zui(s)) − ui (s)]

2 =2ui(s)Dj,t,zui(s) + z(Dj,t,zui(s)) , (4.36)

−1 Dj,t,z(x log(1 − θi(s, x))

−1 −1 =x z [log(1 − θi(s, x)) − zDj,t,zθi(s, x)) − log(1 − θi(s, x))]  zD θ (s, x) =z−1 log 1 − j,t,z i . (4.37) 1 − θi(s, x)

Hence, we have following derivatives:

Z T Z T Dj,t,z ui(s)dWi(s) = Dj,t,zui(s)dWi(s), (4.38) 0 t

Z T Z T 2 2 Dj,t,z ui (s)ds = Dj,t,zui (s)ds 0 t Z T 2 = 2ui(s)Dj,t,zui(s) + z(Dj,t,zui(s)) , (4.39) t

Z ˜ Dj,t,z log(1 − θi(s, x))Ni(ds, dzi) [0,T ]×R0 Z −1 −1 ˜ =z log(1 − θj(s, z))1{i=j} + Dj,t,z(x log(1 − θi(s, x))xNi(ds, dx) [t,T ]×R0 Z  zD θ (s, x) =z−1 log(1 − θ (s, z))1 + z−1 log 1 − j,t,z i N˜ (ds, dx), j {i=j} 1 − θ (s, x) i [t,T ]×R0 i (4.40)

Z Dj,t,z (log(1 − θi(s, x)) + θi(s, x))νi(dx)ds [0,T ]×R0 Z = (Dj,t,z log(1 − θi(s, x)) + Dj,t,zθi(s, x))νi(dx)ds [t,T ]×R0 Z   zD θ (s, x)  = z−1 log 1 − j,t,z i + D θ (s, x) ν (dz)ds. (4.41) 1 − θ (s, x) j,t,z i i [t,T ]×R0 i 96

Finally, collecting terms yield

−1 Dj,t,zY (T ) = z log(1 − θj(t, z))+ N X  Z T 1 Z T + − D u (s)(dW (s) + u (s)ds) − z(D u (s))2ds j,t,z i i i 2 j,t,z i i=1 t t Z  zD θ (s, x)   + z−1 log 1 − j,t,z i N˜ (ds, dx) + ν (dx)ds 1 − θ (s, x) i i [t,T ]×R0 i Z   zD θ (s, x)   + z−1 log 1 − j,t,z i (1 − θ (s, x)) + D θ (s, x) ν (dx)ds 1 − θ (s, x) i j,t,z i i [t,T ]×R0 i N X  Z T 1 Z T =z−1 log(1 − θ (t, z)) + − D u (s)dW Q(s) − z(D u (s))2ds j j,t,z i i 2 j,t,z i i=1 0 t Z  zD θ (s, x) + z−1 log 1 − j,t,z i N˜ Q(ds, dx) 1 − θ (s, x) i [t,T ]×R0 i Z   zD θ (s, x)   + z−1 log 1 − j,t,z i (1 − θ (s, x)) + D θ (s, x) ν (dx)ds . 1 − θ (s, x) i j,t,z i i [t,T ]×R0 i (4.42) 97

Theorem 4.2.2 Clark-Ocone theorem under the change of measure

2 2 2 Let F ∈ L (P ) ∩ L (Q) be FT -measurable and FZ(T ) ∈ L (P ). Suppose that the assumptions of Lemma 4.2.1 are satisfied, then

N Z T Q X Q Q F =E [F ] + σjE [Dj,t,0F − FKj(t)|Ft− ]dWj (t)+ (4.43) j=1 0 N Z X Q ˜ Q E [F (Hj(t, z) − 1) + zHj(t, z)Dj,t,zF |Ft− ]Nj (dt, dz) (4.44) j=1 [0,T ]×R0 where

N Z T Z  X Dj,t,0θi(s, x) K (t) = D u (s)dW Q(s) + N˜ Q(ds, dx) , (4.45) j j,t,0 i i 1 − θ (s, x) i i=1 t [t,T ]×R0 i

Hj(t, z) N X  Z T 1 Z T = exp − zD u (s)dW Q(s) − (zD u (s))2ds j,t,z i j 2 j,t,z i i=1 t t Z  zD θ (s, x) + log 1 − j,t,z i N˜ Q(ds, dx) 1 − θ (s, x) i [t,T ]×R0 i Z   zD θ (s, x)   + log 1 − j,t,z i (1 − θ (s, x)) + zD θ (s, x) ν (dx)ds 1 − θ (s, x) i j,t,z i i [t,T ]×R0 i (4.46) for all j ∈ {1, ··· ,N}.

Proof We let

Λ(t) = Z(t)−1 (4.47) 98

where Z(t) is given by (4.3). From Itˆo’slemma,

dΛ(t)

N N 1 X 1 2 X = − (−u (t)Z(t−))dW (t) + + (−u (t)Z(t−))2dt Z2(t−) i i 2 Z3(t−) i i=1 i=1 N X Z  1 1  − N˜ (dt, dz)+ Z(t−) + (−θ (t, z)Z(t−)) Z(t−) i i=1 R0 i N Z  −  X 1 1 −θi(t, z)Z(t ) − − ν (dz)dt Z(t−) + (−θ (t, z)Z(t−)) Z(t−) Z(t−)2 i i=1 R0 i " N N − X X 2 =Λ(t ) ui(t)dWi(t) + ui (t)dt i=1 i=1 N X Z 1  1  + − 1 N˜ (dt, dz)+ Z(t−) 1 − θ (t, z) i i=1 R0 i N # X Z 1  1  − 1 − θ (t, z) ν (dz)dt Z(t−) 1 − θ (t, z) i i i=1 R0 i " N − X  Q  2 =Λ(t ) ui(t) dWi (t) − ui(t)dt + ui (t)dt+ i=1 N X Z 1  1    − 1 N˜ Q(dt, dz) − θ (t, z)ν (dz)dt + Z(t−) 1 − θ (t, z) i i i i=1 R0 i N # X Z 1  1  − 1 − θ (t, z) ν (dz)dt Z(t−) 1 − θ (t, z) i i i=1 R0 i " N N Z # X X θi(t, z) =Λ(t−) u (t)dW Q(t) + N˜ Q(dt, dz) . (4.48) i i 1 − θ (t, z) i i=1 i=1 R0 i We let

Q Y (t) = E [F |Ft] (4.49)

Then assuming that a Novikov-type condition to Z(t) is satisfied, from the abstract Bayes rule,

E[FZ(T )|F ] Y (t) = t = Λ(t)V (t). (4.50) E[Z(T )|Ft] 99

2 P 2 Since FZ(T ) ∈ L (P ), then V (t) ≡ E [FZ(T )|Ft] ∈ L (P ). From the Clark-Ocone theorem in L2(P ), we have the following:

N X Z E[FZ(T )|Ft] =E[E[FZ(T )|Ft]] + E[Dj,s,zE[FZ(T )|Ft]|Fs− ]Mj(ds, dx) j=1 [0,T ]×R N X Z =E[FZ(T )] + E[Dj,s,z(FZ(T ))|Fs− ]Mj(ds, dx) (4.51) j=1 [0,T ]×R N X Z T =E[FZ(T )] + σjE[Dj,t,0(FZ(T ))|Ft− ]dWj(t) j=1 0 N Z X ˜ + E[Dj,t,z(FZ(T ))|Ft− ]xNj(dt, dx). (4.52) j=1 [0,T ]×R0

The first term is by the tower property of conditional expectation

E[E[FZ(T )|Ft]] = E[FZ(T )]. (4.53)

By applying the tower property of conditional expectation yields

E[Dj,s,zE[FZ(T )|Ft]Fs− ]] =E[Dj,s,z(FZ(T ))|Fs− ]1{s < t}. (4.54)

From the product rule

− − dY (t) = Λ(t )dV (t) + V (t )dΛ(t) + d[Λ,V ]t (4.55)

The quadratic variation is evaluated as follows:

" N − X d[Λ,V ]t =Λ(t ) uj(t)σjE[Dj,t,0(FZ(T ))|Ft− ]dt+ j=1 N Z # X θj(t, z) = E[D (FZ(T ))|F − ]zN (dt, dz) . (4.56) 1 − θ (t, z) j,t,z t j j=1 R0 j 100

Hence,

dY (t)

" N − X =Λ(t ) σjE[Dj,t,0(FZ(T ))|Ft− ]dWj(t)+ j=1 N Z # X ˜ E[Dj,t,z(FZ(T ))|Ft− ]zNj(dt, dz) + j=1 R0 " N N Z # − X Q X θj(t, z) Q E[FZ(T )|F − ]Λ(t ) u (t)dW (t) + N˜ (dt, dz) t j j 1 − θ (t, z) j j=1 j=1 R0 j " N − X Λ(t ) σjuj(t)E[Dj,t,0(FZ(T ))|Ft− ]dt+ j=1 N Z # X θj(t, z) E[D (FZ(T ))|F − ]z(N˜ (dt, dz) + ν (dz)dt) 1 − θ (t, z) j,t,z t j j j=1 R0 j " N − X Q =Λ(t ) σjE[Dj,t,0(FZ(T ))|Ft− ](dWj (t) − uj(t)dt)+ j=1 N Z # X ˜ Q E[Dj,t,z(FZ(T ))|Ft− ]z(Nj (dt, dz) − θj(t, z)ν(dz)dt) + j=1 R0 " N N Z # X X θj(t, z) Y (t) u (t)dW Q(t) + N˜ Q(dt, dz) j j 1 − θ (t, z) j j=1 j=1 R0 j " N − X Λ(t ) uj(t)σjE[Dj,t,0(Z(T )F )|Ft− ]dt+ j=1 N Z # X θj(t, z) E[D (Z(T )F )|F − ]z(N˜ (dt, dz) + (1 − θ (t, z))ν (dz)dt) 1 − θ (t, z) j,t,z t j j j j=1 R0 j " N − X Q =Λ(t ) (σjE[Dj,t,0(Z(T )F )|Ft− ] + E[Z(T )F uj(t)|Ft− ])dWj (t)+ j=1 Z    E[Dj,t,z(Z(T )F )|Ft− ] θj(t, z) Q z + E[Z(T )F |F − ] N˜ (dt, dz) (4.57) 1 − θ (t, z) 1 − θ (t, z) t j R0 j j Since Z(T )F ∈ L2(P ) then from product rule

Dj,t,z(Z(T )F ) = FDj,t,zZ(T ) + Z(T )Dj,t,zF + zDj,t,zZ(T )Dj,t,zF. (4.58) 101

Consider the case z = 0. Note that Kj(t) in (4.45) can be written in terms of

Dj,t,zZ(T ) in (4.24) as follows:

−1  Dj,t,0Z(T ) = Z(T ) F σj uj(t) − Kj(t) . (4.59)

From the product rule (4.58), we obtain

−1  Dj,t,0(Z(T )F ) =FZ(T ) F σj uj(t) − Kj(t) + Z(T )Dj,t,0F  −1  Z(T ) Dj,t,0F − F σj uj(t) + Kj(t) . (4.60)

On the other hand, consider the case z 6= 0. Note that Hj(t, z) in (4.46) can be written in terms of Dj,t,z log Z(T ) in (4.26) as follows:

Hj(t, z) = exp(zDj,t,z log Z(T ) − log(1 − θj(t, z)). (4.61)

Then we can express Dj,t,zZ(T ) in (4.25) as follows:

−1 Dj,t,zZ(T ) =z Z(T )[exp(zDj,t,z log Z(T )) − 1]

−1 =z Z(T )[(1 − θj(t, z))Hj(t, z) − 1]. (4.62)

Then, from the product rule (4.58), we obtain

Dj,t,z(Z(T )F )

−1 =z Z(T )[(1 − θj(s, z))Hj(t, z) − 1]F + Z(T )Dj,t,zF

+Z(T )[(1 − θj(s, z))Hj(t, z) − 1]Dj,t,zF

−1 =Z(T )[z ((1 − θj(s, z))Hj(t, z) − 1)F + (1 − θj(s, z))Hj(t, z)Dj,t,zF ]. (4.63) 102

Substituting (4.60) and (4.63) into (4.57) gives us

dY (t)

" N − X −1  = Λ(t ) (σjE[Z(T )(Dj,t,0 − F σj uj(t) + Kj(t) )|Ft− ] (4.64) j=1

Q + E[Z(T )F uj(t)|Ft− ])dWj (t) Z  1 + E[Z(T )[z−1((1 − θ (s, z))H (t, z) − 1) 1 − θ (t, z) j j R0 j   θj(t, z) ˜ Q + (1 − θj(s, z))Hj(t, z)Dj,t,zF ]z + E[Z(T )F |Ft− ] Nj (dt, dz) 1 − θj(t, z) " N − X Q =Λ(t ) σjE[Dj,t,0F − FKj(t)|Ft− ]dWj (t) (4.65) j=1 N Z # X ˜ Q + E[F (Hj(t, z) − 1) + zHj(t, z)Dj,t,zF |Ft− ]Nj (dt, dz) . (4.66) j=1 R0

Since F is FT measurable, then

Q Y (T ) = E [F |FT ] = F (4.67) and also

Q Q Y (0) = E [F |F0] = E [F ]. (4.68)

Then from the abstract Bayes rule, and from above boundary condition (4.67) and (4.68) , we finally obtain

d Z T Q X Q Q F =E [F ] + σjE [Dj,t,0F − FKj(t)|Ft− ]dWj (t)+ (4.69) j=1 0 l Z X Q ˜ Q E [F (Hj(t, z) − 1) + zHj(t, z)Dj,t,zF |Ft− ]Nj (dt, dz). (4.70) j=1 [t,T ]×R0

From the theorem, we have the following representation of F ∈ L2(P ) ∩ L2(Q) for both continuous and pure jump case. 103

(i) Continuous Case

N Z T Q X Q Q F =E [F ] + σjE [Dj,t,0F − FKj(t)|Ft− ]dWj (t) (4.71) j=1 0

where

N Z T X Q Kj(t) = Dj,t,0ui(s)dWi (s). (4.72) i=1 t

(ii) Pure Jump Case

N Z Q X Q ˜ Q F =E [F ] + E [F (Hj(t, z) − 1) + zHj(t, z)Dj,t,zF |Ft− ]Nj (dt, dz) j=1 [0,T ]×R0 (4.73)

where N Z   X zDj,t,zθi(s, x) H (t, z) = exp log 1 − N˜ Q(ds, dx) j 1 − θ (s, x) i i=1 [t,T ]×R0 i Z   zD θ (s, x)   + log 1 − j,t,z i (1 − θ (s, x)) + zD θ (s, x) ν (dx)ds . 1 − θ (s, x) i j,t,z i i [t,T ]×R0 i (4.74)

For the deterministic case, we have the following representation for F ∈ L2(P ) ∩ L2(Q).

Corollary 4.2.3 Deterministic Drift

Suppose that the assumptions of Theorem 4.2.2 are satisfied and in addition, uj and 2 2 θj are deterministic, for all j ∈ {1, ··· ,N} then for F ∈ L (P ) ∩ L (Q)

N Z T Q X Q Q F =E [F ] + σjE [Dj,t,0F |Ft− ]dWj (t) j=1 0 N Z X Q ˜ Q + E [Dj,t,zF |Ft− ]zNj (dt, dz) j=1 [0,T ]×R0 N Z Q X Q Q =E [F ] + E [Dj,t,zF |Ft− ]Mj (dt, dz). (4.75) j=1 [0,T ]×R 104

where M Q is an independent random measure on ([0,T ] × R, B([0,T ] × R)) such that Z Z M Q(E) = σ dW Q(t) + zdN˜ Q(dt, dz) (4.76) 0 E0 E 0 where E ∈ B(R+ × R), E0 = {t ∈ R+ :(t, 0) ∈ E} and E = E \ E0.

4.3 Mean Variance Hedging

4.3.1 Financial Modeling Under a L´evyMarket

In this section we give a brief overview of the financial market driven by L´evy processes. We will closely follow the discussions of Di Nunno [27], Øksendal and Sulem [65].

Asset Dynamics

We let (Ω, F,P ) be the probability space under the usual hypothesis. For t ∈ [0,T ], we denote the following filtration:

(i) Ft - full information {F}t∈[0,T ] ⊂ F,

(ii) Ht - partial information Ht ⊂ Ft for all t ∈ [0,T ].

We model our portfolio as follows. Let S0(t) be the risk-free asset process and Si(t), i ∈ {1, ··· ,N} be the risky asset processes where S1(t), ··· ,SM (t) are tradable M ≤

N and SM+1(t), ··· ,SN (t) are non-tradable. Under the objective P measure, we model the risky assets and risk-free asset with the following dynamics:

(i) risky asset P dynamics Z ˜ dSi(t) =µi(t)dt + σi(t)dW (t) + γi(t, z)N(dt, dz) R0 d Z l X X ˜ =µi(t)dt + σij(t)dWj(t) + γij(t, z)Nj(dt, dz), j=1 R0 j=1

Si(0) =xi > 0, i ∈ {1, ··· ,N} (4.77) 105

(ii) risk-free P dynamics

dS0(t) =r(t)S0(t)dt,

S0(0) =1 (4.78) where the risk-free rate r is deterministic and the coefficients µi, σi, γi are predictable and satisfies the Lipschitz and growth conditions and

σi(t) =[σi1(t), ··· , σid(t)], i ∈ {1, ··· d},

γi(t, z) =[γi1(t, z), ··· , γil(t, z)], i ∈ {1, ··· l}. (4.79)

In matrix-vector form, we can write (4.77) the dynamics of the risky asset of of the form Z dS(t) = µ(t)dt + σ(t)dW (t) + γ(t, z)N˜(dt, dz) (4.80) R0 where

T S(t) =[S1(t), ··· ,SN (t)] ,

T W (t) =[W1(t), ··· ,Wd(t)] , ˜ ˜ ˜ T N(dt, dz) =[N1(dt, dz), ··· , Nl(dt, dz)] ,

T µ(t) =[µ1(t), ··· , µN (t)] ,

σ(t) ={σij(t)}1≤i≤N,1≤j≤d,

γ(t, z) ={γij(t, z)}1≤i≤N,1≤j≤l. (4.81)

Suppose that the the drift terms ui(t) and θi(t, z) and the Doleans-Dade exponential (4.3) satisfies the Girsanov theorem for L´evyprocesses (Theorem 4.1.1). Then, under the change of measure Q ∼ P , Z ˜ dSi(t) =µi(t)dt + σi(t)dW (t) + γi(t, z)N(dt, dz) R0 Z Q ˜ Q =µi(t)dt + σi(t)(dW (t) − u(t)dt) + γi(t, z)(N (dt, dz) − θ(t, z)ν(dz)dt) R0 Z Q ˜ Q =µi(t)dt + σi(t)(dW (t) − u(t)dt) + γi(t, z)(N (dt, dz) − θ(t, z)ν(dz)dt) R0 Z Q ˜ Q =αi(t)dt + σi(t)dW (t) + γi(t, z)N (dt, dz) (4.82) R0 106 where Z αi(t) = µi(t) − σi(t)u(t) − γi(t, z)θ(t, z)ν(dz). (4.83) R0 We let the discounted value process S˜(t) given by

˜ Si(t) Si(t) = , t ∈ [0,T ]. (4.84) S0(t) then,

˜ S0(t) = 1. (4.85)

From Itˆo’slemma, we get     ˜ 1 1 1 dSi(t) = dSi(t) + Si(t)d + d Si, S0(t) S0(t) S0 t 1  Z  −r(t)dt = α (t)dt + σ (t)dW Q(t) + γ (t, z)N˜ Q(dt, dz) + S (t−) S (t) i i i i S (t) 0 R0 0 1  Z  = (α (t) − r(t)S (t−))dt + σ (t)dW Q(t) + γ (t, z)N˜ Q(dt, dz) . S (t) i i i i 0 R0 (4.86)

We also denote the discounted factor at the interval [t, T ] as follows:  Z T  D(t, T ) = exp − r(s)ds . (4.87) t

Arbitrage-Free Condition

We assume an arbitrage-free portfolio. From the fundamental theorem of asset ˜ pricing, there exists an equivalent measure Q ∼ P such that Si is a Q-local martingale [65]. Then, there exists predictable processes u and θ such that Z σi(t)u(t) + γi(t, z)θ(t, z)ν(dz) = µi(t) − r(t)Si(t). (4.88) R0

Denote the following processes normalized by the numeraire S0 1 σ˜ij(t) = σij(t), i ∈ {1, ··· ,N}, j ∈ {1, ··· , d}, S0(t) 1 γ˜ij(t, z) = γij(t, z), i ∈ {1, ··· ,N}, j ∈ {1, ··· , l}. (4.89) S0(t) 107

Similarly,σ ˜i,σ ˜,γ ˜i,σ ˜i are defined as the normalized processes of σi, σ, γi, σi respec-

tively by S0. Hence, the discounted price dynamics under the risk-neutral measure Q is given by Z ˜ Q ˜ Q dSi(t) =˜σi(t)dW (t) + γ˜i(t, z)N (dt, dz) R0 d Z l X Q X ˜ Q = σ˜ij(t)dWj (t) + γ˜ij(t, z)Nj (dt, dzj). (4.90) j=1 R0 j=1 Example Geometric L´evyprocesses for tradable risky assets d Z l ! − X X ˜ dSi(t) =Si(t ) ai(t)dt + bij(t)dWj(t) + cij(t, zj)Nj(dt, dz) , i=1 R0 j=1

Si(0) =xi > 0 (4.91) where i ∈ {1, ··· ,M} and all coefficients are predictable with cij > −1. In this model, we have

µi(t) = ai(t)Si(t), i ∈ {1, ··· ,M},

σij(t) = bij(t)Si(t), i ∈ {1, ··· ,M}, j ∈ {1, ··· , d}

γij(t, z) = cij(t)Si(t), i ∈ {1, ··· ,M}, j ∈ {1, ··· , l}. (4.92)

The solution of the SDE is given by d ! d Z T 1 X X Z T S (T ) = exp a (t) − u2(s) dt + u (s)dW (t)+ i i 2 j j j 0 j=1 j=1 0 l Z ! X  ˜  log(1 + cij(t, z))Nj(dt, dz) + (log(1 + cij(t, z)) − cij(t, z))νj(dz)dt . j=1 [0,t]×R0 (4.93) The arbitrage-free condition is given by d l X Z X ai(t) − r(t) = bij(t)uj(t) + cij(t, z)θj(t, z)νj(dz). (4.94) j=1 R0 j=1 Thus, the risk-neutral dynamics of the discounted process under the arbitrage-free condition is given by d Z l ! ˜ ˜ − X Q X ˜ Q dSi(t) =Si(t ) bij(t)dWj (t) + cij(t, z)Nj (dt, dz) . (4.95) j=1 R0 j=1 108

Self-Financing Condition

N+1 We denote the portfolio/trading strategy ϕi : [0,T ] → R as an Ft-predictable process which corresponds to the number of units the investor possess for the asset

Si at time t for all i ∈ {0, 1, ··· ,M}. The value/wealth process corresponding to the portfolio ϕ starting at x is given by

M ϕ X Vx (t) ≡ V (t) = x + ϕ0(t)S0(t) + ϕi(t)Si(t). (4.96) i=1 Assume, the process is value process is self-financing, then

M X dV (t) = ϕ0(t)dS0(t) + ϕi(t)dSi(t). (4.97) i=1

We let the discounted value process V˜ (t) given by

V (t) V˜ (t) = . (4.98) S0(t)

From Itˆo’slemma, we get

M ˜ X ˜ ˜ dV (t) = ϕi(t)dSi(t) = ϕ(t) · dS(t). (4.99) i=1 where

T ˜ ˜ ˜ T ϕ(t) = [ϕ1(t), ··· , ϕM (t)] , S(t) = [S1(t), ··· , SM (t)] (4.100)

Hence, the discounted value process is given by

M Z t ˜ X ˜ V (t) =x + ϕi(s)dSi(s) i=1 0 M Z T  Z  X Q ˜ Q =x + ϕi(t) σ˜i(s)dW (s) + γ˜ij(s, z)N (ds, dz) i=1 0 R0 M d Z t M l Z X X Q X X ˜ Q =x + ϕi(s)˜σij(s)dWj (s) + ϕi(s)˜γij(s, z)Nj (ds, dz). i=1 j=1 0 i=1 j=1 [0,t]×R0 (4.101) 109

4.3.2 Quadratic Hedging

Motivation

A market is said to be complete if it can be replicated by a self-financing portfo- lio [7]. Under the Black-Scholes model, the market is complete. However, the market modeled under a L´evyprocess is in general incomplete and thus, the equivalent mar- tingale measure Q is not unique. A natural way to find a hedging portfolio is by minimizing the expected quadratic

ϕ 2 hedging error (F − Vx (T )) for a contingent claim F ∈ L (P ) for all x ∈ R and ϕ belongs to some admissible set AP with respect to the objective measure P , that is, we minimize the functional

P P ϕ 2 P Jx,ϕ = E [|F − Vx (T )| ], x ∈ R, ϕ ∈ A . (4.102)

This represents the mean square hedging error at maturity. The solution for this problems incorporates variance optimal martingale measure QMV ∼ P [8], [39], and [71] and explicit solutions are difficult to obtain in the the presence of jumps [8], [21]. A tractable way of doing quadratic hedging is when we work on the risk-neutral martingale measure Q where the discounted asset process S˜ is a Q-martingale. First, we define the concept of an admissible portfolio under partial information H.

Definition 4.3.1 [21], [27] The predictable process ϕ(t) for t ∈ [0,T ] is an H- admissible portfolio if the following conditions are satisfied:

(i) ϕ(t) is H-caglad predictable,

 2 (ii) EQ R T ϕ(t) · dS˜(t) < ∞. 0

The set of all H-admissible portfolios is denoted by AH.

ϕ To find a quadratic hedging portfolio in Q is to take the discounted (F − Vx (T )) 2 hedging error for F ∈ L (Q) for all x ∈ R and ϕ belongs to some admissible set AH. That is, we minimize the functional

Q Q ˆ ˆ ϕ 2 Jx,ϕ = E [|F − Vx (T )| ], x ∈ R, ϕ ∈ AH. (4.103) 110

where Hˆ is the discounted claim H from maturity. Denote the set of admissible payoffs of the form

 Z T  ˜ A = V0 + ϕ(t) · dS(t): V0 ∈ R, ϕ ∈ AH . (4.104) 0

2 Then, AH is a closed subspace in L (Q). The quadratic hedging problem under Q can be stated as follows:

Q ˆ ˆ ϕ 2 ˆ 2 inf E [|F − Vx (T )| ] = inf ||F − A||L2(Q). (4.105) x∈R,ϕ∈AH A∈A

Under the assumption that S˜ is a square-intergrable Q martingale and F ∈ L2(Q) which implies Fˆ ∈ L2(Q) from the dominated convergence theorem, then Fˆ admits a Galtchouk-Kunita-Watanabe (GKW) decomposition of the form

Z T Fˆ = EQ[Fˆ] + ϕ∗(t) · dS˜(t) + N, a.s. (4.106) 0

∗ where {ϕ (t)}t∈[0,T ] is a square integrable predictable process, N is orthogonal to all ˜ Q stochastic integrals with respect to S and the martingale Nt = E [N|Ft] is strongly orthogonal to A. R T ∗ ˜ From the GKW decomposition, (4.106), the stochastic integral 0 ϕ (t) · dS(t) is the orthogonal projection of Fˆ in L2(Q) which corresponds to the hedgable com- ponent. On the other hand, N is the is the orthogonal complement of F˜ in L2(Q) which corresponds to the non-hedgable component or the residual risk. Using Malli- avin calculus, our aim is to find ϕ∗(t) by applying the Clark-Ocone reprenentation theorem. Likewise, an alternative solution in solving the quadratic hedging error in P is to choose a risk-neutral measure Q ∼ P such that S˜ is a Q-martingale. Performing the GKW decomposition would yield different hedging strategies and since orthogonality is not invariant under the change of measure so does the the residual risk which is not desirable. However, by employing a F¨ollmer-Schweizer [30] minimal maringale mea- sure QFS ∼ P and perfroming a GKW decomposition, the residual risk N preserves its its orthogonality under P . 111

Mean Variance Hedging Under the Martingale Measure

Given the contingent claim F ∈ L2(P ) ∩ L2(Q), we want to find the hedging port- ˜ ˜ ϕ folio ϕ ∈ AH that minimizes the discounted residual error (F − Vx (T )) at maturity in the mean-square sense under risk-neutral measure. That is, we want to minimize

Q the functional Jx,ϕ in (4.103).

Theorem 4.3.1 Quadratic hedging under the martingale measure Suppose that F ∈ L2(P ) ∩ L2(Q) has a Clark-Ocone representation of the form

N Z T N Z Q X Q X ˜ Q F = E [F ] + βj(t)dWj (t) + ξj(t, z)Nj (dt, dz) (4.107) j=1 0 j=1 [0,T ]×R0 for some predictable process {βj}1≤j≤N and {ξj}1≤j≤N , then the mean variance hedging portfolio problem in (4.103) under a martingale measure Q with discounted risky-asset dynamics given by (4.90) with partial information is given by ϕ(t) ∈ RN which is a solution of the linear equation

Q(t)ϕ(t) = D(t, T )R(t) where

 Q M×M  Q M Q(t) = E [Nik(t)| Ht− ] ∈ R ,R(t) = E [Mi(t)| Ht− ] ∈ R , (4.108)

N N X X Z Mk(t) = σkj(t)βj(t) + γkj(t, z)ξj(t, z)νj(dz), j=1 j=1 R0 N N X X Z Nik(t) = σij(t)σkj(t) + γij(t, z)γkj(t, z)νj(dz). (4.109) j=1 j=1 R0

Proof From the GKW decomposition, it suffice to show that the residual (Fˆ − ˆ ϕ 2 Vx (T )) is orthogonal to all G ∈ L (Q), that is,

Q ˆ ˆ ϕ E [(F − Vx (T ))G] = 0 (4.110) 112

where FT -measurable of the form

M Z T X ˜ G = ψi(t)dSi(t) (4.111) i=1 0

and ψ ∈ AH. Then, from (4.90), we obtain

M N Z T M l Z X X Q X X ˜ Q G = ψi(t)˜σij(t)dWj (t) + ψi(t)γ ˜ij(t, z)Nj (dt, dz) i=1 j=1 0 i=1 j=1 [0,T ]×R0 d Z T N Z X Q X ˜ Q = Uj(t)dWj (t) + Vj(t, z)Nj (dt, dz). (4.112) j=1 0 j=1 [0,T ]×R0 where

N X Uj(t) = ψi(t)˜σij(t), (4.113) i=1 N X Vj(t, z) = ψi(t)˜γij(t, z). (4.114) i=1 From the Clark-Ocone representation of F ∈ L2(P ) ∩ L2(Q) in (4.107) since r is deterministic, then its Fˆ can be represented as

N Z T N Z ˆ Q ˆ X Q X ˜ Q F =E [F ] + D(0,T )βj(t)dWj (t) + D(0,T )ξj(t, z)Nj (dt, dz). j=1 0 j=1 [0,T ]×R0 (4.115)

From dominated convergence theorem, the discounted process Fˆ ∈ L2(P ) ∩ L2(Q). Likewise, it can be shown that the optimal initial capital x under the mean-variance hedging [21] is given by

x = EQ[F ]. (4.116)

Hence, form (4.101), we have

M N Z T ˆ Q ˆ X X Q V (T ) =E [F ] + ϕi(t)˜σij(t)dWj (t) i=1 j=1 0 M N Z X X ˜ Q + ϕi(t)˜γij(t, z)Nj (dt, dz). (4.117) i=1 j=1 [0,T ]×R0 113

Then N Z T N Z ˆ ˆ X Q X ˜ Q F − V (T ) = Aj(t)dWj (t) + Bj(t, z)Nj (dt, dzj) (4.118) j=1 0 j=1 [0,T ]×R0 where

M X Aj(t) =βj(t)D(0,T ) − ϕi(t)˜σij(t), i=1 M X Bj(t, z) =ξj(t, z)D(0,T ) − ϕi(t)˜γij(t, z). (4.119) i=1 Hence, the expression in (4.110) becomes

0 =EQ[(Fˆ − Vˆ )G] " N Z T N Z # Q X X =E Aj(t)Uj(t)dt + Bj(t)Vj(t)νj(dz)dt j=1 0 j=1 [0,T ]×R0 " N Z T M ! M Q X X X =E βj(t)D(0,T ) − ϕi(t)˜σij(t) ψi(t)˜σij(t)dt+ j=1 0 i=1 i=1 N M ! M # X Z X X ξj(t, z)D(0,T ) − ϕi(t)˜γij(t, z) ψi(t)˜γij(t, z)νj(dz)dt j=1 [0,T ]×R0 i=1 i=1 " N Z T # Q X E ψk(t)lk(t)dt (4.120) k=1 0 where

N M ! X X lk(t) = σ˜kj(t) βj(t)D(0,T ) − ϕi(t)˜σij(t) + j=1 i=1 N M ! X Z X + γ˜kj(t, z) ξj(t, z)D(0,T ) − ϕi(t)˜γij(t, z) νj(dz)dt j=1 R0 i=1 N N ! X X Z = σ˜kj(t)βj(t) + γ˜kj(t, z)ξj(t, z)νj(dz) D(0,T ) j=1 j=1 R0 M N N ! X X X Z − ϕi(t) σ˜ij(t)˜σkj(t) + γ˜ij(t, z)˜γkj(t, z)νj(dz) i=1 j=1 j=1 R0 M X =mk(t) − ϕi(t)nik(t), (4.121) i=1 114

N N ! X X Z mk(t) = σ˜kj(t)βj(t) + γ˜kj(t, z)ξj(t, z)νj(dz) D(0,T ) (4.122) j=1 j=1 R0 N N X X Z nik(t) = σ˜ij(t)˜σkj(t) + γ˜ij(t, z)˜γkj(t, z)νj(dz). (4.123) j=1 j=1 R0

Then, (4.120) holds for all ψ ∈ AH if and only if

Q E [Lk(t)| Ht− ] = 0. (4.124)

Since Ht ⊂ Ft and ϕ ∈ AH, and removing the tildes in {σ˜ij} and {γ˜ij} following system of linear equation for k ∈ {1, ··· ,M} M X Q Q E [Nik(t)| Ht− ] ϕi(t) = D(t, T )E [Mk(t)| Ht− ] (4.125) i=1 where N N X X Z Mk(t) = σkj(t)βj(t) + γkj(t, z)ξj(t, z)νj(dz), j=1 j=1 R0 N N X X Z Nik(t) = σij(t)σkj(t) + γij(t, z)γkj(t, z)νj(dz). (4.126) j=1 j=1 R0 Then, ϕ(t) can be solved as a linear equation of the form

Q(t)ϕ(t) = D(t, T )R(t) (4.127) where ϕ(t) ∈ RM and M×M Q Q(t) = {Qik(t)} ∈ R , Qik(t) = E [Nik(t)| Ht− ], M Q R(t) = {Ri(t)} ∈ R , Ri(t) = E [Mi(t)| Ht− ] .

These spacial cases would yield some simplifications in the computation of the parameters.

(i) Drift parameters uij(t), θij(t, z) are deterministic. From Corollary 4.2.3, we obtain

Q βj(t) = σjE [Dj,t,0F |Ft− ],

Q ξj(t, z) = zE [Dj,t,zF |Ft− ], (4.128)

and ϕ(t) can be solved using Theorem 4.3.1. 115

(ii) Univariate case (single tradable risky asset)

Q hPN PN R i E σ1j(t)βj(t) + γ1j(t, z)ξj(t, z)νj(dz) Ht− j=1 j=1 R0 ϕ(t) = D(t, T ) h i Q PN 2 PN R 2 E σ (t) + γ (t, z)νj(dz) Ht− j=1 1j j=1 R0 1j (4.129)

(iii) Univariate, and the coefficients σij(t), γij(t, z) are H-predictable PN Q PN R Q σ1j(t)E [βj(t)|Ht− ] + γ1j(t, z)E [ξj(t, z)|Ht− ] νj(dz) ϕ(t) = D(t, T ) j=1 j=1 R0 PN 2 PN R 2 σ (t) + γ (t, z)νj(dz) j=1 1j j=1 R0 1j (4.130)

(iv) Univariate, and the coefficients σij, γij are F-predictable (full model) Pd Pl R σ1j(t)βj(t) + γ1j(t, z)ξj(t, z)νj(dz) ϕ(t) = D(t, T ) j=1 j=1 R0 (4.131) Pd 2 Pl R 2 σ (t) + γ (t, z)νj(dz) j=1 1j j=1 R0 1j

4.3.3 Geometric L´evy Processes

We will discuss some characterization of quadratic hedging for a geometric L´evy process. We consider the P dynamics of a geometric L´evy process is given as

dS0(t) =r(t)S0(t)dt, S(0) = 1  Z  ˜ dS1(t) =S1(t) a(t)dt + b(t)dW (t) + c(t, x)N(dt, dx) ,S1(0) = x1 > 0. R0 (4.132) We will discuss hedging strategies for the change of measure where the drift param- eters are deterministic. Then, we will discuss quadratic hedging under minimum martingale measure.

Deterministic Coefficients

We will assume that the coefficients a(t), r(t), b(t), and c(t, z) > −1 are deter- ministic such that Z T  Z  |a(t)| + |r(t)| + b2(t) + c2(t, z)ν(dz) dt < ∞. (4.133) 0 R0 116

For an arbitrage-free portfolio, we require the drift parameters u(t) and θ(t, z) to satisfy the following: Z a(t) − r(t) = b(t)u(t) + c(t, z)θ(t, z)ν(dx). (4.134) R0 Then, under the risk-neutral dynamics, the discounted price process is given by  Z  ˜ ˜ − Q ˜ Q dS1(t) = S1(t ) b(t)dW (t) + c(t, z)N (dt, dz) . (4.135) R0 If u(t) and θ(t, z) are deterministic, then, we obtain the following mean-variance hedging portfolio for F ∈ L2(P ) under a full model as follows:

Q R Q b(t)σE [Dt,0F |Ft− ] + c(t, z)zE [Dt,zF |Ft− ] ν(dz) R0 ϕ(t) =D(t, T )   . (4.136) 2 R 2 S1(t) b (t) + c (t, z)ν(dz) R0

Models where the drift coefficients are deterministic include the Merton model [59] which we will discuss in the next section and the minimum measure under the exp- L´evymodel [33].

It can be shown [10], if we have the additional condition Z c4(t, z) + | log(1 + c(t, z))|2 ν(dz) < ∞ (4.137) R0

1,2 then S1(T ) ∈ D and S (T )b(t) S (T )c(t, z) D S (T ) = 1 1 + 1 1 . (4.138) t,z 1 σ {z=0} z {z6=0}

1 Furthermore, consider the contingent claim F = Φ(S1(T )), where Φ ∈ C (R+, R) such that

2 (i) Φ(S1(T )) ∈ L (P ),

0 0 S1(T )b(t) 2 (ii) Φ (S1(T ))Dt,0Φ(S(T )) = Φ (S1(T )) σ ∈ L (P × λ),

Φ(S1(T )+zDt,zS1(T ))−Φ(S1(T )) Φ(S1(T )(1+c(t,z))−Φ(S1(T )) 2 2 (iii) z = z . ∈ L (P × z ν(dz)dt), z 6= 0 117

Then, from the chain rule, F ∈ D1,2 and

0 S (T )b(t) Φ(S (T )(1 + c(t, z))) − Φ(S (T )) D F = Φ (S (T )) 1 1 + 1 1 1 . t,z 1 σ {z=0} z {z6=0} (4.139) Hence, the mean-variance hedging portfolio under a deterministic drift coefficients becomes

1 h 2 Q h 0 i ϕ(t) =D(t, T )   b (t)E Φ (S1(T ))S1(T )|Ft− 2 R 2 S1(t) b (t) + c (t, z)ν(dz) R0 Z  Q + c(t, z)E [Φ(S1(T )(1 + c(t, z))) − Φ(S1(T ))|Ft− ] ν(dz) . (4.140) R0

Merton Model

We examine the Merton model which is the first jump-diffusion model in option pricing [59], [21]. The P dynamics of this model is given as

S1(t) = S0 exp (µdt + σdW (t) + dJ(t)) (4.141) where µ is the rate of return, b is the diffusion are assumed to be constant. The jump process J(t) is a of the form

N(t) X J(t) = Yi (4.142) i=1

iid 2 where N(t) ∼ P oisson(λt), Yi ∼ N(m, δ ). Then, J(t) is a pure jump L´evyprocess with L´evymeasure ν(dz) = λF (dz) where F is the distribution function of Y1. Then J(t) can be represented in terms of Poisson random measure as follows: Z J(t) = zN(ds, dz) [0,t]×R0 Z   = z N˜(ds, dz) + ν(dz)ds . (4.143) [0,t]×R0 We assume all of these processes are mutually independent. Then, we can write the risky-asset process process as follows:  Z  Z  ˜ S1(t) = S0 exp µ + zν(dz) t + σdW (t) + zN(ds, dz) . (4.144) R0 [0,t]×R0 118

Then the Merton model is a special case of geometric L´evyprocess with deterministic coefficients with

b(t) = σ, c(t, z) = ez − 1. (4.145)

From Itˆo’slemma,  σ2 Z  Z  dS (t) = S (t−) µ + + (ez − 1)ν(dz) dt + σdW (t) + (ez − 1)N˜(dt, dz) . 1 1 2 R0 R0 (4.146)

Under the change of measure Q ∼ P , we have the following Brownian motions under the Q measure:

dW Q(t) =dW (t) + u(t)dt, (4.147)

N˜ Q(dt, dz) =N˜(dt, dz) + θ(t, z)ν(dz)dt (4.148)

For an arbitrage-free condition, we have the following constraint: σ2 Z Z µ + + (ez − 1)ν(dz) − r(t) = σu(t) + (ez − 1)θ(t, z)ν(dz). (4.149) 2 R0 R0 Under the risk-neutral measure Q, Merton proposed changing the drift term of the Wiener process but leaving the jump part unchanged. The economic justification of this proposal is that the ”jump risk” can be diversified and there is no market risk premium attached to it. Then, we have the following drift coefficients: µ + σ2 + R (ez − 1)ν(dz) − r(t) u(t) = 2 R0 , θ(t, z) = 0. (4.150) σ

From the distribution of Y1, we have the following: Z Z (ez − 1)ν(dz) =λ (ez − 1)F (dz) R0 R0 =λ (E[Y (·)] − 1)   δ2   =λ exp m + − 1 . (4.151) 2 Then, we have the following dynamics under the Q measure  Z  − Q z ˜ Q dS1(t) = S1(t ) r(t)dt + σdW (t) + (e − 1)N (dt, dz) . (4.152) R0 119

We claim that (4.137) is satisfied. From the second moment and the moment generating function of a normal distribution, we see that Z c4(t, z) + | log(1 + c(t, z))|2 ν(dz) R0 Z =λ (e2 − 1)4 + z2 F (dz) R0 =λ E (exp(Y (·)) − 1)4 + E Y (·)2 < ∞ (4.153)

1,2 and thus S1(t) ∈ D . The mean-variance hedging portfolio is given by

2 Q R z Q σ E [Dt,0F |Ft− ] + (e − 1)zE [Dt,zF |Ft− ] ν(dz) R0 ϕ(t) = D(t, T )   . (4.154) 2 R z 2 S1(t) σ + (e − 1) ν(dz) R0 From the moment generating function of a normal distribution, we have the following integral: Z Z (ez − 1)2ν(dz) =λ (e2z − 2ez + 1)F (dz) R0 R0 =λE [exp(2Y (·)) − exp(Y (·)) + 1]  δ2  =λ exp(2m + 2δ2) − 2 exp(m + ) + 1 . (4.155) 2

Exponential L´evyProcess

A special case of the geometric L´evyprocesses if the risky asset price is modeled as an exponential L´evy(exp-L´evy)model [21] given by the following Q measure

S1(t) = S0 exp(rt + L(t)) (4.156) where L(t) is a L´evyprocess with characteristic triplet (a, σ2, νQ). Thus, from the L´evy-Itˆodecomposition theorem, Z Z dL(t) =adt + σdW Q(t) + zN Q(dt, dz) + zN˜ Q(dt, dz) |z|≥1 |z|<1 Z =bdt + σW (t) + xN˜ Q(ds, dx). (4.157) R0 120 where Z b = a + zνQ(dz). (4.158) |z|≥1 and a, r, σ are constants. Then its normalized asset process is given as

˜ S1(t) = S0 exp(L(t)). (4.159)

From Itˆo’slemma, we obtain

˜ dS1(t)  σ2  Z =S˜ (t−) b + dt + σdW Q(t) + (ez − 1)N˜ Q(dt, dz) 1 2 R0 Z  + (ez − z − 1)νQ(dz)dt R0  σ2  Z =S˜ (t−) a + dt + σdW Q(t) + (ez − 1)N˜ Q(dt, dz) 1 2 R0 Z  z  Q + e − z1{z<1} − 1 ν (dz)dt . (4.160) R0 Moreover, under the following restrictions [21]: Z ezνQ(dz) < ∞, (4.161) |z|≥1  σ2  Z a + + ez − z1 − 1 νQ(dz) = 0. (4.162) 2 {z<1} R0 ˜ Q S1(t) is a square-integrable Q-martingale and E [exp(L(T ))] = 1. Then under the martingale measure Q, we have  Z  ˜ ˜ − Q z ˜ Q dS1(t) = S1(t ) σdW (t) + (e − 1)N (dt, dz) . (4.163) R0 Then, the exp-Lˆevymodel is the geometric L´evyprocess with

b(t) = σ, c(t, z) = ez − 1. (4.164)

Since we have already started using the martingale measure Q, then we can set the drift coefficients to zero then we have the following mean-variance hedging portfolio 2 Q R z Q σ E [Dt,0F |Ft− ] + (e − 1)zE [Dt,zF |Ft− ] ν(dz) R0 ϕ(t) = D(t, T )   . (4.165) 2 R z 2 S1(t) σ + (e − 1) ν(dz) R0 121

Furthermore, if we have the additional condition Z (ez − 1)4 + z2 ν(dz) < ∞ (4.166) R0 then S(T ) ∈ D1,2 and S (T )(ez − 1) D S (T ) = S (T )1 + 1 1 . (4.167) t,z 1 1 {z=0} z {z6=0}

1 For the contingent claim F ∈ Φ(S(T )), where Φ ∈ C (R+, R) and suppose that the assumptions of the chain rule holds, then

z 0 Φ(S) (T )e ) − Φ(S (T )) D F = Φ (S (T ))S (T )1 + 1 1 1 . (4.168) t,z 1 1 {z=0} z {z6=0} Hence, the mean-variance hedging portfolio is given by

1 h 2 Q h 0 − i ϕ(t) =D(t, T )   σ E Φ (S1(T ))S1(T )|S1(t ) 2 R z 2 S1(t) σ + (e − 1) ν(dz) R0 Z  z Q z + (e − 1)E [Φ(S1(T )e ) − Φ(S1(T ))|Ft− ] ν(dz) . (4.169) R0 Example [10] Consider the European call option with strike price K

+ Φ(S1(T )) = (S1(T ) − K) (4.170)

Then, from the chain rule under the geometric L´evyprocess, S (T )b(t) D F = 1 1 1 t,z σ {S1(T )>K} {z=0} (S (T )(1 + c(t, z)) − K)+ − (S (T ) − K)+ + 1 1 1 . (4.171) z {z6=0} On the other hand, for the exp-L´evymodel we have the following: (S (T )ez − K)+ − (S (T ) − K)+ D F = S(T )1 1 + 1 1 1 . (4.172) t,z {S1(T )>K} {z=0} z {z6=0}

4.3.4 Minimal Martingale Measure

We examine the mean-variance hedging under the minimal martingale measure (MMM). In addition, we shall present results with the Barndorff-Nielsen and Shepard (BNS) stochastic volatility model in light of results of Arai and Suzuki [11]. 122

(i) Deterministic Coefficients Consider the asset dynamics (4.132) in Section (4.3.3) with deterministic co- efficients under the objective P -measure satisfying the integrability condition

(4.133). Then S1 is a special semi-martingale [69] with a unique canonical decomposition

S1(t) = S1(0) + A(t) + M(t) (4.173)

where M(t) is a local martingale with M(0) = 0 and A(t) is a finite variation process with A(0) = 0. Hence  Z  − ˜ dM(t) =S1(t ) b(t)dW (t) + c(t, x)N(dt, dz) , (4.174) R0 − dA(t) =S1(t )a(t)dt. (4.175)

The minimal martingale measure Q is given by the following Radon-Nikodym derivative [75]  Z  dQ 2 Z(t) = = E − Λ(s)dM(s) ∈ L (P ) (4.176) dP Ft [0,t] that is, Z(t) satisfies the SDE

dZ(t) = −Z(t−)Λ(t)dM(t),Z(0) = 1 (4.177)

for some predictable process Λ(t). If we let

− − u(t) = Λ(t)b(t)S1(t ), θ(t, z) = Λ(t)c(t, z)S1(t ) (4.178)

then Z Λ(t)dM(t) = u(t)dW (t) + zN˜(dt, dz). (4.179) [0,t]×R0 Assume that Z satisfies the Novikov-type conditions, then from the Girsanov theorem for L´evyprocesses, and W Q and N˜ Q is a Brownian motion and com- pensated Poisson random measure under Q respectively where

dW Q(t) =dW (t) + u(t)dt, (4.180)

N˜ Q(dt, dz) =N˜(dt, dz) + θ(t, z)ν(dz)dt. (4.181) 123

From the arbitrage-free condition, we have the following, Z − − a(t) − r(t) = b(t) · Λ(t)b(t)S1(t ) + c(t, z) · Λ(t)c(t, z)S1(t )ν(dz). (4.182) R0 Solving for Λ(t) yields a(t) − r(t) Λ(t) =  . (4.183) − 2 R 2 S1(t ) b (t) + c (t, z)ν(dz) R0 Thus, the drift parameters are given as follows: (a(t) − r(t))b(t) (a(t) − r(t))c(t, z) u(t) =  , θ(t, z) =  . b2(t) + R c2(t, z)ν(dz) b2(t) + R c2(t, z)ν(dz) R0 R0 (4.184)

which are deterministic. Thus, the mean-variance hedging portfolio is given by (4.136).

(ii) Barndorff-Nielsen and Shephard Model The Barndorff-Nielsen and Shephard (BNS) model is a stochastic volatility model driven by a positive L´evyOrstein-Uhlenbeck (OU) process. The risky asset under the historical P dynamics is given by the following model [12], [21]

S1(t) =S0 exp(X(t)), dX(t) =(µ + βv(t))dt + pv(t)dW (t) + ρdL(λt),

dv(t) = − λv(t)dt + L(λt), v(0) > 0 (4.185)

where L(t) is the driving L´evyprocess which is a subordinator (an increasing L´evyprocess almost surely) without drift with L´evymeasure νZ , βv(t) is the volatility risk premium, ρdZ(t) is the leverage effect, ρ ≤ 0, and λ > 0 is the mean-reversion parameter.

We let J(t) = L(λt) be the jump process with Poisson random measure N(dt, dz), and L´evymeasure ν(dz). Then ν(dz) = λνL(dz) and Z J(t) = L(λt) = xN(ds, dx) (4.186) [0,t]×R+ 124 from Itˆo’slemma, we obtain  Z  − p ρx ˜ dS1(t) =S1(t ) a(t)dt + v(t)dW (t) + (e − 1)N(dt, dz) (4.187) R+ where

 1 Z ∞ a(t) = µ + β + v(t) + (eρz − 1)ν(dz). (4.188) 2 0 The difference in the preceding case is that the coefficient σ(t) is random. If we set β = −1/2, then

Z ∞ a(t) = µ + (eρz − 1)ν(dz). (4.189) 0 which is deterministic. For now, we will be dealing the case β = −1/2 since the boundedness of the drift parameters u(t) and θ(t, z) no longer holds [9], [11]. In addition, the solution for the L´evyOU process is given by Z v(t) =v(0)e−λt + eλ(s−t)zN(ds, dz). (4.190) [0,t]×R+

Assume that S1 is a special semi-martingale then repeating the same calcula- tions in the preceding case, we obtain the MMM drift parameters. Solving for Λ(t) yields

a(t) − r(t) Λ(t) = . (4.191) − R ∞ ρz 2  S1(t ) v(t) + 0 (e − 1) ν(dz) Thus, the drift parameters are given as follows:

(a(t) − r(t))b(t) (a(t) − r(t))(eρz − 1) u(t) = , θ(t, z) = . R ∞ ρz 2  R ∞ ρz 2  v(t) + 0 (e − 1) ν(dz) v(t) + 0 (e − 1) ν(dz) (4.192) which is random.

Then, we have the following Malliavin derivatives [11] for t ≤ s,

−λ(s−t) Dt,zv(s) = e 1{z>0}, (4.193) 125

pv(s) + ze−λ(s−t) − pv(s) D pv(s) = 1 , (4.194) t,z z {z>0} f (pv(s) + zD pv(s)) − f (pv(s)) D u(s) = u t,z u 1 t,z z {z>0} f (pσ2(s) + ze−λ(s−t)) − f (pv(s)) = u u 1 , (4.195) z {z>0} f (pv(s) + zD pv(s)) D θ(s, x) = θ t,z 1 t,z z {z>0} f (pv(s) + ze−λ(s−t)) − f (pv(s)) = u u 1 , (4.196) z {z>0} where (a(t) − r(t))y f (y) = , y ∈ (4.197) u 2 R ∞ ρz 2 R y + 0 (e − 1) ν(dz) and (a(t) − r(t)) f (y) = , y ∈ . (4.198) θ 2 R ∞ ρz 2 R y + 0 (e − 1) ν(dz) Hence, the mean-variance hedging portfolio is given by σpv(t)β(t) + R ∞(ez − 1)zξ(t, z)ν(dz) ϕ(t) = D(t, T ) 0 . (4.199) − R ∞ z 2  S1(t ) v(t) + 0 (e − 1) ν(dz) The from the Clark-Ocone theorem under the change of measure, β(t) and ξ(t, z) are determined as follows:

Q β(t) =σE [Dt,0F − FK(t)|Ft− ], (4.200)

Q ξ(t, z) =E [F (H(t, z) − 1) + zH(t, z)Dt,zF |Ft− ] (4.201) where Z T Z D θ(s, x) K(t) = D u(s)dW Q(s) + t,0 N˜ Q(ds, dx) = 0, (4.202) t,0 1 − θ(s, x) t [t,T ]×R+  Z T Z T Q 1 2 H(t, z) = exp − zDt,zu(s)dW (s) − (zDt,zu(s)) ds t 2 t Z  zD θ(s, x) + log 1 − t,z N˜ Q(ds, dx) 1 − θ(s, x) [t,T ]×R+ Z   zD θ(s, x)   + log 1 − t,z (1 − θ(s, x)) + zD θ(s, x) ν(dx)ds . 1 − θ(s, x) t,z [t,T ]×R+ (4.203) 126

4.3.5 The Bates Model

Preliminaries

The Bates model is an extension of the Heston stochastic volatility [42] by adding a proportional log-normal jumps in the risky asset process [13], [21]. Its P dynamics

−  p  dS(t) = S(t ) µdt + v(t)dB(t) + dJ(t) ,S(0) = S0 > 0, p dv(t) = κ(θ − v(t))dt + β v(t)dW (t), v(0) = v0 > 0 (4.204)

where B and W are correlated Brownian motion with d[B,W ] = ρdt and µ is the rate of return, κ is the mean-reversion rate, θ is the long-run variance, β is the volatility of the volatility (vol-vol). The jump process J(t) is a Compound Poisson Process of the form

N(t) X J(t) = Yi (4.205) i=1 where N(t) ∼ P oisson(λt),

 1  log(1 + Y ) iid∼ N log(1 + k¯) − δ2, δ2 , (4.206) i 2

and N(t), and Yi’s are mutually independent. Then, J(t) is a pure jump L´evyprocess

with L´evymeasure ν(dz) = λF (dz) where F is the distribution function of Y1. Then J(t) can be represented in terms of Poisson random measure as follows: Z J(t) = zN(ds, dz) [0,t]×R0 Z   = z N˜(ds, dz) + ν(dz)ds . (4.207) [0,t]×R0 The variance process v(t) is a mean-reverting square root Cox-Ingersoll-Ross (CIR) model. We shall denote this process by CIR(κ, θ, β). We assume that all parameters are constant and it satisfies Feller’s condition

2κθ > β2. (4.208) 127

The Feller’s condition [22] assures that v(t) > 0, a.s. Hence, the P dynamics of the Bates model can be written as follows:  Z  − p p 2 ˜ dS(t) = S(t ) αdt + v(t)(ρdW1(t) + 1 − ρ dW2(t)) + zN3(dt, dz) R0 p dv(t) = κ(θ − v(t))dt + β v(t)dW1(t). (4.209)

˜ where W1 and W2 are Wiener processes and N3 is a compensated Poisson random process and are all mutually independent and Z α = µ + zν(dz). (4.210) R0

Characterization of the CIR and 3/2 model

We need to characterize some of the important properties of the CIR model and its reciprocal, which is the 3/2 model which is needed in deriving the mean-variance hedging portfolio. From the CIR model of the variance process v(t), the conditional distribution of v(t) given v(s) [22] is given by

2 −κ(t−s) β (1 − e ) 0 v(t)|v(s) ∼ χ 2(λ) (4.211) 4κ ν

02 where χν (λ) is non-central chi-square distribution with ν degrees of freedom and non-centrality parameter λ where

4κθ 4κe−κ(t−s) ν = , λ = v(s). (4.212) β2 σ2(1 − e−κ(t−s))

02 The pdf of X ∼ χν (λ) is given by

1 xν/4−1 √ f (x) = e−(x+λ)/2 I ( λx)1 X 2 λ ν/2−1 {x≥0} 1  ν λx = e−(λ+x)/2xν/2−1 F :, , 1 (4.213) ν/2 ν  0 1 2 4 {x≥0} 2 Γ 2 where

∞ X 1 z 2n+α I (z) = (4.214) α n!Γ(n + α + 1) 2 α=0 128

is the modified Bessel function of the first kind,

∞ X zn F (; , b, z) = (4.215) 0 1 (b) n! n=0 n is the confluent hypergeometric limit function, and

Γ(n + α) (a) = , n ∈ (4.216) n Γ(a) N0

is the Pochhammer symbol [3]. The moments of X are given as

Γ α + ν   ν ν λx E[Xα] = 2αe−λ/2 2 F α + , , (4.217) ν  1 1 2 2 4 Γ 2 where

∞ n X (a)nz F (a, b, z) = (4.218) 1 1 (b) n! n=0 n is the confluent hypergeometric function. Hence, its conditional moments are given by

σ2(1 − e−κ(t−s))α  2κe−κ(t−s)  E[v(t)α|v(s)] = exp − v(s) · 2κe−κ(t−s) σ2(1 − e−κ(t−s))  2κθ  Γ α + β2  2κθ 2κθ λx 1F1 α + , , . (4.219)  2κθ  β2 β2 4 Γ β2

The 3/2 stochastic volatility model [43] is given by

¯ ¯ 3/2 dv¯(t) =κv ¯ (t)(θ − v(t))dt + βv¯ (t)dW (t), v¯(0) =v ¯0 > 0. (4.220)

This model is a non-affine stochastic volatility model with a 3/2 power law and nonlinear reversion rateκv ¯ (t) [28], [37]. The Fourier-Laplace transform of the log spot price and the integrated variance ofv ¯ is given in closed form [19], [40]. Hence, the Laplace integrated variance ofv ¯(s) can be derived [52] and is given as follows:

  Z T  Γ(γ − α)  2 α  −2  E exp −z v¯(s)ds = y(0, v ) F α, γ, (4.221) ¯ 0 1 1 ¯2 0 Γ(α) β β y(0, v¯0) 129

where z ∈ C and s 1 κ¯  1 κ¯ 2 2z α = + + + + , (4.222) 2 β¯2 2 β¯2 β¯2 s 1 κ¯ 2 2z γ =1 + 2 + + , (4.223) 2 β¯2 β¯2 Z T Z u  y(t, v) =v exp κ¯θ¯(s)ds du. (4.224) 0 t Since we assume θ(s) = θ¯ is a constant, then

v¯ exp(¯κθT¯ ) − 1 y(0, v¯ ) = 0 . (4.225) 0 κ¯θ¯ Since the square-root term in α and γ should be non-negative, then the range of z is given as follows:

β¯2 + 2¯κ2 z ≥ − √ . (4.226) 2 2β¯ The relationship between the and the 3/2 model is as follows. We letv ¯(t) = 1/v(t), that is,v ¯(t) reciprocal Heston process. From Itˆo’slemma, we obtain

2 2 3/2 dv¯(t) = κv¯(t) − (κθ − β )¯v dt + βv¯ (t)dW (t), v¯(0) = 1/v0 > 0. (4.227)

Hence,v ¯(t) is a 3/2 model [40], [43] with the following parameters κ κ¯ = κθ − β2, θ¯ = , β¯ = −β. (4.228) κθ − β2 Its conditional moments which can be obtained by computing the negative moments of the non-central chi-square distribution

σ2(1 − e−κ(t−s))−α  2κe−κ(t−s) 1  E[¯v(t)α|v¯(s)] = exp − · 2κe−κ(t−s) σ2(1 − e−κ(t−s)) y(s)  2κθ  Γ −α + β2  2κθ 2κθ λx 1F1 −α + , , . (4.229)  2κθ  β2 β2 4 Γ β2 This conditional moment is finite if 2κθ α < . (4.230) β2 130

Conversely, taking the dynamics of reciprocal ofv ¯(t), From Itˆo’slemma, we obtain [28]

 1  κ¯ + β¯2 1   1 1/2 d =κ ¯θ¯ − dt − β¯ dW (t) (4.231) v¯(t) κ¯θ¯ v¯(t) v¯(t)

1 ¯ κ¯+β¯2 ¯ that is, v¯(t) is a CIR(¯κθ, κ¯θ¯ , −β) process. Applying Feller’s condition for 1/vˆ(t) > 0 which implies the non-explosion ofv ¯(t) [28] gives us

β¯2 κ¯ ≥ − . (4.232) 2

Change of Measure

Under the change of measure Q ∼ P , we have the following Brownian motions under the Q measure:

Q dWi (t) = dWi(t) + ui(t)dt, i ∈ {1, 2}, (4.233) ˜ Q ˜ Ni (dt, dz) = Ni(dt, dz) + θ(t, z)ν(dz)dt, i ∈ {3}, (4.234)

For an arbitrage-free condition, we have the following constraints: Z Z p p 2p µ + zν(dz) − r(t) = ρ v(t)u1(t) + 1 − ρ v(t)u2(t) + zθ3(t, z)ν(dz). R0 R0 (4.235)

In this case, we have an incomplete model. We can choose the drift parameters such that the P and Q dynamics of v(t) are the same and in the same spirit as the Merton model, where there is no drift in the jump term. In addition, Z Z ¯ zν(dz) = λ zF (dz) = E[Y1] = λk. (4.236) R0 R0 Hence, we have the following drift terms:

µ + λk¯ − r(t) u1(t) = 0, u2(t) = , θ(t, z) = 0. (4.237) p(1 − ρ2)v(t) 131

Then, we have the following dynamics under the Q measure  Z  − p Q p 2 Q ˜ Q dS(t) =S(t ) r(t)dt + v(t)S(ρdW1 (t) + 1 − ρ dW2 (t)) + zN3 (dt, dz) , R0

S(0) =S0 > 0,

p Q dv(t) =κ(θ − v(t))dt + β v(t)dW1 (s),

v(0) =v0 > 0. (4.238)

From Itˆo’slemma, we get

S(T ) Z T   Z T Z T v(t) p Q p 2 p Q =S0 exp r(t) − ds + ρ v(t)dW1 (s) + 1 − ρ v(t)dW2 (t) 0 2 0 0 Z Z  ˜ Q + (log(1 + z) − z)ν(dz)dt + log(1 + z)N3 (dt, dz) . (4.239) [0,T ]×R0 [0,T ]×R0 We denote the normalized log stock process as follows:  S(t)  Y (t) = log . (4.240) S(0) Then, we have Z T   Z T Z T v(t) p Q p 2 p Q Y (T ) = r(t) − ds + ρ v(t)dW1 (s) + 1 − ρ v(t)dW2 (t) 0 2 0 0 Z Z ˜ Q + (log(1 + z) − z)ν(dz)dt + log(1 + z)N3 (dt, dz). [0,T ]×R0 [0,T ]×R0 (4.241)

Under the Heston model, Y (T ) has the same form as (4.241) except for the absence of the last two terms. It was shown that in the Heston case, Y (T ) ∈ D1,2 and by chain rule, S(T ) ∈ D1,2 [29]. The final two terms of (4.241) consist of a deterministic term and from (4.206), a compensated Poisson random process with normally distributed

jumps. Hence, under the Bates model, Y (T ) ∈ D1,2 and by chain rule, S(T ) ∈ D1,2. Sincev ¯(t) = 1/v(t) is a 3/2 model and from the Laplace transform of the integrated variance ofv ¯(t), we obtain the Novikov condition   Z T    Z T  1 2 E exp u2(t)dt = E exp −λ v¯(t)dt . (4.242) 2 0 0 132

where the expression at the right is given by (4.221) which finite under certain con- ditions where

(µ + λk¯ − r(t))2 z = − (4.243) 1 − ρ2 and the 3/2 model parameters is given by (4.228). Under the restriction for z in (4.226), we get

β¯2 + 2¯κ2 2κθ − β2 2 z ≥ − √ = − √ . (4.244) 2 2β¯ 2 2β

From (4.243) and (4.244), we obtain the following restriction for ρ2 which will satisfy the Novikov condition:

2(µ + λk¯ − r(t))β 2 ρ2 ≤ 1 − . (4.245) 2κθ − β2

Mean-Variance Hedging

We assume to have a full model, then the mean-variance portfolio is given as

p p 2p R ρ v(t)S(t)β1(t) + 1 − ρ v(t)S(t)β2(t) + zS(t)ξ3(t, z)ν(dz) ϕ(t) =D(t, T ) R0  2  2 ρpv(t)S(t) + p1 − ρ2pv(t)S(t) + + R (zS(t))2 ν(dz) R0 p p 2p R D(t, T ) ρ v(t)β1(t) + 1 − ρ v(t)β2(t) + zξ3(t, z)ν(dz) = R0 . (4.246) S(t) v(t) + R z2ν(dz) R0

where βj(t) are computed from the Clark-Ocone theorem under the change of measure as follows:

Q βj(t) =E [Dj,t,0F − FKj(t)|Ft− ], 2 Z T X Q Kj(t) = Dj,t,0ui(s)dWi (s) j ∈ {1, 2}, (4.247) i=1 t and

Q ξ3(t, z) = E [D3,t,zF |Ft− ]. (4.248) 133

From the moments of the log-normal distribution, we have the following: Z Z z2ν(dz) =λ z2F (dz) = λE[Y (·)2] R0 R0 h 2 i =λ eδ (1 + k¯)2 + −2(1 + k¯) + 1 . (4.249)

We assume that our contingent claim F ∈ D1,2. Without loss of generality, we

assume that σj = 1 in (3.347). Hence, the stochastic derivative Dj,t,0 is the Malliavin Q derivative operator with respect to Wj . Although the Heston model has no closed form, Al´osand Ewald [4] were able to derive the following characterizations of the variance process v(t) and volatility process σ(t) = pv(t) as follows:

(i) v(s) ∈ L1,2 and for t ≤ s,  s   2    p Z κ κθ β 1 D1,t,zv(s) =β v(s) exp − − − du 1{z=0}, (4.250) t 2 2 8 v(u) p (ii) v(s) ∈ L1,2 and for t ≤ s,  s   2    p β Z κ κθ β 1 D1,t,z v(s) = exp − − − du 1{z=0}. (4.251) 2 t 2 2 8 v(u)

Hence, under Feller’s condition we have following upper bounds:  κ  |D v(s)| ≤βpv(t) exp − (s − t) , 1,t,z 2 β  κ  |D pv(s)| ≤ exp − (s − t) . (4.252) 1,t,z 2 2 Q Since v(s) and σ(s) do not depend on W2 , then

D2,t,0v(s) = D2,t,0σ(s) = 0. (4.253)

In addition, since u1(s) = 0, then

Dj,s,0u1(t) = 0, j ∈ {1, 2}. (4.254)

If the Feller condition holds, then from (4.229) - (4.230) implies 1 ∈ L2(Q). (4.255) pv(s) 134

On the other hand, since

1  κ  |v−3/2(s)D v(s)| ≤ exp − (s − t)) . (4.256) 1,t,0 v(s) 2

Then, from (4.229) - (4.230) and from dominated convergence theorem

2κθ 2 < ⇒ v−3/2(s)D v(s) ∈ L2(Q × λ). (4.257) β2 1,t,0

From the chain rule, we obtain

1 −3/2 D1,t,0 = v (s)D1,t,0v(s) (4.258) pv(s) and thus,

µ − r(t) −3/2 D1,t,0u2(s) = − v (s)D1,t,0v(s). (4.259) 2p1 − ρ2 So therefore, we have the following:

Z T µ − r(t) −3/2 K1(t) = − v (s)D1,t,0v(s)ds, K2(t) = 0. (4.260) p 2 2 1 − ρ t Collecting terms, we obtain the following mean-variance hedging portfolio

D(t, T ) ϕ(t) = · v(t) + R z2ν(dz) R0 " Z T # Q µ − r(t) −3/2 ρE D1,t,0F + F v (s)D1,t,0v(s)ds Ft− p 2 2 1 − ρ t Z  p 2 Q Q + 1 − ρ E [D2,t,0F | Ft− ] + zE [D3,t,zF | Ft− ]ν(dz) . (4.261) R0 To solve the mean-variance hedging portfolio explicitly, we need to compute

for Dj,t,zF . We consider the contingent claim of the form F = Φ(S(T )), where 1 2 0 2 Φ ∈ C (R+, R), such that Φ(S(T )) ∈ L (Q), Φ (S(T ))Dj,t,0S(T ) ∈ L (Q × λ), and

Φ(S(T )+zDj,t,zS(T ))−Φ(S(T )) 2 2 z ∈ L (Q × z ν(dz)dt) for z 6= 0, then from the chain rule, F ∈ D1,2 and Φ(S(T ) + zD S(T )) − Φ(S(T )) D F =Φ0(S(T ))D S(T )1 + j,t,z 1 j,t,z j,t,0 {z=0} z {z6=0} (4.262) 135 where

Dj,t,zS(T )

=Dj,t,z(S0 exp(Y (T ))) exp(Y (T ) + zD Y (T )) − exp(Y (T )) =S exp(Y (T ))D Y (T )1 + S j,t,z 1 0 j,t,z {z=0} 0 z {z6=0} exp(zD Y (T )) − 1 =S(T )D Y (T )1 + S(T ) j,t,z 1 . (4.263) j,t,z {z=0} z {z6=0}

1,2 p 1,2 Since Y (t) ∈ D and v(t), v(t) ∈ L , then from chain rule Dj,t,zY (T ) is computed as follows:  Z T  Z T  1 p p Q D1,t,zY (T ) = − D1,t,0v(s)ds + ρ v(t) + D1,t,0 v(s)dW1 (s) 2 t t Z T  p 2 p Q + 1 − ρ D1,t,0 v(s)dW2 (s) 1{z=0} t  Z T Z T  1 p p Q = − D1,t,0v(s)ds + ρ v(t) + D1,t,0 v(s)dB (s) 1{z=0} 2 t t (4.264) where

Q Q p 2 Q dB (s) = ρdW1 (s) + 1 − ρ dW2 (s) (4.265) is a Q Brownian motion,

p 2p D2,t,zY (T ) = 1 − ρ v(t)1{z=0}, (4.266) log(1 + z) D Y (T ) = 1 . (4.267) 3,t,z z {z6=0} So therefore, we have the following stochastic derivatives:

0 D1,t,zF =Φ (S(T ))S(T )·  Z T Z T  1 p p Q − D1,t,0v(s)ds + ρ v(t) + D1,t,0 v(s)dB (s) 1{z=0}, 2 t t (4.268)

0 p 2p D2,t,zF =Φ (S(T ))S(T ) 1 − ρ v(t)1{z=0}, (4.269) Φ(S(T )(1 + z)) − Φ(S(T )) D F = 1 . (4.270) 3,t,z z {z6=0} 136

Remark 4.3.2 With the recent results of Al´osand Rheinl¨anderon the Malliavin differentiability of the 3/2 model [6] and the reciprocal relation of the the Heston and the 3/2 model, then we find mean-variance portfolio with the similar fashion for the Bates model under the 3/2 stochastic volatility model, that is,

−  p  dS(t) = S(t ) µdt + v(t)dB(t) + dJ(t) ,S(0) = S0 > 0,

3/2 dv(t) = κ(θ − v(t))dt + βv(t) dW (t), v(0) = v0 > 0. (4.271) 137

5. DONSKER DELTA AND ITS APPLICATIONS TO FINANCE

The Donsker Delta is an approach to compute the generalized conditional expectation

E[Dt,zg(Y (T ))|Ft] without explicitly evaluating the Malliavin derivative [2], [27], [66] is first studied in the Wiener case. We shall present some applications in finding the mean-variance hedging portfolio.

5.1 Donsker Delta

Definition 5.1.1 Let Y :Ω → R be a random variable, belongs to (S)∗. Then the ∗ mapping δY : R → (S) is called the Donsker Delta function Y if for all measurable g : R → R Z g(Y ) = g(u)δY (u)dy , a.e. (5.1) R such that the integral on the RHS conveges [27].

We assume that the L´evyprocess X(t) satisfies the following condition to ensure the convergence of the Donsker Delta in (S)∗ .

Assumption There exists  ∈ (0, 1) such that

lim |u|−(1+)ReΨ(u) = ∞. (5.2) |u|→∞

• This entails strong Feller property of the semigroup which implies absolute continuity of its distribution with respect to the Lebesgue measure [17].

• A L´evyprocess with a Brownian term satisfies this condition.

This assumption was stated [58] to assure convergence of the Donsker Delta in (S)∗ in the pure jump L´evycase. However, this assumption still holds with a L´evyprocess 138 with a Wiener and pure jump components to assure convergence of the Donsker Delta in (S)∗. We state the following theorem for the Donsker Delta of the Itˆo-L´evyprocess

Theorem 5.1.1 [26] The Donsker Delta of the Itˆo-L´evyprocess Z dY (s) =α(s)ds + β(s)dW (s) + γ(s, x)N˜(ds, dx), s ∈ [t, T ] R0 Y (t) =y (5.3) where y is a constant, α(s), β(s), and γ(s, x) > −1, deterministic (s, x) ∈ [0,T ] × R0 such that Z T  Z  |α(s)| + β2(s) + γ2(s, x)ν(dx) ds < ∞ (5.4) 0 R0 is given as follows

Z  Z T   1  1 2 2 δY (T )(u) = exp −iλu + iλy + iλα(s) − λ β (s) ds 2π R t 2 Z T Z + iλβ(s)dW (s) + (exp(iλγ(s, x)) − 1 − iλγ(s, x)) ν(dx)ds t [t,T ]×R0 Z  + (exp(iλγ(s, x)) − 1) N˜(ds, dx) dλ. (5.5) [t,T ]×R0 whenever it converges in (S)∗.

Let g : R → R be measurable, then from the Donsker Delta function of the Itˆo-L´evy process, we obtain

Z  1 Z  Z T  1  g(Y (T )) = g(u) exp −iλu + iλy + iλα(s) − λ2β2(s) ds R 2π R t 2 Z T Z + iλβ(s)dW (s) + (exp(iλγ(s, x)) − 1 − iλγ(s, x)) ν(dx)ds t [t,T ]×R0 Z   + (exp(iλγ(s, x)) − 1) N˜(ds, dx) dλ du. (5.6) [t,T ]×R0

5.2 Evaluation of E[Dt,zg(Y (T ))|Ft]

We evaluate E[Dt,zg(Y (T ))|Ft] by considering 2 cases. Case I: z = 0 and Case II: z 6= 0 139

5.2.1 Case I: E[Dt,0g(Y (T ))|Ft]

Taking the stochastic derivative Dt,0 in (5.1) yields Z Dt,0g(Y (T )) = g(u)Dt,0δY (T )(u)du. (5.7) R Then, taking the generalized conditional expectation, we obtain Z E[Dt,0g(Y (T ))|Ft] = g(u)E[Dt,0δY (T )(u)|Ft]du. (5.8) R From the Wick chain rule and since iλβ(t) is deterministic, then Z  Z T   1  1 2 2 Dt,0δY (T )(u) = exp −iλu + iλy + iλα(s) − λ β (s) ds 2π R t 2 Z T Z + iλβ(s)dW (s) + (exp(iλγ(s, x)) − 1 − iλγ(s, x)) ν(dx)ds t [t,T ]×R0 Z  β(t) + (exp(iλγ(s, x)) − 1) N˜(ds, dx) iλ dλ. (5.9) σ [t,T ]×R0 Taking the generalized conditional expectation, we obtain Z  Z T   β(t)  E[Dt,0δY (T )(u)|Ft] = iλ exp(−iλu + iλy)E exp iλβ(s)dW (s) Ft  2πσ R t  Z    ˜ E exp (exp(iλγ(s, x)) − 1) N(ds, dx) Ft · [t,T ]×R0 Z T  1  exp iλα(s) − λ2β2(s) ds t 2 Z  + (exp(iλγ(s, x)) − 1 − iλγ(s, x)) ν(dx)ds dλ. (5.10) [t,T ]×R0 Now since  Z T    Z T    E exp iλβ(s)dW (s) Ft = exp E iλβ(s)dW (s) Ft t t = exp(0) = 1 (5.11) and  Z    ˜ E exp (exp(iλγ(s, x)) − 1) N(ds, dx) Ft [t,T ]×R0  Z   ˜  = exp E (exp(iλγ(s, x)) − 1) N(ds, dx) Ft = exp (0) = 1. (5.12) [t,T ]×R0 140

Hence, using the above results and from previous theorem yield

Z  Z T   β(t) −iλu 1 2 2 E[Dt,0δY (T )(u)|Ft] = iλe exp iλy + iλα(s) − λ β (s) ds+ 2πσ R t 2 Z  (exp(iλγ(s, x)) − 1 − iλγ(s, x)) ν(dx)ds dλ [t,T ]×R0 Z β(t) −iλu = iλe E[exp(iλY (T ))|Ft]dλ. (5.13) 2πσ R If the characteristic function is known but not the pdf, the last expression is sufficient

1 to compute the expression E[Dt,0g(Y (T ))|Ft]. Since | exp(iλY (T ))| ∈ L , the con- ditional expectation on the right-hand side is an ordinary conditional expectation.

On the other hand, if the pdf of Y (T ) conditional to Ft denoted by fY (T )(·|Ft) is differentiable, then, from the differentiation property of the Fourier transform, we obtain

β(t) d E[D δ (u)|F ] = − f (u|F ). (5.14) t,0 Y (T ) t σ du Y (T ) t

Hence, E[Dt,0g(Y (T ))|Ft] can be expressed of the form Z Z β(t) −iλu E[Dt,0g(Y (T ))|Ft] = g(u) iλe E[exp(iλY (T ))|Ft]dλdu. (5.15) R 2πσ R This can be simulated by Fourier transform techniques [18], [20] together with Monte

Carlo simulation. Alternatively, if fY (T )(·|Ft) is differentiable, then,

β(t) Z d E[Dt,0g(Y (T ))|Ft] = − g(u) fY (T )(u|Ft)du σ R du β(t) Z d = − g(u) log fY (T )(u|Ft)fY (T )(u|Ft)du σ R du β(t)   = − E g(Y (T )) log fY (T )(Y (T )|Ft) Ft . (5.16) σ

The conditional expectation in the right-hand is an ordinary conditional expectation and its form resembles a likelihood ratio method estimator [34]. This can be simulated by Monte Carlo simulation. 141

5.2.2 Case II: E[Dt,zg(Y (T ))|Ft], z 6= 0

Taking the stochastic derivative Dt,z, where z 6= 0, in (5.1) yields Z Dt,zg(Y (T )) = g(u)Dt,zδY (T )(u)du. (5.17) R Then, taking the generalized conditional expectation, we obtain Z E[Dt,zg(Y (T ))|Ft] = g(u)E[Dt,zδY (T )(u)|Ft]du. (5.18) R From the Wick chain rule and since exp(iλγ(t, z)) − 1 is deterministic, then Z  Z T   1  1 2 2 Dt,zδY (T )(u) = exp −iλu + iλy + iλα(s) − λ β (s) ds 2π R t 2 Z T Z + iλβ(s)dW (s) + (exp(iλγ(s, x)) − 1 − iλγ(s, x)) ν(dx)ds t [t,T ]×R0 Z  (exp(iλγ(t, z)) − 1) + (exp(iλγ(s, x)) − 1) N˜(ds, dx) dλ. (5.19) z [t,T ]×R0 Taking the generalized conditional expectation, we obtain

E[Dt,zδY (T )(u)|Ft] Z  Z T   1 (exp(iλγ(t, z)) − 1)  = exp(−iλu + iλy)E exp iλβ(s)dW (s) Ft  2π R z t  Z    ˜ E exp (exp(iλγ(s, x)) − 1) N(ds, dx) Ft · [t,T ]×R0 Z T  1  exp iλα(s) − λ2β2(s) ds t 2 Z  + (exp(iλγ(s, x)) − 1 − iλγ(s, x)) ν(dx)ds dλ. (5.20) [t,T ]×R0 Performing similar calculations from the Brownian case, we obtain Z 1 −iλu (exp(iλγ(t, z)) − 1) E[Dt,zδY (T )(u)|Ft] = e E[exp(iλY (T ))|Ft] dλ. (5.21) 2π R z If the characteristic function is known but not the pdf, the last expression is suffi-

1 cient to compute the expression E[Dt,zg(Y (T ))|Ft]. Since | exp(iλY (T ))| ∈ L , the conditional expectation on the right-hand side is an ordinary conditional expecta- tion. On the other hand, provided that the pdf of Y (T ) conditional to Ft denoted by 142

fY (T )(·|Ft) is known, then, from the translation property of the Fourier Transform, we obtain

E[Dt,zδY (T )(u)|Ft] Z Z 1 −iλ(u−γ(t,z)) 1 −iλu = e E[exp(iλY (T ))|Ft]dλ − e E[exp(iλY (T ))|Ft]dλ 2πz R 2π R f (u − γ(t, z)|F ) − f (u|F ) = Y (T ) t Y (T ) t . (5.22) z

Hence, E[Dt,zg(Y (T ))|Ft] can be expressed as follows: Z  Z g(u) 1 −iλ(u−γ(t,z)) E[Dt,zg(Y (T ))|Ft] = e E[exp(iλY (T ))|Ft]dλ R z 2π R Z  1 −iλu − e E[exp(iλY (T ))|Ft]dλ du. (5.23) 2π R This can be simulated by Fourier transform techniques [18], [20] together with Monte

Carlo simulation. Likewise, if fY (T )(·|Ft) is known, then, Z   fY (T )(u − γ(t, z)|Ft) − fY (T )(u|Ft) E[Dt,zg(Y (T ))|Ft] = g(u) du R z Z g(u + γ(t, z)) − g(u) = fY (T )(u|Ft)du R z E[g(Y (T ) + γ(t, z))|F ] − E[g(Y (T ))|F ] = t t . (5.24) z Since Z T Z T Z Y (T ) =y + α(s)ds + β(s)dW (s) + γ(t, x)N˜(ds, dx), s ∈ [t, T ] t t [t,T ]×R0 (5.25) then, for (t, z) ∈ [0,T ] × R0, γ(t, z) D Y (T ) = . (5.26) t,z z Hence, we obtain E[g(Y (T ) + zD Y (T ))|F ] − E[g(Y (T ))|F ] E[D g(Y (T ))|F ] = t,z t t . (5.27) t,z t z The result shows this corresponds to the conditional expectation of a increment op-

erator. This is the same result in the D1,2 case. Moreover, this can be simulated by Monte Carlo simulation. 143

5.3 Examples

5.3.1 Merton Model

We take a look again at the Merton model in Chapter 4.3.3. Consider the jump- diffusion of the form

N(t) X iid 2 X(t) = µt + σW (t) + Yi Yi ∼ N(m, δ ). (5.28) i=0 Then its pdf and cdf is given respectively as follows:

∞ k X (λt) f (x) = e−λt φ(x, µt + km, σ2t + kδ2), (5.29) X(t) k! k=0 ∞ k X (λt) F (x) = e−λt N(x, µt + km, σ2t + kδ2) (5.30) X(t) k! k=0

where φ(·, a, b2) and N(·, a, b2) is the pdf and cdf of the normal distribution N(a, b2) respectively. From the risk-neutral model (4.152) of the Merton model, we have the following solution of the geometric L´evy process,

Z t  2  Z t σ Q S1(t) =S1(0) exp r(s) − ds + σdW (s) 0 2 0 Z Z  + (z − (ex − 1))ν(dx)ds + zN˜ Q(ds, dx) . (5.31) [0,t]×R0 [0,t]×R0 Then Y (t) defined as

 S (t)  Y (t) = log 1 (5.32) S1(0)

is an Itˆo-L´evy process of the form

Z t Z t Z Y (t) = α(s)ds + β(s)dW (s) + γ(s, x)N˜(ds, dx) (5.33) 0 0 [0,t]×R0 where

σ2 Z α(s) = r(s) − + (x − (ex − 1))ν(dz), β(s) = σ, γ(s, z) = z. (5.34) 2 R0 144

Let $F$ be the distribution function of $Y_1$; then we have the following integrals:
$$\int_{\mathbb{R}_0}(e^x - x - 1)\,\nu(dx) = \lambda\int_{\mathbb{R}_0}(e^x - x - 1)\,F(dx) = \lambda\big(E[e^{Y_1}] - E[Y_1] - 1\big) = \lambda\Big(\exp\Big(m+\frac{\delta^2}{2}\Big) - m - 1\Big), \tag{5.35}$$
$$\int_{\mathbb{R}_0}\gamma^2(s,x)\,\nu(dx) = \lambda\int_{\mathbb{R}_0}x^2\,F(dx) = \lambda E[Y_1^2] = \lambda\big(m^2 + \delta^2\big). \tag{5.36}$$

If $r(t)$ is deterministic and belongs to $L^1[0,T]$, then (2.31) is satisfied.
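To make the Poisson-mixture series (5.29)-(5.30) concrete, here is a sketch of their numerical evaluation. The truncation level `kmax` and the function name are numerical choices of this sketch, not part of the model.

```python
import numpy as np
from scipy.stats import norm, poisson

def merton_pdf_cdf(x, t, mu, sigma, lam, m, delta, kmax=60):
    """Truncated evaluation of the Merton pdf (5.29) and cdf (5.30)
    as a Poisson(lam*t)-weighted mixture of normals."""
    x = np.asarray(x, dtype=float)
    pdf, cdf = np.zeros_like(x), np.zeros_like(x)
    for k in range(kmax + 1):
        w = poisson.pmf(k, lam * t)          # e^{-lam t} (lam t)^k / k!
        mean = mu * t + k * m
        sd = np.sqrt(sigma**2 * t + k * delta**2)
        pdf += w * norm.pdf(x, mean, sd)
        cdf += w * norm.cdf(x, mean, sd)
    return pdf, cdf
```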

Consider the contingent claim F = Φ(S1(T )), where Φ : R+ → R and let g : R → R such that

$$g(u) = \Phi(S_0e^u), \qquad u\in\mathbb{R}. \tag{5.37}$$

Then, we have the following relationship:

$$F = \Phi(S_1(T)) = g(Y(T)). \tag{5.38}$$

For the Merton model, the mean-variance hedging portfolio is given by (4.154). The Wick product is invariant under the change of measure in the Wiener case [48]. This identity also holds in the Lévy case and can be proven by the same approach as in the Wiener case: one first establishes the identity for Doléans-Dade exponentials and then extends it by a density argument [48]. Hence, we can use the results from the previous section to evaluate $E^Q[D_{t,z}g(Y(T))\,|\,\mathcal{F}_t]$.

Example: Binary Option. We consider the binary option

$$F = \Phi(S_1(T)) = \mathbf{1}_{[K,L]}(S_1(T)); \tag{5.39}$$

then,

$$g(u) = \mathbf{1}_{[\log(K/S_0),\,\log(L/S_0)]}(u), \qquad u\in\mathbb{R}. \tag{5.40}$$

Although $F\notin\mathbb{D}^{1,2}$, we can nevertheless compute $E[D_{t,z}g(Y(T))\,|\,\mathcal{F}_t]$ using the Donsker delta approach as follows:
$$E^Q[D_{t,0}F\,|\,\mathcal{F}_t] = -\frac{\beta(t)}{\sigma}\int_{\mathbb{R}} g(u)\,\frac{d}{du}f_{Y(T)}(u\,|\,\mathcal{F}_t)\,du = -\frac{\beta(t)}{\sigma}\int_{\log(K/S_0)}^{\log(L/S_0)}\frac{d}{du}f_{Y(T)}(u\,|\,\mathcal{F}_t)\,du = \frac{\beta(t)}{\sigma}\Big(f_{Y(T)}\big(\log(K/S_0)\,\big|\,\mathcal{F}_t\big) - f_{Y(T)}\big(\log(L/S_0)\,\big|\,\mathcal{F}_t\big)\Big), \tag{5.41}$$
and for $z\neq 0$,

$$E^Q[D_{t,z}F\,|\,\mathcal{F}_t] = \frac{E^Q[g(Y(T)+zD_{t,z}Y(T))\,|\,\mathcal{F}_t] - E^Q[g(Y(T))\,|\,\mathcal{F}_t]}{z} = \frac{E^Q\big[\mathbf{1}_{[\log(K/S_0),\log(L/S_0)]}(Y(T)+z)\,\big|\,\mathcal{F}_t\big] - E^Q\big[\mathbf{1}_{[\log(K/S_0),\log(L/S_0)]}(Y(T))\,\big|\,\mathcal{F}_t\big]}{z}$$
$$= \frac{Q\big(Y(T)+z\in\big[\log\tfrac{K}{S_0},\log\tfrac{L}{S_0}\big]\,\big|\,\mathcal{F}_t\big) - Q\big(Y(T)\in\big[\log\tfrac{K}{S_0},\log\tfrac{L}{S_0}\big]\,\big|\,\mathcal{F}_t\big)}{z}$$
$$= \frac{1}{z}\bigg[F_{Y(T)}\Big(\log\tfrac{L}{S_0}-z\,\Big|\,\mathcal{F}_t\Big) - F_{Y(T)}\Big(\log\tfrac{K}{S_0}-z\,\Big|\,\mathcal{F}_t\Big) - F_{Y(T)}\Big(\log\tfrac{L}{S_0}\,\Big|\,\mathcal{F}_t\Big) + F_{Y(T)}\Big(\log\tfrac{K}{S_0}\,\Big|\,\mathcal{F}_t\Big)\bigg] \tag{5.42}$$
where the conditional pdf and cdf of the jump-diffusion process $Y(T)$ under the risk-neutral measure are given as

$$f_{Y(T)}(x\,|\,\mathcal{F}_t) = \sum_{k=0}^{\infty} e^{-\lambda(T-t)}\,\frac{(\lambda(T-t))^k}{k!}\,\phi\big(x;\,\alpha(T-t)+km,\,\sigma^2(T-t)+k\delta^2\big), \tag{5.43}$$
$$F_{Y(T)}(x\,|\,\mathcal{F}_t) = \sum_{k=0}^{\infty} e^{-\lambda(T-t)}\,\frac{(\lambda(T-t))^k}{k!}\,N\big(x;\,\alpha(T-t)+km,\,\sigma^2(T-t)+k\delta^2\big). \tag{5.44}$$
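A sketch of (5.42) follows, evaluating the four conditional cdf terms via the hypothetical `merton_pdf_cdf` helper above, with `mu` replaced by the drift $\alpha$ of (5.34) and the horizon $T-t$; all names are this sketch's own conventions.

```python
import numpy as np

def binary_dtz(z, t, T, K, L, S0, alpha, sigma, lam, m, delta):
    """Evaluate (5.42) for the binary option via differences of the
    conditional cdf (5.44), assuming merton_pdf_cdf as sketched above."""
    a, b = np.log(K / S0), np.log(L / S0)
    _, cdf = merton_pdf_cdf(np.array([b - z, a - z, b, a]), T - t,
                            alpha, sigma, lam, m, delta)
    return (cdf[0] - cdf[1] - cdf[2] + cdf[3]) / z
```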

5.3.2 Continuous Case

We demonstrate some special results in the continuous case. Using the generalized conditional expectation $E^Q[D_{t,0}g(Y(T))\,|\,\mathcal{F}_t]$, we obtain a Delta of an option that coincides with the Delta obtained from the likelihood method. From the risk-neutral model (4.152) of the geometric Brownian motion,

$$dS_1(t) = S_1(t)\big(r\,dt + \sigma\,dW^Q(t)\big), \qquad S_1(0) = S_0. \tag{5.45}$$

The solution for this SDE is as follows:

$$S_1(t) = S_0\exp\bigg(\int_0^t\Big(r(s)-\frac{\sigma^2}{2}\Big)ds + \int_0^t\sigma\,dW^Q(s)\bigg). \tag{5.46}$$
Then $Y(t)$ defined as
$$Y(t) = \log\Big(\frac{S_1(t)}{S_1(0)}\Big) \tag{5.47}$$
is an Itô-Lévy process of the form
$$Y(t) = \int_0^t\alpha(s)\,ds + \int_0^t\beta(s)\,dW^Q(s) \sim N\Big(\int_0^t\alpha(s)\,ds,\ \int_0^t\beta^2(s)\,ds\Big) \tag{5.48}$$
where
$$\alpha(s) = r(s) - \frac{\sigma^2}{2}, \qquad \beta(s) = \sigma, \qquad \gamma(s,x) = 0. \tag{5.49}$$
Hence,
$$Y(T) = Y(t) + \int_t^T\alpha(s)\,ds + \int_t^T\beta(s)\,dW^Q(s). \tag{5.50}$$
If $r(t)$ is deterministic and belongs to $L^1[0,T]$, then (2.31) is satisfied. The mean-variance hedging portfolio for the continuous case, from (4.136), is given as

$$\varphi(t) = D(t,T)\,\frac{E^Q[D_{t,0}F\,|\,\mathcal{F}_{t^-}]}{S_1(t)}. \tag{5.51}$$

Consider the contingent claim $F = \Phi(S_1(T))$, where $\Phi:\mathbb{R}_+\to\mathbb{R}$, and let $g:\mathbb{R}\to\mathbb{R}$ be such that

$$g(u) = \Phi(S_0e^u), \qquad u\in\mathbb{R}. \tag{5.52}$$

We can compute $E^Q[D_{t,0}F\,|\,\mathcal{F}_t]$ as follows:

$$E^Q[D_{t,0}g(Y(T))\,|\,\mathcal{F}_t] = -\frac{\beta(t)}{\sigma}\,E\Big[g(Y(T))\,\frac{d}{du}\log f_{Y(T)}(Y(T)\,|\,\mathcal{F}_t)\,\Big|\,\mathcal{F}_t\Big]. \tag{5.53}$$

We can express $Y(T)$ conditional on $\mathcal{F}_t$ as follows:

$$Y(T)\,\big|\,\mathcal{F}_t = Y(t) + \int_t^T\alpha(s)\,ds + \int_t^T\beta(s)\,dW^Q(s) \sim N\big(Y(t) + m_{[t,T]},\ \Sigma_{[t,T]}\big) \tag{5.54}$$
where
$$m_{[t,T]} = \int_t^T\alpha(s)\,ds, \qquad \Sigma_{[t,T]} = \int_t^T\beta^2(s)\,ds. \tag{5.55}$$

Then the density of $Y(T)$ conditional on $\mathcal{F}_t$ is

$$f_{Y(T)}(u\,|\,\mathcal{F}_t) = \frac{1}{\sqrt{2\pi\Sigma_{[t,T]}}}\exp\bigg(-\frac{\big(u - (Y(t)+m_{[t,T]})\big)^2}{2\Sigma_{[t,T]}}\bigg). \tag{5.56}$$

Hence, taking the logarithm gives us

$$\frac{d}{du}\log f_{Y(T)}(Y(T)\,|\,\mathcal{F}_t) = -\frac{Y(T) - (Y(t)+m_{[t,T]})}{\Sigma_{[t,T]}} = -\frac{\int_t^T\beta(s)\,dW^Q(s)}{\int_t^T\beta^2(s)\,ds}. \tag{5.57}$$
So therefore,
$$E^Q[D_{t,0}F\,|\,\mathcal{F}_t] = E^Q\bigg[\Phi(S_1(T))\,\frac{\int_t^T\sigma\,dW^Q(s)}{\int_t^T\sigma^2\,ds}\,\bigg|\,\mathcal{F}_t\bigg]. \tag{5.58}$$
Hence, the mean-variance hedging portfolio is given by
$$\varphi(t) = D(t,T)\,E^Q\bigg[\frac{\Phi(S_1(T))}{S_1(t)}\,\frac{\int_t^T\sigma\,dW^Q(s)}{\int_t^T\sigma^2\,ds}\,\bigg|\,\mathcal{F}_t\bigg]. \tag{5.59}$$
If we let $r(s) = r$ be constant and $t = 0$, we obtain the familiar Delta of the option from the likelihood method, as well as from the technique using the first variation process of Fournié et al. [31]:
$$\Delta = e^{-rT}E^Q\bigg[\Phi(S_1(T))\,\frac{W^Q(T)}{\sigma S_0 T}\bigg]. \tag{5.60}$$
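A minimal Monte Carlo sketch of (5.60) follows; the function name and arguments are this sketch's own conventions. Unlike pathwise differentiation, the likelihood-ratio weight also works for discontinuous payoffs such as the binary option below.

```python
import numpy as np

def mc_delta_likelihood(S0, r, sigma, T, payoff, n_paths=100_000, seed=0):
    """Monte Carlo Delta via the likelihood-ratio weight of (5.60):
    Delta = e^{-rT} E[ payoff(S_T) * W_T / (sigma * S0 * T) ]."""
    rng = np.random.default_rng(seed)
    WT = rng.normal(0.0, np.sqrt(T), n_paths)                  # W^Q(T)
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * WT)    # terminal price
    weight = WT / (sigma * S0 * T)
    return np.exp(-r * T) * np.mean(payoff(ST) * weight)
```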

Example We consider the Binary option

$$F = \Phi(S_1(T)) = \mathbf{1}_{[K,L]}(S_1(T)). \tag{5.61}$$

Then, the mean-variance hedging portfolio is given by

$$\varphi(t) = \frac{D(t,T)}{\sigma S_1(t)\sqrt{T-t}}\,\big[\phi(d_2(K)) - \phi(d_2(L))\big] \tag{5.62}$$
where $\phi(\cdot)$ is the density function of the standard normal distribution and

$$d_2(H) = \frac{\log\frac{S_1(t)}{H} + \int_t^T\big(r(s)-\frac{\sigma^2}{2}\big)ds}{\sqrt{\Sigma_{[t,T]}}} = \frac{\log\frac{S_1(t)}{H} + m_{[t,T]}}{\sqrt{\int_t^T\sigma^2\,ds}}. \tag{5.63}$$
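For constant $r$, (5.62)-(5.63) reduce to the closed form coded below; this is a sketch, with names chosen for this example only.

```python
import numpy as np
from scipy.stats import norm

def binary_hedge_ratio(t, T, S_t, K, L, r, sigma, D):
    """Closed-form hedge (5.62)-(5.63) for the binary option under the
    continuous (GBM) model with constant r; D is the discount D(t,T)."""
    tau = T - t
    d2 = lambda H: (np.log(S_t / H) + (r - 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    return D / (sigma * S_t * np.sqrt(tau)) * (norm.pdf(d2(K)) - norm.pdf(d2(L)))
```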

6. EVALUATING GREEKS IN EXOTIC OPTIONS

6.1 Preliminaries

We investigate, using Malliavin calculus, the Greeks for exotic options that involve the running supremum and running infimum of an asset process, such as barrier and lookback options. To be able to use Malliavin calculus on these options, one should show Malliavin differentiability of the running supremum. Nualart and Vives proved this in the continuous case [62]. Moreover, a recent paper by Arai and Suzuki proved Malliavin differentiability in the Lévy case [10]. Throughout the rest of this chapter, we shall present the assumptions taken from [38] and [16]. Without loss of generality, we consider a single risky asset $S$. Consider the risky-asset price under the $Q$ dynamics modeled as an exponential Lévy process

$$S(t) = S_0\exp(rt + L(t)) \tag{6.1}$$
where $r \ge 0$ is the risk-free interest rate and $L(t)$ is a square-integrable Lévy process with characteristic triplet $(b,\sigma^2,\nu)$; that is, $L(t)$ can be written as
$$L(t) = bt + \sigma W^Q(t) + \int_{[0,t]\times\mathbb{R}_0} z\,\tilde{N}^Q(ds,dz). \tag{6.2}$$
In addition, we assume that $S \in L^2(Q)$ and that $e^{-rt}S(t)$ is a $Q$-martingale. Hence, we have the following restrictions [21]:
$$\int_{|z|\ge 1} e^{2z}\,\nu(dz) < \infty, \qquad b + \frac{\sigma^2}{2} + \int_{\mathbb{R}_0}(e^z - z - 1)\,\nu(dz) = 0. \tag{6.3}$$
From Itô's lemma, the SDE of the exp-Lévy process is given by
$$dS(t) = \alpha(t,S(t))\,dt + \beta(t,S(t))\,dW^Q(t) + \int_{\mathbb{R}_0}\gamma(t,S(t),z)\,\tilde{N}^Q(dt,dz) \tag{6.4}$$
where

$$\alpha(t,x) = \Big(r + \frac{\sigma^2}{2} + \int_{\mathbb{R}_0}(e^z - z - 1)\,\nu(dz)\Big)x, \qquad \beta(t,x) = \sigma x, \qquad \gamma(t,x,z) = (e^z - 1)x. \tag{6.5}$$

Then, by choosing

$$c(t) = \Big(r + \frac{\sigma^2}{2} + \int_{\mathbb{R}_0}(e^z - z - 1)\,\nu(dz)\Big)^2 + \sigma^2 + \int_{\mathbb{R}_0}(e^z - 1)^2\,\nu(dz) \tag{6.6}$$
in Chapter 2.4, the SDE of the exp-Lévy process has a strong solution provided that
$$\int_{\mathbb{R}_0}(e^z - z - 1)\,\nu(dz) < \infty, \qquad \int_{\mathbb{R}_0}(e^z - 1)^2\,\nu(dz) < \infty. \tag{6.7}$$

Chapter 2.4 then implies $\sup_{t\in[0,T]}S(t)\in L^2(Q)$. Denote the running supremum and infimum processes as follows:

$$M^S(t) = \sup_{u\in I\cap[0,t]} S(u), \qquad m^S(t) = \inf_{u\in I\cap[0,t]} S(u),$$
$$M^L(t) = \sup_{u\in I\cap[0,t]} L(u), \qquad m^L(t) = \inf_{u\in I\cap[0,t]} L(u). \tag{6.8}$$

There are two monitoring schemes of interest, namely:

• continuous-time monitoring: I = [0,T ]

• discrete-time monitoring: I = {0 = t0 < t1 < ··· < tn−1 < tn = T }.

Without loss of generality, we denote the following shorthand notation:

M S = M S(T ), mS = mS(T ),S = S(T ). (6.9)

Consider a payoff of the form $\Phi = \Phi(M^S, m^S, S)$, square integrable, under the following assumptions.

Assumption (S) There exists a > 0 such that the following condition holds:

• $\Phi(M^S, m^S, S)$ does not depend on $M^S$ if $M^S < S_0\exp(a)$,

• $\Phi(M^S, m^S, S)$ does not depend on $m^S$ if $m^S > S_0\exp(-a)$.

Remark 6.1.1 This assumption is not so restrictive and it includes a large class of barrier and lookback options. Some examples are as follows: Let ϕ be a vanilla option payoff and a > 0.

• Single barrier options

– Up and out barrier option:

$\Phi(M^S, m^S, S) = \mathbf{1}_{\{M^S < U\}}\,\varphi(S)$, $\quad a = \log(U/S_0)$

– Down and in barrier option:

$\Phi(M^S, m^S, S) = \mathbf{1}_{\{m^S \le D\}}\,\varphi(S)$, $\quad a = \log(S_0/D)$

• Double barrier options

– Double in barrier option:

$\Phi(M^S, m^S, S) = \mathbf{1}_{\{M^S \ge U\}}\mathbf{1}_{\{m^S \le D\}}\,\varphi(S)$, $\quad a = \min(\log(U/S_0), \log(S_0/D))$

– Double out barrier option:

$\Phi(M^S, m^S, S) = \mathbf{1}_{\{M^S < U\}}\mathbf{1}_{\{m^S > D\}}\,\varphi(S)$, $\quad a = \min(\log(U/S_0), \log(S_0/D))$

– Mixed in/out out barrier option:

$\Phi(M^S, m^S, S) = \mathbf{1}_{\{M^S < U\}}\mathbf{1}_{\{m^S \le D\}}\,\varphi(S)$

This option doesn’t directly satisfy (S). However, we can write the payoff as follows:

$\Phi(M^S, m^S, S) = \mathbf{1}_{\{m^S \le D\}}\,\varphi(S) - \mathbf{1}_{\{M^S \ge U\}}\mathbf{1}_{\{m^S \le D\}}\,\varphi(S)$

which is a linear combination of payoffs verifying (S).

• Backward start lookback put:

$\Phi(M^S, m^S, S) = \max(M_0, M^S) - S$, $\ M_0 > S_0$, $\quad a = \log(M_0/S_0)$

• Out of the money put on minimum:

$\Phi(M^S, m^S, S) = (K - m^S)^+$, $\ K < S_0$, $\quad a = \log(S_0/K)$

Let $\Psi : [0,\infty) \to [0,1]$ be a localizing function belonging to $C_b^\infty$ such that
$$\Psi(x) = \begin{cases} 1, & x \le a/2, \\ 0, & x \ge a \end{cases} \tag{6.10}$$

where a > 0 is given by Assumption (S).
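One standard construction of such a smooth localizer is the bump-function quotient sketched below; the text does not fix a particular $\Psi$, so this is only one admissible choice.

```python
import numpy as np

def bump(x):
    """exp(-1/x) for x > 0, else 0: the standard C-infinity mollifier piece."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    out[x > 0] = np.exp(-1.0 / x[x > 0])
    return out

def make_localizer(a):
    """Smooth Psi with Psi = 1 on [0, a/2] and Psi = 0 on [a, inf),
    satisfying (6.10); the two bumps never vanish simultaneously."""
    def Psi(x):
        num = bump(a - np.asarray(x, dtype=float))
        return num / (num + bump(np.asarray(x, dtype=float) - a / 2))
    return Psi
```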

Assumption (NS) We assume that the Lévy measure $\nu$ satisfies the so-called Nualart-Schoutens assumption (3.23).

Definition 6.1.1 X-dominating process An increasing, predictable, cadlag process Y = {Y (t): t ∈ [0,T ]} is a dominating process for X = {X(t): t ∈ [0,T ]} or an X-dominating process if the following condition holds: For all t ∈ I,

|X(t)| ≤ Y (t). (6.11)

Moreover, we assume that the dominating process has the following moment property.

Assumption (H)

There exists a positive function $\alpha : \mathbb{N} \to [0,\infty)$ with $\lim_{q\to\infty}\alpha(q) = \infty$, such that for all $q\in\mathbb{N}$,
$$E[Y^q(t)] \le C_q\,t^{\alpha(q)}, \qquad \forall t\in[0,T]. \tag{6.12}$$

In particular, Y (0) = 0.

Moreover, we have the following Malliavin differentiability assumption. 153

Assumption R(q)

$\Psi(Y(t)) \in \mathbb{D}^{q,\infty}$ for all $t\in[0,T]$. Moreover, for $j\in\{1,\dots,q\}$,
$$\sup_{r_1,\dots,r_j\in[0,T]} E\Big[\sup_{r_1\vee\cdots\vee r_j\le t\le T}\big|D_{(r_1,0),\dots,(r_j,0)}\Psi(Y(t))\big|^p\Big] \le C_p. \tag{6.13}$$

6.2 Markovian Property of the Payoff

Let Φ ∈ L2 be Borel measurable and s ≤ t, then

$$\Phi\big(M^S(t), m^S(t), S(t)\big) = \Phi\Big(\sup_{u\in I\cap[0,t]}S(u),\ \inf_{u\in I\cap[0,t]}S(u),\ S(t)\Big) = \Phi\Big(S_0e^{rt}\exp\Big(\sup_{u\in I\cap[0,t]}L(u)\Big),\ S_0e^{rt}\exp\Big(\inf_{u\in I\cap[0,t]}L(u)\Big),\ S_0e^{rt}\exp(L(t))\Big). \tag{6.14}$$

From the independent-increments property of Lévy processes, the last terms of the following expressions are independent of $\mathcal{F}_s$:
$$\sup_{u\in I\cap[0,t]}L(u) = \max\Big(\sup_{u\in I\cap[0,s]}L(u),\ \sup_{u\in I\cap[s,t]}L(u)\Big) = \max\Big(M^L(s),\ L(s) + \sup_{u\in I\cap[s,t]}\big(L(u)-L(s)\big)\Big), \tag{6.15}$$
$$\inf_{u\in I\cap[0,t]}L(u) = \min\Big(\inf_{u\in I\cap[0,s]}L(u),\ \inf_{u\in I\cap[s,t]}L(u)\Big) = \min\Big(m^L(s),\ L(s) + \inf_{u\in I\cap[s,t]}\big(L(u)-L(s)\big)\Big), \tag{6.16}$$
$$L(t) = L(s) + \big(L(t)-L(s)\big). \tag{6.17}$$

Hence, we have the following conditional expectation with respect to Fs:

$$E\big[\Phi\big(M^S(t), m^S(t), S(t)\big)\,\big|\,\mathcal{F}_s\big] = E\big[\Phi\big(M^S(t), m^S(t), S(t)\big)\,\big|\,(M^L(s), m^L(s), L(s))\big] = E\big[\Phi\big(M^S(t), m^S(t), S(t)\big)\,\big|\,(M^S(s), m^S(s), S(s))\big]. \tag{6.18}$$

6.3 Malliavin Derivatives of the Supremum and Infimum

Before we proceed to derive the Greeks, we need to characterize the Malliavin derivative of the infimum and supremum for both discrete and continuous monitoring. Most of the derivation of the Malliavin derivatives of the supremum process was done by Arai and Suzuki [10]. For completeness, we shall give the corresponding results for the infimum process.

Let x ∈ R, then the positive and negative part of x are given as follows:

$$x^+ = \max(x,0) = x\cdot\mathbf{1}_{\{x\ge 0\}}, \tag{6.19}$$
$$x^- = -\min(x,0) = -x\cdot\mathbf{1}_{\{x<0\}}. \tag{6.20}$$

Furthermore, let y ∈ R, then we can write the maximum and minimum of x and y as follows:

max(x, y) = (x − y)+ + y, (6.21)

min(x, y) = −(x − y)− + y. (6.22)

Lemma 6.3.1 Let $F\in\mathbb{D}^{1,2}$ and $K\in\mathbb{R}$; then $(F-K)^+\in\mathbb{D}^{1,2}$ with

(i)

$$D_{t,z}(F-K)^+ = \mathbf{1}_{\{F>K\}}D_{t,0}F\,\mathbf{1}_{\{z=0\}} + \frac{(F+zD_{t,z}F-K)^+ - (F-K)^+}{z}\,\mathbf{1}_{\{z\neq 0\}}, \tag{6.23}$$

(ii)

$$D_{t,z}(F-K)^- = -\mathbf{1}_{\{F\le K\}}D_{t,0}F\,\mathbf{1}_{\{z=0\}} + \frac{(F+zD_{t,z}F-K)^- - (F-K)^-}{z}\,\mathbf{1}_{\{z\neq 0\}}. \tag{6.24}$$

Proof The proof of (6.23) is given by [10]. To complete the lemma, it suffices to show (6.24). Since

$$(F-K)^- = (F-K)^+ - (F-K), \tag{6.25}$$

then

$$D_{t,z}(F-K)^- = D_{t,z}(F-K)^+ - D_{t,z}(F-K)$$
$$= \big[\mathbf{1}_{\{F>K\}}D_{t,0}F - D_{t,0}F\big]\mathbf{1}_{\{z=0\}} + \bigg(\frac{(F+zD_{t,z}F-K)^+ - (F-K)^+}{z} - D_{t,z}F\bigg)\mathbf{1}_{\{z\neq 0\}}$$
$$= -\mathbf{1}_{\{F\le K\}}D_{t,0}F\,\mathbf{1}_{\{z=0\}} + \frac{(F+zD_{t,z}F-K)^- - (F-K)^-}{z}\,\mathbf{1}_{\{z\neq 0\}}. \tag{6.26}$$

Corollary 6.3.2 Let $F_1, F_2\in\mathbb{D}^{1,2}$; then

(i)

$$D_{t,z}(F_2-F_1)^+ = \mathbf{1}_{\{F_2>F_1\}}D_{t,0}(F_2-F_1)\,\mathbf{1}_{\{z=0\}} + \frac{\big((F_2-F_1)+zD_{t,z}(F_2-F_1)\big)^+ - (F_2-F_1)^+}{z}\,\mathbf{1}_{\{z\neq 0\}}, \tag{6.27}$$

(ii)

$$D_{t,z}(F_2-F_1)^- = -\mathbf{1}_{\{F_2\le F_1\}}D_{t,0}(F_2-F_1)\,\mathbf{1}_{\{z=0\}} + \frac{\big((F_2-F_1)+zD_{t,z}(F_2-F_1)\big)^- - (F_2-F_1)^-}{z}\,\mathbf{1}_{\{z\neq 0\}}. \tag{6.28}$$

Proof From the previous lemma, we take F = F2 − F1 and K = 0.

Theorem 6.3.3 Malliavin Derivatives of the Maximum and Minimum

Let $F_k\in\mathbb{D}^{1,2}$, $1\le k\le n$, for all $n\in\mathbb{N}$; then

(i) $M_n \equiv \max_{1\le k\le n}F_k \in \mathbb{D}^{1,2}$ and
$$D_{t,z}M_n = \sum_{k=1}^n \mathbf{1}_{A_{n,k}}D_{t,0}F_k\,\mathbf{1}_{\{z=0\}} + \frac{\max_{1\le k\le n}(F_k+zD_{t,z}F_k) - M_n}{z}\,\mathbf{1}_{\{z\neq 0\}}, \tag{6.29}$$

(ii) $m_n \equiv \min_{1\le k\le n}F_k \in \mathbb{D}^{1,2}$ and
$$D_{t,z}m_n = \sum_{k=1}^n \mathbf{1}_{a_{n,k}}D_{t,0}F_k\,\mathbf{1}_{\{z=0\}} + \frac{\min_{1\le k\le n}(F_k+zD_{t,z}F_k) - m_n}{z}\,\mathbf{1}_{\{z\neq 0\}} \tag{6.30}$$

where

$$A_{n,1} = \{M_n = F_1\}, \quad A_{n,k} = \{M_n\neq F_1,\dots,M_n\neq F_{k-1},\,M_n = F_k\},\ 2\le k\le n,$$
$$a_{n,1} = \{m_n = F_1\}, \quad a_{n,k} = \{m_n\neq F_1,\dots,m_n\neq F_{k-1},\,m_n = F_k\},\ 2\le k\le n. \tag{6.31}$$

Proof The proof of (6.29) is given by [10]. To complete the theorem, it suffices to show (6.30).

Note that $m_1 = F_1$, $m_2 = F_2\wedge F_1 = -(F_2-F_1)^- + F_1 \in\mathbb{D}^{1,2}$ and, by induction, it follows that $m_n = F_n\wedge m_{n-1} = -(F_n-m_{n-1})^- + m_{n-1} \in\mathbb{D}^{1,2}$. For $z = 0$,

$$D_{t,0}m_n = -D_{t,0}(F_n-m_{n-1})^- + D_{t,0}m_{n-1}$$
$$= \mathbf{1}_{\{F_n\le m_{n-1}\}}D_{t,0}(F_n-m_{n-1}) + D_{t,0}m_{n-1}$$
$$= \mathbf{1}_{\{F_n\le m_{n-1}\}}D_{t,0}F_n + \mathbf{1}_{\{F_n>m_{n-1}\}}D_{t,0}m_{n-1}$$
$$= \mathbf{1}_{a_{n,n}}D_{t,0}F_n + \mathbf{1}_{\{m_n=m_{n-1}\}}D_{t,0}m_{n-1}. \tag{6.32}$$

Recursively, we obtain

$$D_{t,0}m_n = \sum_{k=1}^n \mathbf{1}_{a_{n,k}}D_{t,0}F_k. \tag{6.33}$$
For $z\neq 0$,

$$D_{t,z}m_n = -D_{t,z}(F_n-m_{n-1})^- + D_{t,z}m_{n-1}$$
$$= -z^{-1}\Big[\big((F_n-m_{n-1}) + zD_{t,z}(F_n-m_{n-1})\big)^- - (F_n-m_{n-1})^-\Big] + D_{t,z}m_{n-1}$$
$$= -z^{-1}\Big[\big((F_n+zD_{t,z}F_n) - (m_{n-1}+zD_{t,z}m_{n-1})\big)^- - \big((F_n-m_{n-1})^- - z\,D_{t,z}m_{n-1}\big)\Big]$$
$$= z^{-1}\big[(F_n+zD_{t,z}F_n)\wedge(m_{n-1}+zD_{t,z}m_{n-1}) - m_n\big]. \tag{6.34}$$

Now since

$$D_{t,z}m_n = z^{-1}\big[(m_n + zD_{t,z}m_n) - m_n\big], \tag{6.35}$$

so therefore,

$$m_n + zD_{t,z}m_n = (F_n+zD_{t,z}F_n)\wedge(m_{n-1}+zD_{t,z}m_{n-1}). \tag{6.36}$$

Iterating the recursion (6.36), we obtain
$$m_n + zD_{t,z}m_n = \min_{1\le k\le n}(F_k+zD_{t,z}F_k), \tag{6.37}$$

so therefore,

$$D_{t,z}m_n = \frac{\min_{1\le k\le n}(F_k+zD_{t,z}F_k) - m_n}{z}. \tag{6.38}$$

Consider the risky-asset process S(t) = S0 exp(rt + L(t)). Denote the following:

$$M_n^S = \sup_{1\le k\le n,\,t_k\le t} S(t_k), \qquad M_n^L = \sup_{1\le k\le n,\,t_k\le t} L(t_k),$$
$$m_n^S = \inf_{1\le k\le n,\,t_k\le t} S(t_k), \qquad m_n^L = \inf_{1\le k\le n,\,t_k\le t} L(t_k). \tag{6.39}$$

Since $L(t)\in\mathbb{D}^{1,2}$ for all $t\in[0,T]$, we have the following corollary.

Corollary 6.3.4 Malliavin Derivative of Supremum and Infimum (Discrete Monitoring)

(i)

$$D_{t,z}M_n^L = \sum_{k=1}^n \mathbf{1}_{A_{n,k}}D_{t,0}L(t_k)\,\mathbf{1}_{\{z=0\}} + \frac{\sup_{1\le k\le n,\,t_k\le t}\big(L(t_k)+zD_{t,z}L(t_k)\big) - M_n^L}{z}\,\mathbf{1}_{\{z\in\mathbb{R}_0\}}, \tag{6.40}$$

(ii)

$$D_{t,z}m_n^L = \sum_{k=1}^n \mathbf{1}_{a_{n,k}}D_{t,0}L(t_k)\,\mathbf{1}_{\{z=0\}} + \frac{\inf_{1\le k\le n,\,t_k\le t}\big(L(t_k)+zD_{t,z}L(t_k)\big) - m_n^L}{z}\,\mathbf{1}_{\{z\in\mathbb{R}_0\}} \tag{6.41}$$
where

$$A_{n,1} = \{M_n^L = L(t_1)\}, \quad A_{n,k} = \{M_n^L\neq L(t_1),\dots,M_n^L\neq L(t_{k-1}),\,M_n^L = L(t_k)\},\ 2\le k\le n,$$
$$a_{n,1} = \{m_n^L = L(t_1)\}, \quad a_{n,k} = \{m_n^L\neq L(t_1),\dots,m_n^L\neq L(t_{k-1}),\,m_n^L = L(t_k)\},\ 2\le k\le n. \tag{6.42}$$

Corollary 6.3.5 Malliavin Derivative of Supremum and Infimum (Discrete Monitoring)

(i)

$$D_{t,z}M_n^S = M_n^S\sum_{k=1}^n \mathbf{1}_{A_{n,k}}D_{t,0}L(t_k)\,\mathbf{1}_{\{z=0\}} + \frac{\sup_{1\le k\le n,\,t_k\le t}\big(S(t_k)\,e^{zD_{t,z}L(t_k)}\big) - M_n^S}{z}\,\mathbf{1}_{\{z\in\mathbb{R}_0\}},$$

(ii)

$$D_{t,z}m_n^S = m_n^S\sum_{k=1}^n \mathbf{1}_{a_{n,k}}D_{t,0}L(t_k)\,\mathbf{1}_{\{z=0\}} + \frac{\inf_{1\le k\le n,\,t_k\le t}\big(S(t_k)\,e^{zD_{t,z}L(t_k)}\big) - m_n^S}{z}\,\mathbf{1}_{\{z\in\mathbb{R}_0\}} \tag{6.43}$$
where

$$A_{n,1} = \{M_n^S = S(t_1)\}, \quad A_{n,k} = \{M_n^S\neq S(t_1),\dots,M_n^S\neq S(t_{k-1}),\,M_n^S = S(t_k)\},\ 2\le k\le n,$$
$$a_{n,1} = \{m_n^S = S(t_1)\}, \quad a_{n,k} = \{m_n^S\neq S(t_1),\dots,m_n^S\neq S(t_{k-1}),\,m_n^S = S(t_k)\},\ 2\le k\le n. \tag{6.44}$$

Proof Since the exponential is strictly increasing, the events $A_{n,k}$ and $a_{n,k}$ defined through $L$ and through $S$ coincide for $1\le k\le n$ and all $n\in\mathbb{N}$.

(i) From the chain rule

$$D_{t,z}M_n^S = D_{t,0}M_n^S\,\mathbf{1}_{\{z=0\}} + D_{t,z}M_n^S\,\mathbf{1}_{\{z\in\mathbb{R}_0\}} = D_{t,0}\big(S_0e^{rt}e^{M_n^L}\big)\mathbf{1}_{\{z=0\}} + \frac{S_0e^{rt}e^{M_n^L+zD_{t,z}M_n^L} - S_0e^{rt}e^{M_n^L}}{z}\,\mathbf{1}_{\{z\in\mathbb{R}_0\}}. \tag{6.45}$$

From the preceding corollary and the chain rule, we have the following: For z = 0,

$$D_{t,0}\big(S_0e^{rt}e^{M_n^L}\big) = S_0e^{rt}e^{M_n^L}\,D_{t,0}M_n^L = M_n^S\sum_{k=1}^n\mathbf{1}_{A_{n,k}}D_{t,0}L(t_k), \tag{6.46}$$
and for $z\neq 0$,

$$S_0e^{rt}\exp\big(M_n^L + zD_{t,z}M_n^L\big) = S_0e^{rt}\exp\Big(M_n^L + \sup_{1\le k\le n,\,t_k\le t}\big(L(t_k)+zD_{t,z}L(t_k)\big) - M_n^L\Big) = \sup_{1\le k\le n,\,t_k\le t}S_0e^{rt}\exp\big(L(t_k)+zD_{t,z}L(t_k)\big) = \sup_{1\le k\le n,\,t_k\le t}S(t_k)\exp\big(zD_{t,z}L(t_k)\big). \tag{6.47}$$

Hence, the last two equations give us the desired expression for $D_{t,z}M_n^S$.

(ii) From the chain rule,

$$D_{t,z}m_n^S = D_{t,0}m_n^S\,\mathbf{1}_{\{z=0\}} + D_{t,z}m_n^S\,\mathbf{1}_{\{z\in\mathbb{R}_0\}} = D_{t,0}\big(S_0e^{rt}e^{m_n^L}\big)\mathbf{1}_{\{z=0\}} + \frac{S_0e^{rt}e^{m_n^L+zD_{t,z}m_n^L} - S_0e^{rt}e^{m_n^L}}{z}\,\mathbf{1}_{\{z\in\mathbb{R}_0\}}. \tag{6.48}$$

From the preceding corollary and the chain rule, we have the following: For z = 0,

$$D_{t,0}\big(S_0e^{rt}e^{m_n^L}\big) = S_0e^{rt}e^{m_n^L}\,D_{t,0}m_n^L = m_n^S\sum_{k=1}^n\mathbf{1}_{a_{n,k}}D_{t,0}L(t_k), \tag{6.49}$$

and for z 6= 0,

$$S_0e^{rt}\exp\big(m_n^L + zD_{t,z}m_n^L\big) = S_0e^{rt}\exp\Big(m_n^L + \inf_{1\le k\le n,\,t_k\le t}\big(L(t_k)+zD_{t,z}L(t_k)\big) - m_n^L\Big) = \inf_{1\le k\le n,\,t_k\le t}S_0e^{rt}\exp\big(L(t_k)+zD_{t,z}L(t_k)\big) = \inf_{1\le k\le n,\,t_k\le t}S(t_k)\exp\big(zD_{t,z}L(t_k)\big). \tag{6.50}$$

Hence, the last two equations give us the desired expression for $D_{t,z}m_n^S$.
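On a simulated discretely monitored path, Corollary 6.3.5 can be evaluated as in the following sketch. The function name and inputs are this sketch's conventions; it uses $D_{t,z}L(t_k) = \mathbf{1}_{\{t\le t_k\}}$, cf. (6.71) below.

```python
import numpy as np

def discrete_sup_inf_derivative(S_path, t_idx, z):
    """Sketch of Corollary 6.3.5 on one path. S_path: S(t_1),...,S(t_n);
    t_idx: smallest k with t <= t_k. Returns (D_{t,z}M_n^S, D_{t,z}m_n^S)."""
    ind = (np.arange(len(S_path)) >= t_idx).astype(float)   # 1_{t <= t_k}
    Mn, mn = S_path.max(), S_path.min()
    if z == 0.0:
        kM, km = int(np.argmax(S_path)), int(np.argmin(S_path))
        return Mn * ind[kM], mn * ind[km]   # first attaining index = A_{n,k}, a_{n,k}
    shifted = S_path * np.exp(z * ind)      # S(t_k) * exp(z * D_{t,z}L(t_k))
    return (shifted.max() - Mn) / z, (shifted.min() - mn) / z
```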

We let

$$\tau^M = \inf\{t\in[0,T]: L(t)\vee L(t^-) = M^L\}, \qquad \tau^m = \inf\{t\in[0,T]: L(t)\wedge L(t^-) = m^L\}. \tag{6.51}$$

Note that

$$M^L = \sup_{t\in[0,T]}L(t) = L(\tau^M)\vee L(\tau^{M-}), \qquad m^L = \inf_{t\in[0,T]}L(t) = L(\tau^m)\wedge L(\tau^{m-}). \tag{6.52}$$

Assumption There exists a countable dense subset $U \equiv \{u_k : k\in\mathbb{N}\}$ of $[0,T]$ with $0, T\in U$ such that
$$M^L = \sup_{t\in U}L(t), \qquad m^L = \inf_{t\in U}L(t), \tag{6.53}$$

Remark 6.3.6 If $L$ is a Lévy process that is not a compound Poisson process, the above assumption holds [73].

Let $U \equiv \{u_k : k\in\mathbb{N}\}$ be a countable dense subset with $0, T\in U$. Then, we have the following identities.

Lemma 6.3.7 [10] Let $Y = \{Y(t): t\in[0,T]\}$ be cadlag and let
$$M_n^Y = \max_{1\le k\le n} Y(u_k), \qquad M^Y = \sup_{t\in[0,T]}Y(t). \tag{6.54}$$
Then $M_n^Y \xrightarrow{a.s.} M^Y$ as $n\to\infty$.

Corollary 6.3.8 Let $Y = \{Y(t): t\in[0,T]\}$ be cadlag and let
$$m_n^Y = \min_{1\le k\le n} Y(u_k), \qquad m^Y = \inf_{t\in[0,T]}Y(t). \tag{6.55}$$
Then $m_n^Y \xrightarrow{a.s.} m^Y$ as $n\to\infty$.

Proof Applying the previous lemma, we obtain

$$m_n^Y = -\max_{1\le k\le n}\big(-Y(u_k)\big) \xrightarrow{a.s.} -\sup_{t\in[0,T]}\big(-Y(t)\big) = \inf_{t\in[0,T]}Y(t) = m^Y. \tag{6.56}$$

Definition 6.3.1 [72] Uniform Integrability
A collection of random variables $\{A_i : i\in I\}$, where $I$ is an index set, is said to be integrable if $\sup_{i\in I}E[|A_i|] < \infty$. Further, $\{A_i : i\in I\}$ is said to be uniformly integrable (u.i.) if
$$\sup_{i\in I}E\big[|A_i|\cdot\mathbf{1}\{|A_i|\ge\lambda\}\big] \to 0, \qquad \lambda\to\infty. \tag{6.57}$$

Lemma 6.3.9 [72] Sufficiency condition for Uniform Integrability

If $|A_i|\le B$ for all $i\in I$, for some $B\in L^1(P)$, then the $A_i$'s are uniformly integrable.

Claim As $n\to\infty$,
(i) $M_n^L \xrightarrow{L^2(Q)} M^L$,
(ii) $m_n^L \xrightarrow{L^2(Q)} m^L$.

Remark 6.3.10 Arai and Suzuki [10] asserted the convergence $M_n^L \xrightarrow{L^2(Q)} M^L$ directly after establishing the almost sure convergence $M_n^L \xrightarrow{a.s.} M^L$. In addition to almost sure convergence, however, uniform integrability is also required to justify the $L^2(Q)$ convergence.

Proof (i) From the previous lemma, $M_n^L \xrightarrow{a.s.} M^L$; then $(M_n^L)^2 \xrightarrow{a.s.} (M^L)^2$. It suffices to show that $(M_n^L)^2$ is u.i. Note that
$$(M_n^L)^2 \le \max\big((M^L)^2, (m^L)^2\big) \le (M^L)^2 + (m^L)^2. \tag{6.58}$$
Now since
$$(M^L)^2 = \Big(\sup_{t\in[0,T]}L(t)\Big)^2 \le \sup_{t\in[0,T]}|L(t)|^2, \qquad (m^L)^2 = \Big(\inf_{t\in[0,T]}L(t)\Big)^2 \le \sup_{t\in[0,T]}|L(t)|^2, \tag{6.59, 6.60}$$

So therefore,

$$(M_n^L)^2 \le 2\sup_{t\in[0,T]}|L(t)|^2. \tag{6.61}$$
From the Lévy process $L(t)$ we have the following upper bound:
$$E^Q\Big[\sup_{t\in[0,T]}|L(t)|^2\Big] \le 3\bigg((\mu T)^2 + E^Q\Big[\sup_{t\in[0,T]}\big|\sigma W^Q(t)\big|^2\Big] + E^Q\bigg[\sup_{t\in[0,T]}\Big|\int_{[0,t]\times\mathbb{R}_0} z\,\tilde{N}^Q(ds,dz)\Big|^2\bigg]\bigg). \tag{6.62}$$

From Burkholder's inequality [7], there exists $C_p > 0$ such that
$$E^Q\Big[\sup_{t\in[0,T]}\big|\sigma W^Q(t)\big|^2\Big] \le C_p[\sigma W]_T = C_p\sigma^2 T, \tag{6.63}$$

where $[\cdot]$ is the quadratic variation process. Also, from Kunita's maximal inequality for a pure jump process [7], there exists $D_p > 0$ such that
$$E^Q\bigg[\sup_{t\in[0,T]}\Big|\int_{[0,t]\times\mathbb{R}_0} z\,\tilde{N}^Q(ds,dz)\Big|^2\bigg] \le 2D_p\int_{[0,T]\times\mathbb{R}_0} z^2\,ds\,\nu(dz) = 2D_pT\int_{\mathbb{R}_0} z^2\,\nu(dz). \tag{6.64}$$
Hence, we obtain
$$E^Q\Big[\sup_{t\in[0,T]}|L(t)|^2\Big] < \infty. \tag{6.65}$$

By letting,

$$A_n = (M_n^L)^2, \qquad B = 2\sup_{t\in[0,T]}|L(t)|^2, \qquad n\in\mathbb{N}, \tag{6.66}$$
the previous lemma implies that $(M_n^L)^2$ is u.i.

(ii) Similarly, from the inequality

$$(m_n^L)^2 \le \max\big((M^L)^2, (m^L)^2\big) \le 2\sup_{t\in[0,T]}|L(t)|^2, \tag{6.67}$$

by letting,

$$A_n = (m_n^L)^2, \qquad B = 2\sup_{t\in[0,T]}|L(t)|^2, \qquad n\in\mathbb{N}, \tag{6.68}$$
the previous lemma implies that $(m_n^L)^2$ is u.i.

Theorem 6.3.11 Malliavin Derivatives of the Maximum and Minimum (Continuous Monitoring)

(i) $M^L\in\mathbb{D}^{1,2}$ and
$$D_{t,z}M^L = \mathbf{1}_{\{t\le\tau^M\}}\,\mathbf{1}_{\{z=0\}} + \frac{\sup_{s\in[0,T]}\big(L(s)+z\mathbf{1}_{\{t\le s\}}\big) - M^L}{z}\,\mathbf{1}_{\{z\neq 0\}}, \tag{6.69}$$

(ii) $m^L\in\mathbb{D}^{1,2}$ and
$$D_{t,z}m^L = \mathbf{1}_{\{t\le\tau^m\}}\,\mathbf{1}_{\{z=0\}} + \frac{\inf_{s\in[0,T]}\big(L(s)+z\mathbf{1}_{\{t\le s\}}\big) - m^L}{z}\,\mathbf{1}_{\{z\neq 0\}}. \tag{6.70}$$

Proof The proof of (6.69) is given by [10]. To complete the theorem, it suffices to show (6.70). Considering the exp-Lévy process (6.2), we have
$$D_{t,z}L(s) = \mathbf{1}_{\{t\le s\}}. \tag{6.71}$$

Since we have already shown that $m_n^L \to m^L$ in $L^2(Q)$, it suffices to show that $D_{t,z}m_n^L$ converges in $L^2(Q\times\mu)$. First, consider the case $z\neq 0$. Since
$$D_{t,z}m_n^L = \frac{\min_{1\le k\le n}\big(L(u_k)+zD_{t,z}L(u_k)\big) - m_n^L}{z} \tag{6.72}$$
and $L$ is cadlag, then
$$D_{t,z}m_n^L \xrightarrow{a.s.} \frac{\inf_{s\in[0,T]}\big(L(s)+zD_{t,z}L(s)\big) - m^L}{z}. \tag{6.73}$$

From the identity

$$\Big|\inf_{i\in I}(a_i+b_i) - \inf_{i\in I}a_i\Big| \le \sup_{i\in I}|b_i|, \tag{6.74}$$

$$\bigg|D_{t,z}m_n^L - \frac{\inf_{u\in[0,T]}\big(L(u)+zD_{t,z}L(u)\big) - m^L}{z}\bigg|^2 \le 2\bigg[\big|D_{t,z}m_n^L\big|^2 + \bigg|\frac{\inf_{u\in[0,T]}\big(L(u)+zD_{t,z}L(u)\big) - m^L}{z}\bigg|^2\bigg]$$
$$\le \frac{2}{|z|^2}\bigg[\Big|\min_{1\le k\le n}\big(L(u_k)+zD_{t,z}L(u_k)\big) - m_n^L\Big|^2 + \Big|\inf_{u\in[0,T]}\big(L(u)+zD_{t,z}L(u)\big) - m^L\Big|^2\bigg]$$
$$\le 2\bigg[\max_{1\le k\le n}\big|D_{t,z}L(u_k)\big|^2 + \sup_{u\in[0,T]}\big|D_{t,z}L(u)\big|^2\bigg] \le 4\sup_{u\in[0,T]}\big|D_{t,z}L(u)\big|^2 = 4. \tag{6.75}$$

The convergence in L2(Q × µ) follows from the dominated convergence theorem. Next, we consider the case z = 0. Denote the following:

$$a_{n,1}^L = \{m_n^L = L(u_1)\}, \qquad a_{n,k}^L = \{m_n^L\neq L(u_1),\dots,m_n^L\neq L(u_{k-1}),\,m_n^L = L(u_k)\},\ 2\le k\le n, \tag{6.76}$$

$$\tau_n = \sum_{k=1}^n u_k\,\mathbf{1}_{a_{n,k}^L}. \tag{6.77}$$
Then, from the above assumption, $\tau_n = u_k$ whenever $m_n^L = L(u_k)$. Moreover, since $U$ contains the points at which $L$ is minimized, then
$$D_{t,0}m_n^L = \sum_{k=1}^n\mathbf{1}_{a_{n,k}^L}D_{t,0}L(u_k) = \sum_{k=1}^n\mathbf{1}_{a_{n,k}^L}\mathbf{1}_{\{t\le u_k\}} \xrightarrow{a.s.} \mathbf{1}_{\{t\le\tau^m\}}. \tag{6.78}$$

6.4 Some Important Identities

Lemma 6.4.1 [60] Let $\Sigma = \{\Sigma_{ij}\}\in\mathbb{R}^{m\times m}$ be a positive definite symmetric random matrix such that each entry $\Sigma_{ij}$ has moments of all orders, and such that for any $p\ge 2$ there exists $\varepsilon_0(p)$ such that for all $\varepsilon\le\varepsilon_0(p)$,
$$\sup_{\|v\|_2=1}P\big(v^T\Sigma v\le\varepsilon\big)\le\varepsilon^p. \tag{6.79}$$
Then $\det(\Sigma)^{-1}\in L^p$ for all $p$.

Claim $\big(\int_0^T\Psi(Y(s))\,ds\big)^{-1}\in L^p$ for all $p\ge 1$.

Note that

$$\int_0^T\Psi(Y(t))\,dt = \int_0^T\mathbf{1}_{\{Y(t)\in[0,a/2]\}}\Psi(Y(t))\,dt + \int_0^T\mathbf{1}_{\{Y(t)\in(a/2,a)\}}\Psi(Y(t))\,dt + \int_0^T\mathbf{1}_{\{Y(t)\in[a,\infty)\}}\Psi(Y(t))\,dt \ge T\wedge Y^{(-1)}(a/2), \tag{6.80}$$
since $\Psi(Y(t)) = 1$ whenever $Y(t)\le a/2$, i.e., for all $t < Y^{(-1)}(a/2)$, where $Y^{(-1)}$ is the generalized inverse
$$Y^{(-1)}(u) = \inf\{x\in\mathbb{R} : Y(x)\ge u\}. \tag{6.81}$$

Then, for $\varepsilon < \varepsilon_0$ small enough,
$$P\big(T\wedge Y^{(-1)}(a/2) < \varepsilon\big) \le P\big(Y^{(-1)}(a/2) < \varepsilon\big). \tag{6.82}$$

On the event $\{Y^{(-1)}(a/2) < \varepsilon\}$ we have
$$a/2 \le Y\big(Y^{(-1)}(a/2)\big) \le Y(\varepsilon). \tag{6.83}$$

Hence, from (6.83) together with Markov's inequality and (H), we obtain
$$P\big(Y^{(-1)}(a/2)<\varepsilon\big) \le P\big(Y(\varepsilon)\ge a/2\big) \le \frac{E[Y^q(\varepsilon)]}{(a/2)^q} \le C_q\,\frac{\varepsilon^{\alpha(q)}}{(a/2)^q}. \tag{6.84}$$
Since $\alpha(q)\to\infty$, we can choose $q$ so that
$$P\bigg(\int_0^T\Psi(Y(s))\,ds<\varepsilon\bigg) = O(\varepsilon^p)\to 0 \tag{6.85}$$
as $\varepsilon\to 0$. Then, from the preceding lemma, $\big(\int_0^T\Psi(Y(s))\,ds\big)^{-1}\in L^p$ for all $p\ge 1$.

6.5 Delta

Theorem 6.5.1 The Delta of the contingent claim $\Phi = \Phi(M^S, m^S, S)$ of an exp-Lévy process, provided that Assumptions (S) and (NS) hold and the dominating process $Y$ satisfies Assumptions (H) and R(1), is given by:
$$\Delta = \frac{\partial}{\partial S_0}E^Q\big[e^{-rT}\Phi\big] = e^{-rT}E^Q[\Pi_\Delta\Phi] \tag{6.86}$$
where
$$\Pi_\Delta = \frac{1}{S_0}\,\delta^W\bigg(\frac{\Psi(Y(\cdot))}{\int_0^T\Psi(Y(t))\,dt}\,\sigma^{-1}\bigg), \tag{6.87}$$
and $\delta^W$ is the Skorohod integral with respect to the Wiener process $W^Q$.

Proof From a density argument, it suffices to show that the identity holds for a smooth function $\Phi\in C_b^\infty(\mathbb{R})$ [31]. From the chain rule, we have
$$\Delta = \frac{\partial}{\partial S_0}E\big[e^{-rT}\Phi\big] = E\bigg[e^{-rT}\Big(\Phi_1'\,\frac{\partial M^S}{\partial S_0} + \Phi_2'\,\frac{\partial m^S}{\partial S_0} + \Phi_3'\,\frac{\partial S(T)}{\partial S_0}\Big)\bigg] \tag{6.88}$$
where $\Phi_i'$ is the partial derivative of $\Phi$ with respect to the $i$-th argument. For an exponential Lévy process, we have the following:
$$\frac{\partial M^S}{\partial S_0} = \frac{M^S}{S_0}, \qquad \frac{\partial m^S}{\partial S_0} = \frac{m^S}{S_0}, \qquad \frac{\partial S(T)}{\partial S_0} = \frac{S(T)}{S_0}. \tag{6.89}$$

Since

$$D_{t,z}\Phi = \big(\Phi_1'D_{t,0}M^S + \Phi_2'D_{t,0}m^S + \Phi_3'D_{t,0}S(T)\big)\mathbf{1}_{\{z=0\}} + \frac{\Phi\big(M^S+zD_{t,z}M^S,\,m^S+zD_{t,z}m^S,\,S(T)+zD_{t,z}S(T)\big) - \Phi\big(M^S,m^S,S(T)\big)}{z}\,\mathbf{1}_{\{z\neq 0\}} \tag{6.90}$$

and from the chain rule

$$D_{t,z}M^S = M^S\,\mathbf{1}_{\{t\le\tau^M\}}\,\mathbf{1}_{\{z=0\}} + \frac{\sup_{u\in[0,T]}S(u)\exp\big(z\mathbf{1}_{\{t\le u\}}\big) - M^S}{z}\,\mathbf{1}_{\{z\neq 0\}}, \tag{6.91}$$
$$D_{t,z}m^S = m^S\,\mathbf{1}_{\{t\le\tau^m\}}\,\mathbf{1}_{\{z=0\}} + \frac{\inf_{u\in[0,T]}S(u)\exp\big(z\mathbf{1}_{\{t\le u\}}\big) - m^S}{z}\,\mathbf{1}_{\{z\neq 0\}}, \tag{6.92}$$
$$D_{t,z}S(T) = S(T)\,\mathbf{1}_{\{z=0\}} + S(T)\,\frac{\exp\big(z\mathbf{1}_{\{t\le T\}}\big) - 1}{z}\,\mathbf{1}_{\{z\neq 0\}}. \tag{6.93}$$

Hence,

$$D_{t,0}\Phi = \frac{1}{\sigma}D_t^W\Phi = \Phi_1'M^S\,\mathbf{1}_{\{t\le\tau^M\}} + \Phi_2'm^S\,\mathbf{1}_{\{t\le\tau^m\}} + \Phi_3'S(T). \tag{6.94}$$

By localization, we have the following:

$$\Phi_1'\,\mathbf{1}_{\{t\le\tau^M\}}\Psi(Y(t)) = \Phi_1'\,\Psi(Y(t)), \tag{6.95}$$
$$\Phi_2'\,\mathbf{1}_{\{t\le\tau^m\}}\Psi(Y(t)) = \Phi_2'\,\Psi(Y(t)). \tag{6.96}$$

We extend the reasoning for the above localization from the Wiener case [38] to the Lévy case. For (6.95), consider the following cases. If $M^S < S_0\exp(a)$, then from Assumption (S), $\Phi$ doesn't depend on $M^S$, so $\Phi_1' = 0$ and both sides of (6.95) become $0 = 0$. Conversely, if $M^S \ge S_0\exp(a)$, that is,
$$\sup_{s\in I\cap[0,T]}S(s)\ge S_0\exp(a), \tag{6.97}$$

then suppose there exists $t\in[0,T]$ such that $\Psi(Y(t))\neq 0$; then $Y(t) < a$. The dominating property (6.11) then implies
$$\sup_{s\in I\cap[0,t]}S(s) < S_0\exp(a). \tag{6.98}$$

Combining the above inequalities gives us,

$$\sup_{s\in I\cap[0,t]}S(s) < S_0\exp(a) \le \sup_{s\in I\cap[0,T]}S(s), \tag{6.99}$$
and thus $t\le\tau_M$. On the other hand, for (6.96), consider the following cases. If $m^S > S_0\exp(-a)$, then from Assumption (S), $\Phi$ doesn't depend on $m^S$, so $\Phi_2' = 0$ and both sides of (6.96) become $0 = 0$. Conversely, if $m^S \le S_0\exp(-a)$, that is,

$$\inf_{s\in I\cap[0,T]}S(s)\le S_0\exp(-a), \tag{6.100}$$
then again, suppose there exists $t\in[0,T]$ such that $\Psi(Y(t))\neq 0$; then $Y(t) < a$. The dominating property (6.11) then implies
$$S_0\exp(-a) < \inf_{s\in I\cap[0,t]}S(s). \tag{6.101}$$

Combining the above inequalities gives us,

$$\inf_{s\in I\cap[0,T]}S(s) \le S_0\exp(-a) < \inf_{s\in I\cap[0,t]}S(s), \tag{6.102}$$
and thus $t\le\tau_m$. Plugging (6.95) and (6.96) into (6.94), multiplying by $\Psi(Y(t))$, and finally integrating both sides yields
$$\int_0^T D_{t,0}\Phi\cdot\Psi(Y(t))\,dt = \big(\Phi_1'M^S + \Phi_2'm^S + \Phi_3'S(T)\big)\int_0^T\Psi(Y(t))\,dt. \tag{6.103}$$

From the isometry $L^2(\Omega_W\times\Omega_J)\simeq L^2(\Omega_W; L^2(\Omega_J))$ [77], [78], the divergence (duality) relation [60], as well as the preceding lemma, we have:
$$\Delta = E^Q\bigg[\frac{e^{-rT}}{S_0}\int_0^T D_{t,0}\Phi\,\frac{\Psi(Y(t))}{\int_0^T\Psi(Y(u))\,du}\,dt\bigg] = E^Q\bigg[\frac{e^{-rT}}{S_0}\int_0^T D_t^W\Phi\,\frac{\Psi(Y(t))\,\sigma^{-1}}{\int_0^T\Psi(Y(u))\,du}\,dt\bigg] = E^Q\bigg[\frac{e^{-rT}}{S_0}\,\delta^W\bigg(\frac{\Psi(Y(\cdot))\,\sigma^{-1}}{\int_0^T\Psi(Y(u))\,du}\bigg)\Phi\bigg]. \tag{6.104}$$

To evaluate the Skorohod integral, we need to recall the following proposition [60].

Proposition 6.5.1 [60] Let $F\in\mathbb{D}^{1,2}$ and $u\in\mathrm{Dom}(\delta^W)$; then
$$\delta^W(Fu) = F\,\delta^W(u) - \langle D^WF, u\rangle_{L^2[0,T]}. \tag{6.105}$$

In addition, if u is adapted, then

$$\delta^W(Fu) = F\int_0^T u(t)\,dW(t) - \int_0^T D_t^WF\cdot u(t)\,dt. \tag{6.106}$$

We let
$$F = \bigg(\int_0^T\Psi(Y(s))\,ds\bigg)^{-1}, \qquad u(t) = \Psi(Y(t))\,\sigma^{-1}. \tag{6.107}$$
From condition R(q), and since $F\in L^p$ by the above claim, $F\in\mathbb{D}^{1,2}$. Moreover, since $Y$ is adapted, $u\in\mathrm{Dom}(\delta^W)$ and is adapted. From the chain rule,
$$D_t^WF = \frac{-1}{\big(\int_0^T\Psi(Y(s))\,ds\big)^2}\int_t^T\Psi'(Y(s))\,D_t^WY(s)\,ds. \tag{6.108}$$

Then

$$\int_0^T D_t^WF\cdot u(t)\,dt = \frac{-1}{\big(\int_0^T\Psi(Y(s))\,ds\big)^2}\int_0^T\int_t^T\Psi'(Y(s))\,D_t^WY(s)\,ds\;\Psi(Y(t))\,\sigma^{-1}\,dt = \frac{-1}{\big(\int_0^T\Psi(Y(s))\,ds\big)^2}\int_0^T\Psi'(Y(s))\,\sigma^{-1}\int_0^s\Psi(Y(t))\,D_t^WY(s)\,dt\,ds, \tag{6.109}$$
where the last expression follows from Fubini's theorem. Hence, performing integration by parts via (6.106), we obtain
$$S_0\,\Pi_\Delta = \delta^W\bigg(\frac{\Psi(Y(\cdot))\,\sigma^{-1}}{\int_0^T\Psi(Y(t))\,dt}\bigg) = \frac{\int_0^T\Psi(Y(t))\,\sigma^{-1}\,dW(t)}{\int_0^T\Psi(Y(t))\,dt} + \frac{\int_0^T\Psi'(Y(s))\,\sigma^{-1}\int_0^s\Psi(Y(t))\,D_t^WY(s)\,dt\,ds}{\big(\int_0^T\Psi(Y(s))\,ds\big)^2}. \tag{6.110}$$
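A discretized Monte Carlo sketch of the weight (6.110) on one simulated path follows; the time grid, the precomputed arrays, and the function name are assumptions of this sketch's discretization, not objects defined in the text. The Delta is then estimated as $e^{-rT}$ times the sample mean of $\Pi_\Delta\Phi$ over paths.

```python
import numpy as np

def delta_weight(dW, psi, dpsi, DWY, dt, sigma, S0):
    """Discretization of (6.110) on one path. psi[i] = Psi(Y(t_i)),
    dpsi[i] = Psi'(Y(t_i)), DWY[i, j] approximates D^W_{t_i} Y(t_j),
    dW[i] = Brownian increments on the grid of mesh dt."""
    denom = np.sum(psi) * dt                               # int_0^T Psi(Y(t)) dt
    term1 = np.sum(psi * dW) / sigma / denom               # first term of (6.110)
    inner = np.array([np.sum(psi[: j + 1] * DWY[: j + 1, j]) * dt
                      for j in range(len(psi))])           # int_0^s Psi D^W Y dt
    term2 = np.sum(dpsi / sigma * inner) * dt / denom**2   # second term
    return (term1 + term2) / S0                            # Pi_Delta
```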

6.6 Gamma

Theorem 6.6.1 The Gamma of the contingent claim $\Phi = \Phi(M^S, m^S, S)$ of an exp-Lévy process, provided that Assumptions (S) and (NS) hold and the dominating process $Y$ satisfies Assumptions (H) and R(2), is given by:
$$\Gamma = \frac{\partial^2}{\partial S_0^2}E^Q\big[e^{-rT}\Phi\big] = e^{-rT}E^Q[\Pi_\Gamma\Phi] \tag{6.111}$$
where
$$\Pi_\Gamma = \frac{1}{S_0^2}\,\delta^W\bigg(\frac{\Psi(Y(\cdot))\,\sigma^{-1}}{\int_0^T\Psi(Y(t))\,dt}\,\delta^W\bigg(\frac{\Psi(Y(\cdot))\,\sigma^{-1}}{\int_0^T\Psi(Y(t))\,dt}\bigg)\bigg) - \frac{1}{S_0^2}\,\delta^W\bigg(\frac{\Psi(Y(\cdot))\,\sigma^{-1}}{\int_0^T\Psi(Y(t))\,dt}\bigg). \tag{6.112}$$

Proof From our previous result,

$$\Gamma = \frac{\partial\Delta}{\partial S_0} = \frac{\partial}{\partial S_0}E^Q\bigg[\frac{e^{-rT}}{S_0}\,\delta^W(u)\,\Phi\bigg] \tag{6.113}$$
where
$$u = \frac{\Psi(Y(\cdot))\,\sigma^{-1}}{\int_0^T\Psi(Y(t))\,dt}. \tag{6.114}$$

By a density argument, it suffices to show the identity for $\Phi\in C_b^\infty(\mathbb{R})$ [31]. Note that
$$\Gamma = \frac{e^{-rT}}{S_0}\,E^Q\bigg[\delta^W(u)\,\frac{\partial\Phi}{\partial S_0}\bigg] - \frac{e^{-rT}}{S_0^2}\,E^Q\big[\delta^W(u)\,\Phi\big]. \tag{6.115}$$
From the derivation of the Delta,
$$\frac{\partial\Phi}{\partial S_0} = \frac{1}{S_0}\big(M^S\Phi_1' + m^S\Phi_2' + S(T)\Phi_3'\big) = \frac{1}{S_0}\int_0^T D_t^W\Phi\cdot u(t)\,dt. \tag{6.116}$$
Also, from the divergence relation,
$$E^Q\bigg[\delta^W(u)\,\frac{\partial\Phi}{\partial S_0}\bigg] = \frac{1}{S_0}\,E^Q\bigg[\int_0^T D_t^W\Phi\cdot u(t)\,\delta^W(u)\,dt\bigg] = \frac{1}{S_0}\,E^Q\big[\delta^W\big(u\,\delta^W(u)\big)\,\Phi\big]. \tag{6.117}$$
Hence,
$$\Gamma = \frac{e^{-rT}}{S_0^2}\,E^Q\Big[\big(\delta^W\big(u\,\delta^W(u)\big) - \delta^W(u)\big)\,\Phi\Big]. \tag{6.118}$$

Note that we can write

$$\Gamma = \frac{e^{-rT}}{S_0}\,E^Q\Big[\big(\delta^W(\Pi_\Delta u) - \Pi_\Delta\big)\,\Phi\Big]; \tag{6.119}$$
then its weight is given by
$$\Pi_\Gamma = \frac{1}{S_0}\big(\delta^W(\Pi_\Delta u) - \Pi_\Delta\big). \tag{6.120}$$

Since Π∆ has been computed explicitly, then it suffice to compute for the Skorohod W integral δ (Π∆u). We assume a suitable differentiability and Skorohod integrability conditions hold. Consider the expression

Π∆u   R T Ψ(Y (t))σ−1dW (t) R T Ψ0(Y (s))σ−1 · R s Ψ(Y (t))DW Y (s)dtds  0 0 0 t  −1 = 2 + 3 Ψ(Y (·))σ .  R T  R T   0 Ψ(Y (t))dt 0 Ψ(Y (s))ds (6.121)

The Skorohod integral of the first term of $\Pi_\Delta u$ is evaluated as follows. We let
$$F = \bigg(\int_0^T\Psi(Y(s))\,ds\bigg)^{-2}, \qquad u = \int_0^T\Psi(Y(t))\,\sigma^{-1}\,dW(t)\;\Psi(Y(\cdot))\,\sigma^{-1}. \tag{6.122}$$
From integration by parts,
$$\delta^W\Bigg(\frac{\int_0^T\Psi(Y(t))\,\sigma^{-1}\,dW(t)}{\big(\int_0^T\Psi(Y(t))\,dt\big)^2}\,\Psi(Y(\cdot))\,\sigma^{-1}\Bigg) = \frac{1}{\big(\int_0^T\Psi(Y(t))\,dt\big)^2}\,\delta^W\bigg(\int_0^T\Psi(Y(t))\,\sigma^{-1}\,dW(t)\;\Psi(Y(\cdot))\,\sigma^{-1}\bigg) + \frac{2}{\big(\int_0^T\Psi(Y(t))\,dt\big)^3}\int_0^T\int_t^T\Psi'(Y(s))\,D_t^WY(s)\,ds\;\Psi(Y(t))\,\sigma^{-1}\,dt. \tag{6.123}$$
We let
$$F = \int_0^T\Psi(Y(t))\,\sigma^{-1}\,dW(t), \qquad u = \Psi(Y(\cdot))\,\sigma^{-1}. \tag{6.124}$$

Again, applying integration by parts,
$$\delta^W\bigg(\int_0^T\Psi(Y(t))\,\sigma^{-1}\,dW(t)\;\Psi(Y(\cdot))\,\sigma^{-1}\bigg) = \bigg(\int_0^T\Psi(Y(t))\,\sigma^{-1}\,dW(t)\bigg)^2 - \int_0^T\Psi'(Y(s))\,\sigma^{-1}\int_0^s\Psi(Y(t))\,D_t^WY(s)\,dt\,ds. \tag{6.125}$$

On the other hand, the Skorohod integral of the second term of $\Pi_\Delta u$ is evaluated as follows. We let
$$F = \bigg(\int_0^T\Psi(Y(s))\,ds\bigg)^{-3}, \qquad u = \int_0^T\Psi'(Y(s))\,\sigma^{-1}\int_0^s\Psi(Y(t))\,D_t^WY(s)\,dt\,ds\cdot\Psi(Y(\cdot))\,\sigma^{-1}. \tag{6.126}$$

From integration by parts,
$$\delta^W\Bigg(\frac{\int_0^T\Psi'(Y(s))\,\sigma^{-1}\int_0^s\Psi(Y(t))\,D_t^WY(s)\,dt\,ds}{\big(\int_0^T\Psi(Y(s))\,ds\big)^3}\,\Psi(Y(\cdot))\,\sigma^{-1}\Bigg) = \frac{1}{\big(\int_0^T\Psi(Y(s))\,ds\big)^3}\,\delta^W\bigg(\int_0^T\Psi'(Y(s))\,\sigma^{-1}\int_0^s\Psi(Y(t))\,D_t^WY(s)\,dt\,ds\cdot\Psi(Y(\cdot))\,\sigma^{-1}\bigg) + \frac{3}{\big(\int_0^T\Psi(Y(s))\,ds\big)^4}\int_0^T\int_t^T\Psi'(Y(s))\,D_t^WY(s)\,ds\cdot\int_0^T\Psi'(Y(u))\,\sigma^{-1}\int_0^u\Psi(Y(v))\,D_v^WY(u)\,dv\,du\;\Psi(Y(t))\,\sigma^{-1}\,dt. \tag{6.127}$$

The last term simplifies, by Fubini's theorem, to
$$\frac{3}{\big(\int_0^T\Psi(Y(s))\,ds\big)^4}\,\bigg(\int_0^T\Psi'(Y(s))\,\sigma^{-1}\int_0^s\Psi(Y(t))\,D_t^WY(s)\,dt\,ds\bigg)^2. \tag{6.128}$$
We let
$$F = \int_0^T\Psi'(Y(s))\,\sigma^{-1}\int_0^s\Psi(Y(t))\,D_t^WY(s)\,dt\,ds, \qquad u = \Psi(Y(\cdot))\,\sigma^{-1}. \tag{6.129}$$

Again, by integration by parts,
$$\delta^W\bigg(\int_0^T\Psi'(Y(s))\,\sigma^{-1}\int_0^s\Psi(Y(t))\,D_t^WY(s)\,dt\,ds\cdot\Psi(Y(\cdot))\,\sigma^{-1}\bigg) = \int_0^T\Psi'(Y(s))\,\sigma^{-1}\int_0^s\Psi(Y(t))\,D_t^WY(s)\,dt\,ds\cdot\int_0^T\Psi(Y(s))\,\sigma^{-1}\,dW(s) - \int_0^T D_u^W\bigg(\int_0^T\Psi'(Y(s))\,\sigma^{-1}\int_0^s\Psi(Y(t))\,D_t^WY(s)\,dt\,ds\bigg)\Psi(Y(u))\,\sigma^{-1}\,du. \tag{6.130}$$

Performing the differentiation, we have the following:
$$D_u^W\bigg(\int_0^T\Psi'(Y(s))\,\sigma^{-1}\int_0^s\Psi(Y(t))\,D_t^WY(s)\,dt\,ds\bigg) = \sigma^{-1}\int_u^T D_u^W\bigg(\Psi'(Y(s))\int_0^s\Psi(Y(t))\,D_t^WY(s)\,dt\bigg)ds \tag{6.131}$$
where
$$D_u^W\bigg(\Psi'(Y(s))\int_0^s\Psi(Y(t))\,D_t^WY(s)\,dt\bigg) = \Psi''(Y(s))\,D_u^WY(s)\int_0^s\Psi(Y(t))\,D_t^WY(s)\,dt + \Psi'(Y(s))\,D_u^W\bigg(\int_0^s\Psi(Y(t))\,D_t^WY(s)\,dt\bigg) \tag{6.132}$$
and
$$D_u^W\bigg(\int_0^s\Psi(Y(t))\,D_t^WY(s)\,dt\bigg) = \int_u^s D_u^W\big(\Psi(Y(t))\,D_t^WY(s)\big)\,dt = \int_u^s\Big(\Psi'(Y(t))\,D_u^WY(t)\,D_t^WY(s) + \Psi(Y(t))\,D_u^W\big(D_t^WY(s)\big)\Big)\,dt. \tag{6.133}$$

6.7 Construction of Dominating Processes

We extend the construction of the dominating process to the exp-Lévy process. We verify whether the dominating processes proposed by Bernis, Gobet, and Kohatsu-Higa [16] for the geometric Brownian motion still carry over to the exp-Lévy process.

6.7.1 Continuous-Time Monitoring

• Extrema Process

$$Y(t) = \sup_{s\in[0,t]}\big(L(s)-L(0)\big) - \inf_{s\in[0,t]}\big(L(s)-L(0)\big) \tag{6.134}$$
Claim: $Y(t)$ is an $L$-dominating process. Note that (6.11) is clearly satisfied: since $0\in[0,t]$, both $L(t)-L(0)\le\sup_{s\in[0,t]}(L(s)-L(0))$ and $-(L(t)-L(0))\le-\inf_{s\in[0,t]}(L(s)-L(0))$, so
$$|L(t)| = |L(t)-L(0)| \le \sup_{s\in[0,t]}\big(L(s)-L(0)\big) - \inf_{s\in[0,t]}\big(L(s)-L(0)\big) = Y(t). \tag{6.135}$$
Moreover, for $q\ge 1$, from the identity $(a+b)^q\le 2^q(a^q+b^q)$ we obtain

$$E^Q[Y(t)^q] = E^Q\bigg[\Big(\sup_{s\in[0,t]}(L(s)-L(0)) - \inf_{s\in[0,t]}(L(s)-L(0))\Big)^q\bigg] \le 2^q\,E^Q\bigg[\Big|\sup_{s\in[0,t]}(L(s)-L(0))\Big|^q + \Big|\inf_{s\in[0,t]}(L(s)-L(0))\Big|^q\bigg] \le 2^{q+1}\,E^Q\Big[\sup_{s\in[0,t]}|L(s)-L(0)|^q\Big]. \tag{6.136}$$
From the Lévy process $L(t)$,
$$E^Q\Big[\sup_{s\in[0,t]}|L(s)-L(0)|^q\Big] \le 3^q\bigg((\mu t)^q + E^Q\bigg[\sup_{s\in[0,t]}\Big|\int_0^s\sigma\,dW^Q(u)\Big|^q\bigg] + E^Q\bigg[\sup_{s\in[0,t]}\Big|\int_{[0,s]\times\mathbb{R}_0}z\,\tilde{N}^Q(du,dz)\Big|^q\bigg]\bigg). \tag{6.137}$$

From Burkholder's inequality and Kunita's inequality [7], there exist $A_q > 0$ and $B_q > 0$ such that
$$E^Q\bigg[\sup_{s\in[0,t]}\Big|\int_0^s\sigma\,dW^Q(u)\Big|^q\bigg] \le A_q\,\sigma^q\,t^{q/2}, \tag{6.138}$$
$$E^Q\bigg[\sup_{s\in[0,t]}\Big|\int_{[0,s]\times\mathbb{R}_0}z\,\tilde{N}^Q(du,dz)\Big|^q\bigg] \le B_q\bigg[t^{q/2}\Big(\int_{\mathbb{R}_0}z^2\,\nu(dz)\Big)^{q/2} + t\int_{\mathbb{R}_0}|z|^q\,\nu(dz)\bigg]. \tag{6.139}$$

Hence, to satisfy (6.12) we can pick $\alpha(q) = q$, so $Y(t)$ is an $L$-dominating process. Moreover, $Y(t)$ satisfies R(1) since, for $r\le s$,
$$D_{r,0}Y(t) = D_{r,0}\sup_{s\in[0,t]}\big(L(s)-L(0)\big) - D_{r,0}\inf_{s\in[0,t]}\big(L(s)-L(0)\big) = \mathbf{1}_{\{r\le\tau^M(t)\}} - \mathbf{1}_{\{r\le\tau^m(t)\}} \tag{6.140}$$
where $\tau^M(t)$ and $\tau^m(t)$ are the times of the running supremum and running infimum of $L$, respectively. Hence,
$$\sup_{r\in[0,T]}E^Q\Big[\sup_{t\in[r,T]}|D_{r,0}\Psi(Y(t))|^p\Big] = \sup_{r\in[0,T]}E^Q\Big[\sup_{t\in[r,T]}|\Psi'(Y(t))|^p\,|D_{r,0}Y(t)|^p\Big] \le \Big(\frac{2C_1}{a}\Big)^p. \tag{6.141}$$

• Average Modulus of Continuity Process
$$Y(t) = 8\,\frac{m+2}{m}\bigg(4\int_{[0,t]^2}\frac{|L(s)-L(u)|^\gamma}{|s-u|^{m+2}}\,ds\,du\bigg)^{1/\gamma}\,t^{m/\gamma} \tag{6.142}$$
where $\gamma$ is an even integer and $m\in\big(0,\tfrac{\gamma}{2}-2\big)$. This dominating process is applicable only in the continuous case. To show that $Y(t)$ is a dominating process, we use the Garsia-Rodemich-Rumsey (GRR) lemma [32], [35], which assumes that $L$ is continuous. The GRR lemma is stated as follows:

Theorem 6.7.1 Let $(E,d)$ be a metric space, $f\in C([0,T],E)$, and let $g, p$ be continuous and strictly increasing functions on $[0,\infty)$ such that $p(0) = g(0) = 0$ and $\lim_{t\uparrow\infty}g(t) = \infty$. Then
$$\int_{[0,T]^2} g\bigg(\frac{d(f(s),f(u))}{p(|u-s|)}\bigg)\,ds\,du \le B \tag{6.143}$$
implies, for $0\le s < u\le T$,
$$d(f(s),f(u)) \le 8\int_0^{u-s}g^{-1}\Big(\frac{4B}{v^2}\Big)\,dp(v). \tag{6.144}$$

Denote by $\omega_f(\Delta) = \sup\{d(f(s),f(t)) : s,t\in[0,T],\ |t-s|\le\Delta\}$ the modulus of continuity of $f$; then
$$\omega_f(\Delta) \le 8\int_0^\Delta g^{-1}\Big(\frac{4B}{u^2}\Big)\,dp(u). \tag{6.145}$$

6.7.2 Discrete-Time Monitoring

• Extrema Process

$$Y(t) = \sup_{0\le k\le n,\,t_k\le t}\big(L(t_k)-L(0)\big) - \inf_{0\le k\le n,\,t_k\le t}\big(L(t_k)-L(0)\big) \tag{6.146}$$
Claim: $Y(t)$ is an $L$-dominating process. Note that (6.11) is clearly satisfied since, for $t\in I$,
$$|L(t)| = |L(t)-L(0)| \le \sup_{0\le k\le n,\,t_k\le t}\big(L(t_k)-L(0)\big) - \inf_{0\le k\le n,\,t_k\le t}\big(L(t_k)-L(0)\big) = Y(t). \tag{6.147}$$

Moreover, for $q\ge 1$, from the identity $(a+b)^q\le 2^q(a^q+b^q)$ we obtain
$$E^Q[Y(t)^q] = E^Q\bigg[\Big(\sup_{0\le k\le n,\,t_k\le t}(L(t_k)-L(0)) - \inf_{0\le k\le n,\,t_k\le t}(L(t_k)-L(0))\Big)^q\bigg] \le 2^{q+1}\,E^Q\Big[\sup_{0\le k\le n,\,t_k\le t}|L(t_k)-L(0)|^q\Big] \le 2^{q+1}\,E^Q\Big[\sup_{s\in[0,t]}|L(s)-L(0)|^q\Big]. \tag{6.148}$$

This upper bound is similar to the continuous-monitoring case. Hence, to satisfy (6.12) we can pick $\alpha(q) = q$. Moreover, $Y(t)$ satisfies R(1) since, for $r\le s$,
$$D_{r,0}Y(t) = D_{r,0}\sup_{0\le k\le n,\,t_k\le t}\big(L(t_k)-L(0)\big) - D_{r,0}\inf_{0\le k\le n,\,t_k\le t}\big(L(t_k)-L(0)\big) = \sum_{k=0}^n\mathbf{1}_{A_{n,k}}D_{r,0}L(t_k) - \sum_{k=0}^n\mathbf{1}_{a_{n,k}}D_{r,0}L(t_k) = \sum_{k=0}^n\big(\mathbf{1}_{A_{n,k}} - \mathbf{1}_{a_{n,k}}\big)\mathbf{1}_{\{r\le t_k\}} \tag{6.149}$$

An,1 = {Mn = F1},An,k = {Mn 6= F1, ··· ,Mn 6= Fk−1,Mn = Fk}, 2 ≤ k ≤ n,

an,1 = {mn = F1}, an,k = {mn 6= F1, ··· mn 6= Fk−1, mn = Fk}, 2 ≤ k ≤ n, (6.150) 177

and

$$M_n = \sup_{0\le k\le n,\,t_k\le t}\big(L(t_k)-L(0)\big), \qquad m_n = \inf_{0\le k\le n,\,t_k\le t}\big(L(t_k)-L(0)\big). \tag{6.151}$$
Hence,
$$\sup_{r\in[0,T]}E^Q\Big[\sup_{t\in[r,T]}|D_{r,0}\Psi(Y(t))|^p\Big] = \sup_{r\in[0,T]}E^Q\Big[\sup_{t\in[r,T]}|\Psi'(Y(t))|^p\,|D_{r,0}Y(t)|^p\Big] \le \Big(\frac{2C_1}{a}\Big)^p. \tag{6.152}$$

• Averaged Quadratic Increments Process
$$Y(t) = \sqrt{n\sum_{1\le k\le n,\,t_k\le t}|L(t_k)-L(t_{k-1})|^2}. \tag{6.153}$$
Claim: $Y(t)$ is an $L$-dominating process. Note that (6.11) is satisfied by the triangle inequality followed by the Cauchy-Schwarz inequality, that is,
$$|L(t)-L(0)| \le \sum_{1\le k\le n,\,t_k\le t}|L(t_k)-L(t_{k-1})| \le \sqrt{n\sum_{1\le k\le n,\,t_k\le t}|L(t_k)-L(t_{k-1})|^2} = Y(t). \tag{6.154}$$

From the stationary increments,
$$E^Q[Y(t)^q] = E^Q\bigg[\Big(n\sum_{1\le k\le n,\,t_k\le t}|L(t_k)-L(t_{k-1})|^2\Big)^{q/2}\bigg] = E^Q\bigg[\Big(n\sum_{1\le k\le n,\,t_k\le t}|L(t_k-t_{k-1})|^2\Big)^{q/2}\bigg] \le n^{q/2}\,E^Q\bigg[\Big(\sum_{1\le k\le n,\,t_k\le t}|L(t_k-t_{k-1})|^2\Big)^q\bigg]^{1/2}. \tag{6.155}$$
Then, by Minkowski's inequality,
$$E^Q\bigg[\Big(\sum_{1\le k\le n,\,t_k\le t}|L(t_k-t_{k-1})|^2\Big)^q\bigg]^{1/q} \le \sum_{1\le k\le n,\,t_k\le t}E\big[|L(t_k-t_{k-1})|^{2q}\big]^{1/q} \le n\,E^Q\Big[\sup_{s\in[0,t]}|L(s)|^{2q}\Big]^{1/q}. \tag{6.156}$$

Hence,
$$E^Q[Y^q(t)] \le n^q\,E^Q\Big[\sup_{s\in[0,t]}|L(s)|^{2q}\Big]^{1/2}. \tag{6.157}$$
The upper bound on the right-hand side is evaluated similarly to the continuous-monitoring case with $q$ replaced by $2q$. Hence, to satisfy (6.12) we can pick $\alpha(q) = q/2$, so $Y(t)$ is an $L$-dominating process. Moreover, since $Y(t)\in\mathbb{D}^\infty$, it satisfies R(q).

6.8 Example: Merton Model

We take a look again at the Merton model of Chapter 4.3.3. From the risk-neutral dynamics of the Merton model (4.152), we have the solution
$$S_1(t) = S_0\exp(rt + L(t)) \tag{6.158}$$
where $L(t)$ is of the form (6.2) with
$$b = -\frac{\sigma^2}{2} + \int_{\mathbb{R}_0}\big(z - (e^z-1)\big)\,\nu(dz) = -\frac{\sigma^2}{2} + \lambda\Big(m - \big(e^{m+\delta^2/2} - 1\big)\Big). \tag{6.159}$$
The numerical computation of the Delta depends on $Y(s)$. Consider the expression for the weight $\Pi_\Delta$ in (6.110). To make the numerical computation explicit, it suffices to compute $D_t^WY(s)$.

6.8.1 Continuous Monitoring

Extrema Process

$$Y(s) = \sup_{u\in[0,s]}\big(L(u)-L(0)\big) - \inf_{u\in[0,s]}\big(L(u)-L(0)\big) \tag{6.160}$$
Denote the running supremum and infimum
$$M^L(s) = \sup_{u\in[0,s]}L(u), \qquad m^L(s) = \inf_{u\in[0,s]}L(u). \tag{6.161}$$

Then $D_t^WY(s)$ is computed, for $t\le s$, as follows:
$$D_t^WY(s) = \sigma\big(\mathbf{1}_{\{t\le\tau^M(s)\}} - \mathbf{1}_{\{t\le\tau^m(s)\}}\big) \tag{6.162}$$

where

$$\tau^M(s) = \inf\{u\in[0,s]: L(u)\vee L(u^-) = M^L(s)\}, \qquad \tau^m(s) = \inf\{u\in[0,s]: L(u)\wedge L(u^-) = m^L(s)\}. \tag{6.163}$$

6.8.2 Discrete Monitoring

For s ∈ [0,T ], there exists l ∈ {0, ··· , n} such that

$$0 = t_0 < t_1 < \cdots < t_l \le s < t_{l+1} < \cdots < t_n = T. \tag{6.164}$$

In particular, for an equally spaced partition, we have
$$t_k = \frac{kT}{n}, \qquad k\in\{0,\dots,n\}. \tag{6.165}$$
In this case, $l = \lfloor s/(T/n)\rfloor$. Also, we denote
$$M(s) = \sup_{0\le k\le l,\,t_k\le s}L(t_k), \qquad m(s) = \inf_{0\le k\le l,\,t_k\le s}L(t_k). \tag{6.166}$$
• Extrema Process

$$Y(s) = \sup_{0\le k\le l,\,t_k\le s}\big(L(t_k)-L(0)\big) - \inf_{0\le k\le l,\,t_k\le s}\big(L(t_k)-L(0)\big) = M(s) - m(s). \tag{6.167}$$

Then $D_t^WY(s)$ is computed, for $t\le s$, as follows:
$$D_t^WY(s) = \sigma\sum_{k=1}^l\big(\mathbf{1}_{A_{l,k}} - \mathbf{1}_{a_{l,k}}\big)\mathbf{1}_{\{t\le t_k\}} \tag{6.168}$$
where

$$A_{l,1} = \{M(s) = L(t_1)\}, \quad A_{l,k} = \{M(s)\neq L(t_1),\dots,M(s)\neq L(t_{k-1}),\,M(s) = L(t_k)\},\ 2\le k\le l,$$
$$a_{l,1} = \{m(s) = L(t_1)\}, \quad a_{l,k} = \{m(s)\neq L(t_1),\dots,m(s)\neq L(t_{k-1}),\,m(s) = L(t_k)\},\ 2\le k\le l. \tag{6.169}$$

• Averaged Quadratic Increments Process
$$Y(s) = \sqrt{l\sum_{1\le k\le l,\,t_k\le s}\big(L(t_k)-L(t_{k-1})\big)^2}. \tag{6.170}$$

For $s\in[0,t_1)$, $Y(s) = 0$ and thus $D_t^WY(s) = 0$. On the other hand, for $s\in[t_1,T]$, from the chain rule, we obtain:
$$D_t^WY(s) = \frac{1}{2Y(s)}\,D_t^W\bigg(l\sum_{1\le k\le l,\,t_k\le s}\big(L(t_k)-L(t_{k-1})\big)^2\bigg). \tag{6.171}$$
Now since
$$D_t^W\bigg(\sum_{1\le k\le l,\,t_k\le s}\big(L(t_k)-L(t_{k-1})\big)^2\bigg) = 2\sum_{1\le k\le l,\,t_k\le s}\big(L(t_k)-L(t_{k-1})\big)D_t^W\big(L(t_k)-L(t_{k-1})\big) = 2\sigma\sum_{1\le k\le l,\,t_k\le s}\big(L(t_k)-L(t_{k-1})\big)\big(\mathbf{1}_{\{t\le t_k\}} - \mathbf{1}_{\{t\le t_{k-1}\}}\big) = 2\sigma\sum_{1\le k\le l,\,t_k\le s}\big(L(t_k)-L(t_{k-1})\big)\mathbf{1}_{\{t_{k-1}<t\le t_k\}} = 2\sigma\big(L(t_j)-L(t_{j-1})\big) \tag{6.172}$$

where $j = \inf\{k : t\le t_k\}$. For an equally spaced partition, $j = \lceil t/(T/n)\rceil$. Thus, for $s\in[t_1,T]$,
$$D_t^WY(s) = \frac{\sigma\,l\,\big(L(t_j)-L(t_{j-1})\big)}{Y(s)}. \tag{6.173}$$
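On a simulated discrete path, (6.170) and (6.173) can be evaluated as in the following sketch; the function name and inputs are this sketch's conventions, and the index $j$ is supplied by the caller.

```python
import numpy as np

def aqi_Y_and_DWY(L_vals, sigma, j):
    """Averaged-quadratic-increments dominating process (6.170) and its
    Malliavin derivative (6.173). L_vals = (L(t_0),...,L(t_l)) up to s;
    j = inf{k : t <= t_k} for the derivative time t. Returns (Y, D_t^W Y)."""
    incr = np.diff(L_vals)                    # L(t_k) - L(t_{k-1})
    l = len(incr)
    Y = np.sqrt(l * np.sum(incr**2))          # (6.170)
    DWY = sigma * l * incr[j - 1] / Y         # (6.173): increment over (t_{j-1}, t_j]
    return Y, DWY
```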

REFERENCES

[1] K. Aase, B. Øksendal, N. Privault, and J. Ubøe. White noise generalizations of the Clark-Haussmann-Ocone theorem with application to mathematical finance. Finance Stoch., 4(4):465-496, 2000.
[2] K. Aase, B. Øksendal, and J. Ubøe. Using the Donsker Delta function to compute hedging strategies. Potential Anal., 14(4):351-374, 2001.
[3] M. Abramowitz and I. A. Stegun. Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Dover, 1972.
[4] E. Alós and C.-O. Ewald. Malliavin differentiability of the Heston volatility and applications to option pricing. Adv. Appl. Probab., 40(1):144-162, 2008.
[5] E. Alós, J. A. Leon, and J. Vives. An anticipating Itô formula for Lévy processes. Lat. Am. J. Probab. Math. Stat., 4:285-305, 2008.
[6] E. Alós and T. Rheinländer. On Margrabe options written on stochastic volatility models. Preprint http://www.econ.upf.edu/docs/papers/downloads/1475.pdf, 2015.
[7] D. Applebaum. Lévy Processes and Stochastic Calculus, 2nd ed. Cambridge University Press, 2009.
[8] T. Arai. An extension of mean-variance hedging to the discontinuous case. Finance Stoch., 9(1):129-139, 2005.
[9] T. Arai. Local risk-minimization for Barndorff-Nielsen and Shephard models with volatility risk premium. Preprint http://arxiv.org/abs/1506.01477, 2015.
[10] T. Arai and R. Suzuki. Local risk-minimization for Lévy markets. Intl. J. Financ. Engg., 2(1):1550015, 2015.
[11] T. Arai and R. Suzuki. Local risk minimization for Barndorff-Nielsen and Shephard models. Preprint http://arxiv.org/pdf/1503.08589v1, 2015.
[12] O. E. Barndorff-Nielsen and N. Shephard. Non-Gaussian Ornstein-Uhlenbeck based models and some of their uses in financial econometrics. J. R. Stat. Soc., 63(2):167-241, 2001.
[13] D. Bates. Jumps and stochastic volatility: the exchange rate processes implicit in Deutschemark options. Rev. Fin. Studies, 9(1):69-107, 1996.
[14] F. E. Benth and J. Potthoff. On the martingale property for generalized stochastic processes. Stochastics Stochastics Rep., 58(3-4):349-367, 1996.

[15] F. E. Benth, G. Di Nunno, A. Løkka, B. Øksendal, and F. Proske. Explicit representation of the minimal variance portfolio in markets driven by Lévy processes. Math. Finance, 13(1):55-72, 2003.
[16] G. Bernis, E. Gobet, and A. Kohatsu-Higa. Monte Carlo evaluation of Greeks for multidimensional barrier and lookback options. Math. Finance, 13(1):99-113, 2003.
[17] J. Bertoin. Lévy Processes. Cambridge University Press, 1996.
[18] P. Carr and D. Madan. Option valuation using the fast Fourier transform. J. Comput. Finance, 2(4):61-73, 1999.
[19] P. Carr and J. Sun. A new approach for option pricing under stochastic volatility. Rev. Deriv. Res., 10:87-150, 2007.
[20] K. Chourdakis. Option pricing using the fractional FFT. J. Comput. Finance, 8(2):1-18, 2004.
[21] R. Cont and P. Tankov. Financial Modelling with Jump Processes. Chapman and Hall/CRC Press, 2004.
[22] J. Cox, J. Ingersoll, and S. Ross. A theory of the term structure of interest rates. Econometrica, 53(2):385-407, 1985.
[23] L. Delong and P. Imkeller. On Malliavin's differentiability of BSDEs with time delayed generators driven by Brownian motions and Poisson random measures. Stoch. Proc. Appl., 120(9):1748-1775, 2010.
[24] G. Di Nunno, B. Øksendal, and F. Proske. White noise analysis for Lévy processes. J. Funct. Anal., 206(1):109-148, 2004.
[25] G. Di Nunno, T. Meyer-Brandis, and B. Øksendal. Malliavin calculus and anticipative Itô formulae for Lévy processes. Infin. Dimens. Anal. Quantum Probab. Relat. Top., 8(2):235-258, 2005.
[26] G. Di Nunno and B. Øksendal. A representation theorem and a sensitivity result for functionals of jump diffusions. In Mathematical Analysis of Random Phenomena, 177-190, World Sci. Publ., Hackensack, NJ, 2007.
[27] G. Di Nunno, B. Øksendal, and F. Proske. Malliavin Calculus for Lévy Processes with Applications to Finance. Springer, 2009.
[28] G. Drimus. Options on realized variance by transform methods: a non-affine stochastic volatility model. Quant. Finance, 12(11):1679-1694, 2012.
[29] C.-O. Ewald. Local volatility in the Heston model: a Malliavin calculus approach. J. Appl. Math. Stoch. Anal., 3:307-322, 2005.
[30] H. Föllmer and M. Schweizer. Hedging of contingent claims under incomplete information. In Applied Stochastic Analysis, M. Davis and R. Elliott (eds.), Stochastics Monographs, vol. 5, Gordon and Breach, London/New York, 389-414, 1991.
[31] E. Fournié, J.-M. Lasry, J. Lebuchoux, and N. Touzi. Applications of Malliavin calculus to Monte Carlo methods in finance. Finance Stoch., 3(4):391-412, 1999.

[32] P. K. Friz and N. B. Victoir. Multidimensional Stochastic Processes as Rough Paths: Theory and Applications. Cambridge University Press, 2010.
[33] T. Fujiwara and Y. Miyahara. The minimal entropy martingale measures for geometric Lévy processes. Finance Stoch., 7(4):509-531, 2003.
[34] P. Glasserman. Monte Carlo Methods in Financial Engineering. Springer, 2004.
[35] A. Garsia, E. Rodemich, and H. Rumsey. A real variable lemma and the continuity of paths of some Gaussian processes. Indiana Univ. Math. J., 20:565-578, 1970.
[36] C. Geiss and E. Laukkarinen. Denseness of certain smooth Lévy functionals in D^{1,2}. Probab. Math. Statist., 31(1):1-15, 2011.
[37] J. Goard and M. Mazur. Stochastic volatility models and the pricing of VIX options. Math. Fin., 23(3):439-458, 2013.
[38] E. Gobet and A. Kohatsu-Higa. Computation of Greeks for barrier and lookback options using Malliavin calculus. Electron. Commun. Probab., 8(6):51-62, 2003.
[39] C. Gouriéroux, J. P. Laurent, and H. Pham. Mean-variance hedging and numéraire. Math. Fin., 8(3):179-200, 1998.
[40] M. Grasselli. The 4/2 stochastic volatility model. Preprint http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2523635, 2014.
[41] D. Griffiths. Introduction to Quantum Mechanics, 2nd ed. Prentice Hall, 2004.
[42] S. Heston. A closed-form solution for options with stochastic volatility with applications to bond and currency options. Rev. Financ. Stud., 6(2):327-343, 1993.
[43] S. Heston. A simple new formula for options with stochastic volatility. Technical Report, Washington University of St. Louis, 1997.
[44] T. Hida. White noise analysis and its applications. In L. H. Y. Chen (ed.), Proc. Int. Mathematical Conf., North-Holland, Amsterdam, 1982, pp. 43-48.
[45] T. Hida, H.-H. Kuo, J. Potthoff, and L. Streit. White Noise. Kluwer Academic Publishers, 1993.
[46] H. Holden, B. Øksendal, J. Ubøe, and T. Zhang. Stochastic Partial Differential Equations, 2nd ed. Springer, 2010.
[47] R. Horn and C. Johnson. Matrix Analysis. Cambridge University Press, 2012.
[48] Y. Hu and B. Øksendal. Fractional white noise calculus and applications to finance. Infin. Dimens. Anal. Quantum Probab. Relat. Top., 6(1):1-32, 2003.
[49] F. Huehne. A Clark-Ocone-Haussmann formula for optimal portfolios under Girsanov transformed pure-jump Lévy processes. Working Paper, 2005.
[50] K. Itô. Spectral type of the shift transformation of differential processes with stationary increments. Trans. Am. Math. Soc., 81(2):253-263, 1956.

[51] I. Karatzas and S. E. Shreve. Brownian Motion and Stochastic Calculus, 2nd ed. Springer, 1991.
[52] D. Koleva. Option Pricing under Heston and 3/2 Stochastic Volatility Models: an Approximation to the Fast Fourier Transform. Master's thesis, Aarhus University, 2012.
[53] H. Kunita. Stochastic differential equations based on Lévy processes and stochastic flows of diffeomorphisms. In Real and Stochastic Analysis, New Perspectives, ed. M. M. Rao, Birkhäuser, 305-375, 2004.
[54] J. Leon, J. L. Solé, F. Utzet, and J. Vives. On Lévy processes, Malliavin calculus and market models with jumps. Finance Stoch., 6(2):197-225, 2002.
[55] D. Lepinglé and J. Mémin. Sur l'intégrabilité uniforme des martingales exponentielles. Z. Wahrsch. Verw. Gebiete, 42(3):175-203, 1978.
[56] Y.-J. Lee and H.-H. Shih. Analysis of generalized Lévy white noise functionals. J. Funct. Anal., 211(1):1-70, 2004.
[57] Y.-J. Lee and H.-H. Shih. The product formula of multiple Lévy-Itô integrals. Bulletin of the Institute of Mathematics, Academia Sinica, 32(2):71-95, 2004.
[58] S. Mataramvura, B. Øksendal, and F. Proske. The Donsker delta function of a Lévy process with application to chaos expansion of local time. Ann. Inst. H. Poincaré Probab. Statist., 40(5):553-567, 2004.
[59] R. Merton. Option pricing when underlying stock returns are discontinuous. J. Financial Econ., 3(1):125-144, 1976.
[60] D. Nualart. The Malliavin Calculus and Related Topics, 2nd ed. Springer, 2006.
[61] D. Nualart and W. Schoutens. Chaotic and predictable representation for Lévy processes. Stoch. Process. Appl., 90(1):109-122, 2000.
[62] D. Nualart and J. Vives. Absolute continuity of the law of maximum of a continuous process. C. R. Acad. Sci. Paris, 307(7):349-354, 1988.
[63] D. Ocone and I. Karatzas. A generalized Clark representation formula, with applications to optimal portfolios. Stochastics, 37(3-4):187-220, 1991.
[64] B. Øksendal and F. Proske. White noise of Poisson random measures. Potential Anal., 21(4):375-403, 2004.
[65] B. Øksendal and A. Sulem. Applied Stochastic Control of Jump Diffusions, 3rd ed. Springer, forthcoming.
[66] Y. Y. Okur. White noise generalization of the Clark-Ocone formula under change of measure. Stoch. Anal. Appl., 28(6):1106-1121, 2010.
[67] Y. Y. Okur. An extension of the Clark-Ocone formula under benchmark measure for Lévy processes. Stochastics, 84(2-3):251-272, 2012.
[68] J. Potthoff and M. Timpel. On a dual pair of smooth and generalized random variables. Potential Anal., 4(6):637-654, 1995.

[69] P. Protter. Stochastic Integration and Differential Equations, Version 2.1. Springer, 2005.
[70] P. Protter and K. Shimbo. No arbitrage and general semimartingales. In Markov Processes and Related Topics: A Festschrift for Thomas G. Kurtz, Inst. Math. Stat., 267-283, 2008.
[71] T. Rheinländer and M. Schweizer. On L²-projections on a space of stochastic integrals. Ann. Probab., 25(4):1810-1831, 1997.
[72] G. R. Shorack. Probability for Statisticians. Springer, 2000.
[73] K. Sato. Lévy Processes and Infinitely Divisible Distributions. Cambridge University Press, 1999.
[74] W. Schoutens. Lévy Processes in Finance: Pricing Financial Derivatives. Wiley, 2003.
[75] M. Schweizer. On the minimal martingale measure and the Föllmer-Schweizer decomposition. Stoch. Anal. Appl., 13(5):573-599, 1995.
[76] R. Situ. Theory of Stochastic Differential Equations with Jumps and Applications: Mathematical and Analytical Techniques with Applications to Engineering. Springer, 2005.
[77] J. L. Solé, F. Utzet, and J. Vives. Canonical Lévy process and Malliavin calculus. Stoch. Process. Appl., 117(2):165-187, 2007.
[78] J. L. Solé, F. Utzet, and J. Vives. Chaos expansions and Malliavin calculus for Lévy processes. In Stochastic Analysis and Applications, The Abel Symposium, 595-612, Springer, 2007.
[79] A. Stuart and J. K. Ord. Kendall's Advanced Theory of Statistics, Vol. 2A: Classical Inference and the Linear Model, 6th ed. Oxford University Press, 1999.
[80] R. Suzuki. A Clark-Ocone type formula under the change of measure for Lévy processes with L²-Lévy measure. Comm. Stoch. Anal., 7(3):383-407, 2013.
APPENDICES

A. WIENER AND POISSON CHAOS EXPANSIONS

We review some of the important concepts in white noise Malliavin Calculus in both Wiener and pure jump (compensated Poisson random measure) cases [1], [24], [27], and [64]. We state the classical and the alternative chaos expansions for both the Wiener and Poisson case.

A.1 Hermite Polynomial and Hermite Function

The Hermite polynomial hn(x) is defined as follows:

$$h_n(x) = (-1)^n\,e^{x^2/2}\,\frac{d^n}{dx^n}\,e^{-x^2/2}. \tag{A.1}$$

Its generating function is given by

$$\sum_{n=0}^\infty\frac{t^n}{n!}\,h_n(x) = \exp\Big(tx - \frac{t^2}{2}\Big). \tag{A.2}$$
The first few Hermite polynomials are as follows:

$$h_0(x) = 1,\quad h_1(x) = x,\quad h_2(x) = x^2 - 1,\quad h_3(x) = x^3 - 3x,\quad h_4(x) = x^4 - 6x^2 + 3,\quad h_5(x) = x^5 - 10x^3 + 15x. \tag{A.3}$$

The Hermite polynomials have a weighted orthogonality property given by
$$\int_{\mathbb{R}}h_n(x)\,h_m(x)\,e^{-x^2/2}\,dx = \sqrt{2\pi}\,n!\,\delta_{mn}. \tag{A.4}$$
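The $h_n$ of (A.1) are the probabilists' Hermite polynomials, available in numpy; the following sketch numerically checks the orthogonality relation (A.4). The truncation of the integration domain is a numerical choice of this sketch.

```python
import numpy as np
from numpy.polynomial.hermite_e import HermiteE
from scipy.integrate import quad

h3 = HermiteE([0, 0, 0, 1])       # He_3(x) = x^3 - 3x, matching h_3 in (A.3)
h4 = HermiteE([0, 0, 0, 0, 1])    # He_4(x) = x^4 - 6x^2 + 3

# Off-diagonal of (A.4): should vanish.
val, _ = quad(lambda x: h3(x) * h4(x) * np.exp(-x**2 / 2), -10, 10)
print(abs(val) < 1e-8)                                   # True

# Diagonal of (A.4): sqrt(2*pi) * 3! for n = m = 3.
val, _ = quad(lambda x: h3(x)**2 * np.exp(-x**2 / 2), -10, 10)
print(np.isclose(val, np.sqrt(2 * np.pi) * 6))           # True
```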

The Hermite function $e_n(x)$ is defined as follows:
$$e_n(x) = \pi^{-1/4}\big((n-1)!\big)^{-1/2}\,e^{-x^2/2}\,h_{n-1}(\sqrt{2}\,x), \qquad n\in\mathbb{N}. \tag{A.5}$$

The first few Hermite functions are as follows:

$$e_1(x) = \pi^{-1/4}e^{-x^2/2},\quad e_2(x) = \sqrt{2}\,\pi^{-1/4}\,x\,e^{-x^2/2},\quad e_3(x) = \big(\sqrt{2}\,\pi^{1/4}\big)^{-1}(2x^2-1)\,e^{-x^2/2},$$
$$e_4(x) = \big(\sqrt{3}\,\pi^{1/4}\big)^{-1}(2x^3-3x)\,e^{-x^2/2},\quad e_5(x) = \big(2\sqrt{6}\,\pi^{1/4}\big)^{-1}(4x^4-12x^2+3)\,e^{-x^2/2}. \tag{A.6}$$

Some important characterization of Hermite functions:

1. $\{e_n\}_{n\in\mathbb{N}}$ is an orthonormal basis in $L^2(\mathbb{R})$,
2. $e_n\in\mathcal{S}(\mathbb{R})$,
3. $\sup_{x\in\mathbb{R}}|e_n(x)| = O(n^{-1/12})$,
4. $e_n(x) = O(n^{-1/4})$ for all $x\in\mathbb{R}$.

Remark A.1.1 The Hermite functions also play a role in quantum mechanics. The relation

ψn(x) ∝ en+1(x), n ∈ N0 (A.7) is an eigenfunction of the harmonic oscillator in the Schr¨odinger’stime-independent wave equation [41].

A.2 Wiener Chaos Expansions

We let g ∈ L2([0,T ]n) and denote Z W ⊗n In (g) = gW (dt) (A.8) [0,T ]n be the n-fold iterated Itˆointegral with respect to the Wiener process. 188

Theorem A.2.1 Wiener Chaos Expansion.

Let $F\in L^2(P)$ be $\mathcal{F}_T$-measurable; then there exists a unique sequence $f_n\in L_s^2([0,T]^n)$, $\forall n\in\mathbb{N}_0$, such that
$$F = \sum_{n=0}^\infty I_n^W(f_n). \tag{A.9}$$
In addition, we have the following isometry relation:
$$\|F\|_{L^2(P)}^2 = \sum_{n=0}^\infty n!\,\|f_n\|_{L^2([0,T]^n)}^2. \tag{A.10}$$
We let
$$\theta_k \equiv \int_{\mathbb{R}} e_k(t)\,dW(t), \qquad H_\alpha^W = \prod_{k=1}^n h_{\alpha_k}(\theta_k) = I_n^W\big(e^{\hat\otimes\alpha}\big) \tag{A.11}$$
where $\{h_k\}_{k\in\mathbb{N}_0}$ and $\{e_k\}_{k\in\mathbb{N}}$ are the Hermite polynomials and Hermite functions, respectively. Let $\mathcal{I}$ be the set of multi-indices $\alpha = (\alpha_1,\dots,\alpha_n)$, $n\in\mathbb{N}_0$. Also, we let $|\alpha| = \alpha_1+\cdots+\alpha_n$ and $\alpha! = \alpha_1!\cdots\alpha_n!$. Then $\{H_\alpha^W\}_{\alpha\in\mathcal{I}}$ is an orthogonal basis for $L^2(P)$ with norm
$$\|H_\alpha^W\|_{L^2(P)}^2 = \alpha!. \tag{A.12}$$

We now state the alternative Wiener chaos expansion.

Theorem A.2.2 Let $F\in L^2(P)$; then there exist unique $c_\alpha\in\mathbb{R}$ such that
$$F = \sum_{\alpha\in\mathcal{I}} c_\alpha H_\alpha^W. \tag{A.13}$$
In addition, we have the following isometry condition:
$$\|F\|_{L^2(P)}^2 = \sum_{\alpha\in\mathcal{I}}\alpha!\,c_\alpha^2. \tag{A.14}$$

A.3 Poisson Chaos Expansions

We let $g\in L^2\big(([0,T]\times\mathbb{R}_0)^n\big)$ and denote
$$I_n^{\tilde{N}}(g) = \int_{([0,T]\times\mathbb{R}_0)^n} g\;\tilde{N}(dt,dx)^{\otimes n} \tag{A.15}$$

Theorem A.3.1 Poisson Chaos Expansion.

Let $F\in L^2(P)$ be $\mathcal{F}_T$-measurable; then there exists a unique sequence $f_n\in L_s^2\big(([0,T]\times\mathbb{R}_0)^n\big)$, $\forall n\in\mathbb{N}_0$, such that
$$F = \sum_{n=0}^\infty I_n^{\tilde{N}}(f_n). \tag{A.16}$$
In addition, we have the following isometry relation:
$$\|F\|_{L^2(P)}^2 = \sum_{n=0}^\infty n!\,\|f_n\|_{L^2(([0,T]\times\mathbb{R}_0)^n)}^2. \tag{A.17}$$
We assume that the Lévy measure $\nu$ satisfies the so-called Nualart-Schoutens

assumption (3.23). Let $\{l_m\}_{m\in\mathbb{N}_0}$ be the orthogonalization of $\{z^m\}_{m\in\mathbb{N}_0}$ with respect to $L^2(\rho)$, where $\rho(dz) = z^2\nu(dz)$ (note: this is different from the $\pi_m$ presented in the canonical Lévy case). Define
$$p_m(z) \equiv \frac{z\,l_{m-1}(z)}{\|l_{m-1}\|_{L^2(\rho)}}, \qquad m\in\mathbb{N}, \tag{A.18}$$
which constitutes an orthonormal set of basis functions in $L^2(\rho)$. Denote the Cantor diagonalization mapping $\kappa : \mathbb{N}\times\mathbb{N}\to\mathbb{N}$ as follows:
$$\kappa(i,j) = j + \frac{(i+j-2)(i+j-1)}{2}. \tag{A.19}$$
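A two-line sketch of the pairing (A.19), which enumerates the tensor basis diagonally:

```python
def kappa(i, j):
    """Cantor diagonalization (A.19): a bijection N x N -> N (1-indexed)."""
    return j + (i + j - 2) * (i + j - 1) // 2

# First diagonals: kappa(1,1)=1, kappa(2,1)=2, kappa(1,2)=3, kappa(3,1)=4, ...
print([kappa(i, j) for i, j in [(1, 1), (2, 1), (1, 2), (3, 1), (2, 2), (1, 3)]])
```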

Let k = κ(i, j) and

$$d_k(t,z) = e_i(t)\,p_j(z); \tag{A.20}$$
then $\{d_k\}_{k\in\mathbb{N}}$ forms an orthonormal basis in $L^2(\lambda\times\rho)$. Suppose that $m = \mathrm{Index}(\alpha) = \max\{i : \alpha_i\neq 0\}$ and $n = |\alpha|$; define the following tensor product:

$$d^{\otimes\alpha}\big((t_1,z_1),\dots,(t_n,z_n)\big) = d_1^{\otimes\alpha_1}\otimes\cdots\otimes d_m^{\otimes\alpha_m}\big((t_1,z_1),\dots,(t_n,z_n)\big) = d_1(t_1,z_1)\cdots d_1(t_{\alpha_1},z_{\alpha_1})\cdots d_m(t_{n-\alpha_m+1},z_{n-\alpha_m+1})\cdots d_m(t_n,z_n) \tag{A.21}$$

with the convention $d_i^{\otimes 0} = 1$, $i\in\{1,\dots,m\}$. Also, we denote the symmetrized tensor product
$$d^{\hat\otimes\alpha}\big((t_1,z_1),\dots,(t_n,z_n)\big) = \big(d^{\otimes\alpha}\big)^\wedge\big((t_1,z_1),\dots,(t_n,z_n)\big) = d_1^{\hat\otimes\alpha_1}\hat\otimes\cdots\hat\otimes d_m^{\hat\otimes\alpha_m}\big((t_1,z_1),\dots,(t_n,z_n)\big) \tag{A.22}$$
and
$$H_\alpha^{\tilde{N}} = I_{|\alpha|}^{\tilde{N}}\big(d^{\hat\otimes\alpha}\big). \tag{A.23}$$

Then $\{H_\alpha^{\tilde{N}}\}_{\alpha\in\mathcal{I}}$ is an orthogonal basis for $L^2(P)$ with norm
$$\|H_\alpha^{\tilde{N}}\|_{L^2(P)}^2 = \alpha!. \tag{A.24}$$

We now state the alternative Poisson chaos expansion.

Theorem A.3.2 Let $F\in L^2(P)$; then it has a unique chaos expansion of the form
$$F = \sum_{\alpha\in\mathcal{I}} c_\alpha H_\alpha^{\tilde{N}}. \tag{A.25}$$
In addition, we have the following isometry condition:
$$\|F\|_{L^2(P)}^2 = \sum_{\alpha\in\mathcal{I}}\alpha!\,c_\alpha^2. \tag{A.26}$$
VITA

VITA

ROLANDO DANGANAN NAVARRO, JR. Date of Birth: 3 February 1978 Place of Birth: City of Manila, Philippines

RESEARCH INTERESTS

• L´evyProcesses

• Long Memory Dependence

• Machine Learning

• Malliavin Calculus

• Optimal Execution Using Stochastic Optimal Control

• Signal Processing

• Stochastic Analysis

• Stochastic Volatility Modeling

EDUCATION

• Ph.D. Statistics, Purdue University, West Lafayette, IN, 2011-2015. Research: Malliavin Calculus in the Canonical Lévy Process: White Noise Theory and Financial Applications. Adviser: Frederi G. Viens, Ph.D.

• Certificate of Applied Management Principles (mini-MBA) -Purdue University, West Lafayette, IN 2014

• M.S. Statistics, University of the Philippines, Diliman, 2004-2008. Research: Estimating the Gauss-Markov Parameters for Remote Sensing Image Textures. Advisers: Joselito C. Magadia, Ph.D., Enrico C. Paringit, D.Eng.

• B.S. Electrical Engineering, University of the Philippines, Los Baños, 1996-2001. Research: Recognition of Tagalog Alphabets Using the Hidden Markov Model. Adviser: Haidee P. Rosete

TEACHING EXPERIENCE

• Laboratory Instructor at the Undergraduate Level

– STAT 301 - Elementary Statistics (Regular and Honors)

• Teaching Assistant to the following Graduate Level Courses

– MA 515 / STAT 540 - Mathematics of Finance

– MA 516 / STAT 541 - Adv. Probability & Options, w/ Numerical Methods

– MA 539 / STAT 539 - Theory of Probability II

– STAT 520 - Time Series and Applications

– STAT 532 - Elementary Stochastic Processes

PROFESSIONAL EXPERIENCE

• Graduate Teaching Assistant (Aug 2011 to Present) Department of Statistics, Purdue University West Lafayette, IN, USA Evaluate students' laboratory performance in Elementary Statistics, using SPSS; and homework covering graduate-level courses in Mathematical Finance, Time Series, Measure-Theoretic Probability, and Stochastic Processes.

• Risk Analyst, Group Model Validation (May 2010 - Jun 2011) Standard Chartered Bank Marina Bay Financial Centre, Singapore Identified modelling and SAS code development issues; formulated solutions for consumer banking models for Basel II compliance; and collaborated with Head of Methodology to improve PD and LGD estimation using Survival Analysis and Kalman Filtering. 193

• Data Analytics Engineer (May 2008 to Apr 2010) Vision Analytics, Inc. (formerly Vinta Systems, Inc.) Makati City, Philippines Analyzed the performance of machine learning algorithms, such as Support Vector Machines and Neural Networks, for application and behavioral credit scoring. Recommended statistical analysis tools to develop credit scorecards for RCBC Savings Bank.

• Design Engineer (Jan 2003 to Apr 2008) Luxembourg Electronics, Inc. Makati City, Philippines Responsible for the testing and installation of single and multimode fiber optic systems on a family-run business.

• Science Research Specialist, DSP Team (Apr 2002 to Dec 2002), Advanced Science and Technology Institute, Dept. of Science and Technology, Quezon City, Philippines. Investigated the performance and stability of timing synchronization of π/4-DQPSK modulation for baseband modem design.

SKILLS

• Computing: C/C++, Python, SQL, Unix, Matlab, Excel-VBA, SPSS, SAS, R, Mathematica, LaTeX, Algorithmic Trading.

• Finance: Options and Fixed Income, Time Series, Market and Credit Risk, Portfolio Optimization, Macroeconomics.

• Mathematics: Numerical Analysis, Optimization, Linear Algebra, Stochastic Calculus, PDE, Stochastic Optimal Control.

• Statistics: Machine Learning, Bayesian Analysis, Signal Processing, Multivariate Analysis, Mathematical Statistics, Monte Carlo.

NOTABLE ACHIEVEMENTS

• Frederick Andrews Fellowship, Purdue University, 2011-2015.

• Best M.S. Statistics Thesis, Philippine Council for Advanced Science and Technology Research and Development, 2008.

• MS Thesis Fellowship Grant, Statistical Research and Training Center (Philippines), 2007.

• Ranked 13th, Philippine Registered Electrical Engineering Board Licensure Examination (REE License Number 28215), 2001.

• Philippine delegate to the 36th International Mathematical Olympiad, York University, Toronto, Canada, 1995.

• 2nd Place - Philippine Mathematical Olympiad, National Level, 1995.

JOURNAL PUBLICATIONS

• R. Navarro and F. Viens. White Noise Analysis in the Canonical Lévy Process, to be submitted to Communications in Stochastic Analysis.

• R. Navarro, R. Tamangan, N. Natan-Guba, E. Ramos, and A. de Guzman. Identification of the Long Memory Process in the ASEAN-4 Stock Markets by Fractional and Multifractional Brownian Motion, Philippine Statistician, 55(2):65-83, 2006.

BOOK CHAPTER

• R. Navarro, J. Magadia, and E. Paringit. Estimation of the Separable MGMRF Parameters for Thematic Classification. In Remote Sensing - Advanced Techniques and Platforms, B. Escalante (ed.), 2012.

CONFERENCE PRESENTATIONS

• R. Navarro. White Noise Analysis in the Canonical Lévy Process, Probability Seminar, Purdue University, 2015.

• R. Navarro. Canonical Lévy Process in Finance Using White Noise Analysis, Computational Finance Seminar, Purdue University, 2015.

• R. Navarro. Mean-Variance Hedging with Partial Information Using the Clark-Ocone Representation with the Change of Measure for Lévy Process, Barcelona GSE Summer Forum: Statistics, Jump Processes and Malliavin Calculus, 2014.

• R. Navarro, J. Magadia, and E. Paringit. Estimating the Gauss-Markov Parameters for Remote Sensing Image Textures, IEEE TENCON 2009, Singapore, 2009.

• R. Navarro. Recognition of Tagalog Alphabets Using The Hidden Markov Model, 10th National Convention on Statistics, Manila, 2007.

• R. Navarro and J. R. Albert. A Compound Gauss-Markov Random Field Modeling of Philippine Unemployment Data, Physics Society of the Philippines National Congress, Ateneo de Davao University, 2006.

• R. Navarro and J. Noche. Classification of Mixtures of Student Grade Distribution Based on the Gaussian Mixture Model Using the Expectation-Maximization Algorithm, 4th National ECE Conference, University of San Carlos, Cebu City, Philippines, 2003.

• R. Navarro, C. G. Santos, and A. Manlapat. Performance Analysis of Gardner Timing Error Detector Over π/4-DQPSK Modulation, 3rd National ECE Conference, University of the Philippines, Diliman, Quezon City, Philippines, 2002.

PEER REVIEW ACTIVITIES

• Referee - IET Signal Processing

FUNDED PARTICIPATION IN CONFERENCES

• AMS Mathematics Research Communities: Financial Mathematics, Snowbird, UT, Jun 2015.

• High Frequency Conference, Stevens Institute of Technology, Hoboken, NJ, (Jul 2012 and Oct 2013).

• ASA Joint Statistical Meetings Diversity Workshop, San Diego, CA, (Jul-Aug 2012).

LIFETIME MEMBERSHIPS OF HONOR SOCIETIES

• Golden Key International Honour Society (since March 2013)

• Phi Kappa Phi International Honor Society (since April 2007)

PROFESSIONAL AFFILIATION

• American Mathematical Society

• American Statistical Association

• Society for Industrial and Applied Mathematics

LEADERSHIP ACTIVITIES

• Student Team Leader, 2015 Rotman International Trading Competition (Dec 2014 to Feb 2015). Spearheaded training of a 6-member team in futures, commodities, and algorithmic trading events.

• President, Purdue Quantitative Finance Club (PQFC) (Feb 2014 to May 2015). Revitalized student interest in Quantitative Finance by initiating seminars and organizing a trading research group.

• Senator, Purdue Graduate Student Government (Jun 2013 - May 2014).

• Founding President, UPLB Society of Electrical Engineering Students (Jun 1997 - Nov 1998).