PURDUE UNIVERSITY GRADUATE SCHOOL Thesis/Dissertation Acceptance
This is to certify that the thesis/dissertation prepared
By Rolando D. Navarro, Jr.
Entitled Malliavin Calculus in the Canonical Lévy Process: White Noise Theory and Financial Applications
For the degree of Doctor of Philosophy
Is approved by the final examining committee:
Frederi Viens, Chair
Jonathon Peterson
Michael Levine
Jose Figueroa-Lopez
To the best of my knowledge and as understood by the student in the Thesis/Dissertation Agreement, Publication Delay, and Certification Disclaimer (Graduate School Form 32), this thesis/dissertation adheres to the provisions of Purdue University’s “Policy of Integrity in Research” and the use of copyright material.
Approved by Major Professor(s): Frederi Viens
Approved by: Jun Xie, Head of the Departmental Graduate Program. Date: 11/24/2015

MALLIAVIN CALCULUS IN THE CANONICAL LÉVY PROCESS:
WHITE NOISE THEORY AND FINANCIAL APPLICATIONS
A Dissertation
Submitted to the Faculty
of
Purdue University
by
Rolando D. Navarro, Jr.
In Partial Fulfillment of the
Requirements for the Degree
of
Doctor of Philosophy
December 2015
Purdue University
West Lafayette, Indiana
"Stay hungry, stay foolish!" Steve Jobs (1955-2011)
ACKNOWLEDGMENTS
I would like to express my deepest gratitude to the following, who made the journey towards the completion of my Ph.D. dissertation possible. The Almighty Father, for giving me all the strength and endurance to accomplish this noble endeavor. My parents, for their unyielding encouragement to pursue graduate studies at Purdue, as well as for imparting their invaluable foresight on what it takes to be successful in life; and my relatives in New York City, Lynn Terrell, Edmundo Navarro, and Araceli Galvan Navarro, who gave me a home away from home. My adviser, Dr. Frederi Viens, for the intellectual stimulation and insightful suggestions selflessly provided during the course of this research, as well as for expanding my horizon of research opportunities in mathematical finance. I would also like to thank Dr. Michael Levine, Dr. Jose Figueroa-Lopez, and Dr. Jonathon Peterson for their invaluable comments on my dissertation. My academic sibling, Dr. Richard Eden, for providing me his meticulously written notes on Malliavin calculus; without his mentoring and our fruitful discussions during the earlier stages of my stay at Purdue, this thesis would not have gone this far. My Boilermaker friends from the PQFC and the Statistics Department, especially Lin Yang Cheng, Berend Coster, Xiaoguang Wang, Tian Qiu, Jeffrey Nisen, Yao Tang, and Yudong Cao, for sharing their individual and collective aspirations so that, through hard work and gritty determination, we can achieve our dreams in the exciting world of quantitative finance. My brethren in the Church of Christ (Iglesia ni Cristo), Locale of Indianapolis, especially the Gumasing family: Bro. Garry, Sis. Carol, and Bro. Paolo, who helped me remain steadfast in the faith. You are all awesome! 23rd of November 2015, West Lafayette, IN
TABLE OF CONTENTS

Page
ABSTRACT . . . vii
1 Introduction . . . 1
1.1 Motivation . . . 1
1.2 Overview of the Dissertation . . . 4
1.3 Main Results . . . 5
2 Preliminaries . . . 7
2.1 Lévy Processes . . . 7
2.2 Moment Inequalities . . . 11
2.3 Geometric Lévy Processes . . . 12
2.4 Stochastic Differential Equations . . . 14
2.5 Canonical Lévy Space . . . 15
2.6 Iterated Lévy-Itô Integral . . . 16
2.7 Skorohod Integral . . . 18
2.8 Predictable Process . . . 21
3 Canonical Lévy White Noise Processes . . . 22
3.1 Construction of Canonical Lévy White Noise Process . . . 22
3.2 Construction of Alternative Chaos Expansion for Canonical Lévy Processes . . . 26
3.3 Alternative Chaos Expansion for Canonical Lévy Processes . . . 35
3.4 Stochastic Test and Distribution Function . . . 41
3.4.1 The spaces G and G* . . . 42
3.4.2 Kondratiev and Hida spaces . . . 43
3.5 White Noise Processes from Canonical Lévy Processes . . . 44
3.6 Wick Product and Hermite Transform . . . 49
3.7 Stochastic Derivative . . . 52
3.8 Generalized Expectation and Generalized Conditional Expectation . . . 58
3.9 Skorohod Integration on G* . . . 63
3.10 Clark-Ocone Theorem in L2(P) . . . 71
3.11 Multivariate Extension . . . 79
3.11.1 Notations . . . 80
3.11.2 Chaos Expansion . . . 81
3.11.3 Stochastic Test and Distribution Functions . . . 82
3.11.4 Wick Product . . . 84
3.11.5 Stochastic Derivatives . . . 84
3.11.6 Generalized Conditional Expectation . . . 85
3.11.7 Skorohod Integration on G* . . . 86
3.11.8 Clark-Ocone Theorem in L2(P) . . . 88
4 Clark-Ocone Theorem Under the Change of Measure and Mean-Variance Hedging . . . 89
4.1 Girsanov Theorem for Lévy Processes . . . 89
4.2 Clark-Ocone Theorem in L2(P) ∩ L2(Q) . . . 92
4.3 Mean-Variance Hedging . . . 104
4.3.1 Financial Modeling Under a Lévy Market . . . 104
4.3.2 Quadratic Hedging . . . 109
4.3.3 Geometric Lévy Processes . . . 115
4.3.4 Minimal Martingale Measure . . . 121
4.3.5 The Bates Model . . . 126
5 Donsker Delta and Its Applications to Finance . . . 137
5.1 Donsker Delta . . . 137
5.2 Evaluation of E[D_{t,z} g(Y(T)) | F_t] . . . 138
5.2.1 Case I: E[D_{t,0} g(Y(T)) | F_t] . . . 139
5.2.2 Case II: E[D_{t,z} g(Y(T)) | F_t], z ≠ 0 . . . 141
5.3 Examples . . . 143
5.3.1 Merton Model . . . 143
5.3.2 Continuous Case . . . 146
6 Evaluating Greeks in Exotic Options . . . 149
6.1 Preliminaries . . . 149
6.2 Markovian Property of the Payoff . . . 153
6.3 Malliavin Derivatives of the Supremum and Infimum . . . 154
6.4 Some Important Identities . . . 165
6.5 Delta . . . 166
6.6 Gamma . . . 170
6.7 Construction of Dominating Processes . . . 173
6.7.1 Continuous-Time Monitoring . . . 174
6.7.2 Discrete-Time Monitoring . . . 176
6.8 Example: Merton Model . . . 178
6.8.1 Continuous Monitoring . . . 178
6.8.2 Discrete Monitoring . . . 179
REFERENCES . . . 181
A Wiener and Poisson Chaos Expansions . . . 186
A.1 Hermite Polynomial and Hermite Function . . . 186
A.2 Wiener Chaos Expansions . . . 187
A.3 Poisson Chaos Expansions . . . 188
VITA . . . 191
ABSTRACT
Navarro, Rolando D., Jr. PhD, Purdue University, December 2015. Malliavin Calculus in the Canonical Lévy Process: White Noise Theory and Financial Applications. Major Professor: Frederi G. Viens.

We construct a white noise theory for the canonical Lévy process of Solé, Utzet, and Vives. The construction is based on an alternative chaos expansion of square-integrable random variables. We then prove a Clark-Ocone theorem in L2(P) and under a change of measure. The Clark-Ocone representation is applied to the mean-variance hedging problem and to stochastic volatility models such as the Barndorff-Nielsen and Shephard (BNS) model and the Bates model. A Donsker Delta approach is employed on a binary option to solve the mean-variance hedging problem. Finally, we derive the Delta and Gamma of barrier and lookback options for an exp-Lévy process using the methodology of Bernis, Gobet, and Kohatsu-Higa by employing a dominating process.
1. INTRODUCTION
1.1 Motivation
In classical financial modeling, risky assets are assumed to follow the Black-Scholes-Merton model, in which the log-returns of the risky asset are normally distributed. However, stylized facts suggest that the Black-Scholes-Merton model is inadequate, and there is growing interest in financial modeling under a Lévy process, which is better suited to capturing market behavior: skewness and long-tailed distributions of asset returns, the presence of jumps, and the implied volatility smile [21], [74]. The classical canonical space for a Lévy process is constructed from the σ-field of cylinder sets and a probability measure obtained from the Kolmogorov extension theorem [73], [7]. However, Solé, Utzet and Vives [77] have formulated another construction of the canonical space for the Lévy process in order to obtain an interpretation of the Malliavin derivative D_{t,z} for the Lévy process. The derivative D_{t,0} is the Malliavin derivative with respect to the Wiener process, while D_{t,z}, z ≠ 0, is the Malliavin derivative with respect to the pure jump process, which has the form of an increment quotient. We shall refer to the canonical space constructed by Solé, Utzet, and Vives [77] as the canonical Lévy process.
White noise theory was first introduced by Hida for the Wiener process and has origins in quantum physics [45]. Subsequently, white noise theory was extended to the pure jump Lévy process [1], [64], [24]. This was done by incorporating generalized function spaces related to L2(P) in a natural way [46], including the dual pair (G, G*) and the Hida dual pair ((S), (S)*), with the inclusions (S) ⊂ G ⊂ L2(P) ⊂ G* ⊂ (S)* [27]. We extend this theory to the canonical Lévy space by first deriving an alternative chaos expansion of square-integrable random variables, giving some important characterizations such as the Wick-Skorohod identity, and then proving the Clark-Ocone theorem for L2(P).
The Clark-Ocone theorem is the explicit form of the Itô representation theorem in terms of the Malliavin derivative. The univariate version of the Clark-Ocone theorem in D^{1,2} for the canonical Lévy process can be stated as follows:

Theorem 1.1.1 [78] Let F ∈ D^{1,2} be F_T-measurable. Then

F = E[F] + \int_{[0,T] \times \mathbb{R}} E[D_{t,z}F \mid \mathcal{F}_{t^-}]\, M(dt, dz)   (1.1)

where M is the independent random measure given by (2.53).
The Clark-Ocone representation can be weakened to a representation for F ∈ L2(P) using white noise analysis, with the same form as (1.1); however, the Malliavin derivative D_{t,z} and the expectation E are generalized to a stochastic gradient and a generalized expectation, respectively. An example of a contingent claim F that is not in D^{1,2} but belongs to L2(P) is a binary option. We will evaluate the generalized conditional expectation E[D_{t,z}F | F_{t^-}] using the Donsker Delta of an Itô-Lévy process [26].
Under a change of equivalent measure Q ∼ P, Ocone [63] and Huenhe [49] derived the Clark-Ocone theorem under the change of measure in D^{1,2} for the Wiener and pure jump Lévy processes. Suzuki [80] further extended this representation to the canonical Lévy processes. Using white noise theory, the Clark-Ocone theorem under the change of measure was proven by Okur in the Wiener case [66], the pure jump Lévy case [67], and the combined Wiener and pure jump Lévy case [67]. Let u(t) and θ(t, z) be the drift terms for the Wiener process W(t) and the pure jump process Ñ(dt, dz), such that dW^Q = dW(t) + u(t)dt is a Q-Brownian motion and Ñ^Q(dt, dz) = Ñ(dt, dz) + θ(t, z)ν(dz)dt is a Q-compensated Poisson random measure. The Lévy process is in general an incomplete model; hence the measure Q is not unique. Nevertheless, there are ways of choosing the drift parameters to obtain a unique equivalent measure Q by some selection criterion, such as the Föllmer-Schweizer minimal measure and the minimal martingale measure [7].
Okur [67] assumed that u(t) and θ(t, z) are either deterministic or driven by the Brownian motion and the compensated Poisson random measure, respectively. However, this model is in general not adequate to obtain a Clark-Ocone theorem for stochastic volatility models. Hence, we generalize the drifts u(t) and θ(t, z) to be driven possibly by multivariate independent Wiener and Poisson noise sources. One example is the BNS model with drift under the minimal martingale measure, in which the drift parameter u(t) is driven by a compensated Poisson random measure. Another example is the Bates model, which is driven by another independent Wiener process.
As an application to financial modeling in Lévy processes, following the methodology of Benth, et al. [15], the hedging portfolio obtained by minimizing the quadratic hedging error under the martingale measure can be expressed in terms of the Clark-Ocone representation.
Another financial application of Malliavin calculus considered in this study is the evaluation of the sensitivities, or so-called Greeks, of exotic options under the exp-Lévy process. The Greeks are used in risk management to hedge against changes in the parameters of the option price. For a Lévy process, a closed form for the Greeks is in general not available; however, there are numerical methods for evaluating Greeks such as the finite-difference, likelihood ratio, and pathwise approaches [34]. Greeks via Malliavin calculus were first derived by Fournié, et al. [31]. One advantage of the Malliavin calculus approach is that it does not require the density function, in contrast to the likelihood ratio approach. Moreover, Bernis, Gobet, and Kohatsu-Higa were able to extend Fournié's result to a class of exotic options, which includes barrier and lookback options, using a dominating process [16], [38] for the discrete and continuous monitoring cases. We extend their result to an exp-Lévy process and find suitable dominating processes for the discrete and continuous monitoring cases.
1.2 Overview of the Dissertation
The dissertation is organized as follows. Chapter 2 presents a background review of the stochastic calculus of Lévy processes; we then discuss the Malliavin calculus for the canonical Lévy processes.
We present the white noise theory for the canonical Lévy process in Chapter 3. First, we present the construction of the canonical Lévy white noise process. Then, we show the alternative chaos expansion of a square-integrable random variable under the canonical Lévy processes and introduce the white noise Lévy process and the Lévy white noise field. From this framework, we extend the white noise theory for a canonical Lévy process to prove a Clark-Ocone theorem for L2(P). Finally, we present the multivariate extensions.
Readers interested in the financial applications of the canonical Lévy process may, on a first reading, proceed immediately to Chapter 3.11 for an overview of the important definitions and characterizations of the white noise theory in its multivariate version. Likewise, for those interested in the characterization of the white noise theory for the canonical Lévy processes, we invite the reader to explore Chapter 3 in its entirety.
We derive a Clark-Ocone formula under a change of equivalent measure Q ∼ P in Chapter 4. Then, we present an application to the mean-variance hedging portfolio under the martingale measure Q. Specific applications are presented for geometric Lévy processes and for stochastic volatility models such as the BNS model and the Bates model (Heston volatility with jumps).
We present the Donsker Delta approach for evaluating the generalized conditional expectation E[D_{t,z}F | F_{t^-}] in Chapter 5 and apply the technique to the binary option. In Chapter 6, we derive the Delta and Gamma of barrier and lookback options for an exp-Lévy process using the methodology of Bernis, Gobet, and Kohatsu-Higa by employing suitable dominating processes.
1.3 Main Results
We briefly discuss the main results and contributions of this dissertation.
Chapter 3 - Canonical Lévy White Noise Processes
• We show the alternative chaos expansion for F ∈ L2(P) in the canonical Lévy space in Theorem 3.3.4. The proof of the chaos expansion uses the chaotic representation property in Theorem 3.2.4 by Nualart and Schoutens [61]. From the results of Solé, Utzet, and Vives in Theorems 3.3.1-3.3.3 [78], we are able to construct the alternative chaos expansion for the canonical Lévy process. In addition, we show the isometry relation for this chaos expansion in Proposition 3.3.1.
This alternative chaos expansion for the canonical Lévy process is new. From this expansion, we characterize the white noise theory using a family of function spaces of stochastic test functions and stochastic distribution functions. This characterization is an extension of the Wiener case [44] and the Poisson case [64], [27].
• Let X(t) be the square-integrable Lévy process given by (2.49). In Chapter 3.5 we introduce the white noise Lévy process Ẋ(t) and show that Ẋ(t) is the derivative of X(t) in (S)*. Also, we introduce the Lévy white noise field Ṁ(t, x) and show the Radon-Nikodym derivative relation M(dt, dx) = Ṁ(t, x)µ(dt, dx) in (S)* in (3.163).
• The concepts of white noise theory on the canonical Lévy space are presented in Chapters 3.4 - 3.11. Moreover, these concepts have parallel analogs in the Wiener and Poisson cases [27], [24], [66], [67]:
– Closability of the stochastic derivative Dt,z (Theorem 3.7.1).
– F ∈ L2(P) implies D_{t,z}F ∈ G* (Theorem 3.7.2)
– Fundamental Theorem of stochastic calculus in G∗ (Theorem 3.9.1)
– Wick-Skorohod identity (Theorem 3.9.5).
– Clark-Ocone theorem for Wick polynomials (Theorem 3.10.3) and L2(P ) (Theorem 3.10.5)
• We give a multivariate extension of the white noise theory for the canonical Lévy space (Chapter 3.11).
Chapter 4 - Clark-Ocone Theorem Under the Change of Measure and Mean-Variance Hedging
• We show the Clark-Ocone theorem under the change of measure (Theorem 4.2.2) for F ∈ L2(P) ∩ L2(Q) that is F_T-measurable with FZ(T) ∈ L2(P).
• We show the mean-variance hedging portfolio with partial information under the martingale measure (Theorem 4.3.1). Furthermore, we give some specific examples of finding the mean-variance hedging portfolio in the following models:
– Geometric Lévy Processes (Chapter 4.3.3)
– BNS model (Chapter 4.3.4)
– Bates model (Chapter 4.3.5)
Chapter 5 - Donsker Delta And Its Application to Finance
• The generalized conditional expectation E[Dt,zF |Ft− ] is evaluated using the Donsker Delta approach (Chapter 5.2).
• From this result, we give examples of evaluating the mean-variance hedging portfolio for a binary option under the Merton model (Chapter 5.3.1) and in the continuous case (Chapter 5.3.2).
Chapter 6 - Evaluating Greeks In Exotic Options
• We derive the Delta (Theorem 6.5.1) and Gamma (Theorem 6.6.1) of barrier and lookback options for an exp-Lévy process using the methodology of Bernis, Gobet, and Kohatsu-Higa. Suitable dominating processes are constructed for the continuous and discrete monitoring cases (Chapter 6.7).
2. PRELIMINARIES
2.1 Lévy Processes
We present some background on Lévy processes [7], [27], [73], [78]. Let (Ω, F, P) be a complete probability space. A Lévy process X = {X(t) : t ≥ 0} is a stochastic process with the following properties:
1. X(0) = 0,P − a.s.,
2. X(t) has independent increments,
3. X(t) has stationary increments,
4. X(t) is stochastically continuous, that is, X(s) → X(t) in probability as s → t.

The Poisson random measure, also known as the jump measure, N : Ω × [0,T] × R_0 → N_0 is the counting measure defined by

N(A) = \sum_{s \in (0,t]} \mathbf{1}_{\{s : (s, \Delta X(s)) \in A\}}, \quad A \in \mathcal{B}([0,T] \times \mathbb{R}_0)   (2.1)

where R_0 = R \ {0} and ΔX(t) = X(t) − X(t⁻) is the jump of X at time t. The Lévy measure ν of X is defined as the expectation of N:

\nu(B) = E[N((0,1] \times B)] = E\Big[ \sum_{s \in (0,1]} \mathbf{1}_{\{s : \Delta X(s) \in B\}} \Big], \quad B \in \mathcal{B}(\mathbb{R}_0).   (2.2)

The Lévy measure is a σ-finite measure and satisfies

\int_{\mathbb{R}_0} (1 \wedge z^2)\, \nu(dz) < \infty.   (2.3)

The compensated Poisson random measure, also known as the compensated jump measure, Ñ : Ω × [0,T] × R_0 → R is given by

\tilde{N}(dt, dz) = N(dt, dz) - dt\, \nu(dz).   (2.4)
The Lévy process X(t) has an integral representation known as the Lévy-Itô decomposition.

Theorem 2.1.1 (Lévy-Itô decomposition theorem) Let X(t) ∈ R be a Lévy process. Then there exists a triplet (a, σ², ν) such that for all t ≥ 0

X(t) = at + \sigma W(t) + \int_{[0,t] \times \{|z| \geq 1\}} z\, N(ds, dz) + \int_{[0,t] \times \{|z| < 1\}} z\, \tilde{N}(ds, dz).   (2.5)

The triplet (a, σ², ν) is known as the Lévy triplet or the characteristic triplet. Likewise, we can write the Lévy process as

X(t) = bt + \sigma W(t) + \int_{[0,t] \times \mathbb{R}_0} z\, \tilde{N}(ds, dz)   (2.6)

where

b = a + \int_{|z| \geq 1} z\, \nu(dz).   (2.7)

The characteristic function of the Lévy process is given by the Lévy-Khintchine formula [27].
Theorem 2.1.2 (Lévy-Khintchine formula)
Let X(t) ∈ R be a Lévy process in law. Then its characteristic function is given by

E[\exp(iuX(t))] = \exp(\Psi(u)t)   (2.8)

where Ψ(u) is the characteristic exponent

\Psi(u) = iau - \frac{1}{2}\sigma^2 u^2 + \int_{\mathbb{R}_0} \big( \exp(iuz) - 1 - iuz\, \mathbf{1}_{\{|z| < 1\}} \big)\, \nu(dz)   (2.9)

where a ∈ R and σ² ≥ 0 are constants and ν = ν(dz), z ∈ R_0, is a σ-finite measure on B(R_0) satisfying

\int_{\mathbb{R}_0} (1 \wedge z^2)\, \nu(dz) < \infty.   (2.10)
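As a concrete illustration of the decomposition (2.6) (an addition to the text, not part of the original development), a square-integrable Lévy path with a finite Lévy measure can be simulated on a grid. The Python sketch below assumes a compound Poisson jump part with hypothetical intensity lam and standard normal jump sizes, so the compensator of the jump term vanishes.

```python
import numpy as np

rng = np.random.default_rng(0)

def levy_path(T=1.0, n=1000, b=0.1, sigma=0.2, lam=3.0):
    """Simulate X(t) = b t + sigma W(t) + compensated compound Poisson jumps,
    following the Levy-Ito decomposition (2.6). Jump sizes are N(0,1), so the
    jump-size mean is 0 and the compensator term is zero."""
    dt = T / n
    t = np.linspace(0.0, T, n + 1)
    dW = rng.normal(0.0, np.sqrt(dt), n)          # Brownian increments
    dN = rng.poisson(lam * dt, n)                 # jump counts per cell
    jumps = np.array([rng.normal(0.0, 1.0, k).sum() for k in dN])
    mean_jump = 0.0                               # E[z] under the jump law
    incr = b * dt + sigma * dW + jumps - lam * mean_jump * dt
    return t, np.concatenate([[0.0], np.cumsum(incr)])

t, X = levy_path()
print(X[-1])                                      # terminal value X(T)
```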
From the Lévy-Itô representation theorem, it is natural to consider an Itô-Lévy process of the form

X(t) = x + \int_0^t \alpha(s)\, ds + \int_0^t \beta(s)\, dW(s) + \int_{[0,t] \times \mathbb{R}_0} \gamma(s, z)\, \tilde{N}(ds, dz).   (2.11)

In shorthand SDE form, we have

dX(t) = \alpha(t)\, dt + \beta(t)\, dW(t) + \int_{\mathbb{R}_0} \gamma(t, z)\, \tilde{N}(dt, dz), \quad X(0) = x.   (2.12)

If the coefficients α(s), β(s), and γ(s, z) > −1 are predictable for all (s, z) ∈ R₊ × R_0 and satisfy

\int_{\mathbb{R}_+} \Big( |\alpha(s)| + \beta^2(s) + \int_{\mathbb{R}_0} \gamma^2(s, z)\, \nu(dz) \Big)\, ds < \infty, \quad P\text{-a.s.},   (2.13)

then the stochastic integrals in (2.11) are local martingales. Furthermore, if we impose the square-integrability condition

E\Big[ \int_{\mathbb{R}_+} \Big( |\alpha(s)| + \beta^2(s) + \int_{\mathbb{R}_0} \gamma^2(s, z)\, \nu(dz) \Big)\, ds \Big] < \infty,   (2.14)

then the stochastic integrals in (2.11) are martingales. We now present Itô's lemma for Itô-Lévy processes.
Theorem 2.1.3 [27] Let X(t) be an Itô-Lévy process given by (2.12), let F ∈ C^{1,2}(R₊ × R; R), and define Y(t) = F(t, X(t)). Then Y(t) is an Itô-Lévy process with SDE

dY(t) = \frac{\partial F}{\partial t}(t, X(t))\, dt + \frac{\partial F}{\partial x}(t, X(t))\big( \alpha(t)\, dt + \beta(t)\, dW(t) \big) + \frac{1}{2} \frac{\partial^2 F}{\partial x^2}(t, X(t))\, \beta^2(t)\, dt
+ \int_{\mathbb{R}_0} \Big( F(t, X(t) + \gamma(t, z)) - F(t, X(t)) - \frac{\partial F}{\partial x}(t, X(t))\, \gamma(t, z) \Big)\, \nu(dz)\, dt
+ \int_{\mathbb{R}_0} \big( F(t, X(t^-) + \gamma(t, z)) - F(t, X(t^-)) \big)\, \tilde{N}(dt, dz).   (2.15)

Extending the Itô-Lévy process to the multidimensional case, X(t) = (X_1(t), ··· , X_n(t))^T, we have the form

dX(t) = \alpha(t)\, dt + \beta(t)\, dW(t) + \int_{\mathbb{R}_0} \gamma(t, z)\, \tilde{N}(dt, dz), \quad X(0) = x,   (2.16)
where α(t) ∈ Rⁿ, β(t) ∈ R^{n×d}, and γ(t, z) ∈ R^{n×l} are predictable processes, W(t) = (W_1(t), ··· , W_d(t))^T is a vector of d independent Wiener processes, and Ñ(dt, dz) = (Ñ_1(dt, dz_1), ··· , Ñ_l(dt, dz_l))^T is a vector of l independent compensated Poisson random measures. That is, the SDE for X_i(t) is

dX_i(t) = \alpha_i(t)\, dt + \sum_{j=1}^{d} \beta_{ij}(t)\, dW_j(t) + \sum_{j=1}^{l} \int_{\mathbb{R}_0} \gamma_{ij}(t, z_j)\, \tilde{N}_j(dt, dz_j), \quad X_i(0) = x_i, \quad i \in \{1, ··· , n\}.   (2.17)

Then we have the following Itô's lemma for the multidimensional case.

Theorem 2.1.4 [27] Let X(t) be an Itô-Lévy process given by (2.17), let F ∈ C^{1,2}(R₊ × Rⁿ; R), and define Y(t) = F(t, X(t)). Then Y(t) is an Itô-Lévy process with SDE

dY(t) = \frac{\partial F}{\partial t}(t, X(t))\, dt + \sum_{i=1}^{n} \frac{\partial F}{\partial x_i}(t, X(t))\, \alpha_i(t)\, dt + \sum_{i=1}^{n} \sum_{j=1}^{d} \frac{\partial F}{\partial x_i}(t, X(t))\, \beta_{ij}(t)\, dW_j(t) + \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} \frac{\partial^2 F}{\partial x_i \partial x_j}(t, X(t))\, (\beta\beta^T)_{ij}(t)\, dt
+ \sum_{j=1}^{l} \int_{\mathbb{R}_0} \Big( F(t, X(t) + \gamma^j(t, z)) - F(t, X(t)) - \sum_{i=1}^{n} \frac{\partial F}{\partial x_i}(t, X(t))\, \gamma_{ij}(t, z_j) \Big)\, \nu(dz_j)\, dt
+ \sum_{j=1}^{l} \int_{\mathbb{R}_0} \big( F(t, X(t^-) + \gamma^j(t, z)) - F(t, X(t^-)) \big)\, \tilde{N}_j(dt, dz_j),   (2.18)

where γ^j is the j-th column of γ.
The following theorem states criteria for a Lévy process concerning the variation process and moments.

Theorem 2.1.5 [73] Let X = {X(t)}_{t≥0} be a Lévy process with characteristic triplet (a, σ², ν).

(i) X has finite variation if and only if

\sigma = 0, \quad \int_{|z| < 1} |z|\, \nu(dz) < \infty.   (2.19)

(ii) X has a finite n-th absolute moment, n ∈ N, that is,

E[|X(t)|^n] < \infty, \ \forall t > 0 \quad \Longleftrightarrow \quad \int_{|z| \geq 1} |z|^n\, \nu(dz) < \infty.   (2.20)

(iii) X has a finite exponential moment E[e^{uX(t)}], where u ∈ R and t > 0, if and only if

\int_{|z| \geq 1} e^{uz}\, \nu(dz) < \infty.   (2.21)

In this case,

E[e^{uX(t)}] = e^{t\Psi(-iu)}.   (2.22)
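The exponential moment formula (2.22) lends itself to a quick numerical sanity check. The following Python sketch (an added illustration under assumed parameters, not part of the original text) compares a Monte Carlo estimate of E[e^{uX(T)}] with exp(TΨ(−iu)) for X(T) = σW(T) plus a compensated compound Poisson part with N(0, s²) jumps, for which Ψ(−iu) = σ²u²/2 + λ(e^{u²s²/2} − 1).

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, lam, s, u, T, n = 0.2, 3.0, 0.5, 1.0, 1.0, 200_000

# X(T) = sigma W(T) + sum of N(0, s^2) jumps (jump mean 0, so no compensation)
WT = rng.normal(0.0, sigma * np.sqrt(T), n)
counts = rng.poisson(lam * T, n)                  # number of jumps per path
JT = rng.normal(0.0, s, counts.sum())             # all jump sizes, flattened
jump_sum = np.bincount(np.repeat(np.arange(n), counts), weights=JT, minlength=n)
XT = WT + jump_sum

mc = np.exp(u * XT).mean()                        # Monte Carlo moment
exact = np.exp(T * (0.5 * sigma**2 * u**2
                    + lam * (np.exp(0.5 * u**2 * s**2) - 1.0)))
print(mc, exact)                                  # agree up to Monte Carlo error
```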
2.2 Moment Inequalities
We introduce some moment inequalities that will be useful for finding upper bounds on moments in both the continuous and pure jump cases [7], [53]. Let F be a square-integrable, adapted process and denote the Wiener integral

M(t) = \int_0^t F(s)\, dW(s).   (2.23)

Then M is a square-integrable martingale. From Burkholder's inequality, followed by Doob's martingale inequality, for any p ≥ 2 there exists C_p > 0 such that

E\Big[ \sup_{s \in [0,t]} |M(s)|^p \Big] \leq C_p\, E\big[ [M, M]_t^{p/2} \big].   (2.24)

On the other hand, let H be a predictable process and denote the compensated Poisson integral

I(t) = \int_{[0,t] \times A} H(s, z)\, \tilde{N}(ds, dz)   (2.25)

where A ∈ B(R_0). Then for p ≥ 2 there exists D_p > 0 such that

E\Big[ \sup_{s \in [0,t]} |I(s)|^p \Big] \leq D_p \Bigg( E\Big[ \Big( \int_{[0,t] \times A} |H(s, z)|^2\, \nu(dz)\, ds \Big)^{p/2} \Big] + E\Big[ \int_{[0,t] \times A} |H(s, z)|^p\, \nu(dz)\, ds \Big] \Bigg).   (2.26)
2.3 Geometric Lévy Processes

We let S(s), s ∈ [0,T], be a risky-asset (i.e., stock) price process modeled as a geometric Lévy process of the form

dS(s) = S(s)\Big( \mu(s)\, ds + \sigma(s)\, dW(s) + \int_{\mathbb{R}_0} \theta(s, z)\, \tilde{N}(ds, dz) \Big), \quad s \in [t, T], \qquad S(t) = x.   (2.27)

We denote by

Y(s) = \log S(s), \quad s \in [0,T]   (2.28)

the log-returns. Then, from Itô's lemma with f(t, x) = log x, we obtain an Itô-Lévy process of the form

dY(s) = \alpha(s)\, ds + \beta(s)\, dW(s) + \int_{\mathbb{R}_0} \gamma(s, z)\, \tilde{N}(ds, dz), \quad s \in [t, T], \qquad Y(t) = y   (2.29)

where

\alpha(s) = \mu(s) - \frac{\sigma^2(s)}{2} + \int_{\mathbb{R}_0} \big[ \log(1 + \theta(s, z)) - \theta(s, z) \big]\, \nu(dz),
\beta(s) = \sigma(s),
\gamma(s, z) = \log(1 + \theta(s, z)),
y = \log x,   (2.30)

y is a constant, and α(s), β(s), and γ(s, z) > −1 are deterministic for all (s, z) ∈ [0,T] × R_0 such that

\int_0^T \Big( |\alpha(s)| + \beta^2(s) + \int_{\mathbb{R}_0} \gamma^2(s, z)\, \nu(dz) \Big)\, ds < \infty.   (2.31)

Then we have the conditional characteristic function stated in the following lemma.
Lemma 2.3.1

E[\exp(iuY(T)) \mid \mathcal{F}_t] = \exp\Bigg( iuY(t) + \int_t^T \Big( iu\alpha(s) - \frac{u^2\beta^2(s)}{2} \Big)\, ds + \int_{[t,T] \times \mathbb{R}_0} \big( \exp(iu\gamma(s, z)) - 1 - iu\gamma(s, z) \big)\, \nu(dz)\, ds \Bigg)   (2.32)

Proof We let

F = F(s, y) = \exp(iuy).   (2.33)

Then, from Itô's lemma, F_s = F(s, Y(s)) satisfies

dF_s = F_{s^-}\Big( a(s)\, ds + b(s)\, dW(s) + \int_{\mathbb{R}_0} c(s, z)\, \tilde{N}(ds, dz) \Big)   (2.34)

where

a(s) = iu\alpha(s) - \frac{u^2\beta^2(s)}{2} + \int_{\mathbb{R}_0} \big( \exp(iu\gamma(s, z)) - 1 - iu\gamma(s, z) \big)\, \nu(dz),
b(s) = iu\beta(s),
c(s, z) = \exp(iu\gamma(s, z)) - 1.   (2.35)

Integrating (2.34), we get

F_T = F_t + \int_t^T a(s)F_s\, ds + \int_t^T b(s)F_s\, dW(s) + \int_{[t,T] \times \mathbb{R}_0} c(s, z)F_{s^-}\, \tilde{N}(ds, dz).   (2.36)

Taking the conditional expectation with respect to F_t gives

E[F_T \mid \mathcal{F}_t] = F_t + \int_t^T a(s)E[F_s \mid \mathcal{F}_t]\, ds.   (2.37)

We let m(s) = E[F_s | F_t]; then by differentiating the above equation, we obtain the ODE

dm(s) = a(s)m(s)\, ds, \quad s \in [t, T], \qquad m(t) = F_t.   (2.38)

Solving the ODE gives

m(s) = F_t \exp\Big( \int_t^s a(u)\, du \Big).   (2.39)

Hence, evaluating at s = T, we obtain the desired result.
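Lemma 2.3.1 can likewise be verified numerically. The Python sketch below (a hedged illustration with hypothetical constant coefficients, not part of the original text) takes t = 0, a constant jump contribution γ(s, z) = c on a single jump size with intensity lam, and compares the Monte Carlo characteristic function of Y(T) with the closed form (2.32).

```python
import numpy as np

rng = np.random.default_rng(2)
y, alpha, beta, c, lam, u, T, n = 0.0, 0.05, 0.3, 0.4, 2.0, 1.5, 1.0, 200_000

# Y(T) = y + alpha T + beta W(T) + c (N(T) - lam T): constant coefficients,
# gamma(s, z) = c on a single jump size with nu(R_0) = lam
NT = rng.poisson(lam * T, n)
YT = y + alpha * T + beta * rng.normal(0.0, np.sqrt(T), n) + c * (NT - lam * T)

mc = np.exp(1j * u * YT).mean()
exact = np.exp(1j * u * y + (1j * u * alpha - 0.5 * u**2 * beta**2) * T
               + lam * T * (np.exp(1j * u * c) - 1.0 - 1j * u * c))
print(mc, exact)        # the two characteristic functions agree up to MC error
```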
2.4 Stochastic Differential Equations
We present the conditions for the existence of strong solutions for Lévy-driven SDEs, namely Lipschitz and growth conditions. Let X be a càdlàg semimartingale with SDE

dX(t) = \alpha(t, X(t))\, dt + \beta(t, X(t))\, dW(t) + \int_{\mathbb{R}_0} \gamma(t, X(t), z)\, \tilde{N}(dt, dz)   (2.40)

where α, β : R₊ × R → R are jointly measurable and F_t-adapted, and γ : R₊ × R × R_0 → R is jointly measurable and F_t-predictable. We say that the SDE (2.40) has a strong solution if X(t) is a pathwise-unique, F_t-adapted solution. To ensure a strong solution, the following conditions should be satisfied [76]:

(i) Growth conditions:

|\alpha(t, x)| \leq c(t)(1 + |x|), \qquad |\beta(t, x)|^2 + \int_{\mathbb{R}_0} |\gamma(t, x, z)|^2\, \nu(dz) \leq c(t)(1 + |x|^2)   (2.41)

where c(t) ≥ 0 is some deterministic function such that

C(T) \equiv \int_0^T c(t)\, dt < \infty \quad \forall T > 0.   (2.42)

(ii) Lipschitz conditions:

|\alpha(t, x) - \alpha(t, y)| \leq c(t)|x - y|, \qquad |\beta(t, x) - \beta(t, y)|^2 + \int_{\mathbb{R}_0} |\gamma(t, x, z) - \gamma(t, y, z)|^2\, \nu(dz) \leq c(t)|x - y|^2.   (2.43)

(iii) Initial condition:

X(0) \in \mathcal{F}_0, \quad E[X^2(0)] < \infty.   (2.44)

The existence of the strong solution implies that

E\Big[ \sup_{t \in [0,T]} X^2(t) \Big] \leq k(T) < \infty   (2.45)

where k(T) depends on T and C(T) only.
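Under the growth and Lipschitz conditions above, the strong solution can be approximated by an Euler-Maruyama scheme; the Python sketch below is an illustrative discretization with hypothetical linear coefficients, not a construction from the original text.

```python
import numpy as np

rng = np.random.default_rng(3)

def euler_jump_sde(x0, alpha, beta, gamma, lam, T=1.0, n=1000):
    """Euler-Maruyama scheme for (2.40) with nu = lam * delta_1 (unit jumps):
    dX = alpha(t,X) dt + beta(t,X) dW + gamma(t,X) d(compensated Poisson)."""
    dt = T / n
    X = np.empty(n + 1)
    X[0] = x0
    for k in range(n):
        t = k * dt
        dW = rng.normal(0.0, np.sqrt(dt))
        dN = rng.poisson(lam * dt)        # jumps in (t, t + dt]
        X[k + 1] = (X[k] + alpha(t, X[k]) * dt + beta(t, X[k]) * dW
                    + gamma(t, X[k]) * (dN - lam * dt))
    return X

# linear coefficients are Lipschitz and of linear growth, so (i)-(iii) hold
X = euler_jump_sde(1.0, lambda t, x: 0.1 * x, lambda t, x: 0.2 * x,
                   lambda t, x: 0.3 * x, lam=2.0)
print(X[-1])
```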
2.5 Canonical Lévy Space

The usual canonical Lévy space is constructed from the set of càdlàg functions with the σ-field generated by the cylinder sets and with the measure given by the Kolmogorov extension theorem [73]. The alternative construction of the canonical Lévy space by Solé, Utzet, and Vives [77] was designed to provide a probabilistic interpretation of the Malliavin derivative D_{t,x}; in their construction, the gradient operator becomes the sum of a derivative operator and an increment quotient operator [5]. Consider the canonical Lévy process

(\Omega, \mathcal{F}, P) = (\Omega_W \times \Omega_J, \mathcal{F}_W \otimes \mathcal{F}_J, P_W \otimes P_J)   (2.46)

where (Ω_W, F_W, P_W) is the canonical Wiener space and (Ω_J, F_J, P_J) is the canonical pure jump Lévy space. Let X(t) be a Lévy process with triplet (a, σ², ν). From the Lévy-Itô decomposition [27], X(t) can be expressed as

X(t) = bt + \sigma W(t) + \int_{[0,t] \times \mathbb{R}_0} z\, \tilde{N}(ds, dz)   (2.47)

where

b = a + \int_{|z| \geq 1} z\, \nu(dz).   (2.48)

Let X(t) be a centered, square-integrable Lévy process; then X(t) can be written as

X(t) = \sigma W(t) + \int_{[0,t] \times \mathbb{R}_0} z\, \tilde{N}(ds, dz).   (2.49)

Its characteristic function is given by

E[\exp(iuX(t))] = \exp\Bigg( \Big( -\frac{1}{2}\sigma^2 u^2 + \int_{\mathbb{R}_0} (\exp(iuz) - 1 - iuz)\, \nu(dz) \Big)\, t \Bigg).   (2.50)

From the moment theorem [7], for p ≥ 1,

E[|X(t)|^p] < \infty \quad \Longleftrightarrow \quad \int_{|z| \geq 1} |z|^p\, \nu(dz) < \infty.   (2.51)

Hence, the square-integrability assumption on X(t), together with (2.51) and (2.3), implies

\int_{\mathbb{R}_0} z^2\, \nu(dz) < \infty.   (2.52)
Itô [50] has extended the centered square-integrable Lévy process X to an independent random measure M on (R₊ × R, B(R₊ × R)), constructed as follows:

M(E) = \sigma \int_{E_0} dW(t) + \int_{E'} z\, \tilde{N}(dt, dz)   (2.53)

where E ∈ B(R₊ × R), E_0 = {t ∈ R₊ : (t, 0) ∈ E}, and E' = E \ E_0. Then for E_1, E_2 ∈ B(R₊ × R) such that µ(E_1) < ∞ and µ(E_2) < ∞,

E[M(E_1)M(E_2)] = \mu(E_1 \cap E_2)   (2.54)

where µ is the measure on ([0,T] × R, B([0,T] × R)) given by

\mu(E) = \sigma^2 \int_{E_0} dt + \int_{E'} z^2\, \nu(dz)\, dt, \quad E \in \mathcal{B}([0,T] \times \mathbb{R}).   (2.55)

In differential form, we have

\mu(dt, dz) = \sigma^2 \delta_0(dz)\, dt + z^2 (1 - \delta_0(z))\, \nu(dz)\, dt = \lambda(dt)\, \eta(dz)   (2.56)

where λ(dt) = dt is the Lebesgue measure and

\eta(dz) = \sigma^2 \delta_0(dz) + z^2 (1 - \delta_0(z))\, \nu(dz).   (2.57)
2.6 Iterated Lévy-Itô Integral

Let f ∈ L²(µⁿ) = L²((λ × η)ⁿ) = L²(([0,T] × R)ⁿ) be a deterministic function such that

\|f\|^2_{L^2(\mu^n)} = \int_{[0,T] \times \mathbb{R}} \cdots \int_{[0,T] \times \mathbb{R}} |f((t_1, z_1), \cdots, (t_n, z_n))|^2\, \mu(dt_1, dz_1) \cdots \mu(dt_n, dz_n) < \infty.   (2.58)

The symmetrization of f, denoted f^∧, over the pairs (t_1, z_1), ··· , (t_n, z_n) is given by

f^{\wedge}((t_1, z_1), \cdots, (t_n, z_n)) = \frac{1}{n!} \sum_{\sigma \in S_n} f((t_{\sigma(1)}, z_{\sigma(1)}), \cdots, (t_{\sigma(n)}, z_{\sigma(n)}))   (2.59)

where σ = (σ(1), ··· , σ(n)) is a permutation of {1, ··· , n} and S_n is the set of permutations of {1, ··· , n}.
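The symmetrization (2.59) is a finite average over permutations, which the short Python sketch below makes explicit (an added illustration; the kernel f and its arguments are hypothetical).

```python
from itertools import permutations
from math import factorial

def symmetrize(f, n):
    """Return the symmetrization f^ of an n-argument kernel f over its
    argument pairs (t_i, z_i), as in (2.59)."""
    def f_sym(*pairs):                 # pairs = ((t1, z1), ..., (tn, zn))
        return sum(f(*(pairs[i] for i in p))
                   for p in permutations(range(n))) / factorial(n)
    return f_sym

# example: f((t1,z1),(t2,z2)) = t1 * z2 symmetrizes to (t1*z2 + t2*z1) / 2
f = lambda p1, p2: p1[0] * p2[1]
fs = symmetrize(f, 2)
print(fs((1.0, 2.0), (3.0, 4.0)))      # (1*4 + 3*2) / 2 = 5.0
```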
Denote S_n = {((t_1, z_1), ··· , (t_n, z_n)) : 0 < t_1 < ··· < t_n < T, z_i ∈ R, i ∈ {1, ··· , n}}. For f ∈ L²(µⁿ), define the n-fold iterated integral over S_n by

J_n(f) \equiv \int_{[0,T] \times \mathbb{R}} \int_{[0,t_n^-) \times \mathbb{R}} \cdots \int_{[0,t_2^-) \times \mathbb{R}} f((t_1, z_1), \cdots, (t_n, z_n))\, M(dt_1, dz_1) \cdots M(dt_{n-1}, dz_{n-1})\, M(dt_n, dz_n).   (2.60)

Also, for f ∈ L²(µⁿ), define the n-fold integral over ([0,T] × R)ⁿ by

I_n(f) \equiv \int_{[0,T] \times \mathbb{R}} \cdots \int_{[0,T] \times \mathbb{R}} f((t_1, z_1), \cdots, (t_n, z_n))\, M(dt_1, dz_1) \cdots M(dt_n, dz_n).   (2.61)

Denote by L²_s(µⁿ) the subspace of symmetric functions in L²(µⁿ). Then, for f ∈ L²_s(µⁿ), we have the identity

I_n(f) = n!\, J_n(f).   (2.62)
The iterated integral I_n has the following properties [78]:

1. Symmetrization:

I_n(f) = I_n(f^{\wedge}), \quad f \in L^2(\mu^n),   (2.63)

2. Linearity:

I_n(af + bg) = aI_n(f) + bI_n(g), \quad f, g \in L^2(\mu^n),\ a, b \in \mathbb{R},   (2.64)

3. Isometry:

E[I_n(f)I_m(g)] = n!\, \langle f^{\wedge}, g^{\wedge} \rangle_{L^2(\mu^n)}\, \delta_{mn}, \quad f \in L^2(\mu^n),\ g \in L^2(\mu^m).   (2.65)

Itô has shown the following chaos expansion for the Lévy space.

Theorem 2.6.1 [50] Let F ∈ L²(P). Then F has the chaos expansion

F = \sum_{n=0}^{\infty} I_n(f_n)   (2.66)

where we set I_0(f_0) = E[F]. The chaos expansion is unique if f_n ∈ L²_s(µⁿ) for all n ∈ N. Furthermore, we have the isometry relation

\|F\|^2_{L^2(P)} = \sum_{n=0}^{\infty} n!\, \|f_n\|^2_{L^2(\mu^n)}, \quad f_n \in L^2_s(\mu^n).   (2.67)
2.7 Skorohod Integral
Definition 2.7.1 [77], [78] Let F ∈ L²(P × µ) with chaos expansion

F(t, z) = \sum_{n=0}^{\infty} I_n(f_n(\cdot, (t, z)))   (2.68)

such that

\sum_{n=0}^{\infty} (n+1)!\, \|\tilde{f}_n\|^2_{L^2(\mu^{n+1})} < \infty   (2.69)

where \tilde{f}_n ∈ L²_s(µ^{n+1}) is the symmetrization of f_n in all n + 1 pairs of variables. Then we define the Skorohod integral of F with respect to M by

\delta(F) = \int_{\mathbb{R}_+ \times \mathbb{R}} F(t, z)\, M(\delta t, dz) = \sum_{n=0}^{\infty} I_{n+1}(\tilde{f}_n).   (2.70)

We say that F is Skorohod integrable if the series converges in L²(P), that is,

\|\delta(F)\|^2 = \sum_{n=0}^{\infty} (n+1)!\, \|\tilde{f}_n\|^2_{L^2(\mu^{n+1})} < \infty.   (2.71)

Definition 2.7.2 [77], [78], [80] Let F ∈ L²(P) with chaos expansion of the form (2.66). Denote by D^{1,2} ≡ Dom D the set of F ∈ L²(P) such that

\sum_{n=1}^{\infty} n\, n!\, \|f_n\|^2_{L^2(\mu^n)} < \infty.   (2.72)

For F ∈ Dom D, the Malliavin derivative DF : Ω × [0,T] × R → R is defined by

D_{t,z}F = \sum_{n=1}^{\infty} n\, I_{n-1}(f_n(\cdot, (t, z)))   (2.73)

with convergence in L²(P × µ). Moreover, we have

\|D_{t,z}F\|^2_{L^2(P \times \mu)} = \sum_{n=1}^{\infty} n\, n!\, \|f_n\|^2_{L^2(\mu^n)} < \infty.   (2.74)

Dom D is a Hilbert space with the scalar product, for F, G ∈ Dom D,

\langle F, G \rangle = E[FG] + E\Big[ \int_{[0,T] \times \mathbb{R}} D_{t,z}F\, D_{t,z}G\, \mu(dt, dz) \Big]   (2.75)

and D is a closed operator from Dom D to L²(P × µ).

For f : ([0,T] × R)ⁿ → R, we have the decomposition

\int_{([0,T] \times \mathbb{R})^n} f\, d\mu^{\otimes n} = \sigma^2 \int_{[0,T] \times ([0,T] \times \mathbb{R})^{n-1}} f(\cdot, (t, 0))\, dt\, d\mu^{\otimes (n-1)} + \int_{[0,T] \times \mathbb{R}_0 \times ([0,T] \times \mathbb{R})^{n-1}} f(\cdot, (t, z))\, z^2\, \nu(dz)\, dt\, d\mu^{\otimes (n-1)}.   (2.76)

We define the spaces Dom D⁰ and Dom D¹ as follows.
Definition 2.7.3 [77], [78], [80] Let F ∈ L²(P) with chaos expansion of the form (2.66).

(i) Dom D⁰ is the set of F ∈ L²(P) such that σ > 0 and

\sum_{n=1}^{\infty} n\, n! \int_0^T \|f_n(\cdot, (t, 0))\|^2_{L^2(\mu^{n-1})}\, \sigma^2\, dt < \infty.   (2.77)

For F ∈ Dom D⁰, we define

D_{t,0}F = \sum_{n=1}^{\infty} n\, I_{n-1}(f_n(\cdot, (t, 0)))   (2.78)

with convergence in L²(P × λ).

(ii) Dom D¹ is the set of F ∈ L²(P) such that ν ≠ 0 and

\sum_{n=1}^{\infty} n\, n! \int_{[0,T] \times \mathbb{R}_0} \|f_n(\cdot, (t, z))\|^2_{L^2(\mu^{n-1})}\, z^2\, \nu(dz)\, dt < \infty.   (2.79)

For F ∈ Dom D¹, we define

D_{t,z}F = \sum_{n=1}^{\infty} n\, I_{n-1}(f_n(\cdot, (t, z))), \quad z \neq 0   (2.80)

with convergence in L²(P × z²ν(dz)dt).

Remark 2.7.1 If σ > 0 and ν ≠ 0, then Dom D = Dom D⁰ ∩ Dom D¹ ⊂ L²(P).
Theorem 2.7.2 [80] (Chain rule)
Let F = (F_1, ··· , F_n) with F_i ∈ D^{1,2} for i ∈ {1, ··· , n}, and let ϕ ∈ C¹(Rⁿ, R). Suppose that

(i) ϕ(F) ∈ L²(P),

(ii) \sum_{k=1}^{n} \frac{\partial \varphi(F)}{\partial x_k} D_{t,0}F_k \in L^2(P \times \lambda),

(iii) \frac{\varphi(F_1 + zD_{t,z}F_1, \cdots, F_n + zD_{t,z}F_n) - \varphi(F_1, \cdots, F_n)}{z} \in L^2(P \times z^2\nu(dz)dt).

Then ϕ(F) ∈ D^{1,2} and

D_{t,z}\varphi(F) = \sum_{k=1}^{n} \frac{\partial \varphi(F)}{\partial x_k} D_{t,0}F_k\, \mathbf{1}_{\{z=0\}} + \frac{\varphi(F_1 + zD_{t,z}F_1, \cdots, F_n + zD_{t,z}F_n) - \varphi(F_1, \cdots, F_n)}{z}\, \mathbf{1}_{\{z \neq 0\}}.   (2.81)
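As a worked special case of (2.81) (added here for illustration, assuming conditions (i)-(iii) hold), take n = 1 and ϕ(x) = x². Then

D_{t,z}F^2 = 2F\, D_{t,0}F\, \mathbf{1}_{\{z=0\}} + \frac{(F + zD_{t,z}F)^2 - F^2}{z}\, \mathbf{1}_{\{z \neq 0\}} = 2F\, D_{t,0}F\, \mathbf{1}_{\{z=0\}} + \big( 2F\, D_{t,z}F + z(D_{t,z}F)^2 \big)\, \mathbf{1}_{\{z \neq 0\}},

so on the jump part (z ≠ 0) the operator is not a derivation: the correction term z(D_{t,z}F)² appears.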
Definition 2.7.4 [77], [78] (The space L^{1,2}) Let F ∈ L²(P × µ) with chaos expansion of the form (2.68) be such that F(t, z) ∈ D^{1,2} for µ-a.e. (t, z) ∈ [0,T] × R and DF ∈ L²(P × µ^{⊗2}). In terms of the chaos expansion, this is equivalent to

\sum_{n=1}^{\infty} n\, n!\, \|\hat{f}_n\|^2_{L^2(\mu^{n+1})} < \infty.   (2.82)

Remark 2.7.3 The above chaos expansion condition implies L^{1,2} ⊂ D^{1,2}.

We state some important characterizations of L^{1,2}:

(i) Let F, G ∈ L^{1,2}. Then

E[\delta(F)\delta(G)] = E\Big[ \int_{[0,T] \times \mathbb{R}} F(t, z)G(t, z)\, \mu(dt, dz) \Big] + E\Big[ \int_{([0,T] \times \mathbb{R})^2} D_{t,z}F(s, x)\, D_{s,x}G(t, z)\, \mu(ds, dx)\, \mu(dt, dz) \Big].   (2.83)

(ii) Let F ∈ L^{1,2} be such that D_{t,z}F ∈ Dom δ for µ-a.e. (t, z) ∈ [0,T] × R. Then δ(F) ∈ D^{1,2} and

D_{t,z}\delta(F) = F(t, z) + \delta(D_{t,z}F)   (2.84)

for µ-a.e. (t, z) ∈ [0,T] × R.
2.8 Predictable Process
Definition 2.8.1 [27] (Predictable process) A predictable process is a stochastic process measurable with respect to the σ-field generated by the sets

A \times (s, t] \times B, \quad A \in \mathcal{F}_s,\ 0 \leq s < t,\ B \in \mathcal{B}(\mathbb{R}_0).   (2.85)

Note: Any measurable, F-adapted, left-continuous (with respect to t) process is predictable [27].

We shall present some important theorems related to predictable processes. The first is the isometry relation, presented in the following theorem.

Theorem 2.8.1 [7], [78] Let F and G be µ-square-integrable predictable processes. Then

E\Big[ \int_{[0,T] \times \mathbb{R}} F(t, z)\, M(dt, dz) \int_{[0,T] \times \mathbb{R}} G(t, z)\, M(dt, dz) \Big] = E\Big[ \int_{[0,T] \times \mathbb{R}} F(t, z)G(t, z)\, \mu(dt, dz) \Big].   (2.86)

Theorem 2.8.2 [78] Let F be a µ-square-integrable predictable process. Then

\int_{[0,T] \times \mathbb{R}} F(t, z)\, M(dt, dz) = \sigma \int_0^T F(t, 0)\, dW(t) + \int_{[0,T] \times \mathbb{R}_0} zF(t, z)\, \tilde{N}(dt, dz).   (2.87)

Theorem 2.8.3 [78] Let F ∈ L²(P × µ) be a predictable process. Then F ∈ Dom δ and

\delta(F) = \int_{[0,T] \times \mathbb{R}} F(t, z)\, M(dt, dz).   (2.88)

Finally, we have the Clark-Ocone theorem in D^{1,2}, stated as follows.

Theorem 2.8.4 [78] Let F ∈ D^{1,2} be F_T-measurable. Then

F = E[F] + \int_{[0,T] \times \mathbb{R}} E[D_{t,z}F \mid \mathcal{F}_{t^-}]\, M(dt, dz)   (2.89)

where M is the independent random measure given by (2.53).
3. CANONICAL LÉVY WHITE NOISE PROCESSES

3.1 Construction of Canonical Lévy White Noise Process
We construct the canonical Lévy white noise process [56] using a procedure parallel to the derivation of the Wiener and Poisson white noise processes [46]. Let S ≡ S(R) be the Schwartz space of test functions, which consists of rapidly decreasing smooth functions f ∈ C^∞(R) such that, for all α, β ∈ N_0,

\|f\|_{\alpha,\beta} = \sup_{x \in \mathbb{R}} |x^{\alpha} f^{(\beta)}(x)| < \infty.   (3.1)

S(R) is a Fréchet space with respect to the seminorms ‖f‖_{α,β}. Its dual S′ ≡ S′(R) is the Schwartz space of tempered distributions endowed with the weak* topology. The action of ω ∈ S′(R) on φ ∈ S(R) is given by the mapping w : S(R) × S′(R) → R,

w(\phi, \omega) = \langle \omega, \phi \rangle.   (3.2)

Moreover, we have the inclusions

\mathcal{S}(\mathbb{R}) \subset L^2(\mathbb{R}) \subset \mathcal{S}'(\mathbb{R}).   (3.3)

We construct the canonical Lévy white noise process on Ω = S′(R) using the Bochner-Minlos theorem, which is stated as follows:
Theorem 3.1.1 There exists a probability measure P on S′(R) such that

g(\phi) = E[e^{i\langle \omega, \phi \rangle}] = \int_{\mathcal{S}'(\mathbb{R})} e^{i\langle \omega, \phi \rangle}\, dP(\omega)   (3.4)

if and only if g satisfies the following conditions:

a.) g(0) = 1,

b.) g is positive definite, that is, for φ_i ∈ S(R) and c_i ∈ C such that c = (c_1, ··· , c_n)^T ≠ 0_n, i ∈ {1, ··· , n}, for all n ∈ N,

\sum_{i=1}^{n} \sum_{j=1}^{n} c_i \bar{c}_j\, g(\phi_i - \phi_j) > 0,   (3.5)

c.) g is continuous in the Fréchet topology.

In our construction, we let

g(\phi) = \exp\Big( \int_{\mathbb{R}} \Psi(\phi(y))\, dy \Big)   (3.6)

where

\Psi(u) = -\frac{\sigma^2 u^2}{2} + \int_{\mathbb{R}_0} (e^{iuz} - iuz - 1)\, \nu(dz).   (3.7)

Claim: The functional g satisfies the conditions of the Bochner-Minlos theorem.
Proof We can express g as the product

g(\phi) = f(\phi)h(\phi)   (3.8)

where

f(\phi) = \exp\Big( -\frac{\sigma^2}{2} \int_{\mathbb{R}} |\phi(y)|^2\, dy \Big) = \exp\Big( -\frac{\sigma^2}{2} \|\phi\|^2_{L^2(\mathbb{R})} \Big),   (3.9)

h(\phi) = \exp\Big( \int_{\mathbb{R}} \int_{\mathbb{R}_0} (e^{i\phi(y)z} - i\phi(y)z - 1)\, \nu(dz)\, dy \Big).   (3.10)

Then f and h satisfy the Bochner-Minlos conditions corresponding to the Wiener and the compensated Poisson cases, respectively [46]. Clearly, g satisfies conditions (a) and (c), so it suffices to check (b) to prove our assertion. Define the following n × n matrices:

G_n = \{g(\phi_i - \phi_j)\}, \quad F_n = \{f(\phi_i - \phi_j)\}, \quad H_n = \{h(\phi_i - \phi_j)\}.   (3.11)

From (3.8) and (3.11),

G_n = \{g(\phi_i - \phi_j)\}_{i,j \in \{1, \cdots, n\}} = \{f(\phi_i - \phi_j)h(\phi_i - \phi_j)\} = F_n \circ H_n   (3.12)

where ∘ denotes the Hadamard product. Since f and h are positive definite, the matrices F_n and H_n are positive definite for all n ∈ N, and Schur's product theorem [47] implies that G_n is positive definite. Since this holds for all n ∈ N, g is positive definite, which proves our assertion.
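The positive definiteness established above can be probed numerically. The Python sketch below (an added illustration; the grid, the test functions, and the single-atom Lévy measure ν = lam·δ_{z0} are all hypothetical choices) builds the Gram matrix G_n = {g(φ_i − φ_j)} for a few discretized test functions and checks that its smallest eigenvalue is nonnegative.

```python
import numpy as np

rng = np.random.default_rng(4)
y, dy = np.linspace(-5.0, 5.0, 400, retstep=True)
sigma, lam, z0 = 1.0, 2.0, 0.5

def Psi(u):
    # characteristic exponent (3.7) with nu = lam * delta_{z0}
    return -0.5 * sigma**2 * u**2 + lam * (np.exp(1j * u * z0) - 1j * u * z0 - 1.0)

def g(phi):
    # functional (3.6), with the y-integral taken as a Riemann sum on the grid
    return np.exp(Psi(phi).sum() * dy)

# Gram matrix for a handful of random Gaussian bump test functions
phis = [rng.uniform(-1, 1) * np.exp(-(y - rng.uniform(-1, 1))**2) for _ in range(6)]
G = np.array([[g(p - q) for q in phis] for p in phis])
print(np.linalg.eigvalsh((G + G.conj().T) / 2).min())   # >= 0 (up to round-off)
```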
By taking φ(y) = tϕ(y) with t ∈ R fixed, we obtain from (3.7)

E[e^{it\langle \omega, \varphi \rangle}] = \exp\Bigg( -\frac{\sigma^2 t^2}{2} \int_{\mathbb{R}} |\varphi(y)|^2\, dy + \int_{\mathbb{R}} \int_{\mathbb{R}_0} (\exp(itz\varphi(y)) - itz\varphi(y) - 1)\, \nu(dz)\, dy \Bigg).   (3.13)

Claim: Let ϕ ∈ S(R). Then

E[\langle \omega, \varphi \rangle] = 0,   (3.14)

E[\langle \omega, \varphi \rangle^2] = \zeta \int_{\mathbb{R}} \varphi^2(y)\, dy   (3.15)

where

\zeta = \sigma^2 + \int_{\mathbb{R}_0} z^2\, \nu(dz).   (3.16)

Proof By a density argument, it suffices to show the identity for ϕ ∈ C_0^∞(R). Assume the Lévy measure ν is supported on [−r, r] \ {0} for some r > 0. Then, expanding the terms in (3.13) in a Taylor series, we obtain

\sum_{n=0}^{\infty} \frac{i^n t^n}{n!} E[\langle \omega, \varphi \rangle^n] = \sum_{n=0}^{\infty} \frac{1}{n!} \Bigg( \int_{\mathbb{R}} \Bigg( -\frac{\sigma^2 t^2 \varphi^2(y)}{2} + \int_{\mathbb{R}_0} \sum_{k=2}^{\infty} \frac{i^k t^k z^k \varphi^k(y)}{k!}\, \nu(dz) \Bigg)\, dy \Bigg)^n.   (3.17)

Collecting the t and t² coefficients yields the desired result.
We extend the definition of ⟨ω, φ⟩ from φ ∈ S(R) to L²(R). Since S(R) is dense in L²(R), for arbitrary ϕ ∈ L²(R) there exists a sequence ϕ_n ∈ S(R) such that ϕ_n → ϕ in L²(R). By (3.14)-(3.15), as m, n → ∞,

\|\langle \omega, \varphi_n \rangle - \langle \omega, \varphi_m \rangle\|_{L^2(P)} = \|\langle \omega, \varphi_n - \varphi_m \rangle\|_{L^2(P)} \to 0.   (3.18)

Hence {⟨ω, ϕ_n⟩ : n ∈ N} is a Cauchy sequence in L²(P), and we define ⟨ω, ϕ⟩ as its limit. Then define X̃(t, ω) ≡ ⟨ω, χ_{[0,t]}⟩, where χ_{[0,t]} ∈ L²(R) is given by

\chi_{[0,t]}(s) = \begin{cases} 1, & s \in [0,t],\ t \geq 0, \\ -1, & s \in [-t, 0),\ t < 0, \\ 0, & \text{otherwise}. \end{cases}   (3.19)

Taking the characteristic function of X̃(t) yields

E[\exp(iu\tilde{X}(t))] = E[\exp(iu\langle \omega, \chi_{[0,t]} \rangle)]
= \exp\Bigg( \int_{\mathbb{R}} \Bigg( -\frac{\sigma^2 u^2 \chi^2_{[0,t]}(y)}{2} + \int_{\mathbb{R}_0} \big( \exp(iuz\chi_{[0,t]}(y)) - iuz\chi_{[0,t]}(y) - 1 \big)\, \nu(dz) \Bigg)\, dy \Bigg)
= \exp\Bigg( \Big( -\frac{\sigma^2 u^2}{2} + \int_{\mathbb{R}_0} (\exp(iuz) - iuz - 1)\, \nu(dz) \Big)\, t \Bigg).   (3.20)

By the Lévy-Khintchine theorem, X̃(t) is a Lévy process in law, and there exists a càdlàg modification of X̃(t), say X(t), which is a Lévy process [7]. The smoothed white noise process for the canonical Lévy process is given by

\langle \omega, \phi \rangle = \int_{\mathbb{R}} \phi(t)\, dX(t, \omega), \quad \omega \in \Omega,\ \phi \in L^2(\mathbb{R})   (3.21)

where X(t) has the representation

X(t) = \sigma \int_0^t dW(s) + \int_{[0,t] \times \mathbb{R}_0} z\, \tilde{N}(ds, dz).   (3.22)

We define the filtered probability space (Ω, F, {F_t}_{t≥0}, P) for the white noise canonical Lévy process, where F = B(S′(R)) and F_t = F_t^X ∨ N, with F_t^X = σ{X(s) : s ∈ [0, t]} the σ-field generated by X up to time t and N the P-null sets.
3.2 Construction of Alternative Chaos Expansion for Canonical Lévy Processes
Nualart-Schoutens Chaos Decomposition
We assume that the Lévy measure ν satisfies the so-called Nualart-Schoutens assumption [61]: for all ε > 0 there exists λ > 0 such that

\int_{\mathbb{R}_0 \setminus (-\varepsilon, \varepsilon)} \exp(\lambda|z|)\, \nu(dz) < \infty.   (3.23)

This assumption covers some important classes of Lévy processes, such as the normal (Gaussian), Poisson, gamma, negative binomial, and Meixner processes. It has the following implications:

1. The absolute moments of order p ≥ 2 with respect to ν are finite, that is, for all p ≥ 2,

\int_{\mathbb{R}_0} |z|^p\, \nu(dz) < \infty,   (3.24)

and thus X(t) has finite moments of all orders p ≥ 2.

2. The characteristic function E[exp(iuX(t))] is analytic in a neighborhood of zero, and the polynomials are dense in L²(R, P ∘ X(t)⁻¹).
Power Jump Processes
Definition 3.2.1 Power Jump Processes X(i) = {X(i)(t): t ≥ 0}, i ∈ N X ∆(X(t))i, i > 1, X(i)(t) = s∈(0,t] (3.25) X(t), i = 1.
Then, from the representation in (2.47) we can express X(i) in integral form follows: Z i z N(ds, dz), i > 1 (i) [0,t]×R0 X (t) = Z (3.26) i ˜ bt + σW (t) + z N(ds, dz), i = 1. [0,t]×R0 27
The power jump processes X(i) is also a L´evyprocesses. In general,
X X(t) 6= ∆X(s) (3.27) s∈(0,t]
and the equality only holds for a pure jump processes (σ = 0) with bounded variation. Taking the expectation yields
(i) E[X (t)] = mi(t) (3.28) where Z i x ν(dx), i > 1 mi = R0 (3.29) b, i = 1.
Definition 3.2.2 Compensated Power Jump Processes Y (i) = {Y (i)(t): t ≥ 0}, i ∈ N given by Y (i)(t) = X(i)(t) − E X(i)(t) . (3.30)
Y (i) is referred to as the Teugels martingale of order i, and it is a normal martingale. Alternatively, we can express Y (i) in integral form follows: Z i ˜ z N(ds, dx), i > 1 (i) [0,t]×R0 Y (t) = Z (3.31) i ˜ σW (t) + z N(ds, dz), i = 1. [0,t]×R0 Moreover, the quadratic covariation and the predicatable covariation processes for the Teugels martingales Y (i) are as follows: Z (i) (j) 2 i+j [Y ,Y ]t =σ t1{i=j=1} + z N(ds, dz) [0,t]×R0 2 (i+j) =σ t1{i=j=1} + X (t), (3.32)
Z (i) (j) 2 i+j < Y ,Y >t= σ t1{i=j=1}t + z ν(dz) t R0 2 =(σ 1{i=j=1} + mi+j)t. (3.33) 28
The Spaces S_1 and S_2 [61]

Let S_1 be the space of real polynomials on R₊, that is,

S_1 = \Big\{ \sum_{k=1}^{n} c_k z^{k-1} : c_k \in \mathbb{R},\ z \in \mathbb{R}_+,\ k \in \{1, \cdots, n\},\ n \in \mathbb{N} \Big\}   (3.34)

endowed with the inner product ⟨⟨·, ·⟩⟩₁ given by

\langle\langle P, Q \rangle\rangle_1 = \sigma^2 P(0)Q(0) + \int_{\mathbb{R}_0} P(z)Q(z)\, z^2\, \nu(dz) = \int_{\mathbb{R}} P(z)Q(z)\, \eta(dz) = \langle P, Q \rangle_{L^2(\eta)}   (3.35)

where P, Q ∈ S_1. Note that

\langle\langle z^{i-1}, z^{j-1} \rangle\rangle_1 = \sigma^2\, \mathbf{1}_{\{i=j=1\}} + \int_{\mathbb{R}_0} z^{i+j}\, \nu(dz) = \sigma^2\, \mathbf{1}_{\{i=j=1\}} + m_{i+j}.   (3.36)

Let {p_i(z)}_{i∈N} be the orthogonalization of {1, z, z², ···} in S_1. From the Gram-Schmidt orthogonalization procedure, we have

p_1(z) = 1, \qquad p_i(z) = z^{i-1} - \sum_{j=1}^{i-1} \frac{\langle\langle p_j(z), z^{i-1} \rangle\rangle_1}{\|p_j(z)\|_1^2}\, p_j(z) = \sum_{j=1}^{i} a_{ij}\, z^{j-1}   (3.37)

where

a_{ij} = \begin{cases} -\dfrac{\langle\langle p_j(z), z^{i-1} \rangle\rangle_1}{\|p_j(z)\|_1^2} = -\dfrac{\int_{\mathbb{R}} p_j(z)\, z^{i-1}\, \eta(dz)}{\int_{\mathbb{R}} p_j^2(z)\, \eta(dz)}, & j \in \{1, \cdots, i-1\}, \\ 1, & j = i. \end{cases}   (3.38)

Example For i = 2, p_2(z) = a_{21} + a_{22}z, where a_{22} = 1 and

a_{21} = -\frac{\int_{\mathbb{R}} z\, \eta(dz)}{\int_{\mathbb{R}} \eta(dz)} = -\frac{\int_{\mathbb{R}_0} z^3\, \nu(dz)}{\sigma^2 + \int_{\mathbb{R}_0} z^2\, \nu(dz)}.
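The orthogonalization above is easy to reproduce numerically when η is a discrete measure. The Python sketch below (an added illustration; σ, the jump sizes, and the ν-masses are hypothetical) runs Gram-Schmidt on the monomials in L²(η) and recovers a_{21} as in the example.

```python
import numpy as np

# eta(dz) = sigma^2 delta_0(dz) + z^2 nu(dz); take nu as a few point masses
sigma = 0.2
z = np.array([0.0, -0.5, 0.3, 1.0])        # support: {0} plus jump sizes
nu_w = np.array([0.0, 0.4, 1.0, 0.1])      # nu-masses at the jump sizes
w = np.where(z == 0.0, sigma**2, nu_w * z**2)   # eta-weights, cf. (2.57)

def ip(p, q):                              # <P, Q>_{L^2(eta)}, cf. (3.35)
    return np.sum(w * p(z) * q(z))

# Gram-Schmidt on the monomials 1, z, z^2 as in (3.37)
monos = [np.polynomial.Polynomial([0.0] * k + [1.0]) for k in range(3)]
ps = []
for m in monos:
    p = m - sum(ip(m, q) / ip(q, q) * q for q in ps)
    ps.append(p)

print(ps[1])                               # p_2(z) = a_21 + z
print(ip(ps[0], ps[1]), ip(ps[1], ps[2]))  # both ~0: orthogonality in L^2(eta)
```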
On the other hand, let S_2 be the space of linear combinations of the Teugels martingales of the Lévy process, that is,

S_2 = \Big\{ \sum_{k=1}^{n} c_k Y^{(k)} : c_k \in \mathbb{R},\ k \in \{1, \cdots, n\},\ n \in \mathbb{N} \Big\}   (3.39)

endowed with the inner product ⟨⟨·, ·⟩⟩₂ given by

\langle\langle Y^{(i)}, Y^{(j)} \rangle\rangle_2 = E[[Y^{(i)}, Y^{(j)}]_1] = \sigma^2\, \mathbf{1}_{\{i=j=1\}} + m_{i+j}.   (3.40)

Then z^{i−1} ↔ Y^{(i)} is an isometry between S_1 and S_2.
Let {H^{(i)}}_{i∈N} be the orthogonalization of {Y^{(1)}, Y^{(2)}, Y^{(3)}, ···} in S_2; then {H^{(i)}}_{i∈N} are strongly orthogonal martingales. From the Gram-Schmidt orthogonalization procedure, we have

H^{(1)} = Y^{(1)}, \qquad H^{(i)} = Y^{(i)} - \sum_{j=1}^{i-1} \frac{\langle\langle H^{(j)}, Y^{(i)} \rangle\rangle_2}{\|H^{(j)}\|_2^2}\, H^{(j)} = \sum_{j=1}^{i} a^*_{ij}\, Y^{(j)}   (3.41)

where

a^*_{ij} = \begin{cases} -\dfrac{\langle\langle H^{(j)}, Y^{(i)} \rangle\rangle_2}{\|H^{(j)}\|_2^2} = -\dfrac{E[[H^{(j)}, Y^{(i)}]_1]}{E[[H^{(j)}, H^{(j)}]_1]}, & j \in \{1, \cdots, i-1\}, \\ 1, & j = i. \end{cases}   (3.42)
Lemma 3.2.1 The Gram-Schmidt coefficients in S_1 and S_2 coincide, that is,

a_{ij} = a^*_{ij}, \quad j \in \{1, \cdots, i\},\ i \in \mathbb{N}.   (3.43)

Proof We prove this lemma by induction.
Base step: Since

p_1(z) = a_{11} = 1   (3.44)

and

H^{(1)} = a^*_{11} Y^{(1)} = Y^{(1)},   (3.45)

we have

a_{11} = a^*_{11} = 1.   (3.46)

Inductive step: Suppose that a_{ij} = a^*_{ij} for j ∈ {1, ··· , i}. Then, from the Gram-Schmidt procedure,

p_{i+1}(z) = \sum_{j=1}^{i+1} a_{i+1,j}\, z^{j-1}   (3.47)

where

a_{i+1,j} = \begin{cases} -\dfrac{\langle\langle p_j(z), z^i \rangle\rangle_1}{\|p_j(z)\|_1^2}, & j \in \{1, \cdots, i\}, \\ 1, & j = i+1. \end{cases}   (3.48)

On the other hand,

H^{(i+1)} = \sum_{j=1}^{i+1} a^*_{i+1,j}\, Y^{(j)}   (3.49)

where

a^*_{i+1,j} = \begin{cases} -\dfrac{\langle\langle H^{(j)}, Y^{(i+1)} \rangle\rangle_2}{\|H^{(j)}\|_2^2}, & j \in \{1, \cdots, i\}, \\ 1, & j = i+1. \end{cases}   (3.50)

From the isometry relation between S_1 and S_2 and the induction hypothesis, we obtain

\langle\langle p_j(z), z^i \rangle\rangle_1 = \sum_{k=1}^{j} a_{jk} \langle\langle z^{k-1}, z^i \rangle\rangle_1 = \sum_{k=1}^{j} a^*_{jk} \langle\langle Y^{(k)}, Y^{(i+1)} \rangle\rangle_2 = \langle\langle H^{(j)}, Y^{(i+1)} \rangle\rangle_2,   (3.51)

\|p_j(z)\|_1^2 = \sum_{k=1}^{j} \sum_{l=1}^{j} a_{jk} a_{jl} \langle\langle z^{k-1}, z^{l-1} \rangle\rangle_1 = \sum_{k=1}^{j} \sum_{l=1}^{j} a^*_{jk} a^*_{jl} \langle\langle Y^{(k)}, Y^{(l)} \rangle\rangle_2 = \|H^{(j)}\|_2^2;   (3.52)

then (3.48), (3.50), (3.51), and (3.52) yield

a_{i+1,j} = a^*_{i+1,j}, \quad j \in \{1, \cdots, i+1\}.   (3.53)

The {H^{(i)}}_{i∈N} are pairwise strongly orthogonal martingales, each a linear combination of the Y^{(j)}, j ∈ {1, ··· , i}, of the form

H^{(i)}(t) = \sum_{j=1}^{i} a^*_{ij}\, Y^{(j)}(t).   (3.54)
For i ≠ j, the product H^{(i)}H^{(j)} and the quadratic covariation process [H^{(i)}, H^{(j)}] are both uniformly integrable martingales [69]. Since ⟨H^{(i)}, H^{(j)}⟩ is the predictable covariation process such that H^{(i)}H^{(j)} − ⟨H^{(i)}, H^{(j)}⟩ is a martingale, we have

\langle H^{(i)}, H^{(j)} \rangle_t = 0, \quad i \neq j.   (3.55)

Moreover, we have the following quadratic covariation and predictable covariation processes for {H^{(i)}}_{i∈N}:

[H^{(i)}, H^{(j)}]_t = \sum_{k=1}^{i} \sum_{l=1}^{j} a^*_{ik} a^*_{jl}\, [Y^{(k)}, Y^{(l)}]_t = \sigma^2 t + \sum_{k=1}^{i} \sum_{l=1}^{j} a^*_{ik} a^*_{jl}\, X^{(k+l)},   (3.56)

\langle H^{(i)}, H^{(j)} \rangle_t = \sum_{k=1}^{i} \sum_{l=1}^{j} a^*_{ik} a^*_{jl}\, \langle Y^{(k)}, Y^{(l)} \rangle_t = \Big( \sum_{k=1}^{i} \sum_{l=1}^{j} a^*_{ik} a^*_{jl}\, m_{k+l} + \sigma^2 \Big)\, t\, \delta_{ij} = q_i\, t\, \delta_{ij}   (3.57)

where

q_i = \sigma^2 + \sum_{k=1}^{i} \sum_{l=1}^{i} a^*_{ik} a^*_{il}\, m_{k+l}.   (3.58)

Theorem 3.2.2 [54]

\langle p_i(z), p_j(z) \rangle_{L^2(\eta)} = \int_{\mathbb{R}} p_i(z) p_j(z)\, \eta(dz) = q_i\, \delta_{ij}   (3.59)
where q_i = ‖p_i‖²_{L²(η)} is given by (3.58).

Proof Since

\langle\langle p_i(z), p_j(z) \rangle\rangle_1 = \int_{\mathbb{R}} p_i(z) p_j(z)\, \eta(dz),   (3.60)

and from the isometry relation between S_1 and S_2,

\langle\langle z^{k-1}, z^{l-1} \rangle\rangle_1 = \langle\langle Y^{(k)}, Y^{(l)} \rangle\rangle_2.   (3.61)

Then, from the preceding lemma, since the Gram-Schmidt coefficients in S_1 and S_2 coincide, we have

\langle\langle p_i(z), p_j(z) \rangle\rangle_1 = \Big\langle\Big\langle \sum_{k=1}^{i} a_{ik} z^{k-1}, \sum_{l=1}^{j} a_{jl} z^{l-1} \Big\rangle\Big\rangle_1 = \Big\langle\Big\langle \sum_{k=1}^{i} a^*_{ik} Y^{(k)}, \sum_{l=1}^{j} a^*_{jl} Y^{(l)} \Big\rangle\Big\rangle_2 = \langle\langle H^{(i)}, H^{(j)} \rangle\rangle_2.   (3.62)

For i ≠ j, H^{(i)} and H^{(j)} are strongly orthogonal, so the quadratic covariation process [H^{(i)}, H^{(j)}] is a martingale; hence

\langle\langle H^{(i)}, H^{(j)} \rangle\rangle_2 = E[[H^{(i)}, H^{(j)}]_1] = E[[H^{(i)}, H^{(j)}]_0] = 0.   (3.63)

On the other hand, for i = j, since

[Y^{(k)}, Y^{(l)}]_t = \sigma^2 t\, \mathbf{1}_{\{k=l=1\}} + X^{(k+l)}(t),   (3.64)

we have

[H^{(i)}, H^{(i)}]_t = \Big[ \sum_{k=1}^{i} a^*_{ik} Y^{(k)}, \sum_{l=1}^{i} a^*_{il} Y^{(l)} \Big]_t = \sum_{k=1}^{i} \sum_{l=1}^{i} a^*_{ik} a^*_{il} \big( \sigma^2 t\, \mathbf{1}_{\{k=l=1\}} + X^{(k+l)}(t) \big) = \sigma^2 t + \sum_{k=1}^{i} \sum_{l=1}^{i} a^*_{ik} a^*_{il}\, X^{(k+l)}(t).   (3.65)

Finally, taking the expectation at t = 1 gives

\langle\langle p_i(z), p_i(z) \rangle\rangle_1 = E[[H^{(i)}, H^{(i)}]_1] = \sigma^2 + \sum_{k=1}^{i} \sum_{l=1}^{i} a^*_{ik} a^*_{il}\, m_{k+l} = q_i.   (3.66)
Chaotic and Predictable Representation Properties
Denote the following multiple integral for f ∈ L²([0,T]ⁿ) with respect to the orthogonal martingales H^{(i)}:

J_n^{(i_1, \cdots, i_n)}(f) = \int_0^T \int_0^{t_n^-} \cdots \int_0^{t_2^-} f(t_1, \cdots, t_{n-1}, t_n)\, dH^{(i_1)}(t_1) \cdots dH^{(i_{n-1})}(t_{n-1})\, dH^{(i_n)}(t_n).   (3.67)

Leon et al. [54] have shown an orthogonality relationship between different multi-indices (i_1, ··· , i_n), stated in the following theorem.

Theorem 3.2.3 [54] Let f ∈ L²([0,T]ⁿ) and g ∈ L²([0,T]^m). Then

E[J_n^{(i_1, \cdots, i_n)}(f)\, J_m^{(j_1, \cdots, j_m)}(g)] = \begin{cases} q_{i_1} \cdots q_{i_n} \displaystyle\int_{\Sigma_n} f(t_1, \cdots, t_n)\, g(t_1, \cdots, t_n)\, dt_1 \cdots dt_n, & n = m,\ (i_1, \cdots, i_n) = (j_1, \cdots, j_n), \\ 0, & \text{otherwise}, \end{cases}   (3.68)

where

\Sigma_n = \{(t_1, \cdots, t_n) : 0 < t_1 < \cdots < t_n \leq T\}   (3.69)

is the positive simplex of [0,T]ⁿ.
Proof We prove the theorem by induction, using the identity

\langle H^{(i)}, H^{(j)} \rangle_t = q_i\, t\, \delta_{ij}.   (3.70)

Note that J_n^{(i_1, \cdots, i_n)} can be written recursively as

J_n^{(i_1, \cdots, i_n)}(f) = \int_0^T \alpha_n\, dH^{(i_n)}(t_n), \qquad J_m^{(j_1, \cdots, j_m)}(g) = \int_0^T \beta_m\, dH^{(j_m)}(t_m)   (3.71)

where

\alpha_k = \int_0^{t_k^-} \alpha_{k-1}\, dH^{(i_{k-1})}(t_{k-1}), \quad k \in \{2, \cdots, n\}, \quad \alpha_1 = f(t_1, \cdots, t_n),
\beta_k = \int_0^{t_k^-} \beta_{k-1}\, dH^{(j_{k-1})}(t_{k-1}), \quad k \in \{2, \cdots, m\}, \quad \beta_1 = g(t_1, \cdots, t_m).   (3.72)

Case I (m = n):

E[J_n^{(i_1, \cdots, i_n)}(f)\, J_n^{(j_1, \cdots, j_n)}(g)] = q_{i_n}\, \delta_{i_n, j_n} \int_0^T E[\alpha_n \beta_n]\, dt_n.   (3.73)

Then the desired result is obtained by induction.
Case II (m ≠ n): Without loss of generality, assume m < n. Then, by induction,

E[J_n^{(i_1, \cdots, i_n)}(f)\, J_m^{(j_1, \cdots, j_m)}(g)] = q_{i_{n-m+1}} \cdots q_{i_n}\, \delta_{i_{n-m+1}, j_1} \cdots \delta_{i_n, j_m} \int_0^T \int_0^{t_n^-} \cdots \int_0^{t_{n-m+2}^-} E[\alpha_{n-m+1} \beta_1]\, dt_{n-m+2} \cdots dt_{n-1}\, dt_n.   (3.74)

Since

E[\alpha_{n-m+1} \beta_1] = E\Big[ \int_0^{t_{n-m+1}^-} \cdots \int_0^{t_2^-} f(t_1, \cdots, t_n)\, dH^{(i_1)}(t_1) \cdots dH^{(i_{n-m})}(t_{n-m})\, g(t_1, \cdots, t_m) \Big]
= E\Big[ \int_0^{t_{n-m+1}^-} \cdots \int_0^{t_2^-} f(t_1, \cdots, t_n)\, g(t_1, \cdots, t_m)\, dH^{(i_1)}(t_1) \cdots dH^{(i_{n-m})}(t_{n-m}) \Big] = 0,   (3.75)

we obtain the desired result.
Nualart and Schoutens [61] have shown that every F ∈ L²(P) can be represented in terms of the iterated integrals with respect to the H^{(i)}.

Theorem 3.2.4 Chaotic Representation Property (CRP) [61] Every random variable F ∈ L²(P) has a representation of the form

F = E[F] + \sum_{n=1}^{\infty} \sum_{j_1, \cdots, j_n \geq 1} J_n^{(j_1, \cdots, j_n)}(f_{j_1, \cdots, j_n}).   (3.76)

As a corollary to the CRP, they have shown a predictable representation in terms of the H^{(i)}.

Theorem 3.2.5 Predictable Representation Property (PRP) [61] Every random variable F ∈ L²(P) has a representation of the form

F = E[F] + \sum_{n=1}^{\infty} \int_0^T \phi^{(n)}(s)\, dH^{(n)}(s)   (3.77)

where the φ^{(n)}(s) are predictable processes.
3.3 Alternative Chaos Expansion for Canonical Lévy Processes

We present some important results, all based on Solé, et al. [78], which are crucial for finding the alternative chaos expansion for the canonical Lévy space.

Theorem 3.3.1 [78] Let g = {g(t) : t ∈ [0,T]} be a predictable process such that

E\Big[ \int_0^T g^2(t)\, dt \Big] < \infty.   (3.78)

Then g(t)p_i(z) is integrable with respect to M and

\int_0^T g(t)\, dH^{(i)}(t) = \int_{[0,T] \times \mathbb{R}} g(t)\, p_i(z)\, M(dt, dz).   (3.79)

Proof For i = 1, p_1(z) = 1 and

H^{(1)}(t) = Y^{(1)}(t) = \sigma W(t) + \int_{[0,t] \times \mathbb{R}_0} z\, \tilde{N}(ds, dz).   (3.80)

Then

\int_0^T g(t)\, dH^{(1)}(t) = \int_0^T g(t) \Big( \sigma\, dW(t) + \int_{\mathbb{R}_0} z\, \tilde{N}(dt, dz) \Big) = \int_{[0,T] \times \mathbb{R}} g(t)\, p_1(z)\, M(dt, dz).   (3.81)

For i > 1, since g(t) is predictable, so is g(t)zⁱ. From the Itô isometry, we obtain

E\Big[ \Big( \int_{[0,T] \times \mathbb{R}_0} g(t) z^i\, \tilde{N}(dt, dz) \Big)^2 \Big] = E\Big[ \int_{[0,T] \times \mathbb{R}_0} g^2(t)\, z^{2i}\, dt\, \nu(dz) \Big] = \int_{\mathbb{R}_0} z^{2i}\, \nu(dz) \cdot E\Big[ \int_0^T g^2(t)\, dt \Big] < \infty.   (3.82)

Hence g(t)zⁱ is square-integrable with respect to Ñ and thus integrable with respect to Ñ. In addition, the square-integrability condition (3.78) implies integrability with respect to W and thus with respect to M. From the integral form of the compensated power jump process Y^{(j)}, for j > 1,

\int_0^T g(t)\, dY^{(j)}(t) = \int_{[0,T] \times \mathbb{R}_0} g(t)\, z^j\, \tilde{N}(dt, dz) = \int_{[0,T] \times \mathbb{R}} g(t)\, z^{j-1}\, M(dt, dz).   (3.83)

Hence, from the preceding equation, we finally obtain

\int_0^T g(t)\, dH^{(i)}(t) = \sum_{j=1}^{i} a_{ij} \int_0^T g(t)\, dY^{(j)}(t) = \int_{[0,T] \times \mathbb{R}} g(t) \sum_{j=1}^{i} a_{ij}\, z^{j-1}\, M(dt, dz) = \int_{[0,T] \times \mathbb{R}} g(t)\, p_i(z)\, M(dt, dz).   (3.84)

Corollary 3.3.2 [78] H^{(i)}(t) can be expressed as

H^{(i)}(t) = \int_{[0,t] \times \mathbb{R}} p_i(z)\, M(ds, dz).   (3.85)

Proof Take g = 1 in the preceding theorem.
Theorem 3.3.3 [78] Let f ∈ L²([0,T]ⁿ). Then

J_n^{(j_1, \cdots, j_n)}(f) = I_n\big( f(t_1, \cdots, t_n)\, \mathbf{1}_{\Sigma_n}(t_1, \cdots, t_n)\, p_{j_1}(z_1) \cdots p_{j_n}(z_n) \big).   (3.86)

Proof Using the preceding corollary, we have

J_n^{(j_1, \cdots, j_n)}(f) = \int_0^T \int_0^{t_n^-} \cdots \int_0^{t_2^-} f(t_1, \cdots, t_n)\, dH^{(j_1)}(t_1) \cdots dH^{(j_{n-1})}(t_{n-1})\, dH^{(j_n)}(t_n)
= \int_0^T \int_{\mathbb{R}} \int_0^{t_n^-} \int_{\mathbb{R}} \cdots \int_0^{t_2^-} \int_{\mathbb{R}} f(t_1, \cdots, t_n)\, p_{j_1}(z_1) \cdots p_{j_n}(z_n)\, M(dt_1, dz_1) \cdots M(dt_n, dz_n)
= \int_{([0,T] \times \mathbb{R})^n} f(t_1, \cdots, t_n)\, \mathbf{1}_{\Sigma_n}(t_1, \cdots, t_n)\, p_{j_1}(z_1) \cdots p_{j_n}(z_n)\, M(dt_1, dz_1) \cdots M(dt_n, dz_n)
= I_n\big( f(t_1, \cdots, t_n)\, \mathbf{1}_{\Sigma_n}(t_1, \cdots, t_n)\, p_{j_1}(z_1) \cdots p_{j_n}(z_n) \big).   (3.87)
We follow the approach of Benth, et al. [15] in comparing the relationship between Itô's chaos expansion and the CRP. Their approach, however, was limited to the chaos expansion with respect to the iterated integrals of the compensated Poisson random measure Ñ; with their result, Di Nunno was able to derive the alternative expansion in the Poisson case [24]. A discussion of the alternative chaos expansions for the Wiener and Poisson cases is given in the Appendix. With the results of the preceding section, we can use this relationship for the canonical Lévy case. From Itô's chaos expansion [50] of the canonical Lévy process, we have

F = E[F] + \sum_{n=1}^{\infty} I_n(f_n), \quad f_n \in L^2_s(\mu^n).   (3.88)

On the other hand, from the Nualart-Schoutens CRP and Theorem 3.3.3, we obtain

F - E[F] = \sum_{n=1}^{\infty} \sum_{j_1, \cdots, j_n \geq 1} J_n^{(j_1, \cdots, j_n)}(f_{j_1, \cdots, j_n}(t_1, \cdots, t_n))
= \sum_{n=1}^{\infty} \sum_{j_1, \cdots, j_n \geq 1} I_n\big( f_{j_1, \cdots, j_n}(t_1, \cdots, t_n)\, p_{j_1}(z_1) \cdots p_{j_n}(z_n)\, \mathbf{1}_{\Sigma_n}(t_1, \cdots, t_n) \big)
= \sum_{n=1}^{\infty} I_n\Big( \sum_{j_1, \cdots, j_n \geq 1} f_{j_1, \cdots, j_n}(t_1, \cdots, t_n)\, p_{j_1}(z_1) \cdots p_{j_n}(z_n)\, \mathbf{1}_{\Sigma_n}(t_1, \cdots, t_n) \Big).   (3.89)

We let

g_n((t_1, z_1), \cdots, (t_n, z_n)) = \sum_{j_1, \cdots, j_n \geq 1} f_{j_1, \cdots, j_n}(t_1, \cdots, t_n)\, p_{j_1}(z_1) \cdots p_{j_n}(z_n)\, \mathbf{1}_{\Sigma_n}(t_1, \cdots, t_n).   (3.90)

Then, by the uniqueness of the chaos expansion, we obtain

f_n = g_n^{\wedge}, \quad \forall n \in \mathbb{N}.   (3.91)
Since

f_{j_1, \cdots, j_n}(t_1, \cdots, t_n)\, \mathbf{1}_{\Sigma_n}(t_1, \cdots, t_n) \in L^2(\lambda^n)   (3.92)

and the Hermite functions¹ {e_n}_{n∈N} form an orthonormal basis of L²(λ), we can expand (3.92) as

f_{j_1, \cdots, j_n}(t_1, \cdots, t_n) = \sum_{i_1, \cdots, i_n \geq 1} \gamma^{(j_1, \cdots, j_n)}_{i_1, \cdots, i_n}\, e_{i_1}(t_1) \cdots e_{i_n}(t_n).   (3.93)

We let

\pi_i(z) = \frac{p_i(z)}{\|p_i\|_{L^2(\eta)}}, \quad i \in \mathbb{N};   (3.94)

then, from (3.59), {π_i}_{i∈N} are orthonormal basis functions in L²(η). Denote

c^{(j_1, \cdots, j_n)}_{i_1, \cdots, i_n} = \|p_{j_1}\|_{L^2(\eta)} \cdots \|p_{j_n}\|_{L^2(\eta)}\, \gamma^{(j_1, \cdots, j_n)}_{i_1, \cdots, i_n}.   (3.95)

Hence, we can express (3.90) in terms of orthonormal basis functions in L²(µⁿ) as

g_n((t_1, z_1), \cdots, (t_n, z_n)) = \sum_{i_1, \cdots, i_n \geq 1} \sum_{j_1, \cdots, j_n \geq 1} c^{(j_1, \cdots, j_n)}_{i_1, \cdots, i_n}\, e_{i_1}(t_1)\pi_{j_1}(z_1) \cdots e_{i_n}(t_n)\pi_{j_n}(z_n).   (3.96)

Since the symmetrization operator is linear, that is, (af + bg)^∧ = af^∧ + bg^∧ for a, b ∈ R, we have

g_n^{\wedge}((t_1, z_1), \cdots, (t_n, z_n)) = \sum_{i_1, \cdots, i_n \geq 1} \sum_{j_1, \cdots, j_n \geq 1} c^{(j_1, \cdots, j_n)}_{i_1, \cdots, i_n}\, \big( e_{i_1}(t_1)\pi_{j_1}(z_1) \cdots e_{i_n}(t_n)\pi_{j_n}(z_n) \big)^{\wedge}.   (3.97)

We want to express (3.97) in terms of orthogonal functions in L²_s(µⁿ). We adapt the same trick Di Nunno applied in the Poisson case, using the Cantor diagonalization technique [24]. Denote the Cantor diagonalization mapping κ : N × N → N by

\kappa(i, j) = j + \frac{(i + j - 2)(i + j - 1)}{2}.   (3.98)
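For reference (an added illustration, not in the original), the bijection κ and its inverse can be coded directly:

```python
def kappa(i, j):
    """Cantor diagonalization (3.98), mapping N x N bijectively onto N."""
    return j + (i + j - 2) * (i + j - 1) // 2

def kappa_inv(k):
    """Invert kappa: recover (i, j) with kappa(i, j) == k by locating the
    diagonal d = i + j that contains k."""
    d = 2
    while d * (d - 1) // 2 < k:
        d += 1
    j = k - (d - 2) * (d - 1) // 2
    return d - j, j

print([kappa(i, j) for i, j in [(1, 1), (2, 1), (1, 2), (3, 1)]])   # [1, 2, 3, 4]
print(all(kappa_inv(kappa(i, j)) == (i, j)
          for i in range(1, 20) for j in range(1, 20)))             # True
```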
Let k = κ(i, j) and

\delta_k(t, z) = e_i(t)\, \pi_j(z);   (3.99)

¹See the Appendix for a discussion of the Hermite function e_n and its relationship to the Hermite polynomial h_n.

then {δ_k}_{k∈N} is an orthonormal basis of L²(µ), and from (3.97) and (3.99),
g_n^{\wedge}((t_1, z_1), \cdots, (t_n, z_n)) = \sum_{i_1, \cdots, i_n \geq 1} \sum_{j_1, \cdots, j_n \geq 1} c^{(j_1, \cdots, j_n)}_{i_1, \cdots, i_n}\, \big( \delta_{\kappa(i_1, j_1)}(t_1, z_1) \cdots \delta_{\kappa(i_n, j_n)}(t_n, z_n) \big)^{\wedge}.   (3.100)

Denote the multi-indices

\alpha = (\alpha_1, \alpha_2, \cdots), \quad \alpha_i \in \mathbb{N}_0,\ i \in \mathbb{N}   (3.101)

with compact support, and denote by I the set of all α in (3.101). Also, we denote

\text{Index}(\alpha) = \max\{i : \alpha_i \neq 0\},   (3.102)

|\alpha| = \sum_{i=1}^{m} \alpha_i, \quad \alpha! = \prod_{i=1}^{m} \alpha_i!, \quad m = \text{Index}(\alpha).   (3.103)

Suppose that m = Index(α) and n = |α|. Define the tensor product

\delta^{\otimes \alpha}((t_1, z_1), \cdots, (t_n, z_n)) = \delta_1^{\otimes \alpha_1} \otimes \cdots \otimes \delta_m^{\otimes \alpha_m}((t_1, z_1), \cdots, (t_n, z_n))
= \delta_1(t_1, z_1) \cdots \delta_1(t_{\alpha_1}, z_{\alpha_1})\, \delta_2(t_{\alpha_1+1}, z_{\alpha_1+1}) \cdots \delta_2(t_{\alpha_1+\alpha_2}, z_{\alpha_1+\alpha_2}) \cdots \delta_m(t_{n-\alpha_m+1}, z_{n-\alpha_m+1}) \cdots \delta_m(t_n, z_n)   (3.104)

with the convention δ_i^{⊗0} = 1, i ∈ {1, ··· , m}. Also, we denote the symmetrized tensor product

\delta^{\hat{\otimes} \alpha}((t_1, z_1), \cdots, (t_n, z_n)) = \big( \delta^{\otimes \alpha}((t_1, z_1), \cdots, (t_n, z_n)) \big)^{\wedge} = \delta_1^{\hat{\otimes} \alpha_1} \hat{\otimes} \cdots \hat{\otimes} \delta_m^{\hat{\otimes} \alpha_m}((t_1, z_1), \cdots, (t_n, z_n)).   (3.105)
Now, since

g_n^{\wedge} \in \text{span}\{\delta^{\hat{\otimes} \alpha} : |\alpha| = n,\ \alpha \in \mathcal{I}\},   (3.106)

g_n^{\wedge} has the form

f_n = g_n^{\wedge} = \sum_{|\alpha| = n} c_{\alpha}\, \delta^{\hat{\otimes} \alpha}.   (3.107)

By taking I_0(δ^{⊗̂α}) = 1 and c_0 = E[F], it follows from (3.88), (3.91), and (3.107) that

F = E[F] + \sum_{n=1}^{\infty} I_n(g_n^{\wedge}) = E[F] + \sum_{n=1}^{\infty} I_n\Big( \sum_{|\alpha| = n} c_{\alpha}\, \delta^{\hat{\otimes} \alpha} \Big) = \sum_{\alpha \in \mathcal{I}} c_{\alpha}\, I_{|\alpha|}\big( \delta^{\hat{\otimes} \alpha} \big).

We denote

K_{\alpha} = I_{|\alpha|}\big( \delta^{\hat{\otimes} \alpha} \big);   (3.108)

then we have the chaos expansion stated in the theorem below.

Theorem 3.3.4 Let F ∈ L²(P). Then F has a unique chaos expansion of the form

F = \sum_{\alpha \in \mathcal{I}} c_{\alpha}\, K_{\alpha}.   (3.109)

Moreover, we establish an isometry relation for this chaos expansion by proving the following lemma.
Lemma 3.3.5

E[K_{\alpha} K_{\beta}] = \alpha! \cdot \mathbf{1}_{\{\alpha = \beta\}}.   (3.110)

Proof We let

m_{\alpha} = \text{Index}(\alpha), \quad n_{\alpha} = |\alpha|, \quad m_{\beta} = \text{Index}(\beta), \quad n_{\beta} = |\beta|.   (3.111)

Then, by the isometry (2.65),

E[K_{\alpha} K_{\beta}] = E\big[ I_{n_{\alpha}}\big( \delta^{\hat{\otimes} \alpha} \big)\, I_{n_{\beta}}\big( \delta^{\hat{\otimes} \beta} \big) \big] = n_{\alpha}!\, \big\langle \delta^{\hat{\otimes} \alpha}, \delta^{\hat{\otimes} \beta} \big\rangle\, \delta_{n_{\alpha} n_{\beta}} = n_{\alpha}! \int_{([0,T] \times \mathbb{R})^{n_{\alpha}}} \delta^{\hat{\otimes} \alpha}\, \delta^{\hat{\otimes} \beta}\, d\mu^{\otimes n_{\alpha}}\, \delta_{n_{\alpha} n_{\beta}}.   (3.112)

For n_α ≠ n_β, (3.112) vanishes; throughout the remainder of the proof, it suffices to consider the case n = n_α = n_β. Denote m = m_α and consider the tensor product in (3.104). There are n! terms in the symmetrization of δ^{⊗̂α}, as well as of δ^{⊗̂β}, and each term of these symmetrized tensor products carries a factor of 1/n!. Since {δ_k}_{k∈N} forms an orthonormal basis of L²(µ), (3.112) vanishes for α ≠ β.
Consider the case α = β. For each of the n! terms in K_α, one can obtain a nonzero expectation of its product with a term of K_β by permuting the factors in (3.104): permuting the first α_1 factors, then permuting the next α_2 factors, and so forth, finally permuting the last α_m factors. There are α! = α_1! ··· α_m! possible combinations in this procedure, each with weight one by the orthonormality of {δ_k}_{k∈N} in L²(µ). Hence, we obtain

E[K_{\alpha} K_{\beta}] = n! \cdot \frac{1}{(n!)^2} \cdot n! \cdot \alpha! \cdot \mathbf{1}_{\{\alpha = \beta\}} = \alpha! \cdot \mathbf{1}_{\{\alpha = \beta\}}.   (3.113)
Proposition 3.3.1 (Isometry) Let F ∈ L2(P ), with a chaos expansion of the form of (3.109), then
2 X 2 kF kL2(P ) = cαα!. (3.114) α∈I Proof From the preceding lemma, " # 2 2 X X X X X 2 kF kL2(P ) = E[F ] = E cαKα cβKβ = cαcβE[KαKβ] = cαα!. α∈I β∈I α∈I β∈I α∈I (3.115)
3.4 Stochastic Test and Distribution Function
Consider the following formal expansion
X F = cαKα. (3.116) α∈I If the following growth condition holds,
X 2 cαα! < ∞, (3.117) α∈I then F ∈ L2(P ). We relax this growth condition to obtain a family of generalized function spaces of stochastic test functions and stochastic distribution functions that relates to L2(P ) naturally [46]. 42
3.4.1 The spaces G and G∗
The stochastic test function G and the stochastic distribution function G∗ was first investigated by Pothoff and Timpel in the Wiener case [68]. A parallel definition was carried out by Di Nunno [24] in the Poisson case. We extend these definitions for the Canonical L´evyspace.
Definition 3.4.1 (i) Let Gq, q ∈ R be the space of formal expansion X F = In(fn) (3.118) n∈N0 such that !1/2 X 2 2qn kF k = n! kf k 2 n e < ∞. (3.119) Gq n L (µ ) n∈N0 For every q ∈ R, Gq is a Hilbert space with inner product
X 2qn 2 n < X, Y >Gr = n! < fn, gn >L (µ ) e (3.120) n∈N0 where F and G has the following formal sum:
X X F = In(fn),G = In(gn). (3.121) n∈N0 n∈N0 Define the stochastic test function G as
\ G = Gq (3.122) q>0 endowed with the projective topology, that is, as n → ∞
F → F on G ⇔ kF − F k → 0 ∀q > 0. (3.123) n n Gq
(ii) Define the stochastic distribution function G∗ as
∗ [ G = G−q (3.124) q>0 endowed with the inductive topology, that is, as n → ∞
G → G on G∗ ⇔ ∃q > 0 such that kG − Gk → 0. (3.125) n n G−q 43
Note that G∗ is a dual of G. Let F ∈ G and G ∈ G∗ with the formal expansion of F and G of the form of (3.121). The action of G on F is given by: X < G, F >G,G∗ = n! < fn, gn >L2(µn) . (3.126) n∈N0
Also, we can express the Gq-norm, q ∈ R in terms of the chaos expansion (3.109) as follows: 2 X kF k = c α!e2qn. (3.127) Gq α α∈I
3.4.2 Kontratiev and Hida spaces
Similarly, we extend the definitions of Kontratiev and Hida space [46] to the Canonical L´evyspace. We let α ∈ I and suppose that Index(α) = m, then we denote
m αk Y α k (2N) = (2j) j (3.128) j=1 where k ∈ Z. In particular, if α = ε(m) = (0, ··· , 0, 1, 0, ··· ), that is, ε(m) is a multi-index with all zeros except for the m-th component which contains one, then
(m) (2N)ε k = (2m)k.
Definition 3.4.2 (i) Let p ∈ [0, 1]. Suppose that F has a formal expansion of the
form (3.121). Then, F belongs to the space (S)p,q, q ∈ R if
2 X 2 1+p αq kF kp,q = aα(α!) (2N) < ∞. (3.129) α∈I
Define the Kontratiev test function (S)p as \ (S)p = (S)p,q (3.130) q>0 endowed with the projective topology.
(ii) Let q ∈ R, then the formal expansion X G = bαKα (3.131) α∈I 44
belongs to the space (S)−q if
2 X 2 1−p −αq kGk−p,−q = bα(α!) (2N) < ∞. (3.132) α∈I Define the Kontratiev distribution function (S)∗ as [ (S)−p = (S)−p,−q (3.133) q>0 endowed with the inductive topology.
Note that (S)−p is a dual of (S)p. The action of G ∈ (S)−p on F ∈ (S)p, with the formal expansion of F and G of the form (3.121) is given by X < G, F >= α!aαbα. (3.134) α∈I The Hida spaces are the special cases of the Kondratiev spaces. The Hida test ∗ ∗ function (S) and Hida distribution function (S) is given by (S) = (S)0 and (S) =
(S)−0 respectively. From the above definitions, we have the following inclusions for p ∈ [0, 1]:
2 ∗ (S)1 ⊂ (S)p ⊂ (S)0 ⊂ G ⊂ L (P ) ⊂ G ⊂ (S)−0 ⊂ (S)−p ⊂ (S)−1. (3.135)
3.5 White Noise Processes from Canonical L´evyProcesses
We extend the concept of white noise processes in the Canonical L´evyspace. Consider the chaos expansion of X(t) in (2.49) ! X X X(t) =I1(1) = I1 h1, eiiL2(λ) h1, πjiL2(ν) ei(s)πj(z) i∈N j∈N X X Z t Z = ei(s)ds πj(s)η(dz)I1 (ei(s)πj(z)) . (3.136) 0 i∈N j∈N R Now since
⊗εκ(i,j) Kεκ(i,j) =I1(δ ) = I1 (ei(s)πj(z)) Z t Z = ei(s)πj(z)dsη(dz) (3.137) 0 R 45
2 and {πj}j∈N is orthonormal with respect to L (η), then Z Z Z πj(z)η(dz) = π1(z)πj(z)η(dz) = η(dz) · 1{j=1} = ζ1{j=1} (3.138) R R R where Z ζ = σ2 + z2ν(dz). (3.139) R0 Alternatively, we can write X(t) as follows:
X Z t X(t) = ζ ei(s)dsKεκ(i,1) . (3.140) 0 i∈N Lemma 3.5.1 For i, j ∈ N, κ(i, j) ≥ i. (3.141)
Proof Since 1 κ(i, j) = j + (i + j − 2)(i + j − 1) (3.142) 2 then 1 κ(i, j) − i = i2 + (2j − 5)i + (j2 − j + 2) . (3.143) 2
Let j ∈ N and consider the quadratic equation in i as follows:
2 2 Fj(i) = i + (2j − 5)i + (j − j + 2). (3.144)
To prove the lemma, it suffice to show that Fj(i) ≥ 0 for all (i, j) ∈ N × N.
• Case I: (j > 1) The discriminant in this case is
∆ = (2j − 5)2 − 4(j2 − j + 2) = −16j + 17 < 0. (3.145)
Since Fj is concave upwards then, Fj(i) > 0 for all i ∈ N.
• Case II: (j = 1) In this case, we have
2 F1(i) = i − 3i + 2 = (i − 1)(i − 2). (3.146)
and it is concave upwards. Thus, for all i ∈ N, F1(i) ≥ 0. 46
Definition 3.5.1 White Noise L´evyProcess X˙ (t) Z ˙ X X X X(t) = ei(t) πj(z)η(dz)Kεκ(i,j) = ζ ei(t)Kεκ(i,1) . (3.147) i∈N j∈N R i∈N Lemma 3.5.2 X˙ (t) ∈ (S)∗. (3.148)
−1/12 Proof Since κ(i, 1) ≥ i and supt∈R |en(t)| = O(n ) [46], then,
2 κ(i,1) ˙ 2 X κ(i,1) 2 −ε q X(t) =ζ ε !ei (t)(2N) −q i∈N 2 X 2 −q =ζ ei (t)(2κ(i, 1)) i∈N 2 X 2 −q ≤ζ ei (t)(2i) . (3.149) i∈N Hence, the series converges for q ≥ 2 and thus proves our claim.
Lemma 3.5.3 dX(t) X˙ (t) = in (S)∗. (3.150) dt That is, there exists q > 0 such that
2 X(t + h) − X(t) − X˙ (t) → 0 as h → 0. (3.151) h −q
Proof Note that from (3.140) and (3.147), we have the following:
Z t+h X(t + h) − X(t) ˙ X 1 − X(t) =ζ (ei(s) − ei(t))dsKεκ(i,1) h h t i∈N X =ζ ai(h)Kεκ(i,1) (3.152) i∈N where 1 Z t+h ai(h) = (ei(s) − ei(t))ds. (3.153) h t 47
−1/12 Since supt∈R |en(t)| = O(n ) then, supi∈N |ai(h)| < ∞ for all h ∈ [0, 1]. Further- more, since κ(i, 1) ≥ 1 then,
2 X(t + h) − X(t) ˙ 2 X (i,1) 2 −q − X(t) =ζ ε !|ai(h)| (2κ(i, 1)) h −q i∈N 2 X 2 −q ≤ζ |ai(h)| (2i) . (3.154) i∈N
Now, since ai(h) → 0 as h → 0 for all i ∈ N then, for all q ≥ 2 from the dominated convergence theorem,
X 2 −q |ai(h)| (2i) → 0 as h → 0. (3.155) i∈N Hence, from the bounded convergence theorem, we obtain
2 X(t + h) − X(t) − X˙ (t) → 0 as h → 0. (3.156) h −q
Definition 3.5.2 L´evyWhite Noise Field M˙ (t, z)
˙ X X M(t, z) = ei(t)πj(z)Kεκ(i,j) . (3.157) i∈N j∈N
Lemma 3.5.4 For i, j ∈ N, pij ≤ κ(i, j). (3.158)
(i+j−2)(i+j−1) Proof For i, j ∈ N, i+j −2 ≥ 0, hence κ(i, j) = j + 2 ≥ j. On the other √ hand, since κ(i, j) ≥ i. Hence, from the above arguments, we have ij ≤ κ(i, j).
Lemma 3.5.5 M˙ (t, z) ∈ (S)∗ µ − a.e.
Proof Since 2 ˙ X X 2 2 −q M(t, z) = ei (t)πj (z)(2κ(i, j)) . (3.159) −q i∈N j∈N 48
Then, from the preceding lemma and by orthonormality of {ei}i∈N and {πj}j∈N is orthonormal with respect to L2(λ) and L2(η) respectively, then Z 2 Z ˙ X X 2 2 p −q M(t, z) µ(dt, dz) ≤ ei (t)πj (z)(2 ij) dtη(dz) × −q × R+ R R+ R i∈N j∈N √ Z Z X −q 2 X p −q 2 = ( 2i) ei (t)dt ( 2j) πj (z)η(dz) i∈N R+ j∈N R X √ X = ( 2i)−q (p2j)−q. (3.160) i∈N j∈N Hence, the above equation converges for q > 2, thus proves our claim.
Remark 3.5.6 Radon-Nikodym Interpretation of the L´evyWhite Noise Field
Let t ∈ R+ and A ∈ B(R), then Z t Z M(t, A) = M(ds, dz). (3.161) 0 A Likewise, we can express M(t, A) as follows: Z t Z t Z M(t, A) =σ dW (s) + zN˜(ds, dz) 0 0 A
=I1(1[0,t](s)1A(z)) ! X X =I1 1[0,t], ei L2(λ) h1A, πjiL2(ν) ei(s)πj(z) i∈N j∈N X X = 1[0,t], ei L2(λ) h1A, πjiL2(ν) I1 (ei(s)πj(z)) i∈N j∈N X X Z t Z = ei(s)ds πj(z)η(dz)Kεκ(i,j) 0 A i∈N j∈N Z t Z X X = ei(s)πj(z)µ(ds, dz)Kεκ(i,j) 0 A i∈N j∈N Z t Z = M˙ (t, z)µ(ds, dz). (3.162) 0 A Hence, from (3.161) and (3.162), M˙ (s, z) as the Radon-Nikodym derivative in (S)∗ is as follows: M(dt, dz) = M˙ (t, z)µ(dt, dz). (3.163) 49
3.6 Wick Product and Hermite Transform
The Wick Product was a first introduced by Wick in 1950 as a renormalization tool in quantum field theory and its application in stochastic analysis was introduced by Hida and Ikeda in 1965 [46]. We state some of its properties which are similar to the Wiener and Poisson white noise theory.
Definition 3.6.1 Wick Product P P Let F = α∈I aαKα ∈ (S)−1 and G = β∈I bβKβ ∈ (S)−1, then the Wick Product of X and Y denoted by X Y is defined as
X X X X X Y = aαbβKα+β = aαbβKγ. (3.164) α∈I β∈I γ∈I α+β=γ
We define the Wick powers of X ∈ (S)−1 as follows:
n (n−1) 0 X = X X, n ∈ N,X = 1. (3.165)
Moreover, if f : C → C is entire, given by the following Taylor series expansion:
∞ X n f(z) = anz . (3.166) n=0
then, we define the following Wick version f (X), X ∈ (S)−1 given as
∞ X n f (X) = anX . (3.167) n=0
Moreover, we define the the Wick exponential of X ∈ (S)−1 denoted as
∞ X Xn exp(X) = . (3.168) n! n=0
2 2 whenever it is convergent in (S)−1. Let β ∈ L (R) and γ ∈ L (R × R0) deterministic, then we have the following Wick exponentials with respect to the Wiener and the compensated Poisson random measure respectively [27]:
Z Z 1 Z exp β(t)dW (t) = exp β(t)dW (t) − β2(t)dt , (3.169) 2 R+ R+ R+ 50
Z exp γ(t, z)N˜(dt, dz) R+×R0 Z Z = exp (log(1 + γ(t, z)) − γ(t, z))ν(dz)dt + log(1 + γ(t, z))N˜(dt, dz) . R+×R0 R+×R0 (3.170)
Some of the important properties of the Wick Product [46].
∗ ∗ 1. The Wick product is closed in the following spaces: (S)−1,(S) ,(S), (S)1, G , G∗ However, the Wick product is in general, not closed in L2(P ).
2. If either X or Y are deterministic, then
X Y = X · Y. (3.171)
3. Let X, Y , and Z ∈ (S)−1, then,
X Y =Y X,
(X Y ) Z =X (Y Z),
X (Y + Z) =(X Y ) + (X Z) (3.172)
that is, the commutative, associative, and distributive law holds respectively.
4. Wick algebra follows the same rules as ordinary algebra. For example,
(X + Y )2 =X2 + 2X Y + Y 2,
exp(X + Y ) = exp(X) exp(Y ). (3.173)
5. Expectation Properties
(a) Let X, Y , X Y ∈ L1(P ), then
E[X Y ] = E[X] · E[Y ]. (3.174)
Note: Independence of X and Y is not required. 51
(b) Let X ∈ L1(P ), then
E[exp X] = exp(E[X]). (3.175)
6. Wick Chain Rule: Let X(·): R → (S)−1 continuously differentiable, and let f : C → C be entire such that f(R) ⊂ R, then d d f (X(t)) = (f 0)(X(t)) X(t) (3.176) dt dt
in (S)−1.
P Definition 3.6.2 [46] Let F = α∈I aαHα ∈ (S)−1, then the Hermite Transform denoted by HF or F˜ is defined by
˜ X α HF (z) = F (z) = aαz ∈ C. (3.177) α∈I
N α α1 α2 αn 0 where z = (z1, z2, ··· , zn, ··· ) ∈ C , and z = z1 z2 ··· zn ··· , α ∈ I, and zj ∈ 1, ∀j ∈ N.
Some of the important properties of the Hermite transform which will enable us to manipulate the Wick product is stated below.
Theorem 3.6.1 [46] Let F,G ∈ (S)−1, then
H(F G)(z) = HF (z) ·HG(z). (3.178)
In addition, let f : C → C, be an entire function such that f(R) ⊂ R, and f (F ) ∈
(S)−1, then H(f (F ))(z) = f(HF (z)). (3.179)
whenever it converges in C. 52
Lemma 3.6.2 [46] Suppose X(t, ω) and F (t, ω) are (S)−1 processes such that
dX˜(t,z) ˜ (i) dt = F (t, z), ∀t ∈ (a, b), z ∈ Kq(δ)
˜ (ii) F (t, z) is a bounded function of (t, z) ∈ (a, b) × Kq(δ), continuous in t ∈ (a, b)
for each z ∈ Kq(δ). where
N X α qα 2 Kq(δ) = {z ∈ C : |z| (2N) < δ }. (3.180) α6==0
Then X(t, ω) is a differentiable (S)−1 process and dX(t, z) = F (t, z), ∀t ∈ (a, b). (3.181) dt
3.7 Stochastic Derivative
Consider the formal sum
X X F = In(fn) = cαKα (3.182) n∈N0 α∈I where
⊗ˆ α Kα = I|α| δ , (3.183)
X ⊗ˆ α fn = cαδ . (3.184) |α|=n If F ∈ D1,2, then we have the Malliavin derivative in F ∈ D1,2 as follows, ∞ X Dt,zF = nIn−1(fn(., (t, z))). (3.185) n=1
Let us relax the D1,2 case and let us define a stochastic derivative in F ∈ G∗ with ∗ the same form as (3.185). This is well-defined if Dt,zF converges in G . We employ this same strategy of Øksendal and Proske [64] in taking the stochastic derivative in F ∈ G∗ in the Poisson case. Similarly, we can define a stochastic derivative in ∗ ∗ F ∈ (S) whenever Dt,zF converges in (S) . In the Wiener case, the stochastic derivative corresponds to the Hida-Malliavin derivative [27]. 53
From (3.184), we have as follows:
X ⊗ˆ α fn(., (t, z)) = cαδ (., (t, z)). (3.186) |α|=n
T Let p = Index(α), then αi = 0 for i > p and εi = (0, ··· , 1, ··· , 0) , a unit vector with a 1 in the ith component and zero otherwise. Then, δ⊗ˆ α(., t, z) can be computed as follows:
p ˆ 1 X ˆ ˆ δ⊗α(., t, z) = α |α − ε |!δ⊗(α−εi)δ⊗εi (t, z) |α|! i i i=1 1 X ˆ ˆ = α δ⊗(α−εi)δ⊗εi (t, z). (3.187) |α| i i∈N Then, from (3.185), (3.186), and (3.187), we obtain the stochastic derivative
∞ X X X ⊗ˆ (α−εi) ⊗ˆ εi Dt,zF = In n cα αiδ δ (t, z) n=1 |α|=n i∈N ∞ X X X ⊗ˆ (α−εi) ⊗ˆ εi = cααiIn δ δ (t, z) n=1 |α|=n i∈N
X X ⊗ˆ εi = cααiKα−εi δ (t, z). (3.188) α∈I i∈N
Note that if F ∈ D1,2, then the Malliavin derivative in (3.185) and the stochastic derivative in (3.188) coincide. Since κ is bijective, then for any i ∈ N, ∃(k, m) ∈ N×N such that i = κ(k, m). Hence, we can also express (3.188) as follows:
X X X ⊗ˆ εκ(k,m) Dt,zF = cαακ(k,m)Kα−εκ(k,m) δ (t, z) (3.189) α∈I k∈N m∈N X X X = cαακ(k,m)Kα−εκ(k,m) ek(t)πm(z) (3.190) α∈I k∈N m∈N X X X = cβ+εκ(k,m) (βκ(k,m) + 1)Kβek(t)πm(z). (3.191) β∈I k∈N m∈N 54
Theorem 3.7.1 Closability of Stochastic Derivatives
∗ Let Fm,F ∈ G such that as m → ∞
∗ (i) Fm → F in G ,
∗ (ii) Dt,zFm converges in G
∗ Then, Dt,zFm → Dt,zF in G .
Proof We follow a parallel arguments in showing closability in the D1,2 case [27]. Consider the formal expansion
X F = cαKα, (3.192) α∈I X m Fm = cα Kα (3.193) α∈I ∗ such that Fm → F in G , then there exists q > 0 such that
2 X m 2 −2q|α| ||Fm − F ||G∗ = α!|cα − cα| e → 0. (3.194) α∈I m Hence, cα → cα. Since the stochastic derivative of Dj,t,zF is given as
X X X m Dt,zFm = cα ακ(k,l)Kα−εκ(k,l) ek(t)πl(z). (3.195) α∈I k∈N l∈N ∗ and since Dt,zFm converges in G then by the Cauchy criterion, there exists r > 0 such that
2 X X X m n 2 2 −2r|α−κ(k,l)|! ||Dt,zFm − Dt,zFn||G−r = |cα − cα| ακ(k,l)(α − κ(k,l))!e → 0. α∈I k∈N l∈N (3.196)
From Fatou’s lemma,
X X X m 2 2 −2r|α−κ(k,l)!| lim |cα − cα| ακ(k,l)(α − κ(k,l))!e m→∞ α∈I k∈N l∈N
X X X m n 2 2 −2r|α−κ(k,l)|! = lim lim inf |cα − cα| ακ(k,l)(α − κ(k,l))!e m→∞ n→∞ α∈I k∈N l∈N
X X X m n 2 2 −2r|α−κ(k,l)|! ≤ lim lim inf |cα − cα| ακ(k,l)(α − κ(k,l))!e = 0. (3.197) m→∞ n→∞ α∈I k∈N l∈N 55
Hence,
2 ||Dt,zFm − Dt,zF ||G−r → 0. (3.198)
So therefore,
∗ Dt,zFm → Dt,zF ∈ G−r ⊂ G . (3.199)
Theorem 3.7.2 Let ∞ X ∗ F = In(fn) ∈ G (3.200) n=0 2 n ∗ where fn ∈ Ls(µ ). Then, Dt,zF ∈ G , µ a.e. given by
X Dt,zF = nIn(fn−1(·, (t, z))). (3.201) n=1
Proof We follow parallel arguments of [67] in the Poisson case. Since F ∈ L2(P ) and define its partial sum as
m X Fm = In(fn) (3.202) n=0
∗ then Fm → F in G as m → ∞. Pick q > 0 be arbitrary, then
∞ 2 X 2 −2qn 2 n ||Fm − F ||G−q = n!||fn||L (µ )e → 0. (3.203) n=m+1
Since q > 0 is arbitrary, then F ∈ G∗. Note that
∞ 2 X 2 −2q(n−1) 2 n−1 ||Dt,zFm − Dt,zF ||G−q = nn!||fn(·, (t, z))||L (µ )e . (3.204) n=m+1 56
Integrating both sides and as m → ∞ yields Z 2 ||Dt,zFm − Dt,zF ||G−q µ(dt, dz) [0,t]×R Z ∞ X 2 −2q(n−1) = nn!||fn(·, (t, z))||L2(µn−1)e µ(dt, dz) [0,t]×R n=m+1 ∞ Z X 2 −2q(n−1) . = nn! ||fn(·, (t, z))||L2(µn−1)e µ(dt, dz) n=m+1 [0,t]×R ∞ X 2 −2q(n−1) = nn!||fn||L2(µn)e n=m+1 ∞ X 2 ≤K n!||fn||L2(µn) → 0 (3.205) n=m+1
for some K > 0. Thus, verifies our claim.
We define the counterpart of Dom D, Dom D0, and Dom D1 in the space G∗ denoted by G, G0, and G1 respectively.
Definition 3.7.1 Let F ∈ G∗ with chaos expansion of the form (3.182). G is the set F ∈ G∗ such that there exists q > 0 such that there exists q > 0 such that
∞ X 2 −2q(n−1) nn!||fn||L2(µn)e < ∞. (3.206) n=1
For F ∈ G, the stochastic derivative DF :Ω × [0,T ] × R → R defined by
∞ X Dt,zF = nIn−1 (fn(·, (t, z))) . (3.207) n=1 with convergence in G∗ × L2(µ). Moreover, we have the following: Z 2 2 ||D F || 2 = ||D F || 2 µ(dt, dz) t,z G−q L (µ) t,z ×L (µ) [0,T ]×R ∞ X 2 −2q(n−1) = nn!||fn||L2(µn)e < ∞. (3.208) n=1
Definition 3.7.2 Let F ∈ G∗ with chaos expansion of the form (3.182). 57
(i) G0 is the set of F ∈ G∗ such that σ > 0 and there exists q > 0 such that
∞ Z T X 2 −2q(n−1) 2 nn! ||fn(·, (t, 0))||L2(µn−1)e σ dt < ∞. (3.209) n=1 0
For F ∈ G0, we define
∞ X Dt,0F = nIn−1 (fn(·, (t, 0))) . (3.210) n=1
with convergence in G∗ × L2(λ). Moreover, we have the following:
Z T 2 2 2 ||D F || 2 = ||D F || σ dt t,0 G−q×L ([0,T ]) t,0 G−q 0 ∞ Z T X 2 −2q(n−1) 2 = nn! ||fn(·, (t, 0))||L2(µn−1)e σ dt < ∞. n=1 0 (3.211)
(ii) G1 is the set of F ∈ G∗ such that ν 6= 0 and there exists q > 0 such that
∞ Z X 2 −2q(n−1) 2 nn! ||fn(·, (t, z))||L2(µn−1)e z ν(dz)dt < ∞. (3.212) n=1 [0,T ]×R0
For F ∈ G1, we define
∞ X Dt,zF = nIn−1 (fn(·, (t, z))) , z 6= 0. (3.213) n=1
with convergence in G∗ × L2(z2ν(dz)dt). Moreover, we have the following: Z 2 2 2 ||D F || 2 = ||D F || z ν(dz)dt t,z G−q×L (R0) t,z G−q [0,T ]×R ∞ Z X 2 −2q(n−1) 2 = nn! ||fn(·, (t, z))||L2(µn−1)e z ν(dz)dt < ∞. n=1 [0,T ]×R0 (3.214)
If σ > 0 and ν 6= 0, then G = G0 ∩ G1 ⊂ G∗.
We state a chain rule in G. The proof of this chain rule is analogous to Dom D case [36], [80] by weakening the assumption from L2(P ) to G∗. 58
Theorem 3.7.3 Chain Rule
1 n Let F = (F1, ··· ,Fn), Fi ∈ G for i ∈ {1, ··· , n} and ϕ ∈ C (R ; R). Suppose that
(i) ϕ(F ) ∈ G∗,
(ii) there exists q0 > 0 such that
n X ∂ϕ(F ) 2 Dt,0 ∈ L (λ), ∂xk k=1 G−q0
(iii) there exists q1 > 0 such that
ϕ(F1 + zDt,zF1, ··· ,Fn + zDt,zFn) − ϕ(F1, ··· ,Fn) 2 2 ∈ L (z ν(dz)dt). z G−q1
Then, ϕ(F ) ∈ G and
n X ∂ϕ(F ) Dt,zϕ(F ) = Dt,0Fk1{z=0} ∂xk k=1 ϕ(F + zD F , ··· ,F + zD F ) − ϕ(F , ··· ,F ) + 1 t,z 1 n t,z n 1 n 1 . (3.215) z {z6=0} Corollary 3.7.4 Product Rule Let F,G ∈ L2(P ) and suppose that F 2,G2,FG ∈ G∗, then
Dt,z(FG) = FDt,zG + GDt,zF + zDt,zFDt,zG. (3.216)
Lastly, we state the chain rule under a Wick polynomial g that is entire.
Theorem 3.7.5 [25] Let F ∈ (S)∗ and g : C → C be entire, then
0 Dt,zg (F ) =(g ) (F ) Dt,zF. (3.217)
3.8 Generalized Expectation and Generalized Conditional Expectation
Definition 3.8.1 Generalized Expectation and Generalized Conditional Expectation in (S)∗ P ∗ Let F = α∈I cαKα ∈ (S) , we define the generalized expectation E[F ] is given by
E[F ] = c0. (3.218) 59
We define generalized conditional expectation E[F |FA] with respect to FA, A ∈ B(R+) is given by X E[F |FA] = cαE[Kα|FA] (3.219) α∈I whenever it converges in (S)∗.
Remark 3.8.1 If F ∈ L2(P ) ⊂ (G)∗, then the generalized expectation coincides with the usual conditional expectation.
Theorem 3.8.2 [27] Properties of conditional expectation in (S)∗
∗ (i) Suppose that F , G, E[F |Ft], and E[G|Ft] belongs to (S) , then
E[F G|FA] = E[F |FA] E[G|FA]. (3.220)
In addition, if F , G, ∈ L1(P ), then
E[F G] = E[F ] · E[G]. (3.221)
∗ ∗ (ii.) Let F ∈ (S) , and suppose that exp (F ), E[F |Ft], exp (E[F |FA]) ∈ (S) then
E[exp F |FA] = exp (E[F |FA]). (3.222)
In addition, if F ∈ L1(P ), then
E[exp F ] = exp(E[F ]). (3.223)
Theorem 3.8.3 [27] Suppose that u(s, x) is Skorohod integrable and E[u(s, x)|Ft] ∈ ∗ (S) for all (s, x) ∈ R+ × R, then Z Z
E u(s, x)M(δs, dx) Ft = E [u(s, x)| Ft] M(δs, dx), R+×R [0,t]×R Z
E u(s, x)M(δs, dx) Ft =0. (3.224) (t,∞)×R 60
Definition 3.8.2 Generalized Expectation and Generalized Conditional Expectation in G∗ P∞ ∗ ∗ Let F = n=0 In(fn) ∈ G , we define the generalized expectation E[F ] in G is given by
E[F ] = I0(f0) (3.225) and we define the conditional expectation of F with respect to A ∈ B([0,T ]) is given by ∞ X ⊗n E[F |FA] = In(fn1A ). (3.226) n=0
The generalized conditional expectation in G∗ is more tractable to handle com- pared to the generalized conditional expectation in (S)∗.
Remark 3.8.4 If F ∈ L2(P ) ⊂ G∗, then the generalized expectation coincides with the usual conditional expectation.
Lemma 3.8.5 [14], [27] Basic properties of conditional expectation in G∗
(i) Closure under G∗
∗ ∗ Let F ∈ G and A ∈ B([0,T ]), then E[F |FA] ∈ G and for some q > 0.
kE[F |F ]k ≤ ||F || . (3.227) A G−q G−q
(ii) Closure under L2(P )
∗ 2 Let F ∈ G and A ∈ B([0,T ]), then E[F |FA] ∈ L (P ) and
kE[F |FA]kL2(P ) ≤ ||F ||L2(P ). (3.228)
(iii) Linearity
Let F,G ∈ G∗, a, b ∈ R, and A ∈ B([0,T ]), then
E[aF + bG|FA] = aE[F |FA] + bE[G|FA]. (3.229) 61
(iv) Tower Property Let F ∈ G∗, and A, B ∈ B([0,T ]) such that A ⊂ B, then
E[E[F |FA]FB] = E[F |FA] = E[E[F |FB]FA]. (3.230)
Proof We denote the following formal expansions:
∞ ∞ X X F = In(fn),G = In(gn) (3.231) n=0 n=0
2 n where fn, gn ∈ Ls(µ ) for all n ∈ N0.
∗ (i) Since F ∈ G , then ||F ||G−q < ∞ for some q > 0. Hence,
∞ 2 X 2 2 kE[F |F ]k = n! f 1⊗n e−2qn ≤ kF k < ∞. (3.232) A G−q n A L2(µn) G−q n=0
2 (ii) Since F ∈ L (P ), then ||F ||L2(P ) < ∞. Hence,
∞ 2 X ⊗n 2 kE[F |FA]kF ∈L2(P ) = n! fn1A L2(µn) ≤ kF kL2(P ) < ∞. (3.233) n=0
(iii) Using Cauchy-Schwartz inequality, we can show that aF + bG ∈ G∗. Then, we have the following expansion
∞ X ⊗n E[aF + bG|FA] = In((afn + bgn)1A ) n=0 ∞ ∞ X ⊗n X ⊗n =a In(fn1A ) + b In(gn1A ) = aE[F |FA] + bE[G|FA]. n=0 n=0 (3.234)
(iv) Since A ⊂ B, then 1A1B = 1B1A = 1A∩B = 1A. Hence,
∞ ∞ ∞ X ⊗n ⊗n X ⊗n ⊗n X ⊗n E[E[F |FA]FB] = In(fn1A 1B ) == In(fn1B 1A ) = In(fn1A ). n=0 n=0 n=0 (3.235) That is,
E[E[F |FA]FB] = E[F |FA] = E[E[F |FB]FA]. (3.236) 62
Theorem 3.8.6 [27], [66] Let F ∈ G∗ and A ∈ B([0,T ]), then
Dt,zE[F |FA] = E[Dt,zF |FA]1A(t). (3.237)
Proof
∞ ∞ X ⊗n X ⊗(n−1) Dt,zE[F |FA] = Dt,z In(fn1A ) = In−1(fn1A )1A = E[Dt,zF |FA]1A(t). n=0 n=1 (3.238)
∗ Corollary 3.8.7 [27] Let u : R+ × R → G be an F-predictable process, then
(i) Dt,zu(s, x) is F-predictable process for all (t, z) ∈ R+ × R,
(ii) Dt,zu(s, x) = 0, for s < t, z ∈ R.
Proof The assertion holds in (i) and (ii) by applying previous theorem
Dt,zu(s, x) = E[u(s, x)|Fs− ]1[0,s)(t) = E[u(s, x)|Fs− ]1{t>s}. (3.239)
Theorem 3.8.8 [27] Properties of conditional expectation in G∗
(i) Let F,G ∈ G∗, and A ∈ B([0,T ]), then
E[F G|FA] = E[F |FA] E[G|FA]. (3.240)
(ii) Let F , exp F ∈ G∗, and A ∈ B([0,T ]) then
E[exp F |FA] = exp (E[F |FA]). (3.241) 63
3.9 Skorohod Integration on G∗
Definition 3.9.1 Skorohod Integral in G∗
∗ Let u : R+ × R → G with the formal expansion given by
∞ X u(t, z) = In(fn(·, t, z)) (3.242) n=0 such that
∞ X ˜ 2 −2q(n+1) (n + 1)!||f||L2(µn+1)e < ∞ (3.243) n=0
˜ 2 n+1 where fn ∈ Ls(µ ), and for some q > 0. Then we define the Skorohod integral of u with respect to M as follows: Z ∞ X ˜ δ(u) = u(t, x)M(δt, dx) = In+1(fn). (3.244) R+×R n=0 We say that u is Skorohod integrable if δ(u) ∈ G∗, that is, there exists some q > 0 such that
∞ 2 X 2 −2q(n+1) ˜ 2 n+1 ||δ(u)||G−q = (n + 1)!||fn||L (µ )e < ∞. (3.245) n=0 Theorem 3.9.1 Fundamental Theorem of Stochastic Calculus
∗ Let u : R+ × R → G be a random field satisfying the following conditions:
(i) u ∈ L2(P × µ),
(ii) Dt,zu is Skorohod integrable for all (t, z) ∈ [0,T ] × R,
∗ ∗ (iii) Dt,zδ(u) ∈ G and δ(Dt,zu) ∈ G , for all (t, z) ∈ [0,T ]×R and there exists q > 0 such that Z 2 ||Dt,zδ(u)||G−q µ(dt, dz) < ∞, (3.246) [0,T ]×R
Z 2 ||δ(Dt,zu)||G−q µ(dt, dz) < ∞. (3.247) [0,T ]×R 64
Then,
Dt,z (δ(u)) = u(t, z) + δ(Dt,zu), (3.248) that is, Z Z Dt,z u(s, x)M(δs, dx) = u(t, z) + Dt,zu(s, x)M(δs, dx). (3.249) [0,T ]×R [0,T ]×R Proof First, suppose the base case where u(s, x) has of the form
u(s, x) = In(fn(·, (s, x))) (3.250) then
˜ δ(u) = In+1(fn) (3.251) where
1 f˜ = f˜ ((t , z ), ··· , (t , z )) = [f (·, (t , z )) + ··· f (·, (t , z ))] . n n 1 1 n+1 n+1 n + 1 n 1 1 n n+1 n+1 (3.252)
Since
1 f˜ (·, (t, z)) = [f (·, (t , z ), (t, z)) + ··· f (·, (t , z ), (t, z)) + f (·, ·, (t, z))] n n + 1 n 1 1 n n n n (3.253) then
˜ Dt,zδ(u) =(n + 1)In(f(·, (t, z)))
=In(fn(·, (t1, z1), (t, z))) + ··· In(fn(·, (tn, zn), (t, z))) + In(fn(·, ·, (t, z)))
=In(fn(·, (t1, z1), (t, z))) + ··· In(fn(·, (tn, zn), (t, z))) + u(t, z) (3.254) and also,
Dt,zu(s, x) = nIn−1(fn(·, (t, z), (s, x))). (3.255) 65
Then from (ii), its Skorohod integral is given as Z δ(Dt,zu) = Dt,zu(s, x)M(δs, dx) [0,t]×R Z = nIn−1(fn(·, (t, z), (s, x))M(δs, dx) [0,t]×R ˜ =nIn(fn(·, (t, z), ·)) (3.256) where 1 f˜ (·, (t, z), ·) = [f (·, (t, z), (t , z )) + ··· + f (·, (t, z), (t , z ))] (3.257) n n n 1 1 n n n is the symmetrization with respect to (t1, z1), ··· , (tn, zn). Hence, from (3.256) and (3.257) yields
δ(Dt,zu) =In(fn(·, (t1, z1), (t, z))) + ··· In(fn(·, (tn, zn), (t, z))) (3.258)
Then from (3.254) and (3.258) yields (3.248). On the other hand, consider the general case of u(s, x) has of the form
∞ X u(s, x) = In(fn(·, (s, x))). (3.259) n=0 Then, we have the following:
∞ X ˜ δ(u) = In+1(fn), (3.260) n=0 ∞ X ˜ Dt,zδ(u) = (n + 1)In(fn(·, (t, z))). (3.261) n=0 Consider the partial sum
m X um(s, x) = In(fn(·, (s, x))). (3.262) n=0 From (i) and by isometry, Z ∞ 2 X 2 ||u||L2(P ×µ) = ||fn(·, (t, z))||L2(µn)µ(dt, dz) [0,T ]×R n=0 ∞ X 2 = ||fn||L2(µn+1) < ∞. (3.263) n=0 66
So therefore, we have the following convergence as m → ∞, ∞ 2 X 2 ||u − um||L2(P ×µ) = ||fn||L2(µn+1) → 0. (3.264) n=m+1
2 Hence, um → u in L (P × µ). Applying the result from base case, we obtain
Dt,z(δ(um)) = δ(Dt,zum) + um(t, z). (3.265)
To show (3.248), we need to show the following as m → ∞,
Dt,z(δ(um)) →u(t, z) + δ(Dt,zu), (3.266)
Dt,z(δ(um)) →Dt,z(δ(u)) (3.267) in G∗ × L2(µ). To show (3.266), we have we have the following:
∞ X Dt,zu(s, x) = nIn−1(fn(·, (t, z), (s, x))) n=1 ∞ ∞ X ˜ X ˜ δ(Dt,zu(s, x)) = nIn(fn(·, (t, z), ·)) = In(nfn(·, (t, z), ·)). (3.268) n=1 n=0 where the last equation is from (ii). Then from (iii), there exists q > 0 such that Z 2 ||δ(Dt,zu)||G−q µ(ds, dx) [0,T ]×R Z ∞ X ˜ 2 −2qn = n!||nfn(·, (t, z), ·)||L2(µn)e µ(dt, dz) [0,T ]×R n=0 ∞ Z X 2 ˜ 2 −2qn = n!n ||fn(·, (t, z), ·)||L2(µn)µ(dt, dz)e n=0 [0,T ]×R ∞ X 2 ˜ 2 −2qn = n!n ||fn||L2(µn+1)e < ∞. (3.269) n=0 Hence, as m → ∞, we obtain Z ∞ 2 X 2 2 −2qn ˜ 2 n+1 ||δ(Dt,zu) − δ(Dt,zum)||G−q µ(ds, dx) = n!n ||fn||L (µ )e → 0. [0,T ]×R n=0 (3.270)
This implies as m → ∞,
∗ 2 δ(Dt,zum) → δ(Dt,zu), G × L (µ). (3.271) 67
From (3.265), we have the following:
∗ 2 Dt,z(δ(um)) → u(t, z) + δ(Dt,zu), G × L (µ). (3.272)
On the other hand, to show (3.267), note that
˜ (n + 1)f(·, (t, z)) =fn(·, (t1, z1), (t, z)) + ··· + fn(·, (tn, zn), (t, z)) + fn(·, ·, (t, z)) ˜ =nf(·, (t, z), ·) + fn(·, ·, (t, z)) (3.273)
then we have,
n 1 f˜(·, (t, z)) = f˜ (·, (t, z), ·) + f (·, ·, (t, z)). (3.274) n + 1 n n + 1 n
From the parallelogram inequality
2 2 2n 2 2 2 ||f˜ (·, (t, z))|| 2 n ≤ ||f˜(·, (t, z), ·)|| 2 n + ||f (·, ·, (t, z))|| 2 n . n L (µ ) (n + 1)2 L (µ ) (n + 1)2 n L (µ ) (3.275)
Hence,
˜ 2 ||fn||L2(µn+1) Z ˜ 2 = ||fn(·, (t, z))||L2(µn)µ(dt, dz) [0,T ]×R Z 2 2n ˜ 2 2 2 ≤ 2 ||fn(·, (t, z), ·)||L2(µn) + 2 ||fn(·, ·, (t, z))||L2(µn) µ(dt, dz) [0,T ]×R (n + 1) (n + 1) 2 2n ˜ 2 2 2 = ||f || 2 n+1 + ||f || 2 n+1 . (3.276) (n + 1)2 n L (µ ) (n + 1)2 n L (µ )
So therefore, Z 2 ||Dt,zδ(u)||G−q µ(dt, dz) [0,T ]×R ∞ ∞ X 2 ˜ 2 −2qn X 2 −2qn ≤ 2 n n!||fn||L2(µn+1)e + 2 n!||fn||L2(µn+1)e n=0 n=0 Z 2 2 2 ≤ 2 ||Dt,zδ(u)||G−q µ(dt, dz) + 2||u||L (P ×µ) < ∞. (3.277) [0,T ]×R 68
The last term is finite from (i) and (iii). Then finally, we have the following expression as m → ∞, Z 2 ||Dt,zδ(u) − Dt,zδ(um)||G−q µ(dt, dz) [0,T ]×R ∞ ∞ X 2 ˜ −2qn X −2qn ≤ 2 n n!||fn||L2(µn+1)e + 2 n!||fn||L2(µn+1)e → 0. (3.278) n=m+1 n=m+1
Hence, as m → ∞
∗ 2 Dt,z(δ(um)) → u(t, z) + Dt,z(δ(u)), G × L (µ). (3.279)
Finally, the limits of the integral in (3.249) is a consequence of (3.272).
The special case of the theorem if u is predictable then by applying Corollary 3.8.7, we have the following corollary.
Corollary 3.9.2 Let u satisfies the conditions of the preceding theorem and in addi- tion, suppose that it is also predictable, then we have the following identity: Z Z Dt,z u(s, x)M(ds, dx) = u(t, z) + Dt,zu(s, x)M(ds, dx). (3.280) [0,T ]×R [t,T ]×R Corollary 3.9.3 Let u satisfies the conditions of the preceding corollary, then Z Z −1 Dt,z u(s, 0)dW (s) =σ u(t, 0)1{z=0} + Dt,zu(s, 0)dW (s), (3.281) [0,T ] [t,T ] Z Z ˜ −1 ˜ Dt,z u(s, x)xN(ds, dx) =σ u(t, z)1{z6=0} + Dt,zu(s, x)N(ds, dx). [0,T ]×R0 [t,T ]×R0 (3.282)
Remark 3.9.4 From the corollary, we have the following identities:
(i) For z = 0, Z Z −1 Dt,0 u(s, 0)dW (s) =σ u(t, 0) + Dt,zu(s, 0)dW (s), (3.283) [0,T ] [t,T ] Z Z ˜ ˜ Dt,0 u(s, x)xN(ds, dx) = Dt,zu(s, x)N(ds, dx), (3.284) [0,T ]×R0 [t,T ]×R0 69
(ii) For z 6= 0, Z Z Dt,z u(s, 0)dW (s) = Dt,zu(s, 0)dW (s), (3.285) [0,T ] [t,T ] Z Z ˜ ˜ Dt,z u(s, x)xN(ds, dx) =u(t, z) + Dt,zu(s, x)xN(ds, dx) [0,T ]×R0 [t,T ]×R0 (3.286)
Proof From the independent measure
˜ M(ds, dx) = σdW (t)δ0(x) + xN(ds, dx)(1 − δ0(x)) (3.287)
and from (3.281), we have following: Z Dt,z u(s, x)M(ds, dx) [0,T ]×R Z Z ˜ =σDt,z u(s, 0)dW (s) + Dt,z u(s, x)xN(ds, dx) (3.288) [0,T ] [0,T ]×R0 and Z u(t, z)+ Dt,zu(s, x)M(ds, dx) [t,T ]×R Z = u(t, z)1{z=0} + σ Dt,zu(s, 0)dW (s) [0,T ] Z ˜ +u(t, z)1{z6=0} + Dt,zu(s, x)xN(ds, dx). (3.289) [0,T ]×R0 Separating the continuous and the jump term, we obtain the desired identity.
We extend the concept of (S)∗ integrability [45], [64] in the Canonical L´evyspace.
Definition 3.9.2 (S)∗ integrability
∗ ∗ The random field u : R+ × R is (S) -integrable if the action of u for all F ∈ (S) satisfies
< u, F >∈ L1(µ) (3.290)
The (S)∗-integral denoted by
. Z I = u(t, z)µ(dt, dz) (3.291) R+×R 70 is a unique element in (S)∗ such that Z Z u(t, z)µ(dt, dz),F = hu(t, z),F i µ(dt, dz). (3.292) R+×R R+×R Theorem 3.9.5 Wick-Skorohod Identity Let u be Skorohod integrable with respect to M, then u(t, z) M˙ (t, x), for all (t, z) ∈
∗ R+ × R is (S) integrable and Z Z u(t, x)M(δt, dz) = u(t, z) M˙ (t, z)µ(dt, dz). (3.293) R+×R R+×R Proof Since G ∈ (S)∗, then it remains to show the identity in (3.293). Likewise, since u is Skorohod-integrable with respect to M, then it has a representation of the form ∞ X X u(t, z) = cα(t, z)Kα = In(fn(·, (t, z))) (3.294) α∈I n=0 2 n where fn(·, (t, z)) ∈ Ls(µ ). The right-hand side of (3.293) yields the following: Z u(t, z) M˙ (dt, dz)µ(dt, dz) R+×R Z X X X = cα(t, x)Kα ek(t)πm(z)Kκ(k,m) µ(dt, dz) × R+ R α∈I k∈N m∈N X X X Z = cα(t, x)ek(t)πm(z)µ(dt, dx)Kα+κ(k,m) × α∈I k∈N m∈N R+ R X X X = < cα, ekpm >L2(µ) Kα+κ(k,m) . (3.295) α∈I k∈N m∈N Now since
X ⊗ˆ α fn(·, (t, z)) = cα(t, z)δ (3.296) |α|=n
Then, fn(·, (t, z)) has the following orthonormal expansion
X X X ⊗ˆ α fn(·, (t, z)) = < cα, ekπm >L2(µ) δ ek(t)πm(z). (3.297) k∈N m∈N |α|=n 71
Hence, the left-hand side of (3.293) yields the following: Z u(t, z)M(δt, dz) R+×R ∞ Z X = In(fn(·, t, x))M(δt, dz) R+×R n=0 Z ∞ X X X X ⊗ˆ α = In < cα, ekπm >L2(µ) δ ek(t)πm(z) M(δt, dz) × R+ R n=0 k∈N m∈N |α|=n ∞ Z X X X X ⊗ˆ α ⊗κ(k,m) = In < cα, ekπm >L2(µ) δ ⊗ δ M(δt, dz) × n=0 R+ R k∈N m∈N |α|=n
∞ X X X X ⊗ˆ (α+κ(k,m)) = In+1 < cα, ekπm >L2(µ) δ n=0 k∈N m∈N |α|=n ∞ X X X X ⊗ˆ (α+κ(k,m)) = < cα, ekπm >L2(µ) In+1 δ n=0 |α|=n k∈N m∈N X X X = < cα, ekpm >L2(µ) Kα+κ(k,m) . (3.298) α∈I k∈N m∈N Finally, form (3.295) and (3.298) gives us the desired identity.
3.10 Clark-Ocone Theorem in L2(P )
With the framework concepts presented for the white noise theory for Canonical L´evyprocesses, our goal is to show a Clark-Ocone theorem in L2(P ) with respect to the independent random measure M. The steps in proving the Clark-Ocone theorem in L2(P ) is similar to the Wiener and Poisson white noise cases [27] by first showing the Clark-Ocone theorem for a Wick polynomial then establish an auxiliary lemma (Lemma 3.10.4) that will prove the Clark-Ocone theorem in L2(P ). Denote the following polynomial:
X α N P (x) = cαx , x ∈ R , cα ∈ R (3.299) α∈I 72
α . α1 α2 0 where x = x1 x2 ··· and xj = 1, j ∈ N. Denote its Wick version of the polynomial T at X = (X1, ··· Xn) by
X α P (X) = cαX . (3.300) α∈I
∗ Throughout this section, we assume that a process u : R+ → (S) is differentiable in the (S)∗ sense. define the following processes in (S)∗ as follows: Z Xk,m = ek(x)πm(s)M(ds, dx) = Kκ(k,m) , R+×R Z (t) . (t) Xk,m = ek(x)πm(s)M(ds, dx) = Kκ(k,m) . (3.301) [0,t]×R from the relation (3.163) in (S)∗, then from the Wick-Skorohod identity, we have Z ˙ Xk,m = ek(x)πm(s)M(ds, dx)µ(ds, dx), R+×R Z (t) ˙ Xk,m = ek(x)πm(s)M(ds, dx)µ(ds, dx). (3.302) [0,t]×R Then, from the have the following derivative in (S)∗
d X(t) = e (x)L (t) (3.303) dt k,m k m
where Z ˙ Lm(t) = πm(t)M(ds, dx)η(dx). (3.304) R
We let P (x) be a polynomial in Rn, that is, P (x) can be written as follows:
X α n n P (x) = cαx x ∈ R , cα ∈ R, I = N . (3.305) α∈I and let
T X =(Xk1,m1 , ··· ,Xkn,mn ) , X(t) =(X(t) , ··· ,X(t) )T , k1,m1 kn,mn T α =(ακ(k1,m1), ··· , ακ(kn,mn)) (3.306) 73
where ki, mi ∈ N, for all i ∈ {1, ··· , n}. Then, its Wick version of the polynomial at T X = (X1, ··· Xn) by
X α P (X) = cαX . (3.307) α∈I Moreover, we have the following identities:
α κ(k1,m1) κ(kn,mn) X = (Xk1,m1 ) · · · Xk,mn = Kα, α κ(k1,m1) κ(kn,mn) X(t) = X(t) · · · X(t) = (t). (3.308) k1,m1 k,mn Kα
If F = P (X) ∈ G∗, then its generalized conditional expectation in G∗ is given by
X (t)α (t) E[F |Ft] = cα X = P X . (3.309) α∈I
∗ We define the concept of FT measurablity in G in the following definition.
∗ Definition 3.10.1 [27] Let T > 0 be a constant, we say that F ∈ G is FT measur- able if
E[F |FT ] = F. (3.310)
∗ Lemma 3.10.1 [27] F ∈ G is FT measurable iff F can be written as
X (T )α F = cα X . (3.311) α∈I Lemma 3.10.2 Differentiation of the Wick Polynomial
(i)
n X X α−κ(ki,mi) Dt,zP (X) = cαακ(ki,mi)X eki (t)πmi (z), (3.312) i=1 α∈I n X X (α−κ(ki,mi)) Dt,zP (X) = cαακ(ki,mi)X eki (t)πmi (z). (3.313) i=1 α∈I
(ii)
n d X ∂P P (X(t)) = X(t) e (t)L (z). (3.314) dt ∂x ki mi i=1 i 74
Proof (i) From the chain rule,
n X ∂P (X) D P (X) = D X , (3.315) t,z ∂x t,z ki,mi i=1 i n X ∂P (X) D P (X) = D X . (3.316) t,z ∂x t,z ki,mi i=1 i Since
∂P (X) X α−κ(ki,mi) = cαακ(ki,mi)X , (3.317) ∂xi α∈I ∂P (X) X (α−κ(ki,mi)) = cαακ(ki,mi)X (3.318) ∂xi α∈I and Z
Xki,mi = eki (s)πmi (x)M(ds, dx) = I1(eki πmi ) (3.319) R+×R Then,
Dt,zXki,mi = eki (t)πmi (x). (3.320)
Plugging (3.317) − (3.320) into (3.315) − (3.316), yields the desired result.
(ii) From the Wick chain rule and (3.319), we obtain n d X ∂P d (t) P (X(t)) = X(t) X dt ∂x dt ki,mi i=1 i n X ∂P = X(t) e (t)L (z). (3.321) ∂x ki mi i=1 i
To show the Clark-Ocone theorem in L2(P ), we first establish a Clark-Ocone theorem for polynomials.
Theorem 3.10.3 Clark-Ocone Theorem for Polynomials
∗ Let F ∈ G be an FT measurable Wick polynomial of degree n, then Z F =E[F ] + E[Dt,zF |Ft−]M(dt, dz). (3.322) [0,T ]×R 75
Proof Since F Wick polynomial of degree n, then it has of the form
X α F = P (X) = cαX (3.323) α∈I
n where P (x) is a polynomial in R . Moreover, since F is an an FT measurable, then " #
X (T )α F =E[F |FT ] = E cα X FT α∈I h α i X (T ) = cαE X FT α∈I X (T )α = cα X . (3.324) α∈I
The expansion F and E[Dt,zF |Ft−] consists of finite number of terms. Hence, both processes are Skorohod integrable. Then from the Wick-Skorohod identity and from the preceding lemma, Z E[Dt,zF |Ft−]M(dt, dz) [0,T ]×R " n # Z X ∂P = E (X) e (t)π (z) F M˙ (t, z)η(dz)dt ∂x ki mi t− [0,T ]×R i=1 i n Z X ∂P = X(t) e (t)π (z) M˙ (t, z)η(dz)dt ∂x ki mi [0,T ]×R i=1 i n Z X ∂P Z = X(t) π (z)M˙ (t, z)η(dz)e (t)dt ∂x mi ki [0,T ] i=1 i R n Z X ∂P Z = X(t) e (t) π (z)M˙ (t, z)η(dz)dt. ∂x ki mi [0,T ] i=1 i R (3.325)
Now, since
Z d e (t) π (z)M˙ (t, z)η(dz)dt = e (t)L (t) = X(t) . (3.326) ki mi ki mi ki,mi R dt 76
Then, from the Wick chain rule and since F is be FT − measurable, we finally obtain Z E[Dt,zF |Ft−]M(dt, dz) [0,T ]×R Z n X ∂P d (t) = X(t) X dt ∂x dt ki,mi [0,T ] i=1 i Z d = P X(t) [0,T ] dt =P X(T ) − P X(0)
=E[F |FT ] − E[F |F0] =F − E[F ]. (3.327)
We need the following auxiliary lemma in establishing a Clark-Ocone in L2(P ).
Theorem 3.10.4 Let F ∈ G∗, then we have the following:
∗ ∗ (i) Dt,zF ∈ G , G , µ a.e.,
∗ ∗ (ii) Let Fn ∈ G , ∀ n ∈ N such that Fn → F in G as n → ∞, then there exists a ∗ ∗ sub-sequence Fnk , k ∈ N such that Dt,zFnk → Dt,zF ∈ G as k → ∞, G , µ a.e.
Proof The proof is similar to the proof of Okur [66] in the Wiener case.
(i) Since F ∈ G∗, then it has a formal expansion X F = cαKα (3.328) α∈I and there exists q ∈ R such that
2 X 2 −2q|α| ||F ||G−q = α!cαe . (3.329) α∈I Then X X X Dt,zF = cβ+εκ(k,m) (βκ(k,m) + 1)ek(t)πm(z)Kβ β∈I k∈N m∈N X = gβ(t, z)Kβ (3.330) β∈I 77 where
X X gβ(t, z) = cβ+εκ(k,m) (βκ(k,m) + 1)ek(t)πm(z). (3.331) k∈N m∈N
Since ekpm is an orthonormal basis with respect to µ, then
Z X X |g (t, z)|2η(dz)dt = c2 (β + 1)2. (3.332) β β+εκ(k,m) κ(k,m) × R+ R k∈N m∈N Also, since
X k D F k2 = g (t, z)β!e−2(q+1)|β|. (3.333) t,z G−(q+1) β β∈IN
From the identity (z + 1)e−z ≤ 1 for all z ≥ 0, then we obtain the following expression Z k D F k2 η(dx)dt t,z G−(q+1) R+×R X X X = c2 (β + 1)2β!e−2(q+1)|β| β+εκ(k,m) κ(k,m) β∈I k∈N m∈N X X X = (β + 1)e−2(q+1)|β|c2 (β + ε )! κ(k,m) β+εκ(k,m) κ(k,m) β∈I k∈N m∈N X X X ≤ (|β| + 1)e−2(q+1)|β| c2 (β + ε )! β+εκ(k,m) κ(k,m) β∈I k∈N m∈N X −2(q+1)|β| X 2 = (|β| + 1)e cαα! β∈I |α|=|β|+1 ∞ X X 2 −2(q+1)n = α!cαe n=0 |α|=n+1 ∞ X X 2 −2qn ≤ α!cαe n=0 |α|=n
X 2 −2q|α| = α!cαe α∈I 2 =||F ||G−q . (3.334)
∗ ∗ Hence, Dt,zF ∈ G−(q+1) ⊂ G , G , µ a.e. 78
∗ (ii) Since Fn → F in G as n → ∞, then ∃q ∈ N0 such that k Fn − F kG−q → 0
as n → ∞. Let Gn = Fn − F , then it is suffice to show that there exists a
sub-sequence Gnk such that Dt,xGnk → 0. From our previous result, we obtain Z k D G k2 η(dx)dt ≤k G k2 → 0. t,z n G−(q+1) n G−q R+×R (3.335)
2 Hence, k Dt,zGn kG−(q+1) → 0 in L (η × λ). Thus, there exists a sub-sequence ∗ ∗ k Dt,zGnk kG−(q+1) for k ∈ N such that as k → ∞,Dt,zGnk → 0 in G , G , µ a.e.
Theorem 3.10.5 Clark-Ocone Theorem in L2(P )
2 Let F ∈ L (P ) be FT measurable, then Z F =E[F ] + E[Dt,zF |Ft−]M(dt, dz) (3.336) [0,T ]×R 2 where E[Dt,zF |Ft−] ∈ L (P × µ), (t, z) ∈ [0,T ] × R.
Proof Since F is FT -measurable, then it has chaos expansion of the form X F = cαKα. (3.337) α∈I
Let Fn be the truncation of F such that X Fn = cαKα (3.338) α∈In where In = {α ∈ I : |α| ≤ n, Index(α) ≤ n}. Then, from the Clark-Ocone theorem for polynomials, for at n ∈ N, Z Fn =E[Fn] + E[Dt,zF |Ft−]M(dt, dz). (3.339) [0,T ]×R From Itˆo’srepresentation theorem, there exists a unique predictable process u(t, z),
(t, z) ∈ [0,T ] × R such that Z E u2(t, z)µ(t, z) < ∞ (3.340) [0,T ]×R 79 and Z F =E[F ] + u(t, z)M(dt, dz). (3.341) [0,T ]×R From the isometry relation (Theorem 2.8.1), we obtain
2 E |(Fn − E[Fn]) − (F − E[F ])| " Z 2#
=E (E[Dt,zFn|Ft−] − u(t, z))M(dt, dz) [0,T ]×R Z 2 =E |E[Dt,zFn|Ft−] − u(t, z)| µ(dt, dz) . (3.342) [0,T ]×R
2 Then, since Fn → F in L (P ), then the right hand side approaches zero as n → ∞. Thus, we have the following convergence:
2 E[Dt,zFn|Ft− ] → u(t, z),L (P × µ). (3.343)
2 ∗ Now since Fn → F ∈ L (P ) ⊂ G , then from Lemma 3.10.4 then there exists a sub-sequence Fnk , k ∈ N such that
∗ ∗ E[Dt,xFnk |Ft−] → E[Dt,xF |Ft−] ∈ G , k → ∞ G , µ a.e. (3.344)
Taking a further sub-sequence, we have
2 E[Dt,xFnk |Ft−] → u(t, z), k → ∞ L (P ), µ a.e. (3.345)
Thus, it follows that
2 u(t, z) = E[Dt,xF |Ft−],L (P ), µ a.e. (3.346)
3.11 Multivariate Extension
In this section, we provide an overview of extending the white noise frame for the Canonical L´evyprocess in the multivariate setting. We follow a similar framework 80
of [1] and [64] which combines the Gaussian white noise process and pure jump L´evy white noise process as a product σ-field of these processes. Since the arguments of the theorems in the multivariate case is similar to the univariate case, then we shall state the theorems without proof.
3.11.1 Notations
(j) (j) (j) (j) Let, (Ω , F , {Ft }t≥0,P ), j ∈ {1, ··· ,N} be an independent probability space for the white noise Canonical L´evyprocess. Its independent measure Mj is given by Z Z ˜ Mj(E) = σj dWj(t) + zdNj(dt, dz) (3.347) 0 E0 E 0 where E0 = {t ∈ R+ :(t, 0) ∈ E} and E = E \ E0. Then for E1,E2 ∈ B(R+ × R)
such that µj(E1) < ∞, µj(E2) < ∞
E[Mj(E1)Mj(E2)] = µj(E1 ∩ E2) (3.348)
where µj is a measure on ([0,T ] × R, B([0,T ] × R), where Z Z 2 2 µj(E) = σj dt + z dνj(z)dt, E ∈ B([0,T ] × R). (3.349) 0 E0 E In differential form, we have
2 2 µj(dt, dz) = σj dδ0(z)dt + z (1 − δ0(z))dνj(z)dt = λj(dt)ηj(dx) (3.350)
where λj(dt) = dt is the Lebesgue measure and
2 2 ηj(dz) = σj dδ0(z)dt + z (1 − δ0(z))dνj(z). (3.351) 81
Denote (Ω, F, {Ft}t≥0,P ) be a filtered probability space of the multivariate white (j) (j) (j) (j) noise Canonical L´evyprocess which is a product space of (Ω , F , {Ft }t≥0,P ), j ∈ {1, ··· ,N} where
Ω =Ω(1) × · · · × Ω(N),
F =F (1) ⊗ · · · ⊗ F (N),
(1) (N) Ft =Ft ⊗ · · · ⊗ Ft , t ≥ 0 P =P (1) × · · · × P (N). (3.352)
(1) (N) (1) We let the index α = (α , ··· , α ) where αj ∈ I and the index set IN = I × (N) (j) ˙ · · · × I where I = I where j ∈ {1, ··· ,N}. The white noise processes Xj, Xj, ˙ ˙ ˙ Mj, and Mj are defined naturally from X, X, M, and M respectively. Likewise, we have the following Radon-Nikodym relation in (S)∗
˙ Mj(dt, dz) = Mj(t, z)µj(dt, dz). (3.353)
3.11.2 Chaos Expansion
Denote the following notations:
N N X Y |α| = |α(j)|, α! = α(j)!. (3.354) j=1 j=1 Consider the product of the form
N Y (j) (1) (N) Kα(ω) = Kα(j) (ω ), ω = (ω , ··· , ω ) (3.355) j=1
{ } forms an orthogonal basis is L2(P ) with the following relation: Kα α∈IN
E [KαKβ] = α!1{α=β}. (3.356)
2 For F ∈ L (P ), FT -measurable can be uniquely written of the form
X F = cαKα. (3.357) α∈IN 82
for some constants cα ∈ R. In terms of the iterated integral, F can be written as follows: N X Y F = Inj fj,nj . (3.358) N j=1 n∈N0
T 2 nj where n = (n1, ··· , nN ) ,nj ∈ N0 and fj,nj ∈ Ls(µ ), j ∈ {1, ··· ,N}. From isometry and independence, we have following relation:
N 2 X 2 Y (j) X 2 ||F ||L2(P ) = cα α ! = cαα!. (3.359) α∈IN j=1 α∈IN Alternatively, in terms of the iterated integral:
N N 2 X Y 2 X Y 2 n n ||F ||L2(P ) = nj!||fj,nj ||L2(µ j ) = n! ||fj,nj ||L2(µ j ). (3.360) N j=1 N j=1 n∈N0 n∈N0
3.11.3 Stochastic Test and Distribution Functions
The space G and G∗
Suppose that F has a formal expansion of the form (3.357). Then, F belongs to
the space Gq, q ∈ R if N 2 X 2 Y (j) (j) ||F ||Gq = cα α ! exp(2qα ) α∈IN j=1 X 2 = cαα! exp(2q|α|) < ∞. (3.361)
α∈IN Alternatively, in terms of the iterated integral:
N 2 X Y 2 n ||F ||Gq = nj!||fj,nj ||L2(µ j ) exp(2qnj) N j=1 n∈N0 N X Y 2 n = n! ||fj,nj ||L2(µ j ) · exp(2q|n|) < ∞. (3.362) N j=1 n∈N0 Define the stochastic test function G given by \ G = Gq (3.363) q>0 83 endowed with inductive topology. The stochastic test function G∗ is defined as
[ G = G−q (3.364) q>0 endowed with projective topology. Note that the G∗ is the dual of G. Let G ∈ G and F ∈ G∗ with the following formal expansion:
X X F = cαKα,G = dαKα. (3.365) α∈IN α∈IN Then the action of F on G is given by
X < G, F >G,G∗ = α!cαdα. (3.366)
α∈IN
Kontratiev Spaces and Hida Spaces
Let p ∈ [0, 1]. Suppose that F has a formal expansion of the form (3.357). Then,
F belongs to the space (S)q, q ∈ R if
N 2 X 2 Y (j) 1+p α(j)q kF kp,q = cα (α !) (2N) α∈IN j=1 N X 2 Y (j) α(j)q = cα α !(2N) < ∞. (3.367) α∈IN j=1
Define the Kondratiev test function (S)p as \ (S)p = (S)p,q (3.368) q>0 endowed with the projective topology. The Kondratiev distribution function (S)−p as
∗ [ (S) = (S)−p,−q (3.369) q>0 endowed with the inductive topology. Note that (S)∗ is a dual of (S). Let G ∈ (S) and F ∈ S∗ with the following formal expansion:
X X F = cαKα,G = dαKα. (3.370) α∈IN α∈IN 84
Note that (S)−p is a dual of (S)p. The action of G ∈ (S)−p on F ∈ (S)p, with the formal expansion of F and G of the form (3.121) is given by
X < G, F >= α!cαdα. (3.371)
α∈IN The Hida spaces are the special cases of the Kondratiev spaces. The Hida test ∗ ∗ function (S) and Hida distribution function (S) is given by (S) = (S)0 and (S) =
(S)−0 respectively. From the above definitions, we have the following inclusions for p ∈ [0, 1]:
2 ∗ (S)1 ⊂ (S)p ⊂ (S)0 ⊂ G ⊂ L (P ) ⊂ G ⊂ (S)−0 ⊂ (S)−p ⊂ (S)−1. (3.372)
3.11.4 Wick Product
Definition 3.11.1 Wick Product Let F = P a ∈ (S) and G = P b ∈ (S) , then the Wick Product α∈IN αKα −1 β∈IN βKβ −1 of X and Y denoted by X Y is defined as
X X X X X Y = aαbβKα+β = aαbβKγ. (3.373) α∈IN β∈IN γ∈I α+β=γ
3.11.5 Stochastic Derivatives
We extend the stochastic derivative in the multivariate case as follows:
X X ⊗ˆ ε(j) Dj,t,zF = cααiK (j) δ i (t, z). (3.374) α−εi α∈IN i∈N
(j) T th (j) where εi = (0, ··· , εi, ··· , 0) such that εi is the j subvector of εi and a zero vector otherwise. Likewise, we can also express Dj,t,zF as follows: 85
ˆ (j) X X X (j) ⊗εκ(k,m) Dj,t,zF = cαακ(k,m)K (j) δ (t, z) (3.375) α−εκ(k,m) α∈IN k∈N m∈N X X X (j) = cαακ(k,m)K (j) ek(t)πm(z) (3.376) α−εκ(k,m) α∈IN k∈N m∈N X X X (j) = c (j) (βκ(k,m) + 1)Kβek(t)πm(z). (3.377) β+εκ(k,m) β∈IN k∈N m∈N Theorem 3.11.1 Let N X Y 2 F = Ini (fi,ni ) ∈ L (P ). (3.378) N i=1 n∈N0
T 2 ni where n = (n1, ··· , nN ) ,nj ∈ N0 and fi,ni ∈ Ls(µ ), i ∈ {1, ··· ,N}. Then, ∗ Dj,t,zF ∈ G , µ a.e. given by
∞ N X X Y ∗ Dj,t,zF = njInj fj,nj Ini (fi,ni ) ∈ G . (3.379)
nj =1 N−1 i=1,i6=j n/nj ∈N0 Theorem 3.11.2 Closability of Stochastic Derivatives
∗ Let Fm,F ∈ G such that as m → ∞
∗ (i) Fm → F in G ,
∗ (ii) Dj,t,zFm converges in G for j ∈ {1, ··· ,N}.
∗ Then, Dj,t,zFm → Dj,t,zF in G , j ∈ {1, ··· ,N}.
3.11.6 Generalized Conditional Expectation
Let F has a formal expansion of the form (3.358). The conditional expectation
E[F |FA], A ∈ B([0,T ]) is given as
∞ N X Y ⊗nj E[F |FA] = Inj fj,nj 1A . N j=1 n∈N0 If F ∈ G∗, we can easily show, by writing the chaos expansion in terms of the iterated integral, the following properties conditional expectation in G∗ holds in the multivariate case. 86
3.11.7 Skorohod Integration on G∗
Definition 3.11.2 Skorohod Integral in G∗
∗ Let u : R+ × R → G be a random field with the formal expansion given by
N ∞ X Y X ∗ u(t, z) = Ini (fi,ni ) Inj fj,nj (·, (t, z)) ∈ G . (3.380) N−1 i=1,i6=j nj =0 n/nj ∈N0
T 2 nj where n = (n1, ··· , nN ) ,nj ∈ N0, fj,nj ∈ Ls(µ ), j ∈ {1, ··· ,N} such that for some q > 0,
N X −2q(|n|+1) Y 2 2 ˜ n (n + j)!e ||fi,ni ||L2(µni )||fj,nj ||L2(µ j+1 ) < ∞ (3.381) N i=0,i6=j n/nj ∈N0
Define the Skorohod integral of u with respect to Mj as follows: Z δj(u) = u(t, x)Mj(δt, dx) R+×R N ∞ X Y X ˜ = Ini (fi,ni ) Inj+1 fj,nj (3.382)
N−1 i=0,i6=j nj =1 n/nj ∈N0 We say that u is Skorohod integrable if δ(u) ∈ G∗, that is, there exists some q > 0 such that ||δ(u)||G−q < ∞. Moreover, we have the following
2 ||δ(u)||G−q N ∞ X Y 2 −2qni X 2 −2q(nj +1) ˜ n = ni!||fi,ni ||L2(µni )e nj+1!||fj,nj ||L2(µ j+1 )e N−1 i=0,i6=j nj =1 n/nj ∈N0 N X −2q(|n|+1) Y 2 2 ˜ n = (n + j)!e ||fi,ni ||L2(µni )||fj,nj ||L2(µ j+1 ) < ∞ (3.383) N i=0,i6=j n/nj ∈N0
T where j = (0, ·, 1, ··· , 0) is a unit vector of length n with one at the jth-component and zero otherwise.
Theorem 3.11.3 Fundamental Theorem of Stochastic Calculus
∗ Let u : R+ × R → G be a random field satisfying the following conditions:
(i) u ∈ L2(P × µ), 87
(ii) Dj,t,zu is Skorohod integrable for all (t, z) ∈ [0,T ] × R,
∗ ∗ (iii) Dj,t,zδ(u) ∈ G and δ(Dt,zu) ∈ G , for all (t, z) ∈ [0,T ] × R and there exists q > 0 such that Z 2 ||Dj,t,zδ(u)||G−q µj(dt, dz) < ∞, (3.384) [0,T ]×R Z 2 ||δ(Dj,t,zu)||G−q µj(dt, dz) < ∞. (3.385) [0,T ]×R Then,
Dj,t,z (δ(u)) = u(t, z) + δ(Dj,t,zu), (3.386) that is, Z Z Dj,t,z u(s, x)Mj(δs, dx) = u(t, z) + Dj,t,zu(s, x)Mj(δs, dx). (3.387) [0,T ]×R [0,T ]×R Corollary 3.11.4 Let u satisfy the conditions of the preceding theorem and in addi- tion, suppose that it is also predictable, then we have the following identity: Z Z Dj,t,z u(s, x)Mj(ds, dx) = u(t, z) + Dj,t,zu(s, x)Mj(ds, dx). (3.388) [0,T ]×R [t,T ]×R Corollary 3.11.5 Let u satisfy the conditions of the preceding corollary, then Z Z −1 Dj,t,z u(s, 0)dWj(s) =σj u(t, 0)1{z=0} + Dj,t,zu(s, 0)dW (s), [0,T ] [t,T ] (3.389) Z Z ˜ −1 ˜ Dj,t,z u(s, x)xNj(ds, dx) =σj u(t, z)1{z6=0} + Dj,t,zu(s, x)Nj(ds, dx). [0,T ]×R0 [t,T ]×R0 (3.390) To conclude this section, we state the Wick-Skorohod theorem in the multivariate case.
Theorem 3.11.6 Wick-Skorohod Theorem ˙ Let u be Skorohod integrable with respect to Mj, then u(t, z) Mj(t, z), for all (t, z) ∈ ∗ R+ × R is (S) integrable and Z Z ˙ u(t, z)Mj(δt, dz) = u(t, z) Mj(t, z)µj(dt, dz). (3.391) R+×R R+×R 88
3.11.8 Clark Ocone Theorem in L2(P )
We state the Clark-Ocone theorem in the multivariate case in L2(P ), as follows.
Theorem 3.11.7 Clark-Ocone Theorem in L2(P )
2 Let F ∈ L (P ) be FT measurable, then
N X Z F =E[F ] + E[Dj,t,zF |Ft−]Mj(dt, dz) (3.392) j=1 [0,T ]×R
2 where E[Dj,t,zF |Ft−] ∈ L (P × µ), (t, z) ∈ [0,T ] × R for j ∈ {1, ··· ,N}. 89
4. CLARK-OCONE THEOREM UNDER THE CHANGE OF MEASURE AND MEAN-VARIANCE HEDGING
4.1 Girsanov Theorem for L´evyProcesses
To prove the Clark-Ocone theorem under the change in measure, we shall state the Girsanov theorem to be able to define the equivalent measure Q ∼ P . We state the Girsanov theorem for L´evyprocesses.
Theorem 4.1.1 [27] [65] Girsanov Theorem for L´evyProcesses
Suppose that there exists a predictable processes uj(s), j ∈ {1, ··· , d} and θj(s, x) < 1, j ∈ {1, ··· , l} where (s, x) ∈ [0,T ] × R0 such that Z T 2 uj (s)ds <∞, a.s., (4.1) 0 Z 2 (| log(1 − θj(s, x))| + θj (s, x))νj(dx)ds <∞, a.s., (4.2) [0,T ]×R0 for all j ∈ {1, ··· ,N}. Denote the Doleans-Dade exponential Z(t) for t ∈ [0,T ] by
d X Z t 1 Z t Z(t) = exp − u (s)dW (s) + u2(s)ds + (4.3) j j 2 j j=1 0 0 l Z ! X ˜ log(1 − θj(s, x))Nj(ds, dx) + (log(1 − θj(s, x)) + θj(s, x))νj(dx)ds j=1 [0,T ]×R0 d Z t l Z !! X X ˜ =E − uj(s)dWj(s) + θj(s, x)Nj(ds, dx) (4.4) j=1 0 j=1 [0,t]×R0
where E is the stochastic exponential operator. Define a measure Q on FT by
dQ(ω) = Z(T, ω)dP (ω). (4.5) 90
Suppose that a Novikov-type condition is satisfied (to be discussed later), then E[Z(T )] = 1 and W Q and N˜ Q is a Brownian motion and compensated Poisson random measure under Q respectively where
Q dWj (t) =dWj(t) + uj(t)dt, (4.6) ˜ Q ˜ Nj (dt, dz) =Nj(dt, dz) + θj(t, z)νj(dz)dt (4.7)
for all j ∈ {1, ··· ,N}.
Remark 4.1.2 We can write (4.6) and (4.7) in matrix-vector form as follows:
dW Q(t) =dW (t) + u(t)dt, (4.8)
N˜ Q(dt, dz) =N˜(dt, dz) + θ(t, z)ν(dz)dt (4.9)
where
T ˜ ˜ ˜ T W (t) = [W1(t), ··· ,Wd(t)] , N(dt, dz) = [N1(dt, dz), ··· , Nl(dt, dz)] , Q Q Q T ˜ Q ˜ Q ˜ Q T W (t) = [W1 (t), ··· ,Wd (t)] , N (dt, dz) = [N1 (dt, dz), ··· , Nl (dt, dz)] ,
T u(t) = [u1(t), ··· , ud(t)] , θ(t, z) = diag[θ1(t, z), ··· , θl(t, z)],
T ν(dz) =[ν1(dz), ··· , νl(dz)] . (4.10)
Applying the Novikov-type conditions to Z(t) to become a martingale implies the following for all t ∈ [0,T ], the dynamics of Z is given as
dZ(t) =Z(t−)dL(t),
Z(0) =1 (4.11)
where L is a local martingale given by d l Z X X ˜ dL(t) = − uj(t)dWj(t) − θj(t, zj)Nj(dt, dz). (4.12) j=1 j=1 R0 Hence, Z(t) = E(M(t)) with the following continuous and discrete parts
N c X dL (t) = − uj(t)dWj(t), (4.13) j=1 N Z d X ˜ dL (t) = − θj(t, z)Nj(dt, dz). (4.14) j=1 R0 91
The corresponding angle bracket process is given as
d Z t c X 2 < L >t= uj (s)ds, (4.15) j=1 0 l Z d X 2 < L >t= θj (s, z)Nj(ds, dz). (4.16) j=1 [0,T ]×R0 We state two Novikov-type theorems below. The first theorem is attributed to Lepin- gle and Memin [55] and the second theorem is attributed to Protter and Shimbo [70].
Theorem 4.1.3 [55], [65] Let L be a local martingale such that ∆L > −1 and
1 X A(t) = < Lc > + [(1 + ∆L(s))) log(1 + ∆L(s))) − ∆L(s)] (4.17) 2 t s∈(0,t] which has a compensator B = {B(t)}t≥0 such that
E[exp(B(T ))] < ∞. (4.18)
Then, E(M) is a u.i. martingale and E(M) > 0 almost surely.
Applying (4.12) to Theorem 4.1.3, we have as follows:
d l 1 X Z t X Z A(t) = u2(s)ds + ((1 − θ (s, x)) log θ (s, x) + θ (s, z))N (ds, dx) 2 j j j j j j=1 0 j=1 [0,t]×R0 (4.19) then its compensator is given as follows:
d l 1 X Z t X Z B(t) = u2(s)ds + ((1 − θ (s, x)) log θ (s, x) + θ (s, x))ν (dx)ds. 2 j j j j j j=1 0 j=1 [0,t]×R0 (4.20)
Theorem 4.1.4 [70], [65] Let L be a square integrable martingale local such that ∆L > −1. If
1 E exp < Lc > + < Ld > < ∞ (4.21) 2 T T then E(L) is a uniformly integrable martingale. 92
From the Theorem 4.1.4 for L = Z and from (4.15) − (4.16) we have the following Novikov-type condition
" d l !# 1 X Z T X Z E exp u2(s)ds + θ2(s, x)ν (dx)ds < ∞. (4.22) 2 j j j j=1 0 j=1 [0,T ]×R0
Definition 4.1.1 [27] Generalized Bayes Formula We let Q(dω)Z(T )P (dω), where Z is the Doleans-Dade exponential. Let F,Z(T )F ∈ (G)∗, then we define the Generalized Bayes Formula as follows:
E[Z(T )F |F ] EQ[F |F ] = A ,A ∈ B([0,T ]). (4.23) A Z(t)
Remark 4.1.5 If F,Z(T )F ∈ L2(P ) such that the Novikov condition is satisfied, then Z is a martingale and thus satisfies (4.23) which corresponds to the abstract Bayes rule.
4.2 Clark-Ocone Theorem in L2(P ) ∩ L2(Q)
Before we present Clark-Ocone Theorem in L2(P ) ∩ L2(Q), we shall present an important lemma. For simplicity of presentation, we shall assume that N = d = l. The summation can be adjusted accordingly if d 6= l.
Lemma 4.2.1 Stochastic derivative of Z(T ). Suppose that the assumptions of the Girsanov theorem for L´evyprocesses and the
assumptions of fundamental theorem of stochastic calculus for ui(t) and log(1−θi(t, z)) for i ∈ {1, ··· ,N} are satisfied. Then we have following stochastic derivative for Z(T ).
(i) If z = 0, then
" N Z −1 X Q Dj,t,0Z(T ) = Z(T ) −σj uj(t) − Dj,t,0ui(s)dWi (s) i=1 [t,T ] Z D θ (s, x) + j,t,0 i N˜ Q(ds, dx) . (4.24) 1 − θ (s, x) i [t,T ]×R0 i 93
(ii) If z 6= 0, then
−1 Dj,t,zZ(T ) = z Z(T ) (exp(zDj,t,z log Z(T )) − 1) (4.25)
where
−1 Dj,t,z log Z(T ) = z log(1 − θj(t, z)) N X Z T 1 Z T + − D u (s)dW Q(s) − z(D u (s))2ds j,t,z i i 2 j,t,z i i=1 0 t Z zD θ (s, x) + z−1 log 1 − j,t,z i N˜ Q(ds, dx) 1 − θ (s, x) i [t,T ]×R0 i Z zD θ (s, x) + z−1 log 1 − j,t,z i (1 − θ (s, x)) + D θ (s, x) ν (dx)ds . 1 − θ (s, x) i j,t,z i i [t,T ]×R0 i (4.26)
Proof (i) Consider the process (4.3). We let
N X Z t 1 Z t Y (t) = log Z(t) = − u (s)dW (s) + u2(s)ds + i i 2 i i=1 0 0 N Z X ˜ log(1 − θi(s, x)Ni(ds, dx) + (log(1 − θi(s, x)) + θi(s, x))νi(dx)ds . i=1 [0,t]×R0 (4.27)
Then for all z ∈ R
N X Z T 1 Z T D Y (T ) = − − D u (s)dW (s) − D u2(s)ds j,t,z j,t,z i i 2 j,t,z i j=1 0 0 Z ˜ + Dj,t,z log(1 − θi(s, x)Ni(ds, dx) [0,T ]×R0 Z +Dj,t,z (log(1 − θi(s, x)) + θi(s, x))νi(dx)ds . [0,T ]×R0 (4.28)
From the chain rule,
Dj,t,0Z(T ) = Z(T )Dj,t,0Y (T ). (4.29) 94
Then, we have the following derivatives: Z T Z T −1 Dj,t,0 ui(s)dWi(s) = σj (s)uj(t)1{i=j} + Dj,t,0ui(s)dWi(s), (4.30) 0 t
Z T Z T Z T 2 2 Dj,t,0 ui (s)ds = Dj,t,0ui (s)ds = 2ui(s)Dj,t,0ui(s)ds, (4.31) 0 t t
Z ˜ Dj,t,0 log(1 − θi(s, x)Ni(ds, dx) [0,T ]×R0 Z −1 ˜ = Dj,t,0(x log(1 − θi(s, x))xNi(ds, dx) [t,T ]×R0 Z D θ (s, z ) = j,t,0 i j N˜ (ds, dz ), (4.32) 1 − θ (s, z ) i j [t,T ]×R0 i j
Z Dj,t,0 (log(1 − θi(s, x)) + θi(s, x))νi(dx)ds [0,T ]×R0 Z = (Dj,t,0 log(1 − θi(s, x)) + Dj,t,0θi(s, x))νi(dx)ds [t,T ]×R0 Z D θ (s, x) = − j,t,0 i + D θ (s, x) ν (dx)ds. (4.33) 1 − θ (s, x) j,t,0 i i [t,T ]×R0 i Collecting terms yields " N Z −1 X Dj,t,0Z(T ) =Z(T ) −σj uj(t) − Dj,t,0ui(s)(dWi(s) + ui(s)) i=1 [t,T ] Z D θ (s, x) + j,t,0 i (N˜ (ds, dx) + ν (dx)ds) 1 − θ (s, x) i i [t,T ]×R0 i " N Z −1 X Q =Z(T ) −σj uj(t)1{σj 6=0} − Dj,t,0ui(s)dWi (s) i=1 [t,T ] Z D θ (s, x) + j,t,0 i N˜ Q(ds, dx) . (4.34) 1 − θ (s, x) i [t,T ]×R0 i
(ii) From the chain rule, we have the following derivatives for z 6= 0,
Dj,t,zZ(T ) =Dj,t,z exp(Y (T ))
−1 =z [exp(Y (T ) + zDj,t,zY (T )) − exp(Y (T ))]
−1 =z Z(T )[exp(zDj,t,zY (T )) − 1], (4.35) 95
2 −1 2 2 Dj,t,zui (s) =z [(ui(s) + zDj,t,zui(s)) − ui (s)]
2 =2ui(s)Dj,t,zui(s) + z(Dj,t,zui(s)) , (4.36)
−1 Dj,t,z(x log(1 − θi(s, x))
−1 −1 =x z [log(1 − θi(s, x)) − zDj,t,zθi(s, x)) − log(1 − θi(s, x))] zD θ (s, x) =z−1 log 1 − j,t,z i . (4.37) 1 − θi(s, x)
Hence, we have following derivatives:
Z T Z T Dj,t,z ui(s)dWi(s) = Dj,t,zui(s)dWi(s), (4.38) 0 t
Z T Z T 2 2 Dj,t,z ui (s)ds = Dj,t,zui (s)ds 0 t Z T 2 = 2ui(s)Dj,t,zui(s) + z(Dj,t,zui(s)) , (4.39) t
Z ˜ Dj,t,z log(1 − θi(s, x))Ni(ds, dzi) [0,T ]×R0 Z −1 −1 ˜ =z log(1 − θj(s, z))1{i=j} + Dj,t,z(x log(1 − θi(s, x))xNi(ds, dx) [t,T ]×R0 Z zD θ (s, x) =z−1 log(1 − θ (s, z))1 + z−1 log 1 − j,t,z i N˜ (ds, dx), j {i=j} 1 − θ (s, x) i [t,T ]×R0 i (4.40)
Z Dj,t,z (log(1 − θi(s, x)) + θi(s, x))νi(dx)ds [0,T ]×R0 Z = (Dj,t,z log(1 − θi(s, x)) + Dj,t,zθi(s, x))νi(dx)ds [t,T ]×R0 Z zD θ (s, x) = z−1 log 1 − j,t,z i + D θ (s, x) ν (dz)ds. (4.41) 1 − θ (s, x) j,t,z i i [t,T ]×R0 i 96
Finally, collecting terms yield
−1 Dj,t,zY (T ) = z log(1 − θj(t, z))+ N X Z T 1 Z T + − D u (s)(dW (s) + u (s)ds) − z(D u (s))2ds j,t,z i i i 2 j,t,z i i=1 t t Z zD θ (s, x) + z−1 log 1 − j,t,z i N˜ (ds, dx) + ν (dx)ds 1 − θ (s, x) i i [t,T ]×R0 i Z zD θ (s, x) + z−1 log 1 − j,t,z i (1 − θ (s, x)) + D θ (s, x) ν (dx)ds 1 − θ (s, x) i j,t,z i i [t,T ]×R0 i N X Z T 1 Z T =z−1 log(1 − θ (t, z)) + − D u (s)dW Q(s) − z(D u (s))2ds j j,t,z i i 2 j,t,z i i=1 0 t Z zD θ (s, x) + z−1 log 1 − j,t,z i N˜ Q(ds, dx) 1 − θ (s, x) i [t,T ]×R0 i Z zD θ (s, x) + z−1 log 1 − j,t,z i (1 − θ (s, x)) + D θ (s, x) ν (dx)ds . 1 − θ (s, x) i j,t,z i i [t,T ]×R0 i (4.42) 97
Theorem 4.2.2 Clark-Ocone theorem under the change of measure
Let F ∈ L^2(P) ∩ L^2(Q) be \mathcal{F}_T-measurable and FZ(T) ∈ L^2(P). Suppose that the assumptions of Lemma 4.2.1 are satisfied. Then
\begin{align}
F = E^Q[F] &+ \sum_{j=1}^{N}\int_0^T \sigma_j\,E^Q\big[D_{j,t,0}F - F K_j(t)\,\big|\,\mathcal{F}_{t^-}\big]\,dW_j^Q(t) \tag{4.43} \\
&+ \sum_{j=1}^{N}\int_{[0,T]\times\mathbb{R}_0} E^Q\big[F\big(H_j(t,z)-1\big) + z\,H_j(t,z)\,D_{j,t,z}F\,\big|\,\mathcal{F}_{t^-}\big]\,\tilde N_j^Q(dt,dz), \tag{4.44}
\end{align}
where
\[
K_j(t) = \sum_{i=1}^{N}\Bigg[\int_t^T D_{j,t,0}u_i(s)\,dW_i^Q(s) + \int_{[t,T]\times\mathbb{R}_0} \frac{D_{j,t,0}\theta_i(s,x)}{1-\theta_i(s,x)}\,\tilde N_i^Q(ds,dx)\Bigg], \tag{4.45}
\]
\[
\begin{aligned}
H_j(t,z) = \exp\Bigg(\sum_{i=1}^{N}\Bigg[&-\int_t^T z\,D_{j,t,z}u_i(s)\,dW_i^Q(s) - \frac{1}{2}\int_t^T \big(z\,D_{j,t,z}u_i(s)\big)^2\,ds \\
&+ \int_{[t,T]\times\mathbb{R}_0} \log\Big(1-\frac{z\,D_{j,t,z}\theta_i(s,x)}{1-\theta_i(s,x)}\Big)\,\tilde N_i^Q(ds,dx) \\
&+ \int_{[t,T]\times\mathbb{R}_0} \bigg[\log\Big(1-\frac{z\,D_{j,t,z}\theta_i(s,x)}{1-\theta_i(s,x)}\Big)\big(1-\theta_i(s,x)\big) + z\,D_{j,t,z}\theta_i(s,x)\bigg]\,\nu_i(dx)\,ds\Bigg]\Bigg) \tag{4.46}
\end{aligned}
\]
for all j ∈ {1, ..., N}.
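Before the proof, the representation (4.43)-(4.44) can be illustrated by Monte Carlo in its simplest special case (an added sketch, not part of the original text): N = 1, σ_1 = 1, u ≡ 0 and θ ≡ 0, so that Q = P and the jump terms vanish. Taking F = W(T)^2 gives D_{t,0}F = 2W(T), hence E[D_{t,0}F | F_t] = 2W(t), and (4.43) reduces to the classical identity W(T)^2 = T + ∫_0^T 2W(t) dW(t):

```python
import numpy as np

# Clark-Ocone check in the Brownian special case: W(T)^2 = T + int_0^T 2 W dW.
T, n_steps, n_paths = 1.0, 1_000, 5_000
dt = T / n_steps
rng = np.random.default_rng(3)

dW = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
W = np.cumsum(dW, axis=1)
W_left = np.hstack([np.zeros((n_paths, 1)), W[:, :-1]])   # left endpoint W(t-)

F = W[:, -1]**2                                  # the functional F = W(T)^2
stoch_int = np.sum(2.0 * W_left * dW, axis=1)    # Euler-Ito sum for the integral
residual = F - (T + stoch_int)
print(np.abs(residual).mean())   # small, and shrinking as n_steps grows
```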
Proof We let
\[
\Lambda(t) = Z(t)^{-1}, \tag{4.47}
\]
where Z(t) is given by (4.3). From Itô's lemma,
\[
\begin{aligned}
d\Lambda(t) &= -\frac{1}{Z^2(t^-)}\sum_{i=1}^{N}\big(-u_i(t)Z(t^-)\big)\,dW_i(t) + \frac{1}{2}\cdot\frac{2}{Z^3(t^-)}\sum_{i=1}^{N}\big(-u_i(t)Z(t^-)\big)^2\,dt \\
&\quad + \sum_{i=1}^{N}\int_{\mathbb{R}_0}\bigg[\frac{1}{Z(t^-) - \theta_i(t,z)Z(t^-)} - \frac{1}{Z(t^-)}\bigg]\,\tilde N_i(dt,dz) \\
&\quad + \sum_{i=1}^{N}\int_{\mathbb{R}_0}\bigg[\frac{1}{Z(t^-) - \theta_i(t,z)Z(t^-)} - \frac{1}{Z(t^-)} - \frac{\theta_i(t,z)Z(t^-)}{Z^2(t^-)}\bigg]\,\nu_i(dz)\,dt \\
&= \Lambda(t^-)\Bigg[\sum_{i=1}^{N}u_i(t)\,dW_i(t) + \sum_{i=1}^{N}u_i^2(t)\,dt + \sum_{i=1}^{N}\int_{\mathbb{R}_0}\Big(\frac{1}{1-\theta_i(t,z)} - 1\Big)\,\tilde N_i(dt,dz) + \sum_{i=1}^{N}\int_{\mathbb{R}_0}\Big(\frac{1}{1-\theta_i(t,z)} - 1 - \theta_i(t,z)\Big)\,\nu_i(dz)\,dt\Bigg] \\
&= \Lambda(t^-)\Bigg[\sum_{i=1}^{N}u_i(t)\big(dW_i^Q(t) - u_i(t)\,dt\big) + \sum_{i=1}^{N}u_i^2(t)\,dt + \sum_{i=1}^{N}\int_{\mathbb{R}_0}\Big(\frac{1}{1-\theta_i(t,z)} - 1\Big)\big(\tilde N_i^Q(dt,dz) - \theta_i(t,z)\,\nu_i(dz)\,dt\big) \\
&\qquad\qquad + \sum_{i=1}^{N}\int_{\mathbb{R}_0}\Big(\frac{1}{1-\theta_i(t,z)} - 1 - \theta_i(t,z)\Big)\,\nu_i(dz)\,dt\Bigg] \\
&= \Lambda(t^-)\Bigg[\sum_{i=1}^{N}u_i(t)\,dW_i^Q(t) + \sum_{i=1}^{N}\int_{\mathbb{R}_0}\frac{\theta_i(t,z)}{1-\theta_i(t,z)}\,\tilde N_i^Q(dt,dz)\Bigg]. \tag{4.48}
\end{aligned}
\]
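As a quick consistency check on (4.48) (an added aside, under the simplifying assumptions N = 1, constant u, no jumps): there Λ(t) = 1/Z(t) = exp(uW(t) + u²t/2), while solving dΛ = Λ u dW^Q yields the stochastic exponential exp(uW^Q(t) - u²t/2); since W^Q(t) = W(t) + ut, the two expressions agree pathwise:

```python
import numpy as np

# Pathwise check that 1/Z(t) equals the stochastic exponential of u W^Q from
# (4.48) in the constant-u Brownian case (illustrative assumption).
u, T, n = 0.5, 1.0, 500
rng = np.random.default_rng(5)
t = np.linspace(0.0, T, n + 1)[1:]
W = np.cumsum(rng.normal(0.0, np.sqrt(T / n), n))   # Brownian path on the grid
W_Q = W + u * t                                     # Girsanov-shifted path

Lambda_direct = np.exp(u * W + 0.5 * u**2 * t)          # 1 / Z(t)
Lambda_exponential = np.exp(u * W_Q - 0.5 * u**2 * t)   # solution of (4.48)
print(np.allclose(Lambda_direct, Lambda_exponential))   # True (exact identity)
```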
We now let (reusing the symbol Y for the remainder of the proof)
\[
Y(t) = E^Q[F\,|\,\mathcal{F}_t]. \tag{4.49}
\]
Then, assuming that a Novikov-type condition for Z(t) is satisfied, the abstract Bayes rule gives
\[
Y(t) = \frac{E[F Z(T)\,|\,\mathcal{F}_t]}{E[Z(T)\,|\,\mathcal{F}_t]} = \Lambda(t)\,V(t). \tag{4.50}
\]
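The unconditional (t = 0) case of (4.50), namely E^Q[F] = E[F Z(T)] / E[Z(T)], is easily tested by Monte Carlo; the sketch below uses the constant-u Brownian special case (an illustrative assumption) with F = W^Q(T), for which E^Q[F] = 0 because W^Q is a Q-Brownian motion:

```python
import numpy as np

# Monte Carlo check of the abstract Bayes rule (4.50) at t = 0.
u, T = 0.5, 1.0
rng = np.random.default_rng(4)
W_T = rng.normal(0.0, np.sqrt(T), 500_000)
Z_T = np.exp(-u * W_T - 0.5 * u**2 * T)   # density Z(T), constant u, no jumps
F = W_T + u * T                           # F = W^Q(T)
print(np.mean(F * Z_T) / np.mean(Z_T))    # close to E^Q[F] = 0
```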
Since FZ(T) ∈ L^2(P), V(t) ≡ E^P[FZ(T)\,|\,\mathcal{F}_t] ∈ L^2(P). From the Clark-Ocone theorem in L^2(P), we have the following:
\begin{align}
E[F Z(T)\,|\,\mathcal{F}_t] &= E\big[E[F Z(T)\,|\,\mathcal{F}_t]\big] + \sum_{j=1}^{N}\int_{[0,T]\times\mathbb{R}} E\big[D_{j,s,x}E[F Z(T)\,|\,\mathcal{F}_t]\,\big|\,\mathcal{F}_{s^-}\big]\,M_j(ds,dx) \notag \\
&= E[F Z(T)] + \sum_{j=1}^{N}\int_{[0,T]\times\mathbb{R}} E\big[D_{j,s,x}(F Z(T))\,\big|\,\mathcal{F}_{s^-}\big]\,\mathbf{1}_{\{s<t\}}\,M_j(ds,dx) \tag{4.51} \\
&= E[F Z(T)] + \sum_{j=1}^{N}\int_0^t \sigma_j\,E\big[D_{j,s,0}(F Z(T))\,\big|\,\mathcal{F}_{s^-}\big]\,dW_j(s) + \sum_{j=1}^{N}\int_{[0,t]\times\mathbb{R}_0} E\big[D_{j,s,z}(F Z(T))\,\big|\,\mathcal{F}_{s^-}\big]\,z\,\tilde N_j(ds,dz). \tag{4.52}
\end{align}
The first term follows from the tower property of conditional expectation:
\[
E\big[E[F Z(T)\,|\,\mathcal{F}_t]\big] = E[F Z(T)]. \tag{4.53}
\]
For the integrand, using that the Malliavin derivative of a conditional expectation satisfies D_{j,s,z}E[G\,|\,\mathcal{F}_t] = E[D_{j,s,z}G\,|\,\mathcal{F}_t]\,\mathbf{1}_{\{s<t\}}, the tower property yields
\[
E\big[D_{j,s,z}E[F Z(T)\,|\,\mathcal{F}_t]\,\big|\,\mathcal{F}_{s^-}\big] = E\big[D_{j,s,z}(F Z(T))\,\big|\,\mathcal{F}_{s^-}\big]\,\mathbf{1}_{\{s<t\}}. \tag{4.54}
\]
From the product rule,
\[
dY(t) = \Lambda(t^-)\,dV(t) + V(t^-)\,d\Lambda(t) + d[\Lambda,V]_t. \tag{4.55}
\]
The quadratic covariation is evaluated as follows:
\[
d[\Lambda,V]_t = \Lambda(t^-)\Bigg[\sum_{j=1}^{N} u_j(t)\,\sigma_j\,E\big[D_{j,t,0}(F Z(T))\,\big|\,\mathcal{F}_{t^-}\big]\,dt + \sum_{j=1}^{N}\int_{\mathbb{R}_0} \frac{\theta_j(t,z)}{1-\theta_j(t,z)}\,E\big[D_{j,t,z}(F Z(T))\,\big|\,\mathcal{F}_{t^-}\big]\,z\,N_j(dt,dz)\Bigg]. \tag{4.56}
\]
Hence,
\[
\begin{aligned}
dY(t) &= \Lambda(t^-)\Bigg[\sum_{j=1}^{N}\sigma_j\,E\big[D_{j,t,0}(F Z(T))\,\big|\,\mathcal{F}_{t^-}\big]\,dW_j(t) + \sum_{j=1}^{N}\int_{\mathbb{R}_0} E\big[D_{j,t,z}(F Z(T))\,\big|\,\mathcal{F}_{t^-}\big]\,z\,\tilde N_j(dt,dz)\Bigg] \\
&\quad + Y(t^-)\Bigg[\sum_{j=1}^{N}u_j(t)\,dW_j^Q(t) + \sum_{j=1}^{N}\int_{\mathbb{R}_0}\frac{\theta_j(t,z)}{1-\theta_j(t,z)}\,\tilde N_j^Q(dt,dz)\Bigg] \\
&\quad + \Lambda(t^-)\Bigg[\sum_{j=1}^{N}\sigma_j u_j(t)\,E\big[D_{j,t,0}(F Z(T))\,\big|\,\mathcal{F}_{t^-}\big]\,dt + \sum_{j=1}^{N}\int_{\mathbb{R}_0}\frac{\theta_j(t,z)}{1-\theta_j(t,z)}\,E\big[D_{j,t,z}(F Z(T))\,\big|\,\mathcal{F}_{t^-}\big]\,z\,\big(\tilde N_j(dt,dz) + \nu_j(dz)\,dt\big)\Bigg] \\
&= \Lambda(t^-)\Bigg[\sum_{j=1}^{N}\sigma_j\,E\big[D_{j,t,0}(F Z(T))\,\big|\,\mathcal{F}_{t^-}\big]\big(dW_j^Q(t) - u_j(t)\,dt\big) + \sum_{j=1}^{N}\int_{\mathbb{R}_0} E\big[D_{j,t,z}(F Z(T))\,\big|\,\mathcal{F}_{t^-}\big]\,z\,\big(\tilde N_j^Q(dt,dz) - \theta_j(t,z)\,\nu_j(dz)\,dt\big)\Bigg] \\
&\quad + Y(t^-)\Bigg[\sum_{j=1}^{N}u_j(t)\,dW_j^Q(t) + \sum_{j=1}^{N}\int_{\mathbb{R}_0}\frac{\theta_j(t,z)}{1-\theta_j(t,z)}\,\tilde N_j^Q(dt,dz)\Bigg] \\
&\quad + \Lambda(t^-)\Bigg[\sum_{j=1}^{N}\sigma_j u_j(t)\,E\big[D_{j,t,0}(Z(T)F)\,\big|\,\mathcal{F}_{t^-}\big]\,dt + \sum_{j=1}^{N}\int_{\mathbb{R}_0}\frac{\theta_j(t,z)}{1-\theta_j(t,z)}\,E\big[D_{j,t,z}(Z(T)F)\,\big|\,\mathcal{F}_{t^-}\big]\,z\,\big(\tilde N_j^Q(dt,dz) + (1-\theta_j(t,z))\,\nu_j(dz)\,dt\big)\Bigg] \\
&= \Lambda(t^-)\Bigg[\sum_{j=1}^{N}\Big(\sigma_j\,E\big[D_{j,t,0}(Z(T)F)\,\big|\,\mathcal{F}_{t^-}\big] + E\big[Z(T)F\,u_j(t)\,\big|\,\mathcal{F}_{t^-}\big]\Big)\,dW_j^Q(t) \\
&\qquad + \sum_{j=1}^{N}\int_{\mathbb{R}_0}\bigg(\frac{E\big[D_{j,t,z}(Z(T)F)\,\big|\,\mathcal{F}_{t^-}\big]}{1-\theta_j(t,z)}\,z + \frac{\theta_j(t,z)}{1-\theta_j(t,z)}\,E\big[Z(T)F\,\big|\,\mathcal{F}_{t^-}\big]\bigg)\,\tilde N_j^Q(dt,dz)\Bigg]. \tag{4.57}
\end{aligned}
\]
Since Z(T)F ∈ L^2(P), the product rule gives
\[
D_{j,t,z}\big(Z(T)F\big) = F\,D_{j,t,z}Z(T) + Z(T)\,D_{j,t,z}F + z\,D_{j,t,z}Z(T)\,D_{j,t,z}F. \tag{4.58}
\]
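Since for z ≠ 0 the derivative acts as the scaled difference DG = z^{-1}(G' - G), with G' the shifted functional, the product rule (4.58) is the elementary identity z^{-1}[(G + za)(H + zb) - GH] = Gb + Ha + zab with a = DG, b = DH. A one-line numerical check (all values illustrative):

```python
# Product rule (4.58) as an exact algebraic identity of difference quotients.
z, G, H, a, b = 0.7, 2.0, -1.5, 0.3, 0.8     # illustrative values; a = DG, b = DH
lhs = ((G + z * a) * (H + z * b) - G * H) / z
rhs = G * b + H * a + z * a * b
print(abs(lhs - rhs) < 1e-12)                 # True
```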
Consider the case z = 0. Note that K_j(t) in (4.45) can be written in terms of D_{j,t,z}Z(T) in (4.24) as follows: