arXiv:1303.4625v1 [math.PR] 19 Mar 2013

On stochastic integration for volatility modulated
Brownian-driven Volterra processes via white noise analysis

Ole E. Barndorff-Nielsen (1), Fred Espen Benth (2), Benedykt Szozda (3)

(1) Thiele Centre, Department of Mathematics, & CREATES, School of Economics and
Management, Aarhus University, Ny Munkegade 118, DK-8000 Aarhus C, Denmark,
[email protected]
(2) Centre of Mathematics for Applications, University of Oslo, P.O. Box 1053,
Blindern, N-0361 Oslo, Norway, [email protected]
(3) Thiele Centre, Department of Mathematics, & CREATES, School of Economics and
Management, Aarhus University, Ny Munkegade 118, DK-8000 Aarhus C, Denmark,
[email protected]

Abstract

This paper generalizes the integration theory for volatility modulated
Brownian-driven Volterra processes onto the space G* of Potthoff–Timpel
distributions. Sufficient conditions for integrability of generalized processes are
given, regularity results and properties of the integral are discussed. We introduce
a new volatility modulation method through the Wick product and discuss its relation
to the pointwise-multiplied volatility model.

Keywords: stochastic integral; Volterra process; volatility modulation; white noise
analysis; Malliavin calculus; Skorohod integral

AMS Subject Classification: 60H05, 60H40, 60H07, 60G22

1 Introduction

Recently, Barndorff-Nielsen et al. (2012) developed a theory of stochastic
integration with respect to volatility modulated Lévy-driven Volterra processes.
Here, stochastic integrals of the form

    ∫_0^t Y(s) dX(s)

are considered, where

    X(t) = ∫_0^t g(t, s) σ(s) dL(s),                                (1.1)

g(t, s) is a deterministic kernel, σ is a stochastic process embodying the
volatility and L is a Lévy process. The process X is then termed a volatility
modulated Lévy-driven Volterra (VMLV) process. When L(t) = B(t) is the standard
Brownian motion, the process X is termed a volatility modulated Brownian-driven
Volterra (VMBV) process, and this is the class of processes that we will concentrate
our attention on in this paper; so from now on we fix L = B in Equation (1.1).
Barndorff-Nielsen et al. (2012) use methods of Malliavin calculus to validate the
following definition of the integral:

    ∫_0^t Y(s) dX(s) = ∫_0^t K_g(Y)(t,s) σ(s) δ^M B(s) + ∫_0^t D_s^M (K_g(Y)(t,s)) σ(s) ds,   (1.2)

where

    K_g(Y)(t,s) = Y(s) g(t,s) + ∫_s^t (Y(u) − Y(s)) g(du, s),

δ^M B(s) denotes the Skorohod integral and D_t^M is the Malliavin derivative. The
superscript M is used above to stress that the operators are defined in the
Malliavin calculus setting, but as we will show, the only difference between these
operators and the ones used in the forthcoming sections is the restriction of the
domain. The only results needed to establish the above definition are the Malliavin
calculus versions of the "fundamental theorem of calculus" and the "integration by
parts formula."

Before we begin the theoretical discussion, let us review some of the literature
that is closely related to the problems addressed in this paper. The results
presented in the following sections extend the results of the already mentioned work
of Barndorff-Nielsen et al. (2012), which in turn generalize (among others) the
results of Alòs et al. (2001); Decreusefond (2002, 2005). Note that the operator
K_g(·) used by Barndorff-Nielsen et al. (2012) is the same as the operator used by
Alòs et al. (2001); however, the definition of the integral is different. The latter
authors keep only the first integral on the right-hand side of Equation (1.2), thus
making sure that the expectation of the integral is zero. The choice between the two
definitions should be based on modelling purposes, but one has to keep in mind that
requiring zero expectation in the non-semimartingale setting might be unreasonable.

It should be noted that the VMLV processes are a superclass of the Lévy
semistationary (LSS) processes and a subclass of ambit processes (more precisely,
null-spatial ambit processes). In order to obtain an LSS process from the general
form of a VMLV process, we take g to be a shift-kernel, that is g(t,s) = g(t − s).
Examples of such kernels include the Ornstein–Uhlenbeck kernel
(g(u) = e^{−αu}, α > 0) and a kernel often used in turbulence
(g(u) = u^{ν−1} e^{−αu}, α > 0, ν > 1/2). On the other hand, if

    g(t,s) = c(H)(t − s)^{H−1/2} + c(H)(1/2 − H) ∫_s^t (u − s)^{H−3/2} (1 − (s/u)^{1/2−H}) du,

where c(H) = (2H Γ(3/2 − H))^{1/2} (Γ(H + 1/2) Γ(2 − 2H))^{−1/2}, with H ∈ (0, 1),
then

    X(t) = ∫_0^t g(t,s) dB(s)

is the fractional Brownian motion with Hurst parameter H.

As pointed out by Barndorff-Nielsen et al. (2012) and illustrated above, the class
of VMLV processes is very flexible as it has already been applied in modelling of a
wide range of naturally occurring random phenomena. VMLV processes have been studied
in the context of financial data (Barndorff-Nielsen et al., 2013+, 2011; Veraart and
Veraart, 2013+) and in connection with turbulence (Barndorff-Nielsen and Schmiegel,
2008, 2009).
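The Ornstein–Uhlenbeck example above can be checked numerically. The following
sketch (ours, not part of the paper; function names are illustrative) uses the Itô
isometry for deterministic-kernel integrands, Var X(t) = ∫_0^t g(t,s)² ds, and
compares a quadrature value against the closed form (1 − e^{−2αt})/(2α).

```python
import numpy as np

# Illustrative numerical check (ours, not part of the paper).  For sigma = 1 and
# a square-integrable deterministic kernel, the Ito isometry gives
#     Var X(t) = int_0^t g(t, s)^2 ds.
# For the Ornstein-Uhlenbeck kernel g(t, s) = exp(-alpha * (t - s)) this equals
# (1 - exp(-2 * alpha * t)) / (2 * alpha).

def ou_kernel(t, s, alpha):
    """Ornstein-Uhlenbeck shift-kernel g(t, s) = exp(-alpha * (t - s))."""
    return np.exp(-alpha * (t - s))

def variance_numeric(t, alpha, n=200_001):
    """Trapezoidal approximation of int_0^t g(t, s)^2 ds."""
    s = np.linspace(0.0, t, n)
    vals = ou_kernel(t, s, alpha) ** 2
    h = s[1] - s[0]
    return h * (vals.sum() - 0.5 * (vals[0] + vals[-1]))

def variance_closed_form(t, alpha):
    return (1.0 - np.exp(-2.0 * alpha * t)) / (2.0 * alpha)

print(variance_numeric(1.5, 0.8), variance_closed_form(1.5, 0.8))
```

The same quadrature check works for the turbulence kernel away from its singularity
at u = 0; the fractional Brownian kernel requires more care near s = t.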

As mentioned in Barndorff-Nielsen et al. (2012), there are several properties of the
integral defined in Equation (1.2) that one might find important in applications.
Firstly, the definition of the integral does not require adaptedness of the
integrand. Secondly, the kernel function g(t,s) can have a singularity at t = s (for
example, the shift-kernel used in turbulence and presented above). Finally, the
integral allows for integration with respect to non-semimartingales (as illustrated
above by the fractional Brownian motion).

Our approach allows us to treat less regular stochastic processes than the approach
of Barndorff-Nielsen et al. (2012) because we are not limited to a subspace of
square-integrable random variables. The price we have to pay with the white noise
approach is that the integral might not be a square-integrable random variable.
However, the choice of the space G* as the domain of consideration has its
advantages, as we can approximate any random variable from G* by square-integrable
random variables. We discuss the properties of the spaces we work on in the
forthcoming sections.

We consider the definition of the integral in Equation (1.2) in the white noise
analysis setting. We concentrate mostly on the so-called Potthoff–Timpel space G*,
and it is important to note here that this space is much larger than the space of
square-integrable random variables and thus our results extend those of
Barndorff-Nielsen et al. (2012) considerably. We review the relevant parts of white
noise analysis in Section 2. In Section 3 we show that the Malliavin derivative
D_t^M can be generalized to an operator D_t : G* → G*, as can the Skorohod integral.
Moreover, we obtain a version of the "fundamental theorem of calculus" and the
"integration by parts formula" in the new setting, making it possible to retrace the
steps taken by Barndorff-Nielsen et al. (2012) in the heuristic derivation of the
definition of the integral.
In Section 4 we first examine regularity of the operator K_g(·) in the white noise
setting. Next, we consider the case without volatility modulation, that is σ = 1. In
Section 5 we introduce the volatility modulation in two different situations.
Namely, we consider σ to be a test stochastic process that multiplies the kernel
function g. We also study the VMLV processes in which volatility modulation is
introduced through the Wick product. This allows us to consider generalized
stochastic processes as the volatility. In the case that the volatility is a
generalized process that is strongly independent of K_g(Y), we show the equivalence
of the definitions of the integrals using the Wick and pointwise products. In all
three cases, we establish mild conditions on the integrand that ensure the existence
of the integral and obtain regularity results. In Section 6 we explore the
properties of the integral and in Section 7 we give an example which cannot be
treated with the methods of Barndorff-Nielsen et al. (2012).

2 A brief background on white noise analysis

In this section we present a brief background on Gaussian white noise analysis. We
will discuss only the relevant parts of this vast theory, and refer the interested
reader to the standard books Hida et al. (1993); Holden et al. (2010); Kuo (1996)
and the references therein for a more complete discussion of this topic.
In order to simplify the exposition of what follows, we recall some standard

notation that will be used throughout this paper. We denote by (·,·)_H and |·|_H the
inner product and norm of a Hilbert space H, and by ·̂ the symmetrization of
functions or function spaces.

Let S(ℝ) denote the Schwartz space of rapidly decreasing smooth functions and let
S′(ℝ) be its dual, that is, the space of tempered distributions, and let ⟨·,·⟩
denote the bilinear pairing between S′(ℝ) and S(ℝ). By the Bochner–Minlos theorem,
there exists a Gaussian measure μ on S′(ℝ) defined through

    ∫_{S′(ℝ)} e^{i⟨x,ξ⟩} dμ(x) = e^{−(1/2)|ξ|²_{L²(ℝ)}},   ξ ∈ S(ℝ).

From now on, we take (Ω, F, P) := (S′(ℝ), B(S′(ℝ)), μ) as the underlying probability
space, where B(S′(ℝ)) is the Borel σ-field of subsets of S′(ℝ).
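The defining identity of μ can be illustrated with a one-dimensional computation.
Under μ, the coordinate functional x ↦ ⟨x, ξ⟩ is a centered Gaussian with variance
|ξ|²_{L²(ℝ)}, so for a fixed ξ the Bochner–Minlos identity reduces to
E[e^{iσZ}] = e^{−σ²/2} for Z ~ N(0,1) with σ = |ξ|_{L²}. The sketch below (ours, not
the paper's) verifies this one-dimensional reduction with Gauss–Hermite quadrature.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

# Illustrative sketch (ours, not the paper's): check the one-dimensional
# reduction of the Bochner-Minlos identity,
#     E[exp(i * sigma * Z)] = exp(-sigma**2 / 2),  Z ~ N(0, 1),
# where sigma plays the role of |xi|_{L^2(R)}.  We use the probabilists'
# Gauss-Hermite rule, whose weight function is exp(-z**2 / 2).

def char_func_numeric(sigma, deg=80):
    z, w = hermegauss(deg)                     # nodes/weights, weight e^{-z^2/2}
    vals = np.exp(1j * sigma * z)              # integrand e^{i sigma z}
    return (w * vals).sum() / np.sqrt(2.0 * np.pi)

sigma = 1.3                                    # stands in for |xi|_{L^2}
lhs = char_func_numeric(sigma)
rhs = np.exp(-sigma**2 / 2.0)
```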

Observe that we can reconstruct the spaces S(ℝ) and S′(ℝ) as nuclear spaces. We
recall this construction briefly, as a similar one will be used in the definition of
the spaces of test and generalized random variables G, G*, (S) and (S)*. Start with
a family of seminorms |·|_p, with p ∈ ℝ, defined by

    |f|_p := |A^p f|_{L²(ℝ)},   f ∈ L²(ℝ),

where A = −d²/dx² + (1 + x²) is a second order differential operator densely defined
on L²(ℝ). We denote by S_p(ℝ) the space of those f ∈ L²(ℝ) that have finite |·|_p
norm. The Schwartz space of rapidly decreasing functions is the projective limit of
the spaces {S_p(ℝ) : p > 0} and the space of tempered distributions is its dual, or
the inductive limit of the spaces {S_{−p}(ℝ) : p > 0}. Note that we have the
inclusions S(ℝ) ⊂ L²(ℝ) ⊂ S′(ℝ).

Let (L²) = L²(S′(ℝ), μ). By the Wiener–Itô decomposition theorem, for any φ ∈ (L²)
there exists a unique sequence of symmetric functions φ⁽ⁿ⁾ ∈ L̂²(ℝⁿ) such that

    φ = Σ_{n=0}^∞ I_n(φ⁽ⁿ⁾),   (2.1)

where I_n is the n times iterated Wiener integral. Moreover, the (L²) norm of φ can
be expressed as

    ‖φ‖²_{(L²)} = Σ_{n=0}^∞ n! |φ⁽ⁿ⁾|²_{L²(ℝⁿ)}.

Let us remark that we will keep the convention of naming the kernel functions of the
chaos expansion of φ by φ⁽ⁿ⁾.

Next, we recall two types of spaces of test and generalized random variables. The
construction of these spaces follows the construction of the Schwartz spaces of test
and generalized functions. The first pair we discuss below are the Hida spaces. For
any φ ∈ (L²) and p ∈ ℝ define the following norm

    ‖φ‖²_p := Σ_{n=0}^∞ n! |(A^⊗n)^p φ⁽ⁿ⁾|²_{L²(ℝⁿ)}

and a corresponding space

    (S)_p := {φ ∈ (L²) : ‖φ‖_p < ∞}.

It is easy to show that for p > q the inclusion (S)_p ⊂ (S)_q holds. We define the
Hida space of test functions (S) as the projective limit of {(S)_p : p > 0} and the
Hida space of generalized functions as its dual (S)*. Note that (S)* can also be
defined as the inductive limit of the spaces {(S)_{−p} : p > 0}. The bilinear
pairing between the spaces (S)* and (S) is denoted by ⟨⟨·,·⟩⟩, and we have

    ⟨⟨Φ, φ⟩⟩ := Σ_{n=0}^∞ n! ⟨Φ⁽ⁿ⁾, φ⁽ⁿ⁾⟩.

The second pair of spaces are the spaces that were studied (among others) by
Potthoff and Timpel (1995) and are denoted by G and G*. These spaces are constructed
through (L²) norms with exponential weights of the number operator N (sometimes also
called the Ornstein–Uhlenbeck operator). The number operator can be defined through
its action on the chaos expansion: it multiplies the n-th chaos by n, that is, if
φ = Σ_{n=0}^∞ I_n(φ⁽ⁿ⁾), then Nφ = Σ_{n=0}^∞ n I_n(φ⁽ⁿ⁾).

For any λ ∈ ℝ define the norm

    ‖φ‖²_λ := ‖e^{λN} φ‖²_{(L²)} = Σ_{n=0}^∞ n! e^{2λn} |φ⁽ⁿ⁾|²_{L²(ℝⁿ)}

and a corresponding space

    G_λ := {φ ∈ (L²) : ‖φ‖_λ < ∞}.

The space of test random variables G is the projective limit of the spaces
{G_λ : λ > 0} and the space of generalized random variables G* is its dual, or the
inductive limit of {G_{−λ} : λ > 0}. As in the case of the Hida spaces, we denote
the bilinear pairing between G* and G by ⟨⟨·,·⟩⟩. It is a well-known fact (see e.g.
Kuo (1996); Potthoff and Timpel (1995)) that we have the following proper
inclusions:

    (S) ⊂ G ⊂ (L²) ⊂ G* ⊂ (S)*,
    (S) ⊂ (S)_p ⊂ (S)_q ⊂ (L²) ⊂ (S)_{−q} ⊂ (S)_{−p} ⊂ (S)*,   0 ≤ q ≤ p,
    G ⊂ G_{λ′} ⊂ G_λ ⊂ (L²) ⊂ G_{−λ} ⊂ G_{−λ′} ⊂ G*,   0 ≤ λ ≤ λ′.

Note that, unlike with the space (S)*, a truncation of an element of G* is always in
(L²). This happens because the kernel functions of G* are elements of L̂²(ℝⁿ), and so

    Φ_N = Σ_{n=0}^N I_n(Φ⁽ⁿ⁾) ∈ (L²)   (2.2)

because |Φ⁽ⁿ⁾|_{L²(ℝⁿ)} < ∞ and a finite sum of such norms is finite, so
‖Φ_N‖_{(L²)} < ∞. Thus we can approximate any random variable Φ ∈ G* by (L²) random
variables by truncating the chaos expansion as in Equation (2.2). This is not the
case with the Hida space (S)* because the kernels of Hida random variables are
elements of the much larger Schwartz space S′(ℝⁿ) and might have infinite L²(ℝⁿ)
norms.
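A toy computation may help with the weighted norms. The sketch below (ours, not from
the paper) takes kernel norms |φ⁽ⁿ⁾|²_{L²(ℝⁿ)} = 1/(n!)², for which
‖φ‖²_λ = Σ_n e^{2λn}/n! = exp(e^{2λ}) is finite for every λ, so this φ lies in every
G_λ and hence in G.

```python
import math

# Illustrative sketch (ours, not the paper's): a toy chaos expansion with
# kernel norms |phi^(n)|^2_{L^2} = 1/(n!)^2.  The weighted chaos norm is then
#     ||phi||_lambda^2 = sum_n n! * e^{2*lambda*n} / (n!)^2
#                      = sum_n e^{2*lambda*n} / n! = exp(e^{2*lambda}),
# finite for every lambda, so phi belongs to every G_lambda, i.e. phi in G.

def g_lambda_norm_sq(lam, n_terms=60):
    """Partial sum of the weighted chaos norm for the toy kernels above."""
    return sum(math.exp(2.0 * lam * n) / math.factorial(n)
               for n in range(n_terms))

lam = 0.5
approx = g_lambda_norm_sq(lam)
exact = math.exp(math.exp(2.0 * lam))
```

Replacing 1/(n!)² by, say, kernels with |φ⁽ⁿ⁾|² = e^{2λ₀n}/n! produces an element of
G_{−λ} only for λ > λ₀, which is the typical shape of a genuine G* element.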

Remark 2.1. Note that if φ ∈ (S), then ‖φ‖_p < ∞ for any p > 0, and if Φ ∈ (S)*,
then for some q > 0 we have ‖Φ‖_{−q} < ∞. In this case,

    |⟨⟨Φ, φ⟩⟩| ≤ ‖Φ‖_{−q} ‖φ‖_q.

Similarly, if φ ∈ G, then ‖φ‖_λ < ∞ for any λ > 0, and if Φ ∈ G*, then for some
λ₀ > 0 we have ‖Φ‖_{−λ₀} < ∞. And again,

    |⟨⟨Φ, φ⟩⟩| ≤ ‖Φ‖_{−λ₀} ‖φ‖_{λ₀}.

An important tool in white noise analysis is the S-transform, which we define below.

Definition 2.2. For any Φ ∈ (S)* and ξ ∈ S(ℝ), we define the S-transform of Φ at ξ
as

    S(Φ)(ξ) := ⟨⟨Φ, e^{⟨·,ξ⟩ − (1/2)|ξ|²_{L²(ℝ)}}⟩⟩.

Note that e^{⟨·,ξ⟩ − (1/2)|ξ|²_{L²(ℝ)}} ∈ G for any ξ ∈ S(ℝ), so for any Φ ∈ G*,
the function SΦ is everywhere defined on S(ℝ) (see Potthoff and Timpel, 1995,
Example 2.1). The importance of the S-transform is well illustrated by the fact that
it is an injective operator (see Hida et al. (1993); Kuo (1996) for details).
Therefore we have the following useful result.

Theorem 2.3. If Φ, Ψ ∈ (S)* and SΦ = SΨ, then Φ = Ψ.

Thus a generalized random variable can be uniquely determined by its S-transform.
Making use of this fact, we can define the Wick product ⋄ of two distributions.

Definition 2.4. For Φ, Ψ ∈ (S)*, we define the Wick product of Φ and Ψ as

    Φ ⋄ Ψ := S⁻¹(SΦ · SΨ).

Alternatively, the Wick product can be expressed in terms of the chaos expansion by

    Φ ⋄ Ψ = Σ_{n,m=0}^∞ I_{n+m}(Φ⁽ⁿ⁾ ⊗̂ Ψ⁽ᵐ⁾) = Σ_{n=0}^∞ I_n( Σ_{m=0}^n Φ⁽ⁿ⁻ᵐ⁾ ⊗̂ Ψ⁽ᵐ⁾ ).   (2.3)

The following is an important fact stating that all of the spaces considered in this
paper, namely G, G*, (S) and (S)*, are closed under the Wick product.

Fact 2.5. If Φ, Ψ ∈ G (or G*, (S), (S)*), then Φ ⋄ Ψ ∈ G (or G*, (S), (S)*,
respectively).

This is the advantage of using the Wick product instead of the pointwise product, as
the latter is usually not defined on the spaces G* and (S)*. However, under strong
independence of Φ and Ψ, the Wick and pointwise products coincide (see e.g. Benth
and Potthoff (1996) for details).

Definition 2.6. We say that Φ, Ψ ∈ G* are strongly independent if there are two
measurable subsets I_Φ, I_Ψ of ℝ such that Leb(I_Φ ∩ I_Ψ) = 0 and for all m, n ∈ ℕ
we have supp Φ⁽ⁿ⁾ ⊂ (I_Φ)ⁿ and supp Ψ⁽ᵐ⁾ ⊂ (I_Ψ)ᵐ.

From Benth and Potthoff (1996, Proposition 2) we know that strong independence and
regular independence of random variables are closely related. Namely, if X, Y ∈ (L²)
are two independent random variables measurable with respect to
σ{B(s) : a ≤ s < ∞}, a ∈ ℝ, then Y has a version Ỹ ∈ (L²) such that Ỹ and X are
strongly independent.

The next theorem states which products of generalized random variables are
well-defined. The first part (which is a standard result) deals with the product of
generalized and test random variables and the second part takes advantage of the
strong independence assumption. For the proof of the second part see Benth and
Potthoff (1996).
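Formula (2.3) can be made concrete along a single direction. If f ∈ L²(ℝ) with
|f|_{L²} = 1, then I_n(f^{⊗n}) = He_n(⟨·, f⟩), where He_n are the probabilists'
Hermite polynomials, and (2.3) collapses to He_n ⋄ He_m = He_{n+m}; the weights n!
in the chaos norms come from the orthogonality E[He_n He_m] = δ_{nm} n!. The
following sketch (ours, not the paper's) checks this orthogonality relation by
Gauss–Hermite quadrature.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermeval, hermegauss

# Illustrative sketch (ours, not the paper's): along a single normalized
# direction f, the iterated Wiener integral satisfies
#     I_n(f^{tensor n}) = He_n(<., f>),
# with He_n the probabilists' Hermite polynomials, and the Wick product (2.3)
# reduces to He_n <> He_m = He_{n+m}.  The chaos-norm weights n! come from
#     E[He_n(Z) He_m(Z)] = delta_{nm} * n!,  Z ~ N(0, 1),
# which we verify below by probabilists' Gauss-Hermite quadrature.

def expect_he_product(n, m, deg=60):
    """E[He_n(Z) He_m(Z)] for Z ~ N(0, 1)."""
    z, w = hermegauss(deg)                    # weight exp(-z^2 / 2)
    he_n = hermeval(z, [0.0] * n + [1.0])     # He_n evaluated at the nodes
    he_m = hermeval(z, [0.0] * m + [1.0])
    return float((w * he_n * he_m).sum() / math.sqrt(2.0 * math.pi))
```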

Theorem 2.7.
i. For Φ ∈ (S)* (or G*) and φ ∈ (S) (or G) the product φ · Φ is well-defined
through

    ⟨⟨φ · Φ, ψ⟩⟩ = ⟨⟨Φ, ψ · φ⟩⟩,   for all ψ ∈ (S) (or G respectively).

ii. If Φ, Ψ ∈ G* are strongly independent, then the product Φ · Ψ is well-defined,
and Φ · Ψ = Φ ⋄ Ψ.

Next, we state several results that are used to establish some norm estimates in the
following sections of this paper. First, we recall an estimate on the norm of a
product of two test random variables given in Potthoff and Timpel (1995, Proposition
2.4).

Proposition 2.8. Let λ₀ := (1/2) ln(2 + √2) and assume that λ > λ₀ and φ, ψ ∈ G_λ.
Then, for all ν > λ₀, φ · ψ ∈ G_{λ−ν} and there is a constant C_ν so that

    ‖φ · ψ‖_{λ−ν} ≤ C_ν ‖φ‖_λ ‖ψ‖_λ.

Using Proposition 2.8, we can establish a norm estimate of a pointwise product of
generalized and test random variables.

Theorem 2.9. Let λ₀ := (1/2) ln(2 + √2) and assume that λ > λ₀. Suppose that σ ∈ G
and Φ ∈ G_{−λ+ν} ⊂ G*, where ν > λ. Then there is a constant C_ν such that

    ‖σ · Φ‖_{−λ} ≤ C_ν ‖Φ‖_{−λ+ν} ‖σ‖_λ.

Proof. Consider, for any φ ∈ G,

    |⟨⟨σ · Φ, φ⟩⟩| = |⟨⟨Φ, σ · φ⟩⟩|
                  ≤ ‖Φ‖_{−λ+ν} ‖σ · φ‖_{λ−ν}
                  ≤ C̃_ν ‖Φ‖_{−λ+ν} ‖σ‖_λ ‖φ‖_λ.

Since the above holds for any φ ∈ G, there is a constant dependent only on ν such
that

    ‖σ · Φ‖_{−λ} ≤ C_ν ‖Φ‖_{−λ+ν} ‖σ‖_λ.

Hence the theorem holds.
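Part ii of Theorem 2.7 can be illustrated on first-chaos elements. For Φ = I_1(f)
and Ψ = I_1(h) the pointwise and Wick products differ by a constant,
Φ · Ψ = Φ ⋄ Ψ + (f, h)_{L²(ℝ)}; if f and h have disjoint supports, as in Definition
2.6, the correction vanishes and the two products coincide. The sketch below (ours;
the grid and indicator functions are illustrative) computes the correction term in a
disjoint and an overlapping case.

```python
import numpy as np

# Illustrative sketch (ours, not the paper's): for Phi = I_1(f), Psi = I_1(h),
#     Phi * Psi = Phi <> Psi + (f, h)_{L^2}
# (the Gaussian identity <., f><., h> = I_2(sym(f x h)) + (f, h)).  With
# disjoint supports the correction term (f, h) is zero, so the pointwise and
# Wick products coincide, matching strong independence.

grid = np.linspace(0.0, 3.0, 3001)
dx = grid[1] - grid[0]

f = ((grid >= 0.0) & (grid <= 1.0)).astype(float)           # support [0, 1]
h_disjoint = ((grid >= 2.0) & (grid <= 3.0)).astype(float)  # support [2, 3]
h_overlap = ((grid >= 0.5) & (grid <= 1.5)).astype(float)   # overlaps [0.5, 1]

ip_disjoint = float(np.sum(f * h_disjoint) * dx)   # = 0: products coincide
ip_overlap = float(np.sum(f * h_overlap) * dx)     # ~ 0.5: they differ by 0.5
```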

Next, we recall an estimate of the norm of a Wick product of two generalized random
variables from Potthoff and Timpel (1995, Proposition 2.6).

Proposition 2.10. Let Φ, Ψ ∈ G_λ, λ ∈ ℝ. Let λ₀ = λ − 1/2, and λ′ < λ₀. Then
Φ ⋄ Ψ ∈ G_{λ′} and

    ‖Φ ⋄ Ψ‖_{λ′} ≤ C_{λ,λ′} ‖Φ‖_λ ‖Ψ‖_λ,

where C_{λ,λ′} = (2(λ − λ′) − 1)^{−1/2} e^{λ−λ′−1}.

Finally, let us review the Pettis-type integral in the white noise setting. Suppose
that (T, B, m) is a measure space and Φ(t) : T → (S)* is a generalized stochastic
process. We say that Φ is Pettis-integrable if the following two conditions are
satisfied:

i. Φ is weakly measurable, that is, t → ⟨⟨Φ(t), φ⟩⟩ is a measurable function for
all φ ∈ (S);

ii. Φ is weakly integrable, that is,

    ∫_T |⟨⟨Φ(t), φ⟩⟩| dm < ∞,

for all φ ∈ (S).

For a Pettis-integrable generalized process Φ, we define its integral ∫_T Φ(t) dm by

    ⟨⟨∫_T Φ(t) dm, φ⟩⟩ := ∫_T ⟨⟨Φ(t), φ⟩⟩ dm.

Note that we can derive the chaos expansion of the Pettis white noise integral (see
Hida et al. (1993); Kuo (1996) for details) as

    ∫_T Φ(t) dm = Σ_{n=0}^∞ I_n( ∫_T Φ⁽ⁿ⁾(t) dm ),

where the integrals in the chaos expansion are understood as Pettis integrals on the
spaces S′(ℝⁿ) (see Pettis, 1938). Note that the white noise Pettis integral is
defined for processes in the (S)* space. However, due to the fact that (S) ⊂ G and
G* ⊂ (S)*, we say that a G*-valued process is Pettis-integrable if it is integrable
as an (S)*-valued process and the result of integration is a G* random variable.
Alternatively, we can restate the above definitions requiring that Φ(t) ∈ G* and
φ ∈ G.

In what follows, the fact that the Pettis integral and the S-transform are
interchangeable operations is important.

Proposition 2.11. For all Φ ∈ (S)* and ξ ∈ S(ℝ),

    S( ∫_0^t Φ(s) ds )(ξ) = ∫_0^t S(Φ(s))(ξ) ds.
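For square-integrable Φ, Definition 2.2 reads
S(Φ)(ξ) = E[Φ e^{⟨·,ξ⟩ − |ξ|²/2}]. For Φ = ⟨·, f⟩ (for instance Φ = B(t) with
f = 1_{[0,t]}) this gives S(Φ)(ξ) = (f, ξ)_{L²}, i.e. S(B(t))(ξ) = ∫_0^t ξ(s) ds.
Writing X = ⟨·, f⟩ and Y = ⟨·, ξ⟩ as a two-dimensional Gaussian vector, the claim
reduces to E[X e^{Y − Var(Y)/2}] = Cov(X, Y), which the sketch below (ours, not the
paper's) checks by product Gauss–Hermite quadrature.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

# Illustrative sketch (ours, not the paper's): the S-transform of a linear
# functional.  With X = a1*Z1 and Y = b1*Z1 + b2*Z2, Z1, Z2 iid N(0, 1),
#     E[X * exp(Y - Var(Y)/2)] = Cov(X, Y) = a1 * b1,
# which is the finite-dimensional shadow of S(<., f>)(xi) = (f, xi)_{L^2}.

def s_transform_linear(a1, b1, b2, deg=60):
    """E[X exp(Y - Var(Y)/2)] via a 2D probabilists' Gauss-Hermite grid."""
    z, w = hermegauss(deg)
    z1, z2 = np.meshgrid(z, z, indexing="ij")
    w2 = np.outer(w, w) / (2.0 * math.pi)       # normalized 2D weights
    x = a1 * z1
    y = b1 * z1 + b2 * z2
    return float((w2 * x * np.exp(y - (b1**2 + b2**2) / 2.0)).sum())

a1, b1, b2 = 1.5, 0.7, 0.4
val = s_transform_linear(a1, b1, b2)            # should equal a1 * b1
```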

3 Calculus in G* and (S)*

3.1 Stochastic differentiation

Before we present the definition of the stochastic derivative that we use in the
remainder of this paper, we motivate our choice by showing how it fits with other
definitions that can be found in Malliavin calculus and white noise analysis.

Let us first recall that the Malliavin derivative is defined on a subset of (L²),
namely

    D_{1,2} := {φ ∈ (L²) : Σ_{n=0}^∞ n · n! |φ⁽ⁿ⁾|²_{L²(ℝⁿ)} < ∞}.

For φ ∈ D_{1,2} we define the Malliavin derivative by its chaos expansion as

    D_t^M φ := Σ_{n=0}^∞ n I_{n−1}(φ⁽ⁿ⁾(·, t)).   (3.1)

Observe that D_{1,2} is chosen in such a way that D_t^M φ ∈ (L²) whenever
φ ∈ D_{1,2}.

In Potthoff and Timpel (1995), the authors define an operator D_h for any h ∈ L²(ℝ)
as the Gâteaux derivative in direction h. It can be shown that D_h can be described
in terms of its chaos expansion as

    D_h φ = Σ_{n=0}^∞ n I_{n−1}((h, φ⁽ⁿ⁾)_{L²(ℝ)}),

where (·,·)_{L²(ℝ)} is the L²(ℝ) inner product, that is,

    (h, φ⁽ⁿ⁾)_{L²(ℝ)}(u⁽ⁿ⁻¹⁾) := ∫_ℝ h(s) φ⁽ⁿ⁾(u⁽ⁿ⁻¹⁾, s) ds,   u⁽ⁿ⁻¹⁾ ∈ ℝⁿ⁻¹.

Note that, since φ⁽ⁿ⁾ can be assumed to be symmetric, it does not matter which of
the coordinates is chosen as s in the formula above. For D_t and D_h to be equal, we
need h to be a function satisfying

    (h, φ⁽ⁿ⁾)_{L²(ℝ)} = φ⁽ⁿ⁾(·, t),   for all φ⁽ⁿ⁾ ∈ L²(ℝⁿ).

But there is no h ∈ L²(ℝ) that satisfies the above condition. It is a well-known
fact, though, that the Dirac delta, a generalized function on ℝ, has this exact
property. We cannot formally take h(s) = δ_t(s), but we can do it informally to
obtain

    D_{δ_t} φ = Σ_{n=0}^∞ n I_{n−1}((δ_t, φ⁽ⁿ⁾)_{L²(ℝ)})
              = Σ_{n=0}^∞ n I_{n−1}(φ⁽ⁿ⁾(·, t))
              = D_t φ.

Note that for the above to hold, we need φ⁽ⁿ⁾(u⁽ⁿ⁻¹⁾, ·) ∈ S(ℝ) (with
u⁽ⁿ⁻¹⁾ ∈ ℝⁿ⁻¹), as the Dirac delta is a continuous linear operator on S(ℝ).
However, since S(ℝ) is a dense subset of L²(ℝ), the Dirac delta can be uniquely
extended to a densely defined, unbounded linear operator on L²(ℝ). As we will show
later, D_{δ_t} Φ ∈ G* for all Φ ∈ G*.

In Benth (1999), we encounter yet another differentiation operator. This time it is
defined on the Hida space (S)* as DΦ = Φ · W − Φ ⋄ W, where (with ω ∈ S′(ℝ) and
f ∈ S(ℝ)) W(f)(ω) = ⟨ω, f⟩ is the coordinate process, sometimes also called a
smoothed white noise. In this case, the operator D should be understood as a
functional on the product space S(ℝ) × (S), with its action given by

    DΦ(f, φ) = (Φ · W − Φ ⋄ W)(f, φ) = ⟨⟨Φ · W(f) − Φ ⋄ W(f), φ⟩⟩.

In Benth (1999, Proposition 3.3), it is shown that the operator D can be expressed
in terms of the chaos expansion of the distribution it acts on, much in the same way
as the Malliavin derivative is defined. In order to see this, for Φ⁽ⁿ⁾ ∈ S′(ℝⁿ),

φ⁽ⁿ⁻¹⁾ ∈ Ŝ(ℝⁿ⁻¹) and g ∈ S(ℝ), define Φ⁽ⁿ⁾(·, g) by

    ⟨Φ⁽ⁿ⁾(·, g), φ⁽ⁿ⁻¹⁾⟩ := ⟨Φ⁽ⁿ⁾, φ⁽ⁿ⁻¹⁾ ⊗̂ g⟩.

Now, the chaos expansion of DΦ(g) is given by

    DΦ(g) = Σ_{n=0}^∞ n I_{n−1}(Φ⁽ⁿ⁾(·, g)).

It is enough to justify that fixing the n-th functional coordinate of the functional
Φ⁽ⁿ⁾ : S(ℝ) → S′(ℝⁿ⁻¹) at a certain g is equivalent to fixing the n-th variable of
the function Φ⁽ⁿ⁾. Suppose that Φ = Σ_{n=0}^∞ I_n(Φ⁽ⁿ⁾) ∈ G*. Then, for all n ≥ 0,
the functions Φ⁽ⁿ⁾ are elements of L̂²(ℝⁿ) and can be viewed as functions of n
variables or, due to the Riesz representation theorem, as linear operators acting on
L²(ℝⁿ). With φ⁽ⁿ⁻¹⁾, Φ⁽ⁿ⁾ and g as above, we have that
φ⁽ⁿ⁻¹⁾ ⊗̂ g ∈ Ŝ(ℝⁿ) ⊂ L²(ℝⁿ), so the bilinear pairing can be viewed as an inner
product in L²(ℝⁿ). Thus, with the notation x⁽ⁿ⁾ = (x₁, x₂, ..., x_n),
x⁽ⁿ⁾_{≠k} = (x₁, x₂, ..., x_{k−1}, x_{k+1}, ..., x_n) and dx⁽ⁿ⁾ = dx₁ dx₂ ... dx_n,
we have

    ⟨Φ⁽ⁿ⁾(·, g), φ⁽ⁿ⁻¹⁾⟩ = (Φ⁽ⁿ⁾, φ⁽ⁿ⁻¹⁾ ⊗̂ g)_{L²(ℝⁿ)}
        = (1/n) Σ_{k=1}^n ∫_{ℝⁿ} Φ⁽ⁿ⁾(x⁽ⁿ⁾) φ⁽ⁿ⁻¹⁾(x⁽ⁿ⁾_{≠k}) g(x_k) dx⁽ⁿ⁾.

Taking, again informally, g(x) = δ_t(x) and using the symmetry of φ⁽ⁿ⁻¹⁾ and Φ⁽ⁿ⁾,
we have

    ⟨Φ⁽ⁿ⁾(·, g), φ⁽ⁿ⁻¹⁾⟩
        = (1/n) Σ_{k=1}^n ∫_{ℝⁿ} Φ⁽ⁿ⁾(x⁽ⁿ⁾) φ⁽ⁿ⁻¹⁾(x⁽ⁿ⁾_{≠k}) δ_t(x_k) dx⁽ⁿ⁾
        = (1/n) Σ_{k=1}^n ∫_{ℝⁿ⁻¹} ∫_ℝ Φ⁽ⁿ⁾(x⁽ⁿ⁾_{≠k}, x_k) φ⁽ⁿ⁻¹⁾(x⁽ⁿ⁾_{≠k}) δ_t(x_k) dx_k dx⁽ⁿ⁾_{≠k}
        = (1/n) Σ_{k=1}^n ∫_{ℝⁿ⁻¹} Φ⁽ⁿ⁾(x⁽ⁿ⁾_{≠k}, t) φ⁽ⁿ⁻¹⁾(x⁽ⁿ⁾_{≠k}) dx⁽ⁿ⁾_{≠k}
        = ∫_{ℝⁿ⁻¹} Φ⁽ⁿ⁾(x⁽ⁿ⁻¹⁾, t) φ⁽ⁿ⁻¹⁾(x⁽ⁿ⁻¹⁾) dx⁽ⁿ⁻¹⁾
        = (Φ⁽ⁿ⁾(·, t), φ⁽ⁿ⁻¹⁾)_{L²(ℝⁿ⁻¹)}.

Thus we have the following informal equality:

    D_t Φ = D_{δ_t} Φ = DΦ(δ_t).

Therefore, we can regard the derivative defined by Equation (3.1) as a restriction
of D defined in Benth (1999) to the space G*, an extension of the Malliavin
derivative D_t^M onto a larger domain, and an extension of the derivative D_h
defined in Potthoff and Timpel (1995). This motivates the following definition.

Definition 3.1. For any Φ ∈ G* with chaos expansion given by
Φ = Σ_{n=0}^∞ I_n(Φ⁽ⁿ⁾) we define the stochastic derivative of Φ at t by

    D_t Φ = Σ_{n=1}^∞ n I_{n−1}(Φ⁽ⁿ⁾(·, t)).

Theorem 3.2 assures that the stochastic derivative is in fact a well-defined
functional acting on G*.

Theorem 3.2. For any Φ ∈ G*, we have D_t Φ ∈ G* for almost all t ∈ ℝ. Moreover, if
for some λ > 0, Φ ∈ G_{−λ}, then for any ε > 0 there is a constant C_ε such that

    ∫_ℝ ‖D_t Φ‖²_{−λ−ε} dt ≤ C_ε ‖Φ‖²_{−λ} < ∞,   (3.2)

and in consequence D_t Φ ∈ G_{−λ−ε} for almost all t ∈ ℝ.

Proof. It is enough to show that Equation (3.2) holds because G* = ∪_{λ>0} G_{−λ}.
In order to do this, we need the following fact: for any ε > 0, there exists x₀ > e
such that f(x) = (ln x)/x < ε for all x > x₀. This is a consequence of the fact that
f(x) is decreasing on the interval (e, ∞) and lim_{x→∞} f(x) = 0.

Let Φ = Σ_{n=0}^∞ I_n(Φ⁽ⁿ⁾) be an element of G_{−λ}, and consider

    ∫_ℝ ‖D_t Φ‖²_{−λ−ε} dt = ∫_ℝ Σ_{n=0}^∞ n (n!) e^{−2(λ+ε)n} |Φ⁽ⁿ⁾(·, t)|²_{L²(ℝⁿ)} dt
        = Σ_{n=0}^∞ n (n!) e^{−2(λ+ε)n} ∫_ℝ |Φ⁽ⁿ⁾(·, t)|²_{L²(ℝⁿ)} dt
        = Σ_{n=0}^∞ n (n!) e^{−2(λ+ε)n} |Φ⁽ⁿ⁾|²_{L²(ℝⁿ⁺¹)}.

By the fact stated at the beginning of this proof, for any ε > 0 there is a k ∈ ℕ₀
such that for all n ≥ k we have (ln n)/n < 2ε. This ensures that
n e^{−2(λ+ε)n} ≤ e^{−2λn}. Hence

    Σ_{n=k}^∞ n (n!) e^{−2(λ+ε)n} |Φ⁽ⁿ⁾|²_{L²(ℝⁿ⁺¹)} < Σ_{n=k}^∞ (n!) e^{−2λn} |Φ⁽ⁿ⁾|²_{L²(ℝⁿ⁺¹)}
        ≤ ‖Φ‖²_{−λ}.

Now, for any n ∈ {0, 1, ..., k−1} there is a constant c_{n,ε} such that
n e^{−2(λ+ε)n} < c_{n,ε} e^{−2λn}. Let C̃_ε = max(c_{n,ε} : n ∈ {0, 1, ..., k−1}).
We have

    Σ_{n=0}^{k−1} n (n!) e^{−2(λ+ε)n} |Φ⁽ⁿ⁾|²_{L²(ℝⁿ⁺¹)} ≤ C̃_ε Σ_{n=0}^{k−1} (n!) e^{−2λn} |Φ⁽ⁿ⁾|²_{L²(ℝⁿ⁺¹)}.

Thus we have shown that

    ∫_ℝ ‖D_t Φ‖²_{−λ−ε} dt ≤ C̃_ε ‖Φ‖²_{−λ} + ‖Φ‖²_{−λ} ≤ C_ε ‖Φ‖²_{−λ}.

Therefore ‖D_t Φ‖_{−λ−ε} < ∞ for almost all t, as required.

The above theorem improves the result of Aase et al. (2000, Lemma 3.10), where it
was shown that if Φ ∈ G_{−λ}, then D_t Φ ∈ G_{−λ−1} for almost all t. The notation
used in Aase et al. (2000) differs from ours, but the definitions of the spaces G
and G* as well as the definitions of the stochastic derivative are equivalent.

Recall that Definition 3.1 of the stochastic derivative is exactly the same (in
terms of the chaos expansion) as the definition of the Malliavin derivative. The
drawback of the Malliavin derivative is that it is defined on the smaller space
D_{1,2}, so that the derivative takes values in the (L²) space for almost all t.
Since we define the derivative on the larger space G* ⊋ D_{1,2}, the result of
differentiation also falls into a larger space, namely G* ⊋ (L²). Thus the
derivative of a random variable from G* is no longer an element of (L²), but rather
a generalized stochastic process. However, taking the derivative of any test random
variable φ ∈ G results in a test stochastic process that is in G ⊂ (L²) for almost
all t ∈ ℝ, as can be seen from Theorem 3.2.

3.2 Properties of the stochastic derivative

Now we turn our attention to some of the properties of the stochastic derivative Dt of Definition 3.1. All of the formulas presented below are well-known in the setting of Malliavin calculus. We include them for the sake of completeness and give only sketches of the proofs or omit the proofs completely.

Proposition 3.3. If Φ is deterministic, that is, Φ = I₀(Φ⁽⁰⁾), Φ⁽⁰⁾ ∈ ℝ, then
D_t Φ = 0.

Proof. This is a direct consequence of the definition of the stochastic derivative.

Proposition 3.4. If Φ, Ψ ∈ G*, then

    D_t(Φ ⋄ Ψ) = D_t(Φ) ⋄ Ψ + Φ ⋄ D_t Ψ.   (3.3)

Proof. This follows from straightforward but tedious explicit operations on the
chaos expansion and comparison of the chaos expansions of the left- and right-hand
sides of Equation (3.3). The computations are the same as in the Malliavin
derivative case, as the formulas defining the derivatives are the same and only the
domain differs. Existence of both sides of Equation (3.3) follows from Theorem 3.2
and Proposition 2.10.
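The Wick product rule (3.3) can be checked at the level of discretized kernels. For
Φ = I_1(f) and Ψ = I_1(h) we have Φ ⋄ Ψ = I_2(f ⊗̂ h), and Definition 3.1 gives
D_t I_1(f) = f(t) and D_t I_2(K) = 2 I_1(K(·, t)), so both sides of (3.3) are
first-chaos elements with kernel f(t)h + h(t)f. The sketch below (ours; the grid and
the choices of f and h are illustrative) verifies this identity numerically.

```python
import numpy as np

# Illustrative check of the Wick product rule (ours, not the paper's).  For
# Phi = I_1(f) and Psi = I_1(h), Phi <> Psi = I_2(sym(f x h)), and
#     D_t I_1(f) = f(t),   D_t I_2(K) = 2 * I_1(K(., t)),
# so both sides of (3.3) are first-chaos elements with kernel f(t)*h + h(t)*f.

grid = np.linspace(0.0, 1.0, 101)
f = np.sin(np.pi * grid)
h = np.exp(-grid)

sym_fh = 0.5 * (np.outer(f, h) + np.outer(h, f))   # kernel of Phi <> Psi

def lhs_kernel(ti):
    """First-chaos kernel of D_t(Phi <> Psi) at grid index ti."""
    return 2.0 * sym_fh[:, ti]

def rhs_kernel(ti):
    """First-chaos kernel of D_t(Phi) <> Psi + Phi <> D_t(Psi)."""
    return f[ti] * h + h[ti] * f

max_err = max(np.max(np.abs(lhs_kernel(ti) - rhs_kernel(ti)))
              for ti in range(len(grid)))
```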

Similarly, we can show the pointwise product rule, with the restriction that we
operate on smooth random variables only.

Proposition 3.5. If φ, ψ ∈ G, then

    D_t(φ · ψ) = D_t(φ) · ψ + φ · D_t ψ.

Since the pointwise product is not well-defined for random variables in G*, we
cannot generalize the above result to all Φ, Ψ ∈ G*. However, there are two cases of
interest for which the product rule makes sense. First, under an additional
assumption of strong independence of Φ and Ψ, application of Theorem 2.7 and the
fact that the stochastic derivative preserves strong independence yields:

Proposition 3.6. If Φ, Ψ ∈ G* are strongly independent, then

    D_t(Φ · Ψ) = D_t(Φ) · Ψ + Φ · D_t Ψ.

Finally, since φ ∈ G implies that D_t φ ∈ G for almost all t, and the product of
test and generalized random variables is well-defined, we obtain:

Proposition 3.7. If φ ∈ G* and Ψ ∈ G, then

    D_t(φ · Ψ) = D_t(φ) · Ψ + φ · D_t Ψ.

Finally, following Hida et al. (1993, Equation (5.17)), we give the formula for the
S-transform of the Malliavin derivative. In the spirit of completeness, we first
recall the definition of the Fréchet functional derivative that appears in the
formula for the S-transform of the stochastic derivative. We say that a real-valued
function f defined on an open subset U of a Banach space B is Fréchet differentiable
at x if there exists a bounded linear functional δf/δx : B → ℝ such that
|f(x + y) − f(x) − (δf/δx)(y)| = o(‖y‖) for all y ∈ B.

Proposition 3.8. For all Φ ∈ (S)* and ξ ∈ S(ℝ),

    S(D_t Φ)(ξ) = (δ/δξ(t)) S(Φ)(ξ),

where δ/δξ(s) is the Fréchet functional derivative.

3.3 Stochastic integration

In this section we introduce the Skorohod integral for processes in G*. In Malliavin
calculus, the Skorohod integral can be defined through the chaos expansion as

    φ(t) = Σ_{n=0}^∞ I_n(φ⁽ⁿ⁾(·, t))   ⟹   δ^M(φ) = Σ_{n=0}^∞ I_{n+1}(φ̂⁽ⁿ⁾).   (3.4)

The domain of δ^M consists of all those processes whose Skorohod integral results in
a random variable in (L²), namely

    Dom δ^M = {φ ∈ (L²) : Σ_{n=0}^∞ (n + 1)! |φ̂⁽ⁿ⁾|²_{L²(ℝⁿ⁺¹)} < ∞}.

We extend the Skorohod integral in the same manner as we extended the Malliavin
derivative.

Definition 3.9. For Φ(t) = Σ_{n=0}^∞ I_n(Φ⁽ⁿ⁾(·, t)) ∈ G*, we define the Skorohod
integral by

    δ(Φ) = ∫_ℝ Φ(t) δB(t) := Σ_{n=0}^∞ I_{n+1}(Φ̂⁽ⁿ⁾),

whenever Σ_{n=0}^∞ (n + 1)! e^{−2(n+1)λ} |Φ̂⁽ⁿ⁾|²_{L²(ℝⁿ⁺¹)} < ∞ for some λ > 0.

The next result gives sufficient conditions for Φ(t) to be Skorohod-integrable and
provides a norm estimate on δ(Φ) under the assumption of square-integrability of the
norm ‖Φ(t)‖_{−λ}.

Theorem 3.10. If Φ(t) ∈ G_{−λ} for all t ∈ ℝ and

    ∫_ℝ ‖Φ(t)‖²_{−λ} dt < ∞,

then for any ε > 0 there is a constant C_ε such that

    ‖δ(Φ)‖²_{−λ−ε} ≤ C_ε ∫_ℝ ‖Φ(t)‖²_{−λ} dt.

Thus δ(Φ) ∈ G_{−λ−ε} and in particular, δ(Φ) ∈ G*.

Proof. Fix an arbitrary ε > 0. Keeping in mind that the L²(ℝⁿ⁺¹) norm of the
symmetrization Φ̂⁽ⁿ⁾ does not exceed that of Φ⁽ⁿ⁾, consider

    ‖δ(Φ)‖²_{−λ−ε} = Σ_{n=0}^∞ (n + 1)! e^{−2(λ+ε)(n+1)} |Φ̂⁽ⁿ⁾|²_{L²(ℝⁿ⁺¹)}
        ≤ Σ_{n=0}^∞ (n + 1) n! e^{−2(λ+ε)n} ∫_ℝ |Φ⁽ⁿ⁾(·, t)|²_{L²(ℝⁿ)} dt
        = ∫_ℝ Σ_{n=0}^∞ (n + 1) n! e^{−2(λ+ε)n} |Φ⁽ⁿ⁾(·, t)|²_{L²(ℝⁿ)} dt.   (3.5)

By the linearity of the integral, it is enough to show that for k large enough, the
following integral converges:

    ∫_ℝ Σ_{n=k}^∞ (n + 1) n! e^{−2(λ+ε)n} |Φ⁽ⁿ⁾(·, t)|²_{L²(ℝⁿ)} dt.

Note that for any ε > 0 there is a k ∈ ℕ₀ such that for any n ≥ k we have
(n + 1) e^{−2(λ+ε)n} < e^{−2λn}. This follows from the fact that
f(x) = ln(x + 1)/x is strictly decreasing on the interval (0, ∞) and
lim_{x→∞} f(x) = 0. Hence, for k large enough, we have

    ∫_ℝ Σ_{n=k}^∞ (n + 1) n! e^{−2(λ+ε)n} |Φ⁽ⁿ⁾(·, t)|²_{L²(ℝⁿ)} dt
        ≤ ∫_ℝ Σ_{n=k}^∞ n! e^{−2λn} |Φ⁽ⁿ⁾(·, t)|²_{L²(ℝⁿ)} dt
        ≤ ∫_ℝ Σ_{n=0}^∞ n! e^{−2λn} |Φ⁽ⁿ⁾(·, t)|²_{L²(ℝⁿ)} dt
        ≤ ∫_ℝ ‖Φ(t)‖²_{−λ} dt.

Note that we can treat the first k elements of the sum in Equation (3.5) as in the
proof of Theorem 3.2, so ‖δ(Φ)‖²_{−λ−ε} ≤ C_ε ∫_ℝ ‖Φ(t)‖²_{−λ} dt < ∞, as required.

It is a well-known fact that in the setting of the Hida spaces (S), (S)* the
Skorohod integral can be interpreted as a white noise integral. Namely, for
Φ(t) ∈ (S)* we can view the following integral as the extension of the Skorohod
integral

    ∫_ℝ ∂_t* Φ(t) dt,

where the integral is understood in the Pettis sense and ∂_t* : (S)* → (S)* is the
white noise integration operator, that is, the adjoint of ∂_t, the Gâteaux
derivative in the direction δ_t. We have an explicit expression for the chaos
expansion of the above integral, given by (e.g. Kuo, 1996; Hida et al., 1993)

    ∫_ℝ ∂_t* Φ(t) dt = Σ_{n=0}^∞ I_{n+1}( ∫_ℝ δ_t ⊗̂ Φ⁽ⁿ⁾(t) dt ).   (3.6)

As in the case of the stochastic derivative, with the same notation as previously,
it is straightforward to check that for Φ⁽ⁿ⁾(·, t) ∈ L²(ℝⁿ) we have

    ∫_ℝ (δ_t ⊗̂ Φ⁽ⁿ⁾(t)) dt = (1/(n+1)) Σ_{k=0}^n Φ⁽ⁿ⁾(x⁽ⁿ⁾_{≠k}, x_k) = Φ̂⁽ⁿ⁾.

Thus this integral is an actual extension of the stochastic integral defined in
Equation (3.4). Recall that the same integral can be defined (in the (S)* setting)
as

    ∫_ℝ Φ(t) ⋄ W_t dt,

and the chaos expansion of this generalized random variable is the same as the one
in Equation (3.6). Note, however, that W_t = I_1(δ_t) ∈ (S)* is not an element of G*
because δ_t ∉ L²(ℝ). But the above reasoning justifies Definition 3.9 as a Skorohod
integral of processes in G*, and Theorem 3.10 gives sufficient conditions for the
result of integration to be an element of G*.

3.4 Properties of the stochastic integral

First, we state some properties that are readily seen directly from Definition 3.9
of the Skorohod integral.

Theorem 3.11.
i. The Skorohod integral is linear;
ii. $\int_a^b 0\, \delta B(t) = 0$;
iii. $\int_a^b 1\, \delta B(t) = B(b) - B(a)$;
iv. If $a \le b \le c$, then $\int_a^c \Phi(t)\, \delta B(t) = \int_a^b \Phi(t)\, \delta B(t) + \int_b^c \Phi(t)\, \delta B(t)$.

Theorem 3.12. Suppose that $\Phi(t)\in\mathcal{G}^*$ is Skorohod-integrable over $\mathcal{T}$, and $D_t\Phi(s)$ is Skorohod-integrable for almost all $t\in\mathcal{T}$. Then
\[
D_t\Big( \int_{\mathcal{T}} \Phi(s)\, \delta B(s) \Big) = \Phi(t) + \int_{\mathcal{T}} D_t\Phi(s)\, \delta B(s). \tag{3.7}
\]

Proof. Note that since $\Phi(t)\in\mathcal{G}^*$ for all $t\in\mathcal{T}$, by Theorem 3.2 the stochastic derivative $D_t\Phi(s)$ exists for almost all $t\in\mathcal{T}$ and its norm is square-integrable; hence $D_t\Phi(s)$ is Skorohod-integrable by Theorem 3.10. It remains to show that Equation (3.7) holds.

Let $\Phi(t) = \sum_{n=0}^{\infty} I_n\big(\Phi^{(n)}(\cdot,t)\big)$, where $\Phi^{(n)}(\cdot,t) \in L^2(\mathbb{R}^n)$ for all $t\in\mathcal{T}$. The left-hand side of Equation (3.7) is given by
\[
D_t\Big( \int_{\mathcal{T}} \Phi(s)\, \delta B(s) \Big) = D_t\Big( \sum_{n=0}^{\infty} I_{n+1}\big(\widehat{\Phi}^{(n)}\big) \Big) = \sum_{n=0}^{\infty} (n+1)\, I_n\big(\widehat{\Phi}^{(n)}(\cdot,t)\big).
\]
On the other hand, we can write out the right-hand side of Equation (3.7) as

\begin{align*}
\Phi(t) + \int_{\mathcal{T}} D_t\Phi(s)\, \delta B(s)
&= \sum_{n=0}^{\infty} I_n\big(\Phi^{(n)}(\cdot,t)\big) + \delta\Big( D_t\Big( \sum_{n=0}^{\infty} I_n\big(\Phi^{(n)}(\cdot,s)\big)\Big)\Big) \\
&= \sum_{n=0}^{\infty} I_n\big(\Phi^{(n)}(\cdot,t)\big) + \delta\Big( \sum_{n=0}^{\infty} n\, I_{n-1}\big(\Phi^{(n)}(\cdot,s,t)\big)\Big) \\
&= \sum_{n=0}^{\infty} I_n\big(\Phi^{(n)}(\cdot,t)\big) + \sum_{n=0}^{\infty} n\, I_n\big(\widehat{\Phi}^{(n)}(\cdot,t)\big) \\
&= \sum_{n=0}^{\infty} (n+1)\, I_n\big(\widehat{\Phi}^{(n)}(\cdot,t)\big).
\end{align*}
So the two sides of Equation (3.7) are equal and the theorem holds. □

When comparing the above result with its Malliavin calculus counterpart (see Barndorff-Nielsen et al., 2012, Proposition 1), we see that we are not required to assume the existence of the stochastic derivative of $\Phi$, because it is ensured by the properties of the derivative in the space $\mathcal{G}^*$.

Next, we present an "integration by parts formula" for the stochastic derivative and integral. Note that we cannot use the pointwise product freely, as its result might be undefined for generalized random variables. However, we can always take a product of a test and a generalized random variable.
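Pointwise products of chaos elements are governed by the product formula quoted later in the proof of Theorem 3.13. In its simplest instance, $m=n=1$, it reads $I_1(f)\,I_1(g) = I_2(f\,\widehat{\otimes}\,g) + \langle f,g\rangle$, which has two directly checkable consequences for $X=I_1(f)$, $Y=I_1(g)$ with $|f|=|g|=1$ and $\langle f,g\rangle=\rho$: $\mathbb{E}[XY]=\rho$ and $\operatorname{Var}(XY)=1+\rho^2$. The following Monte Carlo sketch (an illustration of ours, not a computation from the paper) realizes $X$, $Y$ as correlated standard Gaussians:

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.6            # <f, g>, the contraction term of the product formula
n = 1_000_000

# X = I_1(f), Y = I_1(g) with |f| = |g| = 1 and <f, g> = rho,
# realized as correlated standard Gaussians.
x = rng.standard_normal(n)
z = rng.standard_normal(n)
y = rho * x + np.sqrt(1.0 - rho**2) * z

# Product formula for m = n = 1:  I_1(f) I_1(g) = I_2(f ⊗̂ g) + <f, g>,
# so E[XY] = rho and Var(XY) = ||I_2(f ⊗̂ g)||^2 = 1 + rho^2.
mean_xy = (x * y).mean()
var_xy = (x * y).var()
print(mean_xy, var_xy)   # ≈ 0.6 and ≈ 1.36
```

The variance identity is exactly the squared $(L^2)$ norm of the second-chaos term, $2\,|f\,\widehat{\otimes}\,g|^2 = 1+\rho^2$.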

Theorem 3.13. Suppose that $\varphi\in\mathcal{G}$ and $\Phi(t)\in\mathcal{G}^*$ for all $t$. If for some $\lambda>0$ and $\nu > \tfrac12\ln(2+\sqrt{2})$
\[
\int_0^T \|\Phi(t)\|^2_{-\lambda+\nu}\, dt < \infty,
\]
then
\[
\int_0^T \varphi\,\Phi(t)\, \delta B(t) = \varphi \int_0^T \Phi(t)\, \delta B(t) - \int_0^T \Phi(t)\, D_t\varphi\, dt. \tag{3.8}
\]

Proof. First we show that all components of Equation (3.8) are elements of $\mathcal{G}^*$. By Theorem 3.10, for the integral on the left-hand side of Equation (3.8) to be well-defined it suffices that $\int_0^T \|\varphi\,\Phi(t)\|^2_{-\lambda}\, dt < \infty$. By Theorem 2.9 and our assumption, we have
\[
\int_0^T \|\varphi\,\Phi(t)\|^2_{-\lambda}\, dt \le C_\nu^2\, \|\varphi\|^2_{\lambda} \int_0^T \|\Phi(t)\|^2_{-\lambda+\nu}\, dt < \infty.
\]
Thus $\varphi\,\Phi(t)$ is Skorohod-integrable. The integral in the first term on the right-hand side of Equation (3.8) is also well-defined by our assumption on the square-integrability of the norm. Since $\varphi\in\mathcal{G}$, the first product on the right-hand side is an element of $\mathcal{G}^*$. Finally, the Pettis integral on the right-hand side of Equation (3.8) exists because for any $\psi\in\mathcal{G}$
\begin{align*}
\int_0^T \big|\langle\langle \Phi(t)\, D_t\varphi,\, \psi\rangle\rangle\big|\, dt
&\le \int_0^T \|\Phi(t)\, D_t\varphi\|_{-\lambda+\varepsilon}\, \|\psi\|_{\lambda-\varepsilon}\, dt \\
&\le \|\psi\|_{\lambda-\varepsilon}\, C_\nu \int_0^T \|\Phi(t)\|_{-\lambda+\varepsilon+\nu}\, \|D_t\varphi\|_{\lambda-\varepsilon}\, dt \\
&\le \|\psi\|_{\lambda-\varepsilon}\, C_\nu \Big( \int_0^T \|\Phi(t)\|^2_{-\lambda+\varepsilon+\nu}\, dt \Big)^{1/2} \Big( \int_0^T \|D_t\varphi\|^2_{\lambda-\varepsilon}\, dt \Big)^{1/2} \\
&< \infty.
\end{align*}
The first integral in the last statement above is finite by assumption and the monotonicity of the norms $\|\cdot\|_{-\lambda}$. The second integral is finite by Theorem 3.2. Above we have also used Theorem 2.9 and Remark 2.1.

Finally, although very tedious, it is straightforward to check that the chaos expansions of both sides of Equation (3.8) agree. In order to see this, one might start with $\Phi = I_n\big(\Phi^{(n)}(t)\big)$ and $\varphi = I_m\big(\varphi^{(m)}\big)$, as linear combinations of variables of this form are dense in $\mathcal{G}^*$ and $\mathcal{G}$, respectively. This choice of $\varphi$, $\Phi$ significantly simplifies the computations, as one can use the product formula
\[
\varphi \cdot \Phi = \sum_{n=0}^{\infty} \sum_{m=0}^{\infty} \sum_{k=0}^{m\wedge n} k! \binom{m}{k}\binom{n}{k}\, I_{m+n-2k}\big( \Phi^{(n)} \,\widehat{\otimes}_k\, \varphi^{(m)} \big),
\]
where $\widehat{\otimes}_k$ is the symmetrized tensor product contracted over $k$ variables. □

The last well-known property of the Skorohod integral that we use in the forthcoming sections is the form of its $S$-transform.

Proposition 3.14. For all $\Phi\in(\mathcal{S})^*$ and $\xi\in\mathcal{S}(\mathbb{R})$,
\[
S\Big( \int_0^t \Phi(s)\, \delta B(s) \Big)(\xi) = \int_0^t S(\Phi(s))(\xi) \cdot \xi(s)\, ds.
\]

4 Integration for Volterra processes

As we have already mentioned, in order to define an integral with respect to a VMBV process, we follow Barndorff-Nielsen et al. (2012). We define the integral
\[
\int_0^t \Phi(s)\, dX_1(s), \quad\text{where } X_1(t) = \int_0^t g(t,s)\, dB(s), \tag{4.1}
\]
with the use of the following operator:
\[
\mathcal{K}_g(\Phi)(t,s) := \Phi(s)\, g(t,s) + \int_s^t \big(\Phi(u) - \Phi(s)\big)\, g(du,s). \tag{4.2}
\]
The definition of the integral in Equation (4.1) is given by
\[
\int_0^t \Phi(s)\, dX_1(s) := \int_0^t \mathcal{K}_g(\Phi)(t,s)\, \delta B(s) + \int_0^t D_s\{\mathcal{K}_g(\Phi)(t,s)\}\, ds. \tag{4.3}
\]
Before we discuss the integral defined above, we have to turn our attention to the study of the properties of the operator $\mathcal{K}_g$, which is the main building block of the integral itself.
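To build intuition for the operator in Equation (4.2), the sketch below (an illustration of ours, not part of the paper's framework) evaluates a Riemann–Stieltjes discretization of $\mathcal{K}_g(\Phi)(t,s)$ for a deterministic integrand and the Ornstein–Uhlenbeck kernel $g(t,s)=e^{-\alpha(t-s)}$ used later in Section 7. For a constant $\Phi\equiv c$ the integral term vanishes and $\mathcal{K}_g(\Phi)(t,s) = c\,g(t,s)$, which the code checks numerically:

```python
import numpy as np

def K_g(phi, g, t, s, m=20_000):
    """Riemann-Stieltjes discretization of
    K_g(phi)(t, s) = phi(s) g(t, s) + int_s^t (phi(u) - phi(s)) g(du, s)."""
    u = np.linspace(s, t, m + 1)
    du_g = np.diff(g(u, s))           # increments of u -> g(u, s)
    mid = 0.5 * (u[:-1] + u[1:])      # midpoint rule for the integrand
    return phi(s) * g(t, s) + np.sum((phi(mid) - phi(s)) * du_g)

alpha = 2.0
g = lambda u, s: np.exp(-alpha * (u - s))   # OU (shift) kernel, as in Section 7

# Constant integrand: the Stieltjes term vanishes, K_g(c)(t, s) = c * g(t, s).
const = K_g(lambda u: 3.0 + 0.0 * u, g, t=1.0, s=0.2)
print(const, 3.0 * g(1.0, 0.2))              # the two values agree

# A non-constant integrand, e.g. phi(u) = u, picks up the Stieltjes correction.
lin = K_g(lambda u: u, g, t=1.0, s=0.2)
print(lin)
```

For $\varphi(u)=u$ the discretized value can be compared against the closed form $s\,e^{-\alpha(t-s)} - \big(1 - e^{-\alpha(t-s)}(1+\alpha(t-s))\big)/\alpha$, obtained by integrating $(u-s)\,(-\alpha e^{-\alpha(u-s)})\,du$ by parts.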

4.1 Properties of the operator $\mathcal{K}_g$

In this section, we study the regularity of the operator $\mathcal{K}_g$. That is, we wish to find out for which $\gamma>0$ we have $\mathcal{K}_g(\Phi)(t,s)\in\mathcal{G}_{-\gamma}$ when $\Phi(t)\in\mathcal{G}_{-\lambda}$ for all $t$. First of all, from Equation (4.2), we see that $\Phi(u)-\Phi(s)$ has to be Pettis–Stieltjes-integrable with respect to $g(du,s)$ on $[s,t]$ for $0\le s<t\le T$. Using previously introduced notation, we have $(\mathcal{T}, \mathcal{B}, m) = ([s,t], \mathcal{B}([s,t]), m_g)$, where $m_g$ is the Lebesgue–Stieltjes measure associated to $g(\cdot,s)$. In order to consider integrability of $\Phi$ with respect to $g(du,s)$, and later, for the existence of $\mathcal{K}_g(\Phi)(t,s)$, we need the following assumptions.

Assumption A. Suppose that:
i. For any $0\le s<t\le T$, the function $u\mapsto g(u,s)$ is of finite total variation on $[s,t]$, so that the Lebesgue–Stieltjes measure $g(du,s)$ and its total variation measure $|g|(du,s)$ are well-defined;
ii. For any $0\le s<t\le T$, $\int_s^t |g|(du,s) < \infty$;
iii. For any $0\le s<t\le T$, $\int_s^t \|\Phi(u)-\Phi(s)\|^2_{-\lambda}\, |g|(du,s) < \infty$.

Proposition 4.1. Suppose that Assumption A holds. Then for any $0\le s<t\le T$, the integral
\[
\int_s^t \big(\Phi(u)-\Phi(s)\big)\, g(du,s) \tag{4.4}
\]
exists as a Pettis–Stieltjes integral. Moreover, if $\Phi(t)\in\mathcal{G}_{-\lambda}$, $\lambda>0$, for all $0\le t\le T$, then for any $0\le s<t\le T$,
\[
\Big\| \int_s^t \big(\Phi(u)-\Phi(s)\big)\, g(du,s) \Big\|_{-\lambda} < \infty, \tag{4.5}
\]
that is, the integral in Equation (4.4) is an element of $\mathcal{G}_{-\lambda}$.

Proof. In order to prove integrability in the Pettis sense, consider
\begin{align*}
\Big| \Big\langle\Big\langle \int_s^t (\Phi(u)-\Phi(s))\, g(du,s),\, \varphi \Big\rangle\Big\rangle \Big|
&= \Big| \int_s^t \langle\langle \Phi(u)-\Phi(s),\, \varphi \rangle\rangle\, g(du,s) \Big| \\
&\le \int_s^t \big|\langle\langle \Phi(u)-\Phi(s),\, \varphi \rangle\rangle\big|\, |g|(du,s) \\
&\le \int_s^t \|\Phi(u)-\Phi(s)\|_{-\lambda}\, \|\varphi\|_{\lambda}\, |g|(du,s) \\
&= \|\varphi\|_{\lambda} \int_s^t \|\Phi(u)-\Phi(s)\|_{-\lambda}\, |g|(du,s) \\
&\le \|\varphi\|_{\lambda} \Big( \int_s^t \|\Phi(u)-\Phi(s)\|^2_{-\lambda}\, |g|(du,s) \Big)^{1/2} \Big( \int_s^t 1\, |g|(du,s) \Big)^{1/2} \\
&< \infty,
\end{align*}
where we have used the Hölder inequality, Assumption A and Remark 2.1. To prove that the norm in Equation (4.5) is finite, consider
\begin{align*}
\Big\| \int_s^t (\Phi(u)-\Phi(s))\, g(du,s) \Big\|^2_{-\lambda}
&= \sum_{n=0}^{\infty} n!\, e^{-2\lambda n}\, \Big| \int_s^t \big(\Phi^{(n)}(u)-\Phi^{(n)}(s)\big)\, g(du,s) \Big|^2_{L^2(\mathbb{R}^n)} \\
&\le V_s^t[g(\cdot,s)] \sum_{n=0}^{\infty} n!\, e^{-2\lambda n} \int_s^t \big|\Phi^{(n)}(u)-\Phi^{(n)}(s)\big|^2_{L^2(\mathbb{R}^n)}\, |g|(du,s) \\
&= V_s^t[g(\cdot,s)] \int_s^t \sum_{n=0}^{\infty} n!\, e^{-2\lambda n}\, \big|\Phi^{(n)}(u)-\Phi^{(n)}(s)\big|^2_{L^2(\mathbb{R}^n)}\, |g|(du,s) \\
&= V_s^t[g(\cdot,s)] \int_s^t \|\Phi(u)-\Phi(s)\|^2_{-\lambda}\, |g|(du,s) \\
&< \infty,
\end{align*}
where $V_s^t[f]$ denotes the total variation of $f$ on the interval $[s,t]$, which by Assumption A is finite. □

Theorem 4.2. If Assumption A holds and $\Phi(t)\in\mathcal{G}_{-\lambda}$ for all $0\le t\le T$, then $\mathcal{K}_g(\Phi)(t,s)\in\mathcal{G}_{-\lambda}$ for all $0\le s\le t\le T$.

Proof. As in the proof of Proposition 4.1, it is enough to establish that $\|\mathcal{K}_g(\Phi)(t,s)\|_{-\lambda} < \infty$. Consider
\begin{align*}
\|\mathcal{K}_g(\Phi)(t,s)\|_{-\lambda}
&= \Big\| \Phi(s)\, g(t,s) + \int_s^t (\Phi(u)-\Phi(s))\, g(du,s) \Big\|_{-\lambda} \\
&\le \|\Phi(s)\, g(t,s)\|_{-\lambda} + \Big\| \int_s^t (\Phi(u)-\Phi(s))\, g(du,s) \Big\|_{-\lambda} \\
&= |g(t,s)|\, \|\Phi(s)\|_{-\lambda} + \Big\| \int_s^t (\Phi(u)-\Phi(s))\, g(du,s) \Big\|_{-\lambda} \\
&< \infty.
\end{align*}
Thus the result holds. □

As we will see in the forthcoming sections, the fact that the operator $\mathcal{K}_g(\cdot)$ preserves the regularity of $\Phi$ is of crucial importance in the derivation of the regularity properties of the integrals defined below.

4.2 The integral

Now we go back to the study of the integral defined in Equation (4.1). Since we have established sufficient conditions for $\mathcal{K}_g(\Phi)(t,s)\in\mathcal{G}_{-\lambda}$, we can now look at the Skorohod integral of $\mathcal{K}_g(\Phi)(t,s)$. By Theorem 3.10, it is enough to show that
\[
\int_0^T \|\mathcal{K}_g(\Phi)(t,s)\|^2_{-\lambda}\, ds < \infty
\]
in order to establish the Skorohod integrability of $\mathcal{K}_g(\Phi)(t,s)$. We will show that this is the case under the following assumptions.

Assumption B. Suppose that:
i. $\int_0^T |g(t,s)|^2\, \|\Phi(s)\|^2_{-\lambda}\, ds < \infty$;
ii. For any $0\le s<t\le T$,
\[
\int_0^t \Big\| \int_s^t (\Phi(u)-\Phi(s))\, g(du,s) \Big\|^2_{-\lambda}\, ds < \infty.
\]

Remark 4.3. Notice that in what follows, Assumption B Items i and ii can be substituted with the weaker assumption that $\int_0^t \|\mathcal{K}_g(\Phi)(t,s)\|^2_{-\lambda}\, ds < \infty$ for all $t\in[0,T]$.

Proposition 4.4. Suppose that Assumptions A and B hold. If $\Phi(t)\in\mathcal{G}_{-\lambda}$ for all $0\le t\le T$, then for any $\varepsilon>0$,
\[
\int_0^t \mathcal{K}_g(\Phi)(t,s)\, \delta B(s) \in \mathcal{G}_{-\lambda-\varepsilon}.
\]

Proof. Using our assumptions and Hölder's inequality, we obtain
\begin{align*}
\int_0^t \|\mathcal{K}_g(\Phi)(t,s)\|^2_{-\lambda}\, ds
&= \int_0^t \Big\| \Phi(s)\, g(t,s) + \int_s^t (\Phi(u)-\Phi(s))\, g(du,s) \Big\|^2_{-\lambda}\, ds \\
&\le \int_0^t \Big( \|\Phi(s)\, g(t,s)\|_{-\lambda} + \Big\| \int_s^t (\Phi(u)-\Phi(s))\, g(du,s) \Big\|_{-\lambda} \Big)^2 ds \\
&\le 2 \int_0^t \|\Phi(s)\, g(t,s)\|^2_{-\lambda}\, ds + 2 \int_0^t \Big\| \int_s^t (\Phi(u)-\Phi(s))\, g(du,s) \Big\|^2_{-\lambda}\, ds \\
&< \infty.
\end{align*}
Hence the result follows by Theorem 3.10. □

Next, we consider the Pettis-integrability of $D_s\mathcal{K}_g(\Phi)(t,s)$.

Proposition 4.5. Suppose that Assumptions A and B hold. If $\Phi(t)\in\mathcal{G}_{-\lambda}$, then for any $\varepsilon>0$,
\[
\int_0^t D_s\mathcal{K}_g(\Phi)(t,s)\, ds \in \mathcal{G}_{-\lambda-\varepsilon}.
\]

Proof. We will show that $D_s\mathcal{K}_g(\Phi)(t,s)$ is weakly in $L^1([0,t])$, that is, for all $\varphi\in\mathcal{G}$ we have $\langle\langle D_s\mathcal{K}_g(\Phi)(t,s),\, \varphi\rangle\rangle \in L^1([0,t])$. Observe that if $\Phi(t)\in\mathcal{G}_{-\lambda}$, then $\mathcal{K}_g(\Phi)(t,s)\in\mathcal{G}_{-\lambda}$ and, in consequence, $D_s\mathcal{K}_g(\Phi)(t,s)\in\mathcal{G}_{-\lambda-\varepsilon}$ for any $\varepsilon>0$. Consider
\begin{align*}
\int_0^t \big|\langle\langle D_s\mathcal{K}_g(\Phi)(t,s),\, \varphi\rangle\rangle\big|\, ds
&\le \int_0^t \|D_s\mathcal{K}_g(\Phi)(t,s)\|_{-\lambda-\varepsilon}\, \|\varphi\|_{\lambda+\varepsilon}\, ds \\
&= \|\varphi\|_{\lambda+\varepsilon} \int_0^t \|D_s\mathcal{K}_g(\Phi)(t,s)\|_{-\lambda-\varepsilon}\, ds \\
&\le \|\varphi\|_{\lambda+\varepsilon}\, \sqrt{t}\, \Big( \int_0^t \|D_s\mathcal{K}_g(\Phi)(t,s)\|^2_{-\lambda-\varepsilon}\, ds \Big)^{1/2} \\
&< \infty.
\end{align*}
Above we have used Remark 2.1, Hölder's inequality and Theorem 3.2. □

Putting Propositions 4.4 and 4.5 together yields the main result of this section.

Theorem 4.6. Suppose that Assumptions A and B hold. If $\Phi(t)\in\mathcal{G}_{-\lambda}$ for all $t\in[0,T]$, then for any $\varepsilon>0$
\[
\int_0^t \Phi(s)\, dX_1(s) \in \mathcal{G}_{-\lambda-\varepsilon}.
\]

Remark 4.7. Recall that $\mathcal{G} \subset \mathcal{G}_{\lambda} \subset (L^2) \subset \mathcal{G}_{-\lambda} \subset \mathcal{G}^*$ for any $\lambda>0$. So the above theorem assures that:
i. if $\Phi\in\mathcal{G}_{\lambda}$ for some $\lambda>0$, then the integral is an $(L^2)$ process;
ii. if $\Phi\in\mathcal{G}$, then the integral is a $\mathcal{G}$ process again;
iii. if $\Phi\in(L^2)$, then the integral is a $\mathcal{G}_{-\varepsilon}$ process for any $\varepsilon>0$; thus, in a certain sense, it is very close to $(L^2)$.
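The inclusions behind Remark 4.7 can be seen at the level of norms: for a fixed chaos-coefficient sequence, the weights $e^{2\lambda n}$, $1$, $e^{-2\lambda n}$ give $\|\cdot\|_{-\lambda} \le \|\cdot\|_{(L^2)} \le \|\cdot\|_{\lambda}$. The toy computation below (our illustration, not from the paper) uses coefficients $a_n = 1/n!$, for which all three squared norms have the closed form $\exp(e^{2\lambda})$ (with $\lambda$ negative, zero, or positive):

```python
import math

# Toy chaos coefficients a_n = 1/n!, i.e. F = sum_n a_n I_n(f^{⊗n}) with |f| = 1.
# Then ||F||_{lambda}^2 = sum_n n! e^{2 lambda n} a_n^2 = sum_n e^{2 lambda n}/n!
# = exp(e^{2 lambda}), finite for every lambda, so F is a test process.
def norm_sq(lam, n_terms=100):
    total, log_fact = 0.0, 0.0
    for n in range(n_terms):
        if n > 0:
            log_fact += math.log(n)          # log n!
        total += math.exp(2 * lam * n - log_fact)
    return total

lam = 0.5
neg, l2, pos = norm_sq(-lam), norm_sq(0.0), norm_sq(lam)
print(neg, l2, pos)   # increasing, mirroring G_lambda ⊂ (L^2) ⊂ G_{-lambda}
```

Here `norm_sq(0.0)` is the squared $(L^2)$ norm, and the monotonicity in `lam` is exactly the chain of inclusions quoted in Remark 4.7.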

5 Integration for volatility modulated Volterra processes

In this section, we introduce stochastic volatility in the integrator process $X(t)$. In the defining Equation (1.2), we see that the volatility multiplies the integrands on the right-hand side of Equation (1.2). This is an ordinary operation when considering non-generalized stochastic processes; however, the product of two generalized random variables from $\mathcal{G}^*$ does not have to be an element of $\mathcal{G}^*$. We overcome this difficulty with two different approaches. In the first part of this section, we take $\Sigma(s) = \sigma(s)$ to be a test stochastic process, that is, $\sigma(s)\in\mathcal{G}$ for all $s\in[0,T]$. In the second part of this section, we use the Wick product to introduce volatility modulation, as this operation is well-defined for all $\Sigma\in\mathcal{G}^*$. Note that under strong independence (see Definition 2.6 or Benth (2001); Benth and Potthoff (1996)) this is equivalent to the pointwise product case.

5.1 Pointwise product with smooth volatility

In this subsection, we assume that the volatility process $\sigma$ is a smooth stochastic process and study the following integral:
\[
\int_0^t \Phi(s)\, dX_\sigma(s), \quad\text{where } X_\sigma(t) = \int_0^t g(t,s)\,\sigma(s)\, dB(s). \tag{5.1}
\]
Let us remark that the assumption that the volatility is a test stochastic process is not as restrictive as it appears. For example, Brownian-driven Volterra processes are test stochastic processes, because
\[
\sigma(t) = \int_0^t h(t,s)\, dB(s) = I_1\big( \mathbf{1}_{[0,t]}\, h(t,\cdot) \big).
\]
So if $h(t,\cdot)\in L^2(\mathbb{R})$, then $\|\sigma(t)\|_{\lambda} < \infty$ for all $t\in[0,T]$ and $\lambda>0$; hence $\sigma(t)\in\mathcal{G}$ for all $t\in[0,T]$. As we will show, the sufficient conditions for the integral in Equation (5.1) to be well-defined are the following.

Assumption C. Suppose that:
i. For all $t\in[0,T]$ we have $\sigma(t)\in\mathcal{G}$;
ii. $\int_0^T \|\sigma(s)\|^2_{\lambda}\, ds < \infty$;
iii. For any $0\le s<t\le T$,
\[
\int_0^t |g(t,s)|^2\, \|\Phi(s)\|^2_{-\lambda+\nu}\, \|\sigma(s)\|^2_{\lambda}\, ds < \infty;
\]
iv. For any $0\le s<t\le T$,
\[
\int_0^t \Big\| \int_s^t (\Phi(u)-\Phi(s))\, g(du,s) \Big\|^2_{-\lambda+\nu} \|\sigma(s)\|^2_{\lambda}\, ds < \infty.
\]

Remark 5.1. As previously, in what follows, Assumption C Items iii and iv can be substituted with the weaker assumption that $\int_0^t \|\mathcal{K}_g(\Phi)(t,s)\|^2_{-\lambda+\nu}\, \|\sigma(s)\|^2_{\lambda}\, ds < \infty$ for all $t\in[0,T]$.

Theorem 5.2. Under Assumptions A and C, the integral
\[
\int_0^t \Phi(s)\, dX_\sigma(s) \tag{5.2}
\]
is well-defined in the sense of Pettis. Moreover, if $\Phi(t)\in\mathcal{G}_{-\lambda+\nu}$, where $\nu > \tfrac12\ln(2+\sqrt{2})$, then for any $\varepsilon>0$
\[
\int_0^t \Phi(s)\, dX_\sigma(s) \in \mathcal{G}_{-\lambda-\varepsilon}.
\]

Proof. First we establish the existence of the Skorohod integral. By Theorem 3.10 it suffices to show that
\[
\int_0^t \|\mathcal{K}_g(\Phi)(t,s)\cdot\sigma(s)\|^2_{-\lambda}\, ds < \infty.
\]
This follows immediately from Theorem 2.9 and Assumption C:
\begin{align*}
\int_0^t \|\mathcal{K}_g(\Phi)(t,s)\cdot\sigma(s)\|^2_{-\lambda}\, ds
&\le C_\nu^2 \int_0^t \|\mathcal{K}_g(\Phi)(t,s)\|^2_{-\lambda+\nu}\, \|\sigma(s)\|^2_{\lambda}\, ds \\
&\le 2 C_\nu^2 \int_0^t |g(t,s)|^2\, \|\Phi(s)\|^2_{-\lambda+\nu}\, \|\sigma(s)\|^2_{\lambda}\, ds \\
&\quad + 2 C_\nu^2 \int_0^t \Big\| \int_s^t (\Phi(u)-\Phi(s))\, g(du,s) \Big\|^2_{-\lambda+\nu} \|\sigma(s)\|^2_{\lambda}\, ds \\
&< \infty.
\end{align*}
The existence of the Pettis integral follows from Remark 2.1, Theorems 2.9 and 3.2, and Assumption C. We have to show that $\langle\langle D_s(\mathcal{K}_g(\Phi)(t,s))\cdot\sigma(s),\, \varphi\rangle\rangle$ is integrable for any $\varphi\in\mathcal{G}$:
\begin{align*}
\int_0^t \big|\langle\langle D_s(\mathcal{K}_g(\Phi)(t,s))\cdot\sigma(s),\, \varphi\rangle\rangle\big|\, ds
&\le \int_0^t \|D_s(\mathcal{K}_g(\Phi)(t,s))\cdot\sigma(s)\|_{-\lambda}\, \|\varphi\|_{\lambda}\, ds \\
&\le C_\nu\, \|\varphi\|_{\lambda} \int_0^t \|D_s\mathcal{K}_g(\Phi)(t,s)\|_{-\lambda+\nu}\, \|\sigma(s)\|_{\lambda}\, ds \\
&\le C_\nu\, \|\varphi\|_{\lambda} \Big( \int_0^t \|D_s\mathcal{K}_g(\Phi)(t,s)\|^2_{-\lambda+\nu}\, ds \Big)^{1/2} \Big( \int_0^t \|\sigma(s)\|^2_{\lambda}\, ds \Big)^{1/2} \\
&< \infty.
\end{align*}
Finally, suppose that $\Phi(t)\in\mathcal{G}_{-\lambda+\nu}$. Then $\mathcal{K}_g(\Phi)(t,s)\in\mathcal{G}_{-\lambda+\nu}$ and, consequently, $\mathcal{K}_g(\Phi)(t,s)\cdot\sigma(s)\in\mathcal{G}_{-\lambda}$. Thus for any $\varepsilon>0$ we have $\delta\big(\mathcal{K}_g(\Phi)(t,s)\cdot\sigma(s)\big)\in\mathcal{G}_{-\lambda-\varepsilon}$. Also, $D_s\mathcal{K}_g(\Phi)(t,s)\in\mathcal{G}_{-\lambda+\nu-\varepsilon}$, and so $D_s\mathcal{K}_g(\Phi)(t,s)\cdot\sigma(s)\in\mathcal{G}_{-\lambda-\varepsilon}$; hence $\int_0^t D_s\mathcal{K}_g(\Phi)(t,s)\cdot\sigma(s)\, ds \in \mathcal{G}_{-\lambda-\varepsilon}$. So the theorem holds. □

In comparison with the results of Barndorff-Nielsen et al. (2012), the above result allows integration of a larger class of processes; however, it restricts the class of volatility modulators. We present the extension of the latter class in the next subsection.
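For a first-chaos volatility as above, the norms entering Assumption C are explicit: since $\sigma(t)=I_1(\mathbf{1}_{[0,t]}h(t,\cdot))$, one has $\|\sigma(t)\|^2_{\lambda} = e^{2\lambda}\int_0^t h(t,s)^2\, ds$. The sketch below compares this closed form with numerical quadrature; the OU-type kernel $h(t,s)=e^{-\beta(t-s)}$ and all parameter values are our own illustrative choices, not the paper's:

```python
import numpy as np

beta, lam, t = 1.5, 0.7, 2.0
h = lambda s: np.exp(-beta * (t - s))   # hypothetical Volterra kernel h(t, s)

# sigma(t) = I_1(1_{[0,t]} h(t, .))  =>  ||sigma(t)||_lambda^2
#          = 1! * e^{2 lam} * int_0^t h(t, s)^2 ds.
s = np.linspace(0.0, t, 200_001)
f = h(s) ** 2
quad = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(s)))    # trapezoidal rule
closed = (1.0 - np.exp(-2 * beta * t)) / (2 * beta)          # exact integral of h^2
norm_sq = np.exp(2 * lam) * quad
print(norm_sq, np.exp(2 * lam) * closed)                     # the two values agree
```

In particular the norm is finite for every $\lambda>0$, confirming that such a $\sigma$ is a test process in the sense of Assumption C Item i.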

5.2 Wick product with generalized volatility

Below, we consider a generalized stochastic process as the volatility. Since the volatility is introduced through multiplication, and the pointwise product of two generalized stochastic processes does not have to be well-defined, we use the Wick product instead. It is worth noting that the choice between the pointwise and Wick products should be based on modeling considerations, as the two products coincide only under special circumstances. We give an example of an additional assumption on the volatility process that ensures the equality of the Wick and pointwise products in the definition given below.

We define the integral with respect to a Wick-VMBV process as
\[
\int_0^t \Phi(s)\, dX_{\diamond\Sigma}(s) := \int_0^t \mathcal{K}_g(\Phi)(t,s) \diamond \Sigma(s)\, \delta B(s) + \int_0^t D_s\big(\mathcal{K}_g(\Phi)(t,s)\big) \diamond \Sigma(s)\, ds, \tag{5.3}
\]
where $X_{\diamond\Sigma}(t) = \int_0^t g(t,s)\,\Sigma(s)\, \delta B(s)$. In what follows, we show that the following are sufficient conditions for the integral in Equation (5.3) to be well-defined.

Assumption D. Suppose that:
i. $\int_0^T \|\Sigma(s)\|^2_{-\lambda}\, ds < \infty$;
ii. For any $0\le s<t\le T$,
\[
\int_0^t |g(t,s)|^2\, \|\Phi(s)\|^2_{-\lambda}\, \|\Sigma(s)\|^2_{-\lambda}\, ds < \infty;
\]
iii. For any $0\le s<t\le T$,
\[
\int_0^t \Big\| \int_s^t (\Phi(u)-\Phi(s))\, g(du,s) \Big\|^2_{-\lambda} \|\Sigma(s)\|^2_{-\lambda}\, ds < \infty.
\]

Remark 5.3. As in Remarks 4.3 and 5.1, in what follows, Assumption D Items ii and iii can be substituted with the weaker assumption that for all $t\in[0,T]$ we have $\int_0^t \|\mathcal{K}_g(\Phi)(t,s)\|^2_{-\lambda}\, \|\Sigma(s)\|^2_{-\lambda}\, ds < \infty$.

Theorem 5.4. Under Assumptions A and D, the integral in Equation (5.3) is well-defined. Moreover, if $\Phi(t)\in\mathcal{G}_{-\lambda}$ for all $t\in[0,T]$, then for any $\varepsilon>0$
\[
\int_0^t \Phi(s)\, dX_{\diamond\Sigma}(s) \in \mathcal{G}_{-\lambda-\frac12-\varepsilon}.
\]

Proof. For the Skorohod integral in Equation (5.3) to exist, it is enough (by Theorem 3.10) to show that $\int_0^t \|\mathcal{K}_g(\Phi)(t,s)\diamond\Sigma(s)\|^2_{-\lambda-\frac12-\varepsilon}\, ds < \infty$. Applying Proposition 2.10, with $\varepsilon>0$ we have
\begin{align*}
\int_0^t \|\mathcal{K}_g(\Phi)(t,s)\diamond\Sigma(s)\|^2_{-\lambda-\frac12-\varepsilon}\, ds
&\le C_\varepsilon^2 \int_0^t \|\mathcal{K}_g(\Phi)(t,s)\|^2_{-\lambda}\, \|\Sigma(s)\|^2_{-\lambda}\, ds \\
&\le 2 C_\varepsilon^2 \int_0^t |g(t,s)|^2\, \|\Phi(s)\|^2_{-\lambda}\, \|\Sigma(s)\|^2_{-\lambda}\, ds \\
&\quad + 2 C_\varepsilon^2 \int_0^t \Big\| \int_s^t (\Phi(u)-\Phi(s))\, g(du,s) \Big\|^2_{-\lambda} \|\Sigma(s)\|^2_{-\lambda}\, ds \\
&< \infty.
\end{align*}

Thus the Skorohod integral in Equation (5.3) exists and is an element of $\mathcal{G}_{-\lambda-\frac12-\varepsilon}$. Now, using arguments similar to the ones used in the case $\sigma\equiv 1$, we show that under Assumption D the Pettis integral in Equation (5.3) also exists. Below we apply Proposition 2.10. For any $\varphi\in\mathcal{G}$ and $\varepsilon>0$, consider
\begin{align*}
\int_0^t \big|\langle\langle D_s(\mathcal{K}_g(\Phi)(t,s)) \diamond \Sigma(s),\, \varphi\rangle\rangle\big|\, ds
&\le \int_0^t \|D_s(\mathcal{K}_g(\Phi)(t,s)) \diamond \Sigma(s)\|_{-\lambda-\frac12-\varepsilon}\, \|\varphi\|_{\lambda+\frac12+\varepsilon}\, ds \\
&\le C_\varepsilon\, \|\varphi\|_{\lambda+\frac12+\varepsilon} \int_0^t \|D_s\mathcal{K}_g(\Phi)(t,s)\|_{-\lambda-\varepsilon}\, \|\Sigma(s)\|_{-\lambda-\varepsilon}\, ds \\
&\le C_\varepsilon\, \|\varphi\|_{\lambda+\frac12+\varepsilon} \Big( \int_0^t \|D_s\mathcal{K}_g(\Phi)(t,s)\|^2_{-\lambda-\varepsilon}\, ds \Big)^{1/2} \Big( \int_0^t \|\Sigma(s)\|^2_{-\lambda-\varepsilon}\, ds \Big)^{1/2} \\
&< \infty.
\end{align*}
The finiteness of the first integral above is a consequence of Theorem 3.2, and the finiteness of the second integral is ensured by Assumption D Item i. □

Recall from Theorem 2.7 that if $\Phi, \Psi\in\mathcal{G}^*$ are strongly independent, then $\Phi\cdot\Psi = \Phi\diamond\Psi$. Using this fact, we see that under an additional assumption of strong independence of $\mathcal{K}_g(\Phi)$ and $\Sigma$, we have the following result.

Corollary 5.5. Suppose that $\mathcal{K}_g(\Phi)(t,s)$ and $\Sigma(s)$ are strongly independent for all $0\le s\le t\le T$. Under Assumptions A and D, the integral
\[
\int_0^t \Phi(s)\, dX_{\Sigma}(s) := \int_0^t \mathcal{K}_g(\Phi)(t,s) \cdot \Sigma(s)\, \delta B(s) + \int_0^t D_s\big(\mathcal{K}_g(\Phi)(t,s)\big) \cdot \Sigma(s)\, ds \tag{5.4}
\]
is well-defined and equal to the one in Equation (5.3). Moreover, if $\Phi(t)\in\mathcal{G}_{-\lambda}$ for all $t\in[0,T]$, then for any $\varepsilon>0$
\[
\int_0^t \Phi(s)\, dX_{\Sigma}(s) \in \mathcal{G}_{-\lambda-\frac12-\varepsilon}.
\]

Note that we cannot assume that $\Phi(t)$ and $\Sigma(t)$ are strongly independent instead, as the operator $\mathcal{K}_g(\cdot)$ does not preserve the support, that is, $\operatorname{supp}\big(\Phi^{(n)}\big) \neq \operatorname{supp}\big(\mathcal{K}_g(\Phi^{(n)})\big)$ in general. However, in applications one often works with a volatility that is an $(L^2)$ process independent of "everything else" in the model. This case is covered by Corollary 5.5, and it turns out that we can interchange the Wick and the pointwise product, making this a very flexible setup. Observe that this case is in general not applicable in the setup of the previous section, as the space $\mathcal{G}$ is much smaller than the space $(L^2)$. Thus this extension of the class of volatility modulators is important.
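The role of strong independence can be seen already for first-chaos variables: for $X=I_1(f)$, $Y=I_1(g)$ one has $X\diamond Y = XY - \langle f,g\rangle$, so the Wick and pointwise products coincide exactly when the kernels are orthogonal — in particular when they have disjoint supports, the situation of Corollary 5.5. A minimal numerical illustration of ours (not from the paper), using Brownian increments over disjoint intervals:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
# X = I_1(1_[0,1)) = B(1) - B(0) and Y = I_1(1_[1,2)) = B(2) - B(1):
# kernels with disjoint supports, hence strongly independent variables.
x = rng.standard_normal(n)
y = rng.standard_normal(n)

# Wick product of first-chaos variables: X ⋄ Y = XY - <f, g>.
wick_xy = x * y - 0.0                    # disjoint supports: <f, g> = 0
assert np.array_equal(wick_xy, x * y)    # Wick = pointwise under strong independence

# Same variable twice: X ⋄ X = X^2 - <f, f> = X^2 - 1, which differs from X^2.
wick_xx = x * x - 1.0
print((x * x).mean(), wick_xx.mean())    # ≈ 1 and ≈ 0: the Wick product recenters
```

The recentred second case shows why the two modulation approaches of Sections 5.1 and 5.2 genuinely differ outside the strongly independent setting.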

6 Properties of the integral

First of all, recall that the definition of the stochastic derivative is the same as the definition of the Malliavin derivative used in Barndorff-Nielsen et al. (2012); the only difference is the domain of the derivative. Also, the Skorohod integral in our setting is defined through the same formula as the Skorohod integral in Barndorff-Nielsen et al. (2012), but on a larger domain. These two observations allow us to state the following.

Proposition 6.1. The integrals defined by Equations (4.1), (5.1) and (5.4), and the one defined in Barndorff-Nielsen et al. (2012), are equal on the intersection of their domains.

Proof. This follows immediately from the definition of the stochastic derivative and Skorohod integral that we use, and the fact that the Pettis integral is an extension of the Lebesgue integral to Banach-space-valued integrands. □

Proposition 6.2. The integrals defined in Equations (4.1), (5.1), (5.3) and (5.4) are all linear.

Proof. First observe that, directly from their definitions, both the stochastic derivative and the Skorohod integral are linear. This, together with the linearity of the operator $\mathcal{K}_g$ and the fact that $(a\Phi)\diamond\Psi = a(\Phi\diamond\Psi)$ and $(\Phi+\Psi)\diamond\Sigma = \Phi\diamond\Sigma + \Psi\diamond\Sigma$, gives us the linearity of the integral in all the cases considered above. □

Proposition 6.3. If $\Phi$ is integrable with respect to $X_1$ ($X_\sigma$, $X_{\diamond\Sigma}$, $X_\Sigma$, respectively) on the interval $[0,T]$, then for any $S\in[0,T]$ it is also integrable on the interval $[0,S]$. Moreover, the following holds:
\[
\int_0^T \Phi(t)\,\mathbf{1}_{[0,S]}(t)\, dX_*(t) = \int_0^S \Phi(t)\, dX_*(t),
\]
where $* \in \{1, \sigma, \Sigma, \diamond\Sigma\}$.

Theorem 6.4. Suppose that $\varphi\in\mathcal{G}$ and $\Phi(t)$ is $dX_1$- or $dX_\sigma$-integrable on $[0,T]$. Then for $t\in[0,T]$
\[
\int_0^t \varphi\cdot\Phi(s)\, dX_*(s) = \varphi \cdot \int_0^t \Phi(s)\, dX_*(s), \tag{6.1}
\]
where $* \in \{1, \sigma\}$.

Proof. Our arguments follow closely those in the proof of (Barndorff-Nielsen et al., 2012, Proposition 8). First, note that the case $\sigma(s)\neq 1$ differs from the case $\sigma(s)\equiv 1$ only in the norm estimates, as seen in the previous sections; it is enough to establish that Equation (6.1) holds in one of the cases. Observe that
\[
\mathcal{K}_g(\varphi\cdot\Phi)(t,s) = \varphi\cdot\mathcal{K}_g(\Phi)(t,s). \tag{6.2}
\]
Next, by Equation (6.2), Proposition 3.7, and Theorem 3.13, we have
\begin{align*}
\int_0^t \varphi\,\Phi(s)\, dX_\sigma(s)
&= \int_0^t \mathcal{K}_g(\varphi\Phi)(t,s)\,\sigma(s)\, \delta B(s) + \int_0^t D_s\{\mathcal{K}_g(\varphi\Phi)(t,s)\}\,\sigma(s)\, ds \\
&= \int_0^t \varphi\,\mathcal{K}_g(\Phi)(t,s)\,\sigma(s)\, \delta B(s) + \int_0^t D_s\{\varphi\,\mathcal{K}_g(\Phi)(t,s)\}\,\sigma(s)\, ds \\
&= \int_0^t \varphi\,\mathcal{K}_g(\Phi)(t,s)\,\sigma(s)\, \delta B(s) \\
&\quad + \int_0^t \big( \varphi\, D_s\{\mathcal{K}_g(\Phi)(t,s)\} + \mathcal{K}_g(\Phi)(t,s)\, D_s\{\varphi\} \big)\,\sigma(s)\, ds \\
&= \varphi \int_0^t \mathcal{K}_g(\Phi)(t,s)\,\sigma(s)\, \delta B(s) - \int_0^t D_s\{\varphi\}\,\mathcal{K}_g(\Phi)(t,s)\,\sigma(s)\, ds \\
&\quad + \int_0^t \varphi\, D_s\{\mathcal{K}_g(\Phi)(t,s)\}\,\sigma(s)\, ds + \int_0^t \mathcal{K}_g(\Phi)(t,s)\, D_s\{\varphi\}\,\sigma(s)\, ds \\
&= \varphi \int_0^t \mathcal{K}_g(\Phi)(t,s)\,\sigma(s)\, \delta B(s) + \varphi \int_0^t D_s\{\mathcal{K}_g(\Phi)(t,s)\}\,\sigma(s)\, ds \\
&= \varphi \int_0^t \Phi(s)\, dX_\sigma(s).
\end{align*}
So the theorem holds. □

All of the above properties are quite straightforward and generalize the results of Barndorff-Nielsen et al. (2012). In the white noise analysis setting, the $S$-transform (see Definition 2.2) plays a central role, and we discuss it in the context of the integrals we defined in the following subsection.

6.1 The $S$-transform

We can apply some well-known facts about the $S$-transform and use the properties of the operator $\mathcal{K}_g$ to find the $S$-transform of the integral with respect to a VMBV process. Below we present two formulas for the $S$-transform of the integrals: in the case with no volatility modulation, and with modulation introduced through the Wick product. We give explicit formulas depending only on the $S$-transform of the integrand.

Theorem 6.5. If $\Phi(s)$ is integrable with respect to $dX_1(s)$ on the interval $[0,t]$, then
\[
S\Big( \int_0^t \Phi(s)\, dX_1(s) \Big)(\xi) = \int_0^t \mathcal{K}_g\big(S(\Phi)(\xi)\big)(t,s)\, \xi(s)\, ds + \int_0^t \frac{\delta}{\delta\xi(s)} \big\{ \mathcal{K}_g\big(S(\Phi)(\xi)\big)(t,s) \big\}\, ds. \tag{6.3}
\]

Proof. It is easy to see that $S(\mathcal{K}_g(\Phi)) = \mathcal{K}_g\big(S(\Phi)(\xi)\big)$, because the $S$-transform is linear and the Lebesgue measure in Proposition 2.11 can be substituted by any measure. Now, Equation (6.3) is a simple consequence of Propositions 3.8 and 3.14. □

Theorem 6.6. If $\Phi(s)$ is integrable with respect to $X_{\diamond\Sigma}(s)$ on the interval $[0,t]$, then
\[
S\Big( \int_0^t \Phi(s)\, dX_{\diamond\Sigma}(s) \Big)(\xi) = \int_0^t \mathcal{K}_g\big(S(\Phi)(\xi)\big)(t,s) \cdot S(\Sigma(s))(\xi)\, \xi(s)\, ds + \int_0^t \frac{\delta}{\delta\xi(s)} \big\{ \mathcal{K}_g\big(S(\Phi)(\xi)\big)(t,s) \cdot S(\Sigma(s))(\xi) \big\}\, ds. \tag{6.4}
\]

Proof. This follows from a reasoning similar to the one in the proof of Theorem 6.5, with the additional use of the fact that $S(\Phi\diamond\Psi) = S(\Phi)\cdot S(\Psi)$. □

Remark 6.7. Observe that Equation (6.4) holds also in the case of strong independence discussed in Corollary 5.5.
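As a concrete instance of the $S$-transform, recall the standard representation $S(F)(\xi) = \mathbb{E}\big[F\exp\big(\int \xi\, dB - \tfrac12|\xi|^2_{L^2}\big)\big]$; for $F=B(t)$ this gives $S(B(t))(\xi)=\int_0^t \xi(s)\,ds$, which is Proposition 3.14 applied to $\Phi\equiv 1$. The Monte Carlo sketch below (our illustration of this standard fact, not code from the paper) checks it on an Euler grid:

```python
import numpy as np

rng = np.random.default_rng(2)
n_paths, n_steps, t_max = 200_000, 200, 1.0
dt = t_max / n_steps
grid = np.linspace(0.0, t_max, n_steps + 1)[:-1]   # left endpoints
xi = np.cos(grid)                                  # a fixed test function

# Accumulate B(1) and int_0^1 xi dB path by path (Euler sums).
B_t = np.zeros(n_paths)
stoch = np.zeros(n_paths)
for i in range(n_steps):
    db = rng.standard_normal(n_paths) * np.sqrt(dt)
    B_t += db
    stoch += xi[i] * db

# S-transform via the Wick exponential of xi:
weight = np.exp(stoch - 0.5 * np.sum(xi**2) * dt)
est = float(np.mean(B_t * weight))                 # ≈ S(B(1))(xi)
exact = np.sin(1.0)                                # int_0^1 cos(s) ds
print(est, exact)
```

The agreement follows from the Gaussian identity $\mathbb{E}[X e^{Y-\frac12\operatorname{Var}Y}] = \operatorname{Cov}(X,Y)$ for centred jointly Gaussian $X$, $Y$.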

6.2 Chaos expansion

In this section, we give explicit chaos expansions in both cases: no volatility modulation, and volatility introduced through the Wick product. It is possible to find the chaos expansion for the $dX_\sigma$ integral as well; however, the complexity of the formula renders it almost useless.

Theorem 6.8. If $\Phi(s)$ is $dX_1(s)$-integrable on the interval $[0,t]$, then
\[
\int_0^t \Phi(s)\, dX_1(s) = \int_0^t \mathcal{K}_g\big(\tilde{\Phi}^{(1)}\big)(t,s)\, ds + \sum_{n=1}^{\infty} I_n\Big( \mathcal{K}_g\big(\Phi^{(n-1)}\big)(t,\cdot) + (n+1) \int_0^t \mathcal{K}_g\big(\tilde{\Phi}^{(n+1)}\big)(t,s)\, ds \Big),
\]
where $\tilde{\Phi}^{(n+1)}(x_1,\dots,x_n,s) = \Phi^{(n+1)}(x_1,\dots,x_n,s,s)$.

Proof. Suppose that $\Phi(t) = \sum_{n=0}^{\infty} I_n\big(\Phi^{(n)}(t)\big)$. It is not difficult to see that, with an application of the stochastic Fubini theorem, we have
\[
\mathcal{K}_g(\Phi)(t,s) = \sum_{n=0}^{\infty} I_n\big( \mathcal{K}_g(\Phi^{(n)})(t,s) \big).
\]
Hence we have the following:
\[
\int_0^t \mathcal{K}_g(\Phi)(t,s)\, \delta B(s) = \sum_{n=0}^{\infty} I_{n+1}\big( \mathcal{K}_g(\Phi^{(n)})(t,\cdot) \big) = \sum_{n=1}^{\infty} I_n\big( \mathcal{K}_g(\Phi^{(n-1)})(t,\cdot) \big). \tag{6.5}
\]
Also,
\[
D_s \mathcal{K}_g(\Phi)(t,s) = \sum_{n=0}^{\infty} n\, I_{n-1}\big( \mathcal{K}_g(\tilde{\Phi}^{(n)})(t,s) \big) = \mathcal{K}_g\big(\tilde{\Phi}^{(1)}\big)(t,s) + \sum_{n=1}^{\infty} (n+1)\, I_n\big( \mathcal{K}_g(\tilde{\Phi}^{(n+1)})(t,s) \big),
\]
where $\tilde{\Phi}^{(n)}(x_1,\dots,x_{n-1},s) = \Phi^{(n)}(x_1,\dots,x_{n-1},s,s)$, because the stochastic derivative is taken in $s$ and $\Phi$ already depends on $s$. Hence,
\[
\int_0^t D_s\mathcal{K}_g(\Phi)(t,s)\, ds = \int_0^t \mathcal{K}_g\big(\tilde{\Phi}^{(1)}\big)(t,s)\, ds + \sum_{n=1}^{\infty} I_n\Big( (n+1) \int_0^t \mathcal{K}_g\big(\tilde{\Phi}^{(n+1)}\big)(t,s)\, ds \Big), \tag{6.6}
\]
where we have again used the stochastic Fubini theorem. Putting Equations (6.5) and (6.6) together, we obtain the desired result. □

Theorem 6.9. If $\Phi(s)$ is $dX_{\diamond\Sigma}(s)$-integrable on the interval $[0,t]$, then
\[
\begin{aligned}
\int_0^t \Phi(s)\, dX_{\diamond\Sigma}(s) ={}& \int_0^t \mathcal{K}_g\big(\tilde{\Phi}^{(1)}\big)(t,s) \,\widehat{\otimes}\, \Sigma^{(0)}(s)\, ds \\
&+ \sum_{n=0}^{\infty} I_n\Big( \sum_{m=0}^{n-1} \mathcal{K}_g\big(\Phi^{(n-1-m)}\big)(t,\cdot) \,\widehat{\otimes}\, \Sigma^{(m)} \\
&\qquad\quad + (n+1) \sum_{m=0}^{n} \int_0^t \mathcal{K}_g\big(\tilde{\Phi}^{(n+1-m)}\big)(t,s) \,\widehat{\otimes}\, \Sigma^{(m)}(s)\, ds \Big).
\end{aligned}
\]

Proof. We can establish the formula above using the same arguments as in the proof of Theorem 6.8, with the addition of the formula for the Wick product given in Equation (2.3). □

Remark 6.10. The above holds in the case of strong independence discussed in Corollary 5.5.
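As a consistency check on Theorem 6.8 — a special case we work out here, not stated in the paper — take a deterministic integrand, so that $\Phi^{(0)}(s)=\Phi(s)$ and $\Phi^{(n)}=0$ for $n\ge 1$. Then every trace kernel $\tilde{\Phi}^{(n+1)}$ vanishes, only the $n=1$ summand of the series survives, and the expansion collapses to a single Wiener integral:
\[
\int_0^t \Phi(s)\, dX_1(s) = I_1\big( \mathcal{K}_g(\Phi)(t,\cdot) \big) = \int_0^t \mathcal{K}_g(\Phi)(t,s)\, \delta B(s).
\]
This agrees with the defining Equation (4.3), since $D_s\{\mathcal{K}_g(\Phi)(t,s)\} = 0$ when $\Phi$ is deterministic.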

6.3 Stability

In this section, we show that strong convergence of $\Phi_n$ to $\Phi$ implies strong convergence of $\int_0^t \Phi_n(s)\, dX_1(s)$ to $\int_0^t \Phi(s)\, dX_1(s)$.

Theorem 6.11. Suppose that $\Phi_n$, $\Phi$ are $dX_1$-integrable. Suppose also that for some $\lambda>0$ and almost all $t\in[0,T]$ we have $\|\Phi_n(t)-\Phi(t)\|_{-\lambda} \to 0$ and $\|\Phi_n(t)-\Phi(t)\|_{-\lambda} \le h(t)$, where $h\in L^1([0,T])$. Then for any $\varepsilon>0$
\[
\lim_{n\to\infty} \Big\| \int_0^t \Phi_n(s)\, dX_1(s) - \int_0^t \Phi(s)\, dX_1(s) \Big\|_{-\lambda-\varepsilon} = 0.
\]

Proof. By the linearity of the integral and the triangle inequality, we have
\begin{align*}
\Big\| \int_0^t \Phi_n(s)\, dX_1(s) - \int_0^t \Phi(s)\, dX_1(s) \Big\|_{-\lambda-\varepsilon}
&= \Big\| \int_0^t \big(\Phi_n(s)-\Phi(s)\big)\, dX_1(s) \Big\|_{-\lambda-\varepsilon} \\
&\le \Big\| \int_0^t \mathcal{K}_g(\Phi_n-\Phi)(t,s)\, \delta B(s) \Big\|_{-\lambda-\varepsilon} + \Big\| \int_0^t D_s\mathcal{K}_g(\Phi_n-\Phi)(t,s)\, ds \Big\|_{-\lambda-\varepsilon}.
\end{align*}
It is enough to show that both of the norms above converge to zero as $n\to\infty$. First, we estimate the square of the norm of $\mathcal{K}_g(\Phi_n-\Phi)$, as it will be useful later. Below, we use a part of the proof of Proposition 4.1, where we have shown that
\[
\Big\| \int_s^t \big(\Phi(u)-\Phi(s)\big)\, g(du,s) \Big\|^2_{-\lambda} \le V_s^t[g(\cdot,s)] \int_s^t \|\Phi(u)-\Phi(s)\|^2_{-\lambda}\, |g|(du,s).
\]
Now, consider
\begin{align*}
\|\mathcal{K}_g(\Phi_n-\Phi)(t,s)\|^2_{-\lambda}
&= \Big\| g(t,s)\big(\Phi_n(s)-\Phi(s)\big) + \int_s^t \big( (\Phi_n(u)-\Phi(u)) - (\Phi_n(s)-\Phi(s)) \big)\, g(du,s) \Big\|^2_{-\lambda} \\
&\le 2\, |g(t,s)|^2\, \|\Phi_n(s)-\Phi(s)\|^2_{-\lambda} + 2 \Big\| \int_s^t \big( (\Phi_n(u)-\Phi(u)) - (\Phi_n(s)-\Phi(s)) \big)\, g(du,s) \Big\|^2_{-\lambda} \\
&\le 2\, |g(t,s)|^2\, \|\Phi_n(s)-\Phi(s)\|^2_{-\lambda} + 2\, V_s^t[g(\cdot,s)] \int_s^t \big\| (\Phi_n(u)-\Phi(u)) - (\Phi_n(s)-\Phi(s)) \big\|^2_{-\lambda}\, |g|(du,s) \\
&\le 2\, |g(t,s)|^2\, \|\Phi_n(s)-\Phi(s)\|^2_{-\lambda} + 4\, V_s^t[g(\cdot,s)] \int_s^t \Big( \|\Phi_n(u)-\Phi(u)\|^2_{-\lambda} + \|\Phi_n(s)-\Phi(s)\|^2_{-\lambda} \Big)\, |g|(du,s) \\
&\to 0 \quad\text{as } n\to\infty,
\end{align*}
by Lebesgue's dominated convergence theorem, because $\|\Phi_n(t)-\Phi(t)\|_{-\lambda} \to 0$ for almost all $t$.

Now, by Theorem 3.10, there is a constant $C_\varepsilon$ such that
\[
\Big\| \int_0^t \mathcal{K}_g(\Phi_n-\Phi)(t,s)\, \delta B(s) \Big\|^2_{-\lambda-\varepsilon} \le C_\varepsilon \int_0^t \|\mathcal{K}_g(\Phi_n-\Phi)(t,s)\|^2_{-\lambda}\, ds \to 0
\]
as $n\to\infty$. Finally, by Hölder's inequality and Theorem 3.2, we have
\begin{align*}
\Big\| \int_0^t D_s\big(\mathcal{K}_g(\Phi_n-\Phi)(t,s)\big)\, ds \Big\|_{-\lambda-\varepsilon}
&\le \int_0^t \big\| D_s\big(\mathcal{K}_g(\Phi_n-\Phi)(t,s)\big) \big\|_{-\lambda-\varepsilon}\, ds \\
&\le \sqrt{t}\, \Big( \int_0^t \big\| D_s\big(\mathcal{K}_g(\Phi_n-\Phi)(t,s)\big) \big\|^2_{-\lambda-\varepsilon}\, ds \Big)^{1/2} \\
&\le \sqrt{t}\, \Big( \tilde{C}_\varepsilon \int_0^t \|\mathcal{K}_g(\Phi_n-\Phi)(t,s)\|^2_{-\lambda}\, ds \Big)^{1/2} \to 0
\end{align*}
as $n\to\infty$. Thus the result holds. □

The following two theorems restate the above result in the settings with smooth volatility and with volatility introduced through the Wick product. We omit the proofs, as they follow the same argument as the proof of the result above, with additional use of some of the norm estimates from the previous sections.

Theorem 6.12. Suppose that $\Phi_n$, $\Phi$ are $dX_\sigma$-integrable. Suppose also that for some $\lambda > \nu = \tfrac12\ln(2+\sqrt{2})$ and almost all $t\in[0,T]$ we have $\|\Phi_n(t)-\Phi(t)\|_{-\lambda+\nu} \to 0$ and $\|\Phi_n(t)-\Phi(t)\|_{-\lambda+\nu} \le h(t)$, where $h\in L^1([0,T])$. Then for any $\varepsilon>0$
\[
\lim_{n\to\infty} \Big\| \int_0^t \Phi_n(s)\, dX_\sigma(s) - \int_0^t \Phi(s)\, dX_\sigma(s) \Big\|_{-\lambda-\varepsilon} = 0.
\]

Theorem 6.13. Suppose that $\Phi_n$, $\Phi$ are $dX_{\diamond\Sigma}$-integrable. Suppose also that for some $\lambda > \nu = \tfrac12\ln(2+\sqrt{2})$ and almost all $t\in[0,T]$ we have $\|\Phi_n(t)-\Phi(t)\|_{-\lambda+\nu} \to 0$ and $\|\Phi_n(t)-\Phi(t)\|_{-\lambda+\nu} \le h(t)$, where $h\in L^1([0,T])$. Then for any $\varepsilon>0$
\[
\lim_{n\to\infty} \Big\| \int_0^t \Phi_n(s)\, dX_{\diamond\Sigma}(s) - \int_0^t \Phi(s)\, dX_{\diamond\Sigma}(s) \Big\|_{-\lambda-\frac12-\varepsilon} = 0.
\]

7 An example – the Donsker delta function

In this section, we present an example of a generalized process that cannot be integrated in the setting of Barndorff-Nielsen et al. (2012). We study the integrability of the Donsker delta function with respect to a Brownian-driven Volterra process built upon an Ornstein–Uhlenbeck kernel function $g$. The importance of the Donsker delta function is well illustrated in Aase et al. (2001), where the authors use the Donsker delta function to compute hedging strategies.

It is well known that the Donsker delta function, that is $\delta_0(B(t))$, is not an $(L^2)$ stochastic process; however, it is a process in $\mathcal{G}_{-\lambda}$ for any $\lambda>0$ (see Potthoff and Timpel, 1995, Example 2.2), and it has a chaos expansion given by
\[
\delta_0(B(t)) = \frac{1}{\sqrt{2\pi t}} \sum_{n=0}^{\infty} \frac{(-1)^n}{(2t)^n\, n!}\, I_{2n}\big( \mathbf{1}_{[0,t)}^{\otimes 2n} \big).
\]
We also have the following formula for the norm of $\delta_0(B(t))$:
\[
\|\delta_0(B(t))\|^2_{-\lambda} = \frac{1}{2\pi t} \sum_{n=0}^{\infty} \frac{(2n)!}{4^n (n!)^2\, e^{4\lambda n}},
\]
where the sum converges for any $\lambda>0$. Therefore
\[
\|\delta_0(B(t))\|^2_{-\lambda} = \frac{1}{t}\, C_\lambda,
\]
where $C_\lambda$ is a constant depending only on $\lambda$.

Now, we take $g(t,s) = e^{-\alpha(t-s)}$ and will show that for any $\varepsilon>0$, $\Phi(t) = \mathbf{1}_{[\varepsilon,\infty)}(t)\,\delta_0(B(t))$ is $dX_1(t)$-integrable. We need to avoid $0$, as $\delta_0(B(0))$ does not exist. It can be easily verified that Assumption A Items i and ii are satisfied with our choice of $\Phi$ and $g$. In order to show that Assumption A Item iii holds, take $\varepsilon \le s < t \le T$ and bound the integral $\int_s^t \|\Phi(u)-\Phi(s)\|^2_{-\lambda}\, |g|(du,s)$ with the help of the norm formula above; the resulting estimate, Equation (7.1), is finite because $s\ge\varepsilon$ keeps the factor $1/s$ bounded.

To show that Assumption B holds, one can apply arguments similar to the ones used above, together with some properties of the exponential integral function $\operatorname{Ei}(x) = \int_{-\infty}^{x} \frac{e^t}{t}\, dt$. We omit these computations because they are straightforward but rather tedious.

So $\mathbf{1}_{[\varepsilon,\infty)}(t)\,\delta_0(B(t))$ is $dX_1(t)$-integrable. Moreover, since $\mathbf{1}_{[\varepsilon,\infty)}(s)\,\delta_0(B(s)) \in \mathcal{G}_{-\lambda}$ for any $\lambda>0$, by Theorem 4.6 we have that
\[
\int_0^t \mathbf{1}_{[\varepsilon,\infty)}(s)\,\delta_0(B(s))\, dX_1(s) \in \mathcal{G}_{-\lambda} \tag{7.2}
\]
for any $\lambda>0$.

Since the chaos expansion of the integral in Equation (7.2) is rather long and complex, we give the chaos expansion of $\mathcal{K}_g(\delta_0)(t,s)$ to show an intermediate step in the derivation of the complete chaos expansion of the integral. And so
\[
\mathcal{K}_g(\Phi)(t,s) = \frac{1}{\sqrt{4\pi}} \sum_{n=0}^{\infty} I_{2n}\bigg( \frac{(-1)^n}{\alpha\, s^{n+1}} \Big[ (\alpha+1)\, e^{-\alpha(t-s)}\, \mathbf{1}_{[0,s)}^{\otimes 2n}(v_1,\dots,v_{2n}) + e^{\alpha s}\, \alpha^n \big( \Gamma(-n,\alpha s) - \Gamma(-n,\, \alpha \min\{t, v_1,\dots,v_{2n}\}) \big) \Big] \bigg),
\]
where $\Gamma(\nu,x) = \int_x^{\infty} t^{\nu-1} e^{-t}\, dt$ is the incomplete Gamma function.

Note that with a different Volterra kernel, it may be possible to balance the infinite norm of the Donsker delta at zero without resorting to an explicit cut-off function like $\mathbf{1}_{[\varepsilon,\infty)}(t)$. Looking at Equation (7.1), an obvious and rather trivial example is a Volterra kernel of the form $g(t,s) = a(t)b(s)$, where $b(s)$ and $a(s)b(s)$ are functions decaying to $0$ at an at most linear rate as $s\to 0^+$. Also from Equation (7.1), we see that it is impossible to find a shift-kernel such that $\delta_0(B(s))$ is $dX_1(s)$-integrable on $[0,\varepsilon]$ for any $\varepsilon>0$.
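The norm series above actually has a closed form: since $\sum_{n\ge 0} \binom{2n}{n} x^n = (1-4x)^{-1/2}$ for $|x|<\tfrac14$, taking $x = e^{-4\lambda}/4$ gives $\|\delta_0(B(t))\|^2_{-\lambda} = \big(2\pi t\sqrt{1-e^{-4\lambda}}\big)^{-1}$, which makes the $1/t$ blow-up at zero — and hence the need for the cut-off $\mathbf{1}_{[\varepsilon,\infty)}$ — explicit. A quick numerical confirmation of this identity (our own sketch, assuming the norm formula quoted above):

```python
import math

def donsker_norm_sq(t, lam, n_terms=200):
    """Partial sum of ||delta_0(B(t))||_{-lam}^2
    = (1 / (2 pi t)) * sum_n (2n)!/(4^n (n!)^2) e^{-4 lam n}."""
    total, term = 0.0, 1.0            # term = (2n)! / (4^n (n!)^2), starting at n = 0
    for n in range(n_terms):
        total += term * math.exp(-4.0 * lam * n)
        term *= (2 * n + 1) / (2 * n + 2)   # ratio of consecutive coefficients
    return total / (2.0 * math.pi * t)

lam, t = 0.5, 0.25
series = donsker_norm_sq(t, lam)
closed = 1.0 / (2.0 * math.pi * t * math.sqrt(1.0 - math.exp(-4.0 * lam)))
print(series, closed)   # the partial sum matches the closed form
```

The same closed form shows at a glance that the norm is finite for every $\lambda>0$ and every $t>0$, but diverges as $t\to 0^+$.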

8 Conclusions

We have extended the theory of integration with respect to volatility modulated Brownian-driven Volterra processes first discussed in Barndorff-Nielsen et al. (2012) onto the space of generalized Potthoff–Timpel distributions ∗. We have employed the white noise analysis tools to show the properties of the stochasticG derivative and Skorohod integral in the space ∗ as well as numerous properties of the G VMBV integral without volatility modulation and with modulation introduced in two dif- ferent ways, through the pointwise product and through the Wick product. We also show that under strong independence, the two volatility modulation approaches are equivalent. Our approach allows to integrate, for example, the Donsker delta function which is not an element of (L2) and thus not tractable in the setting of Barndorff-Nielsen et al. (2012). Moreover, the theory presented in this paper al- lows for integration with respect to non-semimartingales (e.g. fractional Brownian motion) and for integration of non-adapted stochastic processes. There are still some questions that are left without an answer. For instance, it is natural to ask whether this approach can be generalized to the setting of Hida

33 spaces ( ) and ( )∗. Another such question is that of the change of the driving S S process. In Barndorff-Nielsen et al. (2012), the authors discuss not only Brownian motion as the driver of the Volterra process, but also a pure-jump Lévy process, thus it is interesting to see whether our approach can be applied in that setting. Another possible generalization opportunity comes from the fact that for now, the domain of integration is a finite interval, namely [0, T ]. Recently Basse-O’Connor et al. (2013+) studied integration theory on the real line that may be used to define integrals with respect to processes of the form

    X(t) = ∫_{−∞}^{t} g(t, s) σ(s) dL(s).        (8.1)

Such processes are interesting from both a theoretical and a practical perspective, and it would be useful to extend the theory discussed in the present paper to the setting of Equation (8.1).
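A classical example of a process of the form (8.1) is fractional Brownian motion in its Mandelbrot–Van Ness representation (a standard fact, not a result of this paper; here σ ≡ 1, L = B, and C_H denotes a normalizing constant):

```latex
% Mandelbrot--Van Ness representation of fractional Brownian motion,
% an instance of Equation (8.1) with \sigma \equiv 1 and L = B:
B_H(t) = C_H \int_{-\infty}^{t}
  \Bigl[ (t-s)^{H - 1/2} - (-s)_{+}^{H - 1/2} \Bigr] \, dB(s),
\qquad H \in (0, 1),
% i.e. g(t,s) = C_H \bigl[ (t-s)^{H-1/2} - (-s)_{+}^{H-1/2} \bigr].
```

For H ≠ 1/2 this process is not a semimartingale, so it falls outside classical Itô integration but within the scope of the approach discussed here.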

Acknowledgments

Ole E. Barndorff-Nielsen and Benedykt Szozda acknowledge support from the T.N. Thiele Centre for Applied Mathematics in Natural Science and from CREATES (DNRF78), funded by the Danish National Research Foundation. Fred Espen Benth acknowledges financial support from the two projects "Energy Markets: Modelling, Optimization and Simulation (EMMOS)" and "Managing Weather Risk in Electricity Markets (MAWREM)", funded by the Norwegian Research Council.

References

Knut Aase, Bernt Øksendal, Nicolas Privault, and Jan Ubøe. White noise generalizations of the Clark-Haussmann-Ocone theorem with application to mathematical finance. Finance Stoch., 4(4):465–496, 2000.

Knut Aase, Bernt Øksendal, and Jan Ubøe. Using the Donsker delta function to compute hedging strategies. Potential Anal., 14(4):351–374, 2001.

Elisa Alòs, Olivier Mazet, and David Nualart. Stochastic calculus with respect to Gaussian processes. Ann. Probab., 29(2):766–801, 2001.

Ole E. Barndorff-Nielsen and Jürgen Schmiegel. A stochastic differential equation framework for the timewise dynamics of turbulent velocities. Theory of Probability and Its Applications, 52(3):372–388, 2008.

Ole E. Barndorff-Nielsen and Jürgen Schmiegel. Brownian semistationary processes and volatility/intermittency. In Advanced financial modelling, volume 8 of Radon Ser. Comput. Appl. Math., pages 1–25. Walter de Gruyter, Berlin, 2009.

Ole E. Barndorff-Nielsen, Fred Espen Benth, and Almut E. D. Veraart. Modelling electricity forward markets by ambit fields, 2011.

Ole E. Barndorff-Nielsen, Fred Espen Benth, Jan Pedersen, and Almut E. D. Veraart. On stochastic integration for volatility modulated Lévy-driven Volterra processes, 2012. arXiv:1205.3275 [math.PR].

Ole E. Barndorff-Nielsen, Fred Espen Benth, and Almut E. D. Veraart. Modelling energy spot prices by Lévy semistationary processes. Bernoulli, 2013+. To appear.

Andreas Basse-O’Connor, Svend-Erik Graversen, and Jan Pedersen. Stochastic integration on the real line. Theory of Probability and Its Applications, 2013+. To appear.

Fred Espen Benth. The Gross derivative and generalized random variables. Infin. Dimens. Anal. Quantum Probab. Relat. Top., 2(3):381–396, 1999.

Fred Espen Benth. On weighted L2(Ω)-spaces, their duals and Itô integration. Stochastic Anal. Appl., 19(3):329–341, 2001.

Fred Espen Benth and Jürgen Potthoff. On the martingale property for generalized stochastic processes. Stochastics Stochastics Rep., 58(3-4):349–367, 1996.

Laurent Decreusefond. Stochastic integration with respect to Gaussian processes. C. R. Math. Acad. Sci. Paris, 334(10):903–908, 2002.

Laurent Decreusefond. Stochastic integration with respect to Volterra processes. Ann. Inst. H. Poincaré Probab. Statist., 41(2):123–149, 2005.

Takeyuki Hida, Hui-Hsiung Kuo, Jürgen Potthoff, and Ludwig Streit. White noise: An infinite-dimensional calculus, volume 253 of Mathematics and its Applications. Kluwer Academic Publishers Group, Dordrecht, 1993.

Helge Holden, Bernt Øksendal, Jan Ubøe, and Tusheng Zhang. Stochastic partial differential equations. Universitext. Springer, New York, second edition, 2010. A modeling, white noise functional approach.

Hui-Hsiung Kuo. White noise distribution theory. Probability and Stochastics Series. CRC Press, Boca Raton, FL, 1996.

Billy James Pettis. On integration in vector spaces. Trans. Amer. Math. Soc., 44:277–304, 1938.

Jürgen Potthoff and Matthias Timpel. On a dual pair of spaces of smooth and generalized random variables. Potential Anal., 4(6):637–654, 1995.

Almut E. D. Veraart and Luitgard A. M. Veraart. Risk premia in energy markets. Journal of Energy Markets, 2013+. To appear.
