arXiv:1410.2230v5 [math.PR] 22 Mar 2016

STOCHASTIC ANALYSIS OF GAUSSIAN PROCESSES VIA FREDHOLM REPRESENTATION

TOMMI SOTTINEN AND LAURI VIITASAARI

Abstract. We show that every separable Gaussian process with integrable variance function admits a Fredholm representation with respect to a Brownian motion. We extend the Fredholm representation to a transfer principle and develop stochastic analysis by using it. We show the convenience of the Fredholm representation by giving applications to equivalence in law, bridges, series expansions, stochastic differential equations and maximum likelihood estimations.

Date: March 23, 2016.
2010 Mathematics Subject Classification. Primary 60G15; Secondary 60H05, 60H07, 60H30.
Key words and phrases. Equivalence in law; Gaussian processes; Itô formula; Malliavin calculus; representation of Gaussian processes; series expansions; stochastic analysis.
Lauri Viitasaari was partially funded by the Emil Aaltonen Foundation. Tommi Sottinen was partially funded by the Finnish Cultural Foundation (National Foundations' Professor Pool).

1. Introduction

The stochastic analysis of Gaussian processes that are not semimartingales is challenging. One way to overcome the challenge is to represent the Gaussian process under consideration, X say, in terms of a Brownian motion, and then develop a transfer principle so that the stochastic analysis can be done on the "Brownian level" and then transferred back to the level of X.

One of the most studied representations in terms of a Brownian motion is the so-called Volterra representation. A Gaussian Volterra process is a process that can be represented as

(1.1)    X_t = \int_0^t K(t,s) dW_s,    t ∈ [0, T],

where W is a Brownian motion and K ∈ L^2([0, T]^2). Here the integration goes only up to t, hence the name "Volterra". This Volterra nature is very convenient: it means that the filtration of X is included in the filtration of the underlying Brownian motion W. Gaussian Volterra processes and their stochastic analysis have been studied, e.g., in [2] and [11], just to mention a few. Apparently, the most famous Gaussian process admitting a Volterra representation is the fractional Brownian motion, and its stochastic analysis has indeed been developed mostly by using its Volterra representation, see e.g. the monographs [5] and [20] and references therein.

In discrete finite time the Volterra representation (1.1) is nothing but the Cholesky lower-triangular factorization of the covariance of X, and hence every Gaussian process is a Volterra process. In continuous time this is not true, see Example 3.1 in Section 3.

There is a more general representation than (1.1) by Hida, see [13, Theorem 4.1]. However, this Hida representation includes a possibly infinite number of Brownian motions. Consequently, it seems very difficult to apply the Hida representation to build a transfer principle needed by stochastic analysis. Moreover, the Hida representation is not quite general. Indeed, it requires, among other things, that the Gaussian process is purely non-deterministic. The Fredholm representation (1.2) below does not require pure non-determinism. Our Example 3.1 in Section 3, which admits a Fredholm representation, does not admit a Hida representation, and the reason is the lack of pure non-determinism.

The problem with the Volterra representation (1.1) is the Volterra nature of the kernel K, as far as generality is concerned. Indeed, if one considers Fredholm kernels, i.e., kernels where the integration is over the entire interval [0, T] under consideration, one obtains generality. A Gaussian Fredholm process is a process that admits the Fredholm representation

(1.2)    X_t = \int_0^T K_T(t,s) dW_s,    t ∈ [0, T],

where W is a Brownian motion and K_T ∈ L^2([0, T]^2). In this paper we show that every separable Gaussian process with integrable variance function admits the representation (1.2). The price we have to pay for this generality is twofold:

(i) The process X is generated, in principle, from the entire path of the underlying Brownian motion W. Consequently, X and W do not necessarily generate the same filtration. This is unfortunate in many applications.
(ii) In general the kernel K_T depends on T even if the covariance R does not, and consequently the derived operators also depend on T.
This is why we use the cumbersome notation of explicitly stating out the dependence when there is one. In stochastic analysis this dependence on T seems to be a minor inconvenience, however. Indeed, even in the Volterra case as examined, e.g., by Alòs, Mazet and Nualart [2], one cannot avoid the dependence on T in the transfer principle. Of course, for statistics, where one would like to let T tend to infinity, this is a major inconvenience.

Let us note that the Fredholm representation has already been used, without proof, in [3], where the Hölder continuity of Gaussian processes was studied.

Let us mention a few papers that study stochastic analysis of Gaussian processes here. Indeed, several different approaches have been proposed in the literature. In particular, fractional Brownian motion has been a subject of active study (see the monographs [5] and [20] and references therein). More general Gaussian processes have been studied in the already mentioned work by Alòs, Mazet and Nualart [2]. They considered Gaussian Volterra processes where the kernel satisfies certain technical conditions. In particular, their results cover fractional Brownian motion with Hurst parameter H > 1/4. Later Cheridito and Nualart [7] introduced an approach based on the covariance function itself rather than on the Volterra kernel K. Kruk et al. [17] developed stochastic calculus for processes having finite 2-planar variation, especially covering fractional Brownian motion with H ≥ 1/2. Moreover, Kruk and Russo [16] extended the approach to cover singular covariances, hence covering fractional Brownian motion with H < 1/2. Furthermore, Mocioalca and Viens [21] studied processes which are close to processes with stationary increments. More precisely, their results cover cases where E(X_t − X_s)^2 ∼ γ^2(|t − s|), where γ satisfies some minimal regularity conditions. In particular, their results cover some processes which are not even continuous.
Finally, the latest development we are aware of is a paper by Lei and Nualart [19], who developed stochastic calculus for processes having absolutely continuous covariance by using the extended domain of the divergence introduced in [16]. Finally, we would like to mention Lebovits [18], who used the S-transform approach and obtained similar results to ours, albeit his notion of integral is not as elementary as ours.

The results presented in this paper give a unified approach to stochastic calculus for Gaussian processes; only integrability of the variance function is required. In particular, our results cover processes that are not continuous.

The paper is organized as follows:

Section 2 contains some preliminaries on Gaussian processes and isonormal Gaussian processes and related Hilbert spaces.

Section 3 provides the proof of the main theorem of the paper: the Fredholm representation.

In Section 4 we extend the Fredholm representation to a transfer principle in three contexts of growing generality: First we prove the transfer principle for Wiener integrals in Subsection 4.1, then we use the transfer principle to define the multiple Wiener integral in Subsection 4.2, and finally, in Subsection 4.3 we prove the transfer principle for Malliavin calculus, thus showing that the definition of the multiple Wiener integral via the transfer principle in Subsection 4.2 is consistent with the classical definitions involving Brownian motion or other Gaussian martingales. Indeed, classically one defines the multiple Wiener integrals either by building an isometry with removed diagonals or by spanning higher chaoses by using the Hermite polynomials. In the general Gaussian case one cannot of course remove the diagonals, but the Hermite polynomial approach is still valid. We show that this approach is equivalent to the transfer principle.
In Subsection 4.3 we also prove an Itô formula for general Gaussian processes, and in Subsection 4.4 we extend the Itô formula even further by using the technique of extended domain in the spirit of [7]. This Itô formula is, as far as we know, the most general version for Gaussian processes in the literature so far.

Finally, in Section 5 we show the power of the transfer principle in some applications. In Subsection 5.1 the transfer principle is applied to the question of equivalence of law of general Gaussian processes. In Subsection 5.2 we show how one can construct canonical-type representations for generalized Gaussian bridges, i.e., for Gaussian processes that are conditioned by multiple linear functionals of their paths. In Subsection 5.3 the transfer principle is used to provide series expansions for general Gaussian processes.

2. Preliminaries

Our general setting is as follows: Let T > 0 be a fixed finite time-horizon and let X = (X_t)_{t∈[0,T]} be a Gaussian process with covariance R that may or may not depend on T. Without loss of any interesting generality we assume that X is centered. We also make the very weak assumption that X is separable in the sense of the following definition.

Definition 2.1 (Separability). The Gaussian process X is separable if the space L^2(Ω, σ(X), P) is separable.

Example 2.1. If the covariance R is continuous, then X is separable. In particular, all continuous Gaussian processes are separable.

Definition 2.2 (Associated operator). To a kernel Γ ∈ L^2([0, T]^2) we associate an operator on L^2([0, T]), also denoted by Γ, by

    Γf(t) = \int_0^T f(s) Γ(t,s) ds.

Definition 2.3 (Isonormal process). The isonormal process associated with X, also denoted by X, is the Gaussian family (X(h), h ∈ H_T), where the Hilbert space H_T = H_T(R) is generated by the covariance R as follows:

(i) The indicators 1_t := 1_{[0,t)}, t ≤ T, belong to H_T.
(ii) H_T is endowed with the inner product ⟨1_t, 1_s⟩_{H_T} := R(t,s).

Definition 2.3 states that X(h) is the image of h ∈ H_T under the isometry that extends the relation X(1_t) := X_t linearly. Consequently, we can define:

Definition 2.4 (Wiener integral). X(h) is the Wiener integral of the element h ∈ H_T with respect to X. We shall also denote

    \int_0^T h(t) dX_t := X(h).

Remark 2.1. Eventually, all the following will mean the same:

    X(h) = \int_0^T h(t) dX_t = \int_0^T h(t) δX_t = I_{T,1}(h) = I_T(h).

Remark 2.2. The Hilbert space H_T is separable if and only if X is separable.

Remark 2.3. Due to the completion under the inner product ⟨·, ·⟩_{H_T} it may happen that the space H_T is not a space of functions, but contains distributions, cf. [25] for the case of fractional Brownian motions with Hurst index bigger than half.

Definition 2.5. The function space H_T^0 ⊂ H_T is the space of functions that can be approximated by step-functions on [0, T] in the inner product ⟨·, ·⟩_{H_T}.

Example 2.2. If the covariance R is of bounded variation, then H_T^0 is the space of functions f satisfying

    \int_0^T \int_0^T |f(t) f(s)| |R|(ds, dt) < ∞.

Remark 2.4. Note that it may be that f ∈ H_T^0 but for some T' < T we have f 1_{T'} ∉ H_{T'}^0, cf. [4] for an example with fractional Brownian motion with Hurst index less than half. For this reason we keep the notation H_T instead of simply writing H. For the same reason we include the dependence on T whenever there is one.

3. Fredholm Representation

Theorem 3.1 (Fredholm representation). Let X = (X_t)_{t∈[0,T]} be a separable centered Gaussian process. Then there exists a kernel K_T ∈ L^2([0, T]^2) and a Brownian motion W = (W_t)_{t≥0}, independent of T, such that

(3.1)    X_t = \int_0^T K_T(t,s) dW_s

if and only if the covariance R of X satisfies the trace condition

(3.2)    \int_0^T R(t,t) dt < ∞.

The representation (3.1) is unique in the sense that any other representation with kernel K̃_T, say, is connected to (3.1) by a unitary transformation U on L^2([0, T]) such that K̃_T = U K_T. Moreover, one may assume that K_T is symmetric.

Proof. Let us first remark that (3.2) is precisely what we need to invoke Mercer's theorem and take the square root in the resulting expansion.

Now, by Mercer's theorem we can expand the covariance function R on [0, T]^2 as

(3.3)    R(t,s) = \sum_{i=1}^∞ λ_i^T e_i^T(t) e_i^T(s),

where (λ_i^T)_{i=1}^∞ and (e_i^T)_{i=1}^∞ are the eigenvalues and the eigenfunctions of the covariance operator

    R_T f(t) = \int_0^T f(s) R(t,s) ds.

Moreover, (e_i^T)_{i=1}^∞ is an orthonormal system on L^2([0, T]). Now R_T, being a covariance operator, admits a square-root operator K_T defined by the relation

(3.4)    \int_0^T e_i^T(s) R_T e_j^T(s) ds = \int_0^T K_T e_i^T(s) K_T e_j^T(s) ds

for all e_i^T and e_j^T. Now, condition (3.2) means that R_T is trace class and, consequently, K_T is Hilbert–Schmidt. In particular, K_T is a compact operator. Therefore, it admits a kernel. Indeed, a kernel K_T can be defined by using the Mercer expansion (3.3) as

(3.5)    K_T(t,s) = \sum_{i=1}^∞ \sqrt{λ_i^T} e_i^T(t) e_i^T(s).

This kernel is obviously symmetric. Now it follows that

    R(t,s) = \int_0^T K_T(t,u) K_T(s,u) du,

and the representation (3.1) follows from this. Finally, let us note that the uniqueness up to a unitary transformation is obvious from the square-root relation (3.4). □
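At the discrete level the construction in the proof above can be sketched numerically: on a finite grid the covariance operator becomes a covariance matrix, Mercer's expansion becomes the eigendecomposition, and the symmetric square root plays the role of the kernel in (3.5). The grid, the choice R(t,s) = min(t,s) (Brownian motion) and the numpy-based check below are illustrative assumptions, not part of the paper.

```python
import numpy as np

# Discrete stand-in for the proof of Theorem 3.1.
t = np.linspace(0.1, 1.0, 10)
R = np.minimum.outer(t, t)          # R(t,s) = min(t,s): Brownian motion, as an example

lam, E = np.linalg.eigh(R)          # eigenvalues lam_i, orthonormal eigenvectors e_i
assert np.all(lam > 0)              # positive definite on distinct positive times

# K = sum_i sqrt(lam_i) e_i e_i^T, the discrete analogue of the kernel (3.5)
K = (E * np.sqrt(lam)) @ E.T

assert np.allclose(K, K.T)          # the kernel may be taken symmetric
assert np.allclose(K @ K.T, R)      # discrete analogue of R(t,s) = int_0^T K(t,u) K(s,u) du

# For comparison: the Cholesky factor is lower triangular -- the discrete
# Volterra kernel mentioned in the Introduction -- and factorizes R as well.
L = np.linalg.cholesky(R)
assert np.allclose(L @ L.T, R) and np.allclose(L, np.tril(L))
```

Both factorizations reproduce R; they are related by an orthogonal transformation, matching the uniqueness statement of Theorem 3.1.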

Remark 3.1. The Fredholm representation (3.1) holds also on infinite intervals, i.e., for T = ∞, if the trace condition (3.2) holds. Unfortunately, this is seldom the case.

Remark 3.2. The above proof shows that the Fredholm representation (3.1) holds in law. However, one can also construct the process X via (3.1) for a given Brownian motion W. In this case, the representation (3.1) holds of course in L^2. Finally, note that in general it is not possible to construct the Brownian motion in the representation (3.1) from the process X. Indeed, there might not be enough randomness in X. To construct W from X one needs that the indicators 1_t, t ∈ [0, T], belong to the range of the operator K_T.

Remark 3.3. We remark that the separability of X ensures a representation of the form (3.1) where the kernel K_T only satisfies the weaker condition K_T(t, ·) ∈ L^2([0, T]) for all t ∈ [0, T], which may happen if the trace condition (3.2) fails. In this case, however, the kernel K_T does not belong to L^2([0, T]^2), which may be undesirable.

Example 3.1. Let us consider the following very degenerate case: Suppose X_t = f(t)ξ, where f is deterministic and ξ is a standard normal random variable. Suppose T > 1. Then

(3.6)    X_t = \int_0^T f(t) 1_{[0,1)}(s) dW_s.

So, K_T(t,s) = f(t) 1_{[0,1)}(s). Now, if f ∈ L^2([0, T]), then condition (3.2) is satisfied and K_T ∈ L^2([0, T]^2). On the other hand, even if f ∉ L^2([0, T]) we can still write X in the form (3.6). However, in this case the kernel K_T does not belong to L^2([0, T]^2).

Example 3.2. Consider a truncated series expansion

    X_t = \sum_{k=1}^n e_k^T(t) ξ_k,

where the ξ_k are independent standard normal random variables and

    e_k^T(t) = \int_0^t ẽ_k^T(s) ds,

where ẽ_k^T, k ∈ N, is an orthonormal basis in L^2([0, T]). Now it is straightforward to check that this process is not purely non-deterministic (see [8] for the definition) and consequently X cannot have a Volterra representation, while it is clear that X admits a Fredholm representation. On the other hand, by choosing the functions ẽ_k^T to be the trigonometric basis on L^2([0, T]), X is a finite-rank approximation of the Karhunen–Loève representation of standard Brownian motion on [0, T]. Hence by letting n tend to infinity we obtain the standard Brownian motion, and hence a Volterra process.

Example 3.3. Let W be a standard Brownian motion on [0, T] and consider the Brownian bridge. Now, there are two representations of the Brownian bridge (see [28] and references therein on the representations of Gaussian bridges). The orthogonal representation is

    B_t = W_t − (t/T) W_T.

Consequently, B has a Fredholm representation with kernel K_T(t,s) = 1_t(s) − t/T. The canonical representation of the Brownian bridge is

    B_t = (T − t) \int_0^t \frac{1}{T − s} dW_s.

Consequently, the Brownian bridge also has a Volterra-type representation with kernel K(t,s) = (T − t)/(T − s).
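Both kernels of Example 3.3 can be checked numerically to produce the same covariance min(t,s) − ts/T of the Brownian bridge; the midpoint-rule grid and the tolerances below are ad hoc illustrative choices.

```python
import numpy as np

T = 1.0
N = 20000
u = (np.arange(N) + 0.5) * T / N     # midpoint grid on [0, T]
du = T / N

def bridge_cov(t, s):
    # covariance of the Brownian bridge on [0, T]
    return min(t, s) - t * s / T

for t in (0.25, 0.5, 0.8):
    for s in (0.3, 0.7):
        # orthogonal (Fredholm) kernel K_T(t, s) = 1_t(s) - t/T over the whole of [0, T]
        fredholm = np.sum(((u < t) - t / T) * ((u < s) - s / T)) * du

        # canonical (Volterra) kernel K(t, s) = (T - t)/(T - s), integrated up to min(t, s)
        w = u[u < min(t, s)]
        volterra = np.sum((T - t) * (T - s) / (T - w) ** 2) * du

        assert abs(fredholm - bridge_cov(t, s)) < 1e-3
        assert abs(volterra - bridge_cov(t, s)) < 1e-3
```

Note that the Fredholm kernel integrates over the entire interval while the Volterra kernel only sees the past, yet both reproduce the same covariance, as the uniqueness part of Theorem 3.1 predicts.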

4. Transfer Principle and Stochastic Analysis

4.1. Wiener Integrals. The following Theorem 4.1 is the transfer principle in the context of Wiener integrals. The same principle extends to multiple Wiener integrals and Malliavin calculus in the following subsections.

Recall that for any kernel Γ ∈ L^2([0, T]^2) its associated operator on L^2([0, T]) is

    Γf(t) = \int_0^T f(s) Γ(t,s) ds.

Definition 4.1 (Adjoint associated operator). The adjoint associated operator Γ^* of a kernel Γ ∈ L^2([0, T]^2) is defined by linearly extending the relation

    Γ^* 1_t = Γ(t, ·).

Remark 4.1. The name and notation of "adjoint" for K_T^* comes from Alòs, Mazet and Nualart [2], where they showed that in their Volterra context K_T^* admits a kernel and is an adjoint of K_T in the sense that

    \int_0^T K_T^* f(t) g(t) dt = \int_0^T f(t) K_T g(dt)

for step-functions f and g belonging to L^2([0, T]). It is straightforward to check that this statement is valid also in our case.

Example 4.1. Suppose the kernel Γ(·, s) is of bounded variation for all s and that f is nice enough. Then

    Γ^* f(s) = \int_0^T f(t) Γ(dt, s).

Theorem 4.1 (Transfer principle for Wiener integrals). Let X be a separable centered Gaussian process with representation (3.1) and let f ∈ H_T. Then

    \int_0^T f(t) dX_t = \int_0^T K_T^* f(t) dW_t.

Proof. Assume first that f is an elementary function of the form

    f(t) = \sum_{k=1}^n a_k 1_{A_k}

for some disjoint intervals A_k = (t_{k−1}, t_k]. Then the claim follows by the very definition of the operator K_T^* and of the Wiener integral with respect to X, together with the representation (3.1). Furthermore, this shows that K_T^* provides an isometry between H_T and L^2([0, T]). Hence H_T can be viewed as the closure of the elementary functions with respect to the norm ||f||_{H_T} = ||K_T^* f||_{L^2([0,T])}, which proves the claim. □
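In discrete time the transfer principle of Theorem 4.1 is an exact matrix identity: if X_{t_i} = Σ_j K[i,j] ΔW_j, then Σ_i f(t_i) ΔX_{t_i} = Σ_j (K^* f)(s_j) ΔW_j, where the adjoint is the one of Example 4.1, (K^* f)(s_j) = Σ_i f(t_i) ΔK[i,j]. The random kernel and the integrand below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
K = rng.standard_normal((n, n)) / np.sqrt(n)   # some discrete Fredholm kernel (illustrative)
dW = rng.standard_normal(n)                    # Brownian increments on the grid
X = K @ dW                                     # X_{t_i} = sum_j K[i, j] dW_j
f = np.sin(np.linspace(0.0, 3.0, n))           # integrand sampled on the grid

# left side: elementary Wiener integral of f with respect to X
dX = np.diff(X, prepend=0.0)
lhs = f @ dX

# right side: integrate the adjoint K* f against the Brownian increments,
# with (K* f)(s_j) = sum_i f(t_i) dK(t_i, s_j) as in Example 4.1
dK = np.diff(K, axis=0, prepend=np.zeros((1, n)))
rhs = (f @ dK) @ dW

assert np.isclose(lhs, rhs)
```

The identity holds pathwise here (not just in distribution), because in discrete time the Wiener integral is a plain finite sum.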

4.2. Multiple Wiener Integrals. The study of multiple Wiener integrals goes back to Itô [15], who studied the case of Brownian motion. Later Huang and Cambanis [14] extended the notion to general Gaussian processes. Dasgupta and Kallianpur [10, 9] and Perez-Abreu and Tudor [24] studied multiple Wiener integrals in the context of fractional Brownian motion. In [10, 9] a method that involved a prior control measure was used, and in [24] a transfer principle was used. Our approach here extends the transfer principle method used in [24].

We begin by recalling multiple Wiener integrals with respect to Brownian motion and then we apply the transfer principle to generalize the theory to arbitrary Gaussian processes.

Let f be an elementary function on [0, T]^p that vanishes on the diagonals, i.e.,

    f = \sum_{i_1,…,i_p=1}^n a_{i_1 … i_p} 1_{Δ_{i_1} × ⋯ × Δ_{i_p}},

where Δ_k := [t_{k−1}, t_k) and a_{i_1 … i_p} = 0 whenever i_k = i_ℓ for some k ≠ ℓ. For such f we define the multiple Wiener integral as

    I_{T,p}^W(f) := \int_0^T ⋯ \int_0^T f(t_1,…,t_p) δW_{t_1} ⋯ δW_{t_p}
                 := \sum_{i_1,…,i_p=1}^n a_{i_1 … i_p} ΔW_{t_{i_1}} ⋯ ΔW_{t_{i_p}},

where we have denoted ΔW_{t_k} := W_{t_k} − W_{t_{k−1}}. For p = 0 we set I_0^W(f) = f. Now, it can be shown that the elementary functions that vanish on the diagonals are dense in L^2([0, T]^p). Thus, one can extend the operator I_{T,p}^W to the space L^2([0, T]^p). This extension is called the multiple Wiener integral with respect to the Brownian motion.

Remark 4.2. It is well known that I_{T,p}^W(f) can be understood as a multiple of an iterated Itô integral if and only if f(t_1,…,t_p) = 0 unless t_1 ≤ ⋯ ≤ t_p. In this case we have

    I_{T,p}^W(f) = p! \int_0^T \int_0^{t_p} ⋯ \int_0^{t_2} f(t_1,…,t_p) dW_{t_1} ⋯ dW_{t_p}.

For the case of Gaussian processes that are not martingales this fact is totally useless.

For a general Gaussian process X, recall first the Hermite polynomials:

    H_p(x) := \frac{(−1)^p}{p!} e^{x^2/2} \frac{d^p}{dx^p} e^{−x^2/2}.

For any p ≥ 1 let the pth Wiener chaos of X be the closed linear subspace of L^2(Ω) generated by the random variables {H_p(X(φ)), φ ∈ H_T, ||φ||_{H_T} = 1}, where H_p is the pth Hermite polynomial. It is well known that the mapping

    I_p^X(φ^{⊗p}) = p! H_p(X(φ))

provides a linear isometry between the symmetric tensor power H_T^{⊙p} and the pth Wiener chaos. The random variables I_p^X(φ^{⊗p}) are called multiple Wiener integrals of order p with respect to the Gaussian process X.
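The H_p above are the probabilists' Hermite polynomials He_p scaled by 1/p!, i.e. H_p = He_p/p!. The sketch below (an illustration, using numpy's Gauss–Hermite quadrature) generates He_n by the three-term recursion and checks the orthogonality E[He_m(Z) He_n(Z)] = n! δ_{mn} under the standard Gaussian measure, which is what makes the Wiener chaoses orthogonal.

```python
import math
import numpy as np

def hermite_he(n, x):
    # probabilists' Hermite polynomials via He_{n+1}(x) = x He_n(x) - n He_{n-1}(x)
    h_prev, h = np.ones_like(x), np.asarray(x, dtype=float)
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, x * h - k * h_prev
    return h

# Gauss-Hermite nodes integrate against exp(-x^2); substituting x = sqrt(2) t
# turns this into integration against the standard normal density.
t, w = np.polynomial.hermite.hermgauss(60)
x = np.sqrt(2.0) * t
w = w / np.sqrt(np.pi)

for m in range(5):
    for n in range(5):
        inner = np.sum(w * hermite_he(m, x) * hermite_he(n, x))
        expected = math.factorial(n) if m == n else 0.0
        assert abs(inner - expected) < 1e-8
```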

Let us now consider the multiple Wiener integrals I_{T,p} for a general Gaussian process X. We define the multiple integral I_{T,p} by using the transfer principle in Definition 4.3 below and later argue that this is the "correct" way of defining them. So, let X be a centered Gaussian process on [0, T] with covariance R and representation (3.1) with kernel K_T.

Definition 4.2 (p-fold adjoint associated operator). Let K_T be the kernel in (3.1) and let K_T^* be its adjoint associated operator. Define

(4.1)    K_{T,p}^* := (K_T^*)^{⊗p}.

In the same way, define

    H_{T,p} := H_T^{⊗p}    and    H_{T,p}^0 := (H_T^0)^{⊗p}.

Here the tensor products are understood in the sense of Hilbert spaces, i.e., they are closed under the inner product corresponding to the p-fold product of the underlying inner product.

Definition 4.3. Let X be a centered Gaussian process with representation (3.1) and let f ∈ H_{T,p}. Then

    I_{T,p}(f) := I_{T,p}^W(K_{T,p}^* f).

The following example should convince the reader that this is indeed the correct definition.

Example 4.2. Let p = 2 and let h = h_1 ⊗ h_2, where both h_1 and h_2 are step-functions. Then

    (K_{T,2}^* h)(x,y) = (K_T^* h_1)(x) (K_T^* h_2)(y)

and

    I_{T,2}^W(K_{T,2}^* h)
      = \int_0^T K_T^* h_1(v) dW_v · \int_0^T K_T^* h_2(u) dW_u − ⟨K_T^* h_1, K_T^* h_2⟩_{L^2([0,T])}
      = X(h_1) X(h_2) − ⟨h_1, h_2⟩_{H_T},

as it should be, by analogy with the Gaussian martingale case.

The following proposition shows that our approach to defining multiple Wiener integrals is consistent with the traditional approach, where multiple Wiener integrals for a general Gaussian process X are defined via the closed linear space generated by Hermite polynomials.

Proposition 4.1. Let H_p be the pth Hermite polynomial and let h ∈ H_T. Then

    I_{T,p}(h^{⊗p}) = p! ||h||_{H_T}^p H_p\Big(\frac{X(h)}{||h||_{H_T}}\Big).

Proof. First note that without loss of generality we can assume ||h||_{H_T} = 1. Now by the definition of the multiple Wiener integral with respect to X we have

    I_{T,p}(h^{⊗p}) = I_{T,p}^W(K_{T,p}^* h^{⊗p}),

where

    K_{T,p}^* h^{⊗p} = (K_T^* h)^{⊗p}.

Consequently, by [23, Proposition 1.1.4] we obtain

    I_{T,p}(h^{⊗p}) = p! H_p(W(K_T^* h)),

which implies the result together with Theorem 4.1. □
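With the normalized Hermite polynomials used here (H_p = He_p/p!), Proposition 4.1 for p = 2 reduces exactly to Example 4.2 with h_1 = h_2 = h: writing c = ||h||_{H_T}, the identity p! c^p H_p(x/c) becomes x^2 − c^2, i.e. X(h)^2 − ⟨h, h⟩_{H_T}. This polynomial identity (and its p = 3 analogue) can be verified numerically; the grid of test values below is arbitrary.

```python
import numpy as np

# normalized Hermite polynomials H_p = He_p / p!
H2 = lambda z: (z**2 - 1.0) / 2.0
H3 = lambda z: (z**3 - 3.0 * z) / 6.0

x = np.linspace(-3.0, 3.0, 31)      # possible values of X(h)
for c in (0.5, 1.0, 2.0):           # possible values of ||h||
    z = x / c
    # p = 2: 2! c^2 H_2(x/c) = x^2 - c^2, matching Example 4.2
    assert np.allclose(2.0 * c**2 * H2(z), x**2 - c**2)
    # p = 3: 3! c^3 H_3(x/c) = x^3 - 3 c^2 x
    assert np.allclose(6.0 * c**3 * H3(z), x**3 - 3.0 * c**2 * x)
```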

Proposition 4.1 extends to the following product formula, which is also well known in the Gaussian martingale case but apparently new for general Gaussian processes. Again, the proof is a straightforward application of the transfer principle.

Proposition 4.2. Let f ∈ H_{T,p} and g ∈ H_{T,q}. Then

(4.2)    I_{T,p}(f) I_{T,q}(g) = \sum_{r=0}^{p∧q} r! \binom{p}{r} \binom{q}{r} I_{T,p+q−2r}(f ⊗̃_{K_T,r} g),

where

(4.3)    f ⊗̃_{K_T,r} g = (K_{T,p+q−2r}^*)^{−1}(K_{T,p}^* f ⊗̃_r K_{T,q}^* g)

and (K_{T,p+q−2r}^*)^{−1} denotes the pre-image of K_{T,p+q−2r}^*.

Proof. The proof follows directly from the definition of I_{T,p}(f) and [23, Proposition 1.1.3]. □

Example 4.3. Let f ∈ H_{T,p} and g ∈ H_{T,q} be of the forms f(x_1,…,x_p) = \prod_{k=1}^p f_k(x_k) and g(y_1,…,y_q) = \prod_{k=1}^q g_k(y_k). Then

    (K_{T,p}^* f ⊗̃_r K_{T,q}^* g)(x_1,…,x_{p−r}, y_1,…,y_{q−r})
      = \int_{[0,T]^r} \prod_{k=1}^{p−r} K_T^* f_k(x_k) \prod_{j=1}^{r} K_T^* f_{p−r+j}(s_j)
        × \prod_{k=1}^{q−r} K_T^* g_k(y_k) \prod_{j=1}^{r} K_T^* g_{q−r+j}(s_j) ds_1 ⋯ ds_r
      = \prod_{k=1}^{p−r} K_T^* f_k(x_k) \prod_{k=1}^{q−r} K_T^* g_k(y_k) \Big⟨\prod_{j=1}^r f_{p−r+j}, \prod_{j=1}^r g_{q−r+j}\Big⟩_{H_{T,r}}.

Hence

    f ⊗̃_{K_T,r} g = \prod_{k=1}^{p−r} f_k(x_k) \prod_{k=1}^{q−r} g_k(y_k) \Big⟨\prod_{j=1}^r f_{p−r+j}, \prod_{j=1}^r g_{q−r+j}\Big⟩_{H_{T,r}},

as it should be.

Remark 4.3. In the literature multiple Wiener integrals are usually defined as the closed linear space spanned by Hermite polynomials. In such a case Proposition 4.1 is clearly true by the very definition. Furthermore, one has the multiplication formula (see e.g. [22])

    I_{T,p}(f) I_{T,q}(g) = \sum_{r=0}^{p∧q} r! \binom{p}{r} \binom{q}{r} I_{T,p+q−2r}(f ⊗̃_{H_T,r} g),

where f ⊗̃_{H_T,r} g denotes the symmetrization of the contraction

    f ⊗_{H_T,r} g = \sum_{i_1,…,i_r=1}^∞ ⟨f, e_{i_1} ⊗ ⋯ ⊗ e_{i_r}⟩_{H_T^{⊗r}} ⊗ ⟨g, e_{i_1} ⊗ ⋯ ⊗ e_{i_r}⟩_{H_T^{⊗r}},

and {e_k, k = 1, 2, …} is a complete orthonormal basis of the Hilbert space H_T. Clearly, by Proposition 4.1, both formulas coincide. This also shows that (4.3) is well-defined.

4.3. Malliavin calculus and Skorohod integrals. We begin by recalling some basic facts on Malliavin calculus.

Definition 4.4. Denote by S the space of all smooth random variables of the form

    F = f(X(φ_1), …, X(φ_n)),    φ_1, …, φ_n ∈ H_T,

where f ∈ C_b^∞(R^n), i.e., f and all its derivatives are bounded. The Malliavin derivative D_T F = D_T^X F of F is an element of L^2(Ω; H_T) defined by

    D_T F = \sum_{i=1}^n ∂_i f(X(φ_1), …, X(φ_n)) φ_i.

In particular, D_T X_t = 1_t.

Definition 4.5. Let D^{1,2} = D_X^{1,2} be the Hilbert space of all square integrable Malliavin differentiable random variables defined as the closure of S with respect to the norm

    ||F||_{1,2}^2 = E[|F|^2] + E[||D_T F||_{H_T}^2].

The divergence operator δ_T is defined as the adjoint operator of the Malliavin derivative D_T.

Definition 4.6. The domain Dom δ_T of the operator δ_T is the set of random variables u ∈ L^2(Ω; H_T) satisfying

(4.4)    |E⟨D_T F, u⟩_{H_T}| ≤ c_u ||F||_{L^2}

for any F ∈ D^{1,2} and some constant c_u depending only on u. For u ∈ Dom δ_T the divergence operator δ_T(u) is a square integrable random variable defined by the duality relation

    E[F δ_T(u)] = E⟨D_T F, u⟩_{H_T}

for all F ∈ D^{1,2}.

Remark 4.4. It is well known that D^{1,2} ⊂ Dom δ_T.

We use the notation

    δ_T(u) = \int_0^T u_s δX_s.

Theorem 4.2 (Transfer principle for Malliavin calculus). Let X be a separable centered Gaussian process with Fredholm representation (3.1). Let D_T and δ_T be the Malliavin derivative and the Skorohod integral with respect to X on [0, T]. Similarly, let D_T^W and δ_T^W be the Malliavin derivative and the Skorohod integral with respect to the Brownian motion W of (3.1) restricted to [0, T]. Then

    δ_T = δ_T^W K_T^*    and    K_T^* D_T = D_T^W.

Proof. The proof follows directly from the transfer principle and the isometry provided by K_T^*, with the same arguments as in [2]. Indeed, by the isometry we have

    H_T = (K_T^*)^{−1}(L^2([0, T])),

where (K_T^*)^{−1} denotes the pre-image, which implies that

    D^{1,2}(H_T) = (K_T^*)^{−1}(D_W^{1,2}(L^2([0, T]))),

which justifies K_T^* D_T = D_T^W. Furthermore, we have the relation

    E⟨u, D_T F⟩_{H_T} = E⟨K_T^* u, D_T^W F⟩_{L^2([0,T])}

for any smooth random variable F and u ∈ L^2(Ω; H_T). Hence, by the very definition of Dom δ_T and the transfer principle, we obtain

    Dom δ_T = (K_T^*)^{−1}(Dom δ_T^W)

and δ_T(u) = δ_T^W(K_T^* u), proving the claim. □

Now we are ready to show that the definition of the multiple Wiener integral I_{T,p} in Subsection 4.2 is correct in the sense that it agrees with the iterated Skorohod integral.

Proposition 4.3. Let h ∈ H_{T,p} be of the form h(x_1,…,x_p) = \prod_{k=1}^p h_k(x_k). Then h is iteratively p times Skorohod integrable and

(4.5)    \int_0^T ⋯ \int_0^T h(t_1,…,t_p) δX_{t_1} ⋯ δX_{t_p} = I_{T,p}(h).

Moreover, if h ∈ H_{T,p}^0 is such that it is p times iteratively Skorohod integrable, then (4.5) still holds.

Proof. Again the idea is to use the transfer principle together with induction. Note first that the statement is true for p = 1 by definition, and assume next that the statement is valid for k = 1, …, p. We denote f_j = \prod_{k=1}^j h_k(x_k). Hence, by the induction assumption, we have

    \int_0^T ⋯ \int_0^T \int_0^T h(t_1,…,t_p, t_v) δX_{t_1} ⋯ δX_{t_p} δX_v = \int_0^T I_{T,p}(f_p) h_{p+1}(v) δX_v.

Put now F = I_{T,p}(f_p) and u(t) = h_{p+1}(t). Hence by [23, Proposition 1.3.3] and by applying the transfer principle we obtain that F u belongs to Dom δ_T and

    δ_T(F u) = δ_T(u) F − ⟨D_t F, u(t)⟩_{H_T}
             = I_T^W(K_T^* h_{p+1}) I_{T,p}^W(K_{T,p}^* f_p) − p ⟨I_{T,p−1}(f_p(·, t)), h_{p+1}(t)⟩_{H_T}
             = I_T(h_{p+1}) I_{T,p}(f_p) − p I_{T,p−1}^W(K_{T,p}^* f_p ⊗̃_1 K_T^* h_{p+1})
             = I_T(h_{p+1}) I_{T,p}(f_p) − p I_{T,p−1}(f_p ⊗̃_{K_T,1} h_{p+1}).

Hence the result is valid also for p + 1 by Proposition 4.2 with q = 1.

The claim for general h ∈ H_{T,p}^0 follows by approximating with products of simple functions. □

Remark 4.5. Note that (4.5) does not hold for arbitrary h ∈ H_{T,p}^0 in general without the a priori assumption of p times iterative Skorohod integrability. For example, let p = 2, let X = B^H be a fractional Brownian motion with H ≤ 1/4 and define h_t(s, v) = 1_t(s) 1_s(v) for some fixed t ∈ [0, T]. Then

    \int_0^T h_t(s, v) δX_v = X_s 1_t(s),

but X_· 1_t does not belong to Dom δ_T (see [7]).

We end this section by providing an extension of the Itô formulas of Alòs, Mazet and Nualart [2]. They considered Gaussian Volterra processes, i.e., they assumed the representation

    X_t = \int_0^t K(t,s) dW_s,

where the kernel K satisfied certain technical assumptions. In [2] it was proved that in the case of Volterra processes one has

(4.6)    f(X_t) = f(0) + \int_0^t f'(X_s) δX_s + \frac{1}{2} \int_0^t f''(X_s) dR(s,s)

if f satisfies the growth condition

(4.7)    max(|f(x)|, |f'(x)|, |f''(x)|) ≤ c e^{λ|x|^2}

for some c > 0 and λ < \frac{1}{4} (\sup_{0≤s≤T} E X_s^2)^{−1}. In the following we consider a different approach which enables us to:

(i) prove that such a formula holds with minimal requirements,
(ii) give a more instructive proof of such a result,
(iii) extend the result from the Volterra context to more general Gaussian processes,
(iv) drop some technical assumptions posed in [2].

For simplicity, we assume that the variance of X is of bounded variation to guarantee the existence of the integral

(4.8)    \int_0^T f''(X_t) dR(t,t).

If the variance is not of bounded variation, then the integral (4.8) may be understood by integration by parts if f'' is smooth enough, or in the general case, via the inner product ⟨·, ·⟩_{H_T}. In Theorem 4.3 we also have to assume that the variance of X is bounded.

The result for polynomials is straightforward, once we assume that the paths of polynomials of X belong to L^2(Ω; H_T).

Proposition 4.4 (Itô formula for polynomials). Let X be a separable centered Gaussian process with covariance R and assume that p is a polynomial. Furthermore, assume that for each polynomial p we have p(X_·) 1_t ∈ L^2(Ω; H_T). Then for each t ∈ [0, T] we have

(4.9)    p(X_t) = p(X_0) + \int_0^t p'(X_s) δX_s + \frac{1}{2} \int_0^t p''(X_s) dR(s,s)

if and only if X_· 1_t belongs to Dom δ_T.

Remark 4.6. The message of the above result is that once the processes p(X_·) 1_t belong to L^2(Ω; H_T), they automatically belong to the domain of δ_T, which is a subspace of L^2(Ω; H_T). However, in order to check p(X_·) 1_t ∈ L^2(Ω; H_T) one needs more information on the kernel K_T. A sufficient condition is provided in Corollary 4.1, which covers many cases of interest.

Proof. By definition and by applying the transfer principle, we have to prove that p'(X_·) 1_t belongs to the domain of δ_T and that

(4.10)    E \int_0^t D_s G \, K_T^*[p'(X_·) 1_t](s) ds = E[G p(X_t)] − E[G p(X_0)] − \frac{1}{2} E \int_0^t G p''(X_s) dR(s,s)

for every random variable G from a total subset of L^2(Ω). In other words, it is sufficient to show that (4.10) is valid for random variables of the form G = I_n^W(h^{⊗n}), where h is a step-function.

Note first that it is sufficient to prove the claim only for the Hermite polynomials H_k, k = 1, 2, …. Indeed, it is well known that any polynomial can be expressed as a linear combination of Hermite polynomials and consequently, the result for an arbitrary polynomial p follows by linearity.

We proceed by induction. First, it is clear that the first two polynomials H_0 and H_1 satisfy (4.10). Furthermore, by assumption H_2'(X_·) 1_t belongs to Dom δ_T, from which (4.10) is easily deduced by [23, Proposition 1.3.3]. Assume next that the result is valid for the Hermite polynomials H_k, k = 0, 1, …, n. Then, recall the well-known recursion formulas

\[
H_{n+1}(x) = xH_n(x) - nH_{n-1}(x), \qquad H_n'(x) = nH_{n-1}(x).
\]
The induction step follows by straightforward calculations using the recursion formulas above and [23, Proposition 1.3.3]. We leave the details to the reader. $\square$
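The two recursions above are easy to sanity-check numerically. The sketch below is purely illustrative (the grid and degrees are arbitrary choices); it uses NumPy's probabilists' Hermite module, which implements exactly the polynomials $H_k$ used here.

```python
import numpy as np
from numpy.polynomial import hermite_e as He  # probabilists' Hermite polynomials

x = np.linspace(-3.0, 3.0, 101)

def H(n, x):
    """Evaluate the probabilists' Hermite polynomial H_n at x."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    return He.hermeval(x, c)

for n in range(1, 10):
    # H_{n+1}(x) = x H_n(x) - n H_{n-1}(x)
    assert np.allclose(H(n + 1, x), x * H(n, x) - n * H(n - 1, x))
    # H_n'(x) = n H_{n-1}(x)
    c = np.zeros(n + 1)
    c[n] = 1.0
    assert np.allclose(He.hermeval(x, He.hermeder(c)), n * H(n - 1, x))
```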

We now illustrate how the result can be generalized to functions satisfying the growth condition (4.7) by using Proposition 4.4. First note that the growth condition (4.7) is indeed natural, since it guarantees that the left-hand side of (4.6) is square integrable. Consequently, since the operator $\delta_T$ is a mapping from $L^2(\Omega;\mathcal{H}_T)$ into $L^2(\Omega)$, the functions satisfying (4.7) form the largest class of functions for which (4.6) can hold. However, it is not clear in general whether $f'(X_\cdot)\mathbf{1}_t$ belongs to $\mathrm{Dom}\,\delta_T$. Indeed, for example, in [2] the authors posed additional conditions on the Volterra kernel $K$ to guarantee this. As our main result we show that $\mathbb{E}\|f'(X_\cdot)\mathbf{1}_t\|_{\mathcal{H}_T}^2 < \infty$ implies that (4.6) holds. In other words, the Itô formula (4.6) is not only natural but also the only possibility.

Theorem 4.3 (Itô formula for Skorohod integrals). Let $X$ be a separable centered Gaussian process with covariance $R$ such that all the polynomials

$p(X_\cdot)\mathbf{1}_t \in L^2(\Omega;\mathcal{H}_T)$. Assume that $f \in C^2$ satisfies the growth condition (4.7) and that the variance of $X$ is bounded and of bounded variation. If
\[
(4.11)\qquad \mathbb{E}\left\|f'(X_\cdot)\mathbf{1}_t\right\|_{\mathcal{H}_T}^2 < \infty
\]
for any $t \in [0,T]$, then

\[
f(X_t) = f(X_0) + \int_0^t f'(X_s)\,\delta X_s + \frac12\int_0^t f''(X_s)\,\mathrm{d}R(s,s).
\]

Proof. In this proof we assume, for notational simplicity and with no loss of generality, that $\sup_{0\leq s\leq T} R(s,s) = 1$.

First, it is clear that (4.11) implies that $f'(X_\cdot)\mathbf{1}_t$ belongs to the domain of $\delta_T$. Hence we only have to prove that

\[
\mathbb{E}\left\langle D_T G, f'(X_\cdot)\mathbf{1}_t\right\rangle_{\mathcal{H}_T}
= \mathbb{E}[G f(X_t)] - \mathbb{E}[G f(X_0)] - \frac12\int_0^t \mathbb{E}\left[G f''(X_s)\right]\mathrm{d}R(s,s)
\]
for every random variable $G = I_n^W(h^{\otimes n})$.

Now it is well known that the Hermite polynomials, when properly scaled, form an orthogonal system in $L^2(\mathbb{R})$ equipped with the Gaussian measure. Hence each $f$ satisfying the growth condition (4.7) has a series representation
\[
(4.12)\qquad f(x) = \sum_{k=0}^\infty \alpha_k H_k(x).
\]
Indeed, the growth condition (4.7) implies that

\[
\int_{\mathbb{R}} |f'(x)|^2\, e^{-\frac{x^2}{2\sup_{0\leq s\leq T} R(s,s)}}\,\mathrm{d}x < \infty.
\]

Furthermore, we have

\[
f(X_s) = \sum_{k=0}^\infty \alpha_k H_k(X_s),
\]
where the series converges almost surely and in $L^2(\Omega)$, and a similar conclusion is valid for the derivatives $f'(X_s)$ and $f''(X_s)$.

Then, by applying (4.11), we obtain that for any $\varepsilon > 0$ there exists $N = N_\varepsilon$ such that

\[
\left|\mathbb{E}\left\langle D_T G, f_n'(X_\cdot)\mathbf{1}_t\right\rangle_{\mathcal{H}_T}\right| < \varepsilon, \qquad n \geq N,
\]
where
\[
f_n'(X_s) = \sum_{k=n}^\infty \alpha_k H_k'(X_s).
\]

Consequently, for random variables of the form $G = I_n^W(h^{\otimes n})$ we obtain, by choosing $N$ large enough and applying Proposition 4.4, that
\[
\left|\mathbb{E}[G f(X_t)] - \mathbb{E}[G f(X_0)] - \frac12\int_0^t \mathbb{E}\left[G f''(X_s)\right]\mathrm{d}R(s,s)
- \mathbb{E}\left\langle D_T G, f'(X_\cdot)\mathbf{1}_t\right\rangle_{\mathcal{H}_T}\right|
= \left|\mathbb{E}\left\langle D_T G, f_n'(X_\cdot)\mathbf{1}_t\right\rangle_{\mathcal{H}_T}\right| < \varepsilon.
\]
Now the left-hand side does not depend on $n$, which concludes the proof. $\square$

Remark 4.7. Note that it is actually sufficient to have
\[
(4.13)\qquad \mathbb{E}\left\langle D_T G, f'(X_\cdot)\mathbf{1}_t\right\rangle_{\mathcal{H}_T}
= \sum_{k=1}^\infty \alpha_k\, \mathbb{E}\left\langle D_T G, H_k'(X_\cdot)\mathbf{1}_t\right\rangle_{\mathcal{H}_T},
\]
from which the result follows by Proposition 4.4. Furthermore, taking into account the growth condition (4.7), this is actually a sufficient and necessary condition for formula (4.6) to hold. Consequently, our method can also be used to obtain Itô formulas by considering the extended domain of $\delta_T$ (see [16] or [19]). This is the topic of Subsection 4.4 below.

Example 4.4. It is known that if $X = B^H$ is a fractional Brownian motion with $H > \frac14$, then $f'(X_\cdot)\mathbf{1}_t$ satisfies condition (4.11), while for $H \leq \frac14$ it does not (see [23, Chapter 5]). Consequently, a simple application of Theorem 4.3 covers fractional Brownian motion with $H > \frac14$. For the case $H \leq \frac14$ one has to consider the extended domain of $\delta_T$; this is done in [16]. Consequently, in this case we have (4.13) for any $F \in \mathcal{S}$.
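The coefficients $\alpha_k$ in the expansion (4.12) are the normalized Hermite projections $\alpha_k = \mathbb{E}[f(Z)H_k(Z)]/k!$ with $Z \sim N(0,1)$. The sketch below computes them by Gauss–Hermite quadrature; the test function $f(x) = e^{tx}$ and the value of $t$ are illustrative choices, picked because the coefficients are then known in closed form, $\alpha_k = e^{t^2/2}t^k/k!$ (from the generating function of the Hermite polynomials).

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

# Gauss-Hermite nodes/weights for the weight e^{-x^2/2}; normalize to the N(0,1) law
nodes, weights = He.hermegauss(60)
weights = weights / math.sqrt(2.0 * math.pi)

def hermite_coeff(f, k):
    """alpha_k = E[f(Z) H_k(Z)] / k!  for Z ~ N(0,1)."""
    c = np.zeros(k + 1)
    c[k] = 1.0
    return float(np.sum(weights * f(nodes) * He.hermeval(nodes, c))) / math.factorial(k)

# illustrative test: f(x) = exp(t x) has alpha_k = exp(t^2/2) t^k / k!
t = 0.7
f = lambda x: np.exp(t * x)
for k in range(6):
    exact = math.exp(t**2 / 2.0) * t**k / math.factorial(k)
    assert abs(hermite_coeff(f, k) - exact) < 1e-10
```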

We end this section by illustrating the power of our method with the following simple corollary, which is an extension of [2, Theorem 1].

Corollary 4.1. Let $X$ be a separable centered continuous Gaussian process with covariance $R$ that is bounded, and such that the Fredholm kernel $K_T$ is of bounded variation and
\[
\int_0^T \left(\int_0^T \|X_t - X_s\|_{L^2(\Omega)}\,|K_T|(\mathrm{d}t, s)\right)^2 \mathrm{d}s < \infty.
\]
Then for any $t \in [0,T]$ we have
\[
f(X_t) = f(X_0) + \int_0^t f'(X_s)\,\delta X_s + \frac12\int_0^t f''(X_s)\,\mathrm{d}R(s,s).
\]

Proof. The assumption is a Fredholm version of condition (K2) in [2], which implies condition (4.11). Hence the result follows from Theorem 4.3. $\square$
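For orientation, it may help to note what Theorem 4.3 gives in the simplest case: for $X = W$ a standard Brownian motion one has $K_T(t,s) = \mathbf{1}_{[0,t]}(s)$ and $R(t,s) = t\wedge s$, so $\mathrm{d}R(s,s) = \mathrm{d}s$ and the formula reduces to the classical Itô–Skorohod formula.

```latex
% Specialization of (4.6) to Brownian motion: R(t,s) = t \wedge s, so dR(s,s) = ds.
f(W_t) = f(0) + \int_0^t f'(W_s)\,\delta W_s + \frac{1}{2}\int_0^t f''(W_s)\,\mathrm{d}s.
```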

4.4. Extended divergence operator. As shown in Subsection 4.3, the Itô formula (4.6) is the only possibility. However, the problem is that the space $L^2(\Omega;\mathcal{H}_T)$ may be too small to contain the elements $f'(X_\cdot)\mathbf{1}_t$. In particular, it may happen that not even the process $X$ itself belongs to $L^2(\Omega;\mathcal{H}_T)$ (see, e.g., [7] for the case of fractional Brownian motion with $H \leq \frac14$). This problem can be overcome by considering an extended domain of $\delta_T$. The idea of the extended domain is to extend the inner product $\langle u, \varphi\rangle_{\mathcal{H}_T}$ for simple $\varphi$ to more general processes $u$, and then to define the extended domain by (4.4) with a restricted class of test variables $F$. This also gives another intuitive reason why the extended domain of $\delta_T$ can be useful; indeed, we have proved here that the Itô formula (4.6) is the only possibility, and what one essentially needs for such a result is that

(i) $X_\cdot\mathbf{1}_t$ belongs to $\mathrm{Dom}\,\delta_T$;
(ii) equation (4.13) is valid for functions satisfying (4.7).

Consequently, one should look for extensions of the operator $\delta_T$ such that these two properties are satisfied. To facilitate the extension of the domain, we make the following relatively moderate assumption:

(H) The function $t \mapsto R(t,s)$ is of bounded variation on $[0,T]$ and
\[
\sup_{t\in[0,T]} \int_0^T |R|(\mathrm{d}s, t) < \infty.
\]

Remark 4.8. Note that we are making the assumption on the covariance $R$, not on the kernel $K_T$. Hence our case is different from that of [2]. Also, [19] assumed absolute continuity of $R$; we are satisfied with bounded variation.

We will follow the idea of Lei and Nualart [19] and extend the inner product $\langle\cdot,\cdot\rangle_{\mathcal{H}_T}$ beyond $\mathcal{H}_T$. Consider a step function $\varphi$. Then, on the one hand, by the isometry property we have
\[
\langle\varphi, \mathbf{1}_t\rangle_{\mathcal{H}_T} = \int_0^T (K_T^*\varphi)(s)\,g_t(s)\,\mathrm{d}s,
\]
where $g_t(s) = K_T(t,s) \in L^2([0,T])$. On the other hand, by using the adjoint property (see Remark 4.1) we obtain
\[
\int_0^T (K_T^*\varphi)(s)\,g_t(s)\,\mathrm{d}s = \int_0^T \varphi(s)\,(K_T g_t)(\mathrm{d}s),
\]
where, computing formally, we have
\[
(K_T g_t)(\mathrm{d}s) = \int_0^T g_t(u)\,K_T(\mathrm{d}s, u)\,\mathrm{d}u
= \int_0^T K_T(t,u)\,K_T(\mathrm{d}s, u)\,\mathrm{d}u
= R(t, \mathrm{d}s).
\]
Consequently,

\[
\langle\varphi, \mathbf{1}_t\rangle_{\mathcal{H}_T} = \int_0^T \varphi(s)\,R(t, \mathrm{d}s).
\]

This gives motivation to the following definition, similar to that of [19, Definition 2.1].

Definition 4.7. Denote by $\mathcal{T}_T$ the space of measurable functions $g$ satisfying
\[
\sup_{t\in[0,T]} \int_0^T |g(s)|\,|R|(t, \mathrm{d}s) < \infty,
\]
and let $\varphi$ be a step function of the form $\varphi = \sum_{k=1}^n b_k \mathbf{1}_{t_k}$. Then we extend $\langle\cdot,\cdot\rangle_{\mathcal{H}_T}$ to $\mathcal{T}_T$ by defining
\[
\langle g, \varphi\rangle_{\mathcal{H}_T} = \sum_{k=1}^n b_k \int_0^T g(s)\,R(t_k, \mathrm{d}s).
\]
In particular, this implies that for $g$ and $\varphi$ as above, we have

\[
\langle g\mathbf{1}_t, \varphi\rangle_{\mathcal{H}_T} = \int_0^t g(s)\,\mathrm{d}\langle\mathbf{1}_s, \varphi\rangle_{\mathcal{H}_T}.
\]

We define the extended domain $\mathrm{Dom}^E\delta_T$ similarly as in [19].

Definition 4.8. A process $u \in L^1(\Omega;\mathcal{T}_T)$ belongs to $\mathrm{Dom}^E\delta_T$ if
\[
\left|\mathbb{E}\langle u, DF\rangle_{\mathcal{H}_T}\right| \leq c_u \|F\|_2
\]
for any smooth random variable $F \in \mathcal{S}$. In this case, $\delta(u) \in L^2(\Omega)$ is defined by the duality relationship
\[
\mathbb{E}[F\delta(u)] = \mathbb{E}\langle u, DF\rangle_{\mathcal{H}_T}.
\]

Remark 4.9. Note that in general $\mathrm{Dom}\,\delta_T$ and $\mathrm{Dom}^E\delta_T$ are not comparable; see [19] for discussion.

Note now that if a function $f$ satisfies the growth condition (4.7), then $f'(X_\cdot)\mathbf{1}_t \in L^1(\Omega;\mathcal{T}_T)$, since (4.7) implies
\[
\mathbb{E}\sup_{t\in[0,T]} |f'(X_t)|^p < \infty
\]
for any $p < \left(2\lambda \sup_{t\in[0,T]} R(t,t)\right)^{-1}$. Consequently, with this definition we are able to get rid of the problem that the processes might not belong to the corresponding $\mathcal{H}_T$-spaces. Furthermore, this implies that the series expansion (4.12) converges in the norm of $L^1(\Omega;\mathcal{T}_T)$ defined by
\[
\mathbb{E}\sup_{t\in[0,T]}\int_0^T |u(s)|\,|R|(t, \mathrm{d}s),
\]
which in turn implies (4.13). Hence it is straightforward to obtain the following result by first proving it for polynomials and then approximating in a similar manner as in Subsection 4.3, but using the extended domain instead.

Theorem 4.4 (Itô formula for extended Skorohod integrals). Let $X$ be a separable centered Gaussian process with covariance $R$ and assume that $f \in C^2$ satisfies the growth condition (4.7). Furthermore, assume that (H) holds and that the variance of $X$ is bounded and of bounded variation. Then for any $t \in [0,T]$ the process $f'(X_\cdot)\mathbf{1}_t$ belongs to $\mathrm{Dom}^E\delta_T$ and
\[
f(X_t) = f(X_0) + \int_0^t f'(X_s)\,\delta X_s + \frac12\int_0^t f''(X_s)\,\mathrm{d}R(s,s).
\]

Remark 4.10. As an application of Theorem 4.4 it is straightforward to derive a version of the Itô–Tanaka formula under additional conditions which guarantee that, for a certain sequence of functions $f_n$, the terms $\frac12\int_0^t f_n''(X_s)\,\mathrm{d}R(s,s)$ converge to the local time. For details we refer to [19], where the authors derived such a formula under their assumptions.

Finally, let us note that the extension to functions $f(t,x)$, where $f$ satisfies the following growth condition, is straightforward:
\[
(4.14)\qquad \max\left[|f(t,x)|, |\partial_t f(t,x)|, |\partial_x f(t,x)|, |\partial_{xx} f(t,x)|\right] \leq c e^{\lambda|x|^2}
\]
for some $c > 0$ and $\lambda < \frac14\left(\sup_{0\leq s\leq T}\mathbb{E}X_s^2\right)^{-1}$.

Theorem 4.5 (Itô formula for extended Skorohod integrals, II). Let $X$ be a separable centered Gaussian process with covariance $R$ and assume that $f \in C^{1,2}$ satisfies the growth condition (4.14). Furthermore, assume that (H) holds and that the variance of $X$ is bounded and of bounded variation. Then for any $t \in [0,T]$ the process $\partial_x f(\cdot, X_\cdot)\mathbf{1}_t$ belongs to $\mathrm{Dom}^E\delta_T$ and
\[
f(t, X_t) = f(0, X_0) + \int_0^t \partial_x f(s, X_s)\,\delta X_s + \int_0^t \partial_t f(s, X_s)\,\mathrm{d}s
+ \frac12\int_0^t \partial_{xx} f(s, X_s)\,\mathrm{d}R(s,s).
\]

Proof. Taking into account that there are no problems concerning the processes belonging to the required spaces, the formula follows by approximating with polynomials of the form $p(x)q(t)$ and following the proof of Theorem 4.3. $\square$

5. Applications

We illustrate how some results transfer easily from the Brownian case to the Gaussian Fredholm processes.

5.1. Equivalence in Law. The transfer principle has already been used in connection with the equivalence of laws of Gaussian processes, e.g., in [26] in the context of fractional Brownian motions and in [11] in the context of Gaussian Volterra processes satisfying certain non-degeneracy conditions. The following proposition uses the Fredholm representation (3.1) to give a sufficient condition for the equivalence of general Gaussian processes in terms of their Fredholm kernels.

Proposition 5.1. Let $X$ and $\tilde X$ be two Gaussian processes with Fredholm kernels $K_T$ and $\tilde K_T$, respectively. If there exists a Volterra kernel $\ell \in L^2([0,T]^2)$ such that
\[
(5.1)\qquad \tilde K_T(t,s) = K_T(t,s) - \int_s^T K_T(t,u)\,\ell(u,s)\,\mathrm{d}u,
\]
then $X$ and $\tilde X$ are equivalent in law.

Proof. Recall that, by the Hitsuda representation theorem [13, Theorem 6.3], a centered Gaussian process $\tilde W$ is equivalent in law to a Brownian motion on $[0,T]$ if and only if there exist a kernel $\ell \in L^2([0,T]^2)$ and a Brownian motion $W$ such that $\tilde W$ admits the representation
\[
(5.2)\qquad \tilde W_t = W_t - \int_0^t\int_0^s \ell(s,u)\,\mathrm{d}W_u\,\mathrm{d}s.
\]
Let $X$ have the Fredholm representation
\[
(5.3)\qquad X_t = \int_0^T K_T(t,s)\,\mathrm{d}W_s.
\]
Then $\tilde X$ is equivalent to $X$ if it admits, in law, the representation
\[
(5.4)\qquad \tilde X_t \stackrel{d}{=} \int_0^T K_T(t,s)\,\mathrm{d}\tilde W_s,
\]
where $\tilde W$ is connected to the $W$ of (5.3) by (5.2).

In order to show (5.4), let
\[
\tilde X_t = \int_0^T \tilde K_T(t,s)\,\mathrm{d}W_s'
\]
be the Fredholm representation of $\tilde X$; here $W'$ is some Brownian motion. Then, by using the connection (5.1) and the Fubini theorem, we obtain
\[
\begin{aligned}
\tilde X_t &= \int_0^T \tilde K_T(t,s)\,\mathrm{d}W_s' \\
&\stackrel{d}{=} \int_0^T \tilde K_T(t,s)\,\mathrm{d}W_s \\
&= \int_0^T \left(K_T(t,s) - \int_s^T K_T(t,u)\,\ell(u,s)\,\mathrm{d}u\right)\mathrm{d}W_s \\
&= \int_0^T K_T(t,s)\,\mathrm{d}W_s - \int_0^T \int_s^T K_T(t,u)\,\ell(u,s)\,\mathrm{d}u\,\mathrm{d}W_s \\
&= \int_0^T K_T(t,s)\,\mathrm{d}W_s - \int_0^T \int_0^s K_T(t,s)\,\ell(s,u)\,\mathrm{d}W_u\,\mathrm{d}s \\
&= \int_0^T K_T(t,s)\,\mathrm{d}W_s - \int_0^T K_T(t,s)\left(\int_0^s \ell(s,u)\,\mathrm{d}W_u\right)\mathrm{d}s \\
&= \int_0^T K_T(t,s)\left(\mathrm{d}W_s - \int_0^s \ell(s,u)\,\mathrm{d}W_u\,\mathrm{d}s\right) \\
&= \int_0^T K_T(t,s)\,\mathrm{d}\tilde W_s.
\end{aligned}
\]
Thus we have shown the representation (5.4), and consequently the equivalence of $\tilde X$ and $X$. $\square$
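The chain of equalities above is essentially finite-dimensional linear algebra after discretization, which makes it easy to check on a grid. In the sketch below everything is an illustrative discretization (random kernels, a crude Euler grid): `K` plays the role of $K_T$, `L` the role of the Volterra kernel $\ell$, and the assertion is the discrete version of the Fubini step in the proof.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 50, 1.0
dt = T / n

K = rng.standard_normal((n, n))            # discretized Fredholm kernel K_T(t_i, s_j)
L = np.tril(rng.standard_normal((n, n)))   # discretized Volterra kernel ell(u, s), u >= s
dW = rng.standard_normal(n) * np.sqrt(dt)  # Brownian increments

# Hitsuda-type transformation (5.2): dW~_s = dW_s - (int_0^s ell(s,u) dW_u) ds
dW_tilde = dW - dt * (L @ dW)

# transformed kernel (5.1): K~(t,s) = K(t,s) - int_s^T K(t,u) ell(u,s) du
K_tilde = K - dt * (K @ L)

# both routes give the same process: int K dW~  =  int K~ dW
assert np.allclose(K @ dW_tilde, K_tilde @ dW)
```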

5.2. Generalized Bridges. We consider the conditioning, or bridging, of $X$ on $N$ linear functionals $G_T = [G_T^i]_{i=1}^N$ of its paths:
\[
G_T(X) = \int_0^T \mathbf{g}(t)\,\mathrm{d}X_t = \left[\int_0^T g_i(t)\,\mathrm{d}X_t\right]_{i=1}^N.
\]
We assume, without any loss of generality, that the functions $g_i$ are linearly independent. Also, without loss of generality, we assume that $X_0 = 0$ and that the conditioning is on the set $\{\int_0^T \mathbf{g}(t)\,\mathrm{d}X_t = \mathbf{0}\}$ instead of the apparently more general conditioning on the set $\{\int_0^T \mathbf{g}(t)\,\mathrm{d}X_t = \mathbf{y}\}$. Indeed, see [28] for how to obtain the more general conditioning from this one.

The rigorous definition of a bridge is the following.

Definition 5.1. The generalized bridge measure $\mathbb{P}^{\mathbf{g}}$ is the regular conditional law
\[
\mathbb{P}^{\mathbf{g}} = \mathbb{P}^{\mathbf{g}}[X \in \cdot\,] = \mathbb{P}\left[X \in \cdot \;\Big|\; \int_0^T \mathbf{g}(t)\,\mathrm{d}X_t = \mathbf{0}\right].
\]
A representation of the generalized Gaussian bridge is any process $X^{\mathbf{g}}$ satisfying
\[
\mathbb{P}[X^{\mathbf{g}} \in \cdot\,] = \mathbb{P}^{\mathbf{g}}[X \in \cdot\,] = \mathbb{P}\left[X \in \cdot \;\Big|\; \int_0^T \mathbf{g}(t)\,\mathrm{d}X_t = \mathbf{0}\right].
\]

We refer to [28] for more details on generalized Gaussian bridges.

There are many different representations for bridges. A very general representation is the so-called orthogonal representation given by
\[
X_t^{\mathbf{g}} = X_t - \langle\!\langle\!\langle \mathbf{1}_t, \mathbf{g}\rangle\!\rangle\!\rangle^\top \langle\!\langle\!\langle \mathbf{g}\rangle\!\rangle\!\rangle^{-1} \int_0^T \mathbf{g}(u)\,\mathrm{d}X_u,
\]
where, by the transfer principle,
\[
\langle\!\langle\!\langle \mathbf{g}\rangle\!\rangle\!\rangle_{ij} := \mathrm{Cov}\left[\int_0^T g_i(t)\,\mathrm{d}X_t, \int_0^T g_j(t)\,\mathrm{d}X_t\right]
= \int_0^T K_T^* g_i(t)\, K_T^* g_j(t)\,\mathrm{d}t.
\]
A more interesting representation is the so-called canonical representation, where the filtrations of the bridge and of the original process coincide. In [28] such representations were constructed for the so-called prediction-invertible Gaussian processes. In this subsection we show how the transfer principle can be used to construct a canonical-type bridge representation for all Gaussian Fredholm processes. We start with an example that should make it clear how one uses the transfer principle.

Example 5.1. We construct a canonical-type representation for $X^1$, the bridge of $X$ conditioned on $X_T = 0$. Assume $X_0 = 0$. Now, by the Fredholm representation of $X$, we can write the conditioning as
\[
(5.5)\qquad X_T = \int_0^T 1\,\mathrm{d}X_t = \int_0^T K_T(T,t)\,\mathrm{d}W_t = 0.
\]

Let us then denote by $W^{1*}$ the canonical representation of the Brownian bridge with the conditioning (5.5). Then, by [28, Theorem 4.12],
\[
\mathrm{d}W_s^{1*} = \mathrm{d}W_s - \int_0^s \frac{K_T(T,s)\,K_T(T,u)}{\int_u^T K_T(T,v)^2\,\mathrm{d}v}\,\mathrm{d}W_u\,\mathrm{d}s.
\]
Now, by integrating against the kernel $K_T$, we obtain from this that
\[
X_t^1 = X_t - \int_0^T K_T(t,s)\int_0^s \frac{K_T(T,s)\,K_T(T,u)}{\int_u^T K_T(T,v)^2\,\mathrm{d}v}\,\mathrm{d}W_u\,\mathrm{d}s.
\]
This canonical-type bridge representation seems to be a new one.
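As a sanity check on this example, take $X = W$: then $K_T(t,s) = \mathbf{1}_{[0,t]}(s)$, so $K_T(T,\cdot) \equiv 1$ and $\int_u^T K_T(T,v)^2\,\mathrm{d}v = T - u$, and the representation collapses to the classical canonical (Volterra) representation of the Brownian bridge.

```latex
% Brownian specialization: K_T(T,s) K_T(T,u) = 1 and \int_u^T K_T(T,v)^2 dv = T - u, so
X^1_t \;=\; W_t - \int_0^t\!\int_0^s \frac{\mathrm{d}W_u}{T-u}\,\mathrm{d}s .
```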

Let us then denote
\[
\langle\!\langle \mathbf{g}\rangle\!\rangle_{ij}(t) := \int_t^T g_i(s)\,g_j(s)\,\mathrm{d}s.
\]
Then, in the same way as in Example 5.1, by applying the transfer principle to [28, Theorem 4.12], we obtain the following canonical-type bridge representation for general Gaussian Fredholm processes.

Proposition 5.2. Let $X$ be a Gaussian process with Fredholm kernel $K_T$ such that $X_0 = 0$. Then the bridge $X^{\mathbf{g}}$ admits the canonical-type representation
\[
X_t^{\mathbf{g}} = X_t - \int_0^T K_T(t,s)\int_0^s |\langle\!\langle \mathbf{g}^*\rangle\!\rangle|(s)\,(\mathbf{g}^*)^\top(s)\,\langle\!\langle \mathbf{g}^*\rangle\!\rangle^{-1}(s)\,\frac{\mathbf{g}^*(u)}{|\langle\!\langle \mathbf{g}^*\rangle\!\rangle|(u)}\,\mathrm{d}W_u\,\mathrm{d}s,
\]
where $\mathbf{g}^* = K_T^*\mathbf{g}$.

5.3. Series Expansions. The Mercer square root (3.5) can be used to build the Karhunen–Lo`eve expansion for the Gaussian process X. But the Mercer form (3.5) is seldom known. However, if one can find some kernel KT such that the representation (3.1) holds, then one can construct a series expansion for X by using the transfer principle of Theorem 4.1 as follows:

Proposition 5.3 (Series expansion). Let $X$ be a separable Gaussian process with representation (3.1) and let $(\phi_j)_{j=1}^\infty$ be any orthonormal basis of $L^2([0,T])$. Then $X$ admits the series expansion

\[
(5.6)\qquad X_t = \sum_{j=1}^\infty \left(\int_0^T \phi_j(s)\,K_T(t,s)\,\mathrm{d}s\right)\xi_j,
\]

where $(\xi_j)_{j=1}^\infty$ is a sequence of independent standard normal random variables. The series (5.6) converges in $L^2(\Omega)$, and also almost surely uniformly if and only if $X$ is continuous.

The proof below uses the reproducing kernel Hilbert space technique. For more details on this we refer to [12], where a series expansion is constructed for the fractional Brownian motion by using the transfer principle.

Proof. The Fredholm representation (3.1) implies immediately that the reproducing kernel Hilbert space of $X$ is the image $K_T L^2([0,T])$ and that $K_T$ is actually an isometry from $L^2([0,T])$ to the reproducing kernel Hilbert space of $X$. The $L^2$-expansion (5.6) then follows from [1, Theorem 3.7], and the equivalence of the almost sure convergence of (5.6) and the continuity of $X$ follows from [1, Theorem 3.8]. $\square$
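The expansion (5.6) is easy to test numerically in the Brownian case $K_T(t,s) = \mathbf{1}_{[0,t]}(s)$: by Parseval's identity the coefficients $c_j(t) = \int_0^T \phi_j(s)K_T(t,s)\,\mathrm{d}s = \int_0^t \phi_j(s)\,\mathrm{d}s$ must satisfy $\sum_j c_j(t)c_j(u) = \langle\mathbf{1}_t,\mathbf{1}_u\rangle_{L^2} = t\wedge u$, the Brownian covariance. The cosine basis and the truncation level below are illustrative choices.

```python
import numpy as np

T, J = 1.0, 5000          # horizon and truncation level (illustrative)
m = np.arange(1, J)       # cosine frequencies

def c(t):
    """Coefficients c_j(t) = int_0^t phi_j(s) ds for the orthonormal basis
    phi_1 = 1/sqrt(T), phi_{m+1}(s) = sqrt(2/T) cos(m pi s / T)."""
    cosine = np.sqrt(2.0 * T) / (m * np.pi) * np.sin(m * np.pi * t / T)
    return np.concatenate(([t / np.sqrt(T)], cosine))

# Parseval: sum_j c_j(t) c_j(u) -> min(t, u) as J -> infinity
for t, u in [(0.3, 0.7), (0.5, 0.5), (0.9, 0.2)]:
    assert abs(np.dot(c(t), c(u)) - min(t, u)) < 1e-3
```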

5.4. Stochastic differential equations and maximum likelihood estimators. Let us briefly discuss the generalized Langevin equation
\[
\mathrm{d}X_t^\theta = -\theta X_t^\theta\,\mathrm{d}t + \mathrm{d}X_t, \qquad t \in [0,T],
\]
with some Gaussian noise $X$, parameter $\theta > 0$, and initial condition $X_0$. This can be written in the integral form
\[
(5.7)\qquad X_t^\theta = X_0^\theta - \theta\int_0^t X_s^\theta\,\mathrm{d}s + \int_0^T \mathbf{1}_t(s)\,\mathrm{d}X_s.
\]
Here the integral $\int_0^T \mathbf{1}_t(s)\,\mathrm{d}X_s$ can be understood in a pathwise sense or in a Skorohod sense, and both integrals coincide. Suppose now that the Gaussian noise $X$ has the Fredholm representation
\[
X_t = \int_0^T K_T(t,s)\,\mathrm{d}W_s.
\]
By applying the transfer principle we can write (5.7) as
\[
X_t^\theta = X_0^\theta - \theta\int_0^t X_s^\theta\,\mathrm{d}s + \int_0^T K_T(t,s)\,\mathrm{d}W_s.
\]
This equation can be interpreted as a stochastic differential equation with an anticipating Gaussian perturbation term $\int_0^T K_T(t,s)\,\mathrm{d}W_s$. Now the unique solution to (5.7) with initial condition $X_0^\theta = 0$ is given by
\[
X_t^\theta = e^{-\theta t}\int_0^t e^{\theta s}\,\mathrm{d}X_s.
\]
By using integration by parts and the Fredholm representation of $X$, this can be written as
\[
X_t^\theta = \int_0^T K_T(t,u)\,\mathrm{d}W_u - \theta\int_0^t e^{-\theta t} e^{\theta s}\int_0^T K_T(s,u)\,\mathrm{d}W_u\,\mathrm{d}s,
\]
which, thanks to the stochastic Fubini theorem, can be written as
\[
X_t^\theta = \int_0^T \left(K_T(t,u) - \theta\int_0^t e^{-\theta t} e^{\theta s} K_T(s,u)\,\mathrm{d}s\right)\mathrm{d}W_u.
\]
In other words, the solution $X^\theta$ is a Gaussian process with the kernel
\[
K_T^\theta(t,u) = K_T(t,u) - \theta\int_0^t e^{-\theta(t-s)} K_T(s,u)\,\mathrm{d}s.
\]
Note that this is just one example of how the transfer principle can be applied in order to study stochastic differential equations. Indeed, for the more general equation
\[
\mathrm{d}X_t^a = a(t, X_t^a)\,\mathrm{d}t + \mathrm{d}X_t,
\]
any existence or uniqueness result transfers immediately to an existence or uniqueness result for the equation
\[
X_t^a = X_0^a + \int_0^t a(s, X_s^a)\,\mathrm{d}s + \int_0^T K_T(t,u)\,\mathrm{d}W_u,
\]
and vice versa.

Let us end this section by discussing briefly how the transfer principle can be used to build maximum likelihood estimators (MLEs) for the mean-reversion parameter $\theta$ in equation (5.7).
For details on parameter estimation in such equations with general stationary-increment Gaussian noise we refer to [27] and the references therein. Let us assume that the noise $X$ in (5.7) is invertible, in the sense that the Brownian motion $W$ in its Fredholm representation is a linear transformation of $X$. Assume further that this transformation admits a kernel, so that we can write
\[
W_t = \int_0^T K_T^{-1}(t,s)\,\mathrm{d}X_s.
\]
Then, by operating with the kernels $K_T$ and $K_T^{-1}$, we see that equation (5.7) is equivalent to the anticipating equation

\[
(5.8)\qquad W_t^\theta = A_{T,t}(W^\theta) + W_t,
\]
where
\[
A_{T,t}(W^\theta) = -\theta\int_0^T K_T^{-1}(t,s)\int_0^T K_T(s,u)\,\mathrm{d}W_u^\theta\,\mathrm{d}s.
\]
Consequently, the MLE for equation (5.7) is the MLE for equation (5.8), which in turn can be constructed by using a suitable anticipating Girsanov theorem. There is a vast literature on how to do this; see, e.g., [6] and the references therein.
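As a sanity check on the kernel $K_T^\theta$ derived above, take $X = W$, i.e. $K_T(t,u) = \mathbf{1}_{[0,t]}(u)$: then for $u \le t$ one gets $K_T^\theta(t,u) = 1 - \theta\int_u^t e^{-\theta(t-s)}\,\mathrm{d}s = e^{-\theta(t-u)}$, the classical Ornstein–Uhlenbeck kernel. The sketch below verifies this numerically; the values of $\theta$, $t$, $u$ and the grid are illustrative choices.

```python
import numpy as np

theta, t, u = 1.5, 0.8, 0.3   # illustrative parameter, time, and u <= t
s = np.linspace(u, t, 100001) # K_T(s,v) = 1_{[0,s]}(v) vanishes for s < u, so integrate over [u, t]

# K^theta(t,u) = K(t,u) - theta * int_0^t e^{-theta(t-s)} K(s,u) ds, by the trapezoidal rule
vals = np.exp(-theta * (t - s))
integral = float(np.sum((vals[1:] + vals[:-1]) * np.diff(s)) / 2.0)
K_theta = 1.0 - theta * integral

# classical Ornstein-Uhlenbeck kernel e^{-theta (t-u)}
assert abs(K_theta - np.exp(-theta * (t - u))) < 1e-8
```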

6. Conclusions

We have shown that virtually every Gaussian process admits a Fredholm representation with respect to a Brownian motion. This apparently simple fact has, as far as we know, remained unnoticed until now. The Fredholm representation immediately yields the transfer principle, which allows one to transfer the stochastic analysis of virtually any Gaussian process into stochastic analysis of the Brownian motion. We have shown how this can be done. Finally, we have illustrated the power of the Fredholm representation and the associated transfer principle in many applications.

Stochastic analysis becomes easy with the Fredholm representation. The only obvious problem is to construct the Fredholm kernel from the covariance function. In principle this can be done algorithmically, but analytically it is very difficult. The opposite construction is, however, trivial. Therefore, if one begins the modeling with the Fredholm kernel and not with the covariance, one's analysis will be simpler and much more convenient.

References

[1] R. Adler, An Introduction to Continuity, Extrema, and Related Topics for General Gaussian Processes, vol. 12 of Institute of Mathematical Statistics Lecture Notes–Monograph Series, Institute of Mathematical Statistics, Hayward, CA, 1990.
[2] E. Alòs, O. Mazet, and D. Nualart, Stochastic calculus with respect to Gaussian processes, The Annals of Probability, 29 (2001), pp. 766–801.
[3] E. Azmoodeh, T. Sottinen, L. Viitasaari, and A. Yazigi, Necessary and sufficient conditions for Hölder continuity of Gaussian processes, Statist. Probab. Lett., 94 (2014), pp. 230–235.
[4] C. Bender and R. Elliott, On the Clark–Ocone theorem for fractional Brownian motions with Hurst parameter bigger than a half, Stoch. Stoch. Rep., 75 (2003), pp. 391–405.
[5] F. Biagini, Y. Hu, B. Øksendal, and T. Zhang, Stochastic Calculus for Fractional Brownian Motion and Applications, Springer, 2008.
[6] J. P. N. Bishwal, Maximum likelihood estimation in Skorohod stochastic differential equations, Proc. Amer. Math. Soc., 138 (2010), pp. 1471–1478.
[7] P. Cheridito and D. Nualart, Stochastic integral of divergence type with respect to the fractional Brownian motion with Hurst parameter H < 1/2, Ann. Inst. Henri Poincaré, 41 (2005), pp. 1049–1081.
[8] H. Cramér, On the structure of purely non-deterministic processes, Ark. Mat., 4 (1961), pp. 249–266.
[9] A. Dasgupta and G. Kallianpur, Chaos decomposition of multiple fractional integrals and applications, Probability Theory and Related Fields, 115 (1999), pp. 527–548.
[10] A. Dasgupta and G. Kallianpur, Multiple fractional integrals, Probability Theory and Related Fields, 115 (1999), pp. 505–525.
[11] F. Baudoin and D. Nualart, Equivalence of Volterra processes, Stochastic Processes and their Applications, 107 (2003), pp. 327–350.
[12] H. Gilsing and T. Sottinen, Power series expansions for fractional Brownian motions, Theory of Stochastic Processes, 9 (2003), pp. 38–49.
[13] T. Hida and M. Hitsuda, Gaussian Processes, vol.
120 of Translations of Mathematical Monographs, AMS, Providence, 1993.
[14] S. Huang and S. Cambanis, Stochastic and multiple Wiener integrals for Gaussian processes, The Annals of Probability, 6 (1978), pp. 585–614.
[15] K. Itô, Multiple Wiener integral, J. Math. Soc. Japan, 3 (1951), pp. 157–169.
[16] I. Kruk and F. Russo, Malliavin–Skorohod calculus and Paley–Wiener integral for covariance singular processes, arXiv:1011.6478, (2010).
[17] I. Kruk, F. Russo, and C. Tudor, Wiener integrals, Malliavin calculus and covariance structure measure, Journal of Functional Analysis, 249 (2007), pp. 92–142.
[18] J. Lebovits, Stochastic calculus with respect to Gaussian processes, arXiv:1408.1020, (2014).
[19] P. Lei and D. Nualart, Stochastic calculus for Gaussian processes and applications to hitting times, Communications in Stochastic Analysis, 6(3) (2012), pp. 379–402.
[20] Y. Mishura, Stochastic Calculus for Fractional Brownian Motion and Related Processes, vol. 1929 of Lecture Notes in Mathematics, Springer-Verlag, Berlin, 2008.
[21] O. Mocioalca and F. Viens, Skorohod integration and stochastic calculus beyond the fractional Brownian scale, Journal of Functional Analysis, 222 (2005), pp. 385–434.
[22] I. Nourdin and G. Peccati, Stein's method and exact Berry–Esseen asymptotics for functionals of Gaussian fields, The Annals of Probability, 37 (2009), pp. 2231–2261.
[23] D. Nualart, The Malliavin Calculus and Related Topics, Probability and Its Applications, Springer, 2006.

[24] V. Pérez-Abreu and C. Tudor, Multiple stochastic fractional integrals: a transfer principle for multiple stochastic fractional integrals, Bol. Soc. Mat. Mexicana, 8 (2002), pp. 187–203.
[25] V. Pipiras and M. Taqqu, Are classes of deterministic integrands for fractional Brownian motion on an interval complete?, Bernoulli, 7 (2001), pp. 873–879.
[26] T. Sottinen, On Gaussian processes equivalent in law to fractional Brownian motion, J. Theoret. Probab., 17 (2004), pp. 309–325.
[27] T. Sottinen and L. Viitasaari, Parameter estimation for the Langevin equation with stationary-increment Gaussian noise, arXiv:1603.00390, (2016).
[28] T. Sottinen and A. Yazigi, Generalized Gaussian bridges, Stoch. Process. Appl., 124 (2014), pp. 3084–3105.

Tommi Sottinen, Department of Mathematics and Statistics, University of Vaasa, P.O. Box 700, FIN-65101 Vaasa, FINLAND E-mail address: [email protected]

Lauri Viitasaari, Department of Mathematics and System Analysis, Aalto University School of Science, Helsinki, P.O. Box 11100, FIN-00076 Aalto, FINLAND E-mail address: [email protected]