
On the Fractional Stochastic Integration for Random Non-Smooth Integrands


arXiv:1510.01207v8 [math.PR] 19 Apr 2020

Nikolai Dokuchaev

School of Electrical Engineering, Computing and Mathematical Sciences, Curtin University

Submitted: October 5, 2015. Revised: April 19, 2020

Abstract

The paper suggests a way of stochastic integration of random integrands with respect to fractional Brownian motion with the Hurst parameter H > 1/2. The integral is defined initially on the processes that are "piecewise" predictable on a short horizon. Then the integral is extended on a wide class of square integrable adapted random processes. This class is described via a mild restriction on the growth rate of the conditional mean square error of the forecast on an arbitrarily short horizon given current observations; differentiability or a Hölder property of any kind or degree is not required for the integrand. The suggested integration can be interpreted as foresighted integration for integrands featuring corresponding restrictions on the forecasting error. This stochastic integration is based on Itô's integration and does not involve Malliavin calculus or Wick products. In addition, it is shown that these integrals depend continuously on H at H = 1/2 + 0.

Key words: stochastic integration, fractional Brownian motion, random integrands, Hurst parameter, forecast error.

Mathematics Subject Classification (2010): 60G22

1 Introduction

The paper considers stochastic integration of random integrands with respect to fractional Brownian motion. These integrals can be defined using different approaches; see review and discussion in [1, 2, 10, 13, 14, 15, 16, 17, 19, 24, 26]. This integration has many applications in statistical modelling, especially for quantitative finance; see, e.g., [3, 4, 6, 8, 9, 12, 21, 22, 23]. Special statistical inference methods were developed for these models; see, e.g., [11, 18, 20, 27, 28, 29, 30, 31, 32, 33, 35]. Naturally, the integral can be defined as a Riemann sum for integrands that are piecewise constant in time; the special problem is the extension on more general classes of integrands. There is an approach based on the so-called Wick product [3, 4, 6, 9, 17] rather than on Riemann sums. This approach allows integrands of quite general type, but the features of the Wick product make the corresponding integrals quite distinctive from the integrals based on the Riemann sums.

Currently, stochastic integrals with respect to the fractional Brownian motion B_H with a Hurst parameter H ∈ (1/2, 1) are defined for random integrands in the following cases.

(i) The integral is defined for integrands that are pathwise Hölder with index p > 1 − H; see, e.g., Theorem 21 in [19] and [13, 35].

(ii) The integral is defined pathwise for integrands that have q-bounded variation with q < 1/(1 − H); see, e.g., [34, 7].

(iii) The integral is defined as a Skorohod integral for integrands γ such that ∇γ is L_p-integrable for p > (H − 1/2)^{−1}, where ∇ is the Gross–Sobolev derivative (Theorem 3.6 in [15] (2003) or Theorem 6.2 in [16]). This approach is based on anticipating integrals (see, e.g., [3, 9, 14, 17], and the review in [16]). It can be noted that this requires certain differentiability of the integrand, in the sense of existence of ∇g or of the fractional derivative [1].

We exclude from this list the integrals based on the Wick product and the integrals for piecewise constant integrands.

In this paper, we readdress stochastic integration of random integrands with respect to fractional Brownian motion. We suggest an integration scheme that extends the class of admissible random integrands known in the literature. In particular, we show that the stochastic integral with respect to the fractional Brownian motion B_H with H ∈ (1/2, 1) is well defined on a wide class of L_2-integrable processes with a mild restriction on the growth rate of the conditional variance of a short term forecast. It is not required that the integrands g satisfy a Hölder condition, or have finite p-variation, or that ∇γ is L_p-integrable, or that a fractional derivative exists. The description of this class does not require the Malliavin calculus used in [15, 16] and does not use derivatives of any kind. We use a modification of the classical Riemann sums. Instead of the standard extension of the Riemann sums from the set of piecewise constant integrands, we use an extension of different sums from processes that are "piecewise predictable" on a short horizon and are not necessarily piecewise constant. More precisely, these integrands are adapted to the filtration generated by the observations being frozen at grid time points.
In other words, this "piecewise predictable" class includes all integrands that are predictable without error on a fixed time horizon that can be arbitrarily short. The corresponding stochastic integral is represented via sums of integrals of two different types: one type is a standard Itô integral, and the other type is a Lebesgue integral for random integrands.

In the second step, we extend this integral on a wide class of L_2-integrable processes (Theorem 3.1 below); the resulting integral is denoted as ∫ γ d_F B_H. The corresponding condition allows a simple formulation that does not require the Malliavin calculus used in [15, 16]. This

theorem implies a priori estimates of the stochastic integral via a norm of a random integrand (Corollary 3.1). Furthermore, it is shown that the stochastic integrals depend continuously on H at H = 1/2 + 0 under some additional mild restrictions on the growth rate of the conditional variance of the future values given current observations (Theorem 4.1 below).

The paper is organized as follows. Section 2 presents some definitions. In Section 3, we present the definition of the new type of integral, some convergence results, and a priori estimates. In Section 4, we establish some continuity of the new integral with respect to a variable Hurst parameter. The proofs are given in Section 5.

2 Some definitions

We are given a probability space (Ω, F, P), where Ω is a set of elementary events, F is a complete σ-algebra of events, and P is a probability measure.

We assume that {B_H(t)}_{t∈R} is a fractional Brownian motion with the Hurst parameter H ∈ (1/2, 1) defined as described in [26] such that B_H(0) = 0 and

\[ B_H(t) = \int_{-\infty}^t f(t, r)\, dB(r), \quad (2.1) \]

where t ≥ 0 and

\[ f(t, r) \stackrel{\Delta}{=} c_H (t - r)^{H-1/2}\, \mathbb{I}_{r \ge 0} + c_H \left( (t - r)^{H-1/2} - (-r)^{H-1/2} \right) \mathbb{I}_{r < 0}. \quad (2.2) \]

Here c_H = 1/Γ(H + 1/2), Γ is the Gamma function, I is the indicator function, and {B(t)}_{t∈R} is a standard Brownian motion such that B(0) = 0; we denote by ∫ · dB the standard Itô integration.

Let d_H =Δ c_H (H − 1/2).

For T > 0, τ ∈ [0, T], and g ∈ L_2(0, T), set

\[ G_H(\tau, T, g) \stackrel{\Delta}{=} d_H \int_\tau^T (t - \tau)^{H-3/2} g(t)\, dt. \quad (2.3) \]

By the property of the Riemann–Liouville integral, there exists c > 0 such that

\[ \| G_H(\cdot, T, g) \|_{L_2(s,T)} \le c\, \| g \|_{L_2(s,T)}. \quad (2.4) \]

It can be noted that this c is independent of H ∈ (1/2, 1).

Let {G_t} be the filtration generated by the process B(t).

Let T > 0 be given.
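For readers who want to experiment with (2.3) numerically, the following sketch (ours, not from the paper; the helper names c_H, d_H, and G are our own) evaluates G_H(τ, T, g) by a midpoint rule after the substitution u = (t − τ)^{H−1/2}, which removes the integrable singularity at t = τ. For g ≡ 1 there is the closed form G_H(τ, T, 1) = c_H (T − τ)^{H−1/2} to check against.

```python
import math

def c_H(H):
    # c_H = 1 / Gamma(H + 1/2), as in the definition after (2.2)
    return 1.0 / math.gamma(H + 0.5)

def d_H(H):
    # d_H = c_H (H - 1/2)
    return c_H(H) * (H - 0.5)

def G(tau, T, g, H, n=20000):
    # G_H(tau, T, g) = d_H * int_tau^T (t - tau)^{H-3/2} g(t) dt,
    # computed in the variable u = (t - tau)^{H-1/2} (no singularity in u)
    h = H - 0.5
    U = (T - tau) ** h
    du = U / n
    acc = 0.0
    for k in range(n):            # composite midpoint rule in u
        u = (k + 0.5) * du
        acc += g(tau + u ** (1.0 / h)) * du
    return d_H(H) / h * acc       # d_H / h = c_H

# for g = 1, G_H(0, 1, 1) = c_H * 1^{H-1/2} = c_H
val = G(0.0, 1.0, lambda t: 1.0, 0.7)
```

A second closed form, G_H(0, 1, t ↦ t) = d_H/(H + 1/2), gives an independent check of the quadrature.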

Let L_{22} be the linear normed space formed as the completion in the L_2-norm of the set of all G_t-adapted bounded measurable processes γ(t), t ∈ [0, T], with the norm ‖γ‖_{L_{22}} = (E ∫_0^T γ(t)² dt)^{1/2}.

For ε > 0, let X_ε be the set of all γ ∈ L_{22} such that there exist an integer n > 0 and a set of non-random times T = {T_k}_{k=1}^n ⊂ [0, T], where T_0 = 0, T_n = T, and 0 < T_{k+1} − T_k ≤ ε, such that γ(t) is G_{T_k}-measurable for t ∈ [T_k, T_{k+1}).

In particular, the set X_ε includes all γ ∈ L_{22} such that γ(t) is G_{t−ε}-measurable for all t ∈ [0, T].

Let X =Δ ∪_{ε>0} X_ε.

For brevity, we sometimes denote L_p(Ω, G_T, P) by L_p(Ω), p ≥ 1.

Let X_{ε,PC} be the set of all γ ∈ L_{22} such that there exist an integer n > 0 and a set of non-random times T = {T_k}_{k=1}^n ⊂ [0, T], where T_0 = 0, T_n = T, and 0 < T_{k+1} − T_k ≤ ε, such that γ(t) = γ(T_k) for t ∈ [T_k, T_{k+1}).

3 The main result: integration for random integrands

For any γ ∈ X_{ε,PC}, it is natural to define the stochastic integral with respect to B_H in L_1(Ω, G_T, P) as the Riemann sum

\[ \sum_{k=0}^{n-1} \gamma(T_k) \left( B_H(T_{k+1}) - B_H(T_k) \right). \]

If γ ∈ L_{22} is such that this sum has a limit in probability as n → +∞, and this limit is independent of the choice of {T_k}_{k=1}^n, then we call this limit the integral ∫_0^T γ(t) d_{RS} B_H(t).

The classes of admissible deterministic integrands γ are known; see, e.g., [28, 29]. However, there are some difficulties with identifying classes of admissible random γ.

The present paper suggests a modification of the stochastic integral based on the extension from X, i.e., from the set of random functions that are not necessarily piecewise constant but rather "piecewise predictable". This modification allows us to establish a new extended class of random integrands that are not necessarily "piecewise predictable".
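For intuition, the Riemann sum above is easy to reproduce in simulation. The sketch below (ours; fbm_path and riemann_sum are assumed helper names, not notation from the paper) draws an exact fBm sample on a grid via a Cholesky factorization of the fBm covariance and forms the sum for a piecewise constant integrand; for γ ≡ 1 the sum telescopes to B_H(T).

```python
import numpy as np

def fbm_path(H, n, T=1.0, seed=0):
    # exact fBm sample on a uniform grid via Cholesky of the covariance
    # R(u, v) = (u^{2H} + v^{2H} - |u - v|^{2H}) / 2
    t = np.linspace(T / n, T, n)
    cov = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
                 - np.abs(t[:, None] - t[None, :]) ** (2 * H))
    rng = np.random.default_rng(seed)
    return np.concatenate([[0.0], np.linalg.cholesky(cov) @ rng.standard_normal(n)])

def riemann_sum(gamma_on_grid, B):
    # sum_k gamma(T_k) (B_H(T_{k+1}) - B_H(T_k)) for piecewise constant gamma
    return float(np.sum(gamma_on_grid * np.diff(B)))

B = fbm_path(H=0.7, n=256)
total = riemann_sum(np.ones(256), B)   # gamma = 1: the sum telescopes to B_H(T)
```

For a random adapted integrand one would replace `np.ones(256)` by values measurable with respect to the path up to each grid point; the difficulties discussed in the text concern the limit of such sums as the grid is refined.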

The case of non-random integrands

As the first step, let us construct a stochastic integral over the time interval [s, T] for G_s-measurable integrands γ ∈ L_2(Ω, G_s, P, L_2(s, T)). These integrands can be regarded as non-random on the conditional probability space given G_s.

By (2.1), we have that

\[ B_H(t) = W_H(t) + R_H(t), \]

where, for t > s,

\[ W_H(t) = \int_s^t f(t, r)\, dB(r), \qquad R_H(t) = \int_{-\infty}^s f(t, r)\, dB(r). \]

The processes W_H(t) and R_H(t) are independent Gaussian processes with zero mean. In addition, the process W_H is {G_t}-adapted, R_H(t) is G_s-measurable for all t > s, and W_H(t) is independent of G_s for all t > s.

To define integration with respect to dB_H for G_s-measurable integrands γ ∈ L_2(Ω, G_s, P, L_2(s, T)), we define integration with respect to W_H and R_H separately.

First, it can be noted that if we had f'_t(t, ·) ∈ L_2(s, t), then integration with respect to W_H would be straightforward, since we would be able to find the Itô differential dW_H(t) as

\[ dW_H(t) = f(t, t)\, dB(t) + \left( \int_s^t f'_t(t, r)\, dB(r) \right) dt = 0 \cdot dB(t) + \left( \int_s^t f'_t(t, r)\, dB(r) \right) dt, \quad (3.1) \]

which would allow us to accept ∫_s^T γ(t) [∫_s^t f'_t(t, r) dB(r)] dt as ∫_s^T γ(t) dW_H(t). However, the expression (3.1) cannot be regarded as an Itô differential, since f'_t(t, ·) ∉ L_2(s, t). Nevertheless, we will be using a modification of this version of the integral with respect to W_H, amended with some approximations to overcome the insufficient integrability of f'_t(t, ·).

For ε > 0, let

\[ W_{H,ε}(t) = \int_s^t f(t, r - ε)\, dB(r). \]

In this case, there exists a usual Itô differential

\[ dW_{H,ε}(t) = f(t, t - ε)\, dB(t) + \left( \int_s^t f'_t(t, r - ε)\, dB(r) \right) dt, \]

representing a "regularized" approximation of the right hand part of (3.1).

Proposition 3.1. For any γ ∈ L_2(Ω, G_s, P, L_2(s, T)),

\[ \lim_{ε \to 0} \int_s^T γ(t)\, dW_{H,ε}(t) = \int_s^T G_H(τ, T, γ)\, dB(τ); \]

the limit holds in L_2(Ω, G_T, P).

This result justifies the following definition.

Definition 3.1. We regard the limit in Proposition 3.1 as the stochastic integral with respect to W_H, and we denote it as ∫_s^T γ(t) d_F W_H(t), i.e.

\[ \int_s^T γ(t)\, d_F W_H(t) \stackrel{\Delta}{=} \int_s^T G_H(τ, T, γ)\, dB(τ). \]

It appears that this choice for the case of non-random integrands leads to a new version of a stochastic integral for random integrands constructed below.
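As a quick consistency check (ours, not stated at this point in the paper), applying Definition 3.1 to γ ≡ 1 and using d_H = c_H (H − 1/2) gives

```latex
\int_s^T 1 \, d_F W_H(t)
  = \int_s^T G_H(\tau, T, 1)\, dB(\tau)
  = d_H \int_s^T \Bigl( \int_\tau^T (t-\tau)^{H-3/2}\, dt \Bigr) dB(\tau)
  = c_H \int_s^T (T-\tau)^{H-1/2}\, dB(\tau),
```

which is the part of the representation (2.1)-(2.2) of B_H(T) driven by the increments of B on (s, T); this term reappears as J_1 in the proof of Proposition 3.4.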

Proposition 3.2. (i) R_H(t) is G_s-measurable for all t > s and differentiable in t > s in the sense that

\[ \lim_{δ \to 0} E \left| \frac{R_H(t + δ) - R_H(t)}{δ} - DR_H(t) \right|^2 = 0, \quad (3.2) \]

where

\[ DR_H(t) \stackrel{\Delta}{=} \int_{-\infty}^s f'_t(t, q)\, dB(q). \]

The process DRH is such that

(a) DR_H(t) is G_s-measurable for all t > s;

(b) for any t > s,

\[ E\, DR_H(t)^2 = \frac{d_H^2}{2 - 2H} (t - s)^{2H-2}, \quad (3.3) \]

\[ E \int_s^t DR_H(r)^2\, dr = \frac{c_H d_H}{2(2 - 2H)} (t - s)^{2H-1}. \quad (3.4) \]

Definition 3.2. For s ∈ [0, T) and γ ∈ L_2(Ω, G_s, P, L_2(s, T)), we define the integral

\[ \int_s^T γ(t)\, d_F B_H(t) \stackrel{\Delta}{=} \int_s^T γ(t)\, d_F W_H(t) + \int_s^T γ(t)\, DR_H(t)\, dt = \int_s^T G_H(τ, T, γ)\, dB(τ) + \int_s^T γ(t)\, DR_H(t)\, dt. \]

The first integral in the sum above is described in Definition 3.1, and the second one is a pathwise

Lebesgue integral on [s, T ]. The sum belongs to L1(Ω, GT , P) thanks to Propositions 3.1 and 3.2.

Proposition 3.3. Under the assumptions and notations of Definition 3.2,

\[ E \left| \int_s^T γ(t)\, d_F W_H(t) \right|^2 \le c\, E \int_s^T γ(t)^2\, dt, \]

\[ E \left| \int_s^T γ(t)\, DR_H(t)\, dt \right| \le c \left( E \int_s^T γ(t)^2\, dt \right)^{1/2}, \]

\[ E \left| \int_s^T γ(t)\, d_F B_H(t) \right| \le c \left( E \int_s^T γ(t)^2\, dt \right)^{1/2}, \]

for some c = c(H, T ) > 0.

Remark 3.1. For the purposes of the proofs below, we need stronger estimates for ∫ γ(t) d_F W_H(t) and ∫ γ(t) d_F B_H(t) than the estimates via ∫ γ(t)² dt given in Proposition 3.3. It can be noted that the combined estimates from Proposition 3.3 would lead to the estimate E |I_H(γ)| ≤ const (E ∫_s^T γ(t)² dt)^{1/2}, which is weaker than known estimates [14, 28].

Proposition 3.4. We have that

\[ \int_s^T 1 \cdot d_F B_H(t) = B_H(T) - B_H(s). \]

Extension on piecewise-predictable integrands from Xε

Definition 3.3. Let γ ∈ X_ε, where ε > 0. By the definitions, there exists a finite set Θ of non-random times Θ = {T_k}_{k=0}^n ⊂ [0, T], where n > 0 is an integer, T_0 = 0, T_n = T, and T_{k+1} ∈ (T_k, T_k + ε], such that γ(t) is G_{T_k}-measurable for t ∈ [T_k, T_{k+1}). Let ∫_{T_{k-1}}^{T_k} γ(t) d_F B_H(t) be defined according to Definition 3.2 with the interval [s, T] replaced by [T_{k-1}, T_k]. We call the sum

\[ I_H(γ) = \sum_{k=1}^n \int_{T_{k-1}}^{T_k} γ(t)\, d_F B_H(t) \]

the foresighted integral of γ, and denote it as ∫_0^T γ(t) d_F B_H(t).

The integral in the above definition belongs to L_1(Ω, G_T, P) thanks to Propositions 3.1 and 3.2.

Remark 3.2. It follows from Proposition 3.4 that

\[ \int_0^T γ(t)\, d_F B_H(t) = \int_0^T γ(t)\, d_{RS} B_H(t) \]

for piecewise constant γ ∈ ∪_{ε>0} X_{ε,PC}. However, it appears that the convergence of the Riemann sums requires more restrictions for non-piecewise constant γ than the convergence for the suggested new integral. This is because this approximation is finer than the approximation by piecewise constant functions.

3.1 Extension on random integrands of a general type with a mild restriction on prediction error

Let E_t and Var_t denote the conditional expectation and the conditional variance given G_t, respectively.

For ν > 0 and ε > 0, let Y_{ν,ε} be the set of all processes γ ∈ L_{22} such that

\[ \sup_{τ \in [0,T]} \ \sup_{t \in [τ,\, T \wedge (τ+ε)]} \left[ E\, \mathrm{Var}_τ\, γ(t) \right]^{1/2} / (t - τ)^{1-H+ν} \le C \]

for some C = C(γ) > 0.

It can be noted that E_τ γ(t) can be interpreted as the forecast, at time τ, of γ(t) for t > τ; the forecast is based on observations of the events from G_τ. Respectively, Var_τ γ(t) can be interpreted as the conditional mean-square error of this forecast given G_τ.
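As a toy illustration of this interpretation (ours, not from the paper): for a standard Brownian motion the forecast is E_τ B(t) = B(τ), so the mean-square forecast error is E Var_τ B(t) = t − τ, which a Monte Carlo estimate reproduces.

```python
import numpy as np

rng = np.random.default_rng(1)
tau, t, N = 0.5, 0.6, 200000
B_tau = np.sqrt(tau) * rng.standard_normal(N)             # B(tau)
B_t = B_tau + np.sqrt(t - tau) * rng.standard_normal(N)   # B(t) built from B(tau)
# E_tau B(t) = B(tau), so Var_tau B(t) = E (B(t) - B(tau))^2 = t - tau
forecast_mse = float(np.mean((B_t - B_tau) ** 2))
```

Here forecast_mse should be close to t − tau = 0.1, matching the exponent 1/2 for the root mean-square forecast error of Brownian motion.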

In particular, processes from Yν,ε with ν > 0 feature stronger predictability on the short horizon ε than processes from Y0,ε.

Proposition 3.5. For any ν > 0 and ε> 0, the space Yν,ε with the norm

\[ \| γ \|_{Y_{ν,ε}} \stackrel{\Delta}{=} \| γ \|_{L_{22}} + \sup_{τ \in [0,T]} \ \sup_{t \in [τ,\, T \wedge (τ+ε)]} \left[ E\, \mathrm{Var}_τ\, γ(t) \right]^{1/2} / (t - τ)^{1-H+ν} \]

is a Banach space.

It follows from the definitions that if ε_0 ∈ (0, ε) and γ ∈ Y_{ν,ε}, then γ ∈ Y_{ν,ε_0} and ‖γ‖_{Y_{ν,ε_0}} ≤ ‖γ‖_{Y_{ν,ε}}. Also, it can be seen that X_ε ⊂ Y_{ν,ε} for any ν > 0.

Let Y =Δ ∪_{ν>0, ε>0} Y_{ν,ε}.

Clearly, the set Y is everywhere dense in L_{22}.

Example 3.1. We have that B|_{[0,T]} ∈ Y_{0,ε} but B|_{[0,T]} ∉ Y. On the other hand, B_H|_{[0,T]} ∈ Y_{2H−1,ε} for any ε > 0.
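The membership B_H|_{[0,T]} ∈ Y_{2H−1,ε} can be made plausible numerically: since E Var_τ B_H(t) ≤ E (B_H(t) − B_H(τ))² = (t − τ)^{2H}, the root mean-square forecast error is at most (t − τ)^H = (t − τ)^{1−H+(2H−1)}. The sketch below (ours) checks the increment variance by exact Gaussian simulation.

```python
import numpy as np

# fBm sampled at two times via Cholesky of the covariance
# R(u, v) = (u^{2H} + v^{2H} - |u - v|^{2H}) / 2;
# the increment variance is (t - tau)^{2H}
H, tau, t, N = 0.7, 0.5, 0.6, 100000
times = np.array([tau, t])
cov = 0.5 * (times[:, None] ** (2 * H) + times[None, :] ** (2 * H)
             - np.abs(times[:, None] - times[None, :]) ** (2 * H))
L = np.linalg.cholesky(cov)
rng = np.random.default_rng(2)
samples = L @ rng.standard_normal((2, N))
inc_var = float(np.mean((samples[1] - samples[0]) ** 2))
```

With H = 0.7, inc_var should be close to 0.1^{1.4} ≈ 0.0398, i.e., the exponent of the forecast error is H rather than 1/2.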

For γ ∈ L_{22}, let Z(γ) be the set of processes {γ_n ∈ X, n = 0, 1, 2, ...} such that γ_n(t) = E_{T_k} γ(t) for t ∈ [T_k, T_{k+1}), where k = 0, 1, ..., 2^n and where T_k = kT/2^n.

Theorem 3.1. (i) Let γ ∈ Y, and let {γ_n}_{n=1}^∞ = Z(γ). Then the sequence {I_H(γ_n)}_{n=1}^∞ converges to a limit in L_1(Ω, G_T, P) uniformly over H ∈ (1/2, c) for any c ∈ (1/2, 1). Let I_H(γ) denote this limit.

(ii) For any ε > 0, H ∈ (1/2, 1), and ν > 0, the operator I_H(·) : Y_{ν,ε} → L_1(Ω, G_T, P) defined in statement (i) is a linear continuous operator. For any ε > 0, the norms of these operators are bounded in H ∈ (1/2, c), for any c ∈ (1/2, 1).

We will regard IH (γ) defined in Theorem 3.1 as the stochastic integral

\[ I_H(γ) = \int_0^T γ(t)\, d_F B_H(t), \qquad γ ∈ Y. \]

Corollary 3.1. For any ε > 0 and ν > 0, there exists a constant c > 0 depending on T, ε, ν only such that

\[ E \left| \int_0^T γ(t)\, d_F B_H(t) \right| \le c\, \| γ \|_{Y_{ν,ε}} \qquad \forall γ ∈ Y_{ν,ε}. \]

Corollary 3.1 follows immediately from Theorem 3.1.

For ν > 0 and r > 1, let H_{ν,r} be the set of all γ ∈ L_{22} such that ‖γ(s) − γ(t)‖_{L_r(Ω)} ≤ C |t − s|^{1−H+ν} for all s, t ∈ [0, T], for some C = C(γ) > 0.

It can be seen that H_{ν,r} ⊂ Y_{ν,ε} for r ≥ 2, for all ε > 0.

For γ ∈ H_{ν,r}, let Z̄(γ) be the set of processes {γ_n ∈ X, n = 0, 1, 2, ...} such that, for t ∈ [T_k, T_{k+1}), either γ_n(t) = γ(T_k) or γ_n(t) = E_{T_k} γ(t), where k = 0, 1, ..., 2^n and where T_k = kT/2^n.

Proposition 3.6. For any r ∈ (1, 2] and ν > 0, the conclusions of Theorem 3.1 hold for γ ∈ H_{ν,r} if Y, Y_{ν,ε}, and Z(γ) are replaced by ∪_{ν>0} H_{ν,r}, H_{ν,r}, and Z̄(γ), respectively.

4 Continuity of the foresighted integral in H → 1/2 + 0

The following theorem describes some classes of random integrands where the stochastic integrals are continuous with respect to the Hurst parameter as H → 1/2 + 0.

Theorem 4.1. For any γ ∈ Y,

\[ E \left| \int_0^T γ(t)\, d_F B_H(t) - \int_0^T γ(t)\, dB(t) \right| \to 0 \quad \text{as } H \to 1/2 + 0. \quad (4.1) \]

In fact, the question about continuity at H → 1/2 of stochastic integrals with respect to dB_H is quite interesting. In particular, it is known that

\[ E \int_0^T B_H(t)\, d_{RS} B_H(t) \not\to E \int_0^T B(t)\, dB(t) \quad \text{as } H \to 1/2 + 0. \quad (4.2) \]

This follows from the equality

\[ \int_0^T B(t)\, dB(t) = \frac{B(T)^2 - T}{2} \]

combined with the equalities [32]

\[ \int_0^T B_H(t)\, d_{RS} B_H(t) = \frac{B_H(T)^2}{2}, \qquad H \in (1/2, 1). \]

Remark 4.1. Theorem 4.1 does not contradict the divergence stated in (4.2), since B|_{[0,T]} ∉ Y. On the other hand, this theorem ensures that, for any H_1 > 1/2,

\[ E \int_0^T B_{H_1}(t)\, d_F B_H(t) \to E \int_0^T B_{H_1}(t)\, dB(t) \quad \text{as } H \to 1/2 + 0, \]

since B_{H_1}|_{[0,T]} ∈ Y.
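The Itô correction behind (4.2) is easy to see in simulation (the sketch is ours): left-point Riemann sums of ∫_0^T B dB satisfy the exact algebraic identity Σ_k B(t_k) ΔB_k = (B(T)² − Σ_k (ΔB_k)²)/2, and the quadratic variation Σ_k (ΔB_k)² converges to T, producing the −T/2 term that the symmetric identity for B_H lacks.

```python
import numpy as np

# left-point Riemann sums of int_0^T B dB for standard Brownian motion
rng = np.random.default_rng(3)
T, n = 1.0, 100000
dB = np.sqrt(T / n) * rng.standard_normal(n)    # Brownian increments
B = np.concatenate([[0.0], np.cumsum(dB)])      # the path on the grid
left_sum = float(np.sum(B[:-1] * dB))

# exact algebra: sum B_k dB_k = (B(T)^2 - sum dB_k^2) / 2
assert abs(left_sum - (B[-1] ** 2 - float(np.sum(dB ** 2))) / 2) < 1e-8
# quadratic variation sum dB_k^2 -> T, so left_sum -> (B(T)^2 - T) / 2
assert abs(float(np.sum(dB ** 2)) - T) < 0.05
```

For H > 1/2 the quadratic variation of B_H on [0, T] vanishes as the grid is refined, which is why the Riemann sums of ∫ B_H d_RS B_H converge to B_H(T)²/2 without a correction term.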

5 Proofs

Consider the derivative

\[ f'_t(t, r) = d_H (t - r)^{H-3/2}, \qquad t > r. \quad (5.1) \]

Since H − 3/2 ∈ (−1, −1/2), it follows that 2(H − 3/2) ∈ (−2, −1) and ‖f'_t(t, ·)‖_{L_2(−∞,s)} < +∞ for all s < t.

Proof of Proposition 3.1. By the restrictions on γ and by (2.4), we have that G_H(τ, T, γ) is G_s-measurable for any τ, that ∫_s^T dB(τ) G_H(τ, T, γ) is well defined as an Itô integral, and that ∫_s^T γ(t) dW_{H,ε}(t) is also well defined as the Itô integral

\[ \int_s^T γ(t)\, dW_{H,ε}(t) = \int_s^T γ(t) f(t, t - ε)\, dB(t) + d_H \int_s^T γ(t)\, dt \int_s^t (t - τ + ε)^{H-3/2}\, dB(τ) = \int_s^T γ(t) f(t, t - ε)\, dB(t) + d_H \int_s^T dB(τ) \int_τ^T (t - τ + ε)^{H-3/2} γ(t)\, dt, \quad (5.2) \]

i.e.

\[ \int_s^T γ(t)\, dW_{H,ε}(t) = \int_s^T γ(t) f(t, t - ε)\, dB(t) + \int_s^T dB(τ)\, G_{H,ε}(τ, T, γ), \quad (5.3) \]

where G_{H,ε}(τ, T, γ) =Δ d_H ∫_τ^T (t − τ + ε)^{H−3/2} γ(t) dt. Furthermore, let

\[ D_ε \stackrel{\Delta}{=} \int_s^T dB(τ)\, G_H(τ, T, γ) - \int_s^T γ(t)\, dW_{H,ε}(t). \]

We have that D_ε = D̂_ε − D̄_ε, where D̄_ε =Δ ∫_s^T γ(t) f(t, t − ε) dB(t) and where

\[ \widehat D_ε \stackrel{\Delta}{=} \int_s^T dB(τ) \left[ G_H(τ, T, γ) - G_{H,ε}(τ, T, γ) \right]. \]

Clearly, E D̄_ε² → 0 as ε → 0. Let us show that E D̂_ε² → 0 as ε → 0.

It suffices to consider ε = ε_j for a monotonically decreasing sequence {ε_j}_{j=1}^∞.

Assume first that γ(t) ≥ 0 a.e. In this case, (t − τ + ε_i)^{H−3/2} γ(t) ≥ (t − τ + ε_j)^{H−3/2} γ(t) ≥ 0 a.e. if i > j, i.e., ε_i < ε_j. It follows that G_H(τ, T, γ) − G_{H,ε}(τ, T, γ) ≥ 0 a.s. for almost all τ. It also follows that ‖G_{H,ε}(·, T, γ)‖_{L_2(s,T)} ≤ c ‖γ‖_{L_2(s,T)} with the same c as in (2.4).

We have that G_H(τ, T, γ) − G_{H,ε}(τ, T, γ) → 0 a.s. for almost all τ as ε = ε_j → 0 and that 0 ≤ G_{H,ε}(τ, T, γ) ≤ G_H(τ, T, γ) for a.e. τ. By the Lebesgue Dominated Convergence Theorem, it follows that E D̂_ε² → 0 as ε → 0.

The case where γ ≤ 0 can be considered similarly. In the case of a sign variable γ, we apply the proof above for γ_+ = γ I_{γ≥0} and for γ_− = −γ I_{γ≤0} separately. Then the proof for γ = γ_+ − γ_− follows. This completes the proof of Proposition 3.1. □

Proof of Proposition 3.2. Let us prove statement (i). We need to verify the properties related to the differentiability of R_H(t). Let t > s and r < s. For δ such that t + δ > s, set

\[ f^{(1)}(t, r, δ) \stackrel{\Delta}{=} \frac{f(t + δ, r) - f(t, r)}{δ}. \]

\[ | f'_t(t, r) - f^{(1)}(t, r, δ) | \le \sup_{h \in (t, t+δ)} | f'_t(t, r) - f'_t(h, r) | \le δ \sup_{h \in (t, t+δ)} | f''_{tt}(h, r) |, \quad (5.4) \]

where

\[ f''_{tt}(h, r) = d_H (H - 3/2)(h - r)^{H-5/2}. \]

For δ > 0, we have that

\[ \sup_{h \in (t, t+δ)} | f''_{tt}(h, r) | \le d_H |H - 3/2|\, (t - r)^{H-5/2}. \]

For δ ∈ (−(t − s)/2, 0], we have that

\[ \sup_{h \in (t+δ, t)} | f''_{tt}(h, r) | \le d_H |H - 3/2|\, (t + δ - r)^{H-5/2}. \]

It follows that ‖f''_{tt}(t, ·)‖_{L_2(−∞,s)} < +∞.

By (5.4), it follows that, for all t > s,

\[ DR_H(t) = \lim_{δ \to 0} \frac{R_H(t + δ) - R_H(t)}{δ} = \int_{-\infty}^s f'_t(t, r)\, dB(r), \]

where the limit is understood in the mean square sense as in (3.2).

Further, we have that

\[ E\, DR_H(t)^2 = \int_{-\infty}^s | f'_t(t, r) |^2\, dr = d_H^2 \int_{-\infty}^s (t - r)^{2H-3}\, dr = \frac{d_H^2}{2 - 2H} (t - r)^{2H-2} \Big|_{r=-\infty}^{r=s} = \frac{d_H^2}{2 - 2H} (t - s)^{2H-2}. \quad (5.5) \]

Hence, for t > s,

\[ E \int_s^t DR_H(r)^2\, dr = \frac{d_H^2}{2 - 2H} \int_s^t (r - s)^{2H-2}\, dr = \frac{d_H^2}{(2 - 2H)(2H - 1)} (t - s)^{2H-1} = \frac{c_H d_H}{2(2 - 2H)} (t - s)^{2H-1}. \]

This completes the proof of Proposition 3.2. □

Proof of Proposition 3.3 follows from (2.4) and Proposition 3.2. □

Proof of Proposition 3.4. Let

\[ h \stackrel{\Delta}{=} H - 1/2. \]

By the definitions,

\[ \int_s^T 1 \cdot d_F B_H(t) = J_1 + J_2, \]

where

\[ J_1 \stackrel{\Delta}{=} \int_s^T 1 \cdot d_F W_H(t) = d_H \int_s^T dB(τ) \int_τ^T (t - τ)^{h-1}\, dt = \int_s^T dB(τ)\, G_H(τ, T, 1) \]

and

\[ J_2 \stackrel{\Delta}{=} \int_s^T 1 \cdot DR_H(t)\, dt = \int_s^T dt \int_{-\infty}^s f'_t(t, r)\, dB(r) = d_H \int_s^T dt \int_{-\infty}^s (t - τ)^{h-1}\, dB(τ). \]

T T s T s J =∆ 1 DR (t)dt = dt f ′(t, r)dB(r)= d dt (t τ)h−1dB(τ). 2 · H t H − Zs Zs Z−∞ Zs Z−∞ We have that

T J = c dB(τ)(T τ)h 1 H − Zs and

s T s J = d dB(τ) (t τ)h−1dt = c dB(τ)[(T τ)h (s τ)h]. 2 H − H − − − Z−∞ Zs Z−∞

Hence

\[ \int_s^T 1 \cdot d_F B_H(t) = J_1 + J_2 = c_H \int_s^T dB(τ)(T - τ)^h + c_H \int_{-\infty}^s dB(τ) \left[ (T - τ)^h - (s - τ)^h \right]. \]

It follows from the well known properties of fractional Brownian motions that this value is B_H(T) − B_H(s). Let us show this for the sake of completeness. We have that

\[ B_H(T) - B_H(s) = c_H \int_0^T dB(τ)(T - τ)^h + c_H \int_{-\infty}^0 dB(τ) \left[ (T - τ)^h - (-τ)^h \right] - c_H \int_0^s dB(τ)(s - τ)^h - c_H \int_{-\infty}^0 dB(τ) \left[ (s - τ)^h - (-τ)^h \right] = c_H \int_s^T dB(τ)(T - τ)^h + c_H \int_{-\infty}^s dB(τ) \left[ (T - τ)^h - (s - τ)^h \right]. \]

This completes the proof of Proposition 3.4. □

Proof of Proposition 3.5. We denote by ℓ̄_1 the Lebesgue measure in R, and we denote by B̄_1 the σ-algebra of Lebesgue sets in R. Let D = {(t, r) : 0 ≤ r ≤ t ≤ T}.

Let V_1 = L_2([0, T], B̄_1, ℓ̄_1, L_2(Ω, G_0, P)), and let V_2 be the linear normed space of all measurable functions (classes of equivalency) g : D × Ω → R such that g(t, r) ∈ L_2(Ω, G_r, P) for a.e. r, t with r ≤ t, with the norm

\[ \| g \|_{V_2} = \left( E \int_0^T dt \int_0^t g(t, r)^2\, dr \right)^{1/2} + \sup_{τ \in [0,T]} \ \sup_{t \in [τ,\, (τ+ε) \wedge T]} \left( E \int_τ^t g(t, r)^2\, dr \right)^{1/2} / (t - τ)^{1-H+ν}. \]

By Clark's theorem, it follows that γ ∈ Y_{ν,ε} can be represented as

\[ γ(t) = E_0 γ(t) + \int_0^t g(t, r)\, dB(r) \]

for some g ∈ V_2; here E_0 γ(t) ∈ V_1. In this case, Var_τ γ(t) = E_τ ∫_τ^t g(t, r)² dr. To prove the proposition, it suffices to observe that the space V_1 × V_2 is complete and is in a continuous and continuously invertible bijection with the space Y_{ν,ε}. This completes the proof of Proposition 3.5. □

To prove Theorem 3.1, Proposition 3.6, and Theorem 4.1, we will need some notation. We will be using the functions

\[ ρ(t) \stackrel{\Delta}{=} \int_{-\infty}^0 f'_t(t, r)\, dB(r), \qquad \hat ρ(t, τ) \stackrel{\Delta}{=} \int_0^τ f'_t(t, r)\, dB(r), \quad t > τ > 0. \quad (5.6) \]

In the proofs below, we consider an integer n > 0 and γ_n ∈ X such that there exist some ε > 0 and a set Θ_{γ_n} = {T_k}_{k=1}^n ⊂ [0, T], where T_0 = 0, T_n = T, and T_{k+1} ∈ (T_k, T_k + ε), such that γ_n(t) ∈ L_2(Ω, G_{T_k}, P) for t ∈ [T_k, T_{k+1}).

Let

\[ I_{W,H,k} = \int_{T_{k-1}}^{T_k} γ_n(t)\, d_F W_{H,k}(t), \qquad I_{R,H,k} = \int_{T_{k-1}}^{T_k} γ_n(t)\, DR_{H,k}(t)\, dt, \]

where W_{H,k}, R_{H,k}, and DR_{H,k} are defined similarly to W_H, R_H, and DR_H, with [s, T] replaced by [T_{k-1}, T_k]. Let

\[ I_{W,H}(γ_n) \stackrel{\Delta}{=} \sum_{k=1}^n I_{W,H,k}, \qquad I_{R,H}(γ_n) \stackrel{\Delta}{=} \sum_{k=1}^n I_{R,H,k}. \quad (5.7) \]

Clearly,

\[ I_H(γ_n \mathbb{I}_{[T_{k-1}, T_k)}) = \int_{T_{k-1}}^{T_k} γ_n(t)\, d_F B_H(t) = I_{W,H,k} + I_{R,H,k}, \]

and

\[ I_H(γ_n) = I_{W,H}(γ_n) + I_{R,H}(γ_n). \]

By the definitions,

\[ I_{R,H,k} = \int_{T_{k-1}}^{T_k} γ_n(t)\, DR_k(t)\, dt = \int_{T_{k-1}}^{T_k} γ_n(t) \int_{-\infty}^{T_{k-1}} f'_t(t, s)\, dB(s)\, dt = \int_{T_{k-1}}^{T_k} γ_n(t) \int_{-\infty}^0 f'_t(t, s)\, dB(s)\, dt + \int_{T_{k-1}}^{T_k} γ_n(t) \int_0^{T_{k-1}} f'_t(t, s)\, dB(s)\, dt = \int_{T_{k-1}}^{T_k} γ_n(t)\, ρ(t)\, dt + \int_{T_{k-1}}^{T_k} γ_n(t)\, \hat ρ(t, T_{k-1})\, dt. \]

Hence

\[ I_H(γ_n) = I_{W,H}(γ_n) + \hat I_{R,H}(γ_n) + \bar J_{R,H}(γ_n), \]

where

\[ \hat I_{R,H}(γ_n) = \int_0^T γ_n(t)\, ρ(t)\, dt, \qquad \bar J_{R,H}(γ_n) \stackrel{\Delta}{=} \sum_{k=1}^n J_{R,H,k}, \quad (5.8) \]

\[ J_{R,H,k} = \int_{T_k}^{T_{k+1}} γ_n(t)\, \hat ρ(t, T_k)\, dt. \quad (5.9) \]

For k = 0, ..., n − 1, consider the operators Γ_k(·) : L_2(0, T_{k+1}) → L_2(0, T_{k+1}) such that

\[ Γ_k(\cdot, g) = G_H(\cdot, T_{k+1}, g), \]

i.e.

\[ Γ_k(τ, g) = d_H \int_τ^{T_{k+1}} (t - τ)^{H-3/2} g(t)\, dt. \quad (5.10) \]

By the properties of the Riemann–Liouville integral, ‖Γ_k(·, g)‖_{L_2(T_k, T_{k+1})} ≤ ĉ ‖g‖_{L_2(T_k, T_{k+1})} for some ĉ > 0 that is independent of g ∈ L_2(T_k, T_{k+1}) and H ∈ (1/2, 1).

Lemma 5.1. For any c ∈ (1/2, 1), there exists some C = C(c) > 0 such that, for any γ_n ∈ X and H ∈ (1/2, 1),

\[ E | I_{W,H}(γ_n) | + E | \hat I_{R,H}(γ_n) | \le C \| γ_n \|_{L_{22}}. \quad (5.11) \]

Proof of Lemma 5.1. For k = 1, ..., n, we have that

\[ I_{W,H,k} = d_H \int_{T_{k-1}}^{T_k} γ_n(t)\, dt \int_{T_{k-1}}^t (t - τ)^{H-3/2}\, dB(τ) = d_H \int_{T_{k-1}}^{T_k} dB(τ) \int_τ^{T_k} (t - τ)^{H-3/2} γ_n(t)\, dt = \int_{T_{k-1}}^{T_k} dB(τ)\, Γ_{k-1}(τ, γ_n). \]

The last integral here converges in L_2(Ω, G_T, P). Hence

\[ \| I_{W,H}(γ_n) \|^2_{L_2(Ω)} = E \left( \sum_{k=1}^n I_{W,H,k} \right)^2 = \sum_{k=1}^n E\, I_{W,H,k}^2 = \sum_{k=1}^n E \int_{T_{k-1}}^{T_k} Γ_{k-1}(τ, γ_n)^2\, dτ \le \hat c^2 \sum_{k=1}^n E \int_{T_{k-1}}^{T_k} γ_n(τ)^2\, dτ = \hat c^2 \| γ_n \|^2_{L_{22}}. \]

Further, we have that

\[ E | \hat I_{R,H}(γ_n) | \le \left( E \int_0^T γ_n(t)^2\, dt \right)^{1/2} \left( E \int_0^T ρ(t)^2\, dt \right)^{1/2}. \]

By (3.4), E ∫_0^T ρ(t)² dt ≤ \frac{c_H d_H}{2(2 - 2H)} T^{2H-1}. This completes the proof of Lemma 5.1. □

The following proofs will be given for Theorem 3.1 and Theorem 4.1 simultaneously with the proof of Proposition 3.6. For the sake of the proofs of Theorems 3.1 and 4.1, we assume below that r = 2, p = 2, γ ∈ Y_{ν,ε}, and {γ_n}_{n=1}^∞ = Z(γ). For the sake of the proof of Proposition 3.6, we assume below that r ∈ (1, 2], p = (1 − 1/r)^{−1}, γ ∈ H_{ν,r}, and {γ_n}_{n=1}^∞ = Z̄(γ).

We consider below positive integers n, m → +∞ such that n ≥ m. We assume below that T_k = kT/2^n, k = 0, 1, ..., 2^n. This means that the grid {T_k}_{k=0}^{2^n} is formed as defined for n rather than for m; since n ≥ m, Definition 3.3 is applicable to the integral ∫_0^T γ_m(t) d_F B_H(t) with this grid as well.

We denote

\[ ε_n \stackrel{\Delta}{=} T/2^n = T_{k+1} - T_k, \qquad ε_m \stackrel{\Delta}{=} T/2^m. \]

We assume that m is such that ε_m ≤ ε. This implies that ε_n ≤ ε as well.

We denote by J_{R,k,n} and J_{R,k,m} the corresponding values J_{R,H,k} defined for γ = γ_n and γ = γ_m respectively, obtained using the same grid {T_k}_{k=0}^{2^n}.

Lemma 5.2. The sequence {I_{R,H}(γ_n)}_{n=1}^∞ has a limit in L_1(Ω, G_T, P); it converges to this limit uniformly in H ∈ (1/2, c), for any c ∈ (1/2, 1).

Proof of Lemma 5.2. Clearly,

\[ \| γ_n - γ_m \|^2_{L_{22}} \to 0 \quad \text{as } n, m \to +∞ \quad (5.12) \]

and

\[ \| γ_n - γ_m \|^2_{L_{22}} \to 0 \quad \text{as } m \to +∞ \ \text{uniformly in } n > m. \quad (5.13) \]

By Lemma 5.1, we have that

\[ \| I_{W,H}(γ_n) - I_{W,H}(γ_m) \|^2_{L_2(Ω)} + E | \hat I_{R,H}(γ_n) - \hat I_{R,H}(γ_m) | \to 0 \quad \text{as } n, m \to +∞. \]

This implies that the sequences {I_{W,H}(γ_n)}_{n=1}^∞ and {Î_{R,H}(γ_n)}_{n=1}^∞ have limits in L_1(Ω, G_T, P), and that they converge to these limits uniformly in H ∈ (1/2, c), for any c ∈ (1/2, 1).

Therefore, to prove Lemma 5.2, it suffices to prove that the sequence {J̄_{R,H}(γ_n)}_{n=1}^∞ has a limit in L_1(Ω, G_T, P) as well, and that it converges to this limit uniformly in H ∈ (1/2, c), for any c ∈ (1/2, 1).

Let

\[ ξ_k(t) \stackrel{\Delta}{=} \hat ρ(t, T_k) = d_H \int_0^{T_k} (t - s)^{H-3/2}\, dB(s). \]

We have that

\[ ψ_{n,m,k} \stackrel{\Delta}{=} J_{R,k,n} - J_{R,k,m} = \int_{T_k}^{T_{k+1}} [γ_n(t) - γ_m(t)]\, ξ_k(t)\, dt. \]

Recall that p > 0 is such that 1/p + 1/r = 1. We have that

\[ \| ψ_{n,m,k} \|_{L_1(Ω)} \le \int_{T_k}^{T_{k+1}} \| γ_n(t) - γ_m(t) \|_{L_r(Ω)} \| ξ_k(t) \|_{L_p(Ω)}\, dt. \]

Further, we have that

\[ \| ξ_k(t) \|^2_{L_2(Ω)} = d_H^2 \int_0^{T_k} (t - s)^{2H-3}\, ds = \frac{d_H^2}{2 - 2H} \left[ (t - T_k)^{2H-2} - t^{2H-2} \right], \qquad t \in (T_k, T_{k+1}]. \]

Hence

\[ \int_{T_k}^{T_{k+1}} \| ξ_k(t) \|^2_{L_2(Ω)}\, dt = \frac{d_H^2}{(2 - 2H)(2H - 1)} \left[ (T_{k+1} - T_k)^{2H-1} - T_{k+1}^{2H-1} + T_k^{2H-1} \right] = \frac{c_H d_H}{4 - 4H} \left[ (T_{k+1} - T_k)^{2H-1} - T_{k+1}^{2H-1} + T_k^{2H-1} \right]. \]

Hence

\[ \left( \int_{T_k}^{T_{k+1}} \| ξ_k(t) \|^2_{L_2(Ω)}\, dt \right)^{1/2} \le \bar C_0 C_H ε_n^{H-1/2}, \quad (5.14) \]

where

\[ C_H \stackrel{\Delta}{=} \frac{\sqrt{c_H d_H}}{2 - 2H}, \quad (5.15) \]

and where C̄_0 > 0 is independent of γ, k, and H; it depends on T only.

By the properties of Gaussian distributions, we have that

\[ \| ξ_k(t) \|_{L_p(Ω)} \le C(p) \| ξ_k(t) \|_{L_2(Ω)} \]

for some C(p) > 0. Hence

\[ \int_{T_k}^{T_{k+1}} \| ξ_k(t) \|_{L_p(Ω)}\, dt \le C(p) \int_{T_k}^{T_{k+1}} \| ξ_k(t) \|_{L_2(Ω)}\, dt \le C(p) \left( \int_{T_k}^{T_{k+1}} \| ξ_k(t) \|^2_{L_2(Ω)}\, dt \right)^{1/2} ε_n^{1/2} \le C(p) \bar C_0 C_H ε_n^{H-1/2} ε_n^{1/2} = C(p) \bar C_0 C_H ε_n^H. \quad (5.16) \]

Let

\[ T_d^{(m)} \stackrel{\Delta}{=} ε_m d, \qquad d = 0, 1, ..., 2^m. \]

Here ε_m = T/2^m. Let

\[ τ_m(t) \stackrel{\Delta}{=} \inf\{ T_d^{(m)} : t \in [T_d^{(m)}, T_{d+1}^{(m)}),\ d = 0, 1, ..., 2^m - 1 \}. \]

Clearly, the function τ_m(t) is non-decreasing, and τ_m(t) ≤ τ_n(t).

By the definitions, we have that γ_m(t) = E_{τ_m(t)} γ(t) = E_{τ_m(t)} γ_n(t) and γ_n(t) = E_{τ_n(t)} γ(t). Hence

\[ \| γ_n(t) - γ_m(t) \|_{L_2(Ω)} = \| γ_n(t) - E_{τ_m(t)} γ_n(t) \|_{L_2(Ω)} \le \| γ(t) - E_{τ_m(t)} γ(t) \|_{L_2(Ω)}. \]

For the sake of the proof of Theorem 3.1, we have assumed that γ ∈ Y_{ν,ε}. It follows that

\[ \sup_{t \in [0,T]} \| γ_m(t) - γ_n(t) \|_{L_2(Ω)} \le \sup_{t \in [0,T]} \left[ E\, \mathrm{Var}_{τ_m(t)} γ(t) \right]^{1/2} \le c\, ε_m^{1-H+ν} \| γ \|_{Y_{ν,ε}}, \quad (5.17) \]

where c > 0 is independent of γ and H ∈ (1/2, 1).

Let n = m + 1. In this case, we have that ε_m = 2ε_n. By (5.14) and (5.17), we have that

\[ \| ψ_{k,m+1,m} \|_{L_1(Ω)} \le \int_{T_k}^{T_{k+1}} \| γ_{m+1}(t) - γ_m(t) \|_{L_r(Ω)} \| ξ_k(t) \|_{L_p(Ω)}\, dt \le \sup_{t \in [0,T]} \| γ_{m+1}(t) - γ_m(t) \|_{L_r(Ω)} \int_{T_k}^{T_{k+1}} \| ξ_k(t) \|_{L_p(Ω)}\, dt \le c_ψ C_H ε_m^{1+ν} \| γ \|_{Y_{ν,ε}}, \]

where c_ψ > 0 is independent of γ, k, and H ∈ (1/2, 1). We have that 2^n = 2^{m+1} = 2T/ε_m. Hence

\[ \| \bar J_{R,H}(γ_{m+1}) - \bar J_{R,H}(γ_m) \|_{L_1(Ω)} \le \sum_{k=0}^{2^n - 1} E | ψ_{k,n,m} | \le 2^n c_ψ C_H ε_m^{1+ν} \| γ \|_{Y_{ν,ε}} = 2T ε_m^{-1} c_ψ C_H ε_m^{1+ν} \| γ \|_{Y_{ν,ε}} = c_J C_H (2^{-m})^ν \| γ \|_{Y_{ν,ε}}, \quad (5.18) \]

where c_J > 0 is independent of m, γ, and H, for any ν ≥ 0.

Further, let m ∈ {1, 2, ..., n}. We have that

\[ \bar J_{R,H}(γ_n) - \bar J_{R,H}(γ_m) = \bar J_{R,H}(γ_n) - \bar J_{R,H}(γ_{n-1}) + \bar J_{R,H}(γ_{n-1}) - \bar J_{R,H}(γ_m) = \cdots = \sum_{k=m+1}^n \left( \bar J_{R,H}(γ_k) - \bar J_{R,H}(γ_{k-1}) \right). \quad (5.19) \]

It follows that

\[ \| \bar J_{R,H}(γ_n) - \bar J_{R,H}(γ_m) \|_{L_1(Ω)} \le c_J C_H \sum_{k=m+1}^n (2^{-k})^ν\, \| γ \|_{Y_{ν,ε}} \to 0 \quad \text{as } m \to +∞ \quad (5.20) \]

uniformly in n > m and, in the case where ν > 0, uniformly in H ∈ (1/2, c), for any c ∈ (1/2, 1). Hence {J̄_{R,H}(γ_n)} is a Cauchy sequence in L_1(Ω, F, P), and it has a limit in this space, uniformly in H ∈ (1/2, c), for any c ∈ (1/2, 1).

For the sake of the proof of Proposition 3.6, we use, instead of (5.17), the estimates

\[ \sup_{t \in [0,T]} \| γ_n(t) - γ_m(t) \|_{L_r(Ω)} = \sup_{k \in \{0, ..., 2^n - 1\}} \ \sup_{t \in [T_k, T_{k+1}]} \| γ_m(t) - γ_n(t) \|_{L_r(Ω)} \le ε_m^{1-H+ν} \| γ \|_{H_{ν,r}}. \]

Then the proof above can be repeated with minor changes. In particular, the corresponding constant c_J depends on r. This completes the proof of Lemma 5.2. □

Proof of Theorem 3.1. It follows immediately from Lemma 5.2 that the sequence {I_H(γ_n)}_{n=1}^∞ converges to a limit in L_1(Ω, G_T, P), uniformly in H ∈ (1/2, c), for any c ∈ (1/2, 1). This proves statement (i) of Theorem 3.1.

Let us prove statement (ii) of Theorem 3.1. It follows from Lemma 5.1 that the operators I_{W,H}(·) : X → L_1(Ω, G_T, P) and Î_{R,H}(·) : X → L_1(Ω, G_T, P) allow continuous extensions to continuous operators I_{W,H}(·) : L_{22} → L_1(Ω, G_T, P) and Î_{R,H}(·) : L_{22} → L_1(Ω, G_T, P) that are bounded uniformly in H ∈ (1/2, c), for any c ∈ (1/2, 1).

It suffices to show that, for any ν > 0 and ε > 0,

\[ \sup_{n \ge 0} E\, | \bar J_{R,H}(γ_n) | \le \bar C\, \| γ \|_{Y_{ν,ε}} \]

for some C̄ = C̄(ε, ν) > 0.

Assume that γ ∈ Y_{ν,ε} for some ε > 0. Let

\[ m_ε \stackrel{\Delta}{=} \min\{ m : 2^{-m} T \le ε \}. \quad (5.21) \]

It follows from (5.19) that, for all n > m_ε,

\[ E | \bar J_{R,H}(γ_n) | \le \| \bar J_{R,H}(γ_{m_ε}) \|_{L_1(Ω)} + c_J C_H \sum_{k=m_ε+1}^n (2^{-k})^ν\, \| γ \|_{Y_{ν,ε}} \le \| \bar J_{R,H}(γ_{m_ε}) \|_{L_1(Ω)} + \bar C_{H,ν,m_ε} \| γ \|_{Y_{ν,ε}}, \quad (5.22) \]

where c_J is the same as in (5.18), and where

\[ \bar C_{H,ν,m_ε} \stackrel{\Delta}{=} c_J C_H \sum_{k=m_ε+1}^∞ (2^{-k})^ν. \]

Clearly, C̄_{H,ν,m_ε} is independent of γ ∈ Y_{ν,ε}, and, for any c ∈ (1/2, 1), C̄_{H,ν,m_ε} is bounded by a constant for all H ∈ (1/2, c) and ε > 0.

Further, let

\[ ξ_k^{(m_ε)}(t) = \hat ρ(t, T_k^{(m_ε)}) = d_H \int_0^{T_k^{(m_ε)}} (t - s)^{H-3/2}\, dB(s). \]

Let M_ε =Δ C̄_0² C_H² ε_{m_ε}^{2H−1} and

\[ a_k \stackrel{\Delta}{=} \int_{T_k^{(m_ε)}}^{T_{k+1}^{(m_ε)}} \| γ_{m_ε}(t) \|^2_{L_2(Ω)}\, dt, \qquad b_k \stackrel{\Delta}{=} \int_{T_k^{(m_ε)}}^{T_{k+1}^{(m_ε)}} \| ξ_k^{(m_ε)}(t) \|^2_{L_2(Ω)}\, dt. \]

Clearly,

\[ \sum_k a_k = \int_0^T \| γ_{m_ε}(t) \|^2_{L_2(Ω)}\, dt \le \int_0^T \| γ(t) \|^2_{L_2(Ω)}\, dt \le \| γ \|^2_{Y_{ν,ε}}. \]

As was shown for ξ_k(t) in (5.14), we have that b_k ≤ M_ε for all k.

We have that, for any c ∈ (1/2, 1),

\[ E | \bar J_{R,H}(γ_{m_ε}) | \le \sum_{k=1}^{2^{m_ε}} \int_{T_k^{(m_ε)}}^{T_{k+1}^{(m_ε)}} \| γ_{m_ε}(t) \|_{L_2(Ω)} \| ξ_k^{(m_ε)}(t) \|_{L_2(Ω)}\, dt \le \sum_{k=1}^{2^{m_ε}} a_k^{1/2} b_k^{1/2} \le \left( \sum_{k=1}^{2^{m_ε}} a_k \right)^{1/2} \left( \sum_{k=1}^{2^{m_ε}} b_k \right)^{1/2} \le M_ε^{1/2}\, 2^{m_ε/2}\, \| γ \|_{Y_{ν,ε}} \le \hat C\, \| γ \|_{Y_{ν,ε}} \quad (5.23) \]

for some Ĉ = Ĉ(c, m_ε) > 0. We have used here Hölder's inequality.

b b 20 It can be noted that the value mε in (5.23) is not increasing, since ε> 0 is fixed.

By the definitions, $\gamma_0(t)$ is $\mathcal{G}_0$-measurable. By the second estimate in Proposition 3.3,

$$E|\bar J_{R,H}(\gamma_0)| \le \widehat C_0\,\|\gamma\|_{L_{22}} \le \widehat C_0\,\|\gamma\|_{Y_{\varepsilon,\nu}}$$
for some $\widehat C_0 = \widehat C_0(c)$.

The proof of Theorem 3.1(ii) follows from (5.22) and (5.23). This completes the proof of Theorem 3.1. $\Box$

Proof of Proposition 3.6 repeats the proof of Theorem 3.1, given the adjustments mentioned in the proof of Lemma 5.2. $\Box$

The remaining part of the paper is devoted to the proof of Theorem 4.1. We will use the notations from the proof of Theorem 3.1 with the following amendment: since we consider a variable $H \in [1/2, 1)$, we include the corresponding $H$ as an index for a variable.

In particular, it follows from these notations that

$$I_{W,H}(\gamma_n) = \sum_{k=1}^{n} P_{W,H,k} + I_{1/2}(\gamma_n).$$
It can be noted that

$$d_H = \frac{H - 1/2}{\Gamma(H + 1/2)} \to 0, \qquad C_H = \frac{\Gamma(H + 1/2)^2\,(H - 1/2)}{\sqrt{2 - 2H}} \to 0 \quad \text{as}\ H \to 1/2 + 0.$$

Lemma 5.3. For any $\gamma_n \in X_\varepsilon$,
$$\|I_{W,H}(\gamma_n) - I_{1/2}(\gamma_n)\|_{L_2(\Omega)} + \|\widehat I_{R,H}(\gamma_n)\|_{L_1(\Omega)} \to 0 \quad \text{as}\ H \to 1/2 + 0$$
uniformly over any set of $\gamma_n \in X_\varepsilon$ that is bounded in $L_{22}$.

Proof of Lemma 5.3. For the operators $\Gamma_k(\cdot, \cdot) = G_H(\cdot, T_{k+1}, \cdot)$ introduced before Lemma 5.1, we have that $\|\Gamma_k(\cdot, g)\|_{L_2(T_k, T_{k+1})} \le \widehat c\,\|g\|_{L_2(T_k, T_{k+1})}$ for some $\widehat c > 0$ that is independent of $H \in (1/2, 1)$. Similarly to the proof of Lemma 5.1, we have that
$$P_{W,H,k} = \int_{T_{k-1}}^{T_k} dB(\tau)\,[\Gamma_{k-1}(\tau, \gamma_n) - \gamma_n(\tau)], \qquad k = 1, \dots, n.$$

These integrals converge in $L_2(\Omega, \mathcal{G}_T, \mathbf{P})$. Let

$$\alpha_{H,k} \stackrel{\Delta}{=} \int_{T_{k-1}}^{T_k} |\Gamma_{k-1}(\tau, \gamma_n) - \gamma_n(\tau)|^2\,d\tau.$$

We have that $E\alpha_{H,k} = E P_{W,H,k}^2$ and

$$\|I_{W,H}(\gamma_n) - I_{W,1/2}(\gamma_n)\|^2_{L_2(\Omega)} = E\Big(\sum_{k=1}^{n} P_{W,H,k}\Big)^2 = E\sum_{k=1}^{n} \alpha_{H,k}.$$
By the properties of the Riemann–Liouville integral, we have that

$$\|\gamma_n - \Gamma_k(\cdot, \gamma_n)\|_{L_2(T_{k-1}, T_k)} \to 0 \quad \text{a.s. as}\ H \to 1/2 + 0$$
and

$$\|\Gamma_k(\cdot, \gamma_n)\|_{L_2(T_{k-1}, T_k)} \le \|\gamma_n\|_{L_2(T_{k-1}, T_k)} \quad \text{a.s.}$$
Hence

$$0 \le \alpha_{H,k} \le \sqrt{2}\,\|\gamma_n\|_{L_2(T_{k-1}, T_k)} \quad \text{a.s.}$$
By Lebesgue's Dominated Convergence Theorem, it follows that

$$E\sum_{k=1}^{n} \alpha_{H,k} \to 0 \quad \text{as}\ H \to 1/2 + 0.$$
Hence

$$\|I_{W,H}(\gamma_n) - I_{W,1/2}(\gamma_n)\|^2_{L_2(\Omega)} \to 0 \quad \text{as}\ H \to 1/2 + 0.$$
Further, we have that

$$E|\widehat I_{R,H}(\gamma_n)| \le \Big(E\int_0^T \gamma_n(t)^2\,dt\Big)^{1/2}\Big(E\int_0^T \widehat\rho(t)^2\,dt\Big)^{1/2}.$$
Similarly to the proof of Proposition 3.2, we obtain that

$$E\widehat\rho(t)^2 = \int_{-\infty}^{s} |f_t'(t, r)|^2\,dr = \frac{d_H^2}{2 - 2H}\,t^{2H-2} \qquad (5.24)$$
and

$$E\int_0^T \widehat\rho(t)^2\,dt = \frac{d_H^2}{2(2 - 2H)}\,T^{2H-1} = \frac{c_h d_H}{4}\,T^{2H-1} \to 0 \quad \text{as}\ H \to 1/2 + 0.$$
This completes the proof of Lemma 5.3. $\Box$
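Two ingredients of this proof can be sanity-checked numerically: the antiderivative of $u^{2H-3}$ that underlies (5.24), and the limit $d_H \to 0$ as $H \to 1/2 + 0$. The sketch below assumes the reconstructed expression $d_H = (H-1/2)/\Gamma(H+1/2)$ from the display before Lemma 5.3:

```python
import math

def d_H(H: float) -> float:
    # Assumed form: d_H = (H - 1/2) / Gamma(H + 1/2).
    return (H - 0.5) / math.gamma(H + 0.5)

def finite_integral(H: float, t: float, L: float, n: int = 200000) -> float:
    """Midpoint rule for int_t^{t+L} u**(2H - 3) du."""
    h = L / n
    return sum((t + (i + 0.5) * h) ** (2 * H - 3) * h for i in range(n))

H, t, L = 0.7, 1.0, 200.0
# Closed form on [t, t+L]: (t**(2H-2) - (t+L)**(2H-2)) / (2 - 2H), valid
# since 2H - 3 < -1 for H < 1, so the integrand is integrable at infinity.
exact = (t ** (2 * H - 2) - (t + L) ** (2 * H - 2)) / (2 - 2 * H)
assert abs(finite_integral(H, t, L) - exact) < 1e-6

# d_H vanishes as H -> 1/2 + 0, which drives the final bound above to zero.
assert d_H(0.5) == 0.0 and abs(d_H(0.51)) < abs(d_H(0.7))
```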

Lemma 5.4. Let $\nu > 0$, $\gamma \in Y_{\nu,\varepsilon}$, and $\{\gamma_n\}_{n=1}^{\infty} = Z(\gamma)$. In the notations introduced above, we have that
$$\|\bar J_{R,H}(\gamma_n)\|_{L_1(\Omega)} \to 0 \quad \text{as}\ H \to 1/2 + 0$$
uniformly in $n > 0$.

Proof of Lemma 5.4. Assume that $\gamma \in Y_{\nu,\varepsilon}$ for some $\varepsilon > 0$, and that $m_\varepsilon$ is defined by (5.21). It follows from equation (5.19) applied to $\bar J_R = \bar J_{R,H}$ that, for any $H \in (1/2, 1)$ and any $n > m_\varepsilon$,

$$E|\bar J_{R,H}(\gamma_n)| \le \|\bar J_{R,H}(\gamma_{m_\varepsilon})\|_{L_1(\Omega)} + c_J C_H \sum_{k=m_\varepsilon+1}^{n} (2^{-k})^{\nu/2+H-1/2}\,\|\gamma\|_{Y_{\nu,\varepsilon}} \le \|\bar J_{R,H}(\gamma_{m_\varepsilon})\|_{L_1(\Omega)} + \bar C_{H,\nu,m_\varepsilon}\,\|\gamma\|_{Y_{\nu,\varepsilon}},$$
where $\bar C_{H,\nu,m_\varepsilon}$ is the same as in (5.22); if $\nu > 0$, then $\bar C_{H,\nu,m_\varepsilon}$ is bounded by a constant for all $H \in (1/2, 1)$, $\varepsilon > 0$. In addition, we have that
$$\bar C_{H,\nu,m_\varepsilon} \to 0 \quad \text{as}\ H \to 1/2$$
uniformly in $n$. By (5.14), $\|\bar J_{R,H}(\gamma_{m_\varepsilon})\|_{L_1(\Omega)} \to 0$ as $H \to 1/2$. This completes the proof of Lemma 5.4. $\Box$

Proof of Theorem 4.1. Let $\gamma \in Y_{\nu,\varepsilon}$ for any $\nu > 0$ and $\varepsilon > 0$. Let $\{\gamma_n\} = Z(\gamma)$. We have to show that $E|I_H(\gamma) - I_{1/2}(\gamma)| \to 0$ as $H \to 1/2$. We have that
$$E|I_H(\gamma) - I_{1/2}(\gamma)| \le A_{1,H,n} + A_{2,H,n} + A_{3,n},$$
where

$$A_{1,H,n} \stackrel{\Delta}{=} E|I_H(\gamma) - I_H(\gamma_n)|, \qquad A_{2,H,n} \stackrel{\Delta}{=} E|I_H(\gamma_n) - I_{1/2}(\gamma_n)|, \qquad A_{3,n} \stackrel{\Delta}{=} E|I_{1/2}(\gamma_n) - I_{1/2}(\gamma)|.$$

Clearly, $\|\gamma - \gamma_n\|_{Y_{\nu,\varepsilon}} \to 0$ as $n \to +\infty$ for any $\varepsilon > 0$. Let $c \in (1/2, 1)$ be given. By Theorem 3.1, $A_{1,H,n} \to 0$ as $n \to +\infty$ uniformly in $H \in (1/2, c)$. By Lemmata 5.3–5.4, $A_{2,H,n} \to 0$ as $H \to 1/2$ uniformly in $n$. Finally, by the properties of the Itô integral, it follows that $A_{3,n} \to 0$ as $n \to +\infty$. This completes the proof of Theorem 4.1. $\Box$
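The first step of this argument, $\|\gamma - \gamma_n\| \to 0$, reflects that the piecewise approximations $\gamma_n = Z(\gamma)$ on the dyadic grids $T_k = kT/2^n$ converge to $\gamma$. A deterministic toy illustration (the left-endpoint rule below is a hypothetical stand-in for $Z$, whose exact definition is given earlier in the paper, and the sup-distance replaces the $Y_{\nu,\varepsilon}$-norm):

```python
T = 1.0
gamma = lambda t: t ** 0.5          # a (1/2)-Hoelder toy integrand on [0, T]

def gamma_n(t: float, n: int) -> float:
    """Freeze gamma at the left endpoint T_k of the dyadic interval holding t."""
    k = min(int(t * 2 ** n / T), 2 ** n - 1)
    return gamma(k * T / 2 ** n)

def sup_error(n: int, grid: int = 10000) -> float:
    return max(abs(gamma(i * T / grid) - gamma_n(i * T / grid, n))
               for i in range(grid))

errs = [sup_error(n) for n in (2, 4, 6, 8)]
# The sup-distance decays roughly like the mesh size to the power 1/2.
assert all(e2 < e1 for e1, e2 in zip(errs, errs[1:]))
```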

References

[1] Alós, E., Mazet, O., and Nualart, D. (2000). Stochastic calculus with respect to fractional Brownian motion with Hurst parameter lesser than 1/2. Stochastic Processes and their Applications 86, 121–139.

[2] Alós, E. and Nualart, D. (2003). Stochastic integration with respect to the fractional Brownian motion. Stoch. Stoch. Rep. 75(3), 129–152.

[3] Bender, C., Sottinen, T., Valkeila, E. (2007). Arbitrage with fractional Brownian motion? Theory Stoch. Process. 13(1–2), 23–34 (Special Issue: Kiev Conference on Modern Stochastics).

[4] Bender, C., Sottinen, T., Valkeila, E. (2011). Fractional processes as models in stochastic finance. In: Di Nunno, Oksendal (Eds.), AMaMeF: Advanced Mathematical Methods for Finance, Springer, 75–103.

[5] Bender, C. (2013). An Itô formula for generalized functionals of a fractional Brownian motion with arbitrary Hurst parameter. Stochastic Processes and their Applications 104, 81–106.

[6] Bender, C., Pakkanen, M.S., and Sayit, H. (2015). Sticky continuous processes have consistent price systems. J. Appl. Probab. 52, No. 2, 586–594.

[7] Bertoin, J. (1989). Sur une intégrale pour les processus à α-variation bornée. The Annals of Probability 17, no. 4, 1521–1535.

[8] Biagini, F. and Oksendal, B. (2003). Minimal variance hedging for fractional Brownian motion. Methods Appl. Anal. 10 (3), 347-362.

[9] Björk, T., Hult, H. (2005). A note on Wick products and the fractional Black–Scholes model. Finance and Stochastics 9(2), 197–209.

[10] Carmona, P., Coutin, L., and Montseny, G. (2003). Stochastic integration with respect to fractional Brownian motion. Ann. Inst. H. Poincaré Probab. Statist. 39(1), 27–68.

[11] Çetin, U., Novikov, A., Shiryaev, A.N. (2013). Bayesian sequential estimation of a drift of fractional Brownian motion. Sequential Analysis: Design Methods and Applications 32, Iss. 3, 288–296.

[12] Cheridito, P. (2003). Arbitrage in fractional Brownian motion models. Finance Stoch. 7 (4), 533–553.

[13] Ciesielski, Z., Kerkyacharian, G., and Roynette, B. (1993). Quelques espaces fonctionnels associés à des processus gaussiens. Studia Mathematica 107, no. 2, 171–204.

[14] Decreusefond, L. and Üstünel, A.S. (1999). Stochastic Analysis of the Fractional Brownian Motion. Potential Analysis 10, no. 2, 177–214.

[15] Decreusefond, L. (2000). A Skohorod–Stratonovitch integral for the fractional Brownian motion. Proceedings of the 7th Workshop on Stochastic Analysis and Related Fields. Birkhäuser.

[16] Decreusefond, L. (2003). Stochastic integration with respect to fractional Brownian motion. In: P. Doukhan, G. Oppenheimer, M.S. Taqqu (eds.), Theory and Applications of Long-Range Dependence, Birkhäuser, Boston, MA, pp. 203–226.

[17] Duncan, T.E., Hu, Y., and Pasik-Duncan, B. (2000). Stochastic calculus for fractional Brownian motion I. Theory. SIAM Journal on Control and Optimization 38(2), 582–612.

[18] Es-Sebaiy, K., Ouassou, I., Ouknine, Y. (2009). Estimation of the drift of fractional Brownian motion. Statistics & Probability Letters 79 (14), 1647–1653.

[19] Feyel, D., and de La Pradelle, A. (1999). On Fractional Brownian Processes. Potential Analysis 10, no. 3, 273-288.

[20] Gripenberg, G., and Norros, I. (1996). On the prediction of fractional Brownian motion. Journal of Applied Probability 33, No. 2, 400–410.

[21] Guasoni, P. (2006). No arbitrage under transaction costs, with fractional Brownian motion and beyond. Math. Finance 16(2), 469–588.

[22] Hu, Y. and Oksendal, B. (2003). Fractional white noise calculus and applications to finance. Infinite Dimensional Analysis, Quantum Probability and Related Topics 6(1), 1–32.

[23] Hu, Y., Oksendal, B., Sulem, A. (2003). Optimal consumption and portfolio in a Black–Scholes market driven by fractional Brownian motion. Infinite Dimensional Analysis, Quantum Probability and Related Topics 6(4), 519–536.

[24] Hu Y., Zhou X.Y. (2005). Stochastic control for linear systems driven by fractional noises. SIAM Journal on Control and Optimization 43, 2245-2277.

[25] Jumarie, G. (2005). Merton's model of optimal portfolio in a Black–Scholes market driven by a fractional Brownian motion with short-range dependence. Insurance: Mathematics and Economics 37, 585–598.

[26] Mandelbrot, B. B., Van Ness, J. W. (1968). Fractional Brownian motions, fractional noises and applications. SIAM Review 10, 422–437.

[27] Muravlev A. A. (2013). Methods of sequential hypothesis testing for the drift of a fractional Brownian motion. Russ. Math. Surv. 68 (3), 577.

[28] Pipiras, V. and Taqqu, M.S. (2000). Integration questions related to fractional Brownian motion. Probab. Theor. Related Fields 118(2), 251–291.

[29] Pipiras, V. and Taqqu, M.S. (2001). Are classes of deterministic integrands for fractional Brownian motion on an interval complete? Bernoulli, 7(6), 873-897.

[30] Privault, N. (1998). Skorohod stochastic integration with respect to non-adapted processes on Wiener space. Stochastics and Stochastics Reports 65, 13–39.

[31] Rogers, L. C. G. (1997). Arbitrage with fractional Brownian motion. Mathematical Finance 7 (1), 95–105.

[32] Shiryaev A.N. (1998). On arbitrage and replication for fractal models. Research Report 30, MaPhySto, Department of Mathematical Sciences, University of Aarhus.

[33] Salopek, D. M. (1998). Tolerance to arbitrage. Stochastic Process. Appl. 76 (2), 217-230.

[34] Young, L.C. (1936). An Inequality of Hölder Type, connected with Stieltjes integration. Acta Math. 67, 251–282.

[35] Zähle, M. (1998). Integration with respect to fractal functions and stochastic calculus II. Probability Theory and Related Fields 111 (3), 333–374.
