arXiv:1206.5415v2 [math.PR] 6 Mar 2015

The Annals of Probability
2015, Vol. 43, No. 2, 605–638
DOI: 10.1214/13-AOP884
© Institute of Mathematical Statistics, 2015

ON FRACTIONAL SMOOTHNESS AND L_p-APPROXIMATION ON THE GAUSSIAN SPACE

By Stefan Geiss and Anni Toivola

University of Jyväskylä and University of Innsbruck, and University of Jyväskylä

We consider Gaussian Besov spaces obtained by real interpolation and Riemann–Liouville operators of fractional integration on the Gaussian space and relate the fractional smoothness of a functional to the regularity of its heat extension. The results are applied to the study of an approximation problem for stochastic integrals with respect to the d-dimensional (geometric) Brownian motion.

Received October 2012; revised August 2013.
Supported by the Project 133914 Stochastic and Harmonic Analysis, Interactions and Applications of the Academy of Finland.
Supported by the Emil Aaltonen Foundation and the Magnus Ehrnrooth Foundation (Grant no. MA2012n26). Some part of the work was done as the author was affiliated to the Department of Mathematics, University of Innsbruck, Austria.
AMS 2000 subject classifications. Primary 60H07, 60H05, 41A25; secondary 46B70, 26A33.
Key words and phrases. Stochastic analysis on a Gaussian space, Besov spaces, Riemann–Liouville operators, real interpolation, approximation of stochastic integrals.

This is an electronic reprint of the original article published by the Institute of Mathematical Statistics in The Annals of Probability, 2015, Vol. 43, No. 2, 605–638. This reprint differs from the original in pagination and typographic detail.

1. Introduction. This paper is devoted to Besov spaces defined on a Gaussian space, associated Riemann–Liouville operators of fractional integration, and approximation theory. As Gaussian space, we consider L_p(R^d, γ_d) with 2 ≤ p < ∞ and dγ_d = e^{-|x|^2/2} dx/(2π)^{d/2} being the d-dimensional standard Gaussian measure. The (Gaussian) Besov spaces are obtained by the real interpolation method, and the approximation problem concerns an approximation of stochastic integrals. Some of the results are extensions of corresponding statements proved mainly in L_2 (see [7, 8, 11, 12, 14, 17, 19, 20, 26, 31]). However, the L_2-theory and the L_p-theory for 2 < p < ∞ may differ significantly. For example, the Meyer inequalities can be proved in the L_2-case using orthogonality by standard ideas, but they are considerably more involved in the L_p-case when 1 < p < ∞, p ≠ 2 (see [23], Proposition 1.5.3, and [24]).

To describe the approximation problem, we let W = (W^{(1)}, ..., W^{(d)})^⊤ be a standard d-dimensional Brownian motion and consider as diffusion Y either the Brownian motion itself, Y := W with E := R^d, or the coordinate-wise geometric Brownian motion

Y_t := (e^{W_t^{(1)} - (t/2)}, ..., e^{W_t^{(d)} - (t/2)})^⊤  and  E := (0, ∞)^d.

Then we have

dY_t = σ(Y_t) dW_t,

where Y is considered as a column vector and the d × d-matrix σ(y) is given by σ(y) = I_d or (σ_{ij}(y))_{i,j=1}^d = (δ_{i,j} y_i)_{i,j=1}^d, respectively, where δ_{i,j} = 1 if i = j and δ_{i,j} = 0 otherwise. The parabolic differential operator associated to the diffusion Y is

A := ∂/∂t + (1/2) Σ_{k=1}^d σ_{kk}^2 ∂^2/∂y_k^2.

Given a Borel function g : E → R with g(Y_1) ∈ L_2, we let

(1)  G(t, y) := E(g(Y_1) | Y_t = y)

and notice that G(1, y) = g(y). Integrability properties of G and its derivatives are given in Lemma A.2 below and are used implicitly in this paper. The function G solves the backward parabolic PDE AG = 0 on [0, 1) × E.
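To make the heat extension (1) concrete, here is a minimal numerical sketch (our own illustration, not part of the paper): for d = 1 and the coordinate-wise geometric Brownian motion we take the hypothetical payoff g(y) = (y − K)^+, for which G(t, y) has a Black–Scholes-type closed form, and compare it with a direct Monte Carlo evaluation of the conditional expectation. The function names and parameters below are ours.

import numpy as np
from scipy.stats import norm

def G_closed_form(t, y, K=1.0):
    # G(t, y) = E((Y_1 - K)^+ | Y_t = y): given Y_t = y, the Markov property yields
    # Y_1 = y * exp(W_{1-t} - (1-t)/2), hence a Black-Scholes-type formula with maturity 1 - t.
    s = np.sqrt(1.0 - t)
    d1 = (np.log(y / K) + 0.5 * s**2) / s
    return y * norm.cdf(d1) - K * norm.cdf(d1 - s)

def G_monte_carlo(t, y, K=1.0, n=10**6, seed=0):
    # The same conditional expectation estimated by simulating Y_1 given Y_t = y.
    z = np.random.default_rng(seed).standard_normal(n)
    y1 = y * np.exp(np.sqrt(1.0 - t) * z - 0.5 * (1.0 - t))
    return np.maximum(y1 - K, 0.0).mean()

print(G_closed_form(0.5, 1.2), G_monte_carlo(0.5, 1.2))  # the two values should agree closely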

For 0 s

Definition 1.1. (i) Let T_rand be the set of all sequences of stopping times τ = (τ_i)_{i=0}^n with 0 = τ_0 ≤ τ_1 ≤ ··· ≤ τ_{n-1} < τ_n = 1, where n = 1, 2, ..., such that τ_i is F_{τ_{i-1}}-measurable for i = 1, ..., n, that is,

{τ_i ∈ B} ∩ {τ_{i-1} ≤ t} ∈ F_t  for t ∈ [0, 1] and B ∈ B([0, 1]).

(ii) Given a time-net τ = (τ_i)_{i=0}^n ∈ T_rand, 0 ≤ t ≤ 1 and g(Y_1) ∈ L_2, we let

C_t(g(Y_1), τ) := ∫_0^t ∇G(s, Y_s) dY_s − Σ_{i=1}^n ∇G(τ_{i-1}, Y_{τ_{i-1}})(Y_{τ_i ∧ t} − Y_{τ_{i-1} ∧ t}),

C_t(g(Y_1), τ, v) := ∫_0^t ∇G(s, Y_s) dY_s − Σ_{i=1}^n v_{τ_{i-1}}(Y_{τ_i ∧ t} − Y_{τ_{i-1} ∧ t}),

where v = (v_{τ_{i-1}})_{i=1}^n is a sequence of random row vectors v_{τ_{i-1}} : Ω → R^d measurable w.r.t. F_{τ_{i-1}}.

Let us briefly describe the contents of this paper, which continues and extends results from the preprint [28].

(1) In Theorem 3.1, we provide a characterization of functions f : R^d → R belonging to the Besov space B^θ_{p,q}(R^d, γ_d) by F : [0, 1] × R^d → R with F(t, x) := E(f(W_1) | W_t = x). Roughly speaking, considering F as the heat extension of f ∈ L_2(R^d, γ_d), the regularity of this extension precisely describes the Besov regularity of f. Theorem 3.1 mainly relies on Proposition A.4, which might be of independent interest.

(2) Besides the real interpolation spaces B^θ_{p,q}, the Riemann–Liouville operator D^{Y,θ} from Section 4 provides an alternative way to describe the fractional regularity of a function g : E → R. It is defined as a functional of the Hessian matrices (D^2 G(t, y))_{t ∈ [0,1)} by

D_t^{Y,θ} g(Y_1) := ( ∫_0^t (1 − u)^{1−θ} H_G^2(u, Y_u) du )^{1/2}

with

H_G^2(u, y) := Σ_{k,l=1}^d ( σ_{kk} σ_{ll} ∂^2 G/(∂y_k ∂y_l)(u, y) )^2.

In Proposition 4.2, we relate D^{Y,θ} to the spaces B^θ_{p,q}(R^d, γ_d), which continues the analysis in [15], where this operator was used in a different form.

(3) In the literature [3–5, 7, 8, 11, 17, 22], the question of the behavior of the discretization error C_t(g(Y_1), τ, v) has been treated mostly using the L_2-norm ‖C_t(g(Y_1), τ, v)‖_{L_2}, or by weak or stable limits of the re-scaled error processes lim_n √n C_t(g(Y_1), τ_n, v_n), where τ_n is of cardinality n + 1. Many of the L_2-results are of asymptotic nature as well, and, concerning random time-nets, only asymptotic statements were obtained. There is a general lower bound ‖C_1(g(Y_1); τ)‖_{L_p} ≥ δ/√n for nets τ of cardinality n + 1; see Remark 5.3. As one of the main results of this paper, we obtain in Theorem 5.5 a characterization of when this lower bound is actually achieved by special time-nets. A particular case of this statement is:

Theorem 1.2. For 2 ≤ p < ∞, 0 < θ ≤ 1, and g(Y_1) ∈ L_p, the following assertions are equivalent:

(i) ‖D_1^{Y,θ} g(Y_1)‖_{L_p} < ∞,
(ii) sup_{n=1,2,...} √n ‖C_1(g(Y_1), τ_n^θ)‖_{L_p} < ∞, where the time-nets τ_n^θ = (t_{i,n}^θ)_{i=0}^n are given by t_{i,n}^θ := 1 − (1 − i/n)^{1/θ}.

Using Proposition 4.2(iii), we can replace, in the case p = 2 and 0 < θ < 1, condition (i) in the theorem above by f ∈ B^θ_{2,2} [with convention (9)], which is in accordance with the known one-dimensional L_2-case; see [14].

The point of Theorem 1.2 is the usage of adapted time-nets. In the literature, equidistant time-nets are often used in discretizations for simplicity. Therefore, we provide in Theorem 5.7 a description of the random variables that can be approximated in L_p with equidistant time-nets with a rate n^{−θ/2} for 0 < θ < 1 in terms of the Besov spaces B^θ_{p,∞}. In particular, this theorem shows the loss of accuracy in the approximation when not using the optimal nets. A special case of this theorem is the following.

Theorem 1.3. For 2 ≤ p < ∞, 0 < θ < 1 and g(Y_1) ∈ L_p, the following assertions are equivalent:

(i) f ∈ B^θ_{p,∞} with f given by (9),
(ii) sup_{n=1,2,...} n^{θ/2} ‖C_1(g(Y_1); τ_n)‖_{L_p} < ∞, where τ_n = (i/n)_{i=0}^n are the equidistant time-nets.
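Theorems 1.2 and 1.3 can be illustrated numerically. The following minimal Monte Carlo sketch (our own illustration; the payoff, parameters and helper names are not taken from the paper) uses d = 1, Y = W and the hypothetical binary payoff f = χ_{[0,∞)}, which satisfies f ∈ B^{1/2}_{2,∞} (cf. Example 2.3 with p = 2) and f ∈ B^θ_{2,2} for every θ < 1/2. Hence, for p = 2, Theorem 1.3 predicts the rate n^{−1/4} for equidistant nets, while Theorem 1.2 combined with Proposition 4.2(iii) predicts the rate n^{−1/2} for the adapted nets τ_n^θ with θ < 1/2. The sketch exploits that the stochastic integral in Definition 1.1 equals f(W_1) − F(0, 0).

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def time_net(n, theta=None):
    # Equidistant net (theta=None) or the adapted net with t_{i,n}^theta = 1 - (1 - i/n)^(1/theta).
    i = np.arange(n + 1)
    return i / n if theta is None else 1.0 - (1.0 - i / n) ** (1.0 / theta)

def error_L2(net, n_paths=20_000):
    # Monte Carlo estimate of ||C_1(f(W_1), net)||_{L_2} for f = chi_[0,infinity) and Y = W,
    # where F(t, x) = Phi(x / sqrt(1 - t)) and the stochastic integral equals f(W_1) - 1/2.
    dt = np.diff(net)
    dW = rng.standard_normal((n_paths, dt.size)) * np.sqrt(dt)
    W = np.cumsum(dW, axis=1)                                  # W at t_1, ..., t_n = 1
    W_left = np.hstack([np.zeros((n_paths, 1)), W[:, :-1]])    # W at t_0, ..., t_{n-1}
    grad = norm.pdf(W_left / np.sqrt(1.0 - net[:-1])) / np.sqrt(1.0 - net[:-1])
    C1 = (W[:, -1] > 0).astype(float) - 0.5 - np.sum(grad * dW, axis=1)
    return np.sqrt(np.mean(C1 ** 2))

for n in (16, 64, 256):
    eq, ad = error_L2(time_net(n)), error_L2(time_net(n, theta=0.4))
    print(f"n={n:4d}  equidistant: n^(1/4)*err={eq * n**0.25:.3f}   "
          f"adapted (theta=0.4): n^(1/2)*err={ad * n**0.5:.3f}")

Both rescaled errors should stay roughly constant in n, in line with the two theorems.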

(4) Theorems 1.2 and 1.3 (Theorems 5.5 and 5.7) are based on Theorem 5.1, which extends the curvature-type description of the L_2-approximation error from [11] for deterministic nets to the L_p-error ‖C_1(g(Y_1), τ)‖_{L_p} with 2 ≤ p < ∞ and to random time-nets τ = (τ_i)_{i=0}^n ∈ T_rand. To illustrate Theorem 5.1, let us formulate a corollary that follows from Remark 5.2.

Theorem 1.4. For 2 ≤ p < ∞, there is a constant c_{(1.4)} ≥ 1 depending at most on p such that for all g(Y_1) ∈ L_p and τ = (τ_i)_{i=0}^n ∈ T_rand we have that

‖C_1(g(Y_1), τ)‖_{L_p} ∼_{c_{(1.4)}} ‖( Σ_{i=1}^n ∫_{τ_{i-1}}^{τ_i} (τ_i − t) H_G^2(t, Y_t) dt )^{1/2}‖_{L_p}.

For example, to connect Theorem 1.4 to Theorem 1.2, we measure the size of a sequence 0 = t_0 ≤ ··· ≤ t_{n-1} < t_n = 1 by the quantity |(t_i)_{i=0}^n|_θ introduced in Section 5.

2. Preliminaries.

Notation. We use A ∼_c B for A/c ≤ B ≤ cA whenever A, B ≥ 0 and c ≥ 1, a ∨ b = max{a, b} and a ∧ b = min{a, b}, and let |·| be the Euclidean norm for a vector or the Hilbert–Schmidt norm for a matrix. Given a random vector or a random matrix A, we write ‖A‖_{L_p} := ‖|A|‖_{L_p} and denote the transpose of A by A^⊤.

Real interpolation. Let us recall the real interpolation method that we use to generate the (Gaussian) Besov spaces.

Definition 2.1 ([1, 2]). Let (X_0, X_1) be a compatible couple of Banach spaces, that is, there exists a Hausdorff topological vector space in which both X_0 and X_1 are continuously embedded. Given x ∈ X_0 + X_1 and λ > 0, the K-functional is defined by

K(x, λ; X_0, X_1) := inf{‖x_0‖_{X_0} + λ‖x_1‖_{X_1} : x = x_0 + x_1, x_i ∈ X_i}.

Given 0 < θ < 1 and 1 ≤ q ≤ ∞, we let (X_0, X_1)_{θ,q} be the space of all x ∈ X_0 + X_1 such that

‖x‖_{(X_0,X_1)_{θ,q}} := ‖λ^{-θ} K(x, λ; X_0, X_1)‖_{L_q((0,∞), dλ/λ)} < ∞.

The K-functional yields one of the basic approaches to define intermediate spaces Y of a compatible couple of Banach spaces (X_0, X_1), that is, Banach spaces Y such that one has continuous embeddings X_0 ∩ X_1 ↪ Y ↪ X_0 + X_1. Assuming X_1 ↪ X_0 with norm one, this reduces to the embedding X_1 ↪ Y ↪ X_0. In this case, K(x, λ; X_0, X_1) = ‖x‖_{X_0} for λ ∈ [1, ∞), which does not give any information. However, for λ ∈ (0, 1) we have that

λ‖x‖_{X_0} ≤ K(x, λ; X_0, X_1) ≤ ‖x‖_{X_0}.

The behavior of the function λ ↦ K(x, λ; X_0, X_1) close to zero describes the distance of x to X_1: intuitively we can say that the closer the function is to a linear function in λ, the closer x is to X_1. In general, without the restriction X_1 ↪ X_0, the functionals

‖λ^{-θ} K(x, λ; X_0, X_1)‖_{L_q((0,∞), dλ/λ)}

examine the behavior of the K-functional (in particular at zero and at infinity) and lead to the spaces (X_0, X_1)_{θ,q}. For X_1 ↪ X_0, we obtain the lexicographical ordering

(X_0, X_1)_{θ_0,q_0} ⊆ (X_0, X_1)_{θ_1,q_1}  and  (X_0, X_1)_{η,r_0} ⊆ (X_0, X_1)_{η,r_1}

if 0 < θ_1 < θ_0 < 1, 1 ≤ q_0, q_1 ≤ ∞, 0 < η < 1, and 1 ≤ r_0 ≤ r_1 ≤ ∞. The choice of the measure dλ/λ ensures (also in the general case) the symmetry (X_0, X_1)_{θ,q} = (X_1, X_0)_{1-θ,q}.
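As a simple illustration of these quantities (a standard computation added by us, not taken from the paper), assume X_1 ↪ X_0 with norm one and x ∈ X_1. The decompositions x = x + 0 and x = 0 + x give

K(x, λ; X_0, X_1) ≤ min{‖x‖_{X_0}, λ‖x‖_{X_1}},

and taking the supremum of λ^{−θ} min{‖x‖_{X_0}, λ‖x‖_{X_1}} over λ > 0 (it is attained at λ = ‖x‖_{X_0}/‖x‖_{X_1}) yields

‖x‖_{(X_0,X_1)_{θ,∞}} ≤ ‖x‖_{X_0}^{1−θ} ‖x‖_{X_1}^{θ},

so that every x ∈ X_1 belongs to (X_0, X_1)_{θ,∞} and the interpolation norm interpolates between the two endpoint norms.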

Gaussian Sobolev and Besov spaces. We let d ≥ 1 and γ_d be the standard Gaussian measure on R^d. The space L_2(R^d, γ_d) is equipped with the orthonormal basis of generalized Hermite polynomials (h_{k_1,...,k_d})_{k_1,...,k_d=0}^∞ given by

h_{k_1,...,k_d}(x_1, ..., x_d) := h_{k_1}(x_1) ··· h_{k_d}(x_d),

where (h_k)_{k=0}^∞ ⊂ L_2(R, γ_1) is the standard orthonormal basis of Hermite polynomials. The Sobolev space D_{1,2} = D_{1,2}(R^d, γ_d) consists of all f ∈ L_2(R^d, γ_d) such that

Σ_{k_1,...,k_d=0}^∞ ⟨f, h_{k_1,...,k_d}⟩²_{L_2(R^d,γ_d)} ‖∇h_{k_1,...,k_d}‖²_{L_2(R^d,γ_d)} < ∞.

The space D_{1,2} is a Hilbert space under the norm

‖f‖_{D_{1,2}} := ( ‖f‖²_{L_2(R^d,γ_d)} + ‖Df‖²_{L_2(R^d,γ_d)} )^{1/2},

where, for f ∈ D_{1,2}, the gradient Df is given by

Df := Σ_{k_1,...,k_d=0}^∞ ⟨f, h_{k_1,...,k_d}⟩_{L_2(R^d,γ_d)} ∇h_{k_1,...,k_d}.

Given 2 ≤ p < ∞, the Banach space D_{1,p} ⊆ L_p is given by

D_{1,p} := {f ∈ D_{1,2} : ‖f‖_{D_{1,p}} := (‖f‖^p_{L_p} + ‖Df‖^p_{L_p})^{1/p} < ∞}.

Here and later, we use ‖f‖_{L_p} = ‖f‖_{L_p(R^d,γ_d)} and ‖Df‖_{L_p} = ‖Df‖_{L_p(R^d,γ_d)}.

Definition 2.2. For 0 < θ < 1 and 1 ≤ q ≤ ∞, we let

B^θ_{p,q} := (L_p, D_{1,p})_{θ,q}

be the Gaussian Besov space on R^d of fractional smoothness θ and fine-index q.

Because D1,p is not closed in Lp, we get a scale of spaces indexed by (θ,q), where the spaces are identical if and only if both indices coincide (see [21], Theorem 3.1). A typical function which has fractional smoothness is given by the following.

Example 2.3. Let d = 1, K ∈ R, 2 ≤ p < ∞ and 0 ≤ α < 1 − 1/p. Then one has that

f(x) := ((x − K)^+)^α for α > 0,  and  f(x) := χ_{[K,∞)}(x) for α = 0,  satisfies  f ∈ B^{(1/p)+α}_{p,∞},

which shows the trade-off between integrability and smoothness. This can be proved by verifying K(f, λ; L_p, D_{1,p}) ≤ cλ^{(1/p)+α} for 0 < λ < 1.

Using canonical representations of functions of bounded variation, one can extend the case α = 0 in Example 2.3 to certain functions of bounded variation by considering convex combinations f(x) = Σ_{l=1}^L β_l χ_{[K_l,∞)}(x).
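In the case α = 0, a possible verification (our own sketch; the ramp function f_1 and the constants are our choices, not the authors') is as follows: for 0 < λ < 1 set δ := λ and decompose f = f_0 + f_1 with

f_1(x) := min{1, (x − K)^+/δ}  and  f_0 := f − f_1.

Then |f_0| ≤ χ_{[K,K+δ]} and |Df_1| = δ^{−1} χ_{(K,K+δ)} a.e., so that, the Gaussian density being bounded, ‖f_0‖_{L_p} ≤ c_K δ^{1/p} and ‖f_1‖_{D_{1,p}} ≤ 1 + c_K δ^{(1/p)−1}. Hence

K(f, λ; L_p, D_{1,p}) ≤ ‖f_0‖_{L_p} + λ‖f_1‖_{D_{1,p}} ≤ c_K λ^{1/p} + λ + c_K λ^{1/p} ≤ cλ^{1/p}

for 0 < λ < 1, since λ ≤ λ^{1/p}.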

Burkholder–Davis–Gundy inequality. We use the Burkholder–Davis–Gundy inequality for Brownian martingales with values in a separable Hilbert space. An explicit formulation is as follows: assume for i = 1, 2, ... progressively measurable processes (L^i_t)_{t∈[0,1]} with L^i_t : Ω → R^d considered as row vectors and such that

Σ_{i=1}^∞ E ∫_0^1 |L^i_t|² dt < ∞;

then, for all 1 < p < ∞,

(4)  ‖ sup_{t∈[0,1]} ( Σ_{i=1}^∞ | ∫_0^t L^i_u dW_u |² )^{1/2} ‖_{L_p} ∼_{c_{(4)}} ‖ ( Σ_{i=1}^∞ ∫_0^1 |L^i_t|² dt )^{1/2} ‖_{L_p},

where c_{(4)} ≥ 1 depends at most on p.

3. Fractional smoothness on the Gaussian space. In this section, we characterize the Gaussian Besov spaces B^θ_{p,q} by the behavior of F from (1) in the case Y = W. To make this more clear, we do a change of notation and replace g by f and G by F. This means that f ∈ L_2(R^d, γ_d) and

F(t, x) := E f(x + W_{1-t})  for (t, x) ∈ [0, 1] × R^d.

We also use the Hessian d × d matrix

D²F := ( ∂²F/(∂x_i ∂x_j) )_{i,j=1}^d.

One can check that

(5)  f ∈ D_{1,2}  if and only if  ∫_0^1 ‖D²F(t, W_t)‖²_{L_2} dt < ∞.

Moreover, for all f ∈ L_2(R^d, γ_d) we have

(6)  ∇F(t, W_t) = ∇F(0, 0) + ∫_0^t ( D²F(u, W_u) dW_u )^⊤  a.s.

for 0 ≤ t < 1, where ∇F(t, x) is considered as a row vector. If f ∈ D_{1,2}, then (6) can be extended to t = 1 with the convention ∇F(1, ·) := Df. Now we generalize (5) to the scale of Besov spaces.

Theorem 3.1. Let 2 ≤ p < ∞, 0 < θ < 1, 1 ≤ q ≤ ∞ and f ∈ L_p(R^d, γ_d). Then

‖f‖_{B^θ_{p,q}} ∼_{c_{(3.1)}} ‖f‖_{L_p} + ‖ (1 − t)^{−θ/2} ‖F(1, W_1) − F(t, W_t)‖_{L_p} ‖_{L_q([0,1), dt/(1−t))}
∼_{c_{(3.1)}} ‖f‖_{L_p} + ‖ (1 − t)^{(1−θ)/2} ‖∇F(t, W_t)‖_{L_p} ‖_{L_q([0,1), dt/(1−t))}
∼_{c_{(3.1)}} ‖f‖_{L_p} + ‖ (1 − t)^{(2−θ)/2} ‖D²F(t, W_t)‖_{L_p} ‖_{L_q([0,1), dt/(1−t))},

where c_{(3.1)} ≥ 1 depends at most on (p, θ, q).
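As a quick numerical illustration of the first equivalence (our own sketch, not part of the paper), take d = 1 and the hypothetical choice f = χ_{[0,∞)} ∈ B^{1/2}_{2,∞} from Example 2.3, so that F(t, x) = Φ(x/√(1 − t)) with Φ the standard normal distribution function. For p = 2, q = ∞ and θ = 1/2, the quantity (1 − t)^{−1/4} ‖f(W_1) − F(t, W_t)‖_{L_2} should then stay bounded as t → 1:

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
z1, z2 = rng.standard_normal((2, 10**6))
for t in (0.9, 0.99, 0.999, 0.9999):
    Wt = np.sqrt(t) * z1                        # W_t
    W1 = Wt + np.sqrt(1.0 - t) * z2             # W_1 = W_t + independent increment
    err = np.sqrt(np.mean(((W1 > 0).astype(float) - norm.cdf(Wt / np.sqrt(1.0 - t))) ** 2))
    print(t, err * (1.0 - t) ** -0.25)          # roughly constant as t -> 1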

Remark 3.2. Theorem 3.1 generalizes [14], Theorem 2.2, where p = 2 was considered, and [28], Lemma 4.7, which was proved for 2 ≤ p < ∞.

For s > 0, x_0 ∈ R and Q(x_0, s) := {(y, z) : |y − x_0| ≤ s, |z − x_0| ≤ s}, we let OSC_p(f, x_0, s) denote the L_p-oscillation of f over Q(x_0, s) as in [15].

Corollary 3.3. For 2 ≤ p < ∞, 0 < θ < 1, 1 ≤ q ≤ ∞ and f ∈ B^θ_{p,q}, we have that

‖ s^{(1/p)−θ} OSC_p(f, x_0, s) ‖_{L_q((0,1], ds/s)} ≤ c_{(3.3)} ‖f‖_{B^θ_{p,q}},

where the constant c_{(3.3)} > 0 depends at most on (p, θ, q, x_0).

Proof. From [15], Lemma 4.9, we know that

OSC_p(f, x_0, √(1 − t)) ≤ c(1 − t)^{−1/(2p)} ‖f(Y) − f(Z)‖_{L_p}

for f ∈ L_p(R, γ_1), 0 ≤ t < 1 and a two-dimensional Gaussian vector (Y, Z) with Y, Z ∼ N(0, 1) and cov(Y, Z) = t, where c > 0 depends at most on (x_0, p). Looking at [15], Proof of Proposition 4.5(iii), we see that

‖f(Y) − f(Z)‖_{L_p} ≤ 2‖f(W_1) − E(f(W_1) | F_t)‖_{L_p},

so that we can conclude by Theorem 3.1 of this paper. □

Now we turn to the proof of Theorem 3.1. We start with the following proposition.

Proposition 3.4. Let 2 ≤ p < ∞. There exists a constant c_{(3.4)} ≥ 1 depending at most on p such that for any f ∈ L_p(R^d, γ_d) and 0 ≤ t < 1,

K(f, √(1 − t); L_p, D_{1,p}) ∼_{c_{(3.4)}} ‖f(W_1) − F(t, W_t)‖_{L_p} + √(1 − t) ‖f‖_{L_p}.

Proof. (a) Let ǫ > 0. We find f_0 ∈ L_p and f_1 ∈ D_{1,p} such that f = f_0 + f_1 and

f + √1 t f D K(f, √1 t; L , D )+ ǫ. k 0kLp − k 1k 1,p ≤ − p 1,p 10 S. GEISS AND A. TOIVOLA

For F (t,x) := E(f (W ) W = x) we obtain from (2) and (4) that i i 1 | t f(W1) F (t,Wt) k − kLp 1 f (W ) F (t,W ) + F (u, W ) dW ≤ k 0 1 − 0 t kLp ∇ 1 u u Zt Lp

1 1/2 f (W ) F (t,W ) + c F (u, W ) 2 du ≤ k 0 1 − 0 t kLp (4) k∇ 1 u kLp Zt  2 f + c √1 t f D ≤ k 0kLp (4) − k 1k 1,p c[K(f, √1 t; L , D )+ ǫ], ≤ − p 1,p where c := max c , 2 and we employed the facts that 2 p< and that { (4) } ≤ ∞ (6) yields

F1(u, Wu) f1 D for all 0 u 1. k∇ kLp ≤ k k 1,p ≤ ≤ Letting ǫ 0 and observing that √1 t f K(f, √1 t; L , D ) we → − k kLp ≤ − p 1,p achieve the first part of the desired inequality. (b) For 0

g (x) := F (t, √tx) and h (x) := f(x) F (t, √tx) t t − so that

p p p gt D = F (t, √tx) + F (t, √tx)√t k k 1,p k kLp k∇ kLp p p f + F (t,Wt) . ≤ k kLp k∇ kLp Applying (2) for Y = W , (4), the fact that F (t,W ) is nondecreasing k∇ t kLp in t and that 2 p< , we estimate ≤ ∞ t F (t,Wt) F (u, Wu) dWu + F (0,W0) k kLp ≤ ∇ k kLp Z0 Lp

c F (t,Wt) + Ef(W1) . ≤ (4)k∇ kLp | | Thus,

gt D f(W1) F (t,Wt) +(1+ c ) F (t,Wt) + Ef(W1) k k 1,p ≤ k − kLp (4) k∇ kLp | | 1/2 [1 + (1 + c )c (1 t)− ] f(W1) F (t,Wt) + Ef(W1) , ≤ (4) (A.3) − k − kLp | | where we used Lemma A.3. Exploiting an independent Brownian motion W and the fact that the covariance structures of (W1, √tW1 + √1 tW1) and − f f FRACTIONAL SMOOTHNESS 11

(W1,W√t + W1 √t) are the same, we obtain for ht that − h = [E f(W ) Ef(√tW + √1 tW ) p]1/p k tkLp f| 1 − 1 − 1 | EE p 1/p [ f(W1) f(W√t + W1 √t) ] ≤ | −e − |f f(W ) F (√t,W ) + F (√t, W ) f(W + W ) e 1 √t fLp √t √t 1 √t Lp ≤ k − k k − − k = 2 f(W1) F (√t,W ) k − √t kLp f 4 f(W1) F (t,Wt) , ≤ k − kLp where in the last step F (t,Wt) was inserted. Hence, K(f, √1 t; L , D ) − p 1,p 1/2 h + (1 t) g D ≤ k tkLp − k tk 1,p 1/2 (1 t) Ef(W1) +[5+(1+ c )c ] f(W1) F (t,Wt) ≤ − | | (4) (A.3) k − kLp and the proof is complete. 

We are now ready to prove the main result of this section.

Proof of Theorem 3.1. To verify the assumptions of Proposition A.4, we set 0 d (t) := f(W1) F (t,Wt) , k − kLp 1 d (t) := F (t,Wt) , k∇ kLp 2 2 d (t) := D F (t,Wt) , k kLp A := 2c f and α := c c . Then Lemma A.3 implies that (A.3)k kLp (4) ∨ (A.3) k k/2 0 d (t) c (1 t)− d (t) for k = 1, 2. ≤ (A.3) − By (2), (6), the Burkholder–Davis–Gundy inequalities (4) and 2 p< , we also see that ≤ ∞ t 1/2 t 1/2 0 2 1 2 d (t) c F (s,Ws) ds = c [d (s)] ds ≤ (4) k∇ kLp (4) Z0  Z0  and t 1 2 d (t) F (0,W0) + D F (s,Ws) dWs ≤ k∇ kLp Z0 Lp

t 1/2 2 2 2c f L + c D F (s,Ws) ds ≤ (A.3)k k p (4) k kLp Z0  t 1/2 = 2c f + c [d2(s)]2 ds , (A.3)k kLp (4) Z0  12 S. GEISS AND A. TOIVOLA where we used Lemma A.3. Now, applying (21) on page 31 gives the equiva- lence between the last three expressions in Theorem 3.1. It remains to check that θ/2 θ f B c f Lp + (1 t)− F (1,W1) F (t,Wt) Lp Lq ([0,1),(dt)/(1 t)) k k p,q ∼ k k k − k − k k − for some c = c(p,q,θ) 1, which follows from Proposition 3.4.  ≥ Remark 3.5. In the literature, interpolation spaces on the Wiener (or Gaussian) space are considered, for example, in [14, 15, 18, 28, 30]. A classical approach is based on semi-groups. Instead of that, our approach uses the elementary Proposition A.4, which is not related to semi-groups and makes it therefore possible to apply Proposition A.4 in more general situations (see [13]). Regarding the present paper, Proposition A.4 opens the way to extend results from Sections 4 and 5 below to processes different from the (geometric) Brownian motion. Below we want to indicate a possible semi- group approach to Theorem 3.1: (1) The first equivalence of Theorem 3.1 can be deduced in the case q = p (q is the fine-tuning index in the interpolation, Lp the integrability of the underlying spaces) from [18], Remark on page 428. Using the simple observation [9], equation (6), one can transform Hirsch’s condition into

∞ (θp)/2 s p ds s− f(W ) F (e− ,W −s ) , k 1 − e kp s Z0 which is our condition, up to a different scaling. (2) To consider the general case, that is, q = p, and also the other equiva- lences in Theorem 3.1, one can check general results6 about interpolation and semi-groups. There are two natural semi-groups one might use, the Ornstein– Uhlenbeck semi-group and the Poisson semi-group (see [27]). Roughly speak- ing, switching from the Ornstein–Uhlenbeck semi-group to the Poisson semi- group should result in a change of the main interpolation parameter η to our parameter θ = η/2 in the corresponding formulas, cf. [29], Section 1.15.2. Now assume 2 p< and ξ = f(W1) Lp and let (Ts)s 0 be the Ornstein– ≤ ∞ ∈ ≥ Uhlenbeck semi-group on Lp with generator Λ. Then [29], Section 1.13.2, gives η (7) f p + s− Tsξ ξ p Lq ((0, ),(ds)/s) < k k k k − k k ∞ ∞ for the interpolation space (Lp, D(Λ))η,q with 0 <η< 1 and 1 q . By Mehler’s formula, we have ≤ ≤∞

s 2s 2s s T ξ = Ef(fe− W + 1 e W )= F (e− , e− W ) s 1 − − 1 1 for an independent Brownian motionp W . This would give a comparable e f statement to the first equivalence of Theorem 3.1 for the Ornstein–Uhlenbeck f FRACTIONAL SMOOTHNESS 13 semi-group. To come closer to our statement, one can inspect the proof 2s of Proposition 3.4, which gives Tsξ ξ p 4 f(W1) F (e− ,We−2s ) p. Inserting this upper bound into (k7) would− k give≤ ank expression− like in the firstk equivalence of Theorem 3.1. To get the full statement one would still need to try to upper bound f(W1) F (t,Wt) p by Tsξ ξ p in an appropriate way or to find an alternativek way.− Concerningk thek second− k and third equivalence of Theorem 3.1 one might try to exploit [29], Section 1.14.5.

4. The Riemann–Liouville operator DY,θ. Riemann–Liouville type op- erators are typically used to describe fractional regularity. We use these operators to replace the Besov regularity defined by real interpolation when we consider the approximation along adapted time-nets in Theorem 5.5 be- low. The operator, introduced in the following Definition 4.1, was also used in a slightly modified form in [15], where the weak convergence of the error processes was considered.

Definition 4.1. For g(Y_1) ∈ L_2, 0 < θ ≤ 1 and 0 ≤ t ≤ 1, we let

D_t^{Y,θ} g(Y_1) := ( ∫_0^t (1 − u)^{1−θ} H_G^2(u, Y_u) du )^{1/2},

where, with G given by (1),

H_G^2(u, y) := Σ_{k,l=1}^d ( σ_{kk} σ_{ll} ∂²G/(∂y_k ∂y_l)(u, y) )².

From now on, we use the following convention: for x = (x_1, ..., x_d)^⊤ ∈ R^d and 0 ≤ t ≤ 1 we let

(8)  y_k(t) := x_k if Y = W, and y_k(t) := e^{x_k − (t/2)} otherwise,  and  y(t) := (y_1(t), ..., y_d(t))^⊤,

and define the functions f : R^d → R and F : [0, 1] × R^d → R as

(9)  f(x) := g(y(1))  and  F(t, x) := E f(x + W_{1-t}),

so that f(W_1) = g(Y_1) and F(t, x) = G(t, y(t)). In the case that Y is the coordinate-wise geometric Brownian motion, this notation implies that

(10)  y_k(t) y_l(t) ∂²G/(∂y_k ∂y_l)(t, y(t)) = ∂²F/(∂x_k ∂x_l)(t, x) − δ_{k,l} ∂F/∂x_k(t, x)

for k, l = 1, ..., d; indeed, F(t, x) = G(t, y(t)) and ∂y_k/∂x_k = y_k give ∂F/∂x_k = y_k ∂G/∂y_k and ∂²F/(∂x_k ∂x_l) = y_k y_l ∂²G/(∂y_k ∂y_l) + δ_{k,l} y_k ∂G/∂y_k. Let us summarize the connections between the Besov spaces and the operator D^{Y,θ} known to us.

Proposition 4.2. For g(Y_1) ∈ L_p with 2 ≤ p < ∞, the following assertions hold true:

(i) If 2 ≤ p < ∞, 0 < θ < 1 and ‖D_1^{Y,θ} g(Y_1)‖_{L_p} < ∞, then f ∈ B^θ_{p,∞}.
(ii) One has ‖D_1^{Y,1} g(Y_1)‖_{L_p} < ∞ if and only if f ∈ D_{1,p}.
(iii) For p = 2 and 0 < θ < 1, one has ‖D_1^{Y,θ} g(Y_1)‖_{L_2} < ∞ if and only if f ∈ B^θ_{2,2}.

t 1/2 (1 θ)/2 2 (1 t) − H (s,Y ) ds . ≥ − G s Z0  Lp

If Y is the Brownian motion, then we can bound this from below by 1 (1 θ)/2 (1 t) − F (t,Wt) F (0,W0) Lp , c(4) − k∇ −∇ k where we have used (4) and (6). This implies that (θ 1)/2 W,θ F (t,Wt) F (0,W0) + c (1 t) − D f(W1) k∇ kLp ≤ k∇ kLp (4) − k 1 kLp and Theorem 3.1 can be used again. If Y is the coordinate-wise geometric Brownian motion, then we get from (10) that

t 1/2 2 HG(s,Ys) ds Z0  Lp

t 1/2 1 1/2 H2 (s,W ) ds F (s,W ) 2 ds ≥ F s − |∇ s | Z0  Lp Z0  Lp

FRACTIONAL SMOOTHNESS 15

1 1/2 1 2 F (t,Wt) F (0,W0) F (s,Ws) ds ≥ c k∇ −∇ kLp − |∇ | (4) Z0  Lp

1 1

F (t,Wt) Lp F (0,W0) Lp ≥ c(4) k∇ k − c(4) k∇ k 1 1/2 F (s,W ) 2 ds , − |∇ s | Z0  Lp

where we again used the Burkholder–Davis–Gundy inequalities (4). Because the last two terms on the right-hand side are finite, we can conclude as in the case of the Brownian motion. (ii) Because of (10) and ( 1 F (t,W ) 2 dt)1/2 L , we get DY,1g(Y ) 0 |∇ t | ∈ p 1 1 ∈ L if and only if ( 1 D2F (t,W ) 2 dt)1/2 L . Using relations (5) and (6), p 0 R t p one easily checks that| this is equivalent| to∈f D . R 1,p (iii) Since (10) implies the equivalence of ∈

1 Y,θ 2 1 θ 2 D g(Y1) = (1 t) − HG(t,Yt) dt < k 1 kL2 − k kL2 ∞ Z0 1 1 θ 2 2 and (1 t) − D F (t,Wt) dt < , we can use Theorem 3.1.  0 − k kL2 ∞ R 5. An approximation problem in Lp. In the whole section, we use the convention (8) and (9).

Time-nets. Given a sequence 0 = t_0 ≤ t_1 ≤ ··· ≤ t_{n-1} < t_n = 1 and 0 < θ ≤ 1, we let

|(t_i)_{i=0}^n|_θ := sup_{i=1,...,n} sup_{t_{i-1} ≤ u < t_i} (t_i − u)/(1 − u)^{1−θ} = sup_{i=1,...,n} (t_i − t_{i-1})/(1 − t_{i-1})^{1−θ},

and we write |τ| := |τ|_1. The adapted time-nets τ_n^θ = (t_{i,n}^θ)_{i=0}^n are given by

t_{i,n}^θ := 1 − (1 − i/n)^{1/θ}.

For these time-nets,

(11)  |t_{i,n}^θ − u|/(1 − u)^{1−θ} ≤ |t_{i,n}^θ − t_{i-1,n}^θ|/(1 − t_{i-1,n}^θ)^{1−θ} ≤ 1/(θn)  for u ∈ [t_{i-1,n}^θ, t_{i,n}^θ),

which implies that

(12)  |τ_n^θ| ≤ |τ_n^θ|_θ ≤ 1/(θn).

Moreover, we have that

(13)  (1 − t_{i-1,n}^θ)^{1−θ}/(βn) ≤ |t_{i,n}^θ − t_{i-1,n}^θ|

for some β > 0 independent of n.
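A minimal sketch (ours, not from the paper; the function names are our own) that generates the adapted nets τ_n^θ and checks the bound (12) numerically:

import numpy as np

def adapted_net(n, theta):
    # t_{i,n}^theta = 1 - (1 - i/n)^(1/theta), i = 0, ..., n.
    i = np.arange(n + 1)
    return 1.0 - (1.0 - i / n) ** (1.0 / theta)

def theta_mesh(net, theta):
    # |tau|_theta = max_i (t_i - t_{i-1}) / (1 - t_{i-1})^(1 - theta).
    return np.max(np.diff(net) / (1.0 - net[:-1]) ** (1.0 - theta))

n, theta = 100, 0.5
tau = adapted_net(n, theta)
print(theta_mesh(tau, theta), 1.0 / (theta * n))  # the first value should not exceed the second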

The basic equivalence in Lp. The following result reduces the computa- tion of the Lp-norm of the error processes defined in Definition 1.1 to an expression involving the curvature HG(t,Yt) similar to a square function. This result generalizes [11], Theorem 4.4, proved for deterministic nets in the L2-case.

Theorem 5.1. For 2 ≤ p < ∞, there is a constant c_{(5.1)} ≥ 1 depending at most on p such that for all g(Y_1) ∈ L_p and τ = (τ_i)_{i=0}^n ∈ T_rand we have that

‖C_1(g(Y_1), τ)‖_{L_p} ≤ c_{(5.1)} ‖( Σ_{i=1}^n ∫_{τ_{i-1}}^{τ_i} (τ_i − t) H_G^2(t, Y_t) dt )^{1/2}‖_{L_p},

inf_v ‖C_1(g(Y_1), τ, v)‖_{L_p} ≥ (1/c_{(5.1)}) ‖( Σ_{i=1}^n ∫_{τ_{i-1}}^{τ_i} (τ_i − t) H_G^2(t, Y_t) dt )^{1/2}‖_{L_p},

where the infimum is taken over all simple random vectors v_{τ_{i-1}} : Ω → R^d that are F_{τ_{i-1}}-measurable.

Remark 5.2. Both inequalities in Theorem 5.1 are proved by stopping at 0 < T < 1.

T 1/2 (T t)H2 (t,Y ) dt − G t Z0  Lp

c E(g(Y1) T ) Eg(Y1 ) G(0,Y0)(YT Y0) ≤ k |F − −∇ − kLp c g(Y ) Eg(Y ) G(0,Y )(Y Y ) < ≤ k 1 − 1 −∇ 0 1 − 0 kLp ∞ FRACTIONAL SMOOTHNESS 17 so that 1/2 n τi 2 (τi t)HG(t,Yt) dt τi−1 − ! i=1 Z Lp X 1 1/2 (1 t)H2 (t,Y ) dt < . ≤ − G t ∞ Z0  Lp

Following (16) from step (c), this implies that sup C (g(Y ),τ) < 0 T<1 T 1 Lp , from which we can conclude that ≤ k k ∞ 1/2 1 n 2 χ(τi−1,τi](t) G(τi 1,Yτi−1 )σ(Yt) dt Lp 0 |∇ − | ! ∈ Z Xi=1 and C (g(Y ),τ) L . Finally, we have that 1 1 ∈ p (14) inf C1(g(Y1), τ, v) C1(g(Y1),τ) , v k kLp ≤ k kLp Rd where the infimum is taken over all simple random vectors vτi−1 :Ω that are -measurable. The latter also implies that all three express→ions—in Fτi−1 particular the simple and optimal Lp-approximation—in Theorem 5.1 are equivalent up to a multiplicative constant.

Remark 5.3. Theorem 5.1 provides an alternative way to prove the lower bound n δ C1(g(Y1),τ ) k kLp ≥ √n n n rand for some δ> 0, all n = 1, 2,... and all nets τ = (τi)i=1 whenever d ∈T there is no row vector v0 R such that g(Y1)= Eg(Y1)+ v0(Y1 Y0) a.s. This lower bound was obtained∈ in [8] in the one-dimensional case− using an asymptotic argument. To check the lower bound, observe for 0 a < b 1 and ρ := (a τ ) b that ≤ ≤ i ∨ i ∧ 1/2 n τi 2 (τi t)HG(t,Yt) dt τi−1 − ! i=1 Z Lp X 1/2 n ρi 2 (ρi t)HG(t,Yt) dt ≥ ρi−1 − ! i=1 Z Lp X n ρi ρi 1 inf HG(t,Yt) − − ≥ a t

18 S. GEISS AND A. TOIVOLA

Assume now sup0 a

0 = lim inf HG(t,Yt) = lim inf HG(t,Yt) = HG(T,YT ) Lp n an t

n ρi t ρi−1 ∨ ρi−1 ⊤ CT (g(Y1), τ, v)= mρi−1 + λu dWu dWt. ρi−1 ρi−1 Xi=1 Z  Z   Using (4) and the convexity inequality [6], pp. 104–105, p. 171, we achieve p n ρi t ρi−1 ∨ ρi−1 ⊤ mρi−1 + λu dWu dWt ρi−1 ρi−1 i=1 Z  Z   Lp X p/2 n ρi t ρi−1 2 p ∨ ρi−1 ⊤ c− E m + λu dW dt ≥ (4) ρi−1 u i=1 Zρi−1 Zρi−1  ! X n p/2 ρi t ρ i−1 2 p p/2E E ∨ ρi−1 ⊤ c− (p/2)− ρ mρ − + λu dWu dt ≥ (4) F i−1 i 1 i=1 Zρi−1 Zρi−1  ! X

FRACTIONAL SMOOTHNESS 19

p/2 n ρi pE E 2 (c(4) p/2)− ρ mρ − dt ≥ F i−1 | i 1 | i=1 ρi−1 ! p X Z p/2 n ρi p 2 = (c p/2)− E m dt , (4) | ρi−1 | i=1 ρi−1 ! p X Z where we used the assumption that ρi is ρi−1 -measurable. From this, we deduce that F

n ρi t ρi−1 ∨ ρi−1 ⊤ λu dWu dWt ρi−1 ρi−1 i=1 Z Z  Lp X n ρ i mρi−1 dWt ≤ ρi−1 i=1 Z Lp X n ρi t ρi−1 ∨ ρi−1 ⊤ + mρi−1 + λu dWu dWt ρi−1 ρi−1 i=1 Z  Z   Lp X 1/2 n ρi 2 c(4) mρi−1 dt ≤ ρi−1 | | ! i=1 Z Lp X n ρi t ρi−1 ∨ ρi−1 ⊤ + mρi−1 + λu dWu dWt ρi−1 ρi−1 i=1 Z  Z   Lp X n ρi t ρi−1 2 ∨ ρi−1 ⊤ [c(4) p/2 + 1] mρi−1 + λu dWu dWt ≤ ρi−1 ρi−1 i=1 Z  Z   Lp p X so that

CT (g(Y1), τ, v) k kLp n ρ t ρ − 1 i ∨ i 1 ⊤ 2 − ρi−1 [c(4) p/2 + 1] λu dWu dWt . ≥ ρi−1 ρi−1 i=1 Z Z  Lp p X

We continue by writing

n ρi t ρi−1 ∨ ρi−1 ⊤ λu dWu dWt ρi−1 ρi−1 i=1 Z Z  Lp X n 1 t ρi−1 ⊤ = [χ(ρi−1,t ρi−1](u)χ(ρi−1,ρi](t)λu ] dWu dWt 0 0 ∨ i=1 Z Z  Lp X

20 S. GEISS AND A. TOIVOLA

1 t ρ ⊤ = µ (t, u) dWu dWt Z0 Z0  Lp

with the d d-matrix × n n ρ ρi−1 ρi−1 µ (t, u) := χ(ρ − ,t](u)χ(ρ − ,ρ ](t)λu = χ ρ −

1 1 2 1/2 µρ(t, u) dW dt ∼c(4) u Z0 Z0  Lp

1 t 1/2 µρ(t, u) 2 du dt ∼c(A.1) | | Z0 Z0  Lp

1/2 n ρi ρi−1 2 = (ρi t) λt dt . ρi−1 − | | ! i=1 Z Lp X

Letting δ =0 if Y = W and δ =1 if Y is the geometric Brownian motion, this can be combined with 1/2 n ρi 2 (ρi t)HG(t,Yt) dt ρi−1 − ! i=1 Z Lp X 1/2 n ρi 2 δ (ρi t) ( G(t,Yt) αρi−1 )σ(Yt) dt − ρi−1 − | ∇ − | ! i=1 Z Lp X 1/2 n ρi ρi−1 2 (ρi t) λt dt ≤ ρi−1 − | | ! i=1 Z Lp X 1/ 2 n ρi 2 (ρi t)HG(t,Yt) dt ≤ ρi−1 − ! i=1 Z Lp X 1/2 n ρi 2 + δ (ρi t) ( G(t,Yt) αρi−1 )σ(Yt) dt ρi−1 − | ∇ − | ! i=1 Z Lp X

FRACTIONAL SMOOTHNESS 21 so that

CT (g(Y1), τ, v) k kLp 2 1 1 1 [c p/2 + 1]− c− c− ≥ (4) (4) (A.1) p 1/2 n ρi 2 (ρi t)HG(t,Yt) dt × " ρi−1 − ! i=1 Z Lp X 1/2 n ρi 2 δ ( G(t,Yt) αρi−1 )σ(Yt) dt . − ρi−1 | ∇ − | ! # i=1 Z Lp X

In the case of the Brownian motion, the last term disappears. In the case of the geometric Brownian motion, we apply again (4) to see that

1/2 n ρi 2 ( G(t,Yt) αρi−1 )σ(Yt) dt c(4) CT (g(Y1), τ, v) Lp . ρi−1 | ∇ − | ! ≤ k k i=1 Z Lp X

Hence, in both cases, we have that

inf CT (g(Y1), τ, v) v k kLp 1 (15) ≥ 2 [c(4) p/2 + 1]c(4)c(A.1) + c(4) 1/2 p n ρi 2 (ρi t)HG(t,Yt) dt . × ρi−1 − ! i=1 Z Lp X By T 1, we obtain the lower bound of our theorem. ↑ (c) Upper bound for C1(g(Y1),τ) Lp : For 0

ρi−1 νu := ( G(u, Yu) G(τi 1,Yτi−1 )χ τ −

CT (g(Y1),τ) k kLp 1/2 n ρi 2 c(4)c(A.1) (ρi t)HG(t,Yt) dt ≤ ρi−1 − ! i=1 Z Lp X 2 1/2 T t n ρ i−1 + c(4)c(A.1)δ χ ρi−1

22 S. GEISS AND A. TOIVOLA

1/2 n τi 2 c(4)c(A.1) (τi t)HG(t,Yt) dt ≤ τi−1 − ! i=1 Z Lp X 1/2 T t n ρi−1 2 + c(4)c(A.1)δ χ(ρi−1,ρi](u) νu du dt . 0 0 | | ! Z Z i=1 Lp X Because 2 p< , we can continue by ≤ ∞ 1/2 T t n ρi−1 2 χ(ρi−1,ρi](u) νu du dt 0 0 | | ! Z Z Xi=1 Lp 1/2 2 1/2 T t n ρi−1 2 χ(ρi−1,ρi](u) νu du dt ≤ 0 0 | | ! ! Z Z i=1 Lp X 2 1/2 T t n ρi−1 c(4) χ(ρi−1,ρi](u)νu dWu dt ≤ 0 0 ! Z Z i=1 Lp X T 1/2 2 = c Ct(g(Y1),τ) dt . (4) k kLp Z0  Combining these estimates, we achieve 2 CT (g(Y1),τ) k kLp 1/2 2 n τi 2 2 2 2c(4)c(A.1) (τi t)HG(t,Yt) dt ≤ τi−1 − ! i=1 Z Lp X T 4 2 2 + 2c c Ct(g(Y1),τ) dt. (4) (A.1) k kLp Z0 Gronwall’s lemma thus implies that

CT (g(Y1),τ) k kLp 1/2 (16) n τ c4 c2 i (4) (A.1) 2 √2c(4)c(A.1)e (τi t)HG(t,Yt) dt . ≤ τi−1 − ! i=1 Z Lp X Finally, by T 1 we obtain the upper bound in Theorem 5.1.  ↑ Remark 5.4. Our proof requires the assumption that the stopping time τi is τi−1 -measurable so that ρi is ρi−1 -measurable. For example, F ρ F ρ we need that the field (µ (t, u))t,u [0,1] has the property that µ (t, u) is u- ∈ E ρi 2 F measurable. Moreover, in step (b) we used ρ mρ − dt = F i−1 ρi−1 | i 1 | ρi 2 mρ dt. ρi−1 | i−1 | R R FRACTIONAL SMOOTHNESS 23

Approximation with adapted time-nets in L_p. We recall that the nets τ_n^θ are given by

t_{i,n}^θ = 1 − (1 − i/n)^{1/θ}.

The following result extends [14], Theorem 3.2, from the one-dimensional L_2-setting, but see also [20], Theorem 1, for a related d-dimensional L_2-result.

Theorem 5.5. For 2 ≤ p < ∞, 0 < θ ≤ 1 and g(Y_1) ∈ L_p, the following assertions are equivalent:

(i) ‖D_1^{Y,θ} g(Y_1)‖_{L_p} < ∞.
(ii) sup_{τ ∈ T_rand} ‖C_1(g(Y_1), τ)‖_{L_p} / √(‖ |τ|_θ ‖_{L_∞}) < ∞.
(iii) sup_{n ≥ 1} √n ‖C_1(g(Y_1), τ_n^θ)‖_{L_p} < ∞.

In particular, for all τ ∈ T_rand,

(17)  ‖C_1(g(Y_1), τ)‖_{L_p} ≤ c_{(5.1)} ‖ √(|τ|_θ) D_1^{Y,θ} g(Y_1) ‖_{L_p},

where c_{(5.1)} ≥ 1 is the constant from Theorem 5.1.

For the proof, we need the following lemma that extends [14], Lemma 3.8.

Lemma 5.6. Let 0 < θ ≤ 1, 0 < p < ∞ and let φ = (φ_t)_{t∈[0,1)} be a nonnegative, path-wise continuous and adapted process. Then the following assertions are equivalent:

(i) There exists a constant c1 > 0 such that

‖ Σ_{i=1}^n ∫_{t_{i-1}}^{t_i} (t_i − u) φ_u du ‖_{L_p} ≤ c_1 sup_{1≤i≤n} (t_i − t_{i-1})/(1 − t_{i-1})^{1−θ}

for all deterministic time-nets 0 = t_0 ≤ ··· ≤ t_{n-1} < t_n = 1, n = 1, 2, ....

(ii) There exists a constant c_2 > 0 such that, for all n = 1, 2, ...,

‖ Σ_{i=1}^n ∫_{t_{i-1,n}^θ}^{t_{i,n}^θ} (t_{i,n}^θ − u) φ_u du ‖_{L_p} ≤ c_2/n.

(iii) There exists a constant c_3 > 0 such that

‖ ∫_0^1 (1 − u)^{1−θ} φ_u du ‖_{L_p} ≤ c_3.

24 S. GEISS AND A. TOIVOLA

Proof. The implications (iii) (i) (ii) are similar to [14], Lemma ⇒ ⇒ n n n 3.8. For (ii) (iii), take a sequence of deterministic nets τ = (ti )i=0 with 0= tn 0 independent from n [see, e.g., (11) and (13)]. For a fixed 0

n 1 θ (1 ti 1) − n n 2 lim inf sup − − (t t ) φtn n n n i i 1 i−1 ≤ 1 i n ti ti 1 n − − L →∞  ≤ ≤ i N  p − − X∈ T

n n 2 β lim inf n (ti ti 1) φtn . ≤ n − − i−1 →∞ i N n  Lp X∈ T n n n 2 ti n Noticing that (ti ti 1) = 2 tn (ti u) du we continue with − − i−1 − n ti R n n β lim inf n 2 (ti u) du φti−1 n tn − →∞  i N n Z i−1  Lp X∈ T n ti n β lim inf n 2 (ti u)φu du ≤ n tn − →∞  i N n Z i−1 X∈ T

2 n n n + sup φu φti−1 (ti ti 1) tn u

FRACTIONAL SMOOTHNESS 25

n + α sup sup φu φti−1 i N n tn u

+ α lim sup sup sup φ φ n u ti−1 n i N n tn u

Proof of Theorem 5.5. First, we employ Theorem 5.1 to confirm equation (17) by

C1(g(Y1),τ) k kLp 1/2 n τi 2 c(5.1) (τi t)HG(t,Yt) dt ≤ τi−1 − ! i=1 Z Lp X 1/2 n τi 1 θ 2 c(5.1) τ θ (1 t) − HG(t,Yt) dt . ≤ | | τi−1 − ! i=1 Z Lp p X θ 1 Part (i) (ii) follows from (17) and part (ii) (iii) from τn θ θn [see (12)]. To⇒ show that (iii) (i), we apply Theorem⇒ 5.1 and (14|) to| ≤ see that ⇒ n θ 1/2 ti,n c θ 1 θ 2 C1(g(Y1),τn) Lp (ti,n t)HG(t,Yt) dt . √n ≥ k k ≥ c(5.1) tθ − ! i=1 Z i−1,n Lp X Lemma 5.6 completes the proof. 

Approximation with equidistant time-nets in L_p. Here, we extend the L_2-results [7], Theorem 2.3, and [14], Theorem 3.5 for q = ∞, to the L_p-case, as well as [28], Theorem 1.2, which concerned deterministic time-nets and the one-dimensional Brownian motion, to random time-nets and the geometric Brownian motion.

Theorem 5.7. For 2 ≤ p < ∞, 0 < θ < 1 and g(Y_1) ∈ L_p, the following assertions are equivalent:

(i) f ∈ B^θ_{p,∞}.
(ii) sup_{τ ∈ T_rand} ‖C_1(g(Y_1); τ)‖_{L_p} / ‖ |τ|^{θ/2} ‖_{L_∞} < ∞.
(iii) sup_{n=1,2,...} n^{θ/2} ‖C_1(g(Y_1); τ_n)‖_{L_p} < ∞, where τ_n = (i/n)_{i=0}^n are the equidistant time-nets.

In particular, for 1/p = 1/q + 1/r with p ≤ q, r ≤ ∞ and for all τ ∈ T_rand,

(18)  ‖C_1(g(Y_1), τ)‖_{L_p} ≤ c_{(5.1)} ( ∫_0^1 ‖√(ψ(t))‖²_{L_q} (1 − t)^{θ−2} dt )^{1/2} sup_{t ∈ [0,1)} (1 − t)^{1−(θ/2)} ‖H_G(t, Y_t)‖_{L_r},

where c_{(5.1)} ≥ 1 is the constant from Theorem 5.1 and

ψ(t,ω) := max τi(ω) τi 1(ω) (1 t). i=1,...,n| − − | ∧ −   Remark 5.8. The order for the equidistant nets can also be obtained from Theorem 5.5 under the condition DY,θg(Y ) < because k 1 1 kLp ∞ (i/n)n = n θ. | i=0|θ − Proof of Theorem 5.7. To verify (18), we use Theorem 5.1 and derive that 1/2 n τi 2 C1(g(Y1),τ) Lp c(5.1) (τi t)HG(t,Yt) dt k k ≤ τi−1 − ! i=1 Z Lp X 1 1/2 c ψ(t)H2 (t,Y ) dt ≤ (5.1) G t Z0  Lp

1 1/2 2 c ψ(t)HG(t,Yt) dt ≤ (5.1) k kLp Z0  1 p 1/2 c ψ(t) 2 H (t,Y ) 2 dt ≤ (5.1) k kLq k G t kLr Z0  1 p 1/2 2 θ 2 c ψ(t) (1 t) − dt ≤ (5.1) k kLq −  0  Z p FRACTIONAL SMOOTHNESS 27

1 (θ/2) sup (1 t) − HG(t,Yt) Lr . × t [0,1) − k k ∈ Part (i) (ii): we first observe that for ⇒ τ (ω)= max τi(ω) τi 1(ω) | | i=1,...,n| − − | we can compute (for q = ) ∞ 1 2 θ 2 ψ(t) L∞ (1 t) − dt 0 k k − Z p 1 τ L∞ 1 −k| |k θ 2 θ 1 = τ L∞ (1 t) − dt + (1 t) − dt k| |k 0 − 1 τ − Z Z −k| |kL∞ 1 θ 1 1 θ = τ ( τ − 1) + τ k| |kL∞ 1 θ k| |kL∞ − θ k| |kL∞ − 1 τ θ , ≤ θ(1 θ)k| |kL∞ − so that letting q = and r = p in (18) we obtain ∞ c(5.1) C (g(Y ),τ) τ θ/2 sup (1 t)1 (θ/2) H (t,Y ) . 1 1 Lp L∞ − G t Lp k k ≤ θ(1 θ)k| |k t [0,1) − k k − ∈ It remains to check thatp

1 (θ/2) sup (1 t) − HG(t,Yt) Lp < , t [0,1) − k k ∞ ∈ Bθ whenever f p, . This follows from Theorem 3.1, where we additionally use (10) and∈ the∞ a priori estimate

1/2 (19) sup (1 t) F (t,Wt) Lp < t [0,1) − k∇ k ∞ ∈ from Lemma A.3 if Y is the geometric Brownian motion. The implication (ii) (iii) is trivial. ⇒ Part (iii) (i): employing Theorem 5.1 and (14), we achieve ⇒ θ/2 cn− C1(g(Y1),τn) ≥ k kLp 1/2 n i/n 1 i 2 t HG(t,Yt) dt ≥ c(5.1) (i 1)/n n − ! i=1 Z −   Lp X 1 1 1/2 (1 t)H2 (t,Y ) dt ≥ c − G t (5.1) Z(n 1)/n  Lp −

28 S. GEISS AND A. TOIVOLA

1 1/2 1 2 1 (1 t)HG 1 ,Y1 (1/n) dt ≥ c − − n − (5.1) Z(n 1)/n    Lp −

1 1 1 1 = HG 1 ,Y1 (1/n) , c 2 n − n − (5.1) r   Lp

where we use in the last inequality the martingale property of the processes ∂2G σkkσll (t,Yt) . ∂yk ∂yl t [0,1)    ∈ The estimate above means that

1 1 (θ/2) HG 1 ,Y1 (1/n) √2cc(5.1)n − − n − ≤   Lp

for all n = 2, 3,.... Consequently, 1 (θ/2) (θ/2) 1 HG(t,Yt) 2 − √2cc (1 t) − , k kLp ≤ (5.1) − which follows from the monotonicity of HG(t,Yt) Lp . Theorem 3.1 com- pletes the proof, where we use (19) again.k  k

6. Further extensions. We see different open questions and possible extensions, and briefly indicate some of them here: first, one should clarify whether Theorem 5.1 holds true without the additional assumption on the stopping times that τ_i is F_{τ_{i-1}}-measurable. Second, the investigation to what extent the results of this paper can be extended to path-dependent terminal conditions g(Y_{r_1}, ..., Y_{r_L}) and their limits would possibly require new techniques and yield a deeper insight into the approximation problem (cf. [9]). Finally, an extension to more general diffusions would be of interest, but might require a modification of the Besov spaces (see [7]) and a comparison of these modified spaces to the spaces we have used in this paper. As described in Remark 3.5, Proposition A.4 below, which does not rely on semi-groups, might be useful in this respect.

APPENDIX

A key step in the proof of Theorem 5.1 is the following well-known formulation of the Burkholder–Davis–Gundy inequalities.

Lemma A.1. Assume that µ : [0,1] × [0,1] × Ω → R^{d×d} satisfies the following assumptions:

(i) µ : [0,1] × [0,u] × Ω → R^{d×d} is B([0,1]) × B([0,u]) × F_u-measurable for all u ∈ [0,1].

(ii) ∫_0^1 ∫_0^1 E|µ(t,u)|² du dt < ∞, where ∫_0^1 E|µ(t,u)|² du < ∞ for all t ∈ [0,1].
(iii) (∫_0^1 µ(t,u) dW_u)_{t∈[0,1]} is a measurable modification.

Then, for 1 < p < ∞,

‖ ( ∫_0^1 | ∫_0^1 µ(t,u) dW_u |² dt )^{1/2} ‖_{L_p} ∼_{c_{(A.1)}} ‖ ( ∫_0^1 ∫_0^1 |µ(t,u)|² du dt )^{1/2} ‖_{L_p}.

Proof. For the convenience of the reader, we sketch the proof. By a fur- 1 ther modification, we can assume that (( 0 µ(t, u) dWu)(ω))t [0,1] L2[0, 1] ∈ ∈ for all ω Ω because of assumption (ii). Assume that (h ) is the or- ∈ R n n∞=0 thonormal basis of Haar-functions in L2[0, 1] and that µk(t, u) is the kth row of µ(t, u). Letting

1 n,k Lu := hn(t)µk(t, u) dt Z0 and using a stochastic Fubini argument we see that

1/2 1 1 2 1/2 d 1 2 ∞ n,k µ(t, u) dWu dt = Lu dWu . 0 0 Lp 0 ! Z Z  n=0 k=1 Z Lp X X

Using the Burkholder–Davis–Gundy inequalities (4), we obtain that

2 1/2 d 1/2 1 1 ∞ 1 µ(t, u) dW dt Ln,k 2 du u ∼c(4) | u | 0 0 Lp 0 ! Z Z  n=0 k=1 Z Lp X X 1/2 d 1 1 2 = µk(t, u) dt du 0 0 | | ! k=1 Z Z Lp X 1 1 1/2 = µ(t, u) 2 dt du . | |  Z0 Z0  Lp

Lemma A.2. Let 1

a p 1+ t sup Dy G(s,Ys) < for 0

30 S. GEISS AND A. TOIVOLA

Sketch of the proof. We use the notation (8) and (9) and consider first the case that Y is the Brownian motion. A simple direct computation gives the hyper-contraction property

|D_x^a F(t, x)| ≤ C(q, t, a) ‖f‖_{L_p(R^d, γ_d)} e^{|x|²/(2tq)} for 0 < t ≤ 1.

Lemma A.3. There is a constant c_{(A.3)} > 0 depending only on p such that, for all f ∈ L_p(R^d, γ_d) and 0 ≤ t < 1,

(i) ‖∇F(t, W_t)‖_{L_p} ≤ c_{(A.3)} (1 − t)^{−1/2} ‖f(W_1) − F(t, W_t)‖_{L_p},
(ii) ‖D²F(t, W_t)‖_{L_p} ≤ c_{(A.3)} (1 − t)^{−1} ‖f(W_1) − F(t, W_t)‖_{L_p}.

Next, we state some Hardy-type inequalities we have used in the paper.

Proposition A.4. Let 0 < θ < 1, 2 ≤ q ≤ ∞ and let d^k : [0,1) → [0,∞), k = 0, 1, 2, be measurable functions. Assume that

(1/α)(1 − t)^{k/2} d^k(t) ≤ d^0(t) ≤ α ( ∫_t^1 [d^1(s)]² ds )^{1/2}  for t ∈ [0, 1)

and for k = 1, 2, and that

d^1(t) ≤ A + α ( ∫_0^t [d^2(u)]² du )^{1/2}  for t ∈ [0, 1)

for some A ≥ 0 and α > 0. Then

‖(1 − t)^{−θ/2} d^0(t)‖_{L_q([0,1), dt/(1−t))} ∼_{c_{(A.4)}} ‖(1 − t)^{(1−θ)/2} d^1(t)‖_{L_q([0,1), dt/(1−t))}

and

‖(1 − t)^{(2−θ)/2} d^2(t)‖_{L_q([0,1), dt/(1−t))}
≤ c_{(A.4)} ‖(1 − t)^{−θ/2} d^0(t)‖_{L_q([0,1), dt/(1−t))}
≤ c_{(A.4)}² [ A + ‖(1 − t)^{(2−θ)/2} d^2(t)‖_{L_q([0,1), dt/(1−t))} ],

where c_{(A.4)} ≥ 1 depends at most on (α, θ, q). If the functions d^1 and d^2 are nondecreasing, then the inequalities are true for 1 ≤ q < 2 as well.

From Proposition A.4, it follows that

(21)  A + ‖(1 − t)^{(k−θ)/2} d^k(t)‖_{L_q} ∼_{c_{(21)}} A + ‖(1 − t)^{(l−θ)/2} d^l(t)‖_{L_q}

for L_q = L_q([0,1), dt/(1−t)), k, l = 0, 1, 2 and c_{(21)} := [1 + c_{(A.4)}]². To prove Proposition A.4, we need:

Lemma A.5. Let 0 < θ < 1, 2 ≤ q ≤ ∞ and let φ : [0,1) → [0,∞) be a measurable function. Then there is a constant c_{(A.5)} > 0, depending at most on θ, such that

(22)  ‖ (1 − t)^{(1−θ)/2} ( ∫_0^t φ(u)² du )^{1/2} ‖_{L_q([0,1), dt/(1−t))} ≤ c_{(A.5)} ‖ (1 − t)^{1−(θ/2)} φ(t) ‖_{L_q([0,1), dt/(1−t))}.

Moreover, if φ is nondecreasing, the inequality is true for 1 ≤ q < 2 as well.

Proof. (a) For 2 q , we can use Hardy’s inequality (see, e.g., [1], Theorem 3.3.9): for≤ ≤∞<λ< 1 and 1 r < , and a measurable ψ : (0, ) [0, ), −∞ ≤ ∞ ∞ → ∞ r 1/r 1/r ∞ 1 λ ∞ ds dt 1 ∞ 1 λ r dt t − ψ(s) [t − ψ(t)] s t ≤ 1 λ t Z0  Zt   Z0  − q and the same with the supremum norm if r = . With the notation r := 2 , g(t) = [φ(t)]2, and λ = θ, we compute, in the case∞ 2 q< , ≤ ∞ t 1/2 2 (1 θ)/2 2 (1 t) − φ(u) du − Z0  Lq ([0,1),(dt)/(1 t)) − 1 t r 1/r 1 θ dt = (1 t) − g(u) du − 1 t Z0  Z0  −  r 1/r ∞ 1 θ ∞ ds = s − h(v) dv , s Z0  Zs   where h(v)= g(1 v)χ(0,1](v). Now we use Hardy’s inequality for ψ(v)= vh(v) and continue− with r 1/r ∞ 1 θ ∞ dv ds s − ψ(v) v s Z0  Zs   1/r 1 ∞ 1 θ r ds [s − ψ(s)] ≤ 1 θ s − Z0  1/r 1 ∞ 2 θ r ds = [s − h(s)] 1 θ s − Z0  1 1 (θ/2) 2 = (1 t) − φ(t) Lq ([0,1),(dt)/(1 t)) 1 θ k − k − − and the proof is complete for 2 q< . The case q = is analogous. ≤ ∞ ∞ 2 (b) For 1 q< 2, we use a different argument. First, we define r := q so that 1 < r ≤2. For 0

2 θ (T ) with c := 1−θ . This proves the desired inequality for ψ (t) := χ[T,1)(t). Next, we define− ψ := φq so that ψr = φ2. By assumption, φ is nondecreasing, and so is ψ, too. Now, we can approximate ψ from below by a sum of (T ) n n functions like ψ : for each integer n 1, we find αk 0, k = 0,..., 2 1 n n n n ≥ ≥ − and 0= t0

Proof of Proposition A.4. (a) Our assumptions imply for all 1 q that ≤ ≤∞ (1 θ)/2 1 θ/2 0 (1 t) − d (t) Lq([0,1),(dt)/(1 t)) α (1 t)− d (t) Lq([0,1),(dt)/(1 t)) k − k − ≤ k − k − and (2 θ)/2 2 θ/2 0 (1 t) − d (t) Lq([0,1),(dt)/(1 t)) α (1 t)− d (t) Lq([0,1),(dt)/(1 t)) . k − k − ≤ k − k − (b) Next, we observe that

θ/2 0 (1 t)− d (t) Lq ([0,1),(dt)/(1 t)) k − k − 1 1/2 (1 θ)/2 1 1 2 α (1 t) − [d (s)] ds ≤ − 1 t  Zt  Lq ([0,1),(dt)/(1 t)) − − max 1/2,1/q (1 θ)/2 1 αθ − { } (1 t) − d (t) Lq ([0,1) ,(dt)/(1 t)), ≤ k − k − where we used [14], formula (14) (the condition that ψ in [14] is continuous in the case 1 q< 2 is not necessary). ≤ 34 S. GEISS AND A. TOIVOLA

(c) To prove the remaining inequality, we continue from (b) with Lemma A.5 to

(1 θ)/2 1 (1 t) − d (t) Lq([0,1),(dt)/(1 t)) k − k − t 1/2 (1 θ)/2 2 (1 t) − A + α d (u) du ≤ −  Z0   Lq([0,1),(dt)/(1 t)) − (1 θ)/2 A (1 t) − Lq([0,1),(dt)/(1 t)) ≤ k − k − 1 (θ/2) 2  + αc(A.5) (1 t) − d (t) Lq([0,1),(dt)/(1 t)). k − k − REFERENCES

[1] Bennett, C. and Sharpley, R. (1988). Interpolation of Operators. Academic Press, Boston, MA. MR0928802 [2] Bergh, J. and Lofstr¨ om,¨ J. (1976). Interpolation Spaces. An Introduction. Springer, Berlin. MR0482275 [3] Fukasawa, M. (2011). Asymptotically efficient discrete hedging. In Stochastic Anal- ysis with Financial Applications. Progress in Probability 65 331–346. Birkh¨auser, Basel. MR3050797 [4] Fukasawa, M. (2011). Discretization error of stochastic integrals. Ann. Appl. Probab. 21 1436–1465. MR2857453 [5] Fukasawa, M. (2014). Efficient discretization of stochastic integrals. Finance Stoch. 18 175–208. MR3146491 [6] Garsia, A. M. (1973). Martingale Inequalities. Seminar Notes on Recent Progress. Benjamin, Amsterdam. MR0448538 [7] Geiss, C. and Geiss, S. (2004). On approximation of a class of stochastic integrals and interpolation. Stoch. Stoch. Rep. 76 339–362. MR2075477 [8] Geiss, C. and Geiss, S. (2006). On an approximation problem for stochastic integrals where random time nets do not help. Stochastic Process. Appl. 116 407–422. MR2199556 [9] Geiss, C., Geiss, S. and Gobet, E. (2012). Generalized fractional smoothness and Lp-variation of BSDEs with non-Lipschitz terminal condition. Stochastic Pro- cess. Appl. 122 2078–2116. MR2921973 [10] Geiss, C., Geiss, S. and Laukkarinen, E. (2013). A note on Malliavin fractional smoothness for L´evy processes and approximation. Potential Anal. 39 203–230. MR3102985 [11] Geiss, S. (2002). Quantitative approximation of certain stochastic integrals. Stoch. Stoch. Rep. 73 241–270. MR1932161 [12] Geiss, S. (2005). Weighted BMO and discrete time hedging within the Black–Scholes model. Probab. Theory Related Fields 132 13–38. MR2136865 [13] Geiss, S. and Gobet, E. Fractional smoothness of functionals of diffusion processes under a change of measure. Available at arXiv:1210.4572. [14] Geiss, S. and Hujo, M. (2007). Interpolation and approximation in L2(γ). J. Approx. Theory 144 213–232. MR2293387 [15] Geiss, S. and Toivola, A. (2009). Weak convergence of error processes in discretiza- tions of stochastic integrals and Besov spaces. Bernoulli 15 925–954. MR2597578 FRACTIONAL SMOOTHNESS 35

[16] Gobet, E. and Munos, R. (2005). Sensitivity analysis using Itˆo–Malliavin calculus and martingales, and application to stochastic optimal control. SIAM J. Control Optim. 43 1676–1713 (electronic). MR2137498 [17] Gobet, E. and Temam, E. (2001). Discrete time hedging errors for options with irregular payoffs. Finance Stoch. 5 357–367. MR1849426 [18] Hirsch, F. (1999). Lipschitz functions and fractional Sobolev spaces. Potential Anal. 11 415–429. MR1719833 [19] Hujo, M. (2006). Is the approximation rate for European pay-offs in the Black– Scholes model always 1/√n? J. Theoret. Probab. 19 190–203. MR2256486 [20] Hujo, M. (2007). On discrete time hedging in d-dimensional option pricing models. Available at arXiv:math/0703481. [21] Janson, S., Nilsson, P. and Peetre, J. (1984). Notes on Wolff’s note on inter- polation spaces. Proc. Lond. Math. Soc. (3) 48 283–299. With an appendix by Misha Zafran. MR0729071 [22] Martini, C. and Patry, C. (1999). Variance optimal hedging in the Black–Scholes model for a given number of transactions. Preprint 3767, INRIA, Paris. [23] Nualart, D. (2006). The Malliavin Calculus and Related Topics, 2nd ed. Probability and Its Applications (New York). Springer, Berlin. MR2200233 [24] Pisier, G. (1988). Riesz transforms: A simpler analytic proof of P.-A. Meyer’s in- equality. In S´eminaire de Probabilit´es, XXII. Lecture Notes in Math. 1321 485– 501. Springer, Berlin. MR0960544 [25] Pollard, H. (1948). The mean convergence of orthogonal series. II. Trans. Amer. Math. Soc. 63 355–367. MR0023941 [26] Seppal¨ a,¨ H. (2010). On the optimal approximation rate of certain stochastic inte- grals. J. Approx. Theory 162 1631–1653. MR2718889 [27] Stein, E. (1970). Topics in harmonic analysis. In Annals of Mathematical Studies. Princeton Univ. Press, Princeton, NJ. [28] Toivola, A. (2009). Interpolation and approximation in Lp. Preprint 380, Dept. Mathematics and Statistics, Univ. Jyv¨askyl¨a. [29] Triebel, H. (1978). Interpolation Theory, Function Spaces, Differential Operators. North-Holland, Amsterdam. MR0503903 [30] Watanabe, S. (1993). Fractional order Sobolev spaces on Wiener space. Probab. Theory Related Fields 95 175–198. MR1214086 [31] Zhang, R. (1998). Couverture approch´ee des options Europ´eennes. Ph.D. thesis, Ecole´ Nationale des Ponts et Chauss´ees, Paris.

University of Jyväskylä
P.O. Box 35
FIN-40014
Finland
and
University of Innsbruck
Technikerstraße 13/7
A-6020 Innsbruck
Austria
E-mail: [email protected]

University of Jyväskylä
P.O. Box 35
FIN-40014
Finland
E-mail: anni.toivola@jyu.fi