arXiv:2001.10453v2 [math.PR] 14 Dec 2020

LIMIT THEOREMS FOR A STABLE SAUSAGE

WOJCIECH CYGAN, NIKOLA SANDRIĆ, AND STJEPAN ŠEBEK

Abstract. In this article, we study fluctuations of the volume of a stable sausage, that is, a Lévy sausage defined via a d-dimensional rotationally invariant α-stable process. As the main results, we establish a functional central limit theorem (in the case when d/α > 3/2) with a standard one-dimensional Brownian motion in the limit, and Khintchine's and Chung's laws of the iterated logarithm (in the case when d/α > 9/5).

Key words and phrases: functional central limit theorem, law of the iterated logarithm, stable process, stable sausage.

2010 Mathematics Subject Classification: 60F05, 60G52, 60F17.

1. Introduction

Let X = {X_t}_{t≥0} be a Lévy process in R^d defined on a probability space (Ω, F, P). The Lévy sausage associated with the process X and a given compact set K ⊂ R^d on the time interval [s, t] is the random set defined as

S^K[s, t] = ⋃_{s≤u≤t} {X_u + K},

and we use the notation S_t^K = S^K[0, t]. Let λ(dx) be the Lebesgue measure on R^d, and let us denote by V^K[s, t] = λ(S^K[s, t]) the volume of the Lévy sausage S^K[s, t] (we write V_t^K = V^K[0, t]). Hawkes [11] proved that if X is transient then

(1.1)  E[V_t^K] = ∫_{R^d} P_x(τ_K ≤ t) dx,

where τ_K = inf{t ≥ 0 : X_t ∈ K} is the first hitting time of K and P_x is the probability measure related to the process X started at x ∈ R^d. Port and Stone [22, Theorem 11.1] proved that

(1.2)  lim_{t↗∞} E[V_t^K]/t = Cap(K),

where Cap(K) is the capacity of the set K associated with the process X (defined via the identity in eq. (1.4) below). In view of the subadditivity of the process {V^K[s, t]}_{0≤s≤t}, that is,

V^K_{t+s} ≤ V^K_t + V^K[t, t + s],  t, s ≥ 0,

eq. (1.2) combined with Kingman's ergodic theorem (cf. [16, Ch. I, Theorem 5.6] and [15, Proposition 3.12]) implies the following strong law of large numbers:

(1.3)  lim_{t↗∞} V_t^K/t = Cap(K)  P-a.s.

More satisfactory limit theorems for the volume of a Lévy sausage are known if X is a standard Brownian motion. In this case S_t^K is called a Wiener sausage, and there is a vast amount of literature concerning its asymptotic behavior. Already in the pioneering work [28], Spitzer obtained the asymptotic behavior of the expected volume of a Wiener sausage. A large deviation principle for the volume of a Wiener sausage was established by Donsker and Varadhan; their result was extended by Eisele and Lang [8] to the
case when the driving process is a standard Brownian motion with drift, and to a class of elliptic diffusions by Sznitman [29], while Ōkura investigated similar questions for a certain class of symmetric Lévy processes. Le Gall [17] obtained a central limit theorem for the volume of a Wiener sausage in dimensions d ≥ 2, with different normalizing sequences and distributions in the limit for d = 2, d = 3 and d ≥ 4, respectively. More recently, van den Berg, Bolthausen and den Hollander [34] studied the problem of intersections of two Wiener sausages, see also [31], [32] and [33]. For further limit theorems for the volume of a Wiener sausage see [3], [12] and [35]. We remark that the first studies of a Wiener sausage were motivated by its applications in physics [14]. We refer the reader to the book by Simon [27] for a comprehensive discussion of this topic.

In the present article, we focus on the limit behavior of the volume of a stable sausage, that is, a Lévy sausage corresponding to a stable Lévy process. The asymptotic behavior of stable sausages has not been extensively studied yet. In the seminal paper [6] Donsker and Varadhan obtained a large deviation principle for the volume of a stable sausage. Some other works were concerned with the expansion of the expected volume of a stable sausage. More precisely, Getoor [9] proved eq. (1.2) for rotationally invariant α-stable processes with d > α and for any compact set K. He also investigated the first order asymptotics of the difference E[V_t^K] − t Cap(K), whose form depends on the value of the ratio d/α, see [9, Theorem 2]. The second order terms in this expansion were found by Port [21] for all strictly stable processes satisfying some extra assumptions. In [24] Rosen established asymptotic expansions for the volume of a stable sausage in the plane with the coefficients represented by n-fold self-intersections of the stable process.
In this article, we obtain a central limit theorem for the volume of a stable sausage. We then apply this result to study convergence of the volume process in the Skorohod space, and establish the corresponding functional central limit theorem. Finally, we also obtain Khintchine's and Chung's laws of the iterated logarithm for this process.

Before we formulate our results, we briefly recall some basic notation from the potential theory of stable processes. Let X be a rotationally invariant stable Lévy process of index α ∈ (0, 2], that is, a Lévy process whose bounded continuous transition density p(t, x) is uniquely determined by the Fourier transform
e^{−t|ξ|^α} = ∫_{R^d} e^{i(x,ξ)} p(t, x) dx,

where (x, ξ) stands for the inner product in R^d, |x| = (x, x)^{1/2} is the Euclidean norm, and dx = λ(dx). We assume that X is transient, which holds if (and only if) d > α. Its Green function is then given by G(x) = ∫_0^∞ p(t, x) dt. Let B(R^d) denote the family of all Borel subsets of R^d. For each B ∈ B(R^d) there exists a unique Borel measure µ_B(dx) supported on B such that
(1.4)  P_x(τ_B < ∞) = ∫_{R^d} G(x − y) µ_B(dy).

The measure µ_B(dx) is called the equilibrium measure of B, and its capacity Cap(B) is defined as the total mass of µ_B(dx), that is, Cap(B) = µ_B(B). We denote by B(x, r) the closed Euclidean ball centered at x ∈ R^d of radius r > 0. In the case when r = 1 and x = 0, we write B = B(0, 1). If B = B(0, r) then the measure µ_B(dy) has a density which is proportional to (r² − |y|²)^{−α/2}. In particular, we have (see for instance [30])

Cap(B) = Γ(d/2) / (Γ(α/2) Γ(1 + (d − α)/2)).

In the case when K = B, we simply write V_t instead of V_t^B (and similarly S_t for S_t^B). Let N(0, 1) denote the Gaussian random variable with mean zero and variance one. Our central limit theorem (see Theorem 2.1) for the volume of a stable sausage asserts that if d/α > 3/2 then there exists a constant σ = σ(d, α) > 0 such that
(1.5)  (V_t − t Cap(B)) / (σ√t)  →^{(d)}_{t↗∞}  N(0, 1),

where the convergence holds in distribution. The cornerstone of the proof of eq. (1.5) is to represent V_t as a sum of independent random variables plus an error term. For this we use the inclusion–exclusion formula together with the Markov property and rotational invariance of the process X. More precisely, for t, s ≥ 0, we have

(1.6)  V_{t+s} = λ(S_t ∪ S[t, t+s]) = λ((S_t − X_t) ∪ (S[t, t+s] − X_t))
             = V_t^{(1)} + V_s^{(2)} − λ(S_t^{(1)} ∩ S_s^{(2)}),

where V_t^{(1)} and V_s^{(2)} (S_t^{(1)} and S_s^{(2)}) are independent and have the same law as V_t and V_s (S_t and S_s), respectively. This decomposition allows us to apply the Lindeberg–Feller central limit theorem in the present context. The first key step is to find estimates for the error term λ(S_t^{(1)} ∩ S_s^{(2)}), which we give in Section 2.1. The second step is to control the variance of the volume of a stable sausage, which is achieved in Section 2.2.

Let us emphasize that the present article has been mainly inspired by Le Gall's work [17] where he studied fluctuations of the volume of a Wiener sausage (the case α = 2). Among other results, he established the central limit theorem in eq. (1.5) for dimensions d ≥ 4. Still another source of motivation was the article [18] by Le Gall and Rosen where they proved a corresponding central limit theorem for the range of stable random walks and mentioned that it is plausible that a similar result holds for stable sausages, see [18, Page 654]. Both of these articles were also concerned with the lower-dimensional cases d < 4 and d/α ≤ 3/2, respectively. In the present article we are only interested in the case when d/α > 3/2, and we postpone the study of the remaining values of the ratio d/α to follow-up articles.

As an application of eq. (1.5) we obtain a functional central limit theorem (see Theorem 3.1) which states that under the same assumptions, and with the same constant σ > 0,

(1.7)  {(V_{nt} − nt Cap(B)) / (σ√n)}_{t≥0}  →^{(J_1)}_{n↗∞}  {W_t}_{t≥0}.

Here, the convergence holds in the Skorohod space D([0, ∞), R) endowed with the J_1 topology, and {W_t}_{t≥0} is a standard Brownian motion in R. The proof of eq. (1.7) is performed according to a general two-step scheme: (i) convergence of finite-dimensional distributions, which follows from eq. (1.5); (ii) tightness, which we investigate by employing the well-known Aldous criterion, see Section 3 for details. It is remarkable that the results in eqs. (1.5) and (1.7) correspond to analogous results for the range (and its capacity) of stable random walks on the integer lattice Z^d which we discussed in [4] and [5], respectively.

We finally use eq. (1.5) to study the growth of the paths of the volume of the stable sausage. In the case when d/α > 9/5, we prove Khintchine's law of the iterated logarithm

(1.8)  liminf_{t↗∞} (V_t − t Cap(B)) / √(2σ² t log log t) = −1  and  limsup_{t↗∞} (V_t − t Cap(B)) / √(2σ² t log log t) = 1  P-a.s.,

as well as Chung's law of the iterated logarithm

(1.9)  liminf_{t↗∞} sup_{0≤s≤t} |V_s − s Cap(B)| / √(σ² t / log log t) = π/√8  P-a.s.
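The decomposition in eq. (1.6) is easiest to see in its discrete counterpart for the range of a random walk (the setting of Le Gall and Rosen [18]): the range up to time n + m is the range up to time n, plus a recentred copy of the post-n range (independent of the first piece and with the law of an m-step range), minus the overlap. The following sketch is a toy illustration of ours, not taken from the paper; it checks this set identity pathwise for a simple random walk on Z.

```python
import random

def srw_path(n, rng):
    # simple symmetric random walk on Z started at 0; returns X_0, ..., X_n
    x, path = 0, [0]
    for _ in range(n):
        x += rng.choice((-1, 1))
        path.append(x)
    return path

rng = random.Random(7)
n, m = 50, 30
path = srw_path(n + m, rng)

R_total = set(path)                    # range up to time n + m
R_first = set(path[: n + 1])           # range up to time n
# post-n range recentred at X_n: independent of R_first, law of an m-step range
R_second = {x - path[n] for x in path[n:]}
overlap = R_first & set(path[n:])

# discrete analogue of eq. (1.6): |R_{n+m}| = |R_n| + |R'_m| - |overlap|
assert len(R_total) == len(R_first) + len(R_second) - len(overlap)
print(len(R_total), len(R_first), len(R_second), len(overlap))
```

The identity is exact for every path, since recentring is a bijection; only the independence of the two pieces and the law of the second piece rely on the Markov property.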
Our proof is based on the approach developed by Jain and Pruitt [13] (see also [1]) in the context of the range of random walks, and then successfully applied in [35] to obtain Khintchine's and Chung's laws of the iterated logarithm for a Wiener sausage in dimensions d ≥ 4.

The rest of the paper is organized as follows. In Section 2 we prove the central limit theorem in eq. (1.5). For this we first deal with the error terms derived from eq. (1.6), and in the second part we show that the variance of the volume of a stable sausage behaves linearly at infinity. Section 3 is devoted to the proof of eq. (1.7), and in Section 4 we prove eqs. (1.8) and (1.9). In Section 5 we present the proofs of some technical results which we need in the course of the study.
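As a quick numerical aside before turning to the proofs: the Fourier-transform characterization of p(t, x) and the closed-form capacity of the unit ball given above can both be sanity-checked directly. The sketch below is our illustration, not part of the paper; it treats the only case with an elementary density, α = 2 and d = 1, where p(t, ·) is a centered Gaussian with variance 2t (the Fourier identity itself does not require transience), and evaluates the displayed Cap(B) formula. The names `char_fun` and `cap_unit_ball` are ad hoc.

```python
import math

def p(t, x):
    # Gaussian density with characteristic function exp(-t * xi**2),
    # i.e. the alpha = 2, d = 1 transition density (mean 0, variance 2t).
    return math.exp(-x * x / (4.0 * t)) / math.sqrt(4.0 * math.pi * t)

def char_fun(t, xi, h=1e-3, L=60.0):
    # Riemann-sum approximation of int e^{i x xi} p(t, x) dx; by symmetry
    # the imaginary part vanishes, so it suffices to integrate cos(x xi) p(t, x).
    n = int(2 * L / h)
    return h * sum(math.cos((-L + k * h) * xi) * p(t, -L + k * h) for k in range(n))

def cap_unit_ball(d, alpha):
    # Capacity of the closed unit ball B(0, 1) for the rotationally
    # invariant alpha-stable process in R^d (requires d > alpha).
    return math.gamma(d / 2) / (math.gamma(alpha / 2) * math.gamma(1 + (d - alpha) / 2))

print(abs(char_fun(1.0, 1.3) - math.exp(-1.0 * 1.3 ** 2)))  # close to 0
print(cap_unit_ball(3, 2.0))  # Brownian case d = 3: Gamma(3/2)/(Gamma(1)Gamma(3/2)) = 1.0
```

For α < 2 the density has no elementary closed form, which is one reason the paper works directly with hitting-time and capacity estimates instead.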
2. Central limit theorem

The goal of this section is to prove the following central limit theorem. We assume that X is a rotationally invariant stable Lévy process in R^d of index α ∈ (0, 2] satisfying d/α > 3/2.

Theorem 2.1. Under the above assumptions, there exists a constant σ = σ(d, α) > 0 such that

(V_t − t Cap(B)) / (σ√t)  →^{(d)}_{t↗∞}  N(0, 1).

We remark that Theorem 2.1 holds for any closed ball B(x, r), with a possibly different constant σ > 0. Moreover, as indicated by eq. (1.2), the statement of the theorem remains valid if we replace the term t Cap(B) with E[V_t]. Before we embark on the proof of the theorem, which is given at the end of the section, we first need to find satisfactory estimates for the error term in the decomposition of eq. (1.6). The next step is to investigate the variance Var(V_t) of the volume of a stable sausage and to show that it behaves as σ²t at infinity.
2.1. Error term estimates. We assume that X is defined on the canonical space Ω = D([0, ∞), R^d) of all càdlàg functions ω : [0, ∞) → R^d. It is endowed with the Borel σ-algebra F generated by the Skorokhod J_1 topology. Then X is understood as the coordinate process, that is, X_t(ω) = ω(t), and the shift operator θ_t acting on Ω is defined by

θ_t ω(s) = ω(t + s),  t, s ≥ 0.

In what follows we use the notation

S^K[t, ∞) = ⋃_{s≥t} {X_s + K},  t ≥ 0.

We also write S^K_∞ = S^K[0, ∞), V^K_∞ = λ(S^K_∞) and V^K[t, ∞) = λ(S^K[t, ∞)), t ≥ 0. We start with a lemma which enables us to represent the expected volume of the intersection of two sausages in terms of the difference E[V^K_t] − t Cap(K).

Lemma 2.2. For any compact set K and all t ≥ 0 it holds that

E[V^K_t] − t Cap(K) = E[λ(S^K_t ∩ S^K[t, ∞))].

Proof. We clearly have

V^K_∞ = V^K_t + V^K[t, ∞) − λ(S^K_t ∩ S^K[t, ∞)),

which implies

λ(S^K_t ∩ S^K[t, ∞)) = V^K_t − λ(S^K_t \ S^K[t, ∞)).
Hence, it suffices to show that

(2.1)  E[λ(S^K_t \ S^K[t, ∞))] = t Cap(K),  t ≥ 0.

By rotational invariance of the process X we have
E[λ(S^K_t \ S^K[t, ∞))] = ∫_{R^d} P(x ∈ S^K_t \ S^K[t, ∞)) dx
= ∫_{R^d} P_x( ⋃_{0≤s≤t} {X_s ∈ K} ∩ ⋂_{s≥t} {X_s ∉ K} ) dx
(2.2)  = ∫_{R^d} P_x(0 < η_K ≤ t) dx,

where η_K is the last exit time of the process X from the set K, that is,

η_K = { sup{t > 0 : X_t ∈ K},  τ_K < ∞,
      { 0,                     τ_K = ∞.

We observe that {η_K > t} = {τ_K ∘ θ_t < ∞}, which together with eq. (1.4) yields

P_x(η_K > t) = ∫_{R^d} p(t, x, y) P_y(τ_K < ∞) dy = ∫_{R^d} ∫_t^∞ p(s, x, z) ds µ_K(dz),

where we used the notation p(t, x, y) = p(t, y − x). We obtain

P_x(0 < η_K ≤ t) = ∫_{R^d} ∫_0^t p(s, x, y) ds µ_K(dy).

This and eq. (2.2) imply

E[λ(S^K_t \ S^K[t, ∞))] = ∫_{R^d} ∫_0^t ∫_{R^d} p(s, y, x) dx ds µ_K(dy) = t Cap(K),

and the proof is finished.

In the following lemma we show how one can easily estimate the higher moments of the volume of the intersection of two sausages through an estimate of the first moment.

Lemma 2.3. Let X′ be an independent copy of the process X such that X′_0 = X_0, and let S′_t, t ≥ 0, denote the sausage associated with X′. Then for all k ∈ N and t ≥ 0 it holds that

E[λ(S′_t ∩ S_∞)^k] ≤ 2^{k−1}(k!)² E[λ(S′_t ∩ S_∞)]^k.

Proof. We observe that

(2.3)  E[λ(S′_t ∩ S_∞)] = ∫_{R^d} P(x ∈ S′_t) P(x ∈ S_∞) dx = ∫_{R^d} P_x(τ_B ≤ t) P_x(τ_B < ∞) dx,

where we used rotational invariance of X. Similarly, for k ≥ 1 we have

(2.4)  E[λ(S′_t ∩ S_∞)^k] = ∫_{R^d} ⋯ ∫_{R^d} P(x_1, …, x_k ∈ S′_t) P(x_1, …, x_k ∈ S_∞) dx_1 ⋯ dx_k.

By the strong Markov property, we obtain

P(x_1, …, x_k ∈ S_t) = P(τ_{B−x_1} ≤ t, …, τ_{B−x_k} ≤ t)
(2.5)  ≤ k! P(τ_{B−x_1} ≤ ⋯ ≤ τ_{B−x_k} ≤ t)
       ≤ k! P(τ_{B−x_1} ≤ ⋯ ≤ τ_{B−x_{k−1}} ≤ t) sup_{z∈B−x_{k−1}} P_z(τ_{B−x_k} ≤ t).

For any w ∈ B we have B − w ⊆ B(0, 2), and hence

P_{w−x_{k−1}}(τ_{B−x_k} ≤ t) = P_{x_k−x_{k−1}}(τ_{B−w} ≤ t) ≤ P_{x_k−x_{k−1}}(τ_{B(0,2)} ≤ t).
For x ∈ R^d and B ∈ B(R^d) we set τ^x_B = inf{t ≥ 0 : x + X_t ∈ B} and show that τ^x_{B(0,2)} =^{(d)} 2^α τ^{x/2}_B, that is, the random variables τ^x_{B(0,2)} and 2^α τ^{x/2}_B are equal in distribution. Indeed, an easy calculation yields
τ^x_{B(0,2)} = inf{t ≥ 0 : |x + X_t| ≤ 2} = inf{t ≥ 0 : |x/2 + X_t/2| ≤ 1}
 =^{(d)} inf{t ≥ 0 : |x/2 + X_{t/2^α}| ≤ 1} = 2^α inf{s ≥ 0 : |x/2 + X_s| ≤ 1} = 2^α τ^{x/2}_B.

This implies that for arbitrary w ∈ B,

P_{w−x_{k−1}}(τ_{B−x_k} ≤ t) ≤ P(τ^{x_k−x_{k−1}}_{B(0,2)} ≤ t) = P(2^α τ^{(x_k−x_{k−1})/2}_B ≤ t) ≤ P_{(x_k−x_{k−1})/2}(τ_B ≤ t).

In particular,
sup_{z∈B−x_{k−1}} P_z(τ_{B−x_k} ≤ t) ≤ P_{(x_k−x_{k−1})/2}(τ_B ≤ t).

By combining this with eq. (2.5) and iterating the whole procedure, we obtain
P(x_1, …, x_k ∈ S_t) ≤ k! P_{x_1}(τ_B ≤ t) P_{(x_2−x_1)/2}(τ_B ≤ t) ⋯ P_{(x_k−x_{k−1})/2}(τ_B ≤ t).

Similarly, it follows that
P(x_1, …, x_k ∈ S_∞) ≤ k! P_{x_1}(τ_B < ∞) P_{(x_2−x_1)/2}(τ_B < ∞) ⋯ P_{(x_k−x_{k−1})/2}(τ_B < ∞).

Applying the last two inequalities to eq. (2.4) and using eq. (2.3) finishes the proof.

Corollary 2.4. In the notation of Lemma 2.3, for all k ∈ N and t > 0 large enough there is a constant c = c(d, α) > 0 such that

(2.6)  E[λ(S′_t ∩ S_∞)^k] ≤ 2^{k−1}(k!)² c^k h(t)^k,

where the function h : (0, ∞) → (0, ∞) is defined as

(2.7)  h(t) = { 1,           d/α > 2,
              { log(t + e),  d/α = 2,
              { t^{2−d/α},   d/α ∈ (1, 2).

Proof. It follows from [9, Theorem 2] that there is a constant c(d, α) > 0 such that for t > 0 large enough
(2.8)  ∫_{R^d} P_x(τ_B ≤ t) dx − t Cap(B) ≤ c(d, α) h(t),

where the function h(t) is given in eq. (2.7). We observe that by the Markov property and translation invariance of λ(dx) we have

E[λ(S′_t ∩ S_∞)] = E[λ((S_t − X_t) ∩ (S[t, ∞) − X_t))] = E[λ(S_t ∩ S[t, ∞))].

Thus, eq. (1.1) combined with Lemma 2.2 and eq. (2.8) implies the assertion for k = 1. For k > 1 the result follows by Lemma 2.3.
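The three error regimes in eqs. (2.7)-(2.8) can be written out directly; the sketch below is a plain transcription of eq. (2.7) (the function name and signature are ours, added for illustration).

```python
import math

def h(t, d, alpha):
    # The function h from eq. (2.7): it bounds, up to a constant c(d, alpha),
    # the difference \int P_x(tau_B <= t) dx - t Cap(B) for large t.
    r = d / alpha
    assert r > 1 and t > 0
    if r > 2:
        return 1.0                    # d/alpha > 2: bounded error
    if r == 2:
        return math.log(t + math.e)   # d/alpha = 2: logarithmic error
    return t ** (2 - r)               # 1 < d/alpha < 2: polynomial error

print(h(100.0, 5, 2))  # d/alpha = 2.5 -> 1.0
print(h(9.0, 3, 2))    # d/alpha = 3/2 -> 9^{1/2} = 3.0
```

Note that under the standing assumption d/α > 3/2 one always has h(t)² = o(t), which is exactly what the variance estimates of Section 2.2 need.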
2.2. Variance of the volume of a stable sausage. Our aim in this section is to determine the constant σ in Theorem 2.1. We can easily adapt the approach of [4, Lemma 4.3] to the present setting and combine it with [10, Theorem 2] to conclude that the limit below exists:

(2.9)  lim_{t↗∞} Var(V_t)/t = σ².
The main difficulty is to show that σ is strictly positive, and this is obtained in the following crucial lemma. We adapt the proof of [17, Lemma 4.2], but let us emphasize that it is a laborious task to adjust it to the case of stable processes.

Lemma 2.5. The constant σ in eq. (2.9) is strictly positive.

Proof. We split the proof into several steps, and we notice that it clearly suffices to restrict our attention to integer values of the parameter t.

Step 1. We start by finding a handy decomposition of the variance Var(V_n) expressed as a sum of specific random variables, see eq. (2.18). We assume that X_0 = 0, and we set

(2.10)  Ŝ[s, t] = S[s, t] \ S_s,  0 ≤ s ≤ t < ∞.

For n, N ∈ N such that 1 ≤ n ≤ N we have

V_n + λ(Ŝ[n, N]) = V_N.

Let F_t = σ(X_s : s ≤ t). Since V_n is F_n-measurable, we obtain

V_n + E[λ(Ŝ[n, N]) | F_n] = E[V_N | F_n],

and by taking expectations and subtracting¹

⟨V_n⟩ + ⟨E[λ(Ŝ[n, N]) | F_n]⟩ = E[V_N | F_n] − E[V_N].

Hence

(2.11)  ⟨V_n⟩ + ⟨E[λ(Ŝ[n, N]) | F_n]⟩ = Σ_{k=1}^n U_k^N,

where

U_k^N = E[V_N | F_k] − E[V_N | F_{k−1}].

We first discuss the second term on the left-hand side of eq. (2.11). We claim that

(2.12)  ⟨E[λ(Ŝ[n, N]) | F_n]⟩ = −⟨E[λ(S[n, N] ∩ S_n) | F_n]⟩.

Indeed, by eq. (2.10) we have

λ(Ŝ[n, N]) = λ(S[n, N]) − λ(S[n, N] ∩ S_n),

and the independence of the increments of the process X implies that

E[λ(Ŝ[n, N]) | F_n] = E[λ(S[n, N]) | F_n] − E[λ(S[n, N] ∩ S_n) | F_n]
 = E[λ(S[n, N] − X_n) | F_n] − E[λ(S[n, N] ∩ S_n) | F_n]
 = E[λ(S[n, N] − X_n)] − E[λ(S[n, N] ∩ S_n) | F_n]
 = E[λ(S[n, N])] − E[λ(S[n, N] ∩ S_n) | F_n].

Taking expectation and then subtracting the two relations yields eq. (2.12). Next we deal with the random variables U_k^N for k = 1, …, N.
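The increments U_k^N = E[V_N | F_k] − E[V_N | F_{k−1}] are martingale differences whose sum telescopes to V_N − E[V_N], which is what eq. (2.11) exploits. This mechanism can be checked by exact enumeration on a toy functional of finitely many ±1 increments; the range of a short walk stands in for the sausage volume here, and everything in the sketch is our illustration, not the paper's construction.

```python
from itertools import product

N = 6

def V(steps):
    # toy functional: number of sites visited by the walk with these increments
    x, visited = 0, {0}
    for s in steps:
        x += s
        visited.add(x)
    return len(visited)

def cond_exp(prefix):
    # E[V | F_k] when the first k = len(prefix) increments are fixed:
    # average V over all equally likely completions of the path.
    tails = product((-1, 1), repeat=N - len(prefix))
    vals = [V(prefix + tail) for tail in tails]
    return sum(vals) / len(vals)

EV = cond_exp(())  # E[V]
for path in product((-1, 1), repeat=N):
    # U_k = E[V|F_k] - E[V|F_{k-1}]; summing over k telescopes to V - E[V]
    U = [cond_exp(path[:k]) - cond_exp(path[:k - 1]) for k in range(1, N + 1)]
    assert abs(sum(U) - (V(path) - EV)) < 1e-9
print("telescoping identity verified for all", 2 ** N, "paths")
```

In the paper the same telescoping is carried out for V_N, and the extra work lies in identifying the L¹ limit of each U_k^N as N → ∞.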
By the independence of the increments of the process X, we obtain

U_k^N = E[λ(S_k) + λ(S[k, N]) − λ(S_k ∩ S[k, N]) | F_k]
 − E[λ(S_{k−1}) + λ(S[k−1, N−1]) − λ(S_{k−1} ∩ S[k−1, N−1]) | F_{k−1}]
 − E[λ(Ŝ[N−1, N]) | F_{k−1}]
 = λ(S_k) − λ(S_{k−1}) + E[λ(S[k, N])] − E[λ(S[k−1, N−1])]
 − E[λ(Ŝ[N−1, N]) | F_{k−1}] + E[λ(S_{k−1} ∩ S[k−1, N−1]) | F_{k−1}]
 − E[λ(S_k ∩ S[k, N]) | F_k]
 = λ(Ŝ[k−1, k]) − E[λ(Ŝ[N−1, N]) | F_{k−1}] + E[λ(S_{k−1} ∩ S[k−1, N−1]) | F_{k−1}]
 − E[λ(S_k ∩ S[k, N]) | F_k].

¹For any random variable Y ∈ L¹ we write ⟨Y⟩ = Y − E[Y].

Let F_{s,t} denote the σ-algebra generated by the increments of X on [s, t], 0 ≤ s ≤ t. Then, by a reversibility argument,

E[λ(Ŝ[N−1, N]) | F_{k−1}] =^{(d)} E[λ(S_1 \ S[1, N]) | F_{N−k+1,N}].

Moreover, the following convergence in L¹ holds:

(2.13)  E[λ(S_1 \ S[1, N]) | F_{N−k+1,N}]  →^{L¹}_{N↗∞}  E[λ(S_1 \ S[1, ∞))].

The proof of eq. (2.13) is postponed to Section 5, Lemma 5.1. In view of eq. (2.1) it follows that

E[λ(Ŝ[N−1, N]) | F_{k−1}]  →^{L¹}_{N↗∞}  Cap(B).

We thus obtain

(2.14)  U_k^N  →^{L¹}_{N↗∞}  λ(Ŝ[k−1, k]) − Cap(B) + E[λ(S_{k−1} ∩ S[k−1, ∞)) | F_{k−1}]
 − E[λ(S_k ∩ S[k, ∞)) | F_k].

Further, we observe that

λ(S_k ∩ S[k, ∞)) = λ(S_{k−1} ∩ S[k−1, ∞)) − λ(S_{k−1} ∩ S[k−1, k])
 + λ(S_{k−1} ∩ S[k−1, k] ∩ S[k, ∞)) + λ(S[k−1, k] ∩ S[k, ∞))
 − λ(S_{k−1} ∩ S[k−1, k] ∩ S[k, ∞))
 = λ(S_{k−1} ∩ S[k−1, ∞)) − λ(S_{k−1} ∩ S[k−1, k])
 + λ(S[k−1, k] ∩ S[k, ∞)).

This and eq. (2.14) imply
U_k^N  →^{L¹}_{N↗∞}  λ(S[k−1, k] \ S_{k−1}) − Cap(B) + E[λ(S_{k−1} ∩ S[k−1, ∞)) | F_{k−1}]
 − E[λ(S_{k−1} ∩ S[k−1, ∞)) | F_k] + λ(S_{k−1} ∩ S[k−1, k])
 − E[λ(S[k−1, k] ∩ S[k, ∞)) | F_k]
 = λ(S[k−1, k]) − E[λ(S[k−1, k] ∩ S[k, ∞)) | F_k] − Cap(B)
 + E[λ(S_{k−1} ∩ S[k−1, ∞)) | F_{k−1}] − E[λ(S_{k−1} ∩ S[k−1, ∞)) | F_k]
 = ⟨E[λ(S[k−1, k] \ S[k, ∞)) | F_k]⟩ + E[λ(S[k−1, k] \ S[k, ∞))] − Cap(B)
 + E[λ(S_{k−1} ∩ S[k−1, ∞)) | F_{k−1}] − E[λ(S_{k−1} ∩ S[k−1, ∞)) | F_k]
 = ⟨E[λ(S[k−1, k] \ S[k, ∞)) | F_k]⟩
 + E[λ(S_{k−1} ∩ S[k−1, ∞)) | F_{k−1}] − E[λ(S_{k−1} ∩ S[k−1, ∞)) | F_k],

where in the last line we used eq. (2.1). Hence, by eqs. (2.11) and (2.12),

(2.15)  ⟨V_n⟩ = ⟨E[λ(S_n ∩ S[n, ∞)) | F_n]⟩ + Σ_{k=1}^n Y_k,

where
(2.16)  Y_k = E[λ(S_{k−1} ∩ S[k−1, ∞)) | F_{k−1}] − E[λ(S_{k−1} ∩ S[k−1, ∞)) | F_k]
 + ⟨E[λ(S[k−1, k] \ S[k, ∞)) | F_k]⟩.
From eq. (2.15) it follows that the variance of V_n is equal to

(2.17)  Var(V_n) = Var(E[λ(S_n ∩ S[n, ∞)) | F_n]) + E[(Σ_{k=1}^n Y_k)²]
 + 2 E[⟨E[λ(S_n ∩ S[n, ∞)) | F_n]⟩ Σ_{k=1}^n Y_k].

Clearly, Y_k is F_k-measurable and E[Y_l | F_k] = 0 for k < l, which implies E[(Σ_{k=1}^n Y_k)²] = Σ_{k=1}^n E[Y_k²]. Moreover, by Corollary 2.4 (recall that d/α > 3/2, so that h(n)² = o(n)),

lim_{n↗∞} (1/n) Var(E[λ(S_n ∩ S[n, ∞)) | F_n]) ≤ lim_{n↗∞} (1/n) E[λ(S_n ∩ S[n, ∞))²] = 0.

The sequence {(1/n) Σ_{k=1}^n E[Y_k²]}_{n≥1} is bounded (the proof is given in Section 5, Lemma 5.2), and thus by the Cauchy–Schwarz inequality we conclude that

lim_{n↗∞} (1/n) E[⟨E[λ(S_n ∩ S[n, ∞)) | F_n]⟩ Σ_{k=1}^n Y_k] = 0.

We have shown that

(2.18)  lim_{n↗∞} Var(V_n)/n = lim_{n↗∞} (1/n) Σ_{k=1}^n E[Y_k²].

Step 2. In this step we prove that the limit on the right-hand side of eq. (2.18) is strictly positive. Let X′ be an independent copy of the original process X such that X′_0 = 0 and it has càglàd paths. We consider a process X̄ = {X̄_t}_{t∈R} by setting X̄_t = X′_{−t} for t ≤ 0, and X̄_t = X_t for t ≥ 0. Clearly, the process X̄ has càdlàg paths, and stationary and independent increments. The sausages S[s, t], S(−∞, s] and S[s, ∞) corresponding to X̄ are defined for all s, t ∈ R, s ≤ t. Recall that S_∞ = S[0, ∞). We assume that the process X̄ is defined on the canonical space Ω = D(R, R^d) of all càdlàg functions ω : R → R^d as the coordinate process X̄_t(ω) = ω(t). We define a shift operator ϑ acting on Ω by

(2.19)  ϑω(t) = ω(1 + t) − ω(1),  t ∈ R,

and we notice that it is a P-preserving mapping. For t ∈ R, we define G_t = σ(X̄_s : s ≤ t), and for k ≥ 0 we set

(2.20)  Z_k = E[λ(S[−k, 0] ∩ S_∞) | G_0] − E[λ(S[−k, 0] ∩ S_∞) | G_1] + ⟨E[λ(S_1 \ S[1, ∞)) | G_1]⟩,

so that, by eq. (2.16) and stationarity of the increments of X̄,

(2.21)  Y_{k+1} = Z_k ∘ ϑ^k,  k ≥ 0.

We will prove that there exists a random variable Z ∈ L¹ such that E[|Z|] > 0 and

(2.22)  Z_k  →^{L¹}_{k↗∞}  Z.

Step 2a. We start by proving the existence of Z. This is evident if d > 2α, as in this case h(t) ≡ 1 and hence, using eq. (2.6), we obtain

E[λ(S(−∞, 0] ∩ S_∞)] < ∞.
This implies (2.22) with
(2.23)  Z = E[λ(S(−∞, 0] ∩ S_∞) | G_0] − E[λ(S(−∞, 0] ∩ S_∞) | G_1] + ⟨E[λ(S_1 \ S[1, ∞)) | G_1]⟩.

We next consider the case 3α/2 < d ≤ 2α. We observe that

λ(S[−k, 0] ∩ S_∞) = ∫_{R^d} 1_{S[−k,0]}(y) 1_{S_∞}(y) dy.

By taking the conditional expectation with respect to G_0, we obtain

E[λ(S[−k, 0] ∩ S_∞) | G_0] = ∫_{R^d} 1_{S[−k,0]}(y) E[1_{S_∞}(y)] dy = ∫_{R^d} 1_{S[−k,0]}(y) φ(y) dy,

where we set φ(y) = P(y ∈ S_∞). Similarly, we write

λ(S[−k, 0] ∩ S[1, ∞)) = ∫_{R^d} 1_{S[−k,0]−X_1}(y) 1_{S[1,∞)−X_1}(y) dy,

and we take the conditional expectation with respect to G_1, which yields

(2.24)  E[λ(S[−k, 0] ∩ S[1, ∞)) | G_1] = ∫_{R^d} 1_{S[−k,0]−X_1}(y) E[1_{S[1,∞)−X_1}(y)] dy
 = ∫_{R^d} 1_{S[−k,0]}(y) φ(y − X_1) dy.

It follows that

(2.25)  E[λ(S[−k, 0] ∩ S_∞) | G_0] − E[λ(S[−k, 0] ∩ S[1, ∞)) | G_1]
 = ∫_{R^d} 1_{S[−k,0]}(y) (φ(y) − φ(y − X_1)) dy.

We prove in Lemma 5.4 that the integral on the right-hand side of eq. (2.25) is a well-defined random variable in L¹. Thus, the dominated convergence theorem implies eq. (2.22) with

Z = ⟨E[λ(S_1 \ S[1, ∞)) | G_1]⟩ − E[λ((S_1 \ S[1, ∞)) ∩ S(−∞, 0]) | G_1]
 + ∫_{R^d} 1_{S(−∞,0]}(y) (φ(y) − φ(y − X_1)) dy.

Step 2b. We next show that E[|Z|] > 0, and we remark that the following arguments apply to all d > 3α/2. From eqs. (2.15) and (2.21) we have

⟨V_n⟩ = Σ_{k=0}^{n−1} Z ∘ ϑ^k + H_n,

where

(2.26)  H_n = ⟨E[λ(S_n ∩ S[n, ∞)) | G_n]⟩ + Σ_{k=0}^{n−1} (Z_k − Z) ∘ ϑ^k.

Equation (2.1) yields

⟨V_n⟩ = V_n − E[V_n] ≤ V_n − E[λ(S_n \ S[n, ∞))] = V_n − n Cap(B).

This implies

V_n ≥ n Cap(B) + Σ_{k=0}^{n−1} Z ∘ ϑ^k + H_n.

We aim to prove that there is c̄ > 0 (which does not depend on n) such that for all n ∈ N

(2.27)  P({V_n ≤ c̄} ∩ {|H_n| ≤ c̄}) > 0.

Notice that if E[|Z|] = 0 then the inequality V_n ≥ n Cap(B) + H_n and eq. (2.27) would imply that Cap(B) = 0, which is a contradiction. Therefore, it is enough to show eq. (2.27). From eqs. (2.20) and (2.23) we obtain

(2.28)  Σ_{k=0}^{n−1} (Z − Z_k) ∘ ϑ^k = ∫_{R^d} 1_{S(−∞,0]}(y) E[1_{S_∞}(y) | G_0] dy
 − Σ_{k=0}^{n−2} ∫_{R^d} ( 1_{S(−∞,0]\S_k}(y) E[1_{S[k,∞)}(y) | G_{k+1}]
 − 1_{S(−∞,0]\S_{k+1}}(y) E[1_{S[k+1,∞)}(y) | G_{k+1}] ) dy
 − ∫_{R^d} 1_{S(−∞,0]\S_{n−1}}(y) E[1_{S[n−1,∞)}(y) | G_n] dy.

We next observe that

(2.29)  ∫_{R^d} ( 1_{S(−∞,0]\S_k}(y) E[1_{S[k,∞)}(y) | G_{k+1}] − 1_{S(−∞,0]\S_{k+1}}(y) E[1_{S[k+1,∞)}(y) | G_{k+1}] ) dy
 = ∫_{R^d} ( 1_{S(−∞,0]}(y) 1_{S_k^c}(y) E[1_{S[k,k+1]}(y) + 1_{S[k+1,∞)}(y) 1_{S[k,k+1]^c}(y) | G_{k+1}]
 − 1_{S(−∞,0]}(y) 1_{S_{k+1}^c}(y) E[1_{S[k+1,∞)}(y) | G_{k+1}] ) dy
 = ∫_{R^d} ( 1_{S(−∞,0]}(y) 1_{S_k^c}(y) 1_{S[k,k+1]}(y)
 + 1_{S(−∞,0]}(y) ( 1_{S_k^c}(y) 1_{S[k,k+1]^c}(y) E[1_{S[k+1,∞)}(y) | G_{k+1}]
 − 1_{S_{k+1}^c}(y) E[1_{S[k+1,∞)}(y) | G_{k+1}] ) ) dy
 = λ(S(−∞, 0] ∩ (S[k, k+1] \ S_k)),

where we used that 1_{S[k,k+1]} is G_{k+1}-measurable and that S_{k+1}^c = S_k^c ∩ S[k, k+1]^c. We clearly have 1_{S[n−1,∞)}(y) = 1_{S[n,∞)}(y) + 1_{S[n−1,n]\S[n,∞)}(y). This identity and a similar argument as in eq. (2.24) yield

(2.30)  ∫_{R^d} 1_{S(−∞,0]\S_{n−1}}(y) E[1_{S[n−1,∞)}(y) | G_n] dy
 = ∫_{R^d} 1_{S(−∞,0]\S_{n−1}}(y) E[1_{S[n,∞)}(y) | G_n] dy
 + ∫_{R^d} 1_{S(−∞,0]\S_{n−1}}(y) E[1_{S[n−1,n]\S[n,∞)}(y) | G_n] dy
 = ∫_{R^d} 1_{S(−∞,0]}(y) φ(y − X_n) dy − ∫_{R^d} 1_{S(−∞,0]∩S_{n−1}}(y) φ(y − X_n) dy
 + ∫_{R^d} 1_{S(−∞,0]\S_{n−1}}(y) E[1_{S[n−1,n]\S[n,∞)}(y) | G_n] dy.

By combining eqs. (2.28) to (2.30), we arrive at

(2.31)  Σ_{k=0}^{n−1} (Z − Z_k) ∘ ϑ^k
 = ∫_{R^d} 1_{S(−∞,0]}(y) (φ(y) − φ(y − X_n)) dy + ∫_{R^d} 1_{S(−∞,0]∩S_{n−1}}(y) φ(y − X_n) dy
(2.20) and (2.23) we obtain n−1 k E ½ (Z Zk) ϑ = ½S(−∞,0](y) [ S∞ (y) 0] dy − ◦ Rd | G Xk=0 Z n−2 ½ ½ E S(−∞,0]\Sk (y) [ S[k,∞)(y) k+1] (2.28) − Rd | G Xk=0 Z E ½ ½ (y) [ (y) ] dy − S(−∞,0]\Sk+1 S[k+1,∞) | Gk+1 ½ ½ E S(−∞,0]\Sn−1 (y) [ S[n−1,∞)(y) n] dy. − Rd | G Z We next observe that ½ ½ E S(−∞,0]\Sk (y) [ S[k,∞)(y) k+1] Rd | G Z ½ ½ (y) E[ (y) ] dy − S(−∞,0]\Sk+1 S[k+1,∞) | Gk+1 E ½ ½ ½ ½ = ½ (y) c (y) [ (y)+ (y) c(y) ] S(−∞,0] Sk S[k,k+1] S[k+1,∞) S[k,k+1] k+1 Rd | G Z E ½ ½ ½ c (2.29) S(−∞,0](y) S +1 (y) [ S[k+1,∞)(y) k+1] dy − k | G ½ ½ = ½ (y) c (y) (y) S(−∞,0] Sk S[k,k+1] Rd Z E ½ ½ + ½Sc (y) S[k,k+1]c(y) [ S[k+1,∞)(y) k+1] k | G E ½ ½Sc (y) [ S[k+1,∞)(y) k+1] dy − k+1 | G = λ( ( , 0] ( [k,k + 1] )). S −∞ ∩ S \ Sk 12 W.CYGAN,N.SANDRIC,´ AND S. SEBEKˇ ½ ½ We clearly have ½S[n−1,∞)(y)= S[n,∞)(y)+ S[n−1,n]\S[n,∞)(y). This identity and a similar argument as in eq. (2.24) yield ½ ½ E S(−∞,0]\Sn−1 (y) [ S[n−1,∞)(y) n] dy Rd | G Z ½ ½ E = S(−∞,0]\Sn−1 (y) [ S[n,∞)(y) n] dy Rd | G Z E ½ (2.30) + ½S(−∞,0]\Sn−1 (y) [ S[n−1,n]\S[n,∞)(y) n] dy Rd | G Z ½ = ½S(−∞,0](y) φ(y Xn) dy S(−∞,0]∩Sn−1 (y) φ(y Xn) dy Rd − − Rd − Z Z E ½ + ½S(−∞,0]\Sn−1 (y) [ S[n−1,n]\S[n,∞)(y) n] dy. Rd | G Z By combining eqs. (2.28) to (2.30), we arrive at n−1 (Z Z ) ϑk − k ◦ Xk=0 (2.31) ½ = ½S(−∞,0](y) φ(y) φ(y Xn) dy + S(−∞,0]∩Sn−1 (y) φ(y Xn) dy Rd − − Rd − Z Z