arXiv:2001.10453v2 [math.PR] 14 Dec 2020

LIMIT THEOREMS FOR A STABLE SAUSAGE

WOJCIECH CYGAN, NIKOLA SANDRIĆ, AND STJEPAN ŠEBEK

Abstract. In this article, we study fluctuations of the volume of a stable sausage defined via a d-dimensional rotationally invariant α-stable process. As the main results, we establish a functional central limit theorem (in the case when d/α > 3/2) with a standard one-dimensional Brownian motion in the limit, and Khintchine's and Chung's laws of the iterated logarithm (in the case when d/α > 9/5).

2010 Mathematics Subject Classification. 60F05, 60G52, 60F17.
Key words and phrases. Functional central limit theorem, law of the iterated logarithm, stable process, stable sausage.

1. Introduction

Let X = {X_t}_{t≥0} be a Lévy process in R^d defined on a probability space (Ω, F, P). The Lévy sausage associated with the process X and a given compact set K ⊂ R^d on the time interval [s, t] is the random set defined as

S^K[s, t] = ⋃_{s≤u≤t} {X_u + K},

and we use the notation S_t^K = S^K[0, t]. Let λ(dx) be the Lebesgue measure on R^d, and let us denote by V^K[s, t] = λ(S^K[s, t]) the volume of the Lévy sausage (we write V_t^K = V^K[0, t]). For x ∈ R^d, P_x is the probability measure related to the process X started at x, and τ_K = inf{t ≥ 0 : X_t ∈ K} is the first hitting time of K. The expected volume of the sausage satisfies

(1.1) E[V_t^K] = ∫_{R^d} P_x(τ_K ≤ t) dx.

Hawkes [11, Theorem 11.1] proved that if X is transient then

(1.2) lim_{tր∞} E[V_t^K] / t = Cap(K),

where Cap(K) is the capacity of the set K associated with the process X (see Port and Stone [22]), linked to the first hitting time τ_K via the identity

(1.3) Cap(K) = lim_{sր∞} (1/s) ∫_{R^d} P_x(τ_K ≤ s) dx.

Already Spitzer [28] observed that, in view of the subadditivity of the process, that is, V_{t+s}^K ≤ V_t^K + V^K[t, t+s], eq. (1.2) combined with Kingman's ergodic theorem (cf. [16, Ch. I, Theorem 5.6]) implies the following strong law of large numbers:

lim_{tր∞} V_t^K / t = Cap(K)  P-a.s.

More satisfactory limit theorems for the volume of a Lévy sausage are known if X is a standard Brownian motion. In this case S_t^K is called a Wiener sausage, and there is a vast amount of literature concerning its asymptotic behavior. The pioneering work [6] was due to Donsker and Varadhan: they established a large deviation principle for the volume of a Wiener sausage. Their result was extended by Eisele and Lang [8] to the
case when the driving process is a standard Brownian motion with drift, and to a class of elliptic diffusions by Sznitman [29], while Ôkura investigated similar questions for a certain class of symmetric Lévy processes. Le Gall [17] obtained a central limit theorem for the volume of a Wiener sausage in dimensions d ≥ 2, with different normalizing sequences and distributions in the limit for d = 2, d = 3 and d ≥ 4, respectively. More recently, van den Berg, Bolthausen and den Hollander [34] studied the problem of intersections of two Wiener sausages, see also [31], [32] and [33]. For further limit theorems for the volume of a Wiener sausage see [3], [12] and [35]. We remark that the first studies of the Wiener sausage were motivated by its applications in physics [14]. We refer the reader to the book by Simon [27] for a comprehensive discussion of this topic.

In the present article, we focus on the limit behavior of the volume of a stable sausage, that is, a Lévy sausage corresponding to a stable Lévy process. The asymptotic behavior of stable sausages has not been extensively studied yet. In the seminal paper [6], Donsker and Varadhan obtained a large deviation principle for the volume of a stable sausage. Some other works were concerned with the expansion of the expected volume of a stable sausage. More precisely, Getoor [9] proved eq. (1.2) for rotationally invariant α-stable processes with d > α and for any compact set K. He also investigated the first order asymptotics of the difference E[V_t^K] − t Cap(K), whose form depends on the value of the ratio d/α, see [9, Theorem 2]. The second order terms in this expansion were found by Port [21] for all strictly stable processes satisfying some extra assumptions. In [24] Rosen established asymptotic expansions for the volume of a stable sausage in the plane with the coefficients represented by n-fold self-intersections of the process. In this article, we obtain a central limit theorem for the volume of a stable sausage.
We then apply this result to study convergence of the volume process in the Skorohod space, and establish the corresponding functional central limit theorem. Finally, we also obtain Khintchine's and Chung's laws of the iterated logarithm for this process.

Before we formulate our results, we briefly recall some basic notation from the potential theory of stable processes. Let X be a rotationally invariant stable Lévy process of index α ∈ (0, 2], that is, a Lévy process whose bounded continuous transition density p(t, x) is uniquely determined by the Fourier transform

e^{−t|ξ|^α} = ∫_{R^d} e^{i(x,ξ)} p(t, x) dx,

where (x, ξ) stands for the inner product in R^d, |x| = (x, x)^{1/2} is the Euclidean norm, and dx = λ(dx). We assume that X is transient, which holds if (and only if) d > α. Its Green function is then given by G(x) = ∫_0^∞ p(t, x) dt. Let B(R^d) denote the family of all Borel subsets of R^d. For each B ∈ B(R^d) there exists a unique Borel measure µ_B(dx) supported on B such that

(1.4) P_x(τ_B < ∞) = ∫_{R^d} G(x − y) µ_B(dy).

The measure µ_B(dx) is called the equilibrium measure of B, and its capacity Cap(B) is defined as the total mass of µ_B(dx), that is, Cap(B) = µ_B(B). We denote by B(x, r) the closed Euclidean ball centered at x ∈ R^d of radius r > 0. In the case when r = 1 and x = 0, we write B = B(0, 1). If B = B(0, r) then the measure µ_B(dy) has a density which is proportional to (r² − |y|²)^{−α/2}. In particular, we have (see for instance [30])

Cap(B) = Γ(d/2) / (Γ(α/2) Γ(1 + (d − α)/2)).

In the case when K = B, we simply write V_t instead of V_t^B (and similarly S_t for S_t^B). Let N(0, 1) denote the Gaussian random variable with mean zero and variance one. Our central limit theorem (see Theorem 2.1) for the volume of a stable sausage asserts that if d/α > 3/2 then there exists a constant σ = σ(d, α) > 0 such that

(1.5) (V_t − t Cap(B)) / (σ√t) →(d) N(0, 1) as t ր ∞,

where the convergence holds in distribution. The cornerstone of the proof of eq. (1.5) is to represent V_t as a sum of independent random variables plus an error term. For this we use the inclusion–exclusion formula together with the Markov property and rotational invariance of the process X. More precisely, for t, s ≥ 0, we have

(1.6) V_{t+s} = λ(S_t ∪ S[t, t+s]) = λ((S_t − X_t) ∪ (S[t, t+s] − X_t))
      = V_t^{(1)} + V_s^{(2)} − λ(S_t^{(1)} ∩ S_s^{(2)}),

where V_t^{(1)} and V_s^{(2)} (S_t^{(1)} and S_s^{(2)}) are independent and have the same law as V_t and V_s (S_t and S_s), respectively. This decomposition allows us to apply the Lindeberg–Feller central limit theorem in the present context. The first key step is to find estimates for the error term λ(S_t^{(1)} ∩ S_s^{(2)}), which we give in Section 2.1. The second step is to control the variance of the volume of a stable sausage, which is achieved in Section 2.2.

Let us emphasize that the present article has been mainly inspired by Le Gall's work [17] where he studied fluctuations of the volume of a Wiener sausage (the case α = 2). Among other results, he established the central limit theorem in eq. (1.5) for dimensions d ≥ 4. Still another source of motivation was the article [18] by Le Gall and Rosen where they proved a corresponding central limit theorem for the range of stable random walks and mentioned that it is plausible that a similar result holds for stable sausages, see [18, Page 654]. Both of these articles were also concerned with the lower-dimensional cases d < 4 and d/α ≤ 3/2, respectively. In the present article we are only interested in the case when d/α > 3/2, and we postpone the study of the remaining values of the ratio d/α to follow-up articles.

As an application of eq. (1.5) we obtain a functional central limit theorem (see Theorem 3.1) which states that under the same assumptions, and with the same constant σ > 0,

(1.7) {(V_{nt} − nt Cap(B)) / (σ√n)}_{t≥0} →(J1) {W_t}_{t≥0} as n ր ∞.

Here, the convergence holds in the Skorohod space D([0, ∞), R) endowed with the J1 topology, and {W_t}_{t≥0} is a standard Brownian motion in R. The proof of eq. (1.7) is performed according to a general two-step scheme: (i) convergence of finite-dimensional distributions, which follows from eq. (1.5); (ii) tightness, which we investigate by employing the well-known Aldous criterion, see Section 3 for details. It is remarkable that the results in eqs. (1.5) and (1.7) correspond to analogous results for the range (and its capacity) of stable random walks on the integer lattice Z^d which we discussed in [4] and [5], respectively.

We finally use eq. (1.5) to study the growth of the paths of the volume of the stable sausage. In the case when d/α > 9/5, we prove Khintchine's law of the iterated logarithm

(1.8) liminf_{tր∞} (V_t − t Cap(B)) / √(2σ²t log log t) = −1 and limsup_{tր∞} (V_t − t Cap(B)) / √(2σ²t log log t) = 1  P-a.s.,

as well as Chung's law of the iterated logarithm

(1.9) liminf_{tր∞} sup_{0≤s≤t} |V_s − s Cap(B)| / √(σ²t / log log t) = π/√8  P-a.s.
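For readers who want to experiment, the analogy with the range of random walks recalled above can be explored numerically. The following sketch is purely illustrative and hypothetical (a simple random walk on Z³ as a discrete stand-in for a transient stable process; all function names are ours): it verifies the deterministic subadditivity of the range, the discrete counterpart of V_{t+s} ≤ V_t + V^K[t, t+s] that underlies the strong law of large numbers.

```python
import random

def srw_path(n, rng, d=3):
    """Simple random walk on Z^d: returns the list of n+1 visited lattice points."""
    pos = (0,) * d
    path = [pos]
    for _ in range(n):
        axis = rng.randrange(d)
        step = rng.choice((-1, 1))
        pos = tuple(c + (step if i == axis else 0) for i, c in enumerate(pos))
        path.append(pos)
    return path

def range_size(path, lo, hi):
    """|{X_lo, ..., X_hi}|: the discrete analogue of the sausage volume V[lo, hi]."""
    return len(set(path[lo:hi + 1]))

rng = random.Random(2020)
path = srw_path(2000, rng)
n, m = 1200, 800
# Subadditivity: the counterpart of V_{t+s} <= V_t + V[t, t+s].
assert range_size(path, 0, n + m) <= range_size(path, 0, n) + range_size(path, n, n + m)
# The range is positive and at most linear in the number of steps.
assert 0 < range_size(path, 0, n + m) <= n + m + 1
```

The same two inequalities hold for every single path, which is exactly why subadditive ergodic arguments apply in the continuous setting.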

Our proof is based on the approach which was developed by Jain and Pruitt [13] (see also [1]) in the context of the range of random walks, and then successfully applied in [35] to obtain Khintchine's and Chung's laws of the iterated logarithm for a Wiener sausage in dimensions d ≥ 4.

The rest of the paper is organized as follows. In Section 2 we prove the central limit theorem in eq. (1.5). For this we first deal with the error terms derived from eq. (1.6), and in the second part we show that the variance of the volume of a stable sausage behaves linearly at infinity. Section 3 is devoted to the proof of eq. (1.7), and in Section 4 we prove eqs. (1.8) and (1.9). In Section 5 we present the proofs of some technical results which we need in the course of the study.
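For concreteness, the closed-form expression Cap(B) = Γ(d/2)/(Γ(α/2) Γ(1 + (d − α)/2)) for the capacity of the closed unit ball, recalled above from [30], can be evaluated with the standard library. The sketch below is only an illustration of that formula (the function name is ours):

```python
from math import gamma

def unit_ball_capacity(d, alpha):
    """Cap(B(0,1)) for a rotationally invariant alpha-stable process in R^d,
    using the closed-form expression quoted above (transient case d > alpha)."""
    if not (0 < alpha <= 2 and d > alpha):
        raise ValueError("need 0 < alpha <= 2 and d > alpha")
    return gamma(d / 2) / (gamma(alpha / 2) * gamma(1 + (d - alpha) / 2))

# For alpha = 2 the expression collapses to Gamma(d/2)/Gamma(d/2) = 1,
# and for d = 3, alpha = 1 it equals Gamma(3/2)/(Gamma(1/2)*Gamma(2)) = 1/2.
print(unit_ball_capacity(3, 2.0))  # ~1.0
print(unit_ball_capacity(3, 1.0))  # ~0.5
```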

2. Central limit theorem

The goal of this section is to prove the following central limit theorem. We assume that X is a rotationally invariant stable Lévy process in R^d of index α ∈ (0, 2] satisfying d/α > 3/2.

Theorem 2.1. Under the above assumptions, there exists a constant σ = σ(d, α) > 0 such that

(V_t − t Cap(B)) / (σ√t) →(d) N(0, 1) as t ր ∞.

We remark that Theorem 2.1 holds for any closed ball B(x, r), with a possibly different constant σ > 0. Moreover, as indicated by eq. (1.2), the statement of the theorem remains valid if we replace the term t Cap(B) with E[V_t].

Before we embark on the proof of the theorem, which is given at the end of the section, we first need to find satisfactory estimates for the error term in the decomposition eq. (1.6). The next step is to investigate the variance Var(V_t) of the volume of a stable sausage and to show that it behaves as σ²t at infinity.

2.1. Error term estimates. We assume that X is defined on the canonical space Ω = D([0, ∞), R^d) of all càdlàg functions ω : [0, ∞) → R^d. It is endowed with the Borel σ-algebra F generated by the Skorokhod J1 topology. Then X is understood as the coordinate process, that is, X_t(ω) = ω(t), and the shift operator θ_t acting on Ω is defined by

θ_t ω(s) = ω(t + s), t, s ≥ 0.

In what follows we use the notation

S^K[t, ∞) = ⋃_{s≥t} {X_s + K}, t ≥ 0.

We also write S_∞^K = S^K[0, ∞), V_∞^K = λ(S_∞^K) and V^K[t, ∞) = λ(S^K[t, ∞)), t ≥ 0. We start with a lemma which enables us to represent the expected volume of the intersection of two sausages in terms of the difference E[V_t^K] − t Cap(K).

Lemma 2.2. For any compact set K and all t ≥ 0 it holds that

E[V_t^K] − t Cap(K) = E[λ(S_t^K ∩ S^K[t, ∞))].

Proof. We clearly have

V_∞^K = V_t^K + V^K[t, ∞) − λ(S_t^K ∩ S^K[t, ∞)),

which implies

λ(S_t^K ∩ S^K[t, ∞)) = V_t^K − λ(S_t^K \ S^K[t, ∞)).

Hence, it suffices to show that

(2.1) E[λ(S_t^K \ S^K[t, ∞))] = t Cap(K), t ≥ 0.

By rotational invariance of the process X we have

E[λ(S_t^K \ S^K[t, ∞))] = ∫_{R^d} P(x ∈ S_t^K \ S^K[t, ∞)) dx

= ∫_{R^d} P_x( ⋃_{0≤s≤t} {X_s ∈ K} ∩ ⋂_{s≥t} {X_s ∉ K} ) dx

(2.2) = ∫_{R^d} P_x(0 < η_K ≤ t) dx,

where η_K is the last exit time of the process X from the set K, that is,

η_K = sup{t > 0 : X_t ∈ K} if τ_K < ∞, and η_K = 0 if τ_K = ∞.

We observe that {η_K > t} = {τ_K ∘ θ_t < ∞}, which together with eq. (1.4) yields

P_x(η_K > t) = ∫_{R^d} p(t, x, y) P_y(τ_K < ∞) dy = ∫_{R^d} ∫_t^∞ p(s, x, z) ds µ_K(dz),

where we used the notation p(t, x, y) = p(t, y − x). We obtain

P_x(0 < η_K ≤ t) = ∫_{R^d} ∫_0^t p(s, x, y) ds µ_K(dy).

This and eq. (2.2) imply

E[λ(S_t^K \ S^K[t, ∞))] = ∫_{R^d} ∫_0^t ∫_{R^d} p(s, y, x) dx ds µ_K(dy) = t Cap(K),

and the proof is finished. □

In the following lemma we show how one can easily estimate the higher moments of the volume of the intersection of two sausages through the first moment estimate.

Lemma 2.3. Let X′ be an independent copy of the process X such that X′_0 = X_0, and let S′_t, t ≥ 0, denote the sausage associated with X′. Then for all k ∈ N and t ≥ 0 it holds that

E[λ(S_t ∩ S′_∞)^k] ≤ 2^{k−1} (k!)² (E[λ(S_t ∩ S′_∞)])^k.

Proof. We observe that

(2.3) E[λ(S_t ∩ S′_∞)] = ∫_{R^d} P(x ∈ S_t) P(x ∈ S′_∞) dx = ∫_{R^d} P_x(τ_B ≤ t) P_x(τ_B < ∞) dx,

where we used rotational invariance of X. Similarly, for k ≥ 1 we have

(2.4) E[λ(S_t ∩ S′_∞)^k] = ∫_{R^d} ⋯ ∫_{R^d} P(x_1, …, x_k ∈ S_t) P(x_1, …, x_k ∈ S′_∞) dx_1 ⋯ dx_k.

By the strong Markov property, we obtain

(2.5) P(x_1, …, x_k ∈ S_t) = P(τ_{B−x_1} ≤ t, …, τ_{B−x_k} ≤ t)
      ≤ k! P(τ_{B−x_1} ≤ ⋯ ≤ τ_{B−x_k} ≤ t)
      ≤ k! P(τ_{B−x_1} ≤ ⋯ ≤ τ_{B−x_{k−1}} ≤ t) sup_{z∈B−x_{k−1}} P_z(τ_{B−x_k} ≤ t).

For any w ∈ B we have B − w ⊆ B(0, 2), and whence

P_{w−x_{k−1}}(τ_{B−x_k} ≤ t) = P_{x_k−x_{k−1}}(τ_{B−w} ≤ t) ≤ P_{x_k−x_{k−1}}(τ_{B(0,2)} ≤ t).

For x ∈ R^d and B ∈ B(R^d) we set τ_B^x = inf{t ≥ 0 : x + X_t ∈ B} and show that τ_{B(0,2)}^x =(d) 2^α τ_B^{x/2}, that is, the random variables τ_{B(0,2)}^x and 2^α τ_B^{x/2} are equal in distribution. Indeed, an easy calculation yields

τ^x_{B(0,2)} = inf{t ≥ 0 : |x + X_t| ≤ 2} = inf{t ≥ 0 : |x/2 + X_t/2| ≤ 1}
 =(d) inf{t ≥ 0 : |x/2 + X_{t/2^α}| ≤ 1} = 2^α inf{s ≥ 0 : |x/2 + X_s| ≤ 1} = 2^α τ_B^{x/2},

where in the distributional step we used the self-similarity of X, that is, {X_t/2}_{t≥0} =(d) {X_{t/2^α}}_{t≥0}. This implies that for arbitrary w ∈ B,

P_{w−x_{k−1}}(τ_{B−x_k} ≤ t) ≤ P(τ^{x_k−x_{k−1}}_{B(0,2)} ≤ t) = P(2^α τ_B^{(x_k−x_{k−1})/2} ≤ t) ≤ P_{(x_k−x_{k−1})/2}(τ_B ≤ t).

In particular,

sup_{z∈B−x_{k−1}} P_z(τ_{B−x_k} ≤ t) ≤ P_{(x_k−x_{k−1})/2}(τ_B ≤ t).

By combining this with eq. (2.5) and iterating the whole procedure, we obtain

P(x_1, …, x_k ∈ S_t) ≤ k! P_{x_1}(τ_B ≤ t) P_{(x_2−x_1)/2}(τ_B ≤ t) ⋯ P_{(x_k−x_{k−1})/2}(τ_B ≤ t).

Similarly, it follows that

P(x_1, …, x_k ∈ S_∞) ≤ k! P_{x_1}(τ_B < ∞) P_{(x_2−x_1)/2}(τ_B < ∞) ⋯ P_{(x_k−x_{k−1})/2}(τ_B < ∞).

Applying the last two inequalities to eq. (2.4) and using eq. (2.3) finishes the proof. □

Corollary 2.4. In the notation of Lemma 2.3, for all k ∈ N and t > 0 large enough there is a constant c = c(d, α) > 0 such that

(2.6) E[λ(S_t ∩ S′_∞)^k] ≤ 2^{k−1} (k!)² c^k h(t)^k,

where the function h : (0, ∞) → (0, ∞) is defined as

(2.7) h(t) = 1 for d/α > 2,  h(t) = log(t + e) for d/α = 2,  h(t) = t^{2−d/α} for d/α ∈ (1, 2).

Proof. It follows from [9, Theorem 2] that there is a constant c(d, α) > 0 such that for t > 0 large enough

(2.8) ∫_{R^d} P_x(τ_B ≤ t) dx − t Cap(B) ≤ c(d, α) h(t),

where the function h(t) is given in eq. (2.7). We observe that by the Markov property and translation invariance of λ(dx) we have

E[λ(S_t ∩ S′_∞)] = E[λ((S_t − X_t) ∩ (S[t, ∞) − X_t))] = E[λ(S_t ∩ S[t, ∞))].

Thus, eq. (1.1) combined with Lemma 2.2 and eq. (2.8) implies the assertion for k = 1. For k > 1 the result follows by Lemma 2.3. □
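The three regimes of h in eq. (2.7) are easy to tabulate. The sketch below (our illustration, not part of the proof) also checks numerically that h(t) grows more slowly than √t in every regime with d/α > 3/2, which is the mechanism behind the error-term estimates used later:

```python
from math import log, e

def h(t, d, alpha):
    """The function h from eq. (2.7), which bounds E[lambda(S_t ∩ S'_inf)] up to a constant."""
    ratio = d / alpha
    if ratio > 2:
        return 1.0
    if ratio == 2:
        return log(t + e)
    if 1 < ratio < 2:
        return t ** (2 - ratio)
    raise ValueError("eq. (2.7) is stated for d/alpha > 1")

# In each regime with d/alpha > 3/2, h(t)/sqrt(t) decreases as t grows:
for d, alpha in [(3, 1.0), (4, 2.0), (3, 1.8)]:
    assert h(1e12, d, alpha) / 1e6 < h(1e6, d, alpha) / 1e3
```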

2.2. Variance of the volume of a stable sausage. Our aim in this section is to determine the constant σ in Theorem 2.1. We can easily adapt the approach of [4, Lemma 4.3] to the present setting and combine it with [10, Theorem 2] to conclude that the limit below exists:

(2.9) lim_{tր∞} Var(V_t) / t = σ².

The main difficulty is to show that σ is strictly positive, and this is obtained in the following crucial lemma. We adapt the proof of [17, Lemma 4.2], but let us emphasize that it is a laborious task to adjust it to the case of stable processes.

Lemma 2.5. The constant σ in eq. (2.9) is strictly positive.

Proof. We split the proof into several steps, and we notice that it clearly suffices to restrict our attention to integer values of the parameter t.

Step 1. We start by finding a handy decomposition of the variance Var(V_n), expressed as a sum of specific random variables, see eq. (2.18). We assume that X_0 = 0, and we set

(2.10) Ŝ[s, t] = S[s, t] \ S_s, 0 ≤ s ≤ t < ∞.

For n, N ∈ N such that 1 ≤ n ≤ N we have

V_n + λ(Ŝ[n, N]) = V_N.

Let F_t = σ(X_s : s ≤ t). Since V_n is F_n-measurable, we obtain

V_n + E[λ(Ŝ[n, N]) | F_n] = E[V_N | F_n],

and by taking expectations and subtracting¹

⟨V_n⟩ + ⟨E[λ(Ŝ[n, N]) | F_n]⟩ = E[V_N | F_n] − E[V_N].

Hence

(2.11) ⟨V_n⟩ + ⟨E[λ(Ŝ[n, N]) | F_n]⟩ = Σ_{k=1}^n U_k^N,

where

U_k^N = E[V_N | F_k] − E[V_N | F_{k−1}].

We first discuss the second term on the left-hand side of eq. (2.11). We claim that

(2.12) ⟨E[λ(Ŝ[n, N]) | F_n]⟩ = −⟨E[λ(S[n, N] ∩ S_n) | F_n]⟩.

Indeed, by eq. (2.10) we have

λ(Ŝ[n, N]) = λ(S[n, N]) − λ(S[n, N] ∩ S_n),

and the independence of the increments of the process X implies that

E[λ(Ŝ[n, N]) | F_n] = E[λ(S[n, N]) | F_n] − E[λ(S[n, N] ∩ S_n) | F_n]
 = E[λ(S[n, N] − X_n) | F_n] − E[λ(S[n, N] ∩ S_n) | F_n]
 = E[λ(S[n, N] − X_n)] − E[λ(S[n, N] ∩ S_n) | F_n]
 = E[λ(S[n, N])] − E[λ(S[n, N] ∩ S_n) | F_n].

Taking expectations and then subtracting the two relations yields eq. (2.12). Next we deal with the random variables U_k^N for k = 1, …, N.
By the independence of the increments of the process X, we obtain

U_k^N = E[λ(S_k) + λ(S[k, N]) − λ(S_k ∩ S[k, N]) | F_k]
 − E[λ(S_{k−1}) + λ(S[k−1, N−1]) − λ(S_{k−1} ∩ S[k−1, N−1]) | F_{k−1}]
 − E[λ(Ŝ[N−1, N]) | F_{k−1}]
 = λ(S_k) − λ(S_{k−1}) + E[λ(S[k, N])] − E[λ(S[k−1, N−1])]
 − E[λ(Ŝ[N−1, N]) | F_{k−1}] + E[λ(S_{k−1} ∩ S[k−1, N−1]) | F_{k−1}]
 − E[λ(S_k ∩ S[k, N]) | F_k]
 = λ(Ŝ[k−1, k]) − E[λ(Ŝ[N−1, N]) | F_{k−1}] + E[λ(S_{k−1} ∩ S[k−1, N−1]) | F_{k−1}]
 − E[λ(S_k ∩ S[k, N]) | F_k].

Let F_{s,t} denote the σ-algebra generated by the increments of X on [s, t], 0 ≤ s ≤ t. Then, by a reversibility argument,

E[λ(Ŝ[N−1, N]) | F_{k−1}] =(d) E[λ(S_1 \ S[1, N]) | F_{N−k+1,N}].

Moreover, the following convergence in L¹ holds:

(2.13) E[λ(S_1 \ S[1, N]) | F_{N−k+1,N}] →(L¹) E[λ(S_1 \ S[1, ∞))] as N ր ∞.

The proof of eq. (2.13) is postponed to Section 5, Lemma 5.1. In view of eq. (2.1) it follows that

E[λ(Ŝ[N−1, N]) | F_{k−1}] →(L¹) Cap(B) as N ր ∞.

We thus obtain

(2.14) U_k^N →(L¹) λ(Ŝ[k−1, k]) − Cap(B) + E[λ(S_{k−1} ∩ S[k−1, ∞)) | F_{k−1}]
 − E[λ(S_k ∩ S[k, ∞)) | F_k] as N ր ∞.

Further, we observe that

λ(S_k ∩ S[k, ∞)) = λ(S_{k−1} ∩ S[k−1, ∞)) − λ(S_{k−1} ∩ S[k−1, k])
 + λ(S_{k−1} ∩ S[k−1, k] ∩ S[k, ∞)) + λ(S[k−1, k] ∩ S[k, ∞))
 − λ(S_{k−1} ∩ S[k−1, k] ∩ S[k, ∞))
 = λ(S_{k−1} ∩ S[k−1, ∞)) − λ(S_{k−1} ∩ S[k−1, k]) + λ(S[k−1, k] ∩ S[k, ∞)).

This and eq. (2.14) imply

¹ For any Y ∈ L¹ we write ⟨Y⟩ = Y − E[Y].

U_k^N →(L¹) λ(S[k−1, k] ∩ S_{k−1}^c) − Cap(B) + E[λ(S_{k−1} ∩ S[k−1, ∞)) | F_{k−1}]
 − E[λ(S_{k−1} ∩ S[k−1, ∞)) | F_k] + λ(S[k−1, k] ∩ S_{k−1})
 − E[λ(S[k−1, k] ∩ S[k, ∞)) | F_k]
 = λ(S[k−1, k]) − E[λ(S[k−1, k] ∩ S[k, ∞)) | F_k] − Cap(B)
 + E[λ(S_{k−1} ∩ S[k−1, ∞)) | F_{k−1}] − E[λ(S_{k−1} ∩ S[k−1, ∞)) | F_k]
 = ⟨E[λ(S[k−1, k] \ S[k, ∞)) | F_k]⟩ + E[λ(S[k−1, k] \ S[k, ∞))] − Cap(B)
 + E[λ(S_{k−1} ∩ S[k−1, ∞)) | F_{k−1}] − E[λ(S_{k−1} ∩ S[k−1, ∞)) | F_k]
 = ⟨E[λ(S[k−1, k] \ S[k, ∞)) | F_k]⟩
 + E[λ(S_{k−1} ∩ S[k−1, ∞)) | F_{k−1}] − E[λ(S_{k−1} ∩ S[k−1, ∞)) | F_k],

where in the last line we used eq. (2.1). Hence, by eqs. (2.11) and (2.12),

(2.15) ⟨V_n⟩ = ⟨E[λ(S_n ∩ S[n, ∞)) | F_n]⟩ + Σ_{k=1}^n Y_k,

where

(2.16) Y_k = E[λ(S_{k−1} ∩ S[k−1, ∞)) | F_{k−1}] − E[λ(S_{k−1} ∩ S[k−1, ∞)) | F_k]
 + ⟨E[λ(S[k−1, k] \ S[k, ∞)) | F_k]⟩.
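It is worth recording explicitly (our addition; it follows from eq. (2.16) by the tower property of conditional expectation and the definition of ⟨·⟩) that each Y_k is centered:

```latex
\mathbb{E}[Y_k]
= \mathbb{E}\bigl[\lambda(S_{k-1}\cap S[k-1,\infty))\bigr]
- \mathbb{E}\bigl[\lambda(S_{k-1}\cap S[k-1,\infty))\bigr]
+ \mathbb{E}\bigl[\bigl\langle \mathbb{E}[\lambda(S[k-1,k]\setminus S[k,\infty))\mid\mathcal{F}_k]\bigr\rangle\bigr]
= 0,
```

since ⟨Y⟩ = Y − E[Y] has zero mean by construction. This is what makes the decomposition in eq. (2.15) compatible with the variance identity that follows.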

From eq. (2.15) it follows that the variance of V_n is equal to

(2.17) Var(V_n) = Var(E[λ(S_n ∩ S[n, ∞)) | F_n]) + E[( Σ_{k=1}^n Y_k )²]
 + 2 E[ ⟨E[λ(S_n ∩ S[n, ∞)) | F_n]⟩ Σ_{k=1}^n Y_k ].

Clearly, Y_k is F_k-measurable and E[Y_k] = 0 for k = 1, …, n. Jensen's inequality and eq. (2.6), which applies since d > 3α/2, yield

(1/n) Var(E[λ(S_n ∩ S[n, ∞)) | F_n]) ≤ (1/n) E[λ(S_n ∩ S[n, ∞))²] →(nր∞) 0.

The sequence {(1/n) Σ_{k=1}^n E[Y_k²]}_{n≥1} is bounded (the proof is given in Section 5, Lemma 5.2), and thus by the Cauchy–Schwarz inequality we conclude that

lim_{nր∞} (1/n) E[ ⟨E[λ(S_n ∩ S[n, ∞)) | F_n]⟩ Σ_{k=1}^n Y_k ] = 0.

We have shown that

(2.18) lim_{nր∞} Var(V_n) / n = lim_{nր∞} (1/n) Σ_{k=1}^n E[Y_k²].

Step 2. In this step we prove that the limit on the right-hand side of eq. (2.18) is strictly positive. Let X′ be an independent copy of the original process X such that X′_0 = 0 and it has càglàd paths. We consider a process X̄ = {X̄_t}_{t∈R} by setting X̄_t = X′_{−t} for t ≤ 0, and X̄_t = X_t for t ≥ 0. Clearly, the process X̄ has càdlàg paths, and stationary and independent increments. The sausages S[s, t], S(−∞, s] and S[s, ∞) corresponding to X̄ are defined for all s, t ∈ R, s ≤ t. Recall that S_∞ = S[0, ∞). We assume that the process X̄ is defined on the canonical space Ω = D(R, R^d) of all càdlàg functions ω : R → R^d as the coordinate process X̄_t(ω) = ω(t). We define a shift operator ϑ acting on Ω by

(2.19) ϑω(t) = ω(1 + t) − ω(1), t ∈ R,

and we notice that it is a P-preserving mapping. For t ∈ R, we define G_t = σ(X̄_s : s ≤ t), and for k ∈ N we set

(2.20) Z_k = E[λ(S[−k, 0] ∩ S_∞) | G_0] − E[λ(S[−k, 0] ∩ S_∞) | G_1] + ⟨E[λ(S_1 \ S[1, ∞)) | G_1]⟩,

so that, by eq. (2.16) and stationarity of the increments of X̄,

(2.21) Y_{k+1} = Z_k ∘ ϑ^k, k ≥ 0.

We will show that there exists a random variable Z ∈ L¹ such that E[|Z|] > 0 and

(2.22) Z_k →(L¹) Z as k ր ∞.

Step 2a. We start by proving the existence of Z. This is evident if d > 2α, as in this case h(t) = 1 and whence, using eq. (2.6), we obtain E[λ(S(−∞, 0] ∩ S_∞)] < ∞.

This implies (2.22) with

(2.23) Z = E[λ(S(−∞, 0] ∩ S_∞) | G_0] − E[λ(S(−∞, 0] ∩ S_∞) | G_1]
 + ⟨E[λ(S_1 \ S[1, ∞)) | G_1]⟩.

We next consider the case 3α/2 < d ≤ 2α, where we have to control the difference of the two conditional expectations in

the second line. We observe that

λ(S[−k, 0] ∩ S_∞) = ∫_{R^d} 1_{S[−k,0]}(y) 1_{S_∞}(y) dy.

By taking the conditional expectation with respect to G_0, we obtain

E[λ(S[−k, 0] ∩ S_∞) | G_0] = ∫_{R^d} 1_{S[−k,0]}(y) E[1_{S_∞}(y)] dy = ∫_{R^d} 1_{S[−k,0]}(y) φ(y) dy,

where we set φ(y) = P(y ∈ S_∞).

Similarly, we write

λ(S[−k, 0] ∩ S[1, ∞)) = ∫_{R^d} 1_{S[−k,0]−X_1}(y) 1_{S[1,∞)−X_1}(y) dy,

and we take the conditional expectation with respect to G_1, which yields

(2.24) E[λ(S[−k, 0] ∩ S[1, ∞)) | G_1] = ∫_{R^d} 1_{S[−k,0]−X_1}(y) E[1_{S[1,∞)−X_1}(y)] dy

= ∫_{R^d} 1_{S[−k,0]}(y) φ(y − X_1) dy.

It follows that

(2.25) E[λ(S[−k, 0] ∩ S_∞) | G_0] − E[λ(S[−k, 0] ∩ S[1, ∞)) | G_1]

= ∫_{R^d} 1_{S[−k,0]}(y) (φ(y) − φ(y − X_1)) dy.

We prove in Lemma 5.4 that the right-hand side integral in eq. (2.25) is a well-defined random variable in L¹. Thus, the dominated convergence theorem implies eq. (2.22) with

Z = ⟨E[λ(S_1 \ S[1, ∞)) | G_1]⟩ − E[λ((S_1 \ S[1, ∞)) ∩ S(−∞, 0]) | G_1]

+ ∫_{R^d} 1_{S(−∞,0]}(y) (φ(y) − φ(y − X_1)) dy.

Step 2b. We next show that E[|Z|] > 0, and we remark that the following arguments apply to all d > 3α/2. From eqs. (2.15) and (2.21) we have

⟨V_n⟩ = Σ_{k=0}^{n−1} Z ∘ ϑ^k + H_n,

where

(2.26) H_n = ⟨E[λ(S_n ∩ S[n, ∞)) | G_n]⟩ + Σ_{k=0}^{n−1} (Z_k − Z) ∘ ϑ^k.

Equation (2.1) yields

⟨V_n⟩ = V_n − E[V_n] ≤ V_n − E[λ(S_n \ S[n, ∞))] = V_n − n Cap(B).

This implies

V_n ≥ n Cap(B) + Σ_{k=0}^{n−1} Z ∘ ϑ^k + H_n.

We aim to prove that there is c̄ > 0 (which does not depend on n) such that for all n ∈ N

(2.27) P( {V_n ≤ c̄} ∩ {|H_n| ≤ c̄} ) > 0.

Notice that if E[|Z|] = 0 then the inequality V_n ≥ n Cap(B) + H_n and eq. (2.27) would imply that Cap(B) = 0, which is a contradiction. Therefore, it is enough to show eq. (2.27). From eqs. (2.20) and (2.23) we obtain

(2.28) Σ_{k=0}^{n−1} (Z − Z_k) ∘ ϑ^k = ∫_{R^d} 1_{S(−∞,0]}(y) E[1_{S_∞}(y) | G_0] dy
 − Σ_{k=0}^{n−2} ∫_{R^d} ( 1_{S(−∞,0]\S_k}(y) E[1_{S[k,∞)}(y) | G_{k+1}]
 − 1_{S(−∞,0]\S_{k+1}}(y) E[1_{S[k+1,∞)}(y) | G_{k+1}] ) dy
 − ∫_{R^d} 1_{S(−∞,0]\S_{n−1}}(y) E[1_{S[n−1,∞)}(y) | G_n] dy.

We next observe that

∫_{R^d} ( 1_{S(−∞,0]\S_k}(y) E[1_{S[k,∞)}(y) | G_{k+1}] − 1_{S(−∞,0]\S_{k+1}}(y) E[1_{S[k+1,∞)}(y) | G_{k+1}] ) dy

(2.29) = ∫_{R^d} 1_{S(−∞,0]}(y) ( 1_{S_k^c}(y) E[1_{S[k,k+1]}(y) + 1_{S[k+1,∞)}(y) 1_{S[k,k+1]^c}(y) | G_{k+1}]
 − 1_{S_{k+1}^c}(y) E[1_{S[k+1,∞)}(y) | G_{k+1}] ) dy
 = ∫_{R^d} 1_{S(−∞,0]}(y) ( 1_{S_k^c}(y) 1_{S[k,k+1]}(y)
 + 1_{S_k^c}(y) 1_{S[k,k+1]^c}(y) E[1_{S[k+1,∞)}(y) | G_{k+1}]
 − 1_{S_{k+1}^c}(y) E[1_{S[k+1,∞)}(y) | G_{k+1}] ) dy
 = λ(S(−∞, 0] ∩ (S[k, k+1] \ S_k)),

since S_{k+1} = S_k ∪ S[k, k+1], so that 1_{S_k^c} 1_{S[k,k+1]^c} = 1_{S_{k+1}^c} and the last two terms cancel.


We clearly have 1_{S[n−1,∞)}(y) = 1_{S[n,∞)}(y) + 1_{S[n−1,n]\S[n,∞)}(y). This identity and a similar argument as in eq. (2.24) yield

(2.30) ∫_{R^d} 1_{S(−∞,0]\S_{n−1}}(y) E[1_{S[n−1,∞)}(y) | G_n] dy
 = ∫_{R^d} 1_{S(−∞,0]\S_{n−1}}(y) E[1_{S[n,∞)}(y) | G_n] dy
 + ∫_{R^d} 1_{S(−∞,0]\S_{n−1}}(y) E[1_{S[n−1,n]\S[n,∞)}(y) | G_n] dy
 = ∫_{R^d} 1_{S(−∞,0]}(y) φ(y − X_n) dy − ∫_{R^d} 1_{S(−∞,0]∩S_{n−1}}(y) φ(y − X_n) dy
 + ∫_{R^d} 1_{S(−∞,0]\S_{n−1}}(y) E[1_{S[n−1,n]\S[n,∞)}(y) | G_n] dy.

By combining eqs. (2.28) to (2.30), we arrive at

(2.31) Σ_{k=0}^{n−1} (Z − Z_k) ∘ ϑ^k
 = ∫_{R^d} 1_{S(−∞,0]}(y) (φ(y) − φ(y − X_n)) dy + ∫_{R^d} 1_{S(−∞,0]∩S_{n−1}}(y) φ(y − X_n) dy
 − ∫_{R^d} 1_{S(−∞,0]\S_{n−1}}(y) E[1_{S[n−1,n]\S[n,∞)}(y) | G_n] dy − λ(S(−∞, 0] ∩ S_{n−1}).

We claim that there is a constant c̃ > 0 such that for all n ∈ N

(2.32) P( { sup_{0≤s≤n} |X_s| ≤ 1 } ∩ { ∫_{R^d} 1_{S(−∞,0]}(y) (φ(y) − φ(y − X_n)) dy ≤ c̃ + 1 } ) > 0.

If sup_{0≤s≤n} |X_s| ≤ 1, then clearly λ(S_n) ≤ λ(B(0, 2)), and this allows us to estimate the first term on the right-hand side of eq. (2.26) and, similarly, the three last terms on the right-hand side of eq. (2.31) by a constant. We infer that there exists a constant c̄ > 0 such that

{ sup_{0≤s≤n} |X_s| ≤ 1 } ∩ { ∫_{R^d} 1_{S(−∞,0]}(y) (φ(y) − φ(y − X_n)) dy ≤ c̃ + 1 } ⊆ {V_n ≤ c̄} ∩ {|H_n| ≤ c̄}.

To finish the proof of eq. (2.27), we are only left to show eq. (2.32). In view of the Markov inequality, it is enough to prove that on the event {sup_{0≤s≤n} |X_s| ≤ 1} we have

E[ ∫_{R^d} 1_{S(−∞,0]}(y) (φ(y) − φ(y − X_n)) dy ] < ∞.

This holds as, on the event {sup_{0≤s≤n} |X_s| ≤ 1},

E[ ∫_{R^d} 1_{S(−∞,0]}(y) (φ(y) − φ(y − X_n)) dy ] ≤ sup_{x∈B} ∫_{R^d} |φ(y) − φ(y − x)| dy < ∞,

where the convergence of the last integral is established in Lemma 5.5.

Step 2c. We finally show that the limit in eq. (2.18) is positive. Equation (2.22) implies

lim_{nր∞} (1/n) Σ_{k=0}^{n−1} E[|Z_k|] = E[|Z|].

Hence, by Jensen's inequality,

lim_{nր∞} (1/n) Σ_{k=0}^{n−1} E[Z_k²] ≥ lim_{nր∞} E[ ( (1/n) Σ_{k=0}^{n−1} |Z_k| )² ] ≥ lim_{nր∞} ( (1/n) Σ_{k=0}^{n−1} E[|Z_k|] )² = (E[|Z|])².

By eq. (2.21), we have

lim_{nր∞} (1/n) Σ_{k=1}^n E[Y_k²] = lim_{nր∞} (1/n) Σ_{k=0}^{n−1} E[Z_k²] ≥ (E[|Z|])² > 0,

and this finishes the proof of the lemma. □

2.3. Proof of the central limit theorem. In this paragraph, we prove Theorem 2.1. In the proof we apply the Lindeberg–Feller central limit theorem, which we include for completeness.

Lemma 2.6 ([7, Theorem 3.4.5]). For each n ∈ N let {X_{n,i}}_{1≤i≤n} be a sequence of independent random variables with zero mean. If the following conditions are satisfied:

(i) lim_{nր∞} Σ_{i=1}^n E[X_{n,i}²] = σ² > 0, and

(ii) for every ε > 0, lim_{nր∞} Σ_{i=1}^n E[X_{n,i}² 1_{|X_{n,i}|>ε}] = 0,

then X_{n,1} + ⋯ + X_{n,n} →(d) σ N(0, 1) as n ր ∞.

Proof of Theorem 2.1. For t > 0 large enough we choose n = n(t) = ⌊log(t)⌋. We have

V_t = λ(S_t) = λ(S_{t/n} ∪ S[t/n, t]) = λ((S_{t/n} − X_{t/n}) ∪ (S[t/n, t] − X_{t/n})).

By the Markov property,

S_{t/n}^{(1)} = S_{t/n} − X_{t/n} and S_{(n−1)t/n}^{(2)} = S[t/n, t] − X_{t/n}

are independent, and S_{(n−1)t/n}^{(2)} has the same law as S_{(n−1)t/n}. Rotational invariance of X implies that S_{t/n}^{(1)} is equal in law to S_{t/n}. Hence,

V_t = λ(S_{t/n}^{(1)}) + λ(S_{(n−1)t/n}^{(2)}) − λ(S_{t/n}^{(1)} ∩ S_{(n−1)t/n}^{(2)}).

By iterating this procedure, we obtain

(2.33) V_t = Σ_{i=1}^n λ(S_{t/n}^{(i)}) − Σ_{i=1}^{n−1} λ(S_{t/n}^{(i)} ∩ S_{(n−i)t/n}^{(i+1)}).

We denote

V_{t/n}^{(i)} = λ(S_{t/n}^{(i)}) and R(t) = Σ_{i=1}^{n−1} λ(S_{t/n}^{(i)} ∩ S_{(n−i)t/n}^{(i+1)}),

and we notice that {V_{t/n}^{(i)}}_{1≤i≤n} are i.i.d. random variables. By taking expectations in eq. (2.33) and then subtracting, we obtain

(2.34) ⟨V_t⟩ = Σ_{i=1}^n ⟨V_{t/n}^{(i)}⟩ − ⟨R(t)⟩.
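Before Lemma 2.6 is applied, it may help to see its two conditions verified exactly on a toy triangular array (a hypothetical example of ours, not the array {⟨V_{t/n}^{(i)}⟩/√t} from the proof): for the scaled Rademacher array X_{n,i} = ξ_i/√n with ξ_i = ±1, condition (i) holds with σ = 1, and the Lindeberg sum in condition (ii) vanishes exactly once 1/√n ≤ ε.

```python
from math import sqrt

def lindeberg_terms(n, eps):
    """Exact values of conditions (i) and (ii) of the Lindeberg-Feller theorem
    for the toy array X_{n,i} = xi_i / sqrt(n), where xi_i = +/-1 with probability 1/2 each."""
    second_moment_sum = n * (1.0 / n)  # sum_i E[X_{n,i}^2] = 1, so sigma = 1
    # |X_{n,i}| = 1/sqrt(n) surely, so the truncated second moment is all or nothing.
    truncated_sum = second_moment_sum if 1.0 / sqrt(n) > eps else 0.0
    return second_moment_sum, truncated_sum

s2, lind = lindeberg_terms(10000, 0.1)
assert abs(s2 - 1.0) < 1e-12  # condition (i)
assert lind == 0.0            # condition (ii): exactly zero once n >= 1/eps^2
```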

We first show that

(2.35) ⟨R(t)⟩ / √t →(L¹) 0 as t ր ∞.

Since R(t) ≥ 0, we clearly have E[|⟨R(t)⟩|] ≤ 2 E[R(t)]. By Corollary 2.4,

E[R(t)] ≤ Σ_{i=1}^{n−1} E[λ(S_{t/n}^{(i)} ∩ S_∞^{(i+1)})] ≤ c n h(t/n)

for all t > 0 large enough. Hence, eq. (2.35) follows by eq. (2.7), and the fact that n = ⌊log(t)⌋ and d/α > 3/2. Next we prove that

(2.36) (1/√t) Σ_{i=1}^n ⟨V_{t/n}^{(i)}⟩ →(d) σ N(0, 1) as t ր ∞.

For this we introduce the random variables

X_{n,i} = ⟨V_{t/n}^{(i)}⟩ / √t, i = 1, …, n,

and we check the validity of conditions (i) and (ii) from Lemma 2.6. Condition (i) follows by Lemma 2.5:

(2.37) lim_{nր∞} Σ_{i=1}^n E[X_{n,i}²] = lim_{nր∞} (n/t) Var(V_{t/n}) = σ².

To establish condition (ii) we apply the Cauchy–Schwarz inequality and obtain that for every ε > 0,

E[X_{n,i}² 1_{|X_{n,i}|>ε}] ≤ (1/t) ( E[⟨V_{t/n}⟩⁴] )^{1/2} ( P(|⟨V_{t/n}⟩| > ε√t) )^{1/2}.

By Chebyshev's inequality combined with Lemma 2.5 and the fact that n = ⌊log(t)⌋, there is a constant c₁ > 0 such that

P(|⟨V_{t/n}⟩| > ε√t) ≤ Var(V_{t/n}) / (ε²t) ≤ c₁(t/n) / (ε²t) = c₁ / (ε²n).

This together with Lemma 5.6 imply that there are constants c₂, c₃ > 0 such that

(2.38) lim_{nր∞} Σ_{i=1}^n E[X_{n,i}² 1_{|X_{n,i}|>ε}] ≤ lim_{nր∞} Σ_{i=1}^n (1/t) c₂ (t/n) ( c₁ / (ε²n) )^{1/2} ≤ lim_{nր∞} c₃/√n = 0.

Thus, eq. (2.36) follows and we conclude that

⟨V_t⟩ / √t →(d) σ N(0, 1) as t ր ∞.

We finally observe that

(V_t − t Cap(B)) / (σ√t) = ⟨V_t⟩ / (σ√t) + (E[V_t] − t Cap(B)) / (σ√t),

which allows us to finish the proof in view of Lemma 2.2 and Corollary 2.4. □
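The block decomposition with n = ⌊log t⌋ is what makes the error term ⟨R(t)⟩/√t vanish: by Corollary 2.4 it is of order n h(t/n)/√t. The following numerical sketch (ours; the constant c is dropped) illustrates that this bound indeed decays in the worst regime d/α ∈ (3/2, 2):

```python
from math import log, floor, sqrt

def error_over_sqrt_t(t, ratio):
    """n * h(t/n) / sqrt(t) with n = floor(log t) and h(s) = s^(2 - ratio),
    i.e. the bound behind eq. (2.35) for d/alpha = ratio in (3/2, 2), constant dropped."""
    n = floor(log(t))
    return n * (t / n) ** (2.0 - ratio) / sqrt(t)

# For d/alpha = 1.6 the bound decreases along a geometric sequence of times:
vals = [error_over_sqrt_t(10.0 ** p, 1.6) for p in (4, 8, 12, 16)]
assert all(a > b for a, b in zip(vals, vals[1:]))
```

At the boundary value d/α = 3/2 the same quantity equals √n and does not tend to zero, which indicates why the lower-dimensional case is postponed to follow-up work.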

3. Functional central limit theorem

The goal of this section is to prove the functional central limit theorem in eq. (1.7). To prove this statement we adapt the proof of [5, Theorem 1.1], which is concerned with the functional central limit theorem for the capacity of the range of a stable random walk. We again assume that X is a stable rotationally invariant Lévy process in R^d of index α ∈ (0, 2] satisfying d/α > 3/2.

We follow the classical two-step scheme (see [15, Theorem 16.10 and Theorem 16.11]). Let {Y^n}_{n≥1} be a sequence of random elements in the Skorohod space D([0, ∞), R) endowed with the Skorohod J1 topology. The sequence {Y^n}_{n≥1} converges weakly to a random element Y (in D([0, ∞), R)) if the following two conditions are satisfied:

(i) The finite dimensional distributions of {Y^n}_{n≥1} converge weakly to the finite dimensional distributions of Y.

(ii) For any bounded sequence {T_n}_{n≥1} of {Y^n}_{n≥1}-stopping times and any non-negative sequence {b_n}_{n≥1} converging to zero,

lim_{nր∞} P( |Y^n_{T_n+b_n} − Y^n_{T_n}| ≥ ε ) = 0, ε > 0.

Theorem 3.1. Under the above assumptions, the following convergence holds:

{(V_{nt} − nt Cap(B)) / (σ√n)}_{t≥0} →(J1) {W_t}_{t≥0} as n ր ∞,

where σ is the constant from Theorem 2.1.

Proof. We consider the following sequence of random elements, which are defined in the space D([0, ∞), R):

(3.1) Y^n_t = (V_{nt} − nt Cap(B)) / (σ√n), n ∈ N,

where σ is the constant from Theorem 2.1. Let us start by showing condition (i).

Condition (i). By Theorem 2.1, we have

Y^n_t = (V_{nt} − nt Cap(B)) / (σ√n) = √t (V_{nt} − nt Cap(B)) / (σ√(nt)) →(d) N(0, t) as n ր ∞.

Let k ∈ N be arbitrary and choose 0 = t_0 < t_1 < t_2 < ⋯ < t_k. We need to prove that

(Y^n_{t_1}, Y^n_{t_2}, …, Y^n_{t_k}) →(d) (W_{t_1}, W_{t_2}, …, W_{t_k}) as n ր ∞.

In view of the Cramér–Wold theorem [15, Corollary 5.5] it suffices to show that

$$\sum_{j=1}^{k} \xi_j Y^n_{t_j} \xrightarrow[n\to\infty]{(d)} \sum_{j=1}^{k} \xi_j W_{t_j}, \qquad (\xi_1,\xi_2,\ldots,\xi_k) \in \mathbb{R}^k.$$
Using a similar reasoning as in the beginning of the proof of Theorem 2.1, we obtain for $j \in \{1,2,\ldots,k\}$,
$$V_{nt_j} = \sum_{i=1}^{j} V^{(i)}_{n(t_i-t_{i-1})} - \sum_{i=1}^{j-1} R^{(i)}_{nt_j},$$
where $V^{(i)}_{n(t_i-t_{i-1})} = \lambda\big(S^{(i)}_{n(t_i-t_{i-1})}\big)$ and $R^{(i)}_{nt_j} = \lambda\big(S^{(i)}_{n(t_i-t_{i-1})} \cap S^{(i+1)}_{n(t_j-t_i)}\big)$.

The random variables $V^{(i)}_{n(t_i-t_{i-1})}$, for $i \in \{1,2,\ldots,k\}$, are independent, $S^{(i)}_{n(t_i-t_{i-1})}$ has the same law as $S_{n(t_i-t_{i-1})}$, and $R^{(i)}_{nt_j}$ has the same law as $\lambda\big(S_{n(t_i-t_{i-1})} \cap S'_{n(t_j-t_i)}\big)$, with $S'_{n(t_j-t_i)}$ being an independent copy of $S_{n(t_j-t_i)}$. For arbitrary $(\xi_1,\xi_2,\ldots,\xi_k) \in \mathbb{R}^k$ we have
$$\begin{aligned}
\sum_{j=1}^{k} \xi_j Y^n_{t_j} &= \sum_{j=1}^{k} \xi_j\,\frac{V_{nt_j} - nt_j\,\mathrm{Cap}(B)}{\sigma\sqrt n} \\
&= \frac{1}{\sigma\sqrt n}\sum_{j=1}^{k} \xi_j\Big(\sum_{i=1}^{j} V^{(i)}_{n(t_i-t_{i-1})} - \sum_{i=1}^{j-1} R^{(i)}_{nt_j} - \sum_{i=1}^{j} n(t_i-t_{i-1})\,\mathrm{Cap}(B)\Big) \\
&= \sum_{j=1}^{k} \xi_j \sum_{i=1}^{j} \frac{V^{(i)}_{n(t_i-t_{i-1})} - n(t_i-t_{i-1})\,\mathrm{Cap}(B)}{\sigma\sqrt n} - \frac{1}{\sigma}\sum_{j=1}^{k} \xi_j \sum_{i=1}^{j-1} \frac{R^{(i)}_{nt_j}}{\sqrt n} \\
&= \sum_{i=1}^{k} \Big(\sum_{j=i}^{k} \xi_j\Big) \frac{V^{(i)}_{n(t_i-t_{i-1})} - n(t_i-t_{i-1})\,\mathrm{Cap}(B)}{\sigma\sqrt n} - \frac{1}{\sigma}\sum_{j=1}^{k} \sum_{i=1}^{j-1} \xi_j\, \frac{R^{(i)}_{nt_j}}{\sqrt n}.
\end{aligned}$$
Theorem 2.1 provides that
$$\frac{V^{(i)}_{n(t_i-t_{i-1})} - n(t_i-t_{i-1})\,\mathrm{Cap}(B)}{\sigma\sqrt n} \xrightarrow[n\to\infty]{(d)} \mathcal{N}(0,\,t_i-t_{i-1}).$$
Markov's inequality combined with Corollary 2.4 implies that for every $\varepsilon > 0$,
$$\mathbb{P}\Big(\frac{R^{(i)}_{nt_j}}{\sqrt n} > \varepsilon\Big) \le \frac{\mathbb{E}\big[\lambda\big(S_{n(t_i-t_{i-1})} \cap S'_{n(t_j-t_i)}\big)\big]}{\varepsilon\sqrt n} \le \frac{\mathbb{E}\big[\lambda\big(S_{nt_j} \cap S'_\infty\big)\big]}{\varepsilon\sqrt n} \le \frac{c\,h(nt_j)}{\varepsilon\sqrt n},$$
which converges to zero, as $n$ tends to infinity. Since for $i \in \{1,2,\ldots,k\}$ the random variables $V^{(i)}_{n(t_i-t_{i-1})}$ are independent, we obtain
$$\sum_{j=1}^{k} \xi_j Y^n_{t_j} \xrightarrow[n\to\infty]{(d)} \mathcal{N}\Big(0,\ \sum_{i=1}^{k}\Big(\sum_{j=i}^{k}\xi_j\Big)^2 (t_i-t_{i-1})\Big).$$
It follows that the finite-dimensional distributions of $\{Y^n\}_{n\ge1}$ converge weakly to the finite-dimensional distributions of a one-dimensional standard Brownian motion.

Condition (ii). Let $\{T_n\}_{n\ge1}$ be a bounded sequence of $\{Y^n\}_{n\ge1}$-stopping times, and let $\{b_n\}_{n\ge1} \subset [0,\infty)$ be an arbitrary sequence which converges to zero. We aim to prove that
$$Y^n_{T_n+b_n} - Y^n_{T_n} \xrightarrow[n\to\infty]{\mathbb{P}} 0,$$
where the convergence holds in probability. By eq. (3.1), we have

(3.2)
$$Y^n_{T_n+b_n} - Y^n_{T_n} = \frac{V_{n(T_n+b_n)} - n(T_n+b_n)\,\mathrm{Cap}(B)}{\sigma\sqrt n} - \frac{V_{nT_n} - nT_n\,\mathrm{Cap}(B)}{\sigma\sqrt n}.$$
The Markov property and rotational invariance of $X$ yield
$$\begin{aligned}
V_{n(T_n+b_n)} - V_{nT_n} &= \lambda\big((S_{nT_n} \cup S[nT_n, n(T_n+b_n)]) - X_{nT_n}\big) - \lambda\big(S_{nT_n} - X_{nT_n}\big) \\
&= \lambda\big(S^{(1)}_{nT_n}\big) + \lambda\big(S^{(2)}_{nb_n}\big) - \lambda\big(S^{(1)}_{nT_n} \cap S^{(2)}_{nb_n}\big) - \lambda\big(S^{(1)}_{nT_n}\big) \\
&= \lambda\big(S^{(2)}_{nb_n}\big) - \lambda\big(S^{(1)}_{nT_n} \cap S^{(2)}_{nb_n}\big),
\end{aligned}$$

where $S^{(1)}_{nT_n}$ and $S^{(2)}_{nb_n}$ are independent and have the same distribution as $S_{nT_n}$ and $S_{nb_n}$, respectively. Equation (3.2) implies
$$Y^n_{T_n+b_n} - Y^n_{T_n} = \frac{\lambda\big(S^{(2)}_{nb_n}\big) - \lambda\big(S^{(1)}_{nT_n} \cap S^{(2)}_{nb_n}\big) - nb_n\,\mathrm{Cap}(B)}{\sigma\sqrt n}.$$
With a slight abuse of notation we write $V_{nb_n} = \lambda\big(S^{(2)}_{nb_n}\big)$. By Lemma 2.2, we obtain
(3.3)
$$Y^n_{T_n+b_n} - Y^n_{T_n} = \frac{V_{nb_n} - \mathbb{E}[V_{nb_n}]}{\sigma\sqrt n} + \frac{\mathbb{E}\big[\lambda\big(S_{nb_n} \cap S[nb_n,\infty)\big)\big]}{\sigma\sqrt n} - \frac{\lambda\big(S^{(1)}_{nT_n} \cap S^{(2)}_{nb_n}\big)}{\sigma\sqrt n}.$$
We prove that the three terms on the right-hand side of eq. (3.3) converge to zero in probability. For the first term, Chebyshev's inequality yields that for every $\varepsilon > 0$,
$$\mathbb{P}\Big(\frac{|V_{nb_n} - \mathbb{E}[V_{nb_n}]|}{\sqrt n} > \varepsilon\Big) \le \frac{\mathrm{Var}(V_{nb_n})}{\varepsilon^2 n},$$
and we are left to show that
(3.4)
$$\lim_{n\to\infty} \frac{\mathrm{Var}(V_{nb_n})}{n} = 0.$$
This follows by Lemma 2.5. Indeed, there exist $t_1, c_1 > 0$ such that for every $t \ge t_1$ we have $\mathrm{Var}(V_t) \le c_1 t$, and whence for $nb_n \ge t_1$, $\mathrm{Var}(V_{nb_n}) \le c_1 nb_n$. For $nb_n < t_1$ we observe that $\mathrm{Var}(V_{nb_n}) \le \mathbb{E}[V_{nb_n}^2] \le \mathbb{E}[V_{t_1}^2]$. This trivially implies eq. (3.4).

By Corollary 2.4, similarly as above, we show that there is $t_2 > 0$ such that for all $n \in \mathbb{N}$,
$$\mathbb{E}\big[\lambda\big(S_{nb_n} \cap S[nb_n,\infty)\big)\big] \le c\,h(nb_n) + \mathbb{E}[V_{t_2}].$$
We then easily conclude that the second term on the right-hand side of eq. (3.3) converges to zero in probability.

There exists $c_2 > 0$ such that $\sup_{n\ge1} T_n \le c_2$. By the Markov inequality and Corollary 2.4, we obtain that for every $\varepsilon > 0$,
$$\mathbb{P}\Big(\frac{\lambda\big(S^{(1)}_{nT_n} \cap S^{(2)}_{nb_n}\big)}{\sigma\sqrt n} > \varepsilon\Big) \le \frac{\mathbb{E}\big[\lambda\big(S^{(1)}_{nT_n} \cap S^{(2)}_{nb_n}\big)\big]}{\varepsilon\sigma\sqrt n} \le \frac{\mathbb{E}\big[\lambda\big(S_{c_2 n} \cap S'_\infty\big)\big]}{\varepsilon\sigma\sqrt n} \le \frac{c\,h(c_2 n)}{\varepsilon\sigma\sqrt n},$$
which converges to zero, as $n$ tends to infinity. This shows that the last term on the right-hand side of eq. (3.3) goes to zero in probability and the proof is finished. $\square$
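The scheme just used, finite-dimensional convergence plus the stopping-time condition (ii), is the same one that drives Donsker's theorem, and the first half is easy to probe numerically. The toy Python sketch below (an illustration of ours, not the paper's object $V_{nt}$) builds the rescaled processes $Y^n_t = S_{\lfloor nt\rfloor}/\sqrt n$ for centered $\pm1$ summands and checks that the one-dimensional marginal variances approach $\mathrm{Var}(W_t) = t$; all sample sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)

def rescaled_paths(n, times, n_paths=4000):
    """Sample Y^n at the given times for n_paths independent +-1 walks."""
    steps = rng.choice([-1.0, 1.0], size=(n_paths, n))
    s = np.cumsum(steps, axis=1)
    idx = [max(int(n * t) - 1, 0) for t in times]
    return s[:, idx] / np.sqrt(n)   # one row per sampled path

times = [0.25, 0.5, 1.0]
Y = rescaled_paths(2000, times)
print(Y.var(axis=0))   # should be close to (0.25, 0.5, 1.0)
```

Condition (ii) is harder to visualize; for the walk it amounts to the uniform smallness of increments over vanishing time windows after bounded stopping times.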

4. Laws of the iterated logarithm

This section is devoted to the proof of the following result.

Theorem 4.1. If $d/\alpha > 9/5$, then the process $\{V_t\}_{t\ge0}$ satisfies Khintchine's and Chung's law of the iterated logarithm, that is, eqs. (1.8) and (1.9), respectively.

We start with the proof of eq. (1.8), which is based on the following lemma.

Lemma 4.2 ([20, Chapter X, Theorem 2]). Let $\{Y_n\}_{n\ge1}$ be a sequence of independent random variables with mean 0 and finite variance. Set
$$S_n = \sum_{i=1}^{n} Y_i, \qquad s_n^2 = \sum_{i=1}^{n} \mathbb{E}[Y_i^2].$$
Suppose $\lim_{n\to\infty} s_n = \infty$, and that for any $\varepsilon > 0$,
(4.1)
$$\lim_{n\to\infty} \frac{1}{s_n^2} \sum_{i=i_0}^{n} \mathbb{E}\Big[Y_i^2\, \mathbf{1}_{\{|Y_i| \ge \varepsilon\sqrt{s_i^2/\log\log s_i^2}\}}\Big] = 0$$
and
(4.2)
$$\sum_{i=i_0}^{\infty} \frac{1}{s_i^2 \log\log s_i^2}\, \mathbb{E}\Big[Y_i^2\, \mathbf{1}_{\{|Y_i| \ge \varepsilon\sqrt{s_i^2/\log\log s_i^2}\}}\Big] < \infty,$$
where $i_0 = \min\{i \ge 1 : \log\log s_j^2 > 0,\ j \ge i\}$. Then
$$\limsup_{n\to\infty} \frac{S_n}{\sqrt{2 s_n^2 \log\log s_n^2}} = 1 \quad \mathbb{P}\text{-a.s.}$$

Proof of Theorem 4.1 - Khintchine's law of the iterated logarithm. By $\lfloor a\rfloor$ we denote the greatest integer less than or equal to $a \in \mathbb{R}$. We consider a sequence $\{n_i\}_{i\ge0}$ of non-negative integers such that if $2^k \le n_i < 2^{k+1}$, then $n_i$ runs over all consecutive integers of the form $2^k + j\lfloor 2^{k/2}/k\rfloor$, for $k \in \mathbb{N}$ and $j = 0,1,\ldots,\lfloor k2^{k/2}\rfloor$. We set $n_0 = 0$. Clearly, if $2^k \le n_i < 2^{k+1}$ then $0 \le n_{i+1} - n_i \le 2^{k/2}/k + 1$. Hence,
(4.3)
$$\lim_{i\to\infty} \frac{n_{i+1}}{n_i} = 1 \quad\text{and}\quad n_{i+1} - n_i = O\big(n_i^{1/2}/\log n_i\big).$$
Since
$$\sup_{n_i \le t \le n_{i+1}} |\langle V_t\rangle - \langle V_{n_i}\rangle| \le V[n_i, n_{i+1}] + \mathbb{E}\big[V[n_i, n_{i+1}]\big],$$
we see that in the case when $d/\alpha > 1$ (transience) eqs. (1.2), (1.3) and (4.3) imply that

(4.4)
$$\lim_{i\to\infty} \frac{\sup_{n_i \le t \le n_{i+1}} |\langle V_t\rangle - \langle V_{n_i}\rangle|}{\sqrt{n_i/\log\log n_i}} = 0 \quad \mathbb{P}\text{-a.s.}$$
Thus, it suffices to prove that $\mathbb{P}$-a.s.
$$\liminf_{i\to\infty} \frac{\langle V_{n_i}\rangle}{\sqrt{2\sigma^2 n_i \log\log n_i}} = -1 \quad\text{and}\quad \limsup_{i\to\infty} \frac{\langle V_{n_i}\rangle}{\sqrt{2\sigma^2 n_i \log\log n_i}} = 1.$$
We only discuss the second relation as the first one can be handled in an analogous way. For $i \ge 0$ we set
(4.5)
$$V(n_i,n_{i+1}] = \lambda\big(S(n_i,n_{i+1}]\big) \quad\text{and}\quad J_{n_i} = \lambda\big(S(n_i,n_{i+1}] \cap S_{n_i}\big),$$
where $S(s,t] = \bigcup_{s < u \le t} (X_u + B)$, so that
(4.6)
$$\langle V_{n_i}\rangle = \sum_{j=0}^{i-1} \langle V(n_j,n_{j+1}]\rangle - \sum_{j=0}^{i-1} \langle J_{n_j}\rangle.$$
By Lemma 5.8, if $d/\alpha > 9/5$, then the last term on the right-hand side of the above identity, normalized by $\sqrt{n_i/\log\log n_i}$, converges to zero $\mathbb{P}$-a.s. Thus we are left to prove that
$$\limsup_{i\to\infty} \frac{\sum_{j=0}^{i-1} \langle V(n_j,n_{j+1}]\rangle}{\sqrt{2\sigma^2 n_i \log\log n_i}} = 1 \quad \mathbb{P}\text{-a.s.}$$
When $d/\alpha \in (1,2)$ we set $\Lambda = 2 - d/\alpha$, which satisfies $\Lambda \in (0,1)$. We apply Lemma 5.7 and obtain that for $n_i \in [2^k, 2^{k+1}]$ there are constants $c_1,c_2,c_3,c_4 > 0$ such that
$$\mathrm{Var}\Big(\sum_{j=0}^{i-1} V(n_j,n_{j+1}]\Big) \le \sigma^2 n_i + c_1 \sum_{j=0}^{i-1} (n_{j+1}-n_j)^{1/2}\, h(n_{j+1}-n_j)$$

$$\begin{aligned}
&\le \sigma^2 n_i + c_2 \sum_{l=1}^{k} 2^{3l/4}\, h(2^{l/2}/l)\, l^{1/2} \\
&\le \begin{cases} \sigma^2 n_i + c_3 \sum_{l=1}^{k} 2^{3l/4}\, l^{1/2}, & d/\alpha > 2,\\[2pt] \sigma^2 n_i + c_3 \sum_{l=1}^{k} 2^{3l/4}\, l^{3/2}, & d/\alpha = 2,\\[2pt] \sigma^2 n_i + c_3 \sum_{l=1}^{k} 2^{(2\Lambda+3)l/4}\, l^{1/2-\Lambda}, & d/\alpha \in (1,2), \end{cases} \\
&\le \begin{cases} \sigma^2 n_i + c_4\, 2^{3k/4} k^{3/2}, & d/\alpha \ge 2,\\[2pt] \sigma^2 n_i + c_4\, 2^{(2\Lambda+3)k/4} k^{1/2}, & d/\alpha \in (1,2), \end{cases} \\
&= \begin{cases} \sigma^2 n_i + O\big(n_i^{3/4}(\log n_i)^{3/2}\big), & d/\alpha \ge 2,\\[2pt] \sigma^2 n_i + O\big(n_i^{(2\Lambda+3)/4}(\log n_i)^{1/2}\big), & d/\alpha \in (1,2). \end{cases}
\end{aligned}$$

Hence, for $d/\alpha > 3/2$,
(4.7)
$$\mathrm{Var}\Big(\sum_{j=0}^{i-1} V(n_j,n_{j+1}]\Big) = \sigma^2 n_i + o(n_i).$$
This enables us to apply Lemma 4.2. We only need to show that
$$\sum_{i=2}^{\infty} \frac{1}{n_i}\, \mathbb{E}\Big[\langle V(n_i,n_{i+1}]\rangle^2\, \mathbf{1}_{\{|\langle V(n_i,n_{i+1}]\rangle| \ge \varepsilon\sqrt{n_i/\log\log n_i}\}}\Big] < \infty,$$
as then Kronecker's lemma implies both eqs. (4.1) and (4.2). By Lemma 5.6 we obtain
$$\mathbb{E}\big[\langle V(n_i,n_{i+1}]\rangle^4\big] \le c_5 (n_{i+1}-n_i)^2 \le c_5 n_i$$
for some $c_5 > 0$. Finally, we have
$$\begin{aligned}
\sum_{i=2}^{\infty} \frac{1}{n_i}\, \mathbb{E}\Big[\langle V(n_i,n_{i+1}]\rangle^2\, \mathbf{1}_{\{|\langle V(n_i,n_{i+1}]\rangle| \ge \varepsilon\sqrt{n_i/\log\log n_i}\}}\Big] &\le \sum_{i=2}^{\infty} \frac{\log\log n_i}{\varepsilon^2 n_i^2}\, \mathbb{E}\big[\langle V(n_i,n_{i+1}]\rangle^4\big] \\
&\le \frac{c_5}{\varepsilon^2} \sum_{i=2}^{\infty} \frac{\log\log n_i}{n_i} \\
&\le \frac{c_5}{\varepsilon^2} \sum_{k=1}^{\infty} \sum_{l=0}^{\lfloor k2^{k/2}\rfloor} \frac{\log\log\big(2^k + l2^{k/2}/k\big)}{2^k + l2^{k/2}/k - 1} \\
&\le \frac{c_5}{\varepsilon^2} \sum_{k=1}^{\infty} \frac{k \log\log 2^{k+1}}{2^{k/2} - 2^{-k/2}} < \infty,
\end{aligned}$$
which completes the proof. $\square$

The proof of Chung's law of the iterated logarithm is based on the following result.

Lemma 4.3 ([26, Theorem A]). Let $\{Y_n\}_{n\ge1}$ be a sequence of independent random variables with mean 0 and finite variance. Set
$$S_n = \sum_{i=1}^{n} Y_i, \qquad s_n^2 = \sum_{i=1}^{n} \mathbb{E}[Y_i^2].$$
Suppose that $\lim_{n\to\infty} s_n = \infty$, $\mathbb{E}[Y_n^2] = o\big(s_n^2/\log\log s_n^2\big)$, and that $\{Y_n^2/\mathbb{E}[Y_n^2]\}_{n\ge1}$ is uniformly integrable. Then
$$\liminf_{n\to\infty} \frac{\max_{1\le i\le n} |S_i|}{\sqrt{s_n^2/\log\log s_n^2}} = \frac{\pi}{\sqrt{8}} \quad \mathbb{P}\text{-a.s.}$$
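Before turning to the proof, it may help to see the Chung-type functional from Lemma 4.3 at finite $n$. The Python sketch below (heuristic, not a verification of the lemma) computes $\max_{i\le n}|S_i|/\sqrt{n/\log\log n}$ for $\pm1$ steps; the lemma predicts a pathwise liminf of $\pi/\sqrt 8 \approx 1.11$ along $n$, so typical values sit above that level while rare paths dip toward it. Sample sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)

n, n_paths = 20_000, 500
steps = rng.choice([-1.0, 1.0], size=(n_paths, n))
# Running maximum of |S_i| along each path, then the Chung normalization.
running_max = np.maximum.accumulate(np.abs(np.cumsum(steps, axis=1)), axis=1)
stat = running_max[:, -1] / np.sqrt(n / np.log(np.log(n)))
print(stat.min(), np.median(stat))
```

The median here reflects the typical size of $\max_i |S_i|$, while the liminf in Lemma 4.3 concerns the exceptional times at which the running maximum stays unusually small.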

Proof of Theorem 4.1 - Chung's law of the iterated logarithm. We observe that for $n_i \le t < n_{i+1}$ it holds
$$\sup_{0\le s\le t} |\langle V_s\rangle| \le \max_{0\le j\le i} |\langle V_{n_j}\rangle| + \max_{0\le j\le i}\, \sup_{n_j \le s \le n_{j+1}} |\langle V_s\rangle - \langle V_{n_j}\rangle|.$$
We first claim that

(4.8)
$$\lim_{i\to\infty} \frac{\max_{0\le j\le i} \sup_{n_j\le s\le n_{j+1}} |\langle V_s\rangle - \langle V_{n_j}\rangle|}{\sqrt{n_i/\log\log n_i}} = 0 \quad \mathbb{P}\text{-a.s.}$$
Indeed, according to eq. (4.4) (if $d/\alpha > 1$), there is $\bar\Omega \subseteq \Omega$ such that $\mathbb{P}(\bar\Omega) = 1$, and for any $\omega \in \bar\Omega$ and any $\varepsilon > 0$ there exists $j_0 = j_0(\omega,\varepsilon) \in \mathbb{N}$ for which
$$\max_{j\ge j_0} \frac{\sup_{n_j\le s\le n_{j+1}} |\langle V_s\rangle(\omega) - \langle V_{n_j}\rangle(\omega)|}{\sqrt{n_j/\log\log n_j}} \le \frac{\varepsilon}{2}.$$
We then write
$$\frac{\max_{0\le j\le i} \sup_{n_j\le s\le n_{j+1}} |\langle V_s\rangle(\omega) - \langle V_{n_j}\rangle(\omega)|}{\sqrt{n_i/\log\log n_i}} \le \frac{\max_{0\le j\le j_0} \sup_{n_j\le s\le n_{j+1}} |\langle V_s\rangle(\omega) - \langle V_{n_j}\rangle(\omega)|}{\sqrt{n_i/\log\log n_i}} + \frac{\max_{j_0\le j\le i} \sup_{n_j\le s\le n_{j+1}} |\langle V_s\rangle(\omega) - \langle V_{n_j}\rangle(\omega)|}{\sqrt{n_i/\log\log n_i}}.$$
We next choose $i_0 = i_0(\omega,\varepsilon) \in \mathbb{N}$ such that
$$\frac{\max_{0\le j\le j_0} \sup_{n_j\le s\le n_{j+1}} |\langle V_s\rangle(\omega) - \langle V_{n_j}\rangle(\omega)|}{\sqrt{n_i/\log\log n_i}} \le \frac{\varepsilon}{2}, \qquad i \ge i_0,$$
and we infer eq. (4.8). We are thus left to show that
$$\liminf_{i\to\infty} \frac{\max_{0\le j\le i} |\langle V_{n_j}\rangle|}{\sigma\sqrt{n_i/\log\log n_i}} = \frac{\pi}{\sqrt{8}} \quad \mathbb{P}\text{-a.s.}$$
From eq. (4.6) we have

$$\max_{0\le j\le i} \Big|\sum_{k=0}^{j-1} \langle V(n_k,n_{k+1}]\rangle\Big| - \max_{0\le j\le i} \Big|\sum_{k=0}^{j-1} \langle J_{n_k}\rangle\Big| \le \max_{0\le j\le i} |\langle V_{n_j}\rangle| \le \max_{0\le j\le i} \Big|\sum_{k=0}^{j-1} \langle V(n_k,n_{k+1}]\rangle\Big| + \max_{0\le j\le i} \Big|\sum_{k=0}^{j-1} \langle J_{n_k}\rangle\Big|.$$
If we proceed similarly as in the proof of eq. (4.8) and apply Lemma 5.8 instead of eq. (4.4), we arrive at
$$\lim_{i\to\infty} \frac{\max_{0\le j\le i} \big|\sum_{k=0}^{j-1} \langle J_{n_k}\rangle\big|}{\sqrt{n_i/\log\log n_i}} = 0 \quad \mathbb{P}\text{-a.s.}$$
Thus it suffices to show that
$$\liminf_{i\to\infty} \frac{\max_{0\le j\le i} \big|\sum_{k=0}^{j-1} \langle V(n_k,n_{k+1}]\rangle\big|}{\sigma\sqrt{n_i/\log\log n_i}} = \frac{\pi}{\sqrt{8}} \quad \mathbb{P}\text{-a.s.}$$

To this end, we apply Lemma 4.3. Recall that according to Lemma 5.6, there is $c > 0$ such that
$$\mathbb{E}\big[\langle V(n_i,n_{i+1}]\rangle^4\big] \le c\,(n_{i+1}-n_i)^2.$$
This combined with Lemma 5.7 gives that
$$\sup_{i\ge1} \frac{\mathbb{E}\big[\langle V(n_i,n_{i+1}]\rangle^4\big]}{\mathbb{E}\big[\langle V(n_i,n_{i+1}]\rangle^2\big]^2} < \infty,$$
which implies that the sequence $\big\{\langle V(n_i,n_{i+1}]\rangle^2/\mathbb{E}[\langle V(n_i,n_{i+1}]\rangle^2]\big\}_{i\ge1}$ is uniformly integrable. In view of Lemma 5.7 and eq. (4.3),
$$\mathbb{E}\big[\langle V(n_i,n_{i+1}]\rangle^2\big] = o\big(\sqrt{n_i/\log\log n_i}\big),$$
and the proof is finished. $\square$

5. Technical results

Lemma 5.1. In the notation of the proof of Lemma 2.5, for any $k \in \mathbb{N}$ it holds that
$$\mathbb{E}\big[\lambda\big(S_1 \setminus S[1,N]\big) \mid \mathcal{F}_{N-k+1,N}\big] \xrightarrow[N\to\infty]{L^1} \mathbb{E}\big[\lambda\big(S_1 \setminus S[1,\infty)\big)\big].$$

Proof. Set $m_N = \lambda\big(S_1 \setminus S[1,N]\big)$ and $m_\infty = \lambda\big(S_1 \setminus S[1,\infty)\big)$. We have
$$\lim_{N\to\infty} \mathbb{E}\big[\big|\mathbb{E}[m_N \mid \mathcal{F}_{N-k+1,N}] - \mathbb{E}[m_\infty]\big|\big]$$

$$\begin{aligned}
&\le \lim_{N\to\infty} \mathbb{E}\big[\big|\mathbb{E}[m_N - m_\infty \mid \mathcal{F}_{N-k+1,N}]\big| + \big|\mathbb{E}[m_\infty \mid \mathcal{F}_{N-k+1,N}] - \mathbb{E}[m_\infty]\big|\big] \\
&\le \lim_{N\to\infty} \mathbb{E}\big[\mathbb{E}[|m_N - m_\infty| \mid \mathcal{F}_{N-k+1,N}]\big] + \mathbb{E}\big[\big|\mathbb{E}[m_\infty \mid \mathcal{F}_{N-k+1,N}] - \mathbb{E}[m_\infty]\big|\big] \\
&= \lim_{N\to\infty} \mathbb{E}[|m_N - m_\infty|] + \lim_{N\to\infty} \mathbb{E}\big[\big|\mathbb{E}[m_\infty \mid \mathcal{F}_{N-k+1,N}] - \mathbb{E}[m_\infty]\big|\big].
\end{aligned}$$
Since $m_N$ clearly converges to $m_\infty$ in $L^1$, we are left to prove that the second limit in the expression above is zero. For a fixed $k \in \mathbb{N}$ we define
$$\mathcal{H}_N = \sigma(X_t : t \ge N-k+1), \qquad N \ge k.$$
We observe that $\mathcal{F}_{N-k+1,N} \subseteq \mathcal{H}_N$ and $\{\mathcal{H}_N\}$ is a decreasing family of $\sigma$-algebras. Moreover, according to Kolmogorov's 0-1 law, for every $H \in \mathcal{H}_\infty = \bigcap_{N\ge1} \mathcal{H}_N$, we have $\mathbb{P}(H) \in \{0,1\}$. From Levi's theorem (see [23, Ch. II, Corollary 2.4]) we infer that $\mathbb{P}$-a.s.
(5.1)
$$\lim_{N\to\infty} \mathbb{E}[m_\infty \mid \mathcal{H}_N] = \mathbb{E}[m_\infty \mid \mathcal{H}_\infty] = \mathbb{E}[m_\infty].$$

Notice that by eq. (2.1), $\mathbb{E}[m_\infty] = \mathrm{Cap}(B)$. Since the family $\{\mathbb{E}[m_\infty \mid \mathcal{H}_N]\}_{N\ge1}$ is uniformly integrable, we infer that the convergence in eq. (5.1) holds also in $L^1$, see [7, Theorem 5.5.1]. We finally obtain

$$\begin{aligned}
\lim_{N\to\infty} \mathbb{E}\big[\big|\mathbb{E}[m_\infty \mid \mathcal{F}_{N-k+1,N}] - \mathbb{E}[m_\infty]\big|\big] &= \lim_{N\to\infty} \mathbb{E}\big[\big|\mathbb{E}\big[\mathbb{E}[m_\infty \mid \mathcal{H}_N] \mid \mathcal{F}_{N-k+1,N}\big] - \mathbb{E}[m_\infty]\big|\big] \\
&\le \lim_{N\to\infty} \mathbb{E}\big[\mathbb{E}\big[\big|\mathbb{E}[m_\infty \mid \mathcal{H}_N] - \mathbb{E}[m_\infty]\big| \mid \mathcal{F}_{N-k+1,N}\big]\big] \\
&= \lim_{N\to\infty} \mathbb{E}\big[\big|\mathbb{E}[m_\infty \mid \mathcal{H}_N] - \mathbb{E}[m_\infty]\big|\big] = 0,
\end{aligned}$$
and the proof is finished. $\square$

Lemma 5.2. In the notation of the proof of Lemma 2.5, the sequence $\big\{\frac{1}{n}\sum_{k=1}^{n} \mathbb{E}[Y_k^2]\big\}_{n\ge1}$ is bounded.

Proof. We set $\Delta = d/\alpha - 3/2$ and recall that it is a positive number. We present the proof in the case $\Delta \in (0,1/2)$ as the proof for $\Delta \ge 1/2$ is similar. For $\Delta \in (0,1/2)$, the function $h(t)$ defined in eq. (2.7) is given by $h(t) = t^{1/2-\Delta}$. By the Cauchy-Schwarz inequality,
$$\mathbb{E}\Big[\Big(\sum_{k=1}^{n} Y_k\Big)\, \mathbb{E}\big[\lambda\big(S_n \cap S[n,\infty)\big) \mid \mathcal{F}_n\big]\Big] \le \mathrm{Var}\Big(\mathbb{E}\big[\lambda\big(S_n \cap S[n,\infty)\big) \mid \mathcal{F}_n\big]\Big)^{1/2} \Big(\sum_{k=1}^{n} \mathbb{E}[Y_k^2]\Big)^{1/2} \le 2\sqrt 2\, c\, n^{1/2-\Delta} \Big(\sum_{k=1}^{n} \mathbb{E}[Y_k^2]\Big)^{1/2}.$$
This combined with eq. (2.17) yields
$$\frac{\mathrm{Var}(V_n)}{n} \ge \frac{1}{n}\,\mathrm{Var}\Big(\mathbb{E}\big[\lambda\big(S_n \cap S[n,\infty)\big) \mid \mathcal{F}_n\big]\Big) + \frac{1}{n}\sum_{k=1}^{n} \mathbb{E}[Y_k^2] - \frac{4\sqrt 2\, c}{n^{\Delta}}\Big(\frac{1}{n}\sum_{k=1}^{n} \mathbb{E}[Y_k^2]\Big)^{1/2}.$$
We suppose, for the sake of contradiction, that there exists a subsequence $\{n_m\}_{m\ge1} \subseteq \mathbb{N}$ such that
$$\lim_{m\to\infty} \frac{1}{n_m}\sum_{k=1}^{n_m} \mathbb{E}[Y_k^2] = \infty.$$
Since
$$\lim_{n\to\infty} \frac{\mathrm{Var}(V_n)}{n} = \sigma^2 \quad\text{and}\quad \lim_{n\to\infty} \frac{1}{n}\,\mathrm{Var}\Big(\mathbb{E}\big[\lambda\big(S_n \cap S[n,\infty)\big) \mid \mathcal{F}_n\big]\Big) = 0,$$
it follows that
$$\lim_{m\to\infty} \frac{1}{n_m^{\Delta}}\Big(\frac{1}{n_m}\sum_{k=1}^{n_m} \mathbb{E}[Y_k^2]\Big)^{1/2} = \infty.$$
We deduce that $\big\{\frac{1}{n_m}\sum_{k=1}^{n_m} \mathbb{E}[Y_k^2]\big\}_{m\ge1}$ diverges to infinity faster than $\{n_m^{2\Delta}\}_{m\ge1}$. Since
$$\lim_{n\to\infty} \frac{\mathrm{Var}(V_n)}{n^{1+2\Delta}} = 0,$$
we can again use eq. (2.17) to obtain
$$\frac{\mathrm{Var}(V_{n_m})}{n_m^{1+2\Delta}} \ge \frac{1}{n_m^{1+2\Delta}}\,\mathrm{Var}\Big(\mathbb{E}\big[\lambda\big(S_{n_m} \cap S[n_m,\infty)\big) \mid \mathcal{F}_{n_m}\big]\Big) + \frac{1}{n_m^{1+2\Delta}}\sum_{k=1}^{n_m} \mathbb{E}[Y_k^2] - \frac{4\sqrt 2\, c}{n_m^{3\Delta}}\Big(\frac{1}{n_m}\sum_{k=1}^{n_m} \mathbb{E}[Y_k^2]\Big)^{1/2}.$$
We infer that $\big\{\frac{1}{n_m}\sum_{k=1}^{n_m} \mathbb{E}[Y_k^2]\big\}_{m\ge1}$ grows to infinity faster than $\{n_m^{6\Delta}\}_{m\ge1}$. By iterating this procedure, we conclude that
(5.2)
$$\lim_{m\to\infty} \frac{1}{n_m^2}\sum_{k=1}^{n_m} \mathbb{E}[Y_k^2] = \infty.$$
On the other hand, from eq. (2.16) we have
$$\begin{aligned}
|Y_k| &\le \mathbb{E}\big[\lambda\big(S_{k-1} \cap S[k-1,\infty)\big) \mid \mathcal{F}_{k-1}\big] + \mathbb{E}\big[\lambda\big(S_{k-1} \cap S[k-1,\infty)\big) \mid \mathcal{F}_k\big] \\
&\quad+ \mathbb{E}\big[\lambda\big(S[k-1,k] \setminus S[k,\infty)\big) \mid \mathcal{F}_k\big] + \mathbb{E}\big[\lambda\big(S[k-1,k] \setminus S[k,\infty)\big)\big] \\
&\le \mathbb{E}\big[\lambda\big(S_{k-1} \cap S[k-1,\infty)\big) \mid \mathcal{F}_{k-1}\big] + \mathbb{E}\big[\lambda\big(S_{k-1} \cap S[k-1,\infty)\big) \mid \mathcal{F}_k\big] + \lambda\big(S[k-1,k]\big) + \mathrm{Cap}(B),
\end{aligned}$$
where in the last line we used monotonicity and eq. (2.1). By Jensen's inequality,
$$\mathbb{E}[|Y_k|^2] \le 4\Big(2\,\mathbb{E}\big[\lambda\big(S_{k-1} \cap S[k-1,\infty)\big)^2\big] + \mathbb{E}\big[\lambda\big(S[k-1,k]\big)^2\big] + \mathrm{Cap}^2(B)\Big) \le 64\,c^2 h(k-1)^2 + 4\,\mathbb{E}[V_1^2] + 4(\mathrm{Cap}(B))^2 \le c_1 k,$$
for a constant $c_1 > 0$. This yields $\sum_{k=1}^{n_m} \mathbb{E}[Y_k^2] \le c_1 n_m^2$, which gives a contradiction. $\square$

Lemma 5.3. In the notation of the proof of Lemma 2.5, for any $\beta \in (0,1]$ there exists a constant $c(d,\alpha,\beta) > 0$ such that
$$|\varphi(y) - \varphi(y-x)| \le c(d,\alpha,\beta)\,\big(\varphi(y) + \varphi(y-x)\big)\Big(\frac{1+|x|^\beta}{|y|^\beta} \wedge 1\Big), \qquad x,y \in \mathbb{R}^d.$$

Proof. Recall that $\varphi(y) = \mathbb{P}(y \in S_\infty)$. This yields
(5.3)
$$|\varphi(y) - \varphi(y-x)| \le \varphi(y) + \varphi(y-x), \qquad x,y \in \mathbb{R}^d.$$
To establish the second non-trivial part of the claimed inequality, that is, for $|y|^\beta > 1 + |x|^\beta$, we first observe that by rotational invariance of $X$ it holds $\varphi(y) = \mathbb{P}_y(\tau_B < \infty)$. Moreover, by eq. (1.4),

$$\varphi(y) = a_{d,\alpha} \int_B |y-w|^{\alpha-d}\,\big(1-|w|^2\big)^{-\alpha/2}\, dw, \qquad y \in \mathbb{R}^d,$$
where
$$a_{d,\alpha} = \frac{\sin(\pi\alpha/2)\,\Gamma((d-\alpha)/2)\,\Gamma(d/2)}{2^{\alpha}\,\pi^{d+1}\,\Gamma(\alpha/2)},$$
see e.g. [30]. We fix $\beta \in (0,1]$ and $x \in \mathbb{R}^d$. For any $y \in B^c(0,1+|x|)$ we have
$$|y-w| \ge |y| - |w| \ge |y| - 1 > |x|, \qquad w \in B.$$
There exists $x_0 \in B(0,|y-w|)$ lying on the line going through the origin, determined by the vector $y-w$, and such that
$$\frac{\big||y-w|^{\alpha-d} - |y-w-x|^{\alpha-d}\big|}{|y-w|^{\alpha-d} + |y-w-x|^{\alpha-d}}\,\frac{|y-w|^\beta}{1+|x|^\beta} = \frac{\big||y-w|^{d-\alpha} - |y-w-x|^{d-\alpha}\big|}{|y-w|^{d-\alpha} + |y-w-x|^{d-\alpha}}\,\frac{|y-w|^\beta}{1+|x|^\beta} \le \frac{\big||y-w|^{d-\alpha} - |y-w-x_0|^{d-\alpha}\big|}{|y-w|^{d-\alpha} + |y-w-x_0|^{d-\alpha}}\,\frac{|y-w|^\beta}{1+|x_0|^\beta}.$$
Since $x_0$ is necessarily of the form $x_0 = \frac{y-w}{|y-w|}\,\varrho$, for some $\varrho \in [-|y-w|, |y-w|]$, we have
$$\frac{\big||y-w|^{d-\alpha} - |y-w-x_0|^{d-\alpha}\big|}{|y-w|^{d-\alpha} + |y-w-x_0|^{d-\alpha}}\,\frac{|y-w|^\beta}{1+|x_0|^\beta} \le \frac{\big||y-w|^{d-\alpha} - (|y-w|-\varrho)^{d-\alpha}\big|}{|y-w|^{d-\alpha} + (|y-w|-\varrho)^{d-\alpha}}\,\frac{|y-w|^\beta}{1+|\varrho|^\beta}.$$
We investigate the two following cases.

Case 1. We first assume that $d-\alpha \le 1$. If $\varrho \in [0, |y-w|/2]$ then, by the concavity of the function $r \mapsto r^{d-\alpha}$, we obtain
$$\frac{|y-w|^{d-\alpha} - (|y-w|-\varrho)^{d-\alpha}}{|y-w|^{d-\alpha} + (|y-w|-\varrho)^{d-\alpha}}\,\frac{|y-w|^\beta}{1+\varrho^\beta} \le \frac{(d-\alpha)\,\varrho\,(|y-w|-\varrho)^{d-\alpha-1}}{|y-w|^{d-\alpha} + (|y-w|-\varrho)^{d-\alpha}}\,\frac{|y-w|^\beta}{1+\varrho^\beta} \le (d-\alpha)\,\frac{\varrho}{|y-w|-\varrho}\,\frac{|y-w|^\beta}{1+\varrho^\beta} \le 2(d-\alpha)\,\frac{\varrho}{|y-w|}\,\frac{|y-w|^\beta}{1+\varrho^\beta} \le 2(d-\alpha)\,\frac{\varrho^\beta}{|y-w|^\beta}\,\frac{|y-w|^\beta}{1+\varrho^\beta} \le 2(d-\alpha).$$
If $\varrho \in [|y-w|/2, |y-w|]$, then
$$\frac{|y-w|^{d-\alpha} - (|y-w|-\varrho)^{d-\alpha}}{|y-w|^{d-\alpha} + (|y-w|-\varrho)^{d-\alpha}}\,\frac{|y-w|^\beta}{1+\varrho^\beta} \le \frac{|y-w|^\beta}{1+2^{-\beta}|y-w|^\beta} \le 2^\beta.$$
If $\varrho \in [-|y-w|, 0]$ then we again use the concavity argument, which yields
$$\frac{(|y-w|+|\varrho|)^{d-\alpha} - |y-w|^{d-\alpha}}{|y-w|^{d-\alpha} + (|y-w|+|\varrho|)^{d-\alpha}}\,\frac{|y-w|^\beta}{1+|\varrho|^\beta} \le \frac{(d-\alpha)\,|\varrho|\,|y-w|^{d-\alpha-1}}{|y-w|^{d-\alpha} + (|y-w|+|\varrho|)^{d-\alpha}}\,\frac{|y-w|^\beta}{1+|\varrho|^\beta} \le (d-\alpha)\,\frac{|\varrho|}{|y-w|}\,\frac{|y-w|^\beta}{1+|\varrho|^\beta} \le (d-\alpha)\,\frac{|\varrho|^\beta}{|y-w|^\beta}\,\frac{|y-w|^\beta}{1+|\varrho|^\beta} \le d-\alpha.$$

Case 2. Assume that $d-\alpha > 1$. If $\varrho \in [0, |y-w|]$ then the function $r \mapsto r^{d-\alpha}$ is convex and we obtain
$$\frac{|y-w|^{d-\alpha} - (|y-w|-\varrho)^{d-\alpha}}{|y-w|^{d-\alpha} + (|y-w|-\varrho)^{d-\alpha}}\,\frac{|y-w|^\beta}{1+\varrho^\beta} \le \frac{(d-\alpha)\,\varrho\,|y-w|^{d-\alpha-1}}{|y-w|^{d-\alpha} + (|y-w|-\varrho)^{d-\alpha}}\,\frac{|y-w|^\beta}{1+\varrho^\beta} \le (d-\alpha)\,\frac{\varrho}{|y-w|}\,\frac{|y-w|^\beta}{1+\varrho^\beta} \le (d-\alpha)\,\frac{\varrho^\beta}{|y-w|^\beta}\,\frac{|y-w|^\beta}{1+\varrho^\beta} \le d-\alpha.$$
If $\varrho \in [-|y-w|, 0]$ then again in view of the convexity we have
$$\frac{(|y-w|+|\varrho|)^{d-\alpha} - |y-w|^{d-\alpha}}{|y-w|^{d-\alpha} + (|y-w|+|\varrho|)^{d-\alpha}}\,\frac{|y-w|^\beta}{1+|\varrho|^\beta} \le \frac{(d-\alpha)\,|\varrho|\,(|y-w|+|\varrho|)^{d-\alpha-1}}{|y-w|^{d-\alpha} + (|y-w|+|\varrho|)^{d-\alpha}}\,\frac{|y-w|^\beta}{1+|\varrho|^\beta} \le (d-\alpha)\,\frac{|\varrho|}{|y-w|+|\varrho|}\,\frac{|y-w|^\beta}{1+|\varrho|^\beta} \le (d-\alpha)\,\frac{|\varrho|}{|y-w|}\,\frac{|y-w|^\beta}{1+|\varrho|^\beta} \le (d-\alpha)\,\frac{|\varrho|^\beta}{|y-w|^\beta}\,\frac{|y-w|^\beta}{1+|\varrho|^\beta} \le d-\alpha.$$
Finally, for $y \in B^c(0,1+|x|) \cap B^c(0,2)$ we obtain
(5.4)
$$\frac{\big||y-w|^{\alpha-d} - |y-w-x|^{\alpha-d}\big|}{|y-w|^{\alpha-d} + |y-w-x|^{\alpha-d}}\,\frac{|y|^\beta}{1+|x|^\beta} = \frac{\big||y-w|^{\alpha-d} - |y-w-x|^{\alpha-d}\big|}{|y-w|^{\alpha-d} + |y-w-x|^{\alpha-d}}\,\frac{|y-w|^\beta}{1+|x|^\beta}\,\frac{|y|^\beta}{|y-w|^\beta} \le 2^{1+\beta}(d-\alpha) \vee 2^{2\beta}.$$
On the other hand, if $y \in B^c(0,1+|x|) \cap B(0,2)$ then $x \in B$ and
(5.5)
$$\frac{1+|x|^\beta}{|y|^\beta} \ge 2^{-\beta}.$$
Equations (5.3) to (5.5) imply the result. $\square$
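As a sanity check on the shape of this bound, the Python snippet below tests the claimed inequality numerically in the Brownian case $\alpha = 2$, $d = 3$, where the hitting probability is explicit, $\varphi(y) = 1 \wedge |y|^{-1}$. This case lies outside the pure-jump setting above, but it exhibits the same $|y|^{\alpha-d}$ decay; the constant $c = 4$ and the choice $\beta = 1$ are ours, made for the test only.

```python
import numpy as np

rng = np.random.default_rng(11)

def phi(y):
    """Probability that 3d Brownian motion started at y ever hits the
    unit ball: min(1, 1/|y|) (the Newtonian case, used here as a stand-in)."""
    r = np.linalg.norm(y)
    return min(1.0, 1.0 / r) if r > 0 else 1.0

beta, c = 1.0, 4.0
for _ in range(2000):
    x = rng.normal(size=3) * rng.uniform(0.1, 3.0)
    y = rng.normal(size=3) * rng.uniform(0.1, 30.0)
    lhs = abs(phi(y) - phi(y - x))
    rhs = c * (phi(y) + phi(y - x)) * min(
        (1 + np.linalg.norm(x) ** beta) / np.linalg.norm(y) ** beta, 1.0)
    assert lhs <= rhs + 1e-12
print("inequality held on all samples")
```

For $|y| \le 1 + |x|$ the minimum equals 1 and the bound reduces to (5.3); the interesting regime is $|y| > 1 + |x|$, where both sides decay in $|y|$.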

Lemma 5.4. In the notation of the proof of Lemma 2.5, it holds that
$$\int_{\mathbb{R}^d} \int_{\mathbb{R}^d} p(1,x)\,\varphi(y)\,\big|\varphi(y) - \varphi(y-x)\big|\, dy\, dx < \infty.$$

Proof. We split the integral into three parts
$$\begin{aligned}
\int_{\mathbb{R}^d} \int_{\mathbb{R}^d} p(1,x)\,\varphi(y)\,|\varphi(y) - \varphi(y-x)|\, dy\, dx &= \int_{\mathbb{R}^d} \int_{B^c(0,1+|x|)} p(1,x)\,\varphi(y)\,|\varphi(y) - \varphi(y-x)|\, dy\, dx \\
&\quad+ \int_{\mathbb{R}^d} \int_{B(0,1+|x|)\cap B} p(1,x)\,\varphi(y)\,|\varphi(y) - \varphi(y-x)|\, dy\, dx \qquad (5.6)\\
&\quad+ \int_{\mathbb{R}^d} \int_{B(0,1+|x|)\cap B^c} p(1,x)\,\varphi(y)\,|\varphi(y) - \varphi(y-x)|\, dy\, dx.
\end{aligned}$$

According to Lemma 5.3, by setting $\beta = \alpha/2$ we obtain
$$|\varphi(y) - \varphi(y-x)| \le c_1\,\big(\varphi(y) + \varphi(y-x)\big)\,\frac{1+|x|^{\alpha/2}}{|y|^{\alpha/2}}, \qquad y \in B^c(0,1+|x|),$$
where $c_1 = c(d,\alpha,\beta)$. By [19, Lemma 2.5], there exists a constant $c_2 = c_2(d,\alpha) > 0$ such that $\varphi(w) \le c_2 |w|^{\alpha-d}$, for any $w \in B^c$. Thus,
$$\begin{aligned}
\int_{\mathbb{R}^d} \int_{B^c(0,1+|x|)} p(1,x)\,\varphi(y)\,|\varphi(y) - \varphi(y-x)|\, dy\, dx &\le c_1 c_2^2 \int_{\mathbb{R}^d} \int_{B^c(0,1+|x|)} p(1,x)\,\frac{1}{|y|^{d-\alpha}}\Big(\frac{1}{|y|^{d-\alpha}} + \frac{1}{|y-x|^{d-\alpha}}\Big)\frac{1+|x|^{\alpha/2}}{|y|^{\alpha/2}}\, dy\, dx \\
&\le 2^{d-\alpha/2}\big(1+2^{d-\alpha}\big)\, c_1 c_2^2 \int_{\mathbb{R}^d} \int_{B^c(0,1+|x|)} p(1,x)\,\big(1+|x|^{\alpha/2}\big)\,\frac{1}{|y-x|^{2d-3\alpha/2}}\, dy\, dx \\
&\le 2^{d-\alpha/2}\big(1+2^{d-\alpha}\big)\, c_1 c_2^2 \int_{\mathbb{R}^d} p(1,x)\,\big(1+|x|^{\alpha/2}\big)\, dx \int_{B^c} \frac{1}{|z|^{2d-3\alpha/2}}\, dz \\
&= 2^{d-\alpha/2}\big(1+2^{d-\alpha}\big)\, c_1 c_2^2\, d\lambda(B) \int_{\mathbb{R}^d} p(1,x)\,\big(1+|x|^{\alpha/2}\big)\, dx \int_1^\infty \frac{1}{r^{d-3\alpha/2+1}}\, dr,
\end{aligned}$$
where in the second step we used the fact that $|y-x| \le |y| + |x| \le |y| + |y| - 1 \le 2|y|$. The last integral is finite as $d/\alpha > 3/2$ and $X$ has finite $\beta$-moment for any $\beta < \alpha$ (see [25, Example 25.10]). For the second integral on the right-hand side of eq. (5.6) we observe that
$$\int_{\mathbb{R}^d} \int_{B(0,1+|x|)\cap B} p(1,x)\,\varphi(y)\,|\varphi(y) - \varphi(y-x)|\, dy\, dx \le 2\lambda(B).$$


The third integral on the right-hand side of eq. (5.6) is the most demanding. We start by splitting this integral into two parts
$$\begin{aligned}
\int_{\mathbb{R}^d} \int_{B(0,1+|x|)\cap B^c} p(1,x)\,\varphi(y)\,|\varphi(y) - \varphi(y-x)|\, dy\, dx &= \int_{B(0,\Lambda)} \int_{B(0,1+|x|)\cap B^c} p(1,x)\,\varphi(y)\,|\varphi(y) - \varphi(y-x)|\, dy\, dx \\
&\quad+ \int_{B^c(0,\Lambda)} \int_{B(0,1+|x|)\cap B^c} p(1,x)\,\varphi(y)\,|\varphi(y) - \varphi(y-x)|\, dy\, dx. \qquad (5.7)
\end{aligned}$$
For the first integral in this decomposition we have
$$\begin{aligned}
\int_{B(0,\Lambda)} \int_{B(0,1+|x|)\cap B^c} p(1,x)\,\varphi(y)\,|\varphi(y) - \varphi(y-x)|\, dy\, dx &\le 2c_2 \int_{B(0,\Lambda)} \int_{B(0,1+|x|)\cap B^c} p(1,x)\,\frac{1}{|y|^{d-\alpha}}\, dy\, dx \\
&\le 2c_2 \int_{B(0,1+\Lambda)\cap B^c} \frac{1}{|y|^{d-\alpha}}\, dy \\
&\le 2c_2\, \lambda\big(B(0,1+\Lambda)\big).
\end{aligned}$$
The second integral on the right-hand side of eq. (5.7) we decompose further as follows

$$\begin{aligned}
\int_{B^c(0,\Lambda)} \int_{B(0,1+|x|)\cap B^c} p(1,x)\,\varphi(y)\,|\varphi(y) - \varphi(y-x)|\, dy\, dx &= \int_{B^c(0,\Lambda)} \int_{B(0,1+|x|)\cap B^c \cap \{z : |z-x| > |z|\}} p(1,x)\,\varphi(y)\,|\varphi(y) - \varphi(y-x)|\, dy\, dx \\
&\quad+ \int_{B^c(0,\Lambda)} \int_{B(0,1+|x|)\cap B^c \cap \{z : 1 \le |z-x| \le |z|\}} p(1,x)\,\varphi(y)\,|\varphi(y) - \varphi(y-x)|\, dy\, dx \qquad (5.8)\\
&\quad+ \int_{B^c(0,\Lambda)} \int_{B(0,1+|x|)\cap B^c \cap \{z : |z-x| < 1\}} p(1,x)\,\varphi(y)\,|\varphi(y) - \varphi(y-x)|\, dy\, dx.
\end{aligned}$$
It is well known that for any $\Lambda > 1$ large enough there is $c_3 = c_3(d,\alpha,\Lambda) > 0$ such that
$$p(1,x) \le \frac{c_3}{|x|^{d+\alpha}}, \qquad x \in B^c(0,\Lambda).$$
We set $A_1 = B(0,1+|x|) \cap B^c \cap \{z : |z-x| > |z|\}$ and estimate the first integral on the right-hand side of eq. (5.8) as follows
$$\begin{aligned}
\int_{B^c(0,\Lambda)} \int_{A_1} p(1,x)\,\varphi(y)\,|\varphi(y) - \varphi(y-x)|\, dy\, dx &\le c_2^2 c_3 \int_{B^c(0,\Lambda)} \int_{A_1} \frac{1}{|x|^{d+\alpha}}\,\frac{1}{|y|^{d-\alpha}}\Big(\frac{1}{|y|^{d-\alpha}} + \frac{1}{|y-x|^{d-\alpha}}\Big)\, dy\, dx \\
&\le 2c_2^2 c_3 \int_{B^c(0,\Lambda)} \int_{A_1} \frac{1}{|x|^{d+\alpha}}\,\frac{1}{|y|^{2d-2\alpha}}\, dy\, dx \\
&\le 2c_2^2 c_3 \int_{B^c(0,\Lambda)} \int_{B(0,1+|x|)\cap B^c} \frac{1}{|x|^{d+\alpha}}\,\frac{1}{|y|^{2d-2\alpha}}\, dy\, dx \\
&\le 2c_2^2 c_3\, d\lambda(B) \int_{B^c(0,\Lambda)} \frac{1}{|x|^{d+\alpha}} \int_1^{1+|x|} \frac{1}{r^{d-2\alpha+1}}\, dr\, dx.
\end{aligned}$$
The last integral is finite as $d/\alpha > 3/2$.

We set $A_2 = B(0,1+|x|) \cap B^c \cap \{z : 1 \le |z-x| \le |z|\}$ and for the second integral on the right-hand side of eq. (5.8) we have
$$\begin{aligned}
\int_{B^c(0,\Lambda)} \int_{A_2} p(1,x)\,\varphi(y)\,|\varphi(y) - \varphi(y-x)|\, dy\, dx &\le c_2^2 c_3 \int_{B^c(0,\Lambda)} \int_{A_2} \frac{1}{|x|^{d+\alpha}}\,\frac{1}{|y|^{d-\alpha}}\Big(\frac{1}{|y|^{d-\alpha}} + \frac{1}{|y-x|^{d-\alpha}}\Big)\, dy\, dx \\
&\le 2^{d+\alpha+1} c_2^2 c_3 \int_{B^c(0,\Lambda)} \int_{A_2} \frac{1}{|y|^{2d}}\,\frac{1}{|y-x|^{d-\alpha}}\, dy\, dx \\
&\le 2^{d+\alpha+1} c_2^2 c_3 \int_{B^c} \frac{1}{|y|^{2d}} \int_{\{z : 1 \le |z-x| \le |z|\}} \frac{1}{|y-x|^{d-\alpha}}\, dx\, dy \\
&\le 2^{d+\alpha+1} c_2^2 c_3 \int_{B^c} \frac{1}{|y|^{2d}} \int_{B(0,|y|)\cap B^c} \frac{1}{|z|^{d-\alpha}}\, dz\, dy \\
&\le 2^{d+\alpha+1} c_2^2 c_3\, d\alpha^{-1}\lambda(B) \int_{B^c} \frac{1}{|y|^{2d-\alpha}}\, dy \\
&= 2^{d+\alpha+1} c_2^2 c_3\, d^2\alpha^{-1}\lambda(B)^2 \int_1^\infty \frac{1}{r^{d-\alpha+1}}\, dr,
\end{aligned}$$
where in the second step we used the fact that $|y| \le 2|x|$.

Finally, we set $A_3 = B(0,1+|x|) \cap B^c \cap \{z : |z-x| < 1\}$ and for the third integral on the right-hand side of eq. (5.8) we proceed as follows
$$\begin{aligned}
\int_{B^c(0,\Lambda)} \int_{A_3} p(1,x)\,\varphi(y)\,|\varphi(y) - \varphi(y-x)|\, dy\, dx &\le 2c_2 c_3 \int_{B^c(0,\Lambda)} \int_{A_3} \frac{1}{|x|^{d+\alpha}}\,\frac{1}{|y|^{d-\alpha}}\, dy\, dx \\
&\le 2^{1+d-\alpha} c_2 c_3 \int_{B^c(0,\Lambda)} \frac{1}{|x|^{2d}} \int_{\{z : |z-x| < 1\}} dz\, dx \\
&\le 2^{1+d-\alpha} c_2 c_3\, d\lambda(B)^2 \int_\Lambda^\infty \frac{1}{r^{d+1}}\, dr,
\end{aligned}$$
where in the second step we used the fact that $|x| \le |y-x| + |y| \le 1 + |y| \le 2|y|$. $\square$

Lemma 5.5. In the notation of the proof of Lemma 2.5, it holds that

$$\sup_{x\in B} \int_{\mathbb{R}^d} \varphi(y)\,|\varphi(y) - \varphi(y-x)|\, dy < \infty.$$

Proof. We split the integral as follows
$$\int_{\mathbb{R}^d} \varphi(y)\,|\varphi(y) - \varphi(y-x)|\, dy = \int_{B(0,1+|x|)} \varphi(y)\,|\varphi(y) - \varphi(y-x)|\, dy + \int_{B^c(0,1+|x|)} \varphi(y)\,|\varphi(y) - \varphi(y-x)|\, dy.$$
For any $x \in B$ one has $B(0,1+|x|) \subseteq B(0,2)$, and whence
$$\sup_{x\in B} \int_{B(0,1+|x|)} \varphi(y)\,|\varphi(y) - \varphi(y-x)|\, dy \le 2\lambda\big(B(0,2)\big).$$
By Lemma 5.3 with $\beta = \alpha/2$, for a constant $c_1 = c(d,\alpha,\beta)$,
$$|\varphi(y) - \varphi(y-x)| \le c_1\,\big(\varphi(y) + \varphi(y-x)\big)\,\frac{1+|x|^{\alpha/2}}{|y|^{\alpha/2}}, \qquad y \in B^c(0,1+|x|).$$

By [19, Lemma 2.5], there exists a constant $c_2 = c_2(d,\alpha) > 0$ such that $\varphi(w) \le c_2 |w|^{\alpha-d}$, for any $w \in B^c$. Thus, for $x \in B$ we have
$$\begin{aligned}
\int_{B^c(0,1+|x|)} \varphi(y)\,|\varphi(y) - \varphi(y-x)|\, dy &\le 2^{d-\alpha/2+1}\big(1+2^{d-\alpha}\big)\, c_1 c_2^2 \int_{B^c(0,1+|x|)} \frac{1}{|y-x|^{2d-3\alpha/2}}\, dy \\
&\le 2^{d-\alpha/2+1}\big(1+2^{d-\alpha}\big)\, c_1 c_2^2\, d\lambda(B) \int_1^\infty \frac{1}{r^{d-3\alpha/2+1}}\, dr,
\end{aligned}$$
where we used the fact that $|y-x| \le 2|y|$. The assertion follows as $d/\alpha > 3/2$. $\square$

Lemma 5.6. There exists a constant $\tilde c > 0$ such that for all $t > 0$ large enough,
$$\mathbb{E}\big[\langle V_t\rangle^4\big] \le \tilde c\, t^2.$$

Proof. By setting $n = 2$ in eq. (2.33), we have
$$V_t = \lambda\big(S^{(1)}_{t/2}\big) + \lambda\big(S^{(2)}_{t/2}\big) - \lambda\big(S^{(1)}_{t/2} \cap S^{(2)}_{t/2}\big),$$
where $S^{(1)}_{t/2}$ and $S^{(2)}_{t/2}$ are independent, and have the same law as $S_{t/2}$. Let $V^{(i)}_{t/2} = \lambda\big(S^{(i)}_{t/2}\big)$, for $i = 1,2$. Taking expectation in the last equation and then subtracting the two relations yields
$$\langle V_t\rangle = \langle V^{(1)}_{t/2}\rangle + \langle V^{(2)}_{t/2}\rangle - \big\langle \lambda\big(S^{(1)}_{t/2} \cap S^{(2)}_{t/2}\big)\big\rangle.$$
By the triangle inequality,$^2$
(5.9)
$$\|\langle V_t\rangle\|_4 \le \big\|\langle V^{(1)}_{t/2}\rangle + \langle V^{(2)}_{t/2}\rangle\big\|_4 + \big\|\big\langle \lambda\big(S^{(1)}_{t/2} \cap S^{(2)}_{t/2}\big)\big\rangle\big\|_4.$$
Jensen's inequality and Lemma 2.3 imply that there is a constant $c_1 > 0$ such that
(5.10)
$$\big\|\big\langle \lambda\big(S^{(1)}_{t/2} \cap S^{(2)}_{t/2}\big)\big\rangle\big\|_4 \le \big\|\lambda\big(S^{(1)}_{t/2} \cap S^{(2)}_{t/2}\big)\big\|_4 + \mathbb{E}\big[\lambda\big(S^{(1)}_{t/2} \cap S^{(2)}_{t/2}\big)\big] \le 2\,\big\|\lambda\big(S^{(1)}_{t/2} \cap S^{(2)}_{t/2}\big)\big\|_4 \le 2\,\big\|\lambda\big(S^{(1)}_{t/2} \cap S_\infty\big)\big\|_4 \le c_1 h(t/2) \le c_1 \sqrt t.$$
By the independence of the variables $\langle V^{(1)}_{t/2}\rangle$ and $\langle V^{(2)}_{t/2}\rangle$, we have
$$\mathbb{E}\Big[\big(\langle V^{(1)}_{t/2}\rangle + \langle V^{(2)}_{t/2}\rangle\big)^4\Big] = \mathbb{E}\big[\langle V^{(1)}_{t/2}\rangle^4\big] + \mathbb{E}\big[\langle V^{(2)}_{t/2}\rangle^4\big] + 6\,\mathbb{E}\big[\langle V^{(1)}_{t/2}\rangle^2\big]\,\mathbb{E}\big[\langle V^{(2)}_{t/2}\rangle^2\big].$$
By Lemma 2.5, there exists $N \in \mathbb{N}$ large enough such that $\mathrm{Var}(V_t) \le c_2 t$ for all $t \ge 2^N$ and some $c_2 > 0$. Hence, for $t \ge 2^{N+1}$,
$$\mathbb{E}\Big[\big(\langle V^{(1)}_{t/2}\rangle + \langle V^{(2)}_{t/2}\rangle\big)^4\Big] \le \mathbb{E}\big[\langle V^{(1)}_{t/2}\rangle^4\big] + \mathbb{E}\big[\langle V^{(2)}_{t/2}\rangle^4\big] + 6\Big(\frac{c_2 t}{2}\Big)^2.$$
By combining this with the elementary inequality $(a+b)^{1/4} \le a^{1/4} + b^{1/4}$, we arrive at
(5.11)
$$\big\|\langle V^{(1)}_{t/2}\rangle + \langle V^{(2)}_{t/2}\rangle\big\|_4 \le \Big(\mathbb{E}\big[\langle V^{(1)}_{t/2}\rangle^4\big] + \mathbb{E}\big[\langle V^{(2)}_{t/2}\rangle^4\big]\Big)^{1/4} + c_3\sqrt t$$
with $c_3 = (3c_2^2/2)^{1/4}$. From eqs. (5.9) to (5.11) it follows that there is $c_4 > 0$ such that
$$\|\langle V_t\rangle\|_4 \le \Big(\mathbb{E}\big[\langle V^{(1)}_{t/2}\rangle^4\big] + \mathbb{E}\big[\langle V^{(2)}_{t/2}\rangle^4\big]\Big)^{1/4} + c_4\sqrt t.$$
For $k \ge N$ we set
$$\gamma_k = \sup\big\{\|\langle V_t\rangle\|_4 : 2^k \le t < 2^{k+1}\big\}.$$
Thus, for $k \ge N+1$ and for every $2^k \le t < 2^{k+1}$ we have
$$\|\langle V_t\rangle\|_4 \le \big(\gamma_{k-1}^4 + \gamma_{k-1}^4\big)^{1/4} + c_5\, 2^{k/2}$$
with $c_5 = \sqrt 2\, c_4$. Taking the supremum over $2^k \le t < 2^{k+1}$ yields
$$\gamma_k \le 2^{1/4}\gamma_{k-1} + c_5\, 2^{k/2}.$$
We set $\delta_k = \gamma_k/2^{k/2}$ and divide the last inequality by $2^{k/2}$. We thus have
$$\delta_k \le \frac{2^{1/4}\gamma_{k-1}}{2^{1/2}\, 2^{(k-1)/2}} + c_5 = 2^{-1/4}\delta_{k-1} + c_5.$$
By iterating this inequality we finally conclude the result. $\square$

$^2$For a random variable $Y$ we write $\|Y\|_p = (\mathbb{E}[|Y|^p])^{1/p}$, for any $p \ge 1$.

Lemma 5.7. The following expansion is valid
$$\mathrm{Var}(V_t) = \sigma^2 t + O\big(t^{1/2} h(t)\big), \qquad t \ge 1,$$
where the function $h(t)$ is defined in eq. (2.7).

Proof. For every $s,t \ge 0$ we have
$$V_{s+t} = \lambda\big(S_s \cup S[s,s+t]\big) = \lambda\big(S^{(1)}_s\big) + \lambda\big(S^{(2)}_t\big) - \lambda\big(S^{(1)}_s \cap S^{(2)}_t\big).$$
This implies
$$V^{(1)}_s + V^{(2)}_t - \lambda\big(S^{(1)}_{s+t} \cap S^{(2)}_{s+t}\big) \le V_{s+t} \le V^{(1)}_s + V^{(2)}_t,$$
and whence
$$\langle V^{(1)}_s\rangle + \langle V^{(2)}_t\rangle - \lambda\big(S^{(1)}_{s+t} \cap S^{(2)}_{s+t}\big) \le \langle V_{s+t}\rangle \le \langle V^{(1)}_s\rangle + \langle V^{(2)}_t\rangle + \mathbb{E}\big[\lambda\big(S^{(1)}_{s+t} \cap S^{(2)}_{s+t}\big)\big].$$
Here, $S^{(1)}_s$ and $S^{(2)}_t$ are independent and have the same law as $S_s$ and $S_t$, respectively. We set $I_t = \lambda\big(S_t \cap S'_t\big)$, where $S'_t$ is an independent copy of $S_t$. From the previous relation we obtain
$$\big|\langle V_{s+t}\rangle - \big(\langle V^{(1)}_s\rangle + \langle V^{(2)}_t\rangle\big)\big| \le I_{s+t} + \mathbb{E}[I_{s+t}] \le I_{s+t} + \|I_{s+t}\|_2.$$
Hence
(5.12)
$$\big\|\langle V_{s+t}\rangle - \big(\langle V^{(1)}_s\rangle + \langle V^{(2)}_t\rangle\big)\big\|_2 \le 2\|I_{s+t}\|_2,$$
and
(5.13)
$$\|\langle V_{s+t}\rangle\|_2^2 \le \|\langle V_s\rangle\|_2^2 + \|\langle V_t\rangle\|_2^2 + 4\big(\|\langle V_s\rangle\|_2^2 + \|\langle V_t\rangle\|_2^2\big)^{1/2}\|I_{s+t}\|_2 + 4\|I_{s+t}\|_2^2.$$
By eq. (2.6), there are $c_1 > 0$ and $t_1 > 1$ such that $\|I_t\|_2 \le c_1 h(t)$ for $t \ge t_1$. For $t \in [1,t_1]$ we clearly have $I_t \le I_{t_1}$. Thus, there is a constant $c_2 > 0$ such that
$$\|I_t\|_2 \le c_2 h(t), \qquad t \ge 1.$$
Moreover, from eq. (2.9) we have that there exist $c_3 > 0$ and $t_2 > 1$ such that $\mathrm{Var}(V_t) \le c_3 t$ for $t \ge t_2$, and for $t \in [1,t_2]$ we have $\mathrm{Var}(V_t) \le \mathbb{E}[V_t^2] \le \mathbb{E}[V_{t_2}^2] \le \mathbb{E}[V_{t_2}^2]\, t$. Hence
$$\mathrm{Var}(V_t) \le \big(c_3 + \mathbb{E}[V_{t_2}^2]\big)\, t, \qquad t \ge 1.$$
We conclude that there is $c_4 > 0$ such that
(5.14)
$$\|\langle V_t\rangle\|_2 \le c_4\sqrt t \quad\text{and}\quad \|I_t\|_2 \le c_4 h(t), \qquad t \ge 1.$$
By eq. (5.13), we obtain
$$\|\langle V_{s+t}\rangle\|_2^2 \le \|\langle V_s\rangle\|_2^2 + \|\langle V_t\rangle\|_2^2 + 4c_4^2\sqrt{s+t}\, h(s+t) + 4c_4^2\big(h(s+t)\big)^2 \le \|\langle V_s\rangle\|_2^2 + \|\langle V_t\rangle\|_2^2 + c_5\sqrt{s+t}\, h(s+t),$$
for some constant $c_5 > 0$. Similarly as above, in view of eq. (5.12) we have
$$\big\|\langle V^{(1)}_s\rangle + \langle V^{(2)}_t\rangle\big\|_2 \le \|\langle V_{s+t}\rangle\|_2 + \big\|\langle V_{s+t}\rangle - \big(\langle V^{(1)}_s\rangle + \langle V^{(2)}_t\rangle\big)\big\|_2 \le \|\langle V_{s+t}\rangle\|_2 + 2\|I_{s+t}\|_2,$$
which implies
$$\|\langle V_s\rangle\|_2^2 + \|\langle V_t\rangle\|_2^2 \le \|\langle V_{s+t}\rangle\|_2^2 + 4\|\langle V_{s+t}\rangle\|_2\|I_{s+t}\|_2 + 4\|I_{s+t}\|_2^2.$$
By eq. (5.14),
$$\|\langle V_s\rangle\|_2^2 + \|\langle V_t\rangle\|_2^2 \le \|\langle V_{s+t}\rangle\|_2^2 + 4c_4^2\sqrt{s+t}\, h(s+t) + 4c_4^2\big(h(s+t)\big)^2 \le \|\langle V_{s+t}\rangle\|_2^2 + c_5\sqrt{s+t}\, h(s+t).$$
We set
$$x_t = \mathrm{Var}(V_t) = \|\langle V_t\rangle\|_2^2 \quad\text{and}\quad b_t = c_5\sqrt t\, h(t), \qquad t > 0,$$
and we have shown that
$$x_s + x_t - b_{s+t} \le x_{s+t} \le x_s + x_t + b_{s+t}, \qquad s,t \ge 1.$$
By Lemma 2.5 we know that
$$\lim_{t\to\infty} \frac{x_t}{t} = \sigma^2 > 0.$$
Take $s = t = 2^{k-1} r$ for $k \in \mathbb{N}$ and $r \in \mathbb{R}$, $r \ge 1$. We easily verify that
$$\Big|\frac{x_{2^k r}}{2^k r} - \frac{x_{2^{k-1} r}}{2^{k-1} r}\Big| \le \frac{b_{2^k r}}{2^k r}, \qquad k \in \mathbb{N},\ r \ge 1.$$
Next, we observe that
$$\sum_{k=1}^{\infty} \Big(\frac{x_{2^k r}}{2^k r} - \frac{x_{2^{k-1} r}}{2^{k-1} r}\Big) = \lim_{N\to\infty} \sum_{k=1}^{N} \Big(\frac{x_{2^k r}}{2^k r} - \frac{x_{2^{k-1} r}}{2^{k-1} r}\Big) = \sigma^2 - \frac{x_r}{r}, \qquad r \ge 1,$$
and whence
$$\Big|\frac{x_t}{t} - \sigma^2\Big| = \Big|\sum_{k=1}^{\infty} \Big(\frac{x_{2^k t}}{2^k t} - \frac{x_{2^{k-1} t}}{2^{k-1} t}\Big)\Big| \le \sum_{k=1}^{\infty} \frac{b_{2^k t}}{2^k t}, \qquad t \ge 1.$$
This yields
$$\Big|\frac{x_t}{t} - \sigma^2\Big| \le \sum_{k=1}^{\infty} \frac{c_5\sqrt{2^k t}\, h(2^k t)}{2^k t} \le \frac{c_5}{\sqrt t} \sum_{k=1}^{\infty} \frac{h(2^k t)}{2^{k/2}}, \qquad t \ge 1.$$
Case (i). For $\Delta \in (0,1/2)$ we have
$$\Big|\frac{x_t}{t} - \sigma^2\Big| \le \frac{c_5}{\sqrt t} \sum_{k=1}^{\infty} \frac{(2^k t)^{1/2-\Delta}}{2^{k/2}} = \frac{c_5}{t^{\Delta}} \sum_{k=1}^{\infty} \big(2^{-\Delta}\big)^k = c_6\, t^{-\Delta}, \qquad t \ge 1,$$
where $c_6 = c_5 \sum_{k=1}^{\infty} 2^{-\Delta k}$. It follows that
$$|x_t - \sigma^2 t| \le c_6\, t^{1-\Delta} = c_6\, t^{1/2} h(t), \qquad t \ge 1.$$
Case (ii). If $\Delta \ge 1/2$, then $h(t)$ is slowly varying. According to [2, Theorem 1.5.6] there is a constant $c_7 > 0$ such that $h(2^k t) \le c_7\, 2^{k/4} h(t)$ for all $k \in \mathbb{N}$ and $t \ge 1$. We obtain
$$\Big|\frac{x_t}{t} - \sigma^2\Big| \le \frac{c_5}{\sqrt t} \sum_{k=1}^{\infty} \frac{c_7\, 2^{k/4} h(t)}{2^{k/2}} = \frac{c_5 c_7\, h(t)}{\sqrt t} \sum_{k=1}^{\infty} 2^{-k/4} = c_8\, t^{-1/2} h(t), \qquad t \ge 1,$$
with $c_8 = c_5 c_7 \sum_{k=1}^{\infty} 2^{-k/4}$, and the proof is finished. $\square$
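The final step of the proof, turning the near-additivity $|x_{s+t} - x_s - x_t| \le b_{s+t}$ into the expansion for $x_t/t$, is a deterministic dyadic iteration that can be checked mechanically. The Python snippet below does this for a synthetic sequence $x_t = \sigma^2 t + t^{0.6}$ with $b_t = 4t^{0.6}$ (our toy choices, standing in for the paper's $\sqrt t\,h(t)$).

```python
# Deterministic check of the dyadic telescoping bound: if
# |x(2s) - 2 x(s)| <= b(2s) at every dyadic scale s = 2^(k-1) t, then
# |x(t)/t - sigma^2| <= sum_k b(2^k t) / (2^k t).
sigma2 = 2.0
x = lambda t: sigma2 * t + t ** 0.6   # synthetic near-additive sequence
b = lambda t: 4.0 * t ** 0.6          # synthetic error envelope

for t in [1.0, 3.0, 10.0]:
    # Near-additivity hypothesis at each dyadic scale ...
    for k in range(1, 40):
        s = 2 ** (k - 1) * t
        assert abs(x(2 * s) - 2 * x(s)) <= b(2 * s)
    # ... and the telescoped bound on |x(t)/t - sigma^2|.
    bound = sum(b(2 ** k * t) / (2 ** k * t) for k in range(1, 60))
    assert abs(x(t) / t - sigma2) <= bound
print("telescoping bound verified")
```

The geometric decay of $b(2^k t)/(2^k t)$ in $k$ is exactly what makes the series summable, mirroring Cases (i) and (ii) above.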

Lemma 5.8. Assume that $d/\alpha > 9/5$. Then, for the process $\{J_{n_i}\}_{i\ge0}$ defined in eq. (4.5), it holds that
$$\lim_{i\to\infty} \frac{\big|\sum_{j=0}^{i-1} \langle J_{n_j}\rangle\big|}{\sqrt{n_i/\log\log n_i}} = 0 \quad \mathbb{P}\text{-a.s.}$$

Proof. For $i \ge 2$ we clearly have
$$\mathrm{Var}\Big(\sum_{j=1}^{i-1} J_{n_j}\Big) \le \sum_{j=1}^{i-1} \mathbb{E}\big[J_{n_j}^2\big] + 2\sum_{j=1}^{i-1}\sum_{k=1}^{j-1} \mathbb{E}\big[J_{n_k} J_{n_j}\big] - 2\sum_{j=1}^{i-1}\sum_{k=1}^{j-1} \mathbb{E}[J_{n_k}]\,\mathbb{E}[J_{n_j}].$$
Let $S'_t$ be an independent copy of $S_t$. Then
$$J_{n_j} \overset{(d)}{=} \lambda\big(S'[0, n_{j+1}-n_j] \cap S[0,n_j]\big)$$
for $j = 1,\ldots,i-1$. Jensen's inequality and Corollary 2.4 imply that, for some $c_1 > 0$,
$$\mathbb{E}[J_{n_j}]^2 \le \mathbb{E}\big[J_{n_j}^2\big] \le c_1 h(n_i)^2.$$
Further, for any $j = 1,\ldots,i-1$ and $k = 1,\ldots,j-1$, it holds that
$$J_{n_j} = \lambda\big(S(n_j,n_{j+1}] \cap S[0,n_j]\big) = \lambda\big(S(n_j,n_{j+1}] \cap S[n_{k+1},n_j]\big) + \lambda\big(S(n_j,n_{j+1}] \cap \big(S[0,n_{k+1}] \setminus S[n_{k+1},n_j]\big)\big).$$
We set
$$J^{(1)}_{j,k} = \lambda\big(S(n_j,n_{j+1}] \cap S[n_{k+1},n_j]\big), \qquad J^{(2)}_{j,k} = \lambda\big(S(n_j,n_{j+1}] \cap \big(S[0,n_{k+1}] \setminus S[n_{k+1},n_j]\big)\big).$$
Due to independence,
$$\mathbb{E}\big[J_{n_k} J^{(1)}_{j,k}\big] = \mathbb{E}[J_{n_k}]\,\mathbb{E}\big[J^{(1)}_{j,k}\big].$$
Thus,
$$\mathrm{Var}\Big(\sum_{j=1}^{i-1} J_{n_j}\Big) \le \sum_{j=1}^{i-1} \mathbb{E}\big[J_{n_j}^2\big] + 2\sum_{j=1}^{i-1}\sum_{k=1}^{j-1} \mathbb{E}\big[J_{n_k} J^{(2)}_{j,k}\big] - 2\sum_{j=1}^{i-1}\sum_{k=1}^{j-1} \mathbb{E}[J_{n_k}]\,\mathbb{E}\big[J^{(2)}_{j,k}\big].$$
Further,
$$\begin{aligned}
\sum_{j=1}^{i-1}\sum_{k=0}^{j-1} J_{n_k} J^{(2)}_{j,k} &= \sum_{j=1}^{i-1}\sum_{k=0}^{j-1} \lambda\big(S(n_k,n_{k+1}] \cap S[0,n_k]\big)\,\lambda\big(S(n_j,n_{j+1}] \cap \big(S[0,n_{k+1}] \setminus S[n_{k+1},n_j]\big)\big) \\
&= \sum_{k=0}^{i-2} \lambda\big(S(n_k,n_{k+1}] \cap S[0,n_k]\big) \sum_{j=k+1}^{i-1} \lambda\big(S(n_j,n_{j+1}] \cap \big(S[0,n_{k+1}] \setminus S[n_{k+1},n_j]\big)\big) \\
&\le \sum_{k=0}^{i-2} \lambda\big(S(n_k,n_{k+1}] \cap S[0,n_k]\big)\,\lambda\big(S(n_{k+1},n_i] \cap S[0,n_{k+1}]\big).
\end{aligned}$$
Similarly as before, by Corollary 2.4, we obtain
$$\mathbb{E}\big[\lambda\big(S(n_k,n_{k+1}] \cap S[0,n_k]\big)^2\big] \le c_1 h(n_i)^2, \qquad \mathbb{E}\big[\lambda\big(S(n_{k+1},n_i] \cap S[0,n_{k+1}]\big)^2\big] \le c_1 h(n_i)^2.$$
This implies
$$\sum_{j=1}^{i-1}\sum_{k=0}^{j-1} \mathbb{E}\big[J_{n_k} J^{(2)}_{j,k}\big] \le c_1\, i\, h(n_i)^2.$$
Analogously we can show that
$$\sum_{j=1}^{i-1}\sum_{k=0}^{j-1} \mathbb{E}[J_{n_k}]\,\mathbb{E}\big[J^{(2)}_{j,k}\big] \le c_1\, i\, h(n_i)^2.$$
Thus,
$$\mathrm{Var}\Big(\sum_{j=0}^{i-1} J_{n_j}\Big) \le c_2\, i\, h(n_i)^2$$
for some $c_2 > 0$. Let $2^k \le n_i < 2^{k+1}$, and $\Lambda = 2 - d/\alpha$ when $d/\alpha \in (1,2)$. Then
$$i\, h(n_i)^2 \le h(2^{k+1})^2 \sum_{j=1}^{k} j\,2^{j/2} = \begin{cases} O\big(n_i^{1/2}\log n_i\big), & d/\alpha > 2,\\[2pt] O\big(n_i^{1/2}(\log n_i)^3\big), & d/\alpha = 2,\\[2pt] O\big(n_i^{2\Lambda+1/2}\log n_i\big), & d/\alpha \in (1,2), \end{cases}$$
and, for any $\varepsilon > 0$,
$$\mathbb{P}\Big(\Big|\sum_{j=0}^{i-1} \langle J_{n_j}\rangle\Big| \ge \varepsilon\sqrt{n_i/\log\log n_i}\Big) = \begin{cases} O\big(n_i^{-1/2}(\log n_i)\log\log n_i\big), & d/\alpha > 2,\\[2pt] O\big(n_i^{-1/2}(\log n_i)^3\log\log n_i\big), & d/\alpha = 2,\\[2pt] O\big(n_i^{2\Lambda-1/2}(\log n_i)\log\log n_i\big), & d/\alpha \in (1,2), \end{cases} = \begin{cases} O\big(2^{-k/2}\, k\log k\big), & d/\alpha > 2,\\[2pt] O\big(2^{-k/2}\, k^3\log k\big), & d/\alpha = 2,\\[2pt] O\big(2^{(2\Lambda-1/2)k}\, k\log k\big), & d/\alpha \in (1,2). \end{cases}$$
Let $\epsilon \in (0,1)$ be arbitrary and let us consider a subsequence $\{n_{i_l}\}_{l\ge1}$ which consists of every $\lfloor 2^{(1-\epsilon)k/2}\rfloor$-th member of $\{n_i\}_{i\ge0}$ in $[2^k, 2^{k+1})$. Clearly, there are at most $k2^{\epsilon k/2}$ members of this subsequence in $[2^k, 2^{k+1}]$. We have

$$\sum_{l=1}^{\infty} \mathbb{P}\Big(\Big|\sum_{j=0}^{i_l-1} \langle J_{n_j}\rangle\Big| \ge \varepsilon\sqrt{n_{i_l}/\log\log n_{i_l}}\Big) \le c_3 \begin{cases} \sum_{k=1}^{\infty} 2^{(\epsilon-1)k/2}\, k^2\log k, & d/\alpha > 2,\\[2pt] \sum_{k=1}^{\infty} 2^{(\epsilon-1)k/2}\, k^4\log k, & d/\alpha = 2,\\[2pt] \sum_{k=1}^{\infty} 2^{(4\Lambda+\epsilon-1)k/2}\, k^2\log k, & d/\alpha \in (1,2), \end{cases}$$
for some $c_3 > 0$. When $d/\alpha \ge 2$ we take an arbitrary $\epsilon \in (0,1)$, and when $d/\alpha \in (1,2)$ we take $\epsilon \in (0,1)$ such that $\epsilon < 1 - 4\Lambda$. Observe that in the latter case it is necessary that $\Lambda < 1/4$ (that is, $d/\alpha \in (7/4,2)$). In this case,
$$\sum_{l=1}^{\infty} \mathbb{P}\Big(\Big|\sum_{j=0}^{i_l-1} \langle J_{n_j}\rangle\Big| \ge \varepsilon\sqrt{n_{i_l}/\log\log n_{i_l}}\Big) < \infty.$$
The Borel-Cantelli lemma implies that
(5.15)
$$\lim_{l\to\infty} \frac{\big|\sum_{j=0}^{i_l-1} \langle J_{n_j}\rangle\big|}{\sqrt{n_{i_l}/\log\log n_{i_l}}} = 0 \quad \mathbb{P}\text{-a.s.}$$
We finally prove that eq. (5.15) holds for the sequence $\{n_i\}_{i\ge0}$. If $2^k \le n_{i_l} \le n_i \le n_{i_{l+1}} \le 2^{k+1}$ then
$$\sum_{j=0}^{i_l-1} \langle J_{n_j}\rangle - \sum_{j=i_l}^{i_{l+1}-1} \mathbb{E}[J_{n_j}] \le \sum_{j=0}^{i-1} \langle J_{n_j}\rangle \le \sum_{j=0}^{i_{l+1}-1} \langle J_{n_j}\rangle + \sum_{j=i_l}^{i_{l+1}-1} \mathbb{E}[J_{n_j}].$$

From Corollary 2.4 and eq. (4.3) it follows that
$$\mathbb{E}[J_{n_j}] = O\big(h(n_{j+1}-n_j)\big) = \begin{cases} O(1), & d/\alpha > 2,\\[2pt] O\big(\log\big(n_j^{1/2}/\log n_j\big)\big), & d/\alpha = 2,\\[2pt] O\big(\big(n_j^{1/2}/\log n_j\big)^{\Lambda}\big), & d/\alpha \in (1,2). \end{cases}$$
Hence
$$\sum_{j=i_l}^{i_{l+1}-1} \mathbb{E}[J_{n_j}] = \begin{cases} O\big(n_i^{(1-\epsilon)/2}\big), & d/\alpha > 2,\\[2pt] O\big(n_i^{(1-\epsilon)/2}\log n_i\big), & d/\alpha = 2,\\[2pt] O\big(n_i^{(\Lambda+1-\epsilon)/2}\big), & d/\alpha \in (1,2). \end{cases}$$
By choosing an arbitrary $\epsilon \in (0,1)$ in the case when $d/\alpha \ge 2$, and $\epsilon \in (0,1) \cap (\Lambda, 1-4\Lambda)$ in the case when $d/\alpha \in (1,2)$, we obtain
$$\sum_{j=i_l}^{i_{l+1}-1} \mathbb{E}[J_{n_j}] = o\big(\sqrt{n_i/\log\log n_i}\big),$$
which concludes the proof. We observe that $\Lambda < 1-4\Lambda$ if, and only if, $\Lambda < 1/5$, that is, $d/\alpha \in (9/5,2)$. $\square$

Acknowledgement. This work has been supported by Deutscher Akademischer Austauschdienst (DAAD) and the Ministry of Science and Education of the Republic of Croatia (MSE) via the project Random Time-Change and Jump Processes. Financial support through the Alexander von Humboldt Foundation and the Croatian Science Foundation under projects 8958 and 4197 (for N. Sandrić), and the Austrian Science Fund (FWF) under project P31889-N35 and the Croatian Science Foundation under project 4197 (for S. Šebek) is gratefully acknowledged.

References

[1] R. F. Bass and T. Kumagai. Laws of the iterated logarithm for the range of random walks in two and three dimensions. Ann. Probab., 30(3):1369–1396, 2002.
[2] N. H. Bingham, C. M. Goldie, and J. L. Teugels. Regular variation. Cambridge University Press, Cambridge, 1989.
[3] E. Csáki and Y. Hu. Strong approximations of three-dimensional Wiener sausages. Acta Math. Hungar., 114(3):205–226, 2007.
[4] W. Cygan, N. Sandrić, and S. Šebek. CLT for the capacity of the range of stable random walks. Preprint (2019), arXiv:1904.05695.
[5] W. Cygan, N. Sandrić, and S. Šebek. Functional CLT for the range of stable random walks. To appear in Bull. Malays. Math. Sci. Soc.
[6] M. D. Donsker and S. R. S. Varadhan. Asymptotics for the Wiener sausage. Comm. Pure Appl. Math., 28(4):525–565, 1975.
[7] R. Durrett. Probability: theory and examples. Cambridge Series in Statistical and Probabilistic Mathematics, fourth edition, 2010.
[8] T. Eisele and R. Lang. Asymptotics for the Wiener sausage with drift. Probab. Theory Related Fields, 74(1):125–140, 1987.
[9] R. K. Getoor. Some asymptotic formulas involving capacity. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete, 4:248–252, 1965.
[10] J. M. Hammersley. Generalization of the fundamental theorem on sub-additive functions. Proc. Cambridge Philos. Soc., 58:235–238, 1962.
[11] J. Hawkes. Some geometric aspects of potential theory. In Stochastic analysis and applications (Swansea, 1983), volume 1095 of Lecture Notes in Math., pages 130–154. Springer, Berlin, 1984.
[12] J. B. Hough and Y. Peres. An LIL for cover times of disks by planar random walk and Wiener sausage. Trans. Amer. Math. Soc., 359(10):4653–4668, 2007.
[13] N. C. Jain and W. E. Pruitt. The law of the iterated logarithm for the range of random walk. Ann. Math. Statist., 43:1692–1697, 1972.

[14] M. Kac and J. M. Luttinger. Bose-Einstein condensation in the presence of impurities. II. J. Mathematical Phys., 15:183–186, 1974.
[15] O. Kallenberg. Foundations of modern probability. Springer-Verlag, New York, 1997.
[16] U. Krengel. Ergodic theorems. Walter de Gruyter & Co., Berlin, 1985.
[17] J.-F. Le Gall. Fluctuation results for the Wiener sausage. Ann. Probab., 16(3):991–1018, 1988.
[18] J.-F. Le Gall and J. Rosen. The range of stable random walks. Ann. Probab., 19(2):650–705, 1991.
[19] A. Mimica and Z. Vondraček. Unavoidable collections of balls for isotropic Lévy processes. Stochastic Process. Appl., 124(3):1303–1334, 2014.
[20] V. V. Petrov. Sums of independent random variables. Springer-Verlag, New York-Heidelberg, 1975. Translated from the Russian by A. A. Brown, Ergebnisse der Mathematik und ihrer Grenzgebiete, Band 82.
[21] S. C. Port. Asymptotic expansions for the expected volume of a stable sausage. Ann. Probab., 18(2):492–523, 1990.
[22] S. C. Port and C. J. Stone. Infinitely divisible processes and their potential theory. Ann. Inst. Fourier (Grenoble), 21(2):157–275; ibid. 21(4):179–265, 1971.
[23] D. Revuz and M. Yor. Continuous martingales and Brownian motion. Springer-Verlag, Berlin, third edition, 1999.
[24] J. Rosen. The asymptotics of stable sausages in the plane. Ann. Probab., 20(1):29–60, 1992.
[25] K. Sato. Lévy processes and infinitely divisible distributions, volume 68. Cambridge University Press, Cambridge, 1999.
[26] Q. M. Shao. A Chung type law of the iterated logarithm for subsequences of a Wiener process. Stochastic Process. Appl., 59(1):125–142, 1995.
[27] B. Simon. Functional integration and quantum physics. AMS Chelsea Publishing, Providence, RI, second edition, 2005.
[28] F. Spitzer. Electrostatic capacity, heat flow, and Brownian motion. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete, 3:110–121, 1964.
[29] A.-S. Sznitman. Some bounds and limiting results for the measure of Wiener sausage of small radius associated with elliptic diffusions. Stochastic Process. Appl., 25(1):1–25, 1987.
[30] J. Takeuchi. On the sample paths of the symmetric stable processes in spaces. J. Math. Soc. Japan, 16:109–127, 1964.
[31] M. van den Berg. On the expected volume of intersection of independent Wiener sausages and the asymptotic behaviour of some related integrals. J. Funct. Anal., 222(1):114–128, 2005.
[32] M. van den Berg. On the volume of intersection of three independent Wiener sausages. Ann. Inst. Henri Poincaré Probab. Stat., 46(2):313–337, 2010.
[33] M. van den Berg. On the volume of the intersection of two independent Wiener sausages. Potential Anal., 34(1):57–79, 2011.
[34] M. van den Berg, E. Bolthausen, and F. den Hollander. On the volume of the intersection of two Wiener sausages. Ann. of Math. (2), 159(2):741–782, 2004.
[35] Y. Q. Wang and F. Q. Gao. Laws of the iterated logarithm for high-dimensional Wiener sausage. Acta Math. Sin. (Engl. Ser.), 27(8):1599–1610, 2011.

(Wojciech Cygan) Institut für Mathematische Stochastik, Technische Universität Dresden, Dresden, Germany & Instytut Matematyczny, Uniwersytet Wrocławski, Wrocław, Poland
Email address: [email protected]

(Nikola Sandrić) Department of Mathematics, University of Zagreb, Zagreb, Croatia
Email address: [email protected]

(Stjepan Šebek) Institute of Discrete Mathematics, Graz University of Technology, Graz, Austria & Department of Applied Mathematics, Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia
Email address: [email protected]