THE ANNALS OF APPLIED PROBABILITY 8(3), 708–748 (1998)

BROWNIAN MOTION IN A BROWNIAN CRACK

Krzysztof Burdzy Davar Khoshnevisan

Abstract. Let $D$ be the Wiener sausage of width $\varepsilon$ around two-sided Brownian motion. The components of 2-dimensional reflected Brownian motion in $D$ converge to 1-dimensional Brownian motion and iterated Brownian motion, resp., as $\varepsilon$ goes to 0.

1. Introduction. Our paper is concerned with a model for a diffusion in a crack. This should not be confused with the "crack diffusion model" introduced by Chudnovsky and Kunin (1987) which proposes that cracks have the shape of a diffusion path. The standard Brownian motion is the simplest of the models proposed by Chudnovsky and Kunin. An obvious candidate for a "diffusion in a Brownian crack" is the "iterated Brownian motion" or IBM (we will define IBM later in the introduction). The term IBM was coined in Burdzy (1993) but the idea is older than that. See Burdzy (1993, 1994) and Khoshnevisan and Lewis (1997) for a review of the literature and results on IBM. The last paper is the only article known to us which considers the problem of diffusion in a crack; however, the results in Khoshnevisan and Lewis (1997) are of a considerably different nature from our own. There are many papers devoted to diffusions on fractal sets, see, e.g., Barlow (1990), Barlow and Bass (1993, 1997) and references therein. Diffusions on fractals are often constructed by an approximation process, i.e., they are first constructed on an appropriate $\varepsilon$-enlargement of the fractal set and then a limit is obtained as the width $\varepsilon$ goes to 0. This procedure justifies the claims of applicability of such models as the real sets are likely to be "fat" approximations to ideal fractals. The purpose of this article is to provide a similar justification for the use of IBM as a model for the Brownian motion in a Brownian crack. We will show that the two components of the reflected 2-dimensional Brownian motion in a Wiener sausage of width $\varepsilon$ converge to the usual Brownian motion and iterated Brownian motion, resp., when $\varepsilon \to 0$.

It is perhaps appropriate to comment on the possible applications of our main result. The standard interpretation of the Brownian motion with reflection is that of a diffusing particle which is trapped in a set, i.e., a crack.
However, the transition probabilities for reflected Brownian motion also represent the solution of the heat equation with the Neumann boundary conditions. Our Theorem 1 may be interpreted as saying that inside very narrow cracks, the solution of the Neumann heat equation is well approximated by the solution of the heat equation on the straight line which is projected back onto the crack. We leave the proof of this claim to the reader.

Research supported in part by grants from NSF and NSA.

Keywords: crack diffusion model, iterated Brownian motion, diffusion on fractals.

AMS subject classification: 60J65, 60F99.


We proceed with the rigorous statement of our main result. We define "two-sided" Brownian motion by
$$X^1(t) = \begin{cases} X^+(t) & \text{if } t \ge 0,\\ X^-(-t) & \text{if } t < 0,\end{cases}$$
where $X^+$ and $X^-$ are independent standard Brownian motions starting from 0. For $\varepsilon > 0$ and a continuous function $\eta : \mathbb{R} \to \mathbb{R}$ let
$$D = D(\eta, \varepsilon) = \{(x,y) \in \mathbb{R}^2 : y \in (\eta(x) - \varepsilon,\ \eta(x) + \varepsilon)\}.$$
Note that $D$ is open for every continuous function $\eta$. Let $Y_t = Y^\varepsilon_t$, $Y_0 = (0,0)$, be the reflected 2-dimensional Brownian motion in $D(X^1, \varepsilon)$ with normal vector of reflection (the construction of such a process is discussed in Section 2). Informally speaking, our main result is that $\operatorname{Re} Y^\varepsilon(c(\varepsilon)\varepsilon^{-2} t)$ converges in distribution to a Brownian motion independent of $X^1$, where the constants $c(\varepsilon)$ are uniformly bounded away from 0 and $\infty$ for all $\varepsilon > 0$.

Suppose that $X^2$ is a standard Brownian motion independent of $X^1$. The process $\{X(t) \stackrel{df}{=} X^1(X^2(t)),\ t \ge 0\}$ is called an "iterated Brownian motion" (IBM). We will identify $\mathbb{R}^2$ with the complex plane $\mathbb{C}$ and switch between real and complex notation. Let $\varrho$ be a metric on $C[0, \infty)$ corresponding to the topology of uniform convergence on compact intervals.
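The definitions above lend themselves to a quick numerical illustration. The following is a minimal sketch (our own, not part of the paper) of how one can sample an approximate IBM path $X(t) = X^1(X^2(t))$: simulate the inner Brownian motion $X^2$, build a two-sided path $X^1$ on a grid wide enough to cover the range of $X^2$, and compose by interpolation. All function names and grid parameters are illustrative choices.

```python
import numpy as np

def brownian_path(n, dt, rng):
    """Standard Brownian motion sampled at n steps of size dt, B(0) = 0."""
    increments = rng.normal(0.0, np.sqrt(dt), size=n)
    return np.concatenate([[0.0], np.cumsum(increments)])

def iterated_brownian_motion(n=10_000, t_max=1.0, seed=0):
    """Approximate X(t) = X1(X2(t)) on [0, t_max], where X1 is two-sided
    Brownian motion and X2 is an independent standard Brownian motion."""
    rng = np.random.default_rng(seed)
    dt = t_max / n
    x2 = brownian_path(n, dt, rng)                  # inner process X2
    # Two-sided path X1 = (X^+, X^-) on a grid wide enough to cover X2.
    half = int(np.ceil(8 * np.sqrt(t_max) / dt))
    x_plus = brownian_path(half, dt, rng)           # X^+(t) for t >= 0
    x_minus = brownian_path(half, dt, rng)          # X^-(-t) for t < 0
    grid = dt * np.arange(-half, half + 1)
    x1 = np.concatenate([x_minus[::-1], x_plus[1:]])
    # Compose: evaluate X1 at the (clipped) positions X2(t).
    return np.interp(np.clip(x2, grid[0], grid[-1]), grid, x1)

path = iterated_brownian_motion()
print(path[:3])
```

The same two-sided path $X^1$ would also serve as the center line of a discretized Wiener sausage $D(X^1, \varepsilon)$.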

Theorem 1. One can construct $X^1$, $X^2$ and $Y^\varepsilon$ for every $\varepsilon > 0$ on a common probability space so that the following holds. There exist $c_1, c_2 \in (0, \infty)$ and $c(\varepsilon) \in (c_1, c_2)$ such that the processes $\{\operatorname{Re} Y^\varepsilon(c(\varepsilon)\varepsilon^{-2} t),\ t \ge 0\}$ converge in metric $\varrho$ to $\{X^2_t,\ t \ge 0\}$ in probability as $\varepsilon \to 0$. It follows easily that the processes $\{\operatorname{Im} Y^\varepsilon(c(\varepsilon)\varepsilon^{-2} t),\ t \ge 0\}$ converge in metric $\varrho$ to the iterated Brownian motion $\{X_t,\ t \ge 0\}$ in probability as $\varepsilon \to 0$.

We would like to state a few open problems.

(i) There are alternative models for cracks, see, e.g., Kunin and Gorelik (1991). For which processes $X^1$ besides Brownian motion does a result analogous to Theorem 1 hold?

(ii) Can one construct $Y^\varepsilon$ so that the convergence in Theorem 1 holds in the almost sure sense rather than in probability?

(iii) Let $B(x, r) = \{y \in \mathbb{R}^2 : |x - y| < r\}$ …

Let us outline the main idea of the proof of Theorem 1. Although the Wiener sausage has a constant width $\varepsilon$ in the vertical direction, the width is not constant from the point of view of the reflected Brownian motion $Y$. Large increments of $X^1$ over short intervals produce narrow spots along the crack. We will relate the effective width of the crack to the size of the $X^1$ increments and show that the narrow parts of the crack appear with great regularity due to the independence of $X^1$ increments. This gives the proof the flavor of a

random homogenization problem. Our main technical tool will be the Riemann mapping, as reflected Brownian motion is invariant under conformal mappings.

We would like to point out that the usually convenient "Brownian scaling" arguments cannot be applied in many of our proofs. Brownian scaling requires a different scaling of the time and state space coordinates. In our model, time and space for $X^1$ represent two directions in space for $Y$. In other words, the versions of Brownian scaling for $X^1$ and $Y$ are incompatible.

The paper has two more sections. Section 2 contains a sketch of the construction of the reflected Brownian motion $Y$. The proof of Theorem 1 is given in Section 3. The proof consists of a large number of lemmas.

We would like to thank Zhenqing Chen and Ben Hambly for the most useful advice.

2. Preliminaries. First we will sketch a construction of the reflected Brownian motion $Y$ in $D$. We start by introducing some notation.

Since we will deal with one- and two-dimensional Brownian motions, we will sometimes write "1-D" or "2-D Brownian motion" to clarify the statements.

The following definitions apply to $D(\eta, \varepsilon)$ for any $\eta$, not necessarily $X^1$. The resulting objects will depend on $\eta$, of course. Let $D_* = \{z \in \mathbb{C} : \operatorname{Im} z \in (-1, 1)\}$. The Carathéodory prime end boundaries of $D_*$ and $D$ contain points at $-\infty$ and $+\infty$ which are defined in the obvious way. Let $f$ be the (unique) analytic one-to-one function mapping $D_*$ onto $D$ such that $f(\infty) = \infty$, $f(-\infty) = -\infty$ and $f((0,-1)) = (0,-\varepsilon)$.

We owe the following remarks on the construction of $Y$ to Zhenqing Chen.

It is elementary to construct a reflected Brownian motion (RBM) $Y^*$ in $D_*$. We will argue that $f(Y^*_t)$ is an RBM in $D$, up to a time-change. First, the RBM on a simply connected domain $D$ can be characterized as the continuous Markov process associated with $(H^1(D), E)$ in $L^2(D, dz)$, which is a regular Dirichlet form on $\bar{D}$, where $E(f,g) = \frac{1}{2}\int_D \nabla f \cdot \nabla g\, dx$ (see Remark 1 to Theorem 2.3 of Chen (1992)). The Dirichlet form for the process $f(Y^*_t)$ under the reference measure $|f'(z)|^2\, dz$ is $(H^1(f^{-1}(D)), E)$. Therefore $f(Y^*_t)$ is a time change of the RBM on $D$. The last assertion follows from a Dirichlet form characterization of time-changed processes due to Silverstein and Fitzsimmons (see Theorems 8.2 and 8.5 of Silverstein (1974); the proof seems to contain a gap; see Fitzsimmons (1989) for a correct proof; see also Theorem 6.2.1 in Fukushima, Oshima and Takeda (1994)). We will denote the time-change by $\kappa$, i.e., $Y(\kappa(t)) = f(Y^*(t))$. Unless indicated otherwise we will assume that $Y^*_0 = (0,0)$.

The above argument provides a construction of an RBM in $D(\eta, \varepsilon)$ for fixed (non-random) $\eta$. However, we need a construction in the case when $\eta$ is random, i.e., $D = D(X^1, \varepsilon)$.
Let $\{S_n,\ -\infty \le n \le \infty\}$ be a two-sided simple random walk on the integers and let $\{S_t,\ -\infty \le t \le \infty\}$ be a continuous extension of $S_n$ to all reals which is linear on all intervals of the form $[j, j+1]$. Next we renormalize $S_t$ to obtain processes $cS_{kt}/\sqrt{k}$ which converge to $X^1$ in distribution as $k \to \infty$. Let $U^k_t$ agree with $cS_{kt}/\sqrt{k}$ on $[-k, k]$ and let $U^k_t$ be constant on the intervals $(-\infty, -k]$ and $[k, \infty)$. Note that the number of different paths of $U^k_t$ is finite.
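As a hedged illustration (our own discretization, not the authors' construction), the frozen rescaled walk $U^k_t$ can be sketched as follows; the grid resolution and the constant $c$ are arbitrary choices.

```python
import numpy as np

def u_k(k, n_grid=2001, c=1.0, seed=0):
    """Piecewise-linear rescaled two-sided simple random walk
    c * S_{kt} / sqrt(k) evaluated on [-k, k]; outside [-k, k] the
    process U^k is frozen, so only this window is computed."""
    rng = np.random.default_rng(seed)
    n_steps = int(np.ceil(k * k))          # walk indices needed for |t| <= k
    s_pos = np.concatenate([[0], np.cumsum(rng.choice([-1, 1], size=n_steps))])
    s_neg = np.concatenate([[0], np.cumsum(rng.choice([-1, 1], size=n_steps))])
    idx = np.arange(n_steps + 1)

    def s(x):
        """Linear interpolation of the two-sided walk at a real argument."""
        return np.interp(x, idx, s_pos) if x >= 0 else np.interp(-x, idx, s_neg)

    t = np.linspace(-k, k, n_grid)
    vals = np.array([c * s(k * ti) / np.sqrt(k) for ti in t])
    return t, vals

t, vals = u_k(5)
print(vals[:3])
```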

Fix an $\varepsilon > 0$. We construct an RBM $Y^k$ in $D(U^k, \varepsilon)$ by repeating the construction outlined above for every possible path of $U^k$. We can view $(U^k, Y^k)$ as a random element with values in $C(-\infty, \infty) \times C[0, \infty)^2$. It is possible to show that when $\eta_k$ converge to $\eta$ uniformly on compact subsets of $\mathbb{R}$ then RBM's in $D(\eta_k, \varepsilon)$ starting from the same point converge in distribution to an RBM in $D(\eta, \varepsilon)$. It follows that when $k \to \infty$ then $(U^k, Y^k)$ converges in distribution to a process whose first coordinate has the distribution of $X^1$, i.e., two-sided Brownian motion. The distribution of the second component under the limiting measure is that of reflected Brownian motion in $D(X^1, \varepsilon)$.

We will use the letters $P$ and $E$ to denote probabilities and expectations for RBM in $D$ when the function $X^1$ is fixed. In other words, $P$ and $E$ denote the conditional distribution and expectation given $X^1$. They will also be used in situations when we consider a domain $D(\eta, \varepsilon)$ with non-random $\eta$. We will use $\mathcal{P}$ and $\mathcal{E}$ to denote the distribution and expectation corresponding to the probability space on which $X^1$ and $Y$ are defined. In other words, $\mathcal{P}$ applied to a random element gives us a number while $E$ applied to a random element results in a random variable measurable with respect to $X^1$.

For the convenience of the reader we state here a version of a lemma proved in Burdzy, Toby and Williams (1989) which is easily applicable in our argument below. The notation of the original statement was tailored for the original application and may be hard to understand in the present context.

Lemma 1. (Burdzy, Toby and Williams (1989)) Suppose that functions $h(x,y)$, $g(x,y)$ and $h_1(x,y)$ are defined on the product spaces $W_1 \times W_2$, $W_2 \times W_3$ and $W_1 \times W_3$, resp. Assume that for some constants $c_1, c_2 \in (0,1)$ the functions satisfy, for all $x, y, x_1, x_2, y_1, y_2, z_1, z_2$,
$$h_1(x,y) = \int_{W_2} h(x,z)\, g(z,y)\, dz,$$
$$\frac{h(x_1, z_1)}{h(x_1, z_2)} \ge \frac{h(x_2, z_1)}{h(x_2, z_2)}\,(1 - c_1)$$
and
$$\frac{g(z_1, y_1)}{g(z_1, y_2)} \ge c_2\, \frac{g(z_2, y_1)}{g(z_2, y_2)}.$$
Then
$$\frac{h_1(x_1, y_1)}{h_1(x_1, y_2)} \ge \frac{h_1(x_2, y_1)}{h_1(x_2, y_2)}\,(1 - c_1 + c_2 c_1).$$

Proof. All we have to do is to translate the original statement to our present notation. The symbols on the left hand side of the arrow appeared in Lemma 6.1 of Burdzy, Toby and Williams (1989). The symbols on the right hand side of the arrow are used in the present version of the lemma.

$$b \mapsto 1,\quad U \mapsto \emptyset,\quad k \mapsto x_1,\quad 3-k \mapsto x_2,\quad v \mapsto z_1,\quad w \mapsto z_2,$$
$$x \mapsto y_1,\quad y \mapsto y_2,\quad f \mapsto h,\quad g \mapsto g,\quad h \mapsto h_1.$$
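Lemma 1 is easy to sanity-check numerically. The sketch below (our own, with arbitrarily chosen random positive matrices standing in for the kernels $h$ and $g$, and a discrete sum in place of the integral) computes the largest constants $c_1$ and $c_2$ for which the hypotheses hold and then verifies the asserted ratio inequality for $h_1$.

```python
import numpy as np

rng = np.random.default_rng(1)
n1, n2, n3 = 4, 5, 4
h = rng.uniform(0.5, 2.0, size=(n1, n2))   # stands in for h on W1 x W2
g = rng.uniform(0.5, 2.0, size=(n2, n3))   # stands in for g on W2 x W3
h1 = h @ g                                 # h1(x,y) = sum_z h(x,z) g(z,y)

# Largest c1 with  h(x1,z1)/h(x1,z2) >= (h(x2,z1)/h(x2,z2))(1 - c1)  for all indices:
r_h = h[:, :, None] / h[:, None, :]             # r_h[x, z1, z2]
c1 = 1.0 - (r_h[:, None] / r_h[None, :]).min()  # min over (x1, x2, z1, z2)

# Largest c2 with  g(z1,y1)/g(z1,y2) >= c2 g(z2,y1)/g(z2,y2)  for all indices:
r_g = g[:, :, None] / g[:, None, :]             # r_g[z, y1, y2]
c2 = (r_g[:, None] / r_g[None, :]).min()

# Conclusion of Lemma 1 for h1:
r1 = h1[:, :, None] / h1[:, None, :]
lhs_min = (r1[:, None] / r1[None, :]).min()
print(c1, c2, lhs_min, 1 - c1 + c2 * c1)
```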

For a set $A$, $\tau(A)$ and $T(A)$ will denote the exit time from and the hitting time of the set $A$ for 2-D Brownian motion or reflected Brownian motion.

3. Proof of the main result. The proof of Theorem 1 will consist of several lemmas. Recall the definitions of $D_*$ and $f$ from Section 2. Let $M_*(a,b) = \{z \in D_* : a \le \operatorname{Re} z \le b\}$, $M_*(a) = M_*(a,a)$, $M(a,b) = f(M_*(a,b))$ and $M(a) = f(M_*(a))$. It is easy to show that $f$ and $f^{-1}$ can be extended in a continuous fashion to the closures of their domains.

The closure of the part of $D$ between $M(a)$ and $M(a+1)$ will be called a cell and denoted $C(a)$. The closure of the part of $D$ between the lines $\{z : \operatorname{Re} z = a\}$ and $\{z : \operatorname{Re} z = b\}$ will be called a worm and denoted $W(a,b)$. Let $W_*(a,b) = f^{-1}(W(a,b))$. The degenerate worm $W(a,a)$ will be denoted $W(a)$, i.e., $W(a) = \{z \in D : \operatorname{Re} z = a\}$.

The first three lemmas in this section deal with domains $D(\eta, \varepsilon)$ for arbitrary continuous functions $\eta$.

Lemma 2. There exists $c < \infty$ independent of $\eta$ and $\varepsilon$ such that the diameter of the cell $C(a)$ is less than $c\varepsilon$ for every $a$. In particular, the diameter of $M(a)$ is less than $c\varepsilon$ for every $a$.

Proof. It is easy to see that it will suffice to prove the lemma for $a = 0$. Moreover, we will assume that $\varepsilon = 1$ because the general case follows by scaling. We start by proving the second assertion of the lemma.

Let $\partial^- M_*(a,b)$ be the part of $\partial M_*(a,b)$ on the line $\{\operatorname{Im} z = -1\}$ and let $\partial^- M(a,b) = f(\partial^- M_*(a,b))$. Consider a point $z \in M_*(0)$ with $\operatorname{Im} z \le 0$. 2-D Brownian motion starting from $z$ has no less than $1/4$ probability of exiting $D_*$ through $\partial^- M_*(-\infty, 0)$. By conformal invariance, Brownian motion starting from $f(z)$ has no less than $1/4$ probability of exiting $D$ through $\partial^- M(-\infty, 0)$. Find large $c_1 < \infty$ so that the probability that Brownian motion starting from $f(z)$ will make a closed loop around $B(f(z), 2)$ before exiting $B(f(z), c_1)$ is greater than $7/8$. Suppose for a moment that $\operatorname{Re} f(z) \ge 0$ and the distance from $f(z)$ to $(0,-1)$ is greater than $c_1 + 2$. Then $\partial B(f(z), c_1)$ does not intersect $W(0)$, and so a closed loop around $B(f(z), 2)$ which does not exit $B(f(z), c_1)$ has to intersect $\partial D$ before hitting $\partial^- M(-\infty, 0)$. In order to avoid a contradiction we have to conclude that $|f(z) - f((0,-1))| = |f(z) - (0,-1)| \le c_1 + 2$ in the case $\operatorname{Re} f(z) \ge 0$. The case $\operatorname{Re} f(z) \le 0$ can be treated in a similar way and we conclude that the diameter of $f(\{z \in M_*(0) : \operatorname{Im} z \le 0\})$ cannot be greater than $2(c_1 + 2)$. By symmetry, the diameter of the other part of $M(0)$ is bounded by the same constant and so the diameter of $M(0)$ cannot be greater than $4(c_1 + 2)$.

The proof of the first assertion follows along similar lines. Consider a point $z$ in the interval $I \stackrel{df}{=} \{z : \operatorname{Im} z = 0,\ 0 \le \operatorname{Re} z \le 1\}$. There exists $p > 0$ such that Brownian motion starting from $z$ will hit $M_*(0)$ before hitting $\partial D_*$ with probability greater than $p$ for all $z \in I$. By conformal invariance, Brownian motion starting from $f(z)$ will hit $M(0)$ before hitting $\partial D$ with probability greater than $p$. Find large $c_2 < \infty$ so that the probability that Brownian motion starting from $f(z)$ will make a closed loop around $B(f(z), 2)$ before exiting $B(f(z), c_2)$ is greater than $1 - p/2$. If the distance from $f(z)$ to $M(0)$ is greater than $c_2$ then such a loop would intersect $\partial D$ before hitting $M(0)$. This would contradict our previous bound on the probability of hitting $M(0)$, so we conclude that the distance from $f(z)$ to $M(0)$ is bounded by $c_2$. Since the diameter of $M(0)$ is bounded by $4(c_1 + 2)$, we see that $4(c_1 + 2) + 2c_2$ is a bound for the diameter of $f(I)$. The cell $C(0)$ is the union of the sets $M(a)$ for $0 \le a \le 1$; the diameter of each of these sets is bounded by $4(c_1 + 2)$ and each of these sets intersects $f(I)$. Hence, the diameter of $C(0)$ is bounded by $12(c_1 + 2) + 2c_2$.

Lemma 3. For every $\delta > 0$ there exists $c_1 > -\infty$ with the following property. If $\eta_1$ and $\eta_2$ agree on the interval $[c_1, \infty)$ then $M^{\eta_2}(-\infty, a - \delta) \subset M^{\eta_1}(-\infty, a)$ for every $a \ge 0$.

The proof will be divided into several steps.

Step 1. First we will show that for every $b > -\infty$ we can find $c_1 > -\infty$ (depending on $b$ but independent of $\eta$) so that $W(-\infty, c_1) \subset M(-\infty, b)$.

Suppose that $W_*(c_2)$ intersects $M_*(b_1)$ for some $b_1 > b$ and $c_2 \in \mathbb{R}$. Planar Brownian motion starting from $(0,0)$ can hit the line segment $I = \{z : \operatorname{Im} z = 0,\ b - 2 < \operatorname{Re} z < b\}$ before exiting $D_*$ with probability $p_0 > 0$. A Brownian particle starting from any point of $I$ can first make a closed loop around $I$ and then exit $D_*$ through the upper part of the boundary before hitting $M_*(b-3)$ or $M_*(b)$ with probability $p_1 > 0$. The same estimate holds for a closed loop around $I$ exiting through the lower part of the boundary. Since $b_1 > b$ and $W_*(c_2) \cap M_*(b_1) \ne \emptyset$, either a closed loop around $I$ exiting $D_*$ through the upper part of $\partial D_*$ must cross $W_*(-\infty, c_2)$ or this is true for the analogous path exiting through the lower part of $\partial D_*$. We see that 2-D Brownian motion starting from $(0,0)$ can hit $W_*(-\infty, c_2)$ before exiting $D_*$ with probability greater than $p_0 p_1 > 0$.

By conformal invariance of planar Brownian motion, it will suffice to show that for sufficiently large negative $c_2$, Brownian motion starting from $f(0,0)$ cannot hit $W(-\infty, c_2)$ before exiting $D$ with probability greater than $p_0 p_1/2$. By Lemma 2 the diameter of $M(0)$ is bounded by $c_3 < \infty$. Since $f(0,0) \in M(0)$ and $M(0) \cap W(0) \ne \emptyset$, we have $\operatorname{Re} f(0,0) > -c_3$. Find large $c_4 < \infty$ so that the probability that Brownian motion starting from $f(0,0)$ will make a closed loop around $B(f(0,0), 2)$ before exiting $B(f(0,0), c_4)$ is greater than $1 - p_0 p_1/2$. If such a loop is made, the process exits $D$ before hitting $W(-\infty, -c_3 - c_4)$. It follows that Brownian motion starting from $f(0,0)$ cannot hit $W(-\infty, -c_3 - c_4)$ before exiting $D$ with probability greater than $p_0 p_1/2$. This completes the proof of Step 1.

Step 2. In this step we will prove the lemma for $a = 0$. Recall the notation $\partial^- M(a,b)$ and $\partial^- M_*(a,b)$ from the proof of Lemma 2.

Reflected Brownian motion (RBM) in $D_*$ starting from any point of $M_*(0)$ has a fifty-fifty chance of hitting $\partial^- M_*(-\infty, 0)$ before hitting $\partial^- M_*(0, \infty)$. By conformal invariance, RBM in $D(\eta_1, 1)$ starting from any point of $M^{\eta_1}(0)$ can hit $\partial^- M^{\eta_1}(-\infty, 0)$ before hitting $\partial^- M^{\eta_1}(0, \infty)$ with probability $1/2$.

It follows from Lemma 2 that $M^{\eta}(0) \subset W(c_0, \infty)$ and $W^{\eta}_*(c_0, \infty) \subset M^{\eta}_*(c_0', \infty)$ for some $c_0, c_0' > -\infty$ and all $\eta$. An argument similar to that in the proof of Lemma 2 or Step 1 of this proof easily shows that for any $z \in M^{\eta}_*(c_0', \infty)$, and so for any $z \in W^{\eta}_*(c_0, \infty)$, the probability that RBM in $D_*$ starting from $z$ can hit $M_*(b)$ before hitting $\partial^- M_*(0, \infty)$ is bounded from above by $p_2(b)$, where $p_2(b) \to 0$ when $b \to -\infty$.

Fix an arbitrarily small $\delta > 0$. It is easy to see that for all $v \in M_*(-\delta)$ we have
$$P^v\big(T(\partial^- M_*(-\infty, 0)) < T(\partial^- M_*(0, \infty))\big) \ge 1/2 + p_3, \qquad (1)$$
where $p_3 = p_3(\delta) > 0$. Next find $b$ such that $p_2(b) < p_3$.

Since $f_{\eta_1}((0,-1)) = (0,-1)$, and the same holds for $f_{\eta_2}$, we have $\partial^- M^{\eta_2}(-\infty, 0) = \partial^- M^{\eta_1}(-\infty, 0)$ and $\partial^- M^{\eta_2}(0, \infty) = \partial^- M^{\eta_1}(0, \infty)$. It follows that the last displayed formula is equal to
$$P^z\big(T(\partial^- M^{\eta_1}(-\infty, 0)) < \cdots\big).$$
Let $Y'$ denote the RBM in $D(\eta_1, 1)$. In view of our assumption that $\eta_1$ and $\eta_2$ agree on the interval $[c_1, \infty)$, the last quantity is equal to
$$P^z\big(T(\partial^- M^{\eta_1}(-\infty, 0)) < \cdots\big).$$
We have $z \in M^{\eta_1}(0) \subset W^{\eta_2}(c_0, \infty)$, so the last probability is bounded by $p_2(b)$. By retracing our steps we obtain the following inequality:
$$P^v\big(T(\partial^- M_*(-\infty, 0)) < \cdots\big) \le 1/2 + p_2(b) < 1/2 + p_3.$$

This and (1) imply that $f_{\eta_2}^{-1}(M^{\eta_1}(0))$ lies totally to the right of $M_*(-\delta)$. This is equivalent to $M^{\eta_2}(-\infty, -\delta) \subset M^{\eta_1}(-\infty, 0)$. We have proved the lemma for $a = 0$.

Step 3. We will extend the result to all $a \ge 0$ in this step.

The mapping $g \stackrel{df}{=} f_{\eta_2}^{-1} \circ f_{\eta_1}$ is a one-to-one analytic function from $D_*$ onto itself. We have proved that $g(M_*(0, \infty)) \subset M_*(-\delta, \infty)$. In order to finish the proof of the lemma it will suffice to show that $g(M_*(a, \infty)) \subset M_*(a - \delta, \infty)$ for every $a \ge 0$.

Let $h(z) \stackrel{df}{=} g(z) + \delta$. It will suffice to show that $h(M_*(a, \infty)) \subset M_*(a, \infty)$ for $a \ge 0$. We already know that $h(M_*(0, \infty)) \subset M_*(0, \infty)$. Consider any two points $z \in M_*(a)$ and $v \in M_*(-\infty, a) \setminus M_*(a)$. It will suffice to show that $v \ne h(z)$. Let $u$ be such that $\operatorname{Re} u = \operatorname{Re} z$ and $\operatorname{Im} u = \operatorname{Im} v$. Suppose for a moment that $\operatorname{Im} v \ge \operatorname{Im} z$. Consider a planar Brownian motion $Y$ starting from $z$. By comparing, path by path, the hitting times of $\partial^- M_*(0, \infty)$ and $\partial M_*(0, \infty)$ for the processes $Y$ and $Y + (u - z)$, and then for the processes $Y + (u - z)$ and $Y + (u - v)$, we arrive at the following inequalities,
$$P^z\big(T(\partial^- M_*(0,\infty)) \le T(\partial M_*(0,\infty))\big) \ge P^u\big(T(\partial^- M_*(0,\infty)) \le T(\partial M_*(0,\infty))\big) > P^v\big(T(\partial^- M_*(0,\infty)) \le T(\partial M_*(0,\infty))\big).$$

Since $h(M_*(0,\infty)) \subset M_*(0,\infty)$ and $h(\partial^- M_*(0,\infty)) \subset \partial^- M_*(0,\infty)$,
$$P^v\big(T(\partial^- M_*(0,\infty)) \le T(\partial M_*(0,\infty))\big) \ge P^v\big(T(h(\partial^- M_*(0,\infty))) \le T(\partial M_*(0,\infty))\big) \ge P^v\big(T(h(\partial^- M_*(0,\infty))) \le T(h(\partial M_*(0,\infty)))\big) = P^{h^{-1}(v)}\big(T(\partial^- M_*(0,\infty)) \le T(\partial M_*(0,\infty))\big).$$
Hence

$$P^z\big(T(\partial^- M_*(0,\infty)) \le T(\partial M_*(0,\infty))\big) > P^{h^{-1}(v)}\big(T(\partial^- M_*(0,\infty)) \le T(\partial M_*(0,\infty))\big)$$
and so $v \ne h(z)$. The case $\operatorname{Im} v \le \operatorname{Im} z$ can be treated in the same way. This completes the proof of Step 3 and of the entire lemma.

Recall the definitions of cells and worms from the beginning of this section. Let $K^-(a_1, a_2)$ be the number of cells $C(k)$, $k \in \mathbb{Z}$, which lie in the worm $W(a_1, a_2)$ and let $K^+(a_1, a_2)$ be the number of cells which intersect the same worm. The cell count is relative to the conformal mapping establishing the equivalence of $D_*$ and $D$. We will say that a conformal mapping $f : D_* \to D$ is admissible if $f(-\infty) = -\infty$ and $f(\infty) = \infty$.

Lemma 4. There exists $c_3 < \infty$ with the following property. Suppose that $a_1 < a_2$ and $\eta_1(x) = \eta_2(x)$ for $x \in [a_1 - c_3\varepsilon,\ a_2 + c_3\varepsilon]$. Then $K^+_{\eta_2}(a_1, a_2) \ge K^+_{\eta_1}(a_1, a_2) - 12$ and $K^-_{\eta_2}(a_1, a_2) \ge K^-_{\eta_1}(a_1, a_2) - 12$.

Proof. Let $c_2$ be such that the cell diameter is bounded by $c_2\varepsilon$ (see Lemma 2). According to Lemma 3 we can find $c_3$ so large that if $\eta_3(x) = \eta_1(x)$ for $x > -c_3\varepsilon$ then $M^{\eta_3}(-\infty, a-1) \subset M^{\eta_1}(-\infty, a)$ for $a \ge 0$, and similarly with the roles of $\eta_1$ and $\eta_3$ reversed.

We can assume without loss of generality that $\eta_1(a_1 - c_2\varepsilon - c_3\varepsilon) = \eta_2(a_1 - c_2\varepsilon - c_3\varepsilon)$. Let
$$\eta_3(x) = \begin{cases} \eta_2(x) & \text{for } x \le a_1 - c_2\varepsilon - c_3\varepsilon,\\ \eta_1(x) & \text{otherwise.}\end{cases}$$
The conformal functions $f$ used to define the conformal equivalence of $D_*$ and $D(\eta, \varepsilon)$ in the proof of Lemma 3 have the property that $f((0,-1)) = (0,-\varepsilon)$, by assumption. In order to be able to apply Lemma 3 we introduce functions $f_n : D_* \to D(\eta_n, \varepsilon)$ with $f_n(\infty) = \infty$, $f_n(-\infty) = -\infty$ and $f_n((0,-1)) = (a_1 - c_2\varepsilon,\ \eta_n(a_1 - c_2\varepsilon) - \varepsilon)$. Lemma 3 now applies with a suitable shift.

Let $j_0$ and $j_1$ be the smallest and largest integers $k$ with the property $M(k) \cap W(a_1, a_2) \ne \emptyset$. Then $M(j_0, j_1) \subset W(a_1 - c_2\varepsilon,\ a_2 + c_2\varepsilon)$. Now we use Lemma 3 to see that
$$M^{\eta_1}(-\infty, j_0) \subset M^{\eta_3}(-\infty, j_0 + 1)$$
and
$$M^{\eta_3}(-\infty, j_1 - 1) \subset M^{\eta_1}(-\infty, j_1),$$
assuming that these sets are defined relative to the $f_n$'s. This implies that $K^+_{\eta_3}(a_1, a_2) \ge K^+_{\eta_1}(a_1, a_2) - 2$.

It is easy to see that by switching from the mapping $f_n$ to any admissible mapping we change the number $K^+_{\eta_n}(a_1, a_2)$ by at most 2. Hence, $K^+_{\eta_3}(a_1, a_2) \ge K^+_{\eta_1}(a_1, a_2) - 6$ with the usual choice of conformal mappings.

The analogous estimate applies to $\eta_3$ and $\eta_2$, by the symmetry of the real axis. Thus $K^+_{\eta_2}(a_1, a_2) \ge K^+_{\eta_1}(a_1, a_2) - 12$.

The proof of the inequality $K^-_{\eta_2}(a_1, a_2) \ge K^-_{\eta_1}(a_1, a_2) - 12$ is completely analogous.

Before stating our next lemma we introduce some notation. Recall that $\{X^1(t),\ -\infty < t < \infty\}$ is a one-dimensional Brownian motion with $X^1(0) = 0$. Suppose $\varepsilon > 0$ and let $S_0 = 0$,
$$S_k = (S_{k-1} + \varepsilon^2/4) \wedge \inf\{t > S_{k-1} : |X^1(S_{k-1}) - X^1(t)| \ge \varepsilon/4\}, \qquad k \ge 1,$$
$$S_k = (S_{k+1} - \varepsilon^2/4) \vee \sup\{t < S_{k+1} : |X^1(S_{k+1}) - X^1(t)| \ge \varepsilon/4\}, \qquad k \le -1.$$
Fix some $b > 2\varepsilon^2$. Let $J = J(b, \varepsilon)$ be the smallest $k$ such that $S_k \ge b$ and let
$$N_m = N_m(b, \varepsilon) = \sum_{k=0}^{m+1} (S_k - S_{k-1})^{-1}.$$
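For concreteness, here is a small discretized sketch (our own; the step `dt` and the constants below are illustrative, and only the $k \ge 0$ half of the sequence is computed) of the stopping times $S_k$ and the functional $N_J(b, \varepsilon)$:

```python
import numpy as np

def stopping_times(x, dt, eps, b):
    """S_0 = 0 and, for k >= 1, S_k is the first time after S_{k-1} at
    which eps^2/4 time units have elapsed or |X^1 - X^1(S_{k-1})| >= eps/4,
    computed on a path sampled as x[i] ~ X^1(i*dt); stops once S_k >= b."""
    times = [0.0]
    i = 0
    while times[-1] < b and i < len(x) - 1:
        j = i + 1
        while j < len(x) - 1 and (j - i) * dt < eps**2 / 4 and abs(x[j] - x[i]) < eps / 4:
            j += 1
        times.append(j * dt)
        i = j
    return np.array(times)

rng = np.random.default_rng(3)
dt, eps, b = 1e-4, 0.2, 1.0
n = int(b / dt) + 1
x = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])
s = stopping_times(x, dt, eps, b)
gaps = np.diff(s)                 # the increments S_k - S_{k-1}
n_j = np.sum(1.0 / gaps)          # ~ N_J(b, eps), up to boundary terms
print(len(s), n_j)
```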

Lemma 5. We have $\mathcal{E} N_J(b, \varepsilon)^k \le c(k)\, b^k \varepsilon^{-4k}$ for $k \ge 1$.

Proof. First we will assume that $\varepsilon = 1$. Since $S_k - S_{k-1}$ is bounded by $1/4$, we have for $0 < \lambda < 1/2$,

$$\mathcal{E}\exp(-\lambda(S_k - S_{k-1})) \le \mathcal{E}\big(1 - c_1\lambda(S_k - S_{k-1})\big) = 1 - c_1\lambda\,\mathcal{E}(S_k - S_{k-1}) \le \exp(-c_2\lambda).$$
We see that for $\lambda \in (0, 1/2)$ and $n \ge 2$,
$$\mathcal{P}(J \ge n) = \mathcal{P}(S_{n-1} < b) \le e^{\lambda b}\,\mathcal{E}\exp(-\lambda S_{n-1}) \le e^{\lambda b}\exp(-c_2\lambda(n-1)).$$
Let $a = n/b$ and $\lambda = 1/b$ to see that for $a \ge 2/b$,
$$\mathcal{P}(J \ge ab) \le c_3\exp(-c_2 a). \qquad (2)$$
For all $x > 4$,

$$\mathcal{P}\big(1/(S_k - S_{k-1}) > x\big) = \mathcal{P}(S_k - S_{k-1} < 1/x) \le 2\,\mathcal{P}\Big(\sup_{0 \le t \le 1/x}\big(X(S_{k-1} + t) - X(S_{k-1})\big) > 1/4\Big) \le 4\,\mathcal{P}\big(X(S_{k-1} + 1/x) - X(S_{k-1}) > 1/4\big) = 4\,\mathcal{P}\big(X(S_{k-1} + 1) - X(S_{k-1}) > \sqrt{x}/4\big) \le \frac{4}{\sqrt{2\pi x}}\exp(-x/32).$$

Therefore, $\mathcal{E}\exp\big(1/(64(S_k - S_{k-1}))\big) \le e^{c_4}$ with $c_4 < \infty$. Hence for $m \ge 1$,
$$\mathcal{P}(N_m \ge x) \le e^{-x/64}\,\mathcal{E}\exp(N_m/64) \le e^{-x/64}\big(\mathcal{E}\exp(1/(64(S_k - S_{k-1})))\big)^m \le e^{-x/64}e^{mc_4} = e^{-x/64 + mc_4}. \qquad (3)$$
Putting (2) and (3) together, for $a \ge 2/b$ and $y > 0$,
$$\mathcal{P}(N_J \ge yb) \le \mathcal{P}(J \ge ab) + \mathcal{P}(N_{ab} \ge yb) \le c_3 e^{-c_2 a} + e^{-yb/64 + c_4 ab}.$$
Let $a = (128 c_4)^{-1} y$. Since $a$ has to be bigger than $2/b$ and $b > 2$, the following estimate holds for $y > 128 c_4$,
$$\mathcal{P}(N_J/b \ge y) \le c_3 e^{-c_5 y} + e^{-yb/128} \le c_3 e^{-c_5 y} + e^{-y/64}.$$
We conclude that for all $k, b \ge 1$,
$$\mathcal{E} N_J^k \le c(k)\, b^k.$$
By Brownian scaling, the distribution of $N_J(b, \varepsilon)$ is the same as that of $\varepsilon^{-2} N_J(b\varepsilon^{-2}, 1)$. Hence,
$$\mathcal{E} N_J(b, \varepsilon)^k = \varepsilon^{-2k}\,\mathcal{E} N_J(b\varepsilon^{-2}, 1)^k \le c(k)\, b^k \varepsilon^{-4k}.$$

Let $c_0$ be the constant defined in Lemma 2, i.e., $c_0$ is such that the cell diameter is bounded by $c_0\varepsilon$. Recall that $K^-(a_1, a_2)$ (resp. $K^+(a_1, a_2)$) is the number of cells $C(k)$, $k \in \mathbb{Z}$, which lie inside (resp. intersect) the worm $W(a_1, a_2)$.

Lemma 6. For $D = D(X^1, \varepsilon)$, $k \ge 1$ and sufficiently small $\varepsilon < (a_2 - a_1)/(3c_0)$, we have
(i) $\mathcal{E} K^-(a_1, a_2) \ge c_1(a_2 - a_1)\varepsilon^{-3}$,
(ii) $\mathcal{E} K^-(a_1, a_2)^{-k} \le c(k)(a_2 - a_1)^{-k}\varepsilon^{3k}$,
(iii) $\mathcal{E} K^+(a_1, a_2)^{k} \le c(k)(a_2 - a_1)^{k}\varepsilon^{-3k}$.

Proof. (i) Find $p > 0$ with the following property. If $z_1$ and $z_2$ lie on the line $L \stackrel{df}{=} \{z : \operatorname{Im} z = 0\}$ and $|z_1 - z_2| \le 2$ then 2-D Brownian motion starting from $z_1$ can make a closed loop around $z_2$ before exiting $D_*$ with probability greater than $p$. Let $c_2 < \infty$ be such that 2-D Brownian motion starting from $z$ has more than $1 - p/2$ chance of making a closed loop around $B(z, \varepsilon^2)$ before exiting $B(z, c_2\varepsilon^2)$. Note that $c_2$ may be chosen so that it does not depend on $\varepsilon > 0$ and $z \in \mathbb{C}$.

Let $t_k = a_1 + c_0\varepsilon + k\varepsilon^2$ for $k \ge 0$. If the event
$$\mathcal{C}_k \stackrel{df}{=} \{X^1(t_{k+1}) > X^1(t_k) + 3\varepsilon\}$$
holds, we let $b^k_j = X^1(t_k) + \varepsilon + j c_2\varepsilon^2$, $1 \le j \le 1/(c_2\varepsilon) - 2$. Let
$$\Gamma^k_j = \{z \in D : \operatorname{Im} z = b^k_j,\ \operatorname{Re} z \in (t_k, t_{k+1})\}.$$
Since $t_{k+1} - t_k = \varepsilon^2$, the diameter of $\Gamma^k_j$ does not exceed $\varepsilon^2$. Choose a point $z^k_j$ which belongs to $f(L) \cap \Gamma^k_j$. 2-D Brownian motion starting from $z^k_j$ can make a closed loop around $B(z^k_j, \varepsilon^2)$ before exiting $B(z^k_j, c_2\varepsilon^2)$ with probability greater than $1 - p/2$. Since the diameter of $\Gamma^n_m$ is bounded by $\varepsilon^2$, the chance of making a closed loop around $z^n_m$ before exiting $D$ is less than $p/2$ assuming $(n, m) \ne (k, j)$. By conformal invariance, the probability that 2-D Brownian motion starting from $f^{-1}(z^k_j)$ will make a closed loop around $f^{-1}(z^n_m)$ before exiting $D_*$ is less than $p$. Hence, $|f^{-1}(z^k_j) - f^{-1}(z^n_m)| > 2$ and so $z^k_j$ and $z^n_m$ belong to different cells. Therefore $K^+(a_1 + c_0\varepsilon,\ a_2 - c_0\varepsilon)$ is at least as big as the number of different points $z^k_j$ for $0 \le k \le (a_2 - a_1 - 2c_0\varepsilon)/\varepsilon^2 - 2$. Note that for small $\varepsilon$ we have $(a_2 - a_1 - 2c_0\varepsilon)/\varepsilon^2 - 2 \ge c_3(a_2 - a_1)/\varepsilon^2$. Since the cell diameter is bounded by $c_0\varepsilon$ we have $K^-(a_1, a_2) \ge K^+(a_1 + c_0\varepsilon,\ a_2 - c_0\varepsilon)$ and so
$$K^-(a_1, a_2) \ge K^+(a_1 + c_0\varepsilon,\ a_2 - c_0\varepsilon) \ge \sum_{k=0}^{c_3(a_2 - a_1)/\varepsilon^2}\big(1/(c_2\varepsilon) - 2\big)\mathbf{1}_{\mathcal{C}_k}.$$
Note that $\mathcal{P}(\mathcal{C}_k) = p_1$ for some absolute constant $p_1 > 0$ and so for small $\varepsilon$,
$$\mathcal{E} K^-(a_1, a_2) \ge p_1\big(1/(c_2\varepsilon) - 2\big)c_3(a_2 - a_1)\varepsilon^{-2} \ge c_4(a_2 - a_1)\varepsilon^{-3}.$$

(ii) Let $U_k = (1/(c_2\varepsilon) - 2)\mathbf{1}_{\mathcal{C}_k}$, $V_n = \sum_{k=1}^{n} U_k$ and $n_0 = c_3(a_2 - a_1)/\varepsilon^2$. Let us assume that $\varepsilon$ is small so that $U_k > (c_5/\varepsilon)\mathbf{1}_{\mathcal{C}_k}$. Since the $U_k$'s are independent we have for $\lambda > 0$,
$$\mathcal{E}\exp(-\lambda V_n) = \prod_{k=1}^{n}\mathcal{E}\exp(-\lambda U_k) \le p_1^n\exp(-n c_5\lambda/\varepsilon) = \exp(n c_6 - n c_5\lambda/\varepsilon).$$
Hence, for $\lambda > 0$ and $u \ge 2/(c_3 c_5)$,
$$\mathcal{P}\big((a_2 - a_1)\varepsilon^{-3}/V_{n_0} \ge u\big) = \mathcal{P}\big(V_{n_0} \le u^{-1}(a_2 - a_1)\varepsilon^{-3}\big) \le \exp\big(\lambda u^{-1}(a_2 - a_1)\varepsilon^{-3}\big)\,\mathcal{E}\exp(-\lambda V_{n_0}) \le \exp\big(\lambda u^{-1}(a_2 - a_1)\varepsilon^{-3} + n_0 c_6 - n_0 c_5\lambda/\varepsilon\big) = \exp\big((a_2 - a_1)\varepsilon^{-2}\big(\lambda(u\varepsilon)^{-1} + c_3 c_6 - c_3 c_5\lambda/\varepsilon\big)\big) \le \exp\big((a_2 - a_1)\varepsilon^{-2}\big(c_3 c_6 - c_3 c_5\lambda/(2\varepsilon)\big)\big).$$

Now take
$$\lambda = \frac{2\varepsilon}{c_3 c_5}\left(\frac{u}{(a_2 - a_1)\varepsilon^{-2}} + c_3 c_6\right).$$
Then
$$(a_2 - a_1)\varepsilon^{-2}\big(c_3 c_6 - c_3 c_5\lambda/(2\varepsilon)\big) = -u$$
and so
$$\mathcal{P}\big((a_2 - a_1)\varepsilon^{-3}/V_{n_0} \ge u\big) \le \exp(-u).$$
This implies that for $k \ge 1$,
$$\mathcal{E} K^-(a_1, a_2)^{-k} \le \mathcal{E} V_{n_0}^{-k} \le c_7(a_2 - a_1)^{-k}\varepsilon^{3k}.$$

(iii) Fix some $p_0 < 1$ such that for every $a \in \mathbb{R}$ and $z \in M_*(a)$, the probability that planar Brownian motion starting from $z$ will hit $M_*(a-1) \cup M_*(a+1)$ before exiting from $D_*$ is less than $p_0$. By conformal invariance of Brownian motion, the probability that Brownian motion starting from $z \in M(a)$ will hit $M(a-1) \cup M(a+1)$ before exiting from $D$ is less than $p_0$.

We choose $\rho > 0$ so small that the probability that Brownian motion starting from any point of $B(x, r\rho)$ will make a closed loop around $B(x, r\rho)$ inside $B(x, r) \setminus B(x, r\rho)$ before exiting $B(x, r)$ is greater than $p_0$. Suppose that two Jordan arcs $V_1$ and $V_2$ have endpoints outside $B(x, r)$ and they both intersect $B(x, r\rho)$. Then Brownian motion starting from any point of $V_1 \cap B(x, r\rho)$ will hit $V_2$ before exiting $B(x, r)$ with probability greater than $p_0$. It follows that if $B(x, r) \subset D$ then either $M(a)$ or $M(a+1)$ must be disjoint from $B(x, r\rho)$.

We slightly modify the definition of the $S_k$'s considered in Lemma 5 by setting $S_0 = a_1 - c_0\varepsilon$. The rest of the definition remains unchanged, i.e.,
$$S_k = (S_{k-1} + \varepsilon^2/4) \wedge \inf\{t > S_{k-1} : |X^1(S_{k-1}) - X^1(t)| \ge \varepsilon/4\}, \qquad k \ge 1,$$
$$S_k = (S_{k+1} - \varepsilon^2/4) \vee \sup\{t < S_{k+1} : |X^1(S_{k+1}) - X^1(t)| \ge \varepsilon/4\}, \qquad k \le -1.$$
Let $J$ be the smallest $k$ such that $S_k \ge a_2 + c_0\varepsilon$. For each $k$ we connect the points $(S_k, X^1(S_k))$ and $(S_{k+1}, X^1(S_{k+1}))$ by the polygonal line consisting of the three segments
$$\{z : \operatorname{Im} z = X^1(S_k),\ S_k \le \operatorname{Re} z \le (S_k + S_{k+1})/2\},$$
$$\{z : \operatorname{Re} z = (S_k + S_{k+1})/2,\ \operatorname{Im} z \in [X^1(S_k), X^1(S_{k+1})]\},$$
$$\{z : \operatorname{Im} z = X^1(S_{k+1}),\ (S_k + S_{k+1})/2 \le \operatorname{Re} z \le S_{k+1}\}.$$
Here $[X^1(S_k), X^1(S_{k+1})]$ denotes the interval with endpoints $X^1(S_k)$ and $X^1(S_{k+1})$, not necessarily in this order since $X^1(S_{k+1})$ may be smaller than $X^1(S_k)$. We cover each such polygonal line with balls $B(x^k_j, r^k_j)$, $1 \le j \le m_k$. It is elementary to check that we can choose the balls so that
(i) $B(x^k_j, r^k_j) \subset D$,
(ii) the set $\bigcup_{1 \le j \le m_k} B(x^k_j, r^k_j\rho)$ is connected and contains $(S_k, X^1(S_k))$ and $(S_{k+1}, X^1(S_{k+1}))$,

(iii) the radii $r^k_j$, $1 \le j \le m_k$, are not smaller than $\min_{m=k,k+1,k+2}(S_m - S_{m-1})/2$ and so
(iv) $m_k$ is not greater than $(4\varepsilon/\rho)\max_{m=k,k+1,k+2} 1/(S_m - S_{m-1})$.
The total number $\widetilde{m}$ of balls needed to connect all the points $(S_k, X^1(S_k))$, $1 \le k \le J$, is bounded by
$$\widetilde{m} \le \sum_{k=1}^{J} m_k \le \frac{12\varepsilon}{\rho}\sum_{k=0}^{J+1}(S_k - S_{k-1})^{-1}.$$
In the notation of Lemma 5,
$$\widetilde{m} \le c_8\varepsilon N_J(a_2 - a_1 + 2c_0\varepsilon,\ \varepsilon).$$
Since the cell diameter is bounded by $c_0\varepsilon$ (Lemma 2), none of the cells which touch $W(a_1, a_2)$ can extend beyond $W(a_1 - c_0\varepsilon,\ a_2 + c_0\varepsilon)$. Every curve $M(a)$ which lies totally inside $W(a_1 - c_0\varepsilon,\ a_2 + c_0\varepsilon)$ must intersect at least one ball $B(x^k_j, r^k_j\rho)$, $1 \le k \le J$, $1 \le j \le m_k$. Since $M(a)$ and $M(a+1)$ cannot intersect the same ball $B(x^k_j, r^k_j\rho)$, it follows that $K^+(a_1, a_2)$ is bounded by $c_8\varepsilon N_J(a_2 - a_1 + 2c_0\varepsilon, \varepsilon)$. Note that $a_2 - a_1 + 2c_0\varepsilon \le c_9(a_2 - a_1)$ for small $\varepsilon$. Hence by Lemma 5, for all $k \ge 1$,
$$\mathcal{E}[K^+(a_1, a_2)]^k \le \mathcal{E}\, c_8^k\varepsilon^k N_J(c_9(a_2 - a_1), \varepsilon)^k \le c_8^k\varepsilon^k c(k) c_9^k(a_2 - a_1)^k\varepsilon^{-4k} = c_{10}(a_2 - a_1)^k\varepsilon^{-3k}.$$

Lemma 7. There exist constants $0 < c_1 < c_2 < \infty$ such that $c(\varepsilon) \in (c_1, c_2)$ for all $\varepsilon > 0$ and, for every fixed $d > 0$, both $c(\varepsilon)\varepsilon^3 K^+(0,d)$ and $c(\varepsilon)\varepsilon^3 K^-(0,d)$ converge in probability to $d$ as $\varepsilon \to 0$.

Remark 1. Since the functions $d \to K^+(0,d)$ and $d \to K^-(0,d)$ are non-decreasing, we immediately obtain the following corollary: The functions $d \to \varepsilon^3 c(\varepsilon)K^+(0,d)$ and $d \to \varepsilon^3 c(\varepsilon)K^-(0,d)$ converge to the identity in the metric of uniform convergence on compact intervals, in probability as $\varepsilon \to 0$.

Proof. Let $c_3$ be the constant defined in Lemma 4. Suppose that $\alpha > c_3$ is a large positive constant; we will choose a value for $\alpha$ later in the proof. Let $a_k = k(2c_3 + \alpha)\varepsilon$ for integer $k$. Let $\eta_k$ be the continuous function which is equal to $X^1$ on $[a_k, a_{k+1}]$ and constant on $(-\infty, a_k]$ and $[a_{k+1}, \infty)$. Let $D_k = D(\eta_k, \varepsilon)$. For every $k$ find a conformal mapping $f_k : D_* \to D_k$ so that $f_k((0,-1)) = (a_k,\ X^1(a_k) - \varepsilon)$, $f_k(\infty) = \infty$ and $f_k(-\infty) = -\infty$. Let $\widetilde{K}^-_k = K^-(a_k + c_3\varepsilon,\ a_{k+1} - c_3\varepsilon)$ be defined relative to $D_k$ and $f_k$. Then the random variables $\widetilde{K}^-_k$ are independent and identically distributed. According to Lemma 6,
$$\mathcal{E}\widetilde{K}^-_k \ge c_4\varepsilon^{-2}\alpha, \qquad \mathcal{E}\widetilde{K}^-_k \le c_5\varepsilon^{-2}\alpha, \qquad \mathcal{E}(\widetilde{K}^-_k)^2 \le c_6\varepsilon^{-4}\alpha^2. \qquad (4)$$
Hence, $\mathcal{E}\widetilde{K}^-_k = c_7(\varepsilon)\varepsilon^{-2}\alpha$ where the $c_7(\varepsilon)$ are uniformly bounded away from 0 and $\infty$ for all $\varepsilon$.

Let $D = D(X^1, \varepsilon)$. We let $K^-_k = K^-(a_k + c_3\varepsilon,\ a_{k+1} - c_3\varepsilon)$ be defined relative to $D$ and the usual conformal mapping $f : D_* \to D$. By Lemma 4, $|K^-_k - \widetilde{K}^-_k| \le 12$ for every $k$.

Fix some $d > 0$ and let $k_0$ be the largest integer such that $a_{k_0} \le d$. Then
$$|k_0 - d/(\varepsilon(2c_3 + \alpha))| < 2.$$

Note that $K^-(0,d) \ge \sum_{k=1}^{k_0} K^-_k$. Recall that the $\widetilde{K}^-_k$'s are independent and so $\mathcal{E}\sum_{k=1}^{k_0}\widetilde{K}^-_k = \sum_{k=1}^{k_0}\mathcal{E}\widetilde{K}^-_k$ and $\operatorname{Var}\sum_{k=1}^{k_0}\widetilde{K}^-_k = \sum_{k=1}^{k_0}\operatorname{Var}\widetilde{K}^-_k$. We use Chebyshev's inequality and (4) to see that for arbitrary $\delta, p > 0$ one can find $\varepsilon_0 > 0$ such that for $\varepsilon < \varepsilon_0$,
$$\mathcal{P}\big(K^-(0,d) > (1-\delta)c_7(\varepsilon)\varepsilon^{-2}\alpha[d/(\varepsilon(2c_3+\alpha)) - 2]\big) \qquad (5)$$
$$\ge \mathcal{P}\Big(\sum_{k=1}^{k_0} K^-_k > (1-\delta)c_7(\varepsilon)\varepsilon^{-2}\alpha[d/(\varepsilon(2c_3+\alpha)) - 2]\Big)$$
$$\ge \mathcal{P}\Big(\sum_{k=1}^{k_0} \widetilde{K}^-_k > (1-\delta)c_7(\varepsilon)\varepsilon^{-2}\alpha[d/(\varepsilon(2c_3+\alpha)) - 2] + 12 k_0\Big)$$
$$\ge \mathcal{P}\Big(\sum_{k=1}^{k_0} \widetilde{K}^-_k > (1-\delta)c_7(\varepsilon)\varepsilon^{-2}\alpha[d/(\varepsilon(2c_3+\alpha)) - 2] + 12[d/(\varepsilon(2c_3+\alpha)) + 2]\Big)$$
$$\ge \mathcal{P}\Big(\sum_{k=1}^{k_0} \widetilde{K}^-_k > (1-\delta/2)c_7(\varepsilon)\varepsilon^{-2}\alpha[d/(\varepsilon(2c_3+\alpha)) - 2]\Big) > 1 - p.$$
In the same way we obtain

$$P\Bigl(\sum_{k=-1}^{k_0+1} K_k^- < (1+\delta/2)c_7(\varepsilon)\varepsilon^{-2}\alpha[d/(\varepsilon(2c_3+\alpha)) + 2]\Bigr) > 1 - p/2.$$

The maximum cell diameter is bounded by $c_8\varepsilon$ according to Lemma 2. A cell must either lie inside $\bigcup_k W(a_k + c_3\varepsilon,\, a_{k+1} - c_3\varepsilon)$ or intersect $\bigcup_k W(a_k - c_3\varepsilon - c_8\varepsilon,\, a_k + c_3\varepsilon + c_8\varepsilon)$. It follows that

$$K^+(0,d) \le \sum_{k=-1}^{k_0+1} K_k^- + \sum_{k=-1}^{k_0+1} K_k^+$$

where $K_k^+ = K^+(a_k - c_3\varepsilon - c_8\varepsilon,\, a_k + c_3\varepsilon + c_8\varepsilon)$ and the cells are counted in $D$ relative to the usual conformal mapping. By Lemma 6,

$$E K_k^+ \le c_9 \cdot 2(c_3 + c_8)\varepsilon^{-2} = c_{10}\varepsilon^{-2}$$

and so,

$$P\Bigl(\sum_{k=-1}^{k_0+1} K_k^+ \ge (2/p)[d/(\varepsilon(2c_3+\alpha)) + 2]c_{10}\varepsilon^{-2}\Bigr) \le p/2.$$

Now choose α = α(δ, p) sufficiently large so that for all ε<ε0,

$$(2/p)[d/(\varepsilon(2c_3+\alpha)) + 2]c_{10} < (\delta/2)c_7(\varepsilon)\alpha[d/(\varepsilon(2c_3+\alpha)) - 2].$$

Then for sufficiently small $\varepsilon > 0$,

$$
\begin{aligned}
&P\bigl(K^+(0,d) < (1+\delta)c_7(\varepsilon)\varepsilon^{-2}\alpha[d/(\varepsilon(2c_3+\alpha)) - 2]\bigr) \\
&\quad\ge P\Bigl(\sum_{k=-1}^{k_0+1} K_k^- + \sum_{k=-1}^{k_0+1} K_k^+ < (1+\delta)c_7(\varepsilon)\varepsilon^{-2}\alpha[d/(\varepsilon(2c_3+\alpha)) - 2]\Bigr) \\
&\quad\ge P\Bigl(\sum_{k=-1}^{k_0+1} K_k^- < (1+\delta/2)c_7(\varepsilon)\varepsilon^{-2}\alpha[d/(\varepsilon(2c_3+\alpha)) + 2]\Bigr) \\
&\qquad - P\Bigl(\sum_{k=-1}^{k_0+1} K_k^+ \ge (2/p)[d/(\varepsilon(2c_3+\alpha)) + 2]c_{10}\varepsilon^{-2}\Bigr) \\
&\quad\ge 1 - p/2 - p/2 = 1 - p.
\end{aligned}
$$

Since $\delta$ and $p$ can be taken arbitrarily small and $\alpha$ can be taken arbitrarily large, a comparison of (5) with the last inequality easily implies the lemma.
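The concentration step behind (5) is elementary: a sum of roughly $d/(\varepsilon(2c_3+\alpha))$ i.i.d. cell counts deviates from its mean by a fixed fraction with probability controlled by Chebyshev's inequality, and that probability vanishes as the number of summands grows. A minimal numerical sketch of this mechanism (the exponential distribution of the counts is an illustrative assumption, not the law of the $K_k^-$):

```python
import random

def deviation_probability(k0, mean, delta=0.1, trials=2000, seed=7):
    """Estimate P(|S - k0*mean| > delta*k0*mean) for S a sum of k0 i.i.d.
    nonnegative variables; Chebyshev bounds this by Var(S)/(delta*k0*mean)^2,
    which tends to 0 as k0 grows."""
    rng = random.Random(seed)
    bad = 0
    for _ in range(trials):
        s = sum(rng.expovariate(1.0 / mean) for _ in range(k0))
        if abs(s - k0 * mean) > delta * k0 * mean:
            bad += 1
    return bad / trials

p_few = deviation_probability(10, 5.0)     # few summands: deviations common
p_many = deviation_probability(1000, 5.0)  # many summands: deviations rare
```

The same effect drives the proof: the number of summands $k_0$ grows like $\varepsilon^{-1}$, so the relative fluctuation of the total cell count disappears as $\varepsilon \to 0$.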

Lemma 8. There exists $c < \infty$ such that for all $b > 0$ and small $\varepsilon$,

$$\inf_{x \in W(0)} E^x\bigl(\tau(W(-b,b)) \mid T(W(-b)) < T(W(b))\bigr) \ge c\,b^2\varepsilon^{-2}$$

with probability greater than $p$.

Proof. Fix some $b > 0$. Let $G^*_A(x,y)$ be the Green function for the reflected Brownian motion (RBM) in $D_*$ killed on exiting $A \subset D_*$. We define in an analogous way $G_A(x,y)$ to be the Green function for RBM in $D$ killed on exiting $A \subset D$. The Green function is conformally invariant, so

$$G_{W(-b,b)}(f(x), f(y)) = G^*_{W_*(-b,b)}(x,y)$$

for $x, y \in W_*(-b,b)$. Lemma 7 implies that for any $p < 1$ and $c_1 > 0$ we can find small $\varepsilon_0$ such that for $\varepsilon < \varepsilon_0$ we have

$$K^-(0,b/2) > (1-c_1)K^-(0,b)/2. \tag{6}$$

It is easy to see that $G^*_{M_*(j_1,j_2)}(x,y) \ge c_2(j_2-j_1)$ for $x, y \in M_*\bigl(j_1+(j_2-j_1)/8,\; j_2-(j_2-j_1)/8\bigr)$.

Suppose that $x \in W_*(0)$. Lemma 2 and (6) easily imply that for small $\varepsilon$, $\mathrm{Re}\,x$ belongs to the interval with endpoints

$$\bigl(-K^-(-b,0)\bigr) + \bigl(K^-(0,b) - (-K^-(-b,0))\bigr)/8$$

and

$$K^-(0,b) - \bigl(K^-(0,b) - (-K^-(-b,0))\bigr)/8$$

with $P$-probability greater than $p$. If this event occurs and (6) holds then

$$G^*_{M_*(-K^-(-b,0),\,K^-(0,b))}(x,y) > c_2\bigl(K^-(0,b) + K^-(-b,0)\bigr) > c_3 K^-(0,b)$$

for $y \in M_*\bigl(-K^-(-b/2,0),\, K^-(0,b/2)\bigr)$. By conformal invariance,

$$G_{M(-K^-(-b,0),\,K^-(0,b))}(f(x), y) > c_3 K^-(0,b)$$

for $y \in M\bigl(-K^-(-b/2,0),\, K^-(0,b/2)\bigr)$. Since $M(-K^-(-b,0),\, K^-(0,b)) \subset W(-b,b)$, we have for $z \in W(0)$,

$$G_{W(-b,b)}(z,y) > G_{M(-K^-(-b,0),\,K^-(0,b))}(z,y) > c_3 K^-(0,b)$$

for $y \in M\bigl(-K^-(0,b)/4,\, K^-(0,b)/4\bigr) \subset W(-b/2,b/2)$, for small $\varepsilon$ with probability greater than $p$. Since the area of $W(-b/2,b/2)$ is equal to $b\varepsilon$ we obtain for $x \in W(0)$ with $P$-probability greater than $p$,

$$E^x \tau(W(-b,b)) = \int_{W(-b,b)} G_{W(-b,b)}(x,y)\,dy$$

$$\ge \int_{W(-b/2,b/2)} G_{W(-b,b)}(x,y)\,dy \ge b\varepsilon\, c_3 K^-(0,b). \tag{7}$$

Next we derive a similar estimate for the conditioned process. Assume that (6) is true. Then the process starting from a point of $W(b/2)$ has at least $1/8$ probability of hitting $W(-b)$ before hitting $W(b)$, and the probability is even greater for the process starting from $W(-b/2)$. The probability of hitting $W(-b/2)$ before hitting $W(b/2)$ for the process starting from $W(0)$ is very close to $1/2$. Bayes' theorem now implies that for the process starting from a point of $W(0)$ and conditioned to hit $W(-b)$ before hitting $W(b)$, the chance of hitting $W(b/2)$ before hitting $W(-b/2)$ is between $1/16$ and $15/16$. We have therefore, assuming (6) and using (7),

$$E^x\bigl(\tau(W(-b,b)) \mid T(W(-b)) < T(W(b))\bigr) \ge (1/16)\,b\varepsilon\,c_3 K^-(0,b)$$

for $x \in W(0)$.

Again by Lemma 6,

$$\inf_{x \in W(0)} E^x\bigl(\tau(W(-b,b)) \mid T(W(-b)) < T(W(b))\bigr) \ge c\,b^2\varepsilon^{-2}$$

with probability greater than $p$, which completes the proof.

Lemma 9. Suppose that $\eta$ is a continuous function, $\varepsilon > 0$ and $D = D(\eta, \varepsilon)$. Assume that the diameter of $M(a-1,a+1)$ is not greater than $d$. There exists $c_1 < \infty$ independent of $\eta$ and $\varepsilon$ such that for all $a \in \mathbb{R}$ and all $x \in M(a)$,

$$E^x T\bigl(M(a-1)\cup M(a+1)\bigr) \le c_1 d^2,$$
$$E^x T\bigl(M(a-1)\cup M(a+1)\bigr)^2 \le c_1 d^4,$$
$$E^x\bigl[T(M(a-1)\cup M(a+1)) \mid T(M(a-1)) < T(M(a+1))\bigr] \le c_1 d^2,$$
$$E^x\bigl[T(M(a-1)\cup M(a+1))^2 \mid T(M(a-1)) < T(M(a+1))\bigr] \le c_1 d^4.$$

Proof. First we deal with the case $d = 1$. Consider a point $x \in M(a)$. As a consequence of the assumption that the diameter of $M(a-1,a+1)$ is not greater than $d = 1$, the set $M(a-1,a+1)$ lies below or on the line $Q = \{y \in \mathbb{C} : \mathrm{Im}\,y = \mathrm{Im}\,x + 1\}$. Let $L = f(\{y \in D_* : \mathrm{Im}\,y = 0\})$. Let $V$ be the union of $M(a-1)$, $M(a+1)$ and the part of $L$ between $M(a-1)$ and $M(a+1)$. Without loss of generality assume that $x$ lies on or below $L$. The vertical component of the reflected Brownian motion $\mathcal{Y}$ in $D$ starting from $x$ will have only a nonnegative drift (singular drift on the boundary) until $\mathcal{Y}$ hits the upper part of the boundary of $D$. However, the process has to hit $V$ before hitting the upper part of $\partial D$. Let $S$ be the hitting time of $V$ by $\mathcal{Y}$. Since $V$ lies below $Q$, there are constants $t_0 < \infty$ and $p_0 > 0$ independent of $\eta$ and $\varepsilon$ such that the process $\mathcal{Y}$ can hit $V$ before $t_0$ with probability greater than $p_0$. At time $S$ the process can be either at a point of $M(a-1)\cup M(a+1)$ or at a point of $L$.

Suppose $\mathcal{Y}(S) \in L$. It is easy to see that the reflected Brownian motion in $D_*$ starting from any point $z$ of $\{y \in D_* : \mathrm{Im}\,y = 0\}$ between $M_*(a-1)$ and $M_*(a+1)$ can hit $M_*(a-1)\cup M_*(a+1)$ before hitting the boundary of $D_*$ with probability greater than $p_1 > 0$, where $p_1$ is independent of $z$ or $a$. By conformal invariance, reflected Brownian motion in $D$ starting from any point $z$ of $L \cap M(a-1,a+1)$ can hit $M(a-1)\cup M(a+1)$ before hitting the boundary of $D$ with probability greater than $p_1$.

Since the diameter of $M(a-1,a+1)$ is bounded by $1$, there exists $t_1 < \infty$ such that 2-dimensional Brownian motion starting from any point of $M(a-1,a+1)$ will exit this set before time $t_1$ with probability greater than $1 - p_1/2$. It follows that the reflected Brownian motion in $D$ starting from any point $z$ of $L \cap M(a-1,a+1)$ can hit $M(a-1)\cup M(a+1)$ before time $t_1$ and before hitting the boundary of $D$ with probability greater than $p_1/2$. Let $t_2 = t_0 + t_1$.
Our argument so far has shown that the process $\mathcal{Y}$ starting from $x$ will hit $M(a-1)\cup M(a+1)$, and hence exit the set $M(a-1,a+1)$, before time $t_2$ with probability greater than $p_2 \overset{df}{=} p_0 p_1/2$. By the repeated application of the strong Markov property at times $kt_2$ we see that the chance that $\mathcal{Y}$ does not hit $M(a-1)\cup M(a+1)$ before $kt_2$ is less than $(1-p_2)^k$. It follows that the

distribution of the hitting time of $M(a-1)\cup M(a+1)$ for reflected Brownian motion in $D$ starting from any point of $M(a)$ has an exponential tail and so it has all finite moments. This proves the first two formulae of the lemma in the case $d = 1$. The probability that the reflected Brownian motion in $D$ starting from a point of $M(a)$ will hit $M(a-1)$ before hitting $M(a+1)$ is equal to $1/2$. This and the first two formulae imply the last two formulae when $d = 1$. Note that the estimates are independent of $\eta$ and $\varepsilon$ and so we can apply them for all $d$ with appropriate scaling.
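The tail bound at the end of the proof, $P(T > kt_2) \le (1-p_2)^k$, forces all moments of $T$ to be finite, since $E T^n \le \sum_k ((k+1)t_2)^n (1-p_2)^k < \infty$. A quick sketch with a geometric proxy for the exit time (the parameter values are illustrative, not the constants of the lemma):

```python
import random

def sample_geometric_tail_time(p2, t2, rng):
    """Sample T = k*t2 where k counts independent rounds, each succeeding
    with probability p2, until the first success -- so P(T > k*t2) = (1-p2)**k."""
    k = 1
    while rng.random() > p2:
        k += 1
    return k * t2

def empirical_moments(p2=0.3, t2=1.0, n_moments=3, trials=200000, seed=1):
    """Estimate E[T], E[T^2], E[T^3]; all are finite for a geometric tail."""
    rng = random.Random(seed)
    sums = [0.0] * n_moments
    for _ in range(trials):
        t = sample_geometric_tail_time(p2, t2, rng)
        for n in range(n_moments):
            sums[n] += t ** (n + 1)
    return [s / trials for s in sums]

m1, m2, m3 = empirical_moments()
```

For a geometric number of rounds the first moment is exactly $t_2/p_2$, and the higher empirical moments stabilize as well, illustrating why the exponential tail suffices for the lemma.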

Let $\nu_a^*$ denote the uniform probability distribution on $M_*(a)$ and let $\nu_a = \nu_a^* \circ f^{-1}$. For a planar set $A$, its area will be denoted $|A|$.

Lemma 10. There exists $c_1 < \infty$ such that for all $a$,

$$E^{\nu_a}\bigl(T(M(a-1)) \mid T(M(a-1)) < T(M(a+1))\bigr) \le c_1\,|M(a-1,a+1)|.$$

Proof. Let $G^*_{M_*(a-1,a+1)}(\nu_a^*, y)$ be the Green function for the reflected Brownian motion in $D_*$ with the initial distribution $\nu_a^*$ and killed upon exiting $M_*(a-1,a+1)$. By analogy, $G_{M(a-1,a+1)}(\nu_a, y)$ will denote the Green function for the reflected Brownian motion in $D$ with the initial distribution $\nu_a$ and killed upon exiting $M(a-1,a+1)$. It is elementary to see that

$$G^*_{M_*(a-1,a+1)}(\nu_a^*, y) = c_2\bigl(1 - |\mathrm{Re}\,y - a|\bigr)$$

for $y \in M_*(a-1,a+1)$. In particular, $G^*_{M_*(a-1,a+1)}(\nu_a^*, y) \le c_2 < \infty$ for all $y$. By the conformal invariance of the Green function, $G_{M(a-1,a+1)}(\nu_a, f(y)) = G^*_{M_*(a-1,a+1)}(\nu_a^*, y)$. Thus $G_{M(a-1,a+1)}(\nu_a, x) \le c_2$ for all $x$ and so

$$E^{\nu_a} T\bigl(M(a-1)\cup M(a+1)\bigr) = \int_{M(a-1,a+1)} G_{M(a-1,a+1)}(\nu_a, x)\,dx \le c_2\,|M(a-1,a+1)|.$$

The event $T(M(a-1)) < T(M(a+1))$ has probability $1/2$, by symmetry, and the lemma follows.

Lemma 11. There exists $c < \infty$ such that for all $a, b > 0$ and small $\varepsilon$,

$$\mathcal{E}\sup_{x \in W(0)} E^x\bigl(\tau(W(-a,b))^2 \mid T(W(-a)) < T(W(b))\bigr) \le c\,a(a+b)^3\varepsilon^{-4}.$$


Proof. Step 1. The estimates in Step 1 of the proof do not depend on $X^1$. In other words, they are valid in $D(\eta,\varepsilon)$ for any continuous function $\eta$.

Let $\mathcal{Y}^*_t$ be the reflected Brownian motion in $D_*$, $\mathcal{Y}^*(0) = x_0 \in M_*(0)$, let $T_0^* = 0$, let $d_k$ be defined by $\mathrm{Re}\,\mathcal{Y}^*(T_k^*) = d_k$ and let

$$T_k^* = \inf\bigl\{t > T_{k-1}^* : \mathcal{Y}^*_t \in M_*(d_{k-1}-1)\cup M_*(d_{k-1}+1)\bigr\}, \quad k \ge 1.$$

Note that $\{d_k\}$ is a simple symmetric random walk.

Fix some sequence $(j_0, j_1, j_2, \ldots)$ of integers such that $j_0 = 0$ and $|j_k - j_{k-1}| = 1$ for all $k$. We will denote the event $\{d_k = j_k, \forall k\}$ by $\mathcal{J}_*$.

Note that if $|j-k| = 1$ then $\hat k = 2j - k$ is different from $k$ and $|\hat k - j| = 1$. For $j$ and $k$ such that $|j-k| = 1$ we define $g_{j,k}(x,y)$ for $x \in M_*(j)$ and $y \in M_*(k)$ by

$$g_{j,k}(x,y)\,dy = P^x\bigl(\mathcal{Y}^*(T(M_*(k))) \in dy \mid T(M_*(k)) < T(M_*(\hat k))\bigr).$$

Recall that $\nu_a^*$ denotes the uniform probability distribution on $M_*(a)$. It is easy to see that there exists $c_1 < \infty$ such that for all $j, k$ such that $|j-k| = 1$ and all $x_1, x_2 \in M_*(j)$, $y \in M_*(k)$,

$$g_{j,k}(x_1,y) \le c_1\,g_{j,k}(x_2,y). \tag{8}$$

It follows that for all $x_1, x_2 \in M_*(j_k)$ and $y \in M_*(j_{k+1})$,

$$P^{x_0}\bigl(\mathcal{Y}^*(T^*_{k+1}) \in dy \mid \mathcal{J}_*, \mathcal{Y}^*(T^*_k) = x_1\bigr) \le c_1\,P^{x_0}\bigl(\mathcal{Y}^*(T^*_{k+1}) \in dy \mid \mathcal{J}_*, \mathcal{Y}^*(T^*_k) = x_2\bigr). \tag{9}$$

Let $\mathcal{Y}$ be the reflected Brownian motion in $D$, let $T_0 = 0$ and let

$$T_k = \inf\bigl\{t > T_{k-1} : \mathcal{Y}_t \in M(d_{k-1}-1)\cup M(d_{k-1}+1)\bigr\}, \quad k \ge 1.$$

The $d_k$'s are the same for $\mathcal{Y}$ and $\mathcal{Y}^*$ because $\mathcal{Y}$ is an appropriate time-change of $f(\mathcal{Y}^*)$. The event $\{d_k = j_k, \forall k\}$ will be called $\mathcal{J}$ in this new context. We obtain from (9),

$$P^{y_0}\bigl(\mathcal{Y}(T_{k+1}) \in dy \mid \mathcal{J}, \mathcal{Y}(T_k) = x_1\bigr) \le c_1\,P^{y_0}\bigl(\mathcal{Y}(T_{k+1}) \in dy \mid \mathcal{J}, \mathcal{Y}(T_k) = x_2\bigr). \tag{10}$$

By (10) and Lemma 10,

$$E^{y_0}\bigl(T_{m+1} - T_m \mid \mathcal{J}, \mathcal{Y}(T_k) = x_1\bigr) \le c_3\,|M(j_m-1, j_m+1)|$$

and so

$$E^{y_0}\bigl((T_{m+1}-T_m)(T_k-T_{k-1}) \mid \mathcal{J}, \mathcal{F}(T_k)\bigr) \le c_3\,|M(j_m-1, j_m+1)|\,(T_k - T_{k-1}).$$

If we remove the conditioning on $\mathcal{F}(T_k)$ and apply again (10) and Lemma 10, we obtain the following estimate for all $m \ge k+1$ and $k \ge 2$,

$$E^{y_0}\bigl((T_{m+1}-T_m)(T_k-T_{k-1}) \mid \mathcal{J}\bigr) \le c_4\,|M(j_m-1, j_m+1)|\,|M(j_{k-1}-1, j_{k-1}+1)|. \tag{11}$$

By Lemma 9, for all $m \ge 0$,

$$E^{y_0}\bigl((T_{m+1}-T_m)^2 \mid \mathcal{J}\bigr) \le c_5\,\mathrm{diam}\bigl(M(j_m-1, j_m+1)\bigr)^4$$

and so, by the Cauchy-Schwarz inequality, for any $k \ge 1$, $m \ge 0$,

$$E^{y_0}\bigl((T_{m+1}-T_m)(T_k-T_{k-1}) \mid \mathcal{J}\bigr) \le c_6\,\mathrm{diam}\bigl(M(j_m-1, j_m+1)\bigr)^2\,\mathrm{diam}\bigl(M(j_{k-1}-1, j_{k-1}+1)\bigr)^2.$$

This and Lemma 2 imply

$$E^{y_0}\bigl((T_{m+1}-T_m)(T_k-T_{k-1}) \mid \mathcal{J}\bigr) \le c_7\,\varepsilon^4. \tag{12}$$

Fix some integers $I_a < 0$ and $I_b > 0$ and let $k_0$ be the smallest integer such that $j_{k_0} = I_a$ or $j_{k_0} = I_b$. Let $\mathcal{N}_k$ be the number of $m$ such that $m \le k_0$ and $k = j_m$. Let $\mathcal{N} = \max_k \mathcal{N}_k$. We use (11) and (12) to derive the following estimate,

$$
\begin{aligned}
E^{y_0}(T_{k_0}^2 \mid \mathcal{J}) &= E^{y_0}\Bigl(\Bigl(\sum_{k=0}^{k_0-1}(T_{k+1}-T_k)\Bigr)^2 \Bigm| \mathcal{J}\Bigr) \\
&\le E^{y_0}\Bigl(\sum_{\substack{k,m=1\\|k-m|\ge 2}}^{k_0-1}(T_{m+1}-T_m)(T_{k+1}-T_k) \Bigm| \mathcal{J}\Bigr)
+ E^{y_0}\Bigl(\sum_{\substack{k,m=0\\|k-m|\le 1}}^{k_0-1}(T_{m+1}-T_m)(T_{k+1}-T_k) \Bigm| \mathcal{J}\Bigr) \\
&\quad + E^{y_0}\Bigl(2\sum_{m=0}^{k_0-1}(T_{m+1}-T_m)(T_1-T_0) \Bigm| \mathcal{J}\Bigr) \\
&\le \sum_{k,m=I_a}^{I_b} c_4\,\mathcal{N}_m|M(m-1,m+1)|\,\mathcal{N}_k|M(k-2,k)| + \sum_{k=0}^{k_0-1} c_8\varepsilon^4 \\
&\le c_4\Bigl(\sum_{m=I_a}^{I_b}\mathcal{N}|M(m-1,m+1)|\Bigr)^2 + c_8 k_0\varepsilon^4 \\
&\le c_9\,\mathcal{N}^2\Bigl|\bigcup_{m=I_a}^{I_b} C(m)\Bigr|^2 + c_8 k_0\varepsilon^4.
\end{aligned}
$$


Now we remove the conditioning on $\mathcal{J}$. The quantities $\mathcal{N}$ and $k_0$ become random variables defined relative to the random sequence $\{d_k\}$ in place of $\{j_k\}$. We recall that $\{d_k\}$ is a simple random walk on the integers and so $E k_0 \le c_{10}|I_a I_b|$ and $E\mathcal{N}^2 \le c_{11}|I_a I_b|$. This implies that

$$E^{y_0} T_{k_0}^2 \le c_{12}\,|I_a I_b|\,\Bigl|\bigcup_{m=I_a}^{I_b} C(m)\Bigr|^2 + c_{13}\,|I_a I_b|\,\varepsilon^4.$$

Before we go to the next step of the proof we note that the last estimate applies not only to $y_0 \in M(0)$ but to $y_0 \in W(0)$ as well. To see this, note that if $y_0 \in W(0)$ then $y_0 \in M(k_1)$ where $k_1$ is not necessarily an integer. Then $T_{k_0}$ represents the exit time from $M(k_1 - I_a,\, k_1 + I_b)$.

Step 2. Fix some $a, b > 0$ and apply the last result with $I_a = -K^+(-a,0)$ and $I_b = K^+(0,b)$. With this choice of $I_a$ and $I_b$ we have $\tau(W(-a,b)) \le T_{k_0}$ for $y_0 \in W(0)$. By Lemma 2, cells which intersect the worm $W(-a,b)$ cannot extend beyond $W(-a-c_{14}\varepsilon,\, b+c_{14}\varepsilon)$. Hence the area of $\bigcup_{m=I_a}^{I_b} C(m)$ is bounded by $c_{15}(a+b)\varepsilon$. This yields for small $\varepsilon$,

$$E^{y_0} T_{k_0}^2 \le c_{16}\,|I_a I_b|\,(a+b)^2\varepsilon^2 + c_{13}\,|I_a I_b|\,\varepsilon^4 \le c_{17}\,|I_a I_b|\,(a+b)^2\varepsilon^2.$$

The probability of $T(W(-a)) < T(W(b))$ for the process starting from $y_0 \in W(0)$ is bounded below by $c_{18}\,K^-(0,b)/\bigl(K^+(-a,0)+K^+(0,b)\bigr)$. Hence,

$$E^{y_0}\bigl(\tau(W(-a,b))^2 \mid T(W(-a)) < T(W(b))\bigr) \le c_{17}\,|I_a I_b|\,(a+b)^2\varepsilon^2\,\frac{K^+(-a,0)+K^+(0,b)}{c_{18}\,K^-(0,b)}.$$

We take expectations with respect to the distribution of $X^1$. By Lemma 6 and Hölder's inequality,

$$
\begin{aligned}
\mathcal{E}\,\frac{K^+(-a,0)^2 K^+(0,b)}{K^-(0,b)}
&\le \bigl(\mathcal{E}\,K^+(-a,0)^6\bigr)^{1/3}\bigl(\mathcal{E}\,K^+(0,b)^3\bigr)^{1/3}\bigl(\mathcal{E}\,K^-(0,b)^{-3}\bigr)^{1/3} \\
&\le (c_{19}a^6\varepsilon^{-18})^{1/3}(c_{20}b^3\varepsilon^{-9})^{1/3}(c_{21}b^{-3}\varepsilon^{9})^{1/3} = c_{22}\,a^2\varepsilon^{-6}.
\end{aligned}
$$

We obtain in a similar way

$$\mathcal{E}\,\frac{K^+(-a,0)K^+(0,b)^2}{K^-(0,b)} \le c_{23}\,ab\,\varepsilon^{-6}.$$


Hence

$$\mathcal{E}\sup_{y_0 \in W(0)} E^{y_0}\bigl(\tau(W(-a,b))^2 \mid T(W(-a)) < T(W(b))\bigr) \le c\,(a+b)^2\varepsilon^2\,(a^2+ab)\,\varepsilon^{-6} = c\,a(a+b)^3\varepsilon^{-4},$$

which completes the proof.
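Step 1 used two facts about the simple symmetric random walk $\{d_k\}$: the expected number of steps to exit $(I_a, I_b)$ from $0$ is $|I_a|I_b$ (by optional stopping applied to the martingale $d_k^2 - k$), and the occupation numbers scale the same way. The first fact is easy to check by simulation (function names below are ours):

```python
import random

def srw_exit_steps(ia, ib, rng):
    """Run a simple symmetric random walk from 0 until it exits (ia, ib);
    return the number of steps taken."""
    pos, steps = 0, 0
    while ia < pos < ib:
        pos += 1 if rng.random() < 0.5 else -1
        steps += 1
    return steps

def mean_exit_steps(ia=-5, ib=8, trials=20000, seed=3):
    rng = random.Random(seed)
    total = sum(srw_exit_steps(ia, ib, rng) for _ in range(trials))
    return total / trials

est = mean_exit_steps()  # optional stopping gives the exact value |-5|*8 = 40
```

The gambler's-ruin identity $E(\text{exit time}) = |I_a|I_b$ is exactly what turns the cell counts $|I_aI_b| \sim ab\varepsilon^{-6}$ into the $\varepsilon^{-4}$ bound of the lemma.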

The following definitions are similar to those used in the proof of Lemma 11 but not identical to them. Suppose that $b > 0$. Let $\mathcal{Y}^*_t$ be the reflected Brownian motion in $D_*$, $\mathcal{Y}^*(0) = x \in W_*(0)$, let $T_0^* = 0$, let $d_k$ be defined by $\mathrm{Re}\,\mathcal{Y}^*(T_k^*) = d_k b$ and let

$$T_k^* = \inf\bigl\{t > T_{k-1}^* : \mathcal{Y}^*_t \in W_*((d_{k-1}-1)b)\cup W_*((d_{k-1}+1)b)\bigr\}, \quad k \ge 1.$$

Fix some sequence $(j_0, j_1, j_2, \ldots)$ of integers such that $j_0 = 0$ and $|j_k - j_{k-1}| = 1$ for all $k$. We will denote the event $\{d_k = j_k, \forall k\}$ by $\mathcal{J}_*$. The corresponding definitions for the reflected Brownian motion in $D$ are as follows. Let $\mathcal{Y}_t$ be the reflected Brownian motion in $D$, $\mathcal{Y}(0) = x \in W(0)$, let $T_0 = 0$, let $d_k$ be defined by $\mathrm{Re}\,\mathcal{Y}(T_k) = d_k b$ and let

$$T_k = \inf\bigl\{t > T_{k-1} : \mathcal{Y}_t \in W((d_{k-1}-1)b)\cup W((d_{k-1}+1)b)\bigr\}, \quad k \ge 1.$$

Recall that the process $\mathcal{Y}$ is constructed as a time-change of $f(\mathcal{Y}^*)$ and so the $d_k$'s are the same for $\mathcal{Y}$ and $\mathcal{Y}^*$. The event $\{d_k = j_k, \forall k\}$ will be also called $\mathcal{J}$. Let $h_k(x,y)$ be defined for $x \in W(0)$, $y \in W(j_k b)$ by

$$h_k(x,y)\,dy = P^x\bigl(\mathcal{Y}(T_k) \in dy \mid \mathcal{J}\bigr).$$

Lemma 12. There exist $\varepsilon_0, c_1, c_2 \in (0,1)$ such that

$$h_k(x_1,y) \ge h_k(x_2,y)(1 - c_1c_2^k)$$

and

$$h_k(x_1,y) \le h_k(x_2,y)(1 + c_1c_2^k)$$

for $x_1, x_2 \in W(0)$ and $y \in W(j_k b)$, provided $\varepsilon < \varepsilon_0$.

Proof. Recall that if $|j-k| = 1$ then $\hat k \overset{df}{=} 2j-k$ is different from $k$ and $|\hat k - j| = 1$. For $j$ and $k$ such that $|j-k| = 1$ we define $g^*_{j,k}(x,y)$ for $x \in W_*(jb)$ and $y \in W_*(kb)$ by

$$g^*_{j,k}(x,y)\,dy = P^x\bigl(\mathcal{Y}^*(T(W_*(kb))) \in dy \mid T(W_*(kb)) < T(W_*(\hat k b))\bigr).$$

It is easy to see that there exists $c < \infty$ such that

$$g^*_{j,k}(x_1,y) \le c\,g^*_{j,k}(x_2,y) \tag{13}$$

for all $j, k$ with $|j-k| = 1$, $x_1, x_2 \in W_*(jb)$ and $y \in W_*(kb)$.


Let $h_k^*(x,y)$ be defined for $x \in W_*(0)$, $y \in W_*(j_k b)$ by

$$h_k^*(x,y)\,dy = P^x\bigl(\mathcal{Y}^*(T_k^*) \in dy \mid \mathcal{J}_*\bigr).$$

By the strong Markov property,

$$h_{k+1}^*(x,y) = \int_{W_*(j_k b)} h_k^*(x,z)\,g^*_{j_k, j_{k+1}}(z,y)\,dz. \tag{14}$$

We will prove by induction on $k$ that there exist constants $c_1, c_2 \in (0,1)$ such that for all $x_1, x_2 \in W_*(0)$ and $y_1, y_2 \in W_*(j_k b)$,

$$\frac{h_k^*(x_1,y_1)}{h_k^*(x_1,y_2)} \ge \frac{h_k^*(x_2,y_1)}{h_k^*(x_2,y_2)}\,(1 - c_1c_2^k). \tag{15}$$

It is elementary to verify that the inequality holds for $k = 1$ and some $c_1, c_2 \in (0,1)$, using (8) and the fact that $W_*(0)$ and $W_*(j_1 b)$ are separated by at least two cells for small $\varepsilon$.

We proceed with the proof of the induction step. Assume that (15) holds for $k$. We have from (13),

$$\frac{g^*_{j_k,j_{k+1}}(x_1,y_1)}{g^*_{j_k,j_{k+1}}(x_1,y_2)} \ge c_3\,\frac{g^*_{j_k,j_{k+1}}(x_2,y_1)}{g^*_{j_k,j_{k+1}}(x_2,y_2)} \tag{16}$$

for some constant $c_3 > 0$, all $k \ge 1$, $x_1, x_2 \in W_*(j_k b)$ and $y_1, y_2 \in W_*(j_{k+1}b)$. We can adjust $c_1$ and $c_2$ in (15) for the case $k = 1$ so that $c_2 \ge 1 - c_3$. Lemma 1 implies in view of (14), (15) and (16) that

$$\frac{h_{k+1}^*(x_1,y_1)}{h_{k+1}^*(x_1,y_2)} \ge \frac{h_{k+1}^*(x_2,y_1)}{h_{k+1}^*(x_2,y_2)}\bigl[(1 - c_1c_2^k) + c_3c_1c_2^k\bigr] \ge \frac{h_{k+1}^*(x_2,y_1)}{h_{k+1}^*(x_2,y_2)}\,(1 - c_1c_2^{k+1})$$

for $x_1, x_2 \in W_*(0)$ and $y_1, y_2 \in W_*(j_{k+1}b)$. This completes the proof of the induction step. We conclude that (15) holds for all $k$. Formula (15) easily implies

$$h_k^*(x_1,y) \ge h_k^*(x_2,y)(1 - c_4c_5^k) \tag{17}$$

and

$$h_k^*(x_1,y) \le h_k^*(x_2,y)(1 + c_4c_5^k) \tag{18}$$

for $x_1, x_2 \in W_*(0)$ and $y \in W_*(j_k b)$, with $c_4, c_5 \in (0,1)$.

Now we will translate these estimates into the language of reflected Brownian motion in $D$. Recall the definitions of the $T_k$'s and $\mathcal{J}$ given before the lemma. The function $h_k(x,y)$ has been defined for $x \in W(0)$, $y \in W(j_k b)$ by

$$h_k(x,y)\,dy = P^x\bigl(\mathcal{Y}(T_k) \in dy \mid \mathcal{J}\bigr).$$

The conformal invariance of reflected Brownian motion allows us to deduce from (17) and (18) that for $x_1, x_2 \in W(0)$ and $y \in W(j_k b)$,

$$h_k(x_1,y) \ge h_k(x_2,y)(1 - c_4c_5^k)$$

and

$$h_k(x_1,y) \le h_k(x_2,y)(1 + c_4c_5^k)$$

with $c_4, c_5 \in (0,1)$.

Remark 2. We list two straightforward consequences of the last lemma.

(i) An application of the strong Markov property shows that for $m < k$,

$$P^x\bigl(\mathcal{Y}(T_k) \in dy \mid \mathcal{Y}(T_m) \in dz_1, \mathcal{J}\bigr) \ge (1 - c_1c_2^{k-m})\,P^x\bigl(\mathcal{Y}(T_k) \in dy \mid \mathcal{Y}(T_m) \in dz_2, \mathcal{J}\bigr) \tag{19}$$

and

$$P^x\bigl(\mathcal{Y}(T_k) \in dy \mid \mathcal{Y}(T_m) \in dz_1, \mathcal{J}\bigr) \le (1 + c_1c_2^{k-m})\,P^x\bigl(\mathcal{Y}(T_k) \in dy \mid \mathcal{Y}(T_m) \in dz_2, \mathcal{J}\bigr). \tag{20}$$

(ii) By averaging over appropriate sequences $\{j_k\}$ we obtain the following estimate. For every $a, b > 0$ and $\delta > 0$ we can find small $\varepsilon_0 > 0$ such that for $\varepsilon < \varepsilon_0$, $x_1, x_2 \in W(0)$ and $y \in W(-a)$,

$$(1-\delta)\,P^{x_1}\bigl(\mathcal{Y}(\tau(W(-a,b))) \in dy \mid T(W(-a)) < T(W(b))\bigr) \le P^{x_2}\bigl(\mathcal{Y}(\tau(W(-a,b))) \in dy \mid T(W(-a)) < T(W(b))\bigr)$$
$$\le (1+\delta)\,P^{x_1}\bigl(\mathcal{Y}(\tau(W(-a,b))) \in dy \mid T(W(-a)) < T(W(b))\bigr).$$
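The coupling estimates of Lemma 12 compound a per-step distortion $1 \pm c_1c_2^k$ over all steps; since $\sum_k c_1c_2^k < \infty$, the total distortion $\prod_k(1 \pm c_1c_2^k)$ remains bounded between two positive constants, which is what Remark 2 exploits. A quick numerical check with illustrative constants (not the constants of the lemma):

```python
import math

def total_distortion(c1=0.5, c2=0.8, kmax=200):
    """Compound the Lemma 12-style factors (1 - c1*c2**k) and (1 + c1*c2**k)
    over k = 1..kmax; both products converge as kmax -> infinity because
    the geometric series sum_k c1*c2**k is finite."""
    lower, upper = 1.0, 1.0
    for k in range(1, kmax + 1):
        lower *= 1.0 - c1 * c2 ** k
        upper *= 1.0 + c1 * c2 ** k
    return lower, upper

lo, hi = total_distortion()
# hi is below exp(sum_k c1*c2**k) = exp(2) for these constants; lo stays > 0.
```

Doubling `kmax` changes the products only negligibly, reflecting the geometric decay of the per-step distortion.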

Lemma 13. For $b > 0$ and an integer $N_1 > 1$, let $b_j = bj/N_1$. There exist $0 < c_1 \le c_2 < \infty$ and $c(\varepsilon) \in (c_1, c_2)$ such that for every $b > 0$, $\delta > 0$ and $p > 0$ there exists $N_2 < \infty$ such that the following inequalities hold with probability greater than $1-p$ when $N_1 > N_2$ and $\varepsilon < \varepsilon_0(N_1)$,

$$\Bigl|\,c(\varepsilon)\varepsilon^2 N_1 b^{-2}\sum_{j=0}^{N_1-1}\sup_{x\in W(b_j)} E^x\bigl(\tau(W(b_{j-1},b_{j+1})) \mid T(W(b_{j+1})) < T(W(b_{j-1}))\bigr) - 1\Bigr| < \delta,$$

$$\Bigl|\,c(\varepsilon)\varepsilon^2 N_1 b^{-2}\sum_{j=0}^{N_1-1}\inf_{x\in W(b_j)} E^x\bigl(\tau(W(b_{j-1},b_{j+1})) \mid T(W(b_{j+1})) < T(W(b_{j-1}))\bigr) - 1\Bigr| < \delta.$$

Proof. Fix some $d > 0$. Suppose $a, \delta_1, \delta_2 > 0$ are small. For $x \in W(0)$, consider $E^x\bigl(\tau(W(-d,d)) \mid T(W(-d)) < T(W(d))\bigr)$.

Remark 2 (ii) and the strong Markov property imply that the Radon-Nikodym derivative of the $P^{x_1}$- and $P^{x_2}$-distributions of the post-$\tau(W(-a,d))$ process is bounded below and above by $1-\delta_1$ and $1+\delta_1$ for all $x_1, x_2 \in W(0)$, provided $\varepsilon$ is small. Suppose $x_1 = (0,0)$. We use Lemma 11 to see that

$$(1-\delta_1)\,E^{x_1}\bigl(\tau(W(-d,d)) - \tau(W(-a,d)) \mid T(W(-d)) < T(W(d))\bigr)$$

is a lower bound, up to an error controlled by Lemma 11, for the analogous quantity under $P^{x_2}$. It follows that for every $\delta_2 > 0$, provided $\varepsilon$ is sufficiently small,

$$\sup_{x\in W(0)} E^x\bigl(\tau(W(-d,d)) \mid T(W(-d)) < T(W(d))\bigr) \le (1+\delta_2)\inf_{x\in W(0)} E^x\bigl(\tau(W(-d,d)) \mid T(W(-d)) < T(W(d))\bigr).$$

$$\mathcal{E}\sup_{x} E^x\bigl(\tau(W(b_{j-1},b_{j+1}))^2 \mid T(W(b_{j-1})) < T(W(b_{j+1}))\bigr) \le c_2\,b^4 N_1^{-4}\varepsilon^{-4}$$

by Lemma 11, and hence, by the Cauchy-Schwarz inequality,

$$\sup_{x} E^x\bigl(\tau(W(b_{j-1},b_{j+1})) \mid T(W(b_{j-1})) < T(W(b_{j+1}))\bigr) \le c_3\,b^2 N_1^{-2}\varepsilon^{-2}$$

with large probability. The lemma follows by summing these one-step estimates over $j$ and applying the comparison of sup and inf obtained above.

We recall some definitions stated before Lemma 12. For the reflected Brownian motion $\mathcal{Y}$ in $D$ with $\mathcal{Y}(0) = x \in W(0)$, we let $T_0 = 0$, let $d_k$ be defined by $\mathrm{Re}\,\mathcal{Y}(T_k) = d_k b$ and let

$$T_k = \inf\bigl\{t > T_{k-1} : \mathcal{Y}_t \in W((d_{k-1}-1)b)\cup W((d_{k-1}+1)b)\bigr\}, \quad k \ge 1.$$

We consider a sequence $(j_0, j_1, j_2, \ldots)$ of integers such that $j_0 = 0$ and $|j_k - j_{k-1}| = 1$ for all $k$. The event $\{d_k = j_k, \forall k\}$ is denoted by $\mathcal{J}$.

Lemma 14. There exists $c < \infty$ such that for all $b > 0$, integers $n > 1$ and sufficiently small $\varepsilon > 0$,

$$\mathcal{E}\,E^x\bigl((T_n - E^x(T_n \mid \mathcal{J}))^2 \mid \mathcal{J}\bigr) \le c\,n\,b^4\varepsilon^{-4}.$$

Proof. We start with an estimate for the covariance of $T_m - T_{m-1}$ and $T_k - T_{k-1}$ when $k-1 > m+1$. Remark 2 (i) implies that

$$E^x\bigl(T_k - T_{k-1} \mid \mathcal{Y}(T_{m+1}) = y, \mathcal{J}\bigr) \ge (1 - c_1c_2^{k-m})\,E^x\bigl(T_k - T_{k-1} \mid \mathcal{J}\bigr)$$

and

$$E^x\bigl(T_k - T_{k-1} \mid \mathcal{Y}(T_{m+1}) = y, \mathcal{J}\bigr) \le (1 + c_1c_2^{k-m})\,E^x\bigl(T_k - T_{k-1} \mid \mathcal{J}\bigr).$$

These inequalities imply that

$$\bigl|E^x\bigl(\Delta_{k-1}T \mid \mathcal{Y}(T_{m+1}) = y, \mathcal{J}\bigr) - E^x\bigl(\Delta_{k-1}T \mid \mathcal{J}\bigr)\bigr| \le c_1c_2^{k-m}\,E^x\bigl(\Delta_{k-1}T \mid \mathcal{J}\bigr) \tag{22}$$

where $\Delta_k T = T_{k+1} - T_k$. For small $\varepsilon$, the sets $W(j_{m+1}b)$ and $W(j_m b)$ are separated by at least two cells, by Lemma 2. Some standard arguments then show that the ratio of densities of $\mathcal{Y}(T_m)$ under the conditional distributions given $\{\mathcal{Y}(T_{m+1}) = y_1\}\cap\mathcal{J}$ or given $\{\mathcal{Y}(T_{m+1}) = y_2\}\cap\mathcal{J}$ is bounded above and below by strictly positive finite constants which do not depend on $y_1, y_2 \in W(j_{m+1}b)$. It follows that for every $y \in W(j_{m+1}b)$,

$$E^x\bigl(\Delta_{m-1}T \mid \mathcal{Y}(T_{m+1}) = y, \mathcal{J}\bigr) \le c_3\,E^x\bigl(\Delta_{m-1}T \mid \mathcal{J}\bigr).$$

This, (22) and an application of the strong Markov property at $T_{m+1}$ yield

$$E^x\bigl(\bigl|(\Delta_{k-1}T - E^x(\Delta_{k-1}T\mid\mathcal{J}))(\Delta_{m-1}T - E^x(\Delta_{m-1}T\mid\mathcal{J}))\bigr| \bigm| \mathcal{J}\bigr) \le c_4c_2^{k-m}\,E^x(\Delta_{k-1}T\mid\mathcal{J})\,E^x(\Delta_{m-1}T\mid\mathcal{J}).$$

Hence we obtain

$$
\begin{aligned}
E^x\bigl((T_n - E^x(T_n\mid\mathcal{J}))^2 \mid \mathcal{J}\bigr)
&= E^x\Bigl(\Bigl(\sum_{k=0}^{n-1}[\Delta_kT - E^x(\Delta_kT\mid\mathcal{J})]\Bigr)^2 \Bigm| \mathcal{J}\Bigr) \\
&\le \sum_{m=0}^{n-1}\sum_{k=m}^{(m+2)\wedge(n-1)} 2\,E^x\bigl(|(\Delta_kT - E^x(\Delta_kT\mid\mathcal{J}))(\Delta_mT - E^x(\Delta_mT\mid\mathcal{J}))| \bigm| \mathcal{J}\bigr) \\
&\quad + \sum_{m=0}^{n-1}\sum_{k=m+3}^{n-1} 2\,E^x\bigl(|(\Delta_kT - E^x(\Delta_kT\mid\mathcal{J}))(\Delta_mT - E^x(\Delta_mT\mid\mathcal{J}))| \bigm| \mathcal{J}\bigr) \\
&\le \sum_{m=0}^{n-1}\sum_{k=m}^{(m+2)\wedge(n-1)} 2\bigl[E^x([\Delta_kT - E^x(\Delta_kT\mid\mathcal{J})]^2\mid\mathcal{J})\bigr]^{1/2}\bigl[E^x([\Delta_mT - E^x(\Delta_mT\mid\mathcal{J})]^2\mid\mathcal{J})\bigr]^{1/2} \\
&\quad + 2\sum_{m=0}^{n-1}\sum_{k=m+3}^{n-1} c_4c_2^{k-m}\,E^x(\Delta_kT\mid\mathcal{J})\,E^x(\Delta_mT\mid\mathcal{J})
\overset{df}{=} \Xi_1 + \Xi_2.
\end{aligned}
$$


In order to estimate $\mathcal{E}\,\Xi_1$ we apply Lemma 11 to see that

$$
\begin{aligned}
&\mathcal{E}\Bigl(\bigl[E^x([\Delta_kT - E^x(\Delta_kT\mid\mathcal{J})]^2\mid\mathcal{J})\bigr]^{1/2}\bigl[E^x([\Delta_mT - E^x(\Delta_mT\mid\mathcal{J})]^2\mid\mathcal{J})\bigr]^{1/2}\Bigr) \\
&\quad\le \bigl(\mathcal{E}\,E^x([\Delta_kT - E^x(\Delta_kT\mid\mathcal{J})]^2\mid\mathcal{J})\bigr)^{1/2}\bigl(\mathcal{E}\,E^x([\Delta_mT - E^x(\Delta_mT\mid\mathcal{J})]^2\mid\mathcal{J})\bigr)^{1/2} \\
&\quad\le (c_1b^4\varepsilon^{-4})^{1/2}(c_1b^4\varepsilon^{-4})^{1/2} = c_1b^4\varepsilon^{-4}.
\end{aligned}
$$

We obtain

$$\mathcal{E}\,\Xi_1 \le \sum_{m=0}^{n-1}\sum_{k=m}^{(m+2)\wedge(n-1)} 2c_1b^4\varepsilon^{-4} \le c_2\,nb^4\varepsilon^{-4}.$$

Next we estimate $\mathcal{E}\,\Xi_2$. We use Lemma 11 again to obtain

$$\mathcal{E}\bigl(E^x(\Delta_kT\mid\mathcal{J})\,E^x(\Delta_mT\mid\mathcal{J})\bigr) \le \bigl(\mathcal{E}[E^x(\Delta_kT\mid\mathcal{J})]^2\bigr)^{1/2}\bigl(\mathcal{E}[E^x(\Delta_mT\mid\mathcal{J})]^2\bigr)^{1/2} \le (c_1b^4\varepsilon^{-4})^{1/2}(c_1b^4\varepsilon^{-4})^{1/2} = c_1b^4\varepsilon^{-4}$$

and so

$$\mathcal{E}\,\Xi_2 = 2\sum_{m=0}^{n-1}\sum_{k=m+3}^{n-1} c_4c_2^{k-m}\,\mathcal{E}\bigl(E^x(\Delta_kT\mid\mathcal{J})\,E^x(\Delta_mT\mid\mathcal{J})\bigr) \le 2\sum_{m=0}^{n-1}\sum_{k=m+3}^{n-1} c_4c_2^{k-m}\,c_5b^4\varepsilon^{-4} \le c_6\,nb^4\varepsilon^{-4}.$$

We conclude that

$$\mathcal{E}\,E^x\bigl((T_n - E^x(T_n\mid\mathcal{J}))^2 \mid \mathcal{J}\bigr) \le \mathcal{E}\,\Xi_1 + \mathcal{E}\,\Xi_2 \le c_7\,nb^4\varepsilon^{-4}.$$

We will need two estimates involving downcrossings of 1-dimensional Brownian motion. For convenience we will state the next lemma in terms of a special Brownian motion, namely $\mathrm{Re}\,\mathcal{Y}^*$. Let $U^*(x, x+a, t)$ be the number of crossings from $x$ to $x+a$ by the process $\mathrm{Re}\,\mathcal{Y}^*$ before time $t$.

Lemma 15. (i) For every $\delta, p > 0$ there exist $\zeta_0, \gamma_0 > 0$ and $M_0 < \infty$ such that if $\zeta < \zeta_0$, $\gamma < \gamma_0$, $M > M_0$, and for some random time $S$ both events

$$\Bigl\{\sum_{j=-\infty}^{\infty} U^*\bigl((j-\gamma)\zeta,\,(j+1+\gamma)\zeta,\,S\bigr) \le M\Bigr\}$$

and

$$\Bigl\{\sum_{j=-\infty}^{\infty} U^*\bigl((j+\gamma)\zeta,\,(j+1-\gamma)\zeta,\,S\bigr) \ge M\Bigr\}$$

hold with probability greater than $1 - p/4$, then $|1 - S/(M\zeta^2)| < \delta$ with probability greater than $1-p$.

(ii) Let

$$\mathcal{Q}_n = \min_{nN_1 \le j < (n+1)N_1} U^*\bigl((j-\gamma)\zeta/N_1,\,(j+1+\gamma)\zeta/N_1,\,(1-\eta)M\zeta^2\bigr).$$

For every $\delta, p > 0$ there exist $M_0, \eta_0, \gamma_0 > 0$ and $N_2 < \infty$ such that for any $\zeta > 0$, $M > M_0$, $\eta < \eta_0$, $\gamma < \gamma_0$ and $N_1 > N_2$ we have

$$(1-\delta)MN_1^2 \le N_1\sum_{n=-\infty}^{\infty}\mathcal{Q}_n \le \sum_{j=-\infty}^{\infty} U^*\bigl((j+\gamma)\zeta/N_1,\,(j+1-\gamma)\zeta/N_1,\,(1+\eta)M\zeta^2\bigr) \le (1+\delta)MN_1^2$$

with probability greater than $1-p$.

Proof. (i) Fix arbitrarily small $\delta, p > 0$. Let $L^a_t$ denote the local time of the Brownian motion $\mathrm{Re}\,\mathcal{Y}^*$ at the level $a$ at time $t$. See Karatzas and Shreve (1988) for an introduction to the theory of local time. By Brownian scaling, the distributions of $L^a_t$ and $c^{-1}L^{ac}_{tc^2}$ are the same. We will use the following inequality of Bass (1987),

$$P\Bigl(\sup_{a\in\mathbb{R},\,t\in[0,T]}\Bigl|U^*(a, a+\varepsilon, t) - \frac{1}{\varepsilon}L^a_t\Bigr| \ge \lambda\sqrt{T/\varepsilon}\Bigr) \le e^{-c\lambda}. \tag{23}$$

Let $S_1 = (1-\delta)M\zeta^2$ and suppose that $j_0$ is a large integer whose value will be chosen later in the proof. The following equalities hold in the sense of distributions,

$$
\begin{aligned}
&\sum_{j=-\infty}^{\infty} U^*\bigl((j+\gamma)\zeta,\,(j+1-\gamma)\zeta,\,S_1\bigr) - M \\
&\quad= \sum_{|j|>j_0} U^*\bigl((j+\gamma)\zeta,\,(j+1-\gamma)\zeta,\,S_1\bigr) \\
&\qquad + \sum_{|j|\le j_0}\Bigl(U^*\bigl((j+\gamma)\zeta,\,(j+1-\gamma)\zeta,\,S_1\bigr) - \frac{1}{\zeta(1-2\gamma)}L^{(j+\gamma)\zeta}_{S_1}\Bigr)
+ \Bigl(\sum_{|j|\le j_0}\frac{1}{\zeta(1-2\gamma)}L^{(j+\gamma)\zeta}_{S_1} - M\Bigr) \\
&\quad= \sum_{|j|>j_0} U^*\bigl((j+\gamma)\zeta,\,(j+1-\gamma)\zeta,\,S_1\bigr) \tag{24}
\end{aligned}
$$

$$
\begin{aligned}
&\qquad + \sum_{|j|\le j_0}\Bigl(U^*\bigl((j+\gamma)\zeta,\,(j+1-\gamma)\zeta,\,S_1\bigr) - \frac{1}{\zeta(1-2\gamma)}L^{(j+\gamma)\zeta}_{S_1}\Bigr) \\
&\qquad + \frac{(1-\delta)M}{1-2\gamma}\Bigl(\frac{1}{\sqrt{(1-\delta)M}}\sum_{|j|\le j_0} L^{(j+\gamma)/\sqrt{(1-\delta)M}}_{1}\Bigr) - \frac{1-\delta^2}{1-2\gamma}M - \frac{\delta^2-2\gamma}{1-2\gamma}M.
\end{aligned}
$$

Suppose that $\gamma$ is so small that $(\delta^2-2\gamma)M/(1-2\gamma) > (\delta^2/2)M$. Let $j_0 = c_1\sqrt{M}$ with $c_1$ so large that the probability that the Brownian motion hits $(j_0+1-\gamma)\zeta$ or $-(j_0+\gamma)\zeta$ before time $S_1$ is less than $p/16$. With this choice of $j_0$, the sum

$$\sum_{|j|>j_0} U^*\bigl((j+\gamma)\zeta,\,(j+1-\gamma)\zeta,\,S_1\bigr)$$

is equal to $0$ with probability exceeding $1-p/16$. Recall that $a \mapsto L^a_1$ is almost surely continuous and has finite support, and $\int_{-\infty}^{\infty} L^a_1\,da = 1$. Hence, we can increase the value of $c_1$, if necessary, and choose sufficiently large $M$ so that

$$\frac{1}{\sqrt{(1-\delta)M}}\sum_{|j|\le j_0} L^{(j+\gamma)/\sqrt{(1-\delta)M}}_{1} \le (1+\delta)\int_{-\infty}^{\infty} L^a_1\,da = (1+\delta),$$

with probability greater than $1-p/16$. Then

$$\frac{(1-\delta)M}{1-2\gamma}\Bigl(\frac{1}{\sqrt{(1-\delta)M}}\sum_{|j|\le j_0} L^{(j+\gamma)/\sqrt{(1-\delta)M}}_{1}\Bigr) - \frac{1-\delta^2}{1-2\gamma}M \le 0$$

with probability greater than $1-p/16$.

Let $\lambda = c_2/\sqrt{\zeta}$. First choose $c_2$ so small that

$$2j_0\lambda\sqrt{S_1/(\zeta(1-2\gamma))} = 2c_1\sqrt{M}\,(c_2/\sqrt{\zeta})\sqrt{(1-\delta)M\zeta^2/(\zeta(1-2\gamma))} \le (\delta^2/4)M.$$

Then assume that $\zeta$ is so small that $e^{-c\lambda} \le p/16$. In view of (23) we have

$$\sum_{|j|\le j_0}\Bigl(U^*\bigl((j+\gamma)\zeta,\,(j+1-\gamma)\zeta,\,S_1\bigr) - \frac{1}{\zeta(1-2\gamma)}L^{(j+\gamma)\zeta}_{S_1}\Bigr) \le 2j_0\lambda\sqrt{S_1/(\zeta(1-2\gamma))} \le (\delta^2/4)M,$$

with probability greater than $1 - e^{-c\lambda} \ge 1 - p/16$. Combining (24) with the estimates following it we see that

$$\sum_{j=-\infty}^{\infty} U^*\bigl((j+\gamma)\zeta,\,(j+1-\gamma)\zeta,\,S_1\bigr) - M < 0,$$

with probability greater than $1 - 3p/16$. The function $s \mapsto U^*((j+\gamma)\zeta,\,(j+1-\gamma)\zeta,\,s)$ is non-decreasing, so if we assume that

$$\sum_{j=-\infty}^{\infty} U^*\bigl((j+\gamma)\zeta,\,(j+1-\gamma)\zeta,\,S\bigr) - M < 0$$

for some random variable $S$ with probability less than $p/4$, then it follows that $S \ge S_1 = (1-\delta)M\zeta^2$ with probability greater than $1 - 3p/16 - p/4 > 1 - p/2$. We can prove in a completely analogous way that $S \le (1+\delta)M\zeta^2$ with probability greater than $1-p/2$. This easily implies part (i) of the lemma.

(ii) We leave the proof of part (ii) of the lemma to the reader. The proof proceeds along the same lines as the proof of part (i), i.e., it uses an approximation of the number of upcrossings by the local time and the continuity of the local time as a function of the space variable.
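The engine of Lemma 15 is the approximation of local time by scaled crossing counts, as in the Bass inequality (23). The scaling is easy to see numerically: for a random-walk approximation of Brownian motion on $[0,1]$, the number $D$ of downcrossings of $[0,\varepsilon]$ satisfies $\varepsilon E[D] \approx \tfrac12 E L^0_1 = \tfrac12\sqrt{2/\pi} \approx 0.40$ with Tanaka's normalization of local time. A Monte Carlo sketch (grid parameters are illustrative):

```python
import math
import random

def downcrossings(m, n_steps, rng):
    """Count downcrossings of [0, m] (in lattice units) by a simple random
    walk of n_steps steps started at 0: completed passages from >= m to <= 0."""
    pos, above, count = 0, False, 0
    for _ in range(n_steps):
        pos += 1 if rng.random() < 0.5 else -1
        if pos >= m:
            above = True
        elif pos <= 0 and above:
            count += 1
            above = False
    return count

def scaled_downcrossing_mean(m=6, n_steps=4000, trials=1500, seed=11):
    """Return eps * E[D] where eps = m / sqrt(n_steps) is the crossed
    interval's width in Brownian units (lattice spacing 1/sqrt(n_steps))."""
    eps = m / math.sqrt(n_steps)
    rng = random.Random(seed)
    total = sum(downcrossings(m, n_steps, rng) for _ in range(trials))
    return eps * total / trials

est = scaled_downcrossing_mean()
```

The estimate stabilizes near $0.40$ independently of the choice of $m$, which is the crossing-count/local-time correspondence that Lemma 15 quantifies uniformly in the level.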

Suppose $b > 0$ and $N_1 > 1$ is an integer. Let $T_0 = 0$, let $d_k$ be defined by $\mathrm{Re}\,\mathcal{Y}(T_k) = d_k b$ and let

$$T_k = \inf\bigl\{t > T_{k-1} : \mathcal{Y}_t \in W((d_{k-1}-1)b)\cup W((d_{k-1}+1)b)\bigr\}, \quad k \ge 1.$$

Let $R = [1/b^2]$ and $b_j = jb/N_1$. Let $U(x, x+a, t)$ denote the number of crossings from the level $x$ to $x+a$ by the process $\mathrm{Re}\,\mathcal{Y}$ before time $t$ and let $\mathcal{N}_j = U(b_j, b_{j+1}, T_R)$. Let

$$\mathcal{Q}_n = \min_{nN_1 \le j < (n+1)N_1}\mathcal{N}_j$$

and $\mathcal{N}^+_j = \mathcal{N}_j - \mathcal{Q}_{n(j)}$ where $n(j)$ is an integer such that $n(j)N_1 \le j < (n(j)+1)N_1$.

Lemma 16. For every $p, \delta > 0$ there exist $\varepsilon_0, b_0 > 0$ and $N_2 < \infty$ such that if $\varepsilon < \varepsilon_0$, $b \le b_0$ and $N_1 \ge N_2$ then with probability greater than $1-p$,

$$(1-\delta)(N_1/b)^2 \le N_1\sum_{n=-\infty}^{\infty}\mathcal{Q}_n \le \sum_{j=-\infty}^{\infty}\mathcal{N}_j \le (1+\delta)(N_1/b)^2$$

and so

$$\sum_{j=-\infty}^{\infty}\mathcal{N}^+_j \le \delta(N_1/b)^2.$$

Proof. Find large $M_0$ as in Lemma 15 (i) with $\delta$ replaced by $\eta = \eta_0/2$ of Lemma 15 (ii). Then suppose that $b < M_0^{-1/2}$. Let $N_0$ be a large integer whose value will be chosen later. By Lemma 7, for $\varepsilon$ sufficiently

small, $K^-(-N_0b, 0) > N_0K^+(-b,b)/4$ and $K^-(0, N_0b) > N_0K^+(-b,b)/4$ with probability greater than $1-p$. The conformal invariance of reflected Brownian motion then implies that for $x \in W(0)$,

$$T_R < T\bigl(W(-N_0b)\cup W(N_0b)\bigr) \tag{25}$$

with probability greater than $1-p$. Suppose $\gamma > 0$ is less than both constants $\gamma$ of parts (i) and (ii) of Lemma 15. Let $c_1 < \infty$ be so large that the diameter of a cell is bounded by $c_1\varepsilon$, as in Lemma 2. We invoke Lemma 7 to see that there exists $\varepsilon_0 > 0$ so small that all of the following inequalities

$$
\begin{aligned}
|\varepsilon^3 c(\varepsilon)K^+(-c_1\varepsilon,\, b_j + c_1\varepsilon) - b_j| &< \gamma b/N_1, &\quad 1 \le j \le 2N_0N_1, \\
|\varepsilon^3 c(\varepsilon)K^-(c_1\varepsilon,\, b_j - c_1\varepsilon) - b_j| &< \gamma b/N_1, &\quad 1 \le j \le 2N_0N_1, \\
|\varepsilon^3 c(\varepsilon)K^+(-b_j - c_1\varepsilon,\, c_1\varepsilon) - b_j| &< \gamma b/N_1, &\quad 1 \le j \le 2N_0N_1, \\
|\varepsilon^3 c(\varepsilon)K^-(-b_j + c_1\varepsilon,\, -c_1\varepsilon) - b_j| &< \gamma b/N_1, &\quad 1 \le j \le 2N_0N_1, \tag{26}
\end{aligned}
$$

hold simultaneously with probability greater than $1-p$ provided $\varepsilon < \varepsilon_0$. Inequalities (26) imply that for all $j = -N_0N_1, \ldots, N_0N_1$,

(i) $M((j-\gamma)\xi)$ lies to the left of $W(b_j)$, and

(ii) $M((j+\gamma)\xi)$ lies to the right of $W(b_j)$.

Let $\zeta = bc^{-1}(\varepsilon)\varepsilon^{-3} = \xi N_1$. If (25), (i) and (ii) hold then an upcrossing of $((j-\gamma)\zeta,\,(j+1+\gamma)\zeta)$ by $\mathrm{Re}\,\mathcal{Y}^*$ must correspond to an upcrossing of $(jb,\,(j+1)b)$ by $\mathrm{Re}\,\mathcal{Y}$. Vice versa, an upper bound for the number of upcrossings of $(jb,\,(j+1)b)$ by $\mathrm{Re}\,\mathcal{Y}$ is provided by the number of upcrossings of $((j+\gamma)\zeta,\,(j+1-\gamma)\zeta)$ by $\mathrm{Re}\,\mathcal{Y}^*$. Recall that $\kappa$ denotes the time change for $f(\mathcal{Y}^*)$, i.e., $\mathcal{Y}(\kappa(t)) = f(\mathcal{Y}^*(t))$. Let $S = \kappa^{-1}(T_R)$. We have

$$\sum_{j=-\infty}^{\infty} U^*\bigl((j-\gamma)\zeta,\,(j+1+\gamma)\zeta,\,S\bigr) \le R$$

and

$$\sum_{j=-\infty}^{\infty} U^*\bigl((j+\gamma)\zeta,\,(j+1-\gamma)\zeta,\,S\bigr) \ge R.$$

Lemma 15 (i) implies that with probability greater than $1-4p$,

$$(1-\eta)R\zeta^2 \le S \le (1+\eta)R\zeta^2. \tag{27}$$

We apply a similar analysis in order to compare the crossings of intervals $((j-\gamma)\xi,\,(j+1+\gamma)\xi)$. An upcrossing of $((j-\gamma)\xi,\,(j+1+\gamma)\xi)$ by $\mathrm{Re}\,\mathcal{Y}^*$ must correspond to an upcrossing of $(b_j, b_{j+1})$ by $\mathrm{Re}\,\mathcal{Y}$, assuming (26). The number of upcrossings of $(b_j, b_{j+1})$ by $\mathrm{Re}\,\mathcal{Y}$ does not exceed the number of upcrossings of $((j+\gamma)\xi,\,(j+1-\gamma)\xi)$ by $\mathrm{Re}\,\mathcal{Y}^*$. This, (27) and Lemma 15 (ii) yield

$$
\begin{aligned}
(1-\delta)RN_1^2 &\le N_1\sum_{n=-\infty}^{\infty}\min_{nN_1\le j<(n+1)N_1} U^*\bigl((j-\gamma)\zeta/N_1,\,(j+1+\gamma)\zeta/N_1,\,(1-\eta)R\zeta^2\bigr) \\
&\le N_1\sum_{n=-\infty}^{\infty}\min_{nN_1\le j<(n+1)N_1} U(b_j, b_{j+1}, T_R)
\le \sum_{j=-\infty}^{\infty} U(b_j, b_{j+1}, T_R) \\
&\le \sum_{j=-\infty}^{\infty} U^*\bigl((j+\gamma)\zeta/N_1,\,(j+1-\gamma)\zeta/N_1,\,(1+\eta)R\zeta^2\bigr) \le (1+\delta)RN_1^2.
\end{aligned}
$$

Since (27) holds with probability greater than $1-4p$, the probability of the event in the last formula is not less than $1-5p$ provided $b$ and $\varepsilon$ are small and $N_1$ is large.

Recall $T_R$ from the last lemma.

Lemma 17. Let $R = R(t) = [t/b^2]$. There exist $0 < c_1 \le c_2 < \infty$ and $c_2(\varepsilon) \in (c_1, c_2)$ with the following property. For every $t > 0$ and $\delta > 0$ we can find $b_0 > 0$ and $\varepsilon_0 > 0$ such that

$$P\bigl(\,\bigl|(1/2)c_2(\varepsilon)\varepsilon^2 T_{R(t)} - t\bigr| > \delta\bigr) < \delta$$

for $b < b_0$ and $\varepsilon < \varepsilon_0$.

Proof. We will only consider the case $t = 1$. Take any $p, \delta_1 \in (0,1)$.

Choose $b > 0$ and $N_1$ which satisfy Lemma 16. We will impose more conditions on these numbers later in the proof. First of all, we will assume that $b^2/p < \delta_1$. Take any $\delta > 0$ with $\delta/p < \delta_1$. Recall the large integer $N_0$ from the proof of Lemma 16, chosen so that

$$T_R < T\bigl(W(-N_0b)\cup W(N_0b)\bigr) \tag{28}$$

with probability greater than $1-p$. By Lemma 13, there exist $\varepsilon_0 > 0$, constants $0 < c_1 \le c_2 < \infty$ and $c_4(\varepsilon) \in (c_1, c_2)$ such that

$$\Bigl|\,c_4(\varepsilon)\varepsilon^2 N_1 b^{-2}\sum_{j=0}^{N_1-1}\sup_{x\in W(b^m_j)} E^x\bigl(\tau(W(b^m_{j-1},b^m_{j+1})) \mid T(W(b^m_{j+1})) < T(W(b^m_{j-1}))\bigr) - 1\Bigr| < \delta$$

and

32 THE ANNALS OF APPLIED PROBABILITY 8(3), 708–748 (1998)

$$\Bigl|\,c_4(\varepsilon)\varepsilon^2 N_1 b^{-2}\sum_{j=0}^{N_1-1}\inf_{x\in W(b^m_j)} E^x\bigl(\tau(W(b^m_{j-1},b^m_{j+1})) \mid T(W(b^m_{j+1})) > T(W(b^m_{j-1}))\bigr) - 1\Bigr| < \delta \tag{29}$$

hold simultaneously for all $m \in (-N_0, N_0)$ with probability greater than $1-p$ when $\varepsilon < \varepsilon_0$.

Recall the following notation introduced before Lemma 16. We write $R = [1/b^2]$. The number of crossings from the level $x$ to $x+a$ by the process $\mathrm{Re}\,\mathcal{Y}$ before time $t$ is denoted $U(x, x+a, t)$ and we let $\mathcal{N}_j = U(b_j, b_{j+1}, T_R)$. Let

$$\mathcal{Q}_n = \min_{nN_1\le j<(n+1)N_1}\mathcal{N}_j$$

and let $\mathcal{N}^+_j = \mathcal{N}_j - \mathcal{Q}_{n(j)}$ where $n(j)$ is an integer such that $n(j)N_1 \le j < (n(j)+1)N_1$. Then Lemma 16 shows that for small $\varepsilon$,

$$(1-\delta)(N_1/b)^2 \le N_1\sum_{n=-\infty}^{\infty}\mathcal{Q}_n \le \sum_{j=-\infty}^{\infty}\mathcal{N}_j \le (1+\delta)(N_1/b)^2 \tag{30}$$

and so

$$\sum_{j=-\infty}^{\infty}\mathcal{N}^+_j \le \delta(N_1/b)^2 \tag{31}$$

with probability greater than $1-p$.

Let $\widetilde{\mathcal{N}}_j = U(b_j, b_{j-1}, T_R)$ and define $\widetilde{\mathcal{N}}^+_j$ and $\widetilde{\mathcal{Q}}_n$ relative to $\widetilde{\mathcal{N}}_j$ just like $\mathcal{N}^+_j$ and $\mathcal{Q}_n$ have been defined for $\mathcal{N}_j$. It is clear that the inequalities (30) and (31) hold also for $\widetilde{\mathcal{N}}_j$, $\widetilde{\mathcal{N}}^+_j$ and $\widetilde{\mathcal{Q}}_n$ with probability greater than $1-p$.

Suppose $(j_0, j_1, j_2, \ldots)$ is a sequence of integers such that $j_0 = 0$ and $|j_k - j_{k-1}| = 1$ for all $k$. We will denote the event $\{d_k = j_k, \forall k\}$ by $\mathcal{J}$. By conditioning on $\mathcal{J}$ and $X^1$ we have

$$\mathcal{E}(T_R \mid \mathcal{J}, X^1) \ge \mathcal{E}\Bigl(\sum_{j=-\infty}^{\infty}\widetilde{\mathcal{N}}_j\inf_{x\in W(b_j)} E^x\bigl(\tau(W(b_{j-1},b_{j+1})) \mid T(W(b_{j+1})) < T(W(b_{j-1}))\bigr) \Bigm| \mathcal{J}, X^1\Bigr), \tag{32}$$

and similarly

$$\mathcal{E}(T_R \mid \mathcal{J}, X^1) \le \mathcal{E}\Bigl(\sum_{j=-\infty}^{\infty}\widetilde{\mathcal{N}}_j\sup_{x\in W(b_j)} E^x\bigl(\tau(W(b_{j-1},b_{j+1})) \mid T(W(b_{j+1})) < T(W(b_{j-1}))\bigr) \Bigm| \mathcal{J}, X^1\Bigr). \tag{33}$$

Assume that (29) and (30) hold. Then

$$\sum_{j=-\infty}^{\infty}\widetilde{\mathcal{N}}_j\inf_{x\in W(b_j)} E^x\bigl(\tau(W(b_{j-1},b_{j+1})) \mid T(W(b_{j+1})) < T(W(b_{j-1}))\bigr)$$
$$\ge \sum_{n=-\infty}^{\infty}\widetilde{\mathcal{Q}}_n\sum_{nN_1\le j<(n+1)N_1}\inf_{x\in W(b_j)} E^x\bigl(\tau(W(b_{j-1},b_{j+1})) \mid T(W(b_{j+1})) < T(W(b_{j-1}))\bigr) \ge (1-\delta)^2\,c_4^{-1}(\varepsilon)\varepsilon^{-2},$$

by (29) and (30).

A similar estimate can be obtained for the $\mathcal{N}_j$'s. Note that in order to derive the last estimate we had to make assumptions (28), (29) and (30). They all hold simultaneously with probability greater than $1-4p$. Hence, in view of (32),

$$\mathcal{E}(T_R \mid \mathcal{J}, X^1) \ge 2(1-\delta)^2 c_4^{-1}(\varepsilon)\varepsilon^{-2} \ge 2(1-\delta_1)^2 c_4^{-1}(\varepsilon)\varepsilon^{-2} \tag{34}$$

with probability greater than $1-4p$.

Next we derive the opposite inequality. We have

$$\sum_{j=-\infty}^{\infty}\widetilde{\mathcal{N}}_j\sup_{x\in W(b_j)} E^x\bigl(\tau(W(b_{j-1},b_{j+1})) \mid T(W(b_{j+1})) < T(W(b_{j-1}))\bigr)$$
$$\le \sum_{n=-\infty}^{\infty}\widetilde{\mathcal{Q}}_n\sum_{nN_1\le j<(n+1)N_1}\sup_{x\in W(b_j)} E^x\bigl(\tau(W(b_{j-1},b_{j+1})) \mid T(W(b_{j+1})) < T(W(b_{j-1}))\bigr)$$
$$\quad + \sum_{j=-\infty}^{\infty}\widetilde{\mathcal{N}}^+_j\sup_{x\in W(b_j)} E^x\bigl(\tau(W(b_{j-1},b_{j+1})) \mid T(W(b_{j+1})) < T(W(b_{j-1}))\bigr).$$

The first term on the right hand side is bounded by $(1+\delta)^2 c_4^{-1}(\varepsilon)\varepsilon^{-2}$, by (29) and (30). By Chebyshev's inequality applied to the estimate of Lemma 11, with probability greater than $1-p$,

$$\sup_{j}\sup_{x\in W(b_j)} E^x\bigl(\tau(W(b_{j-1},b_{j+1})) \mid T(W(b_{j+1})) < T(W(b_{j-1}))\bigr) \le c_5\,p^{-1}(b/N_1)^2\varepsilon^{-2}.$$

Hence, by (31), the second term is bounded by

$$\delta(N_1/b)^2\,c_5\,p^{-1}(b/N_1)^2\varepsilon^{-2} = c_5\,p^{-1}\delta\,\varepsilon^{-2},$$

and so

$$\sum_{j=-\infty}^{\infty}\widetilde{\mathcal{N}}_j\sup_{x\in W(b_j)} E^x\bigl(\tau(W(b_{j-1},b_{j+1})) \mid T(W(b_{j+1})) < T(W(b_{j-1}))\bigr) \le (1 + c_6 p^{-1}\delta)\,c_4^{-1}(\varepsilon)\varepsilon^{-2}.$$

Recall that $\delta/p < \delta_1$. We again double the estimate to account for the term with the $\mathcal{N}_j$'s in (33) and we obtain

$$\mathcal{E}(T_R \mid \mathcal{J}, X^1) \le 2(1 + c_6 p^{-1}\delta)c_4^{-1}(\varepsilon)\varepsilon^{-2} \le 2(1 + c_6\delta_1)c_4^{-1}(\varepsilon)\varepsilon^{-2} \tag{35}$$

with probability greater than $1-5p$.

Recall that $R = [1/b^2]$ and $b^2/p < \delta_1$. We obtain from Lemma 14

$$\mathcal{E}\,E^x\bigl((T_R - E^x(T_R \mid \mathcal{J}))^2 \mid \mathcal{J}\bigr) \le c_7\,Rb^4\varepsilon^{-4} \le c_8\,b^2\varepsilon^{-4}.$$

It follows that

$$E^x\bigl((T_R - E^x(T_R \mid \mathcal{J}))^2 \mid \mathcal{J}\bigr) \le (1/p)\,c_8\,b^2\varepsilon^{-4} \le c_8\,\delta_1\,\varepsilon^{-4} \tag{36}$$

with probability greater than $1-p$. Recall that $\delta_1$ and $p$ may be chosen arbitrarily close to $0$. The Chebyshev inequality and (34)-(36) show that $(1/2)c_4(\varepsilon)\varepsilon^2 T_R$ converges in probability to $1$ as $\varepsilon \to 0$.

Recall that $\kappa(t)$ is the time-change which turns $f(\mathcal{Y}^*)$ into a reflected Brownian motion in $D$, i.e., $\mathcal{Y}(\kappa(t)) = f(\mathcal{Y}^*_t)$.

Corollary 1. There exist $0 < c_1 \le c_2 < \infty$ and $c_3(\varepsilon), c_4(\varepsilon) \in (c_1, c_2)$ such that for every $t > 0$ the random variable $c_3(\varepsilon)\varepsilon^2\kappa\bigl(c_4(\varepsilon)\varepsilon^{-6}t\bigr)$ converges in probability to $t$ as $\varepsilon \to 0$.

Proof. We will discuss only the case $t = 1$. Fix arbitrarily small $\delta, p > 0$.

In view of Lemma 17 it will suffice to show that $c_3(\varepsilon)\varepsilon^2\kappa\bigl(c_4(\varepsilon)\varepsilon^{-6}\bigr)$ has the same limit as $(1/2)c_2(\varepsilon)\varepsilon^2 T_R$. Estimate (27) in the proof of Lemma 16 shows that for arbitrarily small $\eta > 0$ we have

$$\kappa\bigl((1-\delta)c_4(\varepsilon)\varepsilon^{-6}\bigr) \le T_R \le \kappa\bigl((1+\delta)c_4(\varepsilon)\varepsilon^{-6}\bigr)$$

with probability greater than $1-p$ provided $\varepsilon$ is small. A similar argument shows that

$$\kappa\bigl(c_4(\varepsilon)\varepsilon^{-6}\bigr) \le (1-\delta)^{-1}T_R \tag{37}$$

and

$$(1+\delta)^{-1}T_R \le \kappa\bigl(c_4(\varepsilon)\varepsilon^{-6}\bigr) \tag{38}$$

with probability greater than $1-p$ provided $\varepsilon$ is small. By Lemma 17, $(1/2)c_2(\varepsilon)\varepsilon^2(1-\delta)^{-1}T_R$ and $(1/2)c_2(\varepsilon)\varepsilon^2(1+\delta)^{-1}T_R$ converge to $(1-\delta)^{-1}$ and $(1+\delta)^{-1}$, resp. This and (37)-(38) prove the corollary.

Proof of Theorem 1. Fix arbitrarily small $\delta, p > 0$. Let $c_1 \in (1, \infty)$ be so large that for the 1-dimensional Brownian motion $Y^*$ starting from 0 and any $t > 0$ we have
$$
P\Big(\sup_{0 \le s \le t} |Y^*_s| \ge c_1\sqrt{t}\Big) < p.
$$

Let $s = \kappa(t)$. Then $f(Y^*_t) = Y(\kappa(t)) = Y_s$. Now consider the following transformation of the trajectories $(t, \operatorname{Re} Y^*_t)$ and $(s, \operatorname{Re} Y_s)$:
$$
(t, \operatorname{Re} Y^*_t) \longmapsto \big(c_4^2(\varepsilon)\,\varepsilon^6\,t,\; c_4(\varepsilon)\,\varepsilon^3\operatorname{Re} Y^*_t\big) \overset{df}{=} \big(\psi(t), \Psi(\operatorname{Re} Y^*_t)\big),
$$
$$
(s, \operatorname{Re} Y_s) \longmapsto \big(c_3(\varepsilon)\,\varepsilon^2\,s,\; \operatorname{Re} Y_s\big) \overset{df}{=} \big(\varphi(s), \operatorname{Re} Y_s\big).
$$
Assume that all statements (i)–(iv) hold true. We see from (i) that
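The normalization in $(\psi, \Psi)$ is exactly Brownian scaling: space is multiplied by $a = c_4(\varepsilon)\,\varepsilon^3$ and time by $a^2 = c_4^2(\varepsilon)\,\varepsilon^6$. We record the standard scaling identity (stated here for the reader's convenience, not part of the original argument): for a Brownian motion $B$ and any $a > 0$,
$$
\big\{\,a\,B_{a^{-2} u}\,\big\}_{u \ge 0} \;\overset{d}{=}\; \{B_u\}_{u \ge 0},
\qquad \text{here } a = c_4(\varepsilon)\,\varepsilon^3 .
$$
Applied to the Brownian motion $\operatorname{Re} Y^*$, this is why the rescaled trajectory $(\psi(t), \Psi(\operatorname{Re} Y^*_t))$ is again the trajectory of a standard Brownian motion, a fact used at the end of the proof below.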

$$
|\psi(t) - \varphi(s)| = |\psi(t) - \varphi(\kappa(t))| = \big|c_4^2(\varepsilon)\,\varepsilon^6\,t - c_3(\varepsilon)\,\varepsilon^2\,\kappa(t)\big| < \delta
$$
for all $t$ of the form $c_4^{-2}(\varepsilon)\,\varepsilon^{-6}\,j/N_0$, $j = -2N_0, \dots, 2N_0$. Since $1/N_0 < \delta$ and $\varphi(\kappa(t))$ is non-decreasing, we have
$$
|\psi(t) - \varphi(s)| \le 2\delta
$$
for all $t \in [-2\,c_4^{-2}(\varepsilon)\,\varepsilon^{-6},\; 2\,c_4^{-2}(\varepsilon)\,\varepsilon^{-6}]$. It follows that
$$
|\psi(t) - \varphi(s)| \le 2\delta \tag{39}
$$
if $\psi(t) \in [-1, 1]$ or $\varphi(s) \in [-1, 1]$.

Property (iii) implies that $K^+(0, b + c_2\varepsilon) < (b+\delta)\,c_4^{-1}(\varepsilon)\,\varepsilon^{-3}$ and so $K^+(0, b - \delta + c_2\varepsilon) < b\,c_4^{-1}(\varepsilon)\,\varepsilon^{-3}$ for $b < 3c_2/2$. For similar reasons, $K^-(0, b + \delta - c_2\varepsilon) > b\,c_4^{-1}(\varepsilon)\,\varepsilon^{-3}$. Since the diameter of a cell is bounded by $c_2\varepsilon$ we must have $M(b\,c_4^{-1}(\varepsilon)\,\varepsilon^{-3}) \in W(b-\delta, b+\delta)$. Property (iv) yields the same conclusion for negative $b$ bounded below by $-3c_2/2$. Consider a $t \in (0, 2\,c_4^{-2}(\varepsilon)\,\varepsilon^{-6})$. Then, according to (ii), there exists $b \in (-c_2\sqrt{2}, c_2\sqrt{2})$ such that $\operatorname{Re} Y^*_t = b\,c_4^{-1}(\varepsilon)\,\varepsilon^{-3}$. Since $f(Y^*_t) \in M(b\,c_4^{-1}(\varepsilon)\,\varepsilon^{-3})$ we have $|\operatorname{Re} f(Y^*_t) - b| \le \delta$. This implies that
$$
\big|\operatorname{Re} Y_{\kappa(t)} - \Psi(\operatorname{Re} Y^*_t)\big| = \big|\operatorname{Re} f(Y^*_t) - b\big| \le \delta. \tag{40}
$$
We have shown in (39) and (40) that for small $\varepsilon$ there exists, with probability greater than $1-p$, a transformation of paths
$$
\big(\psi(t), \Psi(\operatorname{Re} Y^*_t)\big) \longmapsto \big(\varphi(s), \operatorname{Re} Y_s\big) = \big(\varphi(\kappa(t)), \operatorname{Re} Y(\kappa(t))\big) \overset{df}{=} \big(\theta(\psi(t)), \Theta(\Psi(\operatorname{Re} Y^*_t))\big)
$$
with the property that $\theta$ is increasing, $|\theta(u) - u| < 2\delta$ and $|\Theta(r) - r| < \delta$ for $u \in [0, 1]$, $r \in \mathbb{R}$. A similar argument would give a transformation which would work on any finite interval. Note that $(\psi(t), \Psi(\operatorname{Re} Y^*_t))$ is the trajectory of a Brownian motion independent of $X^1$ and so we may assume that this is the trajectory of $X^2$ (in other words, we may construct $Y^*$ so that $\Psi(\operatorname{Re} Y^*_t) = X^2_t$). The constants $\delta$ and $p$ may be chosen arbitrarily small, so the processes with trajectories $(c_3(\varepsilon)\,\varepsilon^2\,s, \operatorname{Re} Y_s)$ converge in probability to the Brownian motion $X^2$ as $\varepsilon \to 0$ in a metric corresponding to the Skorohod topology. Since the processes are continuous, the convergence holds also in the uniform topology on compact sets.
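The limit object identified in the introduction, iterated Brownian motion $H_t = X^1(X^2_t)$ built from a two-sided Brownian motion $X^1$ and an independent one-sided Brownian motion $X^2$, is easy to sample numerically. The following sketch is only an illustration of the definition (the function names, step counts and interpolation scheme are our own choices, not part of the paper):

```python
import numpy as np

def two_sided_bm(n, t_max, rng):
    """Discrete two-sided Brownian path on [-t_max, t_max] (2n steps)."""
    dt = t_max / n
    inc = rng.normal(0.0, np.sqrt(dt), size=2 * n)
    pos = np.concatenate(([0.0], np.cumsum(inc[:n])))   # branch for u >= 0
    neg = np.concatenate(([0.0], np.cumsum(inc[n:])))   # independent branch for u < 0
    grid = np.linspace(-t_max, t_max, 2 * n + 1)
    path = np.concatenate((neg[::-1], pos[1:]))         # glue the two branches at u = 0
    return grid, path

def iterated_bm(n, t_max, rng):
    """Sample H_t = X1(X2_t) on [0, t_max]: iterated Brownian motion."""
    dt = t_max / n
    # X2: ordinary Brownian motion; X1 must be two-sided since X2 changes sign.
    x2 = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))))
    span = max(1.0, 1.5 * float(np.abs(x2).max()))      # X1 horizon covering the range of X2
    grid, x1 = two_sided_bm(n, span, rng)
    t = np.linspace(0.0, t_max, n + 1)
    return t, np.interp(x2, grid, x1)                   # H_t = X1(X2_t) by linear interpolation

rng = np.random.default_rng(1)
t, h = iterated_bm(20_000, 1.0, rng)
```

Plotting `h` against `t` shows the characteristic rougher-than-Brownian paths of IBM (Hölder exponent $1/4-$ rather than $1/2-$), the process that Theorem 1 obtains as the scaling limit of the crack-wise component of the reflected Brownian motion.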


REFERENCES

1. Barlow, M.T. (1990). Random walks and diffusions on fractals. Proceedings of the International Congress of Mathematicians, Kyoto, Japan, vol. II, Mathematical Society of Japan, Springer, Tokyo, New York, 1025–1035.
2. Barlow, M.T. and Bass, R.F. (1993). Coupling and Harnack inequalities for Sierpiński carpets. Bull. Amer. Math. Soc. 29, 208–212.
3. Barlow, M.T. and Bass, R.F. (1997). Brownian motion and harmonic analysis on Sierpinski carpets (preprint).
4. Barlow, M.T. and Perkins, E.A. (1988). Brownian motion on the Sierpiński gasket. Prob. Th. Rel. Fields 79, 543–623.
5. Bass, R.F. (1987). $L^p$ inequalities for functionals of Brownian motion. Séminaire de Probabilités XXI, 206–217, Lecture Notes in Math. 1247, Springer, Berlin-New York.
6. Burdzy, K. (1993). Some path properties of iterated Brownian motion. In Seminar on Stochastic Processes 1992 (E. Çinlar, K.L. Chung and M. Sharpe, eds.), Birkhäuser, Boston, 67–87.
7. Burdzy, K. (1994). Variation of iterated Brownian motion. In Workshop and Conference on Measure-valued Processes, Stochastic Partial Differential Equations and Interacting Systems, CRM Proceedings and Lecture Notes 5, 35–53.
8. Burdzy, K., Toby, E. and Williams, R.J. (1989). On Brownian excursions in Lipschitz domains. Part II. Local asymptotic distributions. In Seminar on Stochastic Processes 1988 (E. Çinlar, K.L. Chung, R. Getoor and J. Glover, eds.), Birkhäuser, Boston, 55–85.
9. Chen, Z.-Q. (1992). Pseudo Jordan domains and reflecting Brownian motions. Prob. Th. Rel. Fields 94, 271–280.
10. Chudnovsky, A. and Kunin, B. (1987). A probabilistic model of brittle crack formation. J. Appl. Phys. 62, 4124–4129.
11. Fitzsimmons, P.J. (1989). Time changes of symmetric Markov processes and a Feynman-Kac formula. J. Theoretical Prob. 2, 485–501.
12. Fukushima, M., Oshima, Y. and Takeda, M. (1994). Dirichlet Forms and Symmetric Markov Processes. Walter de Gruyter, New York.
13. Goldstein, S. (1987). Random walks and diffusion on fractals. In Percolation Theory and Ergodic Theory of Infinite Particle Systems (H. Kesten, ed.), IMA Vol. Math. Appl. 8, Springer, New York, 121–129.
14. Karatzas, I. and Shreve, S. (1988). Brownian Motion and Stochastic Calculus. Springer, New York.
15. Khoshnevisan, D. and Lewis, T.M. (1997). Stochastic calculus for Brownian motion on a Brownian fracture (preprint).
16. Kunin, B. and Gorelik, M. (1991). On representation of fracture profiles by fractional integrals of a fractional Brownian motion. J. Appl. Phys. 70, 7651–7653.
17. Kusuoka, S. (1985). A diffusion process on a fractal. In Probabilistic Methods in Mathematical Physics. Proceedings of the Taniguchi International Symposium (K. Itô and N. Ikeda, eds.), Academic Press, New York, 251–274.
18. Silverstein, M.L. (1974). Symmetric Markov Processes. Lecture Notes in Mathematics 426, Springer, New York.


Krzysztof Burdzy
Department of Mathematics
University of Washington
Seattle, WA 98195

Davar Khoshnevisan
Department of Mathematics
University of Utah
Salt Lake City, UT 84112
