arXiv:1608.02987v3 [math.PR] 1 Sep 2018

Four dimensional loop-erased random walk

Gregory F. Lawler          Xin Sun                Wei Wu
University of Chicago      Columbia University    University of Warwick

Abstract

The loop-erased random walk (LERW) in Z^4 is the process obtained by erasing loops chronologically from a simple random walk. We prove that the escape probability of the LERW, renormalized by (log n)^{1/3}, converges almost surely and in L^p for all p > 0. Along the way, we extend previous results by the first author, building on slowly recurrent sets. We provide two applications for the escape probability: we construct the two-sided LERW, and we construct a ±1 spin model coupled with the wired spanning forests on Z^4, with the bi-Laplacian Gaussian field on R^4 as its scaling limit, a result that goes beyond the scaling limit result.

1 Introduction

Loop-erased random walk (LERW) is a probability measure on self-avoiding paths introduced by the first author of this paper in [4]. Since then LERW has become an important model in statistical physics and probability, with close connections to other important subjects such as the uniform spanning tree and the Schramm-Loewner evolution. A key quantity that governs the large scale behavior of LERW is the so-called escape probability, namely, the non-intersection probability of a LERW and a simple random walk (SRW) starting at the same point. It is known that d = 4 is critical for LERW, in the sense that a LERW and an SRW on Z^d intersect a.s. if and only if d ≤ 4. It was shown in [7] that LERW on Z^4 has Brownian motion as its scaling limit after proper normalization. The exact normalization was conjectured but not proved in that paper; in [9] it was determined up to multiplicative constants. The argument uses a weak version of a "mean-field" property for LERW on Z^4. In this paper, we establish the sharp mean-field property for the escape probability of 4D LERW.

We state our main results for the renormalized escape probability of 4D LERW in Section 1.1. An outline of the proofs is given in Section 1.2. Then in Sections 1.3 and 1.4 we discuss two applications of the main results, namely a construction of the two-sided LERW in Section 1.3, and a spin field coupled with the wired spanning forests on Z^4, with the bi-Laplacian Gaussian field on R^4 as its scaling limit, in Section 1.4.

1.1 Escape probability of LERW
Given a positive integer d, a process S = {S_n}_{n∈N} on Z^d is called a simple random walk (SRW) on Z^d if {S_{n+1} − S_n}_{n∈N} are i.i.d. random variables taking the uniform distribution on {z ∈ Z^d : |z| = 1}. Here |·| is the Euclidean norm on R^d. Unless otherwise stated, our SRW starts at the origin, namely, S_0 = 0. When S_0 = x almost surely, we denote the probability measure of S by P^x.

A path on Z^d is a sequence of vertices such that any two consecutive vertices are neighbors in Z^d. Given a sample S of SRW and m < n ∈ N, let S[m, n] and S[n, ∞) be the paths [S_m, S_{m+1}, ..., S_n] and [S_n, S_{n+1}, ...] respectively. Given a finite path P = [v_0, v_1, ..., v_n] on Z^d, the (forward) loop erasure of P (denoted by LE(P)) is defined by erasing cycles in P chronologically. More precisely, we define LE(P) inductively as follows. The first vertex u_0 of LE(P) is the vertex v_0 of P. Supposing that u_j has been set, let k be the last index such that v_k = u_j. Set u_{j+1} = v_{k+1} if k < n; otherwise, let LE(P) := [u_0, ..., u_j]. Suppose S is an SRW on Z^d (d ≥ 3). Since S is transient, there is no trouble defining LE(S) = LE(S[0, ∞)), which we call the loop-erased random walk (LERW) on Z^d. LERW on Z^2 can be defined via a limiting procedure but we will not discuss it in this paper.

Let W and S be two independent simple random walks on Z^4 starting at the origin and η = LE(S). Let
X_n = (log n)^{1/3} P{ W[1, n^2] ∩ η = ∅ | η }.

In [9], building on the work on slowly recurrent sets [8], the first author of this paper proved that E[X_n^p] ≍ 1 for all p > 0. In this paper, we show that

Theorem 1.1. There exists a nontrivial random variable X_∞ such that

lim_{n→∞} X_n = X_∞ almost surely and in L^p for all p > 0.
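The chronological loop erasure LE(P) defined above is directly algorithmic. The following minimal sketch is our own illustration (the function name `loop_erase` and the use of hashable vertices are our choices, not notation from the paper):

```python
def loop_erase(path):
    """Forward (chronological) loop erasure of a finite path.

    path: list of hashable vertices [v0, v1, ..., vn], consecutive
    entries assumed to be lattice neighbors.  Returns LE(path),
    which is self-avoiding.
    """
    if not path:
        return []
    # last[v] = last index at which vertex v occurs in the path
    last = {}
    for i, v in enumerate(path):
        last[v] = i
    erased = [path[0]]
    k = last[path[0]]          # skip everything up to the last visit of v0
    while k < len(path) - 1:
        v = path[k + 1]        # u_{j+1} = v_{k+1}
        erased.append(v)
        k = last[v]            # jump to the last visit of u_{j+1}
    return erased


# The walk 0 -> 1 -> 2 -> 1 -> 3 on Z: the loop 1 -> 2 -> 1 is erased.
print(loop_erase([0, 1, 2, 1, 3]))  # [0, 1, 3]
```

Tracking only the last visit of each vertex implements the inductive definition exactly: from u_j the algorithm jumps to the final occurrence of u_j and continues from the next step of the walk.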
We can view X_∞ as the renormalized escape probability of 4D LERW at its starting point. It is the key for our construction of the 4D two-sided LERW in Section 1.3. Our next theorem is similar to Theorem 1.1 with the additional feature of the evaluation of the limiting constant.

Theorem 1.2. Let W, W', W'', S be four independent simple random walks on Z^4 starting from the origin and η = LE(S). Then

lim_{n→∞} (log n) P{ (W[1, n^2] ∪ W'[1, n^2]) ∩ η = ∅, W''[0, n^2] ∩ η = {0} } = π^2/24.

Write π^2/24 in Theorem 1.2 as (1/3) · (π^2/8). We will see that the constant 1/3 is universal and is the reciprocal of the number of SRW's other than S. The factor π^2/8 comes from the bi-harmonic Green function of Z^4 evaluated at (0, 0) and is lattice-dependent. The SRW analog of Theorem 1.2 is proved in [10, Corollary 4.2.5]:

lim_{n→∞} (log n) P{ W[1, n^2] ∩ S[0, n^2] = ∅, W'[0, n^2] ∩ S[1, n^2] = ∅ } = (1/2) · (π^2/8).
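The quantity X_n can be estimated by brute-force simulation. The sketch below is our own illustration, not a procedure from the paper: it truncates the infinite walk S after finitely many steps as a crude stand-in for η = LE(S[0, ∞)), and uses a small n, so the numbers are only heuristic.

```python
import math
import random

rng = random.Random(2024)

def srw_step(x, rng):
    """One step of SRW on Z^4 from the point x (a 4-tuple)."""
    i = rng.randrange(4)
    s = rng.choice((-1, 1))
    return x[:i] + (x[i] + s,) + x[i + 1:]

def srw_path(steps, rng, start=(0, 0, 0, 0)):
    path = [start]
    for _ in range(steps):
        path.append(srw_step(path[-1], rng))
    return path

def loop_erase(path):
    """Chronological loop erasure (Section 1.1)."""
    last = {v: i for i, v in enumerate(path)}
    out, k = [path[0]], last[path[0]]
    while k < len(path) - 1:
        v = path[k + 1]
        out.append(v)
        k = last[v]
    return out

n = 6
# Stand-in for eta = LE(S[0, infinity)): loop-erase a walk much longer than n^2.
eta = set(loop_erase(srw_path(20 * n * n, rng)))

trials = 1000
avoided = sum(
    all(w not in eta for w in srw_path(n * n, rng)[1:])  # W[1, n^2] misses eta
    for _ in range(trials)
)
p_hat = avoided / trials                  # estimates P{W[1, n^2] cap eta = empty | eta}
X_hat = (math.log(n)) ** (1 / 3) * p_hat  # the renormalized escape probability X_n
print(p_hat, X_hat)
```

Since the estimate is conditional on the sampled η, different seeds give different values of p_hat; Theorem 1.1 concerns the n → ∞ behavior of the exact conditional probability, which this sketch only gestures at.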
Theorems 1.1 and 1.2 are special cases of our Theorem 1.5, whose proof is outlined in Section 1.2. In particular, the asymptotic result is obtained from a refined analysis of slowly recurrent sets beyond [8, 9] as well as fine estimates on the harmonic measure of 4D LERW. The explicit constant π^2/24 is obtained from a "first passage" path decomposition of the intersection of an SRW and a LERW. Here care is needed because there are several time scales involved. See Section 1.2 for an outline. As a byproduct, at the end of Section 5.2 we obtain an asymptotic result on the long range intersection between SRW and LERW which is of independent interest. To state the result, we recall the Green function on Z^4 defined by
G(x, y) = Σ_{n=0}^∞ P^x[S_n = y].

Given a subset A ⊂ Z^4, the Green function on A is defined by

G_A(x, y) = Σ_{n=0}^∞ P^x[S_n = y, S[0, n] ⊂ A].

It will be technically easier to work on geometric scales. Let C_n = {z ∈ Z^4 : |z| < e^n} be the discrete disk, G_n = G_{C_n}, and
G_n^2(w) = Σ_{z∈C_n} G_n(0, z) G_n(z, w).

Theorem 1.3. Let W, S be independent simple random walks on Z^4 with W_0 = 0 and S_0 = w. Let σ_n^W = min{j : W_j ∉ C_n} and σ_n = min{j : S_j ∉ C_n}. If

q_n(w) = P{ W[0, σ_n^W] ∩ LE(S[0, σ_n]) ≠ ∅ },

then

lim_{n→∞} max_{n^{-1} ≤ e^{-n}|w| ≤ 1−n^{-1}} | n q_n(w) − (π^2/24) G_n^2(w) | = 0.
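The Green function G_A defined above can be computed exactly on a small set A by solving a linear system, since G_A = (I − P_A)^{-1}, where P_A is the transition matrix of the walk restricted to A. The example below is our own illustration in one dimension, where the matrices stay tiny (on Z^4 the same recipe applies with larger matrices); for A = {1, 2} ⊂ Z the values G_A(1,1) = 4/3 and G_A(1,2) = 2/3 can be checked by hand.

```python
def mat_inv(M):
    """Invert a small square matrix by Gauss-Jordan elimination."""
    m = len(M)
    aug = [list(row) + [1.0 if i == j else 0.0 for j in range(m)]
           for i, row in enumerate(M)]
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        for r in range(m):
            if r != col:
                f = aug[r][col]
                aug[r] = [a - f * b for a, b in zip(aug[r], aug[col])]
    return [row[m:] for row in aug]

def green_function(states, step_prob):
    """G_A = (I - P_A)^{-1}: expected visits to y from x before leaving A."""
    m = len(states)
    IP = [[(1.0 if i == j else 0.0) - step_prob(x, y)
           for j, y in enumerate(states)] for i, x in enumerate(states)]
    return mat_inv(IP)

# SRW on Z restricted to A = {1, 2}; by hand G_A(1,1) = 4/3, G_A(1,2) = 2/3.
step = lambda x, y: 0.5 if abs(x - y) == 1 else 0.0
G = green_function([1, 2], step)
print(G[0][0], G[0][1])  # G_A(1,1) = 4/3, G_A(1,2) = 2/3
```

The identity G_A = (I − P_A)^{-1} is just the summed geometric series of P_A, i.e. the defining series Σ_n P^x[S_n = y, S[0,n] ⊂ A] in matrix form.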
Remark 1.4. Theorem 1.3 holds if W[0, σ_n^W] ∩ LE(S[0, σ_n]) is replaced by W[0, σ_n^W] ∩ S[0, σ_n] and π^2/24 is replaced by π^2/16. This is the long range estimate for two independent SRW's in [10, Section 4.3]. The function G_n^2(w) is the expected number of intersections of S[0, σ_n] and W[0, σ_n^W]. This means that the long-range non-intersection probability of an SRW and an independent LERW is comparable with that of two independent SRW's. This is closely related to the fact that the scaling limit of LERW on Z^4 is Brownian motion, that is, has Gaussian limits.
1.2 Outline of the proof

Here we state and give an outline for the proof of Theorem 1.5, from which Theorems 1.1 and 1.2 are immediate corollaries. The details for the proof are given in Sections 2-5. We start by defining some notation. Let

σ_n = min{ j ≥ 0 : S_j ∉ C_n }

and

F_n be the σ-algebra generated by {S_j : j ≤ σ_n}.  (1)

We recall that there exist 0 < β, c < ∞ such that for all n, if z ∈ C_{n−1} and a ≥ 1,

P^z{ a^{-1} e^{2n} ≤ σ_n ≤ a e^{2n} } ≥ 1 − c e^{−βa}.  (2)

For the lower inequality, see, e.g., [12, (12.12)], and the upper inequality follows from the fact that P^z{ σ_n ≤ (k+1) e^{2n} | σ_n ≥ k e^{2n} } is uniformly bounded away from 0. If x ∈ Z^4, V ⊂ Z^4, we write

H(x, V) = P^x{ S[0, ∞) ∩ V ≠ ∅ },   H(V) = H(0, V),   Es(V) = 1 − H(V),

H̄(x, V) = P^x{ S[1, ∞) ∩ V ≠ ∅ },   Ēs(V) = 1 − H̄(0, V).

Note that Es(V) = Ēs(V) if 0 ∉ V. If 0 ∈ V, a standard last-exit decomposition shows that

Es(V^0) = G_{Z^4\V^0}(0, 0) Ēs(V),  (3)

where V^0 = V \ {0} and G_{Z^4\V^0} is the Green's function on Z^4 \ V^0. We also write

Es(V; n) = P{ S[1, σ_n] ∩ V = ∅ },

which is clearly decreasing in n. We have to be a little careful about the definition of the loop-erasure of the random walk and the loop-erasures of subpaths of the walk. We will use the following notations.
• η denotes the (forward) loop-erasure of S[0, ∞) and Γ = η[1, ∞) = η[0, ∞) \ {0}.

• ω_n denotes the finite random walk path S[σ_{n−1}, σ_n].

• η^n = LE(ω_n) denotes the loop-erasure of S[σ_{n−1}, σ_n].

• Γ_n = LE(S[0, σ_n]) \ {0}, that is, Γ_n is the loop-erasure of S[0, σ_n] with the origin removed.
Note that S[1, ∞) is the concatenation of the paths ω_1, ω_2, .... However, it is not true that Γ is the concatenation of η^1, η^2, ..., and that is one of the technical issues that must be addressed in the proof. Let Y_n, Z_n, G_n be the F_n-measurable random variables

Y_n = H(η^n),   Z_n = Es[Γ_n],   G_n = G_{Z^4\Γ_n}(0, 0).  (4)
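The quantities H and Es are hitting probabilities, and on a finite region they solve a discrete Dirichlet problem: h = 1 on V, h = 0 off a large box, and h harmonic in between. The toy computation below is our own two-dimensional, finite-box illustration of H(x, V) (in the paper the walk is on Z^4 and the relevant region is C_n); it also exhibits the subadditivity H(A ∪ B) ≤ H(A) + H(B) that is used repeatedly later.

```python
def hit_prob(V, L, x, iters=1500):
    """P^x{ SRW on Z^2 hits V before leaving the box [-L, L]^2 }.

    Solves the discrete Dirichlet problem (h = 1 on V, h = 0 outside
    the box, h harmonic in between) by Jacobi iteration.  A finite-box
    stand-in for the paper's H(x, V).
    """
    box = [(i, j) for i in range(-L, L + 1) for j in range(-L, L + 1)]
    h = {z: 1.0 if z in V else 0.0 for z in box}
    for _ in range(iters):
        h = {
            z: 1.0 if z in V else 0.25 * sum(
                h.get((z[0] + di, z[1] + dj), 0.0)
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
            )
            for z in box
        }
    return h[x]

A, B, x = {(3, 0)}, {(0, 3)}, (0, 0)
hA = hit_prob(A, 5, x)
hB = hit_prob(B, 5, x)
hAB = hit_prob(A | B, 5, x)
print(hA, hB, hAB)  # subadditivity: hAB <= hA + hB
```

Both the monotonicity H(A ∪ B) ≥ H(A) and the subadditivity hold at every Jacobi sweep, not only in the limit, since the update is monotone in the boundary data.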
By (3), we have Ēs(Γ_n ∪ {0}) = G_n^{-1} Z_n. It is easy to see that 1 ≤ G_n ≤ 8. Furthermore, using the transience of S, we can see that with probability one G_∞ := lim_{n→∞} G_n exists and equals G_{Z^4\Γ}(0, 0).

Theorem 1.5. For every 0 ≤ r, s < ∞, there exists 0 < c_{r,s} < ∞ such that

E[ Z_n^r G_n^{-s} ] ∼ c_{r,s} φ_n^r,   n → ∞.

Moreover, c_{3,2} = π^2/24.

Our methods do not compute the constant c_{r,s} except in the case r = 3, s = 2 (and the trivial case r = s = 0). The proof of Theorem 1.5, which is the technical bulk of this paper, requires several steps which we will outline now. For the remainder of this paper we fix r > 0 and allow constants to depend on r. If n ∈ N, we let

p_n = E[Z_n^r],   p̂_n = E[Z_n^3 G_n^{-2}],  (5)

h_n = E[Y_n] = E[H(η^n)],   φ_n = Π_{j=1}^n e^{−h_j}.  (6)

In Section 2.2, we review and prove some basic estimates on simple random walk that, in particular, give h_n = O(n^{-1}), and hence,

φ_n = φ_{n−1} e^{−h_n} = φ_{n−1} (1 + O(n^{-1})).

In Section 3.1, we revisit the theory of slowly recurrent sets in [8, 9] and obtain quantitative estimates on the escape probability of slowly recurrent sets under a mild assumption (see Definition 3.1). Using these estimates, we prove two propositions in Section 3.2. The first one controls p_n/p_{n+1}:

Proposition 1.6. p_{n+1} = p_n (1 − O(log^4 n / n)).

The second one gives a good estimate along the subsequence {n^4}. Let η̃_n denote the (forward) loop-erasure of S[σ_{(n−1)^4+(n−1)}, σ_{n^4−n}]. For m < n we let A(m, n) be the discrete annulus defined by

A(m, n) = C_n \ C_m = { z ∈ C_n : |z| ≥ e^m }.

Let Γ̃_n = η̃_n ∩ A((n−1)^4 + 4(n−1), n^4 − 4n) and h̃_n = E[ H(Γ̃_n) ].

Proposition 1.7. There exists c_0 < ∞ such that

p_{n^4} = ( c_0 + O(n^{-1}) ) exp{ −r Σ_{j=1}^n h̃_j }.

In Section 4, in order to get rid of the subsequence {n^4}, we prove

Proposition 1.8. There exist c < ∞, u > 0 such that

| h̃_n − Σ_{(n−1)^4 < j ≤ n^4} h_j | ≤ c / n^{1+u}.

1.3 Two-sided LERW

In [5], the first author proved the existence of the two-sided loop-erased random walk in Z^d for d ≥ 5.

Theorem 1.10 ([5]). Given d ≥ 5, consider the sample of LERW in Z^d, denoted by {η_i}_{i≥0}.
The n → ∞ limit of {η_{n+i} − η_n}_{−k≤i≤k} exists for any k ∈ N, which defines an ergodic random path {η̃_i}_{i∈Z} in Z^d called the two-sided LERW.

The proof of Theorem 1.10 crucially relies on the existence of global cut points for SRW in Z^d for d ≥ 5, which is not true for d ≤ 4. As an application of the results in Section 1.1, we extend the existence of the two-sided LERW to d = 4 in Section 6. Moreover, X_∞ in Theorem 1.1 is the Radon-Nikodym derivative between the two-sided LERW restricted to non-negative times and the usual LERW. The existence for d = 2, 3 was recently established by the first author in [11]. A big difference in the d < 4 case compared to the d ≥ 4 case is that the marginal distribution of one side of the path is not absolutely continuous with respect to the usual LERW.

Our results address the d = 4 case of Conjecture 15.12 in [2] by Benjamini-Lyons-Peres-Schramm, which asserts the existence of the two-sided uniform spanning tree in Z^d. This is immediate from Wilson's algorithm [16] that connects LERW and the uniform spanning tree (see Section 7.1).

1.4 A spin field from USF

As an application of Theorem 1.3, we will construct a sequence of random fields on the integer lattice Z^d (d ≥ 4) using the uniform spanning tree and show that they converge in distribution to the bi-Laplacian field (Theorem 1.11).

For each positive integer n, let N = N_n = n(log n)^{1/4}. Let A_N = { x ∈ Z^d : |x| < N }. We will construct a ±1 valued random field on A_N as follows. Recall that a wired spanning tree on A_N is a tree on the graph A_N ∪ {∂A_N} where we have viewed the boundary ∂A_N as "wired" to a single point. Such a tree produces a spanning forest on A_N by removing the edges connected to ∂A_N. We define the uniform spanning forest (USF) on A_N to be the forest obtained by choosing the wired spanning tree of A_N ∪ {∂A_N} from the uniform distribution. (Note this is not the same thing as choosing a spanning forest uniformly among all spanning forests of A_N.)
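Wilson's algorithm [16], invoked above via the LERW-UST connection, samples the wired spanning tree directly from loop-erased random walks: walks from successive vertices are run until they hit the current tree, and their loop erasures are attached. The sketch below is our own minimal implementation on a small two-dimensional box with the whole boundary collapsed to one wired vertex; it then produces the spanning forest on the box by deleting the edges to the boundary, and assigns one fair ±1 coin per component as in the spin-field construction.

```python
import random

def loop_erase(path):
    """Chronological loop erasure, as in Section 1.1."""
    last = {v: i for i, v in enumerate(path)}
    out, k = [path[0]], last[path[0]]
    while k < len(path) - 1:
        v = path[k + 1]
        out.append(v)
        k = last[v]
    return out

WIRED = "wired"  # the boundary of the box, collapsed to one vertex

def wilson_wired_tree(L, rng):
    """Wired spanning tree of A = [-L, L]^2 via Wilson's algorithm."""
    A = [(i, j) for i in range(-L, L + 1) for j in range(-L, L + 1)]
    box = set(A)

    def step(v):
        i, j = v
        w = rng.choice([(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)])
        return w if w in box else WIRED

    parent, in_tree = {}, {WIRED}
    for v in A:
        if v in in_tree:
            continue
        walk = [v]
        while walk[-1] not in in_tree:
            walk.append(step(walk[-1]))
        branch = loop_erase(walk)          # self-avoiding; ends in the tree
        for a, b in zip(branch, branch[1:]):
            parent[a] = b
            in_tree.add(a)
    return parent

rng = random.Random(7)
parent = wilson_wired_tree(3, rng)

# USF on A: drop the edges attached to the wired vertex ...
forest_edges = [(v, p) for v, p in parent.items() if p != WIRED]

# ... then one fair +/-1 coin per forest component (the spin field Y).
root = {v: v for v in parent}
def find(v):
    while root[v] != v:
        root[v] = root[root[v]]
        v = root[v]
    return v
for a, b in forest_edges:
    root[find(a)] = find(b)
coin = {}
Y = {v: coin.setdefault(find(v), rng.choice((-1, 1))) for v in parent}
print(len(parent), len(forest_edges))
```

Whatever the random outcome, the output is a spanning tree of the wired graph, so the assertions below on the tree structure and on the spins being constant per component hold for every seed.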
We now define the random field on (a rescaling of) Z^d. Let a_n be a sequence of positive numbers (we will be more precise later).

• Choose a USF on A_N. This partitions A_N into (connected) components.

• For each component of the forest, flip a fair coin and assign each vertex in the component the value 1 or −1 based on the outcome. This gives a field of spins {Y_{x,n} : x ∈ A_N}. If we wish, we can extend this to a field on x ∈ Z^d by setting Y_{x,n} = 0 for x ∉ A_N.

• Let φ_n(x) = a_n Y_{nx,n}, which is a field defined on L_n := n^{-1} Z^d.

This random function is constructed in a manner similar to the Edwards-Sokal coupling of the FK-Ising model [3]. That coupling says that we can obtain the Ising model on Z^d by first sampling a random configuration ω ∈ {0, 1}^{E(Z^d)} according to the so-called random cluster measure, and then flipping a fair coin and assigning each component of ω the value 1 or −1 based on the outcome. The way we construct φ_n is similar to the Ising model except that we replace the random cluster measure by the USF measure on Z^d.

It is known that the Ising model has critical dimension d = 4, in the sense that mean field critical behaviors are expected for d ≥ 4 but not for d ≤ 3. In particular, it is believed that when d ≥ 4 the scaling limit of the Ising model is a d-dimensional Gaussian Free Field (GFF). For d ≥ 5 this GFF limit is proved by Aizenman [1], while the critical case d = 4 is still open. Our theorem below asserts that the random field φ_n we construct has critical dimension d = 4, and for d ≥ 4 we can choose the scaling a_n such that φ_n converges to the bi-Laplacian Gaussian field on R^d. Note that when d = 4, a bi-Laplacian Gaussian field is log-correlated. If h ∈ C_0^∞(R^d), we write

⟨h, φ_n⟩ = n^{-d/2} Σ_{x∈L_n} h(x) φ_n(x).

Theorem 1.11.
• If d ≥ 5, there exists a > 0 such that if a_n = a n^{(d−4)/2}, then for every h_1, ..., h_m ∈ C_0^∞(R^d), the random variables ⟨h_j, φ_n⟩ converge in distribution to centered jointly Gaussian random variables with covariance

∫∫ h_j(z) h_k(w) |z − w|^{4−d} dz dw.

• If d = 4 and a_n = √(3 log n), then for every h_1, ..., h_m ∈ C_0^∞(R^d) with

∫ h_j(z) dz = 0,   j = 1, ..., m,

the random variables ⟨h_j, φ_n⟩ converge in distribution to centered jointly Gaussian random variables with covariance

− ∫∫ h_j(z) h_k(w) log|z − w| dz dw.

Remark 1.12.

• Gaussian fields on R^d with correlations as in Theorem 1.11 are called d-dimensional bi-Laplacian Gaussian fields (see [13]).

• For d = 4, we could choose the cutoff N = n(log n)^α for any α > 0. We choose α = 1/4 for concreteness. For d > 4, we could do the same construction with no cutoff (N = ∞) and get the same result.

By Wilson's algorithm, the two-point correlation function of the field φ_n is proportional to the intersection probability of an SRW and a LERW stopped upon hitting ∂A_N. Therefore Theorem 1.11 essentially follows from Theorem 1.3. In particular, G_n^2(w) is the discrete biharmonic function that gives the covariance structure of the bi-Laplacian random field in the scaling limit. We will give the full proof of Theorem 1.11 in Section 7, where we only deal with the case d = 4. The d ≥ 5 case can be proved in the same way but is much easier (see [15] for a detailed argument).

2 Preliminaries

In this section we recall and prove necessary lemmas about SRW and LERW which will be used frequently in the rest of the paper. Throughout this section we retain the notation of Section 1.2.

2.1 Basic notations

Given a vertex set V ⊂ Z^d, ∂V is the set of vertices in Z^d \ V which have a neighbor in V, and V̄ = V ∪ ∂V. A function φ on V̄ is called harmonic on V if for each v ∈ V we have E^v[φ(S_1)] = φ(v). When we say "φ is harmonic on V" it is implicit that φ is defined on V̄.
We will use c and C to represent constants which may vary from line to line. We use the asymptotic notation that two nonnegative functions f(x), g(x) satisfy f . g if there exists a constant C > 0 independent of x such that f(x) ≤ C g(x). We write f & g if g . f, and write f ≍ g if f . g and g . f. Given a sequence {a_n} and a nonnegative sequence {b_n}, we write a_n ∼ b_n if lim_{n→∞} a_n/b_n = 1. We write a_n = O(b_n) if |a_n| . b_n. We write a_n = o(b_n) if lim_{n→∞} |a_n|/b_n = 0. When {b_n} = {1}, we may write o(1) as o_n(1) to indicate the dependence on n.

We say that a sequence {ǫ_n} of positive numbers is fast decaying if it decays faster than every power of n, that is, n^k ǫ_n = o_n(1) for every k > 0. We will write {ǫ_n} for fast decaying sequences. As is the convention for constants, the exact value of ǫ_n may change from line to line. We will use implicitly the fact that if {ǫ_n} is fast decaying then so is {ǫ'_n} where ǫ'_n = Σ_{m≥n} ǫ_m.

2.2 Estimates for simple random walk on Z^4

In this subsection we provide facts about simple random walk in Z^4 which will be frequently used in the rest of the paper. We first recall the following facts about intersections of random walks in Z^4 (see [6, 8]).

Proposition 2.1. There exist 0 < …

In particular,

sup_n n E[H(η^n)] ≤ sup_n n E[H(ω_n)] < ∞,  (13)

and hence

exp{ −E[H(η^n)] } = 1 − E[H(η^n)] + O(n^{-2}).

It follows that if φ_n is defined as in (6) and m < n, then

φ_n = φ_m (1 + O(m^{-1})) Π_{j=m+1}^n ( 1 − E[H(η^j)] ).  (14)

Corollary 2.2. There exists c < ∞ such that if n ∈ N, α ≥ 2, 0 < …

Lemma 2.3. There exists c > 0 such that the following holds for all n ∈ N.

• If φ is a positive (discrete) harmonic function on C_n and x ∈ C_{n−1},

| log[ φ(x)/φ(0) ] | ≤ c |x| e^{-n}.  (15)

• If m < n, V ⊂ C_{n−m}, and Ṽ = V \ C_{n−m−1}, then

c^{-1} e^{-2m} ≤ P{ S[σ_n, ∞) ∩ C_{n−m} ≠ ∅ | F_n } ≤ c e^{-2m};  (16)

c^{-1} e^{-2m} H(Ṽ) ≤ P{ S[σ_n, ∞) ∩ V ≠ ∅ | F_n } ≤ c e^{-2m} H(V);  (17)

Es(V) ≥ P{ S[0, σ_n] ∩ V = ∅ } [ 1 − c e^{-2m} H(V) ].  (18)

Proof.
The inequalities (15) and (16) are standard estimates; see, e.g., [12, Theorem 6.3.8, Proposition 6.4.2]. The Harnack principle [12, Theorem 6.3.9] shows that H(z, V) ≍ H(z', V) for z, z' ∈ ∂C_{n−m+1}, and by stopping at time σ_{n−m+1} we see that

H(V) ≥ min_{z∈∂C_{n−m+1}} H(z, V̄).

This combined with (16) and the strong Markov property gives the upper bound in (17). To get the lower bound, one uses the Harnack principle to see that for z ∈ ∂C_{n−m+1}, H(z, V^+) ≍ H(V^+) and H(z, V \ V^+) ≍ H(V \ V^+), where

V^+ = Ṽ ∩ { (z_1, ..., z_4) ∈ Z^4 : z_1 ≥ 0 }.

Finally, (18) follows from the upper bound in (17) and the strong Markov property.

Lemma 2.4. Let U_n be the event that there exists k ≥ σ_n with

LE(S[0, k]) ∩ C_{n−log^2 n} ≠ Ŝ[0, ∞) ∩ C_{n−log^2 n}.

Then P(U_n) is fast decaying.

Proof. By the loop-erasing process, we can see that the event U_n is contained in the event that either S[σ_{n−(1/2)log^2 n}, ∞) ∩ C_{n−log^2 n} ≠ ∅ or S[σ_n, ∞) ∩ C_{n−(1/2)log^2 n} ≠ ∅. The probability that either of these happens is fast decaying by (16).

The next proposition gives a quantitative estimate on the slowly recurrent nature of a simple random walk path in Z^4.

Proposition 2.5. If Λ(m, n) = S[0, ∞) ∩ A(m, n), then the sequences

P{ H[Λ(n−1, n)] ≥ log^2 n / n }  and  P{ H(ω_n) ≥ log^4 n / n }

are fast decaying.

Proof. For any z ∈ Z^4, let S^z be a simple random walk starting from z independent of S. Let Λ_j^z = Λ_j^z(n−1, n) = S^z[0, j] ∩ A(n−1, n) for j ∈ N ∪ {∞}. By the definition of H and Proposition 2.1, there exists a positive constant c such that for each z with |z| ≥ e^{n−1},

E[H(Λ_∞^z)] = P{ S[0, ∞) ∩ S^z[0, ∞) ∩ A(n−1, n) ≠ ∅ } ≤ c/(4n).

From now on we assume |z| ≥ e^{n−1}. Then by the Markov inequality,

P{ H(Λ_∞^z) ≥ c/(2n) } ≤ 1/2.  (19)

For each k ∈ N, let τ_k = inf{ j : H(Λ_j^z) ≥ ck/n }. On the event τ_k < ∞, we have H(Λ_{τ_k −1}^z) < ck/n and S_{τ_k}^z ∈ A(n−1, n). Since

H(A ∪ B) ≤ H(A) + H(B)  for any A, B ⊂ Z^4,

for n sufficiently large, we have

H(Λ_{τ_k}^z) ≤ H(Λ_{τ_k −1}^z) + H({S_{τ_k}^z}) ≤ c(k + 1/2)/n.
Moreover, combined with (19), we see that for n sufficiently large,

P{ τ_{k+1} < ∞ | τ_k < ∞ } ≤ Σ_{w∈A(n−1,n)} P[ S_{τ_k}^z = w | τ_k < ∞ ] P[ H(Λ_∞^w) ≥ c/(2n) ]
                           ≤ (1/2) Σ_{w∈A(n−1,n)} P[ S_{τ_k}^z = w | τ_k < ∞ ] = 1/2.

Therefore P{ τ_k < ∞ } ≤ 2^{-k}. Setting k = ⌊c^{-1} log^2 n⌋, we see that the first sequence in Proposition 2.5 is fast decaying.

For the second sequence, note that on the event { H(ω_n) ≥ log^4 n / n }, either ω_n ⊄ A(n − log^2 n, n) or there exists j ∈ [n − log^2 n, n] such that H[Λ(j−1, j)] ≥ log^2 n / n. We use (16) to see that P{ ω_n ⊄ A(n − log^2 n, n) } is fast decaying.

2.3 Loop-free times

One of the technical nuisances in the analysis of the loop-erased walk is that for j < k it is not necessarily true that

LE(S[j, k]) = LE(S[0, ∞)) ∩ S[j, k].

However, this is the case for special times which we call loop-free times. We say that j is a (global) loop-free time if

S[0, j] ∩ S[j+1, ∞) = ∅.

Proposition 2.1 shows that the probability that j is loop-free is comparable to (log j)^{-1/2}. From the definition of chronological loop erasing we can see the following: if j is a loop-free time, then LE(S[0, ∞)) = LE(S[0, j]) ⊕ LE(S[j, ∞)). Let I(m, r; a, b) denote the event that there is no loop-free time j with m ≤ j ≤ r for the path S[a, b]. We will prove

P[ I(n, 2n; 0, 3n) ] ≍ 1/log n  (21)

by giving the matching upper bound.

Lemma 2.6. There exists c < ∞ such that P[ I(n, 2n; 0, ∞) ] ≤ c/log n.

Proof. Let E = E_n denote the complement of I(n, 2n; 0, ∞). We need to show that P(E) ≥ 1 − O(1/log n). Let k_n = ⌊ n/(log n)^{3/4} ⌋ and let A_i = A_{i,n} be the event

A_i = { S[n + (2i−1)k_n, n + 2ik_n] ∩ S[n + 2ik_n + 1, n + (2i+1)k_n] = ∅ },

and consider the events A_1, A_2, ..., A_ℓ where ℓ = ⌊ (log n)^{3/4}/4 ⌋. These are ℓ independent events, each with probability greater than c(log n)^{-1/2} by Proposition 2.1. Therefore

1 − P(A_1 ∪ ··· ∪ A_ℓ) = Π_{i=1}^ℓ [ 1 − P(A_i) ] ≤ exp{ −O((log n)^{1/4}) } = o(1/log^3 n).

Let B_i = B_{i,n} be the event { S[0, n + (2i−1)k_n] ∩ S[n + 2ik_n, ∞) = ∅ }. By (11) in Proposition 2.1, P(B_i^c) ≤ c log log n / log n. Therefore

P(B_1 ∩ ··· ∩ B_ℓ) ≥ 1 − cℓ log log n / log n ≥ 1 − O( log log n / (log n)^{1/4} ).
For 1 ≤ i ≤ ℓ, on the event A_i ∩ (B_1 ∩ ··· ∩ B_ℓ) the time n + 2ik_n is loop-free, hence E occurs. Therefore

P(E) ≥ P[ (A_1 ∪ ··· ∪ A_ℓ) ∩ (B_1 ∩ ··· ∩ B_ℓ) ] ≥ 1 − O( log log n / (log n)^{1/4} ).  (22)

This is a good estimate, but we need to improve on it. Let C_j, j = 1, ..., 5, denote the independent events (depending on n)

C_j = I( n(1 + (3(j−1)+1)/15), n(1 + (3(j−1)+2)/15) ; n + (j−1)n/5, n + jn/5 ).

By (22) we see that P[C_j] = o( 1/(log n)^{1/5} ), and hence

P(C_1 ∩ ··· ∩ C_5) = o(1/log n).

Let D = D_n denote the event that at least one of the following ten things happens:

S[0, n(1 + (j−1)/5)] ∩ S[ n(1 + (3(j−1)+1)/15), ∞ ) ≠ ∅,   j = 1, ..., 5;

S[0, n(1 + (3(j−1)+2)/15)] ∩ S[ n(1 + j/5), ∞ ) ≠ ∅,   j = 1, ..., 5.

Each of these events has probability comparable to 1/log n and hence P(D) ≍ 1/log n. Also,

I(n, 2n; 0, ∞) ⊂ (C_1 ∩ ··· ∩ C_5) ∪ D.

Therefore, P[ I(n, 2n; 0, ∞) ] ≤ c/log n.

Corollary 2.7.

1. There exists c < ∞ such that if 0 ≤ j ≤ j + k ≤ n, then

P[ I(j, j + k; 0, n) ] ≤ c log(n/k) / log n.  (23)

2. Given 0 < δ < 1, let I_{δ,n} := ∪_{j=0}^{n−1} I(j, j + δn; 0, n). Then there exists a positive constant c such that for all n ∈ N and δ ∈ (0, 1), we have

P[ I_{δ,n} ] ≤ c log(1/δ) / (δ log n).  (24)

3. There exist c < ∞ and a positive integer ℓ such that the following holds for all positive integers n. Let Ĩ(m, r) denote the event that there is no loop-free point j with σ_m ≤ j ≤ σ_r, and let k = k_n = ⌊log n⌋. Then

P{ Ĩ(n − ℓk, n + ℓk) | F_{n−3ℓk} } ≤ c/n.  (25)

Proof.

1. It suffices to prove the bound under the assumption that k ≥ n^{1/2}. Note that I(j, j + k; 0, n) is contained in the union of I(j, j + k; j − k, j + 2k) and the following two events:

{ S[0, j − k] ∩ S[j, n] ≠ ∅ }  and  { S[0, j] ∩ S[j + k, n] ≠ ∅ }.

Since k ≥ n^{1/2}, the probability of the first event is O(1/log n) by Lemma 2.6. By (11), the probabilities of the second two events are O(log(n/k)/log n). This gives (23).

2. By (23), P{ I(iδn/3, (i+1)δn/3; 0, n) } = O( log(δ^{-1})/log n ) for all 0 ≤ i ≤ ⌈3/δ⌉.
Now (24) follows from the fact that I_{δ,n} can be covered by these I(iδn/3, (i+1)δn/3; 0, n)'s.

3. We will first consider walks starting at z ∈ ∂C_{n−3ℓk} (with constants independent of z). Let E_n be the event

E_n = { σ_{n−ℓk} ≤ e^{2n} ≤ 2e^{2n} ≤ σ_{n+ℓk}, S[σ_{n−ℓk}, ∞) ∩ C_{n−2ℓk} = ∅ }.

Using (2) and (16), we can choose ℓ sufficiently large so that P(E_n) ≥ 1 − 1/n. On the event E_n, we have Ĩ(n−ℓk, n+ℓk) ⊂ I(e^{2n}, 2e^{2n}; 0, ∞). Hence, by Lemma 2.6, we have P[ Ĩ(n−ℓk, n+ℓk) ] ≤ O(n^{-1}).

More generally, if we start the random walk at the origin, stop at time σ_{n−3ℓk}, and then start again, we can use the result in the previous paragraph. Since S[σ_{n−ℓk}, ∞) ∩ C_{n−2ℓk} = ∅ on the event E_n, attachment of the initial part of the walk up to σ_{n−3ℓk} will not affect whether a time after σ_{n−ℓk} is loop-free. This concludes the proof.

2.4 Green function estimates

Recall the Green functions G(·, ·) on Z^4 and G_n(·, ·) defined in Section 1.1. We write G(x) = G(x, 0) = G(0, x) and G_n(x) = G_n(x, 0) = G_n(0, x). As a standard estimate (see [12]), we have

G(x) = 2/(π^2 |x|^2) + O(|x|^{-4}),   |x| → ∞.  (26)

Here and throughout we use the convention that if we say that a function on Z^d is O(|x|^{-r}) with r > 0, we still imply that it is finite at every point. In other words, for lattice functions, O(|x|^{-r}) really means O(1 ∧ |x|^{-r}). We do not make this assumption for functions on R^d, which could blow up at the origin.

Lemma 2.8. For w ∈ C_n, let

Ĝ_n^2(w) = Σ_{z∈Z^4} G(0, z) G_n(w, z) = Σ_{z∈C_n} G(0, z) G_n(w, z).

Then

Ĝ_n^2(w) = (8/π^2)[ n − log|w| ] + O(e^{-n}) + O( |w|^{-2} log|w| ).

In particular, if w ∈ ∂C_{n−1},

Ĝ_n^2(w) = 8/π^2 + O(e^{-n}).  (27)

Proof. Let f(x) = (8/π^2) log|x| and note that

∆f(x) = 2/(π^2 |x|^2) + O(|x|^{-4}) = G(x) + O(|x|^{-4}),

where ∆ denotes the discrete Laplacian. Also, we know that

f(w) = E^w[ f(S_{σ_n}) ] − Σ_{z∈C_n} G_n(w, z) ∆f(z)

(this holds for any function f). Since e^n ≤ |S_{σ_n}| ≤ e^n + 1, we have E^w[ f(S_{σ_n}) ] = 8n/π^2 + O(e^{-n}).
Therefore,

Σ_{z∈C_n} G_n(w, z) G(z) = (8/π^2)[ n − log|w| ] + O(e^{-n}) + ǫ,

where

|ǫ| ≤ Σ_{z∈C_n} G_n(w, z) O(|z|^{-4}) ≤ Σ_{z∈C_n} O(|w − z|^{-2}) O(|z|^{-4}).

We split the sum on the right-hand side into three pieces:

Σ_{|z|≤|w|/2} O(|w − z|^{-2}) O(|z|^{-4}) ≤ c|w|^{-2} Σ_{|z|≤|w|/2} O(|z|^{-4}) ≤ c|w|^{-2} log|w|;

Σ_{|z−w|≤|w|/2} O(|w − z|^{-2}) O(|z|^{-4}) ≤ c|w|^{-4} Σ_{|x|≤|w|/2} O(|x|^{-2}) ≤ c|w|^{-2};

and, if we let C'_n be the set of z ∈ C_n with |z| > |w|/2 and |z − w| > |w|/2, then

Σ_{z∈C'_n} O(|w − z|^{-2}) O(|z|^{-4}) ≤ Σ_{|z|>|w|/2} O(|z|^{-6}) ≤ c|w|^{-2}.

Lemma 2.9. If 1 ≤ m < n and x ∈ C_m,

Ĝ_n^2(x) − G_n^2(x) = 2/π^2 + O(e^{m−n}).

Proof. Set N = e^n. Using the martingale M_t = |S_t|^2 − t, we see that

Σ_{w∈C_n} G_n(x, w) = N^2 − |x|^2 + O(N).  (28)

By the strong Markov property, for all w ∈ C_n,

min_{N≤|z|≤N+1} G(z, x) ≤ G(w, x) − G_n(w, x) ≤ max_{N≤|z|≤N+1} G(z, x).

Set δ = (1 + |x|)/N. By (26), we have

G_n(x, w) = G(x, w) − (2/(π^2 N^2)) [ 1 + O(e^{m−n}) ].

Using (28), we see that

Σ_{w∈C_n} G_n(x, w) G_n(0, w) = Σ_{w∈C_n} ( G(x, w) − 2/(π^2 N^2) + O(e^{m−n} N^{-2}) ) G_n(0, w)
                              = O(δ) + Σ_{w∈C_n} ( G(x, w) − 2/(π^2 N^2) ) G_n(0, w).

3 Slowly recurrent sets and the subsequential limit

Simple random walk paths in Z^4 are "slowly recurrent" sets in the terminology of [8]. In Section 3.1 we will consider a subcollection X_n of the collection of slowly recurrent sets and give uniform bounds for escape probabilities for such sets. Then in Section 3.2 we use these estimates to prove Propositions 1.6 and 1.7. This section does not rely on notions and results in [8, 9], as we will give a new and self-contained treatment.

3.1 Sets in X_n

Given a subset V ⊂ Z^4 and m ∈ N, we write

V_m = V ∩ A(m−1, m),  and  h_m = h_{m,V} = H(V_m).

Using (17), we can see that there exist 0 < … Let

E_m = E_{m,V} = { S[1, σ_m] ∩ V = ∅ }.

Note that P(E_m) = Es(V; m). We will interchangeably use the two notions P(E_m) and Es(V; m) throughout this section.
We write hm_m(z) for the harmonic measure of ∂C_m for random walk starting at the origin, that is,

hm_m(z) = P{ S_{σ_m} = z },   ∀z ∈ ∂C_m.

If V ⊂ Z^4 and P(E_m) > 0, we write

hm_m(z; V) = P{ S_{σ_m} = z | E_m }.

By the strong Markov property, we have

P{ S[σ_m, σ_{m+1}] ∩ V ≠ ∅ } = Σ_{z∈∂C_m} hm_m(z) P^z{ S[0, σ_{m+1}] ∩ V ≠ ∅ },

and

P(E_{m+1}^c | E_m) = P{ S[σ_m, σ_{m+1}] ∩ V ≠ ∅ | E_m }  (30)
                   = Σ_{z∈∂C_m} hm_m(z; V) P^z{ S[0, σ_{m+1}] ∩ V ≠ ∅ }.

Proposition 3.3. There exists c < ∞ such that if V ∈ X_n, m ≥ n/10, and P(E_{m+1} | E_m) ≥ 1/2, then P(E_{m+2}^c | E_{m+1}) ≤ c log^2 n / n.

Proof. As in (30), we write

P(E_{m+2}^c | E_{m+1}) = Σ_{z∈∂C_{m+1}} hm_{m+1}(z; V) P^z{ S[0, σ_{m+2}] ∩ V ≠ ∅ }.

Using P(E_{m+1} | E_m) ≥ 1/2, we claim that there exists c < ∞ such that

hm_{m+1}(z; V) ≤ c hm_{m+1}(z),   ∀z ∈ ∂C_{m+1}.  (31)

Indeed, we have

hm_{m+1}(z; V) = P{ S_{σ_{m+1}} = z, E_{m+1} } / P(E_{m+1})
              ≤ 2 P{ S_{σ_{m+1}} = z, E_{m+1} } / P(E_m) ≤ 2 P{ S_{σ_{m+1}} = z | E_m },

and the Harnack inequality shows that

P{ S_{σ_{m+1}} = z | E_m } ≤ sup_{w∈∂C_m} P^w{ S_{σ_{m+1}} = z } ≤ c hm_{m+1}(z).

Therefore, letting r_k = P{ S[σ_{m+1}, σ_{m+2}] ∩ V_k ≠ ∅ } for k ∈ N, we have

P(E_{m+2}^c | E_{m+1}) ≤ c Σ_{z∈∂C_{m+1}} hm_{m+1}(z) P^z{ S[0, σ_{m+2}] ∩ V ≠ ∅ }
                      = c P{ S[σ_{m+1}, σ_{m+2}] ∩ V ≠ ∅ } ≤ c Σ_{k=1}^{m+2} r_k.

By Definition 3.1, the terms r_k for k = m, m+1, m+2 are bounded by

P{ S[σ_{m+1}, σ_{m+2}] ∩ (V_m ∪ V_{m+1} ∪ V_{m+2}) ≠ ∅ } ≤ H(V_m ∪ V_{m+1} ∪ V_{m+2}) ≤ c log^2 n / n.

Using (16), we see that for λ large enough,

Σ_{k=1}^{m−λ log m} r_k ≤ P{ S[σ_{m+1}, σ_{m+2}] ∩ C_{m−λ log m} ≠ ∅ } ≤ c n^{-2}.

For m − λ log m ≤ k ≤ m − 1, (17) and the definition of X_n imply that

r_k ≤ c e^{-2(m−k)} H(V_k) ≤ c e^{-2(m−k)} log^2 k / k.

Summing over k gives the result.

Definition 3.4. Let X̃_n denote the set of V ∈ X_n such that P(E_n) ≥ 2^{-n/4}.

The particular choice of 2^{-n/4} in Definition 3.4 is rather arbitrary, but it is convenient to choose a particular fast decaying sequence.
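The harmonic measure hm_m(z) = P{S_{σ_m} = z} can be computed exactly on a small example by propagating the walk's distribution until all mass is absorbed outside the disk. The sketch below is our own two-dimensional illustration with a small radius (the paper's version lives on Z^4 with geometric radii e^m):

```python
def harmonic_measure(R, steps=2000):
    """Exit distribution of SRW on Z^2 from the disk {|z| < R}, started at 0.

    Propagates the probability mass step by step; mass that leaves the
    disk is recorded at its exit point.  After enough steps essentially
    all mass has exited, so the result approximates hm(z) to high accuracy.
    """
    inside = lambda z: z[0] * z[0] + z[1] * z[1] < R * R
    mass = {(0, 0): 1.0}
    exit_mass = {}
    for _ in range(steps):
        new = {}
        for (i, j), p in mass.items():
            for w in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                if inside(w):
                    new[w] = new.get(w, 0.0) + 0.25 * p
                else:
                    exit_mass[w] = exit_mass.get(w, 0.0) + 0.25 * p
        mass = new
    return exit_mass

hm = harmonic_measure(4.0)
total = sum(hm.values())
print(total)  # close to 1: nearly all mass has exited the disk
```

Because the disk is small, the survival probability decays geometrically and the residual interior mass after 2000 steps is negligible; the exit distribution also inherits the lattice symmetries of the disk, e.g. hm((4,0)) = hm((0,4)).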
For typical sets in X_n one expects that P(E_n) decays as a power in n, so "most" sets in X_n with P(E_n) > 0 will also be in X̃_n.

Recall Es(V; n) = P{ S[1, σ_n] ∩ V = ∅ }, which is decreasing in n. We state the next immediate fact as a proposition so that we can refer to it.

Proposition 3.5. For any r > 0 and any random subset V ⊂ Z^4,

E[ Es(V; m)^r ; V ∉ X̃_n ] ≤ P[ V ∉ X_n ] + 2^{-rn/4}.

In particular, if P[V ∉ X_n] is fast decaying then so is the left-hand side.

Proposition 3.6. There exists c < ∞ such that if V ∈ X̃_n, then

P(E_{j+1}^c | E_j) ≤ c log^2 n / n,   3n/4 ≤ j ≤ n.

Proof. If P(E_{m+1} | E_m) < 1/2 for all n/4 ≤ m ≤ n/2, then P(E_n) < 2^{-n/4} and V ∉ X̃_n. Therefore we must have P(E_{m+1} | E_m) ≥ 1/2 for some n/4 ≤ m ≤ n/2. Now (for n sufficiently large) we can use Proposition 3.3 and induction to conclude that P(E_{k+1} | E_k) ≥ 1/2 for m ≤ k ≤ n. The result then follows from Proposition 3.3.

It follows from Proposition 3.6 that there exists n_0 such that X̃_n ⊂ X̃_{n+1} for n ≥ n_0. We fix the smallest such n_0 and set

X̃ = ∪_{j=n_0}^∞ X̃_j.

Combining Propositions 3.3 and 3.6 and the union bound, we have

P(E_{n+k}^c | E_n) ≤ c k log^2 n / n,   k ∈ N, V ∈ X̃_n.  (32)

Proposition 3.7. There exists c < ∞ such that if V ⊂ C_n and V ∈ X̃_n, then

Es[V; n] ( 1 − c log^2 n / n ) ≤ Es[V] ≤ Es[V; n].  (33)

Proof. The upper bound is trivial. For the lower bound, we first use the previous proposition to see that

Es(V; n+1) ≥ Es(V; n) ( 1 − O(log^2 n / n) ).

Since Es(V) ≥ Es(V; n+1) ( 1 − max_{z∈∂C_{n+1}} H(z, V) ), it suffices to show that there exists c such that for all z ∈ ∂C_{n+1},

H(z, V) ≤ c log^2 n / n.

This can be done by dividing V into V_j's similarly as in the proof of Proposition 3.3 (see the bound for Σ_1^{m+2} r_j there).

The next proposition is the key to the analysis of slowly recurrent sets. It says that the distribution of the first visit to ∂C_n, given that one has avoided the set V, is very close to the unconditioned distribution.
We would not expect this to be true for recurrent sets that are not slowly recurrent.

Proposition 3.8. There exists $c < \infty$ such that if $V \in \tilde{\mathcal{X}}_n$, we have
\[
\mathrm{hm}_n(z; V) \leq \mathrm{hm}_n(z)\left(1 + \frac{c \log^3 n}{n}\right), \qquad \forall z \in \partial C_n. \tag{34}
\]
Moreover,
\[
\sum_{z \in \partial C_n} \left|\mathrm{hm}_n(z) - \mathrm{hm}_n(z; V)\right| \leq \frac{c \log^3 n}{n}. \tag{35}
\]

Proof. Let $k = \lfloor \log n \rfloor$. By (32), we have
\[
P(E_n^c \mid E_{n-k}) \leq \frac{c \log^3 n}{n}. \tag{36}
\]
Consider a random walk starting on $\partial C_{n-k}$ with the distribution $\mathrm{hm}_{n-k}(\cdot\,; V)$ and let $\nu$ denote the distribution of the first visit to $\partial C_n$. In other words, $\nu$ is the distribution of the first visit to $\partial C_n$ conditioned on the event $E_{n-k}$. Using (15), we see that for $z \in \partial C_n$,
\[
\nu(z) = \mathrm{hm}_n(z)\left[1 + O(n^{-1})\right].
\]
By (36), for each $z \in \partial C_n$, we have
\[
\mathrm{hm}_n(z; V) = P\{S_{\sigma_n} = z \mid E_n\} \leq \frac{P(E_{n-k})}{P(E_n)}\, P\{S_{\sigma_n} = z \mid E_{n-k}\}
\leq \frac{\nu(z)}{1 - P(E_n^c \mid E_{n-k})} \leq \mathrm{hm}_n(z)\left[1 + O\left(\frac{\log^3 n}{n}\right)\right].
\]
Since $\mathrm{hm}_n(\cdot)$ and $\mathrm{hm}_n(\cdot\,; V)$ are probability measures on $\partial C_n$, we have
\[
\sum_{z \in \partial C_n} \left|\mathrm{hm}_n(z) - \mathrm{hm}_n(z; V)\right| = 2 \sum_{z \in \partial C_n} \left[\mathrm{hm}_n(z; V) - \mathrm{hm}_n(z)\right]_+
\leq \frac{c \log^3 n}{n} \sum_{z \in \partial C_n} \mathrm{hm}_n(z) \leq \frac{c \log^3 n}{n}. \qquad \square
\]

3.2 Along a subsequence

In this section we prove Propositions 1.6 and 1.7 via the estimates proved for sets in $\tilde{\mathcal{X}}$. For $V \in \tilde{\mathcal{X}}_{n^4}$, let
\[
V_n^* := V \cap \left\{ e^{(n-1)^4 + 4(n-1)} \leq |z| < e^{n^4 - 4n} \right\}.
\]

Proposition 3.9. There exists $c < \infty$ such that if $V \in \tilde{\mathcal{X}}_{n^4}$,
\[
\mathrm{Es}(V; (n+1)^4) = \mathrm{Es}(V; n^4)\left[1 - H(V_{n+1}^*) + O\left(\frac{\log^2 n}{n^3}\right)\right].
\]

Proof. Let $\tau_n = \inf\{j : S_j \notin C_{n^4}\}$ and recall $E_n$ and $\mathrm{hm}_n(z; V)$. We observe that $(\mathrm{Es}(V; n^4) - \mathrm{Es}(V; (n+1)^4)) / \mathrm{Es}(V; n^4)$ is bounded by
\[
P\{S[\tau_n, \tau_{n+1}] \cap V_{n+1}^* \neq \emptyset \mid E_{n^4}\} + P\{S[\tau_n, \tau_{n+1}] \cap (V \setminus V_{n+1}^*) \neq \emptyset \mid E_{n^4}\}.
\]
To bound the first term, we have
\[
P\{S[\tau_n, \tau_{n+1}] \cap V_{n+1}^* \neq \emptyset \mid E_{n^4}\} - P\{S[\tau_n, \tau_{n+1}] \cap V_{n+1}^* \neq \emptyset\}
\leq \sum_{z \in \partial C_{n^4}} \left|\mathrm{hm}_{n^4}(z; V) - \mathrm{hm}_{n^4}(z)\right| P^z\{S[0, \tau_{n+1}] \cap V_{n+1}^* \neq \emptyset\}
\]
\[
\leq \frac{c \log^3 n}{n^4} \sup_{z \in \partial C_{n^4}} P^z\{S[0, \tau_{n+1}] \cap V_{n+1}^* \neq \emptyset\}
\leq \frac{c \log^3 n}{n^4}\, P\{S[\tau_n, \tau_{n+1}] \cap V_{n+1}^* \neq \emptyset\},
\]
where the three inequalities are due to the strong Markov property, (34), and the Harnack inequality, respectively.
Using (15), we have $P\{S[\tau_n, \tau_{n+1}] \cap V_{n+1}^* \neq \emptyset\} = H(V_{n+1}^*)\left[1 + O(e^{-4n})\right]$. Hence, it suffices to prove that
\[
P\{S[\tau_n, \tau_{n+1}] \cap (V \setminus V_{n+1}^*) \neq \emptyset \mid E_{n^4}\} = O\left(\frac{\log^2 n}{n^3}\right),
\]
which by the strong Markov property and (34) can be further reduced to showing that
\[
P\{S[\tau_n, \tau_{n+1}] \cap (V \setminus V_{n+1}^*) \neq \emptyset\} = O\left(\frac{\log^2 n}{n^3}\right).
\]
Note that $V \setminus V_{n+1}^*$ is contained in the union of $C_{n^4 - 4n}$ and $O(n)$ sets of the form $V_m$ with $m \geq n^4 - 4n$. By Definition 3.1 and the union bound,
\[
H\left( (V \setminus V_{n+1}^*) \cap \left\{ e^{n^4 - 4n} \leq |z| \leq e^{(n+1)^4} \right\} \right) = O\left(\frac{\log^2 n}{n^3}\right).
\]
By Lemma 2.3, $P\{S[\tau_n, \tau_{n+1}] \cap C_{n^4 - 4n} \neq \emptyset\} = O(e^{-8n})$. This concludes the proof. $\square$

Corollary 3.10. If $V \in \tilde{\mathcal{X}}_{n^4}$, $m \geq n$, and $m^4 \leq k \leq (m+1)^4$, then
\[
\mathrm{Es}(V; k) = \mathrm{Es}(V; n^4) \exp\left\{ - \sum_{j=n+1}^{m} H(V_j^*) \right\} \left[1 + O\left(\frac{\log^4 n}{n}\right)\right].
\]

Proof. By Proposition 3.9, if $m > n$ we have
\[
\frac{P(E_{m^4})}{P(E_{n^4})} = \prod_{j=n+1}^{m} \left[ 1 - H(V_j^*) + O\left(\frac{\log^2 j}{j^3}\right) \right].
\]
By Definition 3.1 and the union bound, we have $H(V_j^*) = O(\log^2 j / j)$. Hence
\[
\frac{P(E_{m^4})}{P(E_{n^4})} = \prod_{j=n+1}^{m} \left[ e^{-H(V_j^*)} + O\left(\frac{\log^4 j}{j^2}\right) \right]
= \left[1 + O\left(\frac{\log^4 n}{n}\right)\right] \exp\left\{ - \sum_{j=n+1}^{m} H(V_j^*) \right\}.
\]
On the other hand, (32) implies that $P(E_k) = P(E_{m^4})\left[1 - O(\log^2 m / m)\right]$ for $m^4 \leq k \leq (m+1)^4$. This concludes the proof. $\square$

Now we apply our theory to LERW. Recall the setup in Section 1.2.

Proof of Proposition 1.6. Let $k = \lceil \log^2 n \rceil$. We will show the stronger result that
\[
p_n = E\left[\mathrm{Es}(\Gamma_n)^r\right] = p_{n-k}\left[1 + O(\log^4 n / n)\right], \tag{37}
\]
and similarly for $p_{n+1}$. Since $p_n$ decays polynomially, by Proposition 3.5,
\[
E\left[\mathrm{Es}(\Gamma_n)^r\right] = E\left[\mathrm{Es}(\Gamma_n)^r \mathbf{1}_{\Gamma_n \in \tilde{\mathcal{X}}_n}\right](1 + \epsilon_n), \tag{38}
\]
where $\epsilon_n$ is fast decaying. By (33) we have
\[
E\left[\mathrm{Es}(\Gamma_n)^r\right] = E\left[\mathrm{Es}(\Gamma_n; n)^r\right]\left[1 + O(\log^2 n / n)\right]. \tag{39}
\]
By Lemma 2.4, except for an event of fast decaying probability,
\[
\Gamma \cap C_{n-k} \subset \Gamma_n \subset V, \tag{40}
\]
where $V := (\Gamma \cap C_{n-k}) \cup \left[ S[0, \infty) \setminus C_{n-k} \right]$.
If (40) occurs, then we have
\[
\mathrm{Es}[\Gamma; n-k] = \mathrm{Es}[V; n-k] \geq \mathrm{Es}[\Gamma_n; n] \geq \mathrm{Es}[\Gamma_n; n+k] \geq \mathrm{Es}[V; n+k].
\]
By (32), we have
\[
E\left[\mathrm{Es}[V; n+k]^r;\ V \in \tilde{\mathcal{X}}_{n-k}\right] = E\left[\mathrm{Es}[V; n-k]^r;\ V \in \tilde{\mathcal{X}}_{n-k}\right]\left[1 - O\left(\log^4 n / n\right)\right].
\]
Since $E[\mathrm{Es}[V; n+k]^r]$ decays like a power of $n$, by Proposition 3.5,
\[
E\left[\mathrm{Es}[V; n+k]^r\right] = E\left[\mathrm{Es}[V; n-k]^r\right]\left[1 - O\left(\log^4 n / n\right)\right].
\]
Now (37) follows from (39) and
\[
E\left[\mathrm{Es}(\Gamma_n; n)^r\right] \geq E\left[\mathrm{Es}[V; n+k]^r\right] = p_{n-k}\left[1 - O\left(\log^4 n / n\right)\right].
\]
A similar argument gives $p_{n+1} = p_{n-k}\left[1 - O\left(\log^4 n / n\right)\right]$. $\square$

A useful corollary of the proof is
\[
E\left[\mathrm{Es}(\Gamma; n)^r\right] = p_n\left[1 + O\left(\log^4 n / n\right)\right]. \tag{41}
\]

Proof of Proposition 1.7. Let $Q_n = \mathrm{Es}[\Gamma; n^4]$ and
\[
\Gamma_n^* = \Gamma \cap A\left((n-1)^4 + 4(n-1),\ n^4 - 4n\right).
\]
Then, by Proposition 3.9, if $\Gamma \in \tilde{\mathcal{X}}_{n^4}$, we have
\[
Q_{n+1} = Q_n\left[1 - H(\Gamma_{n+1}^*) + O\left(\frac{\log^2 n}{n^3}\right)\right].
\]
Applying Proposition 3.5 to $V = \Gamma$, we have
\[
E\left[Q_{n+1}^r\right] = E\left[Q_n^r\left(1 - r H(\Gamma_{n+1}^*)\right)\right] + E\left[Q_n^r\right] O\left(\frac{\log^2 n}{n^3}\right).
\]
Recall $I(\cdot, \cdot)$ in (25). We see that $P\{\Gamma_{n+1}^* \neq \tilde{\Gamma}_{n+1} \mid \mathcal{F}_{n^4}\}$ is bounded by
\[
P\{I[n^4 + n,\, n^4 + 4n] \mid \mathcal{F}_{n^4}\} + P\{I[(n+1)^4 - 4(n+1),\, (n+1)^4 - (n+1)] \mid \mathcal{F}_{n^4}\},
\]
which by (25) is further bounded by $O(n^{-4})$. Therefore,
\[
E\left[Q_n^r \left| H(\Gamma_{n+1}^*) - H(\tilde{\Gamma}_{n+1}) \right|\right] \leq O(n^{-4})\, E\left[Q_n^r\right].
\]
Hence,
\[
E\left[Q_{n+1}^r\right] = E\left[Q_n^r\left(1 - r H(\tilde{\Gamma}_{n+1})\right)\right] + E\left[Q_n^r\right] O\left(\frac{\log^2 n}{n^3}\right). \tag{42}
\]
Using (15) in Lemma 2.3, we can see that
\[
E\left[H(\tilde{\Gamma}_{n+1}) \mid \mathcal{F}_{n^4}\right] = E\left[H(\tilde{\Gamma}_{n+1})\right]\left[1 + o(e^{-4n})\right] = \tilde{h}_{n+1}\left[1 + o(e^{-4n})\right]. \tag{43}
\]
Let $q_n := p_{n^4}$. Combining (41), (42) and (43), we get
\[
q_{n+1} = q_n\left[1 - r \tilde{h}_{n+1} + O\left(\frac{\log^2 n}{n^3}\right)\right]
= q_n \exp\left\{-r \tilde{h}_{n+1}\right\}\left[1 + O\left(\frac{1}{n^2}\right)\right],
\]
where the last equality uses $\tilde{h}_{n+1} = O(n^{-1})$. In particular, if $m > n$,
\[
q_m = q_n\left[1 + O\left(\frac{1}{n}\right)\right] \exp\left\{-r \sum_{j=n+1}^{m} \tilde{h}_j\right\},
\]
from which Proposition 1.7 follows. $\square$

By Proposition 1.6, we see that
\[
p_m = p_{n^4}\left[1 + O\left(\log^4 n / n\right)\right] \qquad \text{if } n^4 \leq m \leq (n+1)^4.
\]
Combined with Proposition 1.7, we immediately get

Corollary 3.11. There exists $c_0 < \infty$ such that as $m \to \infty$,
\[
p_m = \left[ c_0 + O\left(\frac{\log^4 m}{m^{1/4}}\right) \right] \exp\left\{-r \sum_{j=1}^{\lfloor m^{1/4} \rfloor} \tilde{h}_j\right\}.
\]
4 From the subsequential limit to the full limit

In this section we prove Propositions 1.8 and 1.9. The proof of Proposition 1.8 is the technical bulk of this section and will be given in Section 4.2. Let us first conclude the proof of Proposition 1.9 assuming Proposition 1.8.

Proof of Proposition 1.9. Given Proposition 1.8, the $s = 0$ case follows from Corollary 3.11. For the general case, recall that $1 \leq G_n \leq 8$ and that $G_n$ converges to $G_\infty$ almost surely. Moreover, Lemma 2.4 implies that there exists a fast decaying sequence $\{\epsilon_n\}$ such that if $m \geq n$, then $P\{|G_n - G_m| \geq \epsilon_n\} \leq \epsilon_n$. Therefore $E[\phi_n^{-r} Z_n^r G_n^{-s}] - E[\phi_n^{-r} Z_n^r G_\infty^{-s}]$ is fast decaying and
\[
\left| E\left[\phi_n^{-r} Z_n^r G_\infty^{-s}\right] - E\left[\phi_m^{-r} Z_m^r G_\infty^{-s}\right] \right| \leq c \left| E\left[\phi_n^{-r} Z_n^r\right] - E\left[\phi_m^{-r} Z_m^r\right] \right|.
\]
Taking $m \to \infty$, we see that the $s \neq 0$ case follows from the $s = 0$ case. $\square$

4.1 Harmonic measure of the range of SRW

We start by proving two estimates for the harmonic measure of the range of random walk.

Lemma 4.1. Let
\[
\sigma_n^- = \sigma_n - \lfloor n^{-1/4} e^{2n} \rfloor, \qquad \sigma_n^+ = \sigma_n + \lfloor n^{-1/4} e^{2n} \rfloor, \qquad n' = \lceil n + n^{4/5} \rceil,
\]
\[
\mathcal{S}_n^- = S[0, \sigma_n^-], \qquad \mathcal{S}_n^+ = S[\sigma_n^+, \sigma_{n'}],
\]
\[
R_n = \max_{x \in \mathcal{S}_n^-} H(x, \mathcal{S}_n^+) + \max_{y \in \mathcal{S}_n^+} H(y, \mathcal{S}_n^-).
\]
Then, for all $n$ sufficiently large,
\[
P\{R_n \geq n^{-1/6}\} \leq n^{-1/3}. \tag{44}
\]

Our proof will actually give a stronger estimate, but (44) is all that we need and makes for a somewhat cleaner statement.

Proof. Let $m = \lceil n + \log n \rceil$. Recall from (2) that there exists $c_0 < \infty$ such that
\[
P\{\sigma_n \geq c_0 e^{2n} \log n\} \leq n^{-1} \qquad \text{and} \qquad P\{\sigma_m \geq c_0 e^{2n} n^2 \log n\} \leq n^{-1}.
\]
Let $V = V_n = S[\sigma_n^+, \sigma_m]$,
\[
\tilde{R}_n = \max_{x \in \mathcal{S}_n^-} H(x, V) + \max_{y \in V} H(y, \mathcal{S}_n^-),
\]
\[
R_n^* = \max_{x \in \mathcal{S}_n^-} H(x, \mathcal{S}_n^+ \setminus V) + \max_{y \in \mathcal{S}_n^+ \setminus V} H(y, \mathcal{S}_n^-),
\]
and note that $R_n \leq \tilde{R}_n + R_n^*$. We claim that for $n$ sufficiently large,
\[
P\{R_n^* \geq n^{-1/6}/2\} \leq O(n^{-1}). \tag{45}
\]
To see this, let $\ell = n + \lfloor (\log n)/2 \rfloor$, and let $U$ denote the event $U = \{(\mathcal{S}_n^+ \setminus V) \cap C_\ell = \emptyset\}$. By (16), $P(U) \geq 1 - O(n^{-1})$, and on the event $U$ we have $C_n \subset C_\ell \subset (\mathcal{S}_n^+ \setminus V)^c$.
Therefore, on the event $U \cap \{S[0, \infty) \in \mathcal{X}_n\}$, for $y \in \mathcal{S}_n^-$ and $x \in \mathcal{S}_n^+ \setminus V$, we have the following two bounds:
\[
H(y, \mathcal{S}_n^+ \setminus V) \leq c\, H\left(S[0, \infty) \cap A(n+1,\, n + n^{4/5})\right) \leq c\, n^{-1/5} \log^2 n;
\]
\[
H(x, \mathcal{S}_n^-) \leq H(x, C_n) \leq c/n.
\]
This gives (45).

Let $N = N_n = \lceil c_0 e^{2n} \log n \rceil$, $M = M_n = \lceil c_0 e^{2n} n^2 \log n \rceil$ and $k = k_n = \lfloor n^{-1/4} e^{2n} / 4 \rfloor$. For each integer $0 \leq j \leq N/(k+1)$, let $E_j = E_{j,n}$ be the event that at least one of the following holds:
\[
\max_{0 \leq i \leq jk} H\left(S_i, S[(j+1)k, M]\right) \geq \frac{\log n}{n^{1/4}}, \tag{46}
\]
\[
\max_{(j+1)k \leq i \leq M} H\left(S_i, S[0, jk]\right) \geq \frac{\log n}{n^{1/4}}. \tag{47}
\]
Then for large enough $n$, we have
\[
\left\{ \tilde{R}_n \geq n^{-1/3},\ jk \leq \sigma_n \leq (j+1)k \right\} \subset E_j.
\]
Recall the notation $Y_{n, \alpha}$ in Corollary 2.2. For fixed $j$, using the reversibility of simple random walk, the probabilities of both the event in (46) and that in (47) are bounded by $P[Y_{M, N/k} \geq n^{-1/4} \log n] = O(n^{-3/4})$. Therefore $P(E_j) \leq O(n^{-3/4})$ and hence
\[
P\{\tilde{R}_n \geq n^{-1/3}\} \leq O(n^{-1}) + \frac{N}{k}\, O(n^{-3/4}) \leq O\left(\frac{\log n}{n^{1/2}}\right).
\]
Combining this with (45) gives the proof. $\square$

The next lemma will use the notion of capacity $\mathrm{cap}(V)$ for a subset $V \subset \mathbb{Z}^4$. We will not review the definition but only recall three key facts:
\[
\mathrm{cap}(V \cup V') \leq \mathrm{cap}(V) + \mathrm{cap}(V'), \qquad V, V' \subset \mathbb{Z}^4; \tag{48}
\]
\[
\mathrm{cap}(\{z + v : v \in V\}) = \mathrm{cap}(V), \qquad V \subset \mathbb{Z}^4,\ z \in \mathbb{Z}^4; \tag{49}
\]
\[
\mathrm{cap}(V) \asymp |z|^2\, H(z, V), \qquad V \subset C_n,\ z \notin C_{n+1}. \tag{50}
\]
See [12, Section 6.5, in particular Proposition 6.5.1] for definitions and properties. Combining these with the estimates for hitting probabilities, we have
\[
E\left[\mathrm{cap}\left(S[0, n^2]\right)\right] \asymp \frac{n^2}{\log n}.
\]
By the Markov inequality, there exists $\delta > 0$ such that
\[
P\left\{ \mathrm{cap}\left(S[0, n^2]\right) \leq \frac{n^2}{\delta \log n} \right\} \geq \delta. \tag{51}
\]
By iterating (51) and using the strong Markov property and subadditivity as in the proof of Proposition 2.5, we see that there exist $c, \beta$ such that for all $n$ and all $a > 0$,
\[
P\left\{ \mathrm{cap}\left(S[0, n^2]\right) \geq \frac{a n^2}{\log n} \right\} \leq c\, e^{-\beta a}. \tag{52}
\]

Lemma 4.2. For all $j, m \in \mathbb{N}$, let $L[j, m] = \mathrm{cap}(S[j, j+m])$. For $k, n \in \mathbb{N}$, let $\bar{L}(n; k) = \max_{j \leq n} L[j, k]$. Then for every $u < \infty$,
\[
P\left\{ \bar{L}\left(n^u e^{2n};\ \lfloor n^{-1/4} e^{2n} \rfloor\right) \geq 2\, n^{-11/10} e^{2n} \right\} \text{ is fast decaying.} \tag{53}
\]

Proof. Let $k = \lceil n^{-1/4} e^{2n} \rceil$.
Let $U$ denote the event in (53). By the subadditivity of capacity (48) and the union bound, we have
\[
U \subset \bigcup_{i=1}^{n^{u+1}} \left\{ L[ik, k] \geq n^{-11/10} e^{2n} \right\}.
\]
By (49), the events in the union are identically distributed. Therefore
\[
P(U) \leq n^{u+1}\, P\left\{ L[0; k] \geq n^{3/20}\, \frac{e^{2n}}{n^{1/4}\, n} \right\},
\]
which is fast decaying by (52). $\square$

4.2 Proof of Proposition 1.8

The strategy is to find a $u > 0$, and for each $n$ a random set $U = U(n) \subset \mathbb{Z}^4$, that can be written as a disjoint union
\[
U = \bigcup_{j=(n-1)^4+1}^{n^4} U_j \tag{54}
\]
such that the following four conditions hold:
\[
U \subset \tilde{\Gamma}_n; \tag{55}
\]
\[
U_j \subset \eta^j, \qquad j = (n-1)^4 + 1, \ldots, n^4; \tag{56}
\]
\[
E\left[H(\tilde{\Gamma}_n \setminus U)\right] + \sum_{j=(n-1)^4+1}^{n^4} E\left[H(\eta^j \setminus U_j)\right] \leq O(n^{-(1+u)}); \tag{57}
\]
\[
\sup_{x \in U_j} H\left(x,\, U \setminus U_j\right) \leq n^{-u} \quad \text{for each } j. \tag{58}
\]
Note that (57) implies
\[
E[H(\tilde{\Gamma}_n)] = O(n^{-(1+u)}) + E[H(U)], \qquad
\sum_{j=(n-1)^4+1}^{n^4} E[H(U_j)] = O(n^{-(1+u)}) + \sum_{j=(n-1)^4+1}^{n^4} E[H(\eta^j)].
\]
Let $\tilde{S}$ be a simple random walk (independent of $U$) starting at the origin, and let $J_n$ be the number of integers $j$ with $(n-1)^4 < j \leq n^4$ such that $\tilde{S}[0, \infty) \cap U_j \neq \emptyset$. Let $\tilde{P}$ and $\tilde{E}$ denote the probability and expectation over $\tilde{S}$ with $U$ fixed. Since the $U_j$ are disjoint, (58) and the strong Markov property imply, for $k \geq 1$,
\[
\tilde{P}\{J_n \geq k+1 \mid J_n \geq k\} \leq n^{-u}.
\]
Therefore,
\[
\tilde{E}[J_n] \leq \tilde{P}[J_n \geq 1]\left[1 + O(n^{-u})\right].
\]
Since $\tilde{E}[J_n] = \sum_{j=(n-1)^4+1}^{n^4} H(U_j)$ and $H(U) = \tilde{P}[J_n \geq 1]$, we have
\[
\sum_{j=(n-1)^4+1}^{n^4} H(U_j) = H(U)\left[1 + O(n^{-u})\right].
\]
Therefore it remains to find the sets $U$ and $U_j$ satisfying (55)–(58).

Let $\sigma_j^{\pm} = \sigma_j \pm \lfloor j^{-1/4} e^{2j} \rfloor$ as in Lemma 4.1 and $\tilde{\omega}_j = S[\sigma_{j-1}^+, \sigma_j^-]$. We let $U$ be defined as in (54), where $U_j = \eta^j \cap \tilde{\omega}_j$ unless one of the following six events occurs, in which case we set $U_j = \emptyset$. (We assume $(n-1)^4 < j \leq n^4$.)

1. If $j \leq (n-1)^4 + 8n$ or $j \geq n^4 - 8n$.
2. If $H(\omega_j) \geq j^{-1} \log^2 j$.
3. If $\omega_j \cap C_{j - 8 \log n} \neq \emptyset$.
4. If $H(\omega_j \setminus \tilde{\omega}_j) \geq j^{-1-u}$.
5. If it is not true that there exist loop-free points in both $[\sigma_{j-1}^-, \sigma_{j-1}^+]$ and $[\sigma_j^-, \sigma_j^+]$.
6. If $\sup_{x \in \tilde{\omega}_j} H\left(x,\, S[0, \sigma_{n^4}] \setminus \omega_j\right) \geq j^{-1/6}$.

We need to show that (55)–(58) hold for some $u > 0$. Throughout this proof we assume $n$ is large enough. The definition of $U_j$ immediately implies (56). Combining conditions 1 and 3, we see that $U_j \subset A\left((n-1)^4 + 6n,\ n^4 - 6n\right)$.
Moreover, if there exist loop-free points in $[\sigma_{j-1}^-, \sigma_{j-1}^+]$ and $[\sigma_j^-, \sigma_j^+]$, then $\tilde{\eta}_n \cap \tilde{\omega}_j = \eta^j \cap \tilde{\omega}_j$. Therefore the $U_j$ are disjoint and (55) holds. Also, condition 6 immediately yields that (58) holds for $u \leq 1/6$. In order to establish (57), we first note that
\[
\left( \tilde{\Gamma}_n \cup \eta^{(n-1)^4+1} \cup \cdots \cup \eta^{n^4} \right) \setminus U \subset \bigcup_{j=(n-1)^4}^{n^4} V_j,
\]
where
\[
V_j = \begin{cases} \omega_j & \text{if } U_j = \emptyset, \\ \omega_j \setminus \tilde{\omega}_j & \text{if } U_j = \eta^j \cap \tilde{\omega}_j. \end{cases}
\]
Hence, it suffices to find $0 < u$ such that $\sum_j E[H(V_j)] = O(n^{-(1+u)})$. We consider the conditions in turn.

4. To estimate $E[H(\omega_j \setminus \tilde{\omega}_j)]$, we use (50) and Lemma 4.2 to see that, except on an event of fast decaying probability,
\[
H(\omega_j \setminus \tilde{\omega}_j) \leq O(j^{-11/10}), \tag{59}
\]
and hence $E[H(\omega_j \setminus \tilde{\omega}_j)] \leq O(j^{-11/10})$ and
\[
\sum_{j=(n-1)^4}^{n^4} E\left[H(\omega_j \setminus \tilde{\omega}_j)\right] \leq c \sum_{j=(n-1)^4}^{n^4} j^{-11/10} \leq c\, n^{-7/5}.
\]

2. By Proposition 2.5, $P\{H(\omega_j) \geq j^{-1} \log^2 j\}$ is fast decaying in $j$. This takes care of $\sum_{j=(n-1)^4}^{n^4} E[H(\omega_j); E_j^2]$. It remains to bound $P(E_j^i) \leq j^{-u}$ for $i = 3, 4, 5, 6$ and $(n-1)^4 + 8n \leq j \leq n^4 - 8n$.

5. Let $I = I_{\delta, n}$ be as in (24), substituting in $n = e^{2j} j^{1/16}$ and $\delta = j^{-7/16}$, so that $\delta n = e^{2j} j^{-3/8}$. Using (24), we have $P(I) = o(j^{-1/4})$. Note that the event that there is no loop-free time in either $[\sigma_{j-1}^-, \sigma_{j-1}^+]$ or $[\sigma_j^-, \sigma_j^+]$ is contained in the union of $I$ and the two events
\[
\{\sigma_{j+1} \geq e^{2j} j^{1/16}\} \qquad \text{and} \qquad \{S[e^{2j} j^{1/16}, \infty) \cap S[0, \sigma_{j+1}] \neq \emptyset\}.
\]
The probability of the first event is fast decaying by (2), and the probability of the second is $o(1/j)$ by (11). Hence $P(E_j^5) = o(j^{-1/4})$.

6. By Lemma 4.1, for large enough $n$, we have $P(E_j^6) \leq j^{-1/3}$. $\square$

5 Exact relation

In this section we first prove the elementary lemma promised at the end of Section 1.2. Then we give the asymptotics of the long-range intersection probability of SRW and LERW in terms of $\hat{G}_n^2$ defined in Section 2.4, which concludes our proof of Theorem 1.5.

5.1 A lemma about a sequence

Lemma 5.1. Suppose $\beta > 0$ and $p_1, p_2, \ldots$ is a sequence of positive numbers with $p_{n+1}/p_n \to 1$, such that
\[
\lim_{n \to \infty} \left[ \log p_n + \beta \sum_{j=1}^{n} p_j \right]
\]
exists and is finite. Then $\lim_{n \to \infty} n p_n = 1/\beta$.

Proof. It suffices to prove the result for $\beta = 1$, for otherwise we can consider $\tilde{p}_n = \beta p_n$. Let
\[
a_n = \log p_n + \sum_{j=1}^{n} p_j.
\]
The hypothesis implies that $\{a_n\}$ is a Cauchy sequence. We first claim that for every $\delta > 0$, there exists $N_\delta > 0$ such that if $n \geq N_\delta$ and $p_n = (1 + 2\epsilon)/n$ with $\epsilon \geq \delta$, then there does not exist $r > n$ with
\[
p_k \geq \frac{1}{k}, \quad k = n, \ldots, r-1, \qquad p_r \geq \frac{1 + 3\epsilon}{r}.
\]
Indeed, suppose these inequalities hold for some $n, r$. Then
\[
\log(p_r / p_n) \geq \log\frac{1 + 3\epsilon}{1 + 2\epsilon} - \log(r/n), \qquad
\sum_{j=n+1}^{r} p_j \geq \log(r/n) - O(n^{-1}),
\]
and hence for $n$ sufficiently large,
\[
a_r - a_n \geq \frac{1}{2} \log\frac{1 + 3\epsilon}{1 + 2\epsilon} \geq \frac{1}{2} \log\frac{1 + 3\delta}{1 + 2\delta}.
\]
Since $\{a_n\}$ is a Cauchy sequence, this cannot be true for large $n$.

We next claim that for every $\delta > 0$, there exists $N_\delta > 0$ such that if $n \geq N_\delta$ and $p_n = (1 + 2\epsilon)/n$ with $\epsilon \geq \delta$, then there exists $r$ such that
\[
\frac{1 + \epsilon}{k} \leq p_k < \frac{1 + 3\epsilon}{k}, \quad k = n, \ldots, r-1, \qquad p_r < \frac{1 + \epsilon}{r}. \tag{60}
\]
To see this, we consider the first $r$ such that $p_r < (1 + \epsilon)/r$. By the previous claim, if such an $r$ exists, then (60) holds for $n$ large enough. If no such $r$ exists, then by the argument above, for all $r > n$,
\[
a_r - a_n \geq \log\frac{1 + \epsilon}{1 + 2\epsilon} + \frac{\epsilon}{2}(1 + \epsilon) \log(r/n) - O(n^{-1}).
\]
Since the right-hand side goes to infinity as $r \to \infty$, this contradicts the fact that $\{a_n\}$ is a Cauchy sequence.

By iterating the last assertion, we can see that for every $\delta > 0$, there exists $N_\delta > 0$ such that if $n \geq N_\delta$ and $p_n = (1 + 2\epsilon)/n$ with $\epsilon \geq \delta$, then there exists $r > n$ such that
\[
p_r < \frac{1 + 2\delta}{r}, \qquad \text{and} \qquad p_k \leq \frac{1 + 3\epsilon}{k}, \quad k = n, \ldots, r-1.
\]
Let $s$ be the first index greater than $r$ (if it exists) such that either
\[
p_s \leq \frac{1}{s} \qquad \text{or} \qquad p_s \geq \frac{1 + 2\delta}{s}.
\]
Using $p_{n+1}/p_n \to 1$, we can see, perhaps by choosing a larger $N_\delta$ if necessary, that
\[
\frac{1 - \delta}{s} \leq p_s \leq \frac{1 + 4\delta}{s}.
\]
If $p_s \geq (1 + 2\delta)/s$, then we can iterate this argument with $\epsilon \leq 2\delta$ to see that
\[
\limsup_{n \to \infty} n p_n \leq 1 + 6\delta.
\]
The $\liminf$ can be handled similarly. $\square$

5.2 Long range intersection

Let $S, W$ be simple random walks with corresponding stopping times $\sigma_n$. We will assume that $S_0 = w$, $W_0 = 0$. Let $\eta = \mathrm{LE}(S[0, \sigma_n])$. Note that we are stopping the random walk $S$ at time $\sigma_n$, but we are allowing the random walk $W$ to run for infinite time.

Proposition 5.2.
There exists $\alpha < \infty$ such that if $n^{-1} \leq e^{-n} |w| \leq 1 - n^{-1}$, then
\[
\left| \log P\{W[0, \infty] \cap \eta \neq \emptyset\} - \log\left[\hat{G}_n^2(w)\, \hat{p}_n\right] \right| \leq \frac{c \log^\alpha n}{n}.
\]
In particular,
\[
E\left[H(\eta^n)\right] = \frac{8 \hat{p}_n}{\pi^2} \left[1 + O\left(\frac{\log^\alpha n}{n}\right)\right].
\]

Throughout this section, let $\theta_n$ denote an error term that decays at least as fast as $\log^\alpha n / n$ for some $\alpha$ (with the implicit uniformity of the estimate over all $n^{-1} \leq e^{-n} |w| \leq 1 - n^{-1}$); $\theta_n$ may vary from line to line. The second assertion in Proposition 5.2 then follows immediately from the first and (27). We can write the conclusion of the proposition as
\[
P\{W[0, \infty] \cap \eta \neq \emptyset\} = \hat{G}_n^2(w)\, \hat{p}_n \left[1 + \theta_n\right].
\]
Note that $\hat{G}_n^2(w) \geq c/n$ if $|w| \leq e^n (1 - n^{-1})$, and hence $\hat{G}_n^2(w)\, \hat{p}_n \geq c/n^2$.

We start by giving a sketch of the proof, which is fairly straightforward. On the event $\{W[0, \infty] \cap \eta \neq \emptyset\}$ there are typically many points in $W[0, \infty] \cap \eta$, and we focus on a particular one. This is analogous to the situation when one studies the probability that a random walk visits a set; in the latter case, one usually focuses on the first or last visit. In our case, with two paths, the notions of "first" and "last" are a little ambiguous, so we have to take some care. We will consider the first point on $\eta$ that is visited by $W$, and then focus on the last visit by $W$ to this first point on $\eta$. To be precise, we write $\eta = [\eta_0, \ldots, \eta_m]$ and
\[
i = \min\{t : \eta_t \in W[0, \infty)\}, \qquad \rho = \max\{t \geq 0 : S_t = \eta_i\}, \qquad \lambda = \max\{t : W_t = \eta_i\}.
\]
Then the event $\{\rho = j;\ \lambda = k;\ S_\rho = W_\lambda = z\}$ is the event that:
\[
\text{I}: \quad j < \sigma_n,\ S_j = z,\ W_k = z;
\]
\[
\text{II}: \quad \mathrm{LE}(S[0, j]) \cap \left( S[j+1, \sigma_n] \cup W[0, k] \cup W[k+1, \infty) \right) = \{z\};
\]
\[
\text{III}: \quad z \notin S[j+1, \sigma_n] \cup W[k+1, \infty).
\]
Viewing the picture at $z$, we see that $P\{\text{II and III}\} \sim \hat{p}_n$. Using the slowly recurrent nature of the random walk paths, we expect that, as long as $z$ is not too close to $0$, $w$, or $\partial C_n$, the event I is almost independent of (II and III). This then gives
\[
P\{\rho = j;\ \lambda = k;\ S_\rho = W_\lambda = z\} \sim P\{S_j = W_k = z\}\, \hat{p}_n,
\]
and summing over $j, k, z$ gives $P\{W[0, \infty] \cap \eta \neq \emptyset\} \sim \hat{p}_n\, \hat{G}_n^2(w)$.
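The sketch above, and the proof that follows, repeatedly use the loop erasure $\mathrm{LE}$. As a concrete reference for this operation (our own minimal Python sketch, not code from the paper), chronological loop erasure keeps points in order of first visit, and each return to a retained point erases the intervening loop:

```python
def loop_erase(path):
    """Chronological loop erasure LE(path): keep points in order of first
    visit; each return to a retained point erases the loop in between."""
    gamma, pos = [], {}   # pos maps a retained point to its index in gamma
    for v in path:
        if v in pos:
            for u in gamma[pos[v] + 1:]:   # erase the loop closed at v
                del pos[u]
            del gamma[pos[v] + 1:]
        else:
            pos[v] = len(gamma)
            gamma.append(v)
    return gamma
```

The output is a self-avoiding path with the same endpoints as the input. Note that the distributional identity used below, that reverse loop-erasing agrees with forward loop-erasing, is a statement about random walk paths, not a pathwise identity for this deterministic operation.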
The following proof makes this reasoning precise.

Proof of Proposition 5.2. Let $V$ be the event
\[
V = \{ w \notin S[1, \sigma_n],\ 0 \notin W[1, \infty) \}.
\]
Using $P[0 \notin W[1, \infty)] = P^w[w \notin S[1, \infty)] = G(0,0)^{-1}$, we can see that $|P(V) - G(0,0)^{-2}|$ is fast decaying. Let $\tau = \max\{j : W_j = 0\}$. Then $P\{\tau > \sigma_{\log^2 n}\}$ and $P\{S[0, \infty) \cap C_{\log^2 n} \neq \emptyset\}$ are fast decaying, and hence so is
\[
\left| P\{W[0, \infty] \cap \eta \neq \emptyset \mid V\} - P\{W[0, \infty] \cap \eta \neq \emptyset\} \right|.
\]
Therefore it suffices to show that
\[
P\left[ V \cap \{W[0, \infty] \cap \eta \neq \emptyset\} \right] = \frac{\hat{G}_n^2(w)}{G(0,0)^2}\, \hat{p}_n \left[1 + \theta_n\right]. \tag{61}
\]
Let $E(j,k,z)$, $E_z$ be the events
\[
E(j,k,z) = V \cap \{\rho = j;\ \lambda = k;\ S_\rho = W_\lambda = z\}, \qquad E_z = \bigcup_{j,k=0}^{\infty} E(j,k,z).
\]
Then $P[V \cap \{W[0, \infty] \cap \eta \neq \emptyset\}] = \sum_{z \in C_n} P(E_z)$. Let
\[
C_n' = C_{n,w}' = \left\{ z \in C_n : |z| \geq n^{-4} e^n,\ |z - w| \geq n^{-4} e^n,\ |z| \leq (1 - n^{-4}) e^n \right\}.
\]
We can use the easy estimate $P(E_z) \leq G_n(w, z)\, G(0, z)$ to see that
\[
\sum_{z \in C_n \setminus C_n'} P(E_z) \leq O(n^{-6}),
\]
so it suffices to estimate $P(E_z)$ for $z \in C_n'$. We will translate so that $z$ is the origin and will reverse the paths $S[0, \rho]$ and $W[0, \lambda]$. Let $\omega^1, \ldots, \omega^4$ be four independent simple random walks starting at the origin, and let $x = w - z$, $y = -z$. Let $l^i$ denote the smallest index $l$ such that $|\omega^i_l - y| \geq e^n$. Using the fact that reverse loop-erasing has the same distribution as forward loop-erasing, we see that $P[E(j,k,z)]$ can be given as the probability of the event
\[
\left( \omega^3[1, l^3] \cup \omega^4[1, \infty) \right) \cap \mathrm{LE}(\omega^1[0, j]) = \emptyset, \qquad \omega^2[0, k] \cap \mathrm{LE}(\omega^1[0, j]) = \{0\}.
\]
Let $T^3 = \infty$, and $T^i = T^i_n = \min\{j : W^i_j \notin C_n^y\}$ for $i = 1$ and $4$;
\[
\tau^1 = \min\{m : W^1_m = x\}, \qquad \tau^2 = \min\{m : W^2_m = y\}; \qquad \hat{\Gamma} = \hat{\Gamma}_n = \mathrm{LE}\left(W^1[0, \tau^1]\right).
\]
We also override the notation $S$ to denote a simple random walk on $\mathbb{Z}^4$ starting from $0$. Let $E$ be the event
\[
\hat{\Gamma} \cap \left( W^2[1, \tau^2] \cup W^3[1, T^3] \right) = \emptyset \qquad \text{and} \qquad \hat{\Gamma} \cap W^4[0, T^4] = \{0\}.
\]
Then $P\{E,\ \tau^1 <$