Conformal invariance of the loop-erased random walk intensity

Christian Beneˇs ∗1, Fredrik Johansson Viklund†2, and Gregory F. Lawler‡3

1Brooklyn College of the City University of New York 2Uppsala University and Columbia University 3University of Chicago

January 13, 2014

Abstract The loop-erased random walk (LERW) edge intensity is simply the probability that the LERW passes through a given edge. We show that this normalized probability (in the bulk) of a planar LERW between two boundary points of a square grid approximation of a simply connected domain converges in the lattice size scaling limit to an explicit conformally covariant quantity which coincides with the SLE2 Green’s function.

Contents

1 Introduction and outline of proof 2 1.1 Introduction ...... 2 1.2 Acknowledgments ...... 3 1.3 Notation and set-up ...... 4 1.4 Outline of proof and a key idea ...... 6

2 Preliminaries 9 2.1 Green’s functions ...... 9 2.1.1 Loops and loop measures ...... 10 2.1.2 Loop-erased walk ...... 11 2.2 Brownian loop ...... 12 2.3 Poisson kernels ...... 13 2.4 KMT coupling ...... 15

3 The combinatorial identity: proof of Theorem 1.3 15

4 Comparison of loop measures: proof of Theorem 1.4 19

[email protected][email protected][email protected]

1 5 Asymptotics of Λ: proof of Theorem 1.5 24 5.1 Continuum functions ...... 24 5.2 Asymptotics of ΛA(0, a)...... 26 5.3 Poisson kernel convergence in the slit square: Proof of Theorem 5.5 ...... 31 5.4 From the Slit Square to ∂A: Proof of Proposition 5.7 ...... 34

1 Introduction and outline of proof

1.1 Introduction In this paper we consider loop-erased random walk, LERW, on a square grid. This measure on self-avoiding paths is obtained by running a simple random walk and successively erasing loops as they form. We will work with a chordal version in a small mesh lattice approximation of a simply connected domain: given two boundary vertices, we erase the loops of a random walk started at one of the vertices conditioned to take its first step into the domain (along a prescribed edge) and then exit at the other vertex (along a prescribed edge). By linear interpolation this gives a random continuous curve – the LERW path. It is known that the LERW path has a conformally invariant scaling limit in the sense that it converges in law as a curve up to reparameterization to the chordal SLE2 path as the mesh size goes to zero, see [LSW04]. We will not use any results about SLE in this paper. The main theorem of this paper is a different conformal invariance result which does not follow from the convergence to SLE. We consider the edge intensity in the bulk which is the probability that the LERW path uses a given edge away from the boundary of the domain D. We show that this probability, normalized by a suitable power of the mesh size, converges as the mesh size gets smaller to an explicit (up to an unknown lattice-dependent constant) conformally covariant function which coincides with the SLE2 Green’s function GD(z; a, b). This is the limit as  → 0 of the normalized probability that the chordal SLE2 path between a and b in D visits the ball of radius  around z 1. So our theorem is more in spirit of the scaling limit predictions commonly taken as starting point for the application of CFT to lattice models than an SLE convergence result. See [HS11] for a somewhat similar result for the one-point function of the energy density of the Ising model. Let us be more precise. Let D be a simply connected bounded Jordan domain containing 0. Write rD for the conformal radius of D seen from 0. Let Dn ⊂ D be a suitable approximation of −1 2 D using n Z (see Section 2 for details) with boundary vertices an, bn tending to a, b as n tends to ∞. Let γn be LERW in Dn from an to bn and write e = en for the edge [0, 1/n].

Theorem 1.1. There exists 0 < c0 < ∞ such that for all D, a, b, there exists a sequence of approximating domains Dn ↑ D and boundary points an → a, bn → b such that

3/4 −3/4 3 lim c0 n P (e ⊂ γn) = r sin (π hmD (0, (ab))) , n→∞ D where rD is the conformal radius of D from 0, hm denotes harmonic measure, and (ab) ⊂ ∂D is either of the subarcs from a to b.

We do not determine the value of the (lattice dependent) constant c0. We do give bounds on the rate of convergence, but it will be easier to describe them in terms of the discrete result

1 2iθa 2iθb 3 A formula for GD can be written using a covariance rule and the fact that GD(0, e , e ) equals | sin (θa −θb)| up to a constant.

2 Theorem 1.2. There are two sources of error. For the discrete approximation Dn there is an error in the LERW probability; we give a uniform bound on this error. There is also an error in the approximation of D by Dn; this error depends on the domain D. If ∂D is nice, say piecewise analytic (analytic close to a, b), the first error term is larger. Several authors have studied the LERW edge intensity and the closely related (expected) growth exponent, that is, the polynomial growth rate of the expected number of steps of a LERW of diameter N. Lawler computed the growth exponents in dimensions d > 4 [Law13b], where they turn out to be the same as for simple random walk with a logarithmic correction in d = 4, while Kenyon proved it equals 5/4 in the planar case and also estimated the asymptotics of the probability 2 that a given point is on a whole-plane LERW from 0 to ∞ on Z , see [Ken00]. (We will only discuss the planar case in the rest of the paper.) Masson gave a different proof of Kenyon’s result using the convergence to SLE2 and known results on SLE exponents [LSW00] and obtained second moment estimates in collaboration with Barlow [Mas09, BM10]. Kenyon and Wilson computed several exact 2 numeric values for the intensity of the whole-plane LERW on Z in the vicinity of the starting point, see [KW11]. Lawler recently estimated the decay of the edge intensity of a chordal LERW in a square domain up to constants and the main result of this paper is obtained by refining the arguments of [Law13a]. The papers of Kenyon and Kenyon and Wilson provided key inspiration for the particular observable used in the proofs of [Law13a] and this paper. The LERW path is known to converge to the SLE2 path when parameterized by capacity, a parameterization which is natural from the point of view of conformal geometry. An important question is whether the LERW path also converges when parameterized in the natural Euclidean sense so that it, roughly speaking, takes the same number of steps in each unit of time. (The num- ber of steps increases as the step length/mesh size decreases, of course.) The conjecture is that the sequence of LERWs with this parameterization converges in law to SLE2 with a particular param- eterization, the natural parameterization, which can be given as a multiple of the 5/4-dimensional Minkowski content of the SLE2 curve, see [LR13] and the references therein. (The Hausdorff di- mension of the SLE2 curve is 5/4.) One motivation for studying the problem of this paper is that we believe it to be an important step in the proof of this conjecture. See also [GPS13] for some results for the corresponding question in the case of percolation interfaces and SLE6. The starting point of our proof is a combinatorial identity that factorizes the edge intensity, just as in [Law13a]. We give here a new proof using Fomin’s identity [Fom01] which makes more explicit the connection with determinantal formulas. We actually prove a generalization which considers a LERW path containing as a subset a prescribed self-avoiding walk (SAW) away from the boundary. (A given edge is clearly a special case of such a SAW.) As it turns out there are two factors whose asymptotics need to be understood. The first is the squared exponential of the 1 i random walk loop measure of loops of odd winding number about 2 − 2 . We obtain asymptotics by comparing with the corresponding conformally invariant Brownian loop measure quantity. 
The second factor can be written in terms of “signed” random walk hitting probabilities or alternatively as an expectation for a random walk on a branched double cover of the domain. After some preliminary reductions the required estimates are proved using coupling techniques that include the KMT strong approximation and results from [KL05, BVK13].

1.2 Acknowledgments Johansson Viklund was supported by the Simons Foundation, National Science Foundation Grant DMS-1308476 and the Swedish Research Council (VR). Lawler was supported by National Science

3 Foundation Grant DMS-0907143.

1.3 Notation and set-up The proof of Theorem 1.1 has three principal building blocks. Although we formulated the theorem for a fixed domain being approximated with a grid of small mesh size we prefer to work with discrete 2 domains derived from Z and let the inner radius from 0 tend to infinity. Let us set some notation. 2 • We write the planar integer lattice Z as Z × iZ. Throughout this paper we fix 1 i w = − , 0 2 2

2 2 and note that the dual lattice to Z is Z + w0. 2 2 • A subset of A ⊂ Z is called simply connected if both A and Z \ A are connected in the sense 2 of graphs. Let A denote the set of simply connected, finite subsets A of Z that contain the origin. Let An denote the set of A ∈ A with n 6 dist(w0, ∂Dn) 6 2n. 2 • Let E~ = {[z, w]: z, w ∈ V} be the directed edge set of the graph Z = Z + iZ. 2 • For each z ∈ Z , let Sz denote the closed square region of side length one centered at z,  1 S = z + (x + iy) ∈ : 0 |x|, |y| . z C 6 6 2

2 Note that the corners of Sz are on the dual lattice Z + w0.

• If A ∈ A, let DA ⊂ C be the simply connected domain " # [ DA = int SA . z∈A

2 We see that A ⊂ DA and ∂DA is a subset of the edge set of the dual Z + w0.

• If z ∈ DA, let f = fA denote the unique conformal map f : DA → D with

0 f(w0) = 0, f (w0) > 0.

• Let ∂eA denote the edge boundary of A, that is, the set of ordered pairs (a−, a+) of lattice 2 points with a− ∈ A, a+ ∈ Z \ A, |a− − a+| = 1. We will also specify boundary points by giving a, the midpoint of a− and a+, which is a point on ∂DA. We will often use a for the pair (a−, a+) when no confusion is possible.

• For a ∈ ∂DA, we define θa ∈ [0, π) by

i2θa fA(a) = e .

Note the factor of 2 in the definition.

4 0 −1 • Let rA = rA(w0) = f (w0) be the conformal radius of DA with respect to w0. If rA(0) −1 denotes the conformal radius from 0, then it is easy to see that rA(0) = rA [1 + O(rA )]. • We write ω = [ω0, . . . , ωτ ] 2 for nearest neighbor walks in Z and call them simply walks or sometimes paths. We write |ω| = τ for the length of the path and p(ω) = 4−|ω| for the simple random walk probability.

1 1 1 2 2 2 • We write ⊕ for concatenation of paths. That is to say if ω = [ω0, . . . , ωk], ω = [ω0, . . . , ωj ], 1 2 1 2 the concatenation ω ⊕ ω is defined if ωk = ω0 in which case 1 2 1 1 2 2 ω ⊕ ω = [ω0, . . . , ωk, ω1, . . . , ωj ].

• If a, b are distinct elements of ∂eA, we let

W = W(A; a, b)

be the set of walks ω = [ω0, . . . , ωτ ],

with [ω0, ω1] = [a+, a−], [ωτ−1, ωτ ] = [b−, b+] and ω1, . . . , ωτ−1 ∈ A. • Write X H∂A(a, b) = p(ω) ω∈W 2 for the the corresponding (boundary) Poisson kernel. If x ∈ Z \ A and dist(x, A) = 1, then we will similarly write H (x, b) = P H (a, b). ∂A a:a+=x ∂A

1 1 • If ω ∈ W(A; a, b) with |ω| = τ, will also write ω(t), 2 6 t 6 τ − 2 , for the continuous path of time duration τ − 1 that starts at a and goes along the edges of ω at speed one to b. Note 1 1 that ω(t) ∈ DA for 2 < t < τ − 2 .

• Let WSAW = WSAW(A; a, b) denote the set of η ∈ W that are self-avoiding walks. Note that a path ω is self-avoiding if and only if f ◦ ω is a simple curve.

• For each ω ∈ W there exists a unique η = L(ω) ∈ WSAW obtained by chronological loop- erasing, see Section 2.

+ − • Let WSAW (resp., WSAW ) denote the set of η ∈ WSAW that include the ordered edge [0, 1] ∗ + − ∗ (resp., [1, 0]) and WSAW = WSAW ∪ WSAW . Let W be the set of ω ∈ W such that ∗ L(ω) ∈ WSAW . Set ∗ X H∂A(a, b) := p(ω). ω∈W∗ If e is the unordered edge [0, 1] and η is a LERW from a to b in A, then we can write H∗ (a, b) P (a, b; A) := P (e ⊂ η) = ∂A ; (1) H∂A(a, b) this is the LERW intensity at e.

5 The following is our main theorem.

Theorem 1.2. There exists u > 0, 0 < c0 < ∞ such that the following holds. Suppose A ∈ A and −u suppose a, b ∈ ∂A with |sin(θa − θb)| > rA . Then,

−3/4 3 h  −u −1i P (a, b; A) = c0 rA | sin (θa − θb) | 1 + O rA sin (θa − θb) .

Let us discuss how to derive Theorem 1.1 from Theorem 1.2. Suppose D is a Jordan domain containing the origin with dist(0, ∂D) = 1. For each n, let An be the largest, simply connected 2 −1 −1 subset of Z containing the origin such that DAn ⊂ nD. Let Dn = n DAn , wn = n w0.

Let fAn be the corresponding map as above and fn(z) = fAn (nz). Then fn : Dn → D with 0 0 fn(wn) = 0, fn(wn) > 0. Let f : D → D be the conformal transformation with f(0) = 0, f (0) > 0. Since D is a Jordan domain, f extends to a homeomorphism, f : D → D. If z ∈ ∂Dn, let w be a point in ∂D that minimizes√ |z − w|. Since z is contained in a square of −1 side n that intersects ∂D, we have |z − w| 6 2/n. By the Beurling estimate, we can see that −1/2 |f(z) − f(w)| 6 c n . In other words, there exists c1 such that

−1/2 f(∂Dn) ⊂ {z ∈ D : |z| > 1 − c1n }.

−1 Also, by the Beurling estimate, if e is a boundary edge of Dn, then the diameter of f(n e) is −1/2 −1 O(n ). If a ∈ ∂D, let us choose an to be a point on n ∂eAn that minimizes |f(a) − f(an)|. Then there exists c2 such that

−1/2 |f(a) − f(an)| + dist(f(an), ∂D) 6 c2 n .

−1/2 Also, since |w0| 6 1/n, we have |f(an) − fn(an)| 6 c n and hence |f(a) − fn(an)| = O(n ). Also, −1 −1/2 rA = n rAn [1 + O(n ). Hence, if we choose the approximating points to be an, bn, then we get a uniform error term in Theorem 1.1. Although there is a uniform bound on |f(a) − f(an)|, there is not a uniform bound on |a − an|. −1/2 However, since |f(a) − f(an)| 6 c2 n , we certainly have

n −1 −1 −1/2o |a − an| 6 δn := sup |f (z) − f (w)| : |z − w| 6 c2 n .

−1 If D is a Jordan domain, then f is uniformly continuous and hence δn → 0 as n → ∞. Using 0 0 −1 the Beurling estimate again, we can see that if an, bn are sequences of points in n ∂eAn with 0 0 0 0 an → a, bn → b, then fn(an) → f(a), fn(bn) → f(b). If ∂D is locally analytic at a and b, or more generally, if the map f is bi-Lipchitz in neighborhoods −1 −1 of a and b, then one can improve these estimates giving |a−an| = O(n ), |f(a)−f(an)| = O(n ). Therefore, for nice domains the biggest error term in our result comes from the discrete result, Theorem 1.2.

1.4 Outline of proof and a key idea The first step of the proof of Theorem 1.2 rewrites (1) as a product of three factors which will then be estimated in the remainder of the paper. Before stating the main estimates we will introduce a key idea which is further discussed in Section 5.1.

6 Suppose ω(t), t ∈ [0,T ], is a curve in DA that avoids w0. Let t 7→ Θt = arg[f(ω(t))] be a continuous version of the argument. Define Θ  Θ  J = t − 0 ; t 2π 2π

Jt Qt = Q(ω[0, t]) = (−1) .

Although the argument is only defined up to an integer multiple of 2π, the value of Jt, and hence the value of Qt are independent of the choice of Θ0. If ω has time duration τ, we write Q(ω) = Qτ . Note that if ω = ω1 ⊕ ω2, then

J(ω) = J(ω1) + J(ω2),Q(ω) = Q(ω1) Q(ω2).

In particular, if ω = [ω0, . . . , ωτ ] is a path lying in A, then τ Y Q(ω) = Q(ej), ej = [ωj−1, ωj]. j=1

−1 Roughly speaking Q(e) = −1 if and only if the edge e crosses β := fA ([0, 1]). To be more precise, we note that if rA is sufficiently large, then the angle of f(z) (modulo 2π) cannot change as much 2 as π along an edge e of Z . Let Arg f(z) denote the value of the argument that lies in [0, 2π). Then for ~e = [z, z0], Q(~e) = −1 if and only if | Arg f(z) − Arg f(z0)| > π. We note the following., • Q(~e) is a function of the undirected edge. • If ~e = [0, 1], then Q(~e) = 1.

• if ` is a loop in A, then Q(`) = −1 if and only if the winding number of ` about w0 is odd. We define (signed) weights by 1 q(e) = p(e) Q(e) = Q(e), (e edge), 4 and if ω is a walk as above, τ Y q(ω) = p(ω) Q(ω) = q(ej). j=1 In order to state the combinatorial identity, we need to define a number of quantities. Write JA for the set of unrooted random walk loops ` ⊂ A with Q(`) = −1. (See Section 2 for precise definitions.) Let Sj be simple random walk starting in A, and let τ = τA = inf{k > 0 : S(k) 6∈ A} be the first time that the walk leaves A. As a slight abuse of notation, we write Sτ = a to mean that the walk exits A through the ordered edge [a−, a+]. If Sτ = a, we associate to the random walk path 1 the continuous path in DA of time duration τ − 2 ending at a ∈ ∂DA. Let

Ia = 1{S[1, τ] ∩ {0, 1} = ∅; Sτ = a} (2) be the indicator of the event that S leaves A at a and never hits the points 0, 1 before leaving A. Let   z h J(S[0,τ− 1 ]) i z 1 R (z, a) = E (−1) 2 I = E Q(S[0, τ − ]) I , A a 2 a

7 |RA(0, a)RA(1, b) − RA(0, b)RA(1, a)| ΦA(a, b) = . H∂A(a, b)

Let qA(z, w) denote the Green’s function in A using the signed weight Q, X qA(z, w) = q(ω). ω:z→w Here the sum is over all walks in A from z to w. From the definition, we can write

∞ X 0 qA(0, 0) = E [Q (S[0, j]) ; Sj = 0; j < τA] , j=0

∞ X 1   qA\{0}(1, 1) = E Q (S[0, j]) ; Sj = 1; j < τA\{0} , j=0 We define 1 q¯ = q (0, 0) q (1, 1). A 4 A A\{0}

We can also interpret 4q ¯A as the random walk loop measure using the weight q of loops in A that intersect {0, 1}. The following is the combinatorial identity:

Theorem 1.3. Let A ∈ A and a, b ∈ ∂A. Then,

P (a, b; A) =q ¯A exp {2m(JA)} ΦA(a, b), (3) where m is the random walk loop measure and JA is the set of unrooted random walk loops ` ⊂ A with Q(`) = −1.

Proof. See Section 3.

It is not hard (see [Law13a, Section 2]) to see that there existsq ¯ ∈ (0, ∞) and u > 0 such that

−u q¯A =q ¯ + O(rA ). (4) The remaining work is to estimate the other two factors on the right-hand side of (3). In Section 4 we compare the random walk loop measure with the Brownian loop measure to prove the following. Our proof does not compute the lattice-dependent constant c1.

Theorem 1.4. There exist 0 < u, c1 < ∞ such that if A ∈ A,

1/4  −u exp{2m(JA)} = c1 rA 1 + O rA . Proof. See Section 4.

The last factor in (3) is estimated using the following result, the proof of which is the main technical hurdle. Let RA(z, a) ΛA(z, a) = . HA(z, a)

8 Theorem 1.5. There exists u > 0, 0 < c2 < ∞ such that as rA → ∞

−1/2  −u ΛA(0, a) = c2rA sin θa + O rA ; (5)

−1/2  −u ΛA(1, a) = c2 rA cos θa + O rA . (6) Proof. See Section 5.

Proof of Theorem 1.2 assuming Theorems 1.3, 1.4, and 1.5. By Theorem 1.5,

|ΛA(0, a)ΛA(1, b) − ΛA(1, a)Λ(0, b)| −1 −1−u = c3rA |sin(θa) cos(θb) − cos(θa) sin(θb)| + O rA −1 h  −1 −ui = c3rA |sin (θa − θb)| 1 + O |sin (θa − θb)| rA .

−1/20 We can then use Theorem 1.1 of [KL05] which implies that if | sin(θa − θb)| > rA , then 2 H (0, a) H (0, b) = sin2(θ − θ ) H (a, b) 1 + O(r−u) . A A π a b ∂A A

But a difference estimate for discrete harmonic functions shows that HA(0, a) = HA(1, a)(1 + −1 O(rA )) and so by combining these estimates we see that

HA(0, a) HA(0, b)  −1  ΦA(a, b) = |ΛA(0, a)ΛA(1, b) − ΛA(1, a)Λ(0, b)| 1 + O(rA ) H∂A(a, b) −1 3 h  −1 −ui = c4rA | sin (θa − θb)| 1 + O |sin (θa − θb)| rA , and consequently Theorem 1.2 follows from Theorem 1.3 combined with (4), Theorem 1.4, and the last equation.

2 Preliminaries

This section sets more notation and collects some background material primarily on loop measures and loop-erased walks.

2.1 Green’s functions 2 Let A (Z be connected. We define the Green’s function X GA(z, w) = p(ω), (z, w) ∈ A × A, ω: ω0=z, ωτ =w, ω⊂A and the q-Green’s function of A by

q X GA(z, w) = q(ω), (z, w) ∈ A × A, ω: ω0=z, ωτ =w, ω⊂A where the sums are over walks starting at z and ending at w and staying in A.

9 2.1.1 Loops and loop measures

2 We will consider oriented rooted loops on Z , that is, walks starting and ending at the same vertex:

` = [`0, `1, . . . , `τ ], `0 = `τ .

(Unless otherwise stated, we will simply say “loop”.) The length of a loop, |`| = τ, is clearly an even integer. The vertex `0 is the emphroot of `. The edge representation of a rooted loop is the sequence of directed edges e(`) = [~e1, . . . ,~eτ ], ~ej = [`j−1, `j].

Note that each vertex in ` occurs in an even number of edges in e(`). Let L∗(A) be the set of discrete 2 q (rooted) loops contained in A and write L∗ = L∗(Z ). The rooted loop measures, m, m∗, associated q with p, q, respectively are the measures on rooted loops and is defined by m∗(`) = m∗(`) = 0 if |`| = 0, and otherwise p(`) m (`) = , ` ∈ L . ∗ |`| ∗ q(`) p(`) Q(`) mq(`) = = , ` ∈ L . ∗ |`| |`| ∗ An unrooted loop [`] is an equivalence class of rooted loops where two rooted loops are equivalent if and only if the edge representation of one can be obtained from that of the other by a cyclic permutation of indices. Clearly the lengths of two equivalent loops are the same, so we can write |[`]| for this number. We write #[`] for the number of equivalent rooted loops in [`]; this is always an integer dividing |[`]|. Moreover, the weights of all equivalent loops in a given class are the same so we may write p([`], q([`]).

We write L(A) for the set of unrooted loops whose representatives are in L∗(A) and L = LZ2 . q q The loop measures, m, m , are the measure on unrooted loops induced by m∗, m∗: X #[`] m([`]) = m (`) = p(`), [`] ∈ L, ∗ |`| `∈[`]

X #[`] mq([`]) = mq(`) = q(`), [`] ∈ L, ∗ |`| `∈[`] and where the sums are over the representatives of [`]. If ` is a rooted loop, we write m(`), mq(`) for m([`]), mq([`]). 2 Suppose that V ⊂ A ⊂ Z , where A is finite. Consider the set of loops contained in A that meet V : L(V ; A) = {[`] ∈ L(A): ` ∩ V 6= ∅}, and define L∗(V ; A) similarly using rooted loops. Let

F (V ; A) = exp {m [L(V ; A)]} = exp {m∗ [L∗(V ; A)]} ,

q q q F (V ; A) = exp {m [L(V ; A)]} = exp {m∗ [L∗(V ; A)]} k j It follows from the first equality that if V = ∪i=1Vi, with the Vi disjoint and Aj = A \ (∪i=1Vi), then q q q q F (V ; A) = F (V1; A)F (V2; A1) ··· F (Vk; Ak−1), (7)

10 and similarly for F (V ; A). In particular, the right-hand side of (7) is independent of the order of q the Vi partitioning V . If V = {z} is a set, we write just L(z; A), L∗(z; A),F (z; A).

2 Lemma 2.1. Suppose that z ∈ A (Z . Then, X G(z, z; A) = p(`) = em[L(z;A)] = F (z; A),

`∈L∗(A): `0=z

X q Gq(z, z; A) = q(`) = em [L(z;A)] = F q(z; A).

`∈L∗(A): `0=z Proof. See [LL10, Lemma 9.3.2]. Although that book only studies positive measures, the proof is entirely algebraic and holds q as well. Let us generalize the proof here. We restrict to the case of mq, but the same holds for m. Suppose L is a set of nontrivial loops rooted at a point z with the property that if `1, `2 ∈ L, then `1 ⊕ `2 ∈ L. Let L1 be the set of elementary loops, that is the set of loops in ` ∈ L that cannot be written as l1 ⊕ l2 with `1, `2 ∈ L. Let Lk denote the set of loops of the form ` = `1 ⊕ · · · ⊕ `k, `j ∈ L1. S∞ −1 Then L = k=1 Lk. Consider the measure λ on loops in L that assigns measure k q(`) of ` ∈ Lk. Then the measure λ viewed as a measure on unrooted loops is the same as the loop measure of unrooted loops [`] that have at least one representative in V . Let L0 denote the set of unrooted loops what have at least one representative in L. Then the measure λ viewed as a measure on unrooted loops is the same as the loop measure of unrooted loops [`] restricted to L0. Indeed, this measure assigns measure (j/k) q([`]) to [`] where j is the number of distinct loops among

`1 ⊕ `2 ⊕ · · · ⊕ `k, `2 ⊕ `3 ⊕ · · · ⊕ `k ⊕ `1, ··· , `k ⊕ `1 ⊕ · · · ⊕ `k−1.

If |q(L1)| < 1, then

∞ ∞ X 1 X 1 mq[L0] = q(L ) = q(L )k = − log [1 − q(L )] . k k k 1 1 k=1 k=1

In our particular case, if L1 is the set of nontrivial loops l starting and ending at z, staying in A and otherwise not visiting z, then it is standard that 1 Gq(z, z; A) = . 1 − q(L1)

2.1.2 Loop-erased walk

Let ω = [ω0, . . . , ωτ ] be a walk with τ < ∞. We say that ω is self-avoiding if i 6= j implies that ωi 6= ωj. The loop-erasure of ω, LE[ω], is a self-avoiding walk defined as follows: • If ω is self-avoiding, set LE[ω] = ω.

• Otherwise, set s0 = max{j 6 τ : ωj = ω0} and let LE[ω]0 = ωs0 ;

11 • For i = 0,..., if si < τ, set si+1 = max{j 6 τ : ωj = ωsi } and let LE[ω]i+1 = ωsi+1. We can now define the “loop-erased q-measure“ of walks staying in A: X qˆ(η; A) := q(ω) = q(η)F q(η; A) (8) ω⊂A: LE[ω]=η

Note that this quantity is zero if η is not self-avoiding. The second identity in (8) is proved in [LL10, Proposition 9.5.1] by observing that one can write any walk ω as a concatenation of the loops erased by the loop-erasing algorithm and the self-avoiding segments of LE[ω] “between” the loops, and then using (7) and Lemma 2.1. Again, although [LL10] deals with positive weights, the proof is equally valid the weight q.

2.2 Brownian loop measure The random walk loop measure m has a conformally invariant scaling limit, the Brownian loop measure µ. It is a sigma finite measure on equivalence classes of continuous loops ω : [0, tω] → C, ω(0) = ω(tω), with the equivalence relation given by ω1 ∼ ω2 if there is s such that ω1(t) =

ω2(t + s) (with addition modulo tω2 ); see [LW04]. One can construct µ via the Brownian bubble bub measure µ : a bubble in a domain D, rooted at a ∈ ∂D, is a continuous function ω : [0, tω] → C with ω(0) = ω(tω) = a ∈ ∂D and ω(0, tω) ⊂ D. The bubble measure is conformally covariant so it is enough to specify the scaling rule and give the definition in one reference domain, say D. Let 1 1 − |z|2 h (z, a) = D 2π |z − a|2

z,a be the Poisson kernel of D. Let P [·] be the law of an h-process derived from Brownian motion and the harmonic function hD(z, a) (see below); informally, this h-process is a Brownian motion from z conditioned to exit D at a. We define µbub(1)[A] = π lim −1h (1 − , 1)P1−,1(A), (9) D →0 D (The π factor is present to match the notion of [LW04] and is chosen so that the measure of bubbles in H rooted at 0 that intersect the unit circle equals 1.) Suppose f : D → D is a conformal map and that ∂D is locally analytic at f(1). Then if we write

bub bub f ◦ µD (1)[A] := µD (1)[{ω : f ◦ ω ∈ A}], we see that (9) and the covariance rule for the Poisson kernel hD imply bub 0 2 bub f ◦ µD (1) = |f (1)| µD (f(1)). (10) We can now define the Brownian loop measure restricted to loops in D as the measure on unrooted loops induced by Z 2π Z 1 1 bub iθ µ = µrD (re )rdrdθ. (11) π 0 0 The Brownian loop measure in other domains can then be defined by conformal invariance. The next lemma makes precise that the random walk and Brownian loop measures are close on large enough scales.

12 Lemma 2.2. There exist constants θ > 0 and c1 < ∞ and for all n sufficiently large a coupling of the random walk and Brownian loop measures, m and µ, respectively, in which the following holds. There is a set E whose complement has measure at most e−θn and on E we have that all pairs of loops (γ, ω) (γ random walk loop and ω Brownian loop) with

n(1−2θ) n(1−2θ) diam γ > e or diam ω > e satisfy ||γ − ω|| 6 c1n, where ||γ − ω|| = infα ||γ ◦ α − ω||∞ with the infimum taken over increasing reparameterizations. This result can be derived from the main theorem of [LT07]. However, let us sketch the argu- ment. Another way to construct the Brownian loop measure is by the following rooted measure. Suppose γ : [0, tγ] → C is a loop, that is, a continuous function with γ(0) = tγ. We can describe any loop γ as a triple (z, tγ, γ˜) where z ∈ C, tγ > 0 andγ ˜[0, tγ] → C is a loop withγ ˜(0) =γ ˜(tγ) = 0. The loop γ is obtained from (z, tγ, γ˜) by translation. We put consider the measure on (z, tγ, γ˜) given by −2 area × (2πt) dt × (bridget) (12) where bridget means the probability measure associated to two-dimensional Brownian motions −2 −1 Bt, 0 6 s 6 t conditioned so that B0 = Bt = 0. The factor (2πt) can be considered as t pt(0, 0). where pt is the transition kernel for a two-dimensional Brownian motion. This measure, considered as a measure on unrooted loops, is the same as the measure

Z 2π Z ∞ 1 bub iθ µ = µrD (re )rdrdθ. π 0 0 The expression for µ associates to each unrooted loop the rooted loop obtained by choosing the point farthest from the origin. The expression (12) chooses the root using the uniform distribution on [0, tγ]. Similarly, a rooted random walk loop can be written as (z, 2n, l) where l is a loop with l(0) = 0 and |l| = 2n. We choose the point by

−1 (counting measure) × (2n) P{S2n = 0} × (bridgen).

Here Sn is a simple random walk starting at the origin, and bridgen denotes the probability measure −1 −2 on [S0,S1,...,S2n] conditioned that S2n = 0. Using the relation P{S2n = 0} = (πn) + O(n ), we can now see our coupling of the two components. For the first component, the root, we couple Brownian loops rooted at Sz with random walk loops rooted at z. We couple Brownian loops with 1 1 time duration n − 2 6 tγ < n + 2 with random walk loops of time duration 2n. Then we use a version of the KMT coupling (see Theorem 2.4) of the random walk and Brownian loops to couple the paths. This coupling has the correct properties.

2.3 Poisson kernels We collect here some basic definitions and facts on continuous Brownian Poisson kernels and discrete random walk Poisson kernels, that is, hitting probabilities corresponding to the weight p as well as the closely related Green’s functions.

13 For a domain D ⊂ C if w ∈ D, z ∈ ∂D, and ∂D is locally analytic at z, the Poisson kernel hD(w, z) is the density of harmonic measure with respect to Lebesgue measure. In particular, for any piecewise locally analytic arc F ⊂ ∂D, Z Pw(B(T ) ∈ F ) = h(w, z)d|z|. F If w, z, ∈ ∂D and ∂D is locally analytic at both w and z, then we can define the excursion Poisson kernel −1 h∂D(w, z) = lim  hD(w + nw, z), →0+ where nw is the unit vector normal to ∂D at w, pointing into D. In this paper, we will also need to define the excursion Poisson kernel at points where ∂D is not locally analytic, namely at the tip of the slit of slit domains. The slits we will consider will be the positive real half-line and so the tip will be the origin. Since at such points, the derivative grows like the inverse of the square root of the distance to the tip, we define −1/2 h∂D(0, z) = lim  hD(−, z). (13) →0+ Similar definitions hold in the discrete setting. Let A ∈ A be given. Another way to define the Green’s function is to let Sj be a simple random walk in A, T = TA = min{j > 0 : Sj 6∈ A}, and T −1  z X GA(z, w) = E  1{Sj = w} , z, w ∈ A. j=0

It is known [KL05, Theorem 1.2] that there exists a lattice dependent constant C0 and c < ∞ such that 2 log rA GA(0, 0) − log rA − C0 c . π 6 1/3 rA For our purposes, it will suffice to use 2 G (0, 0) = log r + O(1), A π A where, as before, rA is the conformal radius of DA. We will need the following result. −u Theorem 2.3. There exists u > 0 such that if z ∈ A with |f(z)| 6 1 − rA , then 2 1 − |f(z)|  −u HA(z, a) = HA(0, a) 1 + O r . |f(z) − ei2θa |2 A and hence, hA(z, a) HA(z, a)  −u = 1 + O rA . (14) hA(0, a) HA(z, 0) −1/20 Moreover, if |θa − θb| > rA ,

π HA(0, a) HA(0, b)  −u H∂A(a, b) = 2 1 + O rA . (15) 2 sin (θa − θb) Proof. This follows from [KL05, (40)–(41)]. An explicit u can be deduced from these estimates, but it is small and not optimal so we will not write it.

14 2.4 KMT coupling We will use the Koml´os,Major, and Tusn´ady[KMT] coupling of random walk and Brownian motion. For a proof of the one-dimensional case, see [KMT76] or [LL10] and the two-dimensional case follows using a standard trick [LL10, Theorem 7.6.1].

Theorem 2.4. There exists a coupling of planar Brownian motion B and two-dimensional simple random walk S with B0 = S0, and a constant c > 0 such that for every λ > 0, every n ∈ R+,   −λ P sup |S2t − Bt| > c(λ + 1) log n 6 cn . 06t6n∨Tn∨τn where Tn = min{t : |S2n| > n}, τn = min{t : |Bt| > n} and ρn = n ∨ Tn ∨ τn.

3 The combinatorial identity: proof of Theorem 1.3

This section states and proves Theorem 3.1 which is a more general version of Theorem 1.3. For the statement of the theorem some more notation is needed. 2 Fix A (Z containing 0 and such that D = DA contains w0 = 1/2 − i/2. Recall the definition −1 of J, Q and q from Section 1.4 and that β = fA ([0, 1]) runs from w0 to ∂DA in DA. We will assume that rA is sufficiently large so that [0, 1] ∩ β = ∅. Let λ = [x0, . . . , xk] ⊂ A be a self-avoiding walk (SAW) containing the ordered edge [0, 1] wtih dist(0, ∂D) > 2 diam λ. Given λ write R λ = [xk, . . . , x0] R for its time-reversal. Note that λ contains the ordered edge [1, 0]. For given a = [a−, a+], b = [b−, b+] ⊂ ∂eA we make the following definitions.

+ + • Let WSAW = WSAW (a, b; λ, A) be the set of SAWs from a to b in A that contain the walk + 1 2 1 2 λ. That is, any η ∈ WSAW can be written as η ⊕ λ ⊕ η , where η , η are SAWs connecting a with x0 and xk with b, respectively.

− + R • Let WSAW = WSAW (a, b; λ ,A) be the set of SAWs from a to b in A that contain the reversal of λ.

+ − • Let WSAW = WSAW(a, b; λ, A) = WSAW(λ) ∪ WSAW(λ) . For topological reasons (see [Law13a] for a detailed argument), every self-avoiding path η from a to b traversing the ordered edge [0, 1] has the same value of Q(η). Moreover, if η0 is another SAW from a to b traversing [1, 0], then Q(η0) = −Q(η). Without loss of generality, we will assume that a, b are labelled in such a way that

+ − η ∈ WSAW ⇒ Q(η) = 1; η ∈ WSAW(λ) ⇒ Q(η) = −1.

Recall that JA is the set of unrooted random walk loops in A with odd winding number about w0.

15 Set K = A \ λ and let

∆K (x0 → a, xk → b) = H∂K (x0, a)H∂K (xk, b) − H∂K (x0, a)H∂K (xk, b) q q q q q ∆K (x0 → a, xk → b) = H∂K (x0, a)H∂K (xk, b) − H∂K (x0, b)H∂K (xk, a), where X q X H∂K (x, a) = p(ω),H∂K (x, a) = q(ω), ω:x→a ω:x→a ω⊂K ω⊂K are the boundary Poisson kernels with the sums taken over walks started from the vertex x, taking the first step into K = A \ λ and then exiting K using the edge a ∈ ∂eA. Theorem 3.1. Under the assumptions above,

X 2m[JA] q q pˆ(η) = p(λ)e F (λ; A) |∆ (x0 → a, xk → b)| , (16)

η∈WSAW q q where ∆ = ∆A\λ. In fact, if ∆ = ∆A\λ, then

X p(λ)h i pˆ(η) = e2m[JA]F q(λ; A)|∆q (x → a, x → b) | + F (λ; A)∆ (x → a, x → b) 2 0 k 0 k + η∈WSAW and X p(λ)h i pˆ(η) = e2m[JA]F q(λ; A)|∆q (x → a, x → b) | − F (λ; A)∆ (x → a, x → b) . 2 0 k 0 k − η∈WSAW We will use this theorem only in the special case when λ = [0, 1] where the theorem gives X 1 pˆ(η) = F q([0, 1]; A) e2m[JA] |R (0, a)R (1, b) − R (0, b)R (1, a)| . 4 A A A A η∈WSAW

If we divide both sides by H∂A(a, b) we get (3). Before proving Theorem 3.1 we need a lemma.

Lemma 3.2. Let η : a→b, a, b ∈ ∂eA, be a SAW in A containing the (unordered) edge [0, 1]. Then q F (η; A) = F (η; A) exp{−2m[JA]}, where JA is the set of unrooted loops in A with odd winding number about w0.

Proof. Let wind(`) denote the winding number of ` about w0. Then the definition of q implies that mq ({` ⊂ A : ` ∩ η 6= ∅}) = m ({` ⊂ A : ` ∩ η 6= ∅, wind(`) even}) − m ({` ⊂ A : ` ∩ η 6= ∅, wind(`) odd}) = m ({` ⊂ A : ` ∩ η 6= ∅}) − 2m ({` ⊂ A : ` ∩ η 6= ∅, wind(`) odd}) .

But any loop with odd winding number must separate w0 from ∂A, and so intersect every SAW η : a→b containing [0, 1]. This implies that mq [{` ⊂ A : ` ∩ η 6= ∅}] = m [{` ⊂ A : ` ∩ η 6= ∅}] − 2m [{` ⊂ A : wind(`) odd}] . By exponentiating both sides we get the lemma.

16 P P Proof of Theorem 3.1. The idea is to write the sums η∈W+ pˆ(η) and η∈W− pˆ(η) in terms of both random walk and q-random walk quantities via the formulas (see (8) and Lemma 3.2)

pˆ(η) = p(η)F (η; A),F (η; A) = e2m(JA)F q(η; A), and the facts that p(η) = ±q(η), η ∈ W±. After resummation, when we add and subtract the resulting expressions, a determinant identity due to Fomin (see Section 6 of [Fom01] or [LL10], Proposition 9.6.2) can be used to write the expressions in terms of random walk determinants ∆ and ∆q that do not involve loop-erased walk quantities. q q Now we turn to the details. We fix λ and write K = A \ λ, ∆ = ∆k, ∆ = ∆K . First observe that any η ∈ W+ can be written as

η = (η1)R ⊕ λ ⊕ η2,

1 2 where η , η are mutually nonintersecting SAWs in K connecting x0 with a and xk with b, respec- tively. Note that any loop intersecting η either intersects λ ⊂ η or it does not. Consequently, by (7) and (8), we can write

pˆ(η) = p(η)F (η; A) = p(η)F (λ; A)F (η; A \ λ).

Using this and the above decomposition, we see that X X pˆ(η) = p(η)F (η; A) η∈W+ η∈W+ X = p(λ)F (λ; A) p(η1)p(η2)F (η1 ∪ η2; K), (17) 1 η :x0→a 2 η :xk→b η1∩η2=∅

1 2 where the sum is over all pairs of nonintersecting SAWs η : x0→a and η : xk→b in K. Similarly, any η ∈ W− can be decomposed η = (η2)R ⊕ (λ)R ⊕ η1, 2 1 where η : xk→a and η : x0→b are nonintersecting SAWs in K. We see that X X pˆ(η) = p(λ)F (λ; A) p(η1)p(η2)F (η1 ∪ η2; K). (18) − 1 η∈W η :x0→b 2 η :xk→a η1∩η2=∅

Let us now consider the sum on the right-hand side of (17). Then using (7) we have

X 1 2 1 2 X 1 2 p(η )p(η )F (η ∪ η ; K) = p(η )F (η1; K)p(η )F (η2; K \ η1) 1 1 η :x0→a η :x0→a 2 2 η :xk→b η :xk→b η1∩η2=∅ η1∩η2=∅ X X = p(ω1)p(ω2), 1 2 ω :x0→a ω :xk→b ω2∩LE[ω1]=∅

17 1 2 where ω : x0 → a and ω : xk → b are SAWs in K. An identical argument proves the corresponding identity (interchanging x0 and xk) starting from the sum in the right-hand side of (18). Fomin’s identity now implies that we can express the difference of the results using a determinant: X X X X p(ω1)p(ω2) − p(ω1)p(ω2) 1 2 1 2 ω :x0→a ω :xk→b ω :xk→a ω :x0→b ω2∩LE[ω1]=∅ ω2∩LE[ω1]=∅ X X = p(ω1)p(ω2) − p(ω1)p(ω2) 1 1 ω :x0→a ω :xk→a 2 2 ω :xk→b ω :x0→b

= H∂K (x0, a)H∂K (xk, b) − H∂K (xk, a)H∂K (x0, b)

= ∆ (x0 → a, xk → b) .

So subtracting (18) from (17) gives X X pˆ(η) − pˆ(η) = p(λ) F (λ; A) ∆ (x0 → a, xk → b) . (19) η∈W+ η∈W− P P We will now express η∈W+ pˆ(η) + η∈W− pˆ(η) in terms ofq ˆ. We first claim that X X pˆ(η) = ΓA qˆ(η), where ΓA = exp{2m[JA]}. (20) η∈W+ η∈W+

To see this, recall thatq ˆ(η) = q(η)F q(η; A). We already noted that q(η) = p(η) for η ∈ W+, so Lemma 3.2 gives (20). Using that p(η) = −q(η) for η ∈ W−, a similar argument shows that X X pˆ(η) = −ΓA qˆ(η). (21) η∈W− η∈W−

Hence, adding (20) and (21) gives   X X X X pˆ(η) + pˆ(η) = ΓA  qˆ(η) − qˆ(η) . η∈W+ η∈W− η∈W+ η∈W−

We can now argue exactly as in the proof of (19) replacing p by q; it makes no difference, and in this way we get

X X q q pˆ(η) + pˆ(η) = q(λ)ΓA F (λ; A) ∆ (x0 → a, xk → b) (22) η∈W+ η∈W− q q = p(λ)ΓA F (λ; A) |∆ (x0 → a, xk → b)| , where the last step uses that the left-hand side of (22) is positive and that |q(λ)| = p(λ). The theorem follows by adding and subtracting (19) and (22).

18 4 Comparison of loop measures: proof of Theorem 1.4

In this section we prove the main estimate on the random walk loop measure by comparing with the corresponding quantity for the Brownian loop measure. We recall some notation. Given A ∈ A, JA is the set of unrooted random walk loops in A with odd winding number about w0. Given a simply connected domain D 3 0 we write J˜D for the set of unrooted Brownian loops with odd winding number about 0. Let ψD : D → D be the conformal 0 0 map with ψD(0) = 0, ψD(0) > 0 and ψ (0) = RD is the conformal radius of D from 0. Given a lattice domain A ⊂ A with corresponding D = DA we define rA = RDA . For R ∈ R, set 2 B(R) = {z ∈ C : |z| < R},B(R) = {z ∈ Z : |z| < R}.

Theorem 4.1. There exists 0 < u, c0, c < ∞ such that the following holds. Let A ∈ A and let DA be the associated simply connected domain. Then

1 −u m (JA) − log rA − c0 c r . 8 6 A

Our proof does not determine the value of c0. Before giving the proof we need a few lemmas. Lemma 4.2. There exists c < ∞ such that if D is a simply connected domain containing the origin with conformal radius r > 5, then

 ˜ ˜  log r −1 µ JD \ J − c r . D 8 6  ˜ ˜  Proof. Let φ(t) = µ JtD \ JD . Note that if 1 < t < s, then conformal invariance of µ implies that φ(s) = φ(t) + φ(s/t), that is, φ(t) = α log t for some α. The constant α can be computed, see Proposition 4.1 of [Law13a],   1 µ J˜ \ J˜ = log t. (23) tD D 8 Distortion estimates imply that there is a universal c such that

−1 −2  −1 −2  ψD B(r − cr ) ⊂ D ⊂ ψD B(r + cr ) Hence, by conformal invariance and (23),   1 µ J˜ \ J˜ = log [r ± c] . D D 8

Given the lemma and the fact that the conformal radius of DA with respect to 0 and w0 are −1 the same up to a multiplicative error of 1 + O(rA ), we see that to prove the lemma, it suffices to prove that there exists c0 such that  ˜ ˜  −u m(JA) = µ JD \ JD + c0 + O(rA ).

k+1 If we let k be the largest integer such that e D ⊂ D, then we can write  ˜ ˜  m(JA) − µ JD \ JD =

19 k ˜ ˜ X [m(JA \ JBk ) − µ(JA \ JBk )] + [m(JBj \JBj−1 ) − µ(JBj \JBj−1 )], j=1 j j j j where B = B(e ), B = B(e ). The Koebe-1/4 theorem implies that that (C \ D) ∩ {|z| = rD} is nonempty. Note that this implies that any loop in D (either random walk or Brownian motion) with odd winding number must intersect rD. The theorem then follows from the following estimate. The phrasing of the lemma is rather technical but the basic idea is that the set of random walk loops and Brownian loops with odd winding number that are in D, intersect rD, and are not contained in a smaller disk δrD are the same. We use the coupling of random walk and Brownian loops and note that the winding number will be the same for the two of them unless one of the following hold.

• The Brownian loop and the random walk loop are not very close (this happens with small measure in the coupling).

• One of the loops is in D but the other is not. If the walks are close this would require the loop that is inside D to be close to the boundary.

• Similarly, one loop can intersect rD or be contained in δrD while the other is not. Again, this requires one of the loops to be near a circle without intersecting the circle.

• The final bad possibility is that the loops are close but they are so close to the origin that the winding numbers can differ.

The probability that the walks are close to the origin is sufficiently large that we cannot just ignore this term. However, if a loop (random walk or Brownian motion) gets close to the origin it is almost equally likely to have an odd number as an even winding number. This allows us to show that the random walk and Brownian loop measures of such loops are close.

Lemma 4.3. There exist u > 0, c < ∞ such that the following holds for all δ > 1/10. Suppose D is a simply connected domain containing the origin and let r = rD. Let µ denote the Brownian loop measure and m the random walk loop measure.

• Let I(`) (I˜(ω)) be the indicator function of the event that a random walk loop ` (resp., a Brownian loop ω) is a subset of D, intersects rD, is not a subset of δrD. • Let U(`) (resp., U˜(ω)) denote the indicator function that the winding number of ` (resp., ω) about w0 (resp., 0) is odd. Then,

˜ ˜ −u µ[I(ω) U(ω)] − m[I(`) U(`)] 6 c r . Here we are writing µ[·] for the integral with respect to µ and similarly for m[·] and ν[·] in the proof below.

Proof. We will be doing estimates for the random walk loop measure. It will be useful to fix an 2 enumeration {z0, z1,...} of Z such that |zj| is nondecreasing and we let Vj = {z0, . . . , zj}. In 1+u particular, z0 = 0. We will first consider loops that do not lie in r D. As already noted, any loop in D with odd winding number must intersect rD.

20 • Claim 1: The random walk and Brownian loop measures of loops that intersect both rD and the circle of radius r1+u is O(r−u).

We will prove this for the random walk measure; the Brownian loop estimate is done similarly. Let L denote the set of random walk loops in D that intersect both rD and the circle of radius r1+u. Then [ ∗ L = Lj , |zj | r , then `s 6= zj, 1 6 s 6 k. Let i 1 Lj denote the set of unrooted loops that have i good representatives. Then Lj is a set of elementary loops as in the proof of Lemma 2.1. In particular, ∗  1  m(Lj ) = − log 1 − p(Lj ) For j = 0, a good loop must start by reaching the disk of radius r without returning to the origin. This has probability O(1/ log r). Given this, the Beurling estimate implies that the probability of reaching the disk of radius r1+u without leaving D is O(r−u/2). Given this, the probability to return to the disk of radius r without leaving D is O(r−u/2). Given this, the expected number of returns 1 −u to the origin before leaving D is O(1). Combining this all, we see that p(L0) = O(r / log r) and ∗ −u hence m(L0) = O(r / log r). Let us now consider elementary loops for j > 0. Let x = |zj|. Using the gambler’s ruin estimate, the probability that the random walk reaches the circle of radius 2x without hitting Vj−1 is O(1/x). Given this, the probability that the walk reaches the circle of radius r without hitting −1 1+u Vj−1 is O(1 ∧ [log(r/x)] ). Given this, as above, the probability to reach the circle of radius r and return to the circle of radius r without leaving D is O(r−u). Given this, the probability that to −1 reach within distance 2x of zj is O(1 ∧ [log(r/x)] ). Given this, the probability that the next visit −1 to Vj is at zj is O(x ). Given this, the expected number of visits to zj before leaving D \ Vj−1 is O(1). Combining all of these estimates, we see that   ∗ 1  1  c 1 m(Lj ) = p(Lj ) 1 + O(p(Lj )) 6 2 u 1 ∧ 2 . (24) |zj| r log (r/|zj|) −u By summing over |zj| 6 r, we get m(L) = O(r ) which establishes Claim 1. We will use the coupling as described in Lemma 2.2. Let us write (ω, `) for a Brownian mo- tion/random walk loop pair. This coupling defines (ω, `) on the same measure space (M, ν) such that • The marginal measure on ω restricted to nontrivial loops is µ. • The marginal measure on ` restricted to nontrivial loops is m. • Let E denote the set of (ω, `) such that at least one of the paths has diameter greater than 1−u 1+u r and is contained in the disk of radius r and such that kω − `k∞ > c0 log r. (Here by k · k∞ we mean the infimum of the supremum norm over all parametrizations.) Then −u ν(E) 6 O(r ).

21 −u Given Claim 1, we see that is suffices to show that |ν(I˜U˜ −IU)| = O(r ). We will write E for 1E. ˜ 1/2 1/2 Let K(`) (resp., K(ω)) denote the indicator function that dist(0, `) 6 r (resp., dist(0, ω) 6 r ). Note that UI − U˜I˜ = UIK − U˜I˜K˜ + U˜I˜(K˜ − K) + (UI − U˜ I˜) (1 − K). ˜ Note that if K = 0 and kω − `k∞ 6 c0 log r, then (for r sufficiently large) U = U. Therefore, ˜ ˜ ˜˜ ˜ ˜ ˜ ˜ ˜ ˜ |ν(I U − IU)| 6 |ν(IUK) − ν(IUK)| + ν[I U |K − K|] + ν[|I − I| (U + U)] + ν(E). Therefore it suffices to establish

• Claim 2: ˜ ˜ ˜ −u ν[(I + I) |K − K|] + ν[|I − I|] 6 O(r ), and

• Claim 3: ˜˜ ˜ −u |ν(IUK) − ν(IUK)| 6 O(r ).

To prove Claim 2, we first note that if the loops are coupled, and IU(`) 6= 0, then diam(`) > δr and simillarly for the Brownian motion loops. Also, if (ω, `) are in the disk of radius r1+u with E(ω, `) = 0 and I 6= I˜, then either the random walk loop or the Brownian loop does one of the following:

• gets within distance c0 log r of ∂D without leaving D

• gets within distance c0 log r of the the circle of radius r without hitting the circle

• gets within distance c0 log r of the circle of radius δr without hitting the circle.

We can estimate the measure of loops that satisfy this as well as diam > δr, by using the rooted loop measure. We will do the random walk case for the first bullet; the Brownian motion case and the other two bullets are done similarly. Let  be any positive number. Note that the root must be in the disk of radius r1+u and hence there are O(r2(1+u)) choices for the root. Using standard large deviations estimates, except for a set of loops of measure o(r−5), these loops must have time duration at least r2−. We consider the probability that a random walk returns to the origin at time 2n after getting within distance O(log r) of ∂D but not leaving D. We claim that this is O(log r/n5/4). To see this, first note that by considering the reversal of the walk, we can see this is bounded above by the probability that the random walk gets with distance O(log r) in the first n steps, stays in ∂D, and then returns to the original point at time 2n. Given that the walk is within distance O(log r), the Beurling estimate implies that the probability of not leaving D in the next n/2 steps is O(log r/n1/4). Given this, the probability of being at the orign at time 2n is O(n−1). The rooted loop measure puts an extra factor of (2n)−1 in. Therefore the rooted loop measure of 2− loops rooted at z of time duration 2n > r that have diameter at least δr, and get within distance 9/4 2− c0 log r of ∂D but stay in D is O(log r/n ). By summing over 2n > r , we see that the rooted loop measure of loops rooted at z that have diameter at least δr, get within distance c0 log r of ∂D −(2−2)5/4 1+u but stay in D is O(r ). If we sum over |z| 6 r , we get that the loop measure (rooted or 2u+ 5 − 1 −1/4 unrooted) of such loops is O(r 2 2 ) 6 O(r ) for u sufficiently small. This gives the upper bound on ν[|I˜− I|].

22 ˜ ˜ Similarly, if E(ω, `) = 0 , K(`) 6= K(ω), and I(`) + I(ω) > 1, then

1/2 1/2 r − c0 log r 6 dist(0, `) 6 r + c0 log r. 1/2 ˜ ˜ and diam(`) > r − c0 log r. Therefore ν[(I + I)|K − K| (1 − E)] is bounded above by twice the m measure of the set of loops ` that satisfy these conditions. This can be estimated as in the previous paragraph (or using an unrooted loop measure estimate as in the beginning of this proof). This finishes the proof of Claim 2. For the final claim, first note that ˜ ˜ ˜ ˜ ˜ |ν(IK) − ν(IK)| 6 ν[(I + I) |K − K|] + ν[|I − I|], and hence by Claim 2, |ν(I˜K˜ ) − ν(IK)| = O(r−u). Therefore to prove Claim 3 it suffices to show that 1 |ν(I˜U˜K˜ ) − ν(IUK)| = |ν(I˜K˜ ) − ν(IK)| + O(r−u). (25) 2 Let us consider the set L0 of unrooted random walk loops that lie in D and intersect both the circle of radius r1/2 and the circle of radius δr about the origin. Our first claim is that c m(L0) . (26) 6 log r

2 1/2 To see this, write {z ∈ Z : |z| 6 r } = {z0, z1, z2, . . . , zk} where the points are ordered as above 0 so that |zj| is nondecreasing. In particular, z0 = 0. Let Vj = {z0, . . . , zj−1}. Let Lj denote the set of unrooted loops ` in D such that zj ∈ ` and ` ∩ Vj−1 = ∅. Then

k 0 X 0 m(L ) = m(Lj). j=0

Arguing in the same way as in the proof of (24), we see that c m(L0 ) , 0 6 log r c m(L0 ) , 1 j k. j 6 2 2 6 6 |zj| log r By summing over j, we get (26). 2 ∗ 2 Let A = D ∩ Z ,A = {w ∈ Z : |w| < δr}. For each loop in Lj, choose a rooted representative so that the walk starting at zj does not return to zj before visiting the circle of radius δr. Let ζ ∈ ∂A∗ be the first point visited in ∂A∗ which can be considered as the terminal point of the the initial segment of the path, that is, the walk from zj to ζ. This has the distribution of an ∗ “h-process”, that is, a random walk conditioned to leave A \Vj at ζ. Each time the walk goes from b b+1 1/2 b the circle of radius 2 to the circle of radius 2 where b is an integer with 2r 6 2 6 δr/2, there is a reasonable chance of making a loop about w0. Using standard coupling techniques, we can see that we can couple the paths so that with positive probability (independent of b, r) the paths

23 can be coupled so that the argument of the two paths differs by 2π, that is the winding numbers ∗ have different parity. Therefore, we can couple random walk excursions from zj to ζ in A \ Vj such that, except for an event of probability O(r−u) the winding numbers of coupled paths have different parity. (The exponent u depends on the probability of coupling at each stage for which their is a uniform lower bound.) Therefore, the measure of walks with odd winding number is the same as that of even winding number up to a multiplicative error of 1 + O(r−u). This gives (25) for random walk. The Brownian loop quantity is done similarly, using the representation of the loop measure in terms of bubbles and by coupling the Brownian loops similarly. This establishes Claim 3.

5 Asymptotics of Λ: proof of Theorem 1.5

In this section we study the asymptotics of ΛA(z, a) = RA(z, a)/HA(z, a), as rA → ∞. We recall z z that RA(z, a) = E [Q(S[0, τ])Ia] where S = S is simple random walk from z, Ia is as defined in z (2), and that HA(z, a) = P (Sτ = a) is discrete harmonic measure. Consequently we have

RA(z, a) z,a ΛA(z, a) = = E [Q(S[0, τ])I], z ∈ A, (27) HA(z, a) where I = 1{S[1, τ] ∩ {0, 1} = ∅} and Ez,a denotes expectation with respect to the measure under which S is a simple random walk from z conditioned to exit A at a.

5.1 Continuum functions Given (27) and the fact that simple random walk converges to Brownian motion, we would expect any scaling limit of ΛA(0, a) to be the corresponding quantity with random walk replaced by an h-transformed Brownian motion conditioned to exit D at a ∈ ∂D. Here we describe the quantity in the continuum. We will do the construction in the unit disc D; we can use conformal invariance to define the functions in other simply connected domains. Let β denote the line segment [0, 1) ⊂ D. Let a = e2iθ, 0 < θ < π, and let α = α(a) = {w : w = rei(2θ+π), r ∈ (0, 1)} be the antipodal radius z,a relative to a. Let P be a probability measure under which the process Wt, t ∈ [0,T ], is a Brownian h-process in D started from z conditioned to exit D at a. Here T is the hitting time of ∂D. For each realization of Wt, we let Q equal ±1 depending on whether the path W [0,T ] intersects β an even or an odd number of times. This is a bit imprecise since there are an infinite number of intersections. One way to make it precise by lifting by multi-valued logarithm F (z) = −i log z. The image of β, F (β) is the union of the 2π translates of the positive imaginary axis. If we choose a particular image of z, sayz ˆ, then there is a corresponding image of a,a ˆ, such thata ˆ is on the boundary of the connected component of H \ F (β) that containsz ˆ. Oncez ˆ is given, the h-process ˆ is mapped to an h-process Wt in H conditioned to leave H at F (a) = {aˆ + 2πk : k ∈ Z}. Then Q = 1 if and only if Wˆ exits H at {aˆ + 4πk : k ∈ Z}. Let λ(z; a) = Ez,a[Q].

Remark. We can also consider an “observable” λˆ living on the Riemann surface given by the branched two-cover of D with branch cut β. We lift the h-process in D so that it starts on the

24 two-cover and the observable is the expectation of the giving +1 if the h-process reaches the boundary on the top sheet and −1 if it reaches it on the bottom sheet. Then λˆ only changes sign when evaluated at the two different points in the fiber of z ∈ D and so could in physics language be called a “spinor”. See [CI13] for a similar construction in a related context.

Note that symmetry implies that λ(z; a) = 0 for z ∈ α. Let τ = inf{t : Wt ∈ α}. If τ > t, then the value of Q is determined: it is +1 if z and a are in the same component of D \ (α ∪ β) and −1 if they are in different components. Therefore,

h (z, a) |λ(z; a)| = | Ez,a[Q; τ > T ]| = Pz,a (W [0,T ] ∩ α = ∅) = D\α . hD(z, a)

Lemma 5.1. If 0 6 θ < π, then as  ↓ 0,

1/2 λ(−; a) = 2  sin θa + O().

Proof. Let D = D \ [0, 1),U = D \ f(α). The map √ 2 z 1 − z φ(z) = , φ0(z) = √ z + 1 z (z + 1)2 is a conformal transformation of D onto the upper half plane H with 1 sin θ φ(e2iθ) = , |φ0(e2iθ)| = , φ(e2iµ) = 2 1/2 eiµ [1 + O()]. cos θ 2 cos2 θ The scaling rule for the Poisson kernel implies that

|φ0(ei2θ)| Im[φ(z)] π h (z, ei2θ) = π |φ0(ei2θ)| h (φ(z), φ(ei2θ)) = . D H [φ(ei2θ) − Re φ(z)]2 + [Im f(z)]2

In particular, i2µ i2θ 1/2 π hD(e , e ) =  sin θ + O().

For the simply connected domain DA and with a marked boundary point a ∈ ∂DA, we define

λA(z; a) = λ(fA(z); fA(a)), z ∈ D.

0 where we recall fA : DA → H with fA(w0) = 0, fA(w0) > 0. For later reference, we note that

1/2 −1 1/2 −1 fA(−nA ) = rA nA + O(rA ), and hence 1/2 −1/2 1/4 −1/2 λA(−nA ; a) = 2 rA nA sin θa + O(rA ). (28)

25 5.2 Asymptotics of ΛA(0, a) Our construction will be somewhat neater if we start with the observation in the next lemma.

Lemma 5.2. There exists  > 0 such that the following holds. Suppose D is a simply connected domain containing the origin and f : D → D is a conformal transformation with f(0) = 0. Let S = {x + iy : |x|, |y| < |f 0(0)|} be a square centered at the origin and let γ(t) = f −1(t). Let σ = inf{t : γ(t) ∈ ∂S}. Then γ[σ, 1] ∩ S = ∅.

Proof. This can be proved using standard distortion estimates for conformal maps.

We now set some notations.

2 • Let Un ⊂ Z denote the discrete square centered at 0 of side length 2n. − • Let Un = Un \ [0, . . . , n] be the slit discrete square. − − • Let V = Vn,V = Vn be corresponding continuum domains

− Vn = {x + iy : |x|, |y| < n},Vn = V \ (0, n].

• Let  be a fixed positive number that satisfies Lemma 5.2.

• For every A, let n = br c =  r + O(1) and let U ,U −,V ,V − denote U ,U − ,V ,V − , A A A A A A A nA nA nA nA respectively.

∗ −1 ∗ −1 • Let β = (0, w0] ∪ fA (β). The curve β goes from 0 to fA (1) ∈ ∂DA. By Lemma 5.2 we ∗ ∗ can see that after the curve β hits ∂VA it does not return to VA. We will also write β for ∗ the arc β ∩ VA.

• For z, w ∈ V_A^- \ β^* we define Q^{z,w} = −1 if β^* separates z from w in V_A^- and +1 otherwise.

• We use S_j and B_t, respectively, to denote discrete and continuous paths.
• We use P^{z,a}, E^{z,a} to denote probabilities and expectations with respect to an h-process conditioned to leave the domain at a. We will use this notation for both S_j and B_t; this should not cause confusion. All h-processes in this paper will be those given by boundary points, that is, where the harmonic function is the Poisson kernel. We recall that

P^{z,a}\{(S_0, \ldots, S_k) = (ω_0, \ldots, ω_k)\} = \frac{H_A(ω_k, a)}{H_A(z, a)}\, P^{z}\{(S_0, \ldots, S_k) = (ω_0, \ldots, ω_k)\},   (29)
with a similar formula for the Brownian h-process.
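The Doob h-transform in (29) is easy to simulate. The following is a minimal numerical sketch, not from the paper: it solves the discrete Dirichlet problem for H_A(·, a) on a toy square grid by Jacobi iteration and then samples the conditioned walk step by step. The grid size, the marked boundary vertex a, and the solver are all illustrative choices.

import random

# Toy grid: interior of an (N+1) x (N+1) square; 'a' is the marked boundary vertex.
N = 20
interior = {(x, y) for x in range(1, N) for y in range(1, N)}
boundary = {(x, y) for x in range(N + 1) for y in range(N + 1)} - interior
a = (N, N // 2)

def neighbors(v):
    x, y = v
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

# H(x) = P^x{simple random walk exits the square at a}: discrete harmonic in the
# interior, equal to 1 at a and 0 elsewhere on the boundary.  Jacobi iteration is
# slow but transparent.
H = {v: 0.0 for v in interior | boundary}
H[a] = 1.0
for _ in range(5000):
    H.update({v: sum(H[w] for w in neighbors(v)) / 4.0 for v in interior})

def h_step(v):
    # One step of the h-process: by (29), P(v -> w) is proportional to H(w)
    # over the four neighbors w (boundary vertices other than a have H = 0).
    nbrs = neighbors(v)
    return random.choices(nbrs, weights=[H[w] for w in nbrs])[0]

path = [(N // 2, N // 2)]
while path[-1] in interior:
    path.append(h_step(path[-1]))
print("exit vertex:", path[-1], "path length:", len(path))

The only point being illustrated is that the conditioned walk is again a Markov chain whose transition probabilities are computable from the harmonic function H; nothing here is specific to the slit square.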

Lemma 5.3. We have

Λ_A(0, a) = \sum_{w ∈ ∂U_A} H_{∂U_A^-}(0, w)\, \frac{H_A(w, a)}{H_A(0, a)}\, Q^{0,w}\, Λ_A(w, a).

Proof. We write U, U^- for U_A, U_A^-. Let σ = min{k ≥ 1 : S_k ∈ N ∪ {0}}. Since S_{τ∧σ} ∈ {0, 1} implies that I = 0, using (27) and the strong Markov property applied at the stopping time ρ = τ ∧ σ gives

Λ_A(0, a) = E^{0,a}[Q(S[0, ρ])\, I\, Λ_A(S_ρ, a)]
= E^{0,a}[Q(S[0, ρ])\, Λ_A(S_ρ, a); S_ρ ∈ {2, . . . , n}]
+ E^{0,a}[Q(S[0, ρ])\, Λ_A(S_ρ, a); S_ρ ∈ ∂U].   (30)
Suppose ω is a nearest-neighbor path from 0 to w ∈ {2, . . . , n} that otherwise stays in U^-. Then if ω̄ is the reflection of ω about the real axis we have Q(ω) = −Q(ω̄). Indeed, ω ⊕ ω̄ is a loop rooted at 0 intersecting N exactly once; it is symmetric about the real axis and so has winding number 1 about w_0. Moreover, by (29), ω and ω̄ have the same probability. Since the (reflected) paths can be coupled after the first visit to {2, . . . , n}, we conclude that the first expectation in (30) vanishes.
If there exists a nearest-neighbor path in Ū from 0 to w ∈ ∂U that does not intersect β^* ∪ {0, . . . , n} except at its starting and ending points, then any path from 0 to w that avoids {0, . . . , n} will intersect β^* an even number of times, yielding a value of Q^{0,w} = 1. Similarly, for all other w ∈ ∂U, Q^{0,w} = −1. We then see that

Λ_A(0, a) = E^{0,a}[Q(S[0, ρ])\, Λ_A(S_ρ, a); S_ρ ∈ ∂U]
= \sum_{w ∈ ∂U} P^{0,a}(S_ρ = w)\, Q^{0,w}\, Λ_A(w, a)
= \sum_{w ∈ ∂U} H_{∂U^-}(0, w)\, \frac{H_A(w, a)}{H_A(0, a)}\, Q^{0,w}\, Λ_A(w, a).

Lemma 5.4. If z = z_n = −n^{1/2}, then
λ_A(z; a) = O(n^{-3/4}) + \int_{∂V_A} h_{V_A^-}(z, w)\, \frac{h_{D_A}(w, a)}{h_{D_A}(z, a)}\, Q^{z,w}\, λ_A(w; a)\, |dw|

= O(n^{-3/4}) + \sum_{w ∈ ∂U_A} h_{V_A^-}(z, w)\, \frac{h_{D_A}(w, a)}{h_{D_A}(z, a)}\, Q^{z,w}\, λ_A(w; a).

Proof. The proof of the first expression is similar to that of the last lemma. Note, however, that since the winding is measured around w_0, it is possible that when we concatenate a path from z to the positive real axis with its reflection about the real axis we get a loop whose winding number about w_0 is even. By topology, this can only happen if the path hits η := [0, w_0] ∪ [0, \overline{w_0}] before hitting [0, ∞). By the Beurling estimate, the probability of hitting η before hitting the positive real axis is O(|z|^{-1/2}). Also, the value of λ on η is O(n^{-1/2}). Therefore, this term produces an error of size O(n^{-3/4}), and hence
λ_A(z; a) = O(n^{-3/4}) + \int_{∂V_A} h_{V_A^-}(z, w)\, \frac{h_{D_A}(w, a)}{h_{D_A}(z, a)}\, Q^{z,w}\, λ_A(w; a)\, |dw|.
For the second estimate we first note that the Beurling estimate and the covariance rule for the Poisson kernel show that

\left| h_{V^-}(z, w)\, \frac{h_{D_A}(w, a)}{h_{D_A}(z, a)}\, Q^{z,w}\, λ_A(w; a) \right| ≤ c\, h_{V^-}(z, w) ≤ c' |z|^{1/2} n^{-3/2}, \qquad w ∈ ∂V.

Let E be the set obtained by removing from ∂V its intersection with the six balls of radius n^{1/2} centered at the four corners of V, the point at which the slit meets ∂V, and the point at which β^* meets ∂V. Then, by the last estimate,
\int_{∂V} h_{V^-}(z, w)\, \frac{h_{D_A}(w, a)}{h_{D_A}(z, a)}\, Q^{z,w}\, λ_A(w; a)\, |dw|
= \int_{E} h_{V^-}(z, w)\, \frac{h_{D_A}(w, a)}{h_{D_A}(z, a)}\, Q^{z,w}\, λ_A(w; a)\, |dw| + O(|z|^{1/2} n^{-1}).
Notice that Q^{z,w} is constant on each component of E. Derivative estimates for positive harmonic functions show that if u, v are in the same component of E and |u − v| ≤ 1 then
\frac{h_{D_A}(u, a)}{h_{D_A}(z, a)} = \frac{h_{D_A}(v, a)}{h_{D_A}(z, a)}\, (1 + O(n^{-1}));

Finally, since each point of E is at distance at least n^{1/2} from a corner, one can map to the unit disc and compare the Poisson kernels there (using, e.g., Schwarz reflection and the distortion theorem to compare the derivatives) to see that

h_{V^-}(z, u) = h_{V^-}(z, v)\, (1 + O(n^{-1/2})).
These estimates imply the lemma.

We proceed to show that each of the factors in the expression for Λ_A(0, a) in Lemma 5.3 is close to its continuum counterpart in the decomposition of Lemma 5.4, and we then appeal to Lemma 5.4 to go back to the continuum function. The estimates naturally split into two parts. The first deals with fine asymptotics close to the tip of the slit in the slit square, and the second with what happens between the boundary of the slit square and the boundary of A. We state the results here, but postpone the proofs to the two subsequent subsections. We then combine them to prove Theorem 1.5. We define
H̄_n(0, b) = \frac{H_{∂U_n^-}(0, b)}{\sum_{y ∈ ∂U_n} H_{∂U_n^-}(0, y)}, \qquad b ∈ ∂U_n.
This is the conditional probability that an excursion starting at 0 in U_n^- exits at b, given that it exits at a point of ∂U_n. We also define the analogous quantity for Brownian motion,

h̄_n(−1, b) = \frac{h_{V_n^-}(−1, b)}{\int_{∂V_n} h_{V_n^-}(−1, w)\, |dw|}, \qquad b ∈ ∂V_n.

Theorem 5.5. If b ∈ ∂Un,

H̄_n(0, b) = h̄_n(−1, b) \left[1 + O(n^{-1/20})\right].

Proof. See Section 5.3.

We do not believe that this error term is optimal, but this suffices for our needs and is all that we will prove. We will also need the following corollary.

Corollary 5.6. There exist c, u such that

\sum_{b ∈ ∂U_n} H_{∂U_n^-}(0, b) = c\, n^{-1/2}\, [1 + O(n^{-u})].

Proof. Let
K(n) = \sum_{b ∈ ∂U_n} H_{∂U_n^-}(0, b), \qquad K̃(n) = \int_{∂V_n} h_{V_n^-}(−1, w)\, |dw|,
and note that
K(rn) = K(n) \sum_{b ∈ ∂U_n} H̄_n(0, b)\, H_{U_{rn}^-}(b, ∂U_{rn}),
K̃(rn) = K̃(n) \int_{∂V_n} h̄_n(−1, w)\, h_{V_{rn}^-}(w, ∂V_{rn})\, |dw|.
Using the previous theorem and strong approximation, we can see that for 1 ≤ r ≤ 2,
\frac{K(rn)}{K(n)} = \frac{K̃(rn)}{K̃(n)} + O(n^{-u}).

By direct calculation using conformal mapping,

\frac{K̃(rn)}{K̃(n)} = r^{-1/2} + O(n^{-u}).

Therefore,
\frac{K(rn)}{K(n)} = r^{-1/2} + O(n^{-u}).
Iterating this estimate over dyadic scales (and using the range 1 ≤ r ≤ 2 to handle intermediate values of n) shows that n^{1/2} K(n) converges geometrically to a constant c > 0, which gives the corollary.
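The exponent −1/2 can be traced to the behaviour near the tip of the slit; the following is only an indication of the “direct calculation” referred to above. The square-root map w ↦ w^{1/2} sends the plane slit along the positive real axis onto a half-plane, taking −1 to i and the scale n to the scale n^{1/2}; consequently
K̃(n) = \int_{∂V_n} h_{V_n^-}(−1, w)\, |dw| ≍ P^{-1}\{\,B \text{ exits } V_n^- \text{ through } ∂V_n\,\} ≍ n^{-1/2}
(up to the normalization convention for h), which is consistent with K̃(rn)/K̃(n) = r^{-1/2} + O(n^{-u}).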

Given the corollary we can restate Theorem 5.5 as follows: there exist c > 0, u > 0 such that

H_{∂U_n^-}(0, b) = c\, h_{V_n^-}(−1, b)\, [1 + O(n^{-u})].   (31)
We will also need that

\frac{H_A(w, a)}{H_A(0, a)} = \frac{h_{D_A}(w, a)}{h_{D_A}(0, a)} + O(n^{-u}), \qquad w ∈ ∂U_n^-, \; a ∈ ∂A,   (32)
which is a direct consequence of Theorem 2.3. This handles the first part of the argument. The second part is handled in the next proposition. Note that for w ∈ ∂U_A the quantities Λ_A(w, a) and λ_A(w; a) are “macroscopic” quantities for which one would expect Brownian motion and random walk to give almost the same value.

Proposition 5.7. There exist u > 0, c < ∞ such that if A ∈ 𝒜, a ∈ ∂A, and w ∈ ∂U_A,

|Λ_A(w, a) − λ_A(w; a)| ≤ c\, r_A^{-u}.   (33)

Proof. See Section 5.4.

Proof of Theorem 1.5 assuming Theorem 5.5 and Proposition 5.7. We will first show (5), that is, that there is a constant 0 < c_2 < ∞ such that
Λ_A(0, a) = c_2 r_A^{-1/2} \sin θ_a + O(r_A^{-u}).
We write U, U^-, V, V^-, H̄, h̄ for U_A, U_A^-, V_A, V_A^-, H̄_{n_A}, h̄_{n_A}. Using Lemma 5.3, (31), (32), and (33) we see that there is a constant 0 < c_0 < ∞ such that

Λ_A(0, a) = \sum_{w ∈ ∂U} H_{∂U^-}(0, w)\, \frac{H_A(w, a)}{H_A(0, a)}\, Q^{0,w}\, Λ_A(w, a)

= c_0 \sum_{w ∈ ∂U} h_{V^-}(−1, w)\, \frac{h_{D_A}(w, a)}{h_{D_A}(0, a)}\, Q^{0,w}\, λ_A(w; a)\, (1 + O(r_A^{-u})).
A simple argument, using conformal mapping say, shows that

h_{V^-}(−n_A^{1/2}, w) = n_A^{1/4}\, h_{V^-}(−1, w)\, [1 + O(n^{-1/4})].
We now use Lemma 5.4 and (28) (noting that Q^{0,w} = Q^{z,w} for w ∈ ∂U_A, since β^* does not separate 0 from z = −n_A^{1/2} in V_A^-) to conclude that

n_A^{1/4} c_0^{-1}\, Λ_A(0, a) = O(r_A^{-3/4}) + λ_A(−n_A^{1/2}; a)\, [1 + O(n^{-u})]
= 2 ε^{1/2} n_A^{-1/4} \sin θ_a + O(n^{-u}).

Since n_A = ε r_A + O(1), we get (5).
We will give a symmetry argument here to deduce (6) from (5). Suppose we replace J_t with
J̃_t = \left\lfloor \frac{Θ_t + π}{2π} \right\rfloor − \left\lfloor \frac{Θ_0 + π}{2π} \right\rfloor.
In other words, we place the discontinuity of the argument at f_A^{-1}((−1, 0]) rather than at f_A^{-1}([0, 1)). Then we can see that if S_τ = a, then Q̃(ω[0, τ]) = ±Q(ω[0, τ]) with the minus sign appearing if and only if π/2 ≤ θ_a < π.
Now given A, consider its reflection in the line {Re(z) = 1/2}, that is, let ρ(z) = 1 − z̄ and A' = ρ(A). Let a' = ρ(a) and define θ' by f_{A'}(a') = e^{2iθ'}. Note that ρ(D_{A'}) = D_A, f_{A'}(z) = −\overline{f_A(ρ(z))}, f_A^{-1}([0, 1)) = ρ(f_{A'}^{-1}((−1, 0])), and

f_A(a) = −\overline{f_{A'}(a')} = −e^{-2iθ'} = \exp\left\{2i\left(\tfrac{π}{2} − θ'\right)\right\}.
In other words, θ_a = \tfrac{π}{2} − θ' (mod π). If θ_a, θ' ∈ [0, π), then θ_a = \tfrac{π}{2} − θ' if 0 ≤ θ' ≤ \tfrac{π}{2} and θ_a = \tfrac{3π}{2} − θ' if \tfrac{π}{2} < θ' < π, and hence
\cos θ_a = \begin{cases} \sin θ', & 0 ≤ θ' ≤ π/2, \\ −\sin θ', & π/2 < θ' < π. \end{cases}
From the previous paragraph and (5), we see that

Λ(1, a) = Λ_{A'}(0, a') = c_2 r_A^{-1/2} \sin θ' + O(r_A^{-u}), \qquad 0 ≤ θ' ≤ \tfrac{π}{2},
Λ(1, a) = −Λ_{A'}(0, a') = c_2 r_A^{-1/2}\, (−\sin θ') + O(r_A^{-u}), \qquad \tfrac{π}{2} < θ' < π.
In both cases the right-hand side equals c_2 r_A^{-1/2} \cos θ_a + O(r_A^{-u}), which gives (6).

5.3 Poisson kernel convergence in the slit square: Proof of Theorem 5.5

The rate of convergence of the discrete Poisson kernel to the continuous kernel is very fast in the case of rectangles aligned with the coordinate axes.

Lemma 5.8. There exists c < ∞ such that if n/10 ≤ m ≤ 10n and

A = A(n, m) = {j + ik : 1 ≤ j ≤ n − 1, 1 ≤ k ≤ m − 1}, \qquad R = R(n, m) = {x + iy : 0 < x < n, 0 < y < m},
then
\left| h_{∂R}(ik, n + ik') − 4 H_{∂A}(ik, n + ik') \right| ≤ c\, \frac{\min\{k', m − k'\}}{n^4}.

Proof. Let us assume that k ≤ m/2 (symmetry handles the case k > m/2) and write w = n + ik', d = min{k', m − k'}. Using separation of variables (see [LL10, Chapter 8] for more details), one can find h_R(1 + ik, w) exactly as an infinite series

h_R(1 + ik, w) = \frac{2}{m} \sum_{l=1}^{∞} \frac{\sinh(lπ/m)}{\sinh(lπn/m)}\, \sin\left(\frac{lkπ}{m}\right) \sin\left(\frac{lk'π}{m}\right).
Similarly, one can find the discrete Poisson kernel by separation of variables as a finite Fourier series; more specifically,

H_A(1 + ik, w) = \frac{2}{m} \sum_{l=1}^{m−1} \frac{\sinh(α_l π/n)}{\sinh(α_l π)}\, \sin\left(\frac{lkπ}{m}\right) \sin\left(\frac{lk'π}{m}\right),
where α_l is the solution to
\cosh\left(\frac{α_l π}{n}\right) + \cos\left(\frac{lπ}{m}\right) = 2.
Note that
α_l = \frac{ln}{m} \left[1 + O(l^2/n^2)\right].   (34)
The terms in both sums decay exponentially. Using this and the inequality |\sin x| ≤ |x|, we can see that there exists c < ∞ such that

|h_R(1 + ik, w) − H_A(1 + ik, w)| ≤ c\left[\frac{1}{n^6} + \frac{kd}{n^3} \sum_{l ≤ c\log n} l^2 \left|\frac{\sinh(lπ/m)}{\sinh(lπn/m)} − \frac{\sinh(α_l π/n)}{\sinh(α_l π)}\right|\right].
Using (34) we can see that for l ≤ c \log n,
\frac{\sinh(α_l π/n)}{\sinh(α_l π)} = \frac{\sinh(lπ/m)}{\sinh(lπn/m)} \left[1 + O(l^3/n^2)\right],
and hence the sum is O(n^{-3}), giving
|h_R(1 + ik, w) − H_A(1 + ik, w)| ≤ \frac{c\, kd}{n^6}.

Note that
H_{∂A}(ik, w) = \frac{1}{4} H_A(1 + ik, w) = \frac{1}{4} h_R(1 + ik, w) + O(n^{-4}).

We can extend the function h(z) = h_R(z, w) to {x + iy : −n < x < n, −m < y < m} by Schwarz reflection. Note that
|h(z)| ≤ c\, d/n^2, \qquad |z − ik| ≤ \min\{m, n\}/4.
Since \min\{m, n\} ≍ n, estimates for derivatives of harmonic functions then imply that |∂_{xx} h(x + ik)| ≤ c\, d/n^4 for 0 ≤ x ≤ 1. Using a Taylor expansion and these estimates, we see that
h_R(1 + ik, w) = h_{∂R}(ik, w) + O(d/n^4).
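The two series are easy to evaluate numerically. The following minimal check is not from the paper; the sizes, the evaluation points, and the truncation of the continuum series are arbitrary choices (the truncation is harmless since the terms decay exponentially). It simply illustrates the agreement asserted in Lemma 5.8.

import numpy as np

n, m = 40, 30          # rectangle dimensions with n/10 <= m <= 10 n
k, kp = 7, 11          # interior point 1 + ik and boundary point w = n + i*k'

# Discrete kernel H_A(1 + ik, w): finite Fourier series with alpha_l solving
# cosh(alpha_l * pi / n) + cos(l * pi / m) = 2.
l = np.arange(1, m)
alpha = (n / np.pi) * np.arccosh(2.0 - np.cos(l * np.pi / m))
H = (2.0 / m) * np.sum(
    np.sinh(alpha * np.pi / n) / np.sinh(alpha * np.pi)
    * np.sin(l * k * np.pi / m) * np.sin(l * kp * np.pi / m))

# Continuum kernel h_R(1 + ik, w): the infinite series, truncated; the terms decay
# like exp(-l*pi*(n-1)/m), so the truncation error is negligible here.
L = np.arange(1, 80)
h = (2.0 / m) * np.sum(
    np.sinh(L * np.pi / m) / np.sinh(L * np.pi * n / m)
    * np.sin(L * k * np.pi / m) * np.sin(L * kp * np.pi / m))

print(H, h, abs(H - h))   # the difference should be very small, of the order bounded above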

Let us abuse notation slightly and write

G_{U_n^-}(0, ζ) = \frac{1}{4} \sum_{|e|=1} G_{U_n^-}(e, ζ).

Lemma 5.9. For every n, there exists cn such that for all |ζ| > n/2,

G_{U_n^-}(0, ζ) = c_n\, g_{V_n^-}(−1, ζ) \left[1 + O(n^{-1/20})\right].

Moreover, the constant cn is uniformly bounded away from 0 and ∞.

Proof. Let F_n : V_n^- → D be the conformal transformation with F_n(−n/8) = 0, F_n(0) = 1. Note that F_n(z) = F(z/n) where F = F_1. It is easy to see that |F_n(ζ) − 1| = |F(ζ/n) − 1| is uniformly bounded away from 0 for |ζ| > n/2. We know that F_n(−1) = 1 − c_0 n^{-1/2} [1 + O(n^{-1})] for some c_0. Using the explicit form of the Poisson kernel in the disc, we can see that

g_{V_n^-}(−1, ζ) = g_D(F_n(−1), F_n(ζ))
= g_D(1 − c_0 n^{-1/2}, F(ζ/n))\, [1 + O(n^{-1})]
= 2π c_0 n^{-1/2}\, h_{∂D}(1, F(ζ/n))\, [1 + O(n^{-1})]
= \frac{2π c_0\, [1 − |F(ζ/n)|^2]}{n^{1/2}\, |F(ζ/n) − 1|^2}\, [1 + O(n^{-1})].

Let z_0 = −⌊n/8⌋. By equation (40) of [KL05], we can see that

G_{U^-}(0, ζ) = G_{U^-}(0, z_0)\, \frac{1 − |F(ζ/n)|^2}{|F(ζ/n) − 1|^2}\, [1 + O(n^{-1/20})].

Here we are using the uniform bound on |F(ζ/n) − 1|. (In [KL05, (40)], the function F is the map that takes D_{U_n^-} to the disc sending z_0 to the origin, but this contributes only a small error.) It is easy to check that G_{U^-}(0, z_0) ≍ n^{-1/2}.

Proof of Theorem 5.5. We write U^- = U_n^-, V^- = V_n^-. Let us first consider the case b = n + ik' with 0 < k' < n. Let m = ⌊3n/4⌋ and let R = R_n be the rectangle

R = {x + iy : m < x < n, 0 < y < n}.

As an abuse of notation we will also write R for R ∩ Z^2. A last-exit decomposition shows that

H_{∂U^-}(0, b) = \sum_{j=1}^{n−1} G_{U_n^-}(0, m + ji)\, H_{∂R}(m + ji, b).

A similar decomposition shows that

h_{V^-}(−1, b) = \frac{1}{2π} \int_0^n g_{V^-}(−1, m + iy)\, h_{∂R}(m + iy, b)\, dy.
This latter equality is perhaps better seen by writing
h_{V^-}(−1, b) = \lim_{ε↓0} \frac{1}{2πε}\, g_{V^-}(−1, b − ε) = \lim_{ε↓0} \frac{1}{2πε}\, g_{V^-}(b − ε, −1),
and using the strong Markov property. Lemma 5.8 shows that

H_{∂R}(m + ji, b) = \frac{h_{∂R}(m + ji, b)}{4} + O(d/n^4),   (35)
where d = min{k', n − k'}. Using, say, the gambler's ruin estimate (or the stronger estimate below), we can see that
\sum_{j=1}^{n−1} G_{U^-}(0, m + ji)
is bounded above by O(n) times the probability that a random walk starting at 0 hits ∂R before leaving U^-. The latter probability is O(n^{-1/2}) and hence

\sum_{j=1}^{n−1} G_{U_n^-}(0, m + ji) = O(n^{1/2}).

Therefore, using (35),

H_{∂U^-}(0, b) = \sum_{j=1}^{n−1} G_{U_n^-}(0, m + ji) \left[ H_{∂R}(m + ji, b) − \frac{1}{4} h_{∂R}(m + ji, b) \right]
+ \frac{1}{4} \sum_{j=1}^{n−1} G_{U_n^-}(0, m + ji)\, h_{∂R}(m + ji, b)
= O(d/n^{5/2}) + \frac{1}{4} \sum_{j=1}^{n−1} G_{U_n^-}(0, m + ji)\, h_{∂R}(m + ji, b).

By the last lemma, we can write

H_{∂U^-}(0, b) = O(d/n^{5/2}) + [1 + O(n^{-1/20})]\, c_n \sum_{j=1}^{n−1} g_{V_n^-}(−1, m + ji)\, h_{∂R}(m + ji, b),

where this c_n is 1/4 times the value in that lemma. The sum above is greater than a constant times d/n^{3/2}. Hence we can write this as

H_{∂U^-}(0, b) = [1 + O(n^{-1/20})]\, c_n \sum_{j=1}^{n−1} g_{V_n^-}(−1, m + ji)\, h_{∂R}(m + ji, b).

Routine estimates allow us to approximate the sum by an integral,
H_{∂U^-}(0, b) = [1 + O(n^{-1/20})]\, c_n \int_0^n g_{V_n^-}(−1, m + iy)\, h_{∂R}(m + iy, b)\, dy
= 2π c_n\, h_{V^-}(−1, b)\, [1 + O(n^{-1/20})].

We assumed that b = n + ik' with k' > 0. There are four other cases. For example, if b = k + ni with k > 0 we replace the rectangle R with the rectangle

{x + iy : −n < x < n, m < y < n}.

We do the other three cases similarly. In all cases we choose z_0 = −⌊n/8⌋. The same argument gives us
H_{∂U^-}(0, b) = 2π c_n\, h_{V^-}(−1, b)\, [1 + O(n^{-1/20})],
with the same value of c_n. From this we conclude that

\frac{H_{∂U^-}(0, b)}{\sum_{y ∈ ∂U} H_{∂U^-}(0, y)} = \frac{h_{V^-}(−1, b)}{\int_{∂V} h_{V^-}(−1, y)\, |dy|} \left[1 + O(n^{-1/20})\right].

5.4 From the Slit Square to ∂A: Proof of Proposition 5.7

To prove Proposition 5.7, we need a version of Theorem 2.4 for the h-processes. We will however use slightly different stopping times. Let ψ = f_A and, if 0 < u < 1, let

τ_u = inf\{t > 0 : d(ψ(S_{2t}), ∂D) ≤ n^{-u}\}, \qquad T_u = inf\{t > 0 : d(ψ(B_t), ∂D) ≤ n^{-u}\},

σ_u = τ_{2u} ∧ T_{2u}. Let τ, T be the times that the paths reach a. Intuitively, knowing that S and B will exit at the same point should only make it easier to find a coupling that ensures that B^a and S^a are close with high probability. Indeed, this is the case. We will use the following result, which does not give the optimal bounds.

Theorem 5.10. There exist u > 0, c < ∞ such that if n ≤ dist(z, ∂A) ≤ n + 1, a ∈ ∂A, and z ∈ A with |z| ≤ 3n/4, then B^a and S^a can be defined on the same probability space with B_0^a = S_0^a = z such that, except for an event of probability at most c n^{-u},

\sup_{0 ≤ t ≤ σ_u} |S_{2t}^a − B_t^a| ≤ c \log n,   (36)

\mathrm{diam}(ψ ∘ S[σ_u, τ]) + \mathrm{diam}(ψ ∘ B[σ_u, T]) ≤ c\, n^{-u}.   (37)

Proof of Theorem 5.10. We use the KMT coupling to couple the paths up to time σ_u such that, except for an event of probability O(n^{-2}), (36) holds, where the distribution is that of random walk and Brownian motion. To convert to the h-process distribution we use (14), which requires u sufficiently small, and derivative estimates for harmonic functions to show that the paths are close except perhaps on an event of probability O(n^{-u}) in the new measure. We can let S_t^a, B_t^a run independently after time σ_u. Standard estimates will show that (37) holds for each of the paths except for an event of probability O(n^{-u}).

Proof of Proposition 5.7. We couple B^a and S^a using the coupling of Theorem 5.10. Let

ξ = inf\{t : |B_t| ≤ n^{1/2}\}, \qquad ζ = inf\{t : |S_{2t}| ≤ n^{1/2}\}.

Let Q^B = Q[B[0, T]], Q^S = Q[S[0, τ]]. Note that Q^B = Q^S I_a provided that ξ > T, ζ > τ, and (36)–(37) hold. Therefore, if z ∈ ∂U,

\left| E^z[Q^B − Q^S I_a] \right| ≤ \left| E^z[Q^B; ξ < T] \right| + \left| E^z[Q^S I_a; ζ < τ] \right| + P^z\{ξ < T < ζ\} + P^z\{ζ < T < ξ\} + O(n^{-u}).
Since |λ_A(z; a)| ≤ c\, n^{-1/4} for |z| ≤ n^{1/2}, we can use the strong Markov property to see that
\left| E^z[Q^B; ξ < T] \right| ≤ \left| E^z[Q^B \mid ξ < T] \right| ≤ O(n^{-1/4}),
and similarly,
\left| E^z[Q^S I_a; ζ < τ] \right| ≤ O(n^{-1/4}).
If (36) and (37) hold, then in order for ξ < T < ζ to occur it must be true that

n^{1/2} < \min_{0 ≤ t ≤ τ} |S_t| ≤ n^{1/2} + c \log n.

Using the strong Markov property and the gambler's ruin estimate, we can see that if |S_t| ≤ n^{1/2} + c \log n, then the probability that the walk reaches distance 2n^{1/2} without entering the disc of radius n^{1/2} is O(\log n / n^{1/2}). (The gambler's ruin estimate is for simple random walk, but this close to the origin the h-process is mutually absolutely continuous with respect to the simple walk.) Therefore,
P^z\{ξ < T < ζ\} ≤ O(n^{-u}),
and similarly for P^z\{ζ < T < ξ\}.

References

[BM10] Martin T. Barlow and Robert Masson. Exponential tail bounds for loop-erased random walk in two dimensions. Ann. Probab., 38(6):2379–2417, 2010.

[BVK13] C. Beneš, F. Johansson Viklund, and M. Kozdron. On the rate of convergence of loop-erased random walk to SLE(2). Comm. Math. Phys., 318(2):307–354, 2013.

[CI13] D. Chelkak and K. Izyurov. Holomorphic spinor observables in the critical Ising model. Comm. Math. Phys., 322:303–332, 2013.

[Fom01] Sergey Fomin. Loop-erased walks and total positivity. Trans. Amer. Math. Soc., 353(9):3563–3583, 2001.

[GPS13] C. Garban, G. Pete, and O. Schramm. Pivotal, cluster and interface measures for critical planar percolation. J. Amer. Math. Soc., 26:939–1024, 2013.

[HS11] Clément Hongler and Stanislav Smirnov. The energy density in the planar Ising model. Preprint, 2011.

[Ken00] Richard Kenyon. The asymptotic determinant of the discrete Laplacian. Acta Math., 2000.

[KL05] M. J. Kozdron and G. F. Lawler. Estimates of random walk exit probabilities and application to loop-erased random walk. Electron. J. Probab., 10:1396–1421, 2005.

[KMT76] J. Komlós, P. Major, and G. Tusnády. An approximation of partial sums of independent RV's, and the sample DF. II. Z. Wahrscheinlichkeitstheorie verw. Gebiete, 34:33–58, 1976.

[KW11] Richard Kenyon and David Wilson. Spanning trees of graphs on surfaces and the intensity of loop-erased random walk on Z^2. Preprint, 2011.

[Law13a] G. F. Lawler. The probability that loop-erased random walk uses a given edge. 2013.

[Law13b] Gregory F. Lawler. Intersections of random walks. Reprint of the 1996 edition. New York, NY: Birkhäuser, 2013.

[LL10] Gregory F. Lawler and Vlada Limic. Random walk: A modern introduction. Cambridge: Cambridge University Press, 2010.

[LR13] G. F. Lawler and M. Rezaei. Minkowski content and natural parametrization for the Schramm-Loewner evolution. 2013.

[LSW00] G. F. Lawler, O. Schramm, and W. Werner. Values of Brownian intersection exponents II: Plane exponents. Acta Math., pages 275–308, 2000.

[LSW04] G. F. Lawler, O. Schramm, and W. Werner. Conformal invariance of planar loop-erased random walks and uniform spanning trees. Ann. Probab., 32:939–995, 2004.

[LT07] G. F. Lawler and J. Trujillo Ferreras. Random walk loop soup. Trans. Amer. Math. Soc., pages 767–787, 2007.

[LW04] Gregory F. Lawler and Wendelin Werner. The Brownian loop soup. Probab. Theory Relat. Fields, 128(4):565–588, 2004.

[Mas09] R. Masson. The growth exponent for planar loop-erased random walk. Electronic Journal of Probability, 14:1012–1073, 2009.
