arXiv:2011.00323v1 [math.PR] 31 Oct 2020

A drainage network with dependence and the Brownian web

Azadeh Parvaneh∗   Afshin Parvardeh†

November 3, 2020

Abstract

We introduce a non-Markovian system of coalescing non-simple random walks on the integer lattice $\mathbb{Z}^d$ in which walkers move upwardly in the $45^\circ$ light cones. At first, we examine the geometry of its graph. We show that, almost surely, the graph consists of just one tree for $d = 2, 3$ and infinitely many disjoint trees for $d \ge 4$. In addition, there is no bi-infinite path in the graph almost surely for $d \ge 2$. Then, we prove that for $d = 2$, under a suitable diffusive scaling, this system converges in distribution to the Brownian web.

Keywords: Directed Spanning Trees, Directed Spanning Forest, Markov Chain, Scaling Limit, Invariance Principle, Brownian Web.

∗ Postal address: Department of Statistics, Faculty of Mathematics and Statistics, University of Isfahan, Isfahan, 81746-73441, Iran. Email: [email protected]
† Postal address: Department of Statistics, Faculty of Mathematics and Statistics, University of Isfahan, Isfahan, 81746-73441, Iran. Email: a.parvardeh@sci.ui.ac.ir

1 Introduction

Random directed spanning trees and their scaling limits have attracted attention in recent years. Interest in these graphs arose because of their connection to drainage networks in geology (see Rodriguez-Iturbe and Rinaldo [16]). For models with vertex set the integer lattice, Howard's model was studied by Gangopadhyay, Roy and Sarkar [14], a more general model was studied by Coletti, Fontes and Valle [7], and a "torch-light" model by Athreya, Roy and Sarkar [4]. Ferrari, Fontes and Wu [11], and Coupier and Tran [9], studied models with vertices being points of a Poisson point process on the continuum space. These models, besides being of intrinsic interest, are also related to the Brownian web.

In this article, we study a slight variant of a random tree/forest model introduced by Athreya, Roy and Sarkar [4]. Consider the integer lattice $\mathbb{Z}^d$ with $d \ge 2$. For $u = (u_1,\dots,u_d) \in \mathbb{Z}^d$ and $k, h \in \mathbb{N}$, let

(i) $m_k(u) = (u_1,\dots,u_{d-1}, u_d + k)$;

(ii) $H(u,k) = \{v = (v_1,\dots,v_d) \in \mathbb{Z}^d : v_d = u_d + k \text{ and } \|v - m_k(u)\|_1 \le k\}$, and as a convention $H(u,0) = \emptyset$;

(iii) $V(u,h) = \{v : v \in H(u,k) \text{ for some } 1 \le k \le h\}$, and as a convention $V(u,0) = \emptyset$;

(iv) $V(u) = \bigcup_{h=1}^{\infty} V(u,h)$.

Figure 1: The fifteen vertices on the top of $u$ make the region $V(u,3)$, and the seven vertices at the top constitute $H(u,3)$.

Now, let $\{U_w : w \in \mathbb{Z}^d\}$ be a collection of i.i.d. uniform $(0,1)$ random variables. Fix $0 < p < 1$ and call a vertex $w$ open if $U_w < p$; let $\mathcal{V} := \{w \in \mathbb{Z}^d : U_w < p\}$ denote the set of open vertices. For $u \in \mathbb{Z}^d$, let $h(u)$ be the almost surely unique vertex obtained by scanning $H(u,1), H(u,2), \dots$ and, at the first level containing an open vertex, picking the open vertex with the smallest uniform value. The random graph $G$ has vertex set $\mathcal{V}$ and edge set $\{\langle u, h(u)\rangle : u \in \mathcal{V}\}$, so that every open vertex has exactly one outgoing edge, directed upwardly.
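As an illustration, the construction can be simulated directly. The following is a minimal sketch, not the authors' code: we assume the edge rule in which $h(u)$ is found by scanning the levels of the light cone above $u$ and taking, at the first level containing an open vertex, the open vertex with the smallest uniform value; we restrict to $d = 2$ and a finite box, and the parameters `p` and `L` are illustrative choices.

```python
import random

random.seed(1)
p, L = 0.3, 40  # illustrative opening probability and box size
U = {(x, y): random.random() for x in range(-L, L + 1) for y in range(0, L + 1)}

def h(u):
    """Assumed edge rule: scan levels k = 1, 2, ... above u; at the first
    level whose width-(2k+1) slab H(u, k) contains an open vertex (U < p),
    return the open vertex with the smallest uniform value."""
    x, y = u
    for k in range(1, L - y + 1):
        level = [(x + j, y + k) for j in range(-k, k + 1) if (x + j, y + k) in U]
        opens = [v for v in level if U[v] < p]
        if opens:
            return min(opens, key=lambda v: U[v])
    return None  # ran out of the finite box

# follow the path started at the origin
path, u = [(0, 0)], (0, 0)
while u is not None:
    u = h(u)
    if u is not None:
        path.append(u)
print(path[:5])
```

Each step of the simulated path stays inside the $45^\circ$ cone: the horizontal displacement is at most the vertical one.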

Theorem 1.1 We have, almost surely,

(i) for $d = 2, 3$, the random graph $G$ is connected and consists of a single tree;

(ii) for $d \ge 4$, the random graph $G$ is disconnected and consists of infinitely many disjoint trees;

(iii) for $d \ge 2$, the random graph $G$ contains no bi-infinite path.

Remark 1.2 The random graph described above differs from that of Athreya, Roy and Sarkar [4] in that the mechanism by which edges are constructed is different. In particular, in two dimensions, the graph we described is a planar graph, while in their model it is not planar almost surely. This distinction carries over to all dimensions, in the sense that while our graph in $d$ dimensions can be embedded in $\mathbb{R}^d$, the same is almost surely not true for the graph constructed in Athreya, Roy and Sarkar [4] (see Proposition 2.1).

Although a priori there is no Markov structure in the graph $G$, in the next section we exhibit a construction of the graph which brings to the fore its Markovian nature. Having established the Markovian structure, the proof of Theorem 1.1 follows, after some modifications, from that in Gangopadhyay, Roy and Sarkar [14].

We also show that the scaling limit of the graph $G$ in two dimensions is the Brownian web. The Brownian web is a collection of coalescing Brownian motions starting from each space-time point in $\mathbb{R}^2$. It was introduced by Arratia [1] to study the asymptotic behaviour of the one-dimensional voter model. He studied a process of coalescing one-dimensional Brownian motions starting from every space-time point in $\mathbb{R} \times \{0\}$, and later [2] he generalised the construction to a process of coalescing Brownian motions starting from every space-time point in $\mathbb{R}^2$. Later, Tóth and Werner [20] described a version of the Brownian web in their study of the true self repelling motion. The Brownian web we describe here is as in Fontes, Isopi, Newman and Ravishankar [12, 13].

Let $\mathbb{R}^2_c$ denote the completion of the space-time plane $\mathbb{R}^2$ with respect to the metric

\[
\rho((x_1,t_1),(x_2,t_2)) := |\tanh(t_1) - \tanh(t_2)| \vee \left|\frac{\tanh(x_1)}{1+|t_1|} - \frac{\tanh(x_2)}{1+|t_2|}\right|. \tag{1}
\]
The topological space $\mathbb{R}^2_c$ can be identified with the continuous image of $[-\infty,\infty]^2$ under a map that identifies the line $[-\infty,\infty] \times \{\infty\}$ with a point $(*,\infty)$, and the line $[-\infty,\infty] \times \{-\infty\}$ with a point $(*,-\infty)$.

A path $\pi$ in $\mathbb{R}^2_c$ with starting time $\sigma_\pi \in [-\infty,\infty]$ is a mapping $\pi : [\sigma_\pi,\infty] \to [-\infty,\infty] \cup \{*\}$ such that $\pi(\infty) = *$ and, when $\sigma_\pi = -\infty$, $\pi(-\infty) = *$. Also, $t \mapsto (\pi(t),t)$ is a continuous map from $[\sigma_\pi,\infty]$ to $(\mathbb{R}^2_c,\rho)$. We then define $\Pi$ to be the space of all paths in $\mathbb{R}^2_c$ with all possible starting times in $[-\infty,\infty]$. The following metric (for $\pi_1, \pi_2 \in \Pi$),

\[
d(\pi_1,\pi_2) := |\tanh(\sigma_{\pi_1}) - \tanh(\sigma_{\pi_2})| \vee \sup_{t \ge \sigma_{\pi_1} \wedge \sigma_{\pi_2}} \left|\frac{\tanh(\pi_1(t \vee \sigma_{\pi_1}))}{1+|t|} - \frac{\tanh(\pi_2(t \vee \sigma_{\pi_2}))}{1+|t|}\right|
\]
makes $\Pi$ a complete, separable metric space. The metric $d$ is slightly different from the original choice in [13], which is somewhat less natural, as explained in Sun and Swart [19]. Convergence in this metric can be described as locally uniform convergence of paths together with convergence of starting times. Let $\mathcal{H}$ be the space of compact subsets of $(\Pi,d)$ equipped with the Hausdorff metric $d_{\mathcal{H}}$ given by

\[
d_{\mathcal{H}}(K_1,K_2) := \sup_{\pi_1 \in K_1} \inf_{\pi_2 \in K_2} d(\pi_1,\pi_2) \vee \sup_{\pi_2 \in K_2} \inf_{\pi_1 \in K_1} d(\pi_1,\pi_2).
\]

$(\mathcal{H}, d_{\mathcal{H}})$ is a complete, separable metric space. Let $\mathcal{B}_{\mathcal{H}}$ be the Borel $\sigma$-algebra on the metric space $(\mathcal{H}, d_{\mathcal{H}})$. The Brownian web $\mathcal{W}$ is characterized as follows (Theorem 2.1 of [13]):

Theorem 1.3 There exists an $(\mathcal{H}, \mathcal{B}_{\mathcal{H}})$ valued random variable $\mathcal{W}$ whose distribution is uniquely determined by the following properties:

(a) for each deterministic point $z \in \mathbb{R}^2$, there is almost surely a unique path $\pi_z \in \mathcal{W}$;

(b) for a finite set of deterministic points $z_1,\dots,z_k \in \mathbb{R}^2$, the collection $(\pi_{z_1},\dots,\pi_{z_k})$ is distributed as coalescing Brownian motions starting from $z_1,\dots,z_k$;

(c) for any countable deterministic dense set $\mathcal{D} \subseteq \mathbb{R}^2$, $\mathcal{W}$ is the closure of $\{\pi_z : z \in \mathcal{D}\}$ in $(\Pi,d)$ almost surely.

Theorem 1.3 shows that the collection $\mathcal{W}$ is almost surely determined by countably many coalescing Brownian motions. An extensive study has been done to understand the properties of the Brownian web (see [20, 13, 19, 5]). Since we need to develop more notation, the statement and the proof of our result regarding the scaling limit (Theorem 4.2) are relegated to Section 4.
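For intuition, the metric $\rho$ and the path metric $d$ can be evaluated numerically. The sketch below is only an approximation for paths with finite starting times: the supremum over $t$ is taken on a finite grid, and `t_max` and `steps` are illustrative truncation parameters, not part of the definition.

```python
import math

def rho(z1, z2):
    """The metric on the compactified space-time plane, at finite points."""
    (x1, t1), (x2, t2) = z1, z2
    return max(abs(math.tanh(t1) - math.tanh(t2)),
               abs(math.tanh(x1) / (1 + abs(t1)) - math.tanh(x2) / (1 + abs(t2))))

def d(pi1, s1, pi2, s2, t_max=50.0, steps=2000):
    """Path metric: pi1, pi2 are callables defined on [s_i, infinity); the
    supremum over t >= s1 ^ s2 is approximated on a finite grid up to t_max."""
    t0 = min(s1, s2)
    sup = 0.0
    for i in range(steps + 1):
        t = t0 + (t_max - t0) * i / steps
        v1 = math.tanh(pi1(max(t, s1))) / (1 + abs(t))
        v2 = math.tanh(pi2(max(t, s2))) / (1 + abs(t))
        sup = max(sup, abs(v1 - v2))
    return max(abs(math.tanh(s1) - math.tanh(s2)), sup)

ident = lambda t: t  # a single path started at time 0
print(d(ident, 0.0, ident, 0.0))  # 0.0: a path is at distance 0 from itself
print(rho((0.0, 0.0), (0.0, 0.0)))  # 0.0
```

The $\tanh$ compactification is what makes far-away times and positions contribute little, so locally uniform convergence plus convergence of starting times is exactly convergence in this metric.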

2 An alternate construction of the graph

First we justify Remark 1.2.

Proposition 2.1 The graph $G$ does not have edges which cross each other almost surely.

Proof. Suppose there exist two distinct edges $\langle x, y\rangle$ and $\langle z, w\rangle$ which cross each other. Without loss of generality assume that $x(d) < y(d)$, $z(d) < w(d)$ and $y(d) \le w(d)$. Since the two edges cross each other, we must have $\{y, w\} \subseteq V(x) \cap V(z)$. Thus, $y(d) = w(d)$ and
\[
h(x) = h(z) = \begin{cases} y & \text{if } U_y < U_w, \\ w & \text{if } U_w < U_y, \end{cases}
\]
so the two edges share an endpoint. This proves the proposition. $\square$

Now we proceed to the construction of the graph, which is based on the construction given in Roy, Saha and Sarkar [17]. The proofs of many of the statements we have regarding the construction follow from slight modifications of the argument in [17] and as such we relegate them to the Appendix. We also state the construction of the graph starting from two vertices; the general case for $k$ vertices follows in a similar fashion.

Fix two vertices $u, v \in \mathbb{Z}^d$ (not necessarily open) with $u(d) = v(d)$. Let $h_0(u) := u$, $h_0(v) := v$, $\Delta_0 := \emptyset$, $r_0 = s_0 = u(d)$ and $\Psi_0 : \Delta_0 \to [0,1]$ the empty function. Take

1. $h_1(u) := h(u)$, $h_1(v) := h(v)$,

2. $r_1 := \min\{h_1(u)(d), h_1(v)(d)\}$, $s_1 := \max\{h_1(u)(d), h_1(v)(d)\}$,

3. $\Delta_1 := \begin{cases} V(h_0(u), s_1) \setminus V(h_0(u), r_1) & \text{if } h_1(u)(d) \ge h_1(v)(d), \\ V(h_0(v), s_1) \setminus V(h_0(v), r_1) & \text{if } h_1(v)(d) \ge h_1(u)(d), \end{cases}$

4. $\Psi_1 : \Delta_1 \to [0,1]$ given by $\Psi_1(w) = U_w$ for $w \in \Delta_1$, where $\Psi_1$ is taken to be the empty function if $\Delta_1 = \emptyset$.

Having defined $h_n(u)$, $h_n(v)$, $r_n$, $s_n$, $\Delta_n$ and $\Psi_n$, we obtain

1. $h_{n+1}(u) := \begin{cases} h(h_n(u)) & \text{if } h_n(u)(d) = r_n, \\ h_n(u) & \text{if } h_n(u)(d) > r_n, \end{cases}$

$h_{n+1}(v) := \begin{cases} h(h_n(v)) & \text{if } h_n(v)(d) = r_n, \\ h_n(v) & \text{if } h_n(v)(d) > r_n, \end{cases}$

2. $r_{n+1} := \min\{h_{n+1}(u)(d), h_{n+1}(v)(d)\}$, $s_{n+1} := \max\{h_{n+1}(u)(d), h_{n+1}(v)(d)\}$,

3. $\Delta_{n+1} := \begin{cases} V(h_n(u), s_{n+1}) \setminus V(h_n(u), r_{n+1}) & \text{if } h_{n+1}(u)(d) \ge h_{n+1}(v)(d), \\ V(h_n(v), s_{n+1}) \setminus V(h_n(v), r_{n+1}) & \text{if } h_{n+1}(v)(d) \ge h_{n+1}(u)(d), \end{cases}$

4. $\Psi_{n+1} : \Delta_{n+1} \to [0,1]$ given by $\Psi_{n+1}(w) = U_w$ for $w \in \Delta_{n+1}$, where $\Psi_{n+1}$ is taken to be the empty function if $\Delta_{n+1} = \emptyset$.

Remark 2.2 Having constructed $\mathcal{Z}_n := (h_n(u), h_n(v), \Delta_n, \Psi_n)$ for $n \ge 0$, we do not have any information about the region $\{y \in \mathbb{Z}^d : y(d) > r_n\}$ except for the trapezoidal region $\Delta_n$. About the trapezoidal region $\Delta_n$, we know that there is at least one vertex $w \in \Delta_n \cap \{y \in \mathbb{Z}^d : y(d) = s_n\}$ with $U_w < p$, and for all vertices $z \in \Delta_n \cap \{y \in \mathbb{Z}^d : y(d) < s_n\}$ we have $U_z \ge p$.

\[
\mathcal{G} := \big\{(u, v, \Delta, \Psi) : u, v \in \mathbb{Z}^d,\ \Delta \in \mathcal{D},\ \text{and } \Psi : \Delta \to [0,1] \text{ is a mapping with } \Psi(w) \ge p \text{ for all } w \in \Delta \text{ with } w(d) < \max\{z(d) : z \in \Delta\},\ \text{and } \Psi(w) < p \text{ for some } w \in \Delta \text{ with } w(d) = \max\{z(d) : z \in \Delta\}\big\}.
\]

When $\Delta_n = \emptyset$, then $h_n(u)(d) = h_n(v)(d)$ and the entire half-space above these two vertices is unexplored. We will exploit this renewal property repeatedly, and so we define $\tau_0 := 0$ and
\[
\tau_l = \tau_l(u,v) := \inf\{n > \tau_{l-1} : \Delta_n = \emptyset\} = \inf\{n > \tau_{l-1} : r_n = s_n\}
\]
for $l \ge 1$. The total number of steps between the $(l-1)$-th and $l$-th renewal of the process is denoted by
\[
\sigma_l = \sigma_l(u,v) := \tau_l - \tau_{l-1},
\]
and the time required for the $l$-th renewal is denoted by
\[
T_l = T_l(u,v) := h_{\tau_l}(u)(d) - u(d) = h_{\tau_l}(v)(d) - v(d).
\]
From the renewal nature of the process we have that $\{\sigma_l : l \ge 1\}$ and $\{T_{l+1} - T_l : l \ge 0\}$ are collections of independent random variables. The next proposition shows that $\sigma_l$ and $T_{l+1} - T_l$ have exponentially decaying tail probabilities.

Proposition 2.4 For all $m \ge 1$, and some positive constants $C_1, C_2, C_3$ and $C_4$, we have $P\{\sigma_l \ge m\} \le C_1 \exp\{-C_2 m\}$ and $P\{T_{l+1} - T_l \ge m\} \le C_3 \exp\{-C_4 m\}$.

Proof. We will prove the proposition for $l = 1$. First let $n$ be such that $\Delta_n \ne \emptyset$. Without loss of generality suppose $h_n(u)(d) < h_n(v)(d)$, so that, by our construction, $h_{n+1}(v) = h_n(v)$ and $h_{n+1}(u)(d) > h_n(u)(d)$. Consider the two sets of points
\[
m_n^-(u) := \big\{\big(h_n(u)(1) - j,\dots,h_n(u)(d-1) - j,\ h_n(u)(d) + j\big) : j > 0\big\},
\]
\[
m_n^+(u) := \big\{\big(h_n(u)(1) + j,\dots,h_n(u)(d-1) + j,\ h_n(u)(d) + j\big) : j > 0\big\}.
\]

J (u) := inf j 1 : w , a geometric (p) random variable. n+1 ≥ j ∈V  ′ Consider an independent collection of i.i.d. random variables Jn : n 1 so that J ′ has a geometric (p) distribution. Define J := max J ({u),J ′ ≥ .} Thus we 1 n+1 { n+1 n+1} have

\[
h_{n+1}(v)(d) - h_n(v)(d) = 0,
\]
\[
h_{n+1}(u)(d) - h_n(u)(d) \le J_{n+1}.
\]
Next suppose $n$ is such that $\Delta_n = \emptyset$. Let $\tilde m_n^-(u)$, $\tilde m_n^+(u)$, $\tilde m_n^-(v)$ and $\tilde m_n^+(v)$ be the lines obtained by joining the points in $m_n^-(u)$, $m_n^+(u)$, $m_n^-(v)$ and $m_n^+(v)$ (which are defined as before) respectively. Put

\[
m_n(u) = \begin{cases} m_n^-(u) & \text{if } \tilde m_n^-(u) \cap \tilde m_n^+(v) = \emptyset, \\ m_n^+(u) & \text{if } \tilde m_n^+(u) \cap \tilde m_n^-(v) = \emptyset, \end{cases}
\]
and define similarly $m_n(v)$ and $J_{n+1}(v)$. Let $J_{n+1} := \max\{J_{n+1}(u), J_{n+1}(v)\}$. Observe that $\max\{h_{n+1}(u)(d) - h_n(u)(d),\ h_{n+1}(v)(d) - h_n(v)(d)\} \le J_{n+1}$, where $\{J_n : n \ge 1\}$ is a sequence of i.i.d. positive integer valued random variables with exponentially decaying tail probabilities.

Now we define auxiliary random variables $M_0 := 0$, $M_{n+1} := \max\{M_n, J_{n+1}\} - 1$ for all $n \ge 0$, and $\tau^M := \inf\{n \ge 1 : M_n = 0\}$. From Lemma 2.6 of Roy, Saha and Sarkar [17] we have
\[
P\{\tau^M \ge m\} \le C_1 \exp\{-C_2 m\} \quad \text{for all } m \ge 1, \tag{2}
\]
where $C_1$ and $C_2$ are positive constants.

We will show that $s_n - r_n \le M_n$ for all $0 \le n \le \sigma_1$, which yields $\sigma_1 \le \tau^M$, and thus, together with (2), completes the proof of the first part of the proposition. First, $s_0 - r_0 = M_0 = 0$. Also, $s_1 - r_1 < J_1$. Now suppose that it holds for some $1 \le n \le \sigma_1$. If $\Delta_{n+1} = \emptyset$, then $0 = s_{n+1} - r_{n+1} \le M_{n+1}$. Otherwise, without loss of generality suppose $h_n(u)(d) < h_n(v)(d)$. If $h_{n+1}(u)(d) < h_{n+1}(v)(d)$, then $s_{n+1} - r_{n+1} \le s_n - r_n - 1 \le M_n - 1 \le M_{n+1}$; if $h_{n+1}(u)(d) > h_{n+1}(v)(d)$, then $s_{n+1} - r_{n+1} < s_{n+1} - r_n = h_{n+1}(u)(d) - h_n(u)(d) \le J_{n+1}$, so that $s_{n+1} - r_{n+1} \le J_{n+1} - 1 \le M_{n+1}$. This completes the induction step.

Finally, $T_1 = r_{\tau_1} - r_0 \le \sum_{n=1}^{\tau_1} J_n$, and so from Lemma 2.7 of [17] we have that $E[e^{\gamma T_1}] < \infty$ for some $\gamma > 0$. Using the Markov inequality we have $P\{T_1 \ge m\} \le E[e^{\gamma T_1}] e^{-\gamma m}$, which completes the proof of the proposition. $\square$
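The auxiliary recursion $M_{n+1} = \max\{M_n, J_{n+1}\} - 1$ and the stopping time $\tau^M$ are easy to simulate. In the sketch below the $J_n$ are taken to be i.i.d. geometric$(p)$ variables, an illustrative stand-in for the dominating variables in the proof; the empirical tail of $\tau^M$ decays quickly, in line with an exponential bound of the form (2).

```python
import random
random.seed(0)

p, trials = 0.5, 20000  # illustrative parameters

def geom(p):
    """Geometric(p) on {1, 2, ...}: number of trials until the first success."""
    k = 1
    while random.random() >= p:
        k += 1
    return k

def tau_M(p):
    """First n >= 1 with M_n = 0, where M_0 = 0, M_{n+1} = max(M_n, J_{n+1}) - 1."""
    M, n = 0, 0
    while True:
        n += 1
        M = max(M, geom(p)) - 1
        if M == 0:
            return n

samples = [tau_M(p) for _ in range(trials)]
for m in (1, 5, 10, 15):
    print(m, sum(s >= m for s in samples) / trials)
```

Plotting $\log P\{\tau^M \ge m\}$ against $m$ for such samples gives a roughly linear decay, which is the content of Lemma 2.6 of [17].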

We now exploit the Markov renewal process $\{(h_{\tau_l}(u), h_{\tau_l}(v)) : l \ge 0\}$ to obtain a martingale for $d = 2$. First note that, by translation invariance, $\{h_{\tau_l}(u) - h_{\tau_l}(v) : l \ge 0\}$ is a Markov chain on $\mathbb{Z}^d$, and, since $h_{\tau_l}(u)(d) = h_{\tau_l}(v)(d)$, taking $\bar w = (w(1),\dots,w(d-1))$ for $w = (w(1),\dots,w(d))$,
\[
\{Z_l(u,v) := \bar h_{\tau_l}(u) - \bar h_{\tau_l}(v) : l \ge 0\} \tag{3}
\]
is a Markov chain on $\mathbb{Z}^{d-1}$ with $\bar 0 \in \mathbb{Z}^{d-1}$ its only absorbing state.

Theorem 2.5 For $d = 2$, the process $\{\bar h_{\tau_l}(u) = h_{\tau_l}(u)(1) : l \ge 0\}$ is a martingale with respect to the filtration $\{\mathcal{F}_{T_l} : l \ge 0\}$, where $\mathcal{F}_t := \sigma(\{U_w : w \in \mathbb{Z}^2,\ w(2) \le u(2) + t\})$.

Proof. By translation invariance, it suffices to prove the theorem for $u = 0$. To simplify notation, throughout this proof, $h_n$ and $\bar h_n$ will denote the vertices $h_n(0)$ and $\bar h_n(0)$ respectively. For any $n \ge 0$ and $m_1, m_2 \in \mathbb{Z}$, noting that $h_{\tau_l}(2) = T_l$, the set

\[
\{h_{\tau_l} = (m_1, m_2)\} \cap \{T_l = n\} = \begin{cases} \emptyset & \text{if } m_2 \ne n, \\ \{h_{\tau_l} = (m_1, n)\} & \text{if } m_2 = n, \end{cases}
\]
belongs to $\mathcal{F}_n$. Thus $h_{\tau_l}$ is $\mathcal{F}_{T_l}$-measurable. Consequently, $\bar h_{\tau_l}$ is also $\mathcal{F}_{T_l}$-measurable. The construction of our model ensures that $|h(w)(1) - w(1)| \le h(w)(2) - w(2)$ for every $w \in \mathbb{Z}^2$, so invoking the random variables introduced in the proof of Proposition 2.4, we have

\[
|\bar h_{\tau_l}| = \Big|\sum_{n=1}^{\tau_l} \big(h_n(1) - h_{n-1}(1)\big)\Big| \le \sum_{n=1}^{\tau_l} J_n,
\]
so Lemma 2.7 of [17] implies that $E[|\bar h_{\tau_l}|] < \infty$ for every $l \ge 0$. Finally, for any $A \in \mathcal{F}_{T_l}$, and taking $\mathcal{G}_n := \mathcal{F}_{h_n(2)}$, we have

\[
E\big[\mathbf{1}(A)\big(\bar h_{\tau_{l+1}} - \bar h_{\tau_l}\big)\big] = E\Big[\mathbf{1}(A) \sum_{i=\tau_l+1}^{\tau_{l+1}} \big(\bar h_i - \bar h_{i-1}\big)\Big]
\]
\[
= \sum_{n=0}^{\infty} \sum_{m=1}^{\infty} E\Big[\mathbf{1}(A)\,\mathbf{1}(\tau_l = n)\,\mathbf{1}(\tau_{l+1} = n+m) \sum_{i=1}^{m} \big(\bar h_{n+i} - \bar h_{n+i-1}\big)\Big]
\]
\[
= \sum_{n=0}^{\infty} \sum_{m=1}^{\infty} E\Big[\mathbf{1}(A)\,\mathbf{1}(\tau_l = n)\,\mathbf{1}(\tau_{l+1} \ge n+m) \big(\bar h_{n+m} - \bar h_{n+m-1}\big)\Big]
\]
\[
= \sum_{n=0}^{\infty} \sum_{m=1}^{\infty} E\Big[E\big[\mathbf{1}(A)\,\mathbf{1}(\tau_l = n)\big(1 - \mathbf{1}(\tau_{l+1} \le n+m-1)\big)\big(\bar h_{n+m} - \bar h_{n+m-1}\big) \,\big|\, \mathcal{G}_{n+m-1}\big]\Big].
\]
Noting that $\mathbf{1}(A)\mathbf{1}(\tau_l = n)[1 - \mathbf{1}(\tau_{l+1} \le n+m-1)]$ is $\mathcal{G}_{n+m-1}$-measurable, that $\bar h_{n+m} - \bar h_{n+m-1}$ is independent of $\mathcal{G}_{n+m-1}$, and that the increment of $\{\bar h_n : n \ge 0\}$ is symmetric, we have, for the summand in the previous expression,

\[
E\Big[E\big[\mathbf{1}(A)\,\mathbf{1}(\tau_l = n)\big(1 - \mathbf{1}(\tau_{l+1} \le n+m-1)\big)\big(\bar h_{n+m} - \bar h_{n+m-1}\big) \,\big|\, \mathcal{G}_{n+m-1}\big]\Big]
\]
\[
= E\Big[\mathbf{1}(A)\,\mathbf{1}(\tau_l = n)\big(1 - \mathbf{1}(\tau_{l+1} \le n+m-1)\big)\,E\big[\bar h_{n+m} - \bar h_{n+m-1}\big]\Big] = 0.
\]
Thus $E\big[\bar h_{\tau_{l+1}} - \bar h_{\tau_l} \,\big|\, \mathcal{F}_{T_l}\big] = 0$ almost surely, which completes the proof of the theorem. $\square$

Corollary 2.6 For $d = 2$, the process $\{Z_l(u,v) : l \ge 0\}$, as defined in (3), is a martingale with respect to the filtration $\{\mathcal{F}_{T_l} : l \ge 0\}$.

3 Proof of Theorem 1.1

The proof of Theorem 1.1 is based on the ideas in Gangopadhyay, Roy and Sarkar [14], and Athreya, Roy and Sarkar [4], and as such we present a brief sketch of the modifications required, relegating the proofs to the Appendix.

For $d = 2$, let $u, v \in \mathcal{V}$ with $u(2) = v(2)$ and, without loss of generality, assume that $u(1) > v(1)$. Since the paths starting from $u$ and $v$ do not cross each other, the martingale $\{Z_l(u,v) = h_{\tau_l}(u)(1) - h_{\tau_l}(v)(1) : l \ge 0\}$ is non-negative. Therefore, by the martingale convergence theorem, there exists a random variable $Z_\infty$ such that $Z_l(u,v) \to Z_\infty$ almost surely as $l \to \infty$. Since the Markov chain $\{Z_l(u,v) : l \ge 0\}$ has $0$ as its only absorbing state, we must have $Z_\infty = 0$ almost surely. Hence, there exists some $t \ge 0$ so that $Z_l(u,v) = 0$ for all $l \ge t$ almost surely.

Next, if $u(2) < v(2)$, by the Borel-Cantelli lemma, with probability 1, we can find two vertices $u', v' \in \mathcal{V}$ so that $u'(2) = v'(2) = v(2)$ and
\[
u'(1) < v(1) - \big(|u(1) - v(1)| + v(2) - u(2)\big),
\]
\[
v'(1) > v(1) + |u(1) - v(1)| + v(2) - u(2).
\]
By the non-crossing property of our paths, the paths starting from $u$ and $v$ have to lie between the two paths starting from $u'$ and $v'$ from time $v(2)$ onwards. Since the paths from $u'$ and $v'$ must meet almost surely, so do all paths sandwiched by them. This completes the proof for $d = 2$.

3.1 Constructing two independent processes

In this subsection, we construct two independent processes that will be used later. Let $u, v \in \mathbb{Z}^d$ be two distinct vertices with $u(d) = v(d)$, and let $\{U^u_w : w \in \mathbb{Z}^d\}$ and $\{U^v_w : w \in \mathbb{Z}^d\}$ be two independent collections of i.i.d. uniform $(0,1)$ random variables. We construct two paths $\{h^{\mathrm{IND}}_n(u) : n \ge 1\}$ from $u$ and $\{h^{\mathrm{IND}}_n(v) : n \ge 1\}$ from $v$, using the collections $\{U^u_w : w \in \mathbb{Z}^d\}$ and $\{U^v_w : w \in \mathbb{Z}^d\}$ respectively. The independence of the two collections of uniform random variables ensures that the two processes $\{h^{\mathrm{IND}}_n(u) : n \ge 1\}$ and $\{h^{\mathrm{IND}}_n(v) : n \ge 1\}$ are independent.

Denoting the $l$-th simultaneous renewal time of the two independent paths by $T^{\mathrm{IND}}_l(u,v)$, note that $\{T^{\mathrm{IND}}_{l+1}(u,v) - T^{\mathrm{IND}}_l(u,v) : l \ge 0\}$ is a sequence of i.i.d. positive integer valued random variables. For any simultaneous regeneration time $T^{\mathrm{IND}}_l(u,v)$, there exist some random variables $N_l(u)$ and $N_l(v)$ in $\mathbb{Z}^+$, which are functions of both $u$ and $v$, so that
\[
T^{\mathrm{IND}}_l(u,v) = h^{\mathrm{IND}}_{N_l(u)}(u)(d) = h^{\mathrm{IND}}_{N_l(v)}(v)(d).
\]
Taking

\[
R^{(x)}_n := \inf\big\{h^{\mathrm{IND}}_k(x)(d) - x(d) : k \ge 1,\ h^{\mathrm{IND}}_k(x)(d) - x(d) \ge n\big\} - n, \quad x \in \{u, v\},
\]
we have

\[
T^{\mathrm{IND}}_1(u,v) - T^{\mathrm{IND}}_0(u,v) := \inf\big\{n \ge 1 : R^{(u)}_n = R^{(v)}_n = 0\big\}.
\]
Now we invoke the following result, which is Lemma 3.2 in [17].

Lemma 3.1 Let $\{\xi^{(1)}_n : n \ge 1\}$ and $\{\xi^{(2)}_n : n \ge 1\}$ be two independent collections of i.i.d. positive integer valued random variables with exponentially decaying tail probabilities. Assume $\min\{P(\xi^{(1)}_1 = 1), P(\xi^{(2)}_1 = 1)\} > 0$. For $i = 1, 2$ and $k \ge 1$, let $S^{(i)}_k := \sum_{j=1}^{k} \xi^{(i)}_j$ and $R^{(i)}_n := \inf\{S^{(i)}_k : k \ge 1,\ S^{(i)}_k \ge n\} - n$. Taking $\tau^R := \inf\{n \ge 1 : R^{(1)}_n = R^{(2)}_n = 0\}$, we have
\[
P\{\tau^R \ge m\} \le C_5 \exp\{-C_6 m\}, \quad m \ge 1,
\]
where $C_5$ and $C_6$ are some positive constants, depending only on the distribution of $\xi^{(1)}_n$ and $\xi^{(2)}_n$.

From the above lemma we have

\[
P\big\{h^{\mathrm{IND}}_{N_{l+1}(u)}(u)(d) - h^{\mathrm{IND}}_{N_l(u)}(u)(d) \ge m\big\} = P\big\{h^{\mathrm{IND}}_{N_{l+1}(v)}(v)(d) - h^{\mathrm{IND}}_{N_l(v)}(v)(d) \ge m\big\}
\]
\[
= P\big\{T^{\mathrm{IND}}_{l+1}(u,v) - T^{\mathrm{IND}}_l(u,v) \ge m\big\} = P\big\{T^{\mathrm{IND}}_1(u,v) - T^{\mathrm{IND}}_0(u,v) \ge m\big\} \le C_7 \exp\{-C_8 m\}, \tag{4}
\]
where $C_7$ and $C_8$ are some positive constants, depending only on the distribution of the $[h^{\mathrm{IND}}_n(u)(d) - h^{\mathrm{IND}}_{n-1}(u)(d)]$'s.

Now we study the displacement in the first $d - 1$ coordinates. Recall, for $w = (w(1),\dots,w(d))$, we denote $\bar w = (w(1),\dots,w(d-1))$. Let

\[
\psi^u_l = \psi^u_l(u,v) := \bar h^{\mathrm{IND}}_{N_l(u)}(u) - \bar h^{\mathrm{IND}}_{N_{l-1}(u)}(u) = \sum_{t=N_{l-1}(u)+1}^{N_l(u)} \big(\bar h^{\mathrm{IND}}_t(u) - \bar h^{\mathrm{IND}}_{t-1}(u)\big),
\]
\[
\psi^v_l = \psi^v_l(u,v) := \bar h^{\mathrm{IND}}_{N_l(v)}(v) - \bar h^{\mathrm{IND}}_{N_{l-1}(v)}(v) = \sum_{t=N_{l-1}(v)+1}^{N_l(v)} \big(\bar h^{\mathrm{IND}}_t(v) - \bar h^{\mathrm{IND}}_{t-1}(v)\big).
\]

Proposition 3.2 The process $\{(\psi^u_l, \psi^v_l) : l \ge 1\}$ has the following properties:

(a) It is a collection of i.i.d. random variables taking values in $\mathbb{Z}^{2(d-1)}$, such that $[\|\psi^u_l\|_1 + \|\psi^v_l\|_1]$ has exponentially decaying tail probabilities for each $l \ge 1$.

(b) For all $j = 1,\dots,d-1$ and $r \ge 1$, we have $P\{\psi^u_1(j) = r\} = P\{\psi^u_1(j) = -r\} = P\{\psi^v_1(j) = -r\} = P\{\psi^v_1(j) = r\} = P\{\psi^u_1(1) = r\}$.

(c) $E[(\psi^u_1(i))^{m_1} (\psi^v_1(j))^{m_2}]$ is independent of $i, j$ and depends only on $m_1, m_2$; and if at least one of $m_1, m_2$ is odd, then $E[(\psi^u_1(i))^{m_1} (\psi^v_1(j))^{m_2}] = 0$.

Proof. Since $\{\bar h^{\mathrm{IND}}_t(u) - \bar h^{\mathrm{IND}}_{t-1}(u) : t \ge 1\}$ and $\{\bar h^{\mathrm{IND}}_t(v) - \bar h^{\mathrm{IND}}_{t-1}(v) : t \ge 1\}$ are two independent sequences of i.i.d. random variables and $(\psi^u_l, \psi^v_l)$ is just related to the $l$-th renewal, $\{(\psi^u_l, \psi^v_l) : l \ge 0\}$ is a collection of i.i.d. random variables. Moreover, using the fact that

\[
\|\psi^u_l\|_1 + \|\psi^v_l\|_1 \le 2(d-1)\big(h^{\mathrm{IND}}_{N_l(u)}(u)(d) - h^{\mathrm{IND}}_{N_{l-1}(u)}(u)(d)\big)
\]
and $(h^{\mathrm{IND}}_{N_l(u)}(u)(d) - h^{\mathrm{IND}}_{N_{l-1}(u)}(u)(d))$ has exponentially decaying tail probabilities, part (a) is established.

From the symmetry in each of their $d-1$ coordinates and the rotational invariance of $(\bar h^{\mathrm{IND}}_t(u) - \bar h^{\mathrm{IND}}_{t-1}(u))$ and $(\bar h^{\mathrm{IND}}_t(v) - \bar h^{\mathrm{IND}}_{t-1}(v))$, part (b) holds.

The distribution of $(\bar h^{\mathrm{IND}}_t(u)(i) - \bar h^{\mathrm{IND}}_{t-1}(u)(i),\ \bar h^{\mathrm{IND}}_t(v)(j) - \bar h^{\mathrm{IND}}_{t-1}(v)(j))$ is independent of $i, j$, because we can apply some independent rotations to $\{U^u_w : w \in \mathbb{Z}^d\}$ and $\{U^v_w : w \in \mathbb{Z}^d\}$ so that the $(i,j)$-th coordinate after rotation becomes the $(1,1)$-th coordinate before rotation, with no change in the distribution. Hence, the distribution of $(\psi^u_1(i), \psi^v_1(j))$ and thus $E[(\psi^u_1(i))^{m_1} (\psi^v_1(j))^{m_2}]$ are independent of $i, j$. Further, if we fix the realizations for one family of uniform random variables and reflect the realizations of the other family along some coordinate, the distribution of $(\bar h^{\mathrm{IND}}_t(u)(i) - \bar h^{\mathrm{IND}}_{t-1}(u)(i),\ \bar h^{\mathrm{IND}}_t(v)(j) - \bar h^{\mathrm{IND}}_{t-1}(v)(j))$ does not change. Therefore, $(\psi^u_1(i), \psi^v_1(j)) \stackrel{d}{=} (\psi^u_1(i), -\psi^v_1(j))$. This proves (c). $\square$

Remark 3.3 Let $d = 3$. Since Proposition 3.2 shows that all the properties of $(\psi^u_1, \psi^v_1)$ discussed in [17] are also satisfied for our model, the argument in the Appendix of that paper (page 1141) allows us to conclude that if $\bar u - \bar v = x \in \mathbb{Z}^2$, then

\[
E\big[\|(\bar u + \psi^u_1) - (\bar v + \psi^v_1)\|_2^2 - \|x\|_2^2\big] = \alpha, \tag{5}
\]
\[
E\big[\big(\|(\bar u + \psi^u_1) - (\bar v + \psi^v_1)\|_2^2 - \|x\|_2^2\big)^2\big] \ge 2\alpha \|x\|_2^2, \tag{6}
\]
\[
E\big[\big(\|(\bar u + \psi^u_1) - (\bar v + \psi^v_1)\|_2^2 - \|x\|_2^2\big)^3\big] = O(\|x\|_2^2) \quad \text{as } \|x\|_2 \to \infty, \tag{7}
\]
where $\alpha$ is some non-negative constant.

3.2 Proof of Theorem 1.1(i) for dimension 3

In this subsection, we consider two arbitrary distinct open vertices $u, v \in \mathbb{Z}^3$ and, without loss of generality, we assume $u(3) = v(3)$. We aim to apply the Foster-Lyapunov criterion (see Proposition 5.3 in Chapter I of [3]) to the process $\{Z_l(u,v) : l \ge 0\}$.

Proposition 3.4 (Foster-Lyapunov criterion) An irreducible Markov chain with state space $E$ and stationary transition probability matrix $(p_{ij})_{i,j \in E}$ is recurrent if there exists a function $f : E \to \mathbb{R}$ such that $f(x) \to \infty$ and
\[
\sum_{k \in E} p_{jk} f(k) \le f(j) \quad \text{for } j \in E_0,
\]
where $E_0$ is a subset of $E$ so that $E \setminus E_0$ is finite.

The Markov chain $\{Z_l(u,v) : l \ge 0\}$ is not irreducible, because it has the absorbing state $(0,0)$. Because of this, we modify the Markov process so that it has the same transition probabilities as $\{Z_l(u,v) : l \ge 0\}$ except that, instead of $(0,0)$ being an absorbing state, it goes to $(1,0)$ with probability 1. With a slight abuse of notation, we denote the modified Markov chain by $\{Z_l(u,v) : l \ge 0\}$ again. Using the Foster-Lyapunov criterion, we show that the Markov chain $\{Z_l(u,v) : l \ge 0\}$ is recurrent. This will prove that the graph $G$ is connected.

For applying the Foster-Lyapunov criterion, consider $f : \mathbb{Z}^2 \to \mathbb{R}^+$ given by
\[
f(x) = \sqrt{\ln(1 + \|x\|_2^2)}.
\]
Also, define a function $g : \mathbb{R}^+ \to \mathbb{R}^+$ by $g(t) = \sqrt{\ln(1 + t)}$. In this case, $f(x) = g(\|x\|_2^2)$. The fourth derivative of $g$ is non-positive everywhere. Therefore, using

12 Taylor’s expansion, we conclude

\[
E\big[f(Z_1(u,v)) \,\big|\, Z_0(u,v) = x\big] - f(x) = E\big[g(\|Z_1(u,v)\|_2^2) - g(\|x\|_2^2) \,\big|\, Z_0(u,v) = x\big]
\]
\[
\le \sum_{k=1}^{3} \frac{g^{(k)}(\|x\|_2^2)}{k!}\, E\big[\big(\|Z_1(u,v)\|_2^2 - \|x\|_2^2\big)^k \,\big|\, Z_0(u,v) = x\big]. \tag{8}
\]
We know that $Z_1(u,v)$ is the difference of the first two coordinates of the paths starting from $u$ and $v$ at the first simultaneous regeneration. If the paths starting from $u$ and $v$ were independent until their first simultaneous regeneration time, this difference would have the same distribution as $(\bar u + \psi^u_1) - (\bar v + \psi^v_1)$, and then we could use the relations (5), (6) and (7) in Remark 3.3. However, we can couple the joint process and the independent process, and obtain a relation between the moments of $\|Z_1(u,v)\|_2^2 - \|x\|_2^2$ and $\|(\bar u + \psi^u_1) - (\bar v + \psi^v_1)\|_2^2 - \|x\|_2^2$.

Proposition 3.5 For any $x \in \mathbb{Z}^2 \setminus \{(0,0)\}$ and $k \ge 1$, we have
\[
E\big[\big(\|Z_1(u,v)\|_2^2 - \|x\|_2^2\big)^k \,\big|\, Z_0(u,v) = x\big] \le E\big[\big(\|(\bar u + \psi^u_1) - (\bar v + \psi^v_1)\|_2^2 - \|x\|_2^2\big)^k\big] + C^{(k)}_9 \|x\|_2^{2k} \exp\{-C_{10} \|x\|_2\},
\]
where $C^{(k)}_9$ and $C_{10}$ are some positive constants depending on the distribution of the $(h^{\mathrm{IND}}_n(u)(d) - h^{\mathrm{IND}}_{n-1}(u)(d))$'s, and $C^{(k)}_9$ depends on $k$ too.

Proof. By the translation invariance of our model, it suffices to prove the result for $u = (x(1), x(2), 0)$ and $v = (0,0,0)$. Let $r = \|\bar u - \bar v\|_1/3 = (|x(1)| + |x(2)|)/3$. Recall that for constructing the independent paths, we use the collections $\{U^u_w : w \in \mathbb{Z}^3\}$ and $\{U^v_w : w \in \mathbb{Z}^3\}$. We now consider another collection of i.i.d. uniform $(0,1)$ random variables $\{U'_w : w \in \mathbb{Z}^3\}$, independent of all other random variables, and define a new collection of uniform random variables $\{\tilde U_w : w \in \mathbb{Z}^3\}$ by
\[
\tilde U_w := \begin{cases} U^u_w & \text{if } w \in V(u, \lfloor r \rfloor), \\ U^v_w & \text{if } w \in V(v, \lfloor r \rfloor), \\ U'_w & \text{otherwise.} \end{cases}
\]

Using this collection, we construct the joint path $\{(h_n(u), h_n(v)) : n \ge 0\}$ starting from $(u,v)$ until their first simultaneous regeneration time $T_1$ at step $\tau_1$. We observe that

\[
\|Z_1(u,v)\|_2 \le \|Z_1(u,v) - \bar u\|_2 + \|\bar u\|_2 \le \|Z_1(u,v) - \bar u\|_1 + \|x\|_2 \le 4 h_{\tau_1}(u)(3) + \|x\|_2,
\]
and
\[
\|\psi^u_1 - \psi^v_1\|_2 \le \|\psi^u_1\|_1 + \|\psi^v_1\|_1 \le 4 h^{\mathrm{IND}}_{N_1(u)}(u)(3).
\]
Now, define the event $A(r) := \{h^{\mathrm{IND}}_{N_1(u)}(u)(3) > r\}$. An argument as in Proposition 2.4 yields that $h^{\mathrm{IND}}_{N_1(u)}(u)(3)$, like $h_{\tau_1}(u)(3)$, has exponentially decaying tail probabilities which do not depend on $u$ or $v$. Hence, we have

\[
E\Big[\big(\|Z_1(u,v)\|_2^2 - \|x\|_2^2\big)^k - \big(\|(\bar u + \psi^u_1) - (\bar v + \psi^v_1)\|_2^2 - \|x\|_2^2\big)^k\Big]
\]
\[
= E\Big[\Big(\big(\|Z_1(u,v)\|_2^2 - \|x\|_2^2\big)^k - \big(\|(\bar u + \psi^u_1) - (\bar v + \psi^v_1)\|_2^2 - \|x\|_2^2\big)^k\Big)\,\mathbf{1}(A(r))\Big]
\]
\[
\le 2^k E\Big[\Big(\|Z_1(u,v)\|_2^{2k} + (2^{2k} + 2)\|x\|_2^{2k} + 2^{2k}\|\psi^u_1 - \psi^v_1\|_2^{2k}\Big)\,\mathbf{1}(A(r))\Big]
\]
\[
\le 2^{7k} E\Big[\Big(h^{2k}_{\tau_1}(u)(3) + \|x\|_2^{2k} + \big(h^{\mathrm{IND}}_{N_1(u)}(u)(3)\big)^{2k}\Big)\,\mathbf{1}(A(r))\Big]
\]
\[
\le 2^{7k} \Big(\sqrt{E[h^{4k}_{\tau_1}(u)(3)]} + \|x\|_2^{2k} + \sqrt{E\big[(h^{\mathrm{IND}}_{N_1(u)}(u)(3))^{4k}\big]}\Big) \sqrt{P\big\{h^{\mathrm{IND}}_{N_1(u)}(u)(3) > r\big\}}
\]
\[
\le 2^{7k} \times 3 \max\Big\{\sqrt{E[h^{4k}_{\tau_1}(u)(3)]},\ \sqrt{E\big[(h^{\mathrm{IND}}_{N_1(u)}(u)(3))^{4k}\big]}\Big\}\, \|x\|_2^{2k}\, \sqrt{C_7 \exp\{-C_8 \|x\|_2/3\}},
\]
where we have used the Cauchy-Schwarz inequality in the penultimate line and inequality (4) in the last line above. This establishes the proposition. $\square$

We now return to the relation (8). The above proposition implies that

\[
E\big[f(Z_1(u,v)) \,\big|\, Z_0(u,v) = x\big] - f(x) \le \sum_{k=1}^{3} \frac{g^{(k)}(\|x\|_2^2)}{k!}\, E\big[\big(\|(\bar u + \psi^u_1) - (\bar v + \psi^v_1)\|_2^2 - \|x\|_2^2\big)^k\big] + \sum_{k=1}^{3} \frac{g^{(k)}(\|x\|_2^2)}{k!}\, C^{(k)}_9 \|x\|_2^{2k} \exp\{-C_{10}\|x\|_2\}.
\]
For $\|x\|_2$ large enough, from (5), (6) and (7) we have
\[
\sum_{k=1}^{3} \frac{g^{(k)}(\|x\|_2^2)}{k!}\, E\big[\big(\|(\bar u + \psi^u_1) - (\bar v + \psi^v_1)\|_2^2 - \|x\|_2^2\big)^k\big] \le -\alpha \|x\|_2^2 \big(\ln(1 + \|x\|_2^2)\big)^{-3/2} \big/ \big(8(1 + \|x\|_2^2)^2\big).
\]
Also,

\[
\sum_{k=1}^{3} \frac{g^{(k)}(\|x\|_2^2)}{k!}\, C^{(k)}_9 \|x\|_2^{2k} \exp\{-C_{10}\|x\|_2\} \le 2 \max\{C^{(1)}_9, C^{(3)}_9\} \exp\{-C_{10}\|x\|_2\}.
\]

14 Therefore,

\[
E\big[f(Z_1(u,v)) \,\big|\, Z_0(u,v) = x\big] - f(x) \le 0,
\]
for $\|x\|_2$ large. This completes the proof of Theorem 1.1(i) for dimension 3.

3.3 Proof of Theorem 1.1(ii)

Let $d \ge 4$. We will show
\[
P\{G \text{ has at least } m \text{ distinct trees}\} = 1, \quad \forall m \ge 2.
\]
By the ergodicity inherent in our model it suffices to show

\[
P\{G \text{ has at least } m \text{ distinct trees}\} > 0, \quad \forall m \ge 2. \tag{9}
\]
The above relation holds if, with positive probability, there exist $m$ open vertices whose paths remain distinct at each of their simultaneous regeneration times. We follow the approach of [14].

Proposition 3.6 For $0 < \varepsilon < 1/3$, there exist some positive constants $C_{11}$, $\beta = \beta(\varepsilon)$ and $n_0 \ge 1$, such that
\[
\inf_{(u,v) \in A_{n,\varepsilon}} P\big\{Z_{n^4}(u,v) \in D_{n^{2(1+\varepsilon)}} \setminus D_{n^{2(1-\varepsilon)}} \,\big|\, u, v \in \mathcal{V}\big\} \ge 1 - C_{11} n^{-\beta},
\]
for all $n \ge n_0$, where $A_{n,\varepsilon} := \{(u,v) \in \mathbb{Z}^{2d} : u(d) = v(d),\ \bar u - \bar v \in D_{n^{1+\varepsilon}} \setminus D_{n^{1-\varepsilon}}\}$ and $D_r := \{w \in \mathbb{Z}^{d-1} : \|w\|_1 \le r\}$.

Proof. Fix $0 < \varepsilon < 1/3$ and suppose $n \ge 1$. Consider the independent paths starting from $u, v \in \mathbb{Z}^d$ with $u(d) = v(d)$, constructed using the collections $\{U^u_w : w \in \mathbb{Z}^d\}$ and $\{U^v_w : w \in \mathbb{Z}^d\}$ respectively, and define
\[
W_{n,\varepsilon}(u,v) := \Big\{\bar h^{\mathrm{IND}}_{N_{n^4}(u)}(u) - \bar h^{\mathrm{IND}}_{N_{n^4}(v)}(v) \in D_{n^{2(1+\varepsilon)}} \setminus D_{n^{2(1-\varepsilon)}},\ \bar h^{\mathrm{IND}}_{N_j(u)}(u) - \bar h^{\mathrm{IND}}_{N_j(v)}(v) \notin D_{K \ln n} \text{ for all } j = 0, 1, \dots, n^4\Big\},
\]
where $K$ is a constant to be specified later. Using the same argument as in Lemma 3.3 of [14], we can show that there exists $n_0$ such that for all $n \ge n_0$,
\[
\inf_{(u,v) \in A_{n,\varepsilon}} P\big\{W_{n,\varepsilon}(u,v) \,\big|\, u, v \in \mathcal{V}\big\} \ge 1 - C_{12} n^{-\alpha}, \tag{10}
\]
where $C_{12}, \alpha = \alpha(\varepsilon)$ are some positive constants (see Lemma 5.1 in the Appendix).

Let $r_l(n) := \min\big\{\lfloor K \ln n/3 \rfloor,\ h^{\mathrm{IND}}_{N_l(u)}(u)(d) - h^{\mathrm{IND}}_{N_{l-1}(u)}(u)(d)\big\}$. We consider another independent collection of i.i.d. uniform $(0,1)$ random variables $\{U''_w : w \in \mathbb{Z}^d\}$ and define a new family $\{\check U_w : w \in \mathbb{Z}^d\}$ as
\[
\check U_w := \begin{cases} U^u_w & \text{if } w \in \bigcup_{l=1}^{n^4} V\big(h^{\mathrm{IND}}_{N_{l-1}(u)}(u), r_l(n)\big), \\ U^v_w & \text{if } w \in \bigcup_{l=1}^{n^4} V\big(h^{\mathrm{IND}}_{N_{l-1}(v)}(v), r_l(n)\big), \\ U''_w & \text{otherwise.} \end{cases}
\]
Note that on the event $W_{n,\varepsilon}(u,v)$, we have

\[
\Big[\bigcup_{l=1}^{n^4} V\big(h^{\mathrm{IND}}_{N_{l-1}(u)}(u), r_l(n)\big)\Big] \cap \Big[\bigcup_{l=1}^{n^4} V\big(h^{\mathrm{IND}}_{N_{l-1}(v)}(v), r_l(n)\big)\Big] = \emptyset.
\]
Let $B_l(n) := \big\{h^{\mathrm{IND}}_{N_l(u)}(u)(d) - h^{\mathrm{IND}}_{N_{l-1}(u)}(u)(d) \dots$ for $K > 12/C_8$ and suitable choices of $\beta, C_{11} > 0$, which proves the proposition. $\square$

With the above proposition, the proof of Theorem 1.1(ii) follows along the lines of the proof of Theorem 2.1 of [14]. The details are presented in the Appendix.

4 Convergence to the Brownian Web

Starting from a vertex $u \in \mathbb{Z}^2$, the piecewise linear path $\pi_u : [u(2), \infty) \to \mathbb{R}$ is defined as $\pi_u(h_k(u)(2)) := h_k(u)(1)$ for every $k \ge 0$, and $\pi_u$ is linear in the interval $[h_k(u)(2), h_{k+1}(u)(2)]$. The set of all paths comprising the graph $G$ is denoted by $\mathcal{X} := \{\pi_u : u \in \mathcal{V}\}$. Since we want to study the diffusive limit of $\mathcal{X}$, we define diffusive paths as follows:

Definition 4.1 For any $n \in \mathbb{N}$ and some normalization constants $\sigma, \gamma > 0$, if $\pi$ is a path starting from $\sigma_\pi$, the scaled path $\pi^{(n)}(\sigma, \gamma)$ is defined by

\[
\pi^{(n)}(\sigma, \gamma) : [\sigma_\pi/(n^2\gamma), \infty) \to \mathbb{R} \quad \text{such that} \quad \pi^{(n)}(\sigma, \gamma)(t) = \pi(n^2\gamma t)/(n\sigma).
\]
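This is the usual diffusive rescaling: time is compressed by $n^2\gamma$ and space by $n\sigma$. A minimal sketch, with a piecewise-constant stand-in path built from a simple random walk; the constants `sigma`, `gamma` and the input walk are illustrative, not the $\sigma(p)$, $\gamma(p)$ of the convergence result below.

```python
import random
random.seed(2)

sigma, gamma, n = 1.0, 1.0, 100  # hypothetical normalization constants

# a stand-in for an unscaled path with starting time 0:
# the prefix sums of a simple +-1 random walk
steps = [random.choice([-1, 1]) for _ in range(200000)]
prefix = [0]
for s in steps:
    prefix.append(prefix[-1] + s)

def pi(t):
    """Unscaled path evaluated at integer times (constant in between)."""
    return prefix[min(int(t), len(prefix) - 1)]

def pi_n(t):
    """Scaled path pi^(n)(sigma, gamma)(t) = pi(n^2 * gamma * t) / (n * sigma)."""
    return pi(n * n * gamma * t) / (n * sigma)

print(pi_n(0.0), pi_n(1.0))
```

Under this scaling a walk with i.i.d. bounded-variance increments converges to a Brownian motion by Donsker's theorem; the content of the theorem below is the much stronger statement that the whole family of coalescing paths converges to the Brownian web.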

The collection of scaled paths is $\chi_n(\sigma, \gamma) := \{\pi^{(n)}_u(\sigma, \gamma) : u \in \mathcal{V}\}$. We have the following theorem:

Theorem 4.2 There exist $\sigma := \sigma(p)$ and $\gamma := \gamma(p)$ such that $\bar\chi_n(\sigma, \gamma)$ converges in distribution to the standard Brownian web $\mathcal{W}$ as $n \to \infty$, where $\bar\chi_n(\sigma, \gamma)$ is the closure of $\chi_n(\sigma, \gamma)$ in $(\Pi, d)$.

For a subset of paths $\Gamma$ in $\Pi$, and $t_0, t, a, b \in \mathbb{R}$ with $t > 0$ and $a < b$, consider the following counting random variables:

\[
\eta_\Gamma(t_0, t; a, b) := \#\big\{\pi(t_0 + t) : \pi \in \Gamma,\ \sigma_\pi \le t_0,\ \pi(t_0) \in [a, b]\big\},
\]
\[
\hat\eta_\Gamma(t_0, t; a, b) := \#\big\{\pi(t_0 + t) : \pi \in \Gamma,\ \sigma_\pi \le t_0,\ \pi(t_0 + t) \in [a, b]\big\}.
\]
From now on we denote the standard Brownian motion starting from $x$ by $B_x$ and standard coalescing Brownian motions starting from $x_1, \dots, x_k$ by $(W_{x_1}, \dots, W_{x_k})$. For the proof of Theorem 4.2, we use the following theorem:

Theorem 4.3 ([13]) Suppose $\Theta_1, \Theta_2, \dots$ are $(\mathcal{H}, \mathcal{B}_{\mathcal{H}})$ valued random variables with non-crossing paths. Assume that the following conditions hold:

$(I_1)$ For all $y \in \mathbb{R}^2$, there exist $\zeta^y_n \in \Theta_n$ such that, for any finite set of points $y_1, \dots, y_k$ from a deterministic countable dense set $\mathcal{D}$ of $\mathbb{R}^2$, $(\zeta^{y_1}_n, \dots, \zeta^{y_k}_n)$ converges in distribution to $(W_{y_1}, \dots, W_{y_k})$ as $n \to \infty$;

$(B_1)$ For all $t > 0$, $\limsup_{n\to\infty} \sup_{(a,t_0) \in \mathbb{R}^2} P\{\eta_{\Theta_n}(t_0, t; a, a+\varepsilon) \ge 2\} \to 0$ as $\varepsilon \to 0^+$;

$(B_2)$ For all $t > 0$, $\varepsilon^{-1} \limsup_{n\to\infty} \sup_{(a,t_0) \in \mathbb{R}^2} P\{\eta_{\Theta_n}(t_0, t; a, a+\varepsilon) \ge 3\} \to 0$ as $\varepsilon \to 0^+$.

Then, $\Theta_n$ converges in distribution to $\mathcal{W}$ as $n \to \infty$.

To prove Theorem 4.2 we show that the sequence $\{\bar\chi_n(\sigma, \gamma) : n \ge 1\}$ satisfies the conditions of the above theorem. The verification of condition $(I_1)$ is along the lines of [17]; however, it needs some modification because of the differences between the two graphs, and as such we present it here. The verification of condition $(B_1)$ follows from the same argument as in [11] and, for completeness, it is also stated in the Appendix.

4.1 $\bar\chi_n(\sigma, \gamma)$ is an $(\mathcal{H}, \mathcal{B}_{\mathcal{H}})$ valued random variable

It suffices to show that $\mathcal{X}$ has compact closure in $(\Pi, d)$. For any path $\pi \in \Pi$, define the extended path $\hat\pi$ as follows:

\[
\hat\pi(t) := \begin{cases} \pi(\sigma_\pi) & \text{for } t \le \sigma_\pi, \\ \pi(t) & \text{for } t > \sigma_\pi. \end{cases}
\]
For $\pi_1, \pi_2 \in \Pi$, let
\[
\hat d(\hat\pi_1, \hat\pi_2) := |\tanh(\sigma_{\pi_1}) - \tanh(\sigma_{\pi_2})| \vee \sup_{t \ge \sigma_{\pi_1} \wedge \sigma_{\pi_2}} \left|\frac{\tanh(\hat\pi_1(t))}{1+|t|} - \frac{\tanh(\hat\pi_2(t))}{1+|t|}\right|,
\]
so that $\hat d(\hat\pi_1, \hat\pi_2) = d(\pi_1, \pi_2)$. Thus we need to show that $\hat{\mathcal{X}} := \{\hat\pi : \pi \in \mathcal{X}\}$ has compact closure in $(\hat\Pi, \hat d)$, where $\hat\Pi := \{\hat\pi : \pi \in \Pi\}$. For this purpose, we prove that the closure of $f(\hat{\mathcal{X}})$ is compact for some homeomorphism $f$.

Note that each path $\hat\pi \in \hat\Pi$ can be seen as the graph $\{(\hat\pi(t), t) : t \in \mathbb{R}\} \subseteq \mathbb{R}^2$. Taking the map $(\varphi, \psi) : \mathbb{R}^2 \to (-1,1) \times (-1,1)$ as
\[
(\varphi, \psi)(x, t) = \big(\varphi(x,t), \psi(t)\big) := \Big(\frac{\tanh(x)}{1+|t|},\ \tanh(t)\Big),
\]
we define the homeomorphism $f : \hat\Pi \to f(\hat\Pi)$ so that for $\hat\pi \in \hat\Pi$, $f(\hat\pi)$ is the following graph:

\[
\big\{ \big(\varphi(\hat\pi(t), t), \psi(t)\big) : t \in \mathbb{R} \big\} \subseteq (-1,1) \times (-1,1).
\]
Now, using the Arzelà–Ascoli theorem, we prove that $f(\hat{\mathcal{X}}) \subseteq \big((-1,1) \times (-1,1)\big)^{\mathbb{R}}$ has compact closure. First, we show the equicontinuity of $f(\hat{\mathcal{X}})$ on $\mathbb{R}$. Note that every path $f(\hat\pi) \in f(\hat{\mathcal{X}})$ is given by $f(\hat\pi)(t) = \big(\varphi(\hat\pi(t), t), \psi(t)\big) \in (-1,1) \times (-1,1)$. We equip $(-1,1) \times (-1,1)$ with the $L^\infty$-metric $\hat\rho$ for which

\[
\hat\rho\Big( \big(\varphi(\hat\pi_1(t_1), t_1), \psi(t_1)\big), \big(\varphi(\hat\pi_2(t_2), t_2), \psi(t_2)\big) \Big)
= \big|\varphi(\hat\pi_1(t_1), t_1) - \varphi(\hat\pi_2(t_2), t_2)\big| \vee \big|\psi(t_1) - \psi(t_2)\big|
= \rho\big( (\hat\pi_1(t_1), t_1), (\hat\pi_2(t_2), t_2) \big),
\]

for every $\hat\pi_1, \hat\pi_2 \in \hat\Pi$ and $t_1, t_2 \in \mathbb{R}$, where $\rho$ is as defined in (1). We will now show that, for all $\varepsilon > 0$, there exists some $\delta > 0$ such that if $t_1, t_2 \in \mathbb{R}$ with $|t_1 - t_2| < \delta$, then
\[
\sup\big\{ \rho\big( (\hat\pi(t_1), t_1), (\hat\pi(t_2), t_2) \big) : \hat\pi \in \hat{\mathcal{X}} \big\} < \varepsilon,
\]
which establishes the uniform equicontinuity of $f(\hat{\mathcal{X}})$. Indeed, if $f(\hat{\mathcal{X}})$ is not uniformly equicontinuous on $\mathbb{R}$, then there must exist $\varepsilon > 0$ such that for all $n \in \mathbb{N}$ we can find $t_1^n, t_2^n \in \mathbb{R}$ with $|t_1^n - t_2^n| < 1/n$ and
\[
\rho\big( (\hat\pi_n(t_1^n), t_1^n), (\hat\pi_n(t_2^n), t_2^n) \big) > \varepsilon \quad \text{for some } \hat\pi_n \in \hat{\mathcal{X}}.
\]
However, this is not possible because

\[
|\psi(t) - \psi(s)| = |\tanh(t) - \tanh(s)| \to 0 \quad \text{as } |t - s| \to 0,
\]
and, noting that for our paths $|\hat\pi(t) - \hat\pi(s)| \le |t - s|$, we have
\[
|\varphi(\hat\pi(t), t) - \varphi(\hat\pi(s), s)| = \left| \frac{\tanh(\hat\pi(t))}{1+|t|} - \frac{\tanh(\hat\pi(s))}{1+|s|} \right| \to 0 \quad \text{as } |t - s| \to 0.
\]

In addition, for any $t \in \mathbb{R}$, if $f_t(\hat{\mathcal{X}})$ is the set of positions of the paths in $f(\hat{\mathcal{X}})$ at time $t$, then

\[
f_t(\hat{\mathcal{X}}) = \big\{ \big(\varphi(\hat\pi(t), t), \psi(t)\big) : \hat\pi \in \hat{\mathcal{X}} \big\} = \big\{ \varphi(\hat\pi(t), t) : \hat\pi \in \hat{\mathcal{X}} \big\} \times \{\psi(t)\}.
\]
Since $\{\varphi(\hat\pi(t), t) : \hat\pi \in \hat{\mathcal{X}}\}$ is bounded, its closure is compact. Thus, $f_t(\hat{\mathcal{X}})$ has compact closure. This proves the lemma. $\square$
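The uniform equicontinuity used above is easy to check numerically: along any 1-Lipschitz path, the compactified coordinates $(\varphi, \psi)$ move by at most a constant multiple of the time increment, uniformly in $t$. A minimal sketch (the sine test path is an arbitrary 1-Lipschitz illustration, not a path of the model):

```python
import math

def phi(x, t):
    # compactified spatial coordinate tanh(x) / (1 + |t|)
    return math.tanh(x) / (1.0 + abs(t))

def psi(t):
    # compactified time coordinate tanh(t)
    return math.tanh(t)

def max_modulus(delta, path=math.sin):
    """Largest sup-metric displacement of t -> (phi(path(t), t), psi(t))
    over time increments of size delta, scanned over a wide time window."""
    worst, t = 0.0, -50.0
    while t < 50.0:
        s = t + delta
        worst = max(worst,
                    abs(phi(path(t), t) - phi(path(s), s)),
                    abs(psi(t) - psi(s)))
        t += 0.25
    return worst

# sin is 1-Lipschitz, so the displacement stays below ~2*delta uniformly in t
print(max_modulus(1e-3))
```

Since $\tanh$ is 1-Lipschitz, the displacement is bounded by $2\delta$ regardless of where on the time axis the increment sits, which is exactly the uniformity the Arzelà–Ascoli argument needs.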

4.2 Verification of condition $(I_1)$

Let $h_k(u) = (i_k, t_k)$ (say), and let $X^u_{k+1} = i_{k+1} - i_k$ and $Y^u_{k+1} = t_{k+1} - t_k$ be the marginal increments of the vertices on the path $\pi_u$. We observe that $\{X^u_k : k \ge 1\}$ and $\{Y^u_k : k \ge 1\}$ are two collections of i.i.d. random variables, with $|X^u_k| \le Y^u_k$. Also $X^u_k$ has a symmetric distribution, with both $X^u_k$ and $Y^u_k$ having exponentially decaying tail probabilities; in particular, $\mathbb{P}\{Y^u_k > m\} \le (1-p)^{(m+1)^2 - 1}$.
Writing $X_k$ for $X^0_k$ and $Y_k$ for $Y^0_k$, for $k \ge 0$ we have
\[
h_k(u) = \Big( u(1) + \sum_{i=1}^k X^u_i,\ u(2) + \sum_{i=1}^k Y^u_i \Big), \qquad
S_k := \sum_{i=1}^k X_i \stackrel{d}{=} \sum_{i=1}^k X^u_i \quad \text{and} \quad R_k := \sum_{i=1}^k Y_i \stackrel{d}{=} \sum_{i=1}^k Y^u_i,
\]
where we take $\sum_{i=1}^0 = 0$.
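The counting process $N(t) := \sup\{n \ge 0 : R_n \le t\}$ attached to these temporal increments is rescaled below via the renewal theorem, $N(t)/t \to 1/\mathbb{E}[Y_1]$ almost surely. That elementary fact can be sanity-checked with a toy simulation (the geometric increment law here is an assumed stand-in for illustration, not the model's exact law):

```python
import random

random.seed(0)

def sample_y():
    # stand-in temporal increment: geometric on {1, 2, ...} with p = 1/2,
    # so E[Y] = 2 (an illustrative choice, not the model's law)
    y = 1
    while random.random() < 0.5:
        y += 1
    return y

def renewal_count(t, sample):
    """N(t) = sup{n >= 0 : R_n <= t} for the renewal sums R_n = Y_1 + ... + Y_n."""
    r, n = 0.0, 0
    while True:
        r += sample()
        if r > t:
            return n
        n += 1

t = 200_000
n_t = renewal_count(t, sample_y)
print(n_t / t)  # renewal theorem: N(t)/t -> 1/E[Y_1] = 0.5
```

With $\gamma = \mathbb{E}[Y_1]$ this is precisely why $N(n^2\gamma t)/n^2 \to t$, the time change that lets Donsker's theorem drive Proposition 4.4.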

Proposition 4.4 There exist $\sigma$ and $\gamma$ such that $\pi_0^{(n)}(\sigma, \gamma) \stackrel{d}{\longrightarrow} B^0$ in $(\Pi, d)$.

Proof. For $N(t) := \sup\{n \ge 0 : R_n \le t\}$, we have
\[
\pi_0(t) = S_{N(t)} + \frac{t - R_{N(t)}}{R_{N(t)+1} - R_{N(t)}}\, X_{N(t)+1},
\]
and its diffusively rescaled version is

\[
\pi_0^{(n)}(\sigma, \gamma)(t) = \frac{1}{n\sigma} \Big[ \sum_{i=1}^{N(n^2\gamma t)} X_i + \frac{n^2\gamma t - R_{N(n^2\gamma t)}}{R_{N(n^2\gamma t)+1} - R_{N(n^2\gamma t)}}\, X_{N(n^2\gamma t)+1} \Big].
\]
Taking $\sigma = \sqrt{\mathrm{Var}(X_1)}$ and $Z_i := X_i/\sigma$, we have

\[
\pi_0^{(n)}(\sigma, \gamma)(t) = \frac{1}{n} \Big[ \sum_{i=1}^{N(n^2\gamma t)} Z_i + \frac{n^2\gamma t - R_{N(n^2\gamma t)}}{R_{N(n^2\gamma t)+1} - R_{N(n^2\gamma t)}}\, Z_{N(n^2\gamma t)+1} \Big],
\]
where the $Z_i$'s are i.i.d. with mean 0 and variance 1. Next, we define another stochastic process $\hat\pi_n$ as follows:
\[
\hat\pi_n(t) := \frac{1}{n} \Big[ \sum_{i=1}^{\lfloor n^2 t \rfloor} Z_i + (n^2 t - \lfloor n^2 t \rfloor)\, Z_{\lfloor n^2 t \rfloor + 1} \Big], \qquad t \ge 0,
\]
and we see that
\[
\pi_0^{(n)}(\sigma, \gamma)(t) \stackrel{d}{=} \hat\pi_n\big(N(n^2\gamma t)/n^2\big) + \frac{1}{n} \Big[ \frac{n^2\gamma t - R_{N(n^2\gamma t)}}{R_{N(n^2\gamma t)+1} - R_{N(n^2\gamma t)}}\, Z_{N(n^2\gamma t)+1} \Big]. \tag{11}
\]
From Donsker's invariance principle, the process $\hat\pi_n$ converges in distribution to the standard Brownian motion. Also, taking $\gamma = \mathbb{E}[Y_1]$, by the renewal theorem (see Theorem 4.4.1 of [10]), we have
\[
\frac{N(n^2\gamma t)}{n^2} \stackrel{a.s.}{\longrightarrow} \frac{\gamma t}{\mathbb{E}[Y_1]} = t \quad \text{as } n \to \infty.
\]
Thus, to complete the proof of the proposition, it suffices to show that the second term in (11) converges in probability to 0 for this choice of $\gamma$.
For any $\varepsilon > 0$, noting that $0 \le \frac{n^2\gamma t - R_{N(n^2\gamma t)}}{R_{N(n^2\gamma t)+1} - R_{N(n^2\gamma t)}} \le 1$, we have, for any $s > 0$,
\begin{align*}
\mathbb{P}\Big\{ \sup_{0 \le t \le s} \Big| \frac{n^2\gamma t - R_{N(n^2\gamma t)}}{R_{N(n^2\gamma t)+1} - R_{N(n^2\gamma t)}}\, Z_{N(n^2\gamma t)+1} \Big| > n\varepsilon \Big\}
&\le \mathbb{P}\Big\{ \sup_{0 \le t \le s} |Z_{N(n^2\gamma t)+1}| > n\varepsilon \Big\} \\
&\le \mathbb{P}\Big\{ \sup_{i=1,\dots,\lfloor n^2\gamma s \rfloor + 1} |Z_i| > n\varepsilon \Big\} \\
&\le (\lfloor n^2\gamma s \rfloor + 1)\, \mathbb{P}\{|Z_1| > n\varepsilon\} \\
&= (\lfloor n^2\gamma s \rfloor + 1)\, \mathbb{P}\{|X_1| > n\sigma\varepsilon\} \to 0 \quad \text{as } n \to \infty,
\end{align*}

because $|X_1|$ has exponentially decaying tail probabilities. This completes the proof. $\square$

If $(u_n(1)/(n\sigma), u_n(2)/(n^2\gamma)) \stackrel{a.s.}{\longrightarrow} u$ as $n \to \infty$, then, noting that $\{(\pi_{u_n}(t), t) : t \ge u_n(2)\} = u_n + \{(\pi_0(t), t) : t \ge 0\}$, we have that $\pi_{u_n}^{(n)}(\sigma, \gamma)$ converges in distribution to $B^u$ as $n \to \infty$.

For $x \in \mathbb{R}^2$ and $n \ge 1$, let
\[
x_n = (x_n(1), x_n(2)) \quad \text{with} \quad x_n(1) := \lfloor n\sigma x(1) \rfloor + i_n \ \text{and} \ x_n(2) := \lfloor n^2\gamma x(2) \rfloor, \tag{12}
\]
where $i_n := \inf\{ i \ge 1 : (\lfloor n\sigma x(1) \rfloor + i, \lfloor n^2\gamma x(2) \rfloor) \in \mathcal{V} \}$. For any $\delta \in (0,1)$, we have $\mathbb{P}\{i_n > n^\delta\} \le (1-p)^{n^\delta - 1}$, so $(x_n(1)/(n\sigma), x_n(2)/(n^2\gamma)) \stackrel{a.s.}{\longrightarrow} x$ as $n \to \infty$. Thus, taking $\zeta_n^x := \pi_{x_n}^{(n)}(\sigma, \gamma)$, as observed in the previous paragraph we have that $\zeta_n^x$ converges in distribution to $B^x$ as $n \to \infty$, which verifies condition $(I_1)$ for $k = 1$.

For $k \ge 2$ we proceed by induction. Suppose condition $(I_1)$ holds for $k-1$ points. Fix $x^1, \dots, x^k \in \mathbb{R}^2$ and, without loss of generality, assume that $x^k(2) = \min_{1 \le i \le k} x^i(2) = 0$. For simplicity of notation we write $\pi^{(n)}$ for $\pi^{(n)}(\sigma, \gamma)$, where $\sigma$ and $\gamma$ are as in Proposition 4.4.

Taking $\zeta_n^x := \pi_{x_n}^{(n)}$ where $x_n$ is as in (12), we aim to show that $(\zeta_n^{x^1}, \dots, \zeta_n^{x^k})$ converges in distribution to $(W^{x^1}, \dots, W^{x^k})$ as $n \to \infty$. Here the convergence occurs in the product space $\Pi^k$ equipped with the metric $d_k((\pi_1, \dots, \pi_k), (\theta_1, \dots, \theta_k)) = \sum_{i=1}^k d(\pi_i, \theta_i)$, as in [11].

As in [17], we first introduce a coalescence map. Fix $\alpha \in (0, \tfrac{1}{2})$ and, for $n \ge 1$, let $A_{n,\alpha} := \{(\pi_1, \dots, \pi_k) \in \Pi^k : t_{n,k} < \infty\}$, where
\[
t_{n,k} := \inf\big\{ t \ge \max\{\sigma_{\pi_i}, \sigma_{\pi_k}\} : |\pi_i(t) - \pi_k(t)| \le n^{-\alpha} \ \text{for some } 1 \le i \le k-1 \big\}.
\]
In the case $t_{n,k} < \infty$, let $s_{n,k} := (\lfloor n^2\gamma t_{n,k} \rfloor + 1)/(n^2\gamma)$ and
\[
i_0 := \min\big\{ i \in \{1, \dots, k-1\} : |\pi_i(t_{n,k}) - \pi_k(t_{n,k})| \le n^{-\alpha} \big\},
\]
and define

\[
\bar\pi_k(t) := \begin{cases}
\pi_k(t) & \text{for } \sigma_{\pi_k} \le t \le t_{n,k}, \\[4pt]
\pi_k(t_{n,k}) + \dfrac{t - t_{n,k}}{s_{n,k} - t_{n,k}} \big( \pi_{i_0}(s_{n,k}) - \pi_k(t_{n,k}) \big) & \text{for } t_{n,k} < t \le s_{n,k}, \\[4pt]
\pi_{i_0}(t) & \text{for } t > s_{n,k},
\end{cases}
\]
and the $\alpha$-coalescence map $f_{n,\alpha}(\pi_1, \dots, \pi_k) := (\pi_1, \dots, \pi_{k-1}, \bar\pi_k)$ on $A_{n,\alpha}$, taken to be the identity otherwise.

We complete the verification of I1 in the following 3 steps:

Step 1. From $x^1, \dots, x^k$ we obtain $x_n^1, \dots, x_n^k$ as in (12). Next we construct paths $\pi_1, \dots, \pi_{k-1}$ starting from $x_n^1, \dots, x_n^{k-1}$ as before. However, from $x_n^k$ we construct the path $\tilde\pi_k$ using an independent collection $\{\tilde U_w : w \in \mathbb{Z}^2\}$ of i.i.d. uniform $(0,1)$ random variables. Clearly $\tilde\pi_k^{(n)} \stackrel{d}{=} \pi^{(n)}_{x_n^k}$ and $\tilde\pi_k$ is independent of the paths $\pi_1, \dots, \pi_{k-1}$. Using the same argument as in [17], we have

\[
f_{n,\alpha}\big( \pi_1^{(n)}, \dots, \pi_{k-1}^{(n)}, \tilde\pi_k^{(n)} \big) \stackrel{d}{\longrightarrow} (W^{x^1}, \dots, W^{x^k}) \quad \text{as } n \to \infty. \tag{13}
\]

Step 2. In this step, we construct another path starting from $x_n^k$ which is not necessarily independent of $\pi_1, \dots, \pi_{k-1}$. Let $\{U'_w : w \in \mathbb{Z}^2\}$ be such that
\[
U'_w := \begin{cases}
U_w & \text{for } w \in \bigcup_{i=1}^{k-1} \bigcup_{m=0}^{\infty} V\big( h_m(x_n^i),\ h_{m+1}(x_n^i)(2) - h_m(x_n^i)(2) \big), \\
\tilde U_w & \text{otherwise.}
\end{cases} \tag{14}
\]

Here $\bigcup_{i=1}^{k-1} \bigcup_{m=0}^{\infty} V\big( h_m(x_n^i),\ h_{m+1}(x_n^i)(2) - h_m(x_n^i)(2) \big)$ is the set of all vertices explored to construct the paths $\pi_1, \dots, \pi_{k-1}$.
Let $\pi_k$ be the path starting from $x_n^k$ constructed using the collection $\{U'_w : w \in \mathbb{Z}^2\}$. In this step, we want to prove that
\[
f_{n,\alpha}\big( \pi_1^{(n)}, \dots, \pi_k^{(n)} \big) \stackrel{d}{\longrightarrow} (W^{x^1}, \dots, W^{x^k}) \quad \text{as } n \to \infty. \tag{15}
\]
Let $\pi_{1,k}^{(n)}$ and $\pi_{2,k}^{(n)}$ be such that $f_{n,\alpha}(\pi_1^{(n)}, \dots, \pi_{k-1}^{(n)}, \tilde\pi_k^{(n)}) := (\pi_1^{(n)}, \dots, \pi_{k-1}^{(n)}, \pi_{1,k}^{(n)})$ and $f_{n,\alpha}(\pi_1^{(n)}, \dots, \pi_{k-1}^{(n)}, \pi_k^{(n)}) := (\pi_1^{(n)}, \dots, \pi_{k-1}^{(n)}, \pi_{2,k}^{(n)})$. From (13), we need to show

\[
d_k\big( (\pi_1^{(n)}, \dots, \pi_{k-1}^{(n)}, \pi_{1,k}^{(n)}),\ (\pi_1^{(n)}, \dots, \pi_{k-1}^{(n)}, \pi_{2,k}^{(n)}) \big) = d\big( \pi_{1,k}^{(n)}, \pi_{2,k}^{(n)} \big) \stackrel{P}{\longrightarrow} 0 \quad \text{as } n \to \infty;
\]
i.e., to show (15) we need to prove that, for any $t > 0$,

\[
\sup_{0 \le s \le t} \big| \pi_{1,k}^{(n)}(s) - \pi_{2,k}^{(n)}(s) \big| \stackrel{P}{\longrightarrow} 0 \quad \text{as } n \to \infty. \tag{16}
\]

Fix $t > 0$. If $\min_{1 \le i \le k-1} \sigma_{\pi_i} > n^2\gamma t$, then $\pi_{1,k}^{(n)} = \pi_{2,k}^{(n)}$ on $[0, t]$ and (16) is obtained. Therefore, assume $\min_{1 \le i \le k-1} \sigma_{\pi_i} \le n^2\gamma t$ and, for $s > 0$ and $1 \le i \le k$, let $N_i(s) = N_i(s, n) := \sup\{ m \ge 0 : h_m(x_n^i)(2) \le n^2\gamma s \}$ and $t_j(x_n^i) := 2\big[ h_j(x_n^i)(2) - h_{j-1}(x_n^i)(2) \big]$; here we note that for the vertex $x_n^k$, we obtain $h_j(x_n^k)$ using the collection $\{U'_w : w \in \mathbb{Z}^2\}$. Taking
\[
W_{n,s} := \bigcap_{\{1 \le i \le k : N_i(s) \ne -\infty\}} \ \bigcap_{j=1}^{N_i(s)+1} \Big\{ t_j(x_n^i) < \frac{\sigma n^\beta}{2} \Big\},
\]

we have

\[
\mathbb{P}(W_{n,s}^c) \le \sum_{i=1}^{k} \sum_{j=1}^{\lfloor n^2\gamma s \rfloor + 1} \mathbb{P}\Big\{ t_j(x_n^i) \ge \frac{\sigma n^\beta}{2} \Big\}
= k\big( \lfloor n^2\gamma s \rfloor + 1 \big)\, \mathbb{P}\Big\{ t_1(x_n^1) \ge \frac{\sigma n^\beta}{2} \Big\} \to 0 \quad \text{as } n \to \infty, \ \text{for } \beta > 0.
\]

By the definition of $t_{n,k}$, for all $i = 1, \dots, k-1$ with $\sigma_{\pi_i^{(n)}} \le s \le t_{n,k}$, we have
\[
\big| \pi_k^{(n)}(s) - \pi_i^{(n)}(s) \big| \ge n^{-\alpha}.
\]
Hence, for $s \le n^2\gamma t_{n,k}$ we have
\[
\min_{\{1 \le i \le k-1 : \sigma_{\pi_i} \le s\}} |\pi_k(s) - \pi_i(s)| \ge \sigma n^{1-\alpha}.
\]
Fix $0 < \beta < 1 - \alpha$. We observe that:

(i) if $t \le t_{n,k}$, then $\min\{ |\pi_k(s) - \pi_i(s)| : 1 \le i \le k-1 \ \text{and} \ \sigma_{\pi_i} \le s \} \ge \sigma n^{1-\alpha}$ for $s \le n^2\gamma t$. Also, since $\sigma n^\beta < \sigma n^{1-\alpha}$, on the event $W_{n,t}$ the path $\pi_k$ stays away from the region $\bigcup_{i=1}^{k-1} \bigcup_{m=0}^{\infty} V\big( h_m(x_n^i),\ h_{m+1}(x_n^i)(2) - h_m(x_n^i)(2) \big)$ (as given in the definition (14) of $U'_w$), and therefore $\pi_k$ agrees with $\tilde\pi_k$ on $[0, n^2\gamma t]$, i.e. $\pi_{1,k}^{(n)}$ and $\pi_{2,k}^{(n)}$ agree on $[0, t]$.

(ii) if $t > t_{n,k}$, then the above argument gives that, on the event $W_{n,t}$, $\pi_k$ agrees with $\tilde\pi_k$ on $[0, n^2\gamma t_{n,k}]$, i.e. $\pi_{1,k}^{(n)}$ and $\pi_{2,k}^{(n)}$ agree on $[0, t_{n,k}]$. Further, since $\pi_k^{(n)}(t_{n,k}) = \tilde\pi_k^{(n)}(t_{n,k})$, by the definition of the $\alpha$-coalescence map we have $\pi_{1,k}^{(n)} = \pi_{2,k}^{(n)}$ on $[t_{n,k}, t]$. Therefore, $\pi_{1,k}^{(n)}$ and $\pi_{2,k}^{(n)}$ agree on $[0, t]$ in this case too.

Finally, $\mathbb{P}(W_{n,t}) \to 1$ as $n \to \infty$, so the statement (16) is established.

Step 3. To complete the verification of condition $(I_1)$ it suffices to show that

\[
d_k\big( (\pi_1^{(n)}, \dots, \pi_{k-1}^{(n)}, \pi_{2,k}^{(n)}),\ (\pi_1^{(n)}, \dots, \pi_{k-1}^{(n)}, \pi_k^{(n)}) \big) = d\big( \pi_{2,k}^{(n)}, \pi_k^{(n)} \big) \stackrel{P}{\longrightarrow} 0 \quad \text{as } n \to \infty;
\]
i.e., we need to prove that, for any $t > 0$,

\[
\sup_{0 \le s \le t} \big| \pi_{2,k}^{(n)}(s) - \pi_k^{(n)}(s) \big| \stackrel{P}{\longrightarrow} 0 \quad \text{as } n \to \infty. \tag{17}
\]
Fix $t > 0$. Again, we investigate the two cases $t \le t_{n,k}$ and $t > t_{n,k}$ separately. When $t \le t_{n,k}$, the two paths $\pi_{2,k}^{(n)}$ and $\pi_k^{(n)}$ agree on $[0, t]$. Thus we just need to investigate the case $t > t_{n,k}$.

Since $\pi_{2,k}^{(n)}$ and $\pi_k^{(n)}$ agree on $[0, t_{n,k}]$, by the non-crossing nature of our paths we have
\[
\sup_{0 \le s \le t} \big| \pi_{2,k}^{(n)}(s) - \pi_k^{(n)}(s) \big| \le \sup_{t_{n,k} \le s \le t} \big| \pi_{i_0}^{(n)}(s) - \pi_k^{(n)}(s) \big|.
\]

We restrict ourselves to the event $W_{n,t_{n,k}}$. On this event, for the determination of the index $i_0$ we only need to know the configuration in $\{ w : w(2) \le \lfloor n^2\gamma t_{n,k} \rfloor + \lfloor \sigma n^\beta/4 \rfloor + 1 \}$. Moreover, our graph is such that the displacement of the first coordinate during a time $s$ is at most $s$. So, taking $n$ large enough that $\lfloor \sigma n^\beta/4 \rfloor \ge 1$, we have
\[
\big| \pi_{i_0}(s) - \pi_k(s) \big| \le \sigma n^{1-\alpha} + \sigma n^\beta \quad \text{for } n^2\gamma t_{n,k} \le s \le \lfloor n^2\gamma t_{n,k} \rfloor + \lfloor \sigma n^\beta/4 \rfloor + 1. \tag{18}
\]
Taking $a_{n,k} := \big( \lfloor n^2\gamma t_{n,k} \rfloor + \lfloor \sigma n^\beta/4 \rfloor + 1 \big)/(n^2\gamma)$, we have
\[
\sup_{t_{n,k} \le s \le a_{n,k}} \big| \pi_{i_0}^{(n)}(s) - \pi_k^{(n)}(s) \big| \le n^{-\alpha} + n^{\beta-1} \to 0 \quad \text{as } n \to \infty.
\]

Without loss of generality, assume that $\pi_{i_0}(s) \le \pi_k(s)$. From (18) we can find $u_n, v_n \in \mathbb{Z}$ such that $u_n < \pi_{i_0}(n^2\gamma a_{n,k})$, $v_n > \pi_k(n^2\gamma a_{n,k})$ and $(v_n - u_n)/n \to 0$. Let $\mathbf{u}_n := (u_n, n^2\gamma a_{n,k})$ and $\mathbf{v}_n := (v_n, n^2\gamma a_{n,k})$. By the non-crossing property of the paths, $\pi_{i_0}^{(n)}$ and $\pi_k^{(n)}$ lie between the paths $\pi_{\mathbf{u}_n}^{(n)}$ and $\pi_{\mathbf{v}_n}^{(n)}$ from $a_{n,k}$ onwards; hence
\[
\sup_{a_{n,k} \le s \le t} \big| \pi_{i_0}^{(n)}(s) - \pi_k^{(n)}(s) \big| \le \sup_{a_{n,k} \le s \le t} \big| \pi_{\mathbf{u}_n}^{(n)}(s) - \pi_{\mathbf{v}_n}^{(n)}(s) \big|. \tag{19}
\]
From Proposition 5.3 of [17], we conclude that

\[
\sup_{a_{n,k} \le s \le t} \big| \pi_{\mathbf{u}_n}^{(n)}(s) - \pi_{\mathbf{v}_n}^{(n)}(s) \big| \stackrel{P}{\longrightarrow} 0 \quad \text{as } n \to \infty,
\]
which, along with (19), establishes (17).

4.3 Verification of condition (B2)

For verifying condition $(B_2)$, we first estimate the expected value of the first collision time of two of the three paths starting from the same time level. Let $x, y, z \in \mathbb{Z}$ with $x < y < z$ and, for $l \ge 0$, let

\[
r_l^{x,y,z} := \min\big\{ h_l((x,0))(2),\ h_l((y,0))(2),\ h_l((z,0))(2) \big\}, \qquad
s_l^{x,y,z} := \max\big\{ h_l((x,0))(2),\ h_l((y,0))(2),\ h_l((z,0))(2) \big\}.
\]
Also, taking

\[
\tau_l^{x,y,z} := \begin{cases} 0 & \text{if } l = 0, \\ \inf\big\{ n > \tau_{l-1}^{x,y,z} : r_n^{x,y,z} = s_n^{x,y,z} \big\} & \text{if } l \ge 1, \end{cases}
\]
let $\sigma_l^{x,y,z} := \tau_l^{x,y,z} - \tau_{l-1}^{x,y,z}$ and $T_l^{x,y,z} := h_{\tau_l^{x,y,z}}((x,0))(2)$. As in Section 2, we have

Proposition 4.6 For all $m \ge 1$, and some positive constants $C_{13}$ and $C_{14}$,
\[
\mathbb{P}\big\{ \sigma_1^{x,y,z} \ge m \big\} \le \mathbb{P}\big\{ T_1^{x,y,z} \ge m \big\} \le C_{13} \exp\{-C_{14} m\}.
\]
Proof. For $k \in \{x, y, z\}$, taking $u_{k,0} = \tilde u_{k,0} := (k, 0)$ and $t_{k,0} := 1$, we define $(u_{k,n}, \tilde u_{k,n}, t_{k,n})$ for $n \ge 1$ as follows:

(1) if $\mathcal{V} \cap [u_{k,n-1}(1) - t_{k,n-1},\ u_{k,n-1}(1) + t_{k,n-1}] \times \{n\} = \emptyset$, then take $u_{k,n} = u_{k,n-1} + (0,1)$, $\tilde u_{k,n} = \tilde u_{k,n-1}$ and $t_{k,n} = t_{k,n-1} + 1$;

(2) if $\mathcal{V} \cap [u_{k,n-1}(1) - t_{k,n-1},\ u_{k,n-1}(1) + t_{k,n-1}] \times \{n\} \ne \emptyset$, then take $u_{k,n} = \tilde u_{k,n} = v$ and $t_{k,n} = 1$, where $v \in \mathcal{V} \cap [u_{k,n-1}(1) - t_{k,n-1},\ u_{k,n-1}(1) + t_{k,n-1}] \times \{n\}$ is such that $U_v \le U_w$ for all $w \in \mathcal{V} \cap [u_{k,n-1}(1) - t_{k,n-1},\ u_{k,n-1}(1) + t_{k,n-1}] \times \{n\}$.

Observe that $T_1^{x,y,z} = \inf\{ n \ge 1 : t_{x,n} = t_{y,n} = t_{z,n} = 1 \}$. For each $n \ge 0$, let

\[
\tilde t_{x,n+1} := \begin{cases} 1 & \text{if } u_{x,n} + (-1, 1) \in \mathcal{V}, \\ 0 & \text{if } u_{x,n} + (-1, 1) \notin \mathcal{V}, \end{cases}
\qquad
\tilde t_{y,n+1} := \begin{cases} 1 & \text{if } u_{y,n} + (0, 1) \in \mathcal{V}, \\ 0 & \text{if } u_{y,n} + (0, 1) \notin \mathcal{V}, \end{cases}
\qquad
\tilde t_{z,n+1} := \begin{cases} 1 & \text{if } u_{z,n} + (1, 1) \in \mathcal{V}, \\ 0 & \text{if } u_{z,n} + (1, 1) \notin \mathcal{V}. \end{cases}
\]

Then $\{ \tilde t_{k,n} : k \in \{x, y, z\},\ n \ge 1 \}$ is a collection of i.i.d. Bernoulli($p$) random variables, and so $B := \inf\{ n \ge 1 : \tilde t_{x,n} \tilde t_{y,n} \tilde t_{z,n} = 1 \}$ has a geometric($p^3$) distribution. Noting that $\sigma_1^{x,y,z} \le T_1^{x,y,z} \le B$, we have the proposition. $\square$

Corollary 4.7 If $B$ is a random variable with a geometric($p^3$) distribution, then for each $n, m \ge 1$,
\[
\mathbb{E}\Big[ \big( T_n^{x,y,z} - T_{n-1}^{x,y,z} \big)^m \,\Big|\, \mathcal{F}_{T_{n-1}^{x,y,z}} \Big] \le \mathbb{E}[B^m] = C_{15}^{(m)},
\]
where $C_{15}^{(m)}$ is some positive constant that depends only on $m$.
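The domination by $B$ above rests on the elementary fact that the first index at which three independent Bernoulli($p$) sequences succeed simultaneously is geometric($p^3$), with mean $p^{-3}$. A quick simulation sketch (the value $p = 0.7$ is an arbitrary illustrative choice):

```python
import random

random.seed(1)

def first_triple_success(p):
    """B = inf{n >= 1 : all three independent Bernoulli(p) trials at step n succeed}."""
    n = 1
    while not all(random.random() < p for _ in range(3)):
        n += 1
    return n

p = 0.7
samples = [first_triple_success(p) for _ in range(200_000)]
mean = sum(samples) / len(samples)
print(mean)  # should be close to p**-3 = 1/0.343, about 2.92
```

Since each step succeeds independently with probability $p^3$, the count of steps until the first simultaneous success is geometric($p^3$), giving all moments of $B$ finite, as Corollary 4.7 uses.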

For $n \ge 0$, let $Z_n^{x,y} := \bar h_{\tau_n^{x,y,z}}((y,0)) - \bar h_{\tau_n^{x,y,z}}((x,0))$ and $Z_n^{y,z} := \bar h_{\tau_n^{x,y,z}}((z,0)) - \bar h_{\tau_n^{x,y,z}}((y,0))$, which are non-negative because of the non-crossing property of our paths. Recall that for $w \in \mathbb{Z}^2$, $\bar w = w(1)$. Hence, $\{ (Z_n^{x,y}, Z_n^{y,z}) : n \ge 0 \}$ constitutes a Markov chain on the state space $\mathcal{S} := \{ (u, v) \in \mathbb{Z}^2 : uv \ge 0 \}$. Taking $\mathcal{S}_0 := \{ (u, v) \in \mathcal{S} : uv = 0 \}$, we employ Theorem 3.1 of Coupier et al. [8] and estimate the expected value of the first regeneration step at which (at least) one pair of the three paths $\pi_{(x,0)}$, $\pi_{(y,0)}$ and $\pi_{(z,0)}$ collide, i.e.,
\[
n_{x,y,z} := \inf\{ n \ge 1 : Z_n^{x,y} Z_n^{y,z} = 0 \}.
\]
Theorem 4.8 There exist some positive constants $C_{16}$ and $C_{17}$ such that
\[
\mathbb{E}[n_{x,y,z}] \le C_{16} + C_{17}(y - x)(z - y).
\]
Proof. For $k \in \{x, y, z\}$ and $n \ge 1$, take $I_k := \bar h_{\tau_1^{x,y,z}}((k,0)) - k$ and $X_n^{(k)} := \bar h_n((k,0)) - \bar h_{n-1}((k,0))$. Since the $X_n^{(k)}$'s are symmetric and $I_k = \sum_{n=1}^{\tau_1^{x,y,z}} X_n^{(k)}$, we have $\mathbb{E}[I_k] = 0$. Also $|I_k| \le T_1^{x,y,z}$, so from Corollary 4.7, $\mathbb{E}[|I_k|^m] \le C_{15}^{(m)}$ for each $m \ge 1$. Using the Cauchy–Schwarz inequality we have
\[
\big| \mathbb{E}\big[ Z_1^{x,y} Z_1^{y,z} \big] - (y-x)(z-y) \big| = \big| \mathbb{E}\big[ (I_y - I_x)(I_z - I_y) \big] \big| \le 4 C_{15}^{(2)}. \tag{20}
\]
Noting that $\mathbb{E}[I_y^2] \ge \mathbb{P}\{ I_y = 1 \} \ge p^3(1-p)^6 =: a$, choose $m_0 \ge 2$ large enough that
\[
\mathbb{P}\big\{ T_1^{x,y,z} \ge m_0/2 \big\} \le a / \big( 6 C_{15}^{(4)} \big).
\]
Let $\mathcal{S}_1 := \{ (u, v) \in \mathcal{S} : 1 \le u \wedge v \le m_0 \}$ and observe that if $(y-x, z-y) \notin \mathcal{S}_1$ then $\mathbb{E}\big[ I_x I_y \mathbf{1}(T_1^{x,y,z} \le m_0/2) \big] = 0$. Using the Cauchy–Schwarz inequality we have
\[
|\mathbb{E}[I_x I_y]| \le \big| \mathbb{E}\big[ I_x I_y \mathbf{1}(T_1^{x,y,z} \le m_0/2) \big] \big| + \mathbb{E}\big[ |I_x||I_y| \mathbf{1}(T_1^{x,y,z} \ge m_0/2) \big] \le a/6.
\]
Similarly we have $|\mathbb{E}[I_y I_z]| \le a/6$ and $|\mathbb{E}[I_x I_z]| \le a/6$. Together with (20) we have
\[
\mathbb{E}\big[ Z_1^{x,y} Z_1^{y,z} \big] - (y-x)(z-y) \le -\frac{a}{2}.
\]
So, for all $x, y, z \in \mathbb{Z}$ with $x < y < z$, the chain $\{(Z_n^{x,y}, Z_n^{y,z})\}$ satisfies the conditions of Theorem 3.1 of [8], which yields the stated bound. $\square$

Consequently, for the first collision time $\nu_{x,y,z} := T^{x,y,z}_{n_{x,y,z}}$ we also have, for some positive constants $C_{18}$ and $C_{19}$,
\[
\mathbb{E}[\nu_{x,y,z}] \le C_{18} + C_{19}(y-x)(z-y).
\]
Proof. Taking $I_n^{x,y,z} := T_n^{x,y,z} - T_{n-1}^{x,y,z}$, from Corollary 4.7,
\[
\mathbb{E}\Big[ \sum_{n=1}^{n_{x,y,z}} I_n^{x,y,z} \Big] = \mathbb{E}\Big[ \sum_{n=1}^{\infty} I_n^{x,y,z}\, \mathbf{1}(n_{x,y,z} \ge n) \Big]
\]

\[
= \sum_{n=1}^{\infty} \mathbb{E}\Big[ \mathbb{E}\big[ I_n^{x,y,z}\, \mathbf{1}(n_{x,y,z} \ge n) \,\big|\, \mathcal{F}_{T_{n-1}^{x,y,z}} \big] \Big]
= \sum_{n=1}^{\infty} \mathbb{E}\Big[ \mathbf{1}(n_{x,y,z} \ge n)\, \mathbb{E}\big[ I_n^{x,y,z} \,\big|\, \mathcal{F}_{T_{n-1}^{x,y,z}} \big] \Big]
\le C_{15}^{(1)}\, \mathbb{E}[n_{x,y,z}],
\]
which together with Theorem 4.8 completes the proof. $\square$

Finally, let $n \in \mathbb{N}$ and $t, \varepsilon > 0$. Using translation invariance and the non-crossing property of our paths, we have
\begin{align*}
\sup_{(a,t_0)\in\mathbb{R}^2} \mathbb{P}\big\{ \eta_{\bar\chi_n(\sigma,\gamma)}(t_0, t; a, a+\varepsilon) \ge 3 \big\}
&\le \sup_{(a,t_0)\in\mathbb{R}^2} \mathbb{P}\big\{ \eta_{\mathcal{X}}\big( \lfloor n^2\gamma t_0 \rfloor,\ \lceil n^2\gamma t + n^2\gamma t_0 \rceil - \lfloor n^2\gamma t_0 \rfloor;\ \lfloor n\sigma a \rfloor - 1,\ \lfloor n\sigma a \rfloor + \lceil n\sigma\varepsilon \rceil + 2 \big) \ge 3 \big\} \\
&\le \mathbb{P}\big\{ \eta_{\mathcal{X}}(0, n^2\gamma t;\ 0, \lceil n\sigma\varepsilon \rceil + 3) \ge 3 \big\} \\
&\le \mathbb{P}\Big\{ \bigcup_{i=0}^{\lceil n\sigma\varepsilon \rceil + 1} \big\{ \nu_{i,\, i+1,\, \lceil n\sigma\varepsilon \rceil + 3} \ge n^2\gamma t \big\} \Big\} \\
&\le \sum_{i=0}^{\lceil n\sigma\varepsilon \rceil + 1} \frac{\mathbb{E}\big[ \nu_{i,\, i+1,\, \lceil n\sigma\varepsilon \rceil + 3} \big]}{n^2\gamma t} \\
&\le \sum_{i=0}^{\lceil n\sigma\varepsilon \rceil + 1} \frac{C_{18} + C_{19}\big( \lceil n\sigma\varepsilon \rceil + 2 - i \big)}{n^2\gamma t} \\
&\le \frac{C_{18}(n\sigma\varepsilon + 2) + C_{19}(n\sigma\varepsilon + 3)(n\sigma\varepsilon + 2)}{n^2\gamma t} \to \frac{C_{19}\sigma^2\varepsilon^2}{\gamma t} \quad \text{as } n \to \infty.
\end{align*}
Therefore,

\[
\lim_{\varepsilon \to 0^+} \frac{1}{\varepsilon} \limsup_{n\to\infty} \sup_{(a,t_0)\in\mathbb{R}^2} \mathbb{P}\big\{ \eta_{\bar\chi_n(\sigma,\gamma)}(t_0, t; a, a+\varepsilon) \ge 3 \big\} = 0.
\]
This verifies $(B_2)$. $\square$

5 Appendix

Proof of Theorem 2.3. Let $Y := \{ V_w : w \in \mathbb{Z}^d,\ w(d) > 0 \}$ be a set of i.i.d. uniform $(0,1)$ random variables, independent of the collection $\{ U_w : w \in \mathbb{Z}^d \}$ used to build the model. For $n \ge 0$, assume that $\mathcal{Z}_n = z = (w_1, w_2, \Delta, \Psi) \in G$. Define another collection of i.i.d. random variables $\tilde Y := \{ \tilde V_w : w \in \mathbb{Z}^d,\ w(d) > r_n \}$ as follows:
\[
\tilde V_w := \begin{cases} \Psi(w) & \text{if } w \in \Delta, \\ V_{w'} & \text{otherwise,} \end{cases}
\]
where $w' \in \mathbb{Z}^d$ is such that $w(j) = w'(j)$ for all $j = 1, \dots, d-1$ and $w(d) = w'(d) + r_n$. Thus $\tilde Y := f(Y, z)$ for some $f : [0,1]^{\mathbb{Z}^d \setminus H(0)} \times G \to [0,1]^{\mathbb{Z}^d \setminus H(r_n)}$. Since at the end of the $n$-th step all vertices in $\mathbb{Z}^d \setminus (\Delta \cup H(r_n))$ are unexplored, we can consider another independent collection of i.i.d. uniform $(0,1)$ random variables, so that, taking $X_n := \{ U_w : w \in \mathbb{Z}^d,\ w(d) > r_n \}$, we have
\[
X_n \,\big|\, \mathcal{Z}_n \stackrel{d}{=} f(Y, \mathcal{Z}_n). \tag{21}
\]
On the other hand, from the definition of our process,

\[
\mathcal{Z}_{n+1} = g_n(\mathcal{Z}_n, X_n), \tag{22}
\]
where $g_n : G \times [0,1]^{\mathbb{Z}^d \setminus H(r_n)} \to G$. From (21) and (22), we have
\[
\mathcal{Z}_{n+1} \,\big|\, \{ \mathcal{Z}_j : j = 0, 1, \dots, n \} \stackrel{d}{=} g_n\big( \mathcal{Z}_n, f(Y, \mathcal{Z}_n) \big),
\]
and hence, by the random mapping representation, $\mathcal{Z}$ is a Markov process [15]. $\square$

Completion of the proof of Theorem 1.1 (ii). We first establish (9). Consider the case $m = 2$. In this case, it is enough to find two open vertices $u, v \in \mathbb{Z}^d$ with $u(d) = v(d)$ such that

\[
\mathbb{P}\big\{ Z_l(u, v) \ne \underbrace{(0, \dots, 0)}_{d-1 \ \text{components}} \ \text{for all } l \ge 0 \big\} > 0.
\]

Let $0 < \varepsilon < 1/3$ and take $n_0 \ge 1$ as in Proposition 3.6. Set $u = (0, 0, \dots, 0) \in \mathbb{Z}^d$ and $v = (n_0, 0, \dots, 0) \in \mathbb{Z}^d$. Using the notation of Proposition 3.6,

\[
\bar v - \bar u \in D_{n_0^{1+\varepsilon}} \setminus D_{n_0^{1-\varepsilon}},
\]

and thus $(u, v) \in A_{n_0, \varepsilon}$. Put $r_j = \sum_{i=1}^{j} \big( n_0^{2^{i-1}} \big)^4$. Since $(0, \dots, 0) \in \mathbb{Z}^{d-1}$ is the absorbing state of the process $\{ Z_l(u, v) : l \ge 0 \}$,
\[
\mathbb{P}\big\{ Z_l(u, v) \ne (0, \dots, 0) \ \text{for all } l \ge 0 \,\big|\, u, v \in \mathcal{V} \big\}
= \lim_{j\to\infty} \mathbb{P}\big\{ Z_{r_j}(u, v) \ne (0, \dots, 0) \,\big|\, u, v \in \mathcal{V} \big\}
\]

\[
\ge \lim_{j\to\infty} \mathbb{P}\big\{ Z_{r_k}(u, v) \in D_{n_0^{2^k(1+\varepsilon)}} \setminus D_{n_0^{2^k(1-\varepsilon)}} \ \text{for all } k = 1, \dots, j \,\big|\, u, v \in \mathcal{V} \big\},
\]
and using the Markov property of $Z_l(u, v)$, translation invariance of our model and Proposition 3.6,

\[
\mathbb{P}\big\{ Z_{r_k}(u, v) \in D_{n_0^{2^k(1+\varepsilon)}} \setminus D_{n_0^{2^k(1-\varepsilon)}} \ \text{for all } k = 1, \dots, j \,\big|\, u, v \in \mathcal{V} \big\}
\]

\[
= \mathbb{P}\big\{ Z_{r_j}(u, v) \in D_{n_0^{2^j(1+\varepsilon)}} \setminus D_{n_0^{2^j(1-\varepsilon)}} \,\big|\, Z_{r_{j-1}}(u, v) \in D_{n_0^{2^{j-1}(1+\varepsilon)}} \setminus D_{n_0^{2^{j-1}(1-\varepsilon)}},\ u, v \in \mathcal{V} \big\}
\]

\[
\times\ \mathbb{P}\big\{ Z_{r_k}(u, v) \in D_{n_0^{2^k(1+\varepsilon)}} \setminus D_{n_0^{2^k(1-\varepsilon)}} \ \text{for all } k = 1, \dots, j-1 \,\big|\, u, v \in \mathcal{V} \big\}
\]

\[
\ge \inf_{(x,y) \in A_{n_0^{2^{j-1}},\, \varepsilon}} \mathbb{P}\big\{ Z_{(n_0^{2^{j-1}})^4}(x, y) \in D_{n_0^{2^j(1+\varepsilon)}} \setminus D_{n_0^{2^j(1-\varepsilon)}} \,\big|\, x, y \in \mathcal{V} \big\}
\]

\[
\times\ \mathbb{P}\big\{ Z_{r_k}(u, v) \in D_{n_0^{2^k(1+\varepsilon)}} \setminus D_{n_0^{2^k(1-\varepsilon)}} \ \text{for all } k = 1, \dots, j-1 \,\big|\, u, v \in \mathcal{V} \big\}
\]
\[
\ge \Big( 1 - C_{11}\big( n_0^{2^{j-1}} \big)^{-\beta} \Big)\, \mathbb{P}\big\{ Z_{r_k}(u, v) \in D_{n_0^{2^k(1+\varepsilon)}} \setminus D_{n_0^{2^k(1-\varepsilon)}} \ \text{for all } k = 1, \dots, j-1 \,\big|\, u, v \in \mathcal{V} \big\}
\ge \prod_{k=1}^{j} \Big( 1 - C_{11}\big( n_0^{2^{k-1}} \big)^{-\beta} \Big),
\]
where the last line is obtained by iteration. Therefore,

\[
\mathbb{P}\{ G \ \text{has at least two distinct trees} \}
\ge \mathbb{P}\big\{ Z_l(u, v) \ne (0, \dots, 0) \ \text{for all } l \ge 0 \,\big|\, u, v \in \mathcal{V} \big\} \times \mathbb{P}\{ u, v \in \mathcal{V} \}
\ge p^2 \prod_{k=1}^{\infty} \Big( 1 - C_{11}\big( n_0^{2^{k-1}} \big)^{-\beta} \Big) > 0.
\]
Now consider $m \ge 2$. Set
\[
u^i = ((i-1) n_1, 0, \dots, 0) \in \mathbb{Z}^d, \qquad i = 1, \dots, m,
\]
where $n_1 \ge 1$ is some constant satisfying $n_1 \ge \max\{ n_0, m^{1/\varepsilon} \}$ and
\[
\prod_{k=1}^{\infty} \Big( 1 - C_{11}\big( n_1^{2^{k-1}} \big)^{-\beta} \Big) > 1 - \delta,
\]
for some $\delta > 0$ such that $m(m-1)\delta/2 < 1$. Note that such a choice of $n_1$ is possible because $\prod_{k=1}^{\infty} \big( 1 - C_{11}( n^{2^{k-1}} )^{-\beta} \big) \to 1$ as $n \to \infty$. Thus, for any $i, j \in \{1, \dots, m\}$ with $i > j$, we have $\bar u^i - \bar u^j \in D_{n_1^{1+\varepsilon}} \setminus D_{n_1^{1-\varepsilon}}$, and so the earlier argument implies that
\[
\mathbb{P}\big\{ Z_l(u^i, u^j) \ne (0, \dots, 0) \ \text{for all } l \ge 0 \,\big|\, u^i, u^j \in \mathcal{V} \big\}
\ge \prod_{k=1}^{\infty} \Big( 1 - C_{11}\big( n_1^{2^{k-1}} \big)^{-\beta} \Big) > 1 - \delta.
\]
Thus,
\[
\mathbb{P}\{ G \ \text{has at least } m \ \text{distinct trees} \}
\ge p^m\, \mathbb{P}\Big\{ \bigcap_{i=2}^{m} \bigcap_{j=1}^{i-1} \big\{ Z_l(u^i, u^j) \ne (0, \dots, 0) \ \text{for all } l \ge 0 \big\} \,\Big|\, u^1, \dots, u^m \in \mathcal{V} \Big\}
> p^m \big( 1 - m(m-1)\delta/2 \big) > 0.
\]
This completes the proof. $\square$

Lemma 5.1 For $0 < \varepsilon < 1/3$, there exist $\alpha = \alpha(\varepsilon) > 0$ and positive constants $C_{20}, C_{21}, C_{22}$ such that, for all $n$ sufficiently large, we have

(a) $\sup_{(u,v)\in A_{n,\varepsilon}} \mathbb{P}\big\{ \bar h^{IND}_{N_{n^4}(u)}(u) - \bar h^{IND}_{N_{n^4}(v)}(v) \notin D_{n^{2(1+\varepsilon)}} \big\} \le C_{20}\, n^{-\alpha}$;

(b) $\sup_{(u,v)\in A_{n,\varepsilon}} \mathbb{P}\big\{ \bar h^{IND}_{N_{n^4}(u)}(u) - \bar h^{IND}_{N_{n^4}(v)}(v) \in D_{n^{2(1-\varepsilon)}} \big\} \le C_{21}\, n^{-\alpha}$;

(c) $\sup_{(u,v)\in A_{n,\varepsilon}} \mathbb{P}\big\{ \bar h^{IND}_{N_j(u)}(u) - \bar h^{IND}_{N_j(v)}(v) \in D_{K \ln n} \ \text{for some } j = 0, 1, \dots, n^4 \big\} \le C_{22}\, n^{-\alpha}$;

where $A_{n,\varepsilon}$ and $K$ are as in Proposition 3.6.

Proof. For $(u, v) \in A_{n,\varepsilon}$ and sufficiently large $n$,
\begin{align*}
\mathbb{P}\big\{ \bar h^{IND}_{N_{n^4}(u)}(u) - \bar h^{IND}_{N_{n^4}(v)}(v) \notin D_{n^{2(1+\varepsilon)}} \big\}
&= \mathbb{P}\Big\{ \sum_{i=1}^{n^4} \psi^u_i - \sum_{i=1}^{n^4} \psi^v_i + \bar u - \bar v \notin D_{n^{2(1+\varepsilon)}} \Big\} \\
&= \mathbb{P}\Big\{ \Big\| \sum_{i=1}^{n^4} (\psi^u_i - \psi^v_i) + \bar u - \bar v \Big\|_1 > n^{2(1+\varepsilon)} \Big\} \\
&\le \mathbb{P}\Big\{ \Big\| \sum_{i=1}^{n^4} (\psi^u_i - \psi^v_i) \Big\|_1 > n^{2(1+\varepsilon)} - \|\bar u - \bar v\|_1 \Big\} \\
&\le \mathbb{P}\Big\{ \Big\| \sum_{i=1}^{n^4} (\psi^u_i - \psi^v_i) \Big\|_1 > n^{2(1+\varepsilon)} - n^{1+\varepsilon} \Big\} \\
&\le \mathbb{P}\Big\{ \Big\| \sum_{i=1}^{n^4} (\psi^u_i - \psi^v_i) \Big\|_1 > n^{2(1+\varepsilon)}/2 \Big\},
\end{align*}

where the fourth line is obtained from the fact that $\|\bar u - \bar v\|_1 \le n^{1+\varepsilon}$. Now, using Proposition 3.2 and Chebyshev's inequality,

\begin{align*}
\mathbb{P}\big\{ \bar h^{IND}_{N_{n^4}(u)}(u) - \bar h^{IND}_{N_{n^4}(v)}(v) \notin D_{n^{2(1+\varepsilon)}} \big\}
&\le \mathbb{P}\Big\{ \bigcup_{j=1}^{d-1} \Big\{ \Big| \sum_{i=1}^{n^4} (\psi^u_i - \psi^v_i)(j) \Big| > \frac{n^{2(1+\varepsilon)}}{2(d-1)} \Big\} \Big\} \\
&\le (d-1)\, \mathbb{P}\Big\{ \Big| \sum_{i=1}^{n^4} (\psi^u_i - \psi^v_i)(1) \Big| > \frac{n^{2(1+\varepsilon)}}{2(d-1)} \Big\} \\
&\le \frac{(d-1)\, n^4\, \mathrm{Var}\big( (\psi^u_1 - \psi^v_1)(1) \big)}{\big( n^{2(1+\varepsilon)}/(2(d-1)) \big)^2}.
\end{align*}
This completes the proof of (a).

Let $(u, v) \in A_{n,\varepsilon}$. Since $0 < \varepsilon < 1/3$, we are sure that $\|\bar u - \bar v\|_1 \le n^{2(1-\varepsilon)}$. Thus,
\begin{align*}
\mathbb{P}\big\{ \bar h^{IND}_{N_{n^4}(u)}(u) - \bar h^{IND}_{N_{n^4}(v)}(v) \in D_{n^{2(1-\varepsilon)}} \big\}
&= \mathbb{P}\Big\{ \Big\| \sum_{i=1}^{n^4} (\psi^u_i - \psi^v_i) + \bar u - \bar v \Big\|_1 \le n^{2(1-\varepsilon)} \Big\} \\
&\le \mathbb{P}\Big\{ \Big\| \sum_{i=1}^{n^4} (\psi^u_i - \psi^v_i) \Big\|_1 \le n^{2(1-\varepsilon)} + \|\bar u - \bar v\|_1 \Big\}
\end{align*}

\begin{align*}
&\le \mathbb{P}\Big\{ \Big\| \sum_{i=1}^{n^4} (\psi^u_i - \psi^v_i) \Big\|_1 \le 2 n^{2(1-\varepsilon)} \Big\} \\
&\le \mathbb{P}\Big\{ \bigcup_{j=1}^{d-1} \Big\{ \Big| \sum_{i=1}^{n^4} (\psi^u_i - \psi^v_i)(j) \Big| \le \frac{2 n^{2(1-\varepsilon)}}{d-1} \Big\} \Big\} \\
&\le (d-1)\, \mathbb{P}\Big\{ \frac{ \big| \sum_{i=1}^{n^4} (\psi^u_i - \psi^v_i)(1) \big| }{n^2} \le \frac{2 n^{-2\varepsilon}}{d-1} \Big\}. \tag{23}
\end{align*}

If $N$ denotes a standard normal random variable and $\sigma := \sqrt{ \mathrm{Var}\big( (\psi^u_1 - \psi^v_1)(1) \big) }$, then, by the central limit theorem,
\[
\frac{ \sum_{i=1}^{n^4} (\psi^u_i - \psi^v_i)(1) }{n^2 \sigma} \stackrel{d}{\longrightarrow} N \quad \text{as } n \to \infty.
\]
Also,

\[
\mathbb{P}\Big\{ |N| \le \frac{2 n^{-2\varepsilon}}{(d-1)\sigma} \Big\} \le \frac{4 n^{-2\varepsilon}}{(d-1)\sigma\sqrt{2\pi}}. \tag{24}
\]
Now, using Berry–Esseen bounds (see Corollary 4 on page 300 of [6]),

\begin{align*}
&\Big| \mathbb{P}\Big\{ \frac{ \sum_{i=1}^{n^4} (\psi^u_i - \psi^v_i)(1) }{n^2\sigma} \le \frac{2 n^{-2\varepsilon}}{(d-1)\sigma} \Big\} - \mathbb{P}\Big\{ |N| \le \frac{2 n^{-2\varepsilon}}{(d-1)\sigma} \Big\} \Big| \\
&\le \Big| \mathbb{P}\Big\{ \frac{ \sum_{i=1}^{n^4} (\psi^u_i - \psi^v_i)(1) }{n^2\sigma} \le \frac{2 n^{-2\varepsilon}}{(d-1)\sigma} \Big\} - \mathbb{P}\Big\{ N \le \frac{2 n^{-2\varepsilon}}{(d-1)\sigma} \Big\} \Big| \\
&\quad + \Big| \mathbb{P}\Big\{ \frac{ \sum_{i=1}^{n^4} (\psi^u_i - \psi^v_i)(1) }{n^2\sigma} < -\frac{2 n^{-2\varepsilon}}{(d-1)\sigma} \Big\} - \mathbb{P}\Big\{ N < -\frac{2 n^{-2\varepsilon}}{(d-1)\sigma} \Big\} \Big| \\
&\le 2 \sup_{x\in\mathbb{R}} \Big| \mathbb{P}\Big\{ \frac{ \sum_{i=1}^{n^4} (\psi^u_i - \psi^v_i)(1) }{n^2\sigma} < x \Big\} - \mathbb{P}\{ N \le x \} \Big| \\
&\le C_{23}\, \frac{ \mathbb{E}\big[ \big| (\psi^u_1 - \psi^v_1)(1) \big|^3 \big] }{ n^2 \sigma^3 }, \tag{25}
\end{align*}
for some positive constant $C_{23}$. Finally, (23), (24) and (25) conclude (b).

For any $(u, v) \in A_{n,\varepsilon}$,
\begin{align*}
&\mathbb{P}\big\{ \bar h^{IND}_{N_j(u)}(u) - \bar h^{IND}_{N_j(v)}(v) \in D_{K\ln n} \ \text{for some } j = 0, 1, \dots, n^4 \big\} \\
&= \mathbb{P}\Big\{ \sum_{i=1}^{j} (\psi^u_i - \psi^v_i) + \bar u - \bar v \in D_{K\ln n} \ \text{for some } j = 0, 1, \dots, n^4 \Big\} \\
&\le \mathbb{P}\Big\{ \sum_{i=1}^{j} (\psi^u_i - \psi^v_i) \in \bar v - \bar u + D_{K\ln n} \ \text{for some } j \ge 0 \Big\} \\
&= \mathbb{P}\Big\{ \bigcup_{z \in \bar v - \bar u + D_{K\ln n}} \Big\{ \sum_{i=1}^{j} (\psi^u_i - \psi^v_i) = z \ \text{for some } j \ge 0 \Big\} \Big\} \\
&\le \sum_{z \in \bar v - \bar u + D_{K\ln n}} \mathbb{P}\Big\{ \sum_{i=1}^{j} (\psi^u_i - \psi^v_i) = z \ \text{for some } j \ge 0 \Big\},
\end{align*}
where $\sum_{i=1}^{0} = 0 \in \mathbb{Z}^{d-1}$. Now, without loss of generality, suppose $d = 3$. Using Proposition 3.2 and a similar approach to the proof of its part (c), $\{ \sum_{i=1}^{j} (\psi^u_i - \psi^v_i) : j \ge 0 \}$ is an aperiodic, isotropic, symmetric random walk whose steps are i.i.d. with mean 0 and variance–covariance matrix $\sigma^2 I$, where $I$ is an identity

matrix of order 2, and $\sum_{x\in\mathbb{Z}^2} \|x\|_2^2\, \mathbb{P}\{ \psi^u_1 - \psi^v_1 = x \} < \infty$. Hence, we can employ Proposition 1 on page 308 of [18] and conclude
\[
\lim_{\|z\|_2 \to \infty} \|z\|_2\, \mathbb{P}\Big\{ \sum_{i=1}^{j} (\psi^u_i - \psi^v_i) = z \ \text{for some } j \ge 0 \Big\} \le (2\pi\sigma^2)^{-1}.
\]
Since $\bar v - \bar u \in D_{n^{1+\varepsilon}} \setminus D_{n^{1-\varepsilon}}$, for any $z \in \bar v - \bar u + D_{K\ln n}$ and sufficiently large $n$ we must have $\|z\|_2 \ge \|z\|_1/3 > n^{1-\varepsilon}/6$. Therefore,
\[
\mathbb{P}\Big\{ \sum_{i=1}^{j} (\psi^u_i - \psi^v_i) = z \ \text{for some } j \ge 0 \Big\} = O\big( n^{-(1-\varepsilon)} \big) \quad \text{as } n \to \infty,
\]
and so, for $n$ sufficiently large and some constant $C_{24} > 0$,

\begin{align*}
\mathbb{P}\big\{ \bar h^{IND}_{N_j(u)}(u) - \bar h^{IND}_{N_j(v)}(v) \in D_{K\ln n} \ \text{for some } j = 0, 1, \dots, n^4 \big\}
&\le C_{24}\, (K \ln n + 1)^{2d}\, n^{-(1-\varepsilon)} \\
&\le C_{24}\, n^{-(1 - 3\varepsilon/2)}.
\end{align*}
This proves (c). $\square$

Finally, the proof of Theorem 1.1(iii) follows exactly as in Section 4 of [4].

Verification of condition $(B_1)$. Let $y^1, \dots, y^k$ be a finite sequence of arbitrary vectors in $\mathbb{R}^2$, and for any $i \in \{1, \dots, k\}$, let $y^i_n \in \mathbb{Z}^2$ be such that
\[
\Big( \frac{y^i_n(1)}{n\sigma}, \frac{y^i_n(2)}{n^2\gamma} \Big) \to y^i \quad \text{as } n \to \infty.
\]
Again, we have

\[
\big( \pi^{(n)}_{y^1_n}, \dots, \pi^{(n)}_{y^k_n} \big) \stackrel{d}{\longrightarrow} (W^{y^1}, \dots, W^{y^k}) \quad \text{as } n \to \infty. \tag{26}
\]
Now fix $t > 0$. By translation invariance, for any $\varepsilon > 0$ and $n \in \mathbb{N}$ we have
\[
\sup_{(a,t_0)\in\mathbb{R}^2} \mathbb{P}\big\{ \eta_{\bar\chi_n(\sigma,\gamma)}(t_0, t; a, a+\varepsilon) \ge 2 \big\} \le \mathbb{P}\big\{ \eta_{\mathcal{X}}(0, n^2\gamma t;\ 0, \lceil n\sigma\varepsilon \rceil + 3) \ge 2 \big\}.
\]
Moreover, we can find $x_{n,0}, x_{n,\varepsilon} \in \mathbb{Z}$ so that

(a) $x_{n,0} \le 0 \le \lceil n\sigma\varepsilon \rceil + 3 \le x_{n,\varepsilon}$,

(b) $(x_{n,0}/(n\sigma), 0) \to \mathbf{0}$ as $n \to \infty$,

(c) $(x_{n,\varepsilon}/(n\sigma), 0) \to (\varepsilon, 0)$ as $n \to \infty$.

Let $\mathbf{v}_{n,1} := (x_{n,0}, 0)$ and $\mathbf{v}_{n,2} := (x_{n,\varepsilon}, 0)$. By the non-crossing property of our paths, we have

\begin{align*}
\limsup_{n\to\infty} \mathbb{P}\big\{ \eta_{\mathcal{X}}(0, n^2\gamma t;\ 0, \lceil n\sigma\varepsilon \rceil + 3) \ge 2 \big\}
&\le \lim_{n\to\infty} \mathbb{P}\big\{ \pi^{(n)}_{\mathbf{v}_{n,1}}(t) \ne \pi^{(n)}_{\mathbf{v}_{n,2}}(t) \big\} \\
&= \mathbb{P}\big\{ W^{\mathbf{0}}(t) \ne W^{(\varepsilon,0)}(t) \big\} \\
&= \mathbb{P}\Big\{ \inf_{0 \le s \le t} \big( \sqrt{2}\, B^0(s) + \varepsilon \big) > 0 \Big\}
\end{align*}

\begin{align*}
&= \mathbb{P}\Big\{ \sup_{0 \le s \le t} B^0(s) < \varepsilon/\sqrt{2} \Big\} \\
&= 1 - \mathbb{P}\Big\{ \sup_{0 \le s \le t} B^0(s) \ge \varepsilon/\sqrt{2} \Big\} \\
&= 1 - 2\, \mathbb{P}\big\{ B^0(t) \ge \varepsilon/\sqrt{2} \big\} \\
&= 2\Phi\big( \varepsilon/\sqrt{2t} \big) - 1,
\end{align*}
where the second line is obtained from (26) and the continuous mapping theorem, and the sixth line is obtained from the reflection principle for Brownian motion. Therefore,
\[
\limsup_{n\to\infty} \sup_{(a,t_0)\in\mathbb{R}^2} \mathbb{P}\big\{ \eta_{\bar\chi_n(\sigma,\gamma)}(t_0, t; a, a+\varepsilon) \ge 2 \big\}
\le \limsup_{n\to\infty} \mathbb{P}\big\{ \eta_{\mathcal{X}}(0, n^2\gamma t;\ 0, \lceil n\sigma\varepsilon \rceil + 3) \ge 2 \big\}
\le 2\Phi\big( \varepsilon/\sqrt{2t} \big) - 1 \to 0 \quad \text{as } \varepsilon \to 0^+.
\]

This verifies (B1). 
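The reflection-principle identity used above, $\mathbb{P}\{ \sup_{0 \le s \le t} B^0(s) \ge a \} = 2\, \mathbb{P}\{ B^0(t) \ge a \} = 2\big( 1 - \Phi(a/\sqrt{t}) \big)$, can be sanity-checked by simulating random-walk approximations of Brownian motion (the step count, level $a$, and horizon $t$ below are arbitrary illustrative choices):

```python
import math
import random

random.seed(2)

def running_max_exceeds(a, t, steps=400):
    """One random-walk approximation of B on [0, t]; True if the running max exceeds a."""
    dt = t / steps
    b, m = 0.0, 0.0
    for _ in range(steps):
        b += random.gauss(0.0, math.sqrt(dt))
        m = max(m, b)
    return m >= a

a, t, trials = 1.0, 1.0, 10_000
freq = sum(running_max_exceeds(a, t) for _ in range(trials)) / trials

# reflection principle: P(sup_{s<=t} B(s) >= a) = 2 * (1 - Phi(a / sqrt(t)))
phi = 0.5 * (1 + math.erf(a / math.sqrt(2 * t)))
print(freq, 2 * (1 - phi))
```

The empirical frequency slightly undershoots the exact value because the discrete walk can overshoot the level between grid points; the gap shrinks as the number of steps grows.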

Acknowledgements

We are deeply grateful to Professor Rahul Roy for suggesting the topic of this article. The first author highly appreciates the Indian Statistical Institute, Delhi Centre, for the financial support and kind hospitality during her visits to the Institute. She also thanks her supervisor, Professor Rahul Roy, for his support and guidance, and Professor Anish Sarkar for many useful discussions.

References

[1] Arratia, R. (1979). Coalescing Brownian motions on the line. Ph.D. Thesis, University of Wisconsin, Madison.

[2] Arratia, R. (circa 1981). Coalescing Brownian motions and the voter model on Z. Unpublished partial manuscript.

[3] Asmussen, S. (2003). Applied probability and queues. Springer, New York.

[4] Athreya, S., Roy, R. and Sarkar, A. (2008). Random directed trees and forest-drainage networks with dependence. Electron. J. Probab. 13, 2160-2189.

[5] Berestycki, N., Garban, C. and Sen, A. (2015). Coalescing Brownian flows: a new approach. Ann. Probab. 43, 3177-3215.

[6] Chow, Y. S. and Teicher, H. (1978). Probability theory: Independence, interchangeability, martingales. Springer, New York.

[7] Coletti, C. and Valle, G. (2014). Convergence to the Brownian web for a generalization of the drainage network model. Ann. Inst. H. Poincaré Probab. Statist. 50, 899-919.

[8] Coupier, D., Saha, K., Sarkar, A. and Tran, V. C. (2020). Collision times of random walks and applications to the Brownian web. Lecture Notes Series, IMS, NUS, Genealogies of Interacting Particle Systems, 38, pp. 267-293.

[9] Coupier, D. and Tran, V. C. (2013). The 2d-directed forest is almost surely a tree. Random Structures and Algorithms 42, 59-72.

[10] Durrett, R. (2010). Probability: Theory and examples. Cambridge Univ. Press, Cambridge.

[11] Ferrari, P. A., Fontes, L. R. G. and Wu, X. Y. (2005). Two-dimensional Poisson trees converge to the Brownian web. Ann. Inst. H. Poincaré Probab. Statist. 41, 851-858.

[12] Fontes, L. R. G., Isopi, M., Newman, C. M. and Ravishankar, K. (2002). The Brownian web. Proc. Nat. Acad. Sciences 99, 15888-15893.

[13] Fontes, L. R. G., Isopi, M., Newman, C. M. and Ravishankar, K. (2004). The Brownian web: Characterization and convergence. Ann. Probab. 32, 2857-2883.

[14] Gangopadhyay, S., Roy, R. and Sarkar, A. (2004). Random oriented trees: A model of drainage networks. Ann. Appl. Probab. 14, 1242-1266.

[15] Levin, D. A., Peres, Y. and Wilmer, E. L. (2009). Markov chains and mixing times. Amer. Math. Soc., Providence, RI.

[16] Rodriguez-Iturbe, I. and Rinaldo, A. (1997).
Fractal river basins: Chance and self-organization. Cambridge Univ. Press, New York.

[17] Roy, R., Saha, K. and Sarkar, A. (2016). Random directed forest and the Brownian web. Ann. Inst. H. Poincaré Probab. Statist. 52, 1106-1143.

[18] Spitzer, F. (1964). Principles of random walk. Van Nostrand, Princeton, NJ.

[19] Sun, R. and Swart, J. M. (2008). The Brownian net. Ann. Probab. 36, 1153- 1208.

[20] Tóth, B. and Werner, W. (1998). The true self-repelling motion. Probab. Theory Related Fields 111, 375-452.
