arXiv:2011.00323v1 [math.PR] 31 Oct 2020

A drainage network with dependence and the Brownian web

Azadeh Parvaneh∗ and Afshin Parvardeh†

November 3, 2020

Abstract

We introduce a non-Markovian system of coalescing non-simple random walks on the integer lattice Z^d in which walkers move upwardly in the 45° light cones. At first, we examine the geometry of its graph. We show that, almost surely, the graph consists of just one tree for d = 2, 3 and infinitely many disjoint trees for d ≥ 4. In addition, there is no bi-infinite path in the graph almost surely for d ≥ 2. Then, we prove that for d = 2, under a suitable diffusive scaling, this system converges in distribution to the Brownian web.

Keywords: Directed Spanning Trees, Directed Spanning Forest, Markov Chain, Scaling Limit, Invariance Principle, Brownian Web.

∗ Postal address: Department of Statistics, Faculty of Mathematics and Statistics, University of Isfahan, Isfahan 81746-73441, Iran. Email: azadeh.parvaneh@sci.ui.ac.ir
† Postal address: Department of Statistics, Faculty of Mathematics and Statistics, University of Isfahan, Isfahan 81746-73441, Iran. Email: a.parvardeh@sci.ui.ac.ir

1 Introduction

Random directed spanning trees and their scaling limits to the Brownian web have attracted lots of attention in recent years. Interest in these graphs arose because of their connection to drainage networks in geology (see Rodriguez-Iturbe and Rinaldo [16]). For graphs with the vertex set being the integer lattice, models were studied by Gangopadhyay, Roy and Sarkar [14], Coletti and Valle [7], and Athreya, Roy and Sarkar [4], among them Howard's model and a "torch-light" model. More general models, with vertices being the points of a Poisson point process in the continuum space, were studied by Ferrari, Fontes and Wu [11] and Coupier and Tran [9]. These models, besides being of intrinsic interest, are also related to the Brownian web.

In this article, we study a slight variant of a random tree/forest model introduced by Athreya, Roy and Sarkar [4]. Consider the integer lattice Z^d with d ≥ 2. For u = (u_1, ..., u_d) ∈ Z^d and k, h ∈ N, let

(i) m_k(u) := (u_1, ..., u_{d−1}, u_d + k);
(ii) H(u, k) := {v = (v_1, ..., v_d) ∈ Z^d : v_d = u_d + k and ‖v − m_k(u)‖_1 ≤ k}, and as a convention H(u, 0) := ∅;

(iii) V(u, h) := {v : v ∈ H(u, k) for some 1 ≤ k ≤ h}, and as a convention V(u, 0) := ∅;

(iv) V(u) := ⋃_{h=1}^{∞} V(u, h).
Figure 1: The fifteen vertices above u make up the region V(u, 3), and the seven vertices at the top level constitute H(u, 3).
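As a concrete illustration of definitions (i)-(iv), the following short sketch (ours, not from the paper) enumerates the regions H(u, k) and V(u, h) for d = 2 and recovers the counts quoted in Figure 1.

```python
from itertools import product

# A small sketch (ours, not from the paper) enumerating the regions of
# definitions (i)-(iv) for d = 2; it recovers the counts in Figure 1:
# |H(u, 3)| = 7 and |V(u, 3)| = 15.

def m(u, k):
    # m_k(u): u with its last coordinate raised by k
    return u[:-1] + (u[-1] + k,)

def H(u, k):
    # level k of the light cone: vertices v with v_d = u_d + k and
    # ||v - m_k(u)||_1 <= k
    if k == 0:
        return set()
    d = len(u)
    ranges = [range(u[i] - k, u[i] + k + 1) for i in range(d - 1)]
    level = set()
    for head in product(*ranges):
        v = head + (u[-1] + k,)
        if sum(abs(v[i] - m(u, k)[i]) for i in range(d)) <= k:
            level.add(v)
    return level

def V(u, h):
    # union of the levels H(u, k), 1 <= k <= h
    out = set()
    for k in range(1, h + 1):
        out |= H(u, k)
    return out

u = (0, 0)
print(len(H(u, 3)), len(V(u, 3)))  # -> 7 15
```

The levels H(u, k) sit at distinct heights, so |V(u, 3)| is simply 3 + 5 + 7 = 15.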
Now, let {U_w : w ∈ Z^d} be a collection of i.i.d. uniform (0, 1) random variables. Fix 0 < p < 1 and call a vertex u open if U_u < p; let V := {u ∈ Z^d : U_u < p} denote the random set of open vertices. For u ∈ Z^d, let t(u) := min{h ≥ 1 : V(u, h) ∩ V ≠ ∅} and let h(u) be the open vertex in V(u, t(u)) with the smallest uniform value. This yields the random graph G = (V, E) with E := {⟨u, h(u)⟩ : u ∈ V}, where ⟨u, h(u)⟩ denotes the edge represented by a straight line joining u and h(u).

Our first aim is to study the geometry of the random graph G. In fact, akin to the result of Gangopadhyay, Roy and Sarkar [14], and that of Athreya, Roy and Sarkar [4], we prove the following theorem:
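The ancestor map h(u) can be sketched in code for d = 2. The sketch below is ours, not the paper's; in particular, the tie-break by smallest uniform value among the open vertices on the first non-empty level is our reading of the model (it is the rule used in the proof of Proposition 2.1), and the truncation of the cone is purely a computational device.

```python
import random

# Hypothetical sketch (ours) of the ancestor map h(u) for d = 2: scan the
# cone levels H(u, k), k = 1, 2, ..., and return the open vertex with the
# smallest uniform value on the first level that contains one. The
# tie-break by U_w is our reading of the model, not quoted from the paper.

p = 0.5
U = {}                        # the field {U_w}; unseen sites drawn lazily
rng = random.Random(0)

def uniform(w):
    if w not in U:
        U[w] = rng.random()
    return U[w]

def H(u, k):
    # level k of the cone above u in d = 2
    return [(u[0] + j, u[1] + k) for j in range(-k, k + 1)]

def h(u, max_k=1000):
    for k in range(1, max_k + 1):
        opens = [w for w in H(u, k) if uniform(w) < p]
        if opens:
            return min(opens, key=uniform)  # smallest uniform value wins
    raise RuntimeError("no open vertex in the truncated cone")

# deterministic illustration: close levels 1 and 2 entirely, then reopen
# two vertices on level 2; the one with the smaller uniform value is chosen
for w in H((0, 0), 1) + H((0, 0), 2):
    U[w] = 0.9                # closed
U[(0, 2)] = 0.1               # open
U[(2, 2)] = 0.3               # open, but with a larger uniform value
print(h((0, 0)))              # -> (0, 2)
```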
Theorem 1.1 We have, almost surely,

(i) for d = 2, 3, the random graph G is connected and consists of a single tree.
(ii) for d ≥ 4, the random graph G is disconnected and consists of infinitely many disjoint trees.
(iii) for d ≥ 2, the random graph G contains no bi-infinite path.

Remark 1.2 The random graph described above differs from that of Athreya, Roy and Sarkar [4] in that the mechanism by which edges are constructed is different. In particular, in two dimensions the graph we described is planar, while their graph is almost surely not planar. This distinction carries over to all dimensions, in the sense that while our graph in d dimensions can be embedded in R^d, the same is almost surely not true for the graph constructed in Athreya, Roy and Sarkar [4] (see Proposition 2.1).
Although a priori there is no Markov structure in the graph G, in the next section we exhibit a construction of the graph which brings to the fore its Markovian nature. Having established the Markovian structure, the proof of Theorem 1.1 follows, after some modifications, from that in Gangopadhyay, Roy and Sarkar [14].

We also show that the scaling limit of the graph G in two dimensions is the Brownian web. The Brownian web is a collection of coalescing Brownian motions starting from each space-time point in R². It was introduced by Arratia [1] to study the asymptotic behaviour of the one-dimensional voter model. He studied a process of coalescing one-dimensional Brownian motions starting from every space-time point in R × {0}, and later [2] he generalised the construction to a process of coalescing Brownian motions starting from every space-time point in R². Later, Tóth and Werner [20] described a version of the Brownian web in their study of the true self-repelling motion. The Brownian web we describe here is as in Fontes, Isopi, Newman and Ravishankar [12, 13].

Let R²_c denote the completion of the space-time plane R² with respect to the metric
ρ((x_1, t_1), (x_2, t_2)) := |tanh(t_1) − tanh(t_2)| ∨ |tanh(x_1)/(1 + |t_1|) − tanh(x_2)/(1 + |t_2|)|.  (1)

The topological space R²_c can be identified with the continuous image of [−∞, ∞]² under a map that identifies the line [−∞, ∞] × {∞} with a point (∗, ∞), and the line [−∞, ∞] × {−∞} with a point (∗, −∞).

A path π in R²_c with starting time σ_π ∈ [−∞, ∞] is a mapping π : [σ_π, ∞] → [−∞, ∞] ∪ {∗} such that π(∞) = ∗ and, when σ_π = −∞, π(−∞) = ∗. Also, t ↦ (π(t), t) is a continuous map from [σ_π, ∞] to (R²_c, ρ). We then define Π to be the space of all paths in R²_c with all possible starting times in [−∞, ∞]. The following metric (for π_1, π_2 ∈ Π),
d(π_1, π_2) := |tanh(σ_{π_1}) − tanh(σ_{π_2})| ∨ sup_{t ≥ σ_{π_1} ∧ σ_{π_2}} |tanh(π_1(t ∨ σ_{π_1}))/(1 + |t|) − tanh(π_2(t ∨ σ_{π_2}))/(1 + |t|)|,

makes Π a complete, separable metric space. The metric d is slightly different from the original choice in [13], which is somewhat less natural, as explained in Sun and Swart [19]. Convergence in this metric can be described as locally uniform convergence of paths together with convergence of starting times. Let H be the space of compact subsets of (Π, d) equipped with the Hausdorff metric d_H given by
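For concreteness, the metric d can be evaluated numerically. The sketch below is our illustration, not from the paper: a path is encoded as a pair (starting time, function), and the supremum over t is approximated on a finite grid, which is adequate for the bounded example shown.

```python
import math

# Numerical sketch (ours) of the path metric d: a path is a pair
# (sigma, f); the sup over t >= sigma_1 ^ sigma_2 is approximated on a
# finite grid (the integrand decays like 1/(1 + |t|), so a bounded
# horizon suffices for this example).

def path_dist(p1, p2, T=50.0, steps=5000):
    (s1, f1), (s2, f2) = p1, p2
    start_term = abs(math.tanh(s1) - math.tanh(s2))
    t0 = min(s1, s2)
    sup = 0.0
    for i in range(steps + 1):
        t = t0 + (T - t0) * i / steps
        a = math.tanh(f1(max(t, s1))) / (1 + abs(t))
        b = math.tanh(f2(max(t, s2))) / (1 + abs(t))
        sup = max(sup, abs(a - b))
    return max(start_term, sup)

# two constant paths from time 0 at spatial positions 0 and 1: the sup is
# attained at t = 0, so the distance equals tanh(1)
val = path_dist((0.0, lambda t: 0.0), (0.0, lambda t: 1.0))
print(round(val, 4))  # -> 0.7616
```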
d_H(K_1, K_2) := sup_{π_1 ∈ K_1} inf_{π_2 ∈ K_2} d(π_1, π_2) ∨ sup_{π_2 ∈ K_2} inf_{π_1 ∈ K_1} d(π_1, π_2).
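The two-sided sup-inf structure of d_H is easy to compute for finite sets. In the toy sketch below (ours, purely for illustration) the absolute-value distance on R stands in for the path metric d.

```python
# Toy sketch (ours) of the Hausdorff distance d_H over finite sets, with
# |x - y| on R standing in for the path metric d.

def hausdorff(K1, K2, dist):
    # sup over K1 of the distance to K2, and vice versa; take the max
    a = max(min(dist(x, y) for y in K2) for x in K1)
    b = max(min(dist(x, y) for x in K1) for y in K2)
    return max(a, b)

K1, K2 = [0.0, 1.0], [0.0, 3.0]
print(hausdorff(K1, K2, lambda x, y: abs(x - y)))  # -> 2.0
```

Note that the one-sided quantity alone is not a metric: here the point 1.0 is within distance 1 of K2, but 3.0 is at distance 2 from K1, and d_H takes the larger of the two.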
(H, d_H) is a complete, separable metric space. Let B_H be the Borel σ-algebra on the metric space (H, d_H). The Brownian web W is characterized as follows (Theorem 2.1 of [13]):
Theorem 1.3 There exists an (H, B_H)-valued random variable W whose distribution is uniquely determined by the following properties:
(a) for each deterministic point z ∈ R², there is almost surely a unique path π_z ∈ W;
(b) for a finite set of deterministic points z_1, ..., z_k ∈ R², the collection (π_{z_1}, ..., π_{z_k}) is distributed as coalescing Brownian motions starting from z_1, ..., z_k;
(c) for any countable deterministic dense set D ⊆ R², W is almost surely the closure of {π_z : z ∈ D} in (Π, d).

Theorem 1.3 shows that the collection W is almost surely determined by countably many coalescing Brownian motions. An extensive study has been done to understand the properties of the Brownian web (see [20, 13, 19, 5]). Since we need to develop more notation, the statement and the proof of our result regarding the scaling limit (Theorem 4.2) are relegated to Section 4.
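Property (b) can be visualised by simulation. The sketch below is our illustration, a crude Euler discretization rather than anything from [13]: walkers move independently, and a pair is merged as soon as its ordering would flip within a time step.

```python
import random

# Crude Euler sketch (ours) of coalescing Brownian motions from three
# deterministic starting points: independent Gaussian increments, with a
# pair glued together as soon as its paths cross within a step.

def coalescing_paths(starts, steps=2000, dt=1e-3, seed=1):
    rng = random.Random(seed)
    pos = list(starts)
    history = [list(pos)]
    for _ in range(steps):
        new = [x + rng.gauss(0.0, dt ** 0.5) for x in pos]
        for i in range(len(new) - 1):
            # crossed (or already merged): glue the pair together
            if (pos[i] - pos[i + 1]) * (new[i] - new[i + 1]) <= 0:
                new[i + 1] = new[i]
        pos = new
        history.append(list(pos))
    return history

hist = coalescing_paths([-0.2, 0.0, 0.2])
k = next((i for i, h in enumerate(hist) if h[0] == h[1]), None)
# once two walkers meet they stay together forever
print(k is None or all(h[0] == h[1] for h in hist[k:]))  # -> True
```

The gluing rule makes coalescence absorbing by construction, mirroring the defining feature of the limiting object.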
2 An alternate construction of the graph
First we justify Remark 1.2.
Proposition 2.1 Almost surely, the graph G does not have edges which cross each other.

Proof. Suppose there exist two distinct edges ⟨x, y⟩ and ⟨z, w⟩ which cross each other. Without loss of generality assume that x(d) < y(d), z(d) < w(d) and y(d) ≤ w(d). Since the two edges cross each other, we must have {y, w} ⊆ V(x) ∩ V(z). Thus, y(d) = w(d) and

h(x) = h(z) = y if U_y < U_w, and h(x) = h(z) = w if U_w < U_y,

so the two edges share their upper endpoint rather than cross. This proves the proposition.

Now we proceed to the construction of the graph, which is based on the construction given in Roy, Saha and Sarkar [17]. The proofs of many of the statements we have regarding the construction follow from slight modifications of the argument in [17], and as such we relegate them to the Appendix. We also state the construction of the graph starting from two vertices; the general case of k vertices follows in a similar fashion.

Fix two vertices u, v ∈ Z^d (not necessarily open) with u(d) = v(d). Let h_0(u) := u, h_0(v) := v, ∆_0 := ∅, r_0 = s_0 = u(d), and let Ψ_0 : ∆_0 → [0, 1] be the empty function. Take
1. h_1(u) := h(u), h_1(v) := h(v),

2. r_1 := min{h_1(u)(d), h_1(v)(d)}, s_1 := max{h_1(u)(d), h_1(v)(d)},
3. ∆_1 := V(h_0(u), s_1) \ V(h_0(u), r_1) if h_1(u)(d) ≥ h_1(v)(d), and ∆_1 := V(h_0(v), s_1) \ V(h_0(v), r_1) if h_1(v)(d) ≥ h_1(u)(d),
4. Ψ_1 : ∆_1 → [0, 1] given by Ψ_1(w) := U_w for w ∈ ∆_1, where Ψ_1 is taken to be the empty function if ∆_1 = ∅.

Having defined h_n(u), h_n(v), r_n, s_n, ∆_n and Ψ_n, we obtain
1. h_{n+1}(u) := h(h_n(u)) if h_n(u)(d) = r_n, and h_{n+1}(u) := h_n(u) if h_n(u)(d) > r_n,
h_{n+1}(v) := h(h_n(v)) if h_n(v)(d) = r_n, and h_{n+1}(v) := h_n(v) if h_n(v)(d) > r_n,

2. r_{n+1} := min{h_{n+1}(u)(d), h_{n+1}(v)(d)}, s_{n+1} := max{h_{n+1}(u)(d), h_{n+1}(v)(d)},
3. ∆_{n+1} := V(h_n(u), s_{n+1}) \ V(h_n(u), r_{n+1}) if h_{n+1}(u)(d) ≥ h_{n+1}(v)(d), and ∆_{n+1} := V(h_n(v), s_{n+1}) \ V(h_n(v), r_{n+1}) if h_{n+1}(v)(d) ≥ h_{n+1}(u)(d),
4. Ψ_{n+1} : ∆_{n+1} → [0, 1] given by Ψ_{n+1}(w) := U_w for w ∈ ∆_{n+1}, where Ψ_{n+1} is taken to be the empty function if ∆_{n+1} = ∅.

Remark 2.2 Having constructed Z_n := (h_n(u), h_n(v), ∆_n, Ψ_n) for n ≥ 0, we do not have any information about the region {y ∈ Z^d : y(d) > r_n} except for the trapezoidal region ∆_n. About the trapezoidal region ∆_n, we know that there is at least one vertex w ∈ ∆_n ∩ {y ∈ Z^d : y(d) = s_n} with U_w < p, and that for all vertices z ∈ ∆_n ∩ {y ∈ Z^d : y(d) < s_n} we have U_z ≥ p.
G := {(u, v, ∆, Ψ) : u, v ∈ Z^d, ∆ ∈ D, and Ψ : ∆ → [0, 1] is a mapping with Ψ(w) ≥ p for all w ∈ ∆ with w(d) < max{z(d) : z ∈ ∆}, and Ψ(w) < p for at least one w ∈ ∆ with w(d) = max{z(d) : z ∈ ∆}}.
When ∆_n = ∅, then h_n(u)(d) = h_n(v)(d) and the entire half-space above these two vertices is unexplored. We will exploit this renewal property repeatedly, and so we define τ_0 := 0 and

τ_l = τ_l(u, v) := inf{n > τ_{l−1} : ∆_n = ∅} = inf{n > τ_{l−1} : r_n = s_n}

for l ≥ 1. The total number of steps between the (l−1)-th and l-th renewal of the process is denoted by

σ_l = σ_l(u, v) := τ_l − τ_{l−1},

and the time required for the l-th renewal is denoted by

T_l = T_l(u, v) := h_{τ_l}(u)(d) − u(d) = h_{τ_l}(v)(d) − v(d).

From the renewal nature of the process we have that {σ_l : l ≥ 1} and {T_{l+1} − T_l : l ≥ 0} are collections of independent random variables. The next proposition shows that σ_l and T_{l+1} − T_l have exponentially decaying tail probabilities.

Proposition 2.4 For all m ≥ 1, and some positive constants C_1, C_2, C_3 and C_4, we have P{σ_l ≥ m} ≤ C_1 exp{−C_2 m} and P{T_{l+1} − T_l ≥ m} ≤ C_3 exp{−C_4 m}.

Proof. We will prove the proposition for l = 1. First let n be such that ∆_n ≠ ∅. Without loss of generality suppose h_n(u)(d) < h_n(v)(d), so that, by our construction, h_{n+1}(v) = h_n(v) and h_{n+1}(u)(d) > h_n(u)(d). Consider the two sets of points

m⁻_n(u) := {(h_n(u)(1) − j, ..., h_n(u)(d−1) − j, h_n(u)(d) + j) : j > 0},
m⁺_n(u) := {(h_n(u)(1) + j, ..., h_n(u)(d−1) + j, h_n(u)(d) + j) : j > 0}.

It must be the case that either m⁻_n(u) ∩ ∆_n = ∅ or m⁺_n(u) ∩ ∆_n = ∅. Let m_n(u) denote m⁻_n(u) if m⁻_n(u) ∩ ∆_n = ∅, and otherwise m⁺_n(u). Taking w_1, w_2, ... to be the vertices of Z^d on m_n(u), ordered in terms of their proximity to h_n(u), let
J_{n+1}(u) := inf{j ≥ 1 : w_j ∈ V},

a geometric(p) random variable. Consider an independent collection {J′_n : n ≥ 1} of i.i.d. random variables so that J′_n has a geometric(p) distribution. Define J_{n+1} := max{J_{n+1}(u), J′_{n+1}}. Thus we have
h_{n+1}(v)(d) − h_n(v)(d) = 0,
h_{n+1}(u)(d) − h_n(u)(d) ≤ J_{n+1}.

Next suppose n is such that ∆_n = ∅. Let m̃⁻_n(u), m̃⁺_n(u), m̃⁻_n(v) and m̃⁺_n(v) be the lines obtained by joining the points in m⁻_n(u), m⁺_n(u), m⁻_n(v) and m⁺_n(v) (which are defined as before) respectively. Put
m_n(u) := m⁻_n(u) if m̃⁻_n(u) ∩ m̃⁺_n(v) = ∅, and m_n(u) := m⁺_n(u) if m̃⁺_n(u) ∩ m̃⁻_n(v) = ∅,

and define m_n(v) and J_{n+1}(v) similarly. Let J_{n+1} := max{J_{n+1}(u), J_{n+1}(v)}. Observe that max{h_{n+1}(u)(d) − h_n(u)(d), h_{n+1}(v)(d) − h_n(v)(d)} ≤ J_{n+1}, where {J_n : n ≥ 1} is a sequence of i.i.d. positive integer-valued random variables with exponentially decaying tail probabilities.

Now we define auxiliary random variables M_0 := 0, M_{n+1} := max{M_n, J_{n+1}} − 1 for all n ≥ 0, and τ^M := inf{n ≥ 1 : M_n = 0}. From Lemma 2.6 of Roy, Saha and Sarkar [17] we have

P{τ^M ≥ m} ≤ C_1 exp{−C_2 m} for all m ≥ 1,  (2)

where C_1 and C_2 are positive constants.

We will show that s_n − r_n ≤ M_n for all 0 ≤ n ≤ σ_1, which yields σ_1 ≤ τ^M, and thus, together with (2), completes the proof of the first part of the proposition. First, s_0 − r_0 = M_0 = 0. Also, s_1 − r_1 < J_1. Now suppose that the claim holds for some 1 ≤ n ≤ σ_1. If ∆_{n+1} = ∅, then 0 = s_{n+1} − r_{n+1} ≤ M_{n+1}. Otherwise, without loss of generality suppose h_n(u)(d) < h_n(v)(d). If h_{n+1}(u)(d) < h_{n+1}(v)(d), then r_{n+1} = h_{n+1}(u)(d) ≥ r_n + 1 and s_{n+1} = s_n, so that s_{n+1} − r_{n+1} ≤ s_n − r_n − 1 ≤ M_n − 1 ≤ M_{n+1}; if h_{n+1}(u)(d) ≥ h_{n+1}(v)(d), then s_{n+1} − r_{n+1} ≤ h_n(u)(d) + J_{n+1} − h_n(v)(d) ≤ J_{n+1} − 1 ≤ M_{n+1}. This completes the induction.
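The tail bound (2) is easy to probe by simulation. The following sketch (ours, not from [17]) runs the auxiliary chain M_{n+1} = max{M_n, J_{n+1}} − 1 with geometric(p) input and samples τ^M.

```python
import random

# Simulation sketch (ours) of the auxiliary chain M_0 = 0,
# M_{n+1} = max{M_n, J_{n+1}} - 1 with i.i.d. geometric(p) input, and of
# tau^M = inf{n >= 1 : M_n = 0}, whose tail decays exponentially by
# Lemma 2.6 of Roy, Saha and Sarkar [17].

def geometric(p, rng):
    # number of Bernoulli(p) trials up to and including the first success
    k = 1
    while rng.random() >= p:
        k += 1
    return k

def tau_M(p, rng):
    M, n = 0, 0
    while True:
        n += 1
        M = max(M, geometric(p, rng)) - 1
        if M == 0:
            return n

rng = random.Random(42)
samples = [tau_M(0.5, rng) for _ in range(2000)]
# empirical tail P{tau^M >= m}; it should drop off quickly in m
tail = [sum(s >= m for s in samples) / len(samples) for m in (1, 5, 10)]
print(tail[0])  # -> 1.0
```

With probability p the very first J equals 1 and τ^M = 1, which is why the empirical tail already drops sharply between m = 1 and m = 5.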
We now exploit the Markov renewal process {(h_{τ_l}(u), h_{τ_l}(v)) : l ≥ 0} to obtain a martingale for d = 2. First note that, by translation invariance, {h_{τ_l}(u) − h_{τ_l}(v) : l ≥ 0} is a Markov chain on Z^d, and, since h_{τ_l}(u)(d) = h_{τ_l}(v)(d), taking w̄ := (w(1), ..., w(d−1)) for w = (w(1), ..., w(d)),

{Z_l(u, v) := h̄_{τ_l}(u) − h̄_{τ_l}(v) : l ≥ 0}  (3)

is a Markov chain on Z^{d−1} with 0̄ ∈ Z^{d−1} its only absorbing state.

Theorem 2.5 For d = 2, the process {h̄_{τ_l}(u) = h_{τ_l}(u)(1) : l ≥ 0} is a martingale with respect to the filtration {F_{T_l} : l ≥ 0}, where F_t := σ({U_w : w ∈ Z², w(2) ≤ u(2) + t}).

Proof. By translation invariance, it suffices to prove the theorem for u = 0. To simplify notation, throughout this proof, h_n and h̄_n will denote the vertices h_n(0) and h̄_n(0) respectively. For any n ≥ 0 and m_1, m_2 ∈ Z, noting that h_{τ_l}(2) = T_l, the set
{h_{τ_l} = (m_1, m_2)} ∩ {T_l = n} = ∅ if m_2 ≠ n, and {h_{τ_l} = (m_1, m_2)} ∩ {T_l = n} = {h_{τ_l} = (m_1, n)} if m_2 = n,

belongs to F_n. Thus h_{τ_l} is F_{T_l}-measurable, and consequently h̄_{τ_l} is also F_{T_l}-measurable. The construction of our model ensures that |h(w)(1) − w(1)| ≤ h(w)(2) − w(2) for every w ∈ Z², so, invoking the random variables introduced in the proof of Proposition 2.4, we have
|h̄_{τ_l}| = |Σ_{n=1}^{τ_l} (h_n(1) − h_{n−1}(1))| ≤ Σ_{n=1}^{τ_l} J_n,

so Lemma 2.7 of [17] implies that E[|h̄_{τ_l}|] < ∞ for every l ≥ 0. Finally, for any A ∈ F_{T_l}, and taking G_n := F_{h_n(2)}, we have
E[1(A)(h̄_{τ_{l+1}} − h̄_{τ_l})] = E[1(A) Σ_{i=τ_l+1}^{τ_{l+1}} (h̄_i − h̄_{i−1})]
= Σ_{n=0}^{∞} Σ_{m=1}^{∞} E[1(A) 1(τ_l = n) 1(τ_{l+1} = n+m) Σ_{i=1}^{m} (h̄_{n+i} − h̄_{n+i−1})]
= Σ_{n=0}^{∞} Σ_{m=1}^{∞} E[1(A) 1(τ_l = n) 1(τ_{l+1} ≥ n+m) (h̄_{n+m} − h̄_{n+m−1})]
= Σ_{n=0}^{∞} Σ_{m=1}^{∞} E[ E[1(A) 1(τ_l = n) (1 − 1(τ_{l+1} ≤ n+m−1)) (h̄_{n+m} − h̄_{n+m−1}) | G_{n+m−1}] ].

Noting that 1(A) 1(τ_l = n) [1 − 1(τ_{l+1} ≤ n+m−1)] is G_{n+m−1}-measurable, that h̄_{n+m} − h̄_{n+m−1} is independent of G_{n+m−1}, and that the increments of {h̄_n : n ≥ 0} are symmetric, we have, for the summand in the previous expression,
E[ E[1(A) 1(τ_l = n) (1 − 1(τ_{l+1} ≤ n+m−1)) (h̄_{n+m} − h̄_{n+m−1}) | G_{n+m−1}] ]
= E[1(A) 1(τ_l = n) (1 − 1(τ_{l+1} ≤ n+m−1))] E[h̄_{n+m} − h̄_{n+m−1}] = 0.

Thus E[h̄_{τ_{l+1}} − h̄_{τ_l} | F_{T_l}] = 0 almost surely, which completes the proof of the theorem.

Corollary 2.6 For d = 2, the process {Z_l(u, v) : l ≥ 0}, as defined in (3), is a martingale with respect to the filtration {F_{T_l} : l ≥ 0}.

3 Proof of Theorem 1.1
The proof of Theorem 1.1 is based on the ideas in Gangopadhyay, Roy and Sarkar [14], and Athreya, Roy and Sarkar [4], and as such we present a brief sketch of the modifications required, relegating the proofs to the Appendix.

For d = 2, let u, v ∈ V with u(2) = v(2) and, without loss of generality, assume that u(1) > v(1). Since the paths starting from u and v do not cross each other, the martingale {Z_l(u, v) = h_{τ_l}(u)(1) − h_{τ_l}(v)(1) : l ≥ 0} is non-negative. Therefore, by the martingale convergence theorem, there exists a random variable Z_∞ such that Z_l(u, v) → Z_∞ almost surely as l → ∞. Since the Markov chain {Z_l(u, v) : l ≥ 0} has 0 as its only absorbing state, we must have Z_∞ = 0 almost surely. Hence, almost surely there exists some t ≥ 0 so that Z_l(u, v) = 0 for all l ≥ t.

Next, if u(2) < v(2), by the Borel-Cantelli lemma, with probability 1 we can find two vertices u′, v′ ∈ V so that u′(2) = v′(2) = v(2) and

u′(1) < v(1) − (|u(1) − v(1)| + v(2) − u(2)),
v′(1) > v(1) + (|u(1) − v(1)| + v(2) − u(2)).

By the non-crossing property of our paths, the paths starting from u and v have to lie between the two paths starting from u′ and v′ from time v(2) onwards. Since the paths from u′ and v′ must meet almost surely, so do all paths sandwiched between them. This completes the proof for d = 2.
3.1 Constructing two independent processes

In this subsection, we construct two independent processes that will be used later. Let u, v ∈ Z^d be two distinct vertices with u(d) = v(d), and let {U^u_w : w ∈ Z^d} and {U^v_w : w ∈ Z^d} be two independent collections of i.i.d. uniform (0, 1) random variables. We construct two paths {h^{IND}_n(u) : n ≥ 1} from u and {h^{IND}_n(v) : n ≥ 1} from v, using the collections {U^u_w : w ∈ Z^d} and {U^v_w : w ∈ Z^d} respectively. The independence of the two collections of uniform random variables ensures that the two processes {h^{IND}_n(u) : n ≥ 1} and {h^{IND}_n(v) : n ≥ 1} are independent.

Denoting the l-th simultaneous renewal time of the two independent paths by T^{IND}_l(u, v), note that {T^{IND}_{l+1}(u, v) − T^{IND}_l(u, v) : l ≥ 0} is a sequence of i.i.d. positive integer-valued random variables. For any simultaneous regeneration time T^{IND}_l(u, v), there exist some random variables N_l(u) and N_l(v) in Z⁺, which are functions of both u and v, so that

T^{IND}_l(u, v) = h^{IND}_{N_l(u)}(u)(d) = h^{IND}_{N_l(v)}(v)(d).

Taking
R_n(x) := inf{h^{IND}_k(x)(d) − x(d) : k ≥ 1, h^{IND}_k(x)(d) − x(d) ≥ n} − n, x ∈ {u, v},

we have
T^{IND}_1(u, v) − T^{IND}_0(u, v) := inf{n ≥ 1 : R_n(u) = R_n(v) = 0}.

Now we invoke the following result, which is Lemma 3.2 in [17].
Lemma 3.1 Let {ξ^{(1)}_n : n ≥ 1} and {ξ^{(2)}_n : n ≥ 1} be two independent collections of i.i.d. positive integer-valued random variables with exponentially decaying tail probabilities. Assume min{P(ξ^{(1)}_1 = 1), P(ξ^{(2)}_1 = 1)} > 0. For i = 1, 2 and k ≥ 1, let S^{(i)}_k := Σ_{j=1}^{k} ξ^{(i)}_j and R^{(i)}_n := inf{S^{(i)}_k : k ≥ 1, S^{(i)}_k ≥ n} − n. Taking τ^R := inf{n ≥ 1 : R^{(1)}_n = R^{(2)}_n = 0}, we have

P{τ^R ≥ m} ≤ C_5 exp{−C_6 m}, m ≥ 1,

where C_5 and C_6 are some positive constants, depending only on the distributions of ξ^{(1)}_n and ξ^{(2)}_n.

From the above lemma we have
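A quick simulation (ours, not from [17]) illustrates the lemma in the geometric case: two independent renewal sequences are built, and τ^R is located as the first epoch at which both renew simultaneously, i.e. the first n ≥ 1 with R^{(1)}_n = R^{(2)}_n = 0.

```python
import random

# Simulation sketch (ours) for Lemma 3.1 with geometric(p) increments:
# build two independent renewal point sets {S^(i)_k} and locate
# tau^R = inf{n >= 1 : R^(1)_n = R^(2)_n = 0}, the first epoch at which
# both sequences renew simultaneously.

def geometric(p, rng):
    k = 1
    while rng.random() >= p:
        k += 1
    return k

def renewal_set(p, rng, horizon):
    pts, s = set(), 0
    while s <= horizon:
        s += geometric(p, rng)
        pts.add(s)
    return pts

rng = random.Random(7)
horizon = 100_000
r1 = renewal_set(0.5, rng, horizon)
r2 = renewal_set(0.5, rng, horizon)
common = sorted(n for n in (r1 & r2) if n <= horizon)
tau_R = common[0]             # first simultaneous renewal epoch
print(tau_R in r1 and tau_R in r2)  # -> True
```

By memorylessness, with geometric(p) increments each epoch is a renewal independently with probability p, so simultaneous renewals occur at rate p² and τ^R is typically very small; the assumption min{P(ξ^{(i)}_1 = 1)} > 0 in the lemma plays the analogous role in general.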
P{h^{IND}_{N_{l+1}(u)}(u)(d) − h^{IND}_{N_l(u)}(u)(d) ≥ m} = P{h^{IND}_{N_{l+1}(v)}(v)(d) − h^{IND}_{N_l(v)}(v)(d) ≥ m}
= P{T^{IND}_{l+1}(u, v) − T^{IND}_l(u, v) ≥ m}
= P{T^{IND}_1(u, v) − T^{IND}_0(u, v) ≥ m}
≤ C_7 exp{−C_8 m},  (4)

where C_7 and C_8 are some positive constants, depending only on the distribution of the [h^{IND}_n(u)(d) − h^{IND}_{n−1}(u)(d)]'s.

Now we study the displacement in the first d − 1 coordinates. Recall that, for w = (w(1), ..., w(d)), we denote w̄ = (w(1), ..., w(d−1)). Let
ψ^u_l = ψ^u_l(u, v) := h̄^{IND}_{N_l(u)}(u) − h̄^{IND}_{N_{l−1}(u)}(u) = Σ_{t=N_{l−1}(u)+1}^{N_l(u)} (h̄^{IND}_t(u) − h̄^{IND}_{t−1}(u)),
ψ^v_l = ψ^v_l(u, v) := h̄^{IND}_{N_l(v)}(v) − h̄^{IND}_{N_{l−1}(v)}(v) = Σ_{t=N_{l−1}(v)+1}^{N_l(v)} (h̄^{IND}_t(v) − h̄^{IND}_{t−1}(v)).

Proposition 3.2 The process {(ψ^u_l, ψ^v_l) : l ≥ 1} has the following properties:

(a) It is a collection of i.i.d. random variables taking values in Z^{2(d−1)}, such that ‖ψ^u_l‖_1 + ‖ψ^v_l‖_1 has exponentially decaying tail probabilities for each l ≥ 1.

(b) For all j = 1, ..., d−1 and r ≥ 1, we have P{ψ^u_1(j) = r} = P{ψ^u_1(j) = −r} = P{ψ^v_1(j) = −r} = P{ψ^v_1(j) = r} = P{ψ^u_1(1) = r}.

(c) E[(ψ^u_1(i))^{m_1} (ψ^v_1(j))^{m_2}] is independent of i, j and depends only on m_1, m_2; moreover, if at least one of m_1, m_2 is odd, then E[(ψ^u_1(i))^{m_1} (ψ^v_1(j))^{m_2}] = 0.

Proof. Since {h̄^{IND}_t(u) − h̄^{IND}_{t−1}(u) : t ≥ 1} and {h̄^{IND}_t(v) − h̄^{IND}_{t−1}(v) : t ≥ 1} are two independent sequences of i.i.d. random variables and (ψ^u_l, ψ^v_l) is related only to the l-th renewal, {(ψ^u_l, ψ^v_l) : l ≥ 0} is a collection of i.i.d. random variables. Moreover, using the fact that
‖ψ^u_l‖_1 + ‖ψ^v_l‖_1 ≤ 2(d−1) (h^{IND}_{N_l(u)}(u)(d) − h^{IND}_{N_{l−1}(u)}(u)(d))

and that h^{IND}_{N_l(u)}(u)(d) − h^{IND}_{N_{l−1}(u)}(u)(d) has exponentially decaying tail probabilities, part (a) is established.

From the symmetry in each of their d−1 coordinates and the rotational invariance of (h̄^{IND}_t(u) − h̄^{IND}_{t−1}(u)) and (h̄^{IND}_t(v) − h̄^{IND}_{t−1}(v)), part (b) holds.

The distribution of (h̄^{IND}_t(u)(i) − h̄^{IND}_{t−1}(u)(i), h̄^{IND}_t(v)(j) − h̄^{IND}_{t−1}(v)(j)) is independent of i, j, because we can apply some independent rotations to {U^u_w : w ∈ Z^d} and {U^v_w : w ∈ Z^d} so that the (i, j)-th coordinate after rotation becomes the (1, 1)-th coordinate before rotation, with no change in the distribution. Hence, the distribution of (ψ^u_1(i), ψ^v_1(j)), and thus E[(ψ^u_1(i))^{m_1} (ψ^v_1(j))^{m_2}], are independent of i, j. Further, if we fix the realizations of one family of uniform random variables and reflect the realizations of the other family along some coordinate, the distribution of (h̄^{IND}_t(u)(i) − h̄^{IND}_{t−1}(u)(i), h̄^{IND}_t(v)(j) − h̄^{IND}_{t−1}(v)(j)) does not change. Therefore, (ψ^u_1(i), ψ^v_1(j)) =^d (ψ^u_1(i), −ψ^v_1(j)). This proves (c).
Remark 3.3 Let d = 3. Since Proposition 3.2 shows that all the properties of (ψ^u_1, ψ^v_1) discussed in [17] are also satisfied for our model, the argument in the Appendix of that paper (page 1141) allows us to conclude that if ū − v̄ = x ∈ Z², then
E[‖(ū + ψ^u_1) − (v̄ + ψ^v_1)‖_2^2] − ‖x‖_2^2 = α,  (5)
E[(‖(ū + ψ^u_1) − (v̄ + ψ^v_1)‖_2^2 − ‖x‖_2^2)^2] ≥ 2α ‖x‖_2^2,  (6)
E[(‖(ū + ψ^u_1) − (v̄ + ψ^v_1)‖_2^2 − ‖x‖_2^2)^3] = O(‖x‖_2^2) as ‖x‖_2 → ∞,  (7)

where α is some non-negative constant.
3.2 Proof of Theorem 1.1(i) for dimension 3

In this subsection, we consider two arbitrary distinct open vertices u, v ∈ Z³ and, without loss of generality, we assume u(3) = v(3). We aim to apply the Foster-Lyapunov criterion (see Proposition 5.3 in Chapter I of [3]) to the process {Z_l(u, v) : l ≥ 0}.

Proposition 3.4 (Foster-Lyapunov criterion) An irreducible Markov chain with state space E and stationary transition probability matrix (p_{ij})_{i,j∈E} is recurrent if there exists a function f : E → R such that f(x) → ∞ and

Σ_{k∈E} p_{jk} f(k) ≤ f(j) for j ∈ E_0,

where E_0 is a subset of E such that E \ E_0 is finite.

The Markov chain {Z_l(u, v) : l ≥ 0} is not irreducible, because it has the absorbing state (0, 0). Because of this, we modify the Markov process so that it has the same transition probabilities as {Z_l(u, v) : l ≥ 0} except that, instead of (0, 0) being an absorbing state, it goes to (1, 0) with probability 1. With a slight abuse of notation, we again denote the modified Markov chain by {Z_l(u, v) : l ≥ 0}. Using the Foster-Lyapunov criterion, we show that the Markov chain {Z_l(u, v) : l ≥ 0} is recurrent. This will prove that the graph G is connected.

To apply the Foster-Lyapunov criterion, consider f : Z² → R⁺ given by

f(x) = √(ln(1 + ‖x‖_2^2)).

Also, define a function g : R⁺ → R⁺ by g(t) = √(ln(1 + t)). In this case, f(x) = g(‖x‖_2^2). The fourth derivative of g is non-positive everywhere. Therefore, using Taylor's expansion, we conclude
E[f(Z_1(u, v)) | Z_0(u, v) = x] − f(x)
= E[g(‖Z_1(u, v)‖_2^2) − g(‖x‖_2^2) | Z_0(u, v) = x]
≤ Σ_{k=1}^{3} (g^{(k)}(‖x‖_2^2)/k!) E[(‖Z_1(u, v)‖_2^2 − ‖x‖_2^2)^k | Z_0(u, v) = x].  (8)

We know that Z_1(u, v) is the difference of the first two coordinates of the paths starting from u and v at the first simultaneous regeneration. If the paths starting from u and v were independent until their first simultaneous regeneration time, this difference would have the same distribution as (ū + ψ^u_1) − (v̄ + ψ^v_1), and then we could use the relations (5), (6) and (7) in Remark 3.3. However, we can couple the joint process and the independent process, and obtain a relation between the moments of ‖Z_1(u, v)‖_2^2 − ‖x‖_2^2 and ‖(ū + ψ^u_1) − (v̄ + ψ^v_1)‖_2^2 − ‖x‖_2^2.

Proposition 3.5 For any x ∈ Z² \ {(0, 0)} and k ≥ 1, we have

E[(‖Z_1(u, v)‖_2^2 − ‖x‖_2^2)^k | Z_0(u, v) = x] ≤ E[(‖(ū + ψ^u_1) − (v̄ + ψ^v_1)‖_2^2 − ‖x‖_2^2)^k] + C_9^{(k)} ‖x‖_2^{2k} exp{−C_{10} ‖x‖_2},

where C_9^{(k)} and C_{10} are some positive constants depending on the distribution of the (h^{IND}_n(u)(d) − h^{IND}_{n−1}(u)(d))'s, and C_9^{(k)} depends on k too.

Proof. By the translation invariance of our model, it suffices to prove the result for u = (x(1), x(2), 0) and v = (0, 0, 0). Let r = ‖ū − v̄‖_1/3 = (|x(1)| + |x(2)|)/3. Recall that for constructing the independent paths, we use the collections {U^u_w : w ∈ Z³} and {U^v_w : w ∈ Z³}. We now consider another collection of i.i.d. uniform (0, 1) random variables {U′_w : w ∈ Z³}, independent of all other random variables, and define a new collection of uniform random variables {Ũ_w : w ∈ Z³} by

Ũ_w := U^u_w if w ∈ V(u, ⌊r⌋); Ũ_w := U^v_w if w ∈ V(v, ⌊r⌋); Ũ_w := U′_w otherwise.
Using this collection, we construct the joint path {(h_n(u), h_n(v)) : n ≥ 0} starting from (u, v) until its first simultaneous regeneration time T_1 at step τ_1. We observe that
‖Z_1(u, v)‖_2 ≤ ‖Z_1(u, v) − ū‖_2 + ‖ū‖_2 ≤ ‖Z_1(u, v) − ū‖_1 + ‖x‖_2 ≤ 4 h_{τ_1}(u)(3) + ‖x‖_2,

and

‖ψ^u_1 − ψ^v_1‖_2 ≤ ‖ψ^u_1‖_1 + ‖ψ^v_1‖_1 ≤ 4 h^{IND}_{N_1(u)}(u)(3).

Now, define the event A(r) := {h^{IND}_{N_1(u)}(u)(3) > r}. An argument as in Proposition 2.4 yields that h^{IND}_{N_1(u)}(u)(3), like h_{τ_1}(u)(3), has exponentially decaying tail probabilities which do not depend on u or v. Hence, we have
E[(‖Z_1(u, v)‖_2^2 − ‖x‖_2^2)^k − (‖(ū + ψ^u_1) − (v̄ + ψ^v_1)‖_2^2 − ‖x‖_2^2)^k]
= E[{(‖Z_1(u, v)‖_2^2 − ‖x‖_2^2)^k − (‖(ū + ψ^u_1) − (v̄ + ψ^v_1)‖_2^2 − ‖x‖_2^2)^k} 1(A(r))]
≤ 2^k E[{‖Z_1(u, v)‖_2^{2k} + (2^{2k} + 2) ‖x‖_2^{2k} + 2^{2k} ‖ψ^u_1 − ψ^v_1‖_2^{2k}} 1(A(r))]
≤ 2^{7k} E[{h_{τ_1}^{2k}(u)(3) + ‖x‖_2^{2k} + (h^{IND}_{N_1(u)}(u)(3))^{2k}} 1(A(r))]
≤ 2^{7k} (√(E[h_{τ_1}^{4k}(u)(3)]) + ‖x‖_2^{2k} + √(E[(h^{IND}_{N_1(u)}(u)(3))^{4k}])) √(P{h^{IND}_{N_1(u)}(u)(3) > r})
≤ 2^{7k} × 3 max{√(E[h_{τ_1}^{4k}(u)(3)]), √(E[(h^{IND}_{N_1(u)}(u)(3))^{4k}])} ‖x‖_2^{2k} √(C_7 exp{−C_8 ‖x‖_2/3}),

where we have used the Cauchy-Schwarz inequality in the penultimate line and inequality (4) in the last line above. This establishes the proposition.

We now return to the relation (8). The above proposition implies that
E[f(Z_1(u, v)) | Z_0(u, v) = x] − f(x)
≤ Σ_{k=1}^{3} (g^{(k)}(‖x‖_2^2)/k!) E[(‖(ū + ψ^u_1) − (v̄ + ψ^v_1)‖_2^2 − ‖x‖_2^2)^k] + Σ_{k=1}^{3} (g^{(k)}(‖x‖_2^2)/k!) C_9^{(k)} ‖x‖_2^{2k} exp{−C_{10} ‖x‖_2}.

For ‖x‖_2 large enough, from (5), (6) and (7) we have

Σ_{k=1}^{3} (g^{(k)}(‖x‖_2^2)/k!) E[(‖(ū + ψ^u_1) − (v̄ + ψ^v_1)‖_2^2 − ‖x‖_2^2)^k] ≤ −α ‖x‖_2^2 [ln(1 + ‖x‖_2^2)]^{−3/2} / [8(1 + ‖x‖_2^2)^2].

Also,
Σ_{k=1}^{3} (g^{(k)}(‖x‖_2^2)/k!) C_9^{(k)} ‖x‖_2^{2k} exp{−C_{10} ‖x‖_2} ≤ 2 max{C_9^{(1)}, C_9^{(3)}} exp{−C_{10} ‖x‖_2}.
Therefore,

E[f(Z_1(u, v)) | Z_0(u, v) = x] − f(x) ≤ 0

for ‖x‖_2 large. This completes the proof of Theorem 1.1(i) for dimension 3.

3.3 Proof of Theorem 1.1(ii)

Let d ≥ 4. We will show that

P{G has at least m distinct trees} = 1, for all m ≥ 2.

By the ergodicity inherent in our model it suffices to show
P{G has at least m distinct trees} > 0, for all m ≥ 2.  (9)

The above relation holds if, with positive probability, there exist m open vertices whose paths occupy pairwise distinct locations at each of their simultaneous regeneration times. We follow the approach of [14].
Proposition 3.6 For 0 < ε < 1/3, there exist some positive constants C_{11}, β = β(ε) and n_0 ≥ 1, such that

inf_{(u,v) ∈ A_{n,ε}} P{Z_{n^4}(u, v) ∈ D_{n^{2(1+ε)}} \ D_{n^{2(1−ε)}} | u, v ∈ V} ≥ 1 − C_{11} n^{−β}

for all n ≥ n_0, where A_{n,ε} := {(u, v) ∈ Z^{2d} : u(d) = v(d), ū − v̄ ∈ D_{n^{1+ε}} \ D_{n^{1−ε}}} and D_r := {w ∈ Z^{d−1} : ‖w‖_1 ≤ r}.

Proof. Fix 0 < ε < 1/3 and suppose n ≥ 1. Consider the independent paths starting from u, v ∈ Z^d with u(d) = v(d), constructed using the collections {U^u_w : w ∈ Z^d} and {U^v_w : w ∈ Z^d} respectively, and define

W_{n,ε}(u, v) := {h̄^{IND}_{N_{n^4}(u)}(u) − h̄^{IND}_{N_{n^4}(v)}(v) ∈ D_{n^{2(1+ε)}} \ D_{n^{2(1−ε)}}, and h̄^{IND}_{N_j(u)}(u) − h̄^{IND}_{N_j(v)}(v) ∉ D_{K ln n} for all j = 0, 1, ..., n^4},

where K is a constant to be specified later. Using the same argument as in Lemma 3.3 of [14], we can show that there exists n_0 such that for all n ≥ n_0,

inf_{(u,v) ∈ A_{n,ε}} P{W_{n,ε}(u, v) | u, v ∈ V} ≥ 1 − C_{12} n^{−α},  (10)

where C_{12} and α = α(ε) are some positive constants (see Lemma 5.1 in the Appendix).
Let r_l(n) := min{⌊K ln n/3⌋, h^{IND}_{N_l(u)}(u)(d) − h^{IND}_{N_{l−1}(u)}(u)(d)}. We consider another independent collection of i.i.d. uniform (0, 1) random variables {U″_w : w ∈ Z^d} and define a new family {Ǔ_w : w ∈ Z^d} as

Ǔ_w := U^u_w if w ∈ ⋃_{l=1}^{n^4} V(h^{IND}_{N_{l−1}(u)}(u), r_l(n)); Ǔ_w := U^v_w if w ∈ ⋃_{l=1}^{n^4} V(h^{IND}_{N_{l−1}(v)}(v), r_l(n)); Ǔ_w := U″_w otherwise.

Note that on the event W_{n,ε}(u, v), we have
[⋃_{l=1}^{n^4} V(h^{IND}_{N_{l−1}(u)}(u), r_l(n))] ∩ [⋃_{l=1}^{n^4} V(h^{IND}_{N_{l−1}(v)}(v), r_l(n))] = ∅.

Let

B_l(n) := {h^{IND}_{N_l(u)}(u)(d) − h^{IND}_{N_{l−1}(u)}(u)(d) <