arXiv:math/0702799v4 [math.PR] 22 Jun 2009 (1.1) ec,sie akdte,drce oye natree. a on polymer directed tree, marked spine, gence, easm hogottepprta,frsome for that, paper the throughout assume We 09 o.3,N.2 742–789 2, No. DOI: 37, Vol. Probability2009, of Annals The

oiinby position epciepstoslk needn oiso h initi the of on. behave—re copies and independent generation), positions—like second child the These respective form line. displa (who the own their on their process generation; of point first a to the correspond origin form children Its origin. rnhn admwl ntera line real the on walk random branching a c n yorpi detail. 742–789 typographic 2, the and by No. published 37, Vol. article 2009, original the Mathematical of of reprint Institute electronic an is This nttt fMteaia Statistics Mathematical of Institute e od n phrases. and words Key ewrite We 1.1. Introduction. 1. M 00sbetclassification. subject 2008. 2000 February AMS revised 2007; April Received 10.1214/08-AOP419 N IETDPLMR NDSREE TREES DISORDERED ON POLYMERS DIRECTED AND rnhn admwl n atnaeconvergence. martingale and walk random Branching OVREC NBACIGRNO WALKS, RANDOM BRANCHING IN CONVERGENCE IIA OIINADCIIA MARTINGALE CRITICAL AND POSITION MINIMAL ed ecie yDriaadSon[ probabili Spohn in part and convergence the Derrida of for by view phenomenon of described we point ready transition particular, the phase In (from tree. a function disordered of a proof on rigorous polymers [ directed Kyprianou of wh and theorem Biggins convergence of 10 martingale question a prove a bran answers also and super-critical walk, one-dimensional dom a in position minimal ntesneo pe lotsr limits. sure almost upper of sense phenomenon the transition in phase this Surprisingly, 817–840]. 20)6961.Ormto ple,frhroe otest the to furthermore, applies, method Our 609–631]. (2005) V | eetbihascn-re lotsr ii hoe o th for theorem limit sure almost second-order a establish We u ( | nvri´ ai IIadUiestePrsVI Universit´e Paris and XIII Universit´e Paris u = .[npriua,frteiiilancestor initial the for particular, [In ). n fa individual an if yYeu uadZa Shi Zhan and Hu Yueyun By rnhn admwl,mnmlpsto,mrigl conve martingale position, minimal walk, random Branching 2009 , hsrpitdffr rmteoiia npagination in original the from differs reprint This . 60J80. in u h naso Probability of Annals The si the in is 1 E ( R | u X .Sait Phys. Statist. J. ntal,april isa the at sits particle a Initially, . | =1 n hgnrto,addnt its denote and generation, th 1 ! > δ lcrn .Probab. J. Electron. 1+ δ 0, ) e δ < ehave we , + lpril.Adso And particle. al disappears eet rmthe from cements hn ran- ching ∞ 51 > e aechildren have ren , , y,al- ty), and 0 (1988) iea give aiet their to lative ition econsider We udy ich V e δ ( − e 0.] = ) > 0, r- 2 Y. HU AND Z. SHI

(1.2) E e−(1+δ+)V (u) + E eδ−V (u) < ∞, ( ) ( ) |uX|=1 |uX|=1 here E denotes expectation with respect to P, the law of the branching . Let us define the (logarithmic) moment generating function

ψ(t) := log E e−tV (u) ∈ (−∞, ∞], t ≥ 0. ( ) |uX|=1 By (1.2), ψ(t) < ∞ for t ∈ [−δ−, 1+ δ+]. Following Biggins and Kyprianou [9], we assume (1.3) ψ(0) > 0, ψ(1) = ψ′(1) = 0. Since the number of particles in each generation forms a Galton–Watson tree, the assumption ψ(0) > 0 in (1.3) says that this Galton–Watson tree is super-critical. In the study of the branching random walk, there is a fundamental mar- tingale, defined as follows:

−V (u) (1.4) Wn := e , n = 0, 1, 2,... := 0 . ∅ ! |uX|=n X Since Wn ≥ 0, it converges . When ψ′(1) < 0, it is proved by Biggins and Kyprianou [7] that there exists a sequence of constants (a ) such that Wn converges in probability to n an a nondegenerate limit which is (strictly) positive upon the survival of the system. This is called the Seneta–Heyde norming in [7] for branching random walk, referring to Seneta [35] and Heyde [22] on the rate of convergence in the classic Kesten–Stigum theorem for Galton–Watson processes. The case ψ′(1) = 0 is more delicate. In this case, it is known (Lyons [29]) that Wn → 0 almost surely. The following question is raised in Biggins and Kyprianou [9]: are there deterministic normalizers (a ) such that Wn n an converges? We aim at answering this question.

Theorem 1.1. Assume (1.1), (1.2) and (1.3). There exists a determin- λn λn istic positive sequence (λn) with 0 < lim infn→∞ n1/2 ≤ lim supn→∞ n1/2 < ∞, such that, conditionally on the system’s survival, λnWn converges in distri- bution to W , with W > 0 a.s. The distribution of W is given in (10.3).

The limit W in Theorem 1.1 turns out to satisfy a functional equation. Such functional equations are known to be closely related to (a discrete ver- sion of) the Kolmogorov–Petrovski–Piscounov (KPP) traveling wave equa- tion; see Kyprianou [25] for more details. MINIMAL POSITION, BRANCHING RANDOM WALKS 3

The almost sure behavior of Wn is described in Theorem 1.3 below. The two theorems together give a clear image of the asymptotics of Wn.

1.2. The minimal position in the branching random walk. A natural question in the study of branching random walks is about inf|u|=n V (u), the position of the leftmost individual in the nth generation. In the liter- ature the concentration (in terms of tightness or even weak convergence) of inf|u|=n V (u) around its median/quantiles has been studied by many authors. See, for example, Bachmann [4] and Bramson and Zeitouni [14], as well as Section 5 of the survey paper by Aldous and Bandyopadhyay [2]. We also mention the recent paper of Lifshits [26], where an exam- ple of a branching random walk is constructed such that inf|u|=n V (u) − median({inf|u|=n V (u)}) is tight but does not converge weakly. We are interested in the asymptotic speed of inf|u|=n V (u). Under assump- tion (1.3), it is known that, conditionally on the system’s survival, 1 (1.5) inf V (u) → 0 a.s., n |u|=n (1.6) inf V (u) → +∞ a.s. |u|=n The “” in (1.5) is a classic result, and can be found in Hammersley [19], Kingman [23] and Biggins [5]. The system’s transience to the right, stated in (1.6), follows from the fact that Wn → 0, a.s. A refinement of (1.5) is obtained by McDiarmid [31]. Under the additional 2 assumption E{( |u|=1 1) } < ∞, it is proved in [31] that, for some constant c < ∞ and conditionally on the system’s survival, 1 P 1 lim sup inf V (u) ≤ c1 a.s. n→∞ log n |u|=n

We intend to determine the exact rate at which inf|u|=n V (u) goes to infinity.

Theorem 1.2. Assume (1.1), (1.2) and (1.3). Conditionally on the sys- tem’s survival, we have 1 3 (1.7) lim sup inf V (u)= a.s., n→∞ log n |u|=n 2 1 1 (1.8) lim inf inf V (u)= a.s., n→∞ log n |u|=n 2 1 3 (1.9) lim inf V (u)= in probability. n→∞ log n |u|=n 2 4 Y. HU AND Z. SHI

Remark. (i) The most interesting part of Theorem 1.2 is (1.7)–(1.8). It reveals the presence of fluctuations of inf|u|=n V (u) on the logarithmic level, which is in contrast with known results of Bramson [13] and Dekking and 1 Host [16] stating that, for a class of branching random walks, log log n inf|u|=n V (u) converges almost surely to a finite and positive constant. (ii) Some brief comments on (1.3) are in order. In general [i.e., without as- suming ψ(1) = ψ′(1) = 0], the law of large numbers (1.5) reads 1 n inf|u|=n V (u) → c, a.s. (conditionally on the system’s survival), where c := inf{a ∈ R : g(a) ≥ 0}, with g(a) := inft≥0{ta + ψ(t)}. If (1.10) t∗ψ′(t∗)= ψ(t∗) for some t∗ ∈ (0, ∞), then the branching random walk associated with the V (u) := t∗V (u)+ ψ(t∗)|u| satisfies (1.3). That is, as long as (1.10) has a solution [which is the case, e.g., if ψ(1) = 0 and ψ′(1) > 0], the study will boilb down to the case (1.3). It is, however, possible that (1.10) has no solution. In such a situation, Theorem 1.2 does not apply. For example, we have already mentioned a class of branching random walks exhibited in Bramson [13] and Dekking and Host [16], for which inf|u|=n V (u) has an exotic log log n behavior. (iii) Under suitable assumptions, Addario–Berry [1] obtains a very precise asymptotic estimate of E[inf|u|=n V (u)], which implies (1.9). (iv) In the case of branching Brownian motion, the analogue of (1.9) was proved by Bramson [12], by means of some powerful explicit analysis.

1.3. Directed polymers on a disordered tree. The following model is bor- rowed from the well-known paper of Derrida and Spohn [17]: Let T be a rooted Cayley tree; we study all self-avoiding walks (= directed polymers) of n steps on T starting from the root. To each edge of the tree is attached a (= potential). We assume that these random variables are independent and identically distributed. For each walk ω, its energy E(ω) is the sum of the potentials of the edges visited by the walk. So the partition function is −βE(ω) Zn := e , ω X where the sum is over all self-avoiding walks of n steps on T, and β> 0 is the inverse temperature. More generally, we take T to be a Galton–Watson tree, and observe that the energy E(ω) corresponds to (the partial sum of) the branching random walk described in the previous sections. The associated partition function becomes −βV (u) (1.11) Wn,β := e , β> 0. |uX|=n MINIMAL POSITION, BRANCHING RANDOM WALKS 5

Clearly, when β = 1, Wn,1 is just the Wn defined in (1.4). ′ If 0 <β< 1, the study of Wn,β boils down to the case ψ (1) < 0, which was investigated by Biggins and Kyprianou [7]. In particular, conditionally on the system’s survival, Wn,β converges almost surely to a (strictly) positive E{Wn,β } random variable. We study the case β ≥ 1 in the present paper.

Theorem 1.3. Assume (1.1), (1.2) and (1.3). Conditionally on the sys- tem’s survival, we have

−1/2+o(1) (1.12) Wn = n a.s.

Theorem 1.4. Assume (1.1), (1.2) and (1.3), and let β > 1. Condi- tionally on the system’s survival, we have log W β (1.13) lim sup n,β = − a.s., n→∞ log n 2 log W 3β (1.14) lim inf n,β = − a.s., n→∞ log n 2 −3β/2+o(1) (1.15) Wn,β = n in probability.

Again, the most interesting part in Theorem 1.4 is (1.13) and (1.14), which describes a new fluctuation phenomenon. Also, there is no phase transition any more for Wn,β at β = 1 from the point of view of upper almost sure limits. The remark on (1.3), stated after Theorem 1.2, applies to Theorems 1.3 and 1.4 as well. An important step in the proof of Theorems 1.3 and 1.4 is to estimate all small moments of Wn and Wn,β, respectively. This is done in the next theorems.

Theorem 1.5. Assume (1.1), (1.2) and (1.3). For any γ ∈ [0, 1), we have

1/2 γ 1/2 γ (1.16) 0 < lim inf E{(n Wn) }≤ lim supE{(n Wn) } < ∞. n→∞ n→∞

Theorem 1.6. Assume (1.1), (1.2) and (1.3), and let β> 1. For any 1 0

r −3rβ/2+o(1) (1.17) E{Wn,β} = n , n →∞. 6 Y. HU AND Z. SHI

The rest of the paper is as follows. In Section 2 we introduce a change-of- measures formula (Proposition 2.1) in terms of spines on marked trees. This formula will be of frequent use throughout the paper. Section 3 contains a few preliminary results of the lower tail probability of the martingale Wn. The proofs of the theorems are organized as follows: • Section 4: upper bound in part (1.8) of Theorem 1.2. • Section 5: Theorem 1.6. • Section 6: Theorem 1.5. • Section 7: Theorem 1.3, as well as parts (1.14) and (1.15) of Theorem 1.4. • Section 8: (the rest of) Theorem 1.2. • Section 9: part (1.13) of Theorem 1.4. • Section 10: Theorem 1.1. Section 4 relies on ideas borrowed from Bramson [12], and does not require the preliminaries in Sections 2 and 3. Sections 5 and 6 are the technical part of the paper, where a common idea is applied in two different situations. Throughout the paper we write q := P{the system’s extinction}∈ [0, 1). The letter c with a subscript denotes finite and (strictly) positive constants. 0 We also use the notation ∅ := 0, ∅ := 1, and 0 := 1. Moreover, we use an an ∼ bn, n →∞, to denote limn→∞ = 1. P Qbn 2. Marked trees and spines. This section is devoted to a change-of- measures result (Proposition 2.1) on marked trees in terms of spines. The material of this section has been presented in the literature in various forms; see, for example, Chauvin, Rouault and Wakolbinger [15], Lyons, Pemantle and Peres [30], Biggins and Kyprianou [8] and Hardy and Harris [20]. There is a one-to-one correspondence between branching random walks and marked trees. Let us first introduce some notation. We label individuals in the branching random walk by their line of descent, so if u = i1 · · · in ∈ U ∅ ∞ N∗ k N∗ := { }∪ k=1( ) (where := {1, 2,...}), then u is the inth child of the i th child of. . . of the i th child of the initial ancestor e. It is sometimes n−1 S 1 convenient to consider an element u ∈ U as a “word” of length |u|, with ∅ corresponding to e. We identify an individual u with its corresponding word. If u, v ∈ U , we denote by uv the concatenated word, with u∅ = ∅u = u. Let U := {(u, V (u)) : u ∈ U ,V : U → R}. Let Ω be Neveu’s space of marked trees, which consists of all the subsets ω of U such that the first component of ω is a tree. [Recall that a tree t is a subset of U satisfying: (i) ∅ ∈ t; (ii) if uj ∈ t for some j ∈ N∗, then u ∈ t; (iii) if u ∈ t, then uj ∈ t if and only if 1 ≤ j ≤ νu(t) for some nonnegative integer νu(t).] MINIMAL POSITION, BRANCHING RANDOM WALKS 7

Let T :Ω → Ω be the identity application. According to Neveu [32], there exists a probability P on Ω such that the law of T under P is the law of the branching random walk described in the Introduction. Let us make a more intuitive presentation. For any ω ∈ Ω, let (2.1) TGW(ω) := the set of individuals ever born in ω, (2.2) T(ω) := {(u, V (u)), u ∈ TGW(ω),V such that (u, V (u)) ∈ ω}. [Of course, T(ω)= ω.] In words, TGW is a Galton–Watson tree, with the population members as the vertices, whereas the marked tree T corresponds to the branching random walk. It is more convenient to write (2.2) in an informal way: T = {(u, V (u)), u ∈ TGW}. For any u ∈ TGW, the shifted Galton–Watson subtree generated by u is TGW U TGW (2.3) u := {x ∈ : ux ∈ }. TGW TGW [By shifted, we mean that u is also rooted at e.] For any x ∈ u , let

(2.4) |x|u := |ux| − |u|,

(2.5) Vu(x) := V (ux) − V (u).

As such, |x|u stands for the (relative) generation of x as a vertex of the TGW TGW Galton–Watson tree u , and (Vu(x),x ∈ u ) the branching random walk which corresponds to the shifted marked subtree T TGW u := {(x, Vu(x)),x ∈ u }. GW Let Fn := σ{(u, V (u)), u ∈ T , |u|≤ n}, which is the sigma-field in- duced by the first n generations of the branching random walk. Let F∞ be the sigma-field induced by the whole branching random walk. Assume now ψ(0) > 0 and ψ(1) = 0. Let Q be a probability on Ω such that, for any n ≥ 1,

(2.6) Q|Fn := Wn • P|Fn . (n) GW Fix n ≥ 1. Let wn be a random variable taking values in {u ∈ T , |u| = n} such that, for any |u| = n, −V (u) (n) e (2.7) Q{wn = u|F∞} = . Wn (n) (n) (n) (n) (n) We write Je, wn K = {e =: w0 , w1 , w2 , . . ., wn } for the shortest path in TGW (n) (n) relating the root e to wn , with |wk | = k for any 1 ≤ k ≤ n. For any individual u ∈ TGW \ {e}, let ←−u be the parent of u in TGW, and ∆V (u) := V (u) − V (←−u ). 8 Y. HU AND Z. SHI

For 1 ≤ k ≤ n, we write I (n) TGW ←− (n) (n) (2.8) k := {u ∈ : |u| = k, u = wk−1, u 6= wk }. I (n) (n) (n) In words, k is the set of children of wk−1 except wk or, equivalently, the (n) set of the brothers of wk , and is possibly empty. Finally, let us introduce the following sigma-field:

G (n) (n) I (n) (2.9) n := σ δ∆V (x),V (wk ), wk , k , 1 ≤ k ≤ n , ( I (n) ) x∈Xk where δ denotes the Dirac measure. The promised change-of-measures result is as follows. For any marked tree T, we define its truncation Tn at level n by Tn := {(x, V (x)),x ∈ TGW, |x|≤ n}; see Figure 1.

Proposition 2.1. Assume ψ(0) > 0 and ψ(1) = 0, and fix n ≥ 1. Under probability Q, (n) (i) the random variables ( I (n) δ∆V (x), ∆V (wk )), 1 ≤ k ≤ n, are x∈ k P (1) i.i.d., distributed as ( I (1) δ∆V (x), ∆V (w1 )); x∈ 1 n−|x| (ii) conditionally onP Gn, the truncated shifted marked subtrees Tx , for n I (n) Tn−|x| x ∈ k=1 k , are independent; the conditional distribution of x (for n I (n) G any Sx ∈ k=1 k ) under Q, given n, is identical to the distribution of Tn−|x| P underS .

Throughout the paper, let ((Si,σi), i ≥ 1) be such that (Si − Si−1,σi), for i ≥ 1 (with S0 = 0), are i.i.d. random vectors under Q and distributed as (1) I (1) (V (w1 ), # 1 ).

Corollary 2.2. Assume ψ(0) > 0 and ψ(1) = 0, and fix n ≥ 1. (n) I (n) (i) Under Q, ((V (wk ), # k ), 1 ≤ k ≤ n) is distributed as ((Sk,σk), (n) 1 ≤ k ≤ n). In particular, under Q, (V (wk ), 1 ≤ k ≤ n) is distributed as (Sk, 1 ≤ k ≤ n). (ii) For any measurable function F : R → R+,

−V (u) (2.10) EQ{F (S1)} = E e F (V (u)) . ( ) |uX|=1

In particular, we have EQ{S1} = 0 under (1.2) and (1.3). MINIMAL POSITION, BRANCHING RANDOM WALKS 9

n−|x| n−|y| n−|z| Fig. 1. Spine; The truncated shifted subtrees Tx ,Ty , Tz ,... are actually rooted at e. Corollary 2.2 follows immediately from Proposition 2.1, and can be found in several papers (e.g., Biggins and Kyprianou [9]). We present two collections of probability estimates for (Sn) and for (V (u), |u| = 1), respectively. They are simple consequences of Proposition 2.1, and will be of frequent use in the rest of the paper.

Corollary 2.3. Assume (1.2) and (1.3). Then

aS1 (2.11) EQ{e } < ∞ ∀|a|≤ c2, x2 Q{|S |≥ x}≤ 2exp −c min x, n 3 n (2.12)    ∀n ≥ 1, ∀x ≥ 0,

c4 (2.13) Q min Sk > 0 ∼ , n →∞, 1≤k≤n 1/2   n 1/2 b min S (2.14) supn EQ{e 0≤i≤n i } < ∞ ∀b ≥ 0, n≥1 where c2 := min{δ+, 1+ δ−}. Furthermore, for any C ≥ c> 0, we have

−(c3C−1) Q max |Sj − Sk|≥ C log n ≤ 2cn log n 0≤j,k≤n,|j−k|≤c logn (2.15)   ∀n ≥ 2. 10 Y. HU AND Z. SHI

Corollary 2.4. Assume (1.1), (1.2) and (1.3). Let 0 < a ≤ 1. Then

ρ(a) −aV (u) (2.16) EQ e < ∞, ( ! ) |uX|=1

−c6x (2.17) Q sup |V (u)|≥ x ≤ c5e ∀x ≥ 0, |u|=1  with ρ(a) := δδ+ , where δ and δ are the constants in (1.1) and (1.2), 1+aδ+δ+ + respectively.

aS Proof of Corollary 2.3. By Corollary 2.2 (ii), EQ{e 1 } = (a−1)V (u) E{ |u|=1 e }, which, according to (1.2), is finite as long as |a|≤ c2. This proves (2.11). P Once we have the exponential integrability in (2.11) for (Sn), standard probability estimates for sums of i.i.d. random variables yield (2.12), (2.13) and (2.14); see Petrov [34]’s Theorem 2.7, Bingham [10] and Kozlov [24]’s Theorem A, respectively. To check (2.15), we observe that the probability term on the left-hand side of (2.15) is bounded by 0≤j

Proof of Corollary 2.4. Write ρ := ρ(a). We have −aV (u) ρ ρ ρ EQ{( |u|=1 e ) } = EQ{W1,a} = E{W1,aW1,1}. Let N := |u|=1 1. By a/(1+δ ) H¨older’s inequality, W ≤ W + N (1−a+δ+)/(1+δ+), whereas W ≤ P 1,a 1,1+δ+ P 1,1 1/(1+δ ) W + N δ+/(1+δ+). Therefore, by means of another application of H¨older’s 1,1+δ+ ρ (1+aρ)/(1+δ+ ) 1+δ (δ+−aρ)/(1+δ+) inequality, E{W1,aW1,1}≤ [E(W1,1+δ+ )] [E(N )] , which is finite [by (1.2) and (1.1)]. This implies (2.16). To prove (2.17), we write A := {sup|u|=1 |V (u)|≥ x}. By Chebyshev’s −c8x c8|V (u)| inequality, P(A) ≤ c7e , where c7 := E( |u|=1 e ) < ∞ as long as −V (u) 0 < c8 ≤ min{δ−, 1+ δ+} [by (1.2)]. Thus,PQ(A)= E{ |u|=1 e 1A}≤ ρ(1)/[1+ρ(1)] −V (u) 1+ρ(1) 1/(1+ρ(1)) c9[P(A)] , where c9 := [E{( |u|=1 e ) P}] < ∞. Now c8ρ(1)  (2.17) follows from (2.16), with c6 :=P1+ρ(1) .

3. Preliminary: small values of Wn. This preliminary section is devoted to the study of the small values of Wn. Throughout the section, we assume (1.1), (1.2) and (1.3). We define two important events: (3.1) S := {the system’s ultimate survival},

(3.2) Sn := {the system’s survival after n generations} = {Wn > 0}. MINIMAL POSITION, BRANCHING RANDOM WALKS 11

Clearly, S ⊂ Sn. Recall (see, e.g., Harris [21], page 16) that, for some con- stant c10 and all n ≥ 1, −c10n (3.3) P{Sn \ S }≤ e . Here is the main result of the section.

Proposition 3.1. Assume (1.1), (1.2) and (1.3). For any ε> 0, there exists ϑ> 0 such that, for all sufficiently large n, 1/2 −ε −ϑ (3.4) P{n Wn

The proof of Proposition 3.1 relies on Neveu’s multiplicative martingale. Recall that under assumption (1.3), there exists a nonnegative random vari- able ξ∗, with P{ξ∗ > 0} > 0, such that ∗ law ∗ −V (u) (3.5) ξ = ξue , |uX|=1 ∗ ∗ law where, given {(u, V (u)), |u| = 1}, ξu are independent copies of ξ , and “ = ” stands for identity in distribution. Moreover, there is uniqueness of the dis- tribution of ξ∗ up to a scale change (see Liu [27]); in the rest of the paper ∗ −ξ∗ 1 we take the version of ξ as the unique one satisfying E{e } = 2 . Let us introduce the Laplace transform of ξ∗: ∗ (3.6) ϕ∗(t) := E{e−tξ }, t ≥ 0. Let ∗ ∗ −V (u) (3.7) Wn := ϕ (e ), n ≥ 1. |uY|=n ∗ The process (Wn ,n ≥ 1) is also a martingale (Liu [27]). Following Neveu ∗ [33], we call Wn an associated “multiplicative martingale.” ∗ The martingale Wn being bounded, it converges almost surely (when ∗ n →∞) to, say, W∞. Let us recall from Liu [27] (see also Kyprianou [25]) that, for some c∗ > 0,

1 law ∗ (3.8) log ∗ = ξ , W∞

1 ∗ 1 (3.9) log ∗ ∼ c t log , t → 0. ϕ (t)  t  We first prove the following lemma:

Lemma 3.2. Assume (1.1), (1.2) and (1.3). There exist κ> 0 and a0 ≥ 1 such that ∗ a ∗ −κ (3.10) E{(W∞) |W∞ < 1}≤ a , ∀a ≥ a0,

∗ a −κ −c10n (3.11) E{(Wn ) 1Sn }≤ a + e , ∀n ≥ 1, ∀a ≥ a0. 12 Y. HU AND Z. SHI

Proof. We are grateful to John Biggins for fixing a mistake in the original proof. We first prove (3.10). In view of (3.8), it suffices to show that −aξ∗ ∗ −κ (3.12) E{e |ξ > 0}≤ a , a ≥ a0.

Let q ∈ [0, 1) be the system’s extinction probability. Let N := |u|=1 1. It is well known for Galton–Watson trees that q is the unique solution of P E(qN )= q (for q ∈ [0, 1)); see, for example, Harris [21], page 7. By (3.5), ∗ ∗ −V (u) ∗ ∗ ϕ (t)= E{ |u|=1 ϕ (te )}. Therefore, by (3.6), P{ξ = 0} = ϕ (∞)= ∗ −V (u) limt→∞ E{ Q|u|=1 ϕ (te )}, which, by dominated convergence, is = ∗ N ∗ N ∗ ∗ E{(ϕ (∞))Q } = E{(P{ξ = 0}) }. Since P{ξ = 0} < 1, this yields P{ξ = 0} = q. Following Biggins and Grey [6], we note that, for any t ≥ 0, ∗ ∗ E{e−tξ } = q + (1 − q)E{e−tξ |ξ∗ > 0}.

∗ Let ξ be a random variable such that E{e−tξ} = E{e−tξ |ξ∗ > 0} for any t ≥ 0. Let Y be a random variable independent of everything else, such that bP{Y = 0} = q = 1 − P{Y = 1}. Then ξ∗b and Y ξ have the same law ∗ −V (u) and, by (3.5), so do ξ and |u|=1 e Yuξu, where, given {u, |u| = 1}, b (Yu, ξu) are independent copiesP of (Y, ξ), independent of {V (u), |u| = 1}. −V (u) b Since { |u|=1 e Yuξu > 0} = { |u|=1 Yu > 0}, this leads to b b P −PV (u) −tξ b−t e Yuξu E{e } = E e |u|=1 Yu > 0 , t ≥ 0. ( ) P |u|=1 b b X

Let ϕ(t) := E{e−tξ}, t ≥ 0. Then for any t ≥ 0 and c> 0,

b−V (u) −c Nc ϕ(t)=bE ϕ(te Yu) Yu > 0 ≤ E [ϕ(te )] Yu > 0 , ( ) ( ) |uY|=1 |uX|=1 |uX|=1

b b b where Nc := |u|=1 1{Yu=1,|V (u)|≤c}. By monotone convergence, limc→∞ E{Nc| Y > 0} = E{ Y | Y > 0} > 1 [because P{ Y ≥ |u|=1 u P |u|=1 u |u|=1 u |u|=1 u 2} > 0 by assumption (1.3)]. We can therefore choose and fix a constant c> 0 P P P P Nc such that E{Nc| |u|=1 Yu > 0} > 1. By writing f(s) := E{s | |u|=1 Yu > 0}, we have P b P ϕ(t) ≤ f(ϕ(te−c)), ∀t ≥ 0. Iterating the inequality yields that, for any t ≥ 0 and any n ≥ 1, b b b −nc nc E{e−tξ}≤ f (n)(E{e−te ξ}), that is, E{e−te ξ}≤ f (n)(E{e−tξ}), (3.13) b b b b b b MINIMAL POSITION, BRANCHING RANDOM WALKS 13 where f (n) denotes the nth iterate of f. It is well known for Galton–Watson −n trees (Athreya and Ney [3], Section I.11) that, for any s ∈ [0, 1), limn→∞ γ × (n) b b ′ f (s) converges to a finite limit, with γ := (f) (0) ≤ P{ |u|=1 Yu = 1| |u|=1 Yu > 0} < 1. Therefore, (3.13) yields (3.12), and thus (3.10). b b ∗ a P It remains to check (3.11). Let a ≥ 1. Since ((Wn ) ,n ≥ 0) is a bounded P ∗ a ∗ a ∗ submartingale, E{(Wn ) 1Sn }≤ E{(W∞) 1Sn }. Recall that W∞ ≤ 1; thus, ∗ a ∗ a S S E{(Wn ) 1Sn }≤ E{(W∞) 1S } + P{ n \ }.

S S −c10n ∗ a S By (3.3), P{ n \ }≤ e . To estimate E{(W∞) 1S }, we identify ∗ S c ∗ with {W∞ < 1}: on the one hand, ⊂ {Wn = 1, for all sufficiently large n}⊂ ∗ ∗ ∗ {W∞ = 1}; on the other hand, by (3.8), P{W∞ < 1} = P{ξ > 0} = 1 − q = S S ∗ ∗ a P( ). Therefore, = {W∞ < 1}, P-a.s. Consequently, E{(W∞) 1S } = ∗ a −κ E 1 ∗ {(W∞) {W∞<1}}, which, according to (3.10), is bounded by a , for a ≥ a0. Lemma 3.2 is proved. 

We are now ready for the proof of Proposition 3.1.

∗ 1 Proof of Proposition 3.1. Let c11 > 0 be such that P{ξ ≤ c11}≥ 2 . ∗ ∗ −tξ −c11t ∗ 1 −c11t 1 Then ϕ (t)= E{e }≥ e P{ξ ≤ c11}≥ 2 e and, thus, log( ϕ∗(t) ) ≤ c11t + log 2. Together with (3.9), this yields, on the event Sn, 1 1 log ∗ = log ∗ −V (u) Wn ϕ (e )   |uX|=n   −V (u) −V (u) ≤ 1{V (u)≥1}c12V (u)e + 1{V (u)<1}(c11e + log 2). |uX|=n |uX|=n −V (u) S Since Wn = |u|=n e , we obtain, on n, for any λ ≥ 1, P 1 1 −V (u) (3.14) log ∗ ≤ c13λWn + c12 {V (u)≥λ}V (u)e , Wn   |uX|=n where c13 := c11 + c12 + e log 2. Note that c12 and c13 do not depend on λ. Let 0

−V (u) (3.15) + P 1{V (u)≥λ}V (u)e ≥ y Sn ( ) |u|=n X 1 2 =: RHS(3.15) + RHS(3.15), with obvious notation. 14 Y. HU AND Z. SHI

Recall that P(Sn) ≥ P(S ) = 1 − q. By Chebyshev’s inequality, c14 1 c ∗ 1/y e ∗ 1/y RHS ≤ e 14 E{(W ) |S }≤ E{(W ) 1S }. (3.15) n n 1 − q n n By (3.11), for n ≥ 1 and 0

1 κ −c10n (3.16) RHS(3.15) ≤ c15(y + e ). 2 To estimate RHS(3.15), we observe that 1 2 P 1 −V (u) RHS(3.15) ≤ {V (u)≥λ}V (u)e ≥ y 1 − q ( ) |uX|=n

1 −V (u) ≤ E 1{V (u)≥λ}V (u)e (1 − q)y ( ) |uX|=n 1 V (u)e−V (u) = EQ 1{V (u)≥λ} (1 − q)y ( Wn ) |uX|=n 1 (n) = EQ{V (wn )1 (n) }. (1 − q)y {V (wn )≥λ} (n) By Corollary 2.2(i), EQ{V (wn )1 (n) } = EQ{Sn1{Sn≥λ}} ≤ {V (wn )≥λ} 2 1/2 1/2 (EQ{Sn}) (Q{Sn ≥ λ}) , which, by (2.12), is bounded by 2 2 λ 2 c17n λ c16n exp(−c3 min{λ, n }). Accordingly, RHS(3.15) ≤ y exp(−c3 min{λ, n }). Together with (3.15) and (3.16), it yields that, for 0

Remark. Under the additional assumption that {u, |u| = 1} contains at least two elements almost surely, it is possible (Liu [28]) to improve (3.10): ∗ a ∗ κ1 E{(W∞) |W∞ < 1}≤ exp{−a } for some κ1 > 0 and all sufficiently large a, from which one can deduce the stronger version of Proposition 3.1: for 1/2 −ε ϑ1 any ε> 0, there exists ϑ1 > 0 such that P{n Wn

We complete this section with the following estimate which will be useful in the proof of Theorem 1.5.

Lemma 3.3. Assume (1.1), (1.2) and (1.3). For any 0

1 Proof. Let x > 1. By Chebyshev’s inequality, P{log( ∗ ) ≥ x} = Wn x ∗ x ∗ −e Wn ∗ P{e Wn ≤ 1}≤ eE{e }. Since Wn is a martingale, it follows from x ∗ x ∗ −e Wn −e W∞ ∗ −x/2 Jensen’s inequality that E{e } ≤ E{e } ≤ P{W∞ ≤ e } + exp(−ex/2). Therefore, 1 (3.18) P log ≥ x ≤ eP{W ∗ ≤ e−x/2} + exp(1 − ex/2). W ∗ ∞   n   ∗ ∞ −ty ∗ 1−E(e−tξ ) On the other hand, by integration by parts, 0 e P(ξ ≥ y) dy = t = 1−ϕ∗(t) 1 1 t , which, according to (3.9), is ≤ c18 log(R t ) for 0

4. Proof of Theorem 1.2: upper bound in (1.8). Assume (1.1), (1.2) and (1.3). This section is devoted to proving the upper bound in (1.8): conditionally on the system’s survival, 1 1 (4.1) lim inf inf V (u) ≤ , a.s. n→∞ log n |u|=n 2 The proof borrows some ideas from Bramson [12]. We fix −∞ 0. Let ℓ1 ≤ ℓ2 ≤ 2ℓ1 be integers; we are interested in the asymptotic case ℓ1 →∞. Consider n ∈ [ℓ1,ℓ2] ∩ Z. Let 0 < c19 < 1 be a constant, and let

1/3 1/3 ε gn(k) := min{c19k , c19(n − k) + a log ℓ1,n }, 0 ≤ k ≤ n.

GW Let Ln be the set of individuals x ∈ T with |x| = n such that

gn(k) ≤ V (xk) ≤ c20k, ∀0 ≤ k ≤ n and a log ℓ1 ≤ V (x) ≤ b log ℓ1, 16 Y. HU AND Z. SHI

GW where x0 := e, x1,...,xn := x are the vertices on the shortest path in T relating the root e and the vertex x, with |xk| = k for any 0 ≤ k ≤ n. We consider the measurable event

ℓ2 L Fℓ1,ℓ2 := {x ∈ n}. n[=ℓ1 |x[|=n

We start by estimating the first moment of #Fℓ1,ℓ2 : E(#Fℓ1,ℓ2 ) = −V (x) ℓ2 e V (x) E{ 1 L }. Since E{ 1 L } = E { e × n=ℓ1 |x|=n {x∈ n} |x|=n {x∈ n} Q |x|=n Wn (n) V (wn ) P1{x∈Ln}} =PEQ{e 1 (n) },P we can apply CorollaryP2.2 to see that {wn ∈Ln}

ℓ2 Sn E(#Fℓ1,ℓ2 )= EQ{e 1{gn(k)≤Sk≤c20k,∀0≤k≤n,a log ℓ1≤Sn≤b log ℓ1}} nX=ℓ1 ℓ2 a ≥ ℓ1Q{gn(k) ≤ Sk ≤ c20k, ∀0 ≤ k ≤ n, a log ℓ1 ≤ Sn ≤ b log ℓ1}. nX=ℓ1 We choose (and fix) the constants c19 and c20 such that Q{c19 1 −(3/2)+o(1) 0. Then, the probability Q{·} on the right-hand side is ℓ1 , for ℓ1 →∞. Accordingly, a−(3/2)+o(1) (4.2) E(#Fℓ1,ℓ2 ) ≥ (ℓ2 − ℓ1 + 1)ℓ1 .

We now proceed to estimate the second moment of #Fℓ1,ℓ2 . By definition,

ℓ2 ℓ2 2 E[(#Fℓ1,ℓ2 ) ]= E 1{x∈Ln,y∈Lm} ( ) nX=ℓ1 mX=ℓ1 |xX|=n |yX|=m

ℓ2 ℓ2

≤ 2 E 1{x∈Ln,y∈Lm} . m=n ( ) nX=ℓ1 X |xX|=n |yX|=m

1 − 3 An easy way to see why 2 should be the correct exponent for the probability is to ≤ ≤ n split the event into three pieces: the first piece involving Sk for 0 k 3 , the second piece n ≤ ≤ 2n 2n ≤ ≤ for 3 k 3 , and the third piece for 3 k n. The probability of the first piece is −(1/2)+o(1) ≤ ≤ n n (it is essentially the probability of Sk being positive for 1 k 3 , because conditionally on this, Sk converges weakly, after a suitable normalization, to a ; see Bolthausen [11]). Similarly, the probability of the third piece is n−(1/2)+o(1). n The second piece essentially says that after 3 steps, the random walk should lie in an interval of length of order log n; this probability is also n−(1/2)+o(1). Putting these pieces − 3 together yields the claimed exponent 2 . For a rigorous proof, the upper bound—not required here—is easier since we can only look at the event that the walk stays positive during n steps (with the same condition upon the random variable Sn), whereas the lower bound needs some tedious but elementary writing, based on the . Similar arguments are used for the random walk (Sk) in several other places in the paper, without further mention. MINIMAL POSITION, BRANCHING RANDOM WALKS 17

We look at the double sum |x|=n |y|=m on the right-hand side. By con- sidering z, the youngest common ancestor of x and y, and writing k := |z|, P P we arrive at n

1{x∈Ln,y∈Lm} = 1{zu∈Ln,zv∈Lm}, |xX|=n |yX|=m kX=0 |zX|=k (Xu,v) TGW where the double sum (u,v) is over u, v ∈ z such that |u|z = n − k and TGW |v|z = m − k and thatP the unique common ancestor of u and v in z is the root. Therefore,

ℓ2 ℓ2 n 2 E[(#Fℓ1,ℓ2 ) ] ≤ 2 E 1{zu∈Ln,zv∈Lm} m=n ( ) nX=ℓ1 X kX=0 |zX|=k (Xu,v)

ℓ2 ℓ2 n =: 2 Λk,n,m. m=n nX=ℓ1 X kX=0 We estimate Λk,n,m according to three different situations. ε First situation: 0 ≤ k ≤ ⌊n ⌋. Let Vz(u) := V (zu) − V (z) as in Section ε 2. We have 0 ≤ gn(k) ≤ V (z) ≤ c20n , and V (zui) ≥ 0 for 0 ≤ i ≤ n − k and V (zun−k) ≤ b log ℓ1, where u0 := e, u1, . . ., un−k are the vertices on the TGW shortest path in z relating the root e and the vertex u, with |ui|z = i ε for any 0 ≤ i ≤ n − k. Therefore, Vz(ui) ≥−c20n for 0 ≤ i ≤ n − k, and Vz(u) ≤ b log ℓ1. Accordingly,

Λk,n,m ≤ E 1{zv∈Lm}Bn−k , ( TGW ) |zX|=k v∈ z X,|v|z=m−k where

ε Bn−k := E 1{V (xi)≥−c20n ,∀0≤i≤n−k,V (x)≤b log ℓ1} ( ) |x|X=n−k (n−k) V (wn−k ) = EQ{e 1 (n−k) ε (n−k) } {V (wi )≥−c20n ,∀0≤i≤n−k,V (wn−k )≤b log ℓ1}

Sn−k ε = EQ{e 1{Si≥−c20n ,∀0≤i≤n−k,Sn−k≤b log ℓ1}} b ε ≤ ℓ1Q{Si ≥−c20n , ∀0 ≤ i ≤ n − k,Sn−k ≤ b log ℓ1} b−(3/2)+ε+o(1) b−(3/2)+2ε ≤ ℓ1 ≤ ℓ1 . Therefore,

b−(3/2)+2ε Λk,n,m ≤ ℓ1 E 1{zv∈Lm} ( TGW ) |zX|=k v∈ z X,|v|z=m−k 18 Y. HU AND Z. SHI

b−(3/2)+2ε = ℓ1 E 1{x∈Lm} ( ) |xX|=m and, thus,

ε ℓ2 ℓ2 ⌊n ⌋ b−(3/2)+2ε ε (4.3) Λk,n,m ≤ ℓ1 (ℓ2 − ℓ1 + 1)(ℓ2 + 1)E(#Fℓ1,ℓ2 ). m=n nX=ℓ1 X kX=0 Second situation: ⌊nε⌋ + 1 ≤ k ≤ min{m − ⌊nε⌋,n}. In this situation, since ε/3 ε/3 V (z) ≥ max{gm(k), gn(k)}≥ c19n , we simply have Vz(u) ≤ b log ℓ1 −c19n . Exactly as in the first situation, we get

Λk,n,m ≤ E 1{x∈Lm} E 1{V (x)≤b log ℓ −c nε/3} . ( ) ( 1 19 ) |xX|=m |x|X=n−k The second E{·} on the right-hand side is

ε/3 Sn−k b −c19n = E {e 1 ε/3 }≤ ℓ e Q {Sn−k≤b log ℓ1−c19n } 1 and, thus,

ε ℓ2 ℓ2 min{m−⌊n ⌋,n} ε/3 b −c19ℓ1 (4.4) Λk,n,m ≤ ℓ1e (ℓ2 − ℓ1 + 1)ℓ2E(#Fℓ1,ℓ2 ). m=n ε nX=ℓ1 X k=⌊Xn ⌋+1 Third and last situation: m − ⌊nε⌋ + 1 ≤ k ≤ n (this situation may hap- ε pen only if m ≤ n + ⌊n ⌋− 1). This time V (z) ≥ gm(k) ≥ a log ℓ1 and, thus, Vz(u) ≤ (b − a) log ℓ1; consequently,

Λk,n,m ≤ E 1{x∈Lm} E 1{V (x)≤(b−a) log ℓ1} ( ) ( ) |xX|=m |x|X=n−k

b−a ≤ ℓ1 E 1{x∈Lm} . ( ) |xX|=m Therefore, in case m ≤ n + ⌊nε⌋− 1,

ε ℓ2 ℓ2 n ℓ2 n+⌊n ⌋−1 ε b−a Λk,n,m ≤ ℓ2ℓ1 E 1{x∈Lm} m=n ε m=n ( ) nX=ℓ1 X k=m−⌊Xn ⌋+1 nX=ℓ1 X |xX|=m

ℓ2 m ε b−a ≤ ℓ2ℓ1 E 1{x∈Lm} ε ( ) mX=ℓ1 n=mX−2⌊m ⌋ |xX|=m 2ε b−a ≤ 2ℓ2 ℓ1 E(#Fℓ1,ℓ2 ). MINIMAL POSITION, BRANCHING RANDOM WALKS 19

Combining this with (4.3) and (4.4), and since

ℓ2 ℓ2 n 2 E[(#Fℓ1,ℓ2 ) ] ≤ 2 Λk,n,m, m=n nX=ℓ1 X kX=0 we obtain 2 E[(#Fℓ1,ℓ2 ) ] b−(3/2)+2ε ε 2 ≤ (2ℓ1 (ℓ2 − ℓ1 + 1)(ℓ2 + 1) [E(#Fℓ1,ℓ2 )] ε/3 b −c19ℓ1 2ε b−a −1 + 2ℓ1e (ℓ2 − ℓ1 + 1)ℓ2 + 4ℓ2 ℓ1 )(E(#Fℓ1,ℓ2 )) . ε/3 b−(3/2)+2ε ε b −c19ℓ b−(3/2)+4ε Since ℓ2 ≤ 2ℓ1, we have 2ℓ1 (ℓ2 + 1) + 2ℓ1e 1 ℓ2 ≤ ℓ1 for all sufficiently large ℓ1. On the other hand, E(#Fℓ1,ℓ2 ) ≥ (ℓ2 − ℓ1 + a−(3/2)−ε 1)ℓ1 by (4.2) (for large ℓ1). Therefore, when ℓ1 is large, we have 2 b−(3/2)+4ε b−a+3ε E[(#Fℓ1,ℓ2 ) ] ℓ1 (ℓ2 − ℓ1 +1)+ ℓ1 2 ≤ a−(3/2)−ε . [E(#Fℓ ,ℓ )] 1 2 (ℓ2 − ℓ1 + 1)ℓ1 [E(#F )]2 ∅ 1 ℓ1,ℓ2 By the Paley–Zygmund inequality, P{Fℓ1,ℓ2 6= }≥ E 2 ; thus, 4 [(#Fℓ1,ℓ2 ) ] 1 (ℓ − ℓ + 1)ℓa−(3/2)−ε P 2 1 1 (4.5) min V (x) ≤ b log ℓ1 ≥ b−(3/2)+4ε . ℓ1≤|x|≤ℓ2 4 b−a+3ε   ℓ1 (ℓ2 − ℓ1 +1)+ ℓ1 Of course, we can make a close to b, and ε close to 0, to see that, for any b ∈ R and ε> 0, all sufficiently large ℓ1 and all ℓ2 ∈ [ℓ1, 2ℓ1] ∩ Z, ℓ − ℓ + 1 P 2 1 (4.6) min V (x) ≤ b log ℓ1 ≥ (3/2)−b+ε . ℓ1≤|x|≤ℓ2 ε   ℓ1(ℓ2 − ℓ1 +1)+ ℓ1 [This is our basic estimate for the minimum of V (x). In Section 5 we are going to apply (4.6) to ℓ2 := ℓ1.] 1 j We now let b> 2 and take the subsequence nj := 2 , j ≥ j0 (with a suf- ficiently large integer j0). By (4.6) (and possibly by changing the value of ε),

−ε P min V (x) ≤ b log nj ≥ nj . nj ≤|x|≤nj+1  2ε Let τj := inf{k :#{u : |u| = k}≥ nj }. Then we have, for j ≥ j0,

P τj < ∞, min V (x) > max V (y)+ b log nj  τj +nj ≤|x|≤τj+nj+1 |y|=τj  2ε ⌊nj ⌋ ≤ P min V (x) > b log nj  nj ≤|x|≤nj+1  2ε −ε ⌊nj ⌋ ≤ (1 − nj ) , 20 Y. HU AND Z. SHI which is summable in j. By the Borel–Cantelli lemma, almost surely for all large j, we have either τj = ∞, or minτj +nj ≤|x|≤τj +nj+1 V (x) ≤ max|y|=τj V (y)+ b log nj. By the well-known law of large numbers for the branching random walk (Hammersley [19], Kingman [23] and Biggins [5]), of which (1.5) was a 1 special case, there exists a constant c21 > 0 such that n max|y|=n V (y) → c21 almost surely upon the system’s survival. In particular, upon survival, max|y|=n V (y) ≤ 2c21n almost surely for all large n. Consequently, upon the system’s survival, almost surely for all large j, we have either τj = ∞, or minτj +nj ≤|x|≤τj+nj+1 V (x) < 2c21τj + b log nj. Recall that the number of particles in each generation forms a Galton– Watson tree, which is super-critical under assumption (1.3) (because m := E #{u:|u|=k} { |u|=1 1} > 1). In particular, conditionally on the system’s survival, mk converges a.s. to a (strictly) positive random variable when k →∞, which P log nj implies τj ∼ 2ε log m a.s. (j →∞). As a consequence, upon the system’s sur- vival, we have, almost surely for all large j,

5εc21 min V (x) ≤ log nj + b log nj. nj ≤|x|≤2nj+1 log m

1 Since b can be as close to 2 as possible, this readily yields (4.1).

5. Proof of Theorem 1.6. Before proving Theorem 1.6, we need three estimates. The first estimate, stated as Proposition 5.1, was proved by McDiarmid 2 [31] under the additional assumption E{( |u|=1 1) } < ∞. P Proposition 5.1. Assume (1.1), (1.2) and (1.3). There exists c22 > 0 such that, for any ε> 0, we can find c23 = c23(ε) > 0 satisfying

(3+ε)/2c22 (5.1) E exp c22 inf V (x) 1Sn ≤ c23n , n ≥ 1.   |x|=n  

Remark. Since Wn ≥ exp[− inf|x|=n V (x)], it follows from (5.1) and H¨older’s inequality that, for any 0 ≤ s < c22 and ε> 0,

1 s/c22 (3+ε)/2s E 1S (5.2) s n ≤ c23 n , n ≥ 1. Wn  This estimate will be useful in the proof of Theorem 1.5 in Section 6.

Proof of Proposition 5.1. In the proof we write, for any k ≥ 0,

V k := inf V (u). |u|=k MINIMAL POSITION, BRANCHING RANDOM WALKS 21

Taking ℓ2 = ℓ1 in (4.6) gives that, for any ε> 0 and all sufficiently large 3 −ε 3 ℓ (say, ℓ ≥ ℓ0), we have P{V ℓ ≤ 2 log ℓ}≥ ℓ ; thus, P{V ℓ > 2 log ℓ}≤ 1 − −ε −ε ℓ ≤ exp(−ℓ ), ∀ℓ ≥ ℓ0. For any r ∈ R and integers k ≥ 1 and n>ℓ ≥ ℓ0, we have 3 P{V n > 2 log ℓ + r} 3 k ≤ P{#{u : |u| = n − ℓ, V (u) ≤ r} < k} + (P{V ℓ > 2 log ℓ}) ≤ P{#{u : |u| = n − ℓ, V (u) ≤ r} < k} + exp(−ℓ−εk).

By Lemma 1 of McDiarmid [31], there exist c24 > 0, c25 > 0 and c26 > 0 c25j −c26j such that, for any j ≥ 1, P{#{u : |u| = j, V (u) ≤ c24j}≤ e }≤ q + e , q being as before the probability of extinction. We choose j := ⌊ r ⌋ and c24 ℓ := n − ⌊ r ⌋ to see that, for all n ≥ ℓ and all 0 ≤ r ≤ c (n − ℓ ), c24 0 24 0

3 −c26⌊r/c24⌋ −ε c25⌊r/c24⌋ P{V n > 2 log n + r}≤ q + e + exp(−n ⌊e ⌋). 3 S c S c S c −c10n Noting that {V n > 2 log n + r}∩ n = n and that P{ n }≥ q − e [see (3.3)], we obtain, for 0 ≤ r ≤ c24(n − ℓ0), 3 S P{V n > 2 log n + r, n} (5.3) ≤ e−c10n + e−c26⌊r/c24⌋ + exp(−n−ε⌊ec25⌊r/c24⌋⌋). This implies that, for any 0 < c < min{ c26 , 2c10 }, there exists a constant 27 c24 c24 c V c c > 0 such that E{e 27 n 1 S }≤ c n 29 , with c := 28 {3/2 log n 0 be as in (1.2), we have e 1Sn ≤ δ−V (u) δ−V (u) |u|=n e . Since ψ(−δ−) := log E{ |u|=n e } < ∞ by (1.2), we can choose and fix c > 0 sufficiently large (in particular, c > c24 ) such that, P 31 P 31 2 for any x ≥ c31, S −δ−xn+ψ(−δ−)n −δ−xn/2 P{V n > xn, n}≤ e ≤ e , ∀n ≥ 1.

δ− Therefore, for any c32 < 2 , we have

c32V n (5.5) sup E{e 1{V >c31n}∩Sn } < ∞. n≥1 n

Finally, (5.3) also implies that, for n ≥ ℓ0, c P V > 24 n, S ≤ e−c10n + e−c26⌊n/2−3/(2c24) log n⌋ n 2 n   + exp(−n−ε⌊ec25⌊n/2−3/(2c24) log n⌋⌋). 22 Y. HU AND Z. SHI

Therefore, for any c < min{ c10 , c26 }, 33 c31 2c31

c33V n supE{e 1{c24/2n

Lemma 5.2. Let X1, X2,...,XN be independent nonnegative random N variables, and let TN := i=1 Xi. For any nonincreasing function F : (0, ∞) → R , we have + P

E{F (TN )1{T >0}}≤ max E{F (Xi)|Xi > 0}. N 1≤i≤N Moreover,

N i−1 E{F (TN )1{TN >0}}≤ b E{F (Xi)1{Xi>0}}, Xi=1 where b := max1≤i≤N P{Xi = 0}.

Proof. Let τ := min{i ≥ 1 : Xi > 0} (with min ∅ := ∞). Then E{F (TN )× N 1{TN >0}} = i=1 E{F (TN )1{τ=i}}. Since F is nonincreasing, we have F (TN )× 1 ≤ F (X )1 = F (X )1 1 . By independence, this {τ=i} Pi {τ=i} i {Xi>0} {Xj =0,∀j

E{F (TN )1{TN >0}}≤ E{F (Xi)1{Xi>0}}P{Xj = 0, ∀j < i}. Xi=1 This yields immediately the second inequality of the lemma, since P{Xj = 0, ∀j < i}≤ bi−1.

To prove the first inequality of the lemma, we observe that E{F (Xi)1{Xi>0}}≤ P{Xi > 0} max1≤k≤N E{F (Xk)|Xk > 0}. Therefore,

N E{F (TN )1{T >0}}≤ max E{F (Xk)|Xk > 0} P{Xi > 0}P{Xj = 0, ∀j < i}. N 1≤k≤N Xi=1 N N The i=1 · · · expression on the right-hand side is = i=1 P{Xi > 0, Xj = 0, N P P ∀j

0}≤ 1. This yieldsP the first inequality of the lemma.  P

(n) (n) To state our third estimate, let w ∈ Je, wn K be a vertex such that (5.6) V (w(n))= min V (u). (n) u∈Je,wn K MINIMAL POSITION, BRANCHING RANDOM WALKS 23

[If there are several such vertices, we choose, say, the oldest.] The following estimate gives a (stochastic) lower bound for 1 under Q outside a “small” Wn,β set. We recall that Wn,β > 0, Q-almost surely (but not necessarily P-almost surely).

Lemma 5.3. Assume (1.1), (1.2) and (1.3). For any K> 0, there exist θ> 0 and n0 < ∞ such that, for any n ≥ n0, any β> 0, and any nonde- creasing function G : (0, ∞) → (0, ∞),

(n) e−βV (w ) 1 nθβ (5.7) EQ G 1En ≤ max E G 1S , W 1 − q 0≤k

I (n) (n) Proof. Recall from (2.8) that k is the set of the brothers of wk . For any pair 0 ≤ k

I (1) # 1 Q{k is n-good}≥ EQ{1 I (1) (1 − q) } = c34 ∈ (0, 1). {# 1 ≥1} As a consequence, for all 1 ≤ ℓ 0. We take ℓ = ℓ(n) := ⌊c log n⌋ with c := K+2 . Let c := K+2 35 35 c34 36 c6 24 Y. HU AND Z. SHI

[where c is as in (2.17)] and c := max{ K+2 , c } [c being the constant in 6 37 c3 35 3 (2.12)]. Let n (1) (5.8) En := {j is n-good}, k\=1 j:1≤j≤n,|j−[k|≤⌊c35 log n⌋ (2) (n) (5.9) En := max sup |V (u) − V (wj−1)|≤ c36 log n , 1≤j≤n I (n)  u∈ j 

(3) (n) (n) (5.10) En := max |V (wj ) − V (wk )|≤ c37 log n . 0≤j,k≤n,|j−k|≤c35 log n  We have 1 Q{E(1)}≥ 1 − ne−c34c35 log n = 1 − . n nK+1 On the other hand, by Corollary 2.2,

(2) c Q{(En ) }≤ nQ sup |V (u)| > c36 log n ≤ nQ sup |V (u)| > c36 log n . I (1) |u|=1 u∈ 1    Applying (2.17) yields that c Q{E(2)}≥ 1 − c n−(c36c6−1) = 1 − 5 . n 5 nK+1 (3) To estimate Q{En }, we note that, by Corollary 2.2,

(3) c Q{(En ) } = Q max |Sj − Sk| > c37 log n , 0≤j,k≤n,|j−k|≤c35 log n 

−(c3c37−1) which, in view of (2.15), is bounded by 2c35n log n. Consequently, if (1) (2) (3) (5.11) En := En ∩ En ∩ En , Q 1 then {En}≥ 1 − nK for all large n. It remains to check (5.7). By definition, n (n) −βV (u) −βVu(x) −βV (wn ) Wn,β = e e + e (n) TGW j=1 I x∈ u ,|x|u=n−j X u∈Xj X (5.12) ≥ e−βV (u) e−βVu(x) L (n) TGW j∈ I x∈ u ,|x|u=n−j X u∈Xj X

(n) for any L ⊂ {1, 2,...,n}. We choose L := {1 ≤ j ≤ n : |j −|w || < c35 log n}. MINIMAL POSITION, BRANCHING RANDOM WALKS 25

I (n) L (n) On the event En, for u ∈ j with some j ∈ , we have V (u) ≤ V (w )+ −θβ −βV (w(n)) (c36 +c37) log n. Writing θ := c36 +c37, this leads to Wn,β ≥ n e × j∈L I (n) ξu, where u∈ j P P −βVu(x) ξu := e . TGW x∈ u X,|x|u=n−j

Since j∈L I (n) ξu > 0 on En, we arrive at u∈ j

P P (n) e−βV (w ) nθβ 1E ≤ 1 . n { L (n) ξu>0} W L (n) ξ j∈ u∈I n,β j∈ I u j u∈ j P P P P G L I (n) G Let n be the sigma-field in (2.9). We observe that and j are n- measurable. Moreover, an application of Proposition 2.1 tells us that under G I (n) L Q, conditionally on n, the random variables ξu, u ∈ j , j ∈ , are in- dependent, and are distributed as Wn−j,β under P. We are thus entitled to apply Lemma 5.2 to see that, if G is nondecreasing,

−βV (w(n)) θβ e G n EQ G 1En | n ≤ max E G |Wn−j,β > 0 j∈L   Wn,β    Wn−j,β   nθβ ≤ max E G |Wk,β > 0 . 0≤k 0} = P{Sk}≥ P{S } = 1 − q, this yields Lemma 5.3. 

We are now ready for the proof of Theorem 1.6. For the sake of clarity, the upper and lower bounds are proved in distinct parts. Let us start with the upper bound.

Proof of Theorem 1.6. The upper bound. We assume (1.1), (1.2) and (1.3), and fix β> 1. For any Z ≥ 0 which is Fn-measurable, we have E{Wn,βZ} = e−βV (u) −(β−1)V (u) EQ{ |u|=n Z} = EQ{ |u|=n 1 (n) e Z} and, thus, Wn {wn =u} P P (n) −(β−1)V (wn ) (5.13) E{Wn,βZ} = EQ{e Z}. β−1 3 Let s ∈ ( β , 1), and λ> 0. (We will choose λ = 2 .) Then 1−s −(1−s)βλ 1−s E E 1 −βλ {Wn,β }≤ n + {Wn,β {Wn,β >n }}

(n) −(β−1)V (wn ) −(1−s)βλ e = n + EQ 1 −βλ . W s {Wn,β>n }  n,β  26 Y. HU AND Z. SHI

(n) (n) −(β−1)V (w ) −βV (wn ) e n 1 Since e ≤ Wn,β, we have W s ≤ s−(β−1)/β ; thus, on the n,β Wn,β (n) −βλ e−(β−1)V (wn ) [βs−(β−1)]λ event {Wn,β >n }, we have s ≤ n . Wn,β Let K := [βs − (β − 1)]λ + (1 − s)βλ, and let En be the event in Lemma c −K 5.3. Since Q(En) ≤ n for all sufficiently large n (see Lemma 5.3), we obtain, for large n, 1−s −(1−s)βλ [βs−(β−1)]λ−K E{Wn,β }≤ n + n

(n) e−(β−1)V (wn ) (5.14) + EQ 1 −βλ W s {Wn,β >n }∩En  n,β  (n) −(β−1)V (wn ) −(1−s)βλ e E 1 −βλ = 2n + Q s {Wn,β>n }∩En .  Wn,β  We now estimate the expectation expression EQ{·} on the right-hand 3 side. Let a> 0 and ̺>b> 0 be constants such that (β − 1)a > sβλ + 2 and 3 [βs − (β − 1)]b> 2 . (The choice of ̺ will be made precise later on.) We recall (n) (n) (n) that wn ∈ Je, wn K satisfies V (w )=min (n) V (u), and consider the u∈Je,wn K following events: (n) (n) E1,n := {V (wn ) > a log n} ∪ {V (wn ) ≤−b log n}, (n) (n) E2,n := {V (w ) < −̺ log n, V (wn ) > −b log n}, (n) (n) E3,n := {V (w ) ≥−̺ log n, −b log n < V (wn ) ≤ a log n}. 3 Clearly, Q( i=1 Ei,n)=1. −βλ (n) On the eventS E1,n ∩ {Wn,β >n }, we have either V (wn ) > a log n, in (n) e−(β−1)V (wn ) sβλ−(β−1)a (n) which case s ≤ n , or V (wn ) ≤−b log n, in which case Wn,β (n) (n) −(β−1)V (w ) −βV (wn ) e n we use the trivial inequality Wn,β ≥ e to see that s ≤ Wn,β (n) e[βs−(β−1)]V (wn ) ≤ n−[βs−(β−1)]b (recalling that βs>β − 1). Since sβλ − 3 3 (β − 1)a< − 2 and [βs − (β − 1)]b> 2 , we obtain (n) −(β−1)V (wn ) e −3/2 (5.15) EQ 1 −βλ ≤ n . W s E1,n∩{Wn,β >n }  n,β  −βλ We now study the integral on E2,n ∩ {Wn,β >n }∩ En. Since s> 0, we c22 can choose s1 > 0 and 0 n },

(n) (n) (n) (n) e−(β−1)V (wn ) eβs2V (w )−(β−1)V (wn ) e−βs2V (w ) s = s1 s2 Wn,β Wn,β Wn,β MINIMAL POSITION, BRANCHING RANDOM WALKS 27

(n) −βs2V (w ) −βs2̺+(β−1)b+βλs1 e ≤ n s2 . Wn,β Therefore, by an application of Lemma 5.3 to G(x) := xs2 , x> 0, we obtain, for all sufficiently large n, (n) e−(β−1)V (wn ) E 1 −βλ Q s E2,n∩{Wn,β >n }∩En  Wn,β  n−βs2̺+(β−1)b+βλs1 ns2θβ ≤ max E s 1S . 0≤kn }∩En  n,β  We make a partition of E3,n: let M ≥ 2 be an integer, and let ai := i(a+b) −b + M , 0 ≤ i ≤ M. By definition, M−1 (n) (n) E3,n = {V (w ) ≥−̺ log n, ai log n < V (wn ) ≤ ai+1 log n} i[=0 M−1 =: E3,n,i. i[=0 Let 0 ≤ i ≤ M − 1. There are two possible situations. First situation: ai ≤ λ. −βV (w(n)) In this case, we argue that, on the event E3,n,i, we have Wn,β ≥ e n ≥ (n) (n) −(β−1)V (w ) −βai+1 −(β−1)V (wn ) −(β−1)ai e n βsai+1−(β−1)ai n and e ≤ n , thus, s ≤ n = Wn,β nβsai−(β−1)ai+βs(a+b)/M ≤ n[βs−(β−1)]λ+βs(a+b)/M . Accordingly, in this situa- tion, (n) −(β−1)V (wn ) e [βs−(β−1)]λ+βs(a+b)/M EQ 1 ≤ n Q(E ). W s E3,n,i 3,n,i  n,β  −βλ Second (and last) situation: ai > λ. We have, on E3,n,i ∩ {Wn,β >n }, (n) −(β−1)V (w ) e n βλs−(β−1)ai [βs−(β−1)]λ s ≤ n ≤ n ; thus, in this situation, Wn,β (n) −(β−1)V (wn ) e [βs−(β−1)]λ EQ 1 −βλ ≤ n Q(E ). W s E3,n,i∩{Wn,β >n } 3,n,i  n,β  28 Y. HU AND Z. SHI

We have therefore proved that

(n) e−(β−1)V (wn ) EQ 1 −βλ W s E3,n∩{Wn,β >n }  n,β  M−1 (n) e−(β−1)V (wn ) E 1 −βλ = Q s E3,n,i∩{Wn,β >n } Wn,β Xi=0   [βs−(β−1)]λ+βs(a+b)/M ≤ n Q(E3,n).

By Corollary 2.2, Q(E3,n)= P{min0≤k≤n Sk ≥−̺ log n, −b log n ≤ Sn ≤ a × log n} = n−(3/2)+o(1). Combining this with (5.14), (5.15) and (5.16) yields

1−s −(1−s)βλ −3/2 [βs−(β−1)]λ+βs(a+b)/M−(3/2)+o(1) E{Wn,β }≤ 2n + 2n + n . 3 We choose λ := 2 . Since M can be as large as possible, this yields the upper bound in Theorem 1.6 by posing r := 1 − s. 

Proof of Theorem 1.6. The lower bound. Assume (1.1), (1.2) and 1 (1.3). Let β> 1 and s ∈ (1 − β , 1). By means of (5.12) and the elementary inequality (a + b)1−s ≤ a1−s + b1−s (for a ≥ 0 and b ≥ 0), we have

n 1−s (n) 1−s −(1−s)βV (u) −βVu(x) −(1−s)βV (wn ) Wn,β ≤ e e + e (n) TGW ! j=1 I x∈ u ,|x|u=n−j X u∈Xj X n −(1−s)βV (w(n) ) −(1−s)β[V (u)−V (w(n) )] = e j−1 e j−1 j=1 I (n) X u∈Xj 1−s × e−βVu(x) TGW ! x∈ u X,|x|u=n−j (n) + e−(1−s)βV (wn ).

Let Gn be the sigma-field defined in (2.9), and let

−(1−s)β[V (u)−V (w(n) )] Ξj =Ξj(n,s,β) := e j−1 , 1 ≤ j ≤ n. I (n) u∈Xj (n) I (n) G Since V (wj ) and j , for 1 ≤ j ≤ n, are n-measurable, it follows from Proposition 2.1 that n (n) (n) 1−s G −(1−s)βV (wj−1) 1−s −(1−s)βV (wn ) EQ{Wn,β | n}≤ e ΞjE{Wn−j,β} + e . jX=1 MINIMAL POSITION, BRANCHING RANDOM WALKS 29

3 Let ε> 0 be small, and let r := 2 (1 − s)β − ε. By means of the already 1−s proved upper bound for E(Wn,β ), this leads to, with c38 ≥ 1, 1−s G EQ{Wn,β | n} (5.17) n (n) (n) −(1−s)βV (w ) −r −(1−s)βV (wn ) ≤ c38 e j−1 (n − j + 1) Ξj + e . jX=1 (n) 1−s e−(β−1)V (wn ) Since E(W )= EQ{ s } [see (5.13)], we have, by Jensen’s n,β Wn,β (n) inequality [noticing that V (wn ) is Gn-measurable], (n) −(β−1)V (wn ) 1−s e E(W ) ≥ EQ , n,β E 1−s G s/(1−s)  { Q(Wn,β | n)}  which, in view of (5.17), yields 1 E 1−s (Wn,β ) ≥ s/(1−s) c38

−(β−1)V (w(n)) × EQ (e n ) (

n (n) −(1−s)βV (w ) −r × e j−1 (n − j + 1) Ξj ( jX=1 s/(1−s) −1 (n) + e−(1−s)βV (wn ) . ) ! )

By Proposition 2.1, if (Sj − Sj−1,ξj), for j ≥ 1 (with S0 := 0), are i.i.d. ran- (1) −(1−s)βV (u) dom variables under Q and distributed as (V (w1 ), I (1) e ), u∈ 1 then the EQ{·} expression on the right-hand side is P e−(β−1)Sn = EQ n −r −(1−s)βSj−1 −(1−s)βSn s/(1−s) { j=1(n − j + 1) e ξj + e }  P e[βs−(β−1)]Sn = EQ , n −r (1−s)βSk s/(1−s) { k=1 k e ξke + 1}  where P e e Sℓ := Sn − Sn−ℓ, ξℓ := ξn+1−ℓ, 1 ≤ ℓ ≤ n. Consequently, e e 1 e[βs−(β−1)]Sn E 1−s E (Wn,β ) ≥ s/(1−s) Q . n −r (1−s)βSk s/(1−s) c38 { k=1 k e ξke + 1}  P e e 30 Y. HU AND Z. SHI

Let c39 > 0 be a constant, and define ⌊nε⌋−1 S 1/3 ε/2 ε/2 En,1 := {Sk ≤−c39k }∩{−2n ≤ S⌊nε⌋ ≤−n }, k=1 e \ n−⌊nε⌋−e1 e S 1/3 1/3 ε/2 ε/2 En,2 := {Sk ≤−[k ∧ (n − k) ]}∩{−2n ≤ Sn−⌊nε⌋ ≤−n }, ε k=⌊\n ⌋+1 e e e n−1 3 3 − ε 3 ES := S ≤ log n ∩ log n ≤ S ≤ log n . n,3 k 2 2 n 2 k=n−⌊nε⌋+1    e \ Let ρ := ρ((1 − s)β) bee the constant in Corollary 2.4e, and let ⌊nε⌋ ξ 2ε/ρ En,1 := {ξk ≤ n }, k=1 e \ n−⌊neε⌋ ξ nε/4 En,2 := {ξk ≤ e }, k=⌊nε⌋+1 e \ n e ξ 2ε/ρ En,3 := {ξk ≤ n }. ε k=n−⌊\n ⌋+1 e e 3 S ξ n −r (1−s)βSk 2ε+(2ε/ρ) On i=1(En,i ∩ En,i), we have k=1 k e ξk + 1 ≤ c40n , while [βs−(β−1)]S (3−ε)[βs−(β−1)]/2 e T n ≥ n e P (recalling that βs>β − 1). Therefore, with 2e s e e c41 := (2 + ρ ) 1−s , e 1−s −s/(1−s) −c41ε (3−ε)[βs−(β−1)]/2 E(Wn,β ) ≥ (c38c40) n n (5.18) 3 S ξ × Q (En,i ∩ En,i) . (i=1 ) \ e e 3 S ξ We need to bound Q( i=1(En,i ∩ En,i)) from below. Let S0 := 0. Note that, under Q, (S −S , ξ ), 1 ≤ ℓ ≤ n, are i.i.d., distributed as (S ,ξ ). For ℓ ℓ−1 Tℓ e e 1 1 e S j ≤ n, let Gj be the sigma-field generated by (Sk, ξk), 1 ≤ k ≤ j. Then E , e e e n,1 S ξ ξ G ξ En,2, En,1 ande En,2 are n−⌊nε⌋-measurable, wherease e En,3 is independente of G ε . Therefore, ne−⌊n ⌋e e e e 3 e S ξ G Q (En,i ∩ En,i)| n−⌊nε⌋ ! i\=1 e e e S ξ ≥ [Q(E |G ε )+ Q(E ) − 1]1 . n,3 n−⌊n ⌋ n,3 S S ξ ξ En,1∩En,2∩En,1∩En,2 e e e e e e e MINIMAL POSITION, BRANCHING RANDOM WALKS 31

ρ 2ε/ρ −2ε We have c42 := EQ(ξ1 ) < ∞ [by (2.16)]; thus, Q{ξ1 >n }≤ c42n , ξ 2ε/ρ ⌊nε⌋ −2ε ⌊nε⌋ −ε which entails Q(En,3) = (Q{ξ1 ≤ n }) ≥ (1−c42n ) ≥ 1−c43n . S G To estimate Q(En,e3| n−⌊nε⌋), we use the Markov property to see that, if ε/2 ε/2 S ε ∈ I := [−2n , −n ], the conditional probability is (writing N := n−⌊n ⌋ n e ⌊nε⌋) e e 3 ≥ inf Q Si ≤ log n − z, ∀1 ≤ i ≤ N − 1, z∈In 2  3 − ε 3 log n − z ≤ SN ≤ log n − z , 2 2  which is greater than N −(1/2)+o(1). Therefore, S G ξ −(ε/2)+o(1) −ε −(ε/2)+o(1) Q(En,3| n−⌊nε⌋)+ Q(En,3) − 1 ≥ n − c43n = n . As a consequence, e e e 3 S ξ −(ε/2)+o(1) S S ξ ξ (5.19) Q (En,i ∩ En,i) ≥ n Q(En,1 ∩ En,2 ∩ En,1 ∩ En,2). (i=1 ) \ e e e e e e S S ξ ξ G To estimate Q(En,1 ∩ En,2 ∩ En,1 ∩ En,2), we condition on ⌊nε⌋, and S ξ G ξ note that En,1 and Een,1 aree ⌊nε⌋-measurable,e e whereas En,2 is independente G S G −(3−ε)/2+o(1) ξ of ⌊nε⌋. Sincee Q(En,e2| ⌊nε⌋)e≥ n , whereas Qe(En,2) = [Q{ξ1 ≤ ε/4 ε ε/4 ε ε/5 en }]n−2⌊n ⌋ ≥ [1 − c e−ρn ]n−2⌊n ⌋ ≥ 1 − e−n (for large n), we have e e42 e e S S ξ ξ S ξ Q(E ∩ E ∩ E ∩ E |G ε ) ≥ [Q(E |G ε )+ Q(E ) − 1]1 n,1 n,2 n,1 n,2 ⌊n ⌋ n,2 ⌊n ⌋ n,2 S ξ En,1∩En,1 e e e e e e e ≥ n−(3−ε)/2+eo(1)1 . S ξ e e En,1∩En,1

S S ξ ξ −(3−ε)/2+o(1) S ξ Thus, Q(En,1 ∩ En,2 ∩ En,1 ∩ En,2) ≥ n Qe (En,e1 ∩ En,1). Going back to (5.19), we have e e e e e e 3 S ξ −(3/2)+o(1) S ξ Q (En,i ∩ En,i) ≥ n Q(En,1 ∩ En,1) (i=1 ) \ e e e e −(3/2)+o(1) S ξ ≥ n [Q(En,1)+ Q(En,1) − 1]. e eS −(ε/2)+o(1) We choose the constant c39 > 0 sufficiently small so that Q(En,1) ≥ n , ξ ξ −ε whereas Q(En,1)= Q(En,3) ≥ 1 − c43n . Accordingly, e e 3 e S ξ −(3+ε)/2+o(1) Q (En,i ∩ En,i) ≥ n , n →∞. (i=1 ) \ e e 32 Y. HU AND Z. SHI

Substituting this into (5.18) yields

1−s −c41ε (3−ε)[βs−(β−1)]/2 −(3+ε)/2+o(1) E(Wn,β ) ≥ n n n . Since ε can be as small as possible, this implies the lower bound in Theorem 1.6. 

6. Proof of Theorem 1.5. The basic idea in the proof of Theorem 1.5 is the same as in the proof of Theorem 1.6. Again, we prove the upper and lower bounds in distinct parts, for the sake of clarity. Throughout the section, we assume (1.1), (1.2) and (1.3).

1/2 Proof of Theorem 1.5: The upper bound. Clearly, n Wn ≤ Y n, where 1/2 + −V (u) Y n := (n ∨ V (u) )e . |uX|=n ∗ 1 1 Recall W from (3.7). Applying (3.14) to λ = 1, we see that Y n ≥ log( ∗ ), n c44 Wn S 1 S with c44 := c12 + c13. Thus, P{Y n < x, n} ≤ P{log( ∗ ) < c44x, n}≤ Wn c44 ∗ 1/x c44 κ e E{(Wn ) 1Sn }, which, according to (3.11), is bounded by e (x + e−c10n) for 0 0 and 0

1 c10 (6.1) supE s 1Sn < ∞, 0

1/2 1−s 1−s E{(n Wn) }≤ E{Y n 1En } + o(1).

1−s 1/2 (n) + −s Exactly as in (5.13), we have E{Y n 1En } = EQ{(n ∨V (wn ) )Y n 1En }. Thus, for n →∞,

1/2 1−s 1/2 (n) + −s (6.2) E{(n Wn) }≤ EQ{(n + V (wn ) )Y n 1En } + o(1). MINIMAL POSITION, BRANCHING RANDOM WALKS 33

For any subset L ⊂ {1, 2,...,n}, we have 1/2 + −V (x) Y n ≥ max{n ,V (x) }e L (n) TGW j∈ I x∈ u ,|x|u=n−j X u∈Xj X

−V (u) 1/2 + −Vu(x) = e max{n , [V (u)+ Vu(x)] }e . L (n) TGW j∈ I x∈ u ,|x|u=n−j X u∈Xj X

(n) (n) (n) Recall that w is the oldest vertex in Je, wn K such that V (w ) = min (n) V (u). Let c35 be the constant in (5.8). We choose u∈Je,wn K I (n) ∅ (n) (n) {j ≤ n : j 6= , |w |

1 −V (u) 1/2 + −Vu(x) Y n ≥ e max{(n − j) ,Vu(x) }e 2 L (n) TGW j∈ I x∈ u ,|x|u=n−j X u∈Xj X 1 =: e−V (u)ξ . 2 u j∈L I (n) X u∈Xj

(n) (n) If, however, V (w ) < −c46 log n, then on En, V (u) ≤ V (w )+ c45 log n< 1 1/2 + 1/2 − s log n and, since max{n , [V (u)+ Vu(x)] }≥ n , we have, in this case,

(1/s)+(1/2) −Vu(x) Y n ≥ n e L (n) TGW j∈ I x∈ u ,|x|u=n−j X u∈Xj X (1/s)+(1/2) =: n ηu. j∈L I (n) X u∈Xj 34 Y. HU AND Z. SHI

Therefore, in both situations we have −s −s s −V (u) Y n 1En ≤ 2 e ξu 1En j∈L I (n) ! X u∈Xj (6.4) −s −(s/2)−1 + n ηu 1En . j∈L I (n) ! X u∈Xj −s [Since L (n) TGW 1 > 0 on E , the (·) expressions on j∈ I x∈ u ,|x|u=n−j n u∈ j the right-handP P side areP well defined.] We claim that there exists 0 0 and s ∈ (0,s0), −s 1/2 (n) + −V (u) EQ (n + V (wn ) ) e ξu 1En ( j∈L I (n) ! ) X u∈Xj (6.5) ≤ c48, −s 1/2 (n) + EQ (n + V (wn ) ) ηu 1En ( j∈L I (n) ! ) X u∈Xj (6.6) 1/2+(3+ε)/2s ≤ c47n . We admit (6.5) and (6.6) for the time being. In view of (6.4), we obtain, for 0

MINIMAL POSITION, BRANCHING RANDOM WALKS 35

N −s i−1 −V (u) ≤ b EQ e ξ 1 −V (u) G , u { (n) e ξu>0} n ( ! u∈I ) i=1 u∈I (n) j(i) X Xj(i) P −V (u) G where b := maxj∈L Q{ I (n) e ξu = 0| n}. We note that b ≤ u∈ j S c max1≤j≤n P{ n−j}≤ q,P and that, for any i ≤ N, the EQ{·} expression on the right-hand side is, according to the first part of Lemma 5.2, bounded by

1 esV (u) E 1 G max Q s {ξu>0}| n . 1 − q I (n) ξu u∈ j(i)  

1 1 E 1 G E s 1S By Proposition 2.1, Q{ ξs {ξu>0}| n} = { n−j }, which is bounded u Y n−j in n and j [by (6.1)]. Summarizing, we have proved that

−s N −V (u) G i−1 sV (u) EQ e ξu 1En n ≤ c49 q max e . (n) ( L ! ) u∈I j∈ u∈I (n) i=1 j(i) X Xj X

As a consequence, the expression on the left-hand side of (6.5) is bounded by c49EQ{Λn}, where

N 1/2 (n) + i−1 sV (u) 1 (n) Λn := (n + V (wn ) ) q max e {|V (u)−V (w )|≤c45 log n} u∈I (n) Xi=1 j(i) N 1/2 (n) + i−1 sV (u) ≤ Λn := (n + V (wn ) ) q max e . u∈I (n) Xi=1 j(i) e The proof of (6.5) now boils down to verifying the following estimates: there exists 0

(6.7) supEQ{Λn1{n−|w(n)|≥2c log n}} < ∞, n 35

(6.8) lim EQ{Λen1 (n) } = 0. n→∞ {n−|w |<2c35 log n}

Let us first check (6.7). Let S0 := 0 and let (Sj − Sj−1,σj, ∆j), j ≥ 1, (1) I (1) be i.i.d. random variables under Q and distributed as (V (w ), # 1 , sV (u) max I (1) e ). Let u∈ 1

Sn := min Si, ϑn := inf{k ≥ 0 : Sk = Sn}. 0≤i≤n

[The random variable ϑn has nothing to do with the constant ϑ in Propo-

E 1 (n) sition 3.1.] Writing LHS(6.7) for Q{Λn {n−|w |≥2c35 log n}}, it follows from

e 36 Y. HU AND Z. SHI

Proposition 2.1 that M E 1/2 + i−1 sSℓ(i)−1 1 LHS(6.7) = Q [n + Sn ] q e ∆ℓ(i) {n−ϑn≥2c35 log n} ( ) Xi=1 M 1/2 + sSn i−1 s[Sℓ(i)−1−Sℓ(0)] = EQ [n + Sn ]e q e ∆ℓ(i)1{n−ϑn≥2c35 log n} , ( ) Xi=1 where ℓ(i) := inf{k>ℓ(i−1) : σk ≥ 1} with ℓ(0) := ϑn, and M := sup{i : ℓ(i) < ϑn + c35 log n}. At this stage, we use a standard trick for random walks: let ν0 := 0 and let

νi := inf k>νi−1 : Sk < min Sj , i ≥ 1. 0≤j≤ν  i−1  In words, 0= ν0 <ν1 < · · · are strict descending ladder times. On the event

{νk ≤ n<νk+1} (for k ≥ 0), we have ϑn = νk and Sn = Sνk . Thus, LHS(6.7) equals ∞ 1/2 + sS E 1 1 νk Q {n−νk≥2c35 log n} {νk≤n<νk+1}[n + Sn ]e ( kX=0 M i−1 s[Sℓ(i)−1−Sℓ(0)] × q e ∆ℓ(i) . ) Xi=1 For any k, we look at the expectation EQ{·} on the right-hand side. By con- + + ditioning upon (Sj,σj, ∆j, 1 ≤ j ≤ νk), and since Sn = [Sνk + (Sn − Sνk )] ≤ + (Sn − Sνk ) = Sn − Sνk on {νk ≤ n<νk+1}, we obtain ∞ sS E 1 νk (6.9) LHS(6.7) ≤ Q{ {n−νk≥2c35 log n}e fn(n − νk)}, kX=0 where, for any 1 ≤ j ≤ n, M ′ 1/2 i−1 sSm(i)−1 fn(j) := EQ 1{ν1>j}[n + Sj] q e ∆m(i) , ( ) Xi=1 ′ and m(i) := inf{k>m(i−1) : σk ≥ 1} with m(0) := 0, and M := sup{i : m(i) < ′ M i−1 sSm(i)−1 ∞ i−1 c35 log n}. For brevity, we write Ln := i=1 q e ∆m(i) = i=1 q × esSm(i)−1 ∆ 1 for the moment. By the Cauchy–Schwarz in- m(i) {m(i) j}] [EQ{(n + Sj) |ν1 > j}] [EQ{Ln1{ν1>j}}] . −1/2 By (2.13), Q{ν1 > j}≤ c50j for some c50 > 0 and all j ≥ 1. On the 1/2 2 2 other hand, (n + Sj) ≤ 2(n + Sj ), and it is known (Bolthausen [11]) that MINIMAL POSITION, BRANCHING RANDOM WALKS 37

2 Sj 1/2 2 EQ{ j |ν1 > j}→ c51 ∈ (0, ∞) for j →∞. Therefore, EQ{(n + Sj) |ν1 > j} ≤ c52n for some c52 > 0 and all n ≥ j ≥ 1. Accordingly, with c53 := 1/2 1/2 c50 c52 , we have −1/4 1/2 2 1/2 fn(j) ≤ c53j n [EQ{Ln1{ν1>j}}] , 1 ≤ j ≤ n.

By the Cauchy–Schwarz inequality, $L_n^2 \le (\sum_{i=1}^{\infty}q^{\,i-1})\sum_{i=1}^{\infty}q^{\,i-1}e^{2sS_{m(i)-1}}\Delta_{m(i)}^2\mathbf{1}_{\{m(i)<c_{35}\log n\}}$. Therefore, for $j\ge 2c_{35}\log n$,

$$E_Q\{L_n^2\mathbf{1}_{\{\nu_1>j\}}\} \le \frac{1}{1-q}\sum_{i=1}^{\infty}q^{\,i-1}E_Q\{e^{2sS_{m(i)-1}}\Delta_{m(i)}^2\mathbf{1}_{\{m(i)<c_{35}\log n\}}\mathbf{1}_{\{\nu_1>j\}}\}$$
$$\le \frac{1}{1-q}\sum_{i=1}^{\infty}q^{\,i-1}E_Q\{e^{2sS_{m(i)-1}}\Delta_{m(i)}^2\mathbf{1}_{\{m(i)\le j/2\}}\mathbf{1}_{\{\nu_1>j\}}\}.$$

For any $i\ge 1$, to estimate the expectation $E_Q\{\cdot\}$ on the right-hand side, we apply the strong Markov property at time $m(i)$ to see that

$$E_Q\{\cdot\} = E_Q\{e^{2sS_{m(i)-1}}\Delta_{m(i)}^2\mathbf{1}_{\{m(i)\le j/2\}}\mathbf{1}_{\{\nu_1>m(i)\}}\,g(S_{m(i)},\,j-m(i))\},$$

where $g(z,k) := Q\{z+S_i\ge 0,\ \forall 1\le i\le k\}$ for any $z\ge 0$ and $k\ge 1$. By (13) of Kozlov [24], $g(z,k)\le c_{54}(z+1)/k^{1/2}$ for some $c_{54}>0$ and all $z\ge 0$ and $k\ge 1$. Since $z+1\le c_{55}e^{sz}$ for all $z\ge 0$, this yields, with $c_{56} := \frac{c_{54}c_{55}}{1-q}$,

$$E_Q\{L_n^2\mathbf{1}_{\{\nu_1>j\}}\} \le c_{56}\sum_{i=1}^{\infty}q^{\,i-1}E_Q\biggl\{e^{2sS_{m(i)-1}}\Delta_{m(i)}^2\mathbf{1}_{\{m(i)\le j/2\}}\,\frac{e^{sS_{m(i)}}}{(j-m(i))^{1/2}}\biggr\}$$
$$\le \frac{c_{56}}{(j/2)^{1/2}}\sum_{i=1}^{\infty}q^{\,i-1}E_Q\{e^{2sS_{m(i)-1}+sS_{m(i)}}\Delta_{m(i)}^2\}$$
$$= \frac{2^{1/2}c_{56}}{j^{1/2}}\,E_Q\Biggl\{\sum_{i=1}^{\infty}q^{\,i-1}e^{2sS_{m(i)-1}+sS_{m(i)}}\Delta_{m(i)}^2\Biggr\}.$$

We observe that $\sum_{i=1}^{\infty}q^{\,i-1}e^{2sS_{m(i)-1}+sS_{m(i)}}\Delta_{m(i)}^2 \le \sum_{k=1}^{\infty}q^{R(k)-1}e^{2sS_{k-1}+sS_k}\Delta_k^2$, where $R(k) := \#\{1\le j\le k : \sigma_j\ge 1\}$. Therefore, with $c_{57} := 2^{1/2}c_{56}$,

$$E_Q\{L_n^2\mathbf{1}_{\{\nu_1>j\}}\} \le \frac{c_{57}}{j^{1/2}}\sum_{k=1}^{\infty}E_Q\{q^{R(k)-1}e^{2sS_{k-1}+sS_k}\Delta_k^2\}$$
$$\le \frac{c_{57}}{j^{1/2}}\sum_{k=1}^{\infty}[E_Q\{q^{2[R(k)-1]}\}]^{1/2}\,[E_Q\{e^{4sS_{k-1}+2sS_k}\Delta_k^4\}]^{1/2}.$$

By definition, $E_Q\{q^{2[R(k)-1]}\} = q^{-2}r^k$, with $r := Q(\sigma_1=0)+q^2\,Q(\sigma_1\ge 1)<1$ [because $q<1$ and $Q(\sigma_1=0)<1$]. On the other hand,

$$E_Q\{e^{4sS_{k-1}+2sS_k}\Delta_k^4\} = E_Q\{e^{6sS_{k-1}}\}\,E_Q\{e^{2s(S_k-S_{k-1})}\Delta_k^4\}$$
$$= [E_Q\{e^{6sS_1}\}]^{k-1}\,E_Q\{e^{2sS_1}\Delta_1^4\}.$$
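For completeness, the identity $E_Q\{q^{2[R(k)-1]}\}=q^{-2}r^k$ claimed above follows in one line from $R(k)=\sum_{j=1}^k\mathbf{1}_{\{\sigma_j\ge1\}}$ and the independence of the $\sigma_j$ under $Q$:

```latex
E_Q\{q^{2[R(k)-1]}\}
  = q^{-2}\,E_Q\Bigl\{\prod_{j=1}^{k} q^{2\mathbf{1}_{\{\sigma_j\ge 1\}}}\Bigr\}
  = q^{-2}\,\bigl(E_Q\{q^{2\mathbf{1}_{\{\sigma_1\ge 1\}}}\}\bigr)^{k}
  = q^{-2}\,\bigl(Q(\sigma_1=0)+q^{2}\,Q(\sigma_1\ge 1)\bigr)^{k}
  = q^{-2}\,r^{k}.
```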

By (2.11), there exists $s_{\#}>0$ sufficiently small such that $E_Q\{e^{6sS_1}\}<\frac{1}{r}$ and $E_Q\{e^{2sS_1}\Delta_1^4\}<\infty$ for all $0<s\le s_{\#}$. Consequently, $E_Q\{L_n^2\mathbf{1}_{\{\nu_1>j\}}\}\le\frac{c_{58}}{j^{1/2}}$ for some $c_{58}>0$ and all $n\ge j\ge 1$ with $j\ge 2c_{35}\log n$, which yields

$$f_n(j) \le c_{53}\,c_{58}^{1/2}\,j^{-1/2}\,n^{1/2}.$$

Going back to (6.9), we obtain, for any $0<s\le s_{\#}$ and with $c_{59} := c_{53}c_{58}^{1/2}$,

$$\mathrm{LHS}_{(6.7)} \le c_{59}\,n^{1/2}\sum_{k=0}^{\infty}E_Q\{\mathbf{1}_{\{n-\nu_k\ge 2c_{35}\log n\}}\,e^{sS_{\nu_k}}\,(n-\nu_k)^{-1/2}\}.$$

By (2.13), $j^{-1/2}\le c_{60}\,Q\{\nu_1>j\}$ for all $j\ge 1$; hence, by the strong Markov property at time $\nu_k$, the factor $(n-\nu_k)^{-1/2}$ may be replaced by $c_{60}\mathbf{1}_{\{\nu_{k+1}>n\}}$ under the expectation. Thus, with $c_{61} := c_{59}c_{60}$,

$$\mathrm{LHS}_{(6.7)} \le c_{61}\,n^{1/2}\sum_{k=0}^{\infty}E_Q\{\mathbf{1}_{\{n-\nu_k\ge 2c_{35}\log n\}}\,e^{sS_{\nu_k}}\,\mathbf{1}_{\{\nu_{k+1}>n\}}\}$$
$$\le c_{61}\,n^{1/2}\sum_{k=0}^{\infty}E_Q\{\mathbf{1}_{\{\nu_k\le n<\nu_{k+1}\}}\,e^{sS_{\nu_k}}\},$$

which equals $c_{61}\,n^{1/2}\,E_Q\{e^{s\min_{0\le i\le n}S_i}\}$ and, according to (2.14), is bounded in $n$. This completes the proof of (6.7).

It remains to check (6.8). By definition,

$$\Lambda_n \le [n^{1/2}+V(w^{(n)}_n)^+]\,n^{sc_{45}}\,e^{sV(\underline{w}^{(n)})}\sum_{i=1}^{N}q^{\,i-1}.$$

Since $\sum_{i=1}^{N}q^{\,i-1}\le\frac{1}{1-q}$, this leads to, by an application of Proposition 2.1,

$$E_Q\{\Lambda_n\mathbf{1}_{\{n-|\underline{w}^{(n)}|<2c_{35}\log n\}}\} \le \frac{n^{sc_{45}}}{1-q}\,E_Q\{[n^{1/2}+S_n^+]\,e^{s\underline{S}_n}\,\mathbf{1}_{\{n-\vartheta_n<2c_{35}\log n\}}\},$$

where $(S_i)$ is as in Proposition 2.1 and, as before, $\underline{S}_n := \min_{0\le i\le n}S_i$ and $\vartheta_n := \inf\{k\ge 0 : S_k=\underline{S}_n\}$.

Let $0<\varepsilon<\frac12$; let $A_n := \{S_n>n^{1/2+\varepsilon}\}$ and $B_n := \{S_n\le n^{1/2+\varepsilon}\} = A_n^c$. Since $E_Q\{e^{aS_1}\}<\infty$ for $|a|<c_2$ [see (2.11)] and $Q(A_n)\le 2\exp(-c_3n^{2\varepsilon})$ [see (2.12)], the Cauchy–Schwarz inequality yields $n^{sc_{45}}E_Q\{[n^{1/2}+S_n^+]\,e^{s\underline{S}_n}\,\mathbf{1}_{A_n}\}\to 0$, $n\to\infty$.

On $B_n$, we have $n^{1/2}+S_n^+\le 2n^{1/2+\varepsilon}$; thus, $E_Q\{[n^{1/2}+S_n^+]\,e^{s\underline{S}_n}\mathbf{1}_{B_n\cap\{n-\vartheta_n<2c_{35}\log n\}}\}\le 2n^{1/2+\varepsilon}E_Q\{e^{s\underline{S}_n}\mathbf{1}_{\{n-\vartheta_n<2c_{35}\log n\}}\}$. It is clear that $\underline{S}_n\le\underline{S}_{\lfloor n/2\rfloor} := \min_{0\le i\le n/2}S_i$, and that $\{n-\vartheta_n<2c_{35}\log n\}\subset\{\frac n2-\widetilde{\vartheta}_{n/2}<2c_{35}\log n\}$, where $\widetilde{\vartheta}_{n/2} := \min\{k\ge 0 : \widetilde{S}_k=\min_{0\le i\le n-\lfloor n/2\rfloor}\widetilde{S}_i\}$, with $\widetilde{S}_i := S_{i+\lfloor n/2\rfloor}-S_{\lfloor n/2\rfloor}$, $i\ge 0$. Since $\underline{S}_{\lfloor n/2\rfloor}$ and $\widetilde{\vartheta}_{n/2}$ are independent, we have $E_Q\{e^{s\underline{S}_n}\mathbf{1}_{\{n-\vartheta_n<2c_{35}\log n\}}\}\le E_Q\{e^{s\underline{S}_{\lfloor n/2\rfloor}}\}\,Q\{\frac n2-\widetilde{\vartheta}_{n/2}<2c_{35}\log n\}$. By (2.14), $E_Q\{e^{s\underline{S}_{\lfloor n/2\rfloor}}\}\le c_{62}n^{-1/2}$; on the other hand, $Q\{\frac n2-\widetilde{\vartheta}_{n/2}<2c_{35}\log n\}\le c_{63}\frac{(\log n)^{1/2}}{n^{1/2}}$ (see Feller [18], page 398). Therefore, $E_Q\{[n^{1/2}+S_n^+]\,e^{s\underline{S}_n}\mathbf{1}_{B_n\cap\{n-\vartheta_n<2c_{35}\log n\}}\}\le c_{64}\,n^{-1/2+\varepsilon}(\log n)^{1/2}$.

Summarizing, we have proved that, for any $s>0$ and $0<\varepsilon<\frac12$, when $n\to\infty$,

$$E_Q\{\Lambda_n\mathbf{1}_{\{n-|\underline{w}^{(n)}|<2c_{35}\log n\}}\} \le o(1)+\frac{c_{64}}{1-q}\,n^{sc_{45}-1/2+\varepsilon}\,(\log n)^{1/2},$$

which yields (6.8) as long as $0<s<\frac{(1/2)-\varepsilon}{c_{45}}$. This completes the proof of (6.5).
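The Feller estimate used above expresses the fact that a centered random walk rarely attains its overall minimum during the last $O(\log n)$ steps. A quick Monte Carlo sketch (Gaussian increments and sample sizes chosen purely for illustration) makes the order of decay visible:

```python
import numpy as np

rng = np.random.default_rng(1)

def prob_late_minimum(n, window, trials=4000):
    """Estimate P{ argmin_{0<=k<=n} S_k > n - window } for a centered walk."""
    steps = rng.normal(size=(trials, n))
    S = np.concatenate([np.zeros((trials, 1)), np.cumsum(steps, axis=1)], axis=1)
    argmin = S.argmin(axis=1)           # first index attaining the minimum
    return float(np.mean(argmin > n - window))

for n in [100, 400, 1600]:
    w = int(2 * np.log(n))              # window of order log n
    print(n, prob_late_minimum(n, w))   # decays roughly like (log n / n)^{1/2}
```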

We now turn to the first inequality in (1.16). Decomposing the particles of the $n$th generation according to the spine $(w^{(n)}_j)_{0\le j\le n}$, we have

$$Y_n = \sum_{j=1}^{n}\sum_{u\in I^{(n)}_j}\sum_{x\in\mathbb{T}^{GW}_u,\,|x|_u=n-j}\min\{n^{1/2},V(x)^+\}\,e^{-V(x)} + \min\{n^{1/2},V(w^{(n)}_n)^+\}\,e^{-V(w^{(n)}_n)}$$
$$\le \sum_{j=1}^{n}\sum_{u\in I^{(n)}_j}\sum_{x\in\mathbb{T}^{GW}_u,\,|x|_u=n-j}e^{-V(w^{(n)}_{j-1})}\,e^{-\Delta_u}\,[V(w^{(n)}_{j-1})^+ + \Delta_u^+ + V_u(x)^+]\,e^{-V_u(x)} + \Theta_n,$$

where $\Delta_u := V(u)-V(w^{(n)}_{j-1})$ [for $u\in I^{(n)}_j$], and $\Theta_n := V(w^{(n)}_n)^+\,e^{-V(w^{(n)}_n)}$. By means of the elementary inequalities $(\sum_i a_i)^{-s}\ge(\sum_i a_i^s)^{-1}$ and $(\sum_i b_i)^s\le\sum_i b_i^s$ for nonnegative $a_i$ and $b_i$, we obtain $Y_n^{-s}\ge\frac{1}{Z_n}$ on $\mathscr{S}_n$, with $Z_n$ being defined as

$$Z_n := \sum_j\sum_u e^{-sV(w^{(n)}_{j-1})}\,e^{-s\Delta_u}\Biggl\{[(V(w^{(n)}_{j-1})^+)^s+(\Delta_u^+)^s]\Biggl(\sum_x e^{-V_u(x)}\Biggr)^{\!s} + \Biggl(\sum_x V_u(x)^+\,e^{-V_u(x)}\Biggr)^{\!s}\Biggr\} + \Theta_n^s,$$

where $\sum_j := \sum_{j=1}^{n}$, $\sum_u := \sum_{u\in I^{(n)}_j}$ and $\sum_x := \sum_{x\in\mathbb{T}^{GW}_u,\,|x|_u=n-j}$. We now condition upon $\mathscr{G}_n$, and note that $V(w^{(n)}_j)$ and $I^{(n)}_j$ are $\mathscr{G}_n$-measurable. By Proposition 2.1,

$$E_Q\{Z_n\mid\mathscr{G}_n\} = \sum_j\sum_u e^{-sV(w^{(n)}_{j-1})}\,e^{-s\Delta_u}\{[(V(w^{(n)}_{j-1})^+)^s+(\Delta_u^+)^s]\,E(W_{n-j}^s) + E(U_{n-j}^s)\} + \Theta_n^s,$$

where, for any $k\ge 0$, $U_k := \sum_{|y|=k}V(y)^+\,e^{-V(y)}$. By Jensen's inequality, $E(W_{n-j}^s)\le[E(W_{n-j})]^s = 1$. On the other hand, by (3.9), $U_k\le c_{65}\log_*\frac{1}{W_k}$ and, thus, by Lemma 3.3, $E(U_k^s)\le c_{65}^s\,E\{[\log_*\frac{1}{W_k}]^s\}\le c_{66}$. Therefore, for each $j$, the $\sum_u$ sum on the right-hand side (without the factor $e^{-sV(w^{(n)}_{j-1})}$, and without $\Theta_n^s$, of course) is

$$\le \sum_u e^{-s\Delta_u}\{(V(w^{(n)}_{j-1})^+)^s+(\Delta_u^+)^s+c_{67}\} = [V(w^{(n)}_{j-1})^+]^s\sum_u e^{-s\Delta_u} + \sum_u e^{-s\Delta_u}\{(\Delta_u^+)^s+c_{67}\}.$$

There exists $c_{68} = c_{68}(s)<\infty$ such that $e^{-sa}\{(a^+)^s+c_{67}\}\le c_{68}(e^{-sa}+e^{-sa/2})$ for all $a\in\mathbb{R}$. As a consequence,

$$E_Q\{Z_n\mid\mathscr{G}_n\} \le c_{69}\sum_{j=1}^{n}e^{-sV(w^{(n)}_{j-1})}\{[V(w^{(n)}_{j-1})^+]^s+1\}\sum_{u\in I^{(n)}_j}[e^{-s\Delta_u}+e^{-s\Delta_u/2}] + \Theta_n^s.$$

By Jensen's inequality again, $E_Q\{\frac{1}{Z_n}\mid\mathscr{G}_n\}\ge\frac{1}{E_Q\{Z_n\mid\mathscr{G}_n\}}$. Since $Y_n^{-s}\ge\frac{1}{Z_n}$ on $\mathscr{S}_n$, this leads to

$$E_Q\{Y_n^{-s}\mid\mathscr{G}_n\} \ge \frac{c_{70}}{\sum_{j=1}^{n}e^{-sV(w^{(n)}_{j-1})}\{[V(w^{(n)}_{j-1})^+]^s+1\}\sum_{u\in I^{(n)}_j}[e^{-s\Delta_u}+e^{-s\Delta_u/2}] + \Theta_n^s}.$$

We apply Proposition 2.1: if $(S_j-S_{j-1},\eta_j)$, for $j\ge 1$ (with $S_0 := 0$), are i.i.d. random vectors (under $Q$) distributed as $(V(w^{(1)}_1),\,\sum_{u\in I^{(1)}_1}[e^{-sV(u)}+e^{-sV(u)/2}])$, then

$$E_Q\{(n^{1/2}\wedge V(w^{(n)}_n)^+)\,Y_n^{-s}\} \ge c_{70}\,E_Q\Biggl\{\frac{n^{1/2}\wedge S_n^+}{\sum_{j=1}^{n}e^{-sS_{j-1}}[(S_{j-1}^+)^s+1]\,\eta_j + e^{-sS_n}(S_n^+)^s}\Biggr\}$$

$$\ge c_{70}\,E_Q\Biggl\{\frac{(n^{1/2}\wedge S_n)\,\mathbf{1}_{\{\min_{1\le j\le n}S_j>0\}}}{\sum_{j=1}^{n}e^{-sS_{j-1}}(S_{j-1}^s+1)\,\eta_j + e^{-sS_n}S_n^s}\Biggr\}.$$

Note that if $S_j>0$, then $e^{-sS_j}[S_j^s+1]\le c_{71}e^{-tS_j}$ with $t := \frac{s}{2}$. Therefore, by writing

$$Q^{(n)}\{\cdot\} := Q\Bigl\{\cdot\Bigm|\min_{1\le j\le n}S_j>0\Bigr\},$$

and $E_{Q^{(n)}}$ the expectation with respect to $Q^{(n)}$, and $\widehat{\eta}_j := \eta_j+1$ for brevity, we get that

$$E_Q\{(n^{1/2}\wedge V(w^{(n)}_n)^+)\,Y_n^{-s}\} \ge c_{72}\,Q\Bigl\{\min_{1\le j\le n}S_j>0\Bigr\}\,E_{Q^{(n)}}\Biggl\{\frac{n^{1/2}\wedge S_n}{\sum_{j=1}^{n+1}e^{-tS_{j-1}}\widehat{\eta}_j}\Biggr\}$$
$$\ge c_{72}\,Q\Bigl\{\min_{1\le j\le n}S_j>0\Bigr\}\,E_{Q^{(n)}}\Biggl\{\frac{\varepsilon n^{1/2}\,\mathbf{1}_{\{S_n>\varepsilon n^{1/2}\}}}{\sum_{j=1}^{n+1}e^{-tS_{j-1}}\widehat{\eta}_j}\Biggr\}.$$

Since $Q\{\min_{1\le j\le n}S_j>0\}\ge c_{73}n^{-1/2}$ [see (2.13)], this leads to

$$E_Q\{(n^{1/2}\wedge V(w^{(n)}_n)^+)\,Y_n^{-s}\} \ge c_{74}\,\varepsilon\,E_{Q^{(n)}}\Biggl\{\frac{\mathbf{1}_{\{S_n>\varepsilon n^{1/2}\}}}{\sum_{j=1}^{n+1}e^{-tS_{j-1}}\widehat{\eta}_j}\Biggr\}$$
$$\ge c_{74}\,\varepsilon\,\Biggl[E_{Q^{(n)}}\Biggl\{\frac{1}{\sum_{j=1}^{n+1}e^{-tS_{j-1}}\widehat{\eta}_j}\Biggr\} - Q^{(n)}\{S_n\le\varepsilon n^{1/2}\}\Biggr].$$

Let $\rho(s)>0$ be as in Corollary 2.4. We have $E_Q\{(\sum_{u\in I^{(1)}_1}e^{-sV(u)})^{\rho(s)}\}<\infty$ by (2.16). Since $\rho(s)\le\rho(\frac s2)$, we also have $E_Q\{(\sum_{u\in I^{(1)}_1}e^{-sV(u)/2})^{\rho(s)}\}<\infty$. Therefore, $E_Q\{\eta_1^{\rho(s)}\}<\infty$. We are thus entitled to apply Lemma 6.1 (stated and proved below) to see that $E_{Q^{(n)}}\{\frac{1}{1+\sum_{j=1}^{n+1}e^{-tS_{j-1}}\widehat{\eta}_j}\}\ge c_{75}$ for some $c_{75}\in(0,\infty)$ and all $n\ge n_0$. Since $\frac{1}{\sum_{j=1}^{n+1}e^{-tS_{j-1}}\widehat{\eta}_j}\ge\frac{1}{1+\sum_{j=1}^{n+1}e^{-tS_{j-1}}\widehat{\eta}_j}$, this yields

$$E_Q\{(n^{1/2}\wedge V(w^{(n)}_n)^+)\,Y_n^{-s}\} \ge c_{74}\,\varepsilon\,[c_{75}-Q^{(n)}\{S_n\le\varepsilon n^{1/2}\}],\qquad n\ge n_0.$$

On the other hand, under $Q^{(n)}$, $S_n/n^{1/2}$ converges weakly to the terminal value of a Brownian meander (see Bolthausen [11]); in particular, $\lim_{\varepsilon\to0}\lim_{n\to\infty}Q^{(n)}\{S_n\le\varepsilon n^{1/2}\}=0$. We can thus choose (and fix) a small $\varepsilon>0$ such that $Q^{(n)}\{S_n\le\varepsilon n^{1/2}\}\le\frac{c_{75}}{2}$ for all $n\ge n_1$. Therefore, for $n\ge n_0+n_1$,

$$E_Q\{(n^{1/2}\wedge V(w^{(n)}_n)^+)\,Y_n^{-s}\} \ge c_{74}\,\varepsilon\Bigl(c_{75}-\frac{c_{75}}{2}\Bigr).$$

As a consequence, we have proved that, for all sufficiently small $s>0$,

$$\liminf_{n\to\infty}E_Q\{(n^{1/2}\wedge V(w^{(n)}_n)^+)\,Y_n^{-s}\} > 0,$$

which, in view of (6.10), yields the first inequality in (1.16), and thus completes the proof of the lower bound in Theorem 1.5. □
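The convergence of $S_n/n^{1/2}$ under $Q^{(n)}$ to the endpoint of a Brownian meander, used a few lines above, is easy to probe numerically: the limiting law is Rayleigh, with density $xe^{-x^2/2}$ and mean $(\pi/2)^{1/2}$. The sketch below conditions a Gaussian walk on staying positive by plain rejection (illustrative only; the acceptance probability is of order $n^{-1/2}$, so this is inefficient for large $n$):

```python
import numpy as np

rng = np.random.default_rng(2)

def meander_endpoint_samples(n, want=2000):
    """Sample S_n / sqrt(n) given min_{1<=k<=n} S_k > 0, by rejection."""
    out = []
    while len(out) < want:
        steps = rng.normal(size=(500, n))
        S = np.cumsum(steps, axis=1)
        keep = S.min(axis=1) > 0.0      # the conditioning event defining Q^{(n)}
        out.extend((S[keep, -1] / np.sqrt(n)).tolist())
    return np.array(out[:want])

x = meander_endpoint_samples(200)
print(x.mean(), np.sqrt(np.pi / 2))     # empirical mean vs Rayleigh mean ~ 1.2533
```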

We complete the proof of Theorem 1.5 by proving the following lemma, which is a very simple variant of a result of Kozlov [24].

Lemma 6.1. Let $\{(X_k,\eta_k),\,k\ge 1\}$ be a sequence of i.i.d. random vectors defined on $(\Omega,\mathscr{F},P)$ with $P\{\eta_1\ge 0\} = 1$, such that $E\{\eta_1^\theta\}<\infty$ for some $\theta>0$. We assume $E(X_1) = 0$ and $0<E(X_1^2)<\infty$. Let $S_0 := 0$ and $S_n := X_1+\cdots+X_n$, for $n\ge 1$. Then

$$(6.11)\qquad \lim_{n\to\infty}E\Biggl\{\frac{1}{1+\sum_{k=1}^{n+1}\eta_k e^{-S_{k-1}}}\Biggm|\min_{1\le k\le n}S_k>0\Biggr\} = c_{76}\in(0,\infty).$$
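Before giving the proof, one can check (6.11) numerically: conditionally on the walk staying positive, the series $\sum_k\eta_ke^{-S_{k-1}}$ is dominated by its first few terms, and the conditional expectation indeed stabilizes as $n$ grows. A minimal sketch, with Gaussian steps and exponential $\eta_k$ chosen purely for illustration (any law with $P\{\eta_1\ge0\}=1$ and a finite moment would do):

```python
import numpy as np

rng = np.random.default_rng(3)

def conditional_expectation(n, batches=200, batch=500):
    """Monte Carlo for E{ 1/(1 + sum_{k<=n+1} eta_k e^{-S_{k-1}}) | min_{1<=k<=n} S_k > 0 }."""
    vals = []
    for _ in range(batches):
        steps = rng.normal(size=(batch, n))
        S = np.cumsum(steps, axis=1)                # S_1, ..., S_n
        keep = S.min(axis=1) > 0.0                  # conditioning event
        if not keep.any():
            continue
        m = int(keep.sum())
        Sfull = np.concatenate([np.zeros((m, 1)), S[keep]], axis=1)  # S_0, ..., S_n
        eta = rng.exponential(size=(m, n + 1))      # illustrative choice of eta's
        vals.extend((1.0 / (1.0 + (eta * np.exp(-Sfull)).sum(axis=1))).tolist())
    return float(np.mean(vals))

for n in [50, 200, 800]:
    print(n, conditional_expectation(n))            # should stabilize near c_76 > 0
```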

P Proof. The lemma is an analogue of the identity (26) of Kozlov [24], except that the distribution of our η1 is slightly different from that of Ko- E θ zlov’s, which explains the moment condition {η1} < ∞: this condition will be seen to guarantee

$$(6.12)\qquad \lim_{j\to\infty}\limsup_{n\to\infty}E\Biggl\{\Biggl(\frac{1}{1+\sum_{k=1}^{j}\eta_k e^{-S_{k-1}}}-\frac{1}{1+\sum_{k=1}^{n+1}\eta_k e^{-S_{k-1}}}\Biggr)\Biggm|\min_{1\le k\le n}S_k>0\Biggr\} = 0.$$

The identity (6.12), which plays the role of Kozlov's Lemma 1 in [24], is the key ingredient in the proof of (6.11). Since the rest of the proof goes along the lines of [24] with obvious modifications, we only prove (6.12) here. Without loss of generality, we assume $\theta\le 2$ (otherwise, we can replace $\theta$ by $2$). We observe that, for $n>j$, the integrand in (6.12) is nonnegative, and is

$$\le \frac{\sum_{k=j+1}^{n+1}\eta_k e^{-S_{k-1}}}{1+\sum_{k=1}^{n+1}\eta_k e^{-S_{k-1}}} \le \Biggl(\frac{\sum_{k=j+1}^{n+1}\eta_k e^{-S_{k-1}}}{1+\sum_{k=1}^{n+1}\eta_k e^{-S_{k-1}}}\Biggr)^{\!\theta/2} \le \Biggl(\sum_{k=j+1}^{n+1}\eta_k e^{-S_{k-1}}\Biggr)^{\!\theta/2},$$

which is bounded by $\sum_{k=j+1}^{n+1}\eta_k^{\theta/2}e^{-\theta S_{k-1}/2}$. Since $P\{\min_{1\le k\le n}S_k>0\}\sim c_4/n^{1/2}$ [see (2.13)], we only need to check that

$$(6.13)\qquad \lim_{j\to\infty}\limsup_{n\to\infty}\,n^{1/2}\sum_{k=j+1}^{n+1}E\{\eta_k^{\theta/2}e^{-\theta S_{k-1}/2}\,\mathbf{1}_{\{\min_{1\le i\le n}S_i>0\}}\} = 0.$$

Let $\mathrm{LHS}_{(6.13)}$ denote the $n^{1/2}\sum_{k=j+1}^{n+1}E\{\cdot\}$ expression on the left-hand side. Let $\widehat{S}_i = \widehat{S}_i(k) := S_{i+k}-S_k$, $i\ge 0$. It is clear that $(\widehat{S}_i,\,i\ge 0)$ is independent of $(\eta_k,X_1,\ldots,X_k)$, and is distributed as $(S_i,\,i\ge 0)$. Write $\underline{S}_{k-1} := \min_{1\le j\le k-1}S_j$ and $\underline{\widehat{S}}_{n-k} := \min_{1\le i\le n-k}\widehat{S}_i$. Then

$$\mathrm{LHS}_{(6.13)} \le n^{1/2}\sum_{k=j+1}^{n+1}E\{\eta_k^{\theta/2}e^{-\theta S_{k-1}/2}\,\mathbf{1}_{\{\underline{S}_{k-1}>0,\ \underline{\widehat{S}}_{n-k}>-S_{k-1}-X_k\}}\}.$$

To estimate the $E\{\cdot\}$ expression on the right-hand side, we first condition upon $(\eta_k,S_{k-1},\underline{S}_{k-1},X_k)$, which leaves us to estimate the tail probability of $\underline{\widehat{S}}_{n-k}$. At this stage, it is convenient to recall [see (13) of Kozlov [24]] that $P\{\underline{\widehat{S}}_{n-k}>-y\}\le c_{54}\frac{1+y^+}{(n-k+1)^{1/2}}$ for some $c_{54}>0$ and all $y\in\mathbb{R}$. Accordingly,

$$\mathrm{LHS}_{(6.13)} \le c_{54}\,n^{1/2}\sum_{k=j+1}^{n+1}E\biggl\{\eta_k^{\theta/2}e^{-\theta S_{k-1}/2}\,\mathbf{1}_{\{\underline{S}_{k-1}>0\}}\,\frac{1+(S_{k-1}+X_k)^+}{(n-k+1)^{1/2}}\biggr\}$$
$$\le c_{54}\,n^{1/2}\sum_{k=j+1}^{n+1}E\biggl\{\eta_k^{\theta/2}e^{-\theta S_{k-1}/2}\,\mathbf{1}_{\{\underline{S}_{k-1}>0\}}\,\frac{1+S_{k-1}+X_k^+}{(n-k+1)^{1/2}}\biggr\}.$$

On the right-hand side, $(\eta_k,X_k)$ is independent of $(S_{k-1},\underline{S}_{k-1})$. We condition upon $(S_{k-1},\underline{S}_{k-1})$: for any $z\ge 1$, an application of the Cauchy–Schwarz inequality gives

$$E\{\eta_k^{\theta/2}(z+X_k^+)\} \le [E(\eta_k^\theta)]^{1/2}\,[E\{(z+X_k^+)^2\}]^{1/2}.$$

Of course, $E(\eta_k^\theta) = E(\eta_1^\theta)<\infty$ by assumption, and $E\{(z+X_k^+)^2\}\le 2(z^2+E(X_k^2)) = 2[z^2+E(X_1^2)]$. Thus, $E\{\eta_k^{\theta/2}(z+X_k^+)\}\le c_{77}z$ for $z\ge 1$. Consequently, with $c_{78} := c_{54}c_{77}$,

$$\mathrm{LHS}_{(6.13)} \le c_{78}\,n^{1/2}\sum_{k=j+1}^{n+1}E\biggl\{e^{-\theta S_{k-1}/2}\,\mathbf{1}_{\{\underline{S}_{k-1}>0\}}\,\frac{1+S_{k-1}}{(n-k+1)^{1/2}}\biggr\}$$
$$\le c_{79}\,n^{1/2}\sum_{k=j+1}^{n+1}E\biggl\{e^{-\theta S_{k-1}/3}\,\mathbf{1}_{\{\underline{S}_{k-1}>0\}}\,\frac{1}{(n-k+1)^{1/2}}\biggr\},$$

the last inequality following from the fact that $\sup_{x>0}(1+x)e^{-\theta x/6}<\infty$. We use once again the estimate (2.13), which implies $\frac{1}{(n-k+1)^{1/2}}\le c_{80}\,P\{S_i>S_{k-1},\ \forall k\le i\le n\}$. Since $(S_i-S_{k-1},\,k\le i\le n)$ is independent of $(S_{k-1},\underline{S}_{k-1})$, this implies, with $c_{81} := c_{79}c_{80}$,

$$\mathrm{LHS}_{(6.13)} \le c_{81}\,n^{1/2}\sum_{k=j+1}^{n+1}E\{e^{-\theta S_{k-1}/3}\,\mathbf{1}_{\{\underline{S}_{k-1}>0,\ S_i>S_{k-1},\,\forall k\le i\le n\}}\}$$

$$\le c_{81}\,n^{1/2}\sum_{k=j+1}^{n+1}E\{e^{-\theta S_{k-1}/3}\,\mathbf{1}_{\{\underline{S}_n>0\}}\},$$

where $\underline{S}_n := \min_{1\le i\le n}S_i$. It remains to check that

$$(6.14)\qquad \lim_{j\to\infty}\limsup_{n\to\infty}\,n^{1/2}\sum_{k=j+1}^{n+1}E\{e^{-\theta S_{k-1}/3}\,\mathbf{1}_{\{\underline{S}_n>0\}}\} = 0.$$

This would immediately follow from Lemma 1 of Kozlov [24], but we have been kindly informed by Gerold Alsmeyer (to whom we are grateful) of a flaw in its proof, on page 800, line 3 of [24], so we need to proceed differently. Since $E\{e^{-\theta S_{k-1}/3}\mathbf{1}_{\{\underline{S}_n>0\}}\}\le n^{-(3/2)+o(1)}(n-k+2)^{-1/2}$ (for $n\to\infty$) uniformly in $k\in[\frac n2,\,n+1]$, we have $n^{1/2}\sum_{k=\lfloor n/2\rfloor}^{n+1}E\{e^{-\theta S_{k-1}/3}\mathbf{1}_{\{\underline{S}_n>0\}}\}\to 0$, $n\to\infty$. On the other hand, (36) of Kozlov [24] (applied to $\delta=\frac12$ and $\eta_i=1$ there) implies that $\lim_{j\to\infty}\limsup_{n\to\infty}n^{1/2}\sum_{k=j+1}^{\lfloor n/2\rfloor}E\{e^{-\theta S_{k-1}/3}\mathbf{1}_{\{\underline{S}_n>0\}}\} = 0$. Therefore, (6.14) holds: Lemma 6.1 is proved. □

7. Proof of Theorem 1.3 and (1.14)–(1.15) of Theorem 1.4. In this section we prove Theorem 1.3, as well as parts (1.14)–(1.15) of Theorem 1.4. We assume (1.1), (1.2) and (1.3) throughout the section.

Proof of Theorem 1.3 and (1.14) and (1.15) of Theorem 1.4. Upper bounds. Let $\varepsilon>0$. By Theorem 1.6 and Chebyshev's inequality, $P\{W_{n,\beta}>n^{-(3\beta/2)+\varepsilon}\}\to 0$. Therefore, $W_{n,\beta}\le n^{-(3\beta/2)+o(1)}$ in probability, yielding the upper bound in (1.15). The upper bound in (1.14) follows trivially from the upper bound in (1.15).

It remains to prove the upper bound in Theorem 1.3. Fix $\gamma\in(0,1)$. Since $W_n^\gamma$ is a nonnegative supermartingale, the maximal inequality tells us that, for any $n\le m$ and any $\lambda>0$,

$$P\Bigl\{\max_{n\le j\le m}W_j^\gamma\ge\lambda\Bigr\} \le \frac{E(W_n^\gamma)}{\lambda} \le \frac{c_{82}}{\lambda n^{\gamma/2}},$$

the last inequality being a consequence of Theorem 1.5. Let $\varepsilon>0$ and let $n_k := \lfloor k^{2/\varepsilon}\rfloor$. Then $\sum_k P\{\max_{n_k\le j\le n_{k+1}}W_j^\gamma\ge n_k^{-(\gamma/2)+\varepsilon}\}<\infty$. By the Borel–Cantelli lemma, almost surely for all large $k$, $\max_{n_k\le j\le n_{k+1}}W_j<n_k^{-(1/2)+(\varepsilon/\gamma)}$. Since $\varepsilon/\gamma$ can be arbitrarily small, this yields the desired upper bound: $W_n\le n^{-(1/2)+o(1)}$ a.s. □
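Theorem 1.3 lends itself to a direct (if brute-force) simulation: in the boundary case, $n^{1/2}W_n$ should remain of order one. A minimal sketch with binary branching and i.i.d. Gaussian displacements $\mathcal{N}(2\log 2,\,2\log 2)$, a standard normalization making $\psi(1)=\psi'(1)=0$ (the population doubles at each generation, so only small $n$ are feasible):

```python
import numpy as np

rng = np.random.default_rng(4)

mu = sigma2 = 2.0 * np.log(2.0)     # N(mu, sigma2) displacements: psi(1) = psi'(1) = 0

def simulate_Wn(n_max):
    """Yield (n, W_n) with W_n = sum_{|u|=n} exp(-V(u)) for a binary Gaussian BRW."""
    V = np.zeros(1)                  # positions of the current generation
    for n in range(1, n_max + 1):
        disp = rng.normal(mu, np.sqrt(sigma2), size=(2, V.size))
        V = (V + disp).ravel()       # every particle begets two children
        yield n, float(np.exp(-V).sum())

for n, Wn in simulate_Wn(22):
    if n % 4 == 0:
        print(n, Wn, np.sqrt(n) * Wn)  # sqrt(n) * W_n should stay of order one
```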

Proof of Theorem 1.3 and (1.14) and (1.15) of Theorem 1.4. Lower bounds. To prove the lower bounds in (1.14) and (1.15), we use the Paley–Zygmund inequality and Theorem 1.6 to see that

$$(7.1)\qquad P\{W_{n,\beta}>n^{-(3\beta/2)+o(1)}\} \ge n^{-o(1)},\qquad n\to\infty.$$

This is the analogue of (7.1) of (4.5) for $W_{n,\beta}$. From here, the argument follows the lines of the proof of the upper bound in (1.8) of Theorem 1.2 (Section 4), and goes as follows: let $\varepsilon>0$ and let $\tau_n := \inf\{k\ge 1 : \#\{u : |u|=k\}\ge n^{2\varepsilon}\}$. Then

$$P\Bigl\{\tau_n<\infty,\ \min_{k\in[n/2,\,n]}W_{k+\tau_n,\beta}\le n^{-(3\beta/2)-\varepsilon}\exp\Bigl(-\beta\max_{|x|=\tau_n}V(x)\Bigr)\Bigr\}$$
$$\le \sum_{k\in[n/2,\,n]}P\Bigl\{\tau_n<\infty,\ W_{k+\tau_n,\beta}\le n^{-(3\beta/2)-\varepsilon}\exp\Bigl(-\beta\max_{|x|=\tau_n}V(x)\Bigr)\Bigr\}$$
$$\le \sum_{k\in[n/2,\,n]}(P\{W_{k,\beta}\le n^{-(3\beta/2)-\varepsilon}\})^{\lfloor n^{2\varepsilon}\rfloor},$$

which, according to (7.1), is bounded by $n\exp(-n^{-\varepsilon}\lfloor n^{2\varepsilon}\rfloor)$ (for all sufficiently large $n$), thus summable in $n$. By the Borel–Cantelli lemma, almost surely for all sufficiently large $n$, we have either $\tau_n=\infty$, or $\min_{k\in[n/2,\,n]}W_{k+\tau_n,\beta}>n^{-(3\beta/2)-\varepsilon}\exp[-\beta\max_{|x|=\tau_n}V(x)]$. Conditionally on the system's ultimate survival, we have $\frac1n\max_{|x|=n}V(x)\to c_{21}$ a.s., $\tau_n\sim\frac{2\varepsilon\log n}{\log m}$ a.s., $n\to\infty$, and $W_{n,\beta}\ge\min_{k\in[n/2,\,n]}W_{k+\tau_n,\beta}$ for all sufficiently large $n$. This readily yields the lower bounds in (1.14) and (1.15): conditionally on the system's survival, $W_{n,\beta}\ge n^{-(3\beta/2)+o(1)}$ almost surely (and a fortiori, in probability). The lower bound in Theorem 1.3 follows along exactly the same lines, but using Theorem 1.5 instead of Theorem 1.6. □

8. Proof of Theorem 1.2. Assume (1.1), (1.2) and (1.3). Let $\beta>1$. We trivially have $W_{n,\beta}\le W_n\exp\{-(\beta-1)\inf_{|u|=n}V(u)\}$ and $W_{n,\beta}\ge\exp\{-\beta\inf_{|u|=n}V(u)\}$. Therefore, $\frac1\beta\log\frac{1}{W_{n,\beta}}\le\inf_{|u|=n}V(u)\le\frac{1}{\beta-1}\log\frac{W_n}{W_{n,\beta}}$ on $\mathscr{S}_n$. Since $\beta$ can be taken arbitrarily large, by means of Theorem 1.3 and of parts (1.14) and (1.15) of Theorem 1.4, we immediately get (1.7) and (1.9). Since $W_n\ge\exp\{-\inf_{|u|=n}V(u)\}$, the lower bound in (1.8) follows immediately from Theorem 1.3, whereas the upper bound in (1.8) was already proved in Section 4.
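The sandwich for $\inf_{|u|=n}V(u)$ used here is just the following pair of one-line consequences of the trivial bounds on $W_{n,\beta}$ (both valid on $\mathscr{S}_n$, where all quantities are positive):

```latex
W_{n,\beta} \ge e^{-\beta\inf_{|u|=n}V(u)}
  \;\Longrightarrow\;
  \inf_{|u|=n}V(u) \ge \frac{1}{\beta}\,\log\frac{1}{W_{n,\beta}},
\qquad
W_{n,\beta} \le W_n\, e^{-(\beta-1)\inf_{|u|=n}V(u)}
  \;\Longrightarrow\;
  \inf_{|u|=n}V(u) \le \frac{1}{\beta-1}\,\log\frac{W_n}{W_{n,\beta}}.
```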

9. Proof of part (1.13) of Theorem 1.4. The upper bound follows from Theorem 1.3 and the elementary inequality $W_{n,\beta}\le W_n^\beta$; the lower bound follows from (1.8) and the relation $W_{n,\beta}\ge\exp\{-\beta\inf_{|u|=n}V(u)\}$.

10. Proof of Theorem 1.1. The proof of Theorem 1.1 relies on Theorem 1.5 and a preliminary result, stated below as Proposition 10.1. Theorem 1.5 ensures the tightness of $(n^{1/2}W_n,\,n\ge 1)$, whereas Proposition 10.1 implies that $\frac{W_{n+1}}{W_n}$ converges to 1 in probability (conditionally on the system's survival).

Proposition 10.1. Assume (1.1), (1.2) and (1.3). For any $\gamma>0$, there exists $\gamma_1>0$ such that, for all sufficiently large $n$,

$$(10.1)\qquad P\biggl\{\biggl|\frac{W_{n+1}}{W_n}-1\biggr|\ge n^{-\gamma}\Bigm|\mathscr{S}\biggr\} \le n^{-\gamma_1}.$$

Proof. Let $1<\beta\le\min\{2,\,1+\rho(1)\}$, where $\rho(1)$ is the constant in Corollary 2.4.

We use a probability estimate of Petrov [34], page 82: for independent centered random variables $\xi_1,\ldots,\xi_\ell$ with $E(|\xi_i|^\beta)<\infty$ (for $1\le i\le\ell$), we have $E\{|\sum_{i=1}^\ell\xi_i|^\beta\}\le 2\sum_{i=1}^\ell E\{|\xi_i|^\beta\}$.

By definition, on the set $\mathscr{S}_n$, we have

$$\frac{W_{n+1}}{W_n}-1 = \sum_{|u|=n}\frac{e^{-V(u)}}{W_n}\Biggl(\sum_{x\in\mathbb{T}^{GW}_u:\,|x|_u=1}e^{-V_u(x)}-1\Biggr),$$

where $\mathbb{T}^{GW}$ and $|x|_u$ are as in (2.1) and (2.4), respectively. Conditioning on $\mathscr{F}_n$, and applying Proposition 2.1 and Petrov's probability inequality recalled above, we see that, on $\mathscr{S}_n$,

$$(10.2)\qquad E\biggl\{\biggl|\frac{W_{n+1}}{W_n}-1\biggr|^\beta\Bigm|\mathscr{F}_n\biggr\} \le 2\sum_{|u|=n}\frac{e^{-\beta V(u)}}{W_n^\beta}\,E\Biggl\{\Biggl|\sum_{|y|=1}e^{-V(y)}-1\Biggr|^\beta\Biggr\} = c_{83}\,\frac{W_{n,\beta}}{W_n^\beta},$$

where $c_{83} := 2E\{|\sum_{|v|=1}e^{-V(v)}-1|^\beta\}<\infty$ [see (2.16)], and $W_{n,\beta}$ is as in (1.11).

Let $\varepsilon>0$ and $b>0$. Let $s\in(\frac{\beta-1}{\beta},\,1)$. Define $D_n := \{W_n\ge n^{-(1/2)-\varepsilon}\}\cap\{W_{n,\beta}\le n^{-(3\beta/2)+b}\}$. By Proposition 3.1, $P\{W_n<n^{-(1/2)-\varepsilon}\}\le n^{-\vartheta}$ for some $\vartheta>0$ and all large $n$, whereas, by Theorem 1.6, $P\{W_{n,\beta}>n^{-(3\beta/2)+b}\}\le n^{3\beta(1-s)/2-(1-s)b}\,E\{W_{n,\beta}^{1-s}\} = n^{-(1-s)b+o(1)}$. Therefore,

$$P\{\mathscr{S}\setminus D_n\}\le n^{-\vartheta}+n^{-(1-s)b+o(1)},\qquad n\to\infty.$$

On the other hand, since $\mathscr{S}\subset\mathscr{S}_n$, it follows from (10.2) and Chebyshev's inequality that, for $n\to\infty$,

$$P\biggl\{\biggl|\frac{W_{n+1}}{W_n}-1\biggr|\ge n^{-\gamma},\,D_n,\,\mathscr{S}\biggr\} \le n^{\gamma\beta}\,E\biggl\{c_{83}\,\frac{W_{n,\beta}}{W_n^\beta}\,\mathbf{1}_{D_n\cap\mathscr{S}_n}\biggr\} \le c_{83}\,n^{\gamma\beta-(3\beta/2)+b+[(1/2)+\varepsilon]\beta}.$$

As a consequence, when n →∞,

$$P\biggl\{\biggl|\frac{W_{n+1}}{W_n}-1\biggr|\ge n^{-\gamma},\,\mathscr{S}\biggr\} \le n^{-\vartheta}+n^{-(1-s)b+o(1)}+c_{83}\,n^{\gamma\beta-\beta+b+\varepsilon\beta}.$$

We choose $\varepsilon$ and $b$ sufficiently small such that $\gamma\beta-\beta+b+\varepsilon\beta<0$. Proposition 10.1 is proved. □

We now have all of the ingredients needed for the proof of Theorem 1.1.

Proof of Theorem 1.1. Once Proposition 10.1 is established, the proof of Theorem 1.1 follows the lines of Biggins and Kyprianou [7].

Assume (1.1), (1.2) and (1.3). Let $\lambda_n>0$ satisfy $E\{(\lambda_nW_n)^{1/2}\} = 1$. That is, $\lambda_n := \{E(W_n^{1/2})\}^{-2}$.

By Theorem 1.5, we have $0<\liminf_{n\to\infty}\frac{\lambda_n}{n^{1/2}}\le\limsup_{n\to\infty}\frac{\lambda_n}{n^{1/2}}<\infty$, and $(\lambda_nW_n)$ is tight. Let $\widehat{W}$ be any possible (weak) limit of $(\lambda_nW_n)$ along a subsequence. By Theorem 1.5 and dominated convergence, $E(\widehat{W}^{1/2}) = 1$.

We now prove the uniqueness of $\widehat{W}$. By definition,

$$W_{n+1} = \sum_{|v|=1}e^{-V(v)}\sum_{x\in\mathbb{T}^{GW}_v,\,|x|_v=n}e^{-V_v(x)}.$$

By assumption, $\lambda_nW_n\to\widehat{W}$ in distribution when $n$ goes to infinity along a certain subsequence. Thus, $\lambda_nW_{n+1}$ converges weakly (when $n$ goes along the same subsequence) to $\sum_{|v|=1}e^{-V(v)}\widehat{W}_v$, where, conditionally on $\{(v,V(v)),\,|v|=1\}$, the $\widehat{W}_v$ are independent copies of $\widehat{W}$.

On the other hand, by Proposition 10.1, $\lambda_nW_{n+1}$ also converges weakly (along the same subsequence) to $\widehat{W}$. Therefore,

$$\widehat{W} \stackrel{\mathrm{law}}{=} \sum_{|v|=1}e^{-V(v)}\,\widehat{W}_v.$$

This is the same equation as the one for $\xi^*$ in (3.5). Recall that (3.5) has a unique solution up to a scale change (Liu [27]), and since $E(\widehat{W}^{1/2}) = 1$, we have $\widehat{W}\stackrel{\mathrm{law}}{=}c_{84}\,\xi^*$, with $c_{84} := [E\{(\xi^*)^{1/2}\}]^{-2}$. The uniqueness (in law) of $\widehat{W}$ shows that $\lambda_nW_n$ converges weakly to $\widehat{W}$ when $n\to\infty$.

By (3.3), $P\{W_n>0\} = P\{\mathscr{S}_n\}\to P\{\mathscr{S}\} = P\{\xi^*>0\}$. Let $\widetilde{W}>0$ be a random variable such that

$$(10.3)\qquad E(e^{-a\widetilde{W}}) = E(e^{-a\widehat{W}}\mid\widehat{W}>0),\qquad \forall a\ge 0.$$

It follows that, conditionally on the system's survival, $\lambda_nW_n$ converges in distribution to $\widetilde{W}$. □
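The fixed-point equation satisfied by $\widehat{W}$ is the smoothing-transform equation of (3.5), and its recursive structure can be explored by the usual population-dynamics iteration: replace the law of $\widehat{W}$ by an empirical pool of samples and apply the map repeatedly. A sketch, reusing the illustrative binary Gaussian offspring law from the earlier simulation; the renormalization fixes the scale via $E\{\widehat{W}^{1/2}\}=1$, and whether (and how fast) such an iteration converges in the boundary case is a delicate matter, so this is only meant to make the recursion concrete:

```python
import numpy as np

rng = np.random.default_rng(5)

mu = sigma2 = 2.0 * np.log(2.0)       # boundary-case normalization, as before

def smoothing_step(pool):
    """One population-dynamics pass of W <- sum_{|v|=1} exp(-V(v)) W_v (binary case)."""
    m = pool.size
    W1 = pool[rng.integers(0, m, size=m)]          # resampled independent copies
    W2 = pool[rng.integers(0, m, size=m)]
    V = rng.normal(mu, np.sqrt(sigma2), size=(2, m))
    return np.exp(-V[0]) * W1 + np.exp(-V[1]) * W2

pool = rng.exponential(size=100000)    # arbitrary nonnegative initial law
for _ in range(30):
    pool = smoothing_step(pool)
    pool /= np.mean(np.sqrt(pool)) ** 2    # rescale so that E{sqrt(W)} = 1
print(np.quantile(pool, [0.1, 0.5, 0.9]))  # approximate quantiles of the fixed point
```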

Acknowledgments. We are grateful to Bruno Jaffuel, Gerold Alsmeyer, John Biggins and Alain Rouault for pointing out errors in the first versions of the manuscript, and to John Biggins for fixing Lemma 3.2. We also wish to thank two referees for their careful reading and for very helpful suggestions, without which we would not have had enough courage to remove the uniform ellipticity condition in the original manuscript.

REFERENCES

[1] Addario-Berry, L. (2006). Ballot theorems and the heights of trees. Ph.D. thesis, McGill Univ.
[2] Aldous, D. J. and Bandyopadhyay, A. (2005). A survey of max-type recursive distributional equations. Ann. Appl. Probab. 15 1047–1110. MR2134098
[3] Athreya, K. B. and Ney, P. E. (1972). Branching Processes. Die Grundlehren der mathematischen Wissenschaften 196. Springer, New York. MR0373040
[4] Bachmann, M. (2000). Limit theorems for the minimal position in a branching random walk with independent logconcave displacements. Adv. in Appl. Probab. 32 159–176. MR1765165
[5] Biggins, J. D. (1976). The first- and last-birth problems for a multitype age-dependent branching process. Adv. in Appl. Probab. 8 446–459. MR0420890
[6] Biggins, J. D. and Grey, D. R. (1979). Continuity of limit random variables in the branching random walk. J. Appl. Probab. 16 740–749. MR549554
[7] Biggins, J. D. and Kyprianou, A. E. (1997). Seneta–Heyde norming in the branching random walk. Ann. Probab. 25 337–360. MR1428512
[8] Biggins, J. D. and Kyprianou, A. E. (2004). Measure change in multitype branching. Adv. in Appl. Probab. 36 544–581. MR2058149
[9] Biggins, J. D. and Kyprianou, A. E. (2005). Fixed points of the smoothing transform: The boundary case. Electron. J. Probab. 10 609–631 (electronic). MR2147319
[10] Bingham, N. H. (1973). Limit theorems in fluctuation theory. Adv. in Appl. Probab. 5 554–569. MR0348843
[11] Bolthausen, E. (1989). A central limit theorem for two-dimensional random walks in random sceneries. Ann. Probab. 17 108–115. MR972774
[12] Bramson, M. D. (1978). Maximal displacement of branching Brownian motion. Comm. Pure Appl. Math. 31 531–581. MR0494541
[13] Bramson, M. D. (1978). Minimal displacement of branching random walk. Z. Wahrsch. Verw. Gebiete 45 89–108. MR510529
[14] Bramson, M. D. and Zeitouni, O. (2009). Tightness for a family of recursion equations. Ann. Probab. 37 615–653.
[15] Chauvin, B., Rouault, A. and Wakolbinger, A. (1991). Growing conditioned trees. Stochastic Process. Appl. 39 117–130. MR1135089
[16] Dekking, F. M. and Host, B. (1991). Limit distributions for minimal displacement of branching random walks. Probab. Theory Related Fields 90 403–426. MR1133373
[17] Derrida, B. and Spohn, H. (1988). Polymers on disordered trees, spin glasses, and traveling waves. J. Statist. Phys. 51 817–840. MR971033
[18] Feller, W. (1971). An Introduction to Probability Theory and Its Applications. Vol. II, 2nd ed. Wiley, New York. MR0270403
[19] Hammersley, J. M. (1974). Postulates for subadditive processes. Ann. Probab. 2 652–680. MR0370721

[20] Hardy, R. and Harris, S. C. (2004). A new formulation of the spine approach to branching diffusions. Mathematics Preprint, Univ. Bath.
[21] Harris, T. E. (1963). The Theory of Branching Processes. Die Grundlehren der Mathematischen Wissenschaften 119. Springer, Berlin. MR0163361
[22] Heyde, C. C. (1970). Extension of a result of Seneta for the super-critical Galton–Watson process. Ann. Math. Statist. 41 739–742. MR0254929
[23] Kingman, J. F. C. (1975). The first birth problem for an age-dependent branching process. Ann. Probab. 3 790–801. MR0400438
[24] Kozlov, M. V. (1976). The asymptotic behavior of the probability of non-extinction of critical branching processes in a random environment. Teor. Verojatnost. i Primenen. 21 813–825. MR0428492
[25] Kyprianou, A. E. (1998). Slow variation and uniqueness of solutions to the functional equation in the branching random walk. J. Appl. Probab. 35 795–801. MR1671230
[26] Lifshits, M. A. (2007). Some limit theorems on binary trees. (In preparation.)
[27] Liu, Q. S. (2000). On generalized multiplicative cascades. Stochastic Process. Appl. 86 263–286. MR1741808
[28] Liu, Q. S. (2001). Asymptotic properties and absolute continuity of laws stable by random weighted mean. Stochastic Process. Appl. 95 83–107. MR1847093
[29] Lyons, R. (1997). A simple path to Biggins' martingale convergence for branching random walk. In Classical and Modern Branching Processes (Minneapolis, MN, 1994). IMA Vol. Math. Appl. 84 217–221. Springer, New York. MR1601749
[30] Lyons, R., Pemantle, R. and Peres, Y. (1995). Conceptual proofs of L log L criteria for mean behavior of branching processes. Ann. Probab. 23 1125–1138. MR1349164
[31] McDiarmid, C. (1995). Minimal positions in a branching random walk. Ann. Appl. Probab. 5 128–139. MR1325045
[32] Neveu, J. (1986). Arbres et processus de Galton–Watson. Ann. Inst. H. Poincaré Probab. Statist. 22 199–207. MR850756
[33] Neveu, J. (1988). Multiplicative martingales for spatial branching processes. In Seminar on Stochastic Processes, 1987 (Princeton, NJ, 1987). Progr. Probab. Statist. 15 223–242. Birkhäuser Boston, Boston. MR1046418
[34] Petrov, V. V. (1995). Limit Theorems of Probability Theory: Sequences of Independent Random Variables. Oxford Studies in Probability 4. Clarendon, Oxford. MR1353441
[35] Seneta, E. (1968). On recent theorems concerning the supercritical Galton–Watson process. Ann. Math. Statist. 39 2098–2102. MR0234530

Département de Mathématiques
Université Paris XIII
99 avenue J-B Clément
F-93430 Villetaneuse
France
E-mail: [email protected]

Laboratoire de Probabilités et Modèles Aléatoires
Université Paris VI
4 place Jussieu
F-75252 Paris Cedex 05
France
E-mail: [email protected]