On the volume of the supercritical super-Brownian sausage conditioned on survival

János Engländer

Weierstrass Institute for Applied Analysis and Stochastics
Mohrenstrasse 39
D-10117 Berlin, Germany
E-mail: englaend@wias-berlin.de
URL: http://www.wias-berlin.de/~englaend

WIAS preprint No. 491 of May 10, 1999

AMS classification: primary 60J80; secondary 60J65, 60D05

Keywords: super-Brownian motion, super-Brownian sausage, branching Brownian motion, branching Brownian sausage, Poissonian traps, hard obstacles.

Running head: Super-Brownian sausage

Supported by the grant of the European Union TMR Programme, Project ERBFMRX-CT960075A, "Stochastic Analysis and its Applications", via Humboldt University, Berlin.

Abstract

Let α and β be positive constants. Let X be the supercritical super-Brownian motion corresponding to the evolution equation u_t = (1/2)Δu + βu − αu² in R^d, and let Z be the binary branching Brownian motion with branching rate β. For t ≥ 0, let R(t) = ∪_{s=0}^{t} supp(X(s)), that is, R(t) is the (accumulated) support of X up to time t. For t ≥ 0 and a > 0, let R^a(t) = ∪_{x∈R(t)} B̄(x,a). We call R^a(t) the super-Brownian sausage corresponding to the supercritical super-Brownian motion X. For t ≥ 0, let R̂(t) = ∪_{s=0}^{t} supp(Z(s)), that is, R̂(t) is the (accumulated) support of Z up to time t. For t ≥ 0 and a > 0, let R̂^a(t) = ∪_{x∈R̂(t)} B̄(x,a). We call R̂^a(t) the branching Brownian sausage corresponding to Z. In this paper we prove that

lim_{t→∞} (1/t) log E_{δ_0}[exp(−ν|R^a(t)|) | X survives] = lim_{t→∞} (1/t) log Ê_{δ_0} exp(−ν|R̂^a(t)|) = −β,

for all d ≥ 2 and all a, α, ν > 0.

1 Introduction and main results.

In the classic paper [DV 75], Donsker and Varadhan described the asymptotic behaviour of the volume of the so-called 'Wiener sausage'. If W denotes the Wiener process in d dimensions, then for t > 0,

W^a_t = ∪_{0≤s≤t} B̄(W(s), a)   (1.1)

is called the Wiener sausage up to time t, where B̄(x,a) denotes the closed ball with radius a centered at x ∈ R^d. Let us denote by |W^a_t| the d-dimensional volume of W^a_t. By the classical result of Donsker and Varadhan, the Laplace transform of |W^a_t| obeys the following asymptotics:

lim_{t→∞} t^{−d/(d+2)} log E_0 exp(−ν|W^a_t|) = −c(d,ν),  ν > 0,   (1.2)

for any a > 0, where

c(d,ν) = ν^{2/(d+2)} ((d+2)/2) (2λ_d/d)^{d/(d+2)},

and λ_d is the lowest eigenvalue of −(1/2)Δ for the d-dimensional sphere of unit volume with Dirichlet boundary condition.

The lower estimate for (1.2) had been known by Kac and Luttinger (see [KL 74]), and in fact the upper estimate turned out to be much harder. This latter one was obtained in [DV 75] by using techniques from the theory of large deviations.

Note that if P denotes the law of the Poisson point process ω on R^d with intensity ν dl (l is the Lebesgue measure), ν > 0, and with expectation E (in the notation we suppress the dependence on ν), and

T := inf{s ≥ 0 : W(s) ∈ ∪_{x_i ∈ supp(ω)} B̄(x_i, a)},

then

E_0 exp(−ν|W^a_t|) = E ⊗ P_0(T > t), for t > 0.   (1.3)

Definition 1 The random set

K := ∪_{x_i ∈ supp(ω)} B̄(x_i, a)

is called a trap configuration (or hard obstacle) attached to ω.

Remark 1 In the sequel we will identify ω with K, that is, an ω-wise statement will mean that it is true for all trap configurations (with a fixed).

By (1.3), the law of |W^a_t| can be expressed in terms of the 'annealed' or 'averaged' probabilities that the Wiener process avoids the Poissonian traps of size a up to time t. Using this interpretation of the problem, Sznitman [Sz 98] presented an alternative proof for (1.2). His method, called the 'enlargement of obstacles', turned out to be extremely useful and resulted in a whole array of results concerning similar questions (see [Sz 98] and references therein).

Another research area, the behavior of a class of measure-valued processes called superprocesses, has also been extensively studied by numerous authors in the past three decades (see e.g. [D 93], [Dy 93] and references therein).

Notation 1 In the sequel, C_b^+(R^d) and C_c^+(R^d) will denote the space of nonnegative, bounded, continuous functions on R^d and the space of nonnegative, continuous functions with compact support in R^d, respectively. M_F(R^d) and N(R^d) will denote the space of finite measures and the space of discrete measures on R^d, respectively.

Definition 2 Let α > 0 and β be constants. One can define a unique continuous measure-valued Markov process X which satisfies, for any μ ∈ M_F(R^d) and g ∈ C_b^+(R^d),

E_μ exp(−⟨X(t), g⟩) = exp(−∫_{R^d} u(x,t) μ(dx)),

where u(x,t) is the minimal nonnegative solution to

u_t = (1/2)Δu + βu − αu², (x,t) ∈ R^d × (0,∞),
lim_{t→0} u(x,t) = g(x), x ∈ R^d.   (1.4)

We call X an ((1/2)Δ, β, α; R^d)-superprocess (see [P 96], [EP 97]).

X is also called a supercritical (resp. critical, subcritical) super-Brownian motion (SBM) for positive (resp. zero, negative) β.

The size of the support for super-Brownian motions has been investigated in a number of research papers (see e.g. [I 88], [P 95b], [De 99], [T 94]). Results have been obtained concerning the radius of the support (that is, the radius of the smallest ball centered at the origin and containing the support) in [I 88] and [P 95b]; the volume of the ε-neighbourhood of the range up to t > 0 is studied in [De 99] and [T 94]. Note that in [I 88], [De 99] and [T 94] the super-Brownian motion is critical; [P 95b] is one of the few articles in the literature which treat the supercritical case.

In the light of these two mathematical topics, it seems quite natural to ask whether a similar asymptotics to (1.2) can be obtained for superprocesses. In this paper we will prove an analogous result to (1.2), replacing W by a supercritical super-Brownian motion X. In the sequel, X will denote the ((1/2)Δ, β, α; R^d)-superprocess, and β will be assumed to be positive. We will denote the corresponding probabilities by {P_μ : μ ∈ M_F(R^d)}. Note that X will become extinct with positive probability; in fact (see formula 1.4 in [P 96]),

P_{δ_x}(X(t) = 0 for all t large) = e^{−β/α}.   (1.5)

Notation 2 In the sequel we will use the notations S = {X(t; R^d) > 0 for all t > 0} and E = S^c, and will call the events S and E survival and extinction.

We will need the following result concerning extinction (for the proof see [EP 97, proof of Theorem 3.1]).

Proposition 1 If {P_μ : μ ∈ M_F(R^d)} correspond to the ((1/2)Δ, β, α; R^d)-superprocess, then {P_μ(· | E) : μ ∈ M_F(R^d)} correspond to the ((1/2)Δ, −β, α; R^d)-superprocess.

Analogously to (1.1) we make the following definition.

Definition 3 For t ≥ 0, let R(t) = ∪_{s=0}^{t} supp(X(s)), that is, R(t) is the (accumulated) support of X up to time t. Note that R(t) is a.s. compact whenever supp(X(0)) is compact (see [EP 97]). For t ≥ 0 and a > 0, let

R^a(t) = ∪_{x∈R(t)} B̄(x,a) = {x ∈ R^d | dist(x, R(t)) ≤ a}.   (1.6)

We call R^a(t) the super-Brownian sausage corresponding to the supercritical super-Brownian motion X.

By the rightmost expression in (1.6) it is clear that R^a(t) too is compact P_μ-a.s. whenever supp(μ) is compact, and thus it has a finite volume (Lebesgue measure) in R^d, P_μ-a.s. Let us denote this (random) volume by |R^a(t)|. Note that if τ denotes the extinction time, that is, τ = inf{t > 0 | X(t; R^d) = 0}, then

∃ l := lim_{t→∞} E_{δ_0}(exp(−ν|R^a(t)|) | E) = E_{δ_0} exp(−ν|R^a(τ)|),

and l ∈ (0,1). Therefore, we are interested in the long-time behaviour of

E_{δ_0}[exp(−ν|R^a(t)|) | S].

Our main result is as follows.

Theorem 1 Let β > 0 be fixed. Then

lim_{t→∞} (1/t) log E_{δ_0}[exp(−ν|R^a(t)|) | S] = −β,   (1.7)

for all d ≥ 2 and all a, α, ν > 0.

It is interesting that, unlike in (1.2), the scaling is independent of d, and the limit is independent of d and ν. One should interpret this phenomenon as the overwhelming influence of the branching over the spatial motion. (Recall that the measure-valued process X can also be obtained as a high density limit of approximating branching particle systems; see e.g. [D 93].) The fact that the limit does not depend on α either could also be surprising at first sight. An explanation for this is as follows. Using (1.4) it is easy to check that if P corresponds to the ((1/2)Δ, β, α; R^d)-superprocess and P' corresponds to the ((1/2)Δ, β, 1; R^d)-superprocess, then

P'_{α^{-1}δ_0}(αX ∈ ·) = P_{δ_0}(X ∈ ·).

Since the support process t ↦ supp(X(t)) is the same for X and αX, one only has to show that the starting measures δ_0 and α^{-1}δ_0 give the same limit in (1.7), if such a limit exists at all. This is not hard and we omit the details.
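The scaling relation above can be checked on Laplace functionals. With u_g denoting the minimal nonnegative solution of (1.4) and w_g the corresponding solution for the ((1/2)Δ, β, 1; R^d)-superprocess, one verifies that w := αu_g solves w_t = (1/2)Δw + βw − w² with initial data αg, i.e. w_{αg} = αu_g, and hence:

```latex
% E'_{\mu/\alpha} denotes expectation for the (\tfrac12\Delta,\beta,1;\mathbb{R}^d)-superprocess
\begin{aligned}
E'_{\mu/\alpha}\exp\bigl(-\langle \alpha X(t),g\rangle\bigr)
  &= E'_{\mu/\alpha}\exp\bigl(-\langle X(t),\alpha g\rangle\bigr)
   = \exp\bigl(-\langle \mu/\alpha,\, w_{\alpha g}\rangle\bigr) \\
  &= \exp\bigl(-\langle \mu/\alpha,\, \alpha u_{g}\rangle\bigr)
   = \exp\bigl(-\langle \mu,\, u_{g}\rangle\bigr)
   = E_{\mu}\exp\bigl(-\langle X(t),g\rangle\bigr).
\end{aligned}
```

Taking μ = δ_0 gives the displayed identity.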

We now give a brief description of the strategy of the proof. The proof of Theorem 1 will be 'atypical' in a sense. In similar problems, like in the proof of (1.2), the upper estimate is usually obtained by using either large deviation techniques or Sznitman's method; the lower estimate is relatively simple in general. For the upper estimate in (1.7) we will use a comparison theorem based on a result due to Evans and O'Connell (see [EO 94]) and some elementary estimates. The lower estimate in (1.7), however, will be quite tedious: it will be obtained by analyzing certain PDEs related to the superprocess. In the analysis we will also utilize some estimates on the solutions of these equations obtained by Pinsky (see [P 95a]).

The difficulty in proving (1.7) is, of course, compounded by the fact that although, by Proposition 1, X(· | E) is a superprocess itself, this does not hold for X(· | S). However, we will be able to show that the long-term behaviour of E_{δ_0}[exp(−ν|R^a(t)|) | S] remains the same if we replace X by a certain branching process which survives with probability one. Let Z denote the branching Brownian motion (BBM) with branching term β(z² − z) (that is, binary branching at rate β), starting with initial measure γ ∈ N(R^d). The corresponding expectation will be denoted by Ê_γ. Note that Z can be considered as a measure-valued process where the state space of the process is N(R^d). One can define the 'branching Brownian sausage' {R̂^a(t), t ≥ 0} analogously to (1.6) as follows.

Definition 4 For t ≥ 0, let R̂(t) = ∪_{s=0}^{t} supp(Z(s)), that is, R̂(t) is the (accumulated) support of Z up to time t. For t ≥ 0 and a > 0, let

R̂^a(t) = ∪_{x∈R̂(t)} B̄(x,a).   (1.8)

We call R̂^a(t) the branching Brownian sausage corresponding to the branching Brownian motion Z.

Again, it is easy to see that R̂^a(t) is compact for any compactly supported starting measure γ ∈ N(R^d) (and thus has a finite volume) with probability one.

A crucial ingredient of the proof will be the demonstration of a comparison result for E_{δ_0}[exp(−ν|R^a(t)|) | S] and Ê_{δ_0} exp(−ν|R̂^a(t)|). We will prove the following proposition.

Proposition 2 (Comparison) Let β > 0 be fixed. Then

E_{δ_0}[exp(−ν|R^a(t)|) | S] ≤ Ê_{δ_0} exp(−ν|R̂^a(t)|),   (1.9)

for all d ≥ 1 and all a, α, ν > 0.

After having the above comparison, we will prove the following two estimates.

Proposition 3 (Upper estimate on BBM) Let β > 0 be fixed. Then

limsup_{t→∞} (1/t) log Ê_{δ_0} exp(−ν|R̂^a(t)|) ≤ −β,   (1.10)

for all d ≥ 2 and all a, ν > 0.

Proposition 4 (Lower estimate on SBM) Let β > 0 be fixed. Then

liminf_{t→∞} (1/t) log E_{δ_0}[exp(−ν|R^a(t)|) | S] ≥ −β,   (1.11)

for all d ≥ 1 and all a, α, ν > 0.

Theorem 1 will then follow from (1.9)-(1.11). Also, as a by-product of our method, we will obtain the following result for the branching Brownian motion.

Theorem 2 Let β > 0 be fixed. Then

lim_{t→∞} (1/t) log Ê_{δ_0} exp(−ν|R̂^a(t)|) = −β,   (1.12)

for all d ≥ 2 and all a, ν > 0.

The proof of Theorem 2 is immediate from (1.9)-(1.11).

We end this section with four further problems.

Problem 1 (higher order asymptotics) In the light of the fact that the limit under (1.7) does not depend on ν > 0 and d ≥ 2, it would be desirable to refine the results

E_{δ_0}[exp(−ν|R^a(t)|) | S] = exp(−βt + o(t)) as t → ∞

and

Ê_{δ_0} exp(−ν|R̂^a(t)|) = exp(−βt + o(t)) as t → ∞,

by obtaining higher order asymptotics for the above Laplace transforms.

Problem 2 (one-dimensional case) What are the corresponding asymptotics for d = 1?

Problem 3 (zero or negative β) According to (1.5), the event S has zero probability if β ≤ 0 (critical or subcritical super-Brownian motion). However, replacing S by {τ > t} (τ was defined in the paragraph preceding Theorem 1), the asymptotic behavior of E_{δ_0}[exp(−ν|R^a(t)|) | τ > t] can still be studied. How can the result of Theorem 1 be extended to this case?

Problem 4 (general superprocesses) The concept of the ((1/2)Δ, β, α; R^d)-superprocess has been generalized for spatially dependent β and α and for a general domain D ⊆ R^d with an elliptic operator L on it (see [EP 97]). (See also the remark after Theorem 3 in the next section.) How can the results of this paper be generalized to such a general setting?

Remark 2 Problem 3 was suggested by A.-S. Sznitman.

2 Proofs.

It will be more convenient to rewrite the Laplace transforms in (1.9)-(1.11) using a 'trap-avoiding' setting, similarly to (1.3). We record the basic correspondences. Let E be as in (1.3) and K as in Definition 1, and recall that Z denotes the branching Brownian motion with branching rate β > 0. The corresponding probabilities will be denoted by P̂_γ. Let

T := inf{t ≥ 0 | X(t; K) > 0},

and

T̂ := inf{t ≥ 0 | Z(t; K) > 0},

where Z(t; B) denotes the number of particles of Z located in B at time t, for B ⊆ R^d Borel and t > 0. Then

E_{δ_0}[exp(−ν|R^a(t)|) | S] = E ⊗ P_{δ_0}(T > t | S),   (2.13)

and

Ê_{δ_0} exp(−ν|R̂^a(t)|) = E ⊗ P̂_{δ_0}(T̂ > t).   (2.14)

The proof of Proposition 2 will be based on a result on the decomposition of superprocesses with immigration, which was proved in [EO 94]. Let X be the ((1/2)Δ, β, α; R^d)-superprocess with corresponding probabilities {P_μ}.

Theorem 3 (Evans and O'Connell) Let X̃ be the ((1/2)Δ, −β, α; R^d)-superprocess. By Proposition 1, X̃ corresponds to P_μ(· | E). Let Z^γ be the binary branching Brownian motion with branching rate β and initial measure γ. Conditional on {Z^γ(s)}_{s=0}^{∞}, let (R, P^{γ,μ}) be the superprocess obtained by taking the process X̃ with starting measure μ, and adding immigration, where the immigration at time t is according to the measure 2αZ^γ(t). This is described mathematically by the conditional Laplace functional

E^{γ,μ}(exp(−⟨R(t), g⟩) | Z^γ(s), s ≥ 0) = exp(−⟨μ, ũ_g(·,t)⟩ − 2α ∫_0^t ⟨Z^γ(s), ũ_g(·, t−s)⟩ ds),   (2.15)

for g ∈ C_c^+(R^d), where ũ_g denotes the minimal nonnegative solution to (1.4), with β replaced by −β. Denote by N_{(β/α)μ} the law of the Poisson random measure on R^d with intensity (β/α)μ, and define the random initial measure γ* by

L(γ*) = N_{(β/α)μ}.

Then the law of R under P^{γ*,μ} is P_μ.

Remark 3 We note that Theorem 3 has been generalized ([EP 97], Theorem 6.1) to the more general setting mentioned in Problem 4.

We now prove Propositions 2-4. In the proofs we will use the following notations:

Notation 3 The notation f ∼ g will mean that f(x)/g(x) → 1. The notation f = o(g) will mean that f(x)/g(x) → 0.

Proof of Proposition 2. By (2.13) and (2.14) we must prove that

E ⊗ P_{δ_0}(T > t | S) ≤ E ⊗ P̂_{δ_0}(T̂ > t).   (2.16)

We will in fact prove the stronger result that (2.16) holds ω-wise, that is,

P_{δ_0}(T > t | S) ≤ P̂_{δ_0}(T̂ > t), for every ω.   (2.17)

Let Q be the law of the Poisson distribution with parameter β/α, conditioned on the positive integers, that is,

Q(m = i) = ((β/α)^i e^{−β/α}) / ((1 − e^{−β/α}) i!),  i = 1, 2, ....

Let M denote the corresponding expectation. Finally, let K be a fixed trap configuration. By Theorem 3 it is intuitively clear that

P_{δ_0}(T > t | S) ≤ M ⊗ P̂_{mδ_0}(T̂ > t).   (2.18)

The rigorous proof of (2.18), however, is a bit tedious. By standard probabilistic considerations, (2.18) will follow from the lemma below.

Lemma 1 Fix an x_0 ∈ R^d and let B = B(x_0, a). Fix also a t > 0 and an ε ∈ (0, t). Then for any m ≥ 1,

P^{mδ_{x_0}, 0}(R(t; B) > 0 | Z_{mδ_{x_0}}(s; B(x_0, a)) > 0 ∀ t − ε ≤ s ≤ t) = 1.

Obviously, Z^γ can be replaced by Z_{mδ_{x_0}} in Lemma 1. Thus, applying (2.15) with the test functions

g_n(x) = { n if |x − x_0| ≤ a; 0 if |x − x_0| > a },

and finally letting n → ∞, the proof of the lemma reduces to the proof of the following claim.

Claim 1 Let U_a be the minimal nonnegative solution to (1.4) with β replaced by −β and

g(x) := { ∞ if |x − x_0| ≤ a; 0 if |x − x_0| > a }.

(In fact, U_a can be obtained as a pointwise limit (as n → ∞) of solutions to (1.4) with β replaced by −β and with the above g_n's.) Then

∫_0^ε U_a(l(t), t) dt = ∞,

for any ε > 0 and any continuous function l : [0, ε] → B(x_0, a).

Proof. We may assume that x_0 = 0. Let U_a^1 be the minimal nonnegative solution to (1.4) with β replaced by −β and

g(x) := { 0 if |x| ≤ a; ∞ if |x| > a },

and let V be the minimal nonnegative solution to (1.4) with β replaced by −β and g ≡ ∞. We claim that

V(x,t) = V(t) = (β/α)[(1 − e^{−βt})^{−1} − 1].   (2.19)

Note that by [EP 97, Theorem 3.1] and Proposition 1,

P̃_{δ_x}(⟨1, X(t)⟩ = 0) = e^{−V(t)}, for all x ∈ R^d,   (2.20)

where P̃_· = P_·(· | E) and P corresponds to ((1/2)Δ, β, α; R^d). It is not hard to show that

P_{δ_x}(⟨1, X(t)⟩ = 0) = exp(−(β/α)(1 − e^{−βt})^{−1})

(see for example [EP 97, Proposition 3.1]). Using (1.5) it follows that

P̃_{δ_x}(⟨1, X(t)⟩ = 0) = exp(−(β/α)[(1 − e^{−βt})^{−1} − 1]).

Then (2.19) follows from this and (2.20).
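The closed form in (2.19) can also be checked directly: since the initial data g ≡ ∞ is spatially homogeneous, the minimal solution is x-independent and (1.4) with β replaced by −β reduces to an ODE:

```latex
V'(t) = -\beta V - \alpha V^2,\qquad V(0+)=\infty .
```

Substituting h = 1/V (so h' = βh + α with h(0+) = 0) gives h(t) = (α/β)(e^{βt} − 1), whence

```latex
V(t) = \frac{\beta}{\alpha}\,\frac{1}{e^{\beta t}-1}
     = \frac{\beta}{\alpha}\Bigl[\bigl(1-e^{-\beta t}\bigr)^{-1}-1\Bigr],
```

which also exhibits the asymptotics V(t) ∼ 1/(αt) as t → 0 used in (2.25).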

By the minimality of the solution and the parabolic maximum principle (see [EP 97]), it is easy to show that

U_a + U_a^1 ≥ V.   (2.21)

Since we know V explicitly, it will suffice to have an upper estimate on U_a^1. Similarly as in the proof of [P 95a, Theorem 2(i)], one can show that

U_a^1(x,t) ≤ V(t) exp(−(k_1 a²/t − k_2 t)^+), for x ∈ B(0,a),   (2.22)

with some k_1, k_2 > 0. (In [P 95a, Theorem 1(i)] the nonlinearity βu − αu² is considered, and the above estimate is proved with the function (1 − e^{−βt})^{−1} in place of V(t); the same proof goes through in the present case.) It then follows that for t small,

U_a^1(x,t) ≤ V(t) exp(−k a²/t), for x ∈ B(0,a),   (2.23)

with some k > 0. From (2.19), (2.21) and (2.23), one obtains

U_a(x,t) ≥ V(t)(1 − exp(−k a²/t)), for x ∈ B(0,a).   (2.24)

Using (2.19),

V(t)(1 − exp(−k a²/t)) ∼ V(t) ∼ 1/(αt) as t → 0,   (2.25)

and the last function is not integrable at zero, which completes the proof. □

The proof of (2.18) has thus been completed. In order to complete the proof of the proposition, we must show how (2.18) implies (2.17). Let m be the random number of particles with expectation M as in (2.18). Obviously, it is enough to show that

M ⊗ P̂_{mδ_0}(T̂ > t) ≤ P̂_{δ_0}(T̂ > t).

For this, denote

p_{i,t} := P̂_{iδ_0}(T̂ > t),  i = 1, 2, ...,

where T̂ is as in (2.14). Using the independence of the particles,

M ⊗ P̂_{mδ_0}(T̂ > t) = Σ_{i=1}^∞ p_{i,t} ((β/α)^i e^{−β/α}) / ((1 − e^{−β/α}) i!)
= Σ_{i=1}^∞ (((β/α) p_{1,t})^i e^{−β/α}) / ((1 − e^{−β/α}) i!) = (e^{−β/α}/(1 − e^{−β/α})) (exp((β/α) p_{1,t}) − 1).

By the convexity of the exponential function,

e^{(β/α)x} − 1 ≤ (e^{β/α} − 1) x, for 0 ≤ x ≤ 1,

and thus

M ⊗ P̂_{mδ_0}(T̂ > t) ≤ (e^{−β/α}/(1 − e^{−β/α})) (e^{β/α} − 1) p_{1,t} = p_{1,t}.

This completes the proof of Proposition 2. □

Proof of Proposition 3. By (2.14) we must show that

limsup_{t→∞} (1/t) log E ⊗ P̂_{δ_0}(T̂ > t) ≤ −β.   (2.26)

The strategy for proving (2.26) is as follows. We will split the time interval [0,t] into two parts: [0, δt] and (δt, t]. In the first part we will use that 'many' particles are produced; in the second one, that each particle hits the traps with 'large' probability. Then the probability of avoiding the traps in [0,t] will be 'small'. To make things more precise, let T be as in the paragraph preceding (1.3) and denote Y_x^t = P_x^W(T > t), where P_x^W corresponds to W started at x. Then for any x ∈ R^d and t > 0 fixed, Y_x^t = Y_x^t(ω) is a random variable. As before, Ê_{δ_0} denotes the expectation corresponding to Z. Let Z(t) = (Z_1(t), Z_2(t), ..., Z_k(t)), t > 0, k ≥ 1, on {|Z(t)| = k}, where |Z(t)| denotes the number of particles alive at t. Using a well-known formula (see e.g. [KT 75, formula 8.11.5] and the discussion afterwards) for the distribution of |Z(t)|, one has

q_{k,s} := P̂_{δ_0}(|Z(s)| = k) = e^{−βs}(1 − e^{−βs})^{k−1} = c_s (1 − e^{−βs})^k,

where

c_s := e^{−βs}/(1 − e^{−βs}).

1 e

Fix  2 (0; 1). By the Markov prop erty,

1

X

(1 )t (1 )t

^ ^ ^

q  Y :::Y : (2.27) E  P (T>t)  E  E

k;t  

0 0

Z (t) Z (t)

1

k

k =1

At this p ointwe note that we are going to work with exp ectations of nonnegative

series like the right-hand side of (2.27) which in principle might sum up to +1.After

anumb er of upp er estimates however wewillshow that the last one is in fact nite for

t large enough. Using the inequalitybetween the arithmetic and the geometric means,

 

1

   

X

k k

1

(1 )t (1 )t

^ ^ ^

( q E  P T>t)  Y +  + Y  E : E

k;t  

0 0

Z (t) Z (t)

1

k

k

k =1

In fact, the right-hand expression can b e much simplied since the spatial homogeneity

of ! implies that

   

k k

(1 )t (1 )t

^

; 8 1  i  k: = E Y  E Y E



0 0

Z (t)

i

Denote p_t := Y_0^t, t > 0, and recall that p_t is the probability that a single Brownian particle starting at zero avoids the traps up to time t. We have

E ⊗ P̂_{δ_0}(T̂ > t) ≤ E Σ_{k=1}^∞ q_{k,δt} p^k_{(1−δ)t} = c_{δt} E Σ_{k=1}^∞ [(1 − e^{−βδt}) p_{(1−δ)t}]^k
≤ c_{δt} E Σ_{k=0}^∞ p^k_{(1−δ)t} = c_{δt} E (1/(1 − p_{(1−δ)t})).

We are going to show that there exist L > 0 and t_0 > 0 such that

k_t := E (1/(1 − p_t)) ≤ L, for t ≥ t_0.   (2.28)

If (2.28) holds then we are done, because from (2.28) we obtain that

E ⊗ P̂_{δ_0}(T̂ > t) ≤ c_{δt} · k_{(1−δ)t} ≤ c_{δt} · L,

whenever (1 − δ)t ≥ t_0, that is, whenever t ≥ t_0/(1 − δ). Then

limsup_{t→∞} (1/t) log E ⊗ P̂_{δ_0}(T̂ > t) ≤ limsup_{t→∞} (1/t) log(c_{δt} · L) = limsup_{t→∞} (1/t) log c_{δt} = −βδ,

and (2.26) follows by letting δ → 1.

It remains to show (2.28). Let θ_n denote the first exit time of W from the d-dimensional n-box,

θ_n := inf{t ≥ 0 | W(t) ∉ [−n, n]^d}.

Let P_0^d denote the probability corresponding to W in d dimensions, starting from the origin. Then, as is well known (see [KT 75, formula 7.3.3]), for d = 1,

P_0^1(θ_n ≤ t) = √(2/π) ∫_{n/√t}^∞ e^{−y²/2} dy,

and thus, by the independence of the coordinates,

P_0^d(θ_n ≤ t) = 1 − [1 − P_0^1(θ_n ≤ t)]^d = 1 − [1 − √(2/π) ∫_{n/√t}^∞ e^{−y²/2} dy]^d.

It follows from the binomial theorem that, for t > 0 fixed,

P_0^d(θ_n ≤ t) ∼ d √(2/π) ∫_{n/√t}^∞ e^{−y²/2} dy as n → ∞.

Using L'Hospital's rule,

∫_{n/√t}^∞ e^{−y²/2} dy ∼ (√t/n) e^{−n²/(2t)} as n → ∞.

Therefore,

P_0^d(θ_n ≤ t) ∼ d √(2/π) (√t/n) e^{−n²/(2t)} as n → ∞.   (2.29)
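The L'Hospital step above is the standard Gaussian-tail asymptotic ∫_z^∞ e^{−y²/2} dy ∼ z^{−1} e^{−z²/2} as z → ∞, and it can be checked numerically through the complementary error function, since ∫_z^∞ e^{−y²/2} dy = √(π/2) · erfc(z/√2):

```python
import math

def gaussian_tail(z):
    # ∫_z^∞ e^{-y^2/2} dy, expressed through the complementary error function
    return math.sqrt(math.pi / 2.0) * math.erfc(z / math.sqrt(2.0))

t = 1.0
for n in (10.0, 20.0):
    z = n / math.sqrt(t)
    approx = (math.sqrt(t) / n) * math.exp(-n * n / (2.0 * t))
    ratio = gaussian_tail(z) / approx
    # the relative error of the asymptotic is of order 1/z^2
    assert abs(ratio - 1.0) < 2.0 / (z * z)
```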

Let v_a denote the volume of the a-ball in d dimensions and let C_n denote the event that there is no point of ω in the n-box, that is, C_n := {ω([−n, n]^d) = 0}. Let C_n^c denote the complement of C_n. Since P(∪_{n=1}^∞ C_n^c) = 1,

E (1/(1 − p_t)) ≤ Σ_{n=1}^∞ P(C_{n−1} ∖ C_n) · [P_0^d(θ_n ≤ t) · E ⊗ P_0^d(W(θ_n) ∈ K)]^{−1}
≤ Σ_{n=1}^∞ P(C_{n−1}) · [e^{−ν v_a} P_0^d(θ_n ≤ t)]^{−1}
≤ Σ_{n=1}^∞ exp(−ν[2(n−1)^d − v_a]) · [P_0^d(θ_n ≤ t)]^{−1},

where in the last step we used P(C_{n−1}) = exp(−ν(2(n−1))^d) ≤ exp(−2ν(n−1)^d). Putting this together with (2.29), we obtain that a sufficient condition for E(1/(1 − p_t)) < ∞ is that

Σ_{n=1}^∞ n exp(n²/(2t) − 2ν(n−1)^d) < ∞.   (2.30)

Now, if d ≥ 3, then (2.30) holds for any t > 0, and thus E(1/(1 − p_t)) < ∞ for all t > 0. If d = 2, then (2.30) holds whenever t > t_0 := 1/(4ν). This proves (2.28) and completes the proof of Proposition 3. □

Proof of Proposition 4. We will need the following lemma.

Lemma 2 Denote

A_{n,t} := {X(s; B^c(0,n)) = 0, s ≤ t}.   (2.31)

(i) For each n ∈ N there exists a minimal positive solution u_n to

u_t = (1/2)Δu + βu − αu², (x,t) ∈ B(0,n) × (0,∞),
lim_{x→∂B(0,n)} u(x,t) = ∞, t > 0,
lim_{t→0} u(x,t) = 0, x ∈ B(0,n).   (2.32)

Moreover,

P_{δ_x}(A_{n,t}) = e^{−u_n(x,t)}.   (2.33)

(ii) For each n ∈ N there exists a minimal positive solution û_n to

u_t = (1/2)Δu − βu − αu², (x,t) ∈ B(0,n) × (0,∞),
lim_{x→∂B(0,n)} u(x,t) = ∞, t > 0,
lim_{t→0} u(x,t) = 0, x ∈ B(0,n).   (2.34)

Moreover,

P_{δ_x}(A_{n,t} | E) = e^{−û_n(x,t)}.   (2.35)

For the proof of (i), see the proof of [P 95a, formula 1.3] and [P 95b, formula 6]. The proof of (ii) is similar.

Let A_{n,t} be as in (2.31). Our goal is to show that if t → ∞ and n → ∞ simultaneously, and in such a way that n^d = o(t), then

liminf_{t→∞} (1/t) log P_{δ_0}(A_{n,t} | S) ≥ −β.   (2.36)

Indeed, if (2.36) is true, then using the notation v_1 for the volume of the d-dimensional unit ball (and suppressing the dependence of n on t in the notation),

liminf_{t→∞} (1/t) log E ⊗ P_{δ_0}(T > t | S)
≥ liminf_{t→∞} (1/t) log [P_{δ_0}(A_{n,t} | S) · P{there are no traps in B(0,n)}]
= liminf_{t→∞} (1/t) log [P_{δ_0}(A_{n,t} | S) · exp(−ν v_1 n^d)]
= liminf_{t→∞} (1/t) log P_{δ_0}(A_{n,t} | S) ≥ −β,

and (1.11) follows from this and (2.13).

In the rest of this paper the following assumption will be in force.

Assumption 1 n(t) → ∞ and n^d(t) = o(t) as t → ∞.

Notation 4 In the sequel we will mostly suppress the dependence of n on t in the notation and simply write n instead of n(t); we will use n(t) only when it is important to remind the reader that n is a function of t.

We have

P_{δ_x}(A_{n,t} | S) = (P_{δ_x}(A_{n,t})/P_{δ_x}(S)) · [1 − P_{δ_x}(E | A_{n,t})]
= (P_{δ_x}(A_{n,t})/P_{δ_x}(S)) · [1 − P_{δ_x}(A_{n,t} | E) · P_{δ_x}(E)/P_{δ_x}(A_{n,t})].

Using Lemma 2, it follows that

P_{δ_x}(A_{n,t} | S) = (P_{δ_x}(A_{n,t})/P_{δ_x}(S)) · (1 − exp[−û_n(x,t) − β/α + u_n(x,t)]).   (2.37)

Before proving (2.36), first note that obviously

lim_{t→∞} P_{δ_0}(A_{n,t} | E) = 1, whenever lim_{t→∞} n(t) = ∞.

Therefore, by Lemma 2(ii),

lim_{t→∞} û_n(0,t) = 0, whenever lim_{t→∞} n(t) = ∞.   (2.38)

By Theorem A(a)(i-ii) in [P 95b] it follows that

lim_{t→∞} u_n(0,t) = β/α,   (2.39)

whenever lim_{t→∞} n(t) = ∞ and lim_{t→∞} n(t)/t = 0 hold. We conclude that

lim_{t→∞} [û_n(0,t) + β/α − u_n(0,t)] = 0,   (2.40)

whenever lim_{t→∞} n(t) = ∞ and lim_{t→∞} n(t)/t = 0 hold.

By Lemma 2(i) and (2.39), P_{δ_0}(A_{n,t}) → e^{−β/α} under Assumption 1. From this and (2.37) we obtain that

P_{δ_0}(A_{n,t} | S) ∼ const · [û_n(0,t) + β/α − u_n(0,t)], as t → ∞,   (2.41)

whenever lim_{t→∞} n(t) = ∞ and lim_{t→∞} n(t)/t = 0 hold. Therefore, in order to prove (2.36), it will suffice to prove that

liminf_{t→∞} (1/t) log g_n(0,t) ≥ −β,   (2.42)

where

g_n(x,t) := û_n(x,t) + β/α − u_n(x,t).

Note that g_n ≥ 0 by (2.37).

The next observation is that we may assume that α = β. To see this, note that if v_n and v̂_n denote the minimal positive solutions to (2.32) and (2.34), respectively, but with α replaced by β, then in fact (as a simple computation shows) v_n = (α/β) u_n and v̂_n = (α/β) û_n. Therefore, if g*_n(x,t) := v̂_n(x,t) + 1 − v_n(x,t), then g*_n = (α/β) g_n. This means that if (2.36) holds for α = β, then it also holds for α ≠ β. So, from now on, for simplicity, we will work with α = β and

g_n = û_n + 1 − u_n.

A key step in the proof is to observe that, as a direct computation reveals, 0 ≤ g_n satisfies the following parabolic equation:

g_t = (1/2)Δg − β(1 + 2û_n)g + βg², (x,t) ∈ B(0,n) × (0,∞),
lim_{t→0} g(·,t) = 1.   (2.43)
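The direct computation is short. Writing û = û_n, u = u_n and g = û + 1 − u, and using the two equations of Lemma 2 with α = β:

```latex
\begin{aligned}
g_t &= \hat u_t - u_t
     = \tfrac12\Delta(\hat u - u) - \beta(\hat u + u) - \beta(\hat u^2 - u^2) \\
    &= \tfrac12\Delta g - \beta(\hat u + u)\bigl(1 + \hat u - u\bigr)
     = \tfrac12\Delta g - \beta(\hat u + u)\,g ,
\end{aligned}
```

and since g = û + 1 − u gives û + u = 1 + 2û − g, the right-hand side equals (1/2)Δg − β(1 + 2û)g + βg². The initial condition holds because both u and û tend to 0 as t → 0.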

In order to analyze (2.43), we will need some more facts concerning u_n and û_n. First, note that by (2.33) and (2.35), u_n and û_n are monotone nondecreasing in t. Let

φ_n(·) := lim_{t→∞} u_n(·,t), and φ̂_n(·) := lim_{t→∞} û_n(·,t).

Then, using (2.33) and (2.35), it follows that

P_{δ_x}(X(t; B^c(0,n)) = 0, ∀ t ≥ 0) = e^{−φ_n(x)}, x ∈ B(0,n),   (2.44)

and

P_{δ_x}(X(t; B^c(0,n)) = 0, ∀ t ≥ 0 | E) = e^{−φ̂_n(x)}, x ∈ B(0,n).   (2.45)

Using this, it follows from Lemma 7.1 in [EP 97] that φ_n and φ̂_n are finite. We claim that in fact

φ̂_n = φ_n − 1.   (2.46)

To see this, denote

A_n := {X(t; B^c(0,n)) = 0, ∀ t ≥ 0}.

By [EP 97, formula 3.8],

P_{δ_x}(A_n ∩ S) = 0, x ∈ B(0,n).

Therefore

P_{δ_x}(A_n ∩ E) = P_{δ_x}(A_n),

and thus

P_{δ_x}(A_n | E) = P_{δ_x}(A_n) · (1/P_{δ_x}(E)), x ∈ B(0,n).

By (1.5), (2.44) and (2.45) then (recall that α = β, so P_{δ_x}(E) = e^{−1}),

e^{−φ̂_n} = e^{−φ_n} · e,

which is (2.46).

Now let us return to (2.43). In order to prove (2.42), first we will compare g_n to the unique nonnegative solution v_n of the linear equation

v_t = (1/2)Δv − β(1 + 2φ̂_n)v, (x,t) ∈ B(0,n) × (0,∞),
lim_{t→0} v(·,t) = 1,
lim_{|x|→n} v(x,t) = 0, ∀ t > 0.   (2.47)

(As is well known, the existence of the solution to (2.47) is guaranteed by the negative sign of the zeroth-order term on the right-hand side of the first equation of (2.47); the uniqueness and the non-negativity follow by the parabolic maximum principle; see e.g. [PW 84] for more elaboration.) Using (2.43) along with the fact

that û_n(·,t) ≤ φ̂_n(·) for all t > 0 and n ≥ 1, we have

(g_n)_t − (1/2)Δg_n + β(1 + 2φ̂_n)g_n ≥ 0, (x,t) ∈ B(0,n) × (0,∞),
lim_{t→0} g_n(·,t) = 1.   (2.48)

Therefore, by the parabolic maximum principle again,

g_n ≥ v_n, ∀ n ≥ 1,

and consequently it will suffice to prove (2.42) with g_n replaced by v_n. To do this, it will be useful to rescale (2.47). Let

w_n(x,t) := v_n(nx, n²t), for (x,t) ∈ B(0,1) × [0,∞).

Then a simple computation shows that 0 ≤ w_n satisfies the rescaled equation

(w_n)_t(x,t) = (1/2)Δw_n(x,t) − n²β(1 + 2φ̂_n(nx)) w_n(x,t), (x,t) ∈ B(0,1) × (0,∞),
lim_{t→0} w_n(·,t) = 1,
lim_{|x|→1} w_n(x,t) = 0, ∀ t > 0.   (2.49)

We note that φ_n is in fact the minimal positive solution to the elliptic boundary-value problem

(1/2)Δφ + βφ − βφ² = 0, in B(0,n),
lim_{|x|→n} φ(x) = ∞,   (2.50)

and satisfies the upper estimate

φ_n(x) ≤ 1 + cn²/(n² − r²)²,

with some c = c(d, β), where r := |x|. (See Proposition 1 in [P 95a] and its proof.) It then follows from (2.46) that

φ̂_n(nx) = φ_n(nx) − 1 ≤ cn²/(n² − n²r²)² = c/(n²(1 − r²)²).   (2.51)

By (2.49) and the Feynman-Kac formula ([F 85], Theorem 2.2),

w_n(x,t) = E_x[exp(−n²β ∫_0^t (1 + 2φ̂_n(nW(s))) ds); σ_1 > t],

where W is the d-dimensional Wiener process and

σ_1 := inf{t ≥ 0 | W(t) ∉ B(0,1)}.

Therefore,

v_n(0,t) = w_n(0, t/n²)
= E_0[exp(−n²β ∫_0^{t/n²} (1 + 2φ̂_n(nW(s))) ds); σ_1 > t/n²]
= e^{−βt} E_0[exp(−2βn² ∫_0^{t/n²} φ̂_n(nW(s)) ds); σ_1 > t/n²].

Using this along with (2.51), we obtain

v_n(0,t) ≥ e^{−βt} E_0[exp(−∫_0^{t/n²} (2βc/(1 − |W(s)|²)²) ds); σ_1 > t/n²].

Using the notation

V(x) := 2βc/(1 − |x|²)²,

it follows that

liminf_{t→∞} (1/t) log v_n(0,t) ≥ −β + liminf_{t→∞} (1/n²)(n²/t) log E_0[exp(−∫_0^{t/n²} V(W(s)) ds); σ_1 > t/n²].   (2.52)

First consider d ≥ 2. Then, by Assumption 1, t/n²(t) → ∞ as t → ∞. For ε ∈ (0,1), let λ_ε^{(1)} denote the leading eigenvalue of the operator (1/2)Δ − V on B(0, 1−ε) with Dirichlet boundary data. Since V is smooth on B̄(0, 1−ε), by a well-known theorem ([P 95c], Theorem 3.6.1),

lim_{t/n²→∞} (n²/t) log E_0[exp(−∫_0^{t/n²} V(W(s)) ds); σ_{1−ε} > t/n²] = λ_ε^{(1)},

where σ_{1−ε} is defined analogously to σ_1. Since λ_ε^{(1)} > −∞, then, a fortiori,

liminf_{t/n²→∞} (n²/t) log E_0[exp(−∫_0^{t/n²} V(W(s)) ds); σ_1 > t/n²] > −∞.

Putting this together with (2.52) and with the fact that lim_{t→∞} n(t) = ∞, we obtain that

liminf_{t→∞} (1/t) log v_n(0,t) ≥ −β.

This completes the proof for d ≥ 2.

2

Finally,even if d =1and t=n (t) !1is not satised as t !1, the ab ove

argument still works b ecause the exp ectation in the right hand side of (2.52) is clearly

2

monotone nonincreasin g with t=n . 

Acknowledgements. A substantial part of this paper was written during my stay in Zurich. I am grateful to the University of Zurich and to ETH Zurich for inviting me. The topic presented in this paper was discussed from different aspects with E. Bolthausen, L. Erdős, R. Pinsky, A.-S. Sznitman and M. Wüthrich. I owe thanks to all of them.

References

[D 93] D. A. Dawson, Measure-Valued Markov Processes, École d'Été de Probabilités de Saint-Flour XXI, LNM 1541, 1-260, 1993.

[De 99] J.-F. Delmas, Some properties of the range of super-Brownian motion, preprint.

[Dy 93] E. B. Dynkin, Superprocesses and partial differential equations, Ann. Probab. 21 (1993), 1185-1262.

[DV 75] M. Donsker and S. R. S. Varadhan, Asymptotics for the Wiener sausage, Comm. Pure Appl. Math. 28 (1975), 525-565.

[EO 94] S. Evans and N. O'Connell, Weighted occupation time for branching particle systems and a representation for the supercritical superprocess, Canad. Math. Bull. 37 (1994), 187-196.

[EP 97] J. Engländer and R. G. Pinsky, On the construction and support properties of measure-valued diffusions on D ⊆ R^d with spatially dependent branching, to appear in Ann. Probab.

[F 85] M. Freidlin, Functional Integration and Partial Differential Equations, Princeton Univ. Press, 1985.

[I 88] I. Iscoe, On the supports of measure-valued critical branching Brownian motion, Ann. Probab. 16 (1988), 200-221.

[KL 74] M. Kac and J. M. Luttinger, Bose-Einstein condensation in the presence of impurities II, J. Math. Phys. 15(2) (1974), 183-186.

[KT 75] S. Karlin and M. Taylor, A First Course in Stochastic Processes, Academic Press, 1975.

[P 95a] R. G. Pinsky, K-P-P-type asymptotics for nonlinear diffusion in a large ball with infinite boundary data and on R^d with infinite initial data outside a large ball, Comm. Partial Differential Equations 20 (1995), 1369-1393.

[P 95b] R. G. Pinsky, On the large time growth rate of the support of supercritical super-Brownian motion, Ann. Probab. 23 (1995), 1748-1754.

[P 95c] R. G. Pinsky, Positive Harmonic Functions and Diffusion, Cambridge University Press, 1995.

[P 96] R. G. Pinsky, Transience, recurrence and local extinction properties of the support for supercritical finite measure-valued diffusions, Ann. Probab. 24 (1996), 237-267.

[PW 84] M. Protter and H. Weinberger, Maximum Principles in Differential Equations, Springer, Berlin, 1984.

[Sz 98] A.-S. Sznitman, Brownian Motion, Obstacles and Random Media, Springer, 1998.

[T 94] R. Tribe, A representation for super-Brownian motion, Stoch. Process. Appl. 51 (1994), 207-219.