arXiv:math/0410110v1 [math.PR] 5 Oct 2004

The Annals of Probability, 2004, Vol. 32, No. 3A, 2099–2148
DOI: 10.1214/009117904000000685
© Institute of Mathematical Statistics, 2004
This is an electronic reprint of the original article published by the Institute of Mathematical Statistics; the reprint differs from the original in pagination and typographic detail.

POTENTIAL THEORY FOR HYPERBOLIC SPDES

By Robert C. Dalang and Eulalia Nualart

Ecole Polytechnique Fédérale de Lausanne

We give general sufficient conditions which imply upper and lower bounds for the probability that a multiparameter process hits a given set E in terms of a capacity of E related to the process. This extends a result of Khoshnevisan and Shi [Ann. Probab. 27 (1999) 1135–1159], where estimates for the hitting probabilities of the (N, d) Brownian sheet in terms of the (d − 2N) Newtonian capacity are obtained, and readily applies to a wide class of Gaussian processes. Using Malliavin calculus and, in particular, a result of Kohatsu-Higa [Probab. Theory Related Fields 126 (2003) 421–457], we apply these general results to the solution of a system of nonlinear hyperbolic stochastic partial differential equations with two variables. We show that under standard hypotheses on the coefficients, the hitting probabilities of the solution are bounded above and below by constants times the (d − 4) Newtonian capacity. As a consequence, we characterize polar sets for this process and prove that the Hausdorff dimension of its range is min(d, 4) a.s.

Received September 2002; revised August 2003.
Supported in part by the Swiss National Foundation for Scientific Research.
AMS 2000 subject classifications. Primary 60H15, 60J45; secondary 60H07, 60G60.
Key words and phrases. Potential theory, nonlinear hyperbolic SPDE, multiparameter process, Gaussian process, hitting probability.

1. Introduction. In this article, we are interested in the following basic problem of potential theory for R^d-valued multiparameter stochastic processes: given E ⊂ R^d, does this process visit (or hit) E with positive probability? Sets that, with probability 1, are not visited are said to be polar for the process, and are otherwise nonpolar. One objective is to relate these hitting probabilities to an analytic expression which is determined by the "geometry" of the set, namely the capacity of the set. Another objective is to characterize polar sets for the process. In this article, our main goal is to address these questions for non-Gaussian processes that are solutions of nonlinear hyperbolic stochastic partial differential equations (SPDEs) in the plane (a wide class of Gaussian processes is also considered).

There is a large literature about potential theory for multiparameter processes. For multiparameter processes whose components are independent single-parameter Markov processes, Fitzsimmons and Salisbury [7] obtained upper and lower bounds on hitting probabilities in terms of a notion of energy of a set. This type of multiparameter process arises in the study of multiple points of single-parameter processes. Song [26] characterized polar sets for the n-parameter Ornstein–Uhlenbeck process on a separable Fréchet Gaussian space as sets of null c_{n,2}-capacity, where the capacity c_{n,2} is defined in a variational form. Hirsch and Song [8] obtained bounds for the hitting probabilities of a class of multiparameter symmetric Markov processes in terms of capacity, also in a variational form.

However, in these references, the class of multiparameter Markov processes does not readily cover certain basic multiparameter processes such as the Brownian sheet or the fractional Brownian sheet. In [10], Khoshnevisan developed a potential theory for a class of multiparameter Markov processes which includes these processes.

This article is essentially motivated by the work of Khoshnevisan and Shi [11], who obtained bounds for hitting probabilities of the Brownian sheet. In particular, if W = (W_t, t ∈ R^N_+) denotes an R^d-valued Brownian sheet, they showed that for any compact subset A of R^d and any 0 < a < b < ∞, there exist finite positive constants K_1 and K_2 such that

K_1 Cap_{d−2N}(A) ≤ P{∃t ∈ [a, b]^N : W_t ∈ A} ≤ K_2 Cap_{d−2N}(A),

where Cap_{d−2N} denotes the (d − 2N) Newtonian capacity.
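The hitting probability appearing in bounds of this type is easy to explore by simulation. The following sketch (ours, not from the paper; the grid size, time horizon, sample count and target ball are arbitrary illustrative choices) estimates P{∃t ∈ [a, b]² : W_t ∈ A} for a two-parameter Brownian sheet and a Euclidean ball A:

```python
import numpy as np

def brownian_sheet(n, T, d, rng):
    """Simulate a d-dimensional two-parameter Brownian sheet on an n x n grid
    of [0, T]^2 by summing independent Gaussian cell masses over both axes."""
    dt = T / n
    dW = rng.normal(scale=dt, size=(n, n, d))  # each cell has variance dt^2 = its area
    return dW.cumsum(axis=0).cumsum(axis=1)    # W[i, j] ~ W at ((i+1)dt, (j+1)dt)

def hit_prob(center, radius, a, b, n=64, T=2.0, d=2, reps=400, seed=1):
    """Monte Carlo estimate of P{exists t in [a,b]^2 : W_t in B(center, radius)}."""
    rng = np.random.default_rng(seed)
    dt = T / n
    lo, hi = int(a / dt), int(b / dt)
    hits = 0
    for _ in range(reps):
        W = brownian_sheet(n, T, d, rng)
        patch = W[lo:hi, lo:hi]                # restrict to the parameter box [a,b]^2
        dist = np.linalg.norm(patch - np.asarray(center), axis=-1)
        hits += bool((dist < radius).any())
    return hits / reps

# Same seed => identical sheets, so enlarging the target can only raise the estimate.
p_small = hit_prob(center=(0.5, 0.5), radius=0.1, a=0.5, b=1.5)
p_large = hit_prob(center=(0.5, 0.5), radius=0.4, a=0.5, b=1.5)
print(p_small, p_large)
```

Because both estimates reuse the same simulated sheets, the monotonicity p_small ≤ p_large holds path by path, not just on average.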

As a first application of the general result of Theorem 2.4, we consider multiparameter Gaussian processes in Section 3. We give sufficient conditions on the covariance function of a Gaussian process which imply bounds for hitting probabilities in terms of the Newtonian capacity (Theorem 3.1). This theorem contains many results that exist in the literature and readily applies to multiparameter Gaussian processes such as the Brownian sheet, α-regular Gaussian fields, the Ornstein–Uhlenbeck sheet and the fractional Brownian sheet. For the second and the fourth process, we obtain only a lower bound: the upper bound cannot be obtained from Theorem 3.1 since such processes are not necessarily adapted to a commuting filtration. In Section 4, we apply techniques of Malliavin calculus and, in particular, the very recent result of Kohatsu-Higa [13] to the system of nonlinear hyperbolic SPDEs,

(1.1) ∂²X_t^i/(∂t_1 ∂t_2) = Σ_{j=1}^d σ_j^i(X_t) ∂²W_t^j/(∂t_1 ∂t_2) + b^i(X_t), t = (t_1, t_2) ∈ R²_+,
X_t^i = x_0^i if t_1 t_2 = 0, 1 ≤ i ≤ d,

where W = (W^j, j = 1, ..., d) is a two-parameter d-dimensional Wiener process, the second-order mixed derivative of W^j is the white noise on the plane and σ_j^i, b^i are smooth functions on R^d. It is known [21] that (1.1) has a unique continuous solution X = (X_t, t ∈ R²_+). In this article, we consider this system of equations in the integral form (4.7), as it was studied in [21].

In the case b ≡ 0, under some regularity and strong ellipticity conditions on the matrix σ (Conditions P1 and P2), we give in Proposition 4.13 an upper bound of Gaussian type for the conditional density of an increment of X given the past. This uses existing techniques of Malliavin calculus that are adapted to the present context (cf. [17] and [18], Chapter 2). We then use the result of Kohatsu-Higa [13] to establish a Gaussian-type lower bound for the density of X_t for any t away from the axes (Theorem 4.14) and a Gaussian-type lower bound for the conditional density of an increment of the process given the past (Theorem 4.16).

In the last section, we apply the results obtained in Sections 2–4 to the solution of system (1.1). In the case b ≡ 0, we prove in Theorem 5.1 that under Conditions P1 and P2 introduced in Section 4.4, the hitting probabilities of the solution can be bounded above and below in terms of the (d − 4) Newtonian capacity. The verification of Hypothesis H2 uses the upper bound of Gaussian type obtained in Section 4.4. The main effort in proving Theorem 5.1 lies in the verification of Hypothesis H3, which uses the lower bounds of Gaussian type for the density of the solution obtained in Section 4.5. These Gaussian-type lower bounds also imply the positivity of the density of the solution, and hence the verification of Hypothesis H1. We treat the case b ≢ 0 via a change of probability measure (see Corollary 5.3). As a consequence of Corollary 5.3, we prove in Corollaries 5.4 and 5.5 that polar sets for the solution to (1.1) are those of (d − 4) Newtonian capacity zero, that the Hausdorff dimension of the range of the solution is min(d, 4) almost surely and that its stochastic codimension is (d − 4)^+. Finally, we identify d = 3 as the critical dimension for the solution to hit points in R^d (see Corollary 5.6). Notice that we obtain the same zero-capacity condition for polarity and Hausdorff dimension obtained by Dynkin [6], LeGall [16] and Perkins [24] for super-Brownian motion. However, there is no direct connection between this work and theirs.

2. General theory. For any s, t ∈ R^N_+, we write s ≤ t when s_i ≤ t_i for all i = 1, ..., N, where s_i denotes the ith coordinate of s, and we write s ∧ t for the coordinatewise minimum of s and t. Let (Ω, G, P) be a complete probability space, equipped with a filtration F = (F_t, t ∈ R^N_+) satisfying, in particular, the following commutation condition:

(iii) For every s, t ∈ R^N_+ and for all bounded, F_t-measurable random variables Y,

E[Y | F_s] = E[Y | F_{s∧t}] a.s.

Note that when N = 2, (iii) is hypothesis (F4) of Cairoli and Walsh [1]. For N > 2, hypothesis (iii) appears in [10], Chapter 7, Section 2.1.

Let X = (X_t, t ∈ R^N_+) be a continuous R^d-valued process defined on (Ω, G, P) and not necessarily adapted to F. We suppose that for all s, t ∈ (0, +∞)^N with t ≠ s, the distribution of the random variable (X_t, X_s) has a density that is denoted p_{X_t,X_s}(x, y). We write p_{X_t}(x) for the density of the random variable X_t for all t ∈ (0, +∞)^N.

Given a Borel subset E of R^d, we denote by P(E) the collection of all probability measures on R^d with support in E.

Following [10], Appendix D, we say that k(·) is a kernel in R^d (or a gauge function) if k(·) is an even, nonnegative and locally integrable function on R^d which is continuous on R^d \ {0} and positive in a neighborhood of the origin. Basic examples of kernels are the Newtonian β-kernels k_β(·) (see Section 3.1). Given a kernel k(·), for any µ ∈ P(E), we write

E_k(µ) = ∫_{R^d} ∫_{R^d} k(x − y) µ(dx) µ(dy)

and term this quantity the k-energy of µ. The k-capacity of E is defined by

Cap_k(E) = 1 / inf_{µ∈P(E)} E_k(µ).

The following properties of Cap_k(·) are given in [10], Appendix D, in particular, Lemma 2.1.2 there.

Lemma 2.1. Cap_k(·) has the following properties:

(a) Monotonicity. For any two Borel subsets E_1 ⊂ E_2 of R^d, Cap_k(E_1) ≤ Cap_k(E_2).
(b) Outer regularity on compact sets. For any sequence E, E_1, E_2, ... of compact subsets of R^d such that E_n ↓ E, lim_{n→∞} Cap_k(E_n) = Cap_k(E).

Let P_0(E) denote the collection of all probability measures on R^d with support in E that are absolutely continuous with respect to Lebesgue measure. The absolutely continuous capacity Cap_k^0(E) of E with respect to k(·) is defined by

Cap_k^0(E) = 1 / inf_{µ∈P_0(E)} E_k(µ).

Since Cap_k^0 is not outer regular on compact sets (cf. [10], Appendix D, Section 2.2), we define, for all bounded Borel sets E ⊂ R^d,

Cap_k^{ac}(E) = inf{Cap_k^0(F) : F ⊃ E, F bounded and open}.

Then Cap_k(A) ≥ Cap_k^{ac}(A). We now state an additional condition on k (which is related to the classical notion of balayage; see [15], Chapter IV) that ensures that capacity and absolutely continuous capacity with respect to k agree on compact sets. Following [10], Appendix D, we say that a kernel k(·) on R^d is proper if, for all compact sets A ⊂ R^d and µ ∈ P(A), there exist bounded open sets A_1, A_2, ... such that:

1. A_n ↓ A.
2. For all large n ≥ 1, there exist absolutely continuous measures µ_n with support contained in A_n such that, for all ε > 0, there exists N_0 such that for all n ≥ N_0:
   (a) µ_n(A_n) ≥ 1 − ε;
   (b) ∫_{R^d} k(x − y) µ_n(dy) ≤ ∫_{R^d} k(x − y) µ(dy) for all x ∈ R^d.

Proposition 2.2 ([10], Appendix D, Theorem 2.3.1). Let k(·) be a proper kernel in R^d. Then, for all compact sets A ⊂ R^d, Cap_k(A) = Cap_k^{ac}(A).
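Since Cap_k(E) = 1/inf_µ E_k(µ), any particular measure on E yields a lower bound on the capacity. As a numerical illustration (ours, not part of the paper), the following sketch estimates the k_1-energy of the uniform measure on the unit sphere in R³; for this measure the energy is exactly 1, which is also the classical Newtonian capacity of the sphere:

```python
import numpy as np

rng = np.random.default_rng(0)

def k_beta(x, beta):
    """Newtonian beta-kernel k_beta(x) = ||x||^(-beta), for beta > 0."""
    return np.linalg.norm(x, axis=-1) ** (-beta)

def energy(points, beta):
    """Monte Carlo k_beta-energy of the empirical measure of `points`
    (off-diagonal pairs only, approximating the double integral)."""
    diffs = points[:, None, :] - points[None, :, :]
    iu = np.triu_indices(len(points), k=1)
    return k_beta(diffs[iu], beta).mean()

# Uniform points on the unit sphere in R^3: the k_1-energy of the uniform
# measure is exactly 1, so 1/energy estimates Cap_1(sphere) = 1.
pts = rng.normal(size=(2000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
e = energy(pts, beta=1.0)
print(1.0 / e)  # close to 1
```

The reciprocal of the energy of any trial measure is a lower bound on Cap_k; only for the energy-minimizing (equilibrium) measure is it the capacity itself.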

We now introduce the following hypotheses, which ensure a lower bound on hitting probabilities for X [see Theorem 2.4(a)].

Hypothesis H1. For all 0 < a < b < ∞ and M > 0, there exists a finite positive constant C_1 = C_1(a, b, M) such that for almost all x ∈ R^d with ‖x‖ ≤ M,

∫_{[a,b]^N} p_{X_t}(x) dt ≥ C_1.

Hypothesis H2. There exists a proper kernel k(·) in R^d such that for all 0 < a < b < ∞ and M > 0, there exists a finite positive constant C_2 = C_2(a, b, M) such that, for almost all x, y ∈ R^d with ‖x‖ ≤ M and ‖y‖ ≤ M,

∫_{[a,b]^N} ∫_{[a,b]^N} p_{X_t,X_s}(x, y) dt ds ≤ C_2 k(x − y).

In the case where X is adapted to F, for s ∈ (0, +∞)^N, let P_s(ω, ·) be a regular version of the conditional distribution of the process (X_t − X_s, t ∈ R^N_+ \ [0, s]) given F_s. If for almost all ω and all t ∈ R^N_+ \ [0, s], the law of X_t − X_s under P_s(ω, ·) is absolutely continuous with respect to Lebesgue measure on R^d, we let p_{s,t}(ω, x) denote the density of X_t − X_s under P_s(ω, ·). In this case, there is a null set N_s ∈ F_s such that for ω ∈ Ω \ N_s, E a Borel subset of R^d and s < t,

P_s(ω, {X_t − X_s ∈ E}) = ∫_E p_{s,t}(ω, x) dx.

In particular, p_{s,t}(ω, x) is a version of the conditional density of X_t − X_s given F_s. The function (ω, t, x) ↦ p_{s,t}(ω, x) can be chosen to be measurable.

Proposition 2.3. Let f : R^d × R^d → R be a nonnegative Borel function, let Y be an F_s-measurable random variable and suppose that E[f(X_t − X_s, Y)] < ∞. Then

E[f(X_t − X_s, Y) | F_s] = ∫_{R^d} f(x, Y) p_{s,t}(ω, x) dx a.s.

Proof. If f(x, y) = f_1(x) f_2(y), then using [5], Theorem 10.2.5, we have

E[f(X_t − X_s, Y) | F_s] = E[f_1(X_t − X_s) f_2(Y) | F_s]
= f_2(Y) E[f_1(X_t − X_s) | F_s]

= f_2(Y) ∫_{R^d} f_1(x) p_{s,t}(ω, x) dx a.s.
= ∫_{R^d} f(x, Y) p_{s,t}(ω, x) dx a.s.

One easily concludes the proof using a monotone class argument ([3], Chapter I, Theorem 21). □

We now introduce a third hypothesis, which leads to an upper bound on hitting probabilities for X [see Theorem 2.4(b)].

Hypothesis H3. For all 0 < a < b < ∞ and M > 0, there exists a finite positive constant C_3 = C_3(a, b, M) such that for all s ∈ [a, b]^N, a.s., for almost all x ∈ R^d,

∫_{[b,2b−a]^N} p_{s,t}(ω, x) dt ≥ C_3 k(x) 1_{{‖x+X_s‖≤M, ‖X_s‖≤M}}(ω),

where k(x) is the same kernel as in Hypothesis H2.

We are now ready to state the main result of this section.

Theorem 2.4. (a) Assuming Hypotheses H1 and H2, for all 0 < a < b < ∞ and M > 0, there exists a finite positive constant K_1 = K_1(a, b, M) such that for all compact sets A ⊂ {x ∈ R^d : ‖x‖ < M},

K_1 Cap_k(A) ≤ P{∃t ∈ [a, b]^N : X_t ∈ A}.

(b) If (X_t, t ∈ R^N_+) is adapted to a commuting filtration (F_t, t ∈ R^N_+), and Hypotheses H2 and H3 hold, then for all 0 < a < b < ∞ and M > 0, there exists a finite positive constant K_2 = K_2(a, b, M) such that for all compact sets A ⊂ {x ∈ R^d : ‖x‖ < M},

P{∃t ∈ [a, b]^N : X_t ∈ A} ≤ K_2 Cap_k(A).

Before proving Theorem 2.4, we mention an important consequence. Recall that a Borel set E ⊂ R^d is said to be polar for the process X if

P{∃t ∈ (0, +∞)^N : X_t ∈ E} = 0.

Corollary 2.5. For a process X adapted to a commuting filtration, under Hypotheses H1–H3, a compact subset E of R^d is polar for X if and only if Cap_k(E) = 0.

Proof. If E is polar for X, then clearly Cap_k(E) = 0 by Theorem 2.4(a). Conversely, suppose Cap_k(E) = 0. Write (0, +∞)^N = ⋃_{m∈N} [1/m, m]^N. By Theorem 2.4(b), for all m ≥ 1, there is K_2 < ∞ (depending on m) such that

P{∃t ∈ [1/m, m]^N : X_t ∈ E} ≤ K_2 Cap_k(E) = 0.

Since this holds for all m, E is polar for X. 

Proof of Theorem 2.4. (a) The lower bound. Suppose 0 < a < b < ∞ and M > 0 are fixed, and let A ⊂ {x ∈ R^d : ‖x‖ < M} be a compact set. For ε > 0, let A_ε = {x ∈ R^d : dist(x, A) < ε} denote the open ε-enlargement of A. For any probability density f on R^d supported in A_ε, define

J_{a,b}(f) = ∫_{[a,b]^N} f(X_t) dt.

By the Cauchy–Schwarz inequality,

(2.1) P{∃t ∈ [a, b]^N : X_t ∈ A_ε} ≥ P{J_{a,b}(f) > 0} ≥ (E[J_{a,b}(f)])² / E[(J_{a,b}(f))²].

Using Fubini's theorem, we easily deduce from Hypothesis H1 that

(2.2) E[J_{a,b}(f)] ≥ C_1

and from Hypothesis H2 that

(2.3) E[(J_{a,b}(f))²] ≤ C_2 E_k(f),

where E_k(f) denotes the k-energy of the measure f(x) dx. Applying (2.2) and (2.3) to (2.1), we obtain

P{∃t ∈ [a, b]^N : X_t ∈ A_ε} ≥ C_1² / (C_2 E_k(f)).

Take the supremum over all absolutely continuous f(x) dx ∈ P(A_ε) and see that for all ε > 0,

P{∃t ∈ [a, b]^N : X_t ∈ A_ε} ≥ (C_1²/C_2) Cap_k^{ac}(A_ε).

By Proposition 2.2, we can replace Cap_k^{ac}(A_ε) by Cap_k(A_ε) because k is proper. As ε → 0+, A_ε ↓ A, which is compact. By Lemma 2.1(b), Cap_k(A_ε) converges to Cap_k(A) as ε → 0+. Finally, since A is compact and the process t ↦ X_t is continuous,

⋂_{ε>0} {∃t ∈ [a, b]^N : X_t ∈ A_ε} = {∃t ∈ [a, b]^N : X_t ∈ A}.

We conclude that

P{∃t ∈ [a, b]^N : X_t ∈ A} ≥ (C_1²/C_2) Cap_k(A).

This concludes the proof of (a) of Theorem 2.4.
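The second-moment bound in (2.1) can be checked numerically. The sketch below (an illustration we add, with an arbitrary target set; it is not part of the paper's argument) simulates the occupation integral of a one-parameter Brownian motion and verifies P{J > 0} ≥ (E[J])²/E[J²]:

```python
import numpy as np

rng = np.random.default_rng(0)

# Second-moment (Cauchy-Schwarz) bound P{J > 0} >= (E[J])^2 / E[J^2], with
# J the occupation integral J = int_0^1 1_A(B_u) du of a one-parameter
# Brownian motion B and the arbitrary target set A = [1.0, 1.5].
n, reps = 500, 2000
dt = 1.0 / n
B = rng.normal(scale=np.sqrt(dt), size=(reps, n)).cumsum(axis=1)
J = ((B >= 1.0) & (B <= 1.5)).sum(axis=1) * dt   # occupation time of A per path

p_pos = (J > 0).mean()                    # estimates P{J > 0}: the path visits A
bound = J.mean() ** 2 / (J ** 2).mean()   # second-moment lower bound
print(p_pos, bound)
```

Because Cauchy–Schwarz also applies to the empirical distribution of the simulated sample, the inequality holds exactly for the estimates, not only in the large-sample limit.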

(b) The upper bound. Suppose 0 < a < b < ∞ and M > 0 are fixed. For a nonnegative Borel function f on R^d, consider the N-parameter martingale

M_t(f) = E[J_{b,2b−a}(f) | F_t], t ∈ [a, b]^N.

Since the filtration F is commuting, Cairoli's maximal inequality ([10], Chapter 7) implies that

E[ sup_{t∈[a,b]^N∩D^N} (M_t(f))² ] ≤ 4^N sup_{t∈[a,b]^N∩D^N} E[(M_t(f))²] ≤ 4^N E[(J_{b,2b−a}(f))²],

where D denotes the set of dyadic rationals. Suppose that f is a density function on R^d supported on {x ∈ R^d : ‖x‖ < M}. By Hypothesis H2 and (2.3), we get

(2.4) E[ sup_{t∈[a,b]^N∩D^N} (M_t(f))² ] ≤ 4^N C_2 E_k(f).

For ε ∈ (0, 1), define A_ε = {x ∈ R^d : dist(x, A) < ε}, the open ε-enlargement of A. Suppose ε is small enough so that A_ε ⊂ {x ∈ R^d : ‖x‖ < M}. We can assume that

(2.5) P{∃t ∈ [a, b]^N : X_t ∈ A_ε} > 0 for all ε > 0.

Indeed, if there exists an ε > 0 such that this probability is equal to zero, then the upper bound is trivial since

P{∃t ∈ [a, b]^N : X_t ∈ A} ≤ P{∃t ∈ [a, b]^N : X_t ∈ A_ε} for all ε > 0.

Assuming (2.5), we claim that there exists a random variable T^ε, taking values in ([a, b]^N ∩ D^N) ∪ {+∞}, such that

(2.6) {T^ε < ∞} ⟺ {∃t ∈ [a, b]^N ∩ D^N : X_t ∈ A_ε}

and X_{T^ε} ∈ A_ε on {T^ε < ∞}. Indeed, order the set [a, b]^N ∩ D^N = {q_1, q_2, ...} and define T^ε = q_{inf{k : X_{q_k} ∈ A_ε}}, where inf ∅ is defined to be +∞ and in this case T^ε = +∞. Note that assumption (2.5), the fact that A_ε is open and the continuity of t ↦ X_t imply that P{T^ε < ∞} > 0. In particular, it is possible to condition on the event {T^ε < ∞}.

For any Borel set E ⊂ R^d, define

µ_ε(E) = P{X_{T^ε} ∈ E | T^ε < ∞}.

Clearly µ_ε ∈ P(A_ε). Moreover, µ_ε is absolutely continuous because every X_t is: let f_ε(x) be the Radon–Nikodym derivative of µ_ε with respect to λ. Then f_ε is supported on A_ε. Applying (2.4) to f_ε, we have that for all ε ∈ (0, 1),

E[ sup_{t∈[a,b]^N∩D^N} (M_t(f_ε))² ] ≤ 4^N C_2 E_k(f_ε).

We claim that for all ε ∈ (0, 1) and all s ∈ [a, b]^N,

(2.7) M_s(f_ε) ≥ C_3 1_{{‖X_s‖≤M}} ∫_{R^d} f_ε(x + X_s) k(x) dx a.s.

Indeed, fix s ∈ [a, b]^N. Using the fact that X_s is F_s-measurable, Proposition 2.3 and Hypothesis H3, we have that for almost all ω ∈ Ω,

M_s(f_ε) = E[ ∫_{[b,2b−a]^N} f_ε((X_t − X_s) + X_s) dt | F_s ]

= ∫_{R^d} f_ε(x + X_s) ( ∫_{[b,2b−a]^N} p_{s,t}(ω, x) dt ) dx a.s.

≥ C_3 1_{{‖X_s‖≤M}} ∫_{R^d} f_ε(x + X_s) k(x) dx a.s.

The last inequality follows since f_ε is nonnegative and supported in A_ε. This completes the proof of (2.7).

Since (2.7) holds for all s ∈ [a, b]^N and since T^ε ∈ [a, b]^N, we can replace s by T^ε in (2.7). Note that {‖X_{T^ε}‖ ≤ M} holds on {T^ε < ∞}. Therefore,

(2.8) sup_{s∈[a,b]^N∩D^N} M_s(f_ε) ≥ C_3 1_{{T^ε<∞}} ∫_{R^d} f_ε(x + X_{T^ε}) k(x) dx a.s.

Square both sides of the last inequality, take expectations and apply (2.4) to the left-hand side, to obtain

4^N C_2 E_k(f_ε) ≥ C_3² P{T^ε < ∞} E[ ( ∫_{R^d} f_ε(x + X_{T^ε}) k(x) dx )² | T^ε < ∞ ]
= C_3² P{T^ε < ∞} ∫_{R^d} ( ∫_{R^d} f_ε(x + y) k(x) dx )² f_ε(y) dy.

Using Jensen's inequality, we get

4^N C_2 E_k(f_ε) ≥ C_3² P{T^ε < ∞} ( ∫_{R^d} ∫_{R^d} k(x − y) f_ε(x) f_ε(y) dx dy )²
= C_3² P{T^ε < ∞} (E_k(f_ε))².

If E_k(f_ε) were finite, this would imply

(2.9) P{T^ε < ∞} ≤ 4^N C_2 / (C_3² E_k(f_ε)),

but we do not know a priori that f_ε has finite energy. For that reason, we use a truncation argument. For all q > 0 and all ε ∈ (0, 1), define

f_ε^q(x) = f_ε(x) 1_{[0,q]}(f_ε(x)), x ∈ R^d.

Since f_ε is supported on A_ε, so is f_ε^q. Moreover, the latter is a subprobability density function that is bounded above by f_ε and by q. Therefore, since k is locally integrable in R^d, E_k(f_ε^q) < ∞. Apply to f_ε^q exactly the same argument that led to (2.8) to see that

sup_{s∈[a,b]^N∩D^N} M_s(f_ε^q) ≥ C_3 1_{{T^ε<∞}} ∫_{R^d} f_ε^q(x + X_{T^ε}) k(x) dx a.s.

Square both sides of the inequality, take expectations and use Jensen's inequality to get

E[ sup_{s∈[a,b]^N∩D^N} (M_s(f_ε^q))² ] ≥ C_3² P{T^ε < ∞} ( ∫_{R^d} ∫_{R^d} k(x − y) f_ε^q(x) f_ε(y) dx dy )².

By (2.4), the left-hand side is bounded above by 4^N C_2 E_k(f_ε^q). The right-hand side is clearly bounded below by C_3² P{T^ε < ∞} (E_k(f_ε^q))². Hence, we obtain

P{T^ε < ∞} ≤ 4^N C_2 / (C_3² E_k(f_ε^q)).

Finally, since k(x) is nonnegative, lim_{q↑+∞} E_k(f_ε^q) = E_k(f_ε) by the monotone convergence theorem, so we can let q ↑ +∞ in the above inequality to obtain (2.9).

Now, since t ↦ X_t is continuous and A_ε is open, using (2.6) and (2.9), we obtain

P{∃t ∈ [a, b]^N : X_t ∈ A_ε} ≤ 4^N C_2 / (C_3² E_k(µ_ε)).

Recall that µ_ε ∈ P(A_ε), so for all ε > 0,

P{∃t ∈ [a, b]^N : X_t ∈ A} ≤ P{∃t ∈ [a, b]^N : X_t ∈ A_ε} ≤ (4^N C_2/C_3²) Cap_k(A_ε).

Finally, since A_ε is compact, Lemma 2.1(b) implies that Cap_k(A_ε) converges as ε → 0+ to Cap_k(A). This concludes the proof of Theorem 2.4(b). □

3. Multiparameter Gaussian processes. In this section, we focus on Gaussian processes and reformulate Hypotheses H1–H3 as conditions on the covariance of the process, so as to relate bounds on hitting probabilities directly to properties of the covariance.

3.1. Relating Newtonian capacity and covariance. Let (Ω, G, P) be a complete probability space. Let X = (X_t, t ∈ R^N_+) be a continuous R^d-valued centered Gaussian process with independent coordinate processes (X_t^i). For all t ∈ (0, ∞)^N, we denote by p_{X_t}(x) the density function of the centered Gaussian random variable X_t on R^d. For all s, t ∈ (0, ∞)^N, we write σ(s, t) = E[X_s^i X_t^i], which does not depend on i, σ²(t) = σ(t, t) and ρ(s, t) = σ(s, t)/(σ(t)σ(s)). Given α ∈ (0, 1) and γ ≥ α, we introduce the following hypotheses.

Hypothesis A1. For all 0 < a < b < ∞, there exist finite positive constants C_1, ..., C_5, δ and ε such that for all s, t ∈ [a, b]^N:

(3.1) C_1 ≤ σ²(t) ≤ C_2,
(3.2) |1 − σ(s, t)/σ²(s)| ≤ C_3 ‖t − s‖^γ,
(3.3) C_4 ‖t − s‖^{2α} ≤ 1 − ρ²(s, t) ≤ C_5 ‖t − s‖^{2α} if ‖t − s‖ ≤ δ,
(3.4) |ρ(s, t)| < 1 − ε if ‖t − s‖ > δ.

Hypothesis A2. For all 0 < a < b < ∞ and M > 0, there exist finite positive constants C_6, C_7 and C_8 such that for all s, t ∈ [a, b]^N with s ≤ t,

(3.5) 1_{{‖X_s‖≤M}} |E[X_t − X_s | F_s]| ≤ C_6 ‖t − s‖^γ,
(3.6) C_7 ‖t − s‖^{2α} ≤ E[((X_t − X_s) − E[X_t − X_s | F_s])²] ≤ C_8 ‖t − s‖^{2α}.

Theorem 3.1. (a) Assume there are α ∈ (0, 1) and γ ≥ α for which Hypothesis A1 holds and N/α ≥ 2. Then for all 0 < a < b < ∞ and M > 0, there exists a finite positive constant K_1(a, b, M), such that for all compact sets A ⊂ {x ∈ R^d : ‖x‖ < M},

K_1 Cap_{d−(N/α)}(A) ≤ P{∃t ∈ [a, b]^N : X_t ∈ A},

where, for β < d, Cap_β(·) denotes the capacity with respect to the Newtonian β-kernel k_β(·), defined by

k_β(x) = ‖x‖^{−β} if 0 < β < d; k_β(x) = ln(3M/‖x‖) if β = 0; k_β ≡ 1 if β < 0.

(b) Assume, in addition, that Hypothesis A2 holds for the same α and γ, and that X is adapted to a commuting filtration (F_t, t ∈ R^N_+). Then for all 0 < a < b < ∞ and M > 0, there exists a finite positive constant K_2(a, b, M) such that for all compact sets A ⊂ {x ∈ R^d : ‖x‖ < M},

P{∃t ∈ [a, b]^N : X_t ∈ A} ≤ K_2 Cap_{d−(N/α)}(A).

Remark 3.2. (a) For β = 0, note that k_0(x − y) = ln(3M/‖x − y‖) > 0 for x, y ∈ A when A ⊂ {x ∈ R^d : ‖x‖ < M}, and this is sufficient for the results of Section 2 to hold.

(b) Note that for any 0 ≤ β ≤ d − 2, k_β(·) is a proper kernel. Indeed, given a compact set A ⊂ {x ∈ R^d : ‖x‖ < M} and µ ∈ P(A), for any n ≥ 1, let A_n = {x ∈ R^d : dist(x, A) < 1/n} be the open enlargement of A. Let (B_t, t ≥ 0) be a standard Brownian motion in R^d and let p_t be the density of B_t. For n ≥ 1, let µ_{n,t} be the measure whose density function is the restriction to the set Ā_n of p_t ∗ µ, where ∗ denotes the convolution product. Set f(x) = (µ ∗ k_β)(x). Since ∆k_β(x) ≤ 0 for 0 ≤ β ≤ d − 2, E_x(f(B_t)) ≤ f(x) for all t ≥ 0 and x ∈ R^d, or equivalently,

∫_{R^d} k_β(x − y)(p_t ∗ µ)(y) dy ≤ ∫_{R^d} k_β(x − y) µ(dy).

For small t, the µ_{n,t} are nearly probability measures, and so (a) and (b) of the definition of a proper kernel hold.

(c) The condition N/α ≥ 2 ensures that k_{d−(N/α)} is a proper kernel. This is only a restriction when N = 1.

(d) If α = 1/2 and d < 2N, the choice Cap_{d−2N}(A) = 1 is natural, since in this case, the Brownian sheet hits points in R^d (cf. [23]).
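The mean-value inequality E_x(f(B_t)) ≤ f(x) for superharmonic f, on which Remark 3.2(b) rests, can be checked by simulation. Below is a sketch we add (the dimension, β, t and evaluation point are arbitrary illustrative choices), with f = k_β(· − y_0) in R³ and β = 1 = d − 2:

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo check: for 0 <= beta <= d - 2 the kernel k_beta is superharmonic,
# hence E_x[k_beta(B_t - y0)] <= k_beta(x - y0). Illustrative values: d = 3, beta = 1.
d, beta, t = 3, 1.0, 0.5
x = np.array([2.0, 0.0, 0.0])   # starting point of the Brownian motion
y0 = np.zeros(d)                # pole of the kernel

B_t = x + rng.normal(scale=np.sqrt(t), size=(200_000, d))   # B_t started at x
f_vals = np.linalg.norm(B_t - y0, axis=1) ** (-beta)
lhs = f_vals.mean()
rhs = np.linalg.norm(x - y0) ** (-beta)   # = 0.5 for this choice of x, y0
print(lhs, rhs)
```

The gap between the two sides is small here because the pole y_0 is far from the bulk of the Gaussian cloud; moving x closer to y_0 widens it.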

Before proving Theorem 3.1, we mention an important consequence. Given s ≥ 0 and a Borel subset E of R^d, let

H_s(E) = lim_{ε→0+} inf{ Σ_{i=1}^∞ (2r_i)^s : E ⊂ ⋃_{i=1}^∞ B(x_i, r_i), sup_i r_i ≤ ε },

where B(x, r) denotes the closed ball of radius r > 0 centered at x ∈ R^d. The set function H_s is called the s-dimensional Hausdorff measure. Moreover, we associate to the set E a number dim_H(E) as follows:

dim_H(E) = sup{s > 0 : H_s(E) = ∞} = inf{s > 0 : H_s(E) = 0}.

This is the Hausdorff dimension of E. Following [10], the stochastic codimension of a random set E in R^d, denoted codim(E), if it exists, is the real number β ∈ [0, d] such that for all compact sets A ⊂ R^d,

P{E ∩ A ≠ ∅} > 0 whenever dim_H(A) > β, and P{E ∩ A ≠ ∅} = 0 whenever dim_H(A) < β.

The following result gives a relationship between Hausdorff dimension and stochastic codimension.

Theorem 3.3 ([10], Theorem 4.7.1, Chapter 11). Given a random set E in Rd whose codimension β is strictly between 0 and d,

dimH(E) + codim(E)= d a.s.

Theorems 3.1 and 3.3 imply the following result.

Corollary 3.4. Under the hypotheses of Theorem 3.1(b),

codim{X((0, +∞)^N)} = (d − (N/α))^+

and if d > N/α,

dim_H{X((0, +∞)^N)} = N/α a.s.

Proof. By Frostman's theorem (see [10], Appendix C, Theorem 2.2.1), the capacitarian and Hausdorff dimensions agree on compact sets. Therefore, the desired result follows from Theorems 3.1 and 3.3. □

Proof of Theorem 3.1. (a) Suppose 0 < a < b < ∞ and M > 0 are fixed. We assume that Hypothesis A1 holds for α ∈ (0, 1) and γ ≥ α fixed, and show that Hypotheses H1 and H2 of Theorem 2.4 hold.

Verification of Hypothesis H1. Fix x ∈ R^d such that ‖x‖ ≤ M. Inequality (3.1) implies that

∫_{[a,b]^N} p_{X_t}(x) dt = ∫_{[a,b]^N} (2π)^{−d/2} σ^{−d}(t) exp(−‖x‖²/(2σ²(t))) dt
≥ (2π)^{−d/2} (C_2)^{−d/2} (b − a)^N exp(−M²/(2C_1)),

which proves Hypothesis H1.

Verification of Hypothesis H2. By (3.3) and (3.4), for all s, t ∈ [a, b]^N with t ≠ s, (X_t, X_s) has a (Gaussian) density p_{X_t,X_s}(x, y). The latter can be written as

p_{X_t,X_s}(x, y) = p_{X_t|X_s=y}(x) p_{X_s}(y),

where p_{X_t|X_s=y} denotes the conditional density function of the random variable X_t given X_s = y.

Note that the conditional distribution of X_t^i given X_s^i = y^i is Normal with mean m(s, t) y^i, where m(s, t) = σ(s, t)/σ²(s), and variance τ²(s, t) = σ²(t)(1 − ρ²(s, t)). Observe that τ²(s, t) > 0 by (3.3) and (3.4), that (3.2) is a condition on the conditional mean m(s, t) and that (3.3) is a condition on the conditional variance.

Fix x, y ∈ R^d such that ‖x‖ ≤ M and ‖y‖ ≤ M. By (3.4), p_{X_t,X_s}(·, ·) is bounded by some constant C′ > 0 when ‖t − s‖ > δ. Therefore,

∫_{[a,b]^N} ∫_{[a,b]^N} p_{X_t,X_s}(x, y) dt ds ≤ C + ∫∫_{D(δ)} p_{X_t,X_s}(x, y) dt ds,

where D(δ) = {(s, t) ∈ [a, b]^N × [a, b]^N : ‖t − s‖ ≤ δ} and C = C′(b − a)^{2N}. The integral on the right-hand side can be written

(3.7) ∫∫_{D(δ)} (2π)^{−d} τ^{−d}(s, t) exp(−‖x − m(s, t)y‖²/(2τ²(s, t))) × σ^{−d}(s) exp(−‖y‖²/(2σ²(s))) dt ds.

By the triangle inequality and the inequality (u − v)² ≥ u²/2 − v²,

(3.8) exp(−‖x − m(s, t)y‖²/(2τ²(s, t))) ≤ exp(−‖x − y‖²/(4τ²(s, t))) exp(‖y‖² |1 − m(s, t)|²/(2τ²(s, t))).

By (3.2), there exists a constant C_3 such that

(3.9) |1 − m(s, t)|² ≤ C_3 ‖t − s‖^{2γ}.

By (3.1) and (3.3),

(3.10) K_1 ‖t − s‖^{2α} ≤ τ²(s, t) ≤ K_2 ‖t − s‖^{2α}.

We now apply (3.8)–(3.10) to (3.7). Because γ ≥ α, this yields

∫_{[a,b]^N} ∫_{[a,b]^N} p_{X_t,X_s}(x, y) dt ds
≤ C + (2π)^{−d} (K_1)^{−d/2} (C_1)^{−d/2} ∫∫_{D(δ)} ‖t − s‖^{−αd}
× exp(−‖x − y‖²/(4K_2‖t − s‖^{2α})) exp(C_3 ‖y‖² ‖t − s‖^{2γ}/(2K_1‖t − s‖^{2α})) exp(−‖y‖²/(2C_2)) dt ds
≤ C + K_3 ∫∫_{D(δ)} ‖t − s‖^{−αd} exp(−‖x − y‖²/(4K_2‖t − s‖^{2α})) dt ds.

We now fix t and use the change of variables u = t − s to see that this expression is less than or equal to

C + K_3 (b − a)^N ∫_{B(δ)} ‖u‖^{−αd} exp(−‖x − y‖²/(4K_2‖u‖^{2α})) du,

where B(δ) = {u ∈ R^N : ‖u‖ ≤ δ}. Finally, use the change of variables u = ‖x − y‖^{1/α} (4K_2)^{−1/(2α)} z to see that this is less than or equal to

(3.11) C + K_4 ‖x − y‖^{−d+(N/α)} ∫_{B((4K_2)^{1/(2α)} δ/‖x−y‖^{1/α})} ‖z‖^{−αd} exp(−1/‖z‖^{2α}) dz.

We now state a real-variable technical lemma which is crucial for our estimates. The proof of this lemma is left to the reader.

Lemma 3.5. Define φ_{α,β}(r) = ∫_{B(r)} ‖z‖^{−β} e^{−1/‖z‖^{2α}} dz for all r > 0. Then for any r_0 > 0 and α ∈ (0, 1), there exist finite constants c_1, c_2, c_3, c_4 > 0 such that for all r ≥ r_0:

c_1 ≤ φ_{α,β}(r) ≤ c_2, if β > N;
c_3 ln(r/r_0) ≤ φ_{α,β}(r) ≤ c_4 ln(r), if β = N.

Continuing the verification of Hypothesis H2, apply Lemma 3.5 with β = αd to (3.11) and use the fact that C ≤ C(2M)^{d−(N/α)} ‖x − y‖^{−d+(N/α)} because ‖x‖ ≤ M and ‖y‖ ≤ M, to conclude the verification of Hypothesis H2 for d > N/α. When d = N/α, choose r_0 > 0 such that (4K_2)^{1/(2α)} δ/(2M)^{1/α} ≥ r_0 and apply Lemma 3.5 to (3.11) to obtain

∫_{[a,b]^N} ∫_{[a,b]^N} p_{X_t,X_s}(x, y) dt ds ≤ C + c_4 K_4 ln((4K_2)^{1/(2α)} δ/‖x − y‖^{1/α}).

Note that for all x, y ∈ R^d with ‖x‖ ≤ M and ‖y‖ ≤ M, ‖x − y‖ ≤ 2M. Then we can check that if ‖x − y‖ ≤ 2M, there exists a finite constant C′ > 1 such that

ln((4K_2)^{1/(2α)} δ/‖x − y‖^{1/α}) ≤ C′ ln(3M/‖x − y‖).

On the other hand, C ≤ C(ln(3/2))^{−1} ln(3M/‖x − y‖), and the verification of Hypothesis H2 for d = N/α is complete. When d < N/α, the expression in (3.11) is bounded, so Hypothesis H2 holds with k(x) ≡ 1. This completes the proof of Theorem 3.1(a).

(b) We now assume that Hypotheses A1 and A2 hold for α ∈ (0, 1) fixed and γ ≥ α, and show that Hypothesis H3 of Theorem 2.4 holds. We also assume that d ≥ N/α, since otherwise the statement is trivial.

Verification of Hypothesis H3. For s, t ∈ (0, ∞)^N with s < t, the conditional density p_{s,t}(ω, x) is Gaussian, with mean µ(s, t) = E[X_t − X_s | F_s] and variance β²(s, t) = E[((X_t − X_s) − E[X_t − X_s | F_s])²]. By the indicator in Hypothesis H3, it suffices to consider x ∈ R^d with ‖x‖ ≤ 2M. Fix s ∈ [a, b]^N. Then

∫_{[b,2b−a]^N} p_{s,t}(ω, x) dt
(3.12) = ∫_{[b,2b−a]^N} (2π)^{−d/2} β^{−d}(s, t) exp(−‖x − µ(s, t)‖²/(2β²(s, t))) dt.

By the triangle inequality,

(3.13) exp(−‖x − µ(s, t)‖²/(2β²(s, t))) ≥ exp(−‖x‖²/β²(s, t)) exp(−‖µ(s, t)‖²/β²(s, t)).

We now apply (3.5), (3.6) and (3.13) to (3.12), and we see that this expression is greater than or equal to

K_4 ∫_{[b,2b−a]^N} ‖t − s‖^{−αd} exp(−‖x‖²/(C_7‖t − s‖^{2α})) exp(−C_6² ‖t − s‖^{2γ}/(C_7‖t − s‖^{2α})) dt.

Using the fact that γ ≥ α and the change of variables t − s = ‖x‖^{1/α} (C_7)^{−1/(2α)} z, we get that this is greater than or equal to

(3.14) K_5 ‖x‖^{−d+(N/α)} ∫_{B((C_7)^{1/(2α)}(b−a)/‖x‖^{1/α})} ‖z‖^{−αd} exp(−1/‖z‖^{2α}) dz.

It now suffices to choose r_0 > 0 such that (C_7)^{1/(2α)}(b − a)/(2M)^{1/α} ≥ 3^{1/α} r_0 and apply Lemma 3.5 to (3.14). This concludes the verification of Hypothesis H3 when d > N/α. Finally, for d = N/α, we apply Lemma 3.5 to (3.14), to obtain

∫_{[b,2b−a]^N} p_{s,t}(ω, x) dt ≥ c_3 K_5 ln((6M)^{1/α}/‖x‖^{1/α}) ≥ c_3 K_5 (1/α) ln(3M/‖x‖),

which concludes the verification of Hypothesis H3 for d ≥ N/α. This completes the proof of Theorem 3.1(b). □
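Lemma 3.5 can be illustrated numerically. The following sketch (ours, not from the paper; N = 2 and α = 1/2 are arbitrary choices) evaluates the radial reduction φ_{α,β}(r) = S_{N−1} ∫_0^r ρ^{N−1−β} e^{−ρ^{−2α}} dρ and exhibits boundedness for β > N and logarithmic growth for β = N:

```python
import numpy as np

N, alpha = 2, 0.5
surf = 2 * np.pi  # surface measure of the unit sphere S^{N-1} in R^2

def phi(r, beta, m=200_000):
    """Radial form of phi_{alpha,beta}(r) = int_{B(r)} ||z||^{-beta} e^{-1/||z||^{2 alpha}} dz."""
    rho = np.linspace(1e-6, r, m)
    f = rho ** (N - 1 - beta) * np.exp(-rho ** (-2 * alpha))
    return surf * float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(rho)))  # trapezoid rule

rs = [10.0, 100.0, 1000.0]
bounded = [phi(r, beta=3.0) for r in rs]               # beta > N: c1 <= phi <= c2
log_ratio = [phi(r, beta=2.0) / np.log(r) for r in rs]  # beta = N: phi ~ const * ln r
print(bounded)
print(log_ratio)
```

For β = 3 > N the values level off, while for β = N = 2 the ratio φ/ln r approaches the surface constant 2π, consistent with the two regimes of the lemma.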

3.2. Examples.

3.2.1. The Brownian sheet. Suppose that W = (W_t = (W_t^1, ..., W_t^d), t ∈ R^N_+) is a d-dimensional N-parameter Brownian sheet, that is, the coordinate processes of W are Gaussian, with zero mean and covariances

E[W_s^k W_t^j] = δ_{kj} ∏_{i=1}^N (s_i ∧ t_i) for all s, t ∈ R^N_+, 1 ≤ k, j ≤ d,

and defined on the canonical probability space (Ω, G, P), where Ω is the space of all continuous functions ω : R^N_+ → R^d vanishing on the axes, P is the law of W and G is the completion of the Borel σ-field of Ω with respect to P. We also consider the increasing family of σ-fields F = (F_t, t ∈ R^N_+), such that for any t ∈ R^N_+, F_t is generated by the random variables (W_s, s ≤ t) and the null sets of G. The latter is a complete, right-continuous, commuting filtration (see [10], Chapter 7, Theorem 2.4.1).

Theorem 3.6 ([11], Theorem 1.1). The (N, d) Brownian sheet satisfies Hypotheses A1 and A2 with α = 1/2 and γ = 1, and therefore the conclusions of Theorem 3.1.

Proof. Fix 0 < a < b < ∞. For the Brownian sheet, σ²(t) = ∏_{i=1}^N t_i and σ(s, t) = σ²(s ∧ t), so (3.1) holds with C_1 = a^N and C_2 = b^N. Set f(t) = ∏_{i=1}^N t_i. We first check that there exist finite positive constants c and C such that for all s ≤ t in [a, b]^N,

(3.15) c ‖t − s‖ ≤ f(t) − f(s) ≤ C ‖t − s‖.

Let φ(u) = s + u(t − s) for u ∈ [0, 1]. By the chain rule,

f(t) − f(s) = f(φ(1)) − f(φ(0)) = ∫_0^1 Σ_{i=1}^N (∂f/∂t_i)(φ(u)) φ̇_i(u) du
≤ C Σ_{i=1}^N (φ_i(1) − φ_i(0)) ≤ C′ ‖t − s‖,

which proves the upper bound in (3.15). The lower bound is obtained by proceeding along the same lines.

We now prove (3.2) and (3.3). If s, t ∈ [a, b]^N, using (3.1) and (3.15),

(σ²(s) − σ(s, t))/σ²(s) = (σ²(s) − σ²(s ∧ t))/σ²(s) ≤ C ‖s − (s ∧ t)‖ ≤ C ‖t − s‖,

which proves (3.2). To prove (3.3), if s, t ∈ [a, b]^N, using (3.1) and (3.15),

(σ²(t)σ²(s) − (σ(s, t))²)/(σ²(t)σ²(s)) ≥ (σ²(s ∧ t)/(2σ²(t)σ²(s))) (σ²(t) − σ²(s ∧ t) + σ²(s) − σ²(s ∧ t))
≥ C (‖t − (s ∧ t)‖ + ‖s − (s ∧ t)‖)
≥ C ‖t − s‖,

which concludes the proof of (3.3). Since (3.6) follows from (3.15), it remains to prove (3.4). Because (s, t) ↦ ρ(s, t) is continuous, it suffices to check that ρ(s, t) < 1 for any s, t ∈ [a, b]^N with s ≠ t. This is clear since

ρ(s, t) = ∏_{i=1}^N (s_i ∧ t_i)/√(s_i t_i) < 1.

This proves Theorem 3.6. □
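For the Brownian sheet the quantities in Hypothesis A1 are explicit, so the bounds just proved can be checked numerically. A small sanity check we add (the grid and the box [1, 2]² are arbitrary illustrative choices):

```python
import numpy as np

# Exact covariance quantities of the (N, d) Brownian sheet:
# sigma(s,t) = prod_i (s_i ^ t_i), rho(s,t) = prod_i (s_i ^ t_i)/sqrt(s_i t_i).
# We check rho < 1 off the diagonal and that (1 - rho^2)/||t - s|| stays
# bounded away from 0 and infinity on a grid, as in (3.3) with alpha = 1/2.
a, b, N = 1.0, 2.0, 2

def rho(s, t):
    s, t = np.asarray(s, float), np.asarray(t, float)
    return float(np.prod(np.minimum(s, t) / np.sqrt(s * t)))

grid = [np.array([x, y]) for x in np.linspace(a, b, 9) for y in np.linspace(a, b, 9)]
ratios = []
for s in grid:
    for t in grid:
        if np.array_equal(s, t):
            continue
        r = rho(s, t)
        assert r < 1.0                                   # off-diagonal correlation < 1
        ratios.append((1.0 - r ** 2) / np.linalg.norm(t - s))

print(min(ratios), max(ratios))  # empirical C4 and C5 on this grid
```

The printed extremes play the role of the constants C_4 and C_5 on this particular grid; they depend on a and b, as the constants in Hypothesis A1 do.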

3.2.2. α-regular Gaussian fields. For α ∈ (0,1), an N-parameter, R^d-valued process X = (X_t, t ∈ R_+^N) is said to be α-regular (see [10], Chapter 11, Section 5.2) if X is a centered, stationary Gaussian process with i.i.d. coordinate processes whose covariance function R satisfies the following:

Assumption R1. If s ≠ 0, then |R(s)| < 1, and there exist finite positive constants c_1 ≤ c_2 and δ > 0 such that, for all s ∈ R_+^N with ‖s‖ ≤ δ,

c_1 ‖s‖^{2α} ≤ 1 − R(s) ≤ c_2 ‖s‖^{2α}.

Such Gaussian fields were studied in [10], Chapter 11, Section 5.2, where the stochastic codimension and the Hausdorff dimension of the range of these processes are obtained (see [10], Chapter 11, Theorem 5.2.1 and Corollary 5.2.1). The following result shows that α-regular Gaussian fields satisfy the lower bound of Theorem 3.1(a). The upper bound cannot be obtained from Theorem 3.1(b), since such processes are not necessarily adapted to a commuting filtration. An upper bound involving Hausdorff measure is given in [10], Chapter 11, Example 5.3.3.
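Assumption R1 can be probed numerically on a concrete example. The sketch below (not from the paper) uses the hypothetical covariance R(s) = exp(−‖s‖^{2α}), for which the sandwich holds with c_1 = 1 − 1/e, c_2 = 1 and δ = 1, since 1 − e^{−x} with x = ‖s‖^{2α} ∈ (0,1] is concave.

```python
import math

def R_cov(s, alpha):
    """Hypothetical alpha-regular covariance R(s) = exp(-||s||^{2 alpha})."""
    norm = math.sqrt(sum(x * x for x in s))
    return math.exp(-(norm ** (2 * alpha)))

def check_R1(alpha, delta=1.0, c1=1 - 1 / math.e, c2=1.0, grid=50):
    """Verify c1 r^{2a} <= 1 - R <= c2 r^{2a} on radii 0 < r <= delta.

    R depends on s only through r = ||s||, so a one-dimensional scan
    over radii suffices.
    """
    tol = 1e-12
    for i in range(1, grid + 1):
        r = delta * i / grid
        x = r ** (2 * alpha)
        lhs, mid, rhs = c1 * x, 1 - math.exp(-x), c2 * x
        if not (lhs <= mid + tol and mid <= rhs + tol):
            return False
    return True
```

An over-tight constant such as c_1 > 1 makes the check fail near the origin, as it should.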

Theorem 3.7. Fix α ∈ (0,1) with N/α ≥ 2 and 0 < a < b, and let X = (X_t, t ∈ R_+^N) be α-regular. Then X satisfies the lower bound of Theorem 3.1(a).

Proof. By stationarity, ρ(s,t) = R(t − s), so Hypothesis A1 holds for ‖t − s‖ ≤ δ (taking δ smaller if necessary, so that 1 − c_2 δ^{2α} > 0), because

1 − ρ^2(s,t) = (1 + ρ(s,t))(1 − ρ(s,t)) ≥ c_1 (1 − c_2 ‖t − s‖^{2α}) ‖t − s‖^{2α} ≥ c_1 (1 − c_2 δ^{2α}) ‖t − s‖^{2α}.

Since R(·) is a bounded covariance function, it is nonnegative definite. By Bochner's theorem ([25], Theorem 6.5.64), R(·) is the Fourier transform of a nonnegative measure, which is a probability measure since R(0) = 1. Therefore, R(·) is in fact continuous, so by Assumption R1, there is ε > 0 such that |R(t − s)| < 1 − ε for ‖t − s‖ > δ with s, t ∈ [a,b]^N. Therefore, (3.4) holds. This proves that Hypothesis A1 holds with γ = 2α, so the conclusion follows from Theorem 3.1(a). □

3.2.3. The (N,d) Ornstein–Uhlenbeck sheet. The (N,d) Ornstein–Uhlenbeck sheet U = (U_t, t ∈ R_+^N) is defined as

U_t = e^{−|t|/2} W_{e^t},  t ∈ R_+^N,

where |t| = t_1 + ··· + t_N, e^t = (e^{t_1},...,e^{t_N}) and W = (W_t, t ∈ R_+^N) is an R^d-valued Brownian sheet. Therefore, U is an N-parameter centered stationary Gaussian process on R^d with covariance function given by

E[U_s U_t] = e^{−|t−s|/2},  s, t ∈ R_+^N.

Consider its natural and completed filtration, denoted F^U, that is,

F_t^U = σ{U_s, s ≤ t} = σ{W_s, s ≤ e^t} = F_{e^t}^W,  t ∈ R_+^N,

where F^W denotes the natural filtration of the Brownian sheet; F^U is therefore a commuting filtration.
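The stationarity claim amounts to an algebraic identity: the time-changed form e^{−(|s|+|t|)/2} ∏_i min(e^{s_i}, e^{t_i}) equals e^{−Σ_i |t_i − s_i|/2}. A quick numeric check (a sketch, not from the paper; `ou_cov` and `ou_cov_stationary` are illustrative names):

```python
import math

def ou_cov(s, t):
    """E[U_s U_t] computed from the time change U_t = e^{-|t|/2} W_{e^t}:
    e^{-(|s|+|t|)/2} * prod_i min(e^{s_i}, e^{t_i})."""
    pre = math.exp(-(sum(s) + sum(t)) / 2)
    prod = 1.0
    for si, ti in zip(s, t):
        prod *= min(math.exp(si), math.exp(ti))
    return pre * prod

def ou_cov_stationary(s, t):
    """Stationary form e^{-|t-s|/2}, with |t-s| = sum_i |t_i - s_i|."""
    return math.exp(-sum(abs(ti - si) for si, ti in zip(s, t)) / 2)
```

Both functions agree at every pair of parameter points, which is exactly the covariance identity stated above.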

Theorem 3.8. The (N, d) Ornstein–Uhlenbeck sheet satisfies Hypothe- ses A1 and A2 with α = 1/2 and γ = 1, and therefore the estimates of Theorem 3.1.

Proof. Since the (N,d) Ornstein–Uhlenbeck sheet is an α-regular Gaussian field with α = 1/2, the lower bound follows from Theorem 3.7, and it remains to verify Hypothesis A2. Using the estimate 1 − e^{−x} ≤ x for all x ≥ 0 together with (3.15), we have, for all s, t ∈ [a,b]^N with s ≤ t,

|E[U_t − U_s | F_s^U]| = |E[e^{−|t|/2}(W_{e^t} − W_{e^s}) + (e^{−|t|/2} − e^{−|s|/2})W_{e^s} | F_{e^s}^W]|
(3.16)  = |(e^{−|t|/2} − e^{−|s|/2}) W_{e^s}| ≤ |U_s| (|t| − |s|)/2 ≤ C_1 M ‖t − s‖

on {|U_s| ≤ M}, which proves (3.5). To prove (3.6), use (3.16) to see that the expectation in (3.6) is equal to

E[(e^{−|t|/2}W_{e^t} − e^{−|s|/2}W_{e^s} − (e^{−|t|/2} − e^{−|s|/2})W_{e^s})^2] = E[(e^{−|t|/2}(W_{e^t} − W_{e^s}))^2],

so by (3.15), there exist two constants C_2 and C_3 such that for all s, t ∈ [a,b]^N with s ≤ t,

C_2 ‖e^t − e^s‖ ≤ e^{−|t|} E[(W_{e^t} − W_{e^s})^2] ≤ C_3 ‖e^t − e^s‖.

Finally, using the inequalities cx ≤ 1 − e^{−x} ≤ x, for x ∈ [0,b], we obtain

C_4 ‖t − s‖ ≤ e^{−|t|} E[(W_{e^t} − W_{e^s})^2] ≤ C_5 ‖t − s‖,

which proves (3.6). The upper bound then follows from Theorem 3.1(b). □

3.2.4. The fractional Brownian sheet. The fractional Brownian sheet with Hurst parameter H ∈ (0,1), X^H = (X_t^H, t ∈ R_+^N), is a d-dimensional centered Gaussian process with independent coordinate processes, each with covariance function given by

E[(X_t^H)^j (X_s^H)^j] = c ∏_{i=1}^N ½ (t_i^{2H} + s_i^{2H} − |t_i − s_i|^{2H}),  s, t ∈ R_+^N, 1 ≤ j ≤ d,

where c is a positive finite constant. Note that if H = 1/2, one obtains the standard Brownian sheet. As in Theorem 3.7, we obtain only a lower capacity estimate for the fractional Brownian sheet, as a consequence of Theorem 3.1(a), since this process is not necessarily adapted to a commuting filtration.
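The covariance above is easy to evaluate directly, and two of its stated properties can be checked numerically: for H = 1/2 it reduces to the Brownian sheet covariance ∏_i (s_i ∧ t_i), and for s ≠ t the correlation is strictly below 1 (the content of (3.4)). A sketch, with `fbm_sheet_cov` an illustrative name and c = 1:

```python
def fbm_sheet_cov(s, t, H, c=1.0):
    """Covariance of one coordinate of the fractional Brownian sheet:
    c * prod_i (t_i^{2H} + s_i^{2H} - |t_i - s_i|^{2H}) / 2."""
    prod = 1.0
    for si, ti in zip(s, t):
        prod *= 0.5 * (ti ** (2 * H) + si ** (2 * H) - abs(ti - si) ** (2 * H))
    return c * prod
```

For H = 1/2 each factor is ½(t_i + s_i − |t_i − s_i|) = s_i ∧ t_i, recovering the standard sheet.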

Theorem 3.9. Fix H ∈ (0,1) and 0 < a < b. Then the fractional Brownian sheet X^H satisfies the lower bound of Theorem 3.1(a).

Proof.

To prove (3.2) for s fixed, set f(u) = ∏_{i=1}^N ½ (s_i^{2H} + u_i^{2H} − |u_i − s_i|^{2H}) and let φ = (φ_1,...,φ_N): [0,1] → R^N be an affine function such that φ(0) = t and φ(1) = s. Because f is differentiable in the orthant centered at s that contains t, and in this orthant, for ‖t − s‖ sufficiently small and u ∈ Im φ, |(∂f/∂u_i)(u)| ≤ C(|u_i − s_i|^{2H−1} ∨ 1), the numerator on the right-hand side of (3.17) can be bounded by

|f(φ(1)) − f(φ(0))| ≤ C ∫_0^1 Σ_{i=1}^N (|φ_i(r) − s_i|^{2H−1} ∨ 1) |φ̇_i(r)| dr = C |t − s|^{2H∧1}.

This concludes the proof of (3.2).

To prove (3.3), set f(u) = ∏_{i=1}^N g(u_i), where g(u_i) = ½ u_i^{−H} (1 + u_i^{2H} − |u_i − 1|^{2H}). An elementary calculation shows that for |u_i − 1| sufficiently small, there are constants c̃_2 > c̃_1 > 0 such that c̃_1 |u_i − 1|^{2H−1} ≤ |g′(u_i)| ≤ c̃_2 |u_i − 1|^{2H−1}. Therefore, there are constants c_2 > c_1 > 0 such that for ‖u − (1,...,1)‖ sufficiently small,

(3.18)  c_1 |u_i − 1|^{2H−1} ≤ |(∂f/∂u_i)(u)| ≤ c_2 |u_i − 1|^{2H−1}.

Let φ = (φ_1,...,φ_N): [0,1] → R^N be an affine function such that φ(1) = (1,...,1) and φ(0) = (t_1/s_1,...,t_N/s_N). Observe that

1 − ρ(s,t) = f(φ(1)) − f(φ(0)) = ∫_0^1 Σ_{i=1}^N (∂f/∂u_i)(φ(r)) φ̇_i(r) dr.

By (3.18), for s, t ∈ [a,b]^N with ‖t − s‖ sufficiently small, this expression is bounded above and below by constants times

Σ_{i=1}^N ∫_0^1 |φ_i(r) − 1|^{2H−1} |φ̇_i(r)| dr = Σ_{i=1}^N |t_i − s_i|^{2H}/s_i^{2H}.

Because 1 − ρ^2(s,t) = (1 − ρ(s,t))(1 + ρ(s,t)) and the second factor is less than or equal to 2 and bounded away from 0, it follows that (3.3) holds. It remains to prove (3.4). For this, it suffices to check that for any s_i, t_i ∈ [a,b] with t_i − s_i > 0,

t_i^{2H} + s_i^{2H} − |t_i − s_i|^{2H} < 2 √(t_i^{2H} s_i^{2H}).

Move the square root to the left-hand side and isolate the perfect square to see that this is equivalent to t_i^H − s_i^H < (t_i − s_i)^H, which holds since 0 < H < 1. This proves Theorem 3.9. □

4. Gaussian-type bounds for densities of solutions of hyperbolic SPDEs. In this section, we use techniques of Malliavin calculus to establish Gaussian- type bounds on the density of the solution to a nonlinear hyperbolic SPDE. These bounds are such that they can be used to verify Hypotheses H1–H3, as we see in Section 5.

4.1. Elements of Malliavin calculus. In this section we recall, following [18], some elements of Malliavin calculus. Let W = (W_t = (W_t^1,...,W_t^d), t ∈ R_+^2) be an R^d-valued two-parameter Brownian sheet defined on its canonical probability space (Ω, G, P), and let F = (F_t, t ∈ R_+^2) be its natural filtration (see Section 3.2.1).

Let H be the Hilbert space H = L²(R_+^2, R^d). For any h ∈ H, we set W(h) = Σ_{j=1}^d ∫_{R_+^2} h_j(z) dW_z^j. The Gaussian subspace 𝓗 = {W(h), h ∈ H} of L²(Ω, G, P) is isomorphic to H.

Let S denote the class of smooth random variables F = f(W(h_1),...,W(h_n)), where h_1,...,h_n are in H, n ≥ 1, and f belongs to C_P^∞(R^n), the set of functions f such that f and all its partial derivatives have at most polynomial growth. Given F in S, its derivative is the d-dimensional stochastic process DF = (D_t F = (D_t^{(1)}F,...,D_t^{(d)}F), t ∈ R_+^2) given by

D_t F = Σ_{i=1}^n (∂f/∂x_i)(W(h_1),...,W(h_n)) h_i(t).

More generally, the kth-order derivative of F is obtained by iterating the derivative operator k times: if F is a smooth random variable and k is an integer, set D^k_{t_1,...,t_k} F = D_{t_1} ··· D_{t_k} F. Then for every p ≥ 1 and any natural number k, we denote by D^{k,p} the closure of S with respect to the seminorm ‖·‖_{k,p} defined by

‖F‖_{k,p}^p = E[|F|^p] + Σ_{j=1}^k E[‖D^j F‖^p_{H^{⊗j}}],

where H^{⊗j} is identified with the product space L²((R_+^2)^j, R^{d^j}), and

‖D^j F‖_{H^{⊗j}} = ( Σ_{k_1,...,k_j=1}^d ∫_{R_+^2} ··· ∫_{R_+^2} |D_{t_1}^{(k_1)} ··· D_{t_j}^{(k_j)} F|^2 dt_1 ··· dt_j )^{1/2}.

We set D^∞ = ∩_{p≥1} ∩_{k≥1} D^{k,p}.

Similarly, for any separable Hilbert space V, we can define the analogous spaces D^{k,p}(V) and D^∞(V) of V-valued random variables, and the related seminorms ‖·‖_{k,p,V} (the related smooth functionals being of the form F = Σ_{j=1}^n F_j v_j, where F_j ∈ S and v_j ∈ V).

We denote by δ the adjoint of the operator D, which is an unbounded operator on L²(Ω, H) taking values in L²(Ω) ([18], Definition 1.3.1). In particular, if u belongs to Dom δ, then δ(u) is the element of L²(Ω) characterized by the duality relationship

(4.1)  E(F δ(u)) = Σ_{j=1}^d E( ∫_{R_+^2} D_t^{(j)}F u_t^j dt )  for any F ∈ D^{1,2}.

If u ∈ L²(R_+^2 × Ω, R^d) is an adapted process, then (see [18], Proposition 1.3.4) u belongs to Dom δ and δ(u) coincides with the Itô integral

δ(u) = Σ_{j=1}^d ∫_{R_+^2} u_s^j dW_s^j.

We use the following estimate of the norm ‖δ(u)‖_{k,p}.

Proposition 4.1 ([18], Proposition 3.2.1, and [19], (1.11) and page 131). The adjoint δ is a continuous operator from D^{k+1,p}(H) to D^{k,p} for all p > 1, k ≥ 0. Hence, for all u ∈ D^{k+1,p}(H),

(4.2)  ‖δ(u)‖_{k,p} ≤ c_{k,p} ‖u‖_{k+1,p,H}

for some constant c_{k,p} > 0.

Our first application of Malliavin calculus to the study of probability laws is the following global criterion for smoothness of densities.

Theorem 4.2 ([18], Theorem 2.1.2 and Corollary 2.1.2). Let F = (F^1,...,F^d) be a random vector satisfying the following two conditions:

(i) For all i = 1,...,d, F^i ∈ D^∞.
(ii) The Malliavin matrix of F, defined by γ_F = (⟨DF^i, DF^j⟩_H)_{1≤i,j≤d}, is invertible a.s.

Then the probability law of F is absolutely continuous with respect to Lebesgue measure. Moreover, assuming (i) and (ii), and

(iii) (det γ_F)^{−1} ∈ L^p for all p ≥ 1,

the probability density function of F is infinitely differentiable.

A random vector F that satisfies conditions (i)–(iii) of Theorem 4.2 is said to be nondegenerate. For a nondegenerate random vector, the following integration by parts formula plays a key role.

Proposition 4.3 ([19], Proposition 3.2.1). Let F = (F^1,...,F^d) ∈ (D^∞)^d be a nondegenerate random vector, let G ∈ D^∞ and let g ∈ C_p^∞(R^d). Fix k ≥ 1. Then for any multiindex α = (α_1,...,α_k) ∈ {1,...,d}^k, there exists an element H_α(F, G) ∈ D^∞ such that

(4.3)  E[(∂_α g)(F) G] = E[g(F) H_α(F, G)],

where the random variables H_α(F, G) are recursively given by

H_{(i)}(F, G) = Σ_{j=1}^d δ(G ((γ_F)^{−1})_{ij} DF^j),

Hα(F, G)= H(αk)(F,H(α1,...,αk−1)(F, G)).

4.2. Conditional Malliavin calculus. In this section, we give the conditional version of some of the results established in Section 4.1. The proofs are very similar to the one-parameter case and are left to the reader. The first result is the conditional version of the duality relationship (4.1), which is the two-parameter version of [17], (2.12).

Proposition 4.4. Let s, t ∈ R_+^2 with s ≤ t, let F ∈ D^{1,2} and let u be a square-integrable adapted R^d-valued process. Then

(4.4)  E[ F ∫_{[0,t]\[0,s]} u_r dW_r | F_s ] = E[ ∫_{[0,t]\[0,s]} ⟨D_r F, u_r⟩ dr | F_s ].

The following norms are the two-parameter versions of those in [17], Definition 1. Let s, t ∈ R_+^2 with s ≤ t, and set H_{s,t} = L²([0,t]\[0,s], R^d). The conditional seminorms ‖·‖^{F_s}_{k,p,H_{s,t}} are defined as ‖·‖_{k,p}, with expectations replaced by conditional expectations given F_s and the norms ‖·‖_{H^{⊗j}} replaced by ‖·‖_{H_{s,t}^{⊗j}}.

Proposition 4.5. Let s, t ∈ R_+^2 with s ≤ t. For all p > 1, k ≥ 0 and u ∈ D^{k+1,p}(H_{s,t}),

(4.5)  ‖δ(u)‖^{F_s}_{k,p,H_{s,t}} ≤ c_{k,p} ‖u‖^{F_s}_{k+1,p,H_{s,t}}

for some constant c_{k,p} > 0.

Finally, we give the conditional version of the two-parameter Burkholder inequality (see [18], A.2, for the two-parameter nonconditional version).

Proposition 4.6. Fix p > 1. There is a finite constant b_p > 0 such that for all adapted X = (X_t, t ∈ R_+^2) in L²(R_+^2 × Ω) and all s, t ∈ R_+^2 with s ≤ t,

(4.6)  E[ | ∫_{[0,t]\[0,s]} X_r dW_r |^p | F_s ] ≤ b_p E[ ( ∫_{[0,t]\[0,s]} X_r^2 dr )^{p/2} | F_s ].


4.3. Hyperbolic stochastic partial differential equations. Let b, σ_j: R^d → R^d, 1 ≤ j ≤ d, be measurable globally Lipschitz functions, where the vector-valued functions σ_1,...,σ_d denote the columns of a matrix σ = (σ_j^i)_{1≤i,j≤d}. Consider the system of stochastic integral equations on the plane

(4.7)  X_t^i = x_0^i + Σ_{j=1}^d ∫_{[0,t]} σ_j^i(X_s) dW_s^j + ∫_{[0,t]} b^i(X_s) ds,  t ∈ R_+^2, 1 ≤ i ≤ d,

where the first integral is an Itô integral with respect to the Brownian sheet (as defined in [28], Chapter 4) and x_0 ∈ R^d is the constant value of the process X_t on the axes. It is well known (see [21], Lemma 3.1) that there exists a unique two-parameter, d-dimensional, continuous and adapted process X = (X_t, t ∈ R_+^2) that satisfies equation (4.7). In addition, E[sup_{r∈[0,t]} |X_r|^p] < ∞ for any p ≥ 2 and t ∈ R_+^2. In [21], Malliavin calculus is used to establish the following result.
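To make (4.7) concrete, here is a minimal explicit grid discretization — a sketch under stated assumptions, not a scheme from the paper: `euler_sheet`, its parameters and the cell-by-cell construction are illustrative choices. The plane [0,t_1] × [0,t_2] is split into n × n cells, each cell receives an independent N(0, cell area) increment, and the two integrals in (4.7) become sums over the cells below and to the left of each grid point.

```python
import numpy as np

def euler_sheet(sigma, b, x0, t=(1.0, 1.0), n=16, rng=None):
    """Explicit grid scheme for X_t = x0 + int sigma(X) dW + int b(X) ds.

    sigma(x) returns a (d, d) array, b(x) a (d,) array.  X[i, j]
    approximates the solution at (i*t1/n, j*t2/n); it uses only
    increments of cells strictly below and to the left, mirroring the
    adaptedness of the exact solution.
    """
    rng = np.random.default_rng(rng)
    h1, h2 = t[0] / n, t[1] / n
    area = h1 * h2
    d = len(x0)
    dW = rng.normal(0.0, np.sqrt(area), size=(n, n, d))
    X = np.empty((n + 1, n + 1, d))
    for i in range(n + 1):
        for j in range(n + 1):
            acc = np.array(x0, dtype=float)
            for k in range(i):
                for l in range(j):
                    acc += sigma(X[k, l]) @ dW[k, l] + b(X[k, l]) * area
            X[i, j] = acc
    return X
```

With σ ≡ 0 and b ≡ 1 the scheme reproduces the exact solution X_t = x_0 + t_1 t_2, and it equals x_0 on the axes, matching the boundary condition in (4.7).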

Theorem 4.7 ([21], Proposition 3.3). If the coefficients of σ and b are infinitely differentiable with bounded partial derivatives of all orders, then X_t^i belongs to D^∞ for all t ∈ R_+^2 and i = 1,...,d.

Assuming the latter infinite differentiability condition on the coefficients of σ and b, we state the following standard hypoellipticity hypothesis:

Condition P. There is n ≥ 1 such that the vector space spanned by the column vectors σ_1,...,σ_d, σ_i∇σ_j, 1 ≤ i, j ≤ d, σ_i∇(σ_j∇σ_k), 1 ≤ i, j, k ≤ d, ..., σ_{i_1}∇(···(σ_{i_{n−1}}∇σ_{i_n})···), 1 ≤ i_1,...,i_n ≤ d, at the point x_0 is R^d, where the column vector σ_i∇σ_j denotes the covariant derivative of σ_j in the direction of σ_i.

The following result uses Theorem 4.2 and gives the existence and smooth- ness of the density of Xt for any t away from the axes.

Theorem 4.8 ([18], Theorem 2.4.2, and [21], Theorem 4.3). Under Condition P, for any point t away from the axes, the random vector X_t is nondegenerate; its law is therefore absolutely continuous with respect to Lebesgue measure on R^d. Moreover, its probability density function is infinitely differentiable.

4.4. Gaussian-type upper bounds. In this section, we present some preliminary results and establish a Gaussian-type upper bound for the density in the drift-free case with vanishing initial conditions.

Let X = (X_t, t ∈ R_+^2) be the unique solution of (4.7) with b ≡ 0 and x_0 = 0, that is,

(4.8)  X_t^i = Σ_{j=1}^d ∫_{[0,t]} σ_j^i(X_s) dW_s^j,  t ∈ R_+^2, 1 ≤ i ≤ d.

We assume that the following two hypotheses on the matrix σ hold:

Hypothesis P1. The coefficients of the matrix σ are bounded and infinitely differentiable with bounded partial derivatives (we denote by T the uniform bound on the coefficients of σ and its first partial derivatives).

Hypothesis P2. Strong ellipticity: ‖σ(x)ξ‖^2 = Σ_{k=1}^d (Σ_{i=1}^d σ_k^i(x) ξ^i)^2 ≥ ρ^2 > 0 for some ρ > 0, for all x ∈ R^d and for all ξ ∈ R^d with ‖ξ‖ = 1.

Note that Hypothesis P2 implies Condition P. Indeed, Hypothesis P2 implies that the vector space spanned by the column vectors σ_1(x),...,σ_d(x) at any point x in R^d is R^d, so Condition P holds.

Fix s ∈ R_+^2. Let P_s(ω, ·) be a regular version of the conditional distribution of the process (X_t − X_s, t ∈ R_+^2 \ [0,s]) given F_s, as defined in Section 2. As in Theorem 4.8, one can check that under Condition P, for P-almost all ω ∈ Ω and for any s < t, the random vector X_t − X_s has, under P_s(ω, ·), a smooth density, which we denote p_{s,t}(ω, ·).
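For a fixed x, the infimum in Hypothesis P2 over unit vectors ξ is the squared smallest singular value of σ(x), so the hypothesis can be probed numerically on sample points. A sketch (not from the paper): `sigma_example` is a hypothetical coefficient matrix, the identity plus a bounded smooth perturbation, chosen so that P2 holds by construction.

```python
import numpy as np

def ellipticity_constant(sigma, xs):
    """min over sample points x of min_{||xi||=1} ||sigma(x) xi||^2,
    i.e. the squared smallest singular value of sigma(x)."""
    return min(np.linalg.svd(sigma(x), compute_uv=False)[-1] ** 2 for x in xs)

def sigma_example(x):
    """Hypothetical coefficient matrix: identity plus a bounded, smooth
    perturbation with entries in [-0.1, 0.1]; every singular value then
    lies in [1 - 0.1 d, 1 + 0.1 d]."""
    d = len(x)
    return np.eye(d) + 0.1 * np.tanh(np.outer(x, x))
```

Here the design choice is deliberate: the perturbation is uniformly bounded and infinitely differentiable with bounded derivatives, so this σ satisfies both P1 and P2 (with ρ² ≥ (1 − 0.1d)² for small d).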

Lemma 4.9. Assume Hypotheses P1 and P2, and fix s < t and σ ⊂ {1,...,d}. Then, for every x ∈ R^d,

p_{s,t}(ω,x) = (−1)^{d−|σ|} E[ 1_{{X_t^i − X_s^i > x^i, i ∈ σ; X_t^i − X_s^i < x^i, i ∉ σ}} H_{(1,...,d)}(X_t − X_s, 1) | F_s ](ω),

where |σ| denotes the cardinality of σ.

Lemma 4.9 follows along the same lines as the one-parameter nonconditional case (see [19], Corollary 3.2.1), and its proof is therefore omitted. For the general case, see [22], (5.3), where a similar expression is obtained for the kernel of a stochastic semigroup.

A key property is that the moments of the iterated derivatives of X are finite; see [17], Lemma 6, where similar nonconditional estimates are obtained for one-parameter Brownian martingales in R^d.

Lemma 4.10. Assume Hypothesis P1. For any 0 < a < b, p ≥ 1 and n ≥ 1, there exists a finite constant C > 0, depending on a, b, p, n and the uniform bounds from Hypothesis P1, such that for any s, t ∈ [a,b]^2 with s ≤ t, any z_1,...,z_n ∈ [0,t], any k_1,...,k_n ∈ {1,...,d} and any r ∈ [0,t],

(4.9)  Σ_{i=1}^d E[ |D_{z_1}^{(k_1)} ··· D_{z_n}^{(k_n)}(X_r^i)|^p | F_s ] ≤ C.

≤ 2^{p−1} { d T^p + b_p d^{p−1} Σ_{i,j=1}^d E[ ( ∫_{[z,t]} |D_z^{(k)} σ_j^i(X_r)|^2 dr )^{p/2} | F_s ] }

≤ 2^{p−1} { d T^p + b_p d^{2p−2} b^p T^p ∫_{[z,t]} Σ_{i=1}^d E[ |D_z^{(k)}(X_r^i)|^p | F_s ] dr }.

Finally, for z fixed, using a two-parameter version of Gronwall's lemma (see [18], Exercise 2.4.3) in the form

(4.11)  f(z,t) ≤ A + B ∫_{[z,t]} f(z,r) dr,

we conclude the proof of (4.9) for n = 1. We now assume that (4.9) holds for n ≥ 1. Applying the derivative operator n + 1 times to (4.10) yields

D_{z_1}^{(k_1)} ··· D_{z_{n+1}}^{(k_{n+1})}(X_t^i) = Σ_{l=1}^{n+1} D_{z_1}^{(k_1)} ··· D_{z_{l−1}}^{(k_{l−1})} D_{z_{l+1}}^{(k_{l+1})} ··· D_{z_{n+1}}^{(k_{n+1})}(σ_{k_l}^i(X_{z_l}))
 + Σ_{j=1}^d ∫_{[z_1∨···∨z_{n+1}, t]} D_{z_1}^{(k_1)} ··· D_{z_{n+1}}^{(k_{n+1})}(σ_j^i(X_r)) dW_r^j.

For the first term, we use the induction hypothesis and the uniform bounds on the derivatives of the coefficients of σ. For the second term, we use Burkholder's inequality and again the induction hypothesis and the bounds on the derivatives of the coefficients of σ. We finally obtain

Σ_{i=1}^d E[ |D_{z_1}^{(k_1)} ··· D_{z_{n+1}}^{(k_{n+1})}(X_t^i)|^p ] ≤ C_1 + C_2 ∫_{[z_1∨···∨z_{n+1}, t]} Σ_{i=1}^d E[ |D_{z_1}^{(k_1)} ··· D_{z_{n+1}}^{(k_{n+1})}(X_r^i)|^p ] dr,

and the proof is completed using Gronwall's lemma. □

In the next lemma, we follow [17], Lemma 12, where similar estimates are carried out for one-parameter Brownian martingales in Rd.

Lemma 4.11. Assume Hypotheses P1 and P2. There exists a finite constant C > 0, depending on a, b and the uniform bounds from Hypotheses P1 and P2, such that for any s, t ∈ [a,b]^2 with s ≤ t,

(4.12)  ‖H^{t,s}_{(1,...,d)}‖^{F_s}_{0,2,H_{s,t}} ≤ C ‖t − s‖^{−d/2},

where H^{t,s}_{(1,...,d)} denotes H_{(1,...,d)}(X_t − X_s, 1).

Proof.

‖H^{t,s}_{(1,...,d)}‖^{F_s}_{0,2,H_{s,t}} ≤ c ‖H^{t,s}_{(1,...,d−1)}‖^{F_s}_{1,4,H_{s,t}} Σ_{j=1}^d ‖((γ^{s,t}_{X_t−X_s})^{−1})_{dj}‖^{F_s}_{1,8,H_{s,t}} ‖D(X_t^j − X_s^j)‖^{F_s}_{1,8,H_{s,t}}.

Hence, to prove (4.12), it suffices to show that for each p ≥ 1 and n = 1,...,d,

(4.13)  ‖D(X_t^j − X_s^j)‖^{F_s}_{n,p,H_{s,t}} ≤ c_{n,p}^1 ‖t − s‖^{1/2}

for some finite constant c_{n,p}^1 > 0, and

(4.14)  ‖((γ^{s,t}_{X_t−X_s})^{−1})_{ij}‖^{F_s}_{n,p,H_{s,t}} ≤ c_{n,p}^2 ‖t − s‖^{−1}

for some finite constant c_{n,p}^2 > 0. Indeed, (4.13) and (4.14) with n = 1 and p = 8 imply that

‖H^{t,s}_{(1,...,d)}‖^{F_s}_{0,2,H_{s,t}} ≤ c d c^1 c^2 ‖t − s‖^{−1/2} ‖H^{t,s}_{(1,...,d−1)}‖^{F_s}_{1,4,H_{s,t}}.

Iterating the process, we find

‖H^{t,s}_{(1,...,d)}‖^{F_s}_{0,2,H_{s,t}} ≤ (c d c^1 c^2)^d ‖t − s‖^{−d/2},

which concludes the proof of (4.12). □

Proof of (4.13). Fix p ≥ 1. By definition,

‖D(X_t^i − X_s^i)‖^{F_s}_{n,p,H_{s,t}}
(4.15)  = { E[ ‖D(X_t^i − X_s^i)‖^p_{H_{s,t}} | F_s ] + Σ_{k=2}^{n+1} E[ ‖D^k(X_t^i − X_s^i)‖^p_{H^{⊗k}_{s,t}} | F_s ] }^{1/p}.

Furthermore, for any z ∈ [0,t] \ [0,s], the process (D_z^{(k)}(X_t^i − X_s^i), 1 ≤ k ≤ d) satisfies the system of stochastic differential equations

(4.16)  D_z^{(k)}(X_t^i − X_s^i) = σ_k^i(X_z) + Σ_{j=1}^d ∫_{[z,t]} D_z^{(k)} σ_j^i(X_r) dW_r^j.

By Burkholder's inequality (4.6) for conditional expectations and using the Cauchy–Schwarz inequality, for any p ≥ 1,

E[ ‖D(X_t^i − X_s^i)‖^p_{H_{s,t}} | F_s ]
 = E[ ( Σ_{k=1}^d ∫_{[0,t]\[0,s]} |D_z^{(k)}(X_t^i − X_s^i)|^2 dz )^{p/2} | F_s ]
 ≤ d^{p/2−1} (c‖t − s‖)^{p/2−1} Σ_{k=1}^d ∫_{[0,t]\[0,s]} E[ |D_z^{(k)}(X_t^i − X_s^i)|^p | F_s ] dz
 ≤ 2^{p−1} d^{p/2−1} (c‖t − s‖)^{p/2−1} { d T^p c ‖t − s‖

 + d^{p−1} Σ_{k,j=1}^d ∫_{[0,t]\[0,s]} E[ | ∫_{[z,t]} D_z^{(k)} σ_j^i(X_r) dW_r^j |^p | F_s ] dz }

≤ 2^{p−1} (c‖t − s‖)^{p/2} { d^{p/2} T^p + (c‖t − s‖)^{p/2} d^{5p/2−2} b_p T^p sup_{r,z∈[0,t]\[0,s]} Σ_{k,l=1}^d E[ |D_z^{(k)}(X_r^l)|^p | F_s ] }.

Finally, by Lemma 4.10, we get that there exists a positive finite constant k_1(a,b,T) such that

(4.17)  E[ ‖D(X_t^i − X_s^i)‖^p_{H_{s,t}} | F_s ] ≤ k_1 ‖t − s‖^{p/2}.

To estimate the second term on the right-hand side of (4.15), we apply n times the derivative operator to (4.16) to get

D_{z_1}^{(k_1)} ··· D_{z_{n+1}}^{(k_{n+1})}(X_t^i − X_s^i) = Σ_{l=1}^{n+1} D_{z_1}^{(k_1)} ··· D_{z_{l−1}}^{(k_{l−1})} D_{z_{l+1}}^{(k_{l+1})} ··· D_{z_{n+1}}^{(k_{n+1})}(σ_{k_l}^i(X_{z_l}))
 + Σ_{j=1}^d ∫_{[z_1∨···∨z_{n+1}, t]} D_{z_1}^{(k_1)} ··· D_{z_{n+1}}^{(k_{n+1})}(σ_j^i(X_r)) dW_r^j.

Proceeding as above, by Burkholder's inequality (4.6) for conditional expectations, and using the Cauchy–Schwarz inequality and Lemma 4.10, we obtain, for all k = 2,...,n+1,

(4.18)  E[ ‖D^k(X_t^i − X_s^i)‖^p_{H^{⊗k}_{s,t}} | F_s ] ≤ k_2 ‖t − s‖^{p/2},

where k_2 is a finite positive constant. Finally, substituting (4.17) and (4.18) into (4.15) concludes the proof of (4.13). □

Proof of (4.14). Fix p ≥ 1. By definition,

‖((γ^{s,t}_{X_t−X_s})^{−1})_{ij}‖^{F_s}_{n,p,H_{s,t}}
(4.19)  = { E[ |((γ^{s,t}_{X_t−X_s})^{−1})_{ij}|^p | F_s ] + Σ_{k=1}^n E[ ‖D^k((γ^{s,t}_{X_t−X_s})^{−1})_{ij}‖^p_{H^{⊗k}_{s,t}} | F_s ] }^{1/p}.

We use a standard argument to estimate the moments of the inverse of the Malliavin matrix; we follow [18], proof of (3.22), and [17], Lemma 10. Using the Cauchy–Schwarz inequality and Cramér's formula for (γ^{s,t}_{X_t−X_s})^{−1}, we can easily check that for all p ≥ 1,

(4.20)  E[ |((γ^{s,t}_{X_t−X_s})^{−1})_{ij}|^p | F_s ] ≤ c_{d,p} (E[ (det γ^{s,t}_{X_t−X_s})^{−2p} | F_s ])^{1/2} (E[ ‖D(X_t − X_s)‖^{4p(d−1)}_{H_{s,t}} | F_s ])^{1/2}

for some constant c_{d,p} > 0. For the second factor, we use (4.17) to get

(4.21)  E[ ‖D(X_t − X_s)‖^{4p(d−1)}_{H_{s,t}} | F_s ] ≤ k_1 ‖t − s‖^{2p(d−1)}

for some finite constant k_1(a,b,T) > 0. On the other hand, we write

det γ^{s,t}_{X_t−X_s} ≥ ( inf_{‖v‖=1} v^T γ^{s,t}_{X_t−X_s} v )^d

= ( inf_{‖v‖=1} Σ_{k=1}^d ∫_{[0,t]\[0,s]} ( Σ_{i=1}^d D_z^{(k)}(X_t^i − X_s^i) v_i )^2 dz )^d.

Using (4.16) and Hypothesis P2, for any h ∈ (0,1], we see that the expression in parentheses is bounded below by

Σ_{k=1}^d ∫_{[0, t−(1−h)(t−s)]\[0,s]} dz ( Σ_{i=1}^d v_i ( σ_k^i(X_z) + Σ_{j=1}^d ∫_{[z,t]} D_z^{(k)} σ_j^i(X_r) dW_r^j ) )^2 ≥ ½ A ρ^2 − I_h,

where A denotes the area of the region [0, t−(1−h)(t−s)] \ [0,s] and

I_h = Σ_{k=1}^d ∫_{[0, t−(1−h)(t−s)]\[0,s]} ( Σ_{i,j=1}^d ∫_{[z,t]} D_z^{(k)} σ_j^i(X_r) dW_r^j )^2 dz.

We choose y such that A ρ^2 = 4 y^{−1/d} and notice that, since h ≤ 1, y ≥ c := 4^d (b√2)^{−d} ‖t − s‖^{−d} ρ^{−2d}. In addition, as h varies in (0,1], y varies in [c, ∞). Applying Chebyshev's inequality for conditional probabilities, we find that for any q ≥ 2,

P{ det γ^{s,t}_{X_t−X_s} < 1/y | F_s } ≤ P{ (½ A ρ^2 − I_h)^d < 1/y | F_s }

≤ P{ I_h > y^{−1/d} | F_s } ≤ y^{q/d} E[ |I_h|^q | F_s ].

Using Burkholder's inequality (4.6) for conditional expectations, for any q ≥ 2, we have

E[ |I_h|^q | F_s ] ≤ d^{5q−3} b_q A^{2(q−1)} Σ_{i,j,k=1}^d ∫∫_{([0,t]\[0,s])^2} E[ |D_z^{(k)} σ_j^i(X_r)|^{2q} | F_s ] dr dz.


By Lemma 4.10(i), the conditional expectation on the right-hand side is bounded above by some finite positive constant k_2(a,b,T). Using the definition of A, we obtain

E[ |I_h|^q | F_s ] ≤ k_3 (4^{2(q−1)}/ρ^{4(q−1)}) y^{2(1−q)/d}.

Consequently, taking q > 2 + 2pd,

E[ (det γ^{s,t}_{X_t−X_s})^{−2p} | F_s ]
 = ∫_0^∞ 2p y^{2p−1} P{ (det γ^{s,t}_{X_t−X_s})^{−1} > y | F_s } dy
 ≤ c^{2p} + 2p ∫_c^∞ y^{2p−1} P{ det γ^{s,t}_{X_t−X_s} < 1/y | F_s } dy
 ≤ 4^{2dp}/(‖t − s‖^{2dp} (b√2)^{2dp} ρ^{4dp}) + 2p ∫_c^∞ y^{2p−1+(q/d)} E[ |I_h|^q | F_s ] dy
 ≤ 4^{2dp}/(‖t − s‖^{2dp} (b√2)^{2dp} ρ^{4dp}) + 2p k_3 (4^{2(q−1)}/ρ^{4(q−1)}) ∫_c^∞ y^{2p−1−(q/d)+(2/d)} dy
 ≤ k_4 ‖t − s‖^{−2dp},

where k_4 is a finite positive constant. Therefore, we have proved that

(4.22)  E[ (det γ^{s,t}_{X_t−X_s})^{−2p} | F_s ] ≤ k_4 ‖t − s‖^{−2dp}.

Substituting (4.21) and (4.22) into (4.20), we obtain

(4.23)  E[ |((γ^{s,t}_{X_t−X_s})^{−1})_{ij}|^p | F_s ] ≤ k_5 (‖t − s‖^{−2pd} ‖t − s‖^{2p(d−1)})^{1/2} = k_5 ‖t − s‖^{−p}

for some finite constant k_5 > 0 not depending on i or j. This proves the desired estimate for the first term in (4.19). Turning to the second term, we claim that for all i, j = 1,...,d and k = 1,...,n,

(4.24)  E[ ‖D^k((γ^{s,t}_{X_t−X_s})^{−1})_{ij}‖^p_{H^{⊗k}_{s,t}} | F_s ] ≤ k_6 ‖t − s‖^{−p}

for some finite constant k_6 > 0. Indeed, by iterating the equality (see [18], Lemma 2.1.6)

D((γ_{X_t})^{−1})_{ij} = − Σ_{k,l=1}^d ((γ_{X_t})^{−1})_{ik} D((γ_{X_t})_{kl}) ((γ_{X_t})^{−1})_{jl},

and using Hölder's inequality for conditional expectations, we have

sup_{i,j} E[ ‖D^k((γ^{s,t}_{X_t−X_s})^{−1})_{ij}‖^p_{H^{⊗k}_{s,t}} | F_s ]

≤ c Σ_{r=1}^k Σ_{k_1+···+k_r=k, k_l≥1, l=1,...,r} sup (E[ ‖D^{k_1}(γ^{s,t}_{X_t−X_s})_{i_1 j_1}‖^{p(r+1)}_{H^{⊗k_1}_{s,t}} | F_s ])^{1/(r+1)} × ···

× (E[ ‖D^{k_r}(γ^{s,t}_{X_t−X_s})_{i_r j_r}‖^{p(r+1)}_{H^{⊗k_r}_{s,t}} | F_s ])^{1/(r+1)} × sup_{i,j} (E[ |((γ^{s,t}_{X_t−X_s})^{−1})_{ij}|^{p(r+1)^2} | F_s ])^{1/(r+1)},

where the supremum before the summation is over i_1, j_1,...,i_r, j_r ∈ {1,...,d}. By (4.23), for all i, j = 1,...,d,

E[ |((γ^{s,t}_{X_t−X_s})^{−1})_{ij}|^{p(r+1)^2} | F_s ] ≤ k_7 ‖t − s‖^{−p(r+1)^2}

for some finite constant k_7 > 0. For the other factors, express D^k(γ^{s,t}_{X_t−X_s})_{ij} using the definition of γ^{s,t}_{X_t−X_s}, and use the Cauchy–Schwarz inequality twice and (4.18) to get

E[ ‖D^k(γ^{s,t}_{X_t−X_s})_{ij}‖^p_{H^{⊗k}_{s,t}} | F_s ]
 = E[ ‖ D^k ∫_{[0,t]\[0,s]} D_r(X_t^i − X_s^i) · D_r(X_t^j − X_s^j) dr ‖^p_{H^{⊗k}_{s,t}} | F_s ]

≤ (k+1)^{p−1} Σ_{l=0}^k (k choose l)^p E[ ‖ ∫_{[0,t]\[0,s]} D^l D_r(X_t^i − X_s^i) · D^{k−l} D_r(X_t^j − X_s^j) dr ‖^p_{H^{⊗k}_{s,t}} | F_s ]

≤ d^{p/2} (k+1)^{p−1} Σ_{l=0}^k (k choose l)^p { (E[ ‖D^l D(X_t^i − X_s^i)‖^{2p}_{H^{⊗(l+1)}_{s,t}} | F_s ])^{1/2} × (E[ ‖D^{k−l} D(X_t^j − X_s^j)‖^{2p}_{H^{⊗(k−l+1)}_{s,t}} | F_s ])^{1/2} }
 ≤ k_8 ‖t − s‖^p

for some finite constant k_8 > 0 not depending on i and j. This concludes the proof of (4.24). Finally, substituting (4.23) and (4.24) into (4.19), we conclude the proof of (4.14). The proof of Lemma 4.11 is now complete. □

The following result is a consequence of Lemmas 4.9 and 4.11.

Lemma 4.12. Assume Hypotheses P1 and P2. For any 0 < a < b, there exists a finite constant c > 0, depending on a, b and the uniform bounds from Hypotheses P1 and P2, such that for all t ∈ [a,b]^2 and x ∈ R^d, the density p_{X_t} of X_t satisfies p_{X_t}(x) ≤ c.

Proof. Using Lemma 4.9 with s = 0 and σ = {1,...,d}, and the Cauchy–Schwarz inequality, we obtain

p_{X_t}(x) ≤ (E[ {H_{(1,...,d)}(X_t, 1)}^2 ])^{1/2}.

By Lemma 4.11 with s = 0, there exists a finite positive constant c_2, depending on a, b and the uniform bounds from Hypotheses P1 and P2, such that

E[ {H_{(1,...,d)}(X_t, 1)}^2 ] ≤ c_2,

which proves the lemma. □

The next proposition is the main result of this section.

Proposition 4.13. Assume Hypotheses P1 and P2. For any 0 < a < b, there exist finite positive constants c_1 and c_2, depending on a, b and the uniform bounds from Hypotheses P1 and P2, such that for P-almost all ω ∈ Ω, all s, t ∈ [a,b]^2 with s < t and all x ∈ R^d,

(4.25)  p_{s,t}(ω, x) ≤ c_1 ‖t − s‖^{−d/2} exp(−‖x‖^2/(c_2 ‖t − s‖)).

G_u = F_{(s_1+u, s_2)}, if 0 ≤ u ≤ t_1 − s_1,
G_u = F_{(t_1, u+s_2+s_1−t_1)}, if t_1 − s_1 ≤ u ≤ |t| − |s|.

Notice that M_0 = 0, M_{|t|−|s|} = X_t − X_s and G_0 = F_s. By [1], (2.9),

⟨M^i⟩_{|t|−|s|} = ⟨M^i⟩_{t_1−s_1} + (⟨M^i⟩_{|t|−|s|} − ⟨M^i⟩_{t_1−s_1}) = Σ_{j=1}^d ∫_{[0,t]\[0,s]} σ_j^i(X_r)^2 dr.

Moreover, Hypothesis P1 and the Cauchy–Schwarz inequality imply that ⟨M^i⟩_{|t|−|s|} ≤ C ‖t − s‖, where C = 2bdT^2, for all 1 ≤ i ≤ d. Applying the exponential martingale inequality [18], A.2, we get

(4.27)  P{ |X_t^i − X_s^i| ≥ |x^i| | F_s } ≤ 2 exp( −|x^i|^2/(2C ‖t − s‖) ),  1 ≤ i ≤ d.

Finally, substituting (4.27) into (4.26) and using Lemma 4.11 concludes the proof of (4.25). □
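The tail bound (4.27) is the standard exponential-martingale (sub-Gaussian) estimate: for a Gaussian variable with variance v = C‖t − s‖, the exact two-sided tail erfc(x/√(2v)) always sits below 2 exp(−x²/(2v)). A quick numeric check of this comparison (a sketch, not from the paper; the helper names are illustrative):

```python
import math

def gaussian_tail(x, v):
    """Exact two-sided tail P(|N(0, v)| >= x) = erfc(x / sqrt(2 v))."""
    return math.erfc(x / math.sqrt(2 * v))

def martingale_bound(x, v):
    """Right-hand side of the exponential-martingale bound: 2 exp(-x^2 / (2 v))."""
    return 2 * math.exp(-x * x / (2 * v))
```

The bound is loose by a polynomial factor in x/√v, but it has exactly the Gaussian exponent used to build (4.25).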

4.5. Gaussian-type lower bounds. In this section we present a lower bound of Gaussian type for the density of the random variable X_t for any t away from the axes, where X = (X_t, t ∈ R_+^2) denotes the solution of (4.8), and we present an analogous lower bound for the conditional density of X_t − X_s given F_s, when s < t.

Theorem 4.14. Assume Hypotheses P1 and P2 and let X = (X_t, t ∈ R_+^2) be the solution of (4.8). Then for any 0 < a < b, there exist finite positive constants c_1 and c_2, depending on a, b and the uniform bounds from Hypotheses P1 and P2, such that for all t ∈ [a,b]^2 and x ∈ R^d,

p_{X_t}(x) ≥ c_1 exp(−‖x‖^2/c_2).

Proof. Fix s = (s_1, s_2) ∈ [a,b]^2. Consider a sufficiently fine partition 0 = t_0 < t_1 < ··· < t_N = s_1 of [0, s_1], and let g ≡ 1 on [0,s_1] × [0,s_2], so that

∆_{n−1}(g) = ∫_{t_{n−1}}^{t_n} ‖g(t, ·)‖^2_{L²([0,s_2],R^d)} dt = (t_n − t_{n−1}) s_2

for n = 1,...,N. Note that ‖g‖^2_{L²([0,s_1]×[0,s_2],R^d)} = s_1 s_2.

We now define the sequence of nondegenerate random vectors F_n required in the Main setup of [13] by F_n = X_{t_n,s_2}, for 0 ≤ n ≤ N, so that F_N = X_{s_1,s_2}. Notice that F_n = (F_n^1,...,F_n^d) and F_n^i = X_{t_n,s_2}^i. By (4.8),

(4.28)  F_n^i − F_{n−1}^i = Σ_{j=1}^d ∫_{R_{t_{n−1},t_n}} σ_j^i(X_θ) dW_θ^j,  1 ≤ i ≤ d,

where R_{t_{n−1},t_n} denotes the rectangle [t_{n−1},t_n] × [0,s_2]. The first objective is to find the Itô expansion of order l ≥ 1 of the random variable F_n^i − F_{n−1}^i, to obtain the approximations F̄_n^i required in [13], Theorem 5. For this, we need to introduce some notation. We consider multiindices β ∈ ∪_{n≥1} {0,1}^n, with length l(β), and write β = (β_1,...,β_{l(β)}). We write −β and β− for the multiindex obtained by deleting the first and the last component, respectively, of the multiindex β. For completeness, we write {v} for the multiindex of length zero. These multiindices will be used to write multiple integrals, some of which are stochastic and some are deterministic: when the index is 1, d^1W_θ^j denotes dW_θ^j, and when the index is 0, d^0W_θ^j denotes dθ. Usually, these integrals are not taken over the whole space [0,s_1] × [0,s_2], but over subsets of it; most often the subset is of the type [t_{n−1},t_n] × [0,s_2]. In general, these integrals are denoted I_β(h_β) for an F^1_{t_{n−1}}-measurable random process h_β such that

sup_{ω∈Ω} ‖h_β‖_{L²(([t_{n−1},t_n]×[0,s_2])^{l(β)}, R^d)}(ω) < ∞.

Given a family of M functions f_m: R^d → R, 1 ≤ m ≤ M, and M points θ^1,...,θ^M placed on the same vertical line in R_{t_{n−1},t_n}, we define, for any 1 ≤ r ≤ d and ξ ∈ R_{t_{n−1},t_n}, the following operations on random variables:

L^1_{r,θ^1,...,θ^M,ξ}( ∏_{m=1}^M f_m(X_{θ^m}) )

= Σ_{m=1}^M 1_{R_{πθ^m,θ^m}}(ξ) ( ∏_{n=1,n≠m}^M f_n(X_{θ^n △ ξ}) ) Σ_{k=1}^d (∂f_m/∂x_k)(X_{θ^m △ ξ}) σ_r^k(X_ξ),

L^0_{r,θ^1,...,θ^M,ξ}( ∏_{m=1}^M f_m(X_{θ^m}) )
= ½ Σ_{m=1}^M 1_{R_{πθ^m,θ^m}}(ξ) ( ∏_{n=1,n≠m}^M f_n(X_{θ^n △ ξ}) ) Σ_{k,l=1}^d (∂²f_m/∂x_k∂x_l)(X_{θ^m △ ξ}) σ_r^k(X_ξ) σ_r^l(X_ξ)
+ ½ Σ_{m_1,m_2=1}^M 1_{R_{πθ^{m_1},θ^{m_1}} ∩ R_{πθ^{m_2},θ^{m_2}}}(ξ) ( ∏_{n=1,n≠m_1,m_2}^M f_n(X_{θ^n △ ξ}) )
 × Σ_{k,l=1}^d (∂f_{m_1}/∂x_k)(X_{θ^{m_1} △ ξ}) σ_r^k(X_ξ) (∂f_{m_2}/∂x_l)(X_{θ^{m_2} △ ξ}) σ_r^l(X_ξ).

Note that the points θ^1 △ ξ,...,θ^M △ ξ and ξ are also placed on a single vertical line in R_{t_{n−1},t_n}. These operations will be used to apply the standard multidimensional Itô formula to f(X_{θ^1},...,X_{θ^M}) when f(x^1,...,x^M) is of the form ∏_{m=1}^M f_m(x^m), where the f_m are as above, 1 ≤ m ≤ M. Indeed,

u_1 ↦ (X_{u_1,θ_2^1},...,X_{u_1,θ_2^M}),  t_{n−1} ≤ u_1 ≤ t_n,

is an Md-dimensional martingale with mutual covariation

d⟨X^k_{·,θ_2^n}, X^l_{·,θ_2^m}⟩_{u_1} = du_1 Σ_{r=1}^d ∫_0^{θ_2^n ∧ θ_2^m} σ_r^k(X_{u_1,ξ_2}) σ_r^l(X_{u_1,ξ_2}) dξ_2.

Therefore, according to the Itô formula [2],

(4.29)  ∏_{m=1}^M f_m(X_{θ^m}) = ∏_{m=1}^M f_m(X_{πθ^m})
 + Σ_{r=1}^d ∫_{R_{t_{n−1},t_n}} L^1_{r,θ^1,...,θ^M,ξ}( ∏_{m=1}^M f_m(X_{θ^m}) ) dW_ξ^r
 + Σ_{r=1}^d ∫_{R_{t_{n−1},t_n}} L^0_{r,θ^1,...,θ^M,ξ}( ∏_{m=1}^M f_m(X_{θ^m}) ) dξ.

We are now going to apply the Itô formula iteratively to the integrands in (4.29), to obtain an Itô expansion similar to the one presented in [12], Chapter 5. For this, we introduce some additional notation. Given a multiindex

β, we define the functions

h^i_{β,j}(θ) = σ_j^i(X_θ), if β = {v},
h^i_{β,j,r}(θ, ξ) = L^1_{r,θ,ξ}(σ_j^i(X_θ)), if β = (1),
h^i_{β,j,r}(θ, ξ) = L^0_{r,θ,ξ}(σ_j^i(X_θ)), if β = (0),
h^i_{β,j,r_1,...,r_{l(β)}}(θ, ξ^1,...,ξ^{l(β)}) = L^β_{r_1,...,r_{l(β)},θ,ξ^1,...,ξ^{l(β)}}(σ_j^i(X_θ)),
where

L^β_{r_1,...,r_{l(β)},θ,ξ^1,...,ξ^{l(β)}} = L^{β_1}_{r_{l(β)},θ,ξ^1,...,ξ^{l(β)}} ∘ L^{β_2}_{r_{l(β)−1},θ,ξ^1,...,ξ^{l(β)−1}} ∘ ··· ∘ L^{β_{l(β)}}_{r_1,θ,ξ^1}.

We define the multiple (Itô stochastic/deterministic) integrals I_β by

(4.30)  I_β[h^i_{β−,j,r_1,...,r_{l(β)−1}}(θ, ξ^1,...,ξ^{l(β)−1})]_{R_{t_{n−1},t_n}}
 = Σ_{j,r_1,...,r_{l(β)−1}=1}^d ∫_{R_{t_{n−1},t_n}} ( ∫_{R_{πθ,θ}} ··· ( ∫ h^i_{β−,j,r_1,...,r_{l(β)−1}}(θ, ξ^1,...,ξ^{l(β)−1}) d^{β_1}W^{r_{l(β)−1}}_{ξ^{l(β)−1}} ) ··· d^{β_{l(β)−1}}W^{r_1}_{ξ^1} ) dW_θ^j,

where the domain of integration of each inner integral is given by the indicator functions that appear in the definition of the operators L^0 and L^1. Note that the functions h^i_{β−,j,r_1,...,r_{l(β)−1}}(θ, ξ^1,...,ξ^{l(β)−1}) are sums of functions of the form ∏_{n=1}^{l(β)} f_n(X_{θ^n}), where the f_n: R^d → R are derivatives of some order of the coefficients of the matrix σ, and θ^1,...,θ^{l(β)} are l(β) points placed on a single vertical line in R_{t_{n−1},t_n} that depends on θ, ξ^1,...,ξ^{l(β)−1}.

We now define multiple Itô stochastic integrals I_β^π by modifying the integrand in the definition of I_β: each point in (4.30) is projected onto the vertical line through (t_{n−1}, 0), that is,

I^π_β[h^i_{β−,j,r_1,...,r_{l(β)−1}}(θ, ξ^1,...,ξ^{l(β)−1})]_{R_{t_{n−1},t_n}}
 = Σ_{j,r_1,...,r_{l(β)−1}=1}^d ∫_{R_{t_{n−1},t_n}} ( ∫_{R_{πθ,θ}} ··· ( ∫ Σ ∏_{n=1}^{l(β)} f_n(X_{πθ^n}) d^{β_1}W^{r_{l(β)−1}}_{ξ^{l(β)−1}} ) ··· d^{β_{l(β)−1}}W^{r_1}_{ξ^1} ) dW_θ^j.

Lemma 4.15. The Itô expansion of order l ≥ 1,

(4.31)  F_n^i − F_{n−1}^i = Σ_{β∈A_l} I^π_β[h^i_{β−,j,r_1,...,r_{l(β)−1}}(θ, ξ^1,...,ξ^{l(β)−1})]_{R_{t_{n−1},t_n}}
 + Σ_{β∈B_l} I_β[h^i_{β−,j,r_1,...,r_{l(β)−1}}(θ, ξ^1,...,ξ^{l(β)−1})]_{R_{t_{n−1},t_n}},

holds, where the sets A_l and B_l are recursively defined by A_1 = {(1)}, B_1 = {(1,1), (0,1)}, A_{l+1} = A_l ∪ B_l and B_{l+1} = {β : β− ∈ B_l} for l ≥ 1.

Proof. We prove this lemma by induction on l. By the standard multidimensional Itô formula, applied to the random variable σ_j^i(X_θ) with respect to the first coordinate, it follows from (4.28) that

F_n^i − F_{n−1}^i = Σ_{j=1}^d ∫_{R_{t_{n−1},t_n}} σ_j^i(X_θ) dW_θ^j
 = Σ_{j=1}^d ∫_{R_{t_{n−1},t_n}} σ_j^i(X_{πθ}) dW_θ^j
 + Σ_{j,k,r=1}^d ∫_{R_{t_{n−1},t_n}} ( ∫_{R_{πθ,θ}} (∂σ_j^i(X_{θ△η})/∂x_k) σ_r^k(X_η) dW_η^r ) dW_θ^j
 + ½ Σ_{j,k,l,r=1}^d ∫_{R_{t_{n−1},t_n}} ( ∫_{R_{πθ,θ}} (∂²σ_j^i(X_{θ△η})/∂x_k∂x_l) σ_r^k(X_η) σ_r^l(X_η) dη ) dW_θ^j

for all 1 ≤ i ≤ d, which proves (4.31) for l = 1.

Now assume that (4.31) holds with A_{l−1} and B_{l−1} for some l > 1. We apply the Itô formula (4.29) to all the random variables of the form ∏_{n=1}^{l(β)} f_n(X_{θ^n}) that appear in the stochastic integrals of the second term of (4.31) with B_{l−1}. Here, M = l(β), the functions f_m: R^d → R, 1 ≤ m ≤ l(β), are derivatives of order greater than or equal to 0 of the coefficients of the matrix σ, and θ^1,...,θ^{l(β)} are l(β) points placed on a single vertical line in R_{t_{n−1},t_n}. The first term on the right-hand side of (4.29) gives a new I^π_β integral, and the two other terms each give a new I_β integral. Note that integrals with the operator L^1 add a 1 to all the multiindexes of B_{l−1} and that integrals with the operator L^0 add a 0. This yields (4.31) for l, with A_l = A_{l−1} ∪ B_{l−1} and B_l = {β : β− ∈ B_{l−1}}. □
A {A B } Bl { − ∈Bl−1} Continuing the proof of Theorem 4.14, with this result we define, follow- ing [13], Theorem 5, the approximation F¯i of order l 1 by n ≥ F¯i = ((t t )s )(l+1)/2Zi + F i n n − n−1 2 n n−1 + Iπ[hi (θ,ξ1,...,ξl(β)−1)] , β β−,j,r1,...,rl(β)−1 Rtn−1,tn βX∈Al POTENTIAL THEORY FOR HYPERBOLIC SPDES 41 where (Zn, n = 1,...,N) is an i.i.d. sequence of d-dimensional N(0,I) ran- dom variables independent of the W . In the setting of [13], Theorem 5, we have k = 1 and set Gl = Iπ[hi (θ,ξ1,...,ξl(β)−1)] . n β β−,j,r1,...,rl(β)−1 Rtn−1,tn β∈AXl\A1 i To simplify notation we write hβ for hi (θ,ξ1,...,ξl(β)−1). β−,j,r1,...,rl(β)−1 With these definitions and since by Hypothesis P1, for any multiindex β , ∈ Al i 2 l(β) d sup hβ L (([tn−1,tn]×[0,s2]) ,R )(ω) < , ω∈Ω k k ∞ (H1) of [13], Theorem 5 is satisfied. To prove (H2a) of [13], Theorem 5, note that by definition F 1 F 1 tn−1 i ¯i tn−1 (l+1)/2 i i Fn Fn n,p = ((tn tn−1)s2) Zn + Iβ[hβ]Rt ,t k − k − n−1 n β∈Bl n,p X and, for each β , ∈Bl F 1 i tn−1 Iβ[hβ]Rt ,tn q,p,H k n−1 k tn−1,tn E i p 1 = [ Iβ[hβ]Rt ,tn tn−1 ] (4.32) ( | n−1 | |F q 1/p E j i p 1 + D Iβ[h ]R ⊗j , β tn−1,tn H tn−1 k k tn−1,tn |F ) jX=1 h i where H = L2([t ,t ] [0,s ], Rd). tn−1,tn n−1 n × 2 Fix β l and recall that l(β)= l +1. To estimate the first term in (4.32), we use Burkholder’s∈B inequality (4.6) for conditional expectations and the uniform bounds on the derivatives of the coefficients of σ to get i p 1 (l+1)p/2 (4.33) E[ Iβ[h ]R ] C((tn tn−1)s2) | β tn−1,tn | |Ftn−1 ≤ − for some positive finite constant C > 0 independent of the partition, t1, s and ω Ω. For∈ the second term in (4.32) fix j 1,...,q . Using formula (1.46) from [18], we can easily check that the term∈ { DjI [h}i ] contains mul- β β Rtn−1,tn tiple (Itˆostochastic/deterministic) integrals of orders l + 1,...,l + 1 j. 
Therefore, again using Burkholder’s inequality for conditional expectations− and the uniform bounds on the derivatives of the coefficients of σ, we obtain (4.34) E DjI [hi ] p 1 C((t t )s )(l+1)p/2 β β Rtn−1,tn ⊗j tn−1 n n−1 2 k kHt ,tn |F ≤ − h n−1 i 42 R. C. DALANG AND E. NUALART for some positive finite constant C > 0 independent of the partition, t1, s and ω Ω. Finally,∈ using (4.33) and (4.34), we get that there exists a constant C independent of the partition, t , s and ω Ω, such that 1 ∈ F 1 i i tn−1 (l+1)/2 F F¯ n,p C((t t )s ) , k n − nk ≤ n − n−1 2 which proves (H2a) of [13], Theorem 5, with γ = 1/2. Now, using exactly the same argument that led to (4.22) we can easily check that

$$
\bigl(E\bigl[(\det\gamma^{t_{n-1},t_n}_{F_n})^{-p} \,\big|\, \mathcal{F}_{t_{n-1}}\bigr]\bigr)^{1/p} \le C((t_n-t_{n-1})s_2)^{-d},
$$
where $\gamma^{t_{n-1},t_n}_{F_n} = \bigl(\langle DF^i_n, DF^j_n\rangle_{L^2([t_{n-1},t_n]\times[0,s_2],\,\mathbb{R}^d)}\bigr)_{1\le i,j\le d}$, for some positive finite constant $C$ independent of the partition, $t_1$ and $\omega\in\Omega$. Therefore, (H2b) of [13], Theorem 5, is proved.

Condition (H2c) of [13], Theorem 5, can be rewritten in our case as

$$
C_2 \le ((t_n-t_{n-1})s_2)^{-1} \int_{R_{t_{n-1},t_n}} \sum_{k=1}^d \Biggl(\sum_{i=1}^d \sigma^i_k(X_{\pi\theta})\,\xi^i\Biggr)^{\!2} d\theta \le C_1
$$
for $\xi\in\mathbb{R}^d$ with $\|\xi\|=1$. This is obviously satisfied by Hypotheses P1 and P2.

Finally, condition (H2d) of [13], Theorem 5, is satisfied using the same argument that led to condition (H2a), because the higher-order integrals are, in the $\mathcal{F}_{t_{n-1}}$-conditional $\|\cdot\|_{n,p}$ norm, smaller than $(t_n-t_{n-1})s_2$. Theorem 4.14 is now proved. □

The next result is the conditional version of Theorem 4.14. Let $p_{s,t}(\omega,x)$ be the density of $X_t - X_s$ under the conditional distribution $P_s(\omega,\cdot)$ defined just above Lemma 4.9, for $\omega\in\Omega\setminus N_s$.

Theorem 4.16. Assuming Hypotheses P1 and P2, for any 0

Lemma 4.17. The Itô expansion of order $l\ge 1$,
$$
F^i_n - F^i_{n-1} = \sum_{\beta\in\mathcal{A}_l} I^\pi_\beta[h^i_{\beta-,j,r_1,\dots,r_{l(\beta)-1}}(\theta,\xi^1,\dots,\xi^{l(\beta)-1})]_{[0,t_n]\setminus[0,t_{n-1}]}
+ \sum_{\beta\in\mathcal{B}_l} I_\beta[h^i_{\beta-,j,r_1,\dots,r_{l(\beta)-1}}(\theta,\xi^1,\dots,\xi^{l(\beta)-1})]_{[0,t_n]\setminus[0,t_{n-1}]},
$$
holds, where the sets $\mathcal{A}_l$ and $\mathcal{B}_l$ are recursively defined by $\mathcal{A}_1=\{(1)\}$, $\mathcal{B}_1=\{(1,1),(0,1)\}$, $\mathcal{A}_{l+1}=\{\mathcal{A}_l,\mathcal{B}_l\}$ and $\mathcal{B}_{l+1}=\{\beta:\beta-\in\mathcal{B}_l\}$ for $l\ge 1$.

Again, following [13], Theorem 5, we define the approximation $\bar F^i_n$ of order $l\ge 1$ by
$$
\bar F^i_n = \|t_n-t_{n-1}\|^{(l+1)/2} Z^i_n + F^i_{n-1}
+ \sum_{\beta\in\mathcal{A}_l} I^\pi_\beta[h^i_{\beta-,j,r_1,\dots,r_{l(\beta)-1}}(\theta,\xi^1,\dots,\xi^{l(\beta)-1})]_{[0,t_n]\setminus[0,t_{n-1}]},
$$
where $(Z_n,\ n=0,\dots,N)$ is an i.i.d. sequence of $d$-dimensional $N(0,I)$ random variables independent of the Wiener process $W$. The remainder of the proof follows exactly as in the proof of Theorem 4.14. We note that the constant $c$ in the conclusion of Theorem 4.16 is also uniform in $\omega\in\Omega\setminus N_s$ and $s,t\in[a,b]^2$ with $s < t$.

5. Potential theory for hyperbolic SPDEs. In this section, we extend the results obtained in Section 2 to the solution of equation (4.7). The proofs make use of Malliavin calculus and are an application of the results of Section 4, which contains the technical work.

5.1. The case $b\equiv 0$. Let $X=(X_t,\ t\in\mathbb{R}^2_+)$ be the unique $d$-dimensional adapted continuous process defined on $(\Omega,\mathcal{G},P)$ that solves (4.8). The aim of this section is to establish the following result.

Theorem 5.1. Assuming Hypotheses P1 and P2, for all $0<a<b<\infty$ and $M>0$, there exists a positive finite constant $K$ depending on $a$, $b$, $M$, $\rho$ and the uniform bounds on the coefficients of $\sigma$ and its derivatives, such that for all compact sets $A\subset\{x\in\mathbb{R}^d : \|x\|<M\}$,
$$
K^{-1}\,\mathrm{Cap}_{d-4}(A) \le P\{\exists\, t\in[a,b]^2 : X_t\in A\} \le K\,\mathrm{Cap}_{d-4}(A),
$$
where $\mathrm{Cap}_\beta$ denotes the capacity with respect to the Newtonian kernel $k_\beta(\cdot)$ defined in Theorem 3.1.

Proof. To prove Theorem 5.1 it suffices to prove that Hypotheses H1–H3 of Theorem 2.4 hold for the process $X$. Since Hypothesis H3 is only used for the upper bound in the statement of Theorem 5.1, we only prove Hypothesis H3 when $d\ge 4$, since the upper bound is trivially satisfied for $d\le 3$. Fix $0<a<b<\infty$ and $M>0$, and recall that Hypotheses P1 and P2 are satisfied.

Verification of Hypothesis H1. Fix $x\in\mathbb{R}^d$ such that $\|x\|\le M$. Using Theorem 4.14, we find that
$$
\int_{[a,b]^2} p_{X_t}(x)\,dt \ge C \int_{[a,b]^2} (t_1t_2)^{-d/2} \exp\biggl(-\frac{\|x\|^2}{Ct_1t_2}\biggr)\, dt
\ge C(b-a)^2\, b^{-d} \exp\biggl(-\frac{M^2}{Ca^2}\biggr),
$$
which shows that Hypothesis H1 is satisfied.
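The second inequality simply bounds the integrand from below at the worst corner of $[a,b]^2$ (largest $t_1t_2$ in the power, smallest $t_1t_2 = a^2$ in the exponential). A quick numerical sanity check, with hypothetical sample values for $d$, $a$, $b$, $C$ and $\|x\|$:

```python
import numpy as np

# Sanity check (illustrative values, not from the paper): on [a,b]^2 the
# integrand (t1*t2)^(-d/2) * exp(-r2/(C*t1*t2)) is everywhere at least
# b^(-d) * exp(-M2/(C*a^2)) when r2 = ||x||^2 <= M2, so its integral
# dominates (b-a)^2 * b^(-d) * exp(-M2/(C*a^2)).
d, a, b, C, r2, M2 = 5, 1.0, 2.0, 1.0, 1.0, 1.0
t = np.linspace(a, b, 401)
t1, t2 = np.meshgrid(t, t)
integrand = (t1 * t2) ** (-d / 2) * np.exp(-r2 / (C * t1 * t2))
integral = integrand.mean() * (b - a) ** 2        # Riemann estimate
lower_bound = (b - a) ** 2 * b ** (-d) * np.exp(-M2 / (C * a ** 2))
print(integral >= lower_bound)  # True
```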

Verification of Hypothesis H2. Fix $x$ and $y$ such that $\|x\|\le M$ and $\|y\|\le M$. Let $s,t\in[a,b]^2$ and assume first that $s\le t$. By Proposition 4.13, the conditional density of $X_t-X_s$ given $\mathcal{F}_s$ satisfies a Gaussian-type upper bound for some finite constant $c>0$. Take the conditional expectation of both sides with respect to $\sigma(X_s)$ to find that
$$
p_{X_t-X_s\mid X_s=y}(z) \le c\,\|t-s\|^{-d/2} \exp\biggl(-\frac{\|z\|^2}{c\,\|t-s\|}\biggr).
$$
By Lemma 4.12, $p_{X_s}(y)$ is uniformly bounded over $s\in[a,b]^2$ and $y\in\mathbb{R}^d$. Therefore, for all $s,t\in[a,b]^2$ with $s\le t$,
$$
p_{X_t,X_s}(x,y) \le C\,\|t-s\|^{-d/2} \exp\biggl(-\frac{\|x-y\|^2}{c\,\|t-s\|}\biggr) \tag{5.1}
$$
for some constant $C>0$.

We now assume that $s,t\in[a,b]^2$ with $s\not\le t$ and $t\not\le s$. Let $E_1$ and $E_2$ be two Borel subsets of $\mathbb{R}^d$. Using the conditional independence property, we obtain
$$
P\{X_t\in E_1,\, X_s\in E_2\} = E[P\{X_t\in E_1,\, X_s\in E_2 \mid \mathcal{F}_{s\wedge t}\}]
= E[P\{X_t\in E_1\mid\mathcal{F}_{s\wedge t}\}\,P\{X_s\in E_2\mid\mathcal{F}_{s\wedge t}\}]
= E\biggl[\int_{E_1} p_{s\wedge t,\,t}(\cdot,\, x-X_{s\wedge t})\,dx \int_{E_2} p_{s\wedge t,\,s}(\cdot,\, y-X_{s\wedge t})\,dy\biggr].
$$
Set $p(c,x) = c^{-d/2}\exp(-\|x\|^2/(2c))$. By Proposition 4.13 and Lemma 4.12, the right-hand side is bounded above by

$$
C\,E\biggl[\int_{E_1} dx\; p(c\,\|t-(s\wedge t)\|,\, x-X_{s\wedge t}) \int_{E_2} dy\; p(c\,\|s-(s\wedge t)\|,\, y-X_{s\wedge t})\biggr]
= C \int_{E_1} dx \int_{E_2} dy \int_{\mathbb{R}^d} dz\; p(c\,\|t-(s\wedge t)\|,\, x-z)\, p(c\,\|s-(s\wedge t)\|,\, y-z).
$$
For $x$ fixed, do the change of variables $u=z-x$ and use the fact that $p(\sigma^2,\cdot)$ is an even function to write the inner integral as a convolution of two Gaussian densities, which is equal to
$$
p(c\,\|t-(s\wedge t)\| + c\,\|s-(s\wedge t)\|,\, y-x) \le C\,p(\tilde c\,\|t-s\|,\, y-x)
$$
by the triangle inequality. We conclude that

$$
P\{X_t\in E_1,\, X_s\in E_2\} \le C \int_{E_1} dx \int_{E_2} dy\; p(\tilde c\,\|t-s\|,\, y-x)
$$
and, therefore,
$$
p_{X_t,X_s}(x,y) \le C\,\|t-s\|^{-d/2} \exp\biggl(-\frac{\|x-y\|^2}{\tilde c\,\|t-s\|}\biggr). \tag{5.2}
$$
This shows that an estimate like (5.1) holds also when $s\not\le t$ and $t\not\le s$.
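The identity used above — that the convolution of two centered Gaussian densities is the centered Gaussian density whose variance is the sum of the two variances — can be checked numerically in one dimension (a sketch with hypothetical grid parameters and variances):

```python
import numpy as np

# Numerical check of the Gaussian convolution identity used above:
# N(0, v1) * N(0, v2) = N(0, v1 + v2)  (as densities, in one dimension).
def gauss(x, v):
    # centered Gaussian density with variance v
    return np.exp(-x ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v)

x = np.linspace(-20.0, 20.0, 4001)   # symmetric grid with 0 at the center
dx = x[1] - x[0]
v1, v2 = 0.7, 1.3
conv = np.convolve(gauss(x, v1), gauss(x, v2), mode="same") * dx
err = np.max(np.abs(conv - gauss(x, v1 + v2)))
print(err)  # small discretization error
```

The symmetric grid keeps the discrete convolution aligned with the continuous one; the remaining error is pure discretization and tail truncation.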

Using (5.1) and (5.2), we get

$$
\int_{[a,b]^2}\int_{[a,b]^2} p_{X_t,X_s}(x,y)\,dt\,ds
\le C \int_{[a,b]^2}\int_{[a,b]^2} \|t-s\|^{-d/2} \exp\biggl(-\frac{\|x-y\|^2}{c\,\|t-s\|}\biggr)\, dt\,ds.
$$
Fix $t$ and use the change of variables $u=t-s$ to see that this expression is less than or equal to
$$
4C(b-a)^2 \int_{[0,b-a]^2} \|u\|^{-d/2} \exp\biggl(-\frac{\|x-y\|^2}{c\,\|u\|}\biggr)\, du.
$$
Next, use the change of variables $u = \|x-y\|^2 c^{-1} z$, to see that this is less than or equal to

$$
4C\,c^{d/2-2}(b-a)^2\, \|x-y\|^{-d+4} \int_{[0,\,c(b-a)/\|x-y\|^2]^2} \|z\|^{-d/2} \exp(-1/\|z\|)\,dz.
$$
When $d\ge 4$, applying Lemma 3.5 with $\beta = d/2 \ge 2$ and $\alpha = 1/2$ in the same way as in the proof of Theorem 3.1 shows that Hypothesis H2 is verified with $k(\cdot)=k_{d-4}(\cdot)$. When $d\le 3$, the above expression is bounded and Hypothesis H2 holds with $k(x)\equiv 1$.

Verification of Hypothesis H3. Assume $d\ge 4$. Fix $x\in\mathbb{R}^d$ with $\|x\|\le M$. Use the Gaussian-type lower bound for $p_{s,t}(\omega,x)$, obtained in Theorem 4.16, to see that for all $s\in[a,b]^2$ and for almost all $\omega$,
$$
\int_{[b,2b-a]^2} p_{s,t}(\omega,x)\,dt \ge c \int_{[b,2b-a]^2} \|t-s\|^{-d/2} \exp\biggl(-\frac{\|x\|^2}{c\,\|t-s\|}\biggr)\, dt
$$
for some finite constant $c>0$. Now, using the change of variables $z = c(t-s)/\|x\|^2$, we see that this expression is
$$
\ge c^{d/2-2}\, \|x\|^{-d+4} \int_{[0,\,c(b-a)/\|x\|^2]^2} \|z\|^{-d/2} \exp(-1/\|z\|)\,dz.
$$
Finally, Lemma 3.5 with $\beta = d/2 \ge 2$ and $\alpha = 1/2$ shows that Hypothesis H3 holds for $d\ge 4$, in the same way as in the proof of Theorem 3.1.

The proof of Theorem 5.1 is complete. □
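For the record, the bookkeeping behind the exponent $-d+4$ is elementary. With $u = \|x-y\|^2 c^{-1} z$ and $u,z\in\mathbb{R}^2$:

```latex
% Jacobian and scaling for the change of variables u = ||x-y||^2 c^{-1} z:
du = \Bigl(\frac{\|x-y\|^{2}}{c}\Bigr)^{2} dz, \qquad
\|u\|^{-d/2} = \Bigl(\frac{c}{\|x-y\|^{2}}\Bigr)^{d/2}\|z\|^{-d/2}, \qquad
\exp\Bigl(-\frac{\|x-y\|^{2}}{c\,\|u\|}\Bigr) = \exp\Bigl(-\frac{1}{\|z\|}\Bigr).
```

Collecting powers of $\|x-y\|$ gives $\|x-y\|^{4-d}$, and collecting powers of $c$ gives $c^{d/2-2}$, which are exactly the factors in front of the integral above.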

5.2. The case $b\not\equiv 0$. The aim of this section is to extend Theorem 5.1 to the case $b\not\equiv 0$. Our main tool is Girsanov's theorem for an adapted translation of the Brownian sheet (see [20], Proposition 1.6).

Let $Y=(Y_t,\ t\in\mathbb{R}^2_+)$ denote the $d$-dimensional adapted continuous process defined on $(\Omega,\mathcal{G},P)$ satisfying (4.7) with $x_0=0$, that is,
$$
Y^i_t = \sum_{j=1}^d \int_{[0,t]} \sigma^i_j(Y_s)\,dW^j_s + \int_{[0,t]} b^i(Y_s)\,ds, \qquad t\in\mathbb{R}^2_+,\ 1\le i\le d. \tag{5.3}
$$
We introduce the following condition on the vector $b$:

Hypothesis P3. For some constant $N$, for all $1\le i\le d$ and $x\in\mathbb{R}^d$, $|b^i(x)|\le N$.

Consider the random variable

$$
L_t = \exp\biggl(-\int_{[0,t]} \sigma^{-1}(Y_s)b(Y_s)\cdot dW_s - \frac12 \int_{[0,t]} \|\sigma^{-1}(Y_s)b(Y_s)\|^2_{\mathbb{R}^d}\,ds\biggr).
$$
We have the following Girsanov theorem.

Theorem 5.2 ([20], Proposition 1.6). The random variable $L_t$ is such that $E[L_t]=1$. If $\tilde P$ denotes the probability measure on $(\Omega,\mathcal{F})$ defined by
$$
\frac{d\tilde P}{dP}(\omega) = L_t(\omega),
$$
then $\tilde W_t = W_t + \int_{[0,t]} \sigma^{-1}(Y_s)b(Y_s)\,ds$ is a standard Brownian sheet under $\tilde P$.

Consequently, the law of $Y$ under $\tilde P$ coincides with the law of $X$ under $P$, where $X=(X_t,\ t\in\mathbb{R}^2_+)$ is the solution of (5.3) with $b\equiv 0$. The following result is the extension of Theorem 5.1 to the case $b\not\equiv 0$. It is sufficient to characterize polar sets of $Y$.
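The key property $E[L_t]=1$ is easy to see in a simplified one-parameter analogue with a constant bounded drift (this is an illustration only, not the two-parameter setting of Theorem 5.2; all numerical values are hypothetical):

```python
import numpy as np

# One-parameter illustration of E[L_t] = 1: for a standard Brownian
# motion W on [0, T] and a constant, bounded "drift" theta, the Girsanov
# exponential L_T = exp(-theta*W_T - theta^2*T/2) has expectation 1,
# since W_T ~ N(0, T) and E[exp(lambda*W_T)] = exp(lambda^2*T/2).
rng = np.random.default_rng(0)
theta, T, n = 0.5, 1.0, 200_000
W_T = rng.normal(0.0, np.sqrt(T), size=n)       # samples of W_T ~ N(0, T)
L_T = np.exp(-theta * W_T - 0.5 * theta ** 2 * T)
print(L_T.mean())  # close to 1 up to Monte Carlo error
```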

Corollary 5.3. Assuming Hypotheses P1–P3, for all $0<a<b<\infty$, $M>0$ and $\varepsilon>0$, there exists a finite positive constant $K_\varepsilon$ depending on $a$, $b$, $\varepsilon$, $\lambda$, $M$, $N$, $\rho$ and the uniform bounds of the coefficients of $\sigma$ and its derivatives, such that for all compact sets $A\subset\{x\in\mathbb{R}^d : \|x\|<M\}$,
$$
K_\varepsilon^{-1}(\mathrm{Cap}_{d-4}(A))^{1+\varepsilon} \le P\{\exists\, t\in[a,b]^2 : Y_t\in A\} \le K_\varepsilon(\mathrm{Cap}_{d-4}(A))^{1/(1+\varepsilon)}.
$$

Proof. Consider the random variable

$$
J_t = \exp\biggl(-\int_{[0,t]} \sigma^{-1}(X_s)b(X_s)\cdot dW_s + \frac12 \int_{[0,t]} \|\sigma^{-1}(X_s)b(X_s)\|^2_{\mathbb{R}^d}\,ds\biggr).
$$
Fix $0<a<b<\infty$ and a compact set $A\subset\{x\in\mathbb{R}^d : \|x\|<M\}$, and set $G_Y = \{\exists\, t\in[a,b]^2 : Y_t\in A\}$ and $G_X = \{\exists\, t\in[a,b]^2 : X_t\in A\}$. Then
$$
P[G_Y] = E_P[\mathbf{1}_{G_Y}] = E_{\tilde P}[\mathbf{1}_{G_Y} L_t^{-1}] = E_P[\mathbf{1}_{G_X} J_t^{-1}]. \tag{5.4}
$$

Let $\varepsilon>0$ and apply Hölder's inequality:

$$
P[G_X] = E_P[\mathbf{1}_{G_X} J_t^{-1/(1+\varepsilon)} J_t^{1/(1+\varepsilon)}]
\le (E_P[\mathbf{1}_{G_X} J_t^{-1}])^{1/(1+\varepsilon)}\, (E_P[J_t^{1/\varepsilon}])^{\varepsilon/(1+\varepsilon)}.
$$
Rewriting the last inequality we obtain
$$
P[G_Y] \ge (P[G_X])^{1+\varepsilon}\, (E_P[J_t^{1/\varepsilon}])^{-\varepsilon}.
$$
Let $r>0$. By the Cauchy–Schwarz inequality,

$$
E_P[J_t^r] \le \biggl(E_P\biggl[\exp\biggl(-2r\int_{[0,t]} \sigma^{-1}(X_s)b(X_s)\cdot dW_s - \frac12\, 4r^2 \int_{[0,t]} \|\sigma^{-1}(X_s)b(X_s)\|^2_{\mathbb{R}^d}\,ds\biggr)\biggr]\biggr)^{1/2}
\times \biggl(E_P\biggl[\exp\biggl((2r^2+r)\int_{[0,t]} \|\sigma^{-1}(X_s)b(X_s)\|^2_{\mathbb{R}^d}\,ds\biggr)\biggr]\biggr)^{1/2}.
$$
The first expectation on the right-hand side equals 1 since it is the expectation of an exponential martingale with bounded quadratic variation (see [9], Chapter 3, Proposition 5.12). By Hypotheses P2 and P3, the second factor is bounded by some positive finite constant. Therefore,
$$
E_P[J_t^{1/\varepsilon}] \le k_\varepsilon
$$
for some constant $k_\varepsilon>0$. Finally, by Theorem 5.1 there exists a positive finite constant $K$ such that $P[G_X] \ge K^{-1}\,\mathrm{Cap}_{d-4}(A)$, which concludes the proof of the lower bound.

The upper bound is proved along the same lines. Let $\varepsilon>0$ and apply Hölder's inequality to the right-hand side of (5.4):

$$
P[G_Y] = E_P[\mathbf{1}_{G_X} J_t^{-1}]
\le (P[G_X])^{1/(1+\varepsilon)}\, (E_P[J_t^{-(1+\varepsilon)/\varepsilon}])^{\varepsilon/(1+\varepsilon)}.
$$
Let $r>0$. Again by the Cauchy–Schwarz inequality we find that

$$
E_P[J_t^{-r}] \le \biggl(E_P\biggl[\exp\biggl(2r\int_{[0,t]} \sigma^{-1}(X_s)b(X_s)\cdot dW_s - \frac12\, 4r^2 \int_{[0,t]} \|\sigma^{-1}(X_s)b(X_s)\|^2_{\mathbb{R}^d}\,ds\biggr)\biggr]\biggr)^{1/2}
\times \biggl(E_P\biggl[\exp\biggl((2r^2-r)\int_{[0,t]} \|\sigma^{-1}(X_s)b(X_s)\|^2_{\mathbb{R}^d}\,ds\biggr)\biggr]\biggr)^{1/2}.
$$
The first expectation on the right-hand side equals 1 since it is the expectation of an exponential martingale with bounded quadratic variation, as above. By Hypotheses P2 and P3, the second factor is bounded by some positive finite constant. Therefore,
$$
E_P[J_t^{-(1+\varepsilon)/\varepsilon}] \le k_\varepsilon
$$
for some constant $k_\varepsilon>0$. Finally, by Theorem 5.1, there exists a positive finite constant $K$ such that $P[G_X] \le K\,\mathrm{Cap}_{d-4}(A)$, which completes the proof of Corollary 5.3. □

As a consequence of Corollaries 2.5 and 5.3, we obtain the following analytic criterion for polarity for the process $Y$:

Corollary 5.4. Assume Hypotheses P1–P3. Let $E$ be a compact subset of $\mathbb{R}^d$. Then $E$ is a polar set for $Y$ if and only if $\mathrm{Cap}_{d-4}(E)=0$.

Finally, Theorem 3.3 and Corollary 5.3 give the stochastic codimension and Hausdorff dimension of the range of the process $Y$:

Corollary 5.5. Assuming Hypotheses P1–P3,
$$
\operatorname{codim}\{Y((0,+\infty)^2)\} = (d-4)^+
$$
and, if $d>4$, then
$$
\dim_H\{Y((0,+\infty)^2)\} = 4 \quad \text{a.s.}
$$

5.3. Critical dimension for hitting points. For an $N$-parameter $\mathbb{R}^d$-valued Brownian sheet $W=(W_t,\ t\in\mathbb{R}^N_+)$, Orey and Pruitt [23] showed that for any $x\in\mathbb{R}^d$,
$$
P\{\exists\, t\in\mathbb{R}^N_+ : W_t = x\} =
\begin{cases}
1, & \text{if } d < 2N, \\
0, & \text{if } d \ge 2N.
\end{cases}
$$
This fact can also be obtained as a consequence of work by Khoshnevisan and Shi [11]. Corollary 5.4 immediately yields an analogous result for the solution $Y$ of (5.3):
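The Brownian sheet in the Orey–Pruitt result is straightforward to simulate on a grid, and its covariance $E[W_sW_t]=(s_1\wedge t_1)(s_2\wedge t_2)$ (here for $N=2$, $d=1$) can be checked by Monte Carlo; the grid size, sample count and evaluation points below are arbitrary illustrative choices:

```python
import numpy as np

# Simulate scalar 2-parameter Brownian sheets on a grid over [0,1]^2:
# rectangle increments are i.i.d. N(0, cell area), and cumulative sums
# over both axes give the sheet W.
rng = np.random.default_rng(1)
m, n = 10, 50_000                    # grid points per axis, sample sheets
dt = 1.0 / m
inc = rng.normal(0.0, dt, size=(n, m, m))   # std dt => variance dt^2 = cell area
W = inc.cumsum(axis=1).cumsum(axis=2)
# W[:, i, j] approximates the sheet at ((i+1)/m, (j+1)/m)
Ws = W[:, m // 2 - 1, m // 2 - 1]    # point s = (0.5, 0.5)
Wt = W[:, m - 1, m - 1]              # point t = (1.0, 1.0)
print(np.mean(Ws * Wt))              # close to (0.5)(0.5) = 0.25
```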

Corollary 5.6. Assuming Hypotheses P1–P3, for any $x\in\mathbb{R}^d$,
$$
P\{\exists\, t\in\mathbb{R}^2_+ : Y_t = x\} > 0 \quad \text{if and only if} \quad d < 4.
$$

Acknowledgment. This article is based on E. Nualart's Ph.D. dissertation, written under the supervision of R. C. Dalang.

REFERENCES

[1] Cairoli, R. and Walsh, J. B. (1975). Stochastic integrals in the plane. Acta Math. 134 111–183. MR420845
[2] Chung, K. L. and Williams, R. J. (1990). Introduction to Stochastic Integration, 2nd ed. Birkhäuser, Boston. MR1042337
[3] Dellacherie, C. and Meyer, P.-A. (1975). Probabilités et Potentiel, Chap. 1–4. Hermann, Paris. MR488194

[4] Doob, J. L. (1953). Stochastic Processes. Wiley, New York. MR58896
[5] Dudley, R. M. (1989). Real Analysis and Probability. Wadsworth and Brooks/Cole, Pacific Grove, CA. MR982264
[6] Dynkin, E. B. (1991). A probabilistic approach to one class of nonlinear differential equations. Probab. Theory Related Fields 89 89–115. MR1109476
[7] Fitzsimmons, P. J. and Salisbury, T. S. (1989). Capacity and energy for multiparameter Markov processes. Ann. Inst. H. Poincaré Probab. Statist. 25 325–350. MR1023955
[8] Hirsch, F. and Song, S. (1995). Markov properties of multiparameter processes and capacities. Probab. Theory Related Fields 103 45–71. MR1347170
[9] Karatzas, I. and Shreve, S. E. (1991). Brownian Motion and Stochastic Calculus. Springer, New York. MR1121940
[10] Khoshnevisan, D. (2002). Multiparameter Processes. An Introduction to Random Fields. Springer, New York. MR1914748
[11] Khoshnevisan, D. and Shi, Z. (1999). Brownian sheet and capacity. Ann. Probab. 27 1135–1159. MR1733143
[12] Kloeden, P. and Platen, E. (1992). Numerical Solution of Stochastic Differential Equations. Springer, New York. MR1214374
[13] Kohatsu-Higa, A. (2003). Lower bound estimates for densities of uniformly elliptic random variables on Wiener space. Probab. Theory Related Fields 126 421–457. MR1992500
[14] Kusuoka, S. and Stroock, D. (1987). Applications of the Malliavin calculus III. J. Fac. Sci. Univ. Tokyo Sect. IA Math. 34 391–442. MR914028
[15] Landkof, N. S. (1972). Foundations of Modern Potential Theory. Springer, New York. MR350027
[16] Le Gall, J.-F. (1999). Spatial Branching Processes, Random Snakes and Partial Differential Equations. Birkhäuser, Basel. MR1714707
[17] Moret, S. and Nualart, D. (2001). Generalization of Itô's formula for smooth nondegenerate martingales. Stochastic Process. Appl. 91 115–149. MR1807366
[18] Nualart, D. (1995). The Malliavin Calculus and Related Topics. Springer, New York. MR1344217
[19] Nualart, D. (1998). Analysis on Wiener space and anticipating stochastic calculus. École d'Été de Probabilités de Saint-Flour XXV. Lecture Notes in Math. 1690 123–227. Springer, New York. MR1668111
[20] Nualart, D. and Pardoux, E. (1994). Markov field properties of solutions of white noise driven quasi-linear parabolic PDEs. Stochastics Stochastics Rep. 48 17–44. MR1786190
[21] Nualart, D. and Sanz, M. (1985). Malliavin calculus for two-parameter Wiener functionals. Z. Wahrsch. Verw. Gebiete 70 573–590. MR807338
[22] Nualart, D. and Viens, F. (2000). Evolution equation of a stochastic semigroup with white-noise drift. Ann. Probab. 28 36–73. MR1755997
[23] Orey, S. and Pruitt, W. E. (1973). Sample functions of the N-parameter Wiener process. Ann. Probab. 1 138–163. MR346925
[24] Perkins, E. (1990). Polar sets and multiple points for super-Brownian motion. Ann. Probab. 18 453–491. MR1055416
[25] Schwartz, L. (1993). Analyse IV: Applications à la Théorie des Mesures. Hermann, Paris. MR1143061
[26] Song, S. (1993). Inégalités relatives aux processus d'Ornstein–Uhlenbeck à n-paramètres et capacité gaussienne c_{n,2}. Séminaire de Probabilités XXVII. Lecture Notes in Math. 1557 276–301. Springer, New York. MR1308570

[27] Stein, E. M. (1970). Singular Integrals and Differentiability Properties of Functions. Princeton Univ. Press. MR290095
[28] Walsh, J. B. (1980). Martingales with a multidimensional parameter and stochastic integrals in the plane. Lectures in Probability and Statistics. Lecture Notes in Math. 1215 329–491. Springer, New York. MR875628
[29] Watanabe, S. (1984). Lectures on Stochastic Differential Equations and Malliavin Calculus. Springer, Berlin.

Institut de Mathématiques
École Polytechnique Fédérale
1015 Lausanne
Switzerland
e-mail: robert.dalang@epfl.ch
e-mail: eulalia.nualart@epfl.ch
url: http://ima.epfl.ch/prob/membres/dalang.html