
Stochastic Processes and their Applications 74 (1998) 21–36

Parabolic SPDEs driven by Poisson white noise

Sergio Albeverio^{a,*,1}, Jiang-Lun Wu^{a,2}, Tu-Sheng Zhang^{b}

^{a} Fakultät für Mathematik der Ruhr-Universität, D-44780 Bochum, and SFB 237 Essen-Bochum-Düsseldorf, Germany
^{b} Faculty of Engineering, HSH, Skaregt 103, 5500 Haugesund, Norway

Received 23 October 1996; received in revised form 5 June 1997

Abstract

Stochastic partial differential equations (SPDEs) of parabolic type driven by (pure) Poisson white noise are investigated in this paper. These equations are interpreted as stochastic integral equations of the jump type involving evolution kernels. Existence and uniqueness of the solution are established. © 1998 Elsevier Science B.V. All rights reserved.

AMS classification: primary 60H15; secondary 35R60

Keywords: Parabolic SPDEs; Poisson white noise; Stochastic integral equations of jump type; Existence and uniqueness

1. Introduction

Let $(\Omega,\mathcal F,P)$ be a complete probability space with a usual filtration $\{\mathcal F_t\}_{t\in[0,\infty)}$ (i.e., $\{\mathcal F_t\}$ is a right continuous, increasing family of sub-$\sigma$-algebras of $\mathcal F$ and $\mathcal F_0$ contains all $P$-null sets of $\mathcal F$), and let $(U,\mathcal B(U),\nu)$ be a $\sigma$-finite measure space. Let $u_0\in L^2(\mathbb R)$ be given. Consider the following Poisson white noise driven SPDE:
\[
\frac{\partial u}{\partial t}(t,x,\omega)=\frac12\,\frac{\partial^2 u}{\partial x^2}(t,x,\omega)+f(t,x,u(t,x,\omega))+\int_U g(t,x,u(t,x,\omega),y)\,\eta_t(dy,\omega), \qquad (1.1)
\]
\[
u(0,x,\omega)=u_0(x),
\]
for $t\in(0,\infty)$, $x\in\mathbb R$ and $\omega\in\Omega$, where $f:(0,\infty)\times\mathbb R\times\mathbb R\to\mathbb R$ and $g:(0,\infty)\times\mathbb R\times\mathbb R\times U\to\mathbb R$ are measurable, and $\eta_t$ is a Poisson white noise defined heuristically as the Radon--Nikodym derivative
\[
\eta_t(dy,\omega)=\frac{q(dt,dy,\omega)}{dt}(t),\qquad t\in[0,\infty) \qquad (1.2)
\]

∗ Corresponding author. 1 BiBoS-Research Centre, Bielefeld, Germany; CERFIM, Locarno, Switzerland. 2 On leave from Institute of Applied Mathematics, Academia Sinica, Beijing 100 080, People’s Republic of China.


(here $dt$ is understood as Lebesgue measure on $[0,\infty)$), where $q$ is the martingale measure associated with a given $\{\mathcal F_t\}$-Poisson point process, namely, for $A\in\mathcal B(U)$ with $\nu(A)<\infty$, $q$ is given by the following formula:
\[
q([0,t],A,\omega):=p([0,t],A,\omega)-t\nu(A),\qquad t\in[0,\infty),\ \omega\in\Omega,
\]
where $p$ is the Poisson random measure on $[0,\infty)\times U$ of the given $\{\mathcal F_t\}$-Poisson point process. One rigorous formulation of Eq. (1.1) can be given by the following integral equation
\[
\begin{aligned}
u(t,x,\omega)=\int_{\mathbb R} G_t(x-z)u_0(z)\,dz&+\int_0^t\!\int_{\mathbb R} G_{t-s}(x-z)f(s,z,u(s,z,\omega))\,dz\,ds\\
&+\int_0^{t+}\!\int_U\!\int_{\mathbb R} G_{t-s}(x-z)g(s,z,u(s,z,\omega),y)\,dz\,q(ds,dy,\omega) \qquad (1.3)
\end{aligned}
\]
for $t\in[0,\infty)$ and $x\in\mathbb R$, where $\{G_t(x),\ x\in\mathbb R\}_{t\in[0,\infty)}$ stands for the fundamental solution of the operator $\partial/\partial t-\tfrac12\,\partial^2/\partial x^2$ on $[0,\infty)\times\mathbb R$, i.e.
\[
G_t(x)=\begin{cases}\dfrac1{\sqrt{2\pi t}}\,e^{-x^2/2t}, & t>0,\ x\in\mathbb R,\\[1ex] \delta_x, & t=0.\end{cases}
\]
We have the following facts, which will be used later on:
(i) $\int_{\mathbb R} G_t(x)\,dx=1$, $t>0$;
(ii) $\int_{\mathbb R} G_{t-s}(x-z)G_{s-r}(z-x')\,dz=G_{t-r}(x-x')$, $x,x'\in\mathbb R$, $0\le r<s<t<\infty$.
The stochastic integral with respect to $q(ds,dy,\omega)$ in Eq. (1.3) will be specified in Section 2.
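As a quick sanity check of facts (i) and (ii), the following small numerical sketch (added for illustration only; it is not part of the original argument) evaluates both identities on a grid.

```python
# Numerical check of (i) the normalization of G_t and (ii) the
# Chapman-Kolmogorov identity for the heat kernel of d/dt - (1/2) d^2/dx^2.
import numpy as np

def G(t, x):
    return np.exp(-x**2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)

x = np.linspace(-30.0, 30.0, 6001)       # grid wide enough to capture the Gaussian tails
t, s, r = 1.0, 0.6, 0.2

# (i): int_R G_t(x) dx should equal 1
print(np.trapz(G(t, x), x))

# (ii): int_R G_{t-s}(x - z) G_{s-r}(z - x') dz should equal G_{t-r}(x - x')
x0, x1 = 0.7, -1.3
lhs = np.trapz(G(t - s, x0 - x) * G(s - r, x - x1), x)
print(lhs, G(t - r, x0 - x1))            # the two numbers agree up to quadrature error
```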

The idea to interpret Eq. (1.1) by Eq. (1.3) is as follows. We remark that $G_t(x)$ satisfies (in the distributional sense) the following equation:
\[
\Bigl(\frac{\partial}{\partial t}-\frac12\,\frac{\partial^2}{\partial x^2}\Bigr)G_t(x)=\delta_{t,x}.
\]
Hence, if $u(t,x,\omega)$ solves the following convolution equation:
\[
u(t,x,\omega)=(G*u_0)(t,x)+\Bigl(G*\Bigl[f(\cdot,\cdot,u(\cdot,\cdot,\omega))+\int_U g(\cdot,\cdot,u(\cdot,\cdot,\omega),y)\,\eta_\cdot(dy,\omega)\Bigr]\Bigr)(t,x), \qquad (1.4)
\]
then $u$ satisfies Eq. (1.1). In fact, for $t>0$ and $x\in\mathbb R$, we have
\[
\begin{aligned}
\Bigl(\frac{\partial}{\partial t}-\frac12\,\frac{\partial^2}{\partial x^2}\Bigr)u(t,x,\omega)
&=\Bigl(\frac{\partial}{\partial t}-\frac12\,\frac{\partial^2}{\partial x^2}\Bigr)(G*u_0)(t,x)\\
&\quad+\Bigl(\frac{\partial}{\partial t}-\frac12\,\frac{\partial^2}{\partial x^2}\Bigr)\Bigl\{G*\Bigl[f(\cdot,\cdot,u(\cdot,\cdot,\omega))+\int_U g(\cdot,\cdot,u(\cdot,\cdot,\omega),y)\,\eta_\cdot(dy,\omega)\Bigr]\Bigr\}(t,x)\\
&=f(t,x,u(t,x,\omega))+\int_U g(t,x,u(t,x,\omega),y)\,\eta_t(dy,\omega),
\end{aligned}
\]

where in the derivation of the second equality we used the following fact: for $t>0$, $x\in\mathbb R$,
\[
\begin{aligned}
\Bigl(\frac{\partial}{\partial t}-\frac12\,\frac{\partial^2}{\partial x^2}\Bigr)(G*u_0)(t,x)
&=\Bigl(\frac{\partial}{\partial t}-\frac12\,\frac{\partial^2}{\partial x^2}\Bigr)\int_{\mathbb R} G_t(x-z)u_0(z)\,dz\\
&=\int_{\mathbb R}\Bigl[\Bigl(\frac{\partial}{\partial t}-\frac12\,\frac{\partial^2}{\partial x^2}\Bigr)G_t(x-z)\Bigr]u(0,z,\omega)\,dz\\
&=\int_{\mathbb R}\delta_{t,x}(s,z)\,u(0,z,\omega)\,dz\\
&=0.
\end{aligned}
\]

On the other hand, Eq. (1.4) is heuristically equivalent to Eq. (1.3), since we have the following heuristic derivation of the second term on the right-hand side of Eq. (1.4):
\[
\begin{aligned}
&\Bigl(G*\Bigl[f(\cdot,\cdot,u(\cdot,\cdot,\omega))+\int_U g(\cdot,\cdot,u(\cdot,\cdot,\omega),y)\,\eta_\cdot(dy,\omega)\Bigr]\Bigr)(t,x)\\
&\quad=\int_0^t\!\int_{\mathbb R} G_{t-s}(x-z)f(s,z,u(s,z,\omega))\,dz\,ds
+\int_0^t\!\int_U\!\int_{\mathbb R} G_{t-s}(x-z)g(s,z,u(s,z,\omega),y)\,\eta_s(dy,\omega)\,dz\,ds\\
&\quad=\int_0^t\!\int_{\mathbb R} G_{t-s}(x-z)f(s,z,u(s,z,\omega))\,dz\,ds
+\int_0^{t+}\!\int_U\!\int_{\mathbb R} G_{t-s}(x-z)g(s,z,u(s-,z,\omega),y)\,dz\,q(ds,dy,\omega).
\end{aligned}
\]

Parabolic SPDEs driven by Gaussian white noise were introduced and discussed initially by Walsh (1986). There are many works on such equations, see e.g. Bally et al. (1994), Gyöngy and Pardoux, Pardoux and Zhang (1993), Peszat (1995), and references therein. On the other hand, there are many investigations of stochastic differential equations with respect to Poisson point processes, see e.g. Ikeda and Watanabe (1981), and more recently Kurtz et al. (1995) (and references therein) in the more general setting of equations driven by general semimartingales. Let us also mention Kallianpur and Perez-Abreu (1988), where the authors discussed stochastic evolution equations driven by nuclear-space-valued martingales, which include SPDEs with respect to the martingale measure of a Poisson random measure. However, so far as we know, there has been no mathematical treatment of Poisson white noise driven parabolic SPDEs. In this paper, we attempt to investigate Eq. (1.1) by discussing the rigorously formulated integral equation (1.3). We will prove, under suitable conditions, existence and uniqueness of the solutions of Eq. (1.3).

2. Preliminaries and the main result

In this section, we set up notations and introduce some notions. We refer the readers to e.g. Ikeda and Watanabe (1981) and Jacod and Shiryaev (1987) for details. We present our main result at the end of the section.

Let $(\Omega,\mathcal F,P,\{\mathcal F_t\}_{t\in[0,\infty)})$ be given as in Section 1. Let $(U,\mathcal B(U))$ be a measurable space. It is known (e.g. from Ikeda and Watanabe (1981)) that for any $\sigma$-finite measure $\nu$ on $(U,\mathcal B(U))$ there exists a stationary Poisson point process on $U$ with characteristic measure $\nu$.

Now, suppose we are given a stationary $\{\mathcal F_t\}$-Poisson point process with characteristic measure $\nu$, whose Poisson random measure is denoted by $p$, namely, $p:\mathcal B([0,\infty))\times\mathcal B(U)\times\Omega\to\mathbb N\cup\{0\}$. For $B\in\mathcal B(U)$ with $\nu(B)<\infty$, we set
\[
q([0,t],B,\omega):=p([0,t],B,\omega)-t\nu(B),\qquad t\in[0,\infty),\ \omega\in\Omega.
\]

Then $\{q([0,t],B,\omega)\}_{t\in[0,\infty),\,\omega\in\Omega}$ is an $\{\mathcal F_t\}$-martingale (measure). Now, we set
\[
H^2_p:=\Bigl\{h(t,y,\omega):\ h \text{ is } \{\mathcal F_t\}\text{-predictable and } \forall t>0,\ E\Bigl[\int_0^t\!\int_U|h(s,y,\cdot)|^2\,\nu(dy)\,ds\Bigr]<\infty\Bigr\}.
\]
It is known (see e.g. Ikeda and Watanabe (1981) and Jacod and Shiryaev (1987)) that for any $t>0$ the stochastic integral
\[
\int_0^{t+}\!\int_U h(s,y,\cdot)\,q(ds,dy,\cdot),\qquad t\in[0,\infty),
\]
for $h\in H^2_p$ can be well defined. Moreover, the stochastic integral has the following isometry property:
\[
E\Bigl\{\Bigl(\int_0^{t+}\!\int_U h(s,y,\cdot)\,q(ds,dy,\cdot)\Bigr)^2\Bigr\}=E\Bigl\{\int_0^t\!\int_U|h(s,y,\cdot)|^2\,\nu(dy)\,ds\Bigr\}.
\]
Thus for any $t>0$, $\omega\in\Omega\mapsto\int_0^{t+}\!\int_U h(s,y,\omega)\,q(ds,dy,\omega)\in L^2(\Omega)$.
Here, for later use in our paper, we need to extend stochastic integrals with respect to $q(ds,dy,\cdot)$ to a slightly more general class $H$ of integrands without the predictability property. A function $h(t,y,\omega)$ is said to be of class $H$ if it is $\{\mathcal F_t\}$-adapted and there exists a sequence $\{h_n(t,y,\omega)\}_{n\in\mathbb N}\subset H^2_p$ such that for any $t>0$

\[
E\int_0^t\!\int_U[h_n(s,y,\cdot)-h(s,y,\cdot)]^2\,ds\,\nu(dy)\to 0\quad\text{as } n\to\infty.
\]
For $h\in H$ and for any fixed $t>0$, the stochastic integral
\[
\int_0^{t+}\!\int_U h(s,y,\cdot)\,q(ds,dy,\cdot)
\]
is defined as the $L^2(\Omega)$-limit of the following Cauchy sequence
\[
\Bigl\{\int_0^{t+}\!\int_U h_n(s,y,\cdot)\,q(ds,dy,\cdot)\Bigr\}_{n\in\mathbb N},
\]
each term of which is well defined as a random variable in $L^2(\Omega)$ since $h_n\in H^2_p$ for all $n\in\mathbb N$. Namely,
\[
\int_0^{t+}\!\int_U h(s,y,\cdot)\,q(ds,dy,\cdot):=L^2(\Omega)\text{-}\lim_{n\to\infty}\int_0^{t+}\!\int_U h_n(s,y,\cdot)\,q(ds,dy,\cdot). \qquad (2.1)
\]
Due to the isometry property of the stochastic integrals for integrands from the class $H^2_p$, the stochastic integral defined by Eq. (2.1) does not depend on the chosen sequence $\{h_n(t,y,\omega)\}_{n\in\mathbb N}\subset H^2_p$. Thus, the stochastic integral $\int_0^{t+}\!\int_U h(s,y,\cdot)\,q(ds,dy,\cdot)$ is a well-defined, square integrable $\{\mathcal F_t\}$-martingale.
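To make the isometry above concrete, here is a minimal Monte Carlo sketch (an illustration added here, not part of the construction): it assumes the simplest possible mark space, $U=\{1\}$ with $\nu(\{1\})=\lambda$, so that $q([0,t])$ reduces to the compensated Poisson process $N_t-\lambda t$, and takes the deterministic integrand $h(s)=\cos s$.

```python
# Monte Carlo check of E[(int_0^{t+} int_U h dq)^2] = E[int_0^t int_U h^2 nu(dy) ds]
# in the toy case U = {1}, nu({1}) = lam, q([0,t]) = N_t - lam*t, h(s) = cos(s).
import numpy as np

rng = np.random.default_rng(0)
lam, t, n_paths = 3.0, 2.0, 200_000

samples = np.empty(n_paths)
for i in range(n_paths):
    n_jumps = rng.poisson(lam * t)
    jump_times = rng.uniform(0.0, t, size=n_jumps)        # jump times of N on [0, t]
    # int_0^{t+} h(s) q(ds) = sum_k h(T_k) - lam * int_0^t h(s) ds
    samples[i] = np.cos(jump_times).sum() - lam * np.sin(t)

lhs = (samples**2).mean()                                  # second moment of the integral
rhs = lam * (t / 2.0 + np.sin(2.0 * t) / 4.0)              # lam * int_0^t cos(s)^2 ds
print(lhs, rhs)                                            # agree up to Monte Carlo error
```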

Remark 2.1. It is worthwhile to point out that for $h\in H$ there is not, in general, a predictable version $h'\in H^2_p$ of $h$ in the sense that for every $t>0$, $h'(t,\cdot,\cdot)=h(t,\cdot,\cdot)$, $\nu(dy)\otimes P(d\omega)$-a.s. Indeed, by the definition, for every $h\in H$ there is a sequence $\{h_n(s,y,\omega)\}_{n\in\mathbb N}\subset H^2_p$ such that for all $t>0$
\[
\lim_{n\to\infty}E\int_0^t\!\int_U[h_n(s,y,\cdot)-h(s,y,\cdot)]^2\,ds\,\nu(dy)=0.
\]
However, although each $h_n\in H^2_p$ is predictable, since the measure $ds$ is also involved in the above equality, we cannot guarantee the existence of a predictable version $h'\in H^2_p$ in the above sense (i.e., for every $t>0$, $h'(t,\cdot,\cdot)=h(t,\cdot,\cdot)$, $\nu(dy)\otimes P(d\omega)$-a.s.).

Let us now give a precise formulation of solutions of Eq. (1.3). By a solution of Eq. (1.3), we mean a function $u:(t,x,\omega)\in[0,\infty)\times\mathbb R\times\Omega\mapsto u(t,x,\omega)\in\mathbb R$ with the following properties:

(1) $u$ is $\{\mathcal F_t\}$-adapted;
(2) $\{u(t,x,\cdot)\}_{t\in[0,\infty)}$, as a family of $L^2(\Omega,\mathcal F,P)$-valued random variables, is right continuous and has left limits in the variable $t\in[0,\infty)$, namely,
\[
u(t-,x,\cdot)=L^2(\Omega)\text{-}\lim_{s\uparrow t}u(s,x,\cdot),\qquad t\in[0,\infty).
\]
In this paper we simply call such a $u$ modified càdlàg in $t$ (after the French acronym). Clearly, our condition on left limits in the càdlàg sense is weaker than the usual one;
(3) $u$ is continuous in the variable $x$ for almost all $\omega\in\Omega$;
(4) Eq. (1.3) holds a.s.
Furthermore, we say that the solution is unique in the sense that if $u^{(1)}$ and $u^{(2)}$ are any two solutions of Eq. (1.3) with respect to the set-up $(\Omega,\mathcal F,P,\{\mathcal F_t\}_{t\in[0,\infty)})$, then $u^{(1)}(t,x,\cdot)=u^{(2)}(t,x,\cdot)$ a.s. for all $(t,x)\in[0,\infty)\times\mathbb R$, namely, they are versions of the same stochastic process. Clearly, the notion of uniqueness given here is slightly different from the usual notion of uniqueness for stochastic differential equations. We have the following main result.

Theorem 2.2. Assume that for any $T>0$ there exist a (positive) real function $K_T\in L^1(\mathbb R)\cap L^2(\mathbb R)$ and a constant $L_T>0$ such that
\[
[f(t,x,z)]^2+\int_U[g(t,x,z,y)]^2\,\nu(dy)\le K_T(x)(1+|z|), \qquad (2.2)
\]
\[
[f(t,x,z_1)-f(t,x,z_2)]^2+\int_U[g(t,x,z_1,y)-g(t,x,z_2,y)]^2\,\nu(dy)\le L_T|z_1-z_2|^2 \qquad (2.3)
\]
for $(t,x,z)\in[0,T]\times\mathbb R\times\mathbb R$. Then for any $u_0\in L^2(\mathbb R)$, there exists a unique solution to Eq. (1.3).
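By way of illustration (the following particular coefficients are not taken from the paper), the conditions (2.2) and (2.3) are satisfied, for instance, by
\[
f(t,x,z)=e^{-x^2}\sin z,\qquad g(t,x,z,y)=e^{-x^2}c(y)\sin z\quad\text{with } C:=\int_U c(y)^2\,\nu(dy)<\infty,
\]
since then $[f(t,x,z)]^2+\int_U[g(t,x,z,y)]^2\,\nu(dy)\le(1+C)e^{-2x^2}\le K_T(x)(1+|z|)$ with $K_T(x):=(1+C)e^{-2x^2}\in L^1(\mathbb R)\cap L^2(\mathbb R)$, while $|\sin z_1-\sin z_2|\le|z_1-z_2|$ gives (2.3) with $L_T:=1+C$.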

3. The proof of Theorem 2.2

Let us start to prove the existence of the solution by successive approximations. In order to do that, let us denote by $\mathcal U_T$, for any fixed $T>0$, the collection of all functions $u:[0,T]\times\mathbb R\times\Omega\to\mathbb R$ which are measurable in the triple $(t,x,\omega)$, $\{\mathcal F_t\}_{t\in[0,T]}$-adapted, modified càdlàg in $t$, continuous in $x$, and such that $u(t,\cdot,\omega)\in L^2(\mathbb R)$ for all $t\in[0,T]$ and a.s. $\omega\in\Omega$. Clearly, $\mathcal U_T$ is a real vector space, i.e., $\mathcal U_T$ is closed under linear operations. We set

\[
u_1(t,x,\omega)=\int_{\mathbb R} G_t(x-z)u_0(z)\,dz,
\]
\[
\begin{aligned}
u_{n+1}(t,x,\omega)=u_1(t,x,\omega)&+\int_0^t\!\int_{\mathbb R} G_{t-s}(x-z)f(s,z,u_n(s,z,\omega))\,dz\,ds\\
&+\int_0^{t+}\!\int_U\!\int_{\mathbb R} G_{t-s}(x-z)g(s,z,u_n(s-,z,\omega),y)\,dz\,q(ds,dy,\omega) \qquad (3.1)\\
&:=u_1(t,x,\omega)+I_1(t,x,\omega)+I_2(t,x,\omega)
\end{aligned}
\]
for $(t,x,\omega)\in[0,T]\times\mathbb R\times\Omega$ and $n\in\mathbb N$ (where $I_1$ and $I_2$ denote the second and the third term, respectively, on the right-hand side of the second equality). We have the following regularity result for $\{u_n\}_{n\in\mathbb N}$:

Proposition 3.1. For every $T>0$, $u_n\in\mathcal U_T$ for all $n\in\mathbb N$.

We shall prove this result by induction, carried out through the following lemmas. Let $T>0$ be arbitrarily fixed; then we clearly have

Lemma 3.2. $u_1\in\mathcal U_T$.

Lemma 3.3. If $u_n\in\mathcal U_T$ for some $n\in\mathbb N$, then
\[
h_{t,x}(s,y,\omega):=\int_{\mathbb R} G_{t-s}(x-z)g(s,z,u_n(s-,z,\omega),y)\,dz\in H
\]
for any arbitrarily fixed $t\in[0,T]$.

Proof. For any fixed $t\in[0,T]$, we define
\[
u_n^{(m)}(s,z,\omega):=u_n(0,z,\omega)+\sum_{k=0}^{2^m-1}u_n\Bigl(\frac{kt}{2^m},z,\omega\Bigr)\,\mathbf 1_{(kt/2^m,\,(k+1)t/2^m]}(s),\qquad m\in\mathbb N,
\]

for $(s,z,\omega)\in[0,t]\times\mathbb R\times\Omega$. Clearly, $u_n^{(m)}(s,z,\omega)$ is $\{\mathcal F_s\}$-predictable. We set
\[
h^m_{t,x}(s,y,\omega):=\int_{\mathbb R} G_{t-s}(x-z)g(s,z,u_n^{(m)}(s,z,\omega),y)\,dz,\qquad m\in\mathbb N, \qquad (3.2)
\]
for $(s,y,\omega)\in[0,t]\times U\times\Omega$. Then $h^m_{t,x}(s,y,\omega)$ is $\{\mathcal F_s\}$-predictable. Let us show that for any fixed $m\in\mathbb N$ and $(t,x)\in[0,T]\times\mathbb R$, $h^m_{t,x}$ defined by Eq. (3.2) belongs to $H^2_p$. We first remark that the integrand on the right-hand side of Eq. (3.2) is measurable with respect to the variable $z$. Thus we need to show that
\[
\int_{\mathbb R} G_{t-s}(x-z)\,|g(s,z,u_n^{(m)}(s,z,\omega),y)|\,dz<\infty,\qquad \nu\text{-a.e. } y\in U.
\]
To this end, it suffices to verify that
\[
\int_{\mathbb R} G_{t-s}(x-z)\,[g(s,z,u_n^{(m)}(s,z,\omega),y)]^2\,dz<\infty,\qquad \nu\text{-a.e. } y\in U, \qquad (3.3)
\]
since then by the Schwarz inequality we obtain
\[
\int_{\mathbb R} G_{t-s}(x-z)\,|g(s,z,u_n^{(m)}(s,z,\omega),y)|\,dz
\le\Bigl(\int_{\mathbb R} G_{t-s}(x-z)\,dz\Bigr)^{1/2}\Bigl(\int_{\mathbb R} G_{t-s}(x-z)\,[g(s,z,u_n^{(m)}(s,z,\omega),y)]^2\,dz\Bigr)^{1/2}<\infty,\qquad \nu\text{-a.e.}
\]

In fact, by a general version of the Fubini theorem (see e.g. Theorem 7.8 in Rudin, 1974), the assumption (2.2) and the Schwarz inequality, we have
\[
\int_U\!\int_{\mathbb R} G_{t-s}(x-z)\,[g(s,z,u_n^{(m)}(s,z,\omega),y)]^2\,dz\,\nu(dy)<\infty, \qquad (3.4)
\]
which implies that the inequality (3.3) holds. Hence, $h^m_{t,x}(s,y,\omega)$ is well defined by Eq. (3.2). Moreover, by Eq. (3.4), we have
\[
E\Bigl\{\int_0^t\!\int_U[h^m_{t,x}(s,y,\cdot)]^2\,\nu(dy)\,ds\Bigr\}
\le E\Bigl\{\int_0^t\!\int_U\!\int_{\mathbb R} G_{t-s}(x-z)\,[g(s,z,u_n^{(m)}(s,z,\omega),y)]^2\,dz\,\nu(dy)\,ds\Bigr\}<\infty
\]
for $(t,x)\in[0,T]\times\mathbb R$. Thus $h^m_{t,x}\in H^2_p$.

On the other hand, by the assumption (2.3), we have
\[
\begin{aligned}
E\Bigl\{\int_0^t\!\int_U[h^m_{t,x}(s,y,\cdot)-h^l_{t,x}(s,y,\cdot)]^2\,\nu(dy)\,ds\Bigr\}
&\le\int_0^t ds\int_{\mathbb R} G_{t-s}(x-z)\,E\Bigl\{\int_U[g(s,z,u_n^{(m)}(s-,z,\cdot),y)-g(s,z,u_n^{(l)}(s-,z,\cdot),y)]^2\,\nu(dy)\Bigr\}dz\\
&\to 0\qquad\text{as } m,l\to\infty,
\end{aligned}
\]
where the last line follows from the existence of the left limit $u_n(s-,z,\cdot)$ of $u_n(s,z,\cdot)$ in $L^2(\Omega)$. Hence $h_{t,x}(s,y,\omega)\in H$ for any fixed $t\in[0,T]$.

Lemma 3.4. If $u_n\in\mathcal U_T$ for some $n\in\mathbb N$, then the two integral terms $I_1$ and $I_2$ on the right-hand side of Eq. (3.1) are well defined.

Proof. For the first integral
\[
I_1(t,x,\omega)=\int_0^t\!\int_{\mathbb R} G_{t-s}(x-z)f(s,z,u_n(s,z,\omega))\,dz\,ds,
\]
we remark that the integrand is measurable with respect to the variables $s\in[0,t]$ and $z\in\mathbb R$; thus we only need to show that the integrand is integrable with respect to the variable $z$ over $\mathbb R$. In fact, by the Schwarz inequality and the assumption (2.2), we have
\[
\int_{\mathbb R} G_{t-s}(x-z)\,|f(s,z,u_n(s,z,\omega))|\,dz
\le\Bigl(\int_{\mathbb R} G_{t-s}(x-z)[K_T(z)]^2\,dz\Bigr)^{1/4}\Bigl(2\int_{\mathbb R} G_{t-s}(x-z)\bigl(1+[u_n(s,z,\omega)]^2\bigr)\,dz\Bigr)^{1/4}<\infty\quad\text{a.s.}
\]
for $0\le s\le t\le T$ and $x\in\mathbb R$, since by the assumption $u_n\in\mathcal U_T$ we have $u_n(s,\cdot,\omega)\in L^2(\mathbb R)$ a.s., and since $G_t$ is contractive in $L^2(\mathbb R)$. Hence, $I_1(t,x,\omega)$ is well defined as a Lebesgue integral over $[0,t]\times\mathbb R$ for $(t,x)\in[0,T]\times\mathbb R$.

Furthermore, since by Lemma 3.3 we have $h_{t,x}\in H$, the second integral on the right-hand side of Eq. (3.1) is well defined, in fact as the $L^2(\Omega)$-limit of the Cauchy sequence
\[
\Bigl\{\int_0^{t+}\!\int_U h^m_{t,x}(s,y,\cdot)\,q(ds,dy,\cdot)\Bigr\}_{m\in\mathbb N}.
\]
The proof of Lemma 3.4 is now complete.

Lemma 3.5. If $u_n\in\mathcal U_T$ for some $n\in\mathbb N$, then for any fixed $(t,x)\in[0,T]\times\mathbb R$, $I_2(t,x,\cdot)$ is $\mathcal F_t$-measurable (which immediately implies that $I_2$ is $\{\mathcal F_t\}$-adapted). Moreover, $I_2$ is modified càdlàg in $t\in[0,T]$ and continuous in $x\in\mathbb R$ a.s.

Proof. First of all, we observe that $I_2(t,x,\omega)$ is right continuous in $t$; this is obvious from the fact that $I_2$ is a well-defined integral, as elucidated in the proof of Lemma 3.4, since the upper limit of $I_2(t,x,\cdot)$ is given by the right limit in $t$. Now, for any fixed $t\in[0,T]$, we set
\[
J_t(r,x,\omega):=\int_0^{r+}\!\int_U h_{t,x}(s,y,\omega)\,q(ds,dy,\omega),\qquad (r,x,\omega)\in[0,t]\times\mathbb R\times\Omega. \qquad (3.5)
\]
Then by Eq. (3.2), $\{J_t(r,x,\omega)\}$ is a square integrable $\{\mathcal F_r\}_{r\in[0,t]}$-martingale with quadratic variation process given by the following (non-stochastic) integral
\[
\langle J_t(\cdot,x,\omega)\rangle(r)=\int_0^r\!\int_U\Bigl(\int_{\mathbb R} G_{t-s}(x-z)g(s,z,u_n(s-,z,\omega),y)\,dz\Bigr)^2\nu(dy)\,ds, \qquad (3.6)
\]
since we derived in the proof of Lemma 3.3 that $h_{t,x}\in H$. Furthermore, it is well known (see e.g. Theorem I.6.9 of Ikeda and Watanabe, 1981) that $J_t(r,x,\omega)$ has a càdlàg version (in the variable $r\in[0,t]$). On the other hand, we have $I_2(t,x,\omega)=J_t(t,x,\omega)$. Hence $I_2(t,x,\omega)$ is $\mathcal F_t$-measurable. Moreover, since
\[
L^2(\Omega)\text{-}\lim_{r\uparrow t}\bigl(J_r(r,x,\cdot)-J_t(r,x,\cdot)\bigr)=0,
\]
we have
\[
\begin{aligned}
L^2(\Omega)\text{-}\lim_{r\uparrow t}I_2(r,x,\cdot)&=L^2(\Omega)\text{-}\lim_{r\uparrow t}J_r(r,x,\cdot)\\
&=L^2(\Omega)\text{-}\lim_{r\uparrow t}J_t(r,x,\cdot)+L^2(\Omega)\text{-}\lim_{r\uparrow t}\bigl[J_r(r,x,\cdot)-J_t(r,x,\cdot)\bigr]\\
&=J_t(t-,x,\cdot).
\end{aligned}
\]
Hence, the left $L^2(\Omega)$-limits of $I_2(t,x,\cdot)$ exist for all $t\in[0,T]$ and $x\in\mathbb R$. Combining this with the right continuity of $I_2(t,x,\omega)$ in $t$, we conclude that $I_2$ is modified càdlàg in the variable $t$.

In what follows, let us show that $I_2$ is continuous in the variable $x$. To this end, we need the following version of Totoki's extension of the Kolmogorov--Prokhorov continuity theorem (cf. e.g. Theorem 2.6.5 of Itô, 1984):

Lemma 3.6. Let $\{X(x)\}_{x\in\mathbb R}$ be a real-valued stochastic process. If for every $L>0$ there exist positive constants $\alpha_L$, $\beta_L$ and $\varepsilon_L$ (depending only on $L$) such that
\[
E\{|X(x_1)-X(x_2)|^{\alpha_L}\}\le\beta_L\,|x_1-x_2|^{1+\varepsilon_L}
\]
for all $x_1,x_2\in[-L,L]$, then $\{X(x)\}$ has a continuous version.

Now, we take up again the proof of Lemma 3.5. Let $L>0$ and $t\in[0,T]$ be arbitrarily fixed and $x_1,x_2\in[-L,L]$, $x_1\ne x_2$. We observe that
\[
E\bigl[J_t(r,x_1,\cdot)-J_t(r,x_2,\cdot)\bigr]^2
=E\int_0^r\!\int_U\Bigl(\int_{\mathbb R}[G_{t-s}(x_1-z)-G_{t-s}(x_2-z)]\,g(s,z,u_n(s-,z,\omega),y)\,dz\Bigr)^2\nu(dy)\,ds.
\]

Hence, by the Schwarz inequality, Fubini's theorem and the assumption (2.2), we have
\[
\begin{aligned}
E\bigl(|I_2(t,x_1,\cdot)-I_2(t,x_2,\cdot)|^2\bigr)
&=E\bigl(|J_t(t,x_1,\cdot)-J_t(t,x_2,\cdot)|^2\bigr)\\
&=E\Bigl\{\int_0^t\!\int_U\Bigl(\int_{\mathbb R}\bigl(G_{t-s}(x_1-z)-G_{t-s}(x_2-z)\bigr)g(s,z,u_n(s-,z,\cdot),y)\,dz\Bigr)^2\nu(dy)\,ds\Bigr\}\\
&\le E\int_0^t\Bigl(\int_{\mathbb R}|G_{t-s}(x_1-z)-G_{t-s}(x_2-z)|\,dz\Bigr)\Bigl(\int_{\mathbb R}\!\int_U|G_{t-s}(x_1-z)-G_{t-s}(x_2-z)|\,[g(s,z,u_n(s-,z,\cdot),y)]^2\,\nu(dy)\,dz\Bigr)ds\\
&\le E\int_0^t\Bigl(\frac1{\sqrt{2\pi s}}\int_{\mathbb R}\bigl|e^{-z^2/2s}-e^{-(x_2-x_1+z)^2/2s}\bigr|\,dz\Bigr)\Bigl(\int_{\mathbb R}|G_s(x_1-z)-G_s(x_2-z)|\,K_T(z)\bigl(1+|u_n((t-s)-,z,\cdot)|\bigr)\,dz\Bigr)ds,
\end{aligned}
\]
where we used the change of variables $z\mapsto x_1-z$ and $s\mapsto t-s$ in the latter inequality. On the other hand,
\[
\begin{aligned}
&\int_{\mathbb R}|G_s(x_1-z)-G_s(x_2-z)|\,K_T(z)\bigl(1+|u_n((t-s)-,z,\cdot)|\bigr)\,dz\\
&\quad\le\Bigl(\int_{\mathbb R}[G_s(x_1-z)+G_s(x_2-z)][K_T(z)]^2\,dz\Bigr)^{1/2}\Bigl(2\int_{\mathbb R}[G_s(x_1-z)+G_s(x_2-z)]\bigl(1+[u_n((t-s)-,z,\cdot)]^2\bigr)\,dz\Bigr)^{1/2}\\
&\quad\le 2\sqrt2\,\Bigl(\sup_{x\in[-L,L]}\int_{\mathbb R} G_s(x-z)[K_T(z)]^2\,dz\Bigr)^{1/2}\Bigl(\sup_{x\in[-L,L]}\int_{\mathbb R} G_s(x-z)\bigl(1+[u_n((t-s)-,z,\cdot)]^2\bigr)\,dz\Bigr)^{1/2}\\
&\quad=:c_{t,L}(s)<\infty,
\end{aligned}
\]
where the latter inequality is derived by the following argument. From the assumptions that $K_T$ and $u_n$ belong to $L^2(\mathbb R)$, we know that the two integrals appearing under the suprema are continuous in $x$. Thus the suprema of these two integrals over the closed interval $[-L,L]$ are finite. Furthermore, by the change of variables $z\mapsto(x_2-x_1)z$ and $s\mapsto(x_2-x_1)^2 s$, we obtain
\[
\begin{aligned}
E\{|I_2(t,x_1,\cdot)-I_2(t,x_2,\cdot)|^2\}
&\le E\Bigl[\sup_{s\in[0,t]}c_{t,L}(s)\int_0^t\frac1{\sqrt{2\pi s}}\int_{\mathbb R}\bigl|e^{-z^2/2s}-e^{-(x_2-x_1+z)^2/2s}\bigr|\,dz\,ds\Bigr]\\
&\le E\Bigl[\sup_{t\in[0,T]}\sup_{s\in[0,t]}c_{t,L}(s)\Bigr](x_2-x_1)^2\int_0^{t/(x_2-x_1)^2}\frac1{\sqrt{2\pi s}}\int_{\mathbb R}\bigl|e^{-z^2/2s}-e^{-(1+z)^2/2s}\bigr|\,dz\,ds\\
&\le(x_2-x_1)^2\,C_{L,T}
\end{aligned}
\]
for $x_1,x_2\in[-L,L]$, where
\[
C_{L,T}:=E\Bigl[\sup_{t\in[0,T]}\sup_{s\in[0,t]}c_{t,L}(s)\Bigr]\int_0^{+\infty}\frac1{\sqrt{2\pi s}}\int_{\mathbb R}\bigl|e^{-z^2/2s}-e^{-(1+z)^2/2s}\bigr|\,dz\,ds<\infty
\]
is a constant depending only on $L$ for fixed $T$. Thus, we have
\[
\sup_{t\in[0,T]}E\{|I_2(t,x_1,\cdot)-I_2(t,x_2,\cdot)|^2\}\le(x_2-x_1)^2\,C_{L,T}.
\]

Therefore, by Lemma 3.6, I2 has a version which is continuous in the variable x.

Remark 3.7. It is worthwhile to point out that $I_2(t,x,\omega)$ is, in general, not an $\{\mathcal F_t\}$-martingale, since the integrand of $I_2$ also depends on $t$ and is different for each $t$.

Lemma 3.8. If $u_n\in\mathcal U_T$ for some $n\in\mathbb N$, then $u_{n+1}\in\mathcal U_T$.

Proof. By Lemma 3.2, $u_1\in\mathcal U_T$. Thus, by Eq. (3.1), it suffices to show that $I_j\in\mathcal U_T$ for $j=1,2$, since $\mathcal U_T$ is closed under linear operations. By Lemma 3.5 we know that $\{I_2(t,x,\omega)\}$, $(t,x,\omega)\in[0,T]\times\mathbb R\times\Omega$, is $\{\mathcal F_t\}$-adapted, modified càdlàg in $t$ and continuous in $x$ for a.e. $\omega\in\Omega$. Let us note that the same is true for $\{I_1(t,x,\omega)\}$; this is because $I_1$ is a Lebesgue integral depending on the parameters $t$ and $x$, whose integrand is absolutely integrable and continuous in $t$ and $x$. Moreover, $I_1$ is obviously $\{\mathcal F_t\}$-adapted since it is a non-stochastic integral whose random integrand is $\{\mathcal F_t\}$-adapted. Let us finally prove that $u_{n+1}(t,\cdot,\omega)\in L^2(\mathbb R)$ a.s. for all $t\in[0,T]$. Namely, we will show
\[
E\int_{\mathbb R}[u_{n+1}(t,x,\cdot)]^2\,dx<\infty,\qquad t\in[0,T].
\]

Remark that by Eq. (3.1) we have
\[
E\int_{\mathbb R}[u_{n+1}(t,x,\cdot)]^2\,dx\le 4\int_{\mathbb R}[u_1(t,x,\cdot)]^2\,dx+4\sum_{j=1}^2 E\int_{\mathbb R}[I_j(t,x,\cdot)]^2\,dx, \qquad (3.7)
\]
so it suffices to prove that the two terms in the sum on the right-hand side of Eq. (3.7) are finite. In fact, by the Schwarz inequality, the assumption (2.2) and Fubini's theorem, we have the following derivation:
\[
\begin{aligned}
E\int_{\mathbb R}[I_1(t,x,\cdot)]^2\,dx
&\le t\,E\int_{\mathbb R}\Bigl(\int_0^t\!\int_{\mathbb R} G_{t-s}(x-z)[f(s,z,u_n(s,z,\cdot))]^2\,dz\,ds\Bigr)dx\\
&\le t\,E\int_{\mathbb R}\Bigl(\int_0^t\!\int_{\mathbb R} G_{t-s}(x-z)K_T(z)\bigl(1+|u_n(s,z,\cdot)|\bigr)\,dz\,ds\Bigr)dx\\
&\le t\,E\int_0^t\!\int_{\mathbb R}\Bigl(\int_{\mathbb R} G_{t-s}(x-z)\,dx\Bigr)K_T(z)\bigl(1+|u_n(s,z,\cdot)|\bigr)\,dz\,ds\\
&\le t\int_0^t\Bigl(\int_{\mathbb R} K_T(z)\,dz+\Bigl(\int_{\mathbb R}[K_T(z)]^2\,dz\Bigr)^{1/2}\Bigl(E\Bigl\{2\int_{\mathbb R}[u_n(s,z,\cdot)]^2\,dz\Bigr\}\Bigr)^{1/2}\Bigr)ds\\
&<\infty,
\end{aligned}
\]
since $K_T\in L^1(\mathbb R)\cap L^2(\mathbb R)$, $u_n(s,\cdot,\omega)\in L^2(\mathbb R)$ a.s. for $s\in[0,t]$, and $u_n$ is càdlàg in the variable $s$. Similarly, by Eq. (3.6) we obtain
\[
\begin{aligned}
E\int_{\mathbb R}[I_2(t,x,\cdot)]^2\,dx&=\int_{\mathbb R} E\bigl([J_t(t,x,\cdot)]^2\bigr)\,dx\\
&=\int_{\mathbb R} E\Bigl\{\int_0^t\!\int_U\Bigl(\int_{\mathbb R} G_{t-s}(x-z)g(s,z,u_n(s-,z,\cdot),y)\,dz\Bigr)^2\nu(dy)\,ds\Bigr\}dx\\
&\le\int_{\mathbb R} E\Bigl\{\int_0^t\!\int_U\!\int_{\mathbb R} G_{t-s}(x-z)[g(s,z,u_n(s-,z,\cdot),y)]^2\,dz\,\nu(dy)\,ds\Bigr\}dx\\
&\le E\int_{\mathbb R}\int_0^t\!\int_{\mathbb R} G_{t-s}(x-z)K_T(z)\bigl(1+|u_n(s-,z,\cdot)|\bigr)\,dz\,ds\,dx\\
&<\infty.
\end{aligned}
\]
Therefore, we conclude that $u_{n+1}\in\mathcal U_T$.

Now, combining Lemma 3.2 with Lemma 3.8, we obtain Proposition 3.1 by induction. Hence the sequence $\{u_n(t,x,\omega);\ (t,x,\omega)\in[0,T]\times\mathbb R\times\Omega\}_{n\in\mathbb N}$ is well defined by Eq. (3.1).

In what follows, we will show that $\{u_n(t,\cdot,\omega)\}_{n\in\mathbb N}$ converges in $L^2(\mathbb R)$ to a solution $u(t,\cdot,\omega)$, say, of Eq. (1.3); this will complete the existence proof of solutions to (1.3).

The proof of the existence. Set
\[
F_n(t):=E\Bigl\{\int_{\mathbb R}[u_{n+1}(t,x,\cdot)-u_n(t,x,\cdot)]^2\,dx\Bigr\},\qquad t\in[0,T],\ n\in\mathbb N.
\]
Then by Eq. (3.1), the Schwarz inequality, Fubini's theorem (as used in the previous arguments) and the assumption (2.3), we have the following derivation for $n\in\mathbb N$:
\[
\begin{aligned}
F_n(t)&\le 2E\int_{\mathbb R}\Bigl(\int_0^t\!\int_{\mathbb R} G_{t-s}(x-z)[f(s,z,u_n(s,z,\cdot))-f(s,z,u_{n-1}(s,z,\cdot))]\,dz\,ds\Bigr)^2dx\\
&\quad+2E\int_{\mathbb R}\Bigl(\int_0^{t+}\!\int_U\!\int_{\mathbb R} G_{t-s}(x-z)[g(s,z,u_n(s,z,\cdot),y)-g(s,z,u_{n-1}(s,z,\cdot),y)]\,dz\,q(ds,dy,\cdot)\Bigr)^2dx\\
&\le 2tL_T\,E\int_{\mathbb R}\int_0^t\!\int_{\mathbb R} G_{t-s}(x-z)[u_n(s,z,\cdot)-u_{n-1}(s,z,\cdot)]^2\,dz\,ds\,dx\\
&\quad+2L_T\,E\int_{\mathbb R}\int_0^t\!\int_{\mathbb R} G_{t-s}(x-z)[u_n(s,z,\cdot)-u_{n-1}(s,z,\cdot)]^2\,dz\,ds\,dx\\
&\le 2L_T(T+1)\int_0^t E\int_{\mathbb R}[u_n(s,z,\cdot)-u_{n-1}(s,z,\cdot)]^2\,dz\,ds\\
&=C_TL_T\int_0^t F_{n-1}(s)\,ds,
\end{aligned}
\]
where $C_T:=2(T+1)$ is a constant depending only on $T$. Hence, by induction we get
\[
F_n(t)\le[C_TL_T]^{n-1}\int_0^t\!\int_0^{t_1}\!\cdots\!\int_0^{t_{n-2}}F_1(t_{n-1})\,dt_{n-1}\cdots dt_2\,dt_1.
\]
On the other hand, by Eq. (3.1) and the assumption (2.2), we have
\[
\begin{aligned}
F_1(t)&=E\Bigl\{\int_{\mathbb R}[u_2(t,x,\cdot)-u_1(t,x,\cdot)]^2\,dx\Bigr\}\\
&\le C_T\int_{\mathbb R}\int_0^t\!\int_{\mathbb R} G_{t-s}(x-z)K_T(z)\bigl(1+|u_1(s,z)|\bigr)\,dz\,ds\,dx\\
&\le TC_T\Bigl(\int_{\mathbb R}[K_T(z)]^2\,dz\Bigr)^{1/2}\Bigl(2T+\int_{\mathbb R}[u_0(z)]^2\,dz\Bigr)^{1/2},
\end{aligned}
\]
which is again a constant depending only on $T$, denoted by $\mathrm{const}$. Thus we obtain
\[
0\le F_n(t)\le\frac{\mathrm{const}\,[C_TL_TT]^{n-1}}{(n-1)!},\qquad n\in\mathbb N,
\]
which implies that the series $\sum_{n\in\mathbb N}F_n(t)\ (\le\mathrm{const}\cdot e^{TC_TL_T})$ converges uniformly on $[0,T]$. Therefore, the sequence $\{u_n(t,\cdot,\omega):\ (t,\omega)\in[0,T]\times\Omega\}_{n\in\mathbb N}$ converges in $L^2(\mathbb R)$ uniformly for $t\in[0,T]$ and a.s. for $\omega\in\Omega$. Define

\[
u(t,\cdot,\omega):=L^2(\mathbb R)\text{-}\lim_{n\to\infty}u_n(t,\cdot,\omega).
\]
It is easy to see that $u(t,\cdot,\omega)\in L^2(\mathbb R)$ for all $t\in[0,T]$ and a.s. $\omega\in\Omega$, and moreover, $u$ is $\{\mathcal F_t\}$-adapted. It remains to show that $u$ satisfies an equation of the form (1.3), namely, we need to prove

\[
\begin{aligned}
u(t,x,\omega)=u_1(t,x,\omega)&+\int_0^t\!\int_{\mathbb R} G_{t-s}(x-z)f(s,z,u(s,z,\omega))\,dz\,ds\\
&+\int_0^{t+}\!\int_U\!\int_{\mathbb R} G_{t-s}(x-z)g(s,z,u(s-,z,\omega),y)\,dz\,q(ds,dy,\omega). \qquad (3.8)
\end{aligned}
\]
In fact, by Eq. (3.1), we have
\[
\begin{aligned}
&E\int_{\mathbb R}\Bigl[u_n(t,x,\cdot)-u_1(t,x,\cdot)-\int_0^t\!\int_{\mathbb R} G_{t-s}(x-z)f(s,z,u(s,z,\cdot))\,dz\,ds\\
&\qquad\qquad-\int_0^{t+}\!\int_U\!\int_{\mathbb R} G_{t-s}(x-z)g(s,z,u(s-,z,\cdot),y)\,dz\,q(ds,dy,\cdot)\Bigr]^2 dx\\
&\quad\le C_TL_T\,E\int_0^t\!\int_{\mathbb R}[u_{n-1}(s-,z,\cdot)-u(s-,z,\cdot)]^2\,dz\,ds. \qquad (3.9)
\end{aligned}
\]
Since $\{u_n(t,\cdot,\omega):\ (t,\omega)\in[0,T]\times\Omega\}_{n\in\mathbb N}$ converges to $u(t,\cdot,\omega)$ in $L^2(\mathbb R)$ uniformly for $t\in[0,T]$ and a.s. for $\omega\in\Omega$, we can take the $L^2(\Omega)$-limit as $n\to\infty$ through the integral over the variable $s$ on $[0,t]$ on the right-hand side of the inequality (3.9), from which we obtain Eq. (3.8). Moreover, from Eq. (3.8) and by a similar argument as in the proofs of Lemmas 3.2, 3.4, 3.5 and 3.8, we conclude that $u$ has a version which is modified càdlàg in $t$ and continuous in $x$. Thus $u$ is a solution of Eq. (1.3).
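To give a rough feeling for the successive approximation scheme (3.1), the following toy numerical sketch (added for illustration; it is not the authors' construction) treats the purely deterministic case $g\equiv 0$, so that the $q$-integral vanishes, with $f(t,x,z)=e^{-x^2}\sin z$ as in the illustration after Theorem 2.2, and iterates $u_{n+1}=u_1+I_1$ on a coarse space-time grid; the printed differences between consecutive iterates shrink rapidly, mirroring the factorial bound obtained above.

```python
# Toy Picard iteration for u_{n+1} = u_1 + I_1 (deterministic case g = 0) on a coarse grid.
import numpy as np

def G(t, x):
    return np.exp(-x**2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)

x = np.linspace(-8.0, 8.0, 161); dx = x[1] - x[0]
ts = np.linspace(0.0, 1.0, 21); dt = ts[1] - ts[0]
u0 = np.exp(-x**2)                                   # an initial condition in L^2(R)
f = lambda z: np.exp(-x**2) * np.sin(z)              # f(t, x, z) evaluated on the x-grid

def heat_term(t):
    # u_1(t, x) = int_R G_t(x - z) u_0(z) dz, with the convention G_0 * u_0 = u_0
    if t == 0.0:
        return u0.copy()
    return np.array([np.trapz(G(t, xi - x) * u0, x) for xi in x])

u = np.array([heat_term(t) for t in ts])             # u_1, the first approximation
for n in range(4):                                   # a few Picard iterations
    new = np.empty_like(u)
    for i, t in enumerate(ts):
        duhamel = np.zeros_like(x)
        for j in range(i):                           # left-endpoint rule in s
            s = ts[j]
            kern = G(t - s, x[:, None] - x[None, :]) # kern[k, l] = G_{t-s}(x_k - z_l)
            duhamel += dt * kern.dot(f(u[j]) * dx)
        new[i] = heat_term(t) + duhamel
    print(n + 1, np.abs(new - u).max())              # sup-norm gap between consecutive iterates
    u = new
```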

The proof of uniqueness. Let $u^{(1)}$ and $u^{(2)}$ be two solutions of Eq. (1.3). Then $u^{(1)}$ and $u^{(2)}$ belong to $\mathcal U_T$; hence $u^{(1)}$ and $u^{(2)}$ satisfy Eq. (3.8). Set
\[
H(t):=E\Bigl\{\int_{\mathbb R}[u^{(1)}(t,x,\cdot)-u^{(2)}(t,x,\cdot)]^2\,dx\Bigr\},\qquad t\in[0,T],
\]
which is obviously modified càdlàg in the variable $t$. Thus $\sup_{t\in[0,T]}H(t)<\infty$. By Eq. (3.8) and the same argument as in the proof of existence, we get
\[
H(t)\le C_TL_T\int_0^t H(s)\,ds.
\]
By the Gronwall inequality, we obtain that $\sup_{t\in[0,T]}H(t)=0$. This clearly implies that
\[
u^{(1)}(t,x,\cdot)=u^{(2)}(t,x,\cdot)\quad\text{a.s.}
\]
for all $t\in[0,T]$ and (actually $dx$-almost) all $x\in\mathbb R$.
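For completeness, the elementary form of Gronwall's inequality invoked here can be spelled out as a one-line iteration (this expansion is added for the reader's convenience and is not in the original text): since $M:=\sup_{t\in[0,T]}H(t)<\infty$, substituting the bound into itself $n$ times gives
\[
H(t)\le C_TL_T\int_0^t H(s)\,ds\le\cdots\le M\,\frac{(C_TL_T\,t)^n}{n!}\qquad\text{for every } n\in\mathbb N,
\]
and letting $n\to\infty$ yields $H\equiv 0$ on $[0,T]$.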

Remark 3.9. We point out that the solution given by Eq. (3.8) is not a semimartingale, since the second stochastic integral on the right-hand side of Eq. (3.8) is not an $\{\mathcal F_t\}$-martingale, as we observed in Remark 3.7. This makes an intrinsic difference between SPDEs and SDEs driven by Poisson white noise: the solutions of SDEs driven by Poisson white noise are semimartingales, see e.g. Section 9 of Chapter IV in Ikeda and Watanabe (1981) and also Theorem 3.2 of Kurtz et al. (1995).

Remark 3.10. We have proved that the solution $u(t,x,\omega)$ is modified càdlàg in $t$. Furthermore, $u(t,x,\omega)$ is even modified continuous in the sense that, for each $x\in\mathbb R$, the family of $L^2(\Omega)$-valued random variables $\{u(t,x,\cdot)\}_{t\in[0,\infty)}$ is continuous in $t$. Indeed, this comes from the known fact that the associated Poisson random measure satisfies $p(\{t\},U,\cdot)=0$, $P$-a.s.

Finally, we notice that the condition $K_T\in L^1(\mathbb R)$ is only used in the proof of Lemma 3.8, and directly from there we obtain the following (alternative) sufficient condition (2.2a) for Theorem 2.2.

Theorem 2.2′. Assume that for any $T>0$ there exist a (positive) real function $K_T\in L^2(\mathbb R)$ and a constant $L_T>0$ such that
\[
[f(t,x,z)]^2+\int_U[g(t,x,z,y)]^2\,\nu(dy)\le K_T(x)|z|, \qquad (2.2a)
\]
\[
[f(t,x,z_1)-f(t,x,z_2)]^2+\int_U[g(t,x,z_1,y)-g(t,x,z_2,y)]^2\,\nu(dy)\le L_T|z_1-z_2|^2
\]
for $(t,x,z)\in[0,T]\times\mathbb R\times\mathbb R$. Then for any $u_0\in L^2(\mathbb R)$, there exists a unique solution to Eq. (1.3).

Acknowledgements

The authors would like to thank the referee for a careful reading of the first version of the paper and for most useful comments and suggestions which led to a substantial improvement of the paper, especially for pointing out Remark 3.10. The first and second authors gratefully acknowledge the financial support by the D.F.G. through SFB 237. The third author was partly supported by VISTA, a research co-operation between the Norwegian Academy of Science and STATOIL.

References

Bally, V., Gyöngy, I., Pardoux, E., 1994. White noise driven parabolic SPDEs with measurable drift. J. Funct. Anal. 120, 484–510.
Gyöngy, I., Pardoux, E. Weak and strong solutions of white noise driven parabolic SPDEs. Preprint.
Ikeda, N., Watanabe, S., 1981. Stochastic Differential Equations and Diffusion Processes. North-Holland, Amsterdam.
Itô, K., 1984. Foundations of Stochastic Differential Equations in Infinite Dimensional Spaces. CBMS-NSF Regional Conf. Series in Applied Mathematics, vol. 47. SIAM, Philadelphia.
Jacod, J., Shiryaev, A.N., 1987. Limit Theorems for Stochastic Processes. Springer, Berlin.
Kallianpur, G., Perez-Abreu, V., 1988. Stochastic evolution equations driven by nuclear-space-valued martingales. Appl. Math. Optim. 17, 237–272.
Kurtz, T.G., Pardoux, E., Protter, P., 1995. Stratonovich stochastic differential equations driven by general semimartingales. Ann. Inst. H. Poincaré B 31, 351–377.
Pardoux, E., Zhang, T.-S., 1993. Absolute continuity of the law of the solution of a parabolic SPDE. J. Funct. Anal. 112, 447–458.
Peszat, S., 1995. Existence and uniqueness of the solution for stochastic equations on Banach spaces. Stochastics Stochastics Rep. 55, 167–193.
Rudin, W., 1974. Real and Complex Analysis, 2nd ed. McGraw-Hill, New York.
Walsh, J.B., 1986. An introduction to stochastic partial differential equations. In: École d'Été de Probabilités de Saint-Flour XIV, Lecture Notes in Mathematics, vol. 1180. Springer, Berlin, pp. 266–439.