arXiv:math/0504027v1 [math.PR] 1 Apr 2005

The heat equation with multiplicative stable Lévy noise

Carl Mueller
Dept. of Mathematics, University of Rochester, Rochester, NY 14627
E-mail: [email protected]
Supported by an NSF grant.

Leonid Mytnik
Faculty of Industrial Engineering and Management, Technion – Israel Institute of Technology, Haifa 32000, Israel
E-mail: [email protected]
Research supported in part by the Israel Science Foundation (grant No. 116/01-10.0).

Aurel Stan
Dept. of Mathematics, University of Rochester, Rochester, NY 14627
E-mail: [email protected]

Key words and phrases: heat equation, white noise, stochastic partial differential equations.
1991 AMS subject classifications: Primary, 60H15; Secondary, 35R60, 35K05.

Abstract

We study the heat equation with a random potential term. The potential is a one-sided stable noise, with positive jumps, which does not depend on time. To avoid singularities, we define the equation in terms of a construction similar to the Skorokhod or Wick product. We give a criterion for existence based on the dimension of the space variable, and the parameter p of the stable noise. Our arguments are different for p < 1 and p ≥ 1.

1 Introduction

Our goal is to study the heat equation with stable Lévy noise L̇(x) which depends only on space.

Assumption 1 Throughout the paper, we assume that L̇ has only positive jumps.

Our noise is multiplicative, in the sense that it is multiplied by u in the equation:

    ∂u/∂t = ∆u + u ⋄ L̇(x)        (1.1)
    u(0,x) = u_0(x).

Here, L̇(x) is a one-sided stable noise of index p ≥ 1 defined for x ∈ R^d. The product u ⋄ L̇(x) is related to the Skorokhod integral or Wick product. We will give precise conditions later. Our main goal is to give conditions on p and the dimension d such that the solution exists and is unique. In fact, for p < 1, the stochastic integral is defined in a different way and can include more singular terms. In this case, we use the notation

    ∂u/∂t = ∆u + u ⋆ L̇(x)        (1.2)
    u(0,x) = u_0(x).

There are many papers concerning the heat equation with a multiplicative noise term (see [Par93]). In most cases, however, the noise depends on time, and is even independent from one time point to another. This is a very natural assumption from the point of view of Itô integration, since the relevant stochastic integrals often yield martingales. For example, the equation

    ∂u/∂t = ∆u + f(u) Ẇ(t,x)        (1.3)
    u(0,x) = u_0(x),

where Ẇ(t,x) is space-time white noise, is usually formulated in terms of an integral equation

    u(t,x) = ∫_{R^d} G(t, x−y) u_0(y) dy
           + ∫_0^t ∫_{R^d} G(t−s, x−y) f(u(s,y)) W(dy ds),

where G(t,x) is the heat kernel, and where the final integral can be defined in much the same way as the Itô integral. But if the noise does not depend on time, such an integral would be anticipating. Further evidence of the difficulty of noise depending only on space is given by the following equation:

    ∂u/∂t = ∆u + δ(x)u        (1.4)
    u(0,x) = u_0(x),

where δ(x) is the Dirac delta function. When u does not depend on t, this equation has been extensively studied by mathematical physicists. See the book [AGHKH88] by Albeverio et al., for example. This equation (without t dependence) models the quantum mechanics of a particle under the influence of a point interaction. As usual, we can think of this equation in terms of the density of Brownian particles. The δ(x) term means that Brownian particles at 0 would give birth to many new particles. But these new particles would themselves be close to 0, and so they might give birth to other particles, ad infinitum. Because of this unstable behavior, when the dimension d is 2 or higher we need to take an approximate δ(x) function, multiply it by a very small term, and take the limit of the approximation. The case of x ∈ R is better behaved.

Yaozhong Hu has written two papers on time-independent noise, [Hu01] and [Hu02]. He deals with the equation

    ∂u/∂t = ∆u + u ⋄ Ẇ(x)
    u(0,x) = u_0(x),

where ⋄ denotes Skorokhod integration. One can also think of this product as the Wick product; see [Jan97], Chapter 3. Ẇ(x) is white noise in [Hu02], but in [Hu01] it is colored Gaussian noise. By the Skorokhod integral, roughly speaking, we mean that in the relevant stochastic integrals, we should drop diagonal terms such as (Ẇ(x))². These terms correspond to repeated influence of Ẇ(x) for the same point x, so they are related to the singular behavior in the equation (1.4). The goal of this article is to study (1.1) for time-independent Lévy noise L̇(x).
The product ⋄, like the Wick product, involves the deletion of diagonal terms in the appropriate integrals. But it is not defined in the same way as the Wick product for Brownian functionals, at least

with the usual definitions. Once again, the reason for using such a product is to avoid singular terms such as those coming from (1.4). The product ⋆ used in (1.2) is similar, but we delete fewer terms. Our idea is that, once we understand the equation without the singular terms, we can try gingerly adding back the singular terms. But the understanding of the simpler situation must come first.

In the time dependent situation, there have been several papers involving Lévy noise, such as Kallianpur, Xiong, Hardy, and Ramasubramanian [KXHR94]. In most cases, the noise has been additive. That is, it appears without any multiple of u or f(u). In the case p < 1, and for noise depending on both time and space, [Mue98] dealt with the equation

    ∂u/∂t = ∆u + u^γ L̇(t,x)        (1.5)
    u(0,x) = u_0(x).

If x ∈ D for a smooth and bounded domain D ⊂ R^d, and with some conditions on u_0, short-time existence was obtained in the case

    d < (1−p)α / (γp − (1−p)).

For p ≥ 1, Mytnik [Myt02] found that (1.5) has a solution if

    pγ < 2/d + 1.

2 Theorems

In this section we list our main results. They depend on some definitions which will be explained later. We assume that u_0(x) is a bounded function on D. Since our equation is linear, we may assume without loss of generality that ‖u_0‖_∞ ≤ 1. Here are our main theorems.

Theorem 1 Suppose that 0 < p < 1 and that either d ≤ 4, or d ≥ 5 and

    p < 1/2 + 1/(d−2).

Then equation (1.2) has a solution u(t,x), in the sense that the series Σ_n u_n(t,x) defined below converges almost surely for every t > 0 and x ∈ D.

We are not sure that the conditions on p, d in the above theorem are optimal.

Theorem 2 Suppose that p ≥ 1 and u_0(·) is a bounded function on D. Assume that

    p < 1 + 2/d.

Then equation (1.1) has a unique solution u(t,x). Moreover, for all t > 0 and x ∈ D, u(t,x) ∈ L^q_{loc}(Ω), for all q ∈ [1, 1 + (2/d)).

Here L^q_{loc}(Ω) denotes the space of all random variables f : Ω → R such that, for all ε > 0, there exists an event A ⊂ Ω, of probability less than ε, such that E[|f 1_{A^c}|^q] < ∞.

Remarks:
1. In Theorem 2, in the Brownian case of p = 2, the critical dimension is d = 2, which agrees with [Hu02].
2. For p < 1, if we use the product ⋄ instead of ⋆, the proof of Theorem 1 is trivial. The reader can check that by removing the large atoms of the measure L̇(x), we can take q = 1 and use the proof of Theorem 2.

3 Basic Definitions

We will index sequences of real numbers, such as y_i, by subscripts. A sequence of vectors in R^n will be indexed by superscripts. Thus, x^{(i)} is the i-th vector in the sequence, and x^{(i)}_k is the k-th component of the i-th vector. If it is not stated otherwise, C in this paper will denote a constant whose value may change from line to line. Throughout the paper, G(t,x) denotes the fundamental solution of the heat equation for x ∈ R^d:

    ∂u/∂t = ∆u
    u(0,x) = δ(x).

Explicitly,

    G(t,x) = (4πt)^{−d/2} exp(−|x|²/(4t)).

Given a domain D ⊂ R^d, we let G(t,x,y) be the fundamental solution of the heat equation in D, with Dirichlet boundary conditions. That is, G(t,x,y) satisfies

    ∂u/∂t = ∆u
    u(0,x) = δ_y(x)
    u(t,x) = 0 for x ∈ ∂D.
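As a quick numerical sanity check on G (a sketch in d = 1, not part of the paper's argument): the kernel should have total mass 1 and satisfy the semigroup property ∫ G(s, x−y) G(t, y) dy = G(s+t, x).

```python
import math

def G(t, x):
    # Heat kernel on R (d = 1): G(t, x) = (4*pi*t)**(-1/2) * exp(-x*x/(4*t))
    return (4.0 * math.pi * t) ** -0.5 * math.exp(-x * x / (4.0 * t))

h, R = 0.001, 20.0
grid = [-R + (i + 0.5) * h for i in range(int(2 * R / h))]

# Total mass: int G(t, x) dx = 1 (midpoint rule on [-R, R]).
mass = sum(G(0.5, x) * h for x in grid)

# Semigroup (Chapman-Kolmogorov) property: int G(s, x0 - y) G(t, y) dy = G(s + t, x0).
x0, s, t = 0.7, 0.3, 0.4
conv = sum(G(s, x0 - y) * G(t, y) * h for y in grid)
```

Both quantities agree with the closed forms to well below 1e-6 on this grid.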

We note the elementary inequality

0 ≤ G(t,x,y) ≤ G(t,x − y).

In other words, when G has two arguments, it is the heat kernel on Rd, and when it has three arguments, it is the heat kernel on D. Finally, we write

    T_n = T_n(t) := {(s_1, …, s_n) : 0 < s_1 < s_2 < ⋯ < s_n < t}.

4 The Noise

Roughly speaking, our noise L̇(x) will be the analogue of dL(t)/dt, where L(t) is a one-sided stable process of index p ∈ (0,2) and t ∈ [0,∞). Of course, this derivative only exists in the generalized sense. For example, we could use the theory of Schwartz distributions. We will start with a review of the one-sided stable Lévy processes, and refer the reader to Bertoin's book [Ber96] for details. There is a basic difference between the cases p < 1 and p ≥ 1.

For p < 1, we can construct L(t) via a Poisson process, as follows. Consider the following measure on (t,y) ∈ [0,∞)² given by

    ν(dy dt) = c_p y^{−(p+1)} dy dt,

where c_p is a constant depending on p. This is the same constant that occurs in the Lévy measure of the one-sided stable processes (see [Ber96]). Let Y(·) be a Poisson random measure on [0,∞)² with intensity ν. Then, the p-stable process L(t) is defined as

    L_t = ∫_{[0,t]} ∫_0^∞ y Y(dy ds).
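A minimal simulation sketch of this construction for p < 1: sample the atoms of a Poisson measure with intensity c_p y^{−(p+1)} dy dt, truncated to jumps y ≥ ε, since the full measure has infinitely many small atoms. The truncation and all parameter values are illustrative assumptions, not part of the paper's construction. Because the noise is one-sided with positive jumps, the resulting path t ↦ L_t is nondecreasing.

```python
import math
import random

random.seed(0)

def stable_jumps(T=1.0, p=0.5, c_p=1.0, eps=1e-3):
    """Atoms (t_i, y_i) of a Poisson measure with intensity
    c_p * y**(-(p+1)) dy dt on [0, T] x [eps, inf).
    eps truncates the a.s. infinite set of small jumps (approximation)."""
    lam = T * c_p * eps ** (-p) / p      # mean number of atoms with y >= eps
    # Sample N ~ Poisson(lam) by CDF inversion.
    u, n = random.random(), 0
    prob = math.exp(-lam)
    cdf = prob
    while u > cdf:
        n += 1
        prob *= lam / n
        cdf += prob
    # Atom times are uniform on [0, T]; masses follow the normalized tail:
    # P(Y > y) = (y/eps)**(-p), so y = eps * U**(-1/p).
    return sorted((random.uniform(0.0, T), eps * random.random() ** (-1.0 / p))
                  for _ in range(n))

atoms = stable_jumps()
# L_t = sum of the masses y_i with t_i <= t: one-sided, hence nondecreasing.
ts = [k / 100.0 for k in range(101)]
L = [sum(y for (ti, y) in atoms if ti <= t) for t in ts]
```

The assertions below only check structural properties (positive jumps, monotone path), which hold for any realization.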

Note that if we let (ti,yi) be the locations of the points (or atoms) of the Poisson measure Y , then

    L_t = ∑_{t_i ≤ t} y_i.

In a similar way, we define the random measure {L(A) : A ⊂ [0,∞)} as

    L(A) = ∑_{t_i ∈ A} y_i,

and observe that

    L_t = ∫_0^t L(ds).

For p ≥ 1, we must introduce an approximation and compensation procedure, as follows. Let

    Y_n(dy dt) = Y(dy dt) 1(y ≥ 1/n).

Note that if we define

    ν_n(dy dt) := ν(dy dt) 1(y ≥ 1/n),

then Y_n becomes a Poisson measure with intensity ν_n. Note that

    ν_n([0,t] × [0,∞)) = ν([0,t] × [1/n,∞)) = (c_p/p) n^p t.

Next, let

    L^{(n)}_t = ∫_0^t ∫_0^∞ y Y_n(dy ds) − t ∫_0^∞ y dν_n(y)
              = ∫_0^t ∫_0^∞ y Y_n(dy ds) − (c_p/(p−1)) n^{p−1} t.

There is a problem with p = 1. We leave the details to the reader, but roughly speaking, we would truncate the measure by removing large jumps.

Note that L^{(n)}_t is a martingale with respect to its own filtration. The p-stable process L(t) is the limit of the processes L^{(n)}_t as n → ∞. For details of this limit, we refer the reader to [Ber96]. We could define the random measure L(dt) in a similar way. The notation L̇(t) stands for the density of the measure L, even though this is only defined in the generalized sense. This is similar to the convention of writing Ḃ(t) for the derivative of Brownian motion.

Our construction of {L(A) : A ⊂ R^d} is similar. First consider the case p < 1. Fix a bounded open region D ⊂ R^d; we will give some conditions on D later. For (x,y) ∈ B ⊂ D × [0,∞), let Y(B) be the Poisson measure with intensity dx ν(dy). Let (x^{(i)}, y_i) denote the locations of the points (atoms) of the Poisson measure Y. For A ⊂ D, we define

    L(A) = ∑_{x^{(i)} ∈ A} y_i.

For p ≥ 1, we must approximate. Given n, let Y, ν be defined as above, and let

    Y_n(dx dy) = 1(y ≥ 1/n) Y(dx dy),
    dx ν(dy) = c_p y^{−(p+1)} dx dy,
    dx ν_n(dy) = 1(y ≥ 1/n) dx ν(dy).

As above, for A ⊂ D, we let

    L_n(A) = ∑_{x^{(i)} ∈ A} y_i 1(y_i ≥ 1/n) − ∫_A ∫_0^∞ y dx ν_n(dy)
           = ∑_{x^{(i)} ∈ A} y_i 1(y_i ≥ 1/n) − (c_p/(p−1)) m(A) n^{p−1},

where m(A) denotes the Lebesgue measure of A. Again, we leave the case p = 1 to the reader. To get L(dx), we again take a limit as n → ∞. Since this procedure is not very different than for the one-sided stable processes with p ∈ [1,2), we refer the reader to [Ber96].

For the future, we label the atoms of the measure L by {(x^{(i)}, y_i) : x^{(i)} ∈ R^d}. Then x^{(i)} is the location of the atom, and y_i is the mass of the measure L({x^{(i)}}).
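The total-mass computation for the truncated intensity from Section 4, ν_n([0,t] × [0,∞)) = (c_p/p) n^p t, can be checked numerically. The sketch below (with illustrative values of c_p, p, n, t, which the paper does not fix) integrates c_p y^{−(p+1)} over [1/n, ∞) and compares with the closed form.

```python
import math

# Check nu_n([0,t] x [0,inf)) = (c_p / p) * n**p * t for the truncated
# intensity nu_n(dy dt) = c_p * y**(-(p+1)) * 1(y >= 1/n) dy dt.
# The values of c_p, p, n, t below are illustrative only.
c_p, p, n, t = 1.3, 1.5, 4, 2.0

R, m = 50.0, 200_000                      # midpoint rule on [1/n, R] ...
h = (R - 1.0 / n) / m
integral = sum(c_p * (1.0 / n + (i + 0.5) * h) ** (-(p + 1)) * h
               for i in range(m))
tail = c_p * R ** (-p) / p                # ... plus the exact [R, inf) tail
total = t * (integral + tail)
closed_form = (c_p / p) * n ** p * t
```

The midpoint rule at this resolution agrees with the closed form to about one part in 10^4.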

Fix K > 0. Since D is a bounded domain, there are only a finite number of atoms (x^{(i)}, y_i) with x^{(i)} ∈ D and y_i > K. Let L_K denote L with the preceding atoms deleted. That is,

    L_K = L − ∑_{y_i > K} y_i δ_{x^{(i)}}.

Assumption 2 In the succeeding sections, we will tacitly assume that L̇ is actually L̇_K.

Furthermore, let A_K denote the event that L = L_K, that is, that there are no atoms larger than K. Note that

    lim_{K→∞} P(A_K) = lim_{K→∞} P(L = L_K) = 1.

From now on, we will replace L by L_K, and let K → ∞ at the end.
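For a concrete sense of why P(A_K) → 1: the number of atoms in D with mass above K is Poisson with mean m(D) ∫_K^∞ c_p y^{−(p+1)} dy = (c_p/p) m(D) K^{−p}, so P(A_K) = exp(−(c_p/p) m(D) K^{−p}). A minimal sketch, with illustrative values of c_p, p, and m(D) (not fixed by the paper):

```python
import math

# P(A_K) = P(no atoms of mass > K in D).  The count of such atoms is
# Poisson with mean (c_p / p) * m(D) * K**(-p), so
# P(A_K) = exp(-(c_p / p) * m_D * K**(-p)) -> 1 as K -> infinity.
c_p, p, m_D = 1.0, 0.5, 3.0               # illustrative values

def prob_no_large_atoms(K):
    return math.exp(-(c_p / p) * m_D * K ** (-p))

# Evaluate along K = 2**N, matching the paper's choice of truncation levels.
probs = [prob_no_large_atoms(2.0 ** N) for N in range(1, 30)]
```

The sequence is strictly increasing in K and approaches 1.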

5 Multiple Stochastic Integrals

In the one-dimensional case, such integrals have been considered earlier, for example in [RW86]. We suppose that D ⊂ R^d is an open, bounded domain with smooth boundary. As described earlier, for p ∈ (0,2), let L(A) be the one-sided stable random measure of index p, defined for Borel sets A ⊂ D.

Assume that p ≥ 1. Now we describe our multiple integrals with respect to the noise L̇, recalling that L̇(x)dx is just another notation for L(dx). Assume that we have a symmetric function f_n(x^{(1)}, …, x^{(n)}) defined for x^{(1)}, …, x^{(n)} ∈ D. Define D̃_n ⊂ D^n as follows:

    D̃_n = {(x^{(1)}, x^{(2)}, …, x^{(n)}) ∈ D^n : x^{(1)}_1 < x^{(2)}_1 < ⋯ < x^{(n)}_1}.

We regard x_1 as the time variable, so that the following integral is defined in the Itô sense.

    I_n(f_n) = n! ∫_{D̃_n} f_n(x^{(1)}, …, x^{(n)}) L(dx^{(1)}) … L(dx^{(n)}).

Sufficient conditions for the existence of the integral I_n(f_n) are given in Lemma 3.

Since f_n is symmetric, I_n(f_n) is invariant under permutations of the indices i = 1, …, n of the x^{(i)}. For f = (f_0, f_1, …), where f_n is a symmetric function of the variables x^{(1)}, …, x^{(n)}, let

    I(f) = ∑_{n=0}^∞ I_n(f_n).

Here, f_0 is interpreted as a constant, and I_0(f_0) = f_0.

For future use, we define the symmetrization of a function as follows:

    sym(f)(x^{(1)}, …, x^{(n)}) = (1/n!) ∑_σ f(x^{(σ(1))}, …, x^{(σ(n))}),

where the sum is taken over all permutations σ of the indices 1, …, n. If we wish to symmetrize over only some of the variables, we write those in a subscript.

    sym_{x^{(1)},…,x^{(n)}}(f)(x^{(1)}, …, x^{(n)}, z^{(1)}, …, z^{(m)})
        = (1/n!) ∑_σ f(x^{(σ(1))}, …, x^{(σ(n))}, z^{(1)}, …, z^{(m)}).

Note that the integral I(f) has the following property. Because of the ordering we have imposed on the variables x^{(i)}, it follows that I(f) depends linearly on the masses y_i of the atoms of the measure L. That is, I(f) can have terms like y_1 y_2, but no quadratic terms like y_1².

We note in passing that our multiple stochastic integrals are related to the Wick product for the compensated Poisson process Y; see Lytvynov [Lyt03], for example.

For the case p < 1, we propose a different integral J(f), which allows for more singularities. Let X_n(D) ⊂ D^n be the set of coordinates (x^{(1)}, …, x^{(n)}) with no adjacent indices equal. In other words, x^{(i)} ≠ x^{(i+1)} for i = 1, …, n−1. Let

    J_n(f_n) = ∫_{X_n(D)} f_n(x^{(1)}, …, x^{(n)}) L(dx^{(1)}) … L(dx^{(n)}).

Since L(dx^{(1)}) … L(dx^{(n)}) is a measure, the integral J_n(f_n) can be defined for each ω, provided f_n is integrable with respect to this product measure. We define J(f) in a similar way:

    J(f) = ∑_{n=0}^∞ J_n(f_n).

6 A Framework for the Equation

The goal of this section is to set up a framework for discussing the equation, so that existence and uniqueness can be rigorously discussed.

In the white noise case, there are some related concepts in [NR97] and [MR98]. We define

    𝓘_n ≡ {I_n(f_n) : f_n is a symmetric function on D^n}.        (6.1)

Definition 1 Let g(·,·) be a measurable function on D × Ω, such that g(x,·) ∈ 𝓘_n for almost every x ∈ D. That is, there exists a function f_n on D^{n+1} such that

    g(x,·) = n! ∫_{D̃_n} f_n(x^{(1)}, …, x^{(n)}, x) L(dx^{(1)}) … L(dx^{(n)}),    a.e. x,

and f_n(·, x) is a symmetric function on D^n for almost every x. Then we define

    ∫_D g(x) ⋄ L(dx)
        ≡ (n+1)! ∫_{D̃_{n+1}} sym_{x^{(1)},…,x^{(n+1)}}(f_n)(x^{(1)}, …, x^{(n)}, x^{(n+1)}) L(dx^{(1)}) … L(dx^{(n+1)}).

Set

    𝓘 = {∑_{n=0}^∞ g_n : g_n ∈ 𝓘_n and the sum converges in probability}.

Definition 2 Let g(·,·) be a measurable function on D × Ω, such that g(x,·) ∈ 𝓘 for almost every x ∈ D. That is, there exists a sequence {g_n}_{n≥0} such that

    g_n(x,·) ∈ 𝓘_n,    n ≥ 0,    a.e. x ∈ D,

and

    g(x,·) = ∑_{n=0}^∞ g_n(x,·),    a.e. x ∈ D.

Then we define

    ∫_D g(x) ⋄ L(dx) ≡ ∑_{n=0}^∞ ∫_D g_n(x) ⋄ L(dx),

where the sum converges in probability.

The next definition is analogous to the definition of Hu from [Hu01].

Definition 3 A measurable function u : R_+ × D × Ω → R is called a solution to the equation (1.1) if u(t,x) ∈ 𝓘 for every x ∈ R^d and t ∈ (0,T] and the following equation is satisfied:

    u(t,x) = ∫_D G(t,x,y) u_0(y) dy        (6.2)
           + ∫_D (∫_0^t G(t−s,x,y) u(s,y) ds) ⋄ L(dy),

for all x ∈ R^d, t ∈ (0,T].

Our strategy is to expand u(t,x) in a recursively defined series:

    u(t,x) = ∑_{n=0}^∞ u_n(t,x),

where

    u_0(t,x) = ∫_D G(t,x,y) u_0(y) dy        (6.3)
    u_{n+1}(t,x) = ∫_0^t ∫_D G(t−s,x,y) u_n(s,y) ⋄ L(dy) ds.

Note that u_0 has two meanings. If u_0(x) is a function of one variable, then it is the initial value for our SPDE (1.1). If u_0(t,x) depends on two variables, then it is the first term of the expansion for u(t,x).

To deal with the case p < 1, we define the operation ⋆. We define

    𝓙_n ≡ {J_n(f_n) : f_n is a function on D^n}.        (6.4)

Definition 4 Let g(·,·) be a measurable function on D × Ω, such that g(x,·) ∈ 𝓙_n for almost every x ∈ D. That is, there exists a function f_n on D^{n+1} such that

    g(x,·) = ∫_{X_n(D)} f_n(x^{(1)}, …, x^{(n)}, x) L(dx^{(1)}) … L(dx^{(n)}),    a.e. x,

and f_n is a function on D^{n+1}. Then we define

    ∫_D g(x) ⋆ L(dx)
        ≡ ∫_{X_{n+1}(D)} f_n(x^{(1)}, …, x^{(n+1)}) L(dx^{(1)}) … L(dx^{(n+1)}).

The other definitions for p < 1 are completely analogous to the case p ≥ 1, with ⋆ instead of ⋄. We leave this part to the reader.

7 Proof of the existence part of Theorem 1 (p < 1)

7.1 Calculations for the heat equation

As mentioned at the end of Section 4, we will assume that A_K occurs, so that L̇ = L̇_K. We suppose that K = 2^N for some integer N. We will obtain conclusions that hold almost surely for each integer K = 2^N. Since P(A_K) → 1 as K → ∞, our assertion will almost surely hold for L. We will use the notation of Mueller [Mue98]. We call the atom at x^{(i)} with mass y_i a particle of type n if

    2^{−(n+1)} < y_i ≤ 2^{−n}.

7.2 Distance between points

We start with an elementary lemma about Poisson probabilities.

Lemma 1 Let X be a Poisson random variable with parameter λ. Then for any nonnegative integer n_0 we have

    P(X ≥ n_0) ≤ λ^{n_0} / n_0!.

As a consequence,

    P(X ≥ n_0) ≤ λ^{n_0} e^{n_0} / n_0^{n_0}.

Proof.

    P(X ≥ n_0) = ∑_{n=n_0}^∞ P(X = n)
              = ∑_{n=n_0}^∞ e^{−λ} λ^n / n!
              = (λ^{n_0}/n_0!) ∑_{n=n_0}^∞ e^{−λ} (λ^{n−n_0}/(n−n_0)!) · (n_0!(n−n_0)!/n!).

But for all n ≥ n_0, we have

    n_0!(n−n_0)!/n! = 1 / C(n, n_0) ≤ 1,

where C(n, n_0) is the binomial coefficient. Thus,

    P(X ≥ n_0) ≤ (λ^{n_0}/n_0!) ∑_{n=n_0}^∞ e^{−λ} λ^{n−n_0}/(n−n_0)!
              = (λ^{n_0}/n_0!) ∑_{k=0}^∞ e^{−λ} λ^k/k!
              = λ^{n_0}/n_0!.

Since

    e^{n_0} = 1 + n_0/1! + n_0²/2! + ⋯ + n_0^{n_0}/n_0! + ⋯ ≥ n_0^{n_0}/n_0!,

we have

    P(X ≥ n_0) ≤ λ^{n_0}/n_0! ≤ λ^{n_0} e^{n_0}/n_0^{n_0}.
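Lemma 1's two bounds can be sanity-checked against the exact Poisson tail. The sketch below sums the tail directly and compares it with both bounds; the parameter values are illustrative.

```python
import math

def poisson_tail(lam, n0, terms=200):
    # P(X >= n0) for X ~ Poisson(lam), summing e^-lam * lam^n / n! directly.
    term = math.exp(-lam) * lam ** n0 / math.factorial(n0)
    total = 0.0
    for n in range(n0, n0 + terms):
        total += term
        term *= lam / (n + 1)   # next Poisson probability
    return total

# Lemma 1:  P(X >= n0) <= lam**n0 / n0!  <=  lam**n0 * e**n0 / n0**n0.
checks = []
for lam in (0.3, 1.0, 2.5):
    for n0 in (1, 2, 5, 10):
        tail = poisson_tail(lam, n0)
        bound1 = lam ** n0 / math.factorial(n0)
        bound2 = lam ** n0 * math.e ** n0 / n0 ** n0
        checks.append(tail <= bound1 <= bound2)
```

All twelve cases satisfy the chain of inequalities.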

Suppose that D ⊂ B, where B is a cube in R^d with side r ≥ 1. Fix a, ℓ > 0. Consider the set of closed cubes of side length ℓ whose centers lie in the scaled lattice aZ^d. Let C_ℓ(B,a) be the subset of such cubes which intersect B.

Suppose that x, y ∈ D is a pair of points whose distance satisfies |x−y| ≤ a. We claim that there exists a cube in C_{3a}(B,a) which contains both x and y. Indeed, since the cubes C_a(B,a) cover B, it follows that x ∈ S_a for some cube S_a ∈ C_a(B,a). Let S_{3a} be the cube with the same center as S_a, but with side length 3a. It is easy to see that every point within distance a of S_a lies in S_{3a}. Therefore, S_{3a} is a cube from the set C_{3a}(B,a) which contains both x and y.

Furthermore, let R ⊂ R^d and let N(R,n) be the number of particles of type n which lie in R. Let |R| be the Lebesgue measure of R. Note that N(R,n) is a Poisson random variable with parameter

    λ(R,n) = |R| ∫_{2^{−(n+1)}}^{2^{−n}} ν(dy)        (7.1)
           = |R| ∫_{2^{−(n+1)}}^{2^{−n}} c_p y^{−(p+1)} dy
           = C_p |R| 2^{np},

where C_p = c_p(2^p − 1)/p.

For a, M > 0, let A(n,m,M,S) be the event that within S ∩ B there is at least one particle of type n and at least M particles of type m, where S is a cube in C_{3a}(B,a). Let S̃ = S ∩ B and note that |S̃| ≤ (a ∧ r)^d, where r is the side of B. We will estimate P(A(n,m,M,S)) for M = 2^k, k ≥ 0. There are two cases.

Case 1: n ≠ m. According to Lemma 1,

    P(A(n,m,M,S)) = P(N(S̃,n) ≥ 1) P(N(S̃,m) ≥ M)
                 ≤ Ce λ(S̃,n) λ(S̃,m)^M e^M / M^M
                 = Ce |S̃|^{M+1} 2^{np+mMp} e^M / M^M
                 ≤ Ce (a ∧ r)^{d(M+1)} 2^{np+mMp} e^M / M^M.

Case 2: n = m.

    P(A(n,m,M,S)) = P(N(S̃,m) ≥ M+1)
                 ≤ C λ(S̃,n)^{M+1} e^M / M^M
                 = C |S̃|^{M+1} 2^{np+mMp} e^M / M^M
                 ≤ C (a ∧ r)^{d(M+1)} 2^{np+mMp} e^M / M^M.
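The intensity computation (7.1) that opened this block can be checked numerically: integrating the Lévy density over the dyadic band [2^{−(n+1)}, 2^{−n}] should reproduce C_p |R| 2^{np} with C_p = c_p(2^p − 1)/p. The values of c_p, p, and |R| below are illustrative, not fixed by the paper.

```python
import math

c_p, p, area = 2.0, 0.7, 1.5     # illustrative c_p, p, and |R|

def lam_numeric(n, m=100_000):
    # Midpoint-rule integral of |R| * c_p * y**(-(p+1)) over the type-n band.
    a, b = 2.0 ** (-(n + 1)), 2.0 ** (-n)
    h = (b - a) / m
    return area * sum(c_p * (a + (i + 0.5) * h) ** (-(p + 1)) * h
                      for i in range(m))

def lam_closed(n):
    C_p = c_p * (2.0 ** p - 1.0) / p
    return C_p * area * 2.0 ** (n * p)

errs = [abs(lam_numeric(n) - lam_closed(n)) / lam_closed(n)
        for n in range(-2, 4)]
```

The relative error is at the level of floating-point quadrature noise for every band tested.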

Next, for some δ > 0, let

    a_{n,m,M} = e^{−1/d} δ^{4/(Md)} 2^{−pn/(dM)} 2^{−pm/d} M^{1/d} 2^{−δ(n+m+k)/(dM)}.

Note that

    a_{n,m,M}^{dM} = δ^4 [2^{np+mMp} e^M / M^M]^{−1} 2^{−δ(n+m+k)},

and therefore, for S ∈ C_{3a}(B,a) with a = a_{n,m,M},

    (a_{n,m,2^k} ∧ r)^{−d} P(A(n,m,M,S))
        ≤ (a_{n,m,2^k} ∧ r)^{−d} (a_{n,m,2^k} ∧ r)^{d(M+1)} 2^{np+mMp} e^M / M^M
        = (a_{n,m,2^k} ∧ r)^{dM} 2^{np+mMp} e^M / M^M
        ≤ δ^4 2^{−δ(n+m+k)}.

Let A(B) be the event that there exist n, m ≥ −N and M = 2^k with k ≥ 0, such that there exists a particle of type n in B, and within distance a_{n,m,M} of this particle there are M particles of type m in B. Note that the number of cubes in C_{3a}(B,a) is the same as the number of cubes in C_a(B,a). This number is less than or equal to a constant times (a ∧ r)^{−d}. Therefore, we have the following. We write ∑_S for the sum over S ∈ C_{3a_{n,m,2^k}}(B, a_{n,m,2^k}). As before, S̃ = S ∩ B.

    P(A(B)) ≤ ∑_{k=0}^∞ ∑_{n,m=−N}^∞ ∑_S P(A(n,m,M,S))
            = ∑_{k=0}^∞ ∑_{n,m=−N}^∞ ∑_S (a_{n,m,2^k} ∧ r)^d (a_{n,m,2^k} ∧ r)^{−d} P(A(n,m,M,S))
            ≤ C ∑_{k=0}^∞ ∑_{n,m=−N}^∞ ∑_S (a_{n,m,2^k} ∧ r)^d δ^4 2^{−δ(n+m+k)}
            = C ∑_{k=0}^∞ ∑_{n,m=−N}^∞ (a_{n,m,2^k} ∧ r)^d δ^4 2^{−δ(n+m+k)} #S
            = C ∑_{k=0}^∞ ∑_{n,m=−N}^∞ δ^4 2^{−δ(n+m+k)}
            = C δ^4 (∑_{k=0}^∞ 2^{−δk}) (∑_{n=−N}^∞ 2^{−δn})²
            = C δ^4 (1/(1 − 2^{−δ})) (2^{δN}/(1 − 2^{−δ}))²
            ≤ C δ,

where C depends on N, but not on δ, as long as δ < 1. Here, #S denotes the number of cubes S that we are considering.

17 Thus, we have shown:

Lemma 2 There exists a constant C > 0 such that the following holds. Fix a ball B ⊂ Rd of radius at least 1, let δ ∈ (0, 1), and let

    a_{n,m,M} = e^{−1/d} δ^{4/(Md)} 2^{−pn/(dM)} 2^{−pm/d} M^{1/d} 2^{−δ(n+m+k)/(dM)}.

Let A be the event that for some n, m, M, there exists a particle of type n in B with M particles of type m in B within distance a_{n,m,M}. Then

    P(A) ≤ Cδ.

7.3 Estimation of the solution for the heat equation

Now we will use Lemma 2 to bound the solution u(t,x). Assume that event A does not occur, where this event was defined in Lemma 2. Taking x as fixed, define A_1 to be the event that for some m, M, there exist M particles of type m within distance a_{0,m,M} of x. The same arguments as before easily lead to the estimate

    P(A_1) ≤ Cδ,

where C is the same constant as in Lemma 2. For the rest of the section, fix n, and consider u_n(t,x). Here n is no longer a subscript for a_{n,m,M}. We will assume that neither A_1 nor A occurs. Also, we write

    x^{(n+1)} = x,    M = 2^k.

Define

    M(n) = {(i_1, …, i_n) ∈ N^n : i_k ≠ i_{k+1} for all 1 ≤ k ≤ n−1}.

Let v(t,·) be the solution of the heat equation on D with initial function u_0, and recall that G(t,x,y) ≤ G(t,x−y). Setting i_{n+1} = n+1 and m_{n+1} = 0, we get

    |u_n(t,x)| 1(A^c ∩ A_1^c)        (7.2)
        ≤ ∑_{(i_1,…,i_n)∈M(n)} ∫_{T_n} ∏_{j=1}^n [G(s_{j+1}−s_j, x^{(i_{j+1})} − x^{(i_j)}) y_{i_j}] |v(s_1, x^{(i_1)})| ds
        ≤ ‖u_0‖_∞ ∑_{k_1,…,k_n=0}^∞ ∑_{m_1,…,m_n=−N}^∞ ∫_{T_n} ∏_{j=1}^n [G(s_{j+1}−s_j, a_{m_{j+1},m_j,2^{k_j}}) 2^{k_j+1} 2^{−m_j}] ds.

We assumed that ‖u_0‖_∞ ≤ 1. It is known that the solution of the heat equation is bounded when the initial function u_0(x) is bounded, and ‖v(t,·)‖_∞ ≤ ‖u_0‖_∞. When summing over all (i_1,…,i_n) ∈ M(n) we sum first over all possible types m_1,…,m_n of the particles x^{(i_1)},…,x^{(i_n)}. For two fixed consecutive particles x^{(i_j)} and x^{(i_{j+1})}, we let k_j be the smallest nonnegative integer k such that
    a_{m_{j+1},m_j,2^k} ≤ |x^{(i_{j+1})} − x^{(i_j)}| < a_{m_{j+1},m_j,2^{k+1}},

where m_{j+1} and m_j are the types of the particles x^{(i_{j+1})} and x^{(i_j)}. Observe that such a k_j always exists. Because we are on the complement of both sets A and A_1, there is no particle of type m_j at a distance less than a_{m_{j+1},m_j,2^0} from x^{(i_{j+1})}. Thus,

    |x^{(i_{j+1})} − x^{(i_j)}| ≥ a_{m_{j+1},m_j,2^0}.

On the other hand, for fixed mj+1 and mj, we have

    lim_{k→∞} a_{m_{j+1},m_j,2^k} = ∞.

Thus for large enough k, we have

    |x^{(i_{j+1})} − x^{(i_j)}| < a_{m_{j+1},m_j,2^k}.

Therefore,

    G(s_{j+1}−s_j, x^{(i_{j+1})} − x^{(i_j)}) ≤ G(s_{j+1}−s_j, a_{m_{j+1},m_j,2^{k_j}}).

Since x^{(i_j)} and x^{(i_{j+1})} are within distance a_{m_{j+1},m_j,2^{k_j+1}} for a fixed k_j and a fixed x^{(i_{j+1})}, there could be at most 2^{k_j+1} − 1 < 2^{k_j+1} particles x^{(i_j)} of type m_j within this distance from x^{(i_{j+1})}. This is the reason why the factor 2^{k_j+1} appears in the previous inequality. Since x^{(i_j)} is of type m_j, we have y_{i_j} ≤ 2^{−m_j}, and this is why 2^{−m_j} appears in (7.2).

Estimating the term G in (7.2), with M = 2^k, we get

    G(r, a_{m_{j+1},m_j,2^k}) = C r^{−d/2} exp(−a²_{m_{j+1},m_j,M} / (4r))

        = C r^{−d/2} exp(−(4r)^{−1} e^{−2/d} δ^{8/(Md)} 2^{−2pm_{j+1}/(dM)} 2^{−2pm_j/d} M^{2/d} 2^{−2δ(m_{j+1}+m_j+k)/(Md)})
        ≤ C r^{−d/2} exp(−(4r)^{−1} e^{−2/d} δ^{8/d} 2^{2[k(M−δ) − m_{j+1}(p+δ) − m_j(pM+δ)]/(dM)}).

Therefore,

    G(r, a_{m_{j+1},m_j,2^k}) ≤ C r^{−d/2} exp(−C_0 r^{−1} 2^{2[k(1−δ) − (m_j + m_{j+1}/M)(p+δ)]/d}),

where C_0 depends on δ. Thus,

    u_n(t,x) ≤ ∑_{k_1,…,k_n=0}^∞ ∑_{m_1,…,m_n=−N}^∞ ∫_{T_n} ∏_{j=1}^n [C (s_{j+1}−s_j)^{−d/2} 2^{k_j − m_j}
        · exp(−C_0 (s_{j+1}−s_j)^{−1} 2^{2[k_j(1−δ) − (m_j + m_{j+1}/M)(p+δ)]/d})] ds.

Rearranging the terms and using 2^{−m_1/(M+1)} ≤ 1, and since m_{n+1} = 0, we get

    u_n(t,x) ≤ ∑_{k_1,…,k_n=0}^∞ ∑_{m_1,…,m_n=−N}^∞ ∫_{T_n} ∏_{j=1}^n [C (s_{j+1}−s_j)^{−d/2} 2^{k_j − (M/(M+1))(m_j + m_{j+1}/M)}        (7.3)
        · exp(−C_0 (s_{j+1}−s_j)^{−1} 2^{2[k_j(1−δ) − (m_j + m_{j+1}/M)(p+δ)]/d})] ds.

We wish to replace the above sums over m_j by integrals. This would involve replacing each variable m_j by a continuous variable w_j taking

values in [m_j, m_j+1]. This would decrease the exponential, but would increase the first power of 2 by a multiplicative factor of 2^n. We would have

    |u_n(t,x)| ≤ ∑_{k_1,…,k_n=0}^∞ ∫_{T_n} ∫_{−N}^∞ ⋯ ∫_{−N}^∞ ∏_{j=1}^n [C (s_{j+1}−s_j)^{−d/2} 2^{k_j − (M/(M+1))(w_j + w_{j+1}/M)}        (7.4)
        · exp(−C_0 (s_{j+1}−s_j)^{−1} 2^{2[k_j(1−δ) − (w_j + w_{j+1}/M)(p+δ)]/d})] dw ds,

where, as with ds, we let dw = dw_1 … dw_n. Next, we wish to make the change of variables

    v_j = w_j + w_{j+1}/M,    j = 1,…,n.

Now w_{n+1} is fixed, and the reader can easily check that the Jacobian determinant for this change of variables is 1. Since M = 2^k ≥ 1 and w_j ≥ −N, we have that v_j ≥ −2N. After making this change of variables, we can put the integral signs over v inside the product as follows:

    |u_n(t,x)| ≤ ∑_{k_1,…,k_n=0}^∞ ∫_{T_n} ∫_{−2N}^∞ ⋯ ∫_{−2N}^∞ ∏_{j=1}^n [C (s_{j+1}−s_j)^{−d/2} 2^{k_j − (M/(M+1))v_j}        (7.5)
        · exp(−C_0 (s_{j+1}−s_j)^{−1} 2^{2[k_j(1−δ) − v_j(p+δ)]/d})] dv ds
    = ∫_{T_n} ∏_{j=1}^n [∑_{k_j=0}^∞ ∫_{−2N}^∞ C (s_{j+1}−s_j)^{−d/2} 2^{k_j − (M/(M+1))v_j}
        · exp(−C_0 (s_{j+1}−s_j)^{−1} 2^{2[k_j(1−δ) − v_j(p+δ)]/d}) dv_j] ds,

where once more, dv = dv_1 … dv_n. Let T_{j,k_j} denote the term inside the final sum in (7.5). Dropping the subscripts on T, s, k and setting r = s_{j+1} − s_j, we can write

    T = C r^{−d/2} ∫_{−2N}^∞ 2^{k − (M/(M+1))v} exp(−C_0 r^{−1} 2^{2[k(1−δ) − v(p+δ)]/d}) dv.        (7.6)

We will split the integral into two ranges for the variable v. For some values of v, the exponential in (7.6) is close to 1. This will happen if the terms following C_0 in (7.6) are less than or equal to 1. We must solve

    r^{−1} 2^{2[k(1−δ) − v(p+δ)]/d} ≤ 1.

Simplifying the above expression, we get

    2^{−2v(p+δ)/d} ≤ r 2^{−2k(1−δ)/d},        (7.7)

or

    2^{−(M/(M+1))v} ≤ r^{Md/(2(M+1)(p+δ))} 2^{−kM(1−δ)/((M+1)(p+δ))}.        (7.8)

Let Λ = Λ(k,d,p) be the value of v for which equality is attained in (7.8). We will split T into two integrals, T = T^{(1)} + T^{(2)}, where

    T^{(1)} = ∫_Λ^∞ I(v) dv,    T^{(2)} = ∫_{−2N}^Λ I(v) dv,

and I(v) is the integrand in (7.6).

Case 1: An estimate for T^{(1)}. By (7.8), and using the fact that

    1/2 ≤ M/(M+1) ≤ 1,

we have

    T^{(1)} = C r^{−d/2} ∫_Λ^∞ 2^{k − (M/(M+1))v} exp(−C_0 r^{−1} 2^{2[k(1−δ) − v(p+δ)]/d}) dv
           ≤ C r^{−d/2} ∫_Λ^∞ 2^{k − (M/(M+1))v} dv
           ≤ C r^{−d/2} 2^{k − (M/(M+1))Λ}
           = C r^{−d/2 + Md/(2(M+1)(p+δ))} 2^{k(1 − M(1−δ)/((M+1)(p+δ)))}
           ≤ C r^{−d/2 + d/(4(p+δ))} 2^{k(1 − M(1−δ)/((M+1)(p+δ)))},

since 0 < p < 1, and if δ is sufficiently small, then (remembering that M = 2^k),

    ∑_{k=0}^∞ 2^{k(1 − M(1−δ)/((M+1)(p+δ)))} = ∑_{k=0}^∞ 2^{k(1 − 2^k(1−δ)/((2^k+1)(p+δ)))} < ∞,

so that

    ∑_{k_j=0}^∞ T^{(1)}_{j,k_j} ≤ C r^{−d/2 + d/(4(p+δ))}.        (7.9)

Case 2: An estimate for T^{(2)}. Now we turn to the case v ≤ Λ. Here, we assume that

    Λ ≥ −2N,

or there will be no values of v for which −2N ≤ v ≤ Λ. We will again drop the subscripts and set r = s_{j+1} − s_j. Since equality is attained in (7.7) when v = Λ, we have

    T^{(2)} = C r^{−d/2} ∫_{−2N}^Λ 2^{k − (M/(M+1))v} exp(−C_0 r^{−1} 2^{2[k(1−δ) − v(p+δ)]/d}) dv
           = C r^{−d/2} ∫_{−2N}^Λ 2^{k − (M/(M+1))v} exp(−C_0 2^{2(Λ−v)(p+δ)/d}) dv.

Changing variables to z = Λ − v, and since M/(M+1) < 1, we see that

    T^{(2)} = C r^{−d/2} ∫_0^{Λ+2N} 2^{k − (M/(M+1))(Λ−z)} exp(−C_0 2^{2z(p+δ)/d}) dz
           ≤ C r^{−d/2} 2^{k − (M/(M+1))Λ} ∫_0^∞ 2^z exp(−C_0 2^{2z(p+δ)/d}) dz
           ≤ C r^{−d/2} 2^{k − (M/(M+1))Λ}
           ≤ C r^{−d/2 + d/(4(p+δ))} 2^{k(1 − M(1−δ)/((M+1)(p+δ)))}.

Once again, if p < 1 and δ is small, then

    ∑_{k=0}^∞ 2^{k(1 − M(1−δ)/((M+1)(p+δ)))} = ∑_{k=0}^∞ 2^{k(1 − 2^k(1−δ)/((2^k+1)(p+δ)))} < ∞,

so that

    T^{(2)} ≤ C r^{−d/2 + d/(4(p+δ))}.        (7.10)

Combining (7.9) and (7.10), if

p < 1 and δ is sufficiently small, then

    T = T^{(1)} + T^{(2)} ≤ C r^{−d/2 + d/(4(p+δ))},

and

    u_n(t,x) ≤ C^n ∫_{T_n} ∏_{i=1}^n (s_{i+1} − s_i)^{−d/2 + d/(4(p+δ))} ds.

Now, a calculation of Hu ([Hu01], proof of Theorem 4.1, equation (4.10)) gives, for β > 0,

    ∫_{T_n} ∏_{k=1}^n (s_{k+1} − s_k)^{−β} ds = t^{n(1−β)} Γ^n(1−β) / Γ(1 + n(1−β)).        (7.11)

We give a quick derivation of Hu's estimate in the appendix, since he merely cites the result. In (7.11), substitute

    β = d/2 − d/(4(p+δ)).
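Identity (7.11) can be verified numerically by reducing the simplex integral one variable at a time: if the inner integral equals A s^a, then ∫_0^τ A s^a (τ−s)^{−β} ds = A B(a+1, 1−β) τ^{a+1−β}, where B is the Beta function. The sketch below performs this reduction (using only math.gamma) and compares the result with the closed form; the values of t and β are arbitrary test values.

```python
import math

def B(x, y):
    # Beta function via the Gamma function: B(x, y) = G(x)G(y)/G(x+y).
    return math.gamma(x) * math.gamma(y) / math.gamma(x + y)

def simplex_integral(n, t, beta):
    # Iterated reduction over the ordered simplex T_n(t), with s_{n+1} = t:
    # if the inner integral is A * s**a, then
    # int_0^tau A * s**a * (tau - s)**(-beta) ds = A * B(a+1, 1-beta) * tau**(a+1-beta).
    A, a = 1.0, 0.0
    for _ in range(n):
        A *= B(a + 1.0, 1.0 - beta)
        a += 1.0 - beta
    return A * t ** a

def hu_closed_form(n, t, beta):
    return (t ** (n * (1 - beta)) * math.gamma(1 - beta) ** n
            / math.gamma(1 + n * (1 - beta)))

t, beta = 1.7, 0.4
errs = [abs(simplex_integral(n, t, beta) - hu_closed_form(n, t, beta))
        for n in range(1, 8)]
```

The product of Beta factors telescopes exactly to Hu's closed form, so the discrepancies are pure floating-point roundoff.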

If β < 1, then Γ(1−β) is defined. With C_n denoting a term of order C^n for some C > 0, we get

    u_n(t,x) ≤ t^{n(1−β)} Γ^n(1−β) / Γ(1 + n(1−β)) ≤ C_n t^{n(1−β)} n^{−n(1−β)},

since only the denominator has order greater than C^n. Thus, if β < 1, and if A and A_1 do not occur,

    u(t,x) = ∑_{n=0}^∞ u_n(t,x) < ∞.

Solving for β < 1, we obtain the requirement that

    d/2 − d/(4(p+δ)) < 1.

For small enough values of δ, this is satisfied if either d ≤ 4 or

    p < 1/2 + 1/(d−2),    d ≥ 5.
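The equivalence just used can be checked by brute force: for p ∈ (0,1), the condition d/2 − d/(4p) < 1 (the δ → 0 form of β < 1) holds for every such p when d ≤ 4, and exactly when p < 1/2 + 1/(d−2) when d ≥ 5. A quick sketch:

```python
# Check: d/2 - d/(4p) < 1 for all p in (0,1) when d <= 4, and exactly
# when p < 1/2 + 1/(d-2) when d >= 5.
def beta_lt_one(p, d):
    return d / 2 - d / (4 * p) < 1

ps = [k / 1000 for k in range(1, 1000)]      # p in (0, 1)
ok_low_d = all(beta_lt_one(p, d) for d in (1, 2, 3, 4) for p in ps)
ok_high_d = all(beta_lt_one(p, d) == (p < 0.5 + 1 / (d - 2))
                for d in (5, 6, 10, 25) for p in ps)
```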

This completes the proof of Theorem 1, for L_K. As mentioned earlier, since P(A_K) = P(L = L_K) → 1 as K → ∞, it follows that Theorem 1 holds almost surely.

8 Proof of the existence part of Theorem 2 (p ≥ 1)

For p ≥ 1, we also wish to replace L by L_K. But in this case our stochastic integrals will no longer be martingales unless we delete the compensator for L − L_K. We define this compensator as follows:

    L̄(dt dx) = dt dx ∫_K^∞ c_p y^{−(p+1)} dy = c_p′ K^{−p} dt dx.

Then, on the event A_K, we can replace (1.1) by

    ∂u/∂t = ∆u + c_p′ K^{−p} u + u ⋄ L̇_K(x)
    u(0,x) = u_0(x).

If we let

    w(t,x) = exp(−c_p′ K^{−p} t) u(t,x),

then w(t,x) solves

    ∂u/∂t = ∆u + u ⋄ L̇_K(x)        (8.1)
    u(0,x) = u_0(x).

Therefore, any conclusions which hold almost surely for w(t,x), for all K, must hold for solutions to (1.1). For ease of notation, we write u(t,x) instead of w(t,x) and L instead of L_K. With this notation, we can prove the following lemma.

Lemma 3 For any q ∈ (p,2] there exists a positive constant C_q such that for any measurable symmetric function f : D^n → R we have:

    E |n! ∫_{D̃_n} f(x^{(1)},…,x^{(n)}) L(dx^{(1)}) … L(dx^{(n)})|^q        (8.2)
        ≤ C_q^n (n!)^{q−1} ∫_{D^n} |f(x^{(1)},…,x^{(n)})|^q dx^{(1)} … dx^{(n)}.

Note that the above lemma gives sufficient conditions for the existence of the multiple stochastic integral I_n(f_n).

Proof. We would like to use the Burkholder-Davis-Gundy inequality, and for this reason we need to expand on our formulation of the multiple stochastic integral given in Section 5, so that we have a martingale. We start with the σ-algebra. Recall that we have chosen the first coordinate of x as our time variable. In particular, we can define a filtration. Let m(D), M(D) be the minimum and maximum values of x_1, the first coordinate of x, for x ∈ D. For z ∈ [m(D), M(D)], we define the σ-algebra H_z as follows.

    H_z = σ(L(A) : A ⊂ {x ∈ D : x_1 ≤ z}).

Next, let R(z,D) = {x ∈ D : x_1 ≤ z}.

Let h(x) be a predictable function with respect to the filtration (H_z). Let

    Y_z = ∫_{R(z,D)} h(x) L(dx).

Then (Y_z, H_z) is a martingale, for appropriate conditions on h, and its quadratic variation is

    ⟨Y⟩_z = ∑_{x^{(i)}_1 ≤ z} h²(x^{(i)}) y_i².

For ease of notation, we drop the subscript z when z = M(D), so that

    Y = Y_{M(D)},    ⟨Y⟩ = ⟨Y⟩_{M(D)}.

The Burkholder-Davis-Gundy inequality now implies that for q > 1, there is a constant C_q depending only on q, such that

    E[|Y|^q] ≤ C_q E[⟨Y⟩^{q/2}].

Recall that for r ≤ 1 and a sequence of nonnegative numbers a_i, the following elementary inequality holds:

    (∑_{i=1}^∞ a_i)^r ≤ ∑_{i=1}^∞ a_i^r.
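This subadditivity inequality is easy to test numerically; the sketch below checks random cases (the small tolerance only guards against floating-point rounding):

```python
import random

random.seed(1)

# Elementary inequality: (sum a_i)**r <= sum a_i**r for 0 < r <= 1, a_i >= 0.
def holds(r, a):
    return sum(a) ** r <= sum(x ** r for x in a) + 1e-12

trials = [holds(random.uniform(0.1, 1.0),
                [random.uniform(0.0, 5.0) for _ in range(10)])
          for _ in range(1000)]
```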

Setting r = q/2, and using our formula for ⟨Y⟩, we get

    E[|Y|^q] ≤ C E[(∑_{i=1}^∞ h²(x^{(i)}) y_i²)^{q/2}]        (8.3)
            ≤ C E[∑_{i=1}^∞ |h(x^{(i)})|^q y_i^q]
            = C ∫_D E|h(x)|^q dx,

provided q > p. The constant C depends on p, q, and K. Applying (8.3) repeatedly, we get

    E |∫_{D̃_n} f(x^{(1)},…,x^{(n)}) L(dx^{(1)}) … L(dx^{(n)})|^q        (8.4)
        ≤ C^n ∫_{D̃_n} |f(x^{(1)},…,x^{(n)})|^q dx^{(1)} … dx^{(n)}.

Note that the repeated application of inequality (8.3) was possible due to the order x^{(1)}_1 < x^{(2)}_1 < ⋯ < x^{(n)}_1 on D̃_n. We first apply (8.3) to the integral with respect to L(dx^{(n)}),

where

    h(x^{(n)}) = ∫_{R̃_{n−1}(x^{(n)}_1)} f(x^{(1)},…,x^{(n−1)},x^{(n)}) L(dx^{(1)}) … L(dx^{(n−1)}),

and

    R̃_{n−1}(y) = {(x^{(1)},…,x^{(n−1)}) ∈ D̃_{n−1} : x^{(n−1)}_1 < y},

And so on, peeling off one noise integral at a time. Using (8.3), and the fact that f is symmetric, we get:
\[
E\Bigl[ \Bigl| n! \int_{\tilde D_n} f(x^{(1)},\dots,x^{(n)})\, L(dx^{(1)}) \cdots L(dx^{(n)}) \Bigr|^q \Bigr]
= (n!)^q\, E\Bigl[ \Bigl| \int_{\tilde D_n} f(x^{(1)},\dots,x^{(n)})\, L(dx^{(1)}) \cdots L(dx^{(n)}) \Bigr|^q \Bigr]
\]
\[
\le (n!)^q C^n \int_{\tilde D_n} \bigl| f(x^{(1)},\dots,x^{(n)}) \bigr|^q\, dx^{(1)} \cdots dx^{(n)}
= (n!)^q C^n \frac{1}{n!} \int_{D^n} \bigl| f(x^{(1)},\dots,x^{(n)}) \bigr|^q\, dx^{(1)} \cdots dx^{(n)}
\]
\[
= C^n (n!)^{q-1} \int_{D^n} \bigl| f(x^{(1)},\dots,x^{(n)}) \bigr|^q\, dx^{(1)} \cdots dx^{(n)}.
\]

Before proving Theorem 2, we present two simple lemmas.

Lemma 4 Let n be a positive integer and q a real number such that q ≥ 1. Let $f : D^n \to \mathbb{R}$ be a function in $L^q(D^n, dx)$, where dx is Lebesgue measure. If $\tilde f$ is the symmetrization of f, then $\tilde f \in L^q(D^n, dx)$ and

\[
\| \tilde f \|_q \le \| f \|_q, \tag{8.6}
\]
where $\| \cdot \|_q$ denotes the norm of the space $L^q(D^n, dx)$.

Proof. For any permutation σ of the set {1, 2, ..., n}, we can consider the function $f_\sigma : D^n \to \mathbb{R}$, defined by
\[
f_\sigma(x^{(1)},\dots,x^{(n)}) = f(x^{(\sigma(1))},\dots,x^{(\sigma(n))}).
\]
Making the change of variables $y^{(1)} = x^{(\sigma(1))}, \dots, y^{(n)} = x^{(\sigma(n))}$, we can see that
\[
\int_{D^n} |f_\sigma(x^{(1)},\dots,x^{(n)})|^q\, dx^{(1)} \cdots dx^{(n)}
= \int_{D^n} |f(x^{(1)},\dots,x^{(n)})|^q\, dx^{(1)} \cdots dx^{(n)}.
\]
Thus for all permutations σ, we have
\[
\| f_\sigma \|_q = \| f \|_q.
\]
Since the symmetrization $\tilde f$ of f is defined as
\[
\tilde f = \frac{1}{n!} \sum_\sigma f_\sigma,
\]
we can apply the triangle inequality to get
\[
\| \tilde f \|_q = \Bigl\| \frac{1}{n!} \sum_\sigma f_\sigma \Bigr\|_q
\le \frac{1}{n!} \sum_\sigma \| f_\sigma \|_q
= \frac{1}{n!} \sum_\sigma \| f \|_q
= \| f \|_q.
\]
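The norm bound (8.6) has a discrete analogue that can be checked directly: symmetrize a function on a finite grid and compare $\ell^q$ norms. The grid size, exponent, and function below are illustrative choices, not taken from the paper:

```python
import itertools
import random

# Discrete analogue of Lemma 4: symmetrize f on a finite product set
# and compare L^q norms (finite sums playing the role of integrals).
random.seed(1)
n, m, q = 3, 4, 2.5
pts = list(itertools.product(range(m), repeat=n))
f = {x: random.uniform(-1.0, 1.0) for x in pts}

# f_sym(x) = (1/n!) sum over permutations sigma of f(x_{sigma(1)}, ..., x_{sigma(n)})
perms = list(itertools.permutations(range(n)))
f_sym = {x: sum(f[tuple(x[s[i]] for i in range(n))] for s in perms) / len(perms)
         for x in pts}

def norm_q(g):
    return sum(abs(g[x]) ** q for x in pts) ** (1.0 / q)

assert norm_q(f_sym) <= norm_q(f) + 1e-12
```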

In the proof of Lemma 1 we used the inequality $e^n \ge n^n/n!$ for all positive integers n. We would like to prove this inequality for all positive real numbers x, and therefore we have to replace n! by Γ(x+1). Thus we will prove the following lemma.

Lemma 5 For any positive real number x, we have

\[
\Gamma(x+1) \ge x^x e^{-x}. \tag{8.7}
\]

Proof. If x > 0, then $t^x \ge x^x$ for all t ≥ x. Thus we have:

\[
\Gamma(x+1) = \int_0^\infty t^x e^{-t}\, dt
\ge \int_x^\infty t^x e^{-t}\, dt
\ge \int_x^\infty x^x e^{-t}\, dt
= x^x \int_x^\infty e^{-t}\, dt
= x^x e^{-x}.
\]
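Inequality (8.7) can be confirmed numerically, in logarithmic form to avoid overflow for large x, using the standard library:

```python
from math import lgamma, log

# Check inequality (8.7) in logarithmic form:
# log Gamma(x+1) >= x log x - x for a range of positive x.
for k in range(1, 2001):
    x = 0.1 * k
    assert lgamma(x + 1.0) >= x * log(x) - x - 1e-9, x
```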

We can now prove Theorem 2. According to inequalities (8.2) and (8.6), we have:

\[
E[|u_n(t,x)|^q] \le C^n (n!)^{q-1} \int_{D^n} \Bigl[ \operatorname{sym}_{x^{(1)},\dots,x^{(n)}} |u_0(x^{(1)})| \int_{T_n} \prod_{k=1}^n G(s_{k+1}-s_k,\, x^{(k+1)}-x^{(k)})\, ds \Bigr]^q dx \tag{8.8}
\]
\[
\le a^q C^n (n!)^{q-1} \int_{D^n} \Bigl[ \int_{T_n} \prod_{k=1}^n G(s_{k+1}-s_k,\, x^{(k+1)}-x^{(k)})\, ds \Bigr]^q dx,
\]
where $a = \| u_0 \|_\infty$ and

\[
T_n = \{ (s_1,\dots,s_n) \in \mathbb{R}^n : 0 < s_1 < \cdots < s_n < t \}.
\]
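Since the $n!$ orderings of the coordinates partition the cube $[0,t]^n$ up to a null set, the simplex $T_n$ has volume $t^n/n!$; a quick Monte Carlo sketch of this fact:

```python
import random
from math import factorial

# Monte Carlo check that the ordered simplex T_n = {0 < s_1 < ... < s_n < t}
# occupies a fraction 1/n! of the cube [0, t]^n, i.e. V(T_n) = t^n / n!.
random.seed(2)
n, t, N = 4, 2.0, 200000
hits = 0
for _ in range(N):
    s = [random.uniform(0.0, t) for _ in range(n)]
    if all(s[i] < s[i + 1] for i in range(n - 1)):
        hits += 1
frac = hits / N
assert abs(frac - 1.0 / factorial(n)) < 0.005  # 1/4! ~ 0.0417
```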

We may assume that $\| u_0 \|_\infty = 1$. Let $V(T_n) = t^n/n!$ be the volume of $T_n$. Using Jensen's inequality, we obtain

\[
\Bigl[ \int_{T_n} \prod_{k=1}^n G(s_{k+1}-s_k,\, x^{(k+1)}-x^{(k)})\, ds \Bigr]^q
= V(T_n)^q \Bigl[ V(T_n)^{-1} \int_{T_n} \prod_{k=1}^n G(s_{k+1}-s_k,\, x^{(k+1)}-x^{(k)})\, ds \Bigr]^q \tag{8.9}
\]
\[
\le V(T_n)^q\, V(T_n)^{-1} \int_{T_n} \prod_{k=1}^n G^q(s_{k+1}-s_k,\, x^{(k+1)}-x^{(k)})\, ds
= \Bigl( \frac{t^n}{n!} \Bigr)^{q-1} \int_{T_n} \prod_{k=1}^n G^q(s_{k+1}-s_k,\, x^{(k+1)}-x^{(k)})\, ds.
\]
Recall that
\[
\int_{\mathbb{R}^d} G^q(r,x)\, dx = C r^{-(q-1)d/2} \int_{\mathbb{R}^d} r^{-d/2} \exp\Bigl( -\frac{q x^2}{4r} \Bigr) dx = C r^{-(q-1)d/2}. \tag{8.10}
\]
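Identity (8.10) can be checked numerically in dimension $d = 1$. The sketch below assumes the heat kernel normalization $G(r,x) = (4\pi r)^{-1/2} e^{-x^2/(4r)}$ (an assumption; any Gaussian normalization only changes the constant $C$), which gives $C = (4\pi)^{-(q-1)/2} q^{-1/2}$:

```python
from math import pi, exp

# Check \int_R G^q(r,x) dx = C r^{-(q-1)/2} in d = 1 by midpoint quadrature,
# assuming G(r,x) = (4 pi r)^{-1/2} exp(-x^2 / (4 r)).
def G(r, x):
    return (4.0 * pi * r) ** -0.5 * exp(-x * x / (4.0 * r))

q, r = 1.7, 0.3
N, X = 200000, 50.0           # midpoint rule on [-X, X]
h = 2.0 * X / N
integral = sum(G(r, -X + (i + 0.5) * h) ** q for i in range(N)) * h
C = (4.0 * pi) ** (-(q - 1.0) / 2.0) * q ** -0.5
assert abs(integral - C * r ** (-(q - 1.0) / 2.0)) < 1e-6
```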

Combining (8.8) and (8.9), we see that $(n!)^{q-1}$ is canceled out by $(1/n!)^{q-1}$, and by using Fubini's theorem and (8.10) repeatedly (i.e. integrating first over $x^{(1)}$, then $x^{(2)}$, and so on), we obtain:

\[
E[|u_n(t,x)|^q] \le C^n t^{n(q-1)} \int_{T_n} \prod_{k=1}^n \Bigl[ \int_D G^q(s_{k+1}-s_k,\, x^{(k+1)}-x^{(k)})\, dx^{(k)} \Bigr] ds \tag{8.11}
\]
\[
\le C^n t^{n(q-1)} \int_{T_n} \prod_{k=1}^n \Bigl[ \int_{\mathbb{R}^d} G^q(s_{k+1}-s_k,\, x^{(k+1)}-x^{(k)})\, dx^{(k)} \Bigr] ds
= C^n t^{n(q-1)} \int_{T_n} \prod_{k=1}^n (s_{k+1}-s_k)^{-(q-1)d/2}\, ds.
\]
Now, an estimate of Hu (7.11) gives

\[
\int_{T_n} \prod_{k=1}^n (s_{k+1}-s_k)^{-(q-1)d/2}\, ds
= \frac{ t^{\,n(1-(d/2)(q-1))}\, \Gamma\bigl(1 - \tfrac{d}{2}(q-1)\bigr)^n }{ \Gamma\bigl(1 + n\bigl(1 - \tfrac{d}{2}(q-1)\bigr)\bigr) }.
\]
If d < 2/(q−1), then Γ(1 − (d/2)(q−1)) is defined. For d < 2/(q−1), let α = 1 − (d/2)(q−1) > 0. Then from (8.11) we conclude that
\[
E[|u_n(t,x)|^q] \le \frac{\rho^n}{\Gamma(1+n\alpha)}, \tag{8.12}
\]
for some constant ρ depending on t, q, p, and K. Using the well-known inequality $(b+c)^q \le 2^{q-1}(b^q + c^q)$, valid for all q ≥ 1 and nonnegative numbers b and c, an easy induction argument shows that for $a_n \ge 0$ and $c = 2^{q-1}$,

\[
\Bigl( \sum_{n=0}^{\infty} a_n \Bigr)^q \le \sum_{n=0}^{\infty} c^{\,n+1} a_n^q.
\]
Applying this inequality to u(t,x), we find

\[
E[|u(t,x)|^q] = E\Bigl[ \Bigl| \sum_{n=0}^{\infty} u_n(t,x) \Bigr|^q \Bigr]
\le E\Bigl[ \sum_{n=0}^{\infty} c^{\,n+1} |u_n(t,x)|^q \Bigr]
= \sum_{n=0}^{\infty} c^{\,n+1} E[|u_n(t,x)|^q]
\le c \sum_{n=0}^{\infty} \frac{1}{\Gamma(1+n\alpha)} (c\rho)^n. \tag{8.13}
\]
Using now (8.7) we obtain:

\[
E[|u(t,x)|^q] \le c \sum_{n=0}^{\infty} \frac{e^{n\alpha}}{(n\alpha)^{n\alpha}} (c\rho)^n. \tag{8.14}
\]
Thus, we can see from (8.14) that if d < 2/(q−1) (or equivalently q < 1 + (2/d)) and p < q, then the series converges, so $E[|u(t,x)|^q] < \infty$.
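The elementary inequality $(\sum_{n \ge 0} a_n)^q \le \sum_{n \ge 0} c^{n+1} a_n^q$ with $c = 2^{q-1}$, used in (8.13), can be sanity-checked on random finite sequences:

```python
import random

# Check (sum a_n)^q <= sum c^(n+1) a_n^q with c = 2^(q-1), q >= 1,
# on random finite nonnegative sequences.
random.seed(3)
for _ in range(1000):
    q = random.uniform(1.0, 4.0)
    c = 2.0 ** (q - 1.0)
    a = [random.uniform(0.0, 2.0) for _ in range(random.randint(1, 20))]
    lhs = sum(a) ** q
    rhs = sum(c ** (n + 1) * a[n] ** q for n in range(len(a)))
    assert lhs <= rhs * (1.0 + 1e-9) + 1e-9, (q, lhs, rhs)
```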

Let Π be the random Poisson set $\{(x^{(i)}, y_i)\}_{i \ge 1}$. For any positive integer k we define the event:

\[
A_k = \{ \omega \in \Omega : \forall\, (x,y) \in \Pi(\omega),\ y \le k \}.
\]

That is, $A_k$ is the event that all atoms have masses less than or equal to k.

Thus if p < 1 + (2/d), then for all q ∈ (p, 1 + (2/d)) and for all k ≥ 1,
\[
E\bigl[ |u(t,x)|^q \mathbf{1}_{A_k} \bigr] < \infty.
\]
Since the expectation is integration with respect to a probability (hence finite) measure, it follows that for any q′ and q such that 1 ≤ q′ ≤ p < q < 1 + (2/d),

\[
\bigl( E[\, |u(t,x) \mathbf{1}_{A_k}|^{q'} \,] \bigr)^{1/q'} \le \bigl( E[\, |u(t,x) \mathbf{1}_{A_k}|^{q} \,] \bigr)^{1/q} < \infty.
\]
Thus the solution u(t,x) exists for all t ≥ 0 and all x ∈ D, and belongs to the space $L^q_{loc}(\Omega)$ for all 1 ≤ q < 1 + (2/d). This completes the proof of Theorem 2, if u(t,x) means w(t,x) and L means $L_K$. As mentioned at the beginning of Section 8, this proof carries over to u(t,x) and L, since $P(A_K) = P(L = L_K) \to 1$ as $K \to \infty$.
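The last bound above is the monotonicity of $L^q$ norms on a probability space (a consequence of Jensen's inequality); a Monte Carlo illustration with an arbitrary bounded random variable:

```python
import random

# On a probability space, E[|X|^{q'}]^{1/q'} <= E[|X|^q]^{1/q} for q' <= q.
# Illustrated with empirical averages: the uniform measure on a finite
# sample is itself a probability measure, so the inequality holds exactly.
random.seed(4)
xs = [random.uniform(-3.0, 3.0) for _ in range(100000)]

def Lq(q):
    return (sum(abs(x) ** q for x in xs) / len(xs)) ** (1.0 / q)

q1, q2 = 1.3, 2.8
assert Lq(q1) <= Lq(q2) + 1e-12
```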

9 Uniqueness

9.1 The Main Lemma

We consider both cases p < 1 and p ≥ 1. Suppose u(t,x) is a solution to (1.1) in case p ≥ 1, or to (1.2) in case p < 1. Then we have

\[
u(t,x) = \sum_{n=0}^{\infty} g_n(t,x) \tag{9.1}
\]
where $g_n \in \mathcal{J}_n$ in case p < 1, or $g_n \in \mathcal{I}_n$ in case p ≥ 1.

Lemma 6 Let u(t,x) be as in (9.1). If the gn are uniquely determined from u, then uniqueness holds for (1.1) or (1.2).

Proof. Both cases have the same proof, so we will only deal with the case p ≥ 1. Suppose there are two solutions $u_i(t,x)$ for i = 1, 2, where
\[
u_i(t,x) = \sum_{n=0}^{\infty} g_n^{(i)}(t,x)
\]
and $g_n^{(i)}(t,x) \in \mathcal{I}_n$. Let

\[
Y(t,x) = u_1(t,x) - u_2(t,x) = \sum_{n=0}^{\infty} g_n(t,x),
\]
where $g_n(t,x) = g_n^{(1)}(t,x) - g_n^{(2)}(t,x)$. Then Y(t,x) is also a solution to (1.1) with initial condition Y(0,x) = 0, so by (6.2), we have

\[
\sum_{n=0}^{\infty} g_n(t,x) = Y(t,x)
= \int_D \Bigl( \int_0^t G(t-s,x,y)\, Y(s,y)\, ds \Bigr) \diamond L(dy)
\]
\[
= \sum_{n=0}^{\infty} \int_D \Bigl( \int_0^t G(t-s,x,y)\, g_n(s,y)\, ds \Bigr) \diamond L(dy)
= \sum_{n=1}^{\infty} \tilde g_n(t,x)
\]
for all $x \in \mathbb{R}^d$, $t \in (0, T]$, where

\[
\tilde g_n(t,x) = \int_D \Bigl( \int_0^t G(t-s,x,y)\, g_{n-1}(s,y)\, ds \Bigr) \diamond L(dy) \in \mathcal{I}_n. \tag{9.2}
\]

Since we are assuming that Y(t,x) uniquely determines both $g_n(t,x)$ and $\tilde g_n(t,x)$, it follows that $g_0(t,x) = 0$ and

\[
g_n(t,x) = \tilde g_n(t,x) \tag{9.3}
\]
for any positive value of n. Thus, (9.2) implies that $\tilde g_1(t,x) = 0$, and hence by (9.3) $g_1(t,x) = 0$. An induction argument now yields $g_n(t,x) = 0$ for all values of n.

9.2 Proof of Uniqueness for p < 1

For simplicity, in this proof we will assume from the beginning that $\dot L$ is a stable noise (not $\dot L_K$), without truncation of large atoms. We begin by ordering the atoms of $\dot L$ according to size, so that $y_1 > y_2 > \cdots$. For z ∈ (1, 2), let $R_i(z)\dot L$ be the transformation of the noise $\dot L$ which replaces $y_i$ with $z y_i$.

Lemma 7 Fix z ∈ (1, 2), i ≥ 1. Let

\[
\dot M = R_i(z) \dot L.
\]

Also, let $P_M$, $P_L$ be the probability measures on the canonical probability space of atomic measures, generated by $\dot M$ and $\dot L$, respectively. Then $P_M \ll P_L$.

Proof. For any k ≥ 1,n ≥ 0 define an event

\[
A_{k,n} \equiv \bigl\{ \text{there are exactly } n \text{ atoms of } \dot L \text{ with masses greater than or equal to } 1/k \bigr\}.
\]

Note that for any k ≥ 1,
\[
\bigcup_{n=0}^{\infty} A_{k,n} = \Omega, \qquad A_{k,l} \cap A_{k,m} = \emptyset \quad \forall\, m \ne l.
\]
It is obvious that
\[
\lim_{k \to \infty} P_L\Bigl( \bigcup_{l=0}^{i} A_{k,l} \Bigr) = 0. \tag{9.4}
\]
Fix δ > 0 arbitrarily small. By (9.4) we can fix k sufficiently large such that
\[
P_L\Bigl( \bigcup_{l=0}^{i} A_{k,l} \Bigr) \le \delta. \tag{9.5}
\]
Define
\[
\bar A_{k,i} \equiv \Bigl( \bigcup_{l=0}^{i} A_{k,l} \Bigr)^c = \bigcup_{l=i+1}^{\infty} A_{k,l}. \tag{9.6}
\]
We will now show that

\[
P_M \ll P_L \quad \text{on } \bar A_{k,i}. \tag{9.7}
\]
Let us decompose $\dot L$ as follows: $\dot L = \dot L^{1,k} + \dot L^{2,k}$, where $\dot L^{1,k}$ includes atoms of size y ≥ 1/k and $\dot L^{2,k}$ includes atoms of size y < 1/k. Note that $\dot L^{1,k}$, $\dot L^{2,k}$ are independent, and
\[
R_i(z) \dot L = R_i(z) \dot L^{1,k} + \dot L^{2,k} \quad \text{on } \bar A_{k,i}.
\]
Hence, if we define $\dot M_1 = R_i(z) \dot L^{1,k}$, then, in order to get (9.7), it suffices to show that
\[
P_{M_1} \ll P_{L^{1,k}} \quad \text{on } A_{k,l}, \ \forall\, l \ge i+1, \tag{9.8}
\]
where $P_{M_1}$, $P_{L^{1,k}}$ are the measures induced by $\dot M_1$, $\dot L^{1,k}$ respectively. Fix an arbitrary l ≥ i+1. On $A_{k,l}$, $\dot M_1$ and $\dot L^{1,k}$ have l atoms at the same locations. Therefore, to get the desired absolute continuity we just need to show the absolute continuity of the laws of the corresponding atom sizes. To be more precise, from now on we assume that $A_{k,l}$ occurs. Let $\hat Y_1^L, \dots, \hat Y_l^L$ (resp. $\hat Y_1^M, \dots, \hat Y_l^M$) be the unordered sizes of the atoms of $\dot L^{1,k}$ (resp. $\dot M_1$). Then $(\hat Y_1^L, \dots, \hat Y_l^L)$ and $(\hat Y_1^M, \dots, \hat Y_l^M)$ are continuous l-dimensional random variables whose laws are supported on $(1/k, \infty)^l$. Moreover, the probability density function of $(\hat Y_1^L, \dots, \hat Y_l^L)$ is positive at any point of $(1/k, \infty)^l$. This immediately implies that, conditioned on $A_{k,l}$, the probability law of $(\hat Y_1^M, \dots, \hat Y_l^M)$ is absolutely continuous with respect to the law of $(\hat Y_1^L, \dots, \hat Y_l^L)$, and since l ≥ i+1 was arbitrary, (9.8) and (9.7) follow.

The end of the proof of the lemma is trivial. Let B be any event such that $P_L(B) = 0$. Then
\[
P_M(B) = \sum_{l=0}^{i} P_M(B \cap A_{k,l}) + \sum_{l=i+1}^{\infty} P_M(B \cap A_{k,l})
= \sum_{l=0}^{i} P_M(B \cap A_{k,l})
\le P_M\Bigl( \bigcup_{l=0}^{i} A_{k,l} \Bigr), \tag{9.9}
\]
where the second equality follows from (9.7). Now it is easy to check the following equality of events:

\[
\bigl\{ \text{there are no more than } i \text{ atoms of } \dot L \text{ with masses } y \ge 1/k \bigr\}
= \bigl\{ \text{there are no more than } i \text{ atoms of } \dot M \text{ with masses } y \ge 1/k \bigr\}.
\]
Therefore, we get

\[
P_M\Bigl( \bigcup_{l=0}^{i} A_{k,l} \Bigr) = P_L\Bigl( \bigcup_{l=0}^{i} A_{k,l} \Bigr).
\]
Thus, (9.5) and (9.9) imply that

PM (B) ≤ δ.

Since δ > 0 was arbitrary, we get PM (B) = 0 and the result follows.

By Lemma 6 the following lemma gives the required uniqueness for the case p< 1.

Lemma 8 Let p < 1. In (9.1), the functions gn are uniquely deter- mined.

Proof. We argue by contradiction. Suppose Lemma 8 were false. Then by subtracting the two series, we form a series

\[
G \equiv \sum_{n=0}^{\infty} g_n = 0,
\]
where $g_n \in \mathcal{J}_n$, and at least one of the $g_n$ is not identically 0. In fact, we may assume that

\[
g_n = \sum_{(i_1,\dots,i_n) \in J_n} y_{i_1} \cdots y_{i_n}\, h_n(x_{i_1},\dots,x_{i_n})
\]
with
\[
\sum_{n=0}^{\infty} \sum_{(i_1,\dots,i_n) \in J_n} y_{i_1} \cdots y_{i_n}\, |h_n(x_{i_1},\dots,x_{i_n})| < \infty \tag{9.10}
\]
with probability 1. Actually, each solution is a difference of two series for which the functions $h_n$ are nonnegative.

Recall that $\{y_1 > y_2 > \dots\}$ are the ordered sizes of the atoms of our noise $\dot L$. Fix an arbitrary n ≥ 1. Our goal is to show that, almost surely, $y_1 \cdots y_n h_n(x_1,\dots,x_n) = 0$, from which it follows that $h_n = 0$ almost everywhere. By Lemma 7, for $z_i \in (1, 2)$, the probability induced by $R_i(z_i)\dot L$ is absolutely continuous with respect to the probability induced by $\dot L$. Furthermore, $R_i(z_i)$ extends to an operator on $\mathcal{J}$, as follows. We set

\[
R_i(z_i) \sum_{n=0}^{\infty} g_n \equiv \sum_{n=0}^{\infty} R_i(z_i) g_n,
\]
where we define $R_i(z_i)g_n$ to be a multiple stochastic integral with $\dot L$ replaced by $R_i(z_i)\dot L$. Therefore, if $G = \sum_{n=0}^{\infty} g_n = 0$ almost surely, then $R_i(z_i) \sum_{n=0}^{\infty} g_n \equiv \sum_{n=0}^{\infty} R_i(z_i) g_n = 0$ almost surely as well. Furthermore, if $z_i \in (1, 2)$, then

\[
R_1(z_1) \cdots R_m(z_m) G
\]
is an analytic function in $z_1, \dots, z_m$. Let $\Lambda_{n,m}G$ be the terms in the expansion of this analytic function which contain $z_1 \cdots z_n$ but no other $z_i$. Note that each $\Lambda_{n,m}G$ equals 0 almost surely, and that

\[
\lim_{m \to \infty} \Lambda_{n,m} G = y_1 \cdots y_n\, h_n(x_1,\dots,x_n) = 0
\]
almost surely. In fact, the convergence to 0 follows from (9.10), since the operator $\Lambda_{n,m}$ removes more and more terms as m increases. This proves Lemma 8.

9.3 Proof of Uniqueness for p ≥ 1

By Lemma 6 the following lemma gives the required uniqueness for the case p ≥ 1.

Lemma 9 Let p ≥ 1. In (9.1), the functions gn are uniquely deter- mined.

Proof. Fix (t,x) and suppose that u1(t,x), u2(t,x) are two solu- tions. Subtracting, we obtain

\[
Y(t,x) = u_1(t,x) - u_2(t,x)
= \sum_{n=0}^{\infty} g_n^{(1)}(t,x) - \sum_{n=0}^{\infty} g_n^{(2)}(t,x)
=: \sum_{n=0}^{\infty} \tilde g_n(t,x),
\]
where $g_n^{(i)}(t,x)$ corresponds to the solution $u_i(t,x)$. Also, each $\tilde g_n$ can be expressed as a stochastic integral

\[
\tilde g_n(t,x) = \int_{D^n} f_n(t,x;\, x^{(1)},\dots,x^{(n)})\, L(dx^{(1)}) \cdots L(dx^{(n)}).
\]
Adding these terms, and using the notation from the beginning of Section 8, we find that

\[
Y(t,x) = f_0(t,x) + \int_{R(M(D),D)} \tilde f_1(t,x;\, y)\, L(dy),
\]
where

\[
\tilde f_1(t,x;\, y) = f_1(t,x;\, y) + \sum_{n=2}^{\infty} \int_{\tilde R_{n-1}(y)} f_n(t,x;\, x^{(1)},\dots,x^{(n-1)}, y)\, L(dx^{(1)}) \cdots L(dx^{(n-1)}),
\]
and $\tilde R_{n-1}(y)$ was defined in (8.5). Since Y(t,x) = 0 and

\[
z \mapsto \int_{R(z,D)} \tilde f_1(t,x;\, y)\, L(dy)
\]
is a local $\mathcal{H}_z$-martingale, it follows that $f_0(t,x) = 0$ and that the integrand $\tilde f_1(t,x;\, y) = 0$ for almost every value of y. But $\tilde f_1(t,x;\, y)$ is itself a sum of a deterministic function and a stochastic integral. Therefore we may use the same argument to show that $f_1(t,x;\, y) = 0$ for almost every value of y. Continuing, we can use induction to show that for each value of n, $f_n(t,x;\, y_1,\dots,y_n) = 0$ for almost every value of $y_1,\dots,y_n$.

A Derivation of Hu's Estimate

In the proof of Theorem 4.1, equation (4.10) of [Hu02], Hu gives the following estimate (in his equation, α = 1 − d/4, and we use β = d/4). Let
\[
H_n(t) := \int_{T_n(t)} \prod_{k=1}^n (s_{k+1}-s_k)^{-\beta}\, ds.
\]
Then
\[
H_n(t) = \frac{ t^{\,n(1-\beta)}\, \Gamma(1-\beta)^n }{ \Gamma(1+n(1-\beta)) }. \tag{A.1}
\]
(Hu uses β = γ/4. Also, Hu has 2 instead of 1 in the above formula. His answer is equivalent if we ignore multiplicative factors of n.) Making the change of variables s = αr, ds = α dr, we find

\[
H_n(\alpha t) = \int_{T_n(\alpha t)} \prod_{k=1}^n (s_{k+1}-s_k)^{-\beta}\, ds
= \alpha^{-n\beta}\, \alpha^{n}\, H_n(t)
= \alpha^{n(1-\beta)} H_n(t).
\]

Then we get the recurrence

\[
H_{n+1}(t) = \int_0^t (t-s)^{-\beta} H_n(s)\, ds
= \int_0^t (t-s)^{-\beta} s^{n(1-\beta)} H_n(1)\, ds
\]
\[
= H_n(1) \int_0^t (t-s)^{-\beta} s^{n(1-\beta)}\, ds
= t^{-n(1-\beta)} H_n(t) \int_0^t (t-s)^{-\beta} s^{n(1-\beta)}\, ds.
\]
Let

\[
a = \beta, \qquad b = n(1-\beta).
\]

Recall that the Euler beta function is defined as follows, for a, b > 0.

\[
B(a, b) := \int_0^1 (1-x)^{a-1} x^{b-1}\, dx.
\]

\[
B(a, b) = \frac{\Gamma(a)\, \Gamma(b)}{\Gamma(a+b)}.
\]

Changing variables, we get

\[
\int_0^t (t-s)^{-a} s^{b}\, ds = \frac{ t^{\,1-a+b}\, \Gamma(1-a)\, \Gamma(1+b) }{ \Gamma(2-a+b) }.
\]
Therefore,
\[
H_{n+1}(t) = t^{-n(1-\beta)} H_n(t)\, t^{\,1-\beta+n(1-\beta)}\, \frac{ \Gamma(1-\beta)\, \Gamma(1+n(1-\beta)) }{ \Gamma(2-\beta+n(1-\beta)) }
= t^{\,1-\beta} H_n(t)\, \frac{ \Gamma(1-\beta)\, \Gamma(1+n(1-\beta)) }{ \Gamma(1+(n+1)(1-\beta)) }.
\]

The product telescopes. Since $H_1(t) = t^{1-\beta}/(1-\beta) = t^{1-\beta}\, \Gamma(1-\beta)/\Gamma(2-\beta)$, we get (A.1).
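Both the shifted Beta integral and the closed form (A.1) can be verified numerically; the quadrature below handles the integrable singularity at $s = t$ with a fine midpoint rule (the parameter values are arbitrary test choices):

```python
from math import lgamma, exp

# 1) Check \int_0^t (t-s)^{-a} s^b ds = t^{1-a+b} Gamma(1-a) Gamma(1+b) / Gamma(2-a+b)
#    by midpoint quadrature (a < 1, so the singularity at s = t is integrable).
a, b, t = 0.25, 1.5, 2.0
N = 400000
h = t / N
quad = sum((t - (i + 0.5) * h) ** -a * ((i + 0.5) * h) ** b for i in range(N)) * h
closed = t ** (1 - a + b) * exp(lgamma(1 - a) + lgamma(1 + b) - lgamma(2 - a + b))
assert abs(quad - closed) / closed < 1e-3

# 2) Check that H_n(t) = t^{n(1-beta)} Gamma(1-beta)^n / Gamma(1+n(1-beta))
#    satisfies the recurrence H_{n+1}(t) = t^{1-beta} H_n(t)
#    * Gamma(1-beta) Gamma(1+n(1-beta)) / Gamma(1+(n+1)(1-beta)).
beta, t = 0.4, 1.7

def H(n):
    return t ** (n * (1 - beta)) * exp(n * lgamma(1 - beta) - lgamma(1 + n * (1 - beta)))

for n in range(1, 10):
    step = t ** (1 - beta) * exp(lgamma(1 - beta) + lgamma(1 + n * (1 - beta))
                                 - lgamma(1 + (n + 1) * (1 - beta)))
    assert abs(H(n + 1) - H(n) * step) < 1e-12 * H(n + 1)
```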

References

[AGHKH88] S. Albeverio, F. Gesztesy, R. Høegh-Krohn, and H. Holden. Solvable models in quantum mechanics. Texts and Monographs in Physics. Springer-Verlag, New York, 1988.

[Ber96] J. Bertoin. Lévy processes, volume 121 of Cambridge Tracts in Mathematics. Cambridge University Press, Cambridge, 1996.

[Daw93] D. A. Dawson. Measure-valued Markov processes. In École d'été de probabilités de Saint-Flour, XXI-1991, P. L. Hennequin, editor, number 1180 in Lecture Notes in Mathematics, pages 1-260, Berlin, Heidelberg, New York, 1993. Springer-Verlag.

[Hu01] Y. Hu. Heat equations with fractional white noise potentials. Appl. Math. Optim., 43(3):221-243, 2001.

[Hu02] Y. Hu. Chaos expansion of heat equations with white noise potentials. Potential Anal., 16(1):45-66, 2002.

[Jan97] S. Janson. Gaussian Hilbert spaces, volume 129 of Cambridge Tracts in Mathematics. Cambridge University Press, Cambridge, 1997.

[KXHR94] G. Kallianpur, J. Xiong, G. Hardy, and S. Ramasubramanian. The existence and uniqueness of solutions of nuclear space-valued stochastic differential equations driven by Poisson random measures. Stochastics Stochastics Rep., 50(1-2):85-122, 1994.

[Lyt03] E. Lytvynov. Orthogonal decompositions for Lévy processes with an application to the gamma, Pascal, and Meixner processes. Infin. Dimens. Anal. Quantum Probab. Relat. Top., 6(1):73-102, 2003.

[MR98] R. Mikulevicius and B. Rozovskii. Linear parabolic stochastic PDEs and Wiener chaos. SIAM J. Math. Anal., 29(2):452-480 (electronic), 1998.

[Mue98] C. Mueller. The heat equation with Lévy noise. Stochastic Process. Appl., 74(1):67-82, 1998.

[Myt02] L. Mytnik. Stochastic partial differential equation driven by stable noise. Probab. Theory Related Fields, 123(2):157-201, 2002.

[NR97] D. Nualart and B. Rozovskii. Weighted stochastic Sobolev spaces and bilinear SPDEs driven by space-time white noise. J. Funct. Anal., 149(1):200-225, 1997.

[NZ89] D. Nualart and M. Zakai. Generalized Brownian functionals and the solution to a stochastic partial differential equation. J. Funct. Anal., 84:279-296, 1989.

[Par93] E. Pardoux. Stochastic partial differential equations, a review. Bull. Sc. Math., 117:29-47, 1993.

[RW86] J. Rosiński and W. A. Woyczyński. On Itô stochastic integration with respect to p-stable motion: inner clock, integrability of sample paths, double and multiple integrals. Ann. Probab., 14:271-286, 1986.
