
Potential Theory in Classical Probability

Nicolas Privault

Department of Mathematics, City University of Hong Kong, 83 Tat Chee Avenue, Kowloon Tong, Hong Kong. [email protected]

Summary. These notes are an elementary introduction to classical potential theory and to its connection with probabilistic tools such as stochastic calculus and the Markov property. In particular we review the probabilistic interpretations of harmonicity, of the Dirichlet problem, and of the Poisson equation using Brownian motion and stochastic calculus.

Key words: Potential theory, harmonic functions, Markov processes, stochastic calculus, partial differential equations.

Mathematics Subject Classification: 31-01, 60J45.

1 Introduction

The origins of potential theory can be traced to the physical problem of reconstructing a repartition of electric charges inside a planar or a spatial domain, given the measurement of the electrical field created on the boundary of this domain.

In mathematical analytic terms this amounts to representing the values of a function h inside a domain, given the data of the values of h on the boundary of the domain. In the simplest case of a domain empty of electric charges, the problem can be formulated as that of finding a harmonic function h on E (roughly speaking, a function with vanishing Laplacian, see §2.2 below), given its values prescribed by a function f on the boundary ∂E, i.e. as the Dirichlet problem:

$$\begin{cases} \Delta h(y) = 0, & y \in E,\\ h(y) = f(y), & y \in \partial E. \end{cases}$$

Close connections between the notion of potential and the Markov property have been observed at early stages of the development of the theory, see e.g. [Doo84] and references therein. Thus a number of potential theoretic problems have a probabilistic interpretation or can be solved by probabilistic methods.

These notes aim at gathering both analytic and probabilistic aspects of potential theory into a single document. We partly follow the point of view of Chung [Chu95] with complements on analytic potential theory coming from Helms [Hel69], some additions on stochastic calculus, and probabilistic applications found in Bass [Bas98].

More precisely, we proceed as follows. In Section 2 we give a summary of classical analytic potential theory: Green kernels, Laplace and Poisson equations in particular, following mainly Brelot [Bre65], Chung [Chu95] and Helms [Hel69]. Section 3 introduces the Markovian setting of semigroups, which will be the main framework for probabilistic interpretations. A sample of references used in this domain is Ethier and Kurtz [EK86], Kallenberg [Kal02], and also Chung [Chu95]. The probabilistic interpretation of potential theory also makes significant use of Brownian motion and stochastic calculus. They are summarized in Section 4, see Protter [Pro05], Ikeda and Watanabe [IW89]; however our presentation of stochastic calculus is given in the framework of normal martingales due to their links with quantum stochastic calculus, cf. Biane [Bia93]. In Section 5 we present the probabilistic connection between potential theory and Markov processes, following Bass [Bas98], Dynkin [Dyn65], Kallenberg [Kal02], and Port and Stone [PS78]. Our description of the Martin boundary in discrete time follows that of Revuz [Rev75].

2 Analytic Potential Theory

2.1 Electrostatic Interpretation

Let E denote a closed region of R^n, more precisely a compact subset having a smooth boundary ∂E with surface measure σ. Gauss's law is the main tool for determining a repartition of electric charges inside E, given the values of the electrical field created on ∂E. It states that given a repartition of charges q(dx), the flux of the electric field U across the boundary ∂E is proportional to the sum of electric charges enclosed in E. Namely we have

$$\int_E q(dx) = \varepsilon_0 \int_{\partial E} \langle n(x), U(x)\rangle\,\sigma(dx), \tag{2.1}$$

where q(dx) is a signed measure representing the distribution of electric charges, ε_0 > 0 is the electrical permittivity constant, U(x) denotes the electric field at x ∈ ∂E, and n(x) represents the outer unit vector orthogonal to the surface ∂E (i.e. oriented towards the exterior of E).

On the other hand the divergence theorem, which can be viewed as a particular case of the Stokes theorem, states that if U : E → R^n is a C¹ vector field we have

$$\int_E \operatorname{div} U(x)\,dx = \int_{\partial E} \langle n(x), U(x)\rangle\,\sigma(dx), \tag{2.2}$$

where the divergence div U is defined as

$$\operatorname{div} U(x) = \sum_{i=1}^n \frac{\partial U_i}{\partial x_i}(x).$$

The divergence theorem (2.2) can be interpreted as a mathematical formulation of Gauss's law (2.1). Under this identification, div U(x) is proportional to the density of charges inside E, which leads to the Maxwell equation

$$\varepsilon_0 \operatorname{div} U(x)\,dx = q(dx), \tag{2.3}$$

where q(dx) is the distribution of electric charge at x and U is viewed as the induced electric field on the surface ∂E.

When q(dx) has the density q(x) at x, i.e. q(dx) = q(x)dx, and the field U(x) derives from a potential V : R^n → R_+, i.e. when

$$U(x) = \nabla V(x), \quad x \in E,$$

Maxwell's equation (2.3) takes the form of the Poisson equation:

$$\varepsilon_0\, \Delta V(x) = q(x), \quad x \in E, \tag{2.4}$$

where the Laplacian Δ = div ∇ is given by

$$\Delta V(x) = \sum_{i=1}^n \frac{\partial^2 V}{\partial x_i^2}(x), \quad x \in E.$$

In particular, when the domain E is empty of electric charges, the potential V satisfies the Laplace equation

$$\Delta V(x) = 0, \quad x \in E.$$

As mentioned in the introduction, a typical problem in classical potential theory is to recover the values of the potential V(x) inside E from its values on the boundary ∂E, given that V(x) satisfies the Poisson equation (2.4). This can be achieved in particular by representing V(x), x ∈ E, as an integral with respect to the surface measure over the boundary ∂E, or by direct solution of the Poisson equation for V(x).

Consider for example the Newton potential kernel

$$V(x) = \frac{q}{\varepsilon_0 s_n}\,\frac{1}{\|x-y\|^{n-2}}, \quad x \in \mathbb{R}^n \setminus \{y\},$$

created by a single charge q at y ∈ R^n, where s_2 = 2π, s_3 = 4π, and in general

$$s_n = \frac{2\pi^{n/2}}{\Gamma(n/2)}, \quad n \ge 2,$$

is the surface of the unit (n−1)-dimensional sphere in R^n. The electrical field created by V is

$$U(x) := \nabla V(x) = -\frac{q}{\varepsilon_0 s_n}\,\frac{x-y}{\|x-y\|^{n-1}}, \quad x \in \mathbb{R}^n \setminus \{y\},$$

cf. Figure 2.1. Letting B(y,r), resp. S(y,r), denote the open ball, resp. the sphere, of center y ∈ R^n and radius r > 0, we have

$$\int_{B(y,r)} \Delta V(x)\,dx = \int_{S(y,r)} \langle n(x), \nabla V(x)\rangle\,\sigma(dx) = \int_{S(y,r)} \langle n(x), U(x)\rangle\,\sigma(dx) = \frac{q}{\varepsilon_0},$$

where σ denotes the surface measure on S(y,r).

Fig. 2.1. Electrical field generated by a point mass at y = 0.

From this and the Poisson equation (2.4) we deduce that the repartition of electric charge is

$$q(dx) = q\,\delta_y(dx),$$

i.e. we recover the fact that the potential V is generated by a single charge located at y. We also obtain a version of the Poisson equation (2.4) in distribution sense:

$$\Delta_x \frac{1}{\|x-y\|^{n-2}} = s_n\,\delta_y(dx),$$

where the Laplacian Δ_x is taken with respect to the variable x. On the other hand, taking E = B(0,r) \ B(0,ρ) we have ∂E = S(0,r) ∪ S(0,ρ) and

$$\int_E \Delta V(x)\,dx = \int_{S(0,r)} \langle n(x), \nabla V(x)\rangle\,\sigma(dx) + \int_{S(0,\rho)} \langle n(x), \nabla V(x)\rangle\,\sigma(dx) = c\,s_n - c\,s_n = 0,$$

hence

$$\Delta_x \frac{1}{\|x-y\|^{n-2}} = 0, \quad x \in \mathbb{R}^n \setminus \{y\}.$$

The electrical permittivity ε_0 will be set equal to 1 in the sequel.

2.2 Harmonic Functions

The notion of harmonic function will first be introduced from the mean value property. Let

$$\sigma_r^x(dy) = \frac{1}{s_n r^{n-1}}\,\sigma(dy)$$

denote the normalized surface measure on S(x,r), and recall that

$$\int_{\mathbb{R}^n} f(x)\,dx = s_n \int_0^\infty r^{n-1} \int_{S(y,r)} f(z)\,\sigma_r^y(dz)\,dr.$$

Definition 2.1. A continuous real-valued function f on an open subset O of R^n is said to be harmonic, resp. superharmonic, in O if one has

$$f(x) = \int_{S(x,r)} f(y)\,\sigma_r^x(dy), \qquad\text{resp.}\qquad f(x) \ge \int_{S(x,r)} f(y)\,\sigma_r^x(dy),$$

for all x ∈ O and r > 0 such that B(x,r) ⊂ O.
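As a quick numerical illustration of Definition 2.1 (a sketch, not part of the original notes; it assumes Python with numpy, and the test functions are arbitrary choices), f(x) = x_1² − x_2² is harmonic on R² and satisfies the mean value property, while g(x) = x_1² + x_2², with Δg = 4 > 0, satisfies the reverse inequality:

    import numpy as np

    rng = np.random.default_rng(0)

    def sphere_mean(F, x, r, k=200000):
        # Monte Carlo average of F over the circle S(x, r) in R^2,
        # i.e. the integral against the normalized measure sigma_r^x
        theta = 2 * np.pi * rng.random(k)
        pts = x + r * np.column_stack([np.cos(theta), np.sin(theta)])
        return F(pts).mean()

    f = lambda p: p[:, 0]**2 - p[:, 1]**2   # harmonic: Delta f = 0
    g = lambda p: p[:, 0]**2 + p[:, 1]**2   # Delta g = 4 > 0
    x = np.array([1.0, 2.0])
    print(sphere_mean(f, x, 0.5), -3.0)     # spherical mean = f(x) = -3
    print(sphere_mean(g, x, 0.5), 5.0)      # spherical mean = 5.25 > g(x) = 5

The second line shows that g is subharmonic, i.e. −g is superharmonic in the sense of Definition 2.1.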

Next we show the equivalence between the mean value property and the vanishing of the Laplacian.

Proposition 2.2. A C² function f is harmonic, resp. superharmonic, on an open subset O of R^n if and only if it satisfies the Laplace equation

$$\Delta f(x) = 0, \quad x \in O,$$

resp. the partial differential inequality

$$\Delta f(x) \le 0, \quad x \in O.$$

Proof. In spherical coordinates, using the divergence formula and the identity

$$\frac{d}{dr}\int_{S(0,1)} f(y+rx)\,\sigma_1^0(dx) = \int_{S(0,1)} \langle x, \nabla f(y+rx)\rangle\,\sigma_1^0(dx) = \frac{r}{s_n}\int_{B(0,1)} \Delta f(y+rx)\,dx,$$

yields

$$\begin{aligned}
\int_{B(y,r)} \Delta f(x)\,dx &= r^n \int_{B(0,1)} \Delta f(y+rx)\,dx\\
&= s_n r^{n-1} \int_{S(0,1)} \langle x, \nabla f(y+rx)\rangle\,\sigma_1^0(dx)\\
&= s_n r^{n-1}\,\frac{d}{dr}\int_{S(0,1)} f(y+rx)\,\sigma_1^0(dx)\\
&= s_n r^{n-1}\,\frac{d}{dr}\int_{S(y,r)} f(x)\,\sigma_r^y(dx).
\end{aligned}$$

If f is harmonic, this shows that

$$\int_{B(y,r)} \Delta f(x)\,dx = 0$$

for all y ∈ O and r > 0 such that B(y,r) ⊂ O, hence Δf = 0 on O. Conversely, if Δf = 0 on O then

$$\int_{S(y,r)} f(x)\,\sigma_r^y(dx)$$

is constant in r, hence

$$f(y) = \lim_{\rho \to 0} \int_{S(y,\rho)} f(x)\,\sigma_\rho^y(dx) = \int_{S(y,r)} f(x)\,\sigma_r^y(dx), \quad r > 0.$$

The proof is similar in the case of superharmonic functions. □

The fundamental harmonic functions based at y ∈ R^n are the functions which are harmonic on R^n \ {y} and depend only on r = ‖x − y‖. They satisfy the Laplace equation

$$\Delta h(x) = 0, \quad x \in \mathbb{R}^n \setminus \{y\},$$

in spherical coordinates, with

$$\Delta h(r) = \frac{d^2 h}{dr^2}(r) + \frac{n-1}{r}\,\frac{dh}{dr}(r).$$

In case n = 2, the fundamental harmonic functions are given by the logarithmic potential

$$h_y(x) = \begin{cases} -\dfrac{1}{s_2}\,\log\|x-y\|, & x \ne y,\\[4pt] +\infty, & x = y, \end{cases} \tag{2.5}$$

and by the Newton potential kernel in case n ≥ 3:

$$h_y(x) = \begin{cases} \dfrac{1}{(n-2)s_n}\,\dfrac{1}{\|x-y\|^{n-2}}, & x \ne y,\\[4pt] +\infty, & x = y. \end{cases} \tag{2.6}$$

More generally, for a ∈ R and y ∈ R^n, the function

$$x \mapsto \|x-y\|^a$$

is superharmonic on R^n, n ≥ 3, if and only if a ∈ [2−n, 0], and harmonic when a = 2−n. We now focus on the Dirichlet problem on the ball E = B(y,r). We consider

$$h_0(r) = -\frac{1}{s_2}\,\log r, \quad r > 0,$$

in case n = 2, and

$$h_0(r) = \frac{1}{(n-2)\,s_n\, r^{n-2}}, \quad r > 0,$$

if n ≥ 3, and let

$$x^* := y + \frac{r^2}{\|y-x\|^2}\,(x-y)$$

denote the inverse of x ∈ B(y,r) with respect to the sphere S(y,r). Note the relation

$$\|z - x^*\| = \left\| z - y - \frac{r^2}{\|y-x\|^2}(x-y)\right\| = \frac{r}{\|x-y\|}\,\left\| \frac{\|x-y\|}{r}\,(z-y) - \frac{r}{\|y-x\|}\,(x-y)\right\|,$$

hence for all z ∈ S(y,r) and x ∈ B(y,r),

$$\|z - x^*\| = r\,\frac{\|z-x\|}{\|x-y\|}, \tag{2.7}$$

since

$$\left\| \frac{\|x-y\|}{r}\,(z-y) - \frac{r}{\|y-x\|}\,(x-y)\right\| = \|x-z\|, \quad z \in S(y,r).$$

The function z ↦ h_x(z) is not C², hence not harmonic, on B̄(y,r). Instead of h_x we will use z ↦ h_{x*}(z), which is harmonic on a neighborhood of B̄(y,r), to construct a solution of the Dirichlet problem on B(y,r).

Fig. 2.2. Graph of h_x and of h_0(‖y−x‖ ‖z−x*‖ / r) when n = 2.

Lemma 2.3. The solution of the Dirichlet problem (2.10) for E = B(y,r) with boundary condition h_x, x ∈ B(y,r), is given by

$$z \mapsto h_0\!\left(\frac{\|y-x\|\,\|z-x^*\|}{r}\right), \quad z \in B(y,r).$$

Proof. We have if n ≥ 3:

$$h_0\!\left(\frac{\|y-x\|\,\|z-x^*\|}{r}\right) = \frac{r^{n-2}}{(n-2)\,s_n}\,\frac{1}{\|x-y\|^{n-2}\,\|z-x^*\|^{n-2}} = \frac{r^{n-2}}{\|x-y\|^{n-2}}\,h_{x^*}(z),$$

and if n = 2:

$$h_0\!\left(\frac{\|y-x\|\,\|z-x^*\|}{r}\right) = -\frac{1}{s_2}\,\log\frac{\|y-x\|\,\|z-x^*\|}{r} = h_{x^*}(z) - \frac{1}{s_2}\,\log\frac{\|y-x\|}{r}.$$

This function is harmonic in z ∈ B(y,r) and is equal to h_x on S(y,r) from (2.7). □

Figure 2.3 represents the solution of the Dirichlet problem with boundary condition h_x on S(y,r) for n = 2, as obtained by truncation of Figure 2.2.

Fig. 2.3. Solution of the Dirichlet problem.
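The boundary identity behind Lemma 2.3 can also be checked numerically (a sketch under the assumption that numpy is available; the points x and z are arbitrary test values): by (2.7), on S(y,r) the function h_0(‖y−x‖ ‖z−x*‖ / r) reduces to h_x(z):

    import numpy as np

    y, r = np.zeros(2), 1.0
    x = np.array([0.3, -0.4])                       # x in B(y, r)
    xs = y + r**2 / np.dot(x - y, x - y) * (x - y)  # inverse x* w.r.t. S(y, r)

    s2 = 2 * np.pi
    h = lambda a, b: -np.log(np.linalg.norm(a - b)) / s2   # h_a(b), n = 2
    h0 = lambda rho: -np.log(rho) / s2

    for t in np.linspace(0, 2 * np.pi, 5, endpoint=False):
        z = y + r * np.array([np.cos(t), np.sin(t)])       # z on S(y, r)
        lhs = h0(np.linalg.norm(y - x) * np.linalg.norm(z - xs) / r)
        print(lhs, h(x, z))   # the two columns agree on the boundary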

2.3 Representation of a Function on E from its Values on ∂E

As mentioned in the introduction, it can be of interest to compute a repartition of charges inside a domain E from the values of the field generated on the boundary ∂E.

In this section we present the representation formula for the values of a harmonic function inside an arbitrary domain E as an integral over its boundary ∂E, which follows from the Green identity. This formula uses a kernel which can be explicitly computed in some special cases, e.g. when E = B(y,r) is an open ball, in which case it is called the Poisson formula, cf. Section 2.4 below.

Assume that E is an open domain in R^n with smooth boundary ∂E and let

$$\partial_n f(x) = \langle n(x), \nabla f(x)\rangle$$

denote the normal derivative of f on ∂E.

Applying the divergence theorem (2.2) to the products u(x)∇v(x) and v(x)∇u(x), where u, v are C² functions on E, yields Green's identity:

$$\int_E \big(u(x)\Delta v(x) - v(x)\Delta u(x)\big)\,dx = \int_{\partial E} \big(u(x)\partial_n v(x) - v(x)\partial_n u(x)\big)\,d\sigma(x). \tag{2.8}$$

On the other hand, taking u = 1 in the divergence theorem yields Gauss's integral theorem

$$\int_{\partial E} \partial_n v(x)\,d\sigma(x) = 0, \tag{2.9}$$

provided v is harmonic on E.

In the next definition, h_x denotes the fundamental harmonic function defined in (2.5) and (2.6).

Definition 2.4. The Green kernel G^E(·,·) on E is defined as

$$G^E(x,y) := h_x(y) - w_x(y), \quad x, y \in E,$$

where for all x ∈ R^n, w_x is a smooth solution to the Dirichlet problem

$$\begin{cases} \Delta w_x(y) = 0, & y \in E,\\ w_x(y) = h_x(y), & y \in \partial E. \end{cases} \tag{2.10}$$

In the case of a general boundary ∂E, the Dirichlet problem may have no solution, even when the boundary value function f is continuous. Note that since (x,y) ↦ h_x(y) is symmetric, the Green kernel is also symmetric in its two variables, i.e.

$$G^E(x,y) = G^E(y,x), \quad x, y \in E,$$

and G^E(·,·) vanishes on the boundary ∂E. The next proposition provides an integral representation formula for C² functions on E using the Green kernel. In the case of harmonic functions it reduces to a representation from the values on the boundary ∂E, cf. Corollary 2.6 below.

Proposition 2.5. For all C² functions u on E we have

$$u(x) = \int_{\partial E} u(z)\,\partial_n G^E(x,z)\,\sigma(dz) + \int_E G^E(x,z)\,\Delta u(z)\,dz, \quad x \in E. \tag{2.11}$$

Proof. We do the proof in case n ≥ 3, the case n = 2 being similar. Given x ∈ E, apply Green's identity (2.8) to the functions u and h_x, where h_x is harmonic on E \ B(x,r) for r > 0 small enough, to obtain

$$\int_{E\setminus B(x,r)} h_x(y)\Delta u(y)\,dy - \int_{\partial E} \big(h_x(y)\partial_n u(y) - u(y)\partial_n h_x(y)\big)\,d\sigma(y) = \int_{S(x,r)} \left( \frac{1}{(n-2)\,s_n\,r^{n-2}}\,\partial_n u(y) + \frac{1}{s_n\,r^{n-1}}\,u(y)\right)\,d\sigma(y),$$

since

$$\partial_n \frac{1}{\|y-x\|^{n-2}} = \frac{\partial}{\partial \rho}\,\rho^{2-n}\Big|_{\rho = r} = -\frac{n-2}{r^{n-1}}.$$

In case u is harmonic, from the Gauss integral theorem (2.9) and the mean value property of u we get

$$\int_{E\setminus B(x,r)} h_x(y)\Delta u(y)\,dy - \int_{\partial E} \big(h_x(y)\partial_n u(y) - u(y)\partial_n h_x(y)\big)\,d\sigma(y) = \int_{S(x,r)} u(y)\,\sigma_r^x(dy) = u(x).$$

In the general case we need to pass to the limit as r tends to 0, which gives the same result:

$$u(x) = \int_E h_x(y)\Delta u(y)\,dy + \int_{\partial E} \big(u(y)\partial_n h_x(y) - h_x(y)\partial_n u(y)\big)\,d\sigma(y). \tag{2.12}$$

Our goal is now to avoid using the values of the derivative term ∂_n u(y) on ∂E in the above formula. To this end we note that from Green's identity (2.8) we have

$$\int_E w_x(y)\Delta u(y)\,dy = \int_{\partial E} \big(w_x(y)\partial_n u(y) - u(y)\partial_n w_x(y)\big)\,d\sigma(y) = \int_{\partial E} \big(h_x(y)\partial_n u(y) - u(y)\partial_n w_x(y)\big)\,d\sigma(y) \tag{2.13}$$

for any smooth solution w_x to the Dirichlet problem (2.10). Taking the difference between (2.12) and (2.13) yields

$$\begin{aligned}
u(x) &= \int_E \big(h_x(y) - w_x(y)\big)\Delta u(y)\,dy + \int_{\partial E} u(y)\,\partial_n\big(h_x(y) - w_x(y)\big)\,d\sigma(y)\\
&= \int_E G^E(x,y)\,\Delta u(y)\,dy + \int_{\partial E} u(y)\,\partial_n G^E(x,y)\,d\sigma(y), \quad x \in E. \qquad\square
\end{aligned}$$

Using (2.5) and (2.6), Relation (2.11) can be reformulated as

$$u(x) = \frac{1}{(n-2)s_n}\int_{\partial E} \left( u(z)\,\partial_n \frac{1}{\|x-z\|^{n-2}} - \frac{1}{\|x-z\|^{n-2}}\,\partial_n u(z)\right)\sigma(dz) + \frac{1}{(n-2)s_n}\int_E \frac{\Delta u(z)}{\|x-z\|^{n-2}}\,dz, \quad x \in E,$$

if n ≥ 3, and if n = 2:

$$u(x) = -\frac{1}{s_2}\int_{\partial E} \big( u(z)\,\partial_n \log\|x-z\| - (\log\|x-z\|)\,\partial_n u(z)\big)\,\sigma(dz) - \frac{1}{s_2}\int_E (\log\|x-z\|)\,\Delta u(z)\,dz, \quad x \in E.$$

Corollary 2.6. When u is harmonic on E we get

$$u(x) = \int_{\partial E} u(y)\,\partial_n G^E(x,y)\,d\sigma(y), \quad x \in E. \tag{2.14}$$

As a consequence of Lemma 2.3, the Green kernel G^{B(y,r)}(·,·) relative to the ball B(y,r) is given for x ∈ B(y,r) by

$$G^{B(y,r)}(x,z) = \begin{cases} -\dfrac{1}{s_2}\,\log\dfrac{r\,\|z-x\|}{\|y-x\|\,\|z-x^*\|}, & z \in B(y,r)\setminus\{x\},\ x \ne y,\\[4pt] -\dfrac{1}{s_2}\,\log\dfrac{\|z-y\|}{r}, & z \in B(y,r)\setminus\{x\},\ x = y,\\[4pt] +\infty, & z = x, \end{cases}$$

if n = 2, cf. Figure 2.4, and

$$G^{B(y,r)}(x,z) = \begin{cases} \dfrac{1}{(n-2)s_n}\left( \dfrac{1}{\|z-x\|^{n-2}} - \dfrac{r^{n-2}}{\|x-y\|^{n-2}\,\|z-x^*\|^{n-2}}\right), & z \in B(y,r)\setminus\{x\},\ x \ne y,\\[4pt] \dfrac{1}{(n-2)s_n}\left( \dfrac{1}{\|z-y\|^{n-2}} - \dfrac{1}{r^{n-2}}\right), & z \in B(y,r)\setminus\{x\},\ x = y,\\[4pt] +\infty, & z = x, \end{cases}$$

if n ≥ 3. The Green kernel on E = R^n, n ≥ 3, is obtained by letting r go to infinity:

$$G^{\mathbb{R}^n}(x,z) = \frac{1}{(n-2)\,s_n\,\|z-x\|^{n-2}} = h_x(z) = h_z(x), \quad x, z \in \mathbb{R}^n. \tag{2.15}$$

Fig. 2.4. Graph of z ↦ G(x,z) with x ∈ E = B(y,r) and n = 2.
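The two properties of G^{B(y,r)} noted above, symmetry in (x,z) and vanishing at the boundary, can be verified directly on the closed-form expression; the following sketch does this for n = 3 (it assumes numpy, and the sample points are arbitrary):

    import numpy as np

    y, r, s3 = np.zeros(3), 1.0, 4 * np.pi   # s_3 = 4*pi

    def G_ball(x, z):
        # G^{B(y,r)}(x, z) for n = 3, with x != y and z != x
        xs = y + r**2 / np.dot(x - y, x - y) * (x - y)
        return (1 / np.linalg.norm(z - x)
                - r / (np.linalg.norm(x - y) * np.linalg.norm(z - xs))) / s3

    x = np.array([0.2, 0.1, -0.3])
    z = np.array([-0.4, 0.25, 0.1])
    print(G_ball(x, z), G_ball(z, x))    # symmetry: G(x, z) = G(z, x)
    zb = np.array([0.0, 0.6, 0.8])       # a point of S(y, r)
    print(G_ball(x, zb))                 # vanishes (up to rounding) on S(y, r)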

2.4 Poisson Formula

Given u a sufficiently integrable function on S(y,r), we let I_u^{B(y,r)}(x) denote the Poisson integral of u over S(y,r), defined as:

$$I_u^{B(y,r)}(x) = \frac{1}{s_n r}\int_{S(y,r)} \frac{r^2 - \|y-x\|^2}{\|z-x\|^n}\,u(z)\,\sigma(dz), \quad x \in B(y,r).$$

Next is the Poisson formula obtained as a consequence of Proposition 2.5.

Theorem 2.7. Let n ≥ 2. If u has continuous second partial derivatives on E = B̄(y,r) then for all x ∈ B(y,r) we have

$$u(x) = I_u^{B(y,r)}(x) + \int_{B(y,r)} G^{B(y,r)}(x,z)\,\Delta u(z)\,dz. \tag{2.16}$$

Proof. We use the relation

$$u(x) = \int_{S(y,r)} u(z)\,\partial_n G^{B(y,r)}(x,z)\,\sigma(dz) + \int_{B(y,r)} G^{B(y,r)}(x,z)\,\Delta u(z)\,dz, \quad x \in B(y,r),$$

and the fact that

$$z \mapsto \partial_n G^{B(y,r)}(x,z) = -\frac{1}{s_2}\,\partial_n \log\frac{\|y-x\|\,\|z-x^*\|}{r\,\|z-x\|} = \frac{1}{s_2\,r}\,\frac{r^2 - \|x-y\|^2}{\|z-x\|^2}$$

if n = 2, and similarly for n ≥ 3. □

When u is harmonic on a neighborhood of B̄(y,r) we obtain the Poisson representation formula of u on E = B̄(y,r) using the values of u on the sphere ∂E = S(y,r), as a consequence of Theorem 2.7.

Corollary 2.8. Assume that u is harmonic on a neighborhood of B̄(y,r). We have

$$u(x) = \frac{1}{s_n r}\int_{S(y,r)} \frac{r^2 - \|y-x\|^2}{\|z-x\|^n}\,u(z)\,\sigma(dz), \quad x \in B(y,r), \tag{2.17}$$

for all n ≥ 2. Similarly, Theorem 2.7 also shows that

$$u(x) \le I_u^{B(y,r)}(x), \quad x \in B(y,r),$$

when u is superharmonic on B(y,r). Note also that when x = y, Relation (2.17) recovers the mean value property of harmonic functions:

$$u(y) = \frac{1}{s_n r^{n-1}}\int_{S(y,r)} u(z)\,\sigma(dz),$$

and the corresponding inequality for superharmonic functions.

The function

$$z \mapsto \frac{r^2 - \|x-y\|^2}{\|x-z\|^n}$$

is called the Poisson kernel on S(y,r) at x ∈ B(y,r), cf. Figure 2.5.

Fig. 2.5. Poisson kernel graphs on S(y,r) for n = 2 for two values of x ∈ B(y,r).

A direct calculation shows that the Poisson kernel is harmonic on R^n \ (S(y,r) ∪ {z}):

$$\Delta_x \frac{r^2 - \|y-x\|^2}{\|z-x\|^n} = 0, \quad x \notin S(y,r) \cup \{z\}, \tag{2.18}$$

hence all Poisson integrals are harmonic functions on B(y,r). Moreover, Theorem 2.17 of du Plessis [dP70] asserts that for any z ∈ S(y,r), letting x tend to z without entering S(y,r) we have

$$\lim_{x \to z} I_u^{B(y,r)}(x) = u(z).$$

Hence the Poisson integral solves the Dirichlet problem (2.10) on B(y,r).

Proposition 2.9. Given f a continuous function on S(y,r), the Poisson integral

$$I_f^{B(y,r)}(x) = \frac{1}{s_n r}\int_{S(y,r)} \frac{r^2 - \|x-y\|^2}{\|x-z\|^n}\,f(z)\,\sigma(dz), \quad x \in B(y,r),$$

provides a solution of the Dirichlet problem

$$\begin{cases} \Delta w(x) = 0, & x \in B(y,r),\\ w(x) = f(x), & x \in S(y,r), \end{cases}$$

with boundary condition f. Recall that the Dirichlet problem on B(y,r) may not have a solution when the boundary condition f is not continuous.
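Proposition 2.9 can be illustrated by a simple quadrature on the unit disk (a sketch, assuming numpy; the boundary data is the trace of the harmonic function u(x) = x_1² − x_2², chosen so that the exact solution is known):

    import numpy as np

    y, r = np.zeros(2), 1.0

    def poisson_integral(x, k=2000):
        # I_f^{B(y,r)}(x) for n = 2, discretizing S(y, r) by k equally
        # spaced points; the factor 1/(s_2 r) and sigma(dz) = r d(theta)
        # combine so that the result is a plain average
        theta = np.linspace(0, 2 * np.pi, k, endpoint=False)
        z = y + r * np.column_stack([np.cos(theta), np.sin(theta)])
        f = z[:, 0]**2 - z[:, 1]**2                  # boundary condition
        kernel = (r**2 - np.sum((x - y)**2)) / np.sum((x - z)**2, axis=1)
        return (kernel * f).mean()

    x = np.array([0.3, -0.2])
    print(poisson_integral(x), x[0]**2 - x[1]**2)    # both ~ 0.05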

In particular, from Lemma 2.3 we have

$$I_{h_x}^{B(y,r)}(z) = \frac{r^{n-2}}{\|x-y\|^{n-2}}\,h_{x^*}(z), \quad x \in B(y,r),$$

for n ≥ 3, and

$$I_{h_x}^{B(y,r)}(z) = h_{x^*}(z) - \frac{1}{s_2}\,\log\frac{\|x-y\|}{r}, \quad x \in B(y,r),$$

for n = 2, where x* denotes the inverse of x with respect to S(y,r). This function solves the Dirichlet problem with boundary condition h_x, and the corresponding Green kernel satisfies

$$G^{B(y,r)}(x,z) = h_x(z) - I_{h_x}^{B(y,r)}(z), \quad x, z \in B(y,r).$$

When x = y the solution I_{h_y}^{B(y,r)}(z) of the Dirichlet problem on B(y,r) with boundary condition h_y is constant and equal to (n−2)^{-1} s_n^{-1} r^{2-n} on B(y,r), and we have the identity

$$\frac{1}{s_n r^{n-1}}\int_{S(y,r)} \frac{r^2 - \|y-z\|^2}{\|x-z\|^n}\,\sigma(dx) = r^{2-n}, \quad z \in B(y,r),$$

for n ≥ 3, cf. Figure 2.6.

Fig. 2.6. Reduced function of h_x relative to D = R² \ B(y,r) with x = y.

2.5 Potentials and Balayage

In electrostatics, the function x ↦ h_y(x) represents the Newton potential created at x ∈ R^n by a charge at y ∈ E, and the function

$$x \mapsto \int_E h_y(x)\,q(dy)$$

represents the sum of all potentials generated at x ∈ R^n by a distribution q(dy) of charges inside E. In particular, when E = R^n, n ≥ 3, this sum equals

$$x \mapsto \int_{\mathbb{R}^n} G^{\mathbb{R}^n}(x,y)\,q(dy)$$

from (2.15), and the general definition of potentials originates from this interpretation.

Definition 2.10. Given a measure µ on E, the Green potential of µ is defined as the function

$$x \mapsto G^E\mu(x) := \int_E G^E(x,y)\,\mu(dy),$$

where G^E(x,y) is the Green kernel on E.

Potentials will be used in the construction of superharmonic functions, cf. Proposition 5.1 below. Conversely, it is natural to ask whether a superharmonic function can be represented as the potential of a measure. In general, recall (cf. Proposition 2.5) the relation

$$u(x) = \int_{\partial E} u(z)\,\partial_n G^E(x,z)\,\sigma(dz) + \int_E G^E(x,z)\,\Delta u(z)\,dz, \quad x \in E, \tag{2.19}$$

which yields, if E = B(y,r):

$$u(x) = I_u^E(x) + \int_E G^E(x,z)\,\Delta u(z)\,dz, \quad x \in B(y,r), \tag{2.20}$$

where the Poisson integral I_u^E is harmonic from (2.18). Relation (2.20) can be seen as a decomposition of u into the sum of a harmonic function on B(y,r) and a potential. The general question of representing superharmonic functions as potentials is examined in Theorem 2.12 below, with application to the construction of Martin boundaries in Section 2.6.

Definition 2.11. Consider
i) an open subset E of R^n with Green kernel G^E,
ii) a subset D of E, and
iii) a non-negative superharmonic function u on E.
The infimum on E over all non-negative superharmonic functions on E which are (pointwise) greater than u on D is called the reduced function of u relative to D, and denoted by R_u^D.

In other terms, letting Φ_u denote the set of non-negative superharmonic functions v on E such that v ≥ u on D, we have

$$R_u^D := \inf\{ v \in \Phi_u \}.$$

The lower regularization

$$\hat{R}_u^D(x) := \liminf_{y \to x} R_u^D(y), \quad x \in E,$$

is called the balayage of u, and is also a superharmonic function.

In case D = R^n \ B(y,r), the reduced function of h_y relative to D is given by

$$R_{h_y}^D(z) = h_y(z)\,1_{\{z \notin B(y,r)\}} + h_0(r)\,1_{\{z \in B(y,r)\}}, \quad z \in \mathbb{R}^n, \tag{2.21}$$

cf. Figure 2.7.

More generally, if v is superharmonic then R_v^D = I_v^{B(y,r)} on R^n \ D = B(y,r), cf. p. 62 and p. 100 of Brelot [Bre65]. Hence if D = B^c(y,r) = R^n \ B(y,r) and x ∈ B(y,r), we have

$$R_{h_x}^{B^c(y,r)}(z) = I_{h_x}^{B(y,r)}(z) = h_0\!\left(\frac{\|x-y\|\,\|z-x^*\|}{r}\right) = \frac{r^{n-2}}{(n-2)\,s_n\,\|x-y\|^{n-2}\,\|z-x^*\|^{n-2}}, \quad z \in B(y,r).$$

Fig. 2.7. Reduced function of h_x relative to D = R² \ B(y,r) with x ∈ B(y,r) \ {y}.

Using Proposition 2.5 and the identity

$$\Delta R_{h_y}^{B^c(y,r)} = \sigma_r^y$$

in distribution sense, we can represent R_{h_y}^{B^c(y,r)} on E = B(y,R), R > r, as

$$\begin{aligned}
R_{h_y}^{B^c(y,r)}(x) &= \int_{S(y,R)} R_{h_y}^{B^c(y,r)}(z)\,\partial_n G^{B(y,R)}(x,z)\,\sigma(dz) + \int_{B(y,R)} G^{B(y,R)}(x,z)\,\sigma_r^y(dz)\\
&= h_0(R) + \int_{S(y,r)} G^{B(y,R)}(x,z)\,\sigma_r^y(dz), \quad x \in B(y,R).
\end{aligned}$$

Letting R go to infinity yields

$$R_{h_y}^{B^c(y,r)}(x) = \int_{S(y,r)} G^{\mathbb{R}^n}(x,z)\,\sigma_r^y(dz) = \int_{S(y,r)} h_x(z)\,\sigma_r^y(dz) = h_x(y), \quad x \notin B(y,r),$$

since h_x is harmonic on R^n \ B(y,r), and

$$R_{h_y}^{B^c(y,r)}(x) = \int_{S(y,r)} h_0\!\left(\frac{\|y-x\|\,\|z-x^*\|}{r}\right)\sigma_r^y(dz) = h_0\!\left(\frac{\|y-x\|\,\|y-x^*\|}{r}\right)$$

$$= h_0(r), \quad x \in B(y,r),$$

which recovers the decomposition (2.21). More generally we have the following result, for which we refer to Theorem 7.12 of Helms [Hel69].

Theorem 2.12. If D is a compact subset of E and u is a non-negative superharmonic function on E, then R̂_u^D is a potential, i.e. there exists a measure µ on E such that

$$\hat{R}_u^D(x) = \int_E G^E(x,y)\,\mu(dy), \quad x \in E. \tag{2.22}$$

If moreover R_u^D is harmonic on D then µ is supported by ∂D, cf. Theorem 6.9 in Helms [Hel69].

2.6 Martin Boundary

The Martin boundary theory extends the Poisson integral representation described in Section 2.4 to arbitrary open domains with a non-smooth boundary, or having no boundary. Given E a connected open subset of R^n, the Martin boundary ΔE of E is defined in an abstract sense as ΔE := Ê \ E, where Ê is a suitable compactification of E. Our aim is now to show that every non-negative harmonic function on E admits an integral representation using its values on the boundary ΔE.

For this, let u be non-negative and harmonic on E, and consider an increasing sequence (E_n)_{n∈N} of open sets with smooth boundaries (∂E_n)_{n∈N} and compact closures, such that

$$E = \bigcup_{n=0}^\infty E_n.$$

Then the balayage R_u^{E_n} of u relative to E_n coincides with u on E_n, and from Theorem 2.12 it can be represented as

$$R_u^{E_n}(x) = \int_{E_n} G^E(x,y)\,d\mu_n(y),$$

where µ_n is a measure supported by ∂E_n. Note that since

$$R_u^{E_n}(x) = u(x), \quad x \in E_k,\ k \ge n,$$

and

$$\lim_{n\to\infty} G^E(x,y)\big|_{y \in \partial E_n} = 0, \quad x \in E,$$

the total mass of µ_n has to increase to infinity:

$$\lim_{n\to\infty} \mu_n(E_n) = \infty.$$

For this reason one decides to renormalize µ_n by fixing x_0 ∈ E and letting

$$\tilde{\mu}_n(dy) := G^E(x_0,y)\,\mu_n(dy), \quad n \in \mathbb{N},$$

so that µ̃_n has total mass

$$\tilde{\mu}_n(E_n) = u(x_0) < \infty,$$

independently of n ∈ N. Next, let the kernel K_{x_0} be defined as

$$K_{x_0}(x,y) := \frac{G^E(x,y)}{G^E(x_0,y)}, \quad x, y \in E,$$

with the relation

$$\hat{R}_u^{E_n}(x) = \int_{E_n} K_{x_0}(x,y)\,d\tilde{\mu}_n(y).$$

The construction of the Martin boundary ΔE of E relies on the following theorem by Constantinescu and Cornea, cf. Helms [Hel69], Chapter 12.

Theorem 2.13. Let E denote a non-compact, locally compact space and consider a family Φ of continuous mappings

$$f : E \to [-\infty, \infty].$$

Then there exists a unique compact space Ê in which E is everywhere dense, such that:

a) every f ∈ Φ can be extended to a function f* on Ê by continuity,
b) the extended functions separate the points of the boundary ΔE = Ê \ E, in the sense that if x, y ∈ ΔE with x ≠ y, there exists f ∈ Φ such that f*(x) ≠ f*(y).
The Martin space is then defined as the unique compact space Ê in which E is everywhere dense, and such that the functions

$$\{ y \mapsto K_{x_0}(x,y) : x \in E \}$$

admit continuous extensions which separate the boundary Ê \ E. Such a compactification Ê of E exists and is unique as a consequence of the Constantinescu-Cornea Theorem 2.13, applied to the family

$$\Phi := \{ x \mapsto K_{x_0}(x,z) : z \in E \}.$$

In this way the Martin boundary of E is defined as

$$\Delta E := \hat{E} \setminus E.$$

The Martin boundary is unique up to a homeomorphism: namely, if x_0, x_0' ∈ E, then

$$K_{x_0'}(x,z) = \frac{K_{x_0}(x,z)}{K_{x_0}(x_0',z)} \qquad\text{and}\qquad K_{x_0}(x,z) = \frac{K_{x_0'}(x,z)}{K_{x_0'}(x_0,z)}$$

may have different limits in Ê which still separate the boundary ΔE. For

this reason, in the sequel we will drop the index x_0 in K_{x_0}(x,z). In the next theorem, the Poisson representation formula is extended to Martin boundaries.

Theorem 2.14. Any non-negative harmonic function h on E can be represented as

$$h(x) = \int_{\Delta E} K(z,x)\,\nu(dz), \quad x \in E,$$

where ν is a non-negative Radon measure on ΔE.

Proof. Since µ̃_n(E_n) is bounded (actually it is constant) in n ∈ N, we can extract a subsequence (µ̃_{n_k})_{k∈N} converging vaguely to a measure µ on Ê, i.e.

$$\lim_{k\to\infty} \int_{\hat{E}} f(x)\,\tilde{\mu}_{n_k}(dx) = \int_{\hat{E}} f(x)\,\mu(dx), \quad f \in \mathcal{C}_c(\hat{E}),$$

see e.g. Ex. 10, p. 81 of Hirsch and Lacombe [HL99]. The continuity of z ↦ K(x,z) in z ∈ Ê then implies

$$h(x) = \lim_{n\to\infty} \hat{R}_u^{E_n}(x) = \lim_{k\to\infty} \hat{R}_u^{E_{n_k}}(x)$$

$$= \lim_{k\to\infty} \int_{\hat{E}} K(z,x)\,\tilde{\mu}_{n_k}(dz) = \int_{\hat{E}} K(z,x)\,\mu(dz), \quad x \in E.$$

Finally, µ is supported by ΔE since for all f ∈ C_c(E_n),

$$\int_{\hat{E}} f(x)\,\mu(dx) = \lim_{k\to\infty} \int_{\hat{E}} f(x)\,\tilde{\mu}_{n_k}(dx) = 0. \qquad\square$$

When E = B(y,r) one can check by explicit calculation that

$$\lim_{\zeta \to z} K_y(x,\zeta) = \frac{r^2 - \|x-y\|^2}{\|z-x\|^n}, \quad z \in S(y,r),$$

is the Poisson kernel on S(y,r). In this case we have µ = σ_r^y, which is the normalized surface measure on S(y,r), and the Martin boundary ΔB(y,r) of B(y,r) equals its usual boundary S(y,r).

3 Markov Processes

3.1 Markov Property

Let C_0(R^n) denote the class of continuous functions tending to 0 at infinity. Recall that f is said to tend to 0 at infinity if for all ε > 0 there exists a compact subset K of R^n such that |f(x)| ≤ ε for all x ∈ R^n \ K.

Definition 3.1. An R^n-valued stochastic process, i.e. a family (X_t)_{t∈R_+} of random variables on a probability space (Ω, F, P), is a Markov process if for all t ∈ R_+ the σ-fields

$$\mathcal{F}_t^+ := \sigma(X_s : s \ge t)$$

and

$$\mathcal{F}_t := \sigma(X_s : 0 \le s \le t)$$

are conditionally independent given X_t.

The above condition can be restated by saying that for all A ∈ F_t^+ and B ∈ F_t we have

$$P(A \cap B \mid X_t) = P(A \mid X_t)\,P(B \mid X_t),$$

cf. Chung [Chu95]. This definition naturally entails that:

i) (X_t)_{t∈R_+} is adapted with respect to (F_t)_{t∈R_+}, i.e. X_t is F_t-measurable, t ∈ R_+, and
ii) X_u is conditionally independent of F_t given X_t, for all u ≥ t, i.e.

$$\mathrm{IE}[f(X_u) \mid \mathcal{F}_t] = \mathrm{IE}[f(X_u) \mid X_t], \quad 0 \le t \le u,$$

for any bounded measurable function f on R^n. In particular,

$$P(X_u \in A \mid \mathcal{F}_t) = \mathrm{IE}[1_A(X_u) \mid \mathcal{F}_t] = \mathrm{IE}[1_A(X_u) \mid X_t] = P(X_u \in A \mid X_t),$$

A ∈ B(R^n). Processes with independent increments provide simple examples of Markov processes. Indeed, if (X_t)_{t∈R_+} has independent increments, then for all bounded measurable functions f, g we have

$$\begin{aligned}
&\mathrm{IE}[f(X_{t_1},\dots,X_{t_n})\,g(X_{s_1},\dots,X_{s_n}) \mid X_t]\\
&= \mathrm{IE}[f(X_{t_1}-X_t+x,\dots,X_{t_n}-X_t+x)\,g(X_{s_1}-X_t+x,\dots,X_{s_n}-X_t+x)]_{x=X_t}\\
&= \mathrm{IE}[f(X_{t_1}-X_t+x,\dots,X_{t_n}-X_t+x)]_{x=X_t}\;\mathrm{IE}[g(X_{s_1}-X_t+x,\dots,X_{s_n}-X_t+x)]_{x=X_t}\\
&= \mathrm{IE}[f(X_{t_1},\dots,X_{t_n}) \mid X_t]\;\mathrm{IE}[g(X_{s_1},\dots,X_{s_n}) \mid X_t],
\end{aligned}$$

for 0 ≤ s_1 < ⋯ < s_n ≤ t ≤ t_1 < ⋯ < t_n.

In discrete time, a sequence (X_n)_{n∈N} of random variables is said to be a Markov chain if for all n ∈ N, the σ-algebras

$$\mathcal{F}_n = \sigma(\{X_k : k \le n\})$$

and

$$\mathcal{F}_n^+ = \sigma(\{X_k : k \ge n\})$$

are independent conditionally to X_n. In particular, for every F_n^+-measurable bounded random variable F we have

$$\mathrm{IE}[F \mid \mathcal{F}_n] = \mathrm{IE}[F \mid X_n], \quad n \in \mathbb{N}.$$

3.2 Transition Kernels and Semigroups

A transition kernel is a mapping P(x, dy) such that
i) for every x ∈ E, A ↦ P(x,A) is a probability measure, and
ii) for every A ∈ B(E), the mapping x ↦ P(x,A) is a measurable function.

The transition kernel µ_{s,t} associated to a Markov process (X_t)_{t∈R_+} is defined as

$$\mu_{s,t}(x,A) = P(X_t \in A \mid X_s = x), \quad 0 \le s \le t,$$

and we have

$$\mu_{s,t}(X_s,A) = P(X_t \in A \mid X_s) = P(X_t \in A \mid \mathcal{F}_s), \quad 0 \le s \le t.$$

The transition operator (T_{s,t})_{0≤s≤t} associated to (X_t)_{t∈R_+} is defined as

$$T_{s,t}f(x) := \mathrm{IE}[f(X_t) \mid X_s = x] = \int_{\mathbb{R}^n} f(y)\,\mu_{s,t}(x,dy), \quad x \in \mathbb{R}^n.$$

Letting p_{s,t}(x) denote the density of X_t − X_s we have

$$\mu_{s,t}(x,A) = \int_A p_{s,t}(y-x)\,dy, \quad A \in \mathcal{B}(\mathbb{R}^n),$$

and

$$T_{s,t}f(x) = \int_{\mathbb{R}^n} f(y)\,p_{s,t}(y-x)\,dy.$$

In discrete time, a sequence (X_n)_{n∈N} of random variables is called a homogeneous Markov chain with transition kernel P if

$$\mathrm{IE}[f(X_n) \mid \mathcal{F}_m] = P^{n-m} f(X_m), \quad 0 \le m \le n. \tag{3.1}$$

In the sequel we will assume that (X_t)_{t∈R_+} is time homogeneous, i.e. µ_{s,t} depends only on the difference t − s, and we will denote it by µ_{t−s}. In this case

the family (T_{0,t})_{t∈R_+} is denoted by (T_t)_{t∈R_+}. It defines a transition semigroup associated to (X_t)_{t∈R_+}, with

$$T_t f(x) = \mathrm{IE}[f(X_t) \mid X_0 = x] = \int_{\mathbb{R}^n} f(y)\,\mu_t(x,dy), \quad x \in \mathbb{R}^n,$$

and satisfies the semigroup property

$$\begin{aligned}
T_t T_s f(x) &= \mathrm{IE}[T_s f(X_t) \mid X_0 = x]\\
&= \mathrm{IE}[\,\mathrm{IE}[f(X_{t+s}) \mid X_t] \mid X_0 = x]\\
&= \mathrm{IE}[\,\mathrm{IE}[f(X_{t+s}) \mid \mathcal{F}_t] \mid X_0 = x]\\
&= \mathrm{IE}[f(X_{t+s}) \mid X_0 = x]\\
&= T_{t+s} f(x),
\end{aligned}$$

which can be formulated as the Chapman-Kolmogorov equation

$$\mu_{s+t}(x,A) = \mu_s * \mu_t(x,A) = \int_{\mathbb{R}^n} \mu_s(x,dy)\,\mu_t(y,A). \tag{3.2}$$

By induction, using (3.2) we obtain

$$P_x\big((X_{t_1},\dots,X_{t_n}) \in B_1 \times \cdots \times B_n\big) = \int_{B_1}\cdots\int_{B_n} \mu_{0,t_1}(x,dx_1)\cdots\mu_{t_{n-1},t_n}(x_{n-1},dx_n)$$

for 0 < t_1 < ⋯ < t_n and B_1, …, B_n Borel subsets of R^n.

If (X_t)_{t∈R_+} is a homogeneous Markov process with independent increments, the density p_t(x) of X_t satisfies the convolution property

$$p_{s+t}(x) = \int_{\mathbb{R}^n} p_s(y-x)\,p_t(y)\,dy, \quad x \in \mathbb{R}^n,$$

which is satisfied in particular by processes with stationary and independent increments such as Lévy processes. A typical example of a probability density satisfying such a convolution property is the Gaussian density, i.e.

$$p_t(x) = \frac{1}{(2\pi t)^{n/2}}\,\exp\left( -\frac{1}{2t}\,\|x\|^2_{\mathbb{R}^n}\right), \quad x \in \mathbb{R}^n.$$
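The convolution property, i.e. the Chapman-Kolmogorov equation for the Gaussian kernel, can be checked by numerical integration in dimension one (a sketch assuming numpy; the values of s, t and x_0 are arbitrary):

    import numpy as np

    p = lambda t, x: np.exp(-x**2 / (2 * t)) / np.sqrt(2 * np.pi * t)

    s, t, x0 = 0.7, 1.3, 0.8
    ygrid = np.linspace(-10, 10, 4001)
    lhs = p(s + t, x0)
    rhs = np.trapz(p(s, x0 - ygrid) * p(t, ygrid), ygrid)
    print(lhs, rhs)      # p_{s+t} = p_s * p_t up to quadrature error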

From now on we assume that (T_t)_{t∈R_+} is a strongly continuous Feller semigroup, i.e. a family of positive contraction operators on C_0(R^n) such that

i) T_{s+t} = T_s T_t, s, t ≥ 0,
ii) T_t(C_0(R^n)) ⊂ C_0(R^n), and

iii) T_t f(x) → f(x) as t → 0, f ∈ C_0(R^n), x ∈ R^n.

The resolvent of (T_t)_{t∈R_+} is defined as

$$R_\lambda f(x) := \int_0^\infty e^{-\lambda t}\,T_t f(x)\,dt, \quad x \in \mathbb{R}^n,$$

i.e.

$$R_\lambda f(x) = \mathrm{IE}_x\left[ \int_0^\infty e^{-\lambda t} f(X_t)\,dt\right], \quad x \in \mathbb{R}^n,$$

for sufficiently integrable f on R^n, where IE_x denotes the conditional expectation given that {X_0 = x}. It satisfies the resolvent equation

$$R_\lambda - R_\mu = (\mu - \lambda)\,R_\lambda R_\mu, \quad \lambda, \mu > 0.$$

We refer to [Kal02] for the following result.

Theorem 3.2. Let (T_t)_{t∈R_+} be a Feller semigroup on C_0(R^n) with resolvent R_λ, λ > 0. Then there exists an operator A with domain D ⊂ C_0(R^n) such that

$$R_\lambda^{-1} = \lambda I - A, \quad \lambda > 0. \tag{3.3}$$

The operator A is called the generator of (T_t)_{t∈R_+} and it characterizes (T_t)_{t∈R_+}. Furthermore, the semigroup (T_t)_{t∈R_+} is differentiable in t for all f ∈ D and it satisfies the forward and backward Kolmogorov equations

$$\frac{dT_t f}{dt} = T_t A f \qquad\text{and}\qquad \frac{dT_t f}{dt} = A T_t f.$$

Note that the integral

$$\int_0^\infty p_t(x,y)\,dt = \frac{\Gamma(n/2-1)}{2\pi^{n/2}\,\|x-y\|^{n-2}} = \frac{2}{(n-2)\,s_n\,\|x-y\|^{n-2}} = 2\,h_y(x)$$

of the Gaussian transition density function is proportional to the Newton potential kernel x ↦ 1/‖x−y‖^{n−2} for n ≥ 3. Hence the resolvent R_0 f associated to the Gaussian semigroup T_t f(x) = ∫_{R^n} f(y) p_t(x,y) dy is a potential in the sense of Definition 2.10, since:

$$R_0 f(x) = \int_0^\infty T_t f(x)\,dt = \mathrm{IE}_x\left[ \int_0^\infty f(B_t)\,dt\right]$$

$$= \int_{\mathbb{R}^n} f(y)\int_0^\infty p_t(x,y)\,dt\,dy = \frac{2}{(n-2)\,s_n}\int_{\mathbb{R}^n} \frac{f(y)}{\|x-y\|^{n-2}}\,dy.$$

More generally, for λ > 0 we have

$$\begin{aligned}
R_\lambda f(x) &= (\lambda I - A)^{-1} f(x)\\
&= \int_0^\infty e^{-\lambda t}\,T_t f(x)\,dt\\
&= \int_0^\infty e^{-\lambda t}\,e^{tA} f(x)\,dt\\
&= \int_0^\infty \int_{\mathbb{R}^n} e^{-\lambda t}\,f(y)\,p_t(x,y)\,dy\,dt\\
&= \int_{\mathbb{R}^n} f(y)\,g^\lambda(x,y)\,dy, \quad x \in \mathbb{R}^n,
\end{aligned}$$

where g^λ(x,y) is the λ-potential kernel defined as

$$g^\lambda(x,y) := \int_0^\infty e^{-\lambda t}\,p_t(x,y)\,dt, \tag{3.4}$$

and R_λ f is also called a λ-potential.
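For the Gaussian semigroup in dimension n = 3 the λ-potential kernel (3.4) can be computed in closed form; the value g^λ(x,y) = e^{−√(2λ)‖x−y‖}/(2π‖x−y‖) used below is a standard computation and not taken from the notes, and the check assumes scipy is available:

    import numpy as np
    from scipy.integrate import quad

    lam, dist = 0.5, 1.2
    p = lambda t: np.exp(-dist**2 / (2 * t)) / (2 * np.pi * t)**1.5
    num, _ = quad(lambda t: np.exp(-lam * t) * p(t), 0, np.inf)
    closed = np.exp(-np.sqrt(2 * lam) * dist) / (2 * np.pi * dist)
    print(num, closed)   # agree; letting lam -> 0 recovers the Newton kernel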

Recall that the Hille-Yosida theorem also allows one to construct a strongly continuous semigroup from a generator A.

In another direction it is possible to associate a Markov process to any time homogeneous transition function satisfying µ_0(x,dy) = δ_x(dy) and the Chapman-Kolmogorov equation (3.2), cf. e.g. Theorem 4.1.1 of Ethier and Kurtz [EK86].

3.3 Hitting Times

Definition 3.3. An a.s. non-negative random variable τ is called a stopping time with respect to a filtration (F_t) if

$$\{\tau \le t\} \in \mathcal{F}_t, \quad t > 0.$$

The σ-algebra F_τ is defined as the collection of measurable sets A such that

$$A \cap \{\tau \le t\} \in \mathcal{F}_t, \quad t > 0.$$

Note that for all s > 0 we have {τ < s}, {τ ≤ s}, {τ = s}, {τ ≥ s} ∈ F_τ.

Definition 3.4. One says that the process (X_t)_{t∈R_+} has the strong Markov property if for any P-a.s. finite F_t-stopping time τ we have

$$\mathrm{IE}[f(X_{\tau+t}) \mid \mathcal{F}_\tau] = \mathrm{IE}[f(X_t) \mid X_0 = x]_{x = X_\tau} = T_t f(X_\tau), \tag{3.5}$$

for all bounded measurable f.

The hitting time τ_B of a Borel set B ⊂ R^n is defined as

$$\tau_B = \inf\{ t > 0 : X_t \in B \},$$

with the convention inf ∅ = +∞. A set B such that P_x(τ_B < ∞) = 0 for all x ∈ R^n is said to be polar.
In discrete time it can be easily shown that hitting times are stopping times, from the relation

$$\{\tau_B \le n\}^c = \{\tau_B > n\} = \bigcap_{k=0}^n \{X_k \notin B\} \in \mathcal{F}_n,$$

however in continuous time the situation is more complicated. From e.g. Lemma 7.6 of Kallenberg [Kal02], we have that τ_B is a stopping time provided

B is closed and (X_t)_{t∈R_+} is continuous, or B is open and (X_t)_{t∈R_+} is right-continuous.

Definition 3.5. The last exit time from B is denoted by l_B and defined as

$$l_B = \sup\{ t > 0 : X_t \in B \},$$

with l_B = 0 if τ_B = +∞. We say that B is recurrent if P_x(l_B = +∞) = 1, x ∈ R^n, and that B is transient if l_B < ∞, P-a.s., i.e. P_x(l_B = +∞) = 0, x ∈ R^n.

3.4 Dirichlet Forms

Before turning to a presentation of stochastic calculus we briefly describe the notion of Dirichlet form, which provides a functional analytic interpretation of the Dirichlet problem. A Dirichlet form is a positive bilinear form E defined on a domain ID dense in a Hilbert space H of real-valued functions, such that

i) the space ID, equipped with the norm

$$\|f\|_{\mathrm{ID}} := \sqrt{\|f\|_H^2 + \mathcal{E}(f,f)},$$

is a Hilbert space, and

ii) for any f ∈ ID we have min(f,1) ∈ ID and

$$\mathcal{E}(\min(f,1), \min(f,1)) \le \mathcal{E}(f,f).$$

The classical example of a Dirichlet form is given by H = L²(R^n) and

$$\mathcal{E}(f,g) := \int_{\mathbb{R}^n} \sum_{i=1}^n \frac{\partial f}{\partial x_i}(x)\,\frac{\partial g}{\partial x_i}(x)\,dx.$$

The generator L of E is defined by Lf = g, f ∈ ID, if for all h ∈ ID we have

$$\mathcal{E}(f,h) = -\langle g, h\rangle_H.$$

It is known, cf. e.g. Bouleau and Hirsch [BH91], that a self-adjoint operator L on H = L²(R^n) with domain Dom(L) is the generator of a Dirichlet form if and only if

$$\langle \mathcal{L}f, (f-1)^+\rangle_{L^2(\mathbb{R}^n)} \le 0, \quad f \in \mathrm{Dom}(\mathcal{L}).$$

On the other hand, L is the generator of a strongly continuous semigroup (P_t)_{t∈R_+} on H = L²(R^n) if and only if (P_t)_{t∈R_+} is sub-Markovian, i.e. for all f ∈ L²(R^n),

$$0 \le f \le 1 \implies 0 \le P_t f \le 1, \quad t \in \mathbb{R}_+.$$

We refer to Ma and Röckner [MR92] for more details on the connection between stochastic processes and Dirichlet forms. Coming back to the Dirichlet problem

$$\begin{cases} \Delta u = 0, & x \in D,\\ u(x) = 0, & x \in \partial D, \end{cases}$$

if f and g are C¹ with compact support in D we have

$$\int_D g(x)\,\Delta f(x)\,dx = -\int_D \sum_{i=1}^n \frac{\partial f}{\partial x_i}(x)\,\frac{\partial g}{\partial x_i}(x)\,dx = -\mathcal{E}(f,g).$$

Here, ID is the subspace of functions in L²(R^n) whose derivatives in distribution sense belong to L²(R^n), with norm

$$\|f\|_{\mathrm{ID}}^2 = \int_{\mathbb{R}^n} |f(x)|^2\,dx + \int_{\mathbb{R}^n} \sum_{i=1}^n \left| \frac{\partial f}{\partial x_i}(x)\right|^2 dx.$$

Hence the Dirichlet problem for f can be formulated as

$$\mathcal{E}(f,g) = 0,$$

for all g in the completion of C_c^∞(D) with respect to the norm ‖·‖_ID. Finally we mention the notion of capacity C(A) of an open set A, defined as

$$C(A) := \inf\{ \|u\|_{\mathrm{ID}} : u \in \mathrm{ID} \text{ and } u \ge 1 \text{ on } A \}.$$

The notion of zero-capacity set is finer than that of zero-measure set and gives rise to the notion of properties that hold in the quasi-everywhere sense, cf. Bouleau and Hirsch [BH91].

4 Stochastic Calculus

4.1 Brownian Motion and the Poisson Process

Let (Ω, F, P) be a probability space and (F_t)_{t∈R_+} a filtration, i.e. an increasing family of sub-σ-algebras of F. We assume that (F_t)_{t∈R_+} is continuous on the right, i.e.

$$\mathcal{F}_t = \bigcap_{s>t} \mathcal{F}_s, \quad t \in \mathbb{R}_+.$$

Recall that a process (M_t)_{t∈R_+} in L¹(Ω) is called an F_t-martingale if

$$\mathrm{IE}[M_t \mid \mathcal{F}_s] = M_s, \quad 0 \le s \le t.$$

For example, if (X_t)_{t∈[0,T]} is a Markov process with semigroup (P_{s,t})_{0≤s≤t≤T} satisfying

$$P_{s,t} f(X_s) = \mathrm{IE}[f(X_t) \mid X_s] = \mathrm{IE}[f(X_t) \mid \mathcal{F}_s], \quad 0 \le s \le t \le T,$$

on C_b²(R^n) functions, with

$$P_{s,t} \circ P_{t,u} = P_{s,u}, \quad 0 \le s \le t \le u \le T,$$

then (P_{t,T} f(X_t))_{t∈[0,T]} is an F_t-martingale:

$$\begin{aligned}
\mathrm{IE}[P_{t,T} f(X_t) \mid \mathcal{F}_s] &= \mathrm{IE}[\,\mathrm{IE}[f(X_T) \mid \mathcal{F}_t] \mid \mathcal{F}_s]\\
&= \mathrm{IE}[f(X_T) \mid \mathcal{F}_s]\\
&= P_{s,T} f(X_s), \quad 0 \le s \le t \le T.
\end{aligned}$$

Definition 4.1. A martingale (M_t)_{t∈R_+} in L²(Ω) (i.e. IE[|M_t|²] < ∞, t ∈ R_+) and such that

$$\mathrm{IE}[(M_t - M_s)^2 \mid \mathcal{F}_s] = t - s, \quad 0 \le s < t, \tag{4.1}$$

is called a normal martingale.

Every square-integrable process (M_t)_{t∈R_+} with centered independent increments and generating the filtration (F_t)_{t∈R_+} satisfies

$$\mathrm{IE}[(M_t - M_s)^2 \mid \mathcal{F}_s] = \mathrm{IE}[(M_t - M_s)^2], \quad 0 \le s \le t,$$

hence the following remark.

Remark 4.2. A square-integrable process (M_t)_{t∈R_+} with centered independent increments is a normal martingale if and only if

$$\mathrm{IE}[(M_t - M_s)^2] = t - s, \quad 0 \le s \le t.$$

In our presentation of stochastic integration we will restrict ourselves to normal martingales. As will be seen in the next sections, this family contains Brownian motion and the standard compensated Poisson process as particular cases.

Remark 4.3. A martingale (M_t)_{t∈R_+} is normal if and only if (M_t² − t)_{t∈R_+} is a martingale, i.e.

$$\mathrm{IE}[M_t^2 - t \mid \mathcal{F}_s] = M_s^2 - s, \quad 0 \le s < t.$$

IE[(M M )2 ] (t s) t − s |Fs − − = IE[M 2 M 2 2(M M )M ] (t s) t − s − t − s s|Fs − − = IE[M 2 M 2 ] 2M IE[M M ] (t s) t − s |Fs − s t − s|Fs − − = IE[M 2 ] t (IE[M 2 ] s). t |Fs − − s |Fs −

��

Throughout the remainder of this chapter, (M_t)_{t∈R_+} will be a normal martingale.

We now turn to the Brownian motion and the compensated Poisson process as the fundamental examples of normal martingales. Our starting point is now a family (ξ_n)_{n∈N} of independent standard (i.e. centered and with unit variance) Gaussian random variables under γ^N, constructed as the canonical projections from (R^N, B_{R^N}, γ^N) into R. The measure γ^N is characterized by its Fourier transform

$$\alpha \mapsto \mathrm{IE}\left[ e^{i\langle \xi, \alpha\rangle_{\ell^2(\mathbb{N})}}\right] = \mathrm{IE}\left[ e^{i\sum_{n=0}^\infty \xi_n \alpha_n}\right] = \prod_{n=0}^\infty e^{-\alpha_n^2/2} = e^{-\frac{1}{2}\|\alpha\|^2_{\ell^2(\mathbb{N})}}, \quad \alpha \in \ell^2(\mathbb{N}),$$

i.e. ⟨ξ, α⟩_{ℓ²(N)} is a centered Gaussian random variable with variance ‖α‖²_{ℓ²(N)}. Let (e_n)_{n∈N} denote an orthonormal basis of L²(R_+).

Definition 4.4. Given u ∈ L²(R_+) with decomposition

$$u = \sum_{n=0}^\infty \langle u, e_n\rangle\,e_n,$$

we let J_1 : L²(R_+) → L²(R^N, γ^N) be defined as

$$J_1(u) = \sum_{n=0}^\infty \xi_n\,\langle u, e_n\rangle.$$

We have the isometry property

$$\mathrm{IE}[|J_1(u)|^2] = \sum_{n=0}^\infty |\langle u, e_n\rangle|^2\,\mathrm{IE}[|\xi_n|^2] = \sum_{n=0}^\infty |\langle u, e_n\rangle|^2 = \|u\|^2_{L^2(\mathbb{R}_+)}, \tag{4.2}$$

and

$$\mathrm{IE}\left[ e^{iJ_1(u)}\right] = \prod_{n=0}^\infty \mathrm{IE}\left[ e^{i\xi_n\langle u, e_n\rangle}\right] = \prod_{n=0}^\infty e^{-\frac{1}{2}\langle u, e_n\rangle^2_{L^2(\mathbb{R}_+)}} = \exp\left( -\frac{1}{2}\,\|u\|^2_{L^2(\mathbb{R}_+)}\right),$$

hence J_1(u) is a centered Gaussian random variable with variance ‖u‖²_{L²(R_+)}. Next is a constructive approach to the definition of Brownian motion, using the decomposition

$$1_{[0,t]} = \sum_{n=0}^\infty e_n \int_0^t e_n(s)\,ds.$$

Definition 4.5. For all t ∈ R_+, let

$$B_t(\omega) := J_1(1_{[0,t]}) = \sum_{n=0}^\infty \xi_n(\omega) \int_0^t e_n(s)\,ds. \tag{4.3}$$

Fig. 4.1. Sample paths of one-dimensional Brownian motion.
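Sample paths as in Figure 4.1 can be generated directly from a truncation of the series (4.3); the sketch below (assuming numpy) uses the trigonometric basis e_0 = 1, e_n(s) = √2 cos(nπs) of L²([0,1]), an arbitrary choice of orthonormal basis for which ∫_0^t e_n(s) ds = √2 sin(nπt)/(nπ):

    import numpy as np

    rng = np.random.default_rng(1)
    N, paths = 500, 2000                  # truncation order of (4.3)
    t = np.linspace(0.0, 1.0, 201)

    rows = [t] + [np.sqrt(2) * np.sin(n * np.pi * t) / (n * np.pi)
                  for n in range(1, N + 1)]
    basis = np.vstack(rows)               # integrals of e_0, ..., e_N
    xi = rng.standard_normal((paths, N + 1))
    B = xi @ basis                        # truncated version of (4.3)

    print(B[:, -1].var())                 # ~ 1.0 = Var(B_1)
    print(B[:, 100].var())                # ~ 0.5 = Var(B_{1/2})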

Clearly, B_t − B_s = J_1(1_{[s,t]}) is a centered Gaussian random variable with variance:

$$\mathrm{IE}[(B_t - B_s)^2] = \mathrm{IE}[|J_1(1_{[s,t]})|^2] = \|1_{[s,t]}\|^2_{L^2(\mathbb{R}_+)} = t - s, \tag{4.4}$$

cf. Figure 4.1. Moreover, the isometry formula (4.2) shows that if u_1, …, u_n are orthogonal in L²(R_+) then J_1(u_1), …, J_1(u_n) are also mutually orthogonal in L²(Ω), hence from Corollary 16.1 of Jacod and Protter [JP00] we get the following.

Proposition 4.6. Let u_1, …, u_n be an orthogonal family in L²(R_+), i.e.

$$\langle u_i, u_j\rangle_{L^2(\mathbb{R}_+)} = 0, \quad 1 \le i \ne j \le n.$$

Then (J_1(u_1), …, J_1(u_n)) is a vector of independent centered Gaussian random variables with respective variances ‖u_1‖²_{L²(R_+)}, …, ‖u_n‖²_{L²(R_+)}.

As a consequence of Proposition 4.6, (B_t)_{t∈R_+} has centered independent increments, hence it is a martingale.

Moreover, from Relation (4.4) and Remark 4.2 we deduce the following proposition.

Proposition 4.7. The Brownian motion (B_t)_{t∈R_+} is a normal martingale.

The n-dimensional Brownian motion will be constructed as (B_t¹, …, B_t^n)_{t∈R_+}, where (B_t¹)_{t∈R_+}, …, (B_t^n)_{t∈R_+} are independent copies of (B_t)_{t∈R_+}, cf. Figures 5.1 and 5.2 below.

The compensated Poisson process provides a second example of a normal martingale. Let now (τ_n)_{n≥1} denote a sequence of independent, identically exponentially distributed random variables with parameter λ > 0, i.e.

$$\mathrm{IE}[f(\tau_1,\dots,\tau_n)] = \lambda^n \int_0^\infty\!\!\cdots\!\int_0^\infty e^{-\lambda(s_1+\cdots+s_n)}\,f(s_1,\dots,s_n)\,ds_1\cdots ds_n,$$

for all sufficiently integrable measurable f : R_+^n → R. Let also

$$T_n = \tau_1 + \cdots + \tau_n, \quad n \ge 1.$$

We now consider the canonical point process associated to the family (T_k)_{k≥1} of jump times.

Definition 4.8. The point process (N_t)_{t∈R_+} defined as

$$N_t := \sum_{k=1}^\infty 1_{[T_k, \infty)}(t), \quad t \in \mathbb{R}_+, \tag{4.5}$$

is called the standard Poisson point process.

The process (N_t)_{t∈R_+} has independent increments which are distributed according to the Poisson law, i.e. for all 0 ≤ t_0 ≤ t_1 < ⋯ < t_n,

$$(N_{t_1} - N_{t_0}, \dots, N_{t_n} - N_{t_{n-1}})$$

is a vector of independent Poisson random variables with respective parameters

$$(\lambda(t_1 - t_0), \dots, \lambda(t_n - t_{n-1})).$$

We have IE[N_t] = λt and Var[N_t] = λt, t ∈ R_+, hence according to Remark 4.2 the process λ^{-1/2}(N_t − λt) is a normal martingale.
Compound Poisson processes provide other examples of normal martingales. Given (Y_k)_{k≥1} a sequence of independent identically distributed random variables, define the compound Poisson process as

$$X_t = \sum_{k=1}^{N_t} Y_k, \quad t \in \mathbb{R}_+.$$

The compensated compound Poisson martingale defined as

$$M_t := \frac{X_t - \lambda t\,\mathrm{IE}[Y_1]}{\sqrt{\lambda\,\mathrm{Var}[Y_1]}}, \quad t \in \mathbb{R}_+,$$

is a normal martingale.
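The normal martingale scaling of the compensated Poisson process is easy to confirm by simulation (a sketch assuming numpy; λ and t are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(2)
    lam, t, paths = 2.0, 5.0, 20000
    # jump times T_k = tau_1 + ... + tau_k, with i.i.d. Exp(lam) interarrivals
    tau = rng.exponential(1 / lam, size=(paths, 200))
    T = np.cumsum(tau, axis=1)
    N_t = (T <= t).sum(axis=1)                 # N_t as in (4.5)
    M_t = (N_t - lam * t) / np.sqrt(lam)       # compensated and normalized
    print(N_t.mean(), lam * t)                 # IE[N_t] = lam*t
    print(M_t.var(), t)                        # IE[M_t^2] = t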

4.2 Stochastic Integration

In this section we construct the Itô stochastic integral of square-integrable adapted processes with respect to normal martingales. The filtration (F_t)_{t∈R_+} is generated by (M_t)_{t∈R_+}:

$$\mathcal{F}_t = \sigma(M_s : 0 \le s \le t), \quad t \in \mathbb{R}_+.$$

A process (X_t)_{t∈R_+} is said to be F_t-adapted if X_t is F_t-measurable for all t ∈ R_+.

Definition 4.9. Let L^p_ad(Ω × R_+), p ∈ [1, ∞], denote the space of F_t-adapted processes in L^p(Ω × R_+).

Stochastic integrals will first be constructed as integrals of simple predictable processes.

Definition 4.10. Let S be a space of random variables dense in L²(Ω, F, P). Consider the space P of simple predictable processes (u_t)_{t∈R_+} of the form

$$u_t = \sum_{i=1}^n F_i\,1_{(t_{i-1}^n, t_i^n]}(t), \quad t \in \mathbb{R}_+, \tag{4.6}$$

where F_i is F_{t_{i-1}^n}-measurable, i = 1, …, n.

One easily checks that the set P of simple predictable processes forms a linear space. On the other hand, from Lemma 1.1 of Ikeda and Watanabe [IW89], p. 22 and p. 46, the space P of simple predictable processes is dense in L^p_ad(Ω × R_+) for all p ≥ 1.

Proposition 4.11. The stochastic integral with respect to the normal mar-

tingale (M_t)_{t∈R_+}, defined on simple predictable processes (u_t)_{t∈R_+} of the form (4.6) by

$$\int_0^\infty u_t\,dM_t := \sum_{i=1}^n F_i\,(M_{t_i} - M_{t_{i-1}}), \tag{4.7}$$

extends to u ∈ L²_ad(Ω × R_+) via the isometry formula

$$\mathrm{IE}\left[ \int_0^\infty u_t\,dM_t \int_0^\infty v_t\,dM_t\right] = \mathrm{IE}\left[ \int_0^\infty u_t v_t\,dt\right]. \tag{4.8}$$

Proof. We start by showing that the isometry (4.8) holds for the simple predictable process u = Σ_{i=1}^n G_i 1_{(t_{i-1},t_i]}, with 0 = t_0 < t_1 < ⋯ < t_n:

$$\begin{aligned}
\mathrm{IE}\left[ \left( \int_0^\infty u_t\,dM_t\right)^2\right]
&= \mathrm{IE}\left[ \left( \sum_{i=1}^n G_i\,(M_{t_i} - M_{t_{i-1}})\right)^2\right]\\
&= \mathrm{IE}\left[ \sum_{i=1}^n |G_i|^2 (M_{t_i} - M_{t_{i-1}})^2\right] + 2\,\mathrm{IE}\left[ \sum_{1 \le i < j \le n} G_i G_j (M_{t_i} - M_{t_{i-1}})(M_{t_j} - M_{t_{j-1}})\right]\\
&= \sum_{i=1}^n \mathrm{IE}\big[ |G_i|^2\,\mathrm{IE}[(M_{t_i} - M_{t_{i-1}})^2 \mid \mathcal{F}_{t_{i-1}}]\big]\\
&\quad + 2 \sum_{1 \le i < j \le n} \mathrm{IE}\big[ G_i G_j (M_{t_i} - M_{t_{i-1}})\,\mathrm{IE}[(M_{t_j} - M_{t_{j-1}}) \mid \mathcal{F}_{t_{j-1}}]\big]\\
&= \sum_{i=1}^n \mathrm{IE}[|G_i|^2]\,(t_i - t_{i-1})\\
&= \mathrm{IE}\left[ \int_0^\infty |u_t|^2\,dt\right].
\end{aligned}$$

The extension to u ∈ L²_ad(Ω × R_+) then follows by the density of P in L²_ad(Ω × R_+). □

Proposition 4.12. For any u ∈ L²_ad(Ω × R_+) we have

$$\mathrm{IE}\left[ \int_0^\infty u_s\,dM_s\,\Big|\,\mathcal{F}_t\right] = \int_0^t u_s\,dM_s, \quad t \in \mathbb{R}_+.$$

In particular, ∫_0^t u_s dM_s is F_t-measurable, t ∈ R_+.

Proof. Let u ∈ P have the form u = G 1_{(a,b]}, where G is bounded and F_a-measurable.
i) If 0 ≤ a ≤ t ≤ b we have

$$\begin{aligned}
\mathrm{IE}\left[ \int_0^\infty u_s\,dM_s\,\Big|\,\mathcal{F}_t\right] &= \mathrm{IE}[G(M_b - M_a) \mid \mathcal{F}_t]\\
&= G\,\mathrm{IE}[M_b - M_a \mid \mathcal{F}_t]\\
&= G\,\mathrm{IE}[M_b - M_t \mid \mathcal{F}_t] + G\,\mathrm{IE}[M_t - M_a \mid \mathcal{F}_t]\\
&= G(M_t - M_a)\\
&= \int_0^\infty 1_{[0,t]}(s)\,u_s\,dM_s.
\end{aligned}$$

ii) If 0 ≤ t ≤ a we have for all bounded F_t-measurable random variables F:

$$\mathrm{IE}\left[ F \int_0^\infty u_s\,dM_s\right] = \mathrm{IE}[FG(M_b - M_a)] = 0,$$

hence

$$\mathrm{IE}\left[ \int_0^\infty u_s\,dM_s\,\Big|\,\mathcal{F}_t\right] = \mathrm{IE}[G(M_b - M_a) \mid \mathcal{F}_t] = 0 = \int_0^\infty 1_{[0,t]}(s)\,u_s\,dM_s.$$

This statement is extended by linearity and density, since from the continuity of the conditional expectation on L² we have:

$$\begin{aligned}
&\mathrm{IE}\left[ \left( \int_0^t u_s\,dM_s - \mathrm{IE}\left[ \int_0^\infty u_s\,dM_s\,\Big|\,\mathcal{F}_t\right]\right)^2\right]\\
&= \lim_{n\to\infty} \mathrm{IE}\left[ \left( \int_0^t u_s^n\,dM_s - \mathrm{IE}\left[ \int_0^\infty u_s\,dM_s\,\Big|\,\mathcal{F}_t\right]\right)^2\right]\\
&= \lim_{n\to\infty} \mathrm{IE}\left[ \left( \mathrm{IE}\left[ \int_0^\infty u_s^n\,dM_s - \int_0^\infty u_s\,dM_s\,\Big|\,\mathcal{F}_t\right]\right)^2\right]\\
&\le \lim_{n\to\infty} \mathrm{IE}\left[ \mathrm{IE}\left[ \left( \int_0^\infty u_s^n\,dM_s - \int_0^\infty u_s\,dM_s\right)^2\Big|\,\mathcal{F}_t\right]\right]\\
&\le \lim_{n\to\infty} \mathrm{IE}\left[ \left( \int_0^\infty (u_s^n - u_s)\,dM_s\right)^2\right]
\end{aligned}$$

$$= \lim_{n\to\infty} \mathrm{IE}\left[ \int_0^\infty |u_s^n - u_s|^2\,ds\right] = 0. \qquad\square$$

In particular, since F_0 = {∅, Ω}, the Itô integral is a centered random variable by Proposition 4.12, i.e.

$$\mathrm{IE}\left[ \int_0^\infty u_s\,dM_s\right] = 0. \tag{4.9}$$

The following is an immediate corollary of Proposition 4.12.

Corollary 4.13. The indefinite stochastic integral (∫_0^t u_s dM_s)_{t∈R_+} of u ∈ L²_ad(Ω × R_+) is a martingale, i.e.:

$$\mathrm{IE}\left[ \int_0^t u_r\,dM_r\,\Big|\,\mathcal{F}_s\right] = \int_0^s u_r\,dM_r, \quad 0 \le s \le t.$$

Clearly, from Definition 4.5 and (4.7), J_1(u) coincides with the single stochastic integral I_1(u) with respect to (B_t)_{t∈R_+}. On the other hand, since the standard compensated Poisson martingale (M_t)_{t∈R_+} = (N_t − t)_{t∈R_+} is a normal martingale, the integral

$$\int_0^T u_t\,dM_t$$

is also defined in the Itô sense, i.e. as an L²(Ω)-limit of stochastic integrals of simple adapted processes.
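A Monte Carlo check of the isometry (4.8) and of the centering (4.9) for M = B and the adapted left-endpoint integrand u_s = B_{t_{i-1}} on (t_{i-1}, t_i] (a sketch assuming numpy; here IE[(∫_0^1 B_s dB_s)²] = IE[∫_0^1 B_s² ds] = 1/2):

    import numpy as np

    rng = np.random.default_rng(3)
    T, n, paths = 1.0, 500, 20000
    dt = T / n
    dB = rng.standard_normal((paths, n)) * np.sqrt(dt)
    B_left = np.cumsum(dB, axis=1) - dB    # B_{t_{i-1}}: left endpoint values
    I = (B_left * dB).sum(axis=1)          # sum of B_{t_{i-1}} (M_{t_i} - M_{t_{i-1}})
    print(I.mean())                        # ~ 0, cf. (4.9)
    print(I.var())                         # ~ 0.5 = IE[ int_0^1 B_s^2 ds ]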

4.3 Quadratic Variation

We now introduce the notion of quadratic variation for normal martingales.

Definition 4.14. The quadratic variation of (M_t)_{t∈R_+} is the process ([M,M]_t)_{t∈R_+} defined as

$$[M,M]_t = M_t^2 - 2\int_0^t M_s\,dM_s, \quad t \in \mathbb{R}_+. \tag{4.10}$$

Let now

$$\pi^n = \{ 0 = t_0^n < t_1^n < \cdots < t_n^n = t \}$$

denote a family of subdivisions of [0,t], such that |π^n| := max_{1≤i≤n}(t_i^n − t_{i-1}^n) tends to 0 as n goes to infinity.

Proposition 4.15. We have

$$[M,M]_t = \lim_{n\to\infty} \sum_{i=1}^n (M_{t_i^n} - M_{t_{i-1}^n})^2, \quad t \ge 0,$$

where the limit exists in L²(Ω) and is independent of the sequence (π^n)_{n∈N} of subdivisions chosen.

Proof. As an immediate consequence of the definition (4.7) of the stochastic integral, we have

$$M_s(M_t - M_s) = \int_s^t M_s\,dM_r, \quad 0 \le s \le t,$$

and

$$\begin{aligned}
[M,M]_{t_i^n} - [M,M]_{t_{i-1}^n} &= M_{t_i^n}^2 - M_{t_{i-1}^n}^2 - 2\int_{t_{i-1}^n}^{t_i^n} M_s\,dM_s\\
&= (M_{t_i^n} - M_{t_{i-1}^n})^2 + 2\int_{t_{i-1}^n}^{t_i^n} (M_{t_{i-1}^n} - M_s)\,dM_s,
\end{aligned}$$

hence

$$\begin{aligned}
&\mathrm{IE}\left[ \left( [M,M]_t - \sum_{i=1}^n (M_{t_i^n} - M_{t_{i-1}^n})^2\right)^2\right]\\
&= \mathrm{IE}\left[ \left( \sum_{i=1}^n \big( [M,M]_{t_i^n} - [M,M]_{t_{i-1}^n} - (M_{t_i^n} - M_{t_{i-1}^n})^2\big)\right)^2\right]\\
&= 4\,\mathrm{IE}\left[ \left( \sum_{i=1}^n \int_0^t 1_{(t_{i-1}^n, t_i^n]}(s)\,(M_s - M_{t_{i-1}^n})\,dM_s\right)^2\right]\\
&= 4\,\mathrm{IE}\left[ \sum_{i=1}^n \int_{t_{i-1}^n}^{t_i^n} (M_s - M_{t_{i-1}^n})^2\,ds\right]\\
&= 4\,\mathrm{IE}\left[ \sum_{i=1}^n \int_{t_{i-1}^n}^{t_i^n} (s - t_{i-1}^n)\,ds\right]\\
&\le 4\,t\,|\pi^n|. \qquad\square
\end{aligned}$$

Proposition 4.16. The quadratic variation of Brownian motion (B_t)_{t∈R_+} is

$$[B,B]_t = t, \quad t \in \mathbb{R}_+.$$

Proof. (cf. e.g. Protter [Pro05], Theorem I-28). For every subdivision {0 = t_0^n < ⋯ < t_n^n = t} we have, using the independence of increments:

$$\begin{aligned}
\mathrm{IE}\left[ \left( t - \sum_{i=1}^n (B_{t_i^n} - B_{t_{i-1}^n})^2\right)^2\right]
&= \mathrm{IE}\left[ \left( \sum_{i=1}^n \big( (B_{t_i^n} - B_{t_{i-1}^n})^2 - (t_i^n - t_{i-1}^n)\big)\right)^2\right]\\
&= \sum_{i=1}^n (t_i^n - t_{i-1}^n)^2\,\mathrm{IE}\left[ \left( \frac{(B_{t_i^n} - B_{t_{i-1}^n})^2}{t_i^n - t_{i-1}^n} - 1\right)^2\right]\\
&= \mathrm{IE}[(Z^2 - 1)^2] \sum_{i=1}^n (t_i^n - t_{i-1}^n)^2\\
&\le t\,|\pi^n|\,\mathrm{IE}[(Z^2 - 1)^2],
\end{aligned}$$

where Z is a standard Gaussian random variable. □
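The convergence in Propositions 4.15-4.16 can be observed on a single simulated path by coarsening a fine discretization (a sketch assuming numpy):

    import numpy as np

    rng = np.random.default_rng(4)
    n = 2**16                               # fine mesh for one Brownian path
    dB = rng.standard_normal(n) * np.sqrt(1.0 / n)
    B = np.concatenate([[0.0], np.cumsum(dB)])
    for k in [4, 8, 12, 16]:
        step = 2**(16 - k)                  # subdivision with 2^k intervals
        incr = np.diff(B[::step])
        print(2**k, (incr**2).sum())        # sums of squared increments -> t = 1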

A simple analysis of the Poisson paths shows that the quadratic variation of the compensated Poisson process (M_t)_{t∈R_+} = (N_t − t)_{t∈R_+} is

$$[M,M]_t = N_t, \quad t \in \mathbb{R}_+.$$

Similarly, for the compensated compound Poisson martingale

$$M_t := \frac{X_t - \lambda t\,\mathrm{IE}[Y_1]}{\sqrt{\lambda\,\mathrm{Var}[Y_1]}}, \quad t \in \mathbb{R}_+,$$

we have

$$[M,M]_t = \frac{1}{\lambda\,\mathrm{Var}[Y_1]} \sum_{k=1}^{N_t} |Y_k|^2, \quad t \in \mathbb{R}_+.$$

Definition 4.17. The angle bracket ⟨M,M⟩_t is the unique increasing process such that

$$M_t^2 - \langle M,M\rangle_t, \quad t \in \mathbb{R}_+,$$

is a martingale. As a consequence of Remark 4.3 we have

$$\langle M,M\rangle_t = t, \quad t \in \mathbb{R}_+,$$

for every normal martingale. Moreover,

$$[M,M]_t - \langle M,M\rangle_t, \quad t \in \mathbb{R}_+,$$

is also a martingale as a consequence of Remark 4.3 and Proposition 4.12, since by Definition 4.14 we have

$$[M,M]_t - \langle M,M\rangle_t = [M,M]_t - t = M_t^2 - t - 2\int_0^t M_s\,dM_s, \quad t \in \mathbb{R}_+. \tag{4.11}$$

Definition 4.18. We say that the martingale (M_t)_{t∈R_+} has the predictable representation property if any square-integrable martingale (X_t)_{t∈R_+} with respect to (F_t)_{t∈R_+} can be represented as

$$X_t = X_0 + \int_0^t u_s\,dM_s, \quad t \in \mathbb{R}_+, \tag{4.12}$$

where (u_t)_{t∈R_+} ∈ L²_ad(Ω × R_+) is an adapted process such that u 1_{[0,T]} ∈ L²(Ω × R_+) for all T > 0.

It is known that Brownian motion and the compensated Poisson process have the predictable representation property. This is however not true of compound Poisson processes in general.

Definition 4.19. An equation of the form

$$[M,M]_t = t + \int_0^t \phi_s\,dM_s, \quad t \in \mathbb{R}_+, \tag{4.13}$$

where (φ_t)_{t∈R_+} is a square-integrable adapted process, is called a structure equation, cf. Émery [É90].

As a consequence of (4.11) and (4.12) we have the following proposition.

Proposition 4.20. Assume that (M_t)_{t∈R_+} is in L⁴(Ω) and has the predictable representation property. Then (M_t)_{t∈R_+} satisfies the structure equation (4.13), i.e. there exists a square-integrable adapted process (φ_t)_{t∈R_+} such that

$$[M,M]_t = t + \int_0^t \phi_s\,dM_s, \quad t \in \mathbb{R}_+.$$

Proof. Since ([M,M]_t − t)_{t∈R_+} is a martingale, the predictable representation property shows the existence of a square-integrable adapted process (φ_t)_{t∈R_+} such that

$$[M,M]_t - t = \int_0^t \phi_s\,dM_s, \quad t \in \mathbb{R}_+. \qquad\square$$

In particular,

a) the Brownian motion (B_t)_{t∈R_+} satisfies the structure equation (4.13) with φ_t = 0, since the quadratic variation of (B_t)_{t∈R_+} is [B,B]_t = t, t ∈ R_+. Informally we have ΔB_t = ±√Δt with equal probabilities 1/2.

b) The compensated Poisson martingale (M_t)_{t∈R_+} = (λ(N_t − t/λ²))_{t∈R_+}, where (N_t)_{t∈R_+} is a standard Poisson process with intensity 1/λ², satisfies the structure equation (4.13) with φ_t = λ ∈ R, t ∈ R_+, since

$$[M,M]_t = \lambda^2 N_t = t + \lambda M_t, \quad t \in \mathbb{R}_+.$$

In this case, ΔM_t ∈ {0, λ} with respective probabilities 1 − λ^{-2}Δt and λ^{-2}Δt.

The Azéma martingales correspond to φ_t = βM_t, β ∈ [−2, 0), and provide examples of processes having the chaos representation property, but whose increments are not independent, cf. Émery [É90]. Note that not all normal martingales have the predictable representation property. For instance, the compensated compound Poisson martingales do not satisfy a structure equation and do not have the predictable representation property in general.

4.4 Itô's Formula

We consider a normal martingale (M_t)_{t∈R_+} satisfying the structure equation

$$d[M,M]_t = dt + \phi_t\,dM_t.$$

Such an equation is satisfied in particular if (M_t)_{t∈R_+} has the predictable representation property, cf. Proposition 4.20.

The following is a statement of Itô's formula for normal martingales, cf. Émery [É90], Proposition 2, p. 70.

Proposition 4.21. Assume that φ ∈ L^∞_ad(R_+ × Ω). Let (X_t)_{t∈R_+} be a process given by

$$X_t = X_0 + \int_0^t u_s\,dM_s + \int_0^t v_s\,ds, \tag{4.14}$$

where (u_s)_{s∈R_+}, (v_s)_{s∈R_+} are adapted processes in L²_ad([0,t] × Ω) for all t > 0. We have for f ∈ C^{1,2}(R_+ × R) the Itô formula (4.15):

$$\begin{aligned}
f(t, X_t) - f(0, X_0) &= \int_0^t \frac{f(s, X_{s^-} + \phi_s u_s) - f(s, X_{s^-})}{\phi_s}\,dM_s\\
&\quad + \int_0^t \frac{f(s, X_s + \phi_s u_s) - f(s, X_s) - \phi_s u_s\,\frac{\partial f}{\partial x}(s, X_s)}{\phi_s^2}\,ds\\
&\quad + \int_0^t \frac{\partial f}{\partial x}(s, X_s)\,v_s\,ds + \int_0^t \frac{\partial f}{\partial s}(s, X_s)\,ds.
\end{aligned}$$

If φ_s = 0, the terms

$$\frac{f(X_{s^-} + \phi_s u_s) - f(X_{s^-})}{\phi_s} \qquad\text{and}\qquad \frac{f(X_s + \phi_s u_s) - f(X_s) - \phi_s u_s f'(X_s)}{\phi_s^2}$$

can be replaced by their respective limits u_s f'(X_{s^-}) and ½ u_s² f''(X_{s^-}) as φ_s → 0.

Examples

i) For the d-dimensional Brownian motion (B_t)_{t∈R_+}, φ = 0 and the Itô formula reads

$$f(B_t) = f(B_0) + \int_0^t \langle \nabla f(B_s), dB_s\rangle + \frac{1}{2}\int_0^t \Delta f(B_s)\,ds,$$

for all C² functions f, hence

$$\begin{aligned}
T_t f(x) &= \mathrm{IE}_x[f(B_t)]\\
&= \mathrm{IE}_x\left[ f(x) + \int_0^t \langle \nabla f(B_s), dB_s\rangle + \frac{1}{2}\int_0^t \Delta f(B_s)\,ds\right]\\
&= \mathrm{IE}_x\left[ f(x) + \frac{1}{2}\int_0^t \Delta f(B_s)\,ds\right]\\
&= f(x) + \frac{1}{2}\int_0^t \mathrm{IE}_x[\Delta f(B_s)]\,ds\\
&= f(x) + \frac{1}{2}\int_0^t T_s \Delta f(x)\,ds,
\end{aligned}$$

and as a Markov process, (B_t)_{t∈R_+} has generator ½Δ.
ii) For the compensated Poisson process (N_t − t)_{t∈R_+} we have φ_s = 1, s ∈ R_+, hence

$$\begin{aligned}
f(N_t - t) &= f(0) + \int_0^t \big( f(1 + N_{s^-} - s) - f(N_{s^-} - s)\big)\,d(N_s - s)\\
&\quad + \int_0^t \big( f(1 + N_s - s) - f(N_s - s) - f'(N_s - s)\big)\,ds,
\end{aligned}$$

which can actually be recovered by elementary calculus. Hence the generator of the compensated Poisson process is

$$\mathcal{L}f(x) = f(x+1) - f(x) - f'(x).$$

From Itô's formula we have, for any stopping time τ and C² function u, under suitable integrability conditions:

$$\mathrm{IE}_x[u(B_\tau)] = u(x) + \mathrm{IE}_x\left[ \int_0^\tau \langle \nabla u(B_s), dB_s\rangle\right] + \frac{1}{2}\,\mathrm{IE}_x\left[ \int_0^\tau \Delta u(B_s)\,ds\right]$$

$$= u(x) + \mathrm{IE}_x\left[ \int_0^\infty 1_{\{s \le \tau\}}\,\langle \nabla u(B_s), dB_s\rangle\right] + \frac{1}{2}\,\mathrm{IE}_x\left[ \int_0^\tau \Delta u(B_s)\,ds\right] = u(x) + \frac{1}{2}\,\mathrm{IE}_x\left[ \int_0^\tau \Delta u(B_s)\,ds\right], \tag{4.16}$$

which is called Dynkin's formula, cf. Dynkin [Dyn65], Theorem 5.1. Let now

$$\sigma : \mathbb{R}_+ \times \mathbb{R}^n \to \mathbb{R}^d \otimes \mathbb{R}^n$$

and

$$b : \mathbb{R}_+ \times \mathbb{R}^n \to \mathbb{R}^n$$

satisfy the global Lipschitz condition

$$\|\sigma(t,x) - \sigma(t,y)\|^2 + \|b(t,x) - b(t,y)\|^2 \le K^2\,\|x-y\|^2,$$

t ∈ R_+, x, y ∈ R^n. Then there exists (cf. e.g. [Pro05], Theorem V-7) a unique strong solution to the stochastic differential equation

$$X_t = X_0 + \int_0^t \sigma(s, X_s)\,dB_s + \int_0^t b(s, X_s)\,ds.$$

In addition, when σ(s,x) = σ(x) and b(s,x) = b(x) are time independent, (X_t)_{t∈R_+} is a Markov process with generator

$$\mathcal{L} = \frac{1}{2}\sum_{i,j=1}^n a_{i,j}(x)\,\frac{\partial^2}{\partial x_i \partial x_j} + \sum_{i=1}^n b_i(x)\,\frac{\partial}{\partial x_i},$$

where a = σ^T σ.
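A minimal Euler scheme sketch for this equation (assuming numpy; the Ornstein-Uhlenbeck coefficients b(x) = −θx and σ(x) = σ_0 are illustrative choices, for which the moments of X_T are known in closed form):

    import numpy as np

    rng = np.random.default_rng(5)
    theta, sig, T, n, paths = 1.0, 0.5, 2.0, 1000, 10000
    dt = T / n
    X = np.full(paths, 1.0)                      # X_0 = 1
    for _ in range(n):
        dB = rng.standard_normal(paths) * np.sqrt(dt)
        X += -theta * X * dt + sig * dB          # one Euler step of the SDE
    print(X.mean(), np.exp(-theta * T))          # IE[X_T] = exp(-theta*T)
    print(X.var(), sig**2 / (2 * theta) * (1 - np.exp(-2 * theta * T)))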

4.5 Killed Brownian Motion

Let D be a domain in R^n. The transition operator of the Brownian motion (B_t^D)_{t∈[0,τ_∂D]} killed on ∂D is defined as

$$q_t^D(x,A) = P_x(B_t^D \in A,\ \tau_{\partial D} > t),$$

where

$$\tau_{\partial D} = \inf\{ t > 0 : B_t \in \partial D \}$$

is the first hitting time of ∂D by (B_t)_{t∈R_+}. By the strong Markov property of Definition 3.4 we have

$$\begin{aligned}
P_x(B_t \in A) &= P_x(B_t \in A,\ t < \tau_{\partial D}) + P_x(B_t \in A,\ t \ge \tau_{\partial D})\\
&= q_t^D(x,A) + P_x(B_t \in A,\ t \ge \tau_{\partial D})\\
&= q_t^D(x,A) + \mathrm{IE}_x[1_{\{B_t \in A\}}\,1_{\{t \ge \tau_{\partial D}\}}]\\
&= q_t^D(x,A) + \mathrm{IE}_x[\,\mathrm{IE}[1_{\{B_t \in A\}}\,1_{\{t \ge \tau_{\partial D}\}} \mid \mathcal{F}_{\tau_{\partial D}}]]
\end{aligned}$$

$$\begin{aligned}
&= q_t^D(x,A) + \mathrm{IE}_x[1_{\{t \ge \tau_{\partial D}\}}\,\mathrm{IE}[1_{\{B_t \in A\}} \mid \mathcal{F}_{\tau_{\partial D}}]]\\
&= q_t^D(x,A) + \mathrm{IE}_x[1_{\{t \ge \tau_{\partial D}\}}\,\mathrm{IE}[1_{\{B_{t-\tau_{\partial D}} \in A\}} \mid B_0 = x]_{x = B_{\tau_{\partial D}}}]\\
&= q_t^D(x,A) + \mathrm{IE}_x[T_{t-\tau_{\partial D}} 1_A(B_{\tau_{\partial D}})\,1_{\{t \ge \tau_{\partial D}\}}],
\end{aligned}$$

hence

$$q_t^D(x,A) = P_x(B_t \in A) - \mathrm{IE}_x[T_{t-\tau_{\partial D}} 1_A(B_{\tau_{\partial D}})\,1_{\{t \ge \tau_{\partial D}\}}],$$

and the killed process has the transition densities

$$p_t^D(x,y) := p_t(x,y) - \mathrm{IE}_x[p_{t-\tau_{\partial D}}(B_{\tau_{\partial D}}, y)\,1_{\{t \ge \tau_{\partial D}\}}], \quad x, y \in D,\ t > 0. \tag{4.17}$$

The Green kernel is defined as

$$g_D(x,y) := \int_0^\infty p_t^D(x,y)\,dt,$$

and the associated Green potential is

$$G_D\mu(x) = \int_D g_D(x,y)\,\mu(dy),$$

with in particular

$$G_D f(x) := \mathrm{IE}_x\left[ \int_0^{\tau_{\partial D}} f(B_t)\,dt\right]$$

when µ(dx) = f(x)dx has density f with respect to the Lebesgue measure.

When D = R^n we have τ_∂D = ∞ a.s., hence G_{R^n} = G.

Theorem 4.22. The function (x,y) ↦ g_D(x,y) is symmetric and continuous on D², and x ↦ g_D(x,y) is harmonic on D \ {y}, y ∈ D.

Proof. For all bounded domains A in R^n, the function G_D 1_A defined as

$$G_D 1_A(x) = \int_{\mathbb{R}^n} g_D(x,y)\,1_A(y)\,dy = \mathrm{IE}_x\left[ \int_0^{\tau_{\partial D}} 1_A(B_t)\,dt\right]$$

has the mean value property in D \ Ā, and the property extends to g_D by linear combinations and a limiting argument. □

From (4.17), the associated λ-potential kernel (3.4) is given by

$$g^\lambda(x,y) = \int_0^\infty e^{-\lambda t}\,p_t(x,y)\,dt = g_D^\lambda(x,y) + \int_{\partial D} g^\lambda(z,y)\,h_D^\lambda(x,dz), \tag{4.18}$$

where

$$g_D^\lambda(x,y) := \int_0^\infty e^{-\lambda t}\,q_t^D(x,y)\,dt,$$

and

$$h_D^\lambda(x,A) = \mathrm{IE}_x\big[ e^{-\lambda\tau_{\partial D}}\,1_{\{B_{\tau_{\partial D}} \in A\}}\,1_{\{\tau_{\partial D} < \infty\}}\big].$$

5 Probabilistic Interpretations

5.1 Harmonicity

We start with two simple examples of the connection between harmonic functions and stochastic calculus. First, we show that Proposition 2.2, i.e. the fact that harmonic functions satisfy the mean value property, can be recovered using stochastic calculus. Let

\[
\tau_r = \inf\{t\in\mathbb{R}_+ : B_t\in S(x,r)\}
\]
denote the first exit time of $(B_t)_{t\in\mathbb{R}_+}$ from the open ball $B(x,r)$.

Due to the spatial symmetry of Brownian motion, $B_{\tau_r}$ is uniformly distributed on $S(x,r)$, hence

\[
\mathrm{IE}_x[u(B_{\tau_r})] = \int_{S(x,r)} u(y)\,\sigma^r_x(dy),
\]
where $\sigma^r_x$ denotes the uniform surface measure on the sphere $S(x,r)$.

On the other hand, from Dynkin’s formula (4.16) we have

\[
\mathrm{IE}_x[u(B_{\tau_r})] = u(x) + \frac{1}{2}\,\mathrm{IE}_x\Big[\int_0^{\tau_r}\Delta u(B_s)\,ds\Big].
\]
Hence the condition $\Delta u = 0$ implies the mean value property

\[
u(x) = \int_{S(x,r)} u(y)\,\sigma^r_x(dy), \qquad (5.1)
\]
and similarly the condition $\Delta u\le 0$ implies
\[
u(x) \ge \int_{S(x,r)} u(y)\,\sigma^r_x(dy). \qquad (5.2)
\]
From Remark 3, page 134 of Dynkin [Dyn65], for all $n\ge 1$ we have
\[
\frac{1}{2}\Delta u(x) = \lim_{n\to\infty}\frac{\mathrm{IE}_x[u(B_{\tau_{1/n}})]-u(x)}{\mathrm{IE}_x[\tau_{1/n}]}, \qquad (5.3)
\]
which shows conversely that (5.1), resp. (5.2), implies $\Delta u = 0$, resp. $\Delta u\le 0$, which recovers Proposition 2.2.
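A direct numerical check of the mean value property (5.1) (an illustration added here, not in the original text): $u(x_1,x_2)=x_1^2-x_2^2$ is harmonic in $\mathbb{R}^2$, and its average over uniformly sampled points of a circle matches its value at the centre.

    import numpy as np

    rng = np.random.default_rng(1)
    u = lambda p: p[..., 0]**2 - p[..., 1]**2     # harmonic in R^2

    x0, r = np.array([0.5, -0.3]), 2.0
    theta = rng.uniform(0.0, 2*np.pi, 1_000_000)  # uniform on S(x0, r)
    pts = x0 + r * np.stack([np.cos(theta), np.sin(theta)], axis=-1)
    print(u(pts).mean(), u(x0))   # both ~ 0.16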

Next, we recover the superharmonicity property of the potential

\[
R_0f(x) = \int_0^\infty T_tf(x)\,dt \qquad (5.4)
\]

\[
= \mathrm{IE}_x\Big[\int_0^\infty f(B_t)\,dt\Big]
= \int_0^\infty \int_{\mathbb{R}^n} f(y)\,p_t(x,y)\,dy\,dt
= \frac{2}{n-2}\int_{\mathbb{R}^n} h_x(y)f(y)\,dy, \qquad x\in\mathbb{R}^n.
\]

Proposition 5.1. Let $f$ be a non-negative function on $\mathbb{R}^n$. Then the potential $R_0f$ given in (5.4) is a superharmonic function provided it is $\mathcal{C}^2$ on $\mathbb{R}^n$.

Proof. For all $r>0$, using the strong Markov property (3.5) we have

letting $g := R_0f$,
\[
g(x) = \mathrm{IE}_x\Big[\int_0^{\tau_r} f(B_t)\,dt\Big] + \mathrm{IE}_x\Big[\int_{\tau_r}^\infty f(B_t)\,dt\Big]
= \mathrm{IE}_x\Big[\int_0^{\tau_r} f(B_t)\,dt\Big] + \mathrm{IE}_x\Big[\mathrm{IE}\Big[\int_{\tau_r}^\infty f(B_t)\,dt\ \Big|\ B_{\tau_r}\Big]\Big]
= \mathrm{IE}_x\Big[\int_0^{\tau_r} f(B_t)\,dt\Big] + \mathrm{IE}_x[g(B_{\tau_r})]
\ge \mathrm{IE}_x[g(B_{\tau_r})]
= \int_{S(x,r)} g(y)\,\sigma^r_x(dy),
\]
which shows that $g$ is $\Delta$-superharmonic from Proposition 2.2. □

Other non-negative superharmonic functions can also be constructed by convolution: if $f$ is $\Delta$-superharmonic and $g$ is non-negative and sufficiently integrable, then
\[
x \mapsto \int_{\mathbb{R}^n} g(y)f(x-y)\,dy
\]
is non-negative and $\Delta$-superharmonic.

We now turn to an example in discrete time, with the notation of (3.1). Here a function $f$ is called harmonic when $(I-P)f = 0$, and superharmonic when $(I-P)f \ge 0$.

Proposition 5.2. A function $f$ is superharmonic if and only if the sequence $(f(X_n))_{n\in\mathbb{N}}$ is a supermartingale.

Proof. We have

\[
\mathrm{IE}[f(X_m)\mid\mathcal{F}_n] = \mathrm{IE}[f(X_{m-n})\mid X_0=x]_{x=X_n}
= P^{m-n}f(X_n)
\le f(X_n).
\]

□
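A small simulation illustrating Proposition 5.2 (added here for illustration): for the simple random walk on $\mathbb{Z}$, $f(x)=-x^2$ satisfies $(I-P)f = 1 \ge 0$, hence is superharmonic, and $\mathrm{IE}[f(X_m)] = -m$ is indeed non-increasing in $m$.

    import numpy as np

    rng = np.random.default_rng(6)
    f = lambda x: -x**2       # (I-P)f(x) = -x^2 + (x^2 + 1) = 1 >= 0

    steps = rng.choice([-1, 1], size=(100_000, 20))
    X = np.cumsum(steps, axis=1)          # random walk paths started at 0
    print(f(X).mean(axis=0)[:5])          # ~ -1, -2, -3, -4, -5: decreasing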

5.2 Dirichlet Problem

In this section we revisit the Dirichlet problem using probabilistic tools.

Theorem 5.3. Consider an open domain $D$ in $\mathbb{R}^n$ and $f$ a function on $\mathbb{R}^n$, and assume that the Dirichlet problem
\[
\begin{cases}
\Delta u(x) = 0, & x\in D,\\
u(x) = f(x), & x\in\partial D,
\end{cases}
\]
has a $\mathcal{C}^2$ solution $u$. Then we have
\[
u(x) = \mathrm{IE}_x[f(B_{\tau_{\partial D}})], \qquad x\in\bar D,
\]
where
\[
\tau_{\partial D} = \inf\{t>0 : B_t\in\partial D\}
\]
is the first hitting time of $\partial D$ by $(B_t)_{t\in\mathbb{R}_+}$.

Proof. For all $r>0$ such that $B(x,r)\subset D$ we have

\[
u(x) = \mathrm{IE}_x[f(B_{\tau_{\partial D}})]
= \mathrm{IE}_x\big[\mathrm{IE}[f(B_{\tau_{\partial D}})\mid B_{\tau_r}]\big]
= \mathrm{IE}_x[u(B_{\tau_r})]
= \int_{S(x,r)} u(y)\,\sigma^r_x(dy), \qquad x\in\bar D,
\]
hence $u$ has the mean value property, thus $\Delta u = 0$ by Proposition 2.2. □
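A Monte Carlo sketch of Theorem 5.3 on the unit disc (an added illustration with arbitrary boundary data): for $f(y)=y_1$ on the unit circle the harmonic extension is $u(x)=x_1$, and averaging $f$ at the Euler-discretized exit point recovers it. The crude scheme below ignores the overshoot bias at the boundary.

    import numpy as np

    rng = np.random.default_rng(2)

    def dirichlet_mc(x, f, n_paths=5000, dt=1e-3):
        """Estimate u(x) = E_x[f(B_tau)] for the unit disc in R^2."""
        b = np.tile(np.asarray(x, float), (n_paths, 1))
        alive = np.ones(n_paths, dtype=bool)
        vals = np.zeros(n_paths)
        while alive.any():
            b[alive] += np.sqrt(dt) * rng.standard_normal((alive.sum(), 2))
            hit = alive & ((b**2).sum(axis=1) >= 1.0)
            if hit.any():
                y = b[hit] / np.linalg.norm(b[hit], axis=1, keepdims=True)
                vals[hit] = f(y)          # project exit point onto the circle
            alive &= ~hit
        return vals.mean()

    f = lambda y: y[..., 0]       # boundary data; harmonic extension u(x) = x_1
    print(dirichlet_mc((0.3, 0.4), f), 0.3)   # both ~ 0.3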

5.3 Poisson Equation

In the next theorem we show that the resolvent
\[
R_\lambda f(x) = \mathrm{IE}_x\Big[\int_0^\infty e^{-\lambda t} f(B_t)\,dt\Big]
= \int_0^\infty e^{-\lambda t}\,T_tf(x)\,dt, \qquad x\in\mathbb{R}^n,
\]
solves the Poisson equation. This is consistent with the fact that $R_\lambda = (\lambda I - \Delta/2)^{-1}$, cf. Relation (3.3) in Theorem 3.2.

Theorem 5.4. Let $\lambda>0$ and $f$ a non-negative function on $\mathbb{R}^n$, and assume that the Poisson equation

\[
\frac{1}{2}\Delta u(x) - \lambda u(x) = -f(x), \qquad x\in\mathbb{R}^n, \qquad (5.5)
\]
has a $\mathcal{C}^2_b$ solution $u$. Then we have
\[
u(x) = \mathrm{IE}_x\Big[\int_0^\infty e^{-\lambda t} f(B_t)\,dt\Big], \qquad x\in\mathbb{R}^n.
\]

Proof. By Itô's formula,

\[
e^{-\lambda t}u(B_t) = u(B_0) + \int_0^t e^{-\lambda s}\langle\nabla u(B_s), dB_s\rangle
+ \frac{1}{2}\int_0^t e^{-\lambda s}\Delta u(B_s)\,ds - \lambda\int_0^t e^{-\lambda s}u(B_s)\,ds
= u(B_0) + \int_0^t e^{-\lambda s}\langle\nabla u(B_s), dB_s\rangle - \int_0^t e^{-\lambda s}f(B_s)\,ds,
\]
hence by taking expectations on both sides and using (4.9) we have

\[
e^{-\lambda t}\,\mathrm{IE}_x[u(B_t)] = u(x) - \mathrm{IE}_x\Big[\int_0^t e^{-\lambda s}f(B_s)\,ds\Big],
\]
and letting $t$ tend to infinity we get

\[
0 = u(x) - \mathrm{IE}_x\Big[\int_0^\infty e^{-\lambda s}f(B_s)\,ds\Big]. \qquad □
\]
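A time-discretized Monte Carlo check of Theorem 5.4 (an added illustration): with $u(x)=\cos x$ on $\mathbb{R}$ one has $\frac{1}{2}u'' - \lambda u = -(\lambda+\frac{1}{2})\cos x$, so $f(x) = (\lambda+\frac{1}{2})\cos x$ should reproduce $u$ (the non-negativity assumption of the theorem is ignored here; it plays no role numerically).

    import numpy as np

    rng = np.random.default_rng(3)
    lam, x = 1.0, 0.5
    dt, n_steps, n_paths = 1e-2, 1200, 2000   # horizon T = 12, e^{-lam*T} ~ 6e-6
    f = lambda y: (lam + 0.5) * np.cos(y)     # chosen so that u(x) = cos(x)

    t = dt * np.arange(1, n_steps + 1)
    inc = np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
    paths = x + np.cumsum(inc, axis=1)        # B_{k dt} along each path
    est = (np.exp(-lam * t) * f(paths) * dt).sum(axis=1).mean()
    print(est, np.cos(x))                     # both ~ 0.88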


Fig. 5.1. Sample paths of a two-dimensional Brownian motion.

When $\lambda = 0$, a similar result holds under the addition of a boundary condition on a smooth domain $D$.

Theorem 5.5. Let $f$ be a non-negative function on $\mathbb{R}^n$, and assume that the Poisson equation
\[
\begin{cases}
\dfrac{1}{2}\Delta u(x) = -f(x), & x\in D,\\
u(x) = 0, & x\in\partial D,
\end{cases}
\qquad (5.6)
\]
has a $\mathcal{C}^2_b$ solution $u$. Then we have
\[
u(x) = \mathrm{IE}_x\Big[\int_0^{\tau_{\partial D}} f(B_t)\,dt\Big], \qquad x\in D.
\]

Proof. Similarly to the proof of Theorem 5.4 we have

\[
u(B_t) = u(B_0) + \int_0^t \langle\nabla u(B_s), dB_s\rangle + \frac{1}{2}\int_0^t \Delta u(B_s)\,ds
= u(B_0) + \int_0^t \langle\nabla u(B_s), dB_s\rangle - \int_0^t f(B_s)\,ds,
\]
hence
\[
0 = \mathrm{IE}_x[u(B_{\tau_{\partial D}})] = u(x) - \mathrm{IE}_x\Big[\int_0^{\tau_{\partial D}} f(B_s)\,ds\Big].
\]
We easily check that the boundary condition $u(x)=0$ is satisfied for $x\in\partial D$, since $\tau_{\partial D} = 0$ a.s. given that $B_0\in\partial D$. □
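For $f\equiv 1$, Theorem 5.5 identifies $u(x) = \mathrm{IE}_x[\tau_{\partial D}]$ with the solution of $\frac{1}{2}\Delta u = -1$, $u=0$ on $\partial D$; on the unit ball this solution is $u(x) = (1-\|x\|^2)/n$. A crude Euler check in $\mathbb{R}^2$ (an added illustration; the exit-time discretization bias is ignored):

    import numpy as np

    rng = np.random.default_rng(4)

    def mean_exit_time(x, n_paths=5000, dt=1e-3):
        """E_x[tau] for the unit disc in R^2, Euler-discretized paths."""
        b = np.tile(np.asarray(x, float), (n_paths, 1))
        steps = np.zeros(n_paths)
        alive = np.ones(n_paths, dtype=bool)
        while alive.any():
            b[alive] += np.sqrt(dt) * rng.standard_normal((alive.sum(), 2))
            steps[alive] += 1
            alive &= (b**2).sum(axis=1) < 1.0
        return steps.mean() * dt

    x = (0.6, 0.0)
    print(mean_exit_time(x), (1 - 0.6**2) / 2)   # both ~ 0.32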

Fig. 5.2. Sample paths of a three-dimensional Brownian motion.

The probabilistic interpretation of the solution of the Poisson equation can also be formulated in terms of a Brownian motion killed at the boundary $\partial D$, cf. Section 4.5.

Proposition 5.6. The solution $u$ of (5.6) is given by

\[
u(x) = G_Df(x) = \int_{\mathbb{R}^n} g_D(x,y)f(y)\,dy, \qquad x\in D,
\]

where $g_D$ and $G_Df$ are the Green kernel and the Green potential of Brownian motion killed on $\partial D$.

Proof. We first need to show that the function $g_D$ is the fundamental solution of the Poisson equation. We have

\[
\mathrm{IE}_x\Big[\int_0^{\tau_{\partial D}} 1_A(B_t)\,dt\Big]
= \mathrm{IE}_x\Big[\int_0^{\tau_{\partial D}} 1_A(B^D_t)\,dt\Big]
= \mathrm{IE}_x\Big[\int_0^{\tau_{\partial D}} 1_{\{B^D_t\in A\}}\,dt\Big]
= \mathrm{IE}_x\Big[\int_0^{\infty} 1_{\{B^D_t\in A\}}\,dt\Big]
= \int_0^\infty P_x(B^D_t\in A)\,dt
= \int_0^\infty q^D_t(x,A)\,dt
= \int_0^\infty\int_A p^D_t(x,y)\,dy\,dt
= G_D1_A(x)
\]

\[
= \int_{\mathbb{R}^n} g_D(x,y)1_A(y)\,dy,
\]
hence

\[
u(x) = \mathrm{IE}_x\Big[\int_0^{\tau_{\partial D}} f(B^D_t)\,dt\Big] = G_Df(x) = \int_{\mathbb{R}^n} g_D(x,y)f(y)\,dy. \qquad □
\]
The solution of (5.5) can also be expressed using the $\lambda$-potential kernel $g^\lambda$.

Proposition 5.7. The solution $u$ of (5.5) can be represented as

\[
u(x) = \int_0^\infty\int_{\mathbb{R}^n} e^{-\lambda t}\,q^D_t(x,y)f(y)\,dy\,dt
+ \int_{\mathbb{R}^n} f(y)\int_{\mathbb{R}^n} g^\lambda(z,y)\,\mathrm{IE}_x\big[e^{-\lambda\tau_{\partial D}}1_{\{B_{\tau_{\partial D}}\in dz\}}1_{\{\tau_{\partial D}<\infty\}}\big]\,dy,
\qquad x\in D.
\]

Proof. From (4.18) we have

\[
u(x) = R_\lambda f(x) = \int_{\mathbb{R}^n} g^\lambda(x,y)f(y)\,dy
= \int_{\mathbb{R}^n} g^\lambda_D(x,y)f(y)\,dy + \int_{\mathbb{R}^n} f(y)\int_{\mathbb{R}^n} g^\lambda(z,y)\,h^\lambda_D(x,dz)\,dy
\]

\[
= \int_0^\infty\int_{\mathbb{R}^n} e^{-\lambda t}\,q^D_t(x,y)f(y)\,dy\,dt
+ \int_{\mathbb{R}^n} f(y)\int_{\mathbb{R}^n} g^\lambda(z,y)\,\mathrm{IE}_x\big[e^{-\lambda\tau_{\partial D}}1_{\{B_{\tau_{\partial D}}\in dz\}}1_{\{\tau_{\partial D}<\infty\}}\big]\,dy. \qquad □
\]
In discrete time, the potential kernel of $P$ is defined as

\[
G = \sum_{n=0}^\infty P^n,
\]
and satisfies
\[
Gf(x) = \mathrm{IE}_x\Big[\sum_{n=0}^\infty f(X_n)\Big],
\]
i.e.
\[
G1_A(x) = \sum_{n=0}^\infty P_x(X_n\in A).
\]
The Poisson equation with second member $f$ is here the equation

\[
(I-P)u = f.
\]
Let
\[
\tau_D := \inf\{n\ge 0 : X_n\in D\}
\]
denote the first hitting time of the domain $D\subset E$. Then the function
\[
u(x) := \mathrm{IE}_x\Big[\sum_{k=0}^{\tau_D-1} f(X_k)\Big], \qquad x\in E,
\]
solves the Poisson equation

\[
\begin{cases}
(I-P)u(x) = f(x), & x\in E\setminus D,\\
u(x) = 0, & x\in D.
\end{cases}
\]
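A finite-state illustration (added here): for the simple random walk on $E=\{0,\dots,N\}$ with $D=\{0,N\}$ and $f\equiv 1$, the function above equals $u(x)=x(N-x)$, and solving $(I-P)u=f$ off $D$ by linear algebra confirms this.

    import numpy as np

    N = 10
    # restriction of P to the interior states 1..N-1; steps into D are killed
    P = np.zeros((N - 1, N - 1))
    for i in range(N - 1):
        if i > 0:
            P[i, i - 1] = 0.5
        if i < N - 2:
            P[i, i + 1] = 0.5

    f = np.ones(N - 1)
    u = np.linalg.solve(np.eye(N - 1) - P, f)   # (I-P)u = f, with u = 0 on D
    x = np.arange(1, N)
    print(np.allclose(u, x * (N - x)))          # closed form x(N-x): True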

5.4 Cauchy Problem

This section presents a version of the Feynman-Kac formula. Consider the partial differential equation (PDE)

\[
\begin{cases}
\dfrac{\partial u}{\partial t}(t,x) = \dfrac{1}{2}\Delta u(t,x) - V(x)u(t,x),\\[4pt]
u(0,x) = f(x).
\end{cases}
\qquad (5.7)
\]


Proposition 5.8. Assume that $f, V\in\mathcal{C}_b(\mathbb{R}^n)$ and $V$ is non-negative. Then the solution of (5.7) is given by

\[
u(t,x) = \mathrm{IE}_x\Big[\exp\Big(-\int_0^t V(B_s)\,ds\Big)f(B_t)\Big],
\qquad t\in\mathbb{R}_+,\ x\in\mathbb{R}^n. \qquad (5.8)
\]

Proof. Let

\[
\tilde T_tf(x) = \mathrm{IE}_x\Big[\exp\Big(-\int_0^t V(B_s)\,ds\Big)f(B_t)\Big],
\qquad t\in\mathbb{R}_+,\ x\in\mathbb{R}^n.
\]
We have

\[
\tilde T_{t+s}f(x) = \mathrm{IE}_x\Big[\exp\Big(-\int_0^{t+s} V(B_u)\,du\Big)f(B_{t+s})\Big]
= \mathrm{IE}_x\Big[\exp\Big(-\int_0^{t} V(B_u)\,du\Big)\exp\Big(-\int_t^{t+s} V(B_u)\,du\Big)f(B_{t+s})\Big]
= \mathrm{IE}_x\Big[\mathrm{IE}\Big[\exp\Big(-\int_0^{t+s} V(B_u)\,du\Big)f(B_{t+s})\ \Big|\ \mathcal{F}_t\Big]\Big]
= \mathrm{IE}_x\Big[\exp\Big(-\int_0^{t} V(B_u)\,du\Big)\,\mathrm{IE}\Big[\exp\Big(-\int_t^{t+s} V(B_u)\,du\Big)f(B_{t+s})\ \Big|\ \mathcal{F}_t\Big]\Big]
= \mathrm{IE}_x\Big[\exp\Big(-\int_0^{t} V(B_u)\,du\Big)\,\mathrm{IE}\Big[\exp\Big(-\int_0^{s} V(B_u)\,du\Big)f(B_s)\ \Big|\ B_0=x'\Big]_{x'=B_t}\Big]
= \mathrm{IE}_x\Big[\exp\Big(-\int_0^{t} V(B_u)\,du\Big)\tilde T_sf(B_t)\Big]
= \tilde T_t\tilde T_sf(x),
\]

hence $(\tilde T_t)_{t\in\mathbb{R}_+}$ has the semigroup property. Next we have
\[
\frac{\tilde T_tf(x)-f(x)}{t}
= \frac{1}{t}\,\mathrm{IE}_x\Big[\exp\Big(-\int_0^t V(B_u)\,du\Big)f(B_t)-f(x)\Big]
= \frac{1}{t}\big(\mathrm{IE}_x[f(B_t)]-f(x)\big) - \frac{1}{t}\,\mathrm{IE}_x\Big[\int_0^t V(B_u)\,du\,f(B_t)\Big] + o(t),
\]
hence
\[
\frac{d\tilde T_t}{dt}\Big|_{t=0}f(x) = \frac{1}{2}\Delta f(x) - V(x)f(x), \qquad x\in\mathbb{R}^n, \qquad (5.9)
\]
and $u$ given by (5.8) satisfies

\[
\frac{\partial u}{\partial t}(t,x) = \frac{d\tilde T_t}{dt}f(x)
= \frac{d\tilde T_t}{dt}\Big|_{t=0}\tilde T_tf(x)
= \Big(\frac{1}{2}\Delta - V(x)\Big)\tilde T_tf(x)
= \frac{1}{2}\Delta u(t,x) - V(x)u(t,x). \qquad □
\]
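A Monte Carlo sketch of the representation (5.8) (an added illustration): the weight $\exp(-\int_0^t V(B_s)\,ds)$ is accumulated along each simulated path by a Riemann sum; a constant potential $V\equiv c$, for which $u(t,x)=e^{-(c+1/2)t}\cos x$ when $f=\cos$, serves as the test case.

    import numpy as np

    rng = np.random.default_rng(5)

    def feynman_kac(x, t, f, V, n_paths=20_000, n_steps=200):
        """Monte Carlo for u(t,x) = E_x[exp(-int_0^t V(B_s) ds) f(B_t)]."""
        dt = t / n_steps
        b = np.full(n_paths, float(x))
        log_w = np.zeros(n_paths)         # accumulates -int_0^t V(B_s) ds
        for _ in range(n_steps):
            log_w -= V(b) * dt            # left-endpoint Riemann sum
            b += np.sqrt(dt) * rng.standard_normal(n_paths)
        return (np.exp(log_w) * f(b)).mean()

    c, t, x = 0.3, 1.0, 0.4
    est = feynman_kac(x, t, np.cos, lambda y: c + 0.0*y)
    print(est, np.exp(-(c + 0.5)*t) * np.cos(x))   # both ~ 0.41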

Moreover, (5.9) also yields

\[
\int_{\mathbb{R}^n} f(y)\,p_t(x,y)\,dy = f(x) + \frac{1}{2}\int_0^t\int_{\mathbb{R}^n} \Delta f(y)\,p_s(x,y)\,dy\,ds
= f(x) + \frac{1}{2}\int_0^t\int_{\mathbb{R}^n} f(y)\,\Delta_x p_s(x,y)\,dy\,ds,
\]
for all sufficiently regular functions $f$ on $\mathbb{R}^n$, where $p_t(x,y)$ is the Gaussian density kernel. Hence we get, after differentiation with respect to $t$,
\[
\frac{\partial p_t}{\partial t}(x,y) = \frac{1}{2}\Delta_x p_t(x,y). \qquad (5.10)
\]
From (5.10) we recover the harmonicity of $h_y$ on $\mathbb{R}^n\setminus\{y\}$:
\[
\Delta_x h_y(x) = \Delta_x\int_0^\infty p_t(x,y)\,dt
= \int_0^\infty \Delta_x p_t(x,y)\,dt
= \int_0^\infty 2\,\frac{\partial p_t}{\partial t}(x,y)\,dt
= 2\lim_{t\to\infty} p_t(x,y) - 2\lim_{t\to 0} p_t(x,y)
= 0,
\]

provided $x\neq y$. The backward Kolmogorov partial differential equation
\[
\begin{cases}
-\dfrac{\partial u}{\partial t}(t,x) = \dfrac{1}{2}\Delta u(t,x) - V(x)u(t,x),\\[4pt]
u(T,x) = f(x),
\end{cases}
\qquad (5.11)
\]

can be similarly solved as follows.

Proposition 5.9. Assume that $f, V\in\mathcal{C}_b(\mathbb{R}^n)$ and $V$ is non-negative. Then the solution of (5.11) is given by

\[
u(t,x) = \mathrm{IE}\Big[\exp\Big(-\int_t^T V(B_s)\,ds\Big)f(B_T)\ \Big|\ B_t=x\Big],
\qquad t\in\mathbb{R}_+,\ x\in\mathbb{R}^n. \qquad (5.12)
\]

Proof. Letting $u(t,x)$ be defined by (5.12) we have

\[
\exp\Big(-\int_0^t V(B_s)\,ds\Big)u(t,B_t)
= \exp\Big(-\int_0^t V(B_s)\,ds\Big)\,\mathrm{IE}\Big[\exp\Big(-\int_t^T V(B_s)\,ds\Big)f(B_T)\ \Big|\ \mathcal{F}_t\Big]
= \mathrm{IE}\Big[\exp\Big(-\int_0^T V(B_s)\,ds\Big)f(B_T)\ \Big|\ \mathcal{F}_t\Big],
\qquad t\in\mathbb{R}_+,\ x\in\mathbb{R}^n,
\]
which is a martingale by construction. Applying Itô's formula to this process we get

\[
\exp\Big(-\int_0^t V(B_s)\,ds\Big)u(t,B_t)
= u(0,B_0) + \int_0^t \exp\Big(-\int_0^s V(B_\tau)\,d\tau\Big)\langle\nabla_x u(s,B_s), dB_s\rangle
+ \frac{1}{2}\int_0^t \exp\Big(-\int_0^s V(B_\tau)\,d\tau\Big)\Delta_x u(s,B_s)\,ds
- \int_0^t V(B_s)\exp\Big(-\int_0^s V(B_\tau)\,d\tau\Big)u(s,B_s)\,ds
+ \int_0^t \exp\Big(-\int_0^s V(B_\tau)\,d\tau\Big)\frac{\partial u}{\partial s}(s,B_s)\,ds.
\]
The martingale property shows that the absolutely continuous finite variation terms vanish (see e.g. Corollary II-1 of Protter [Pro05]), hence
\[
\frac{1}{2}\Delta_x u(s,B_s) - V(B_s)u(s,B_s) + \frac{\partial u}{\partial s}(s,B_s) = 0, \qquad s\in\mathbb{R}_+,
\]
which leads to (5.11). □

5.5 Martin Boundary

Our aim is now to provide a probabilistic interpretation of the Martin boundary in discrete time. Namely we use the Martin boundary theory to study the way a Markov chain with transition operator $P$ leaves the space $E$, in particular when $E$ is a union

\[
E = \bigcup_{n=0}^\infty E_n
\]
of transient sets $E_n$, $n\in\mathbb{N}$. We assume that $E$ is a metric space with distance $\delta$ and that the Cauchy completion of $E$ coincides with its Alexandrov compactification $E\cup\{\hat x\}$, where $\hat x$ denotes the point at infinity.

For simplicity we will assume that the transition operator $P$ is self-adjoint with respect to a reference measure $m$. Let now $G$ denote the potential

\[
Gf(x) := \sum_{n=0}^\infty P^nf(x), \qquad x\in E,
\]
with kernel $g(\cdot,\cdot)$ satisfying
\[
Gf(x) = \int_E f(y)\,g(x,y)\,m(dy), \qquad x\in E,
\]
for a given reference measure $m$. Fix $x_0\in E_1$. For $f$ in the space $\mathcal{C}_c(E)$ of compactly supported functions on $E$, define the kernel
\[
k_{x_0}(x,z) = \frac{g(x,z)}{g(x,x_0)}, \qquad x,z\in E,
\]
and the operator
\[
K_{x_0}f(x) := \frac{Gf(x)}{g(x,x_0)} = \int_E k_{x_0}(x,z)f(z)\,m(dz), \qquad x\in E.
\]
Consider a sequence $(f_n)_{n\in\mathbb{N}}$ dense in $\mathcal{C}_c(E)$ and the metric defined by
\[
d(x,y) := \sum_{n=1}^\infty \zeta_n\,|K_{x_0}f_n(x) - K_{x_0}f_n(y)|,
\]
where $(\zeta_n)_{n\in\mathbb{N}}$ is a sequence of non-negative numbers such that
\[
\sum_{n=1}^\infty \zeta_n\,\|K_{x_0}f_n\|_\infty < \infty.
\]
The Martin space $\hat E$ for $X$ started with distribution $rm$ is constructed as the Cauchy completion of $(E, \delta+d)$. Then $K_{x_0}f$, $f\in\mathcal{C}_c(E)$, can be extended by continuity to $\hat E$, since for all $\varepsilon>0$ there exists $n\in\mathbb{N}$ such that for all $x,y\in E$,
\[
|K_{x_0}f(x) - K_{x_0}f(y)|
\le |K_{x_0}f(x) - K_{x_0}f_n(x)| + |K_{x_0}f_n(x) - K_{x_0}f_n(y)| + |K_{x_0}f_n(y) - K_{x_0}f(y)|
\le \varepsilon + \zeta_n^{-1}\,d(x,y),
\]

and, from Proposition 2.3 of Revuz [Rev75], the sequence $(K_{x_0}f_n)_{n\in\mathbb{N}}$ is dense in $\{K_{x_0}f : f\in\mathcal{C}_c(E)\}$.
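For a finite transient chain the objects above reduce to elementary linear algebra: with a substochastic matrix $P$ one has $G=(I-P)^{-1}$, and $k_{x_0}(x,z)=g(x,z)/g(x,x_0)$ can be computed directly. The killed random walk below is a hypothetical toy example (with $m$ the counting measure, so that the symmetric $P$ is self-adjoint as assumed above).

    import numpy as np

    n, x0 = 5, 0
    # substochastic P: walk on {0,...,4}, mass escapes (is killed) at both ends
    P = np.zeros((n, n))
    for i in range(n):
        if i > 0:
            P[i, i - 1] = 0.5
        if i < n - 1:
            P[i, i + 1] = 0.5

    G = np.linalg.inv(np.eye(n) - P)   # potential kernel G = sum_k P^k
    K = G / G[:, [x0]]                 # Martin kernel k_{x0}(x,z) = g(x,z)/g(x,x0)
    print(K.round(3))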

If $x\in\Delta E$, a sequence $(x_n)_{n\in\mathbb{N}}\subset E$ converges to $x$ if and only if it converges to the point at infinity $\hat x$ for the metric $\delta$ and $(K_{x_0}f(x_n))_{n\in\mathbb{N}}$ converges to $K_{x_0}f(x)$ for every $f\in\mathcal{C}_c(E)$. Finally we have the following result, for which we refer to [Rev75].

Theorem 5.10. The sequence $(X_n)_{n\in\mathbb{N}}$ converges $P_{x_0}$-a.s. in $\hat E$, and the law of $X_\infty$ under $P_{x_0}$ is carried by $\Delta E$.

References

[Bas98] R.F. Bass. Diffusions and Elliptic Operators. Probability and its Applications. Springer-Verlag, New York, 1998.
[BH91] N. Bouleau and F. Hirsch. Dirichlet Forms and Analysis on Wiener Space. de Gruyter, Berlin, 1991.
[Bia93] Ph. Biane. Calcul stochastique non-commutatif. In École d'Été de Probabilités de Saint-Flour, volume 1608 of Lecture Notes in Mathematics. Springer-Verlag, 1993.
[Bre65] M. Brelot. Éléments de la théorie classique du potentiel. 3e édition. Les cours de Sorbonne, 3e cycle. Centre de Documentation Universitaire, Paris, 1965. Re-edited by Association Laplace-Gauss, 1997.
[Chu95] K.L. Chung. Green, Brown, and Probability. World Scientific, 1995.
[Doo84] J.L. Doob. Classical Potential Theory and its Probabilistic Counterpart. Springer-Verlag, Berlin, 1984.
[dP70] N. du Plessis. An Introduction to Potential Theory. Oliver & Boyd, 1970.
[Dyn65] E.B. Dynkin. Markov Processes. Vols. I, II. Die Grundlehren der Mathematischen Wissenschaften. Academic Press, New York, 1965.
[É90] M. Émery. On the Azéma martingales. In Séminaire de Probabilités XXIII, volume 1372 of Lecture Notes in Mathematics, pages 66-87. Springer-Verlag, 1990.
[EK86] S.N. Ethier and T.G. Kurtz. Markov Processes, Characterization and Convergence. John Wiley & Sons, New York, 1986.
[FOT94] M. Fukushima, Y. Oshima, and M. Takeda. Dirichlet Forms and Symmetric Markov Processes. de Gruyter, 1994.
[Hel69] L.L. Helms. Introduction to Potential Theory, volume XII of Pure and Applied Mathematics. Wiley, 1969.
[HL99] F. Hirsch and G. Lacombe. Elements of Functional Analysis, volume 192 of Graduate Texts in Mathematics. Springer-Verlag, New York, 1999.
[IW89] N. Ikeda and S. Watanabe. Stochastic Differential Equations and Diffusion Processes. North-Holland, 1989.
[JP00] J. Jacod and Ph. Protter. Probability Essentials. Springer-Verlag, Berlin, 2000.
[Kal02] O. Kallenberg. Foundations of Modern Probability. Probability and its Applications. Springer-Verlag, New York, second edition, 2002.
[MR92] Z.M. Ma and M. Röckner. Introduction to the Theory of (Nonsymmetric) Dirichlet Forms. Universitext. Springer-Verlag, Berlin, 1992.
[Pro05] Ph. Protter. Stochastic Integration and Differential Equations, volume 21 of Stochastic Modelling and Applied Probability. Springer-Verlag, Berlin, 2005.
[PS78] S.C. Port and C.J. Stone. Brownian Motion and Classical Potential Theory. Academic Press, 1978.
[Rev75] D. Revuz. Markov Chains. North-Holland, 1975.
