Topics in Occupation Times and Gaussian Free Fields
Alain-Sol Sznitman∗
Notes of the course “Special topics in probability” at ETH Zurich during the Spring term 2011
∗ Mathematik, ETH Zürich, CH-8092 Zürich, Switzerland. With the support of the grant ERC-2009-AdG 245728-RWPERCRI.

Foreword

The following notes grew out of the graduate course "Special topics in probability", which I gave at ETH Zurich during the Spring term 2011. One of the objectives was to explore the links between occupation times, Gaussian free fields, Poisson gases of Markovian loops, and random interlacements. The stimulating atmosphere during the live lectures was an encouragement to write a fleshed-out version of the handwritten notes, which were handed out during the course. I am immensely grateful to Pierre-François Rodriguez, Artem Sapozhnikov, Balázs Ráth, Alexander Drewitz, and David Belius, for their numerous comments on the successive versions of these notes.

Contents
0 Introduction
1 Generalities
 1.1 The set-up
 1.2 The Markov chain X. (with jump rate 1)
 1.3 Some potential theory
 1.4 Feynman-Kac formula
 1.5 Local times
 1.6 The Markov chain $\overline{X}_.$ (with variable jump rate)
2 Isomorphism theorems
 2.1 The Gaussian free field
 2.2 The measures $P_{x,y}$
 2.3 Isomorphism theorems
 2.4 Generalized Ray-Knight theorems
3 The Markovian loop
 3.1 Rooted loops and the measure $\mu_r$ on rooted loops
 3.2 Pointed loops and the measure $\mu_p$ on pointed loops
 3.3 Restriction property
 3.4 Local times
 3.5 Unrooted loops and the measure $\mu^*$ on unrooted loops
4 Poisson gas of Markovian loops
 4.1 Poisson point measures on unrooted loops
 4.2 Occupation field
 4.3 Symanzik's representation formula
 4.4 Some identities
 4.5 Some links between Markovian loops and random interlacements
References
Index
0 Introduction
This set of notes explores some of the links between occupation times and Gaussian processes. Notably they bring into play certain isomorphism theorems going back to Dynkin [4], [5], as well as certain Poisson point processes of Markovian loops, which originated in physics through the work of Symanzik [26]. More recently such Poisson gases of Markovian loops have reappeared in the context of the "Brownian loop soup" of Lawler and Werner [16] and are related to the so-called "random interlacements", see Sznitman [27]. In particular they have been extensively investigated by Le Jan [17], [18]. A convenient set-up to develop this circle of ideas consists in the consideration of a finite connected graph E endowed with positive weights and a non-degenerate killing measure. One can then associate to these data a continuous-time Markov chain $X_t$, $t \ge 0$, on E, with variable jump rates, which dies after a finite time due to the killing measure, as well as
(0.1) the Green density $g(x,y)$, $x, y \in E$ (which is positive and symmetric),
(0.2) the local times $L^x_t = \int_0^t 1\{X_s = x\}\, ds$, $t \ge 0$, $x \in E$.

In fact $g(\cdot,\cdot)$ is a positive definite function on $E \times E$, and one can define a centered Gaussian process $\varphi_x$, $x \in E$, such that

(0.3) $\mathrm{cov}(\varphi_x, \varphi_y)\ (= E[\varphi_x \varphi_y]) = g(x,y)$, for $x, y \in E$.

This is the so-called Gaussian free field.

It turns out that $\frac{1}{2}\varphi_z^2$, $z \in E$, and $L^z_\infty$, $z \in E$, have intricate relationships. For instance Dynkin's isomorphism theorem states in our context that for any $x, y \in E$,

(0.4) $\big(L^z_\infty + \frac{1}{2}\varphi_z^2\big)_{z \in E}$ under $P_{x,y} \otimes P^G$,

has the "same law" as
(0.5) $\big(\frac{1}{2}\varphi_z^2\big)_{z \in E}$ under $\varphi_x \varphi_y\, P^G$,

where $P_{x,y}$ stands for the (non-normalized) h-transform of our basic Markov chain, with the choice $h(\cdot) = g(\cdot, y)$, starting from the point x, and $P^G$ for the law of the Gaussian field $\varphi_z$, $z \in E$.

Eisenbaum's isomorphism theorem, which appeared in [7], does not involve h-transforms and states in our context that for any $x \in E$, $s \ne 0$,

(0.6) $\big(L^z_\infty + \frac{1}{2}(\varphi_z + s)^2\big)_{z \in E}$ under $P_x \otimes P^G$,

has the "same law" as
(0.7) $\big(\frac{1}{2}(\varphi_z + s)^2\big)_{z \in E}$ under $\big(1 + \frac{\varphi_x}{s}\big)\, P^G$.
The above isomorphism theorems are also closely linked to the topic of theorems of Ray-Knight type, see Eisenbaum [6], and Chapters 2 and 8 of Marcus-Rosen [19]. Originally, see [13, 21], such theorems came as a description of the Markovian character in the space variable of Brownian local times evaluated at certain random times. More recently, the Gaussian aspects and the relation with the isomorphism theorems have gained prominence, see [8], and [19]. Interestingly, Dynkin's isomorphism theorem has its roots in mathematical physics. It grew out of the investigation by Dynkin in [4] of a probabilistic representation formula for the moments of certain random fields in terms of a Poissonian gas of loops interacting with Markovian paths, which appeared in Brydges-Fröhlich-Spencer [2], and was based on the work of Symanzik [26]. The Poisson point gas of loops in question is a Poisson point process on the state space of loops on E modulo time-shift. Its intensity measure is a multiple $\alpha\mu^*$ of the image $\mu^*$ of a certain measure $\mu_{\mathrm{rooted}}$ under the canonical map for the equivalence relation identifying rooted loops $\gamma$ that only differ by a time-shift. This measure $\mu_{\mathrm{rooted}}$ is the σ-finite measure on rooted loops defined by
(0.8) $\mu_{\mathrm{rooted}}(d\gamma) = \sum_{x \in E} \int_0^\infty Q^t_{x,x}(d\gamma)\, \frac{dt}{t}$,
where $Q^t_{x,x}$ is the image of $1\{X_t = x\}\, P_x$ under $(X_s)_{0 \le s \le t}$, if $X_.$ stands for the Markov chain on E with jump rates equal to 1 attached to the weights and killing measure we have chosen on E. The random fields on E alluded to above are motivated by models of Euclidean quantum field theory, see [11], and are for instance of the following kind:
(0.9) $\langle F(\varphi) \rangle = \int F(\varphi)\, e^{-\frac{1}{2}\mathcal{E}(\varphi,\varphi)} \prod_{x \in E} h\Big(\frac{\varphi_x^2}{2}\Big)\, d\varphi_x \Big/ \int e^{-\frac{1}{2}\mathcal{E}(\varphi,\varphi)} \prod_{x \in E} h\Big(\frac{\varphi_x^2}{2}\Big)\, d\varphi_x$,

with

$h(u) = \int_0^\infty e^{-vu}\, d\nu(v)$, $u \ge 0$, with ν a probability distribution on $\mathbb{R}_+$,

and $\mathcal{E}(\varphi,\varphi)$ the energy of the function φ corresponding to the weights and killing measure on E (the matrix $\mathcal{E}(1_x, 1_y)$, $x, y \in E$, is the inverse of the matrix $g(x,y)$, $x, y \in E$, in (0.3)).
[Fig. 0.1: The paths $w_1, \dots, w_k$ in E interact with the gas of loops through the random potentials.]
The typical representation formula for the moments of the random field in (0.9) looks like this: for $k \ge 1$, $z_1, \dots, z_{2k} \in E$,

(0.10) $\langle \varphi_{z_1} \cdots \varphi_{z_{2k}} \rangle = \sum_{\text{pairings of } z_1, \dots, z_{2k}} P_{x_1,y_1} \otimes \cdots \otimes P_{x_k,y_k} \otimes Q\big[e^{-\sum_{x \in E} v_x (\mathcal{L}_x + L^x_\infty(w_1) + \cdots + L^x_\infty(w_k))}\big] \Big/ Q\big[e^{-\sum_{x \in E} v_x \mathcal{L}_x}\big]$,

where the sum runs over the (non-ordered) pairings (i.e. partitions) of the symbols $z_1, z_2, \dots, z_{2k}$ into $\{x_1, y_1\}, \dots, \{x_k, y_k\}$. Under Q the $v_x$, $x \in E$, are i.i.d. ν-distributed (random potentials), independent of the $\mathcal{L}_x$, $x \in E$, which are distributed as the total occupation times (properly scaled to take account of the weights and killing measure) of the gas of loops with intensity $\frac{1}{2}\mu$, and the $P_{x_i,y_i}$, $1 \le i \le k$, are defined just as below (0.4), (0.5).

The Poisson point process of Markovian loops has many interesting properties. We will for instance see that when $\alpha = \frac{1}{2}$ (i.e. the intensity measure equals $\frac{1}{2}\mu$),
(0.11) $(\mathcal{L}_x)_{x \in E}$ has the same distribution as $(\frac{1}{2}\varphi_x^2)_{x \in E}$, where $(\varphi_x)_{x \in E}$ stands for the Gaussian free field in (0.3).
The Poisson gas of Markovian loops is also related to the model of random interlacements [27], which loosely speaking corresponds to "loops going through infinity". It appears as well in the recent developments concerning conformally invariant scaling limits, see Lawler-Werner [16], Sheffield-Werner [24]. As for random interlacements, interestingly, in place of (0.11), they satisfy an isomorphism theorem in the spirit of the generalized second Ray-Knight theorem, see [28].
1 Generalities
In this chapter we describe the general framework we will use for the most part of these notes. We introduce finite weighted graphs with killing and the associated continuous-time Markov chains $X_.$, with constant jump rate equal to 1, and $\overline{X}_.$, with variable jump rate. We also recall various notions related to Dirichlet forms and potential theory.
1.1 The set-up

We introduce in this section the general set-up, which we will use in the sequel, and recall some classical facts. We also refer to [14] and [10], where the theory is developed in a more general framework. We assume that

(1.1) E is a finite non-empty set,

endowed with non-negative weights

(1.2) $c_{x,y} = c_{y,x} \ge 0$, for $x, y \in E$, and $c_{x,x} = 0$, for $x \in E$,

so that

(1.3) E, endowed with the edge set consisting of the pairs $\{x, y\}$ such that $c_{x,y} > 0$, is a connected graph.

We also suppose that there is a killing measure on E:

(1.4) $\kappa_x \ge 0$, $x \in E$,

and that

(1.5) $\kappa_x \ne 0$, for at least some $x \in E$.

We also consider a

(1.6) cemetery state Δ not in E
(we can think of $\kappa_x$ as $c_{x,\Delta}$). With these data we can define a measure on E:

(1.7) $\lambda_x = \sum_{y \in E} c_{x,y} + \kappa_x$, $x \in E$ (note that $\lambda_x > 0$, due to (1.2) - (1.5)).

We can also introduce the energy of a function on E, or Dirichlet form

(1.8) $\mathcal{E}(f,f) = \frac{1}{2} \sum_{x,y \in E} c_{x,y}\, (f(y) - f(x))^2 + \sum_{x \in E} \kappa_x\, f^2(x)$, for $f : E \to \mathbb{R}$.

Note that $(c_{x,y})_{x,y \in E}$ and $(\kappa_x)_{x \in E}$ determine the Dirichlet form. Conversely, the Dirichlet form determines $(c_{x,y})_{x,y \in E}$ and $(\kappa_x)_{x \in E}$. Indeed, one defines, by polarization, for $f, g : E \to \mathbb{R}$,

(1.9) $\mathcal{E}(f,g) = \frac{1}{4}\,[\mathcal{E}(f+g, f+g) - \mathcal{E}(f-g, f-g)] = \frac{1}{2} \sum_{x,y \in E} c_{x,y}\, (f(y) - f(x))(g(y) - g(x)) + \sum_{x \in E} \kappa_x\, f(x)\, g(x)$,

and one notes that

(1.10) $\mathcal{E}(1_x, 1_y) = -c_{x,y}$, for $x \ne y$ in E, and $\mathcal{E}(1_x, 1_x) = \sum_{y \in E} c_{x,y} + \kappa_x = \lambda_x$, for $x \in E$,

so that the Dirichlet form uniquely determines the weights $(c_{x,y})_{x,y \in E}$ and the killing measure $(\kappa_x)_{x \in E}$. Observe also that by (1.3), (1.5), (1.8), (1.9), the Dirichlet form defines a positive definite quadratic form on the space $\mathcal{F}$ of functions from E to $\mathbb{R}$, see also (1.39) below.

We denote by $(\cdot,\cdot)_\lambda$ the scalar product in $L^2(d\lambda)$:

(1.11) $(f,g)_\lambda = \sum_{x \in E} f(x)\, g(x)\, \lambda_x$, for $f, g : E \to \mathbb{R}$.

The weights and the killing measure induce a sub-Markovian transition probability on E:
(1.12) $p_{x,y} = \dfrac{c_{x,y}}{\lambda_x}$, for $x, y \in E$,

which is λ-reversible:

(1.13) $\lambda_x\, p_{x,y} = \lambda_y\, p_{y,x}$, for all $x, y \in E$.

One then extends $p_{x,y}$, $x, y \in E$, to a transition probability on $E \cup \{\Delta\}$ by setting

(1.14) $p_{x,\Delta} = \dfrac{\kappa_x}{\lambda_x}$, for $x \in E$, and $p_{\Delta,\Delta} = 1$,

so the corresponding discrete-time Markov chain on $E \cup \{\Delta\}$ is absorbed in the cemetery state Δ once it reaches Δ. We denote by

(1.15) $Z_n$, $n \ge 0$, the canonical discrete Markov chain on the space of discrete trajectories in $E \cup \{\Delta\}$ which after finitely many steps reach Δ and from then on remain at Δ, and by

(1.16) $P_x$ the law of the chain starting from $x \in E \cup \{\Delta\}$.
We will attach to the Dirichlet form (1.8) (or, equivalently, to the weights and the killing measure) two continuous-time Markov chains on $E \cup \{\Delta\}$, which are time changes of each other, with discrete skeleton corresponding to $Z_n$, $n \ge 0$. The first chain $X_.$ will have a unit jump rate, whereas the second chain $\overline{X}_.$ (defined in Section 1.6) will have a variable jump rate governed by λ.
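The set-up above (the measure λ of (1.7) and the Dirichlet form (1.8)-(1.10)) can be sketched in a few lines of code. The 3-point path graph below, with weights $c_{0,1} = c_{1,2} = 1$ and killing measure $\kappa = (1, 0, 0)$, is a hypothetical example chosen purely for illustration; it is not taken from the notes.

```python
# Minimal sketch of (1.7)-(1.10) on a hypothetical 3-point path graph:
# E = {0,1,2}, weights c[0][1] = c[1][2] = 1, killing kappa = (1,0,0).
E = [0, 1, 2]
c = {(0, 1): 1.0, (1, 0): 1.0, (1, 2): 1.0, (2, 1): 1.0}
kappa = {0: 1.0, 1: 0.0, 2: 0.0}

# (1.7): lambda_x = sum_y c_{x,y} + kappa_x
lam = {x: sum(c.get((x, y), 0.0) for y in E) + kappa[x] for x in E}

def energy(f, g):
    # Dirichlet form, bilinear version (1.9)
    s = 0.5 * sum(c.get((x, y), 0.0) * (f[y] - f[x]) * (g[y] - g[x])
                  for x in E for y in E)
    return s + sum(kappa[x] * f[x] * g[x] for x in E)

def indicator(x):
    return {y: 1.0 if y == x else 0.0 for y in E}

# (1.10): the form recovers the weights and the measure lambda
assert energy(indicator(0), indicator(1)) == -c[(0, 1)]
assert all(energy(indicator(x), indicator(x)) == lam[x] for x in E)
```

So, as (1.10) states, evaluating the form on indicator functions returns exactly $-c_{x,y}$ off the diagonal and $\lambda_x$ on it.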
1.2 The Markov chain X. (with jump rate 1)

We introduce in this section the continuous-time Markov chain on $E \cup \{\Delta\}$ (absorbed in the cemetery state Δ), with discrete skeleton described by $Z_n$, $n \ge 0$, and exponential holding times of parameter 1. We also bring into play some of the natural objects attached to this Markov chain.
The canonical space $D_E$ for this Markov chain consists of right-continuous functions with values in $E \cup \{\Delta\}$, with finitely many jumps, which after some time enter Δ and from then on remain equal to Δ. We denote by

(1.17) $X_t$, $t \ge 0$, the canonical process on $D_E$,

$\theta_t$, $t \ge 0$, the canonical shift on $D_E$: $\theta_t(w)(\cdot) = w(\cdot + t)$, for $w \in D_E$,

$P_x$ the law on $D_E$ of the Markov chain starting at $x \in E \cup \{\Delta\}$.
Remark 1.1. Whenever convenient we will tacitly enlarge the canonical space $D_E$ and work with a probability space on which (under $P_x$) we can simultaneously consider the discrete Markov chain $Z_n$, $n \ge 0$, with starting point a.s. equal to x, and an independent sequence of positive variables $T_n$, $n \ge 1$, the "jump times", increasing to infinity, with increments $T_{n+1} - T_n$, $n \ge 0$, i.i.d. exponential with parameter 1 (with the convention $T_0 = 0$). The continuous-time chain $X_t$, $t \ge 0$, will then be expressed as

$X_t = Z_n$, for $T_n \le t < T_{n+1}$, $n \ge 0$.

Of course, once the discrete-time chain reaches the cemetery state Δ, the subsequent "jump times" $T_n$ are only fictitious "jumps" of the continuous-time chain.
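The construction in Remark 1.1 (discrete skeleton plus independent exponential(1) holding times) can be sketched as follows. The toy transition kernel below is a hypothetical 3-point example with a cemetery state; all names are illustrative choices, not notation from the notes.

```python
import random

# Hedged sketch of Remark 1.1: build X_t from a skeleton Z_n and i.i.d.
# exponential(1) holding times, on a hypothetical 3-point chain with cemetery.
random.seed(0)
CEM = "Delta"  # cemetery state
# p maps each state to a list of (target, probability) pairs
p = {0: [(1, 0.5), (CEM, 0.5)], 1: [(0, 0.5), (2, 0.5)], 2: [(1, 1.0)]}

def sample_skeleton(x):
    """Discrete chain Z_n, run until absorption in the cemetery state."""
    traj = [x]
    while traj[-1] != CEM:
        targets, weights = zip(*p[traj[-1]])
        traj.append(random.choices(targets, weights)[0])
    return traj

def continuous_chain(x):
    """Pairs (T_n, Z_n): jump times T_n with exponential(1) increments."""
    Z = sample_skeleton(x)
    T, t = [0.0], 0.0
    for _ in Z[1:]:
        t += random.expovariate(1.0)
        T.append(t)
    return list(zip(T, Z))

path = continuous_chain(0)
assert path[0] == (0.0, 0)
assert path[-1][1] == CEM                                  # dies in finite time
assert all(s < t for (s, _), (t, _) in zip(path, path[1:]))  # times increase
```

The assertions only check structural facts that hold for every realization: the trajectory starts at the chosen point, ends in the cemetery, and has strictly increasing jump times.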
Examples:
1) Simple random walk on the discrete torus killed at a constant rate
$E = (\mathbb{Z}/N\mathbb{Z})^d$, where $N > 1$, $d \ge 1$,

endowed with the graph structure where x, y are neighbors if exactly one of their coordinates differs by $\pm 1$ and the other coordinates are equal. We pick

$c_{x,y} = 1\{x, y \text{ are neighbors}\}$, $x, y \in E$,

$\kappa_x = \kappa > 0$.

So $X_t$, $t \ge 0$, is the simple random walk on $(\mathbb{Z}/N\mathbb{Z})^d$ with exponential holding times of parameter 1, killed at each step with probability $\frac{\kappa}{2d + \kappa}$, when $N > 2$, and with probability $\frac{\kappa}{d + \kappa}$, when $N = 2$.
2) Simple random walk on Zd killed outside a finite connected subset of Zd, that is:
E is a finite connected subset of $\mathbb{Z}^d$, $d \ge 1$,

$c_{x,y} = 1\{|x - y| = 1\}$, for $x, y \in E$,

$\kappa_x = \sum_{y \in \mathbb{Z}^d \setminus E} 1\{|x - y| = 1\}$, for $x \in E$.
[Fig. 1.1: A finite connected subset E of $\mathbb{Z}^2$; a point $x \in E$ with two neighbors outside E has $\kappa_x = 2$, whereas a point $x'$ with all its neighbors in E has $\kappa_{x'} = 0$.]
$X_t$, $t \ge 0$, when starting in $x \in E$, corresponds to the simple random walk in $\mathbb{Z}^d$ with exponential holding times of parameter 1, killed at the first time it exits E.
Our next step is to introduce some natural objects attached to the Markov chain X., such as the transition semi-group, and the Green function.
Transition semi-group and transition density:

Unless otherwise specified, we will tacitly view real-valued functions on E as functions on $E \cup \{\Delta\}$ which vanish at the point Δ. The sub-Markovian transition semi-group of the chain $X_t$, $t \ge 0$, on E is defined for $t \ge 0$, $f : E \to \mathbb{R}$, by

(1.18) $R_t f(x) = E_x[f(X_t)] = \sum_{n \ge 0} e^{-t}\, \frac{t^n}{n!}\, E_x[f(Z_n)] = \sum_{n \ge 0} e^{-t}\, \frac{t^n}{n!}\, P^n f(x) = e^{t(P-I)} f(x)$, for $x \in E$,

where I denotes the identity map on $\mathbb{R}^E$, and for $f : E \to \mathbb{R}$, $x \in E$,

(1.19) $P f(x) = \sum_{y \in E} p_{x,y}\, f(y) \overset{(1.15)}{=} E_x[f(Z_1)]$.

As a result of (1.13) and (1.18),

(1.20) P and $R_t$ (for any $t \ge 0$) are bounded self-adjoint operators on $L^2(d\lambda)$,

(1.21) $R_{t+s} = R_t\, R_s$, for $t, s \ge 0$ (semi-group property).

We then introduce the transition density

(1.22) $r_t(x,y) = (R_t 1_y)(x)\, \frac{1}{\lambda_y}$, for $t \ge 0$, $x, y \in E$.
It follows from the self-adjointness of Rt, cf. (1.20), that
(1.23) $r_t(x,y) = r_t(y,x)$, for $t \ge 0$, $x, y \in E$ (symmetry),

and from the semi-group property, cf. (1.21), that for $t, s \ge 0$, $x, y \in E$,

(1.24) $r_{t+s}(x,y) = \sum_{z \in E} r_t(x,z)\, r_s(z,y)\, \lambda_z$ (Chapman-Kolmogorov equations).

Moreover, due to (1.3), (1.12), (1.18), we see that

(1.25) $r_t(x,y) > 0$, for $t > 0$, $x, y \in E$.
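The series formula (1.18) and the semi-group property (1.21) can be checked numerically. The sketch below assumes a hypothetical 3-point sub-Markovian kernel P (the same kind of toy example as before, not anything from the notes) and evaluates $R_t = e^{t(P-I)}$ by truncating its exponential series.

```python
import math

# Hedged sketch of (1.18): R_t = e^{-t} sum_n (t^n/n!) P^n, truncated series,
# on a hypothetical 3-point sub-Markovian kernel (row sums may be < 1).
P = [[0.0, 0.5, 0.0],
     [0.5, 0.0, 0.5],
     [0.0, 1.0, 0.0]]
n = len(P)

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def R(t, terms=60):
    """Truncated exponential series for R_t; the tail is negligible here."""
    out = [[0.0] * n for _ in range(n)]
    Pn = [[float(i == j) for j in range(n)] for i in range(n)]  # P^0 = I
    for k in range(terms):
        w = math.exp(-t) * t**k / math.factorial(k)
        for i in range(n):
            for j in range(n):
                out[i][j] += w * Pn[i][j]
        Pn = matmul(Pn, P)
    return out

# Semi-group property (1.21): R_{t+s} = R_t R_s
lhs, rhs = R(1.5), matmul(R(1.0), R(0.5))
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-9 for i in range(n) for j in range(n))
```

The Chapman-Kolmogorov identity (1.24) is the same statement rewritten for the density $r_t$, so this check covers it as well.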
Green function: We define the Green function (or Green density):
(1.26) $g(x,y) = \int_0^\infty r_t(x,y)\, dt \overset{(1.18),(1.22),\ \text{Fubini}}{=} E_x\Big[\int_0^\infty 1\{X_t = y\}\, dt\Big]\, \frac{1}{\lambda_y}$, for $x, y \in E$.
Lemma 1.2.
(1.27) $g(x,y) \in (0, \infty)$ is a symmetric function on $E \times E$.

Proof. By (1.23), (1.25) we see that $g(\cdot,\cdot)$ is positive and symmetric. We now prove that it is finite. By (1.1), (1.3), (1.5) we see that for some $N \ge 0$ and $\varepsilon > 0$,

(1.28) $\inf_{x \in E}\, P_x[Z_n = \Delta, \text{ for some } n \le N] \ge \varepsilon > 0$.

As a result of the simple Markov property applied at times which are multiples of N, we find that

$(P^{kN} 1_E)(x) = P_x[Z_n \ne \Delta, \text{ for } 0 \le n \le kN] \overset{\text{simple Markov},\ (1.28)}{\le} (1 - \varepsilon)^k$, for $k \ge 1$.
It follows by a straightforward interpolation that with suitable c,c′ > 0,
(1.29) $\sup_{x \in E}\, (P^n 1_E)(x) \le c\, e^{-c'n}$, for $n \ge 0$.
As a result inserting this bound in the last line of (1.18) gives:
(1.30) $\sup_{x \in E}\, (R_t 1_E)(x) \le \sum_{n \ge 0} c\, e^{-t}\, \frac{t^n}{n!}\, e^{-c'n} = c \exp\{-t(1 - e^{-c'})\}$,

so that

(1.31) $g(x,y) \le \frac{1}{\lambda_y} \int_0^\infty (R_t 1_E)(x)\, dt \le \frac{1}{\lambda_y}\, \frac{c}{1 - e^{-c'}} \le c'' < \infty$,

whence (1.27).
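Since Lemma 1.2 makes $\sum_n P^n$ summable, the Green density can be computed directly by truncating that series. The sketch below assumes the hypothetical 3-point path graph used earlier (weights $c_{0,1} = c_{1,2} = 1$, killing $\kappa = (1,0,0)$, hence $\lambda = (2,2,1)$); the example and the expected matrix are worked out by hand for this toy case only.

```python
# Hedged sketch: g(x,y) = [(I-P)^{-1}]_{x,y} / lambda_y, with (I-P)^{-1}
# approximated by the Neumann series sum_k P^k (convergent thanks to (1.29)).
P = [[0.0, 0.5, 0.0],
     [0.5, 0.0, 0.5],
     [0.0, 1.0, 0.0]]
lam = [2.0, 2.0, 1.0]
n = len(P)

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

Q = [[0.0] * n for _ in range(n)]                            # sum_k P^k
Pk = [[float(i == j) for j in range(n)] for i in range(n)]   # P^0 = I
for _ in range(300):   # truncation; the tail decays geometrically by (1.29)
    for i in range(n):
        for j in range(n):
            Q[i][j] += Pk[i][j]
    Pk = matmul(Pk, P)

g = [[Q[i][j] / lam[j] for j in range(n)] for i in range(n)]

# (1.27): g is finite, positive and symmetric; for this toy example it works
# out (by hand inversion of -L) to the integer matrix below.
expected = [[1.0, 1.0, 1.0], [1.0, 2.0, 2.0], [1.0, 2.0, 3.0]]
assert all(abs(g[i][j] - expected[i][j]) < 1e-9 for i in range(n) for j in range(n))
assert all(abs(g[i][j] - g[j][i]) < 1e-9 for i in range(n) for j in range(n))
```

Note how symmetry of g emerges even though P itself is not symmetric; it is the λ-reversibility (1.13) that produces it, exactly as in (1.23).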
1.3 Some potential theory

In this section we introduce some natural objects from potential theory, such as the equilibrium measure, the equilibrium potential, and the capacity of a subset of E. We also provide two variational characterizations for the capacity. We then describe the orthogonal complement under the Dirichlet form of the space of functions vanishing on a subset K of E. This also naturally leads us to the notion of trace form (and network reduction).

The Green function gives rise to the potential operators
(1.32) $Qf(x) = \sum_{y \in E} g(x,y)\, f(y)\, \lambda_y$, for $f : E \to \mathbb{R}$ (a function),

the potential of the function f, and

(1.33) $G\nu(x) = \sum_{y \in E} g(x,y)\, \nu_y$, for $\nu : E \to \mathbb{R}$ (a measure),

the potential of the measure ν. We also write the duality bracket (between functions and measures on E):

(1.34) $\langle \nu, f \rangle = \sum_x \nu_x\, f(x)$, for $f : E \to \mathbb{R}$, $\nu : E \to \mathbb{R}$.

In the next proposition we collect several useful properties of the Green function and Dirichlet form.
Proposition 1.3.
(1.35) $E(\nu,\mu) \overset{\mathrm{def}}{=} \langle \nu, G\mu \rangle = \sum_{x,y \in E} \nu_x\, g(x,y)\, \mu_y$, for $\nu, \mu : E \to \mathbb{R}$, defines a positive definite, symmetric bilinear form.

(1.36) $Q = (I - P)^{-1}$ (see (1.19), (1.32) for notation).

(1.37) $G = (-L)^{-1}$, where $Lf(x) = \sum_{y \in E} c_{x,y}\, f(y) - \lambda_x\, f(x)$, for $f : E \to \mathbb{R}$.

(1.38) $\mathcal{E}(G\nu, f) = \langle \nu, f \rangle$, for $\nu : E \to \mathbb{R}$ and $f : E \to \mathbb{R}$.

(1.39) $\exists\, \rho > 0$, such that $\mathcal{E}(f,f) \ge \rho\, \|f\|^2_{L^2(d\lambda)}$, for all $f : E \to \mathbb{R}$.

(1.40) $G\kappa = 1$ (and the killing measure κ is also called the equilibrium measure of E).
Proof. (1.35): One can give a direct proof based on (1.23) - (1.26), but we will instead derive (1.35) with the help of (1.37) - (1.39). The bilinear form in (1.35) is symmetric by (1.27). Moreover, for $\nu : E \to \mathbb{R}$,

$0 \le \mathcal{E}(G\nu, G\nu) \overset{(1.38)}{=} \langle \nu, G\nu \rangle = E(\nu, \nu)$ (the energy of the measure ν).

By (1.39), $0 = \mathcal{E}(G\nu, G\nu) \Rightarrow G\nu = 0$, and by (1.37) it then follows that $\nu = (-L)\, G\nu = 0$. This proves (1.35) (assuming (1.37) - (1.39)).
(1.36): • By (1.29):
$\int_0^\infty \sum_{n \ge 0} e^{-t}\, \frac{t^n}{n!}\, |P^n f(x)|\, dt \le \int_0^\infty \sum_{n \ge 0} c\, e^{-t}\, \frac{t^n}{n!}\, e^{-c'n}\, dt\, \|f\|_\infty \overset{(1.30)}{=} c\, \|f\|_\infty \int_0^\infty e^{-t(1 - e^{-c'})}\, dt < \infty$.

By Lebesgue's dominated convergence theorem, keeping in mind (1.18), (1.26),

$Qf(x) \overset{(1.32)}{=} \int_0^\infty R_t f(x)\, dt \overset{(1.18)}{=} \int_0^\infty \sum_{n \ge 0} e^{-t}\, \frac{t^n}{n!}\, P^n f(x)\, dt = \sum_{n \ge 0} \Big(\int_0^\infty e^{-t}\, \frac{t^n}{n!}\, dt\Big)\, P^n f(x) = \sum_{n \ge 0} P^n f(x) \overset{(1.29)}{=} (I - P)^{-1} f(x)$

(1 is not in the spectrum of P by (1.29)). This proves (1.36).
(1.37): Note that in view of (1.19),

(1.41) $-L = \lambda (I - P)$ (the composition of $(I - P)$ and the multiplication by $\lambda_\cdot$, i.e. $(\lambda f)(x) = \lambda_x f(x)$, for $f : E \to \mathbb{R}$ and $x \in E$).

Hence $-L$ is invertible and

$(-L)^{-1} = (I - P)^{-1}\, \lambda^{-1} \overset{(1.36)}{=} Q\, \lambda^{-1} \overset{(1.32),(1.33)}{=} G$.
This proves (1.37).
(1.38): By (1.10) we find that

(1.42) $\mathcal{E}(f,g) = \sum_{x,y \in E} f(x)\, g(y)\, \mathcal{E}(1_x, 1_y) \overset{(1.10)}{=} \sum_{x \in E} \lambda_x\, f(x)\, g(x) - \sum_{x,y \in E} c_{x,y}\, f(x)\, g(y) \overset{(1.2)}{=} \langle f, -Lg \rangle = \langle -Lf, g \rangle$.

As a result,

$\mathcal{E}(G\nu, f) \overset{(1.37)}{=} \langle -L G\nu, f \rangle = \langle \nu, f \rangle$, whence (1.38).

(1.39): Note that for $x \in E$, $f : E \to \mathbb{R}$,

$f(x) = \langle 1_x, f \rangle \overset{(1.38)}{=} \mathcal{E}(G1_x, f)$.

Now $\mathcal{E}(\cdot,\cdot)$ is a non-negative symmetric bilinear form. We can thus apply the Cauchy-Schwarz inequality to find that

$f(x)^2 \le \mathcal{E}(G1_x, G1_x)\, \mathcal{E}(f,f) \overset{(1.38)}{=} \langle 1_x, G1_x \rangle\, \mathcal{E}(f,f) = g(x,x)\, \mathcal{E}(f,f)$.

As a result we find that

(1.43) $\|f\|^2_{L^2(d\lambda)} = \sum_{x \in E} f(x)^2\, \lambda_x \le \sum_{x \in E} g(x,x)\, \lambda_x\, \mathcal{E}(f,f)$,

and (1.39) follows with

$\rho = \big(\sum_{x \in E} g(x,x)\, \lambda_x\big)^{-1}$.

(1.40): By (1.39), $\mathcal{E}(\cdot,\cdot)$ is positive definite, and by (1.9),

$\mathcal{E}(1, f) \overset{(1.9)}{=} \sum_x \kappa_x\, f(x) = \langle \kappa, f \rangle \overset{(1.38)}{=} \mathcal{E}(G\kappa, f)$, for all $f : E \to \mathbb{R}$.

It thus follows that $1 = G\kappa$, whence (1.40). □

Remark 1.4. Note that we have shown in (1.42) that for all $f, g : E \to \mathbb{R}$,

(1.44) $\mathcal{E}(f,g) = \langle -Lf, g \rangle = \langle f, -Lg \rangle$.

Since $-L = \lambda(I - P)$, we also find, see (1.11) for notation,

(1.44') $\mathcal{E}(f,g) = ((I - P)f, g)_\lambda = (f, (I - P)g)_\lambda$.

As a next step we introduce some important random times for the continuous-time Markov chain $X_t$, $t \ge 0$. Given $K \subseteq E$, we define

(1.45) $H_K = \inf\{t \ge 0;\ X_t \in K\}$, the entrance time in K,

$\widetilde{H}_K = \inf\{t > 0;\ X_t \in K \text{ and there exists } s \in (0,t) \text{ with } X_s \ne X_0\}$, the hitting time of K,

$T_K = \inf\{t \ge 0;\ X_t \notin K\}$, the exit time from K,

$L_K = \sup\{t > 0;\ X_t \in K\}$, the time of last visit to K

(with the convention $\sup \emptyset = 0$, $\inf \emptyset = \infty$).
$H_K$, $\widetilde{H}_K$, $T_K$ are stopping times for the canonical filtration $(\mathcal{F}_t)_{t \ge 0}$ on $D_E$ (i.e. each is a $[0,\infty]$-valued map T on $D_E$, see above (1.17), such that $\{T \le t\} \in \mathcal{F}_t \overset{\mathrm{def}}{=} \sigma(X_s,\ 0 \le s \le t)$, for each $t \ge 0$). Of course $L_K$ is in general not a stopping time.

Given $U \subseteq E$, the transition density killed outside U is

(1.46) $r_{t,U}(x,y) = P_x[X_t = y,\ t < T_U]\, \frac{1}{\lambda_y}$, for $x, y \in U$, $t \ge 0$,

and the corresponding killed Green function is

(1.47) $g_U(x,y) = \int_0^\infty r_{t,U}(x,y)\, dt$, for $x, y \in U$

(extended by 0 when x or y does not belong to U).

Remark 1.5. 1) When U is a connected (non-empty) subgraph of the graph in (1.3), $r_{t,U}(x,y)$, $t \ge 0$, $x, y \in U$, and $g_U(x,y)$, $x, y \in U$, simply correspond to the transition density and the Green function in (1.22), (1.26), when one chooses on U

- the weights $c_{x,y}$, $x, y \in U$ (i.e. the restriction to $U \times U$ of the weights on E),

- the killing measure $\widetilde{\kappa}_x = \kappa_x + \sum_{y \in E \setminus U} c_{x,y}$, $x \in U$.

2) When U is not connected, the above remark applies to each connected component of U, and $r_{t,U}(x,y)$ and $g_U(x,y)$ vanish when x, y belong to different connected components of U.

Proposition 1.6. ($U \subseteq E$, $A = E \setminus U$)

(1.48) $g_U(x,y) = g_U(y,x)$, for $x, y \in E$.

(1.49) $g(x,y) = g_U(x,y) + E_x[H_A < \infty,\ g(X_{H_A}, y)]$, for $x, y \in E$.

(1.50) $E_x[H_A < \infty,\ g(X_{H_A}, y)] = E_y[H_A < \infty,\ g(X_{H_A}, x)]$, for $x, y \in E$ (Hunt's switching identity).

Proof. (1.48): This is a direct consequence of the above remark and (1.27).

(1.49):

$g(x,y) \overset{(1.26)}{=} E_x\Big[\int_0^\infty 1\{X_t = y\}\, dt\Big] \frac{1}{\lambda_y} = E_x\Big[\int_0^\infty 1\{X_t = y,\ t < H_A\}\, dt\Big] \frac{1}{\lambda_y} + E_x\Big[H_A < \infty,\ E_{X_{H_A}}\Big[\int_0^\infty 1\{X_t = y\}\, dt\Big] \frac{1}{\lambda_y}\Big]$

$= g_U(x,y) + E_x[H_A < \infty,\ g(X_{H_A}, y)]$,

using the strong Markov property at time $H_A$.

(1.50): This follows from (1.48), (1.49), and the fact that $g(\cdot,\cdot)$ is symmetric, cf. (1.27). □

Example: Consider $x_0 \in E$. By (1.49) we find that for $x \in E$ (with $A = \{x_0\}$, $U = E \setminus \{x_0\}$):

$g(x, x_0) = 0 + P_x[H_{x_0} < \infty]\, g(x_0, x_0)$,

writing $H_{x_0}$ for $H_{\{x_0\}}$, so that

(1.51) $P_x[H_{x_0} < \infty] = \dfrac{g(x, x_0)}{g(x_0, x_0)}$, for $x \in E$.

A second application of (1.49) now yields (with $U = E \setminus \{x_0\}$)

(1.52) $g_U(x,y) = g(x,y) - \dfrac{g(x, x_0)\, g(x_0, y)}{g(x_0, x_0)}$, for $x, y \in E$.

Given $A \subseteq E$, we introduce the equilibrium measure of A:

(1.53) $e_A(x) = P_x[\widetilde{H}_A = \infty]\, 1_A(x)\, \lambda_x$, $x \in E$.

Its total mass is called the capacity of A (or the conductance of A):

(1.54) $\mathrm{cap}(A) = \sum_{x \in A} P_x[\widetilde{H}_A = \infty]\, \lambda_x$.
Remark 1.7. As we will see below in the case of A = E, the terminology in (1.53) is consistent with the terminology in (1.40). There is an interpretation of the weights $(c_{x,y})$ and the killing measure $(\kappa_x)$ on E as an electric network, grounding E at the cemetery point Δ, which is implicit in the use of the above terms, see for instance Doyle-Snell [3].

Before turning to the next proposition, we simply recall that, given $A \subseteq E$, by our convention in (1.45),

$\{H_A < \infty\} = \{L_A > 0\}$ = the set of trajectories that enter A.

Also, given a measure ν on E, we write

(1.55) $P_\nu = \sum_{x \in E} \nu_x\, P_x$, and $E_\nu$ for the $P_\nu$-integral (or "expectation").

Proposition 1.8. ($A \subseteq E$)

(1.56) $P_x[L_A > 0,\ X_{L_A^-} = y] = g(x,y)\, e_A(y)$, for $x, y \in E$ ($X_{L_A^-}$ is the position of $X_.$ at the last visit to A, when $L_A > 0$).

(1.57) $h_A(x) \overset{\mathrm{def}}{=} P_x[H_A < \infty] = P_x[L_A > 0] = Ge_A(x)$, for $x \in E$ (the equilibrium potential of A).

When $A \ne \emptyset$,

(1.58) $e_A$ is the unique measure ν supported on A such that $G\nu = 1$ on A.

Let $A \subseteq B \subseteq E$. Then under $P_{e_B}$ the entrance "distribution" in A and the last exit "distribution" of A coincide with $e_A$:

(1.59) $P_{e_B}[H_A < \infty,\ X_{H_A} = y] = P_{e_B}[L_A > 0,\ X_{L_A^-} = y] = e_A(y)$, for $y \in E$.

(1.60) In particular, when B = E, under $P_\kappa$, the entrance distribution in A and the exit distribution of A coincide with $e_A$.

Proof. (1.56): Both members vanish when $y \notin A$. We thus assume $y \in A$. Using the discrete-time Markov chain $Z_n$, $n \ge 0$ (see (1.15)), we can write:

$P_x[L_A > 0,\ X_{L_A^-} = y] = P_x\Big[\bigcup_{n \ge 0} \{Z_n = y, \text{ and for all } k > n,\ Z_k \notin A\}\Big] = \sum_{n \ge 0} P_x[Z_n = y, \text{ and for all } k > n,\ Z_k \notin A]$ (the events being pairwise disjoint)

$\overset{\text{Markov property}}{=} \sum_{n \ge 0} P_x[Z_n = y]\, P_y[\text{for all } k > 0,\ Z_k \notin A] \overset{\text{Fubini},\ (1.45)}{=} E_x\Big[\sum_{n \ge 0} 1\{Z_n = y\}\Big]\, P_y[\widetilde{H}_A = \infty] \overset{(1.26),(1.53)}{=} g(x,y)\, e_A(y)$,

where in the last step we used that $E_x[\sum_{n \ge 0} 1\{Z_n = y\}] = E_x[\int_0^\infty 1\{X_t = y\}\, dt] = \lambda_y\, g(x,y)$.

This proves (1.56).

(1.57): Summing (1.56) over $y \in A$, we obtain

$P_x[H_A < \infty] = P_x[L_A > 0] = \sum_{y \in A} g(x,y)\, e_A(y) \overset{(1.33)}{=} Ge_A(x)$, whence (1.57).
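Before continuing the proof, the objects $e_A$, $h_A$ and cap(A) can be illustrated numerically. The sketch below again assumes the hypothetical 3-point path graph (weights $c_{0,1} = c_{1,2} = 1$, killing $\kappa = (1,0,0)$, Green density worked out by hand); for a single point $A = \{x_0\}$, the identity $Ge_A = 1$ on A forces $e_A(x_0) = 1/g(x_0,x_0)$.

```python
# Hedged sketch: equilibrium measure, equilibrium potential and capacity of
# A = {2} in a hypothetical 3-point path graph (c01 = c12 = 1, kappa = (1,0,0)).
g = [[1.0, 1.0, 1.0],        # Green density of the toy example, g = (-L)^{-1}
     [1.0, 2.0, 2.0],
     [1.0, 2.0, 3.0]]
P = [[0.0, 0.5, 0.0],        # skeleton transition probabilities p_{x,y}
     [0.5, 0.0, 0.5],
     [0.0, 1.0, 0.0]]
A = [2]

# one-point set: e_A(2) = 1/g(2,2), so cap(A) = 1/3
eA = [0.0, 0.0, 1.0 / g[2][2]]
cap = sum(eA)

# (1.57): the equilibrium potential h_A = G e_A equals P_x[H_A < infinity],
# which by (1.51) is g(x,2)/g(2,2) here
hA = [sum(g[x][y] * eA[y] for y in range(3)) for x in range(3)]
assert all(abs(a - b) < 1e-12 for a, b in zip(hA, [1/3, 2/3, 1.0]))

# h_A equals 1 on A and is harmonic off A: h_A(x) = P h_A(x) for x not in A
for x in range(3):
    if x not in A:
        assert abs(hA[x] - sum(P[x][y] * hA[y] for y in range(3))) < 1e-12
```

The harmonicity check off A is the discrete analogue of the fact that $h_A$ is the potential of a measure supported on A.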
(1.58): Note that $e_A$ is supported on A and $Ge_A = 1$ on A by (1.57). If ν is another such measure and $\mu = \nu - e_A$, then

$\langle \mu, G\mu \rangle = 0$,

because $G\mu = 0$ on A and μ is supported on A. By (1.35) it follows that $\mu = 0$, whence (1.58).

(1.59), (1.60): By (1.50) (Hunt's switching identity), for $y \in E$:

$E_{e_B}[H_A < \infty,\ g(X_{H_A}, y)] = E_y[H_A < \infty,\ (Ge_B)(X_{H_A})] \overset{(1.58),\ A \subseteq B}{=} P_y[H_A < \infty]$.

Denoting by μ the entrance distribution of $X_.$ in A under $P_{e_B}$:

$\mu_x = P_{e_B}[H_A < \infty,\ X_{H_A} = x]$, $x \in E$,

we see by the above identity and (1.57) that $G\mu(y) = Ge_A(y)$, for all $y \in E$, and by applying $-L$ to both sides, $\mu = e_A$. As for the last exit distribution of $X_.$ from A under $P_{e_B}$, integrating over $e_B$ in (1.56), we find:

$P_{e_B}[L_A > 0,\ X_{L_A^-} = y] = \sum_{x \in E} e_B(x)\, g(x,y)\, e_A(y) \overset{(1.57),\ A \subseteq B}{=} e_A(y)$, for $y \in E$.

This completes the proof of (1.59). In the special case B = E, we know by (1.40), (1.58) that $e_B = \kappa$, and (1.60) follows. □

We now provide two variational problems for the capacity, where the equilibrium measure and the equilibrium potential appear. These characterizations are, of course, strongly flavored by the previously mentioned analogy with electric networks (we refer to Remark 1.7).

Proposition 1.9. ($A \subseteq E$)

(1.61) $\mathrm{cap}(A) = \big(\inf\{E(\nu,\nu);\ \nu \text{ a probability measure supported on } A\}\big)^{-1}$,

and when $A \ne \emptyset$, the infimum is uniquely attained at $\bar{e}_A = e_A/\mathrm{cap}(A)$, the normalized equilibrium measure of A.

(1.62) $\mathrm{cap}(A) = \inf\{\mathcal{E}(f,f);\ f \ge 1 \text{ on } A\}$,

and the infimum is uniquely attained at $h_A$, the equilibrium potential of A.

Proof. (1.61): When $A = \emptyset$, both members of (1.61) vanish and there is nothing to prove. We thus assume $A \ne \emptyset$ and consider a probability measure ν supported on A. By (1.35), we have

$0 \le E(\nu, \nu) = E(\nu - \bar{e}_A + \bar{e}_A,\ \nu - \bar{e}_A + \bar{e}_A) = E(\bar{e}_A, \bar{e}_A) + 2\, E(\nu - \bar{e}_A, \bar{e}_A) + E(\nu - \bar{e}_A,\ \nu - \bar{e}_A)$.

The last term is non-negative, by (1.35) it only vanishes when $\nu = \bar{e}_A$, and

$E(\nu - \bar{e}_A, \bar{e}_A) = \sum_{x \in E} (\nu - \bar{e}_A)(x) \sum_{y \in E} g(x,y)\, \bar{e}_A(y) = \frac{1}{\mathrm{cap}(A)} - \frac{1}{\mathrm{cap}(A)} = 0$,

since by (1.58) the inner sum equals $1/\mathrm{cap}(A)$ on A, and both ν and $\bar{e}_A$ are probability measures supported on A.

We thus find that $E(\nu, \nu)$ becomes (uniquely) minimal at

$E(\bar{e}_A, \bar{e}_A) = \frac{1}{\mathrm{cap}(A)^2} \sum_{x,y \in E} e_A(x)\, g(x,y)\, e_A(y) \overset{(1.58)}{=} \frac{1}{\mathrm{cap}(A)^2}\, e_A(A) = \frac{1}{\mathrm{cap}(A)}$.

This proves (1.61).

(1.62): We consider $f : E \to \mathbb{R}$ such that $f \ge 1$ on A, and $h_A = Ge_A$, so that

$h_A(x) = P_x[H_A < \infty] = 1$, for $x \in A$.

We have

$\mathcal{E}(f,f) = \mathcal{E}(f - h_A + h_A,\ f - h_A + h_A) = \mathcal{E}(h_A, h_A) + 2\, \mathcal{E}(f - h_A, h_A) + \mathcal{E}(f - h_A,\ f - h_A)$.

Again, the last term is non-negative and only vanishes when $f = h_A$, see (1.39). Moreover, we have

$\mathcal{E}(f - h_A, h_A) = \mathcal{E}(f - h_A, Ge_A) \overset{(1.38)}{=} \langle e_A, f - h_A \rangle \ge 0$,

since $h_A = 1$ on A, $f \ge 1$ on A, and $e_A$ is supported on A.

So the right-hand side of (1.62) equals

$\mathcal{E}(h_A, h_A) = \mathcal{E}(Ge_A, h_A) \overset{(1.38)}{=} \langle e_A, h_A \rangle = e_A(A) = \mathrm{cap}(A)$.

This proves (1.62). □

Orthogonal decomposition, trace Dirichlet form:

We consider $U \subseteq E$ and set $K = E \setminus U$. Our aim is to describe the orthogonal complement relative to the Dirichlet form $\mathcal{E}(\cdot,\cdot)$ of the space of functions supported in U:

(1.63) $\mathcal{F}_U = \{\varphi : E \to \mathbb{R};\ \varphi(x) = 0, \text{ for all } x \in K\}$.

To this end we introduce the space of functions harmonic in U:

(1.64) $\mathcal{H}_U = \{h : E \to \mathbb{R};\ Ph(x) = h(x), \text{ for all } x \in U\}$,

as well as the space of potentials of (signed) measures supported on K:

(1.65) $\mathcal{G}_K = \{f : E \to \mathbb{R};\ f = G\nu, \text{ for some } \nu \text{ supported on } K\}$.

Recall that $\mathcal{E}(\cdot,\cdot)$ is a positive definite quadratic form on the space $\mathcal{F}$ of functions from E to $\mathbb{R}$ (see above (1.11)).

Proposition 1.10. (orthogonal decomposition)

(1.66) $\mathcal{H}_U = \mathcal{G}_K$.

(1.67) $\mathcal{F} = \mathcal{F}_U \oplus \mathcal{H}_U$, where $\mathcal{F}_U$ and $\mathcal{H}_U$ are orthogonal, relative to $\mathcal{E}(\cdot,\cdot)$.

Proof. (1.66): We first show that $\mathcal{H}_U \subseteq \mathcal{G}_K$. Indeed, when $h \in \mathcal{H}_U$, $h \overset{(1.37)}{=} G(-L)h = G\nu$, where $\nu = -Lh$ is supported on K by (1.64) and (1.41). Hence $\mathcal{H}_U \subseteq \mathcal{G}_K$.

To prove the reverse inclusion we consider ν supported on K. Set $h = G\nu$. By (1.37) we know that $-Lh = -LG\nu = \nu$, so that $-Lh$ vanishes on U.
It follows from (1.41) that $h \in \mathcal{H}_U$, and (1.66) is proved. Incidentally, note that choosing A = K in (1.49), we can multiply both sides of (1.49) by $\nu_y$ and sum over y. The first term on the right-hand side vanishes and we then see that $h = G\nu$ satisfies

(1.68) $h(x) = E_x[H_K < \infty,\ h(X_{H_K})]$, for $x \in E$.

(1.67): We first note that when $\varphi \in \mathcal{F}_U$ and ν is supported on K,

$\mathcal{E}(G\nu, \varphi) \overset{(1.38)}{=} \langle \nu, \varphi \rangle = 0$.

So the spaces $\mathcal{F}_U$ and $\mathcal{H}_U$ are orthogonal under $\mathcal{E}(\cdot,\cdot)$. In addition, given f from E to $\mathbb{R}$, we can define

(1.69) $h(x) = E_x[H_K < \infty,\ f(X_{H_K})]$, for $x \in E$,

and note that

$h(x) = f(x)$, when $x \in K$,

and that, by the same argument as above, h is harmonic in U. If we now define $\varphi = f - h$, we see that φ vanishes on K, and hence

(1.70) $f = \varphi + h$, with $\varphi \in \mathcal{F}_U$ and $h \in \mathcal{H}_U$,

is the orthogonal decomposition of f. This proves (1.67). □

As we now explain, the restriction of the Dirichlet form to the space $\mathcal{H}_U = \mathcal{G}_K$, see (1.64) - (1.66), gives rise to a new Dirichlet form on the space of functions from K to $\mathbb{R}$, the so-called trace form. Given $f : K \to \mathbb{R}$, we also write f for the function on E that agrees with f on K and vanishes on U, when no confusion arises. Note that

(1.71) $\widetilde{f}(x) = E_x[H_K < \infty,\ f(X_{H_K})]$, for $x \in E$,

is the unique function on E, harmonic in U, that agrees with f on K, cf. (1.67). Indeed, the decomposition (1.67) applied to the case of the function equal to f on K, and to 0 on U, shows the existence of a function in $\mathcal{H}_U$ equal to f on K. By (1.68) and (1.66), it is necessarily equal to $\widetilde{f}$. We then define for $f : K \to \mathbb{R}$ the trace form

(1.72) $\mathcal{E}^*(f,f) = \mathcal{E}(\widetilde{f}, \widetilde{f}) \overset{(1.69),(1.70)}{=} \inf\{\mathcal{E}(g,g);\ g : E \to \mathbb{R} \text{ coincides with } f \text{ on } K\}$,

where we used in the second equality the fact that when g coincides with f on K, then $g = \varphi + \widetilde{f}$, with $\varphi \in \mathcal{F}_U$, and hence $\mathcal{E}(g,g) \ge \mathcal{E}(\widetilde{f}, \widetilde{f})$ due to (1.67). We naturally extend this definition for $f, g : K \to \mathbb{R}$, by setting

(1.73) $\mathcal{E}^*(f,g) = \mathcal{E}(\widetilde{f}, \widetilde{g})$.

It is plain that $\mathcal{E}^*$ is a symmetric bilinear form on the space of functions from K to $\mathbb{R}$.
As we now explain, $\mathcal{E}^*$ does indeed correspond to a Dirichlet form on K induced by some (uniquely defined in view of (1.10)) non-negative weights and killing measure.

Proposition 1.11. ($K \ne \emptyset$) The quantities defined by

(1.74) $c^*_{x,y} = \lambda_x\, P_x[\widetilde{H}_K < \infty,\ X_{\widetilde{H}_K} = y]$, for $x \ne y$ in K, and $c^*_{x,x} = 0$, for $x \in K$,

(1.75) $\kappa^*_x = \lambda_x\, P_x[\widetilde{H}_K = \infty]$, for $x \in K$,

(1.76) $\lambda^*_x = \lambda_x\, (1 - P_x[\widetilde{H}_K < \infty,\ X_{\widetilde{H}_K} = x])$, for $x \in K$,

satisfy (1.2) - (1.5), (1.7), with E replaced by K (in particular $c^*_{x,y} = c^*_{y,x}$). The corresponding Dirichlet form coincides with $\mathcal{E}^*$, i.e.

(1.77) $\mathcal{E}^*(f,f) = \frac{1}{2} \sum_{x,y \in K} c^*_{x,y}\, (f(y) - f(x))^2 + \sum_{x \in K} \kappa^*_x\, f^2(x)$, for $f : K \to \mathbb{R}$.

The corresponding Green function $g^*(x,y)$, $x, y \in K$, satisfies

(1.78) $g^*(x,y) = g(x,y)$, for $x, y \in K$.

Proof. We first prove that

(1.79) the quantities in (1.74) - (1.76) satisfy (1.2) - (1.5), (1.7), with E replaced by K.

To this end we note that for $x \ne y$ in K,

(1.80) $-\mathcal{E}^*(1_x, 1_y) \overset{(1.73)}{=} -\mathcal{E}(\widetilde{1}_x, \widetilde{1}_y) \overset{(1.67)}{=} -\mathcal{E}(1_x, \widetilde{1}_y) \overset{(1.44)}{=} L\widetilde{1}_y(x) \overset{\widetilde{1}_y(x) = 0}{=} \lambda_x \sum_z p_{x,z}\, \widetilde{1}_y(z) \overset{(1.71)}{=} \lambda_x\, P_x[H$