Topics in Occupation Times and Gaussian Free Fields

Alain-Sol Sznitman∗

Notes of the course “Special topics in probability” at ETH Zurich during the Spring term 2011

∗ Mathematik, ETH Zürich, CH-8092 Zürich, Switzerland. With the support of the grant ERC-2009-AdG 245728-RWPERCRI.

Foreword

The following notes grew out of the graduate course “Special topics in probability”, which I gave at ETH Zurich during the Spring term 2011. One of the objectives was to explore the links between occupation times, Gaussian free fields, Poisson gases of Markovian loops, and random interlacements. The stimulating atmosphere during the live lectures was an encouragement to write a fleshed-out version of the handwritten notes, which were handed out during the course. I am immensely grateful to Pierre-François Rodriguez, Artem Sapozhnikov, Balázs Ráth, Alexander Drewitz, and David Belius, for their numerous comments on the successive versions of these notes.

Contents

0 Introduction

1 Generalities
1.1 The set-up
1.2 The Markov chain $X_\cdot$ (with jump rate 1)
1.3 Some potential theory
1.4 Feynman-Kac formula
1.5 Local times
1.6 The Markov chain $\bar X_\cdot$ (with variable jump rate)

2 Isomorphism theorems
2.1 The Gaussian free field
2.2 The measures $P_{x,y}$
2.3 Isomorphism theorems
2.4 Generalized Ray-Knight theorems

3 The Markovian loop
3.1 Rooted loops and the measure $\mu_r$ on rooted loops
3.2 Pointed loops and the measure $\mu_p$ on pointed loops
3.3 Restriction property
3.4 Local times
3.5 Unrooted loops and the measure $\mu^*$ on unrooted loops

4 Poisson gas of Markovian loops
4.1 Poisson point measures on unrooted loops
4.2 Occupation field
4.3 Symanzik’s representation formula
4.4 Some identities
4.5 Some links between Markovian loops and random interlacements

References

Index

0 Introduction

This set of notes explores some of the links between occupation times and Gaussian processes. Notably they bring into play certain isomorphism theorems going back to Dynkin [4], [5], as well as certain Poisson point processes of Markovian loops, which originated in physics through the work of Symanzik [26]. More recently such Poisson gases of Markovian loops have reappeared in the context of the “Brownian loop soup” of Lawler and Werner [16] and are related to the so-called “random interlacements”, see Sznitman [27]. In particular they have been extensively investigated by Le Jan [17], [18].

A convenient set-up to develop this circle of ideas consists in the consideration of a finite connected graph $E$ endowed with positive weights and a non-degenerate killing measure. One can then associate to these data a continuous-time Markov chain $\bar X_t$, $t \ge 0$, on $E$, with variable jump rates, which dies after a finite time due to the killing measure, as well as

(0.1) the Green density $g(x,y)$, $x,y \in E$ (which is positive and symmetric),

(0.2) the local times $L^x_t = \int_0^t 1\{\bar X_s = x\}\, ds$, $t \ge 0$, $x \in E$.

In fact $g(\cdot,\cdot)$ is a positive definite function on $E \times E$, and one can define a centered Gaussian field $\varphi_x$, $x \in E$, such that

(0.3) $\mathrm{cov}(\varphi_x, \varphi_y)\ (= E[\varphi_x \varphi_y]) = g(x,y)$, for $x,y \in E$.

This is the so-called Gaussian free field.

It turns out that $\frac12\,\varphi_z^2$, $z \in E$, and $L^z_\infty$, $z \in E$, have intricate relationships. For instance Dynkin's isomorphism theorem states in our context that for any $x,y \in E$,

(0.4) $\big(L^z_\infty + \frac12\,\varphi_z^2\big)_{z\in E}$ under $P_{x,y} \otimes P^G$,

has the “same law” as

(0.5) $\big(\frac12\,\varphi_z^2\big)_{z\in E}$ under $\varphi_x\, \varphi_y\, P^G$,

where $P_{x,y}$ stands for the (non-normalized) h-transform of our basic Markov chain, with the choice $h(\cdot) = g(\cdot,y)$, starting from the point $x$, and $P^G$ for the law of the Gaussian field $\varphi_z$, $z \in E$.

Eisenbaum's isomorphism theorem, which appeared in [7], does not involve h-transforms and states in our context that for any $x \in E$, $s \ne 0$,

(0.6) $\big(L^z_\infty + \frac12\,(\varphi_z + s)^2\big)_{z\in E}$ under $P_x \otimes P^G$,

has the “same law” as

(0.7) $\big(\frac12\,(\varphi_z + s)^2\big)_{z\in E}$ under $\big(1 + \frac{\varphi_x}{s}\big)\, P^G$.

The above isomorphism theorems are also closely linked to the topic of theorems of Ray-Knight type, see Eisenbaum [6], and chapters 2 and 8 of Marcus-Rosen [19]. Originally, see [13, 21], such theorems came as a description of the Markovian character in the space variable of Brownian local times evaluated at certain random times. More recently, the Gaussian aspects and the relation with the isomorphism theorems have gained prominence, see [8], and [19].

Interestingly, Dynkin's isomorphism theorem has its roots in mathematical physics. It grew out of the investigation by Dynkin in [4] of a probabilistic representation formula for the moments of certain random fields in terms of a Poissonian gas of loops interacting with Markovian paths, which appeared in Brydges-Fröhlich-Spencer [2], and was based on the work of Symanzik [26]. The Poisson gas of loops in question is a Poisson point process on the state space of loops on $E$ modulo time-shift. Its intensity measure is a multiple $\alpha\,\mu^*$ of the image $\mu^*$ of a certain measure $\mu_{\mathrm{rooted}}$, under the canonical map for the equivalence relation identifying rooted loops $\gamma$ that only differ by a time-shift. This measure $\mu_{\mathrm{rooted}}$ is the σ-finite measure on rooted loops defined by

(0.8) $\mu_{\mathrm{rooted}}(d\gamma) = \sum\limits_{x\in E} \int_0^\infty Q^t_{x,x}(d\gamma)\,\dfrac{dt}{t}$,

where $Q^t_{x,x}$ is the image of $1\{X_t = x\}\, P_x$ under $(X_s)_{0\le s\le t}$, if $X_\cdot$ stands for the Markov chain on $E$ with jump rates equal to 1 attached to the weights and killing measure we have chosen on $E$. The random fields on $E$ alluded to above are motivated by models of Euclidean quantum field theory, see [11], and are for instance of the following kind:

(0.9) $\langle F(\varphi)\rangle = \displaystyle\int_{\mathbb{R}^E} F(\varphi)\, e^{-\frac12 \mathcal{E}(\varphi,\varphi)} \prod_{x\in E} h\Big(\frac{\varphi_x^2}{2}\Big)\, d\varphi_x \Big/ \int_{\mathbb{R}^E} e^{-\frac12 \mathcal{E}(\varphi,\varphi)} \prod_{x\in E} h\Big(\frac{\varphi_x^2}{2}\Big)\, d\varphi_x$,

with

$h(u) = \int_0^\infty e^{-vu}\, d\nu(v)$, $u \ge 0$, with $\nu$ a probability distribution on $\mathbb{R}_+$,

and $\mathcal{E}(\varphi,\varphi)$ the energy of the function $\varphi$ corresponding to the weights and killing measure on $E$ (the matrix $\mathcal{E}(1_x, 1_y)$, $x,y \in E$, is the inverse of the matrix $g(x,y)$, $x,y \in E$ in (0.3)).

Fig. 0.1: The paths $w_1,\ldots,w_k$ in $E$ interact with the gas of loops through the random potentials.

The typical representation formula for the moments of the random field in (0.9) looks like this: for $k \ge 1$, $z_1,\ldots,z_{2k} \in E$,

(0.10) $\langle \varphi_{z_1}\cdots\varphi_{z_{2k}}\rangle = \sum\limits_{\text{pairings of } z_1,\ldots,z_{2k}} \dfrac{P_{x_1,y_1}\otimes\cdots\otimes P_{x_k,y_k}\otimes Q\big[e^{-\sum_{x\in E} v_x(\mathcal{L}_x + L^x_\infty(w_1)+\cdots+L^x_\infty(w_k))}\big]}{Q\big[e^{-\sum_{x\in E} v_x\,\mathcal{L}_x}\big]}$,

where the sum runs over the (non-ordered) pairings (i.e. partitions) of the symbols $z_1, z_2,\ldots,z_{2k}$ into $\{x_1,y_1\},\ldots,\{x_k,y_k\}$. Under $Q$ the $v_x$, $x\in E$, are i.i.d. ν-distributed (random potentials), independent of the $\mathcal{L}_x$, $x\in E$, which are distributed as the total occupation times (properly scaled to take account of the weights and killing measure) of the gas of loops with intensity $\frac12\,\mu$, and the $P_{x_i,y_i}$, $1\le i\le k$, are defined just as below (0.4), (0.5).

The Poisson point process of Markovian loops has many interesting properties. We will for instance see that when $\alpha = \frac12$ (i.e. the intensity measure equals $\frac12\,\mu$),

(0.11) $(\mathcal{L}_x)_{x\in E}$ has the same distribution as $(\frac12\,\varphi_x^2)_{x\in E}$, where $(\varphi_x)_{x\in E}$ stands for the Gaussian free field in (0.3).

The Poisson gas of Markovian loops is also related to the model of random interlacements [27], which loosely speaking corresponds to “loops going through infinity”. It appears as well in the recent developments concerning conformally invariant scaling limits, see Lawler-Werner [16], Sheffield-Werner [24]. As for random interlacements, interestingly, in place of (0.11), they satisfy an isomorphism theorem in the spirit of the generalized second Ray-Knight theorem, see [28].

1 Generalities

In this chapter we describe the general framework we will use for the most part of these notes. We introduce finite weighted graphs with killing and the associated continuous-time Markov chains $X_\cdot$, with constant jump rate equal to 1, and $\bar X_\cdot$, with variable jump rate. We also recall various notions related to Dirichlet forms and potential theory.

1.1 The set-up

We introduce in this section the general set-up, which we will use in the sequel, and recall some classical facts. We also refer to [14] and [10], where the theory is developed in a more general framework. We assume that

(1.1) $E$ is a finite non-empty set,

endowed with non-negative weights

(1.2) $c_{x,y} = c_{y,x} \ge 0$, for $x,y\in E$, and $c_{x,x} = 0$, for $x\in E$,

so that

(1.3) $E$, endowed with the edge set consisting of the pairs $\{x,y\}$ such that $c_{x,y} > 0$, is a connected graph.

We also suppose that there is a killing measure on $E$:

(1.4) $\kappa_x \ge 0$, $x\in E$,

and that

(1.5) $\kappa_x \ne 0$, for at least some $x\in E$.

We also consider a

(1.6) cemetery state $\Delta$ not in $E$

(we can think of $\kappa_x$ as $c_{x,\Delta}$). With these data we can define a measure on $E$:

(1.7) $\lambda_x = \sum\limits_{y\in E} c_{x,y} + \kappa_x$, $x\in E$ (note that $\lambda_x > 0$, due to (1.2) - (1.5)).

We can also introduce the energy of a function on $E$, or Dirichlet form

(1.8) $\mathcal{E}(f,f) = \frac12 \sum\limits_{x,y\in E} c_{x,y}\,(f(y)-f(x))^2 + \sum\limits_{x\in E} \kappa_x\, f^2(x)$, for $f: E\to\mathbb{R}$.

Note that $(c_{x,y})_{x,y\in E}$ and $(\kappa_x)_{x\in E}$ determine the Dirichlet form. Conversely, the Dirichlet form determines $(c_{x,y})_{x,y\in E}$ and $(\kappa_x)_{x\in E}$. Indeed, one defines, by polarization, for $f,g: E\to\mathbb{R}$,

(1.9) $\mathcal{E}(f,g) = \frac14\,[\mathcal{E}(f+g, f+g) - \mathcal{E}(f-g, f-g)] = \frac12 \sum\limits_{x,y\in E} c_{x,y}\,(f(y)-f(x))(g(y)-g(x)) + \sum\limits_{x\in E} \kappa_x\, f(x)\,g(x)$,

and one notes that

(1.10) $\mathcal{E}(1_x, 1_y) = -c_{x,y}$, for $x\ne y$ in $E$, and $\mathcal{E}(1_x, 1_x) = \sum\limits_{y\in E} c_{x,y} + \kappa_x = \lambda_x$, for $x\in E$,

so that the Dirichlet form uniquely determines the weights $(c_{x,y})_{x,y\in E}$ and the killing measure $(\kappa_x)_{x\in E}$. Observe also that by (1.3), (1.5), (1.8), (1.9), the Dirichlet form defines a positive definite quadratic form on the space $\mathbb{F}$ of functions from $E$ to $\mathbb{R}$, see also (1.39) below. We denote by $(\cdot,\cdot)_\lambda$ the scalar product in $L^2(d\lambda)$:

(1.11) $(f,g)_\lambda = \sum\limits_{x\in E} f(x)\,g(x)\,\lambda_x$, for $f,g: E\to\mathbb{R}$.

The weights and the killing measure induce a sub-Markovian transition probability on $E$:

(1.12) $p_{x,y} = \dfrac{c_{x,y}}{\lambda_x}$, for $x,y\in E$,

which is λ-reversible:

(1.13) $\lambda_x\, p_{x,y} = \lambda_y\, p_{y,x}$, for all $x,y\in E$.

One then extends $p_{x,y}$, $x,y\in E$, to a transition probability on $E\cup\{\Delta\}$ by setting

(1.14) $p_{x,\Delta} = \dfrac{\kappa_x}{\lambda_x}$, for $x\in E$, and $p_{\Delta,\Delta} = 1$,

so the corresponding discrete-time Markov chain on $E\cup\{\Delta\}$ is absorbed in the cemetery state $\Delta$ once it reaches $\Delta$. We denote by

(1.15) $Z_n$, $n\ge0$, the canonical discrete Markov chain on the space of discrete trajectories in $E\cup\{\Delta\}$, which after finitely many steps reach $\Delta$ and from then on remain at $\Delta$,

and by

(1.16) $P_x$ the law of the chain starting from $x\in E\cup\{\Delta\}$.

We will attach to the Dirichlet form (1.8) (or, equivalently, to the weights and the killing measure), two continuous-time Markov chains on $E\cup\{\Delta\}$, which are time changes of each other, with discrete skeleton corresponding to $Z_n$, $n\ge0$. The first chain $X_\cdot$ will have a unit jump rate, whereas the second chain $\bar X_\cdot$ (defined in Section 1.6) will have a variable jump rate governed by $\lambda$.

1.2 The Markov chain $X_\cdot$ (with jump rate 1)

We introduce in this section the continuous-time Markov chain on $E\cup\{\Delta\}$ (absorbed in the cemetery state $\Delta$), with discrete skeleton described by $Z_n$, $n\ge0$, and exponential holding times of parameter 1. We also bring into play some of the natural objects attached to this Markov chain.

The canonical space $D_E$ for this Markov chain consists of right-continuous functions with values in $E\cup\{\Delta\}$, with finitely many jumps, which after some time enter $\Delta$ and from then on remain equal to $\Delta$. We denote by

(1.17) $X_t$, $t\ge0$, the canonical process on $D_E$,
$\theta_t$, $t\ge0$, the canonical shift on $D_E$: $\theta_t(w)(\cdot) = w(\cdot + t)$, for $w\in D_E$,
$P_x$ the law on $D_E$ of the Markov chain starting at $x\in E\cup\{\Delta\}$.

Remark 1.1. Whenever convenient we will tacitly enlarge the canonical space $D_E$ and work with a probability space on which (under $P_x$) we can simultaneously consider the discrete Markov chain $Z_n$, $n\ge0$, with starting point a.s. equal to $x$, and an independent sequence of positive variables $T_n$, $n\ge1$, the “jump times”, increasing to infinity, with increments $T_{n+1} - T_n$, $n\ge0$, i.i.d. exponential with parameter 1 (with the convention $T_0 = 0$). The continuous-time chain $X_t$, $t\ge0$, will then be expressed as

$X_t = Z_n$, for $T_n \le t < T_{n+1}$, $n\ge0$.

Of course, once the discrete-time chain reaches the cemetery state $\Delta$, the subsequent “jump times” $T_n$ are only fictitious “jumps” of the continuous-time chain. □

Examples:

1) Simple random walk on the discrete torus killed at a constant rate

$E = (\mathbb{Z}/N\mathbb{Z})^d$, where $N > 1$, $d\ge1$,

endowed with the graph structure where $x,y$ are neighbors if exactly one of their coordinates differs by $\pm1$, and the other coordinates are equal. We pick

$c_{x,y} = 1\{x, y \text{ are neighbors}\}$, $x,y\in E$,
$\kappa_x = \kappa > 0$.

So $X_t$, $t\ge0$, is the simple random walk on $(\mathbb{Z}/N\mathbb{Z})^d$ with exponential holding times of parameter 1, killed at each step with probability $\frac{\kappa}{2d+\kappa}$, when $N > 2$, and probability $\frac{\kappa}{d+\kappa}$, when $N = 2$.

2) Simple random walk on $\mathbb{Z}^d$ killed outside a finite connected subset of $\mathbb{Z}^d$, that is:

$E$ is a finite connected subset of $\mathbb{Z}^d$, $d\ge1$,

$c_{x,y} = 1\{|x-y| = 1\}$, for $x,y\in E$,

$\kappa_x = \sum\limits_{y\in\mathbb{Z}^d\setminus E} 1\{|x-y| = 1\}$, for $x\in E$.

Fig. 1.1: A finite connected subset $E$ of $\mathbb{Z}^2$, showing a boundary point $x\in E$ with $\kappa_x = 2$ and an interior point $x'$ with $\kappa_{x'} = 0$.

$X_t$, $t\ge0$, when starting in $x\in E$, corresponds to the simple random walk in $\mathbb{Z}^d$ with exponential holding times of parameter 1 killed at the first time it exits $E$. □

Our next step is to introduce some natural objects attached to the Markov chain $X_\cdot$, such as the transition semi-group and the Green function.

Transition semi-group and transition density:

Unless otherwise specified, we will tacitly view real-valued functions on $E$ as functions on $E\cup\{\Delta\}$, which vanish at the point $\Delta$. The sub-Markovian transition semi-group of the chain $X_t$, $t\ge0$, on $E$ is defined for $t\ge0$, $f: E\to\mathbb{R}$, by

(1.18) $R_t f(x) = E_x[f(X_t)] = \sum\limits_{n\ge0} e^{-t}\,\dfrac{t^n}{n!}\, E_x[f(Z_n)] = \sum\limits_{n\ge0} e^{-t}\,\dfrac{t^n}{n!}\, P^n f(x) = e^{t(P-I)} f(x)$, for $x\in E$,

where $I$ denotes the identity map on $\mathbb{R}^E$, and for $f: E\to\mathbb{R}$, $x\in E$,

(1.19) $P f(x) = \sum\limits_{y\in E} p_{x,y}\, f(y) \overset{(1.15)}{=} E_x[f(Z_1)]$.

As a result of (1.13) and (1.18)

(1.20) $P$ and $R_t$ (for any $t\ge0$) are bounded self-adjoint operators on $L^2(d\lambda)$,

(1.21) $R_{t+s} = R_t\, R_s$, for $t,s\ge0$ (semi-group property).

We then introduce the transition density

(1.22) $r_t(x,y) = (R_t 1_y)(x)\,\dfrac{1}{\lambda_y}$, for $t\ge0$, $x,y\in E$.

It follows from the self-adjointness of $R_t$, cf. (1.20), that

(1.23) $r_t(x,y) = r_t(y,x)$, for $t\ge0$, $x,y\in E$ (symmetry),

and from the semi-group property, cf. (1.21), that for $t,s\ge0$, $x,y\in E$,

(1.24) $r_{t+s}(x,y) = \sum\limits_{z\in E} r_t(x,z)\, r_s(z,y)\,\lambda_z$ (Chapman-Kolmogorov equations).

Moreover due to (1.3), (1.12), (1.18), we see that

(1.25) $r_t(x,y) > 0$, for $t > 0$, $x,y\in E$.

Green function: We define the Green function (or Green density):

(1.26) $g(x,y) = \displaystyle\int_0^\infty r_t(x,y)\, dt \overset{(1.18),(1.22),\ \text{Fubini}}{=} E_x\Big[\int_0^\infty 1\{X_t = y\}\, dt\Big]\,\frac{1}{\lambda_y}$, for $x,y\in E$.

Lemma 1.2.

(1.27) $g(x,y) \in (0,\infty)$ is a symmetric function on $E\times E$.

Proof. By (1.23), (1.25) we see that $g(\cdot,\cdot)$ is positive and symmetric. We now prove that it is finite. By (1.1), (1.3), (1.5) we see that for some $N\ge0$ and $\varepsilon > 0$,

(1.28) $\inf\limits_{x\in E} P_x[Z_n = \Delta, \text{ for some } n \le N] \ge \varepsilon > 0$.

As a result of the simple Markov property at times which are multiples of $N$, we find that

$(P^{kN} 1_E)(x) = P_x[Z_n \ne \Delta, \text{ for } 0\le n\le kN] \overset{\text{simple Markov},\ (1.28)}{\le} (1-\varepsilon)^k$, for $k\ge1$.

It follows by a straightforward interpolation that with suitable c,c′ > 0,

(1.29) $\sup\limits_{x\in E}\, (P^n 1_E)(x) \le c\, e^{-c'n}$, for $n\ge0$.

As a result inserting this bound in the last line of (1.18) gives:

(1.30) $\sup\limits_{x\in E}\, (R_t 1_E)(x) \le \sum\limits_{n\ge0} c\, e^{-t}\,\dfrac{t^n}{n!}\, e^{-c'n} = c\,\exp\{-t(1 - e^{-c'})\}$,

so that

(1.31) $g(x,y) \le \dfrac{1}{\lambda_y}\displaystyle\int_0^\infty (R_t 1_E)(x)\, dt \le \dfrac{c}{\lambda_y}\,\dfrac{1}{1 - e^{-c'}} \le c'' < \infty$,

whence (1.27). □
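As a quick numerical sanity check of (1.26) and Lemma 1.2 (an added illustration, reusing the arbitrary toy data of the sketch in Section 1.1), one can estimate the Green density by simulating the discrete skeleton: since the holding times are unit-rate exponentials, $E_x[\int_0^\infty 1\{X_t=y\}\,dt]$ equals the expected number of visits of $Z_\cdot$ to $y$. The closed form $G = (\mathrm{diag}(\lambda)-c)^{-1}$ used for comparison anticipates (1.36)-(1.37) below.

    import numpy as np

    rng = np.random.default_rng(0)

    # Same toy data as in the earlier sketch (assumed, arbitrary).
    n = 4
    c = np.zeros((n, n))
    for x in range(n - 1):
        c[x, x + 1] = c[x + 1, x] = 1.0
    kappa = np.array([0.5, 0.0, 0.0, 0.3])
    lam = c.sum(axis=1) + kappa

    # Green density by linear algebra, anticipating (1.37): G = (diag(lambda) - c)^{-1}.
    G = np.linalg.inv(np.diag(lam) - c)

    # Monte Carlo estimate of (1.26): g(x,y) = E_x[#visits of Z to y] / lambda_y.
    x0, trials = 0, 20_000
    visits = np.zeros(n)
    p_full = np.hstack([c / lam[:, None], (kappa / lam)[:, None]])  # last column: Delta
    for _ in range(trials):
        z = x0
        while z != n:                       # n encodes the cemetery state Delta
            visits[z] += 1
            z = rng.choice(n + 1, p=p_full[z])
    print("MC   :", visits / trials / lam)
    print("exact:", G[x0])

The Monte Carlo column should agree with the exact one up to the usual statistical error.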

1.3 Some potential theory

In this section we introduce some natural objects from potential theory, such as the equilibrium measure, the equilibrium potential, and the capacity of a subset of $E$. We also provide two variational characterizations for the capacity. We then describe the orthogonal complement under the Dirichlet form of the space of functions vanishing on a subset $K$ of $E$. This also naturally leads us to the notion of trace form (and network reduction). The Green function gives rise to the potential operators

(1.32) $Qf(x) = \sum\limits_{y\in E} g(x,y)\, f(y)\,\lambda_y$, for $f: E\to\mathbb{R}$ (a function),

the potential of the function $f$, and

(1.33) $G\nu(x) = \sum\limits_{y\in E} g(x,y)\,\nu_y$, for $\nu: E\to\mathbb{R}$ (a measure),

the potential of the measure $\nu$. We also write the duality bracket (between functions and measures on $E$):

(1.34) $\langle \nu, f\rangle = \sum\limits_{x} \nu_x\, f(x)$, for $f: E\to\mathbb{R}$, $\nu: E\to\mathbb{R}$.

In the next proposition we collect several useful properties of the Green function and Dirichlet form.

Proposition 1.3.

(1.35) $E(\nu,\mu) \overset{\text{def}}{=} \langle \nu, G\mu\rangle = \sum\limits_{x,y\in E} \nu_x\, g(x,y)\,\mu_y$, for $\nu,\mu: E\to\mathbb{R}$,

defines a positive definite, symmetric bilinear form.

(1.36) $Q = (I - P)^{-1}$ (see (1.19), (1.32) for notation).

(1.37) $G = (-L)^{-1}$, where $Lf(x) = \sum\limits_{y\in E} c_{x,y}\, f(y) - \lambda_x\, f(x)$, for $f: E\to\mathbb{R}$.

(1.38) $\mathcal{E}(G\nu, f) = \langle \nu, f\rangle$, for $\nu: E\to\mathbb{R}$ and $f: E\to\mathbb{R}$.

(1.39) $\exists\,\rho > 0$, such that $\mathcal{E}(f,f) \ge \rho\,\|f\|^2_{L^2(d\lambda)}$, for all $f: E\to\mathbb{R}$.

(1.40) $G\kappa = 1$ (and the killing measure $\kappa$ is also called equilibrium measure of $E$).

Proof. • (1.35): One can give a direct proof based on (1.23) - (1.26), but we will instead derive (1.35) with the help of (1.37) - (1.39). The bilinear form in (1.35) is symmetric by (1.27). Moreover, for $\nu: E\to\mathbb{R}$,

$0 \le \mathcal{E}(G\nu, G\nu) \overset{(1.38)}{=} \langle \nu, G\nu\rangle = E(\nu,\nu)$ (the energy of the measure $\nu$).

By (1.39), $0 = \mathcal{E}(G\nu, G\nu) \Rightarrow G\nu = 0$, and by (1.37) it follows that $\nu = (-L)\, G\nu = 0$. This proves (1.35) (assuming (1.37) - (1.39)).

• (1.36): By (1.29):

$\displaystyle\int_0^\infty \sum_{n\ge0} e^{-t}\,\frac{t^n}{n!}\, |P^n f(x)|\, dt \le \int_0^\infty \sum_{n\ge0} c\, e^{-t}\,\frac{t^n}{n!}\, e^{-c'n}\, dt\ \|f\|_\infty \overset{(1.30)}{=} c\,\|f\|_\infty \int_0^\infty e^{-t(1-e^{-c'})}\, dt < \infty$.

By Lebesgue's domination theorem, keeping in mind (1.18), (1.26),

$Qf(x) \overset{(1.32)}{=} \displaystyle\int_0^\infty R_t f(x)\, dt \overset{(1.18)}{=} \int_0^\infty \sum_{n\ge0} e^{-t}\,\frac{t^n}{n!}\, P^n f(x)\, dt = \sum_{n\ge0}\Big(\int_0^\infty e^{-t}\,\frac{t^n}{n!}\, dt\Big)\, P^n f(x) = \sum_{n\ge0} P^n f(x) \overset{(1.29)}{=} (I-P)^{-1} f(x)$

(1 is not in the spectrum of $P$ by (1.29)). This proves (1.36).

• (1.37): Note that in view of (1.19)

(1.41) $-L = \lambda(I - P)$ (composition of $(I-P)$ and the multiplication by $\lambda_\cdot$, i.e. $(\lambda f)(x) = \lambda_x f(x)$ for $f: E\to\mathbb{R}$, and $x\in E$).

Hence $-L$ is invertible and

$(-L)^{-1} = (I-P)^{-1}\,\lambda^{-1} \overset{(1.36)}{=} Q\,\lambda^{-1} \overset{(1.32),(1.33)}{=} G$.

This proves (1.37).

• (1.38): By (1.10) we find that

(1.42) $\mathcal{E}(f,g) = \sum\limits_{x,y\in E} f(x)\, g(y)\,\mathcal{E}(1_x, 1_y) \overset{(1.10)}{=} \sum\limits_{x\in E} \lambda_x\, f(x)\, g(x) - \sum\limits_{x,y\in E} c_{x,y}\, f(x)\, g(y) \overset{(1.2)}{=} \langle f, -Lg\rangle = \langle -Lf, g\rangle$.

As a result

$\mathcal{E}(G\nu, f) \overset{(1.37)}{=} \langle -LG\nu, f\rangle = \langle \nu, f\rangle$, whence (1.38).

• (1.39): Note that for $x\in E$, $f: E\to\mathbb{R}$,

$f(x) = \langle 1_x, f\rangle \overset{(1.38)}{=} \mathcal{E}(G1_x, f)$.

Now $\mathcal{E}(\cdot,\cdot)$ is a non-negative symmetric bilinear form. We can thus apply Cauchy-Schwarz's inequality to find that

$f(x)^2 \le \mathcal{E}(G1_x, G1_x)\,\mathcal{E}(f,f) \overset{(1.38)}{=} \langle 1_x, G1_x\rangle\,\mathcal{E}(f,f) = g(x,x)\,\mathcal{E}(f,f)$.

As a result we find that

(1.43) $\|f\|^2_{L^2(d\lambda)} = \sum\limits_{x\in E} f(x)^2\,\lambda_x \le \sum\limits_{x\in E} g(x,x)\,\lambda_x\ \mathcal{E}(f,f)$,

and (1.39) follows with $\rho = \big(\sum_{x\in E} g(x,x)\,\lambda_x\big)^{-1}$.

• (1.40): By (1.39), $\mathcal{E}(\cdot,\cdot)$ is positive definite and by (1.9)

$\mathcal{E}(1, f) \overset{(1.9)}{=} \sum\limits_x \kappa_x\, f(x) = \langle \kappa, f\rangle \overset{(1.38)}{=} \mathcal{E}(G\kappa, f)$, for all $f: E\to\mathbb{R}$.

It thus follows that $1 = G\kappa$, whence (1.40). □

Remark 1.4. Note that we have shown in (1.42) that for all $f,g: E\to\mathbb{R}$,

(1.44) $\mathcal{E}(f,g) = \langle -Lf, g\rangle = \langle f, -Lg\rangle$.

Since $-L = \lambda(I-P)$, we also find, see (1.11) for notation,

(1.44') $\mathcal{E}(f,g) = ((I-P)f, g)_\lambda = (f, (I-P)g)_\lambda$. □

As a next step we introduce some important random times for the continuous-time Markov chain $X_t$, $t\ge0$. Given $K \subseteq E$, we define

(1.45) $H_K = \inf\{t\ge0;\ X_t\in K\}$, the entrance time in $K$,
$\tilde H_K = \inf\{t>0;\ X_t\in K \text{ and there exists } s\in(0,t) \text{ with } X_s \ne X_0\}$, the hitting time of $K$,
$T_K = \inf\{t\ge0;\ X_t\notin K\}$, the exit time from $K$,
$L_K = \sup\{t>0;\ X_t\in K\}$, the time of last visit to $K$

(with the convention $\sup\emptyset = 0$, $\inf\emptyset = \infty$).

$H_K$, $\tilde H_K$, $T_K$ are stopping times for the canonical filtration $(\mathcal{F}_t)_{t\ge0}$ on $D_E$ (i.e. each is a $[0,\infty]$-valued map $T$ on $D_E$, see above (1.17), such that $\{T\le t\}\in\mathcal{F}_t \overset{\text{def}}{=} \sigma(X_s, 0\le s\le t)$, for each $t\ge0$). Of course $L_K$ is in general not a stopping time.

Given $U \subseteq E$, the transition density killed outside $U$ is

(1.46) $r_{t,U}(x,y) = P_x[X_t = y,\ t < T_U]\,\dfrac{1}{\lambda_y}$, for $t\ge0$, $x,y\in E$,

and the corresponding Green density (killed outside $U$) is

(1.47) $g_U(x,y) = \displaystyle\int_0^\infty r_{t,U}(x,y)\, dt$, for $x,y\in E$.

Remark 1.5. 1) When $U$ is a connected (non-empty) subgraph of the graph in (1.3), $r_{t,U}(x,y)$, $t\ge0$, $x,y\in U$, and $g_U(x,y)$, $x,y\in U$, simply correspond to the transition density and the Green function in (1.22), (1.26), when one chooses on $U$

- the weights $c_{x,y}$, $x,y\in U$ (i.e. restriction to $U\times U$ of the weights on $E$),
- the killing measure $\tilde\kappa_x = \kappa_x + \sum\limits_{y\in E\setminus U} c_{x,y}$, $x\in U$.

2) When $U$ is not connected the above remark applies to each connected component of $U$, and $r_{t,U}(x,y)$ and $g_U(x,y)$ vanish when $x,y$ belong to different connected components of $U$. □

Proposition 1.6. ($U \subseteq E$, $A = E\setminus U$)

(1.48) $g_U(x,y) = g_U(y,x)$, for $x,y\in E$.

(1.49) $g(x,y) = g_U(x,y) + E_x[H_A < \infty,\ g(X_{H_A}, y)]$, for $x,y\in E$.

(1.50) $E_x[H_A < \infty,\ g(X_{H_A}, y)] = E_y[H_A < \infty,\ g(X_{H_A}, x)]$, for $x,y\in E$ (Hunt's switching identity).

Proof. • (1.48): This is a direct consequence of the above remark and (1.27).

• (1.49):

$g(x,y) \overset{(1.26)}{=} E_x\Big[\displaystyle\int_0^\infty 1\{X_t = y\}\, dt\Big]\frac{1}{\lambda_y} = E_x\Big[\int_0^\infty 1\{X_t = y,\ t < T_U\}\, dt\Big]\frac{1}{\lambda_y} + E_x\Big[H_A < \infty,\ \int_{H_A}^\infty 1\{X_t = y\}\, dt\Big]\frac{1}{\lambda_y}$,

and by the strong Markov property at time $H_A$, together with (1.46), (1.47), the two terms equal $g_U(x,y)$ and $E_x[H_A<\infty,\ g(X_{H_A}, y)]$, respectively.

• (1.50): This follows from (1.48), (1.49) and the fact that $g(\cdot,\cdot)$ is symmetric, cf. (1.27). □

Example: Consider $x_0\in E$. By (1.49) we find that for $x\in E$ (with $A = \{x_0\}$, $U = E\setminus\{x_0\}$):

$g(x, x_0) = 0 + P_x[H_{x_0} < \infty]\, g(x_0, x_0)$,

writing $H_{x_0}$ for $H_{\{x_0\}}$, so that

(1.51) $P_x[H_{x_0} < \infty] = \dfrac{g(x, x_0)}{g(x_0, x_0)}$, for $x\in E$.

A second application of (1.49) now yields (with $U = E\setminus\{x_0\}$)

(1.52) $g_U(x,y) = g(x,y) - \dfrac{g(x,x_0)\, g(x_0,y)}{g(x_0,x_0)}$, for $x,y\in E$. □
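The identities (1.51), (1.52) are easy to check numerically. The sketch below (an added illustration, same arbitrary toy data as in the earlier sketches) computes $P_x[H_{x_0}<\infty]$ by one-step analysis of the discrete skeleton, compares it with the ratio in (1.51), and verifies (1.52) against the Green function of the killed network of Remark 1.5.

    import numpy as np

    # Toy data as in the earlier sketches (assumed, arbitrary).
    n = 4
    c = np.zeros((n, n))
    for x in range(n - 1):
        c[x, x + 1] = c[x + 1, x] = 1.0
    kappa = np.array([0.5, 0.0, 0.0, 0.3])
    lam = c.sum(axis=1) + kappa
    P = c / lam[:, None]
    G = np.linalg.inv(np.diag(lam) - c)      # (1.37)

    x0 = 2
    # h(x) = P_x[H_{x0} < infty] solves h = P h off x0, with one-step entry into x0
    # (the continuous chain hits x0 exactly when its discrete skeleton does).
    idx = [x for x in range(n) if x != x0]
    A = np.eye(n - 1) - P[np.ix_(idx, idx)]
    b = P[idx, x0]
    h = np.linalg.solve(A, b)
    print("linear system:", h)
    print("ratio (1.51) :", G[idx, x0] / G[x0, x0])

    # (1.52): Green density killed outside U = E \ {x0}, versus Remark 1.5
    G_U = G - np.outer(G[:, x0], G[x0, :]) / G[x0, x0]
    G_U_direct = np.zeros((n, n))
    G_U_direct[np.ix_(idx, idx)] = np.linalg.inv(np.diag(lam[idx]) - c[np.ix_(idx, idx)])
    assert np.allclose(G_U, G_U_direct)

Note that the rows and columns of $G_U$ indexed by $x_0$ vanish, as they should.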

Given $A \subseteq E$, we introduce the equilibrium measure of $A$:

(1.53) $e_A(x) = P_x[\tilde H_A = \infty]\, 1_A(x)\,\lambda_x$, $x\in E$.

Its total mass is called the capacity of $A$ (or the conductance of $A$):

(1.54) $\mathrm{cap}(A) = \sum\limits_{x\in A} P_x[\tilde H_A = \infty]\,\lambda_x$.

Remark 1.7. As we will see below in the case of $A = E$, the terminology in (1.53) is consistent with the terminology in (1.40). There is an interpretation of the weights $(c_{x,y})$ and the killing measure $(\kappa_x)$ on $E$ as an electric network, grounding $E$ at the cemetery point $\Delta$, which is implicit in the use of the above terms, see for instance Doyle-Snell [3]. □

Before turning to the next proposition, we simply recall that given $A \subseteq E$, by our convention in (1.45)

$\{H_A < \infty\} = \{L_A > 0\}$ = the set of trajectories that enter $A$.

Also given a measure $\nu$ on $E$, we write

(1.55) $P_\nu = \sum\limits_{x\in E} \nu_x\, P_x$ and $E_\nu$ for the $P_\nu$-integral (or “expectation”).

Proposition 1.8. ($A \subseteq E$)

(1.56) $P_x[L_A > 0,\ X_{L_A^-} = y] = g(x,y)\, e_A(y)$, for $x,y\in E$

($X_{L_A^-}$ is the position of $X_\cdot$ at the last visit to $A$, when $L_A > 0$).

(1.57) $h_A(x) \overset{\text{def}}{=} P_x[H_A < \infty] = P_x[L_A > 0] = G e_A(x)$, for $x\in E$

(the equilibrium potential of $A$). When $A \ne \emptyset$,

(1.58) $e_A$ is the unique measure $\nu$ supported on $A$ such that $G\nu = 1$ on $A$.

Let $A \subseteq B \subseteq E$; then under $P_{e_B}$ the entrance “distribution” in $A$ and the last exit “distribution” of $A$ coincide with $e_A$:

(1.59) $P_{e_B}[H_A < \infty,\ X_{H_A} = y] = P_{e_B}[L_A > 0,\ X_{L_A^-} = y] = e_A(y)$, for $y\in E$.

(1.60) In particular when $B = E$, under $P_\kappa$, the entrance distribution in $A$ and the exit distribution of $A$ coincide with $e_A$.

Proof. • (1.56): Both members vanish when $y\notin A$. We thus assume $y\in A$. Using the discrete-time Markov chain $Z_n$, $n\ge0$ (see (1.15)), we can write:

$P_x[L_A > 0,\ X_{L_A^-} = y] = P_x\Big[\bigcup\limits_{n\ge0}\{Z_n = y, \text{ and for all } k > n,\ Z_k \notin A\}\Big]$ (pairwise disjoint events)

$= \sum\limits_{n\ge0} P_x[Z_n = y, \text{ and for all } k > n,\ Z_k\notin A]$ (Markov property)

$= \sum\limits_{n\ge0} P_x[Z_n = y]\ P_y[\text{for all } k > 0,\ Z_k\notin A]$ (Fubini)

$= E_x\Big[\sum\limits_{n\ge0} 1\{Z_n = y\}\Big]\, P_y[\tilde H_A = \infty] \overset{(1.26)}{=} \lambda_y\, g(x,y)\, P_y[\tilde H_A = \infty] \overset{(1.53)}{=} g(x,y)\, e_A(y)$.

This proves (1.56).

• (1.57): Summing (1.56) over $y\in A$, we obtain

$P_x[H_A < \infty] = P_x[L_A > 0] = \sum\limits_{y\in A} g(x,y)\, e_A(y) \overset{(1.33)}{=} G e_A(x)$, whence (1.57).

• (1.58): Note that $e_A$ is supported on $A$ and $Ge_A = 1$ on $A$ by (1.57). If $\nu$ is another such measure and $\mu = \nu - e_A$, then

$\langle \mu, G\mu\rangle = 0$,

because $G\mu = 0$ on $A$, and $\mu$ is supported on $A$. By (1.35) it follows that $\mu = 0$, whence (1.58).

• (1.59), (1.60): By (1.50) (Hunt's switching identity): for $y\in E$,

$E_{e_B}[H_A < \infty,\ g(X_{H_A}, y)] = E_y[H_A < \infty,\ (Ge_B)(X_{H_A})] \overset{(1.58),\ A\subseteq B}{=} P_y[H_A < \infty]$.

Denoting by $\mu$ the entrance distribution of $X_\cdot$ in $A$ under $P_{e_B}$:

$\mu_x = P_{e_B}[H_A < \infty,\ X_{H_A} = x]$, $x\in E$,

we see by the above identity and (1.57) that $G\mu(y) = Ge_A(y)$, for all $y\in E$, and by applying $-L$ to both sides, $\mu = e_A$. As for the last exit distribution of $X_\cdot$ from $A$ under $P_{e_B}$, integrating over $e_B$ in (1.56), we find:

$P_{e_B}[L_A > 0,\ X_{L_A^-} = y] = \sum\limits_{x\in E} e_B(x)\, g(x,y)\, e_A(y) \overset{(1.57),\ A\subseteq B}{=} e_A(y)$, for $y\in E$.

This completes the proof of (1.59). In the special case $B = E$, we know by (1.40), (1.58) that $e_B = \kappa$ and (1.60) follows. □
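The objects of Proposition 1.8 translate directly into small linear-algebra computations. The added sketch below (same assumed toy data as before) solves (1.58) for $e_A$, checks that the equilibrium potential (1.57) equals 1 on $A$, computes the capacity (1.54), and recovers $\kappa$ as the equilibrium measure of $E$, cf. (1.40).

    import numpy as np

    # Toy data as in the earlier sketches (assumed, arbitrary).
    n = 4
    c = np.zeros((n, n))
    for x in range(n - 1):
        c[x, x + 1] = c[x + 1, x] = 1.0
    kappa = np.array([0.5, 0.0, 0.0, 0.3])
    lam = c.sum(axis=1) + kappa
    G = np.linalg.inv(np.diag(lam) - c)                  # (1.37)

    A = [0, 2]                                           # an arbitrary subset of E
    # (1.58): e_A is supported on A and solves G e_A = 1 on A.
    eA = np.zeros(n)
    eA[A] = np.linalg.solve(G[np.ix_(A, A)], np.ones(len(A)))
    print("equilibrium measure e_A:", eA)
    print("capacity cap(A) =", eA.sum())                 # (1.54)

    # (1.57): h_A = G e_A is the equilibrium potential, equal to 1 on A.
    hA = G @ eA
    assert np.allclose(hA[A], 1.0)

    # (1.40): for A = E the equilibrium measure is the killing measure kappa.
    assert np.allclose(np.linalg.solve(G, np.ones(n)), kappa)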

We now provide two variational problems for the capacity, where the equilibrium measure and the equilibrium potential appear. These characterizations are, of course, strongly flavored by the previously mentioned analogy with electric networks (we refer to Remark 1.7).

Proposition 1.9. ($A \subseteq E$)

(1.61) $\mathrm{cap}(A) = \big(\inf\{E(\nu,\nu);\ \nu \text{ probability supported on } A\}\big)^{-1}$,

and when $A \ne \emptyset$, the infimum is uniquely attained at $\bar e_A = e_A/\mathrm{cap}(A)$, the normalized equilibrium measure of $A$.

(1.62) $\mathrm{cap}(A) = \inf\{\mathcal{E}(f,f);\ f \ge 1 \text{ on } A\}$,

and the infimum is uniquely attained at $h_A$, the equilibrium potential of $A$.

Proof. • (1.61): When $A = \emptyset$, both members of (1.61) vanish and there is nothing to prove. We thus assume $A \ne \emptyset$ and consider a probability measure $\nu$ supported on $A$. By (1.35), we have

$0 \le E(\nu,\nu) = E(\nu - \bar e_A + \bar e_A,\ \nu - \bar e_A + \bar e_A) = E(\bar e_A, \bar e_A) + 2\,E(\nu - \bar e_A,\ \bar e_A) + E(\nu - \bar e_A,\ \nu - \bar e_A)$.

The last term is non-negative, by (1.35) it only vanishes when $\nu = \bar e_A$, and

$E(\nu - \bar e_A,\ \bar e_A) = \sum\limits_{x\in E} (\nu - \bar e_A)(x)\underbrace{\sum\limits_{y\in E} g(x,y)\,\bar e_A(y)}_{\overset{(1.58)}{=}\ \frac{1}{\mathrm{cap}(A)}\ \text{on } A} = \frac{1 - 1}{\mathrm{cap}(A)} = 0$.

We thus find that $E(\nu,\nu)$ becomes (uniquely) minimal at $\nu = \bar e_A$, with value

$E(\bar e_A, \bar e_A) = \dfrac{1}{\mathrm{cap}(A)^2} \sum\limits_{x,y\in E} e_A(x)\, g(x,y)\, e_A(y) \overset{(1.58)}{=} \dfrac{1}{\mathrm{cap}(A)^2}\ e_A(A) = \dfrac{1}{\mathrm{cap}(A)}$.

This proves (1.61).

• (1.62): We consider $f: E\to\mathbb{R}$ such that $f \ge 1_A$, and $h_A = Ge_A$, so that

$h_A(x) = P_x[H_A < \infty] = 1$, for $x\in A$.

We have

$\mathcal{E}(f,f) = \mathcal{E}(f - h_A + h_A,\ f - h_A + h_A) = \mathcal{E}(h_A, h_A) + 2\,\mathcal{E}(f - h_A,\ h_A) + \mathcal{E}(f - h_A,\ f - h_A)$.

Again, the last term is non-negative and only vanishes when $f = h_A$, see (1.39). Moreover, we have

$\mathcal{E}(f - h_A,\ h_A) = \mathcal{E}(f - h_A,\ Ge_A) \overset{(1.38)}{=} \langle e_A,\ f - h_A\rangle \ge 0$,

since $h_A = 1$ on $A$, $f \ge 1$ on $A$, and $e_A$ is supported on $A$. So the right-hand side of (1.62) equals

$\mathcal{E}(h_A, h_A) = \mathcal{E}(Ge_A,\ h_A) \overset{(1.38)}{=} \langle e_A,\ h_A\rangle = e_A(A) = \mathrm{cap}(A)$.

This proves (1.62). □

Orthogonal decomposition, trace Dirichlet form:

We consider $U \subseteq E$ and set $K = E\setminus U$. Our aim is to describe the orthogonal complement, relative to the Dirichlet form $\mathcal{E}(\cdot,\cdot)$, of the space of functions supported in $U$:

(1.63) $\mathcal{F}_U = \{\varphi: E\to\mathbb{R};\ \varphi(x) = 0, \text{ for all } x\in K\}$.

To this end we introduce the space of functions harmonic in $U$:

(1.64) $\mathcal{H}_U = \{h: E\to\mathbb{R};\ Ph(x) = h(x), \text{ for all } x\in U\}$,

as well as the space of potentials of (signed) measures supported on $K$:

(1.65) $\mathcal{G}_K = \{f: E\to\mathbb{R};\ f = G\nu, \text{ for some } \nu \text{ supported on } K\}$.

Recall that $\mathcal{E}(\cdot,\cdot)$ is a positive definite quadratic form on the space $\mathbb{F}$ of functions from $E$ to $\mathbb{R}$ (see above (1.11)).

Proposition 1.10. (orthogonal decomposition)

(1.66) $\mathcal{H}_U = \mathcal{G}_K$.

(1.67) $\mathbb{F} = \mathcal{F}_U \oplus \mathcal{H}_U$, where $\mathcal{F}_U$ and $\mathcal{H}_U$ are orthogonal, relative to $\mathcal{E}(\cdot,\cdot)$.

Proof. • (1.66): We first show that $\mathcal{H}_U \subseteq \mathcal{G}_K$. Indeed when $h\in\mathcal{H}_U$, $h \overset{(1.37)}{=} G(-L)h = G\nu$, where $\nu = -Lh$ is supported on $K$ by (1.64) and (1.41). Hence $\mathcal{H}_U \subseteq \mathcal{G}_K$.

To prove the reverse inclusion we consider $\nu$ supported on $K$. Set $h = G\nu$. By (1.37) we know that $-Lh = -LG\nu = \nu$, so that $Lh$ vanishes on $U$. It follows from (1.41) that $h\in\mathcal{H}_U$, and (1.66) is proved. Incidentally, note that choosing $A = K$ in (1.49), we can multiply both sides of (1.49) by $\nu_y$ and sum over $y$. The first term in the right-hand side vanishes and we then see that $h = G\nu$ satisfies

(1.68) $h(x) = E_x[H_K < \infty,\ h(X_{H_K})]$, for $x\in E$.

• (1.67): We first note that when $\varphi\in\mathcal{F}_U$ and $\nu$ is supported on $K$,

$\mathcal{E}(G\nu, \varphi) \overset{(1.38)}{=} \langle \nu, \varphi\rangle = 0$.

So the spaces $\mathcal{F}_U$ and $\mathcal{H}_U$ are orthogonal under $\mathcal{E}(\cdot,\cdot)$. In addition, given $f$ from $E$ to $\mathbb{R}$, we can define

(1.69) $h(x) = E_x[H_K < \infty,\ f(X_{H_K})]$, for $x\in E$,

and note that $h(x) = f(x)$, when $x\in K$, and that, by the same argument as above, $h$ is harmonic in $U$. If we now define $\varphi = f - h$, we see that $\varphi$ vanishes on $K$, and hence

(1.70) $f = \varphi + h$, with $\varphi\in\mathcal{F}_U$ and $h\in\mathcal{H}_U$,

is the orthogonal decomposition of $f$. This proves (1.67). □

As we now explain, the restriction of the Dirichlet form to the space $\mathcal{H}_U = \mathcal{G}_K$, see (1.64) - (1.66), gives rise to a new Dirichlet form on the space of functions from $K$ to $\mathbb{R}$, the so-called trace form. Given $f: K\to\mathbb{R}$, we also write $f$ for the function on $E$ that agrees with $f$ on $K$ and vanishes on $U$, when no confusion arises. Note that

(1.71) $\tilde f(x) = E_x[H_K < \infty,\ f(X_{H_K})]$, for $x\in E$,

is the unique function on $E$, harmonic in $U$, that agrees with $f$ on $K$, cf. (1.67). Indeed the decomposition (1.67) applied to the case of the function equal to $f$ on $K$, and to 0 on $U$, shows the existence of a function in $\mathcal{H}_U$ equal to $f$ on $K$. By (1.68) and (1.66), it is necessarily equal to $\tilde f$. We then define for $f: K\to\mathbb{R}$ the trace form

(1.72) $\mathcal{E}^*(f,f) = \mathcal{E}(\tilde f, \tilde f) \overset{(1.69),(1.70)}{=} \inf\{\mathcal{E}(g,g);\ g: E\to\mathbb{R} \text{ coincides with } f \text{ on } K\}$,

where we used in the second equality the fact that when $g$ coincides with $f$ on $K$, then $g = \varphi + \tilde f$, with $\varphi\in\mathcal{F}_U$, and hence $\mathcal{E}(g,g) \ge \mathcal{E}(\tilde f, \tilde f)$ due to (1.67). We naturally extend this definition for $f,g: K\to\mathbb{R}$, by setting

(1.73) $\mathcal{E}^*(f,g) = \mathcal{E}(\tilde f, \tilde g)$.

It is plain that $\mathcal{E}^*$ is a symmetric bilinear form on the space of functions from $K$ to $\mathbb{R}$. As we now explain, $\mathcal{E}^*$ does indeed correspond to a Dirichlet form on $K$ induced by some (uniquely defined in view of (1.10)) non-negative weights and killing measure.

Proposition 1.11. ($K \ne \emptyset$)

The quantities defined by

(1.74) $c^*_{x,y} = \lambda_x\, P_x[\tilde H_K < \infty,\ X_{\tilde H_K} = y]$, for $x\ne y$ in $K$, and $c^*_{x,y} = 0$, for $x = y$ in $K$,

(1.75) $\kappa^*_x = \lambda_x\, P_x[\tilde H_K = \infty]$, for $x\in K$,

(1.76) $\lambda^*_x = \lambda_x\,(1 - P_x[\tilde H_K < \infty,\ X_{\tilde H_K} = x])$, for $x\in K$,

satisfy (1.2) - (1.5), (1.7), with $E$ replaced by $K$ (in particular $c^*_{x,y} = c^*_{y,x}$). The corresponding Dirichlet form coincides with $\mathcal{E}^*$, i.e.

(1.77) $\mathcal{E}^*(f,f) = \frac12 \sum\limits_{x,y\in K} c^*_{x,y}\,\big(f(y) - f(x)\big)^2 + \sum\limits_{x\in K} \kappa^*_x\, f^2(x)$, for $f: K\to\mathbb{R}$.

The corresponding Green function $g^*(x,y)$, $x,y$ in $K$, satisfies

(1.78) $g^*(x,y) = g(x,y)$, for $x,y\in K$.

Proof. We first prove that

(1.79) the quantities in (1.74) - (1.76) satisfy (1.2) - (1.5), (1.7), with $E$ replaced by $K$.

To this end we note that for $x\ne y$ in $K$,

(1.80) $-\mathcal{E}^*(1_x, 1_y) \overset{(1.73)}{=} -\mathcal{E}(\widetilde{1_x}, \widetilde{1_y}) \overset{(1.67)}{=} -\mathcal{E}(1_x, \widetilde{1_y}) \overset{(1.44)}{=} L\widetilde{1_y}(x) = \lambda_x \sum\limits_z p_{x,z}\,\widetilde{1_y}(z) \overset{(1.71)}{=} \lambda_x\, P_x[\tilde H_K < \infty,\ X_{\tilde H_K} = y] = c^*_{x,y}$,

using $\widetilde{1_y}(x) = 0$ and, in the last step, the simple Markov property at the first jump. A similar computation gives, for $x$ in $K$,

(1.81) $\mathcal{E}^*(1_x, 1_x) = \lambda_x\,\big(1 - P_x[\tilde H_K < \infty,\ X_{\tilde H_K} = x]\big) \overset{(1.76)}{=} \lambda^*_x$.

In particular $c^*_{x,y} = c^*_{y,x} \ge 0$ by the symmetry of $\mathcal{E}^*$, and since $P_x[\tilde H_K < \infty,\ X_{\tilde H_K} \ne x] + P_x[\tilde H_K = \infty] = 1 - P_x[\tilde H_K < \infty,\ X_{\tilde H_K} = x]$,

(1.82) $\lambda^*_x = \sum\limits_{y\in K} c^*_{x,y} + \kappa^*_x$, for $x\in K$.

Moreover, when $x\ne y$ in $K$, moving along a path in $E$ joining $x$ to $y$ (which exists by (1.3)) and using the strong Markov property, one sees that $x$ and $y$ are connected by a chain of points of $K$ with positive $c^*$-weights, so that $K$, endowed with the edge set consisting of the pairs $\{x,y\}$ such that $c^*_{x,y} > 0$, is a connected graph.

Further we know that for all $x$ in $E$, $P_x$-a.s., the continuous-time chain on $E$ reaches the cemetery state $\Delta$ after a finite time. As a result $P_y[\tilde H_K = \infty] > 0$, for at least one $y$ in $K$, since otherwise the chain starting from any $x$ in $K$ would a.s. never reach $\Delta$. By (1.75) we thus see that $\kappa^*$ does not vanish everywhere on $K$. In addition (1.7) holds by (1.82). We have thus proved (1.79).

• (1.77): Expanding the square in the first sum in the right-hand side of (1.77), we see, using the symmetry of $c^*_{x,y}$, (1.82), and the second part of (1.74), that the right-hand side of (1.77) equals

$\sum\limits_{x\in K} \lambda^*_x\, f^2(x) - \sum\limits_{x\ne y \text{ in } K} c^*_{x,y}\, f(x)\, f(y) \overset{(1.80),(1.81)}{=} \sum\limits_{x\in K} \mathcal{E}^*(1_x,1_x)\, f^2(x) + \sum\limits_{x\ne y \text{ in } K} \mathcal{E}^*(1_x,1_y)\, f(x)\, f(y) = \mathcal{E}^*(f,f)$,

and this proves (1.77).

• (1.78): Consider $x\in K$ and $\psi_x$ the restriction to $K$ of $g(x,\cdot)$. By (1.66) we see that $g(x,\cdot) = G1_x(\cdot) = \widetilde{\psi_x}(\cdot)$, and therefore for any $y\in K$ we have

$\mathcal{E}^*(\psi_x, 1_y) \overset{(1.73)}{=} \mathcal{E}(\widetilde{\psi_x}, \widetilde{1_y}) \overset{(1.66),(1.67)}{=} \mathcal{E}(G1_x, 1_y) \overset{(1.38)}{=} 1\{x = y\} \overset{(1.38)}{=} \mathcal{E}^*(\psi^*_x, 1_y)$, if $\psi^*_x(\cdot) = g^*(x,\cdot)$.

It follows that $\psi^*_x = \psi_x$ for any $x$ in $K$, and this proves (1.78). □

Remark 1.12. 1) The trace form, with its expressions (1.72), (1.77), is intimately related to the notion of network reduction, or electrical network equivalence, see [1], p. 56.

2) When $K \subseteq K' \subseteq E$ are non-empty subsets of $E$, the trace form on $K$ of the trace form on $K'$ of $\mathcal{E}$ coincides with the trace form on $K$ of $\mathcal{E}$. Indeed this follows for instance by (1.78) and the fact that the Green function determines the Dirichlet form, see (1.37), (1.10). This feature is referred to as the “tower property” of traces. □
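In matrix terms, the variational characterization (1.72) identifies the matrix of $\mathcal{E}^*$ with a Schur complement of the energy matrix; this is one standard linear-algebra rendering of network reduction (an added sketch under the same toy-data assumptions as before; the Schur-complement route is a reformulation for the example, not the notes' derivation). The sketch also checks (1.74) - (1.76) and (1.78).

    import numpy as np

    # Toy data as in the earlier sketches (assumed, arbitrary).
    n = 4
    c = np.zeros((n, n))
    for x in range(n - 1):
        c[x, x + 1] = c[x + 1, x] = 1.0
    kappa = np.array([0.5, 0.0, 0.0, 0.3])
    lam = c.sum(axis=1) + kappa
    A = np.diag(lam) - c                      # matrix of E(.,.), cf. (1.10), (1.44)
    G = np.linalg.inv(A)                      # (1.37)

    K, U = [0, 3], [1, 2]                     # an arbitrary splitting E = K u U

    # Minimizing E(g,g) over extensions g of f, cf. (1.72), yields the
    # Schur complement of A_UU, i.e. the matrix of the trace form E^*.
    Astar = A[np.ix_(K, K)] - A[np.ix_(K, U)] @ np.linalg.solve(A[np.ix_(U, U)], A[np.ix_(U, K)])

    c_star = -Astar.copy(); np.fill_diagonal(c_star, 0.0)   # c^*_{x,y}, (1.74)
    lam_star = np.diag(Astar)                               # lambda^*_x, (1.76)
    kappa_star = Astar.sum(axis=1)                          # kappa^*_x via (1.82)
    assert (c_star >= -1e-12).all() and (kappa_star >= -1e-12).all()

    # (1.78): the Green function of the reduced network is g restricted to K x K.
    assert np.allclose(np.linalg.inv(Astar), G[np.ix_(K, K)])
    print("c* =\n", c_star, "\nkappa* =", kappa_star)

The last assertion is the block-inverse identity behind (1.78): the inverse of the Schur complement is the corresponding submatrix of the inverse.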

1.4 Feynman-Kac formula

Given a function $V: E\to\mathbb{R}$, we can also view $V$ as a multiplication operator:

(1.83) $(Vf)(x) = V(x)\, f(x)$, for $f: E\to\mathbb{R}$.

In this short section we recall a celebrated probabilistic representation formula for the operator $e^{t(P-I+V)}$, when $t\ge0$. We recall the convention stated above (1.18).

Theorem 1.13. (Feynman-Kac formula)

For $V, f: E\to\mathbb{R}$, $t\ge0$, one has

(1.84) $E_x\Big[f(X_t)\,\exp\Big\{\displaystyle\int_0^t V(X_s)\, ds\Big\}\Big] = \big(e^{t(P-I+V)} f\big)(x)$, for $x\in E$

(the case V =0 corresponds to (1.18)).

Proof. We denote by $S_t f(x)$ the left-hand side of (1.84). By the Markov property we see that for $t,s\ge0$,

$S_{t+s} f(x) = E_x\Big[f(X_{t+s})\,\exp\Big\{\displaystyle\int_0^{t+s} V(X_u)\, du\Big\}\Big]$

$= E_x\Big[\exp\Big\{\displaystyle\int_0^t V(X_u)\, du\Big\}\ \Big(\exp\Big\{\int_0^s V(X_u)\, du\Big\}\, f(X_s)\Big)\circ\theta_t\Big]$

$= E_x\Big[\exp\Big\{\displaystyle\int_0^t V(X_u)\, du\Big\}\ E_{X_t}\Big[\exp\Big\{\int_0^s V(X_u)\, du\Big\}\, f(X_s)\Big]\Big]$

$= E_x\Big[\exp\Big\{\displaystyle\int_0^t V(X_u)\, du\Big\}\ S_s f(X_t)\Big] = S_t(S_s f)(x) = (S_t S_s) f(x)$.

In other words $S_t$, $t\ge0$, has the semi-group property

(1.85) $S_{t+s} = S_t\, S_s$, for $t,s\ge0$.

Moreover, observe that

$\dfrac1t\,(S_t f - f)(x) = \dfrac1t\, E_x\Big[f(X_t)\,\exp\Big\{\displaystyle\int_0^t V(X_s)\, ds\Big\} - f(X_0)\Big]$

$= \dfrac1t\, E_x[f(X_t) - f(X_0)] + \dfrac1t\, E_x\Big[f(X_t)\displaystyle\int_0^t V(X_s)\, e^{\int_0^s V(X_u)\, du}\, ds\Big]$,

and as $t\to0$,

$\dfrac1t\, E_x[f(X_t) - f(X_0)] \to (P - I)\, f(x)$, by (1.18),

whereas by dominated convergence

$\dfrac1t\, E_x\Big[f(X_t)\displaystyle\int_0^t V(X_s)\, e^{\int_0^s V(X_u)\, du}\, ds\Big] \to E_x[f(X_0)\, V(X_0)] = Vf(x)$.

So we see that

(1.86) $\dfrac1t\,(S_t f - f)(x) \underset{t\to0}{\longrightarrow} (P - I + V)\, f(x)$.

Then considering $S_{t+h} f(x) - S_t f(x) = (S_h - I)\, S_t f(x)$, with $h > 0$ small, as well as (when $t > 0$ and $0 < h < t$)

$S_{t-h} f(x) - S_t f(x) = -(S_h - I)\, S_{t-h} f(x)$,

one sees that (using in the second case that $\sup_{u\le t} |S_u f(x)| \le e^{t\|V\|_\infty}\,\|f\|_\infty$) the function $t\ge0 \to S_t f(x)$ is continuous. Now dividing by $h$ and letting $h\to0$, we find that

(1.87) $t\ge0 \to S_t f(x)$ is continuously differentiable with derivative $(P - I + V)\, S_t f(x)$.

It now follows that the function

$s\in[0,t] \to F(s) = e^{(t-s)(P-I+V)}\, S_s f(x)$

is continuously differentiable on $[0,t]$, with derivative

$F'(s) = -e^{(t-s)(P-I+V)}\,(P-I+V)\, S_s f(x) + e^{(t-s)(P-I+V)}\,(P-I+V)\, S_s f(x) = 0$.

We thus find that $F(0) = F(t)$, so that $e^{t(P-I+V)} f(x) = S_t f(x)$. This proves (1.84). □
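The Feynman-Kac formula (1.84) lends itself to a direct Monte Carlo check. Below is an added sketch (toy data as in the earlier sketches; scipy is assumed available for the matrix exponential): the left-hand side of (1.84) is estimated by simulating $X_\cdot$, the right-hand side is computed exactly.

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(1)

    # Toy data as in the earlier sketches (assumed, arbitrary).
    n = 4
    c = np.zeros((n, n))
    for x in range(n - 1):
        c[x, x + 1] = c[x + 1, x] = 1.0
    kappa = np.array([0.5, 0.0, 0.0, 0.3])
    lam = c.sum(axis=1) + kappa
    p_full = np.hstack([c / lam[:, None], (kappa / lam)[:, None]])  # with Delta column

    V = np.array([0.3, -0.2, 0.1, 0.0])
    f = np.array([1.0, 2.0, -1.0, 0.5])
    t, x0, trials = 1.5, 0, 50_000

    # Right-hand side of (1.84); V and f vanish at Delta by convention.
    Pmat = c / lam[:, None]
    rhs = (expm(t * (Pmat - np.eye(n) + np.diag(V))) @ f)[x0]

    # Left-hand side: simulate X. (unit-rate holding times, skeleton Z_n).
    acc = 0.0
    for _ in range(trials):
        z, s, integral = x0, 0.0, 0.0
        while True:
            hold = rng.exponential(1.0)
            if s + hold >= t:
                integral += (t - s) * V[z]
                acc += f[z] * np.exp(integral)
                break
            integral += hold * V[z]
            s += hold
            z = rng.choice(n + 1, p=p_full[z])
            if z == n:           # absorbed in Delta: f(Delta) = 0, contributes 0
                break
    print("Monte Carlo:", acc / trials, " matrix exp:", rhs)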

1.5 Local times

In this short section we define the local time of the Markov chain $X_t$, $t\ge0$, and discuss some of its basic properties. The local time of $X_\cdot$ at site $x\in E$, and time $t\ge0$, is defined as

(1.88) $L^x_t = \displaystyle\int_0^t 1\{X_s = x\}\, ds\ \frac{1}{\lambda_x}$.

Note that the normalization is different from (0.2) (we have not yet introduced $\bar X_t$, $t\ge0$). We extend (1.88) to the case $x = \Delta$ (cemetery point) with the convention

(1.89) $\lambda_\Delta = 1$, $L^\Delta_t = \displaystyle\int_0^t 1\{X_s = \Delta\}\, ds$, for $t\ge0$.

By direct inspection of (1.88) we see that for $x\in E$, $t\in[0,\infty) \to L^x_t \in[0,\infty)$ is a continuous non-decreasing function with a finite limit $L^x_\infty$ (because $X_t = \Delta$ for $t$ large enough). We record in the next proposition a few simple properties of the local time.

Proposition 1.14.

(1.90) $E_x[L^y_\infty] = g(x,y)$, for $x,y\in E$.

(1.91) $E_x[L^y_{T_U}] = g_U(x,y)$, for $x,y\in E$, $U \subseteq E$.

(1.92) $\sum\limits_{x\in E\cup\{\Delta\}} V(x)\, L^x_t = \displaystyle\int_0^t \frac{V}{\lambda}\,(X_s)\, ds$, for $t\ge0$, $V: E\cup\{\Delta\}\to\mathbb{R}$.

(1.93) $L^x_t\circ\theta_s + L^x_s = L^x_{t+s}$, for $x\in E$, $s\ge0$ (additive functional property).

Proof. • (1.90):

$E_x[L^y_\infty] = E_x\Big[\displaystyle\int_0^\infty 1\{X_t = y\}\,\frac{dt}{\lambda_y}\Big] \overset{(1.26)}{=} g(x,y)$.

• (1.91): Analogous argument to (1.90), cf. (1.45), (1.47), and Remark 1.5.

• (1.92):

$\sum\limits_{x\in E\cup\{\Delta\}} V(x)\, L^x_t = \sum\limits_{x\in E\cup\{\Delta\}} V(x)\displaystyle\int_0^t 1\{X_s = x\}\,\frac{ds}{\lambda_x} = \int_0^t \sum\limits_{x\in E\cup\{\Delta\}} \frac{V(x)}{\lambda_x}\, 1\{X_s = x\}\, ds = \int_0^t \frac{V}{\lambda}\,(X_s)\, ds$.

• (1.93): Note that $\int_0^{t+s} V(X_u)\, du = \big(\int_0^t V(X_u)\, du\big)\circ\theta_s + \int_0^s V(X_u)\, du$, and apply this identity with $V(\cdot) = \frac{1}{\lambda_x}\, 1_x(\cdot)$. □

1.6 The Markov chain $\bar X_\cdot$ (with variable jump rate)

Using time change, we construct in this section the Markov chain $\bar X_\cdot$ with the same discrete skeleton $Z_n$, $n\ge0$, as $X_\cdot$, but with variable jump rate $\lambda_x$, $x\in E\cup\{\Delta\}$. We describe the transition semi-group attached to $\bar X_\cdot$, relate the local times of $X_\cdot$ and $\bar X_\cdot$, and briefly discuss the Feynman-Kac formula for $\bar X_\cdot$. As a last topic, we explain how the trace process of $\bar X_\cdot$ on a subset $K$ of $E$ is related to the trace Dirichlet form introduced in Section 1.3. We define

(1.94) $L_t = \sum\limits_{x\in E\cup\{\Delta\}} L^x_t = \displaystyle\int_0^t \lambda^{-1}_{X_s}\, ds$, $t\ge0$,

so that $t\in\mathbb{R}_+ \to L_t\in\mathbb{R}_+$ is a continuous, strictly increasing, piecewise differentiable function, tending to $\infty$. In particular it is an increasing bijection of $\mathbb{R}_+$, and using the formula for the derivative of the inverse one can write for the inverse function of $L_\cdot$:

(1.95) $\tau_u = \inf\{t\ge0;\ L_t \ge u\} = \displaystyle\int_0^u \lambda_{X_{\tau_v}}\, dv = \int_0^u \lambda_{\bar X_v}\, dv$,

where we have introduced the time changed process (with values in $E\cup\{\Delta\}$)

(1.96) $\bar X_u \overset{\text{def}}{=} X_{\tau_u}$, for $u\ge0$

(the path of $\bar X_\cdot$ thus belongs to $D_E$, cf. above (1.17)). We also introduce the local times of $\bar X_\cdot$ (note that the normalization is different from (1.88), but in agreement with (0.2)):

(1.97) $\bar L^x_u \overset{\text{def}}{=} \displaystyle\int_0^u 1\{\bar X_v = x\}\, dv$, for $u\ge0$, $x\in E\cup\{\Delta\}$.

Proposition 1.15. $\bar X_u$, $u\ge0$, is a Markov chain with cemetery state $\Delta$ and sub-Markovian transition semi-group on $E$:

(1.98) $\bar R_t f(x) \overset{\text{def}}{=} E_x[f(\bar X_t)] = e^{tL} f(x)$, for $t\ge0$, $x\in E$, $f: E\to\mathbb{R}$

(i.e. $\bar X_\cdot$ has the jump rate $\lambda_x$ in $x$ and jumps according to $p_{x,y}$ in (1.12), (1.14)). Moreover one has the identities:

(1.99) $X_t = \bar X_{L_t}$, for $t\ge0$ (“time $t$ for $X_\cdot$ is time $L_t$ for $\bar X_\cdot$”),

and

(1.100) $L^x_t = \bar L^x_{L_t}$, for $x\in E\cup\{\Delta\}$, $t\ge0$,

(1.101) $L^x_\infty = \bar L^x_\infty$, for $x\in E$.

Proof. • Markov property of $\bar X_\cdot$: (sketch) Note that $\tau_u$, $u\ge0$, are $\mathcal{F}_t = \sigma(X_s, 0\le s\le t)$-stopping times, and using the fact that $L_t$, $t\ge0$, satisfies the additive functional property

$L_{t+s} = L_s + L_t\circ\theta_s$, for $t,s\ge0$,

we see that $L_{\tau_u\circ\theta_{\tau_v} + \tau_v} = u + v$, and taking inverses

(1.102) $\tau_{u+v} = \tau_u\circ\theta_{\tau_v} + \tau_v$,

and

$\bar X_{u+v} \overset{(1.96)}{=} X_{\tau_{u+v}} \overset{(1.102)}{=} X_{\tau_u\circ\theta_{\tau_v} + \tau_v} = \bar X_u\circ\theta_{\tau_v}$.

(Note incidentally that $\bar\theta_u = \theta_{\tau_u}$, $u\ge0$, satisfies the semi-flow property $\bar\theta_{u+v} = \bar\theta_u\circ\bar\theta_v$, for $u,v\ge0$, and, in this notation, the above equality reads $\bar X_{u+v} = \bar X_u\circ\bar\theta_v$.)

It now follows that for $u,v\ge0$, $B\in\mathcal{F}_{\tau_v}$, one has for $f: E\cup\{\Delta\}\to\mathbb{R}$,

$E_x[f(\bar X_{u+v})\, 1_B] = E_x[f(\bar X_u\circ\theta_{\tau_v})\, 1_B] \overset{\text{strong Markov for } X_\cdot}{=} E_x\big[E_{X_{\tau_v}}[f(\bar X_u)]\, 1_B\big] = E_x\big[E_{\bar X_v}[f(\bar X_u)]\, 1_B\big]$.

Since for $v'\le v$, $\bar X_{v'} = X_{\tau_{v'}}$ are (see for instance Proposition 2.18, p. 9 of [12]) $\mathcal{F}_{\tau_v}$-measurable, this proves the Markov property.

• (1.98): From the Markov property one deduces that $\bar R_t$, $t\ge0$, is a sub-Markovian semi-group. Now for $f: E\to\mathbb{R}$, $x\in E$, $u>0$, one has

$\dfrac1u\,(\bar R_u f - f)(x) = \dfrac1u\, E_x[f(\bar X_u) - f(\bar X_0)] = \dfrac1u\, E_x[f(X_{\tau_u}) - f(X_0)]$.

By (1.95) we see that $\tau_u \le cu$, for $u\ge0$. We also know that the probability that $X_\cdot$ jumps at least twice in $[0,t]$ is $o(t)$ as $t\to0$. So as $u\to0$,

$\dfrac1u\, E_x[f(X_{\tau_u}) - f(X_0)] = \dfrac1u\, E_x\big[f(X_{\tau_u}) - f(X_0),\ X_\cdot \text{ has exactly one jump in } [0,\tau_u]\big] + o(1)$

$= E_x[f(Z_1) - f(X_0)]\,\dfrac1u\, P_x[L_{T_1} \le u] + o'(1)$, with $T_1$ the first jump time of $X_\cdot$,

$\overset{(1.94)}{=} (Pf - f)(x)\,\dfrac1u\, P_x\Big[\dfrac{T_1}{\lambda_x} \le u\Big] + o'(1) \underset{u\to0}{\longrightarrow} \lambda_x\,(Pf - f)(x) \overset{(1.37)}{=} Lf(x)$.

So we have shown that:

(1.103) $\dfrac1u\,(\bar R_u f - f)(x) \underset{u\to0}{\longrightarrow} Lf(x)$.

Just as below (1.86), one now shows the corresponding statement to (1.87):

(1.104) $u\ge0 \to \bar R_u f(x)$ is continuously differentiable with derivative $L\bar R_u f(x)$.

One then concludes in the same fashion as below (1.87), that $e^{uL} f(x) = \bar R_u f(x)$, and this proves (1.98).

• (1.99): By (1.96), $\bar X_{L_t} = X_{\tau_{L_t}} = X_t$, for $t\ge0$, whence (1.99).

• (1.100):

$\dfrac{d}{dt}\,\bar L^x_{L_t} = \dfrac{d\bar L^x_u}{du}\Big|_{u = L_t} \times \dfrac{dL_t}{dt} \overset{(1.97)}{=} 1\{\bar X_{L_t} = x\}\,\dfrac{dL_t}{dt} \overset{(1.96),(1.94)}{=} 1\{X_t = x\}\,\dfrac{1}{\lambda_{X_t}} = 1\{X_t = x\}\,\dfrac{1}{\lambda_x}$,

except when $t$ is a jump time of $X_\cdot$, and integrating we find (1.100).

• (1.101): Letting $t\to\infty$ in (1.100) yields $L^x_\infty = \bar L^x_\infty$, that is (1.101). □

except when t is a jump time of X., and integrating we find (1.100). (1.101): • x Letting t in (1.100) yields L = Lx , that is (1.101). →∞ ∞ ∞ One then has the Feynman-Kac formula for X.. Theorem 1.16. (Feynman-Kac formula for X.) For V, f : E R, u 0, one has → ≥ u u(L+V ) (1.105) Ex[f(Xu) exp V (Xv)dv = e f(x), for x E. n Z0 oi ∈ Proof. The proof is similar to that of (1.84). One simply uses (1.98) in place of (1.18). Remark 1.17. As a closing remark for Chapter 1, we briefly sketch a link between the Markov chain obtained as the trace of X. on a non-empty subset K of E and the trace form ∗, cf. (1.72) and Proposition 1.11. To this end we introduce E u K x (1.106) L = L = 1 Xv K ∆ dv, for u 0, u u Z ∈ ∪{ } ≥ x∈KP∪{∆} 0  which is a continuous non-decreasing function of u tending to infinity, and its right- continuous inverse,

(1.107) τ K = inf u 0; LK > v , for v 0. v { ≥ u } ≥ The trace process of X. on K is defined as

K (1.108) X = X K , for v 0 v τ v ≥ (intuitively at time v, X .K is at the location where X. sits once L.K accumulates v + ε units of time, with ε 0). With similar arguments as in the case of X. (see the proof of Proposition 1.15, in→ particular, using the strong Markov property of X. and the fact K K that, in the notation from below (1.102), X = X θ K ), one can show that under u+v u ◦ τ v P , x K ∆ , X K, v 0, is a Markov chain on K with cemetery state ∆. x ∈ ∪{ } v ≥ 26 1 GENERALITIES

One can further show that its corresponding sub-Markovian transition semi-group on $K$ has the form

(1.109) $\bar R^K_t f(x) = E_x[f(\bar X^K_t)] = e^{tL^*} f(x)$, for $x\in K$, $f: K\to\mathbb{R}$,

where, in the notation of (1.74), (1.76),

(1.110) $L^* f(x) = \sum\limits_{y\in K} c^*_{x,y}\, f(y) - \lambda^*_x\, f(x)$, for $x\in K$, $f: K\to\mathbb{R}$.

To see this last point one notes that if $\mathcal{L}$ stands for the generator of $\bar X^K_\cdot$, the inverse of $-\mathcal{L}$ has the $K\times K$ matrix:

(1.111) $E_x\Big[\displaystyle\int_0^\infty 1\{\bar X^K_v = y\}\, dv\Big] = E_x\Big[\int_0^\infty 1\{\bar X_u = y\}\, du\Big] \overset{(1.101),(1.90)}{=} g(x,y) \overset{(1.78)}{=} g^*(x,y)$, for $x,y\in K$.

A similar identity holds for the continuous-time chain on $K$ with variable jump rate $\lambda^*_\cdot$ attached to the weights $c^*_{x,y}$ and the killing measure $\kappa^*_\cdot$. Its generator is $L^*$ (by Proposition 1.15), and we thus find that $\mathcal{L} = L^*$. □

2 Isomorphism theorems

In this chapter we will discuss the isomorphism theorems of Dynkin and Eisenbaum mentioned in the introduction, cf. (0.4) - (0.7), as well as some of the so-called generalized Ray-Knight theorems, in the terminology of Marcus-Rosen [19]. We still need to introduce some objects such as the Gaussian free field and the measures on paths entering the Dynkin isomorphism theorem. We keep the same set-up and notation as in Chapter 1.

2.1 The Gaussian free field

In this section we define the Gaussian free field. We also describe the conditional law of the field given its values on a given subset $K$ of $E$, as well as the law of its restriction to $K$. Interestingly this brings into play the orthogonal decomposition under the Dirichlet form and the notion of trace form discussed in Section 1.3.

As we now see, we can use $g(x,y)$, $x,y\in E$, as the covariance function of a centered Gaussian field indexed by $E$. An important step to this effect is (1.35). We endow the canonical space $\mathbb{R}^E$ of functions on $E$ with the canonical product σ-algebra and with the canonical coordinates

(2.1) $\varphi_x: f\in\mathbb{R}^E \to \varphi_x(f) = f(x)$, $x\in E$.

Proposition 2.1. There exists a unique probability $P^G$ on $\mathbb{R}^E$, under which

(2.2) $(\varphi_x)_{x\in E}$ is a centered Gaussian field with covariance $E^G[\varphi_x\,\varphi_y] = g(x,y)$, for $x,y\in E$.

Proof. Uniqueness:

Under such a $P^G$, for any $\nu: E\to\mathbb{R}$, $\langle\nu,\varphi\rangle = \sum_{x\in E}\nu_x\,\varphi_x$ is a centered Gaussian variable with

$E^{P^G}\Big[\Big(\sum\limits_x \nu_x\,\varphi_x\Big)^2\Big] = \sum\limits_{x,y\in E} \nu_x\,\nu_y\, g(x,y) = E(\nu,\nu)$.

As a result

(2.3) $E^G[e^{i\langle\nu,\varphi\rangle}] = e^{-\frac12\, E(\nu,\nu)}$, for any $\nu: E\to\mathbb{R}$.

This specifies the characteristic function of $P^G$, and hence $P^G$ is unique.

Existence: We give both an abstract and a concrete construction of the law $P^G$.

Abstract construction:

We choose $\nu_\ell$, $1\le\ell\le|E|$, an orthonormal basis for $E(\cdot,\cdot)$, cf. (1.35), of the space of measures, and consider the dual basis $f_i$, $1\le i\le|E|$, of functions, so: $\langle\nu_\ell, f_i\rangle = \delta_{\ell,i}$, for $1\le i,\ell\le|E|$. If $\xi_i$, $i\ge1$, are i.i.d. $N(0,1)$ variables on some auxiliary space $(\Omega, \mathcal{A}, P)$, we define the random function

(2.4) $\psi(\cdot,\omega) = \sum\limits_{1\le i\le|E|} \xi_i(\omega)\, f_i(\cdot)$.

For any $x\in E$, $1_x = \sum_{\ell=1}^{|E|} E(1_x, \nu_\ell)\,\nu_\ell$, so that

$\psi(x,\omega) = \langle 1_x, \psi(\cdot,\omega)\rangle = \sum\limits_{1\le\ell,i\le|E|} E(1_x, \nu_\ell)\,\xi_i(\omega)\,\langle\nu_\ell, f_i\rangle = \sum\limits_{\ell=1}^{|E|} E(1_x, \nu_\ell)\,\xi_\ell$.

It now follows that for $x,y\in E$

$E^P[\psi(x,\omega)\,\psi(y,\omega)] = \sum\limits_{1\le\ell,\ell'\le|E|} E(1_x,\nu_\ell)\, E(1_y,\nu_{\ell'})\, E^P[\xi_\ell\,\xi_{\ell'}] = \sum\limits_{1\le\ell\le|E|} E(1_x,\nu_\ell)\, E(1_y,\nu_\ell) \overset{\text{Parseval}}{=} E(1_x, 1_y) = g(x,y)$.

So the law of $\psi(\cdot,\omega)$ on $\mathbb{R}^E$ satisfies (2.2).

So the law of ψ( ,ω) on RE satisfies (2.2). · Concrete construction:

The matrix $g(x,y)$, $x,y\in E$, has inverse $\langle -L1_x, 1_y\rangle$, $x,y\in E$, cf. (1.37), and hence under the probability

(2.5) $P^G = \dfrac{1}{(2\pi)^{|E|/2}\,\sqrt{\det G}}\,\exp\Big\{-\frac12\,\mathcal{E}(\varphi,\varphi)\Big\}\prod\limits_{x\in E} d\varphi_x$

(using that $\sum_{x,y\in E}\varphi_x\,\varphi_y\,\langle -L1_x, 1_y\rangle = \langle -L\varphi, \varphi\rangle \overset{(1.44)}{=} \mathcal{E}(\varphi,\varphi)$), $(\varphi_x)_{x\in E}$ is a centered Gaussian vector with covariance $g(x,y)$, $x,y\in E$, i.e. (2.2) holds. □

Remark 2.2. In the above abstract construction the dual basis $f_i$, $1\le i\le|E|$, of $\nu_\ell$, $1\le\ell\le|E|$, is simply given by

(2.6) $f_i = G\nu_i$, $1\le i\le|E|$.

Indeed $\langle\nu_\ell, f_i\rangle = \langle\nu_\ell, G\nu_i\rangle = E(\nu_\ell, \nu_i) = \delta_{\ell,i}$. Note that $f_i$, $1\le i\le|E|$, is an orthonormal basis under $\mathcal{E}(\cdot,\cdot)$:

$\mathcal{E}(f_i, f_j) = \mathcal{E}(G\nu_i, G\nu_j) \overset{(1.38)}{=} \langle\nu_i, G\nu_j\rangle = E(\nu_i, \nu_j) = \delta_{i,j}$. □
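In practice one can sample the Gaussian free field by a matrix analogue of the abstract construction (2.4): write $G = CC^\top$ (for instance via a Cholesky factorization) and set $\varphi = C\xi$ with $\xi$ i.i.d. $N(0,1)$. The added sketch below (toy data as in the earlier sketches) checks the covariance (2.2) empirically.

    import numpy as np

    rng = np.random.default_rng(3)

    # Toy data as in the earlier sketches (assumed, arbitrary).
    n = 4
    c = np.zeros((n, n))
    for x in range(n - 1):
        c[x, x + 1] = c[x + 1, x] = 1.0
    kappa = np.array([0.5, 0.0, 0.0, 0.3])
    lam = c.sum(axis=1) + kappa
    G = np.linalg.inv(np.diag(lam) - c)          # covariance (2.2)

    # phi = C xi with C C^T = G and xi i.i.d. N(0,1).
    C = np.linalg.cholesky(G)
    samples = 100_000
    phi = rng.standard_normal((samples, n)) @ C.T

    emp_cov = phi.T @ phi / samples
    print("max |empirical cov - g| =", np.abs(emp_cov - G).max())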

Conditional expectations:

We consider $K \subseteq E$, $U = E\setminus K$, and want to describe the conditional law under $P^G$ of $(\varphi_x)_{x\in U}$ given $(\varphi_x)_{x\in K}$, as well as the law of $(\varphi_x)_{x\in K}$. The orthogonal decomposition in Proposition 1.10 together with the description of the trace form in Proposition 1.11 will be useful for this purpose. We write $P^{G,U}$ for the law on $\mathbb{R}^E$ of the centered Gaussian field with covariance

(2.7) $E^{G,U}[\varphi_x\,\varphi_y] = g_U(x,y)$, for $x,y\in E$

(so $\varphi_x = 0$, $P^{G,U}$-a.s., when $x\in K$).

Proposition 2.3. ($K\ne\emptyset$)

For $x\in E$, define on $\mathbb{R}^E$ the $\sigma(\varphi_y, y\in K)$-measurable

(2.8) $h_x = E_x[H_K < \infty,\ \varphi_{X_{H_K}}] = \sum\limits_{y\in K} P_x[H_K < \infty,\ X_{H_K} = y]\,\varphi_y$ (so $h_x = \varphi_x$, for $x\in K$).

Then we can write

(2.9) $\varphi_x = \psi_x + h_x$, for $x\in E$ (so $\psi_x = 0$, for $x\in K$).

Under $P^G$,

(2.10) $(\psi_x)_{x\in E}$ is independent from $\sigma(\varphi_y, y\in K)$,

and

(2.11) $(\psi_x)_{x\in E}$ is distributed as $(\varphi_x)_{x\in E}$ under $P^{G,U}$.

In addition, in the notation of (1.77),

(2.12) $(\varphi_x)_{x\in K}$ has the law $\dfrac{1}{(2\pi)^{|K|/2}\,\sqrt{\det_{K\times K} G}}\,\exp\Big\{-\frac12\,\mathcal{E}^*(\varphi,\varphi)\Big\}\prod\limits_{x\in K} d\varphi_x$,

where $\det_{K\times K} G$ denotes the determinant of the $K\times K$-matrix obtained by restricting $g(\cdot,\cdot)$ to $K\times K$.

Proof. • (2.10): For any $x\in E$, $\psi_x$ belongs to the linear space generated by the centered jointly Gaussian collection $\varphi_z$, $z\in E$. In addition when $x\in E$ and $y\in K$, we find that by (2.8), (2.9) and (2.2),

$E^G[\psi_x\,\varphi_y] = E^G[\varphi_x\,\varphi_y] - E^G[h_x\,\varphi_y] = g(x,y) - \sum\limits_{z\in K} P_x[H_K < \infty,\ X_{H_K} = z]\, g(z,y) \overset{(1.49)}{=} g(x,y) - E_x[H_K < \infty,\ g(X_{H_K}, y)] = 0$.

The claim (2.10) now follows.

• (2.11): Since $\psi_x = 0$, for $x\in K$, and $P^{G,U}$-a.s., $\varphi_x = 0$, for $x\in K$, we only need to focus on the law of $(\psi_x)_{x\in U}$ under $P^G$. When $F$ is a bounded measurable function on $\mathbb{R}^U$, we find that by (2.5), setting $c$ as the inverse of $(2\pi)^{|E|/2}\sqrt{\det G}$, we have

$E^G\big[F\big((\psi_x)_{x\in U}\big)\big] = c \displaystyle\int_{\mathbb{R}^E} F\big((\varphi_x - h_x)_{x\in U}\big)\,\exp\Big\{-\frac12\,\mathcal{E}(\varphi,\varphi)\Big\}\prod_{x\in E} d\varphi_x$.

Using Proposition 1.10 and (1.71), (1.72), we see that for $\varphi$ in $\mathbb{R}^E$,

$\mathcal{E}(\varphi,\varphi) = \mathcal{E}(\varphi - h,\ \varphi - h) + \mathcal{E}^*(\varphi_{|K},\ \varphi_{|K})$,

where $\varphi_{|K}$ denotes the restriction to $K$ of $\varphi\in\mathbb{R}^E$. We then make a change of variables in the above integral. We set $\varphi'_x = \varphi_x - h_x$, for $x\in U$, and $\varphi'_x = \varphi_x$, for $x\in K$, and note that the Jacobian of this transformation equals 1, so that $\prod_{x\in E} d\varphi_x = \prod_{x\in E} d\varphi'_x$. We thus see that for all $F$ as above:

(2.13) $E^G\big[F\big((\psi_x)_{x\in U}\big)\big] = c \displaystyle\int_{\mathbb{R}^E} F\big((\varphi_x)_{x\in U}\big)\,\exp\Big\{-\frac12\,\mathcal{E}(\varphi_U,\varphi_U) - \frac12\,\mathcal{E}^*(\varphi_{|K},\varphi_{|K})\Big\}\prod_{x\in E} d\varphi_x$,

where we have set $\varphi_U(x) = 1_U(x)\,\varphi_x$. Integrating over the variables $\varphi_x$, $x\in K$, we find that for a suitable constant $c'$,

$E^G\big[F\big((\psi_x)_{x\in U}\big)\big] = c' \displaystyle\int_{\mathbb{R}^U} F\big((\varphi_x)_{x\in U}\big)\,\exp\Big\{-\frac12\,\mathcal{E}(\varphi_U,\varphi_U)\Big\}\prod_{x\in U} d\varphi_x$,

and using Remark 1.5 and (2.5),

$= E^{G,U}\big[F\big((\varphi_x)_{x\in U}\big)\big]$.

This proves (2.11).

• (2.12): A simple modification of the above calculation, replacing $F\big((\psi_x)_{x\in U}\big)$ by $H\big((\varphi_x)_{x\in K}\big)$, with $H$ a bounded measurable function on $\mathbb{R}^K$, yields (2.12). □

Remark 2.4. As a result of Proposition 2.3, when $x$ is a given point of $U$, under $P^G$, conditionally on the variables $\varphi_y$, $y\in K$,

(2.14) $\varphi_x$ is distributed as a Gaussian variable with mean $E_x[H_K < \infty,\ \varphi_{X_{H_K}}]$ and variance $g_U(x,x)$.

Note that by Proposition 1.9 (and Remark 1.5), for $x\in U$,

(2.15) $g_U(x,x)$ is the inverse of the minimum energy of a function taking the value 1 on $x$ and 0 on $K$

(this provides an interpretation of the conditional variance as an effective resistance between $x$ and $K\cup\{\Delta\}$). □
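The conditional description (2.14) can be checked by elementary Gaussian conditioning. In the added sketch below (toy data as in the earlier sketches), regressing $\varphi_x$ on $(\varphi_y)_{y\in K}$ should produce the entrance-law weights of (2.8) and a residual variance $g_U(x,x)$.

    import numpy as np

    rng = np.random.default_rng(4)

    # Toy data as in the earlier sketches (assumed, arbitrary).
    n = 4
    c = np.zeros((n, n))
    for x in range(n - 1):
        c[x, x + 1] = c[x + 1, x] = 1.0
    kappa = np.array([0.5, 0.0, 0.0, 0.3])
    lam = c.sum(axis=1) + kappa
    G = np.linalg.inv(np.diag(lam) - c)

    K, U, x = [0, 3], [1, 2], 1             # condition on phi restricted to K

    # Sample the free field (2.2).
    C = np.linalg.cholesky(G)
    phi = rng.standard_normal((200_000, n)) @ C.T

    # Gaussian conditioning: E[phi_x | phi_K] = Sigma_xK Sigma_KK^{-1} phi_K,
    # whose weights should match (2.8), with residual variance g_U(x,x), cf. (2.14).
    w = np.linalg.solve(G[np.ix_(K, K)], G[K, x])
    resid = phi[:, x] - phi[:, K] @ w
    GU = np.linalg.inv(np.diag(lam[U]) - c[np.ix_(U, U)])   # g_U on U (Remark 1.5)
    print("weights          :", w)
    print("residual variance:", resid.var(), " g_U(x,x):", GU[U.index(x), U.index(x)])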

2.2 The measures $P_{x,y}$

In this section we introduce a further ingredient of the Dynkin isomorphism theorem, namely the kind of measures on paths that appear in (0.4), which live on paths in $E$ with finite duration that go from $x$ to $y$. We provide several descriptions of these measures, and derive an identity for the Laplace transform of the local time of the path, which prepares the ground for the proof of the Dynkin isomorphism theorem in the next section. We introduce the space of $E$-valued trajectories with duration $t\ge0$:

(2.16) $\Gamma_t$ = the space of right-continuous functions $[0,t]\to E$, with finitely many jumps, left-continuous at $t$.

We still denote by $X_s$, $0\le s\le t$, the canonical coordinates, and by convention we set $X_s = \Delta$ (cemetery point) if $s > t$.

We then define the space of E-valued trajectories with finite duration as

(2.17) $\Gamma = \bigcup\limits_{t>0} \Gamma_t$.

For $\gamma\in\Gamma$, we denote the duration of $\gamma$ by

(2.18) $\zeta(\gamma)$ = the unique $t>0$ such that $\gamma\in\Gamma_t$.

The σ-algebra we choose on $\Gamma$ is simply obtained by “transport”. We use the bijection $\Phi: \Gamma_1\times(0,\infty)\to\Gamma$:

$(w,t)\in\Gamma_1\times(0,\infty) \overset{\Phi}{\longrightarrow} \gamma(\cdot) = w\Big(\dfrac{\cdot}{t}\Big)\in\Gamma_t\ (\subseteq\Gamma)$,

where we endow $\Gamma_1\times(0,\infty)$ with the canonical product σ-algebra (and $\Gamma_1$ is endowed with the σ-algebra generated by the maps $X_s$, $0\le s\le1$, from $\Gamma_1$ into $E$). We thus take the image by $\Phi$ of the σ-algebra on $\Gamma_1\times(0,\infty)$, and obtain the σ-algebra on $\Gamma$. We define for $x,y\in E$, $t>0$, the measure

(2.19) $P^t_{x,y}$ = the image on $\Gamma_t$ of $1\{X_t = y\}\, P_x\,\dfrac{1}{\lambda_y}$, under the map $(X_s)_{0\le s\le t}$ from $D_E\cap\{X_t = y\}$ into $\Gamma_t\ (\subseteq\Gamma)$.

Note that the total mass of $P^t_{x,y}$ is

(2.20) $P^t_{x,y}[\Gamma_t] = P_x[X_t = y]\,\dfrac{1}{\lambda_y} \overset{(1.22)}{=} r_t(x,y)$.

We then define the finite measure $P_{x,y}$ on $\Gamma$ via:

(2.21) $P_{x,y}[B] = \displaystyle\int_0^\infty P^t_{x,y}[B]\, dt$,

for any measurable subset $B$ of $\Gamma$ (noting that $t>0 \to P^t_{x,y}$ defines a finite measure kernel from $(0,\infty)$ to $\Gamma$). The total mass of $P_{x,y}$ is

(2.22) $P_{x,y}[\Gamma] = \displaystyle\int_0^\infty P^t_{x,y}[\Gamma]\, dt \overset{(2.20)}{=} \int_0^\infty r_t(x,y)\, dt \overset{(1.26)}{=} g(x,y)$.

The next proposition describes some relations between the measures $P_{x,y}$ and $P_x$. In particular it provides an interpretation of $P_{x,y}$ as a non-normalized h-transform of $P_x$, with $h(\cdot) = g(\cdot,y)$, see for instance Section 3.9 of [19]. Remark 2.6 below gives yet another description of $P_{x,y}$.

Proposition 2.5. ($x,y\in E$)

For $0 < t_1 < \cdots < t_n$, $x_1,\ldots,x_n\in E$, one has

(2.23) $P_{x,y}[X_{t_1} = x_1,\ldots,X_{t_n} = x_n] = P_x[X_{t_1} = x_1,\ldots,X_{t_n} = x_n]\, g(x_n, y) = r_{t_1}(x,x_1)\, r_{t_2-t_1}(x_1,x_2)\cdots r_{t_n - t_{n-1}}(x_{n-1},x_n)\, g(x_n,y)\,\lambda_{x_1}\cdots\lambda_{x_n}$.

If $K\subseteq E$ and $H_K$ is defined as in (1.45), for $B\in\sigma(X_{H_K\wedge s},\ s\ge0)$ and $\zeta$ as in (2.18),

(2.24) $P_{x,y}[B,\ H_K \le \zeta] = E_x[B\cap\{H_K < \infty\},\ g(X_{H_K}, y)]$.

Proof. • (2.23):

$P_{x,y}[X_{t_1} = x_1,\ldots,X_{t_n} = x_n] \overset{(2.21)}{=} \displaystyle\int_0^\infty P^t_{x,y}[X_{t_1} = x_1,\ldots,X_{t_n} = x_n]\, dt$

$= \displaystyle\int_{t_n}^\infty P_x[X_{t_1} = x_1,\ldots,X_{t_n} = x_n,\ X_t = y]\,\frac{dt}{\lambda_y}$ (Markov property at time $t_n$)

$= \displaystyle\int_{t_n}^\infty E_x[X_{t_1} = x_1,\ldots,X_{t_n} = x_n,\ r_{t-t_n}(x_n, y)]\, dt \overset{(1.26)}{=} P_x[X_{t_1} = x_1,\ldots,X_{t_n} = x_n]\, g(x_n, y)$,

and the second equality of (2.23) follows from the Markov property.

• (2.24):

$P_{x,y}[B,\ H_K\le\zeta] \overset{(2.18),(2.19)}{=} \displaystyle\int_0^\infty P^t_{x,y}[B,\ H_K\le t]\, dt = \int_0^\infty E_x[B,\ H_K\le t,\ X_t = y]\,\frac{dt}{\lambda_y}$

$\overset{\text{strong Markov property}}{=} \displaystyle\int_0^\infty E_x[B,\ H_K\le t,\ r_{t-H_K}(X_{H_K}, y)]\, dt \overset{\text{Fubini},\ (1.26)}{=} E_x[B,\ H_K<\infty,\ g(X_{H_K}, y)]$. □

Remark 2.6. Given $\gamma\in\Gamma$, we can introduce $N(\gamma)\ge0$, the number of jumps of $\gamma$ strictly before $\zeta(\gamma)$, the duration of $\gamma$, and when $N(\gamma) = n\ge1$, we can consider $0 < T_1(\gamma) < \cdots < T_n(\gamma) < \zeta(\gamma)$ the successive jump times of $X_s$, $0\le s\le\zeta(\gamma)$.

As we now explain, for $n\ge1$, $t_i > 0$, $1\le i\le n$, $t > 0$, and $x_1,\ldots,x_n\in E$, one has the following formula complementing Proposition 2.5:

(2.25) $P_{x,y}[N = n,\ X_{T_1} = x_1,\ldots,X_{T_n} = x_n,\ T_1\in t_1 + dt_1,\ldots,T_n\in t_n + dt_n,\ \zeta\in t + dt] = \dfrac{c_{x,x_1}\, c_{x_1,x_2}\cdots c_{x_{n-1},y}}{\lambda_x\,\lambda_{x_1}\cdots\lambda_{x_{n-1}}\,\lambda_y}\,\delta_{x_n,y}\, 1\{0 < t_1 < t_2 < \cdots < t_n < t\}\, e^{-t}\, dt_1\ldots dt_n\, dt$,

where the precise meaning of (2.25) is obtained by considering some subsets $A_1,\ldots,A_n$, $A$ of $(0,\infty)$, replacing “$T_1\in t_1+dt_1$”,...,“$T_n\in t_n+dt_n$”, “$\zeta\in t+dt$” by $\{T_1\in A_1\},\ldots,\{T_n\in A_n\}$, $\{\zeta\in A\}$, in the left-hand side of (2.25), and in the right-hand side multiplying by $1_{A_1}(t_1)\ldots1_{A_n}(t_n)\,1_A(t)$, and integrating the expression over the variables $t_1,\ldots,t_n,t$.

To find (2.25), we note that for $t > 0$,

$P^t_{x,y}[N = n,\ X_{T_1} = x_1,\ldots,X_{T_n} = x_n,\ T_1\in t_1+dt_1,\ldots,T_n\in t_n+dt_n] \overset{(2.19)}{=}$

$P_x[X_\cdot \text{ has } n \text{ jumps in } [0,t],\ X_{T_1} = x_1,\ldots,X_{T_n} = x_n,\ T_1\in t_1+dt_1,\ldots,T_n\in t_n+dt_n]\ \delta_{x_n,y}\,\lambda_y^{-1} =$

$P_x[Z_1 = x_1, Z_2 = x_2,\ldots,Z_n = x_n]\ P_x[T_n < t < T_{n+1},\ T_1\in t_1+dt_1,\ldots,T_n\in t_n+dt_n]\ \delta_{x_n,y}\,\lambda_y^{-1}$

$= \dfrac{c_{x,x_1}\cdots c_{x_{n-1},x_n}}{\lambda_x\,\lambda_{x_1}\cdots\lambda_{x_{n-1}}}\ 1\{0 < t_1 < \cdots < t_n < t\}\, e^{-t}\, dt_1\ldots dt_n\ \delta_{x_n,y}\,\dfrac{1}{\lambda_y}$,

using that the increments of the jump times are i.i.d. exponential of parameter 1, and (2.25) then follows by integrating over $t$ as in (2.21). In addition, one also has

(2.26) $P_{x,y}[N = 0,\ \zeta\in t + dt] = \dfrac{\delta_{x,y}}{\lambda_y}\, e^{-t}\, dt$.

We record an interesting consequence of (2.25), (2.26). We denote by $\check\gamma\in\Gamma$ the “time reversal” of $\gamma\in\Gamma$, i.e. $\check\gamma$ is the element of $\Gamma$ such that $\zeta(\check\gamma) = \zeta(\gamma)$, $\check\gamma(0) = \gamma(\zeta)$, $\check\gamma(\zeta) = \gamma(0)$, and $\check\gamma(s) = \lim_{\varepsilon\downarrow0}\gamma(\zeta - s - \varepsilon)$, for $0 < s < \zeta$.

Observe that on $\{N = n\}$ one can reconstruct $\gamma$ from $T_1,\ldots,T_n,\zeta$, and $X_{T_1},\ldots,X_{T_n}$. It is then a straightforward consequence of (2.25), (2.26) that

(2.27) $P_{y,x}$ is the image of $P_{x,y}$ under the map $\gamma\to\check\gamma$.

We will later see some analogous formulas to (2.25), (2.26) for rooted loops in Proposition 3.1, see also (3.40). □

We will now provide some formulas for moments of $\int_0^\infty V(X_s)\, ds$ and $L^z_\infty$ under the measure $P_{x,y}$.

Proposition 2.7. ($x,y\in E$)

For $V: E\to\mathbb{R}$ and $n\ge0$, one has, in the notation of (1.32) and (1.83),

(2.28) $E_{x,y}\Big[\Big(\displaystyle\int_0^\infty V(X_s)\, ds\Big)^n\Big] = n!\,\big((QV)^n g^y\big)(x)$, with $g^y(\cdot) = (G1_y)(\cdot) = g(\cdot,y)$.

For $x_1, x_2,\ldots,x_n\in E$, one has

(2.29) $E_{x,y}\Big[\prod\limits_{i=1}^n L^{x_i}_\infty\Big] = \sum\limits_{\sigma\in\mathcal{S}_n} g(x, x_{\sigma(1)})\, g(x_{\sigma(1)}, x_{\sigma(2)})\cdots g(x_{\sigma(n)}, y)$,

with $\mathcal{S}_n$ the set of permutations of $\{1,\ldots,n\}$. When $\|G|V|\|_\infty < 1$, one has

(2.30) $E_{x,y}\Big[\exp\Big\{\sum\limits_{z\in E} V(z)\, L^z_\infty\Big\}\Big] = \big((I - GV)^{-1} g^y\big)(x) = \big((I - GV)^{-1} G1_y\big)(x)$.

Proof. We begin with a slightly more general calculation and consider $V_1,\ldots,V_n: E\to\mathbb{R}$ and

$E_{x,y}\Big[\prod\limits_{i=1}^n \displaystyle\int_0^\infty V_i(X_s)\, ds\Big] = \int_{\mathbb{R}^n_+} E_{x,y}[V_1(X_{s_1})\cdots V_n(X_{s_n})]\, ds_1\ldots ds_n$,

and decomposing over the various orthants

$= \sum\limits_{\sigma\in\mathcal{S}_n}\ \displaystyle\int_{0<s_1<\cdots<s_n} E_{x,y}[V_{\sigma(1)}(X_{s_1})\cdots V_{\sigma(n)}(X_{s_n})]\, ds_1\ldots ds_n$.

By (2.23), the expectation under the integral equals

$\sum\limits_{x_1,\ldots,x_n\in E} r_{s_1}(x,x_1)\, V_{\sigma(1)}(x_1)\cdots r_{s_n - s_{n-1}}(x_{n-1},x_n)\, V_{\sigma(n)}(x_n)\, g(x_n, y)\,\lambda_{x_1}\cdots\lambda_{x_n}$,

and integrating over $ds_{\sigma(n)}$ and summing over $x_n$ (then proceeding inductively, with (1.26), (1.32)), we obtain

$= \sum\limits_{\sigma\in\mathcal{S}_n} \big(QV_{\sigma(1)}\, QV_{\sigma(2)}\cdots QV_{\sigma(n)}\, g^y\big)(x)$.

In other words, we have:

(2.31) $E_{x,y}\Big[\prod\limits_{i=1}^n \displaystyle\int_0^\infty V_i(X_s)\, ds\Big] = \sum\limits_{\sigma\in\mathcal{S}_n}\big(QV_{\sigma(1)}\, QV_{\sigma(2)}\cdots QV_{\sigma(n)}\, g^y\big)(x)$.

• (2.28): We choose $V_i = V$, $1\le i\le n$, and (2.31) yields (2.28).

• (2.29): We choose $V_i = \frac{1}{\lambda_{x_i}}\, 1_{x_i}$, $1\le i\le n$, and note that $QV_i f(\cdot) = g(\cdot, x_i)\, f(x_i)$, so that (2.31) yields

$E_{x,y}\Big[\prod\limits_{i=1}^n L^{x_i}_\infty\Big] = \sum\limits_{\sigma\in\mathcal{S}_n} g(x, x_{\sigma(1)})\, g(x_{\sigma(1)}, x_{\sigma(2)})\cdots g(x_{\sigma(n)}, y)$,

i.e. (2.29) is proved.

• (2.30):

(2.32) $E_{x,y}\Big[\exp\Big\{\sum\limits_{z\in E} V(z)\, L^z_\infty\Big\}\Big] \overset{(1.92)}{=} E_{x,y}\Big[\sum\limits_{n\ge0}\frac{1}{n!}\Big(\displaystyle\int_0^\infty \frac{V}{\lambda}\,(X_s)\, ds\Big)^n\Big]$.

The calculation below shows we can apply dominated convergence:

$\sum\limits_{n\ge0}\frac{1}{n!}\, E_{x,y}\Big[\Big(\displaystyle\int_0^\infty \frac{|V|}{\lambda}\,(X_s)\, ds\Big)^n\Big] \overset{(2.28)}{=} \sum\limits_{n\ge0}\Big(\Big(Q\,\frac{|V|}{\lambda}\Big)^n g^y\Big)(x) \overset{(1.33)}{=} \sum\limits_{n\ge0}\big((G|V|)^n g^y\big)(x) < \infty$,

since $\|G|V|\|_\infty < 1$. So the left-hand side of (2.32) equals

$\sum\limits_{n\ge0}\frac{1}{n!}\, E_{x,y}\Big[\Big(\displaystyle\int_0^\infty \frac{V}{\lambda}\,(X_s)\, ds\Big)^n\Big] \overset{(2.28),(1.33)}{=} \sum\limits_{n\ge0}\big((GV)^n g^y\big)(x)$,

and since $\|GV\|_{L^\infty(E)\to L^\infty(E)} < 1$, we find that $\sum_{n\ge0}(GV)^n = (I - GV)^{-1}$, so that

$E_{x,y}\Big[\exp\Big\{\sum\limits_{x\in E} V(x)\, L^x_\infty\Big\}\Big] = \big((I - GV)^{-1} g^y\big)(x)$,

i.e. (2.30) holds. □

2.3 Isomorphism theorems

This section is devoted to the isomorphism theorems of Dynkin and Eisenbaum, which explore the nature of the relations between occupation times and the free field. We refer to Marcus-Rosen [19] for a discussion of these theorems under very general assumptions. We first state and prove the Dynkin isomorphism theorem, which shows that $L^z_\infty$, $z\in E$, under $P_{x,y}$, has similar features, in a suitable sense, to $\frac12\,\varphi_z^2$, $z\in E$, under $P^G$.

Theorem 2.8. (Dynkin isomorphism theorem)

For any $x,y\in E$, and bounded measurable $F$ on $\mathbb{R}^E$, one has

(2.33) $E_{x,y}\otimes E^G\Big[F\Big(\Big(L^z_\infty + \frac12\,\varphi_z^2\Big)_{z\in E}\Big)\Big] = E^G\Big[\varphi_x\,\varphi_y\, F\Big(\Big(\frac12\,\varphi_z^2\Big)_{z\in E}\Big)\Big]$.

In other words:

$\Big(L^z_\infty + \frac12\,\varphi_z^2\Big)_{z\in E}$ under $P_{x,y}\otimes P^G$, has the same “law” as $\Big(\frac12\,\varphi_z^2\Big)_{z\in E}$ under $\varphi_x\,\varphi_y\, P^G$ ($\leftarrow$ signed measure when $x\ne y$!).

Proof. We will show that for small $V: E\to\mathbb{R}$,

(2.34) $E_{x,y}\otimes E^G\Big[\exp\Big\{\sum\limits_{z\in E} V(z)\Big(L^z_\infty + \frac12\,\varphi_z^2\Big)\Big\}\Big] = E^G\Big[\varphi_x\,\varphi_y\,\exp\Big\{\sum\limits_{z\in E} V(z)\,\frac12\,\varphi_z^2\Big\}\Big]$

(i.e. both integrals are well-defined and equal). Let us first explain how the claim (2.33) then follows.

For any $V: E\to\mathbb{R}$, it follows by application of (2.34) to $u_0 V$ and $-u_0 V$ for small $u_0 > 0$, that $\cosh\big(u_0\sum_z V(z)(L^z_\infty + \frac12\varphi_z^2)\big)$ is integrable for $P_{x,y}\otimes P^G$ and $\cosh\big(u_0\sum_z V(z)\,\frac12\varphi_z^2\big)$ is integrable under $|\varphi_x\,\varphi_y|\, P^G$, and hence the functions

$u\in(-u_0, u_0) + i\,\mathbb{R}\ (\subseteq\mathbb{C}) \longrightarrow E_{x,y}\otimes E^G\Big[\exp\Big\{u\sum\limits_{z\in E} V(z)\Big(L^z_\infty + \frac12\,\varphi_z^2\Big)\Big\}\Big]$

and

$u\in(-u_0, u_0) + i\,\mathbb{R} \longrightarrow E^G\Big[\varphi_x\,\varphi_y\,\exp\Big\{u\sum\limits_{z\in E} V(z)\,\frac12\,\varphi_z^2\Big\}\Big]$

are analytic. And by (2.34) they are equal for $u$ small and real. Hence they agree on $(-u_0, u_0) + i\,\mathbb{R}$, and in particular for the choice $u = i$, that is, for any $V: E\to\mathbb{R}$,

$E_{x,y}\otimes E^G\Big[\exp\Big\{i\sum\limits_{z\in E} V(z)\Big(L^z_\infty + \frac12\,\varphi_z^2\Big)\Big\}\Big] = E^G\Big[\varphi_x\,\varphi_y\,\exp\Big\{i\sum\limits_{z\in E} V(z)\,\frac12\,\varphi_z^2\Big\}\Big]$.

This means that the characteristic function of the law of $L^z_\infty + \frac12\,\varphi_z^2$, $z\in E$, under $P_{x,y}\otimes P^G$ equals the characteristic function of the law (i.e. image measure) of $\frac12\,\varphi_z^2$, $z\in E$, under $\varphi_x\,\varphi_y\, P^G$. It follows that these laws are equal (in particular the law of $\frac12\,\varphi_z^2$, $z\in E$, under the signed measure $\varphi_x\,\varphi_y\, P^G$, is a positive measure!), and (2.33) is proved.

We now turn to the proof of (2.34). We already know by (2.30) that when $V$ is small

(2.35) $E_{x,y}\Big[\exp\Big\{\sum\limits_{z\in E} V(z)\, L^z_\infty\Big\}\Big] = \big((I - GV)^{-1}\, G1_y\big)(x)$.

Moreover, by (2.5), we see that for small $V$, the random variable $\exp\{\frac12\sum_{z\in E} V(z)\,\varphi_z^2\}$ is $P^G$-integrable, because when $V$ is small, thanks to (1.39), $\mathcal{E}_V(\varphi,\varphi) \overset{\text{def}}{=} \mathcal{E}(\varphi,\varphi) - \sum_{z\in E} V(z)\,\varphi_z^2$ is a positive definite quadratic form. In addition, for any $\varphi: E\to\mathbb{R}$:

(2.36) $\mathcal{E}_V(\varphi,\varphi) \overset{(1.44)}{=} \langle -L\varphi, \varphi\rangle - \langle V\varphi, \varphi\rangle = \langle(-L - V)\,\varphi, \varphi\rangle$.

Thus if we define the probability measure on $\mathbb{R}^E$:

(2.37) $P^{G,V} = \dfrac{1}{(2\pi)^{|E|/2}\,\sqrt{\det G}\ E^G\big[\exp\big\{\frac12\sum_z V(z)\,\varphi_z^2\big\}\big]}\,\exp\Big\{-\frac12\,\mathcal{E}_V(\varphi,\varphi)\Big\}\prod\limits_{x\in E} d\varphi_x$,

then the field $\varphi_z$, $z\in E$, is a centered Gaussian field under $P^{G,V}$, with covariance matrix $\langle(-L - V)^{-1} 1_z, 1_{z'}\rangle$, $z,z'\in E$. Note that

(2.38) $(-L - V)^{-1} = \big(-L(I - GV)\big)^{-1} \overset{(1.37)}{=} (I - GV)^{-1}\,(-L)^{-1} = (I - GV)^{-1}\, G$.

As a result, we see that when $V$ is small:

$E^{G,V}[\varphi_x\,\varphi_y] = \big((I - GV)^{-1}\, G1_y\big)(x) \overset{(2.35)}{=} E_{x,y}\Big[\exp\Big\{\sum\limits_{z\in E} V(z)\, L^z_\infty\Big\}\Big]$.

Multiplying these equalities by the $P^G$-expectation in the denominator of the normalizing constant of $P^{G,V}$ in (2.37) yields (2.34), and this concludes the proof of (2.33). □

The fact that the measures Px,y and not simply the measures Px appear in the Dynkin isomorphism theorem makes its use somewhat cumbersome in a number of applications. We now discuss the Eisenbaum isomorphism theorem, which does not make use of the measures Px,y, but instead directly involves the measures Px.

As a preparation, we record the statements corresponding to (2.28) - (2.30), when Px,y is replaced by Px. 2.3 Isomorphism theorems 37

Proposition 2.9. For V : E R, and n 0, one has → ≥ ∞ n n (2.39) Ex V (Xs) ds = n! (QV ) 1E (x), for x E. Z ∈ h 0  i  For x, x ,...,x E, one has 1 n ∈ n xi Ex L = g(x, xσ ) g(xσ , xσ ) ...g(xσ n , xσ n ) (2.40) ∞ (1) (1) (2) ( −1) ( ) h iQ=1 i σP∈Sn (Kac’s moment formula).

When G V < 1, one has k | |k∞ (2.41) E exp V (z) Lz = (I GV )−11 (x), for x E. x ∞ − E ∈ h n zP∈E oi  Proof. The proofs are straightforward modifications of the proofs of (2.28), (2.29), (2.30), replacing (2.23) by the identity

Px[Xt1 = x1,...,Xtn = xn]=

(2.42) rt1 (x, x1) rt2−t1 (x1, x2) ...rtn−tn−1 (xn−1, xn) λx1 ...λxn , for 0 < t < < t and x, x ,...,x E. 1 ··· n 1 n ∈

We are now ready to state and prove Theorem 2.10. (Eisenbaum isomorphism theorem) For any x E, s =0, and bounded measurable F on RE, one has ∈ 6 2 2 G z (ϕz + s) G ϕx (ϕz + s) (2.43) Ex E F L∞ + = E 1+ F , ⊗ h  2 z∈Ei h s   2 z∈Ei in other words: 1 Lz + (ϕ + s)2 under P P G, has the same “law” as ∞ 2 z z∈E x ⊗  1 ϕx (ϕ + s)2 under 1+ P G ( signed measure!). 2 z z∈E s ←   Proof. The same arguments we employed below (2.34), show that it suffices to prove that for small V : E R, → 2 G z (ϕz + s) Ex E exp V (z) L∞ + = ⊗ h n z∈E  2 oi P 2 (2.44) ϕx (ϕz + s) EG 1+ exp V (z) s 2 h  n zP∈E oi (i.e. both integrals are well-defined and equal).

We already know by (2.41) that for small V ,

(2.45) E exp V (z) Lz = (I GV )−11 (x). x ∞ − E h n zP∈E oi  38 2 ISOMORPHISM THEOREMS

We further note that when V is small exp 1 V (z)(ϕ + s)2 is P G is integrable (see { 2 z∈E z } above (2.36)), and P

ϕx 1 G 2 1 V (z)(ϕ +s)2 E 1+ exp V (z)(ϕz + s) 2 P z G z∈E s 2 z∈E E ϕx e    P 2  =1+ = 1 V (z)(ϕ +s)2 G (ϕz + s)  2 P z  E exp V (z) s EG e z∈E z∈E 2 (2.46)   P    G,V E ϕx exp s V (z)ϕz 1 1+   zP∈E  , G,V s E exp s V (z)ϕz   zP∈E  where P G,V is defined in (2.37). We are going to use the next Lemma 2.11. If (X,Y ) is a two-dimensional centered Gaussian vector, then for s =0, 6 E[X exp sY ] (2.47) { } = E[XY ]. s E[exp sY ] { } Proof. For t, s in R, we have

1 E[exp tX + sY ] = exp E[(tX + sY )2] { } n2 o 1 = exp t2 E[X2]+2ts E[XY ]+ s2E[Y 2] . 2 n o By differentiating in t and setting t = 0, we find

s2 E[X exp sY ]= s E[XY ] exp E[Y 2] { } n 2 o = s E[XY ]E[exp sY ]. { } This proves (2.47).

We apply (2.47) with X = ϕx and Y = z∈E V (z) ϕz. We find that the last expression in (2.46) equals P

G,V G,V 1+ E ϕx V (z) ϕz =1+ E [ϕx ϕz] V (z) (2.48) h zP∈E i zP∈E (2.38) = 1+ (I GV )−1 GV (x). −  Note that (I GV )−1 =(I GV )−1(I GV +GV )= I+(I GV )−1GV , so the expressions in (2.45) and− (2.48) are equal.− The claim− (2.43) follows. −

2.4 Generalized Ray-Knight theorems We will now discuss some so-called “generalized Ray-Knight theorems” (we will explain the terminology below, see also [8] and [19] for a presentation of these results in a general framework). These results are closely linked to the isomorphism theorems of Dynkin and Eisenbaum, see also [6] and [19]. 2.4 Generalized Ray-Knight theorems 39

We begin with a direct application of the isomorphism theorem of Eisenbaum. We consider x E, and assume that κ vanishes for x = x . We set 0 ∈ x 6 0 (2.49) U = E x , \{ 0} Px−a.s. G,U (so TU = Hx0 , for all x E, by our assumptions on κ), and denote by P the law on RE of the centered Gaussian∈ free field with covariance

(2.50) EG,U [ϕ ϕ ]= g (x, y), for x, y E. x y U ∈ Theorem 2.12. (Generalized first Ray-Knight theorem) For any x E, and s =0, ∈ 6 z 1 2 G,U LHx + (ϕz + s) under Px P , has the same “law” as (2.51)  0 2 z∈E ⊗ 1 2 ϕx G,U (ϕz + s) under (1 + s ) P .  2 z∈E

Proof. This is the direct application of (2.43) to the case where U replaces E, gU ( , ) z z · · replaces g( , ) and LH now plays the role of L∞, with the help of Remark 1.5 1) and · · x0 2). Remark 2.13. 1) The above terminology stems from the fact that the corresponding statement in the case of Brownian motion when x0 = 0 < x in R, can be shown to be equivalent, see Marcus-Rosen [19], p. 367, to the statement

z 2 2 L +(B + B )1 z x has the same law as (2.52) H0 z−x z−x { ≥ } z≥0 2 2  (Bz + Bz )z≥0, e

z e where (Lt , z R, t 0), (Bz, z 0), (Bz, z 0) are independent and respectively distributed as∈ the local≥ time process≥ of Brownian motion≥ starting at x and two independent copies of Brownian motion starting at 0. Notee incidentally that in the present situation ′ ′ ′ G,{0}c=U gU (z, z )=2 z z , for z, z 0, so that (ϕz, z 0) under P is distributed as √2 B , z 0. ∧ ≥ ≥ z ≥ The statement (2.52) can be shown to be equivalent to the more classical statement of the first Ray-Knight theorem (see Marcus and Rosen [19], p. 52):

z L , for z [0, x], under Px (the law of Brownian motion starting from x), H0 ∈ has the law of a two-dimensional squared Bessel process starting at 0, (2.53) and then proceeds from x as a zero-dimensional squared Bessel process z x+y (i.e. conditional on (LH0 )0≤z≤x, (LH0 )y≥0 is distributed as a zero-dimensional squared Bessel process), which stems from the work of Ray [21] and Knight [13]. We also recall that (see Revuz- Yor [23], chapter XI) the law of the δ-dimensional squared Bessel process starting from x, with δ 0, and x 0, is that of the solution of the stochastic differential equation: ≥ ≥ t (2.54) Zt = x +2 Zs dBs + δt, t 0, Z0 p ≥ 40 2 ISOMORPHISM THEOREMS

δ where Bx, s 0, is a Brownian motion. This law is commonly denoted by BESQ (x). It satisfies the important≥ relation, see Revuz-Yor [23], p. 410:

′ If Z. and Z.′ are independent, respectively BESQδ(x) and BESQδ (x′)- (2.55) ′ distributed, then Z. + Z′ is BESQδ+δ (x + x′)-distributed, for δ, δ′, x, x′ 0. . ≥ 2) The type of argument which enables to go from (2.52) to (2.53) uses (2.55) (when δ = δ′ = 1, noting that for a 0, BESQ1(a) is the law of the square of Brownian motion starting from √a), as well as≥ the following property: If X,Y,Z are random n-vectors with non-negative components such that (2.56) X,Z are independent, and Y,Z are independent, and X + Z law= Y + Z, then X law= Y.

n t X n (Proof. Let (t)= E[e− Pi i i ], for t R , be the Laplace transform of X, and , LX ∈ + LY LZ be defined analogously, then we see, by independence, that X (t) Z(t) = X+Z(t) = (t) = (t) (t). Now (t) > 0, and simplifying we seeL thatL (t) =L (t), for LY +Z LY LZ LZ LX LY all t Rn , so that X law= Y .) ∈ + It is important to realize that the assumption that the components of X,Y,Z are non-negative is important for the validity of (2.56)! One can find in Feller [9], p. 506, an example of random variables X,Y,Z such that X,Z are independent, Y,Z are independent, and X+Z law= Y +Z, but X has not the same law as Y ! (These examples come from the fact that there are characteristic functions of distributions, which are supported in [ 1, 1], and there are distinct distributions which can have the same restriction of their respective− characteristic function to [ 1, 1]. Thus, if two distinct characteristic functions agree in [ 1, 1], their products by a− characteristic function supported in [ 1, 1] will be equal. Interpreting− products of characteristic functions in terms of characteristic− functions of sums of independent variables, one obtains the desired examples.) 3) Simple random walk in continuous time on Z also satisfies a corresponding Ray-Knight theorem (or (2.51)). Actually, when x0 = 0 < x, an integer, and when one chooses 1 cz,z+1 = 2 for all z (for the weights), then (2.57) g (z, z′)=2 z z′, for all z, z′ N, when U = Z 0 . U ∧ ∈ \{ } ′ Indeed, when z z 1, by the strong Markov property, letting H0 denote the entrance time of Z , n ≥0, in ≥0 , n ≥ { } H0 ′ ′ ′ ′ g (z, z )= g (z , z )= E ′ 1 Z = z , U U z { k } h kP=0 i ′ and the number of visits to z before hitting 0 under Pz′ is geometric with success param- eter 1 (and hence has expectation 2z′). So g ( , ) corresponds to the restriction of the 2z′ U · · Brownian Green function gR\{0}( , ) to N N! It now follows from (2.51) (in this slightly more general context) that · · ×

z (LH0 )z∈N under Px, for x 1 integer, has the same law as the ≥ z (2.58) restriction to z N of the Brownian local times LH0 , z 0, under Wiener measure∈ starting from x (see (2.53)). ≥  2.4 Generalized Ray-Knight theorems 41

As a preparation for the proof of the generalized second Ray-Knight theorem we record the following fact concerning Gaussian vectors. Proposition 2.14. Let n 1, ψ = (ψ ,...,ψ ) be a centered Gaussian vector with ≥ 1 n invertible covariance matrix A, and V = diag(v1,...,vn) be a deterministic diagonal matrix, such that A−1 V is positive definite. Then for any real numbers b , 1 i n, − i ≤ ≤ n 2 (ψi + bi) E exp vi = h n i=1 2 oi (2.59) P 1 1 2 exp vi bi + vi bi Aij vj bj det(I AV ) n 2 1≤i≤n 1≤i,j≤n o − P P p e where A =(A−1 V )−1 =(I AV )−1 A. − − Moreover,e when ψ is an independent copy of ψ, and α, α, β, β are real numbers such that e e e (2.60) α2 + α2 = β2 + β2, e then for any b , 1 i n, e i ≤ ≤ 2 2 law 2 2 (2.61) (ψi + αbi) +(ψi + α bi) 1≤i≤n = (ψi + βbi) +(ψi + βbi) 1≤i≤n.   Proof. e e e e (2.59): • Note that by assumption A−1 V is positive definite, hence invertible, and A−1 V = − − A−1(I AV ), so that I AV is invertible and A =(A−1 V )−1 =(I AV )−1 A. Moreover, − − − − n 2 e n n (ψi + bi) 1 2 1 n v b2 E exp v = E exp v b ψ + v ψ e 2 Pi=1 i i i 2 i i i 2 i i h n iP=1 oi h n iP=1 iP=1 oi and using that A−1 V = A−1, − n e 1 n E exp v b ψ + v ψ2 = i i i 2 i i h n iP=1 iP=1 oi n n 1 1 −1 n exp vi bi xi xi Aij xj dx1 ...dxn = (2π) 2 √det A ZRn i=1 − 2 i,j=1 n P P o e 1 n det A 2 E exp v b ψ det A i i i  e h n iP=1 oi e where under P ,(ψ1,...,ψn) is a centered Gaussian vector with covariance matrix A, and E denotes the P -expectation. Since det A = det A/det(I AV ), the last expression equals e − e e e 1 e 1 n exp vi bi Aij vj bj . det(I AV ) · n2 i,j=1 o − P p e This proves (2.59). 42 2 ISOMORPHISM THEOREMS

(2.61): • By (2.59), we see that for small v , 1 i n, one has i ≤ ≤ n 2 2 (ψi + αbi) (ψi + αbi) E exp vi + = i 2 2 (2.62) h n P=1 h e e ioi 1 1 n 1 exp v b2(α2 + α2)+ v b A v b (α2 + α2) det(I AV ) 2 i i 2 i i ij j j − n  iP=1 1≤Pi,j≤n o e e e and we obtain the same expression if we now replace α by β and α by β, thanks to (2.60).

The same argument as below (2.34) then shows that the randome vece tors on the left- hand side and on the right-hand of (2.61) have the same characteristic function. The claim (2.61) now follows. We continue with some preparation for the generalized second Ray-Knight theorem. For the next proposition, once again we assume that the killing measure κ vanishes everywhere except at a point:

(2.63) there is x E, such that κ = λ> 0, and κ = 0, for x = x . 0 ∈ x0 x 6 0 We write U = E x , and recall that \{ 0} (1.52) g(x, x ) g(x ,y) g (x, y) = g(x, y) 0 0 U − g(x , x ) (2.64) 0 0 (1.91) y = Ex[LH ], for x, y E. x0 ∈ We introduce (in the same fashion as in (2.50))

(2.65) P G,U , the probability on RE under which ϕ , x E, is a centered x ∈ Gaussian field with covariance EG,U [ϕ ϕ ]= g (x, y), for x, y E. x y U ∈ (2.66) Y , an exponential random variable with parameter λ under Q.

Proposition 2.15. (under (2.63) - (2.66))

x 1 2 G,U L∞ + ϕx under Px0 P , has same law as 2 x∈E ⊗ (2.67)   1 2 G,U (ϕx + √2Y ) under P Q. 2 x∈E ⊗ Proof. Consider Z,Z′ independent centered Gaussian variables with variance λ−1, inde- pendent from ϕ , x E, and ϕ′ , x E, two independent copies P G,U -distributed. x ∈ x ∈ By (2.61), we find that

2 ′ ′ 2 law 2 ′ 2 ′2 2 (ϕx + Z) +(ϕx + Z ) x∈E = ϕx +(ϕx + Z + Z ) x∈E p  law 2 ′ √ 2  = ϕx +(ϕx + 2Y ) x∈E ,  where Y is an exponential random variable with parameter λ, independent of (ϕx)x∈E ′ 2 ′2 law and (ϕx)x∈E, and we used that Z + Z = 2Y (use polar coordinates). 2.4 Generalized Ray-Knight theorems 43

Thus if we show that

x 1 2 1 ′ 2 law 1 2 1 ′ ′ 2 (2.68) L∞ + ϕx + (ϕx) = (ϕx + Z) + (ϕx + Z ) ,  2 2 x∈E 2 2 x∈E it will follow that

x 1 2 1 ′ 2 law 1 2 1 ′ √ 2 L∞ + ϕx + (ϕx) = ϕx + (ϕx + 2Y ) ,  2 2 x∈E  2 2 x∈E and, “simplifying” on both sides, i.e. applying (2.56), we will conclude that

x 1 ′ 2 law 1 ′ √ 2 L∞ + (ϕx) = (ϕx + 2Y ) ,  2 x∈E  2 x∈E i.e. (2.67) will be proved. It remains to prove (2.68). By the same argument as below (2.34), it suffices to prove that for small V : E R, →

G,U G,U x 1 2 1 ′ 2 Ex0 E E exp V (x) L∞ + ϕx + (ϕx) = ⊗ ⊗ h n x∈E  2 2 oi (2.69) ′ P 1 1 EG,U EG,U EZ,Z exp V (x) (ϕ + Z)2 + (ϕ′ + Z′)2 , ⊗ ⊗ 2 x 2 x h n xP∈E  oi where EZ,Z′ denotes the expectation relative to the probability governing Z,Z′. Observe that (writing E in place of EG,U EZ,Z′ ), ⊗ 2 −1 E[(ϕx + Z)(ϕy + Z)] = E[ϕx ϕy]+ E[Z ]= gU (x, y)+ λ , i.e.

ϕ + Z, x E is a centered Gaussian field with covariance (2.70) x g (x, y)+ ∈λ−1, x,y E. U ∈ Lemma 2.16. (under (2.63), with U = E x ) \{ 0} (2.71) g(x, y)= g (x, y)+ λ−1, for x, y E. U ∈ Proof. We have by (2.64)

g(x, x0) g(x0,y) g(x, y)= gU (x, y)+ , for x, y E. g(x0, x0) ∈

Since κ = 0, for x = x0, we see that Px[Hx0 < ] = 1, for any x E, and by (1.51) g(x, x )= g(x , x )=6 g(x , x), for all x E. Hence∞ ∈ 0 0 0 0 ∈ (2.72) g(x, y)= g (x, y)+ g(x , x ), for x, y E. U 0 0 ∈

To compute g(x0, x0), we use the fact that

(1.26) 1 (2.73) g(x , x ) = E 1 Z = x . 0 0 λ x0 { n 0} x0 h nP≥0 i 44 2 ISOMORPHISM THEOREMS

Under Px0 , n≥0 1 Zn = x0 , the total number of visits to x0, is a geometric random { } κ variable withP success parameter x0 = λ (i.e. the probability to jump to the cemetery λx0 λx0 point). As a result we obtain

−1 λ λx0 Ex0 1 Zn = x0 = λ = λ , { } x0 h nP≥0 i   and 1 λx0 −1 g(x0, x0)= = λ . λx0 λ With (2.72) we now find (2.71). By (2.41), we see that for small V

(2.74) E exp V (x) Lx = (I GV )−11 (x ), x0 ∞ − E 0 h n xP∈E oi  and by (2.59), with bi 0, and A playing the role of G or GU , where GU stands for the matrix with components≡ g (x, y), x, y E, we have for small V : U ∈ 1 2 1 ′ ′ 2 ′ P V (x)( 2 (ϕx+Z) + 2 (ϕx+Z ) ) G,U G,U Z,Z x∈E E E E e ] det(I GU V ) (2.75) ⊗ ⊗ 2 ′ 2 = − .  ϕx (ϕx) P V (x)( 2 + 2 ) det(I GV ) EG,U EG,U ex∈E − ⊗   We will now see that the expressions in (2.74) and (2.75) are equal. We observe that by Cramer’s rule for the inverse of a matrix det(M) (I GV )−11 (x )= , − E 0 det(I GV )  − where M is the matrix obtained by replacing the x -column in I GV with 1 everywhere. 0 − Subtracting the x0-row from all other rows of M, we obtain a matrix M with coefficients M , which for x, y = x equal x,y 6 0 f f M = δ g(x, y) V (y)+ g(x ,y) V (y) x,y x,y − 0 (2.71) f = δ g (x, y) V (y), x,y − U and the x0-column of M vanishes everywhere except at row x0: f Mx,x0 = δx,x0 . f Clearly det M = det M, and if we develop the determinant of M along the x0-column, we find that det M = det M = det (I G V ) = det(I G V ). f − U |U×U − fU  We have thus provedf that the expressions in (2.74) and (2.75) are equal. This concludes the proof of (2.69) and hence of (2.67). We will later give another proof of the above proposition based instead on the Dynkin isomorphism theorem. For the time being we proceed with the generalized second Ray-Knight theorem. 2.4 Generalized Ray-Knight theorems 45

We now consider the situation where the killing measure vanishes on E, i.e. (1.1) - (1.3) hold but instead of our usual set-up,

(2.76) κ =0, for all x E. x ∈ 0 0 We denote by Xt , t 0, the canonical process on the space DE of right-continuous E-valued trajectories≥ with finitely many jumps on finite intervals and infinitely many 0 jumps, and by Px , x E, the law of the walk with jump rate 1, and Markovian transition 0 cx,y∈ 0 probability px,y = 0 , for x, y E, with λx = y E cx,y, for x E. λx ∈ ∈ ∈ The local time of the walk is defined by P

t x 0 1 (2.77) ℓt = 1 Xs = x ds 0 , for x E, t 0. Z0 { } λx ∈ ≥

x x 0 x The map t 0 ℓt 0, is continuous non-decreasing, ℓ0 = 0, Pz -a.s., ℓ∞ = , for any x, z E, since≥ we→ are≥ now in a recurrent situation. ∞ ∈ We now consider a special point

(2.78) x E, 0 ∈ and keep the notation U = E x . \{ 0} 0 0 Note that the law of Xt∧H , t 0, under Px agrees with that of Xt∧Hx , t 0, under x0 ≥ 0 ≥ Px, when we instead pick the killing measure κ with the unique non-vanishing point x0 of 0 (2.78), as in (2.63). In particular, the killed Green function gU ( , ) (attached to the walk X0, t 0) coincides with g ( , ), and · · t ≥ U · · 0 0 y (2.79) gU (x, y)= gU (x, y)= Ex[ℓH ], for x, y E. x0 ∈ We now introduce the right-continuous inverse of t ℓx0 : → t (2.80) σ = inf t 0; ℓx0 >u , for u 0. u { ≥ t } ≥ Theorem 2.17. (Generalized second Ray-Knight theorem) Keeping the notation of (2.65), for any u> 0,

x 1 2 0 G,U ℓσu + ϕx under Px P , has the same law as 2 x∈E 0 ⊗ (2.81)   1 2 G,U (ϕx + √2u) under P ,  2 x∈E (we will later explain the origin of the above terminology, see Remark 2.19).

Proof. Consider as in (2.66) an exponential variable Y with parameter λ> 0 (see (2.63)), under some auxiliary probability Q. First assume that we can show that

x 0 ℓσ under Px Q, has the same law as (2.82) Y x∈E 0 ⊗ x  L∞ x∈E under Px0 ,  and let us explain how (2.81) follows. 46 2 ISOMORPHISM THEOREMS

By (2.67) and (2.82), we then find that under P 0 P G,U Q, x0 ⊗ ⊗ x 1 2 1 √ 2 (2.83) ℓσY + ϕx has the same law as (ϕx + 2Y ) .  2 x∈E  2 x∈E

As a result for any V : E R : → + ∞ 1 E0 EG,U exp V (x) ℓx + ϕ2 e−λudu = Z x0 ⊗ − σu 2 x 0 h n xP∈E  oi (2.84) ∞ G,U 1 2 −λu E exp V (x) (ϕx + √2u) e du, Z − 2 0 h n xP∈E oi

0 G,U (indeed, multiplying both members by λ yields on the left-hand side the Px0 P Q- expectation of exp V (x)(ℓx + 1 ϕ2 ) , and on the right-hand side the⊗ correspond-⊗ {− x∈E σY 2 x } ing expectation of expP V (x) 1 (ϕ + √2Y )2 ). {− x∈E 2 x } Now P 1 u 0 E0 EG,U exp V (x) ℓx + ϕ2 0, ≥ → x0 ⊗ − σu 2 x ≥ h n xP∈E  oi

x is a bounded right-continuous (because u σu is right-continuous, t 0 ℓt 0 is continuous, and we use dominated convergence)→ function. Similarly,≥ u → 0 ≥ EG,U [exp V (x) 1 (ϕ + √2u)2 ] 0, is a bounded continuous function. By≥ (2.84)→ {− x∈E 2 x } ≥ their LaplaceP transforms are equal and they are hence equal. But this implies that for any u> 0, the Laplace transform of the law of ℓx + 1 ϕ2 under P 0 P G,U , is equal σu 2 x x∈E x0 ⊗ 1 √ 2  G,U to the Laplace transform of the law of 2 (ϕx + 2u) x∈E under P , and the claim (2.81) follows.  So there remains to prove (2.82). For this purpose it is convenient to introduce the time-changed process defined simi- larly as in (1.96):

0 0 X = X 0 , for u 0, where u τu ≥ u τ 0 = inf t 0; ℓ0 u = λ0 dv, with ℓ0 = ℓx (2.85) u t Xv t t { ≥ ≥ } Z0 x∈E t P 0 0 = 1/λ 0 ds, (t 0 ℓ 0 is an increasing bijection of R ). Xs t + Z0 ≥ → ≥

If we now define, cf. (1.97),

u x 0 (2.86) ℓu = 1 X v = x dv, for u 0, x E, Z0 { } ≥ ∈ then as in (1.99), (1.100) we see that

0 0 x x (2.87) X = X 0 , t 0, and ℓ = ℓ 0 , for x E, t 0. t ℓt ≥ t ℓt ∈ ≥ 2.4 Generalized Ray-Knight theorems 47

Now, corresponding to (2.80), we can introduce

(2.87) x0 0 (2.88) σv = inf u 0; ℓu > v = ℓσ , for v > 0, { ≥ } (2.80) v so that

x (2.87) x (2.88) x (2.89) ℓσ = ℓℓ0 = ℓσ , for any x E. Y σY Y ∈ The key to the identity in law (2.82) will come from the next representation of the law of X. under Px. Lemma 2.18. (x E) ∈ def 0 Zu = Xu, for u< σY , def= ∆, for u σ , (2.90) ≥ Y has the same law (on D ) under P 0 Q as E x ⊗ X , u 0, under P . u ≥ x Proof. We consider 0 = u

0 0 0 x0 0 −λℓu (2.92) Ex[f0(Xu0 ) f1(Xu1 ) ...fn(Xun ) e ].

0 By the corresponding statement to (1.98), we know that Xu, u 0, is a Markov chain with Markovian transition semi-group ≥

0 0 0 R f(x)= E0[f(X )] = etL f(x), t 0, where t x t ≥ L0 f(x)= c f(y) λ0 f(x), for f: E R. x,y − x → yP∈E

The application of the Markov property to (2.92) at times un−1,...,u0 shows that the expression in (2.92) equals

0 0 0 (f0 Su1 f1 Su2−u1 f2 ...fn−1 Sun−un−1 fn)(x), where 0 def 0 x0 S f(x) = E0[f(X ) e−λℓu ], for f: E R, x E, u 0. u x u → ∈ ≥ The corresponding version of the Feynman-Kac formula (1.105), see (2.86), shows that

0 0 u(L −λ1{x }) uL Su f(x)= e 0 f(x)= e f(x), 48 2 ISOMORPHISM THEOREMS because κ = λ = λ λ0 , cf. (2.63). We have thus found that x0 x0 − x0 E0 EQ[f (Z ) f (Z ) ...f (Z )]=(f eu1L f e(u2−u1)L f ...f e(un−un−1)L f )(x) x ⊗ 0 u0 1 u1 n un 0 1 2 n−1 n and by (1.98) we see that this is equal to Ex[f0(Xu0 ) f1(Xu1 ) ...fn(Xun )]. From this and the fact that Z. and X. remain in ∆ once they reach ∆, one easily deduces that the finite dimensional marginals of Z. and X. coincide, whence (2.90). This concludes the proof of the lemma. We can now conclude the proof of (2.82).

(2.89) x For all x E, we have ℓx = ℓ = ∞ 1 Z = x du, by the definition of Z. So ∈ σY σY 0 { u } under P 0 Q R x0 ⊗ ∞ ∞ x law (ℓσY )x∈E = 1 Zu = x du = 1 Xu = x du (2.93)  Z0 { } x∈E (2.90)  Z0 { } x∈E (1.97) x (1.101) x = (L∞)x∈E = (L∞)x∈E, under Px0 , and we have completed the proof of (2.82) and hence of (2.81). Remark 2.19. 1) The “generalized second Ray-Knight theorem” was originally proved in [8]. The termi- nology stems from the fact that in the case of Brownian motion when x0 = 0, the statement corresponding to (2.81) yields that, see Marcus-Rosen [19], p. 53, for any u> 0,

x 2 √ 2 (2.94) Lσu + Bx x≥0 has the same law as (Bx + u) x≥0,   z when (Lt , z R, t 0) and (Bx, x 0) are independent and respectively distributed as the local∈ time process≥ of a Brownian≥ motion starting at 0, and a Brownian motion starting at 0, and we have set

σ = inf t 0; L0 >u . u { ≥ t } This statement, using arguments described earlier, is equivalent to the more traditional formulation:

x Under Wiener measure starting at 0, (Lσu )x≥0 has the same (2.95) law as a zero-dimensional squared Bessel process starting at u, (i.e. BESQ0(u) in the notation below (2.54)).

2) The same argument that we used below (2.57), shows that one also has a similar 1 identity for the random walk on Z, when cz,z+1 = 2 , for all z. Namely for u> 0, (ℓx ) under P has same law as the restriction to integer (2.96) σu x∈N 0 times x N of Lx under Wiener measure in (2.95). ∈ σu 3) In the case of random interlacements on a transient weighted graph, one can establish an identity in law in the spirit of the generalized second Ray-Knight theorem, see [28]. It relates the field of occupation times of random interlacements at level u to the Gaussian free field on the transient weighted graph, cf. (4.86).  2.4 Generalized Ray-Knight theorems 49

Complement: a proof of (2.67) based on the Dynkin isomorphism theorem We now provide a second proof of Proposition 2.15, which makes direct use of the Dynkin isomorphism theorem. We recall the notation (2.63) - (2.66).

Second proof of (2.67): By the Dynkin isomorphism theorem, cf. (2.33), we see that

x 1 2 G 1 2 (2.97) L∞ + ϕx under P x0,x0 P , has the same law as ϕx under µ0,  2 x∈E ⊗ 2 x∈E where we have introduced the probabilities 1 P = P (on Γ), x0,x0 g(x , x ) x0,x0 (2.98) 0 0 1 2 G E µ0 = ϕx0 P (on R ). g(x0, x0)

The next observation is that if we consider the process with trajectories in DE,

Xt+ (γ) = lim Xt+ε(γ), for γ Γ, ε↓0 ∈

(the only time where Xt(γ) = Xt+ (γ) is when t = ζ(γ), the duration of γ, see below (2.16)), we have the following:6

(2.99) the law on D of X + , t 0, under P is equal to P . E t ≥ x0,x0 x0 Indeed g( , x ) = g(x , x ), by (2.71), and looking at (2.23), we see that the finite di- · 0 0 0 + mensional distributions of Xt, t 0, under P x0,x0 , which coincide with those of Xt , t 0, under P , since P [ζ≥= t] = 0, for any t, are equal to the finite dimensional ≥ x0,x0 x0,x0 distributions of Xt, t 0 (i.e. the canonical process on DE), under Px0 . The claim (2.99) follows. ≥ x 1 ∞ As a result of (2.99) and of the formula L = 1 Xs = x ds, x E, we see that ∞ λx 0 { } ∈ R (2.100) Lx , x E, under P , has the same law as Lx , x E, under P . ∞ ∈ x0,x0 ∞ ∈ x0 As for the right-hand side of (2.97), we use the following Lemma 2.20.

1 2 1 2 (2.101) ϕx under µ0, has the same law as (ψx + X) , 2 x∈E 2 x∈E G,U where ψx, x E, is P -distributed and independent from X, which is distributed as 2 ′2 2∈1 ′ (Z +Z +Z ) 2 , where Z,Z , Z are independent centered Gaussian variables with variance λ−1. e e Proof. Note that for any x, y E: ∈ (2.71) EG[(ϕ ϕ ) ϕ ]= g(x, x ) g(x , x ) = 0, and x − x0 x0 0 − 0 0 (2.71) EG[(ϕ ϕ )(ϕ ϕ )] = g (x, y). x − x0 y − x0 U 50 2 ISOMORPHISM THEOREMS

As a result (this is also a special case of Proposition 2.3), we find that: ϕ and (ϕ ϕ ) are independent under P G, (2.102) x0 x − x0 x∈E and (ϕ ϕ ) is P G,U -distributed. x − x0 x∈E As a consequence, under µ0,

1 2 1 2 law 1 2 ϕx = (ϕx ϕx0 + ϕx0 ) = (ψx + ϕx0 )  2 x∈E 2 − x∈E  2 | | x∈E G,U where ψ and ϕx0 are independent in the last expression, with ψ having distribution P , and ϕ being| under| the law µ . The claim (2.101) will thus follow once we see that | x0 | 0 (2.103) under µ , ϕ has the distribution of the variable X. 0 | x0 | To this end we note that for f: R R, bounded measurable + → 3 (2.98),(2.71) λ 2 λt2 µ0 G 2 2 − 2 E [f( ϕx0 )] = λE [ϕx0 f( ϕx0 )] = 1 t f( t ) e dt | | | | (2π) 2 ZR | | 3 ∞ 2λ 2 λr2 2 − 2 = 1 r f(r) e dr, (2π) 2 Z0 and that using polar coordinates in R3,

3 3 2 ∞ λ 2 λ|z| λ 2 λr2 − 2 2 − 2 E[f(X)] = 3 f( z ) e dz = 3 4π r f(r) e dr (2π) 2 ZR3 | | (2π) 2 Z0 = Eµ0 [f( ϕ )], whence the claim (2.101). | x0 | 

x By (2.97), (2.100), (2.101) we can conclude, with L∞, x E, now considered under Px0 G ∈ (and (ψx + Z)x∈E, having the law P , see (2.70), (2.71)), that:

x 1 2 law 1 2 L∞ + (ψx + Z) = (ψx + X) .  2 x∈E 2 x∈E Adding to both sides an independent copy 1 (ψ′ )2, x E, of 1 ψ2, x E, we see that 2 x ∈ 2 x ∈ x 1 2 1 ′ 2 law 1 2 1 ′ 2 L∞ + (ψx + Z) + (ψx) = (ψx + X) + (ψx)  2 2 x∈E 2 2 x∈E 2 law 1 2 2 1 ′ ′ 2 (2.104) = ψx + Z + Z + (ψx + Z ) (2.61)  2  q  2 x∈E e law 1 2 1 ′ ′ = (ψx + √2Y ) + (ψx + Z ) , 2 2 x∈E where Y , as in (2.66), is an independent exponential variable with parameter λ. Of course, 1 2 1 ′ ′ 2 2 (ψx + Z) has the same law as 2 (ψx + Z ) , x E, and simplifying on both sides of (2.104), i.e. applying (2.56), we obtain: ∈ x 1 ′ 2 law 1 √ 2 (2.105) L∞ + (ψx) = (ψx + 2Y ) .  2 x∈E  2 x∈E This is simply a reformulation of (2.67). 51

3 The Markovian loop

In this chapter we will introduce the measure describing the Markovian loop and study some of its properties. For this purpose it will be convenient to first discuss the rooted (or based) loops as well as pointed loops. The Markovian loops come up as unrooted loops, which live in the space of rooted loops modulo time-shift. We refer to Le Jan [18] for an extensive discussion of Markovian loops and their properties.

3.1 Rooted loops and the measure µr on rooted loops This section is devoted to the introduction of the general set-up for loops, the construction of the σ-finite measure µr governing rooted loops, and the discussion of some of its basic properties. We first introduce the space of rooted loops of duration t> 0:

L = the space of right-continuous functions [0, t] E, (3.1) r,t with finitely many jumps and same value in→ 0 and t.

We denote by X , 0 s t, the canonical coordinates, and we extend periodically the s ≤ ≤ function s [0, t] Xs(γ), for any γ Lr,t, so that Xs(γ) is well-defined for any s R. Let us underline∈ that→ the spaces L are∈ pairwise disjoint, as t varies over (0, ). ∈ r,t ∞ We then define the space of rooted loops via the formula

(3.2) Lr = Lr,t, t>[0 and for γ L , a rooted loop, we denote the duration of γ by ∈ r (3.3) ζ(γ) = the unique t> 0 such that γ L . ∈ r,t

We define a σ-algebra on Lr in a similar fashion as below (2.18), i.e. we identify Lr with L (0, )via themap(w, t) L (0, ) γ( )= w( · ) L , and endow L (0, ) r,1× ∞ ∈ r,1× ∞ → · t ∈ r r,1× ∞ with the canonical product σ-algebra (where Lr,1 is endowed with the σ-algebra generated by the maps X , 0 s 1, from L into E). s ≤ ≤ r,1 It will be convenient to parametrize rooted loops with the help of the random variables, which we now introduce. The discrete duration of the rooted loop γ is

N(γ) = (the total number of jumps of 0 s t X ) 1 (3.4) s if t = ζ(γ) > 0 stands for the duration≤ ≤ of→γ. ∨

When N(γ)= n> 1,

(3.5) 0 < T (γ) < < T (γ) < T (γ) ζ(γ) 1 ··· n−1 n ≤ are the successive jump times of the rooted loop γ, and

(3.6) Z0(γ)= γ(0), Z1(γ)= γ(T1),...,Zn−1(γ)= γ(Tn−1), Zn(γ)= γ(Tn)= Z0(γ) are the successive positions of the rooted loop. 52 3 THE MARKOVIAN LOOP

We also extend the definition of Z (γ) to all p Z by periodicity (so that when p ∈ N(γ)= n, Zn(γ)= Z0(γ), Zn+1(γ)= Z1(γ),... ). In the case where N(γ) = 1, the rooted loop γ does not move away from its initial position Z0(γ), and has duration ζ(γ); we will call γ a trivial loop. The case where

(3.7) N(γ)= n> 1, and Tn(γ)= ζ(γ), corresponds to the situation where the rooted loop has a jump at time ζ(γ) (which taking into account the periodicity of the function s X (γ), “corresponds to time 0”); we will → s call γ such that either N(γ)=1, or N(γ)= n> 1 and Tn(γ)= ζ(γ), a pointed loop. With the above variables we can of course reconstruct γ L since ∈ r for N(γ) = 1, γ(s)= Z , for 0 s ζ(γ), 0 ≤ ≤ for N(γ)= n> 1, γ(s)= Z0, for 0 s < T1(γ), (3.8) ≤ = Z , for T (γ) s < T (γ), for 1 k n 1, k k ≤ k+1 ≤ ≤ − = Z , for T (γ) s ζ(γ). 0 n ≤ ≤ We also define the shift θ : L L , when v R, via: v r → r ∈ θ (γ) L , for γ L , is the rooted loop γ′ such that (3.9) v r r ζ(γ′)=∈ ζ(γ), and∈X (γ′)= X (γ), for all s R. s s+v ∈

The measure µr on rooted loops: For any t> 0 and x E, we have defined the finite measure P t in (2.19) as the image of ∈ x,x 1 X = x Px on Γ , under the map (X ) . The measure P t is in fact concentrated t λx t s 0≤s≤t x,x { } t on Γt γ(0) = γ(t) Lr,t, cf. (3.1), and we can view t> 0 Px,x as a positive measure kernel∩{ from (0, ) to} ⊆L . We then introduce the measure on→L : ∞ r r ∞ t dt t (3.10) µr[B]= P (B) λx , for measurable subsets B of L . Z x,x t r xP∈E 0

Note that for a> 0, by (2.20), µ [ζ a]= ∞ r (x, x) λ dt 1 g(x, x) λ < r ≥ x∈E a t x t ≤ a x∈E x , whereas µr[Lr] = . So we see that µrPis a σR-finite measure. WeP now collect some ∞ ∞ useful properties of µr.

Proposition 3.1. For 0 t < < t < t, x ,...,x E, one has ≤ 1 ··· k 1 k ∈

µr[Xt1 = x1,...,Xtk = xk, ζ t + dt]= ∈ dt r (x , x ) λ ...r (x , x ) λ r (x , x ) λ , when k> 1, (3.11) t2−t1 1 2 x2 tk−tk−1 k−1 k xk t1+t−tk k 1 x1 t dt = r (x , x ) λ , when k =1, t 1 1 x1 t

(see below for the precise meaning of this formula). 3.1 Rooted loops and the measure µr on rooted loops 53

When n> 1, for t > 0, 1 i n, t> 0, and x ,...,x E, one has i ≤ ≤ 0 n−1 ∈ µr(N = n, Z0 = x0,...,Zn−1 = xn−1,

(3.12) T1 t1 + dt1,...,Tn tn + dtn, ζ t + dt)= ∈ ∈ ∈ e−t p p ...p 1 0 < t < < t < t < t dt dt ...dt dt x0,x1 x1,x2 xn−1,x0 { 1 ··· n−1 n } t 1 2 n where px,y are defined in (1.12) (see below for the precise meaning of this formula). When n =1, for x E, t> 0, 0 ∈ dt (3.13) µ [N =1,Z = x , ζ t + dt]= e−t r 0 0 ∈ t (see below for the precise meaning of this formula). Proof. (3.11): • The precise meaning of (3.11) is obtained by considering some measurable A (tk, ), replacing “ζ t + dt” on the left-hand side, by “ζ A”, and on the right-hand⊆ ∞ side ∈ ∈ multiplying by 1A(t) and integrating the expression over t. When k > 1, we have for t > tk, with the help of the Markov property (see also the proof of (2.23)):

t (2.19) Px,x[Xt1 = x1,...,Xtk = xk] λx =

rt1 (x, x1) λx1 rt2−t1 (x1, x2) λx2 ...rtk−tk−1 (xk−1, xk) λxk rt−tk (xk, x) λx

Summing over x and applying the Chapman-Kolmogorov identity, cf. (1.24), we find that

t Px,x[Xt1 = x1,...,Xtk = xk] λx = (3.14) xP∈E rt2−t1 (x1, x2) λx2 ...rtk−tk−1 (xk−1, xk) λxk rt1+t−tk (xk, x1) λx1 , and (3.11) follows from the above formula and (3.10).

When k = 1, (3.14) is replaced for t > t1 by t (3.15) Px,x[Xt1 = x1] λx = rt (x1, x1) λx1 xP∈E and the last line of (3.11) follows similarly. (3.12): • We use a similar procedure as indicated at the beginning of the proof (3.11) to give a precise meaning to (3.12), see also Remark 2.6. We then observe that for t> 0, x E, ∈ (2.19) P t [N = n, Z = x ,...,Z = x , T t + dt ,...,T t + dt ] λ = x,x 0 0 n−1 n−1 1 ∈ 1 1 n ∈ n n x

Px0 [X. has n jumps in [0, t], XT1 = x1,...,XTn−1 = xn−1,XTn = x0,

T1 t1 + dt1,...,Tn tn + dtn] δx,x0 = (3.16) ∈ ∈ P [Z = x ,Z = x ,...,Z = x ,Z = x ] P [T

where we have also denoted by Tk, k 1, the successive jump times for the continuous Markov chain, see Remark 1.1. ≥ Summing over x E and multiplying by dt , in view of (3.10), (3.16) yields ∈ t µr[N = n, Z0 = x0,...,Zn−1 = xn−1, T1 t1 + dt1, Tn tn + dtn, ζ t + dt]= ∈ e−t∈ ∈ p p ...p 1 0 < t < t < < t < t dt dt ...dt dt, x0,x1 x1,x2 xn−1,x0 { 1 2 ··· n } t 1 2 n i.e. we have proved (3.12). (3.13): • As above, we can write for t> 0 and x E, ∈ t Px,x[N =1,Z0 = x0] λx0 = Px0 X. has no jump in [0, t] δx,x0 = −t   Px0 [T1 > t] δx,x0 = e δx,x0 .

dt so that summing over x and multiplying by t yields e−t µ [N =1,Z = x ,ζ t + dt]= dt, r 0 0 ∈ t i.e. we have proved (3.13).

We continue the discussion of the properties of the measure µr on rooted (also called based) loops, which was introduced in (3.10). In particular we will see that µr is invariant under time-shift and, in a suitable sense, under time-reversal as well. Proposition 3.2. For n> 1, x ,...,x E, one has 0 n−1 ∈ 1 (3.17) µ [N = n, Z = x ,...,Z = x ]= p p ...p = r 0 0 n−1 n−1 n x0,x1 x1,x2 xn−1,x0 1 cx ,x cx ,x ...cx ,x 0 1 1 2 n−1 0 , n λx0 λx1 ...λxn−1

1 (3.18) µ [N = n]= Tr(P n) (recall notation from (1.19)). r n

(3.19) µ [N > 1] = log det(I P ) < . r − − ∞  (3.20) µ [N > 1, (Z ) ]= µ [N > 1, (Z ) ], for any k Z, r k+m m∈Z ∈· r m m∈Z ∈· ∈ (stationarity property of the discrete loop).

(3.21) θ µ = µ , for any v R, in the notation of (3.9), v ◦ r r ∈ (stationarity property of the continuous-time loop). Proof. (3.17): • µr[N = n, Z0 = x0,...,Zn−1 = xn−1] −t (3.12) e = p ...p dt ...dt dt x0,x1 xn−1,x0 t 1 n Z0

(3.18): • µr[N = n]= µr[N = n, Z0 = x0,...,Zn−1 = xn−1] x0,...,xPn−1∈E (3.17) 1 1 = 1 ,P n 1 = Tr(P n), n h x0 x0 i n whence (3.18). xP0∈E

(3.19): • ∞ 1 ∞ 1 µ [N > 1] = Tr(P n)= Tr(P n), since Tr(P ) = 0, r n n nP=2 nP=1 ∞ 1 = Tr P n (the series is convergent by (1.29)). n  nP=1  As already observed, by (1.29) all the eigenvalues of the self-adjoint operator P on L2(dλ), say γ γ , belong to ( 1, 1). So we have the identity 1 ≤···≤ |E| − ∞ 1 (3.22) P n = log(I P ), n − − nP=1 which can for instance be seen after diagonalization of P in some orthonormal basis of L2(dλ), and hence we find that

|E| |E| µ [N > 1] = Tr( log(I P )) = log(1 γ )= log (1 γ ) r − − − − i − − i iP=1 iQ=1 = log(det(I P )), − − and (3.19) is proved.

(3.20): • We pick n> 1, and first show that (3.23) µ [N = n, (Z ) ]= µ [N = n, (Z ) ]. r k+m m∈Z ∈· r m m∈Z ∈·

On N = n , m Zm has period n (so Zm+ℓn = Zm, for all m, ℓ Z), see below (3.6).{ We can} thus→ assume that 0

µr[N = n, (Zk+m)0≤m

Summing over n> 1 in (3.23) yields (3.20). 56 3 THE MARKOVIAN LOOP

(3.21): • We use the following lemma: Lemma 3.3. (t> 0, v R) ∈ (3.24) θ P t λ = P t λ . v ◦ x,x x x,x x  xP∈E  xP∈E Proof. The measure P t λ is concentrated on L , and for γ L , θ (γ) = γ, x∈E x,x x r,t ∈ r,t ℓt for any ℓ Z, due toP the fact that s R Xs(γ) E has period t. We can thus assume that∈ 0

rt2−t1 (x1, x2) λx2 ...rtn−tn−1 (xn−1, xn) λxn rt1 (xn, x1) λx1 (recall that tn = t).

Note that 0 < v + tk+1 t< < v = v + tn t < v + t1 < < v + tk = t, and therefore using once again the calculation− ··· in (3.14), the− expression on··· the right-hand side of (3.25) equals:

rtk+2−tk+1 (xk+1, xk+2) λxk+2 ...rtn−tn−1 (xn−1, xn) λxn

rt1 (xn, x1) λx1 ...rtk+1−tk (xk, xk+1) λxk+1 . This shows that (3.25) holds and completes the proof of (3.24).

We can now complete the proof of (3.21). When B is a measurable subset of Lr, we thus find that: ∞ (3.10) t dt µr[B] = P [B] λx , Z x,x t 0 xP∈E and therefore ∞ −1 t −1 dt θv µr[B]= µr[θv (B)] = Px,x[θv (B)] λx ◦ Z0 x∈E t ∞ P (3.24) t dt = P [B] λx = µr[B]. Z x,x t 0 xP∈E This completes the proof of (3.21).

We will now highlight some invariance properties of the measure µr under time- ∨ reversal. For this purpose it is convenient to introduce the time-reversal map θ: Lr Lr via: → ∨ ′ θ(γ) Lr, for γ Lr, is the rooted loop γ such that (3.26) ′ ∈ ∈ ′ ζ(γ )= ζ(γ) and Xs(γ ) = lim X−s−ε(γ)= X(−s)−(γ), for all s R. ε↓0 ∈ 3.1 Rooted loops and the measure µr on rooted loops 57

Proposition 3.4.

(3.27) µ [N > 1, (Z ) ]= µ [N > 1, (Z ) ] r −m m∈Z ∈· r m m∈Z ∈· (time-reversal invariance of the discrete loop).

∨ (3.28) θ µ = µ (time-reversal invariance of the continuous loop). ◦ r r Proof. (3.27): • One could use (3.28), but it is instructive to give a direct proof. For n> 1, x0,...,xn−1 E, one has by (3.17) ∈

1 cx ,x cx ,x ...cx ,x µ [N = n, Z = x ,...,Z = x ]= 0 1 1 2 n−1 0 , r 0 0 n−1 n−1 n λx0 ...λxn−1 whereas

µr[N = n, Z0 = x0,Z−1 = x1,...,Z−(n−1) = xn−1]=

µr[N = n, Z0 = x0,Z1 = xn−1,Z2 = xn−2,...,Zn−1 = x1] (3.17) 1 cx ,x cx ,x ...cx ,x cx ,x = 0 n−1 n−1 n−2 2 1 1 0 = µ [N = n, Z = x ,...,Z = x ]. n r 0 0 n−1 n−1 λx0 λxn−1 ...λx1 Summing over n> 1 yields the claim (3.27).

(3.28): • We first show that for t> 0,

∨ t t (3.29) θ P λx = P λx. ◦ x,x x,x  xP∈E  xP∈E Indeed, for arbitrary k 1, 0 < t < < t < t, and x ,...,x E, by (3.14) we have ≥ 1 ··· k 1 k ∈ t Px,x[Xt1 = x1,...,Xtk = xk] λx = xP∈E rt2−t1 (x1, x2) ...rtk−tk−1 (xk−1, xk) rt1+t−tk (xk, x1) λx1 ...λxk ,

t whereas using (3.26) and the facts that s Xs(γ) has period t, when γ Lr,t, and Px,x[v is a jump time of s X (γ)] = 0, for any→v R, ∈ → s ∈ ∨ θ P t λ [X = x ,...,X = x ]= ◦ x,x x t1 1 tk k  xP∈E  (with t′ = t t < t′ = t t < < t′ = t t ) 1 − k 2 − k−1 ··· k − 1 t (3.14) P [X ′ = x ,X ′ = x ,...,X ′ = x ] λ = x,x t1 k t2 k−1 tk 1 x xP∈E r ′ ′ (x , x ) r ′ ′ (x , x ) ...r ′ ′ (x , x ) λ ...λ = t2−t1 k k−1 t3−t2 k−1 k−2 t1+t−tk 1 k x1 xk

(using the symmetry of r ( , ), see (1.23), and the definition of the t′ ) s · · i

rtk−tk−1 (xk−1, xk) rtk−1−tk−2 (xk−2, xk−1) ...rt1+t−tk (xk, x1) λx1 ...λxk , 58 3 THE MARKOVIAN LOOP and the claim (3.29) now follows.

As a result, for a measurable subset B of Lr, we find that

∨ ∞ ∨ (3.10) t −1 dt θ µr[B] = P θ (B) λx ◦ Z x,x t 0 xP∈E   ∞ (3.29) t dt (3.10) = P [B] λx = µr[B], Z x,x t 0 xP∈E and this completes the proof of (3.28).

3.2 Pointed loops and the measure µp on pointed loops

In this section we define the σ-finite measure µp governing pointed loops and relate it to the measure µr on rooted loops. As it turns out, the measures µr and µp agree on functions invariant under the shift θ , v R. v ∈ We introduce the measurable subspace of Lr of pointed loops, see below (3.7),

(3.30) L = γ L ; γ is trivial, or N(γ)= n> 1, and T (γ)= ζ(γ) p { ∈ r n } = γ L ; γ is trivial, or 0 is a jump time of s R X (γ) . { ∈ r ∈ → s }

It is convenient to introduce for γ Lr with N(γ) = n > 1, the variables describing the successive durations between jumps∈ of the loop γ (see also Figure 3.1):

(3.31) σ (γ)= T (γ)+ ζ(γ) T (γ), 0 1 − n σ (γ)= T (γ) T (γ),...,σ (γ)= T (γ) T (γ), 1 2 − 1 n−1 n − n−1 so that

(3.32) σ (γ)+ + σ (γ)= ζ(γ). 0 ··· n−1

Tm+1

σm Tm the variables σℓ and Tk in a rooted loop γ with N(γ)= n and ζ(γ)= t

T2 σ1

σ0 T Tn 1 t 0 ∼ Fig. 3.1 3.2 Pointed loops and the measure µp on pointed loops 59

In the case where γ is a pointed loop with N(γ)= n> 1, the variables Z0,...,Zn−1 and σ0,...,σn−1 enable to reconstruct γ, with the help of (3.8) and the identity Tn(γ)= ζ(γ). We now introduce a measure µp on Lp via:

−t dt µp[N =1,Z0 = x0,ζ t + dt]= e , for any x0 E, t> 0,  ∈ t ∈   µp[N = n, Z0 = x0,...Zn−1 = xn−1,  (3.33)  σ s + ds ,...,σ s + ds ]=  0 ∈ 0 0 n−1 ∈ n−1 n−1 1 −(s0+···+sn−1) px ,x ...px ,x e ds0 ...dsn−1, for n> 1, x0,...,xn−1 E,  n 0 1 n−1 0 ∈  s ,...,s > 0.  0 n−1  The meaning of this formla is the same as in (3.11) - (3.13). We will now relate the measure µr on rooted loops and µp on pointed loops.

When γ Lr is such that N(γ) = n > 1, the loops θTm(γ)(γ), for 1 m n, are pointed. We∈ will denote by θ the corresponding map from L N = n≤ γ≤ γ′ = Tm r ∩{ } ∋ → θTm(γ)(γ) Lp N = n . Thus θTm (1 N = n µr) is a measure on Lp N = n . As we will see∈ below∩{ it corresponds} to a◦ type{ of size-biased} modification of 1∩{N = n µ}. { } p Proposition 3.5. (n> 1) For any 1 m n, x ,...,x E, s ,s ,...,s > 0, ≤ ≤ 0 n−1 ∈ 0 1 n−1 θ (1 N = n µ )[Z = x ,...,Z = x , Tm ◦ { } r 0 0 n−1 n−1 σ s + ds ,...,σ s + ds ]= (3.34) 0 ∈ 0 0 n−1 ∈ n−1 n−1 sn m p ...p , − e−(s0+···+sn−1) ds ds ...ds . x0,x1 xn−1,x0 s + + s 0 1 n−1 0 ··· n−1 For any 1 m n, ≤ ≤ σ (3.35) θ (1 N = n µ )= n n−m 1 N = n µ . Tm ◦ { } r σ + + σ { } p 0 ··· n−1 When F : Lr R+ is a bounded measurable function such that F θv = F , for all v R, then one has → ◦ ∈

(3.36) F dµr = F dµp (this equality holds as well when n =1). Z{N=n} Z{N=n} Proof. (3.34): • We consider h0,...,hn−1 bounded measurable functions on (0, ), and we extend the definition of σ for ℓ 0,...,n 1 on N = n , to all ℓ Z, using∞ periodicity (i.e. so ℓ ∈{ − } { } ∈ that σℓ+kn = σℓ for all ℓ, k Z). So we find that ∈

1 Z0 = x0,...,Zn−1 = xn−1 h0(σ0) ...hn−1(σn−1) d θT (1 N = n µr) = Z { } m ◦ { }  (3.12) 1 Zm = x0,Zm+1 = x1,...Zm−1 = xn−1 h0(σm) h1(σm+1) ...hn−1(σm−1) dµr = Z{N=n} { }

px0,x1 ...pxn−1,x0 h0(tm+1 tm) ...hn−m−1(tn tn−1) hn−m(t tn + t1) Z − − −t − 0

We can perform a change of variables in the above integral, replacing t1, t2,...tn, t by t1,s1,s2,...sn−1,s0, where

t = t , t = t + s , t = t + s + s ,...,t = t + s + + s , 1 1 2 1 1 3 1 1 2 n 1 1 ··· n−1 t = s + + s + s , 1 ··· n−1 0 which bijectively maps the region 0 < t < t < < t < t into the region 0 < t 0,...,sn−1 > 0. So the above expression equals:

px0,x1 ...pxn−1,x0 h0(sm) ...hn−m−1(sn−1)hn−m(s0)hn−m+1(s1) ...hn−1(sm−1) Z 00,...,sn−1>0 1 e−(s0+···+sn−1) dt ds ...ds = (s + + s ) 1 0 n−1 0 ··· n−1

px0,x1 ...pxn−1,x0 h0(sm) ...hn−m−1(sn−1)hn−m(s0)hn−m+1(s1) ...hn−1(sm−1) Zs >0,...,s >0 0 n−1 s 0 e−(s0+···+sn−1) ds ...ds = (s + + s ) 0 n−1 0 ··· n−1 (by relabeling variables)

px0,x1 ...pxn−1,x0 h0(s0) ...hn−1(sn−1) Zs0>0,...,sn−1>0 sn−m e−(s0+···+sn−1) ds ...ds , (s + + s ) 0 n−1 0 ··· n−1 and this proves (3.34).

(3.35): •

Combining (3.34) and (3.33), we see that for h0,...,hn−1 as above and x0,...xn−1 E, one has ∈

1 Z0 = x0,...,Zn−1 = xn−1 h0(σ0) ...hn−1(σn−1) d(θT (1 N = n µr)= Z { } m ◦ { } σn−m 1 Z0 = x0,...,Zn−1 = xn−1 h0(σ0) ...hn−1(σn−1) n d(1 N = n µp). Z { } σ + + σn { } 0 ··· −1

σn−m From this we see that θT (1 N = n µr) and n 1 N = n µp have the same m ◦ { } σ0+···+σn−1 { } (finite since n> 1) total mass and using Dynkin’s lemma we conclude that they coincide on the σ-algebra of Lp N = n generated by the variables Z0,...,Zn−1, σ0,...,σn−1. This is the full σ-algebra∩{ of measurable} subsets of L N = n , and (3.35) follows. p ∩{ }

(3.36): • On N = n , with n > 1, we set θ (γ) def= θ (γ), for 1 m n, and note that { } Tm Tm(γ) ≤ ≤ 3.3 Restriction property 61

F θ = F on N = n (since F θ = F for all v R). As a result we find that ◦ Tm { } ◦ v ∈ 1 n F dµr = F θTm dµr Z Z n m ◦ {N=n} {N=n} P=1 1 n = Fd θT (1 N = n µr) n Z m ◦ { } mP=1  n (3.35) 1 σn−m = F n dµp n m Z σ + + σn P=1 {N=n} 0 ··· −1

= F dµp, Z{N=n} and (3.36) follows.

We now continue with the discussion of the properties of the measures on rooted and pointed loops we have introduced. Our next topic will be the restriction property.

3.3 Restriction property This is a short section where we state and prove the restriction property. Informally said, the measures µr and µp have the property that restricting them to the set of loops that remain in U amounts to working with the modified weights and killing measure corresponding to killing when exiting U, see Remark 1.5. When U is a subset of E, we define

(3.37) L = γ L ; X (γ) U for all s R notation= γ L ; γ U , r,U { ∈ r s ∈ ∈ } { ∈ r ⊆ } Proposition 3.6. (restriction property) When U is a connected (non-empty) subset of E

(3.38) 1Lr,U µr = µr,U , and

(3.39) 1Lr,U µp = µp,U , where µr,U and µp,U respectively stand for the analogues of µr in (3.10) and µp in (3.33), when U replaces E, and U is endowed with the weights cx,y, x, y U, and the killing measure κ = κ + c , x U (cf. Remark 1.5). (When U∈is not connected the x x y∈E\U x,y ∈ above identities applyP to the different connected components of U, which induce a partition of Lr,U .) e Proof. (3.38): • When n> 1, t > 0, 1 i n, t> 0, and x ,...,x U, by (3.12): i ≤ ≤ 0 n−1 ∈ µ (N = n, Z = x ,...,Z = x , T t + dt ,...,T t + dt , r 0 0 n−1 n−1 1 ∈ 1 1 n ∈ n n ζ t + dt)= (3.40) ∈ −t cx0,x1 cx1,x2 ...cxn−1,x0 e 1 0 < t1 < < tn < t dt1 ...dtndt. λx0 λx1 ...λxn−1 { ··· } t 62 3 THE MARKOVIAN LOOP

Note that λ def= c + κ = λ , for x U, and hence the expression in (3.40) equals x y∈U x,y x x ∈ P µ (N e= n, Z = x ,...,Z = x , T t + dt ,...,T = t + dt ,ζ t + dt). r,U 0 0 e n−1 n−1 1 ∈ 1 1 n n n ∈ By (3.13), a similar equality holds when n = 1.

So we see that for each n 1, 1Lr,U µr and µr,U coincide on the σ-algebra of Lr,U N = n generated by the variables≥ Z ,...,Z , T ,...T ,ζ. By (3.8), this is the full∩ { } 0 n−1 1 n−1 σ-algebra of measurable subsets of Lr,U , and (3.38) follows. (3.39): • The argument is similar, and now uses the formula (3.33) for µp.

3.4 Local times In this section we define the local time of rooted loops and derive an identity, which will play an important role in the next chapter, when calculating the Laplace transform of the occupation field of a Poisson gas of Markovian loops. We define the local time of the rooted loop γ L at x E: ∈ r ∈ ζ(γ) 1 (3.41) Lx(γ)= 1 Xs(γ)= x ds . Z0 { } λx Note that the local time is invariant under time-shift: (3.42) L θ = L , for any x E, v R, x ◦ v x ∈ ∈ and also invariant under time-reversal: ∨ (3.43) L θ = L , for any x E. x ◦ x ∈ As a consequence of (3.36) and (3.42), we see that we can indifferently use µr or µp when evaluating “expectations” of functions of (Lx)x∈E . An important role is played by the next proposition, which computes the “de-singularized Laplace transform” of the field of local times of Markovian loops. Proposition 3.7.

−vLx v (3.44) For v 0 and x E, (1 e ) dµr = log 1+ . ≥ ∈ Z{N=1} −  λx  For V : E R , one has: → + − P V (x)Lx x E 1 e ∈ dµr = log det(I + GV ) Z −  (3.45) = log det(I + √V G √V ) det G = log V , −  det G  where G =(V L)−1 (the various members of (3.45) are finite, non-negative, and equal). V − In particular, one has

−vLx (3.46) (1 e ) dµr = log 1+ vg(x, x) , for v 0, and x E. Z − ≥ ∈  3.4 Local times 63

Remark 3.8. Note that gV (x, y)=(GV 1y)(x), x, y E, can be interpreted as the Green function one obtains when choosing κV = κ + V (x),∈x E, as a new killing measure.  x x ∈ Proof of Proposition 3.7. (3.44): • ζ(γ) Since Lx(γ) = 0, when N(γ) = 1 and Z0(γ) = x, and Lx(γ)= , when N(γ) = 1 and 6 λx Z0(γ)= x, we find that

∞ −t (3.13) v −vLx − t e (1 e ) dµr = (1 e λx ) dt. Z{N=1} − Z0 − t

To compute this last integral we use the identity: for 0 a < b and 0 a′ < b′, ≤ ≤ (3.47) b b b′ b′ ′ −a′t −b′t dt −tt′ ′ −at′ −bt′ dt e e = e dtdt = e e ′ . Za − t Za Za′ Za′ − t So choosing a = 0 and letting b , a′ = 1, b′ =1+ v , we obtain: →∞ λx 1+ v λx ′ −vLx dt v (1 e ) dµr = ′ = log 1+ , Z{N=1} − Z1 t  λx  whence (3.44).

(3.45): • t We begin with a preparatory calculation. By the definition of Px,x in (2.19) and Ly in (3.41) we see that

− P V (y)Ly t V t 1 − R (Xs)ds 1 E 1 e y∈E = P [X = x] E 1 X = x e 0 λ . x,x − x t λ − x { t } λ   x   x Using Feynman-Kac’s formula (1.84) (for V = 0, this boils down to (1.18)), we find that for V : E R , t> 0, x E, → + ∈ − P V (y)Ly t 1 t(P −I) t(P −I− V ) (3.48) E 1 e y∈E = e 1 e λ 1 (x) 0. x,x − λ x − x ≥   x  In combination with (3.10), we obtain that

V y L ∞ − P ( ) y V dt y∈E t(P −I) t(P −I− ) (3.49) 1 e dµr = Tr(e ) Tr(e λ ) , Z − Z − t  0 (where both sides can possibly be infinite). We first consider the case of “small V ” and the general case. The case of a “small V ”: • We know by (1.29), see also above (3.22), that all eigenvalues of the self-adjoint operator P on L2(dλ) belong to ( 1, 1). The operator P V is also self-adjoint on L2(dλ), and − − λ using the variational characterization of the largest and smallest eigenvalues of P V : − λ V V sup f, P f ; (f, f)λ =1 and inf f, P f ; (f, f)λ =1 , n  − λ  λ o n  − λ  λ o 64 3 THE MARKOVIAN LOOP respectively, we can pick ε> 0, so that

V (3.50) for V <ε, all eigenvalues of P belong to ( 1, 1). k k∞ − λ −

tP t(P − V ) We then expand e and e λ , and write:

k k t(P −I) t(P −I− V ) −t t k V (3.51) Tr(e ) Tr(e λ )= e Tr(P ) Tr P − k! − − λ kP≥1 h   i (where we took into account the cancellation of the term k = 0). Note that when 0

∞ k − P V (y)Ly t dt y∈E k −t 1 e dµr = Tr(P ) e Z − Z k! t  kP≥1 0 ∞ k t V k dt Tr P e−t − Z k! − λ t kP≥1 0    (3.52) 1 1 V k = Tr(P k) Tr P k − k − λ kP≥1 kP≥1    V and as below (3.22) = log det(I P ) + log det I P + − −  − λ  V = log det I +(I P )−1 .  − λ  By (1.37) and (1.41), we know that (I P )−1 λ−1 = G, so − V det I +(I P )−1 = det(I + GV ),  − λ  and in addition, “multiplying and dividing” by det(√V ) (i.e. by det(√V + η), with η > 0, which one lets go to zero), we find that

det(I + GV ) = det(I + √V G√V ).

Writing further that det G = det(I P )−1 det(λ−1) and that, since λ(I P ) = L, cf. (1.41), − − − V −1 −1 det I P + = det(λ ) det(V L) = det(λ )/det GV ,  − λ  − we find that

V det G log det(I P ) + log det I P + = log . − −  − λ  det GV  Combining these identities, we see that we have proved (3.45) under (3.50), i.e. when V is “small”. 3.4 Local times 65

We now treat the general case.

The general case: • −β P V (x)Lx Note that the function β 0 (1 e x∈E ) dµr [0, ] is non-decreasing, ≥ → Lr − ∈ ∞ finite for small β by the first step,R and we can use the inequality 1 e−(a+b) 1 e−a + 1 e−b for a, b 0, to conclude that finiteness holds for all β 0,− that is: ≤ − − ≥ ≥ −β P V (x)Lx x E (3.53) 1 e ∈ ) dµr < , for β [0, ) Z − ∞ ∈ ∞ Lr (and actually increases to + as β , if V is not identically equal to 0, see also below (3.10)). In addition, as we explain∞ below,→∞ it follows, by domination, that the function

−β P V (x)Lx x E (3.54) β z C; Rez > 0 1 e ∈ ) dµr is analytic. ∈{ ∈ } −→ Z − Lr Indeed, we first note that the integrand in (3.54) has a modulus bounded by 2 when β z C; Rez > 0 , and that µ[N > 1] < , by (3.19). There only remains to∈ show { domination∈ on }N = 1 . To this end we observe∞ that the integrand in (3.54) 1 { } equals β V (x)L exp β V (x)L u du, and has a modulus bounded by 0 x∈E x {− x∈E x } β x RE V (Px)Lx, for β in the sameP domain as above. By (3.13), 1 N = 1 Lz is µ- | | ∈ { } integrableP for each z E. We have thus shown domination of the integrand in (3.54) when β remains in a compact∈ subset of z C; Rez > 0 . The claim (3.54) follows. { ∈ } On the other hand √V G√V is self-adjoint for the canonical scalar product , on h· ·i RE. By (1.35), all eigenvalues of √V G√V are non-negative (because f,Gf 0, for all f : E R). It follows that h i ≥ → (3.55) β 0 det(I + β √V G√V ) 1 is a non-decreasing function, ≥ −→ ≥ and, in addition, that

β > 0 log det(I + β √V G√V ) has an analytic extension (3.56) −→ to z C; Rez > 0 . { ∈ } By the first step, this function agrees with the function in (3.54) for small β in (0, ), and hence for β (0, ) by analyticity. ∞ ∈ ∞ We have thus shown that

−β P V (x)Lx x E (3.57) 1 e ∈ ) dµr = log det(I + β √V G√V ), for all β 0, Z − ≥ and, in particular, for β = 1. Since the equality of the three terms on the right-hand side of (3.45) has been shown below (3.52) (the proof is easily extended to the case of a general V 0), we thus have completed the proof of (3.45). ≥

(3.46): • This is a special case of (3.45) when V = v1{x} with v 0, x E. By ordering E so that x is the first element of E, we can select the basis 1 ,1≥ i ∈E, of the space of functions xi ≤ ≤ 66 3 THE MARKOVIAN LOOP on E. The matrix representing I + GV in this basis is triangular and has the coefficients 1+ g(x, x) v, 1,..., 1 on the diagonal, so that log det(I + GV ) = log(1 + g(x, x)v), and (3.46) follows.  We can now combine the restriction property, see Proposition 3.6, and the above proposition to find, as a direct application: Proposition 3.9. For V : U R , one has: → + − P V (x)Lx x E 1 e ∈ ) dµr = log det(I + GU V ) Z − {γ⊆U} (3.58) = log det(I + √V GU √V ) det G = log U,V , −  det GU  where we have set (see below (3.39) for notation),

G =(V L )−1, with L f(x)= c f(y) λ f(x), for x U, and f : U R, U,V − U U x,y − x ∈ → yP∈U and we have tacitly extended V by 0 outside U in the first and second line of (3.58), and the determinants in the last line of (3.58) are U U-determinants. ×

−vLx (3.59) (1 e ) dµr = log 1+ vgU (x, x) , for v 0, and x U. Z − ≥ ∈ {γ⊆U}  We will also record a variation on Proposition 3.7 in the case where we work with the measure Px,y in place of µr. This identity will be used in the next chapter when proving Symanzik’s representation formula. Most of the work has actually already been done when proving (2.30). Proposition 3.10. For V : E R and x, y E, one has → + ∈ (3.60) E exp V (z) Lz = g (x, y), x,y − ∞ V h n zP∈E oi with g (x, y)=(G 1 )(x) and G =(V L)−1, as in (3.45). V V y V − Proof. By (2.30), we already know that when V is small so that GV ∞ < 1 (recall V 0 here), we have k k ≥

E exp V (z) Lz = (I + GV )−1 G 1 (x). x,y − ∞ y h n zP∈E oi  Then we observe that ( L)−1 = G, cf. (1.37), and hence V L = L(I + GV ), so that − − − −1 (3.61) GV =(I + GV ) G. We have thus proved (3.60) when the smallness assumption GV < 1 holds. k k∞ In the general case V 0, βV L is a self-adjoint operator for the usual scalar product , , which has all its eigenvalues≥ −> 0, when β 0 (cf. (1.42), (1.39)). Using a similar identityh· ·i to (3.61) in the neighborhood of β 0≥ arbitrary: 0 ≥ G =(I +(β β ) G V )−1 G , for β β small enough, β 0, βV − 0 β0V β0V | − 0| ≥ 3.5 Unrooted loops and the measure µ∗ on unrooted loops 67 one sees that

(3.62) β > 0 g (x, y) has an analytic extension to a neighborhood of (0, ). −→ βV ∞ On the other hand, by domination one sees that

(3.63) β z C; Rez > 0 E exp β V (z) Lz is analytic, ∈{ ∈ } −→ x,y − ∞ h n zP∈E oi and coincides for small positive β with gβV (x, y). Hence this equality also holds for all β 0, and choosing β = 1, we obtain (3.60). ≥

3.5 Unrooted loops and the measure µ∗ on unrooted loops

In the last section of this chapter we finally introduce the σ-finite measure µ∗ governing loops. We also define unit weights, which are helpful with calculations involving µ∗.

We define an equivalence relation on the set Lr of rooted loops: (3.64) γ γ′ if and only if γ = θ (γ′) for some v R, ∼ v ∈ and we denote by L∗ the set of equivalence classes of rooted loops (also referred to as unrooted loops), and by π∗ the canonical map

π∗ γ L γ∗ = π∗(γ) L∗, the equivalence class of γ for the (3.65) ∈ r −→ ∈ relation in (3.64). ∼ We endow L∗ with the σ-algebra (see below (3.3) for notation)

(3.66) ∗ = B L∗; (π∗)−1(B) is a measurable subset of L . L ⊆ r  In other words, ∗ is the largest σ-algebra on L∗ such that the map π∗: L L∗ is L r → measurable (we recall that Lr is endowed with the σ-algebra introduced below (3.3)). ∗ ∗ As a consequence of (3.64), when F : L R is measurable, F π : Lr R is invariant under all θ , v R, and measurable. It now→ follows from (3.36)◦ (and its→ straightforward v ∈ extension to the case n = 1) that the measures µr and µp, see (3.10), (3.33), have the same image on L∗. We thus introduce the loop measure (or unrooted loop measure)

(3.67) µ∗ = π∗ µ = π∗ µ , ◦ r ◦ p which is straightforwardly seen to be a σ-finite measure on (L∗, ∗) (indeed, if ζ(γ∗) def= ∗ ∗ L ∗ ζ(γ), for any γ Lr with π (γ) = γ , is the duration of the unrooted loop γ , then for a> 0, µ∗(ζ a)=∈ µ (ζ a) < , see below (3.10)). ≥ r ≥ ∞ The following notion introduced by Lawler-Werner [16] is convenient to handle com- putations with µ∗. Definition 3.11. (unit weight) A measurable function T : L [0, ) is called unit weight when r → ∞ ζ(γ) (3.68) T (θvγ) dv =1, for any γ Lr. Z0 ∈ 68 3 THE MARKOVIAN LOOP

Examples: 1) T (γ)= ζ(γ)−1, for γ L , is a trivial example of unit weight. ∈ r 2) A less trivial example is the following: pick x E, and set ∈ T (γ)= ζ(γ)−1, if γ(t) = x for all t R, 6 ∈  (3.69)  1 γ(0) = x  T (γ)= { } , if γ(t)= x for some t R. ζ(γ) ∈ 0 1 γ(s)= x ds  { }  R Indeed, in the first case (notation: “x / γ”), one has ∈ ζ(γ) T (θvγ) dv = ζ(γ)/ζ(γ)=1, Z0 whereas in the second case (notation: “x γ”), one has ∈ ζ(γ) ζ(γ) ζ(γ) T (θvγ) dv = 1 γ(v)= x dv/ 1 γ(s)= x ds =1. Z0 Z0 { } Z0 { } 

The interest of the above definition comes from the following

Lemma 3.12. If T is a unit weight, then for any non-negative measurable F on L∗, one has (see (2.21) for notation):

∗ ∗ (3.70) F dµ = F π (γ) T (γ) dPx,x(γ) λx. Z ∗ Z ◦ L xP∈E Lr Proof. By (2.21) we see that

∞ ∗ Fubini t ∗ (3.71) F π (γ) T (γ) dPx,x(γ) = λx E [F π T ] dt. Z ◦ Z x,x ◦ xP∈E Lr 0 xP∈E

Moreover, by (3.24), we find that for each t> 0,

t t ∗ 1 t ∗ λx E [F π T ]= dv λx E [(F π θv)(T θv)] x,x ◦ t Z x,x ◦ ◦ ◦ xP∈E 0  xP∈E  and since F π∗ θ = F π∗, and T is a unit weight, so that P t -a.s., t T θ dv = 1, ◦ ◦ v ◦ x,x 0 ◦ v the right-hand side of (3.71) equals R

∞ t ∗ dt (3.10) ∗ (3.67) ∗ λx Ex,x[F π ] = F π dµr = F dµ , Z ◦ t Z ◦ Z ∗ 0 xP∈E Lr L and the claim (3.70) follows. 3.5 Unrooted loops and the measure µ∗ on unrooted loops 69

Example: We consider T as in (3.69) and F 0 measurable on L∗. We write using similar notation as below (3.69): ≥

∗ (3.70) ∗ F dµ = F π (γ) T (γ) dPy,y(γ) λy Z ∗ Z ◦ {γ ∋x} yP∈E {γ∋x}

(3.69) ∗ 1 γ(0) = x = F π (γ) ζ(γ) { } dPy,y(γ) λy y∈E Z{γ∋x} ◦ 1 γ(s)= x ds P 0 { } R ζ(γ) −1 ∗ ds = F π (γ) 1 γ(s)= x dPx,x(γ) Z{γ∋x} ◦  Z0 { } λx 

(3.41) ∗ −1 = F π (γ) Lx(γ) dPx,x(γ). Z{γ∋x} ◦

In other words, we have proved that

∗ ∗ ∗ 1 (3.72) 1 γ x dµ = π dPx,x , for x E, { ∋ } ◦ Lx  ∈ or, alternatively, since Lx is invariant under θv, cf. (3.42),

(3.72’) 1 γ∗ x L µ∗ = π∗ P , for x E, { ∋ } x ◦ x,x ∈ ∗ ∗ ∗ where we have set Lx(γ )= Lx(γ), for any γ with π (γ)= γ .

Note that by (2.34) the total mass of Px,x equals g(x, x), so that

∗ (3.73) Lx dµ = g(x, x), for x E. Z{γ∗∋x} ∈  70 3 THE MARKOVIAN LOOP 71

4 Poisson gas of Markovian loops

In this chapter we study the Poisson point process on the space L∗ of unrooted loops with intensity measure αµ∗, with α a positive number. In particular, we relate the occupation field of this gas of loops to the Gaussian free field, and prove Symanzik’s representation formula. At the end of the chapter we explore several precise meanings for the notion of “loops going through infinity” and relate them to random interlacements.

4.1 Poisson point measures on unrooted loops In this section we briefly introduce the set-up for Poisson point measures on the set of loops, and recall some basic identities for the Laplace transforms of these Poisson point measures. We consider pure point measures ω on (L∗, ∗), i.e. σ-finite measures of the form ∗ L ω = δγ∗ , where γ , i I, is an at most countable collection of unrooted loops, such i∈I i i ∈ that Pω(A) < , for all A = γ∗ L∗; a ζ(γ∗) b , with 0 0, we consider (4.6) P : the Poisson point measure on (L∗, ∗) with intensity αµ∗ α L (see (3.67) for the definition of µ∗). We will call probability measure P on (Ω, ), the α A Poisson gas of Markovian loops at level α> 00. Note that under Pα the point measure ω is a.s. infinite, but by (3.19) its restriction to N > 1 L∗, is a.s. a finite point measure (we use the notation N(γ∗)= N(γ) for any{γ with }π∗ ⊆(γ)= γ∗). 72 4 POISSON GAS OF MARKOVIAN LOOPS

Lemma 4.2. Consider a measurable function Φ: L∗ R , then → + −hω,Φi −Φ ∗ (4.7) Eα[e ] = exp α (1 e ) dµ n − ZL∗ − o ∗ (where ω, Φ = φ(γ ), for ω = δγ∗ Ω). h i i∈I i i∈I i ∈ If Φ vanishesP on γ∗: ζ(γ∗) < a Pfor some a> 0, then { } ihω,Φi iΦ ∗ (4.8) Eα[e ] = exp α (e 1) dµ . n ZL∗ − o Proof. (4.7): • When Φ = n a 1 , with A , 1 i n, pairwise disjoint, measurable and µ∗(A ) < , i=1 i Ai i ≤ ≤ i ∞ one has P n n ∗ k (4.4),(4.5) ∗ (αµ (A )) −hω,Φi −aiω(Ai) −aik −αµ (Ai) i Eα[e ]= Eα e = e e (4.6) k! hiQ=1 i iQ=1  kP≥0  n (4.9) = exp αµ∗(A )(1 e−ai ) − i − n iP=1 o = exp α (1 e−Φ) dµ∗ , n − ZL∗ − o so (4.7) holds. ∗ For a general Φ: L R+, we can construct Φℓ Φ as ℓ , with each Φℓ of the above type (by the usual→ measure-theoretic induction↑ construction→ ∞ and the σ-finiteness of µ∗), and find that

(4.9) −hω,Φi monotone −hω,Φℓi −Φℓ ∗ Eα[e ] = lim Eα[e ] = lim exp α (1 e ) dµ convergence ℓ→∞ ↓ ℓ→∞ ↓ n − ZL∗ − o

monotone= exp α (1 e−Φ) dµ∗ , whence (4.7). convergence n − ZL∗ − o (4.8): • The claim follows by similar measure theoretic induction; note that, as remarked below ∗ iΦ (3.10), µ (ζ a) < , so that ω, ζ a < , Pα-a.s., and e 1 is both bounded and µ∗-integrable.≥ ∞ h { ≥ }i ∞ −

4.2 Occupation field We introduce the occupation field of the Poisson gas of Markovian loops in this section, and calculate its Laplace transform. Later on we relate the law of the occupation field of 1 the Poisson gas at level α = 2 to the free field. For ω Ω, x E, we define the occupation field (also called field of occupation times) of∈ω at x∈via: (ω)= ω, L [0, ], (see (3.41), (3.42) for notation), Lx h xi ∈ ∞ (4.10) ∗ = Lx(γ ), if ω = δγ∗ Ω , i i ∈ iP∈I iP∈I where L (γ∗) for γ∗ L∗ is defined below (3.72’). x ∈ 4.2 Occupation field 73

As a consequence of Proposition 3.7 and Lemma 4.2, we obtain the following important theorem describing the Laplace transform of the field of occupation times of a Poisson gas of Markovian loops: Theorem 4.3. (α> 0) For V : E R , one has in the notation of (3.45): → + − P V (x)Lx x E −α Eα e ∈ = det (I + GV )   (4.11) = det (I + √V G√V )−α det G α = V .  det G  For x E and v 0, one has ∈ ≥ −vLx −α (4.12) Eα[e ]= 1+ vg(x, x) .  In particular, one finds that: is Γ α,g(x, x) -distributed Lx  α−1 (4.13) 1 s − s (i.e. has density e g(x,x) 1 s> 0 ), Γ(α) g(x, x)α { } and that (4.14) P -a.s., < , for every x E. α Lx ∞ ∈

Proof. (4.11): • We first note that by (4.10), for ω Ω, V : E R , one has ∈ → + V (x) (ω)= ω, Φ , where Φ(γ∗)= V (x) L (γ∗), for γ∗ L∗. Lx h i x ∈ xP∈E xP∈E We will use (4.7) together with (3.45). Specifically, we have

− P V (x)Lx (4.7) x E −hω,Φi −Φ ∗ Eα e ∈ = Eα e = exp α (1 e ) dµ . − Z ∗ −     n L o Now, by (3.67), we obtain the identity:

(3.67) ∗ − P V (x)Lx(γ) −Φ ∗ −Φ◦π x E (1 e ) dµ = 1 e dµr = 1 e ∈ dµr(γ). Z ∗ − Z − Z − L Lr  Lr  As a result, we find that

− P V (x)Lx − P V (x)Lx x E x E Eα e ∈ = exp α 1 e ∈ dµr − Z −   n Lr  o (3.45) = det (I + GV ) −α { } = det (I + √V G√V ) −α { } det G α = V , and (4.11) follows.  det G  74 4 POISSON GAS OF MARKOVIAN LOOPS

(4.12): • In the special case V = v1 , with v 0, x E, it follows by (3.46) that x ≥ ∈ −vLx −α Eα[e ]=(1+ vg(x, x)) , and (4.12) follows.

(4.14): • −vLx −α Pα[ x < ] = lim Eα[e ] = lim 1+ vg(x, x) =1, L ∞ v→0 v→0  and (4.14) follows.

(4.13): • −α The Laplace transform of the Γ α,g(x, x) -distribution is 1+vg(x, x) , see [9], p. 430, and (4.13) follows.   Remark 4.4. One can also introduce the occupation field of non-trivial loops:

(ω) def= 1 N > 1 ω, L Lx h { } xi (4.15) ∗ ∗ = 1 N(γ ) > 1 Lx(γ ), if ω = δγ∗ Ω, and x E. b { i } i i ∈ ∈ iP∈I iP∈I By (3.44), (3.46) we know, for instance, that for v 0 and x E, ≥ ∈ 1+ vg(x, x) (4.16) (1 e−vLx ) dµ = log . − r 1+ v Z{N>1}  λx 

The same proof used for (4.12) now yields that for v 0, and x E, ≥ ∈ −α −vL 1+ vg(x, x) (4.17) E [e bx ]= . α 1+ v  λx 

In particular, letting v in (4.17) we find that →∞ α (4.18) P [ =0]= λ g(x, x) − , for any x E. α Lx x ∈  b In the same vein, combining (3.44) and (3.45) yields that

− P V (x)Lx det GV x E 1 e ∈ dµr = log + log det λ log det(λ + V ) Z{N>1} − −  det G  − (4.19)  det(I P V )−1 det(I P V ) = log − = log − , − det(I P )−1 det(I P ) − − where P V f(x) def= cx,y f(y), for x E, and f: E R, and we used the y∈E λx+V (x) ∈ → equalities P

det G = det(I P )−1 det λ−1 and det G = det(I P V )−1 det(λ + V )−1 − V − 4.2 Occupation field 75

(recall G =(V L)−1 and note that V L =(λ + V )(I P V )), as well as the identity V − − − − P V (x)Lx x E −V (x)Lx 1 e ∈ dµr = 1 e dµr, Z − Z − {N=1}  xP∈E {N=1}  since Lx(γ) = 0, for all x = γ(0), when N(γ) = 1. We refer to Remark 3.8 for the 6 V probabilistic interpretation of GV , a similar interpretation holds for P as well. By the same proof as in (4.11), we thus see that

− V (x)L V −α P bx det(I P ) (4.20) E e x∈E = − , for V : E R. α det(I P ) →    −  Letting V (x) for every x E, we find that ↑∞ ∈ α (4.21) P [ =0, for all x]= det(I P ) , α Lx −  b a formula which also follows from the identity

(3.67) (3.19) α (4.22) Pα[ω(N > 1) = 0] = exp αµr(N > 1) = det(I P ) . (4.4) {− } −  

Occupation time of the Poisson gas of Markovian loops and the free field:

We now want to relate the field of occupation times x, x E, to the Gaussian free field, i.e. cf. (2.2), the unique probability P G on RE, underL which∈

the canonical coordinates ϕx, x E, are a centered Gaussian field (4.23) ∈ with covariance EG[ϕ ϕ ]= g(x, y), for x, y E. x y ∈ Here is the crucial link, due to Le Jan [17], between the occupation times of the Poisson 1 gas of Markovian loops with the choice α = 2 and the free field (see also (0.11)):

Theorem 4.5.

1 2 G (4.24) ( x)x∈E under Pα= 1 , has same law as ϕx under P . L 2  2 x∈E

Proof. We consider V : E R . On the one hand, we know by (4.11) that → + − 1 (4.25) E 1 exp V (x) x = det(I + GV ) 2 . 2 − L h n xP∈E oi On the other hand, by (2.59) and the fact that V L is positive definite, see (1.39), (1.44), we find that −

G V (x) 2 − 1 (4.26) E exp ϕ = det (I + GV ) 2 . − 2 x h n xP∈E oi

As a result of (4.25) and (4.26), the Laplace transforms of x, x E, under P 1 , and of L ∈ 2 1 ϕ2 , x E, under P G coincide and this yields (4.24). 2 x ∈ 76 4 POISSON GAS OF MARKOVIAN LOOPS

The above result somehow complements the picture stemming from the isomorphism theorems of Dynkin and Eisenbaum, see (2.33) and (2.43). In particular, for any x, y E, ∈ z (L + z)z∈E under Px,y P 1 , has the same “law” as ∞ 2 (4.27) L ⊗ 1 2 G ϕz under ϕx ϕy P ( signed measure when x = y!).  2 z∈E ← 6

As we will now see (4.27) and (4.24) are very close to (0.10), i.e. the representation formula of Symanzik for the moments of random fields of the type (0.9), which we had mentioned in the Introduction.

4.3 Symanzik’s representation formula In this section we state and prove Symanzik’s representation formula. It expresses the moments of certain interacting fields in terms of a Poisson gas of loops and a collection of paths interacting with a random potential. In a way this formula embodies one of the starting points for the various developments covered in these notes. We begin with a lemma concerning even moments of centered Gaussian vectors, which will be used in the derivation of Symanzik’s formula.

Lemma 4.6. Let k 1, and (ψ1,...,ψ2k) be a centered Gaussian vector with covariance matrix A , 1 i, j ≥ 2k, then i,j ≤ ≤ 2k k (4.28) E ψi = cov(Di), hiQ=1 i D1∪···∪DPk={1,...,2k} iQ=1 where the sum is over all pairings D1,...,Dk of 1,..., 2k , i.e. over all unordered partitions of 1,..., 2k into k disjoint sets each containing{ } two elements, and where { } cov(D) def= A , for D = s, t . (Note that some ψ , 1 i 2k, may coincide.) s,t { } i ≤ ≤

Proof. For λ ,...,λ R, 1 2k ∈ 2k 1 2k 2 E exp λ ψ = exp E λ ψ , i i 2 i i h n iP=1 oi n h iP=1  io so that by taking 2k partial derivatives,

2k ∂ ∂ 1 2k 2 E ψ = ... exp E λ ψ i 2 i i hi=1 i ∂λ1 ∂λ2k n h i=1  io λ1,...,λ2k=0 Q P 1 ∂ ∂ n = n ... λi λj Ai,j . n≥0 2 n! ∂λ1 ∂λ2k  1≤i,j≤2k  λ1,...,λ2k=0 P P

In the above series, only the term n = k survives and when looking at

∂ ∂ k ... λi λj Ai,j , ∂λ1 ∂λ2k  1≤i,j≤2k  λ1,...,λ2k=0 P

4.3 Symanzik’s representation formula 77 the only terms to survive are such that each of the k factors gets hit by two (distinct) derivatives, which keep alive exactly two terms (corresponding to the choice of order between i and j) in each factor. Keeping in mind that pairing corresponds to unordered partitions we obtain 1 1 ∂ ∂ k k ... λi λj Ai,j = the right-hand side of (4.28). 2 k! ∂λ1 ∂λ2k  1≤i,j≤2k  λ1=···=λ2k=0 P

Our claim (4.28) follows. Remark 4.7. The Feynman graphs (or diagrams), see [11], p. 146, provide a graphical representation of the “pairings” in the above formula (4.28). One attaches a half-edge to each of 2k distinct vertices and one chooses a match for each vertex so as to obtain one such pairing corresponding to a graph on the 2k vertices, where each vertex belongs to exactly one edge. 6 5 6 5

1 4 1 4

2 3 2 3

Fig. 4.1 When 2k = 6, an example of a Feynman diagram on the right-hand side where the half-edges are paired together. It illustrates an unordered pairing corresponding to D = 2, 6 , D = 1, 4 , D = 3, 5 , see also Fig. 4.2. 1 { } 2 { } 3 { } 

We now consider a probability ν on R+, and define as below (0.9)

∞ (4.29) h(u)= e−vu dν(v), for u 0, Z0 ≥ the Laplace transform of ν. We are interested in the probability measure on RE

2 G,h 1 1 ϕ (4.30) P = exp (ϕ,ϕ) h x dϕ, Zh − 2 E 2 n o xQ∈E   where the normalizing constant Zh is given by the formula:

2 1 ϕx (4.31) Zh = exp (ϕ,ϕ) h dϕ (with dϕ = dϕx). Z E − 2 E 2 R n o xQ∈E   xQ∈E Similar measures arise in mathematical physics, in the context of Euclidean quantum field theory, see [11]. In this context a natural choice for h would be h(u) = e−λu2+σu, with λ > 0 and σ R, see Chapter 17 of [11]. The assumption (4.29) however rules out such ∈ 78 4 POISSON GAS OF MARKOVIAN LOOPS a choice since it implies that log h is convex (by H¨older’s inequality). Nevertheless, it simplifies the presentation made below. G,h We write — for the expectation relative to P , i.e. h ih 1 ϕ2 − 2 E(ϕ,ϕ) x RE F (ϕ) e h( 2 ) dϕ x∈E (4.32) F h = R Q − 1 E(ϕ,ϕ) ϕ2 h i 2 x RE e h( 2 ) dϕ R xQ∈E when, for instance, F : RE R is a bounded measurable function. Note that when → ν = δ0, then h =1E, and one recovers the free field: G,h=1 (4.33) P E = P G, in the notation of (2.2). Symanzik’s formula will provide a representation of the moments of the random field G,h governed by P in terms of the occupation field of a Poisson gas of loops and of the local times of walks interacting via random potentials. A variant of the formula is, for instance, used in Section 3 of [2], to obtain bounds on critical temperatures. As a last ingredient, we introduce an auxiliary probability space (Ω, , P) endowed with a collection V (x, ω), x E, of non-negative random variables (the randomA poten- tials) such that ∈ e e e e (4.34) under P, the variables V (x, ω), x E, are (non-negative) i.i.d. ν-distributed. ∈

Let us point oute that the probabilitye Q in (0.10) coincides with P Pα= 1 . We are now ⊗ 2 ready to state and prove Symanzik’s formula. e w2 w3

y1 x3 x w1 2 y2

x1 y3

Fig. 4.2: The paths w1,w2,w3 in E, and the gas of loops interact through the random potentials (k = 3, and the z1,...,z6 are distinct). Theorem 4.8. (Symanzik’s representation formula) For any k 1, z ,...,z E, one has ≥ 1 2k ∈

ϕz1 ...ϕz2k h = h i x x − P V (x,ω)(Lx(ω)+L∞(w1)+···+L∞(wk)) 1 x∈E e (4.35) Ex1,y1 Exk,yk E E e ⊗···⊗ ⊗ ⊗ 2 , − P V (x,ω)Lx(ω)  pairings of e x∈E e {1,...,P2k} E E 1 e ⊗ 2   e where xi,yi = zℓ; ℓ Di , 1 i k, and D1,...,Dk stands for the (unordered) pairing{ of 1},...,{2k , and∈ w}denotes≤ the≤ integration variable under P . { } i xi,yi 4.3 Symanzik’s representation formula 79

Proof. By (3.60), we know that for ω Ω, x, y E, ∈ ∈ z − P V (z,ω)L∞ z∈E ee (4.36) Ex,y e e = gV (·,ω)(x, y),   e which is a symmetric function of x, y, so that the expression under the sum in the right- hand side of (4.35) only depends on the pairing and is therefore well-defined. E Let F be a bounded measurable function on R . Denote by (F )h the numerator of (F )h (4.32) so that F h = . We have the identity: h i (1)h 1 2 (4.34) − 2 [E(ϕ,ϕ)+ P V (x,ω)ϕx] x E e (F )h = E F (ϕ) e ∈ dϕ h ZRE i e |E| 1 G,V (·,ω) (4.37) = E E e F (ϕ)(2π) 2 (det GV (·,ω)) 2 ⊗ e   e (4.11) − P V (x,ω)Lx(ω) |E| 1 G,V (·,ω) x∈E e = E E 1 E e F (ϕ) e (2π) 2 (det G) 2 , ⊗ 2 ⊗   e −1 below (3.45) where the second line is a consequence of (2.5), (2.36), and (V ( , ω) L) = GV (·,ω). As a result of (4.28), we thus find that · − e e

(ϕz1 ...ϕz2k )h =

− P V (x,ω)Lx(ω) x∈E e E E 1 GV (·,ω)(x1,y1) ...GV (·,ω)(xk,yk) e 2 pairings of ⊗ e e × {1,...,2k}   P e |E| 1 (4.36) (4.38) (2π) 2 (det G) 2 = x x − P V (x,ω)(Lx(ω)+L∞(w1)+···+L∞(wk)) 1 x∈E e Ex1,y1 Exk,yk E E e 2 pairings of ⊗···⊗ ⊗ ⊗ × {1,...,2k}   P e |E| 1 (2π) 2 (det G) 2 .

In the same way, we find by (4.37) that

− P V (x,ω)Lx(ω) |E| 1 x E e (4.39) (1)h = E E 1 e ∈ (2π) 2 (det G) 2 . ⊗ 2   e Taking the ratio of (4.38) and (4.39) precisely yields (4.35). Remark 4.9. When k = 1, Symanzik’s representation formula (4.35) becomes

z − P V (z,ω)(Lz (ω)+L∞(w)) z∈E e Ex,y E E 1 e 2 ϕx ϕy h = ⊗ ⊗ .  − P V (z,ω)Lz(ω)  h i z E e eE E 1 e ∈ ⊗ 2   e We explain below another way to obtain this identity. Recall that (4.27) combines 1 2 Dynkin’s isomorhism theorem and the identity in law of ( x)x∈E under P 1 , with ( ϕx)x∈E L 2 2 under P G, stated in Theorem 4.5. 1 2 G − Pz E V (z,ω)ϕz By (4.27) the numerator equals E E [ϕx ϕy e 2 ∈ e ], whereas by (4.24) the G − 1 ⊗V (z,ω)ϕ2 denominator equals E E [e 2 Pz∈E z ]. Keeping in mind (4.29) and (4.34), one e e easily recovers the left-hand⊗ side of the above quality.  e 80 4 POISSON GAS OF MARKOVIAN LOOPS

4.4 Some identities In this section we discuss some further formulas concerning the Poisson gas of Markovian loops. In particular given two disjoint subsets of E, we derive a formula for the probability that no loop of the Poisson gas visits both subsets. In the next section, as an application, we will link the so-called random interlacements with various notions of “loops going through infinity” for the Poisson cloud of Markovian loops. Given U E, we can consider the field of occupation times of loops contained in U: ⊆ U (ω)= 1 γ∗ U ω, L Lx h { ⊆ } xi (4.40) ∗ ∗ = 1 γ U Lx(γ ), if ω = δγ∗ Ω, x E, { i ⊆ } i i ∈ ∈ Pi∈I iP∈I

∗ and we used the slightly informal notation 1 γi U in place of 1 γi Lr,U , where γ L is any rooted loop such that π∗(γ )= {γ∗, and⊆ L} has been defined{ ∈ in (3.37).} i ∈ r i i r,U Proposition 4.10. (α> 0) Given K E, U = E K, and V : E R , ⊆ \ → + U α − P V (x)(Lx−Lx ) det GV det GU x E Eα e ∈ =  det G det GU,V  (4.41)   det G α = K×K V  detK×K G 

(where det A def= det(A ), for A an E E-matrix, and we write det G , resp. det G , K×K |K×K × U U,V in place of detU×U GU , resp. detU×U GU,V , with the notation from below (3.58)). If K K = φ, and U = E K , for i =1, 2, then 1 ∩ 2 i \ i −α det G det GU1∩U2 Pα[no loop intersects both K1 and K2]= , det GU1 det GU2  (4.42) det G α = K1×K1 U2 ,  detK1×K1 G 

(and we used a similar convention as above for det GU ).

Proof. (4.41): • U Observe that x x , x E, is the field of occupation times of loops which are not L −L ∈U contained in U, whereas x , x U, is the field of occupation times of loops which are contained in U: L ∈

U = 1 ∗ c ω, L , x E, Lx −Lx h {γ ⊆U} xi ∈ U = 1 ∗ ω, L , x E, Lx h {γ ⊆U} xi ∈ 4.4 Some identities 81 and by (4.5) we see that they constitute independent collections of random variables. So we find that for V : E R , → + α U U det GV (4.11) − P V (x)Lx independence − P V (x)(Lx−Lx ) − P V (x)Lx = E e x∈E = E e x∈E E e x∈E det G α α × α         U α (3.58) − P V (x)(Lx−Lx ) det GU,V x E = Eα e ∈ . (4.7) det G    U  The equality in the first line of (4.41) follows. To prove the second equality in (4.41) we will use the next lemma: Lemma 4.11. (Jacobi’s determinant identity) Assume that A is a (k + ℓ) (k + ℓ)-matrix which is invertible and that × W X B C A = , A−1 = ,  Y Z  D E where B and W are k k matrices, then × (4.43) det Z = det A det B. Proof. We know that B C W X BW + CY BX + CZ I = = , D E  Y Z  DW + EY DX + EZ and hence BX + CZ =0(k ℓ-matrix) and DX + EZ = I (ℓ ℓ-matrix). × × As a result, we find that B C I X B BX + CZ B 0 = = . D E 0 Z  D DX + EZ D I

Taking determinants, we find that det A−1 det Z = det B, and the claim (4.43) follows. We choose A = L, A−1 = G, B = G , Z = L , so that − |K×K − |U×U det Z = det( L ) = (det G )−1, det A = (det G)−1, − |U×U U and (4.43) yields det G (4.44) = det G notation= det G . det G |K×K K×K U  −1 In the same way, with A = V L, A = GV , B = GV |K×K, Z = (V L)U×U , we find that − −

det GV (4.45) = det GV |K×K. det GU,V 82 4 POISSON GAS OF MARKOVIAN LOOPS

Coming back to the expression after the first equality in (4.41), we see that this expression G equals ( det V |K×K )α, and this completes the proof of (4.41). det G|K×K

(4.42): • We first note that the left-hand side of (4.42) equals

Pα ω, Lx Lx =0 = hD  xP∈K1  xP∈K2 E i (4.4) (4.46) Pα 1 N > 1 ω, Lx Lx =0 = { } D  xP∈K1  xP∈K2 E i exp αµ∗ γ∗ : L (γ∗) > 0 and L (γ∗) > 0, and N(γ∗) > 1 . − x x n  xP∈K1 xP∈K2 o Now we have µ∗[N(γ∗) > 1] < (it equals log det(I P ) by (3.19) and (3.67)), so we can write: ∞ − −

∗ µ N > 1, Lx > 0 and Lx > 0 = x∈K1 x∈K2 ∗ P ∗ P  (4.47) µ N > 1, Lx > 0 + µ N > 1, Lx > 0  xP∈K1   xP∈K2  µ∗ N > 1, L > 0 . − x  x∈KP1∪K2  The next lemma will be useful to evaluate the above terms. Lemma 4.12. (α> 0,K E, U = E K) ⊆ \ With the notation of (4.15), one has

−α det G −α (4.48) P =0 = det G λ = λ , α Lx |K×K x det G x  x∈K   x∈K   U x∈K  P b Q Q ∗ (4.49) µ N > 1, Lx > 0 = log det G|K×K λx .  xP∈K   xQ∈K  Proof. Note that (4.49) is a direct consequence of (4.48) and the identity

(4.4) P =0 = exp αµ∗ N > 1, L > 0 . α Lx − x  xP∈K  n  xP∈K o b

We hence only need to prove (4.48). We use (4.20) with the choice V = ρ1K where ρ . We then find that ↑∞

V =ρ1 −ρ P Lx (4.20) det I P K −α x K b (4.50) Pα x =0 = lim Eα e ∈ = lim − . L λ→∞ λ→∞ det I P  xP∈K      b −

V =ρ1K V =ρ1K Observe that P 1U P (i.e. limρ→∞ P f(x)=1U (x)(P f)(x), for all x E), λ−→→∞ ∈ by the definition of P V below (4.19). The matrix for I 1 P is block diagonal: − U I 0 ,   (I P )|U×U ←− − 4.4 Some identities 83 and, therefore,

V =λ1K α α det(I P ) − det(I P )|U×U ) − lim − = − . λ→∞ det(I P ) det(I P )  −   − 

Now, det G = (det( L) )−1 =( λ )−1(det(I P ) )−1, and similarly, U − |U×U x∈U x − |U×U Q −1 det G = λ det(I P ) −1. x − xQ∈E   Coming back to (4.50), we have shown that

−α det G λx −α x∈E det G Pα x =0 =  Q  = λx , L det G λ det G h xP∈K i U x  U xQ∈K  b  xQ∈U  and the proof of (4.48) is completed with (4.44). We now return to (4.46), (4.47), and find that (recall K K = φ) 1 ∩ 2

Pα[no loop intersects both K1 and K2]= det G det G det G −α det G det G −α U1∩U2 = U1∩U2 det GU1 det GU2 det G  det GU1 det GU2  (4.44) det G −α = K1×K1 , detK1×K1 GU2  since K = U (U U ). This concludes the proof of (4.42) and of Proposition 4.10. 1 2\ 1 ∩ 2

Special case: loops going through a point We specialize the above formula (4.42) to find the probability that loops in the Poisson cloud going through a base point x all avoid some K not containing x:

E

x K

Fig. 4.3

Corollary 4.13. (α> 0) Consider x E, and K E not containing x, then ∈ ⊆ Pα[all loops going through x do not intersect K]= (4.51) E [H < ,g(X , x)] α 1 x K ∞ HK .  − g(x, x)  84 4 POISSON GAS OF MARKOVIAN LOOPS

Proof. In the notation of (4.42) we pick K1 = x , K2 = K. Setting U = E K, (4.42) yields that the left-hand side of (4.51) equals { } \

g (x, x) α (1.49) g(x, x) E [H < ,g(X , x)] α U = − x K ∞ HK ,  g(x, x)   g(x, x)  and (4.51) follows.

4.5 Some links between Markovian loops and random interlace- ments In this section we discuss various limiting procedures making sense of the notion of “loops going through infinity”, and see random interlacements appear as a limit object. We begin with the case of Zd, d 3. Random interlacements have been introduced in [27], and we refer to [27] for a more≥ detailed discussion of the Poisson point process of random interlacements. We will recover random interlacements on Zd, d 3, by the consideration of “loops going through infinity”. More precisely, we consider d≥ 3, and ≥ d Un, n 1, a non-decreasing sequence of finite connected subsets of Z , (4.52) ≥ d with Un = Z , n as well as S

(4.53) x Zd, a “base point”. ∗ ∈

For fixed n 1, we endow the connected subset Un, playing the role of E in (1.1), with the weights:≥ 1 cn = 1 x y =1 , for x, y U , x,y 2d {| − | } ∈ n and with the killing measure:

n 1 κx = 1 x y =1 , for x Un, d 2d {| − | } ∈ y∈ZP\Un very much in the spirit of what is done in Example 2) above (1.18) (except for the fact 1 n n n we now replace 1 by ). Note that λ = c + κ = 1, for all x Un. 2d x y∈Un x,y x ∈ P We write Ωn for the space corresponding to (4.2), of pure point measures on the set of n unrooted loops contained in Un, and Pα for the corresponding Poisson gas of Markovian loops at level α, see (4.6).

an unrooted loop first the limit n , →∞ contained in Un then the limit x∗ Un →∞ and going through x∗ x∗

Fig. 4.4 4.5 Some links between Markovian loops and random interlacements 85

We want to successively take the limit n , and then x∗ . The first limit corresponds to the construction of a Poisson→ gas ∞ of unrooted loo→ps ∞on Zd. We will not really discuss this Poisson measure, which can be defined in a rather similar fashion to what we have done at the beginning of this chapter, but of course escapes the set-up of a finite state space E with weights and killing measure satisfying (1.1) - (1.5). For the second limit (i.e. x ), we will also adjust the level α as a function of x . ∗ →∞ ∗ The fashion in which we tune α to x∗ is dictated by the Green function of simple random walk on Zd: ∞ Zd d (4.54) gZd (x, y)= Ex 1 Xt = y dt , for x, y Z , h Z0 { } i ∈ Zd where Px denotes the canonical law of continuous-time simple random walk with jump d rate 1 on Z starting at x, and Xt, t 0, the canonical process. Taking advantage of translation invariance we introduce the≥ function

def d (4.54’) g(x) = g d (0, x), for x Z (so g d (x, y)= g(y x)). Z ∈ Z − The function g( ) is known to be positive, finite (recall d 3), symmetric, i.e. g( x) = g(x), and has the· asymptotic behavior ≥ −

−(d−2) g(x) cd x , as x , (4.55) ∼ | | 2 →∞ d d 1 where cd = = Γ 1 d , (d 2) B(0, 1) 2 2 − π 2 − | |   where x stands for the Euclidean norm of x, and B(0, 1) for the volume of the unit ball of Rd (see| | for instance [15], p. 31). | | We will choose α according to the formula:

g(0) 2(d−2) (4.56) α = u 2 x∗ , with u 0. cd | | ≥ We introduce for ω Ω , the subset of U of points visited by the unrooted loops in the ∈ n n support of the pure point measure ω, which pass through the base point x∗: (ω)= z U ; there is a γ∗ in the support of ω, (4.57) Jn,x∗ ∈ n  which goes through x∗ and z , for ω Ωn. ∈

Note that (ω) = φ, when x / U , and (ω) x , when at least one γ∗ in the Jn,x∗ ∗ ∈ n Jn,x∗ ∋ ∗ support of ω goes through x∗. For the next result we will use the fact that (1.57) and (1.58) in the case of continuous- time simple random walk with jump rate 1 on Zd take the following form: when K is a finite subset of Zd, Zd d P [H < ]= g d (x, y) e (y), for x Z (4.58) x K ∞ Z K ∈ yP∈K (with HK as in (1.45)), where the equilibrium measuree

e (y)= P Zd [H = ]1 (y), y Zd K y K ∞ K ∈ (with H as in (1.45)), K e e 86 4 POISSON GAS OF MARKOVIAN LOOPS is the unique measure supported on K such that the equality in (4.58) holds for all x K. ∈ Its mass capZd (K) is the capacity of K. The next theorem relates the so-called “random interlacement at level u” to the the set when n , and then x , under the measure Pn, with α as in (4.56). Jn,x∗ →∞ ∗ →∞ α In this set of notes we will not introduce the full formalism of the Poisson point process of random interlacements but only content ourselves with the description of the random interlacement at level u, see Remark 4.15 below. Theorem 4.14. (d 3) ≥ For u 0 and K Zd finite, one has ≥ ⊆

n −u capZd (K) (4.59) lim lim P g(0) 2(d−2) [ n,x∗ K = φ]= e . α=u |x∗| x∗→∞ n→∞ c2 J ∩ d Proof. By (4.51) we have, as soon as x U and x / K, ∗ ∈ n ∗ ∈ Pn[ K = φ]= Pn[all loops going through x do not meet K] α Jn,x∗ ∩ α ∗ Zd E [HK < TU ,gU (XH , x )] α = 1 x∗ n n K ∗ ,  − gUn (x∗, x∗) 

d with gUn ( , ) the Green function of simple random walk on Z killed when exiting Un. Clearly, by· · monotone convergence,

d g (x, y) g d (x, y), for x, y Z , when n . Un ↑ Z ∈ →∞ So we see that when x / K: ∗ ∈ Zd d α n Ex∗ [HK < ,gZ (XHK , x∗)] (4.60) lim Pα[ n,x∗ K = φ]= 1 ∞ n J ∩  − gZd (x∗, x∗)  (the formula holds also when x K). ∗ ∈ Now g d (x , x )= g(0), and, as x , we have by (4.55) Z ∗ ∗ ∗ →∞

Zd (4.55) Zd −(d−2) E [H < ,g d (X , x )] P [H < ] c x x∗ K ∞ Z HK ∗ ∼ x∗ K ∞ d | ∗| (4.58) −(d−2) 2 (cd x∗ ) capZd (K), (4∼.55) | | and, in particular, with α as in (4.56),

α Zd d d lim Ex∗ [HK < , gZ (XHK , x∗)] = u capZ (K). x∗→∞ g(0) ∞ Coming back to (4.60) we readily obtain (4.59). Remark 4.15. One can define a translation invariant random subset of Zd denoted by u, the so-called random interlacement at level u, see [27], with distribution characterized byI the identity:

(4.61) P[ u K = φ]= e−u capZd (K), for all K Zd finite. I ∩ ⊆ 4.5 Some links between Markovian loops and random interlacements 87

Coming back to (4.59), note that for any disjoint finite subsets K,K′ of Zd one has by an inclusion-exclusion argument: Pn[ K = φ and K′]= α Jn,x∗ ∩ Jn,x∗ ⊇ n E 1 c (x) 1 1 c (x) = α Jn,x∗ Jn,x∗ ′ − h xQ∈K xQ∈K i |A| n ( 1) Pα[ n,x∗ (K A)= φ], ′ − J ∩ ∪ AP⊆K where we expanded the last product in the second line to find the last line. In the same fashion, we see that for disjoint finite subsets K,K′ of Zd we have P[ u K = φ and u K′]= I ∩ I ⊇ ( 1)|A| P[ u (K A)= φ]= ( 1)|A| e−u capZd (K∪A). ′ − I ∩ ∪ ′ − AP⊆K AP⊆K n As a result, Theorem 4.14 can be seen to imply that under the measure P g(0) 2(d−2) , α=u |x∗| c2 d the law of n,x∗ converges in an appropriate sense (i.e. convergence of all finite dimensional marginal distributions)J to the law of u, as n , and then x .  I →∞ ∗ →∞ We continue with the discussion of links between random interlacements and “loops going through infinity” in the Poisson cloud of Markovian loops. We begin with a variation on (4.59) in the context of Zd, d 3, where we will give a different meaning to the informal notion of “loops going through≥ infinity”. d We consider a sequence Un, n 1, as in (4.52) of finite connected subsets of Z , d 3, ≥ d ≥ which increases (in the wide sense) to Z . The role of the base point x∗, cf. (4.53), is now replaced by the complement of the Euclidean ball: (4.62) B def= x Zd; x R , with R> 0. R { ∈ | | ≤ }

Zd

R K Un 0 BR

an unrooted loop first the limit n , →∞ contained in Un then the limit R c →∞ and touching BR Fig. 4.5 88 4 POISSON GAS OF MARKOVIAN LOOPS

By analogy with (4.57), we introduce for ω Ω , ∈ n ∗ n,R(ω)= z Un; there is a γ in the support of ω, (4.63) K { ∈ which goes through Bc and z . R } We now choose α according to Rd−2 (4.64) α = u , with u 0, and cd as in (4.55). cd ≥ The corresponding statement to (4.59) is now the following. By the argument of Remark u 4.15, it can be interpreted as a convergence of the law of n,R to the law of , as n and then R . K I →∞ →∞ Theorem 4.16. (d 3) ≥ For u 0 and K Zd finite, one has ≥ ⊆ n −u capZd (K) (4.65) lim lim P Rd−2 [ n,R K = φ]= e . R→∞ n→∞ α=u cd K ∩ Proof. We assume that R is large enough so that K B and n sufficiently large so ⊆ R that BR Un. In the notation of (4.42), we chose K1 = K and K2 = Un BR, so that K K = φ. Then (4.42) yields that \ 1 ∩ 2 α n detK×K GBR Pα[ n,R K = φ]= . K ∩  detK×K GUn  By (1.49), we write det G = det(A B(R)), K×K BR n − n where A is the K K-matrix: n × A (x, y)= g (x, y), for x, y K, n Un ∈

(R) and Bn , the K K-matrix × (R) Zd c Bn (x, y)= Ex [HB < TUn , gUn (XHBc ,y)], for x, y K. R R ∈ Likewise, by the above definitions, we find that

detK×K GUn = det(An) . When n , →∞

(4.66) lim An = A, where A(x, y)= gZd (x, y), for x, y K n ∈ (R) (R) (R) Zd c d (4.67) lim Bn = B , where B (x, y)= Ex [HB < , gZ (XHBc ,y)], n R ∞ R for x, y K. ∈ The matrix A is known to be invertible (one can base this on a similar calculation as in the proof of (1.35), see also [25], P2, p. 292). So we find that:

(R) α n det(A B ) −1 (R) α (4.68) lim Pα[ n,R K = φ]= − = det(I A B ) . n→∞ K ∩ det A −    4.5 Some links between Markovian loops and random interlacements 89

d Z c For x K, Px -a.s., HB < , and XHBc ∂BR, so that ∈ R ∞ R ∈ (R) Zd d B (x, y)= Ex [gZ (XHBc ,y)] R

(4.55) c d , for x, y K, as R . ∼ Rd−2 ∈ →∞ It follows that c 1 −1 (R) d −1 1 (4.69) det(I A B )=1 d−2 Tr(A 1K×K)+ o d−2 , as R , − − R R  →∞ where 1 denotes the K K matrix with all coefficients equal to 1. K×K × Coming back to (4.58), we see that A−1 1 = C, where C is the K K-matrix K×K × with coefficients C(x, y)= e (x), for x, y K. Since e (x) = cap d (K), we have K ∈ x∈K K Z found that P

−1 (R) cd 1 (4.70) det(I A B )=1 d−2 capZd (K)+ o d−2 , as R . − − R R  →∞ Inserting this formula into (4.68), with α as in (4.64), immediately yields (4.65).

Complement: random interlacements and Poisson gas of loops coming from infinity on a transient weighted graph So far we only discussed links between random interlacements and a Poisson cloud of “loops going though infinity”, in the case of Zd, d 3. ≥ We now discuss another construction, which applies to the general set-up of an (infinite) transient weighted graph with no killing. We consider a countable (in particular infinite) set Γ endowed with non-negative weights c , x, y Γ (i.e. satisfying (1.2) with Γ in place of E), so that x,y ∈ Γ endowed with the set of edges consisting of x, y { } (4.71) such that cx,y > 0, is connected, locally finite, (i.e. each x Γ has a finite number of neighbors), ∈ and the simple random walk with jump rate 1 on Γ induced by these weights (4.72) c , x, y Γ, is transient. x,y ∈ This is what we mean by a transient weighted graph (i.e. with no killing). We consider, as in (4.52),

Un, n 1, a non-decreasing sequence of finite connected (4.73) subsets≥ of Γ increasing to Γ (i.e. U = Γ and U U ), n n ⊆ n+1 nS≥1 as well as a point

(4.74) x∗ not in Γ (which will play the role of the point “at infinity for each Un”).

We consider the finite graph with vertex set En = Un x∗ , endowed with the weights c ∪{ } obtained by collapsing UN on x∗ : 90 4 POISSON GAS OF MARKOVIAN LOOPS

Un G Un ↑

G

Fig. 4.6

cn = c , when x, y U x,y x,y ∈ n (4.75) cn = cn = c , when y U . x∗,y y,x∗ x,y ∈ n x∈PG\Un In addition we choose on E the killing measure κn, x E , concentrated on x , so that n x ∈ n ∗ n κx∗ = λn > 0, with lim λn = , (4.76) ∞ κn =0, for x U = E x . x ∈ n n\{ ∗} For the continuous-time walk on Γ with jump rate 1, one can show that when K is a 0 finite subset of G, setting λx = y∈Γ cx,y, for x Γ, and gΓ( , ) for the Green function 1 Γ ∞ ∈ · · (i.e. gΓ(x, y)= 0 Ex [ 1 Xt =Py dt], for x, y Γ), λy 0 { } ∈ R (4.77) P Γ[H < ]= g (x, y) e (y), for x Γ, x K ∞ Γ K ∈ yP∈K where eK is the equilibrium measure of K:

(4.78) e (y)= P Γ[H = ]1 (y) λ0, for y Γ K y K ∞ K y ∈ e (for instance one approximates the left-hand side of (4.77) by P Γ[H < T ], with n , x K Un →∞ and applies (1.57), (1.53) to the walk killed when exiting Un).

The total mass of eK is the capacity of K:

(4.79) capΓ(K)= eK(y). yP∈K n We write Ωn for the space of unrooted loops on En and Pα for the Poisson gas of Markovian n loops at level α> 0, on the above finite set En endowed with the weights c in (4.75) and the killing measure κn in (4.76). 4.5 Some links between Markovian loops and random interlacements 91

We also introduce the random subset of Un: (ω)= z U ; there is a γ∗ in the support of ω (4.80) Jn { ∈ n which goes through x and z . ∗ } We now specify α via the formula, see (4.76),

(4.81) α = uλ , with u 0. n ≥ The statement corresponding to (4.59) and (4.65), which in the present context links the Poisson gas of loops on En going through “the point x∗ at infinity”, with the interlacement at level u on G is coming next. We refer to Remark 1.4 of [27] and [29] for a more detailed description of the Poisson point process of random interlacements in this context. Theorem 4.17. For u 0 and K G finite, one has ≥ ⊆ n −u capG(K) (4.82) lim Pα=uλ [ n K = φ]= e . n→∞ n J ∩ Proof. For large n, K U , and we can write by (4.51): ⊆ n n α n Ex∗ [HK < , gn(XHK , x∗)] Pα[ n K = φ]= 1 ∞ , J ∩  − gn(x∗, x∗)  where P n stands for the law of the walk on E with unit jump rate, starting at x x n ∈ En, attached to the weights and killing measure in (4.75), (4.76) and gn( , ) for the corresponding Green function. · · By (2.71), we know that

(4.83) g (x , z)= λ−1, for all z E , n ∗ n ∈ n and, as a result, α Pn[ K = φ]= 1 P n [H < ] α Jn ∩ − x∗ K ∞ (4.84) (1.57) 1 α = 1 capn(K) , (4.83)  − λn  where capn(K) stands for the capacity of K in En. n By (1.53) and the fact that κ vanishes on Un, we know that

(4.85) cap (K)= P n[H = ] λ0. n x K ∞ x xP∈K n e In addition, we know that Px -a.s.,

−1 HK = = Hx < HK θ ( HK = ), { ∞} { ∗ } ∩ Hx∗ { ∞} e e because Un is finite and the walk is only killed at x∗. So applying the strong Markov property at time Hx∗ we find that:

P n[H = ]= P n[H < H ] P n [H = ] x K ∞ x x∗ K × x∗ K ∞ e = P Γ[T < He ] (1 P n [H < ]), x Un K × − x∗ K ∞ e 92 4 POISSON GAS OF MARKOVIAN LOOPS

using the fact that the walk on G and on En “agree up to time TUn ”. Note, in addition, that (1.57) n 0 (4.83) 1 0 (4.76) Px∗ [HK < ] gn(x∗,y) λy = λy 0, ∞ ≤. λn n−→→∞ (1 53) yP∈K yP∈K and that P Γ[T < H ] P Γ[H = ], as n . x Un K ↓ x K ∞ →∞ Coming back to (4.85), we havee shown thate

Γ 0 (4.78) (4.86) lim capn(K)= Px [HK = ] λx = capΓ(K). n x∈K ∞ (4.79) P e If we now insert this identity in (4.84) and keep in mind that α = uλn, we readily find (4.82).

Remark 4.18. 1) By a similar argument as described in Remark 4.15, the above theorem can be seen to n u imply that under Pα=uλn , the law of n converges to the law of in the sense of finite dimensional marginal distributions, asJn goes to infinity. I

2) A variation on the approximation scheme, which we employed to approximate random interlacements on a transient weighted graph, can be used to prove an isomorphism the- orem for random interlacements, see [28]. One can define the random field (Lx,u)x∈Γ of occupation times of continuous-time random interlacements at level u (this random field is governed by a probability denoted by P). One can also define the canonical law P G on RΓ of the Gaussian free field attached to the transient weighted graph under considera- G tion: under P the canonical field (ϕx)x∈Γ is a centered Gaussian field with covariance G E [ϕx ϕy] = gΓ(x, y), for x, y Γ, with gΓ( , ) the Green function. The isomorphism theorem from [28] states that ∈ · ·

1 2 G Lx,u + ϕx under P P , has the same law as 2 x∈Γ ⊗ (4.87)   1 2 G (ϕx + √2u) under P .  2 x∈Γ The above identity in law is intimately related to the generalized second Ray-Knight theorem, see Theorem 2.17, and characterizes the law of (Lx,u)x∈Γ.  REFERENCES 93

References

[1] M.T. Barlow. Diffusions on fractals, in volume 1690 of Lecture Notes in Math. Ecole d’´et´ede Probabilitt´es de St. Flour 1995, 1–112, Springer, Berlin, 1998.

[2] D. Brydges, J. Fr¨ohlich, and T. Spencer. The random walk representation of classical spin systems and correlation inequalities. Comm. Math. Phys., 83(1):123–150, 1982.

[3] P. Doyle and J. Snell. Random walks and electric networks. The Carus Mathematical Monographs, second printing, Washington DC, 1984.

[4] E.B. Dynkin. Markov processes as a tool in field theory. J. of Funct. Anal., 50(1):167– 187, 1983.

[5] E.B. Dynkin. Gaussian and non-Gaussian random fields associated with Markov processes. J. of Funct. Anal., 55(3):344–376, 1984.

[6] N. Eisenbaum. Dynkin’s isomorphism theorem and the Ray-Knight theorems. Probab. Theory Relat. Fields, 99:321–335, 1994.

[7] N. Eisenbaum. Une version sans conditionnement du theoreme d’isomorphisme de Dynkin. In S´eminaire de Probabilit´es, XXIX, volume 1613, Lecture Notes in Math- ematics, 266–289, Springer, Berlin, 1995.

[8] N. Eisenbaum, H. Kaspi, M.B. Marcus, J. Rosen and Z. Shi. A Ray-Knight theorem for symmetric Markov processes. Ann. Probab., 28(4):1781–1796, 2000.

[9] W. Feller. An introduction to and its applications, volume 1. 3rd edition, John Wiley & Sons, New York, 1957.

[10] M. Fukushima, Y. Oshima, and M. Takeda. Dirichlet forms and symmetric Markov processes. Walter de Gruyter, Berlin, 1994.

[11] J. Glimm and A. Jaffe. Quantum Physics. Springer, Berlin, 1981.

[12] I. Karatzas and S. Shreve. Brownian motion and stochastic calculus. Springer, Berlin, 1988.

[13] F.B. Knight. Random walks and a sojourn density process of Brownian motion. Trans. Amer. Math. Soc., 109(4):56–76, 1963.

[14] T. Kumagai. Random walks on disordered media and their scaling limits. Notes of St. Flour lectures, available at http://www.kurims.kyoto-u.ac.jp/ kumagai/StFlour-Cornell.html, 2010. ∼ [15] G.F. Lawler. Intersections of random walks. Birkh¨auser, Basel, 1991.

[16] G.F. Lawler and W. Werner. The brownian loop soup. Probab. Theory Relat. Fields, 128:565–588, 2004.

[17] Y. Le Jan. Markov loops and renormalization. Ann. Probab., 38(3):1280–1319, 2010.

[18] Y. Le Jan. Markov paths, loops and fields, volume 2026 of Lecture Notes in Math. Ecole d’Et´ede Probabilit´es de St. Flour, Springer, Berlin, 2012. 94 REFERENCES

[19] M.B. Marcus and J. Rosen. Markov processes, Gaussian processes, and local times. Cambridge University Press, 2006. [20] J. Neveu. Processus ponctuels, in volume 598 of Lecture Notes in Math., Ecole d’Et´e de Probabilit´es de St. Flour 1976, 249–447, Springer, Berlin, 1977. [21] D. Ray. Sojourn times of diffusion process. Illinois Journal of Math., 7:615–630, 1963. [22] S.I. Resnick. Extreme Values, regular variation, and point processes. Springer, New York, 1987. [23] D. Revuz and M. Yor. Continuous martingales and Brownian motion. Springer, Berlin, 1991. [24] S. Sheffield and W. Werner. Conformal loop ensembles: the Markovian characteri- zation and the loop-soup construction. To appear in Ann. Math., also available at arXiv:1006.2374.

[25] F. Spitzer. Principles of random walk. Springer, Berlin, second edition, 2001. [26] K. Symanzik. Euclidean quantum field theory. In: Scuola internazionale di Fisica “Enrico Fermi”, XLV Corso, 152-223, Academic Press, 1969. [27] A.S. Sznitman. Vacant set of random interlacements and percolation. Ann. Math., 171:2039–2087, 2010. [28] A.S. Sznitman. An isomorphism theorem for random interlacements. Electron. Com- mun. Probab., 17, 9, 1-9, 2012. [29] A. Teixeira. Interlacement percolation on transient weighted graphs. Electron. J. Probab., 14:1604–1627, 2009. Index capacity, 14, 16 Markovian loops, 75 variational problems, 16 measure Px,y, 30, 66 conductance, 14 continuous-time loop, 54 occupation field, 72 stationarity property, 54 occupation field of Markovian loop, 72 time-reversal invariance, 56 occupation field of non-trivial loops, 74 occupation time, 75 Dirichlet form, 5 orthogonal decomposition, 17 pointed loops, 52 trace form, 17, 18, 25 measure µp on pointed loops, 58 tower property, 20 Poisson gas of Markovian loops, 71 discrete loop, 54 Poisson point measure, 71 stationarity property, 54 potential operators, 10 time-reversal invariance, 56 random interlacements, 48, 80, 84, 87, 89 energy, 5, 10 random interlacement at level u, 86, 91 entrance time, 12 Ray-Knight theorem, 38 equilibrium measure, 14, 16 restriction property, 61, 66 equilibrium potential, 14, 16 rooted loops, 51 exit time, 12 measure µr on rooted loops, 52 rooted Markovian loop, 51 Feynman diagrams, 77 Feynman-Kac formula, 20, 25 Symanzik’s representation formula, 78 time of last visit, 12 Gaussian free field, 27, 75 trace Dirichlet form, 17, 18, 25 conditional expectations, 28 trace process, 25 generalized Ray-Knight theorems transient weighted graph, 89 first Ray-Knight theorem, 39 transition density, 8 second Ray-Knight theorem, 44, 48, 92 killed transition density, 12 Green function, 9 killed, 12 unit weight, 67 unrooted loops, 67 hitting time, 12 measure µr on unrooted loops, 67 Isomorphism theorem variable jump rate, 22 Dynkin isomorphism theorem, 35 Eisenbaum isomorphism theorem, 37 weights, 5 for random interlacements, 92 jump rate 1, 6 killing measure, 5 local time, 22, 62 loops going through a point, 83 loops going through infinity, 84, 87, 89 Markov chain X., 22 Markov chain X., 6

95