
1 Gaussian integrals and Wick's theorem

1.1 Gaussian integrals

Exercise 1.1. Compute the basic, one-dimensional Gaussian integral and show that it is given by
\[
\int_{-\infty}^{\infty} dx \, e^{-\frac{1}{2}\lambda x^2} = \sqrt{\frac{2\pi}{\lambda}}\,, \qquad \mathrm{Re}(\lambda) > 0\,. \qquad (1)
\]
The standard trick is to take the square and use polar coordinates.
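As a quick numerical sanity check of eq. (1) (not part of the exercise), one can compare the integral with the closed form for a few real, positive values of $\lambda$. The sketch below assumes Python with numpy and scipy, which are not prescribed by the text.

\begin{verbatim}
# Numerical sanity check of eq. (1) for a few real, positive values of lambda.
# This only illustrates the result; the exercise asks for the analytic derivation.
import numpy as np
from scipy.integrate import quad

for lam in (0.5, 1.0, 3.0):
    numeric, _ = quad(lambda x: np.exp(-0.5 * lam * x**2), -np.inf, np.inf)
    exact = np.sqrt(2 * np.pi / lam)
    print(f"lambda={lam}: numeric={numeric:.10f}, exact={exact:.10f}")
\end{verbatim}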

Exercise 1.2. Show that

\[
\int_{-\infty}^{\infty} dx \, e^{-\frac{1}{2}\lambda x^2 + a x} = \sqrt{\frac{2\pi}{\lambda}} \, \exp\left(\frac{a^2}{2\lambda}\right)\,, \qquad \mathrm{Re}(\lambda) > 0\,. \qquad (2)
\]
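A similar illustrative check can be made for eq. (2); the values of $\lambda$ and $a$ below are arbitrary examples, not taken from the text.

\begin{verbatim}
# Numerical check of eq. (2) for sample real values of lambda and a (illustrative only).
import numpy as np
from scipy.integrate import quad

lam, a = 2.0, 0.7
numeric, _ = quad(lambda x: np.exp(-0.5 * lam * x**2 + a * x), -np.inf, np.inf)
exact = np.sqrt(2 * np.pi / lam) * np.exp(a**2 / (2 * lam))
print(numeric, exact)
\end{verbatim}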

Next, we turn to $n$-dimensional Gaussian integrals. We consider an element $x$ of $\mathbb{R}^n$ and a symmetric $n \times n$ matrix $M$ and define a quadratic form $x^T M x$. Integrating over $\mathbb{R}^n$, we obtain the Gaussian integral

\[
\int d^n x \, \exp\left(-\frac{1}{2} x^T M x\right) = \mathcal{N}^{-1}\,. \qquad (3)
\]

Exercise 1.3. Show that
\[
\mathcal{N} = \sqrt{\frac{\det M}{(2\pi)^n}}\,. \qquad (4)
\]
What are the conditions on the matrix $M$ for the integral in eq. (3) to exist?
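Eqs. (3)-(4) can likewise be probed numerically in a concrete low-dimensional case; the $2 \times 2$ matrix in the sketch below is an arbitrary positive-definite choice, not one appearing in the text.

\begin{verbatim}
# Illustrative check of eqs. (3)-(4) for a concrete 2x2 positive-definite matrix M:
# the integral of exp(-x^T M x / 2) over R^2 should equal 1/N = sqrt((2*pi)^2 / det M).
import numpy as np
from scipy.integrate import dblquad

M = np.array([[2.0, 0.5],
              [0.5, 1.0]])   # example: real, symmetric, positive definite

integrand = lambda y, x: np.exp(-0.5 * np.array([x, y]) @ M @ np.array([x, y]))
numeric, _ = dblquad(integrand, -np.inf, np.inf, lambda x: -np.inf, lambda x: np.inf)
exact = np.sqrt((2 * np.pi)**2 / np.linalg.det(M))
print(numeric, exact)
\end{verbatim}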

Exercise 1.4. Similarly, show that for $a \in \mathbb{R}^n$ and if $M^{-1}$ exists,
\[
\mathcal{N} \int d^n x \, \exp\left(-\frac{1}{2} x^T M x + a^T x\right) = \exp\left(\frac{1}{2} a^T M^{-1} a\right)\,. \qquad (5)
\]
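The same example matrix can be used to illustrate eq. (5) with a source term $a$; again this is only a numerical sketch, not the requested proof.

\begin{verbatim}
# Illustrative numerical check of eq. (5) in two dimensions (example values only).
import numpy as np
from scipy.integrate import dblquad

M = np.array([[2.0, 0.5],
              [0.5, 1.0]])
a = np.array([0.3, -0.2])
N = np.sqrt(np.linalg.det(M) / (2 * np.pi)**2)   # normalization from eq. (4)

f = lambda y, x: np.exp(-0.5 * np.array([x, y]) @ M @ np.array([x, y])
                        + a @ np.array([x, y]))
numeric, _ = dblquad(f, -np.inf, np.inf, lambda x: -np.inf, lambda x: np.inf)
exact = np.exp(0.5 * a @ np.linalg.solve(M, a))
print(N * numeric, exact)
\end{verbatim}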

Gaussian integrals appear in many areas of physics, for instance in statistical physics and in path-integral quantization. In Feynman's path-integral formulation of quantum mechanics, one needs to compute an integral of the form

\[
\langle x_f, t_f | x_i, t_i \rangle = \int \mathcal{D}x(t) \, e^{iS[x(t)]} \qquad (6)
\]
to obtain the amplitude that a particle which was at point $x_i$ at time $t_i$ can be found at point $x_f$ at time $t_f$, where $S[x(t)]$ is the action associated with the path $x(t)$ and the symbol

$\int \mathcal{D}x(t)$ indicates that one should sum over all paths $x(t)$ starting at $x(t_i) = x_i$ and ending at $x(t_f) = x_f$.

As it stands, it is not clear what eq. (6) means. To define it, one discretizes time, $t_k = t_i + k \cdot \Delta$, such that $t_0 = t_i$, $t_{n+1} = t_f$. The integral over all paths takes the form
\[
\int \mathcal{D}x(t) \;\longrightarrow\; \prod_{k=1}^{n} \int dx_k = \int d^n x\,, \qquad (7)
\]
where $x_k = x(t_k)$ is the position after $k$ time steps. We see that we encounter integrals over $\mathbb{R}^n$ as in eq. (3). To evaluate the oscillatory expression (6), one first computes it for imaginary (Euclidean) time $t = -i\tau$ with $\tau \in \mathbb{R}$. Performing this so-called Wick rotation leads to $iS[x(t)] \to -S_E[x(\tau)]$, so that one can work with a real exponent as in eq. (3). After performing the integrals and taking $\Delta \to 0$, one then analytically continues the result back to physical time values. Doing so, the path integral for a harmonic oscillator (or a free particle) boils down to the evaluation of Gaussian integrals like in eq. (3).

The path integral formulation plays an important role in quantum field theory (QFT), where amongst others it forms the basis for numerical computations. In field theories one integrates over all field configurations $\phi(t, \vec{x})$ instead of the path $x(t)$. In discretized form, one then integrates over the field values $\phi_i \equiv \phi(t_i, \vec{x}_i)$. The discrete set of points $(t_i, \vec{x}_i)$ is called a lattice, and to obtain the continuum result one takes the limit where the distance between lattice points (the lattice spacing) goes to zero.
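To make the discretization in eq. (7) concrete, the following sketch evaluates the Gaussian integral obtained from a standard discretization of the Euclidean action of a harmonic oscillator, $S_E = \sum_k \big[ \tfrac{m}{2\Delta}(x_{k+1}-x_k)^2 + \tfrac{\Delta}{2} m \omega^2 x_k^2 \big]$, with the endpoints fixed to zero. The choice of discretization, the parameter values and the omission of the overall measure factor are illustrative assumptions, not specified in the text.

\begin{verbatim}
# Sketch: the discretized Euclidean path integral of a harmonic oscillator
# reduces to a Gaussian integral of the form of eq. (3).  Endpoints are fixed
# to x_0 = x_{n+1} = 0 and the overall measure factor is ignored; the parameter
# values (m, omega, Delta) are arbitrary examples.
import numpy as np

m, omega = 1.0, 1.0          # mass and frequency (example values)
n, Delta = 200, 0.01         # number of interior time slices and time step

# Matrix M of the quadratic form: S_E = (1/2) x^T M x for the interior points x_1..x_n.
M = (m / Delta) * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) \
    + Delta * m * omega**2 * np.eye(n)

# Gaussian integral over the interior points, eq. (3): integral = sqrt((2*pi)^n / det M).
# Use the log-determinant to avoid overflow for large n.
sign, logdet = np.linalg.slogdet(M)
log_integral = 0.5 * (n * np.log(2 * np.pi) - logdet)
print("log of the Gaussian integral:", log_integral)
\end{verbatim}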

1.2 Generating function, Wick's theorem

If we include the normalization factor $\mathcal{N}$, we can view the integrand in eq. (3), viz.
\[
\rho(x) = \mathcal{N} \exp\left(-\frac{1}{2} x^T M x\right)\,, \qquad (8)
\]
as a probability distribution in $\mathbb{R}^n$, since it is normalized and strictly positive as long as $M$ is a real, symmetric and positive matrix (i.e. all its eigenvalues are strictly positive). We can then compute expectation values as

\[
\langle A(x) \rangle \equiv \int d^n x \, \rho(x) \, A(x)\,. \qquad (9)
\]
The $m$-point correlation functions

\[
\langle x_{i_1} \ldots x_{i_m} \rangle \qquad (10)
\]
play an important role when one computes path integrals and we now analyze them in detail. The result is Wick's theorem, which provides the basis for perturbative computations in QFT. To obtain the expectation values in eq. (10), let us consider ($b_i x_i \equiv \sum_i b_i x_i$)

\[
\mathcal{Z}(b) \equiv \langle e^{b_i x_i} \rangle = \int d^n x \, \rho(x) \, e^{b_i x_i} = \sum_{m \geq 0} \frac{1}{m!} \, b_{i_1} \ldots b_{i_m} \int d^n x \, \rho(x) \, x_{i_1} \ldots x_{i_m}\,. \qquad (11)
\]

The quantity $\mathcal{Z}(b)$ is the generating function of the moments of the probability distribution $\rho(x)$:
\[
\mathcal{Z}(b) = \sum_{m \geq 0} \frac{1}{m!} \, b_{i_1} \ldots b_{i_m} \langle x_{i_1} \ldots x_{i_m} \rangle\,. \qquad (12)
\]
The inverse relation can be written as
\[
\langle x_{i_1} \ldots x_{i_m} \rangle = \left[ \frac{\partial}{\partial b_{i_1}} \ldots \frac{\partial}{\partial b_{i_m}} \mathcal{Z}(b) \right]_{b=0}\,. \qquad (13)
\]
In general, for an arbitrary probability density $\rho(x)$, $\mathcal{Z}(b)$ cannot be calculated exactly. But it can easily be evaluated for the Gaussian probability distribution. According to eq. (5), the generating function of the moments of the Gaussian distribution is

\[
\mathcal{Z}(b) = \mathcal{N} \int d^n x \, \exp\left(-\frac{1}{2} x^T M x + b^T x\right) = \langle \exp(b^T x) \rangle = \exp\left(\frac{1}{2} b^T M^{-1} b\right)\,. \qquad (14)
\]
The inverse relation is

\[
\langle x_{i_1} \ldots x_{i_m} \rangle = \left[ \frac{\partial}{\partial b_{i_1}} \ldots \frac{\partial}{\partial b_{i_m}} \exp\left(\frac{1}{2} b_i (M^{-1})_{ij} b_j\right) \right]_{b=0}\,. \qquad (15)
\]
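Eq. (14) can be illustrated by Monte Carlo sampling: since $\rho(x)$ is a Gaussian with covariance $M^{-1}$, the sample average of $\exp(b^T x)$ should approach $\exp\left(\frac{1}{2} b^T M^{-1} b\right)$. The matrix $M$, the vector $b$ and the sample size in the sketch below are arbitrary example choices.

\begin{verbatim}
# Monte Carlo illustration of eq. (14): for the Gaussian density rho(x), whose
# covariance is M^{-1}, the sample average of exp(b.x) should approach
# exp(b^T M^{-1} b / 2).  M and b are arbitrary examples.
import numpy as np

rng = np.random.default_rng(0)
M = np.array([[2.0, 0.5],
              [0.5, 1.0]])
b = np.array([0.4, -0.3])

cov = np.linalg.inv(M)                          # covariance of rho(x) is M^{-1}
x = rng.multivariate_normal(np.zeros(2), cov, size=1_000_000)
estimate = np.mean(np.exp(x @ b))
exact = np.exp(0.5 * b @ cov @ b)
print(estimate, exact)
\end{verbatim}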

Exercise 1.5. Show that $M$ can be taken to be symmetric and that, if $M^{-1}$ exists, it is symmetric as well.

Exercise 1.6. Prove that:

\[
\frac{\partial}{\partial b_{i_1}} \exp\left(\frac{1}{2} b_i (M^{-1})_{ij} b_j\right) = (M^{-1})_{i_1 k} \, b_k \, \exp\left(\frac{1}{2} b_i (M^{-1})_{ij} b_j\right)\,,
\]

\[
\frac{\partial}{\partial b_{i_1}} \frac{\partial}{\partial b_{i_2}} \exp\left(\frac{1}{2} b_i (M^{-1})_{ij} b_j\right) = \left[ (M^{-1})_{i_1 i_2} + (M^{-1})_{i_1 k} \, b_k \, (M^{-1})_{i_2 l} \, b_l \right] \exp\left(\frac{1}{2} b_i (M^{-1})_{ij} b_j\right)\,.
\]

Exercise 1.7. Find a general expression for $\frac{\partial}{\partial b_{i_1}} \ldots \frac{\partial}{\partial b_{i_n}} \exp\left(\frac{1}{2} b_i (M^{-1})_{ij} b_j\right)$. [Hint: Looking at the explicit expressions for the lowest few derivatives, you should observe a simple pattern, which can then be established using induction.]
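The two identities of Exercise 1.6 can be checked symbolically for a concrete $2 \times 2$ matrix playing the role of $M^{-1}$; this only illustrates the pattern mentioned in the hint and does not replace the inductive proof.

\begin{verbatim}
# Symbolic check (with sympy) of the two derivative identities of Exercise 1.6
# for a concrete symmetric 2x2 example standing in for M^{-1}.
import sympy as sp

b1, b2 = sp.symbols('b1 b2')
Minv = sp.Matrix([[sp.Rational(2, 3), -sp.Rational(1, 3)],
                  [-sp.Rational(1, 3), sp.Rational(2, 3)]])   # plays the role of M^{-1}
b = sp.Matrix([b1, b2])
Z = sp.exp(sp.Rational(1, 2) * (b.T * Minv * b)[0])

# First derivative: d/db1 Z = (M^{-1})_{1k} b_k * Z
lhs1 = sp.diff(Z, b1)
rhs1 = (Minv.row(0) * b)[0] * Z
print(sp.simplify(lhs1 - rhs1))   # expect 0

# Second derivative: d/db1 d/db2 Z = [(M^{-1})_{12} + (M^{-1})_{1k} b_k (M^{-1})_{2l} b_l] * Z
lhs2 = sp.diff(Z, b1, b2)
rhs2 = (Minv[0, 1] + (Minv.row(0) * b)[0] * (Minv.row(1) * b)[0]) * Z
print(sp.simplify(lhs2 - rhs2))   # expect 0
\end{verbatim}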

Exercise 1.8. Use the previous results at b = 0, relevant for eq. (15), to verify that

\[
\begin{aligned}
\langle x_{i_1} x_{i_2} \rangle &= (M^{-1})_{i_1 i_2}\,, \\
\langle x_{i_1} x_{i_2} x_{i_3} x_{i_4} \rangle &= (M^{-1})_{i_1 i_2} (M^{-1})_{i_3 i_4} + (M^{-1})_{i_1 i_3} (M^{-1})_{i_2 i_4} + (M^{-1})_{i_1 i_4} (M^{-1})_{i_2 i_3}\,,
\end{aligned} \qquad (16)
\]
and that the correlators of an odd number of points vanish,

\[
\langle x_{i_1} \ldots x_{i_{2m+1}} \rangle = 0\,. \qquad (17)
\]
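The relations (16)-(17) can again be illustrated by Monte Carlo sampling from $\rho(x)$; the $3 \times 3$ matrix and the index choices in the sketch below are arbitrary examples.

\begin{verbatim}
# Monte Carlo illustration of eqs. (16)-(17): sample x from rho(x), whose covariance
# is C = M^{-1}, and compare estimated 2- and 4-point correlators with the Wick
# expressions.  M and the indices are arbitrary examples.
import numpy as np

rng = np.random.default_rng(1)
M = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.3],
              [0.0, 0.3, 1.5]])
C = np.linalg.inv(M)                                    # C_{ij} = <x_i x_j>
x = rng.multivariate_normal(np.zeros(3), C, size=2_000_000)

i1, i2, i3, i4 = 0, 1, 2, 0
print(np.mean(x[:, i1] * x[:, i2]), C[i1, i2])          # 2-point function
four_mc = np.mean(x[:, i1] * x[:, i2] * x[:, i3] * x[:, i4])
four_wick = C[i1, i2] * C[i3, i4] + C[i1, i3] * C[i2, i4] + C[i1, i4] * C[i2, i3]
print(four_mc, four_wick)                               # 4-point function
print(np.mean(x[:, i1] * x[:, i2] * x[:, i3]))          # odd correlator, expect ~ 0
\end{verbatim}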

Exercise 1.9. Derive Wick’s theorem:

\[
\langle x_{i_1} \ldots x_{i_{2m}} \rangle = \sum_P \langle x_{k_1} x_{k_2} \rangle \ldots \langle x_{k_{2m-1}} x_{k_{2m}} \rangle\,, \qquad (18)
\]
where the sum is over all pairings $P$, i.e. all possible ways to group the indices $i_1, i_2, \ldots, i_{2m}$ into $m$ pairs $(k_1, k_2), \ldots, (k_{2m-1}, k_{2m})$. The theorem states that for the Gaussian integral all higher-point correlators reduce to products of the 2-point correlation function, which is given by the inverse of the matrix in the exponent of the Gaussian integral:

\[
\langle x_{i_1} x_{i_2} \rangle = (M^{-1})_{i_1 i_2}\,. \qquad (19)
\]

(As usual in mathematics, the proof of the final theorem is trivial after you have prepared it with a highly non-trivial proof of a lemma, in our case the general form in Exercise 1.7.)
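Read as an algorithm, eq. (18) amounts to enumerating all pairings and summing products of 2-point functions. The recursive sketch below (a hypothetical helper, not code from the text) pairs the first remaining index with each of the others in turn, which generates every pairing exactly once.

\begin{verbatim}
# Sketch of Wick's theorem, eq. (18), as an algorithm: enumerate all pairings of the
# indices and sum the products of 2-point functions C = M^{-1}.
import numpy as np

def wick(indices, C):
    """Return <x_{i1} ... x_{im}> for a Gaussian with 2-point function C."""
    if len(indices) == 0:
        return 1.0
    if len(indices) % 2 == 1:
        return 0.0                                   # odd correlators vanish, eq. (17)
    first, rest = indices[0], indices[1:]
    total = 0.0
    for pos, partner in enumerate(rest):
        remaining = rest[:pos] + rest[pos + 1:]      # pair 'first' with 'partner'
        total += C[first, partner] * wick(remaining, C)
    return total

# Example (arbitrary M): compare the 4-point correlator with eq. (16).
M = np.array([[2.0, 0.5],
              [0.5, 1.0]])
C = np.linalg.inv(M)
print(wick((0, 1, 0, 1), C))
print(C[0, 1] * C[0, 1] + C[0, 0] * C[1, 1] + C[0, 1] * C[1, 0])
\end{verbatim}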
