
Fermionic Functional Integrals and the Renormalization Group

Joel Feldman
Department of Mathematics, University of British Columbia, Vancouver, B.C., CANADA V6T 1Z2
[email protected]
http://www.math.ubc.ca/~feldman

Horst Knörrer, Eugene Trubowitz
Mathematik, ETH-Zentrum, CH-8092 Zürich, SWITZERLAND

Abstract  The Renormalization Group is the name given to a technique for analyzing the qualitative behaviour of a class of physical systems by iterating a map on the vector space of interactions for the class. In a typical non-rigorous application of this technique one assumes, based on one's physical intuition, that only a certain finite dimensional subspace (usually of dimension three or less) is important. These notes concern a technique for justifying this approximation in a broad class of Fermionic models used in condensed matter and high energy physics.

These notes expand upon the Aisenstadt Lectures given by J. F. at the Centre de Recherches Mathématiques, Université de Montréal, in August, 1999.

Table of Contents

Preface
I Fermionic Functional Integrals
 §I.1 Grassmann Algebras
 §I.2 Grassmann Integrals
 §I.3 Differentiation and Integration by Parts
 §I.4 Grassmann Gaussian Integrals
 §I.5 Grassmann Integrals and Fermionic Quantum Field Theories
 §I.6 The Renormalization Group
 §I.7 Wick Ordering
 §I.8 Bounds on Grassmann Gaussian Integrals
II Fermionic Expansions
 §II.1 Notation and Definitions
 §II.2 Expansion – Algebra
 §II.3 Expansion – Bounds
 §II.4 Sample Applications
  Gross–Neveu₂
  Naive Many-fermion₂
  Many-fermion₂ – with sectorization
Appendices
 A Infinite Dimensional Grassmann Algebras
 B Pfaffians
 C Propagator Bounds
 D Problem Solutions
References

Preface

The Renormalization Group is the name given to a technique for analyzing the qualitative behaviour of a class of physical systems by iterating a map on the vector space of interactions for the class. In a typical non-rigorous application of this technique one assumes, based on one's physical intuition, that only a certain finite dimensional subspace (usually of dimension three or less) is important. These notes concern a technique for justifying this approximation in a broad class of fermionic models used in condensed matter and high energy physics.

The first chapter provides the necessary mathematical background. Most of it is easy algebra – primarily the definition of Grassmann algebra and the definition and basic properties of a family of linear functionals on Grassmann algebras known as Grassmann Gaussian integrals. To make §I really trivial, we consider only finite dimensional Grassmann algebras. A simple-minded method for handling the infinite dimensional case is presented in Appendix A. There is also one piece of analysis in §I – the Gram bound on Grassmann Gaussian integrals – and a brief discussion of how Grassmann integrals arise in quantum field theories.

The second chapter introduces an expansion that can be used to establish analytic control over the Grassmann integrals used in fermionic quantum field theory models, when the covariance (propagator) is "really nice". It is also used as one ingredient in a renormalization group procedure that controls the Grassmann integrals when the covariance is not so nice. To illustrate the latter, we look at the Gross–Neveu₂ model and at many-fermion models in two space dimensions.

I. Fermionic Functional Integrals

This chapter just provides some mathematical background. Most of it is easy algebra – primarily the definition of Grassmann algebra and the definition and basic properties of a class of linear functionals on Grassmann algebras known as Grassmann Gaussian integrals. There is also one piece of analysis – the Gram bound on Grassmann Gaussian integrals – and a brief discussion of how Grassmann integrals arise in quantum field theories. To make this chapter really trivial, we consider only finite dimensional Grassmann algebras. A simple-minded method for handling the infinite dimensional case is presented in Appendix A.

I.1 Grassmann Algebras

Definition I.1 (Grassmann algebra with coefficients in $\mathbb{C}$) Let $\mathcal{V}$ be a finite dimensional vector space over $\mathbb{C}$. The Grassmann algebra generated by $\mathcal{V}$ is
$$\bigwedge\mathcal{V} \;=\; \bigoplus_{n=0}^{\infty}\,{\bigwedge}^{n}\,\mathcal{V}$$
where $\bigwedge^0\mathcal{V} = \mathbb{C}$ and $\bigwedge^n\mathcal{V}$ is the $n$-fold antisymmetric product of $\mathcal{V}$ with itself. Thus, if $\{a_1,\dots,a_D\}$ is a basis for $\mathcal{V}$, then $\bigwedge\mathcal{V}$ is a vector space with elements of the form
$$f(a) \;=\; \sum_{n=0}^{D}\ \sum_{1\le i_1<\cdots<i_n\le D} \beta_{i_1,\cdots,i_n}\, a_{i_1}\cdots a_{i_n}$$
with the coefficients $\beta_{i_1,\cdots,i_n}\in\mathbb{C}$. (In differential geometry, the traditional notation uses $a_{i_1}\wedge\cdots\wedge a_{i_n}$ in place of $a_{i_1}\cdots a_{i_n}$.) Addition and scalar multiplication are done componentwise. That is,

$$\alpha \sum_{n=0}^{D}\sum_{1\le i_1<\cdots<i_n\le D}\beta_{i_1,\cdots,i_n}\,a_{i_1}\cdots a_{i_n} \;+\; \gamma \sum_{n=0}^{D}\sum_{1\le i_1<\cdots<i_n\le D}\delta_{i_1,\cdots,i_n}\,a_{i_1}\cdots a_{i_n} \;=\; \sum_{n=0}^{D}\sum_{1\le i_1<\cdots<i_n\le D}\big(\alpha\beta_{i_1,\cdots,i_n}+\gamma\delta_{i_1,\cdots,i_n}\big)\,a_{i_1}\cdots a_{i_n}$$

Multiplication in $\bigwedge\mathcal{V}$ is determined by associativity, bilinearity and
$$a_ia_j = -a_ja_i \qquad\text{and}\qquad a_ia_i = 0 \quad\text{for all } i$$
For example,

$$\big(a_2+3a_3\big)\big(a_1+2a_1a_2\big) = a_2a_1 + 3a_3a_1 + 2a_2a_1a_2 + 6a_3a_1a_2$$
$$= -a_1a_2 - 3a_1a_3 - 2a_1a_2a_2 - 6a_1a_3a_2 = -a_1a_2 - 3a_1a_3 + 6a_1a_2a_3$$
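Manipulations like the one above are mechanical enough to automate. The sketch below is not part of the notes (all names, such as `sort_sign` and `mult`, are invented for illustration): it represents an element of $\bigwedge\mathcal{V}$ as a dictionary mapping strictly increasing index tuples to coefficients and multiplies monomials by sorting indices while counting transpositions, reproducing the worked example.

```python
# Toy model of a finite dimensional Grassmann algebra (illustrative only):
# an element is a dict mapping strictly increasing tuples of generator
# indices to coefficients, e.g. {(): 1, (1, 2): -3} represents 1 - 3 a1 a2.

def sort_sign(indices):
    """Sort indices, returning (sign, sorted tuple); sign is the parity of
    the sorting permutation, or 0 if an index repeats (a_i a_i = 0)."""
    idx = list(indices)
    sign = 1
    for i in range(len(idx)):                 # bubble sort, counting swaps
        for j in range(len(idx) - 1, i, -1):
            if idx[j] < idx[j - 1]:
                idx[j], idx[j - 1] = idx[j - 1], idx[j]
                sign = -sign
    if any(idx[i] == idx[i + 1] for i in range(len(idx) - 1)):
        return 0, tuple(idx)
    return sign, tuple(idx)

def mult(f, g):
    """Product in the Grassmann algebra, using a_i a_j = -a_j a_i."""
    h = {}
    for I, alpha in f.items():
        for J, beta in g.items():
            sign, K = sort_sign(I + J)
            if sign != 0:
                h[K] = h.get(K, 0) + sign * alpha * beta
    return {K: c for K, c in h.items() if c != 0}

# The worked example: (a2 + 3 a3)(a1 + 2 a1 a2) = -a1a2 - 3 a1a3 + 6 a1a2a3
f = {(2,): 1, (3,): 3}
g = {(1,): 1, (1, 2): 2}
print(mult(f, g))   # {(1, 2): -1, (1, 3): -3, (1, 2, 3): 6}
```

The sign bookkeeping is entirely in `sort_sign`: a product of monomials vanishes if any generator repeats, and otherwise picks up the parity of the permutation that sorts the concatenated index list.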

Remark I.2 If $\{a_1,\dots,a_D\}$ is a basis for $\mathcal{V}$, then $\bigwedge\mathcal{V}$ has basis
$$\big\{\, a_{i_1}\cdots a_{i_n} \;\big|\; n\ge0,\ 1\le i_1<\cdots<i_n\le D \,\big\}$$
The dimension of $\bigwedge^n\mathcal{V}$ is $\binom{D}{n}$ and the dimension of $\bigwedge\mathcal{V}$ is $\sum_{n=0}^{D}\binom{D}{n} = 2^D$. Let
$$\mathcal{M}_n = \big\{\, (i_1,\cdots,i_n) \;\big|\; 1\le i_1,\cdots,i_n\le D \,\big\}$$
be the set of all multi-indices of degree $n\ge0$. Note that $i_1, i_2, \cdots$ does not have to be in increasing order. For each $I\in\mathcal{M}_n$ set $a_I = a_{i_1}\cdots a_{i_n}$. Also set $a_\emptyset = 1$. Every element $f(a)\in\bigwedge\mathcal{V}$ has a unique representation
$$f(a) = \sum_{n=0}^{D}\ \sum_{I\in\mathcal{M}_n} \beta_I\, a_I$$
with the coefficients $\beta_I\in\mathbb{C}$ antisymmetric under permutations of the elements of $I$. For example,
$$a_1a_2 = \tfrac12\big(a_1a_2 - a_2a_1\big) = \beta_{(1,2)}a_{(1,2)} + \beta_{(2,1)}a_{(2,1)} \qquad\text{with}\qquad \beta_{(1,2)} = -\beta_{(2,1)} = \tfrac12$$

Problem I.1 Let $\mathcal{V}$ be a complex vector space of dimension $D$. Let $s\in\bigwedge\mathcal{V}$. Then $s$ has a unique decomposition $s = s_0 + s_1$ with $s_0\in\mathbb{C}$ and $s_1\in\bigoplus_{n=1}^{D}\bigwedge^n\mathcal{V}$. Prove that, if $s_0\ne0$, then there is a unique $s'\in\bigwedge\mathcal{V}$ with $ss' = 1$ and a unique $s''\in\bigwedge\mathcal{V}$ with $s''s = 1$, and furthermore
$$s' = s'' = \frac{1}{s_0} + \sum_{n=1}^{D}(-1)^n\,\frac{s_1^{\,n}}{s_0^{\,n+1}}$$
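The terminating geometric series in Problem I.1 can be checked numerically. The sketch below (illustrative only; the dict representation and the helper names `sort_sign`, `mult`, `inverse` are invented, not from the notes) exploits the nilpotency of $s_1$: every power beyond $s_1^D$ vanishes, so the series for $1/s$ is a finite sum.

```python
# Numerical check of the inversion formula of Problem I.1.  Since s1 is
# nilpotent, the geometric series for 1/s terminates after D terms.

def sort_sign(indices):
    """(sign, sorted tuple); sign 0 if an index repeats (a_i a_i = 0)."""
    idx, sign = list(indices), 1
    for i in range(len(idx)):
        for j in range(len(idx) - 1, i, -1):
            if idx[j] < idx[j - 1]:
                idx[j], idx[j - 1] = idx[j - 1], idx[j]
                sign = -sign
    if any(idx[i] == idx[i + 1] for i in range(len(idx) - 1)):
        return 0, tuple(idx)
    return sign, tuple(idx)

def mult(f, g):
    """Grassmann product of two dict-represented elements."""
    h = {}
    for I, alpha in f.items():
        for J, beta in g.items():
            sign, K = sort_sign(I + J)
            if sign:
                h[K] = h.get(K, 0) + sign * alpha * beta
    return {K: c for K, c in h.items() if c != 0}

def inverse(s, D):
    """s' = 1/s0 + sum_{n=1}^D (-1)^n s1^n / s0^{n+1}; requires s0 != 0."""
    s0 = s.get((), 0)
    s1 = {I: c for I, c in s.items() if I != ()}
    result, power = {(): 1 / s0}, {(): 1}
    for n in range(1, D + 1):
        power = mult(power, s1)               # s1^n
        for K, c in power.items():
            result[K] = result.get(K, 0) + (-1) ** n * c / s0 ** (n + 1)
    return {K: c for K, c in result.items() if c != 0}

s = {(): 2.0, (1,): 1.0, (1, 2): 3.0, (2, 3): -1.0}
print(mult(s, inverse(s, D=3)))    # {(): 1.0}, i.e. s s' = 1
```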

It will be convenient to generalise the concept of Grassmann algebra to allow for coefficients in more general algebras than $\mathbb{C}$. We shall allow the space of coefficients to be any superalgebra [BS]. Here are the definitions that do it.

Definition I.3 (Superalgebra)

(i) A superalgebra is an associative algebra $S$ with unit, denoted $1$, together with a decomposition $S = S_+\oplus S_-$ such that $1\in S_+$,
$$S_+\cdot S_+\subset S_+ \qquad S_-\cdot S_-\subset S_+ \qquad S_+\cdot S_-\subset S_- \qquad S_-\cdot S_+\subset S_-$$
and
$$ss' = s's \quad\text{if } s\in S_+ \text{ or } s'\in S_+ \qquad\qquad ss' = -s's \quad\text{if } s,s'\in S_-$$
The elements of $S_+$ are called even, the elements of $S_-$ odd.

(ii) A graded superalgebra is an associative algebra $S$ with unit, together with a decomposition $S = \bigoplus_{m=0}^{\infty}S_m$ such that $1\in S_0$, $S_m\cdot S_n\subset S_{m+n}$ for all $m,n\ge0$, and such that the decomposition $S = S_+\oplus S_-$ with
$$S_+ = \bigoplus_{m\ \mathrm{even}} S_m \qquad\qquad S_- = \bigoplus_{m\ \mathrm{odd}} S_m$$
gives $S$ the structure of a superalgebra.

Example I.4 Let $\mathcal{V}$ be a complex vector space. The Grassmann algebra $S = \bigwedge\mathcal{V} = \bigoplus_{m\ge0}\bigwedge^m\mathcal{V}$ over $\mathcal{V}$ is a graded superalgebra. In this case, $S_m = \bigwedge^m\mathcal{V}$, $S_+ = \bigoplus_{m\ \mathrm{even}}\bigwedge^m\mathcal{V}$ and $S_- = \bigoplus_{m\ \mathrm{odd}}\bigwedge^m\mathcal{V}$. In these notes, all of the superalgebras that we use will be Grassmann algebras.

Definition I.5 (Tensor product of two superalgebras)

(i) Recall that the tensor product of two finite dimensional vector spaces $S$ and $T$ is the vector space $S\otimes T$ constructed as follows. Consider the set of all formal sums $\sum_{i=1}^{n}s_i\otimes t_i$ with all of the $s_i$'s in $S$ and all of the $t_i$'s in $T$. Let $\equiv$ be the smallest equivalence relation on this set with
$$s\otimes t + s'\otimes t' \equiv s'\otimes t' + s\otimes t$$
$$(s+s')\otimes t \equiv s\otimes t + s'\otimes t$$
$$s\otimes(t+t') \equiv s\otimes t + s\otimes t'$$
$$(zs)\otimes t \equiv s\otimes(zt)$$
for all $s,s'\in S$, $t,t'\in T$ and $z\in\mathbb{C}$. Then $S\otimes T$ is the set of all equivalence classes under this relation. Addition and scalar multiplication are defined in the obvious way.

(ii) Let $S$ and $T$ be superalgebras. We define multiplication in the tensor product $S\otimes T$ by
$$\big(s\otimes(t_++t_-)\big)\,\big((s_++s_-)\otimes t\big) \;=\; ss_+\otimes t_+t \,+\, ss_+\otimes t_-t \,+\, ss_-\otimes t_+t \,-\, ss_-\otimes t_-t$$
for $s\in S$, $t\in T$, $s_\pm\in S_\pm$, $t_\pm\in T_\pm$. This multiplication defines an algebra structure on $S\otimes T$. Setting $(S\otimes T)_+ = (S_+\otimes T_+)\oplus(S_-\otimes T_-)$, $(S\otimes T)_- = (S_+\otimes T_-)\oplus(S_-\otimes T_+)$ we get a superalgebra. If $S$ and $T$ are graded superalgebras then the decomposition $S\otimes T = \bigoplus_{m=0}^{\infty}(S\otimes T)_m$ with
$$(S\otimes T)_m = \bigoplus_{m_1+m_2=m} S_{m_1}\otimes T_{m_2}$$
gives $S\otimes T$ the structure of a graded superalgebra.

Definition I.6 (Grassmann algebra with coefficients in a superalgebra) Let $\mathcal{V}$ be a complex vector space and $S$ be a superalgebra. The Grassmann algebra over $\mathcal{V}$ with coefficients in $S$ is the superalgebra
$${\bigwedge}_S\,\mathcal{V} \;=\; S\otimes{\bigwedge}\,\mathcal{V}$$
If $S$ is a graded superalgebra, so is $\bigwedge_S\mathcal{V}$.

Remark I.7 It is natural to identify $s\in S$ with the element $s\otimes1$ of $\bigwedge_S\mathcal{V} = S\otimes\bigwedge\mathcal{V}$ and to identify $a_{i_1}\cdots a_{i_n}\in\bigwedge\mathcal{V}$ with the element $1\otimes a_{i_1}\cdots a_{i_n}$ of $\bigwedge_S\mathcal{V}$. Under this identification,
$$s\,a_{i_1}\cdots a_{i_n} = \big(s\otimes1\big)\big(1\otimes a_{i_1}\cdots a_{i_n}\big) = s\otimes a_{i_1}\cdots a_{i_n}$$
Every element of $\bigwedge_S\mathcal{V}$ has a unique representation
$$\sum_{n=0}^{D}\ \sum_{1\le i_1<\cdots<i_n\le D} s_{i_1,\cdots,i_n}\, a_{i_1}\cdots a_{i_n}$$
with the coefficients $s_{i_1,\cdots,i_n}\in S$. Every element of $\bigwedge_S\mathcal{V}$ also has a unique representation
$$\sum_{n=0}^{D}\ \sum_{I\in\mathcal{M}_n} s_I\, a_I$$
with the coefficients $s_I\in S$ antisymmetric under permutation of the elements of $I$. If $S = \bigwedge\mathcal{V}'$, with $\mathcal{V}'$ the complex vector space with basis $\{b_1,\cdots,b_D\}$, then every element of $\bigwedge_S\mathcal{V}$ has a unique representation
$$\sum_{n,m=0}^{D}\ \sum_{\substack{I\in\mathcal{M}_n\\ J\in\mathcal{M}_m}} \beta_{I,J}\; b_J\, a_I$$
with the coefficients $\beta_{I,J}\in\mathbb{C}$ separately antisymmetric under permutation of the elements of $I$ and under permutation of the elements of $J$.

Problem I.2 Let $\mathcal{V}$ be a complex vector space of dimension $D$. Every element $s$ of $\bigwedge\mathcal{V}$ has a unique decomposition $s = s_0 + s_1$ with $s_0\in\mathbb{C}$ and $s_1\in\bigoplus_{n=1}^{D}\bigwedge^n\mathcal{V}$. Define
$$e^s = e^{s_0}\Big\{\sum_{n=0}^{D}\tfrac{1}{n!}\,s_1^{\,n}\Big\}$$
Prove that if $s,t\in\bigwedge\mathcal{V}$ with $st = ts$, then, for all $n\in\mathbb{N}$,
$$(s+t)^n = \sum_{m=0}^{n}\binom{n}{m}\,s^mt^{n-m}$$
and
$$e^se^t = e^te^s = e^{s+t}$$

Problem I.3 Use the notation of Problem I.2. Let, for each $\alpha\in\mathbb{R}$, $s(\alpha)\in\bigwedge\mathcal{V}$. Assume that $s(\alpha)$ is differentiable with respect to $\alpha$ (meaning that if we write $s(\alpha) = \sum_{n=0}^{D}\sum_{1\le i_1<\cdots<i_n\le D}s_{i_1,\cdots,i_n}(\alpha)\,a_{i_1}\cdots a_{i_n}$, every coefficient $s_{i_1,\cdots,i_n}(\alpha)$ is differentiable with respect to $\alpha$) and that $s(\alpha)s(\beta) = s(\beta)s(\alpha)$ for all $\alpha$ and $\beta$. Prove that
$$\tfrac{d}{d\alpha}\,s(\alpha)^n = n\,s(\alpha)^{n-1}s'(\alpha) \qquad\text{and}\qquad \tfrac{d}{d\alpha}\,e^{s(\alpha)} = e^{s(\alpha)}\,s'(\alpha)$$

Problem I.4 Use the notation of Problem I.2. If $s_0 > 0$, define
$$\ln s = \ln s_0 + \sum_{n=1}^{D}\tfrac{(-1)^{n-1}}{n}\Big(\tfrac{s_1}{s_0}\Big)^{n}$$
with $\ln s_0\in\mathbb{R}$.
a) Let, for each $\alpha\in\mathbb{R}$, $s(\alpha)\in\bigwedge\mathcal{V}$. Assume that $s(\alpha)$ is differentiable with respect to $\alpha$, that $s(\alpha)s(\beta) = s(\beta)s(\alpha)$ for all $\alpha$ and $\beta$ and that $s_0(\alpha) > 0$ for all $\alpha$. Prove that
$$\tfrac{d}{d\alpha}\ln s(\alpha) = \tfrac{s'(\alpha)}{s(\alpha)}$$
b) Prove that if $s\in\bigwedge\mathcal{V}$ with $s_0\in\mathbb{R}$, then
$$\ln e^s = s$$
Prove that if $s\in\bigwedge\mathcal{V}$ with $s_0 > 0$, then
$$e^{\ln s} = s$$

Problem I.5 Use the notation of Problems I.2 and I.4. Prove that if $s,t\in\bigwedge\mathcal{V}$ with $st = ts$ and $s_0, t_0 > 0$, then
$$\ln(st) = \ln s + \ln t$$

Problem I.6 Generalise Problems I.1–I.5 to $\bigwedge_S\mathcal{V}$ with $S$ a finite dimensional graded superalgebra having $S_0 = \mathbb{C}$.

Problem I.7 Let $\mathcal{V}$ be a complex vector space of dimension $D$. Let $s = s_0 + s_1\in\bigwedge\mathcal{V}$ with $s_0\in\mathbb{C}$ and $s_1\in\bigoplus_{n=1}^{D}\bigwedge^n\mathcal{V}$. Let $f(z)$ be a complex valued function that is analytic in $|z| < r$. Prove that, if $|s_0| < r$, then
$$f(s) = \sum_{n=0}^{D}\tfrac{1}{n!}\,f^{(n)}(s_0)\,s_1^{\,n}$$
is well defined.

I.2 Grassmann Integrals

Let $\mathcal{V}$ be a finite dimensional complex vector space and $S$ a superalgebra. Given any ordered basis $\{a_1,\dots,a_D\}$ for $\mathcal{V}$, the Grassmann integral $\int\,\cdot\ da_D\cdots da_1$ is defined to be the unique linear map from $\bigwedge_S\mathcal{V}$ to $S$ which is zero on $\bigoplus_{n=0}^{D-1}S\otimes\bigwedge^n\mathcal{V}$ and obeys
$$\int a_1\cdots a_D\ da_D\cdots da_1 = 1$$

Problem I.8 Let $\{a_1,\cdots,a_D\}$ be an ordered basis for $\mathcal{V}$. Let $b_i = \sum_{j=1}^{D}M_{i,j}a_j$, $1\le i\le D$, be another ordered basis for $\mathcal{V}$. Prove that
$$\int\,\cdot\ da_D\cdots da_1 = \det M \int\,\cdot\ db_D\cdots db_1$$
In particular, if $b_i = a_{\sigma(i)}$ for some permutation $\sigma\in S_D$,
$$\int\,\cdot\ da_D\cdots da_1 = \operatorname{sgn}\sigma \int\,\cdot\ db_D\cdots db_1$$
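The determinant in Problem I.8 can be seen concretely: by the definition of the integral, $\int f\,da_D\cdots da_1$ is the coefficient of $a_1\cdots a_D$ in $f$, so $\int b_1\cdots b_D\,da_D\cdots da_1$ must come out to $\det M$. The sketch below checks this numerically (illustrative code; the dict representation and names `mult`, `integral` are invented, not from the notes).

```python
# Spot check of Problem I.8: with b_i = sum_j M_ij a_j, the integral of
# b_1 ... b_D against da_D ... da_1 equals det M.
import numpy as np

def sort_sign(indices):
    """(sign, sorted tuple); sign 0 if an index repeats (a_i a_i = 0)."""
    idx, sign = list(indices), 1
    for i in range(len(idx)):
        for j in range(len(idx) - 1, i, -1):
            if idx[j] < idx[j - 1]:
                idx[j], idx[j - 1] = idx[j - 1], idx[j]
                sign = -sign
    if any(idx[i] == idx[i + 1] for i in range(len(idx) - 1)):
        return 0, tuple(idx)
    return sign, tuple(idx)

def mult(f, g):
    """Grassmann product of two dict-represented elements."""
    h = {}
    for I, alpha in f.items():
        for J, beta in g.items():
            sign, K = sort_sign(I + J)
            if sign:
                h[K] = h.get(K, 0) + sign * alpha * beta
    return h

def integral(f, D):
    """int f da_D ... da_1 = coefficient of a_1 a_2 ... a_D in f."""
    return f.get(tuple(range(1, D + 1)), 0)

D = 4
rng = np.random.default_rng(0)
M = rng.integers(-3, 4, size=(D, D)).astype(float)

prod = {(): 1.0}
for i in range(D):                     # multiply out b_1 b_2 ... b_D
    prod = mult(prod, {(j + 1,): M[i, j] for j in range(D)})

print(np.isclose(integral(prod, D), np.linalg.det(M)))   # True
```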

Example I.8 Let $\mathcal{V}$ be a two dimensional vector space with basis $\{a_1,a_2\}$ and $\mathcal{V}'$ be a second two dimensional vector space with basis $\{b_1,b_2\}$. Set $S = \bigwedge\mathcal{V}'$. Let $\lambda\in\mathbb{C}\setminus\{0\}$ and let $S$ be the $2\times2$ skew symmetric matrix
$$S = \begin{pmatrix} 0 & -\tfrac1\lambda \\ \tfrac1\lambda & 0 \end{pmatrix}$$
Use $S^{-1}_{ij}$ to denote the matrix element of
$$S^{-1} = \begin{pmatrix} 0 & \lambda \\ -\lambda & 0 \end{pmatrix}$$
in row $i$ and column $j$. Then, using the definition of the exponential of Problem I.2 and recalling that $a_1^2 = a_2^2 = 0$,
$$e^{-\frac12\sum_{ij}a_iS^{-1}_{ij}a_j} = e^{-\frac\lambda2[a_1a_2-a_2a_1]} = e^{-\lambda a_1a_2} = 1 - \lambda a_1a_2$$
and
$$e^{\sum_ib_ia_i}\,e^{-\frac12\sum_{ij}a_iS^{-1}_{ij}a_j} = e^{b_1a_1+b_2a_2}\,e^{-\lambda a_1a_2}$$
$$= \big\{1 + (b_1a_1+b_2a_2) + \tfrac12(b_1a_1+b_2a_2)^2\big\}\big\{1 - \lambda a_1a_2\big\}$$
$$= \big\{1 + b_1a_1 + b_2a_2 - b_1b_2a_1a_2\big\}\big\{1 - \lambda a_1a_2\big\}$$
$$= 1 + b_1a_1 + b_2a_2 - (\lambda + b_1b_2)a_1a_2$$
Consequently, the integrals
$$\int e^{-\frac12\sum_{ij}a_iS^{-1}_{ij}a_j}\ da_2\,da_1 = -\lambda$$
$$\int e^{\sum_ib_ia_i}\,e^{-\frac12\sum_{ij}a_iS^{-1}_{ij}a_j}\ da_2\,da_1 = -(\lambda + b_1b_2)$$
and their ratio is
$$\frac{\int e^{\sum_ib_ia_i}\,e^{-\frac12\sum_{ij}a_iS^{-1}_{ij}a_j}\ da_2\,da_1}{\int e^{-\frac12\sum_{ij}a_iS^{-1}_{ij}a_j}\ da_2\,da_1} = \frac{\lambda + b_1b_2}{\lambda} = 1 + \tfrac1\lambda b_1b_2 = e^{-\frac12\sum_{ij}b_iS_{ij}b_j}$$

Example I.9 Let $\mathcal{V}$ be a $D = 2r$ dimensional vector space with basis $\{a_1,\cdots,a_D\}$ and $\mathcal{V}'$ be a second $2r$ dimensional vector space with basis $\{b_1,\cdots,b_D\}$. Set $S = \bigwedge\mathcal{V}'$. Let $\lambda_1,\cdots,\lambda_r$ be nonzero complex numbers and let $S$ be the $D\times D$ skew symmetric matrix
$$S = \bigoplus_{m=1}^{r}\begin{pmatrix} 0 & -\tfrac1{\lambda_m} \\ \tfrac1{\lambda_m} & 0 \end{pmatrix}$$
All matrix elements of $S$ are zero, except for $r$ $2\times2$ blocks running down the diagonal. For example, if $r = 2$,
$$S = \begin{pmatrix} 0 & -\tfrac1{\lambda_1} & 0 & 0 \\ \tfrac1{\lambda_1} & 0 & 0 & 0 \\ 0 & 0 & 0 & -\tfrac1{\lambda_2} \\ 0 & 0 & \tfrac1{\lambda_2} & 0 \end{pmatrix}$$
Then, by the computations of Example I.8,

$$e^{-\frac12\sum_{ij}a_iS^{-1}_{ij}a_j} = \prod_{m=1}^{r} e^{-\lambda_m a_{2m-1}a_{2m}} = \prod_{m=1}^{r}\big(1 - \lambda_m a_{2m-1}a_{2m}\big)$$
and
$$e^{\sum_ib_ia_i}\,e^{-\frac12\sum_{ij}a_iS^{-1}_{ij}a_j} = \prod_{m=1}^{r} e^{b_{2m-1}a_{2m-1}+b_{2m}a_{2m}}\,e^{-\lambda_m a_{2m-1}a_{2m}}$$
$$= \prod_{m=1}^{r}\big(1 + b_{2m-1}a_{2m-1} + b_{2m}a_{2m} - (\lambda_m + b_{2m-1}b_{2m})a_{2m-1}a_{2m}\big)$$
This time, the integrals
$$\int e^{-\frac12\sum_{ij}a_iS^{-1}_{ij}a_j}\ da_D\cdots da_1 = \prod_{m=1}^{r}(-\lambda_m)$$
$$\int e^{\sum_ib_ia_i}\,e^{-\frac12\sum_{ij}a_iS^{-1}_{ij}a_j}\ da_D\cdots da_1 = \prod_{m=1}^{r}\big({-\lambda_m} - b_{2m-1}b_{2m}\big)$$
and their ratio is
$$\frac{\int e^{\sum_ib_ia_i}\,e^{-\frac12\sum_{ij}a_iS^{-1}_{ij}a_j}\ da_D\cdots da_1}{\int e^{-\frac12\sum_{ij}a_iS^{-1}_{ij}a_j}\ da_D\cdots da_1} = \prod_{m=1}^{r}\Big(1 + \tfrac1{\lambda_m}b_{2m-1}b_{2m}\Big) = e^{-\frac12\sum_{ij}b_iS_{ij}b_j}$$

Lemma I.10 Let $\mathcal{V}$ be a $D = 2r$ dimensional vector space with basis $\{a_1,\cdots,a_D\}$ and $\mathcal{V}'$ be a second $D$ dimensional vector space with basis $\{b_1,\cdots,b_D\}$. Set $S = \bigwedge\mathcal{V}'$. Let $S$ be a $D\times D$ invertible skew symmetric matrix. Then
$$\frac{\int e^{\sum_ib_ia_i}\,e^{-\frac12\sum_{ij}a_iS^{-1}_{ij}a_j}\ da_D\cdots da_1}{\int e^{-\frac12\sum_{ij}a_iS^{-1}_{ij}a_j}\ da_D\cdots da_1} = e^{-\frac12\sum_{ij}b_iS_{ij}b_j}$$

Proof: Both sides of the claimed equation are rational, and hence meromorphic, functions of the matrix elements of $S$. So, by analytic continuation, it suffices to consider matrices $S$ with real matrix elements. Because $S$ is skew symmetric, $S_{jk} = -S_{kj}$ for all $1\le j,k\le D$. Consequently, $iS$ is self-adjoint so that
• $\mathbb{C}^D$ has an orthonormal basis of eigenvectors of $S$
• all eigenvalues of $S$ are pure imaginary
Because $S$ is invertible, it cannot have zero as an eigenvalue. Because $S$ has real matrix elements, $S\vec v = \mu\vec v$ implies $S\bar{\vec v} = \bar\mu\bar{\vec v}$ (with $\bar{\phantom{v}}$ designating "complex conjugate") so that
• the eigenvalues and eigenvectors of $S$ come in complex conjugate pairs.
Call the eigenvalues of $S$, $\pm\tfrac{i}{\lambda_1}, \pm\tfrac{i}{\lambda_2}, \cdots, \pm\tfrac{i}{\lambda_r}$ and set
$$T = \bigoplus_{m=1}^{r}\begin{pmatrix} 0 & -\tfrac1{\lambda_m} \\ \tfrac1{\lambda_m} & 0 \end{pmatrix}$$
By Problem I.9, below, there exists a real orthogonal $D\times D$ matrix $R$ such that $R^tSR = T$. Define
$$a_i' = \sum_{j=1}^{D}R^t_{ij}a_j \qquad\qquad b_i' = \sum_{j=1}^{D}R^t_{ij}b_j$$
Then, as $R$ is orthogonal, $RR^t = \mathbb{1}$ so that $S = RTR^t$, $S^{-1} = RT^{-1}R^t$. Consequently,
$$\sum_i b_i'a_i' = \sum_{i,j,k} R^t_{ij}b_j\,R^t_{ik}a_k = \sum_{j,k} b_j\Big(\sum_i R_{ji}R_{ki}\Big)a_k = \sum_{j,k} b_j\,\delta_{j,k}\,a_k = \sum_i b_ia_i$$
where $\delta_{j,k}$ is one if $j = k$ and zero otherwise. Similarly,
$$\sum_{ij}a_iS^{-1}_{ij}a_j = \sum_{ij}a_i'T^{-1}_{ij}a_j' \qquad\qquad \sum_{ij}b_iS_{ij}b_j = \sum_{ij}b_i'T_{ij}b_j'$$
Hence, by Problem I.8 and Example I.9,
$$\frac{\int e^{\sum_ib_ia_i}\,e^{-\frac12\sum_{ij}a_iS^{-1}_{ij}a_j}\ da_D\cdots da_1}{\int e^{-\frac12\sum_{ij}a_iS^{-1}_{ij}a_j}\ da_D\cdots da_1} = \frac{\int e^{\sum_ib_i'a_i'}\,e^{-\frac12\sum_{ij}a_i'T^{-1}_{ij}a_j'}\ da_D\cdots da_1}{\int e^{-\frac12\sum_{ij}a_i'T^{-1}_{ij}a_j'}\ da_D\cdots da_1} = \frac{\int e^{\sum_ib_i'a_i'}\,e^{-\frac12\sum_{ij}a_i'T^{-1}_{ij}a_j'}\ da_D'\cdots da_1'}{\int e^{-\frac12\sum_{ij}a_i'T^{-1}_{ij}a_j'}\ da_D'\cdots da_1'} = e^{-\frac12\sum_{ij}b_i'T_{ij}b_j'} = e^{-\frac12\sum_{ij}b_iS_{ij}b_j}$$

Problem I.9 Let
◦ $S$ be a matrix
◦ $\lambda$ be a real number
◦ $\vec v_1$ and $\vec v_2$ be two mutually perpendicular, complex conjugate unit vectors
◦ $S\vec v_1 = i\lambda\vec v_1$ and $S\vec v_2 = -i\lambda\vec v_2$.
Set
$$\vec w_1 = \tfrac{1}{\sqrt2\,i}\big(\vec v_1 - \vec v_2\big) \qquad\qquad \vec w_2 = \tfrac{1}{\sqrt2}\big(\vec v_1 + \vec v_2\big)$$
a) Prove that
• $\vec w_1$ and $\vec w_2$ are two mutually perpendicular, real unit vectors
• $S\vec w_1 = \lambda\vec w_2$ and $S\vec w_2 = -\lambda\vec w_1$.
b) Suppose, in addition, that $S$ is a $2\times2$ matrix. Let $R$ be the $2\times2$ matrix whose first column is $\vec w_1$ and whose second column is $\vec w_2$. Prove that $R$ is a real orthogonal matrix and that
$$R^tSR = \begin{pmatrix} 0 & -\lambda \\ \lambda & 0 \end{pmatrix}$$
c) Generalise to the case in which $S$ is a $2r\times2r$ matrix.

I.3 Differentiation and Integration by Parts

Definition I.11 (Left Derivative) Left derivatives are the linear maps from $\bigwedge\mathcal{V}$ to $\bigwedge\mathcal{V}$ that are determined as follows. For each $\ell = 1,\cdots,D$, and $I\in\bigcup_{n=0}^{D}\mathcal{M}_n$, the left derivative $\frac{\partial}{\partial a_\ell}a_I$ of the Grassmann monomial $a_I$ is
$$\frac{\partial}{\partial a_\ell}a_I = \begin{cases} 0 & \text{if } \ell\notin I \\ (-1)^{|J|}\,a_Ja_K & \text{if } a_I = a_Ja_\ell a_K \end{cases}$$
In the case of $\bigwedge_S\mathcal{V}$, the left derivative is determined by linearity and
$$\frac{\partial}{\partial a_\ell}\,s\,a_I = \begin{cases} 0 & \text{if } \ell\notin I \\ (-1)^{|J|}\,s\,a_Ja_K & \text{if } a_I = a_Ja_\ell a_K \text{ and } s\in S_+ \\ -(-1)^{|J|}\,s\,a_Ja_K & \text{if } a_I = a_Ja_\ell a_K \text{ and } s\in S_- \end{cases}$$

Example I.12 For each $I = (i_1,\cdots,i_n)\in\mathcal{M}_n$,
$$\frac{\partial}{\partial a_{i_n}}\cdots\frac{\partial}{\partial a_{i_1}}\,a_I = 1 \qquad\qquad \frac{\partial}{\partial a_{i_1}}\cdots\frac{\partial}{\partial a_{i_n}}\,a_I = (-1)^{\frac12|I|(|I|-1)}$$
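Both identities of Example I.12 can be checked mechanically. In the sketch below (illustrative; the names `left_deriv` and `apply_derivs` are invented) a monomial $a_I$ is a tuple of distinct indices, and the left derivative removes $a_\ell$, picking up $(-1)^{|J|}$ where $J$ is the part of the monomial to the left of $a_\ell$, exactly as in Definition I.11.

```python
# Left derivatives on Grassmann monomials, following Definition I.11.

def left_deriv(ell, monomial):
    """Return (sign, reduced monomial); (0, ()) if a_ell does not occur."""
    if ell not in monomial:
        return 0, ()
    pos = monomial.index(ell)          # |J| = number of factors to the left
    return (-1) ** pos, monomial[:pos] + monomial[pos + 1:]

def apply_derivs(derivs, monomial):
    """Apply d/da_{derivs[0]} ... d/da_{derivs[-1]}, rightmost one first."""
    sign = 1
    for ell in reversed(derivs):
        s, monomial = left_deriv(ell, monomial)
        sign *= s
    return sign, monomial

I = (2, 5, 7)                      # a_I = a_2 a_5 a_7, |I| = 3
print(apply_derivs(I[::-1], I))    # (1, ())  : the first identity
print(apply_derivs(I, I))          # (-1, ()) : (-1)^{3*2/2} = -1
```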

Proposition I.13 (Product Rule, Leibniz's Rule) For all $k,\ell = 1,\cdots,D$, all $I,J\in\bigcup_{n=0}^{D}\mathcal{M}_n$ and any $f\in\bigwedge_S\mathcal{V}$,
(a) $\ \frac{\partial}{\partial a_\ell}a_k = \delta_{k,\ell}$
(b) $\ \frac{\partial}{\partial a_\ell}a_Ia_J = \big(\frac{\partial}{\partial a_\ell}a_I\big)a_J + (-1)^{|I|}a_I\big(\frac{\partial}{\partial a_\ell}a_J\big)$
(c) $\ \int\big(\frac{\partial}{\partial a_\ell}f\big)\ da_D\cdots da_1 = 0$
(d) The linear operators $\frac{\partial}{\partial a_k}$ and $\frac{\partial}{\partial a_\ell}$ anticommute. That is,
$$\Big(\frac{\partial}{\partial a_k}\frac{\partial}{\partial a_\ell} + \frac{\partial}{\partial a_\ell}\frac{\partial}{\partial a_k}\Big)f = 0$$

Proof: Obvious, by direct calculation.

Problem I.10 Let $P(z) = \sum_{i\ge0}c_iz^i$ be a power series with complex coefficients and infinite radius of convergence and let $f(a)$ be an even element of $\bigwedge_S\mathcal{V}$. Show that
$$\frac{\partial}{\partial a_\ell}P\big(f(a)\big) = P'\big(f(a)\big)\,\frac{\partial}{\partial a_\ell}f(a)$$

Example I.14 Let $\mathcal{V}$ be a $D$ dimensional vector space with basis $\{a_1,\cdots,a_D\}$ and $\mathcal{V}'$ a second $D$ dimensional vector space with basis $\{b_1,\cdots,b_D\}$. Think of $e^{\sum_ib_ia_i}$ as an element of either $\bigwedge_{\wedge\mathcal{V}'}\mathcal{V}$ or $\bigwedge\big(\mathcal{V}\oplus\mathcal{V}'\big)$. (Remark I.7 provides a natural identification between $\bigwedge_{\wedge\mathcal{V}'}\mathcal{V}$ and $\bigwedge\big(\mathcal{V}\oplus\mathcal{V}'\big)$.) By the last problem, with $a_i$ replaced by $b_i$, $P(z) = e^z$ and $f(b) = \sum_ib_ia_i$,
$$\frac{\partial}{\partial b_\ell}\,e^{\sum_ib_ia_i} = e^{\sum_ib_ia_i}\,a_\ell$$
Iterating,
$$\frac{\partial}{\partial b_{i_1}}\cdots\frac{\partial}{\partial b_{i_n}}\,e^{\sum_ib_ia_i} = \frac{\partial}{\partial b_{i_1}}\cdots\frac{\partial}{\partial b_{i_{n-1}}}\,e^{\sum_ib_ia_i}\,a_{i_n} = \frac{\partial}{\partial b_{i_1}}\cdots\frac{\partial}{\partial b_{i_{n-2}}}\,e^{\sum_ib_ia_i}\,a_{i_{n-1}}a_{i_n} = e^{\sum_ib_ia_i}\,a_I$$
where $I = (i_1,\cdots,i_n)\in\mathcal{M}_n$. In particular,
$$a_I = \frac{\partial}{\partial b_{i_1}}\cdots\frac{\partial}{\partial b_{i_n}}\,e^{\sum_ib_ia_i}\Big|_{b_1,\cdots,b_D=0}$$
where, as you no doubt guessed, $\sum_I\beta_I\,b_I\big|_{b_1,\cdots,b_D=0}$ means $\beta_\emptyset$.

Definition I.15 Let $\mathcal{V}$ be a $D$ dimensional vector space and $S$ a superalgebra. Let $S = \big(S_{ij}\big)$ be a skew symmetric matrix of order $D$. The Grassmann Gaussian integral with covariance $S$ is the linear map from $\bigwedge_S\mathcal{V}$ to $S$ determined by
$$\int e^{\sum_ib_ia_i}\ d\mu_S(a) = e^{-\frac12\sum_{ij}b_iS_{ij}b_j}$$

Remark I.16 If the dimension $D$ of $\mathcal{V}$ is even and if $S$ is invertible, then, by Lemma I.10,
$$\int f(a)\ d\mu_S(a) = \frac{\int f(a)\,e^{-\frac12\sum_{ij}a_iS^{-1}_{ij}a_j}\ da_D\cdots da_1}{\int e^{-\frac12\sum_{ij}a_iS^{-1}_{ij}a_j}\ da_D\cdots da_1}$$
The Gaussian measure on $\mathbb{R}^D$ with covariance $S$ (this time an invertible symmetric matrix) is
$$d\mu_S(\vec x) = \frac{e^{-\frac12\sum_{ij}x_iS^{-1}_{ij}x_j}\ d\vec x}{\int e^{-\frac12\sum_{ij}x_iS^{-1}_{ij}x_j}\ d\vec x}$$
This is the motivation for the name "Grassmann Gaussian integral". Definition I.15, however, makes sense even if $D$ is odd, or $S$ fails to be invertible. In particular, if $S$ is the zero matrix, $\int f(a)\ d\mu_S(a) = f(0)$.

Proposition I.17 (Integration by Parts) Let $S = \big(S_{ij}\big)$ be a skew symmetric matrix of order $D$. Then, for each $k = 1,\cdots,D$,
$$\int a_k\,f(a)\ d\mu_S(a) = \sum_{\ell=1}^{D}S_{k\ell}\int\frac{\partial}{\partial a_\ell}f(a)\ d\mu_S(a)$$

Proof 1: This first argument, while instructive, is not complete. For it, we make the additional assumption that $D$ is even. Since both sides are continuous in $S$, it suffices to consider $S$ invertible. Furthermore, by linearity in $f$, it suffices to consider $f(a) = a_I$, with $I\in\mathcal{M}_n$. Then, by Proposition I.13.c,
$$0 = \sum_{\ell=1}^{D}S_{k\ell}\int\frac{\partial}{\partial a_\ell}\Big(a_I\,e^{-\frac12\sum a_iS^{-1}_{ij}a_j}\Big)\ da_D\cdots da_1$$
$$= \sum_{\ell=1}^{D}S_{k\ell}\int\Big(\frac{\partial}{\partial a_\ell}a_I - (-1)^{|I|}\,a_I\sum_{m=1}^{D}S^{-1}_{\ell m}a_m\Big)\,e^{-\frac12\sum a_iS^{-1}_{ij}a_j}\ da_D\cdots da_1$$
$$= \sum_{\ell=1}^{D}S_{k\ell}\int\Big(\frac{\partial}{\partial a_\ell}a_I\Big)\,e^{-\frac12\sum a_iS^{-1}_{ij}a_j}\ da_D\cdots da_1 \;-\; (-1)^{|I|}\int a_I\,a_k\,e^{-\frac12\sum a_iS^{-1}_{ij}a_j}\ da_D\cdots da_1$$
$$= \sum_{\ell=1}^{D}S_{k\ell}\int\Big(\frac{\partial}{\partial a_\ell}a_I\Big)\,e^{-\frac12\sum a_iS^{-1}_{ij}a_j}\ da_D\cdots da_1 \;-\; \int a_k\,a_I\,e^{-\frac12\sum a_iS^{-1}_{ij}a_j}\ da_D\cdots da_1$$
Consequently,
$$\int a_k\,a_I\ d\mu_S(a) = \sum_{\ell=1}^{D}S_{k\ell}\int\frac{\partial}{\partial a_\ell}a_I\ d\mu_S(a)$$

Proof 2: By linearity, it suffices to consider $f(a) = a_I$ with $I = (i_1,\cdots,i_n)\in\mathcal{M}_n$. Then
$$\int a_k\,a_I\ d\mu_S(a) = \frac{\partial}{\partial b_k}\frac{\partial}{\partial b_{i_1}}\cdots\frac{\partial}{\partial b_{i_n}}\int e^{\sum_ib_ia_i}\ d\mu_S(a)\Big|_{b_1,\cdots,b_D=0}$$
$$= \frac{\partial}{\partial b_k}\frac{\partial}{\partial b_{i_1}}\cdots\frac{\partial}{\partial b_{i_n}}\,e^{-\frac12\sum_{i\ell}b_iS_{i\ell}b_\ell}\Big|_{b_1,\cdots,b_D=0}$$
$$= (-1)^n\,\frac{\partial}{\partial b_{i_1}}\cdots\frac{\partial}{\partial b_{i_n}}\frac{\partial}{\partial b_k}\,e^{-\frac12\sum_{i\ell}b_iS_{i\ell}b_\ell}\Big|_{b_1,\cdots,b_D=0}$$
$$= -(-1)^n\sum_\ell S_{k\ell}\,\frac{\partial}{\partial b_{i_1}}\cdots\frac{\partial}{\partial b_{i_n}}\,b_\ell\,e^{-\frac12\sum_{ij}b_iS_{ij}b_j}\Big|_{b_1,\cdots,b_D=0}$$
$$= -(-1)^n\sum_\ell S_{k\ell}\,\frac{\partial}{\partial b_{i_1}}\cdots\frac{\partial}{\partial b_{i_n}}\,b_\ell\int e^{\sum_ib_ia_i}\ d\mu_S(a)\Big|_{b_1,\cdots,b_D=0}$$
$$= (-1)^n\sum_\ell S_{k\ell}\,\frac{\partial}{\partial b_{i_1}}\cdots\frac{\partial}{\partial b_{i_n}}\int\frac{\partial}{\partial a_\ell}\,e^{\sum_ib_ia_i}\ d\mu_S(a)\Big|_{b_1,\cdots,b_D=0}$$
$$= \sum_\ell S_{k\ell}\int\frac{\partial}{\partial a_\ell}\,\frac{\partial}{\partial b_{i_1}}\cdots\frac{\partial}{\partial b_{i_n}}\,e^{\sum_ib_ia_i}\ d\mu_S(a)\Big|_{b_1,\cdots,b_D=0}$$
$$= \sum_\ell S_{k\ell}\int\frac{\partial}{\partial a_\ell}\,a_I\ d\mu_S(a)$$

I.4 Grassmann Gaussian Integrals

We now look more closely at the values of
$$\int a_{i_1}\cdots a_{i_n}\ d\mu_S(a) = \frac{\partial}{\partial b_{i_1}}\cdots\frac{\partial}{\partial b_{i_n}}\,e^{-\frac12\sum_{i\ell}b_iS_{i\ell}b_\ell}\Big|_{b_1,\cdots,b_D=0}$$

Obviously,
$$\int 1\ d\mu_S(a) = 1$$
and, as $\sum_{i,\ell}b_iS_{i\ell}b_\ell$ is even, $\frac{\partial}{\partial b_{i_1}}\cdots\frac{\partial}{\partial b_{i_n}}e^{-\frac12\sum_{i\ell}b_iS_{i\ell}b_\ell}$ has the same parity as $n$ and
$$\int a_{i_1}\cdots a_{i_n}\ d\mu_S(a) = 0 \qquad\text{if $n$ is odd}$$
To get a little practice with integration by parts (Proposition I.17) we use it to evaluate $\int a_{i_1}\cdots a_{i_n}\ d\mu_S(a)$ when $n = 2$:
$$\int a_{i_1}a_{i_2}\ d\mu_S = \sum_{m=1}^{D}S_{i_1m}\int\frac{\partial}{\partial a_m}a_{i_2}\ d\mu_S = \sum_{m=1}^{D}S_{i_1m}\int\delta_{m,i_2}\ d\mu_S = S_{i_1i_2}$$
and $n = 4$:
$$\int a_{i_1}a_{i_2}a_{i_3}a_{i_4}\ d\mu_S = \sum_{m=1}^{D}S_{i_1m}\int\frac{\partial}{\partial a_m}\big(a_{i_2}a_{i_3}a_{i_4}\big)\ d\mu_S = \int\big(S_{i_1i_2}a_{i_3}a_{i_4} - S_{i_1i_3}a_{i_2}a_{i_4} + S_{i_1i_4}a_{i_2}a_{i_3}\big)\ d\mu_S$$
$$= S_{i_1i_2}S_{i_3i_4} - S_{i_1i_3}S_{i_2i_4} + S_{i_1i_4}S_{i_2i_3}$$

Observe that each term is a sign times $S_{i_{\pi(1)}i_{\pi(2)}}\,S_{i_{\pi(3)}i_{\pi(4)}}$ for some permutation $\pi$ of $(1,2,3,4)$. The sign is always the sign of the permutation. Furthermore, for each odd $j$, $\pi(j)$ is smaller than all of $\pi(j+1), \pi(j+2), \cdots$ (because each time we applied $\int a_\ell\cdots\ d\mu_S = \sum_m S_{\ell m}\int\frac{\partial}{\partial a_m}\cdots\ d\mu_S$, we always used the smallest available $\ell$). From this you would probably guess that
$$\int a_{i_1}\cdots a_{i_n}\ d\mu_S(a) = \sum_\pi \operatorname{sgn}\pi\ S_{i_{\pi(1)}i_{\pi(2)}}\cdots S_{i_{\pi(n-1)}i_{\pi(n)}} \tag{I.1}$$
where the sum is over all permutations $\pi$ of $1, 2, \cdots, n$ that obey
$$\pi(1)<\pi(3)<\cdots<\pi(n-1) \qquad\text{and}\qquad \pi(k)<\pi(k+1) \ \text{ for all } k = 1, 3, 5, \cdots, n-1$$
Another way to write the same thing, while avoiding the ugly constraints, is
$$\int a_{i_1}\cdots a_{i_n}\ d\mu_S(a) = \tfrac{1}{2^{n/2}(n/2)!}\sum_\pi \operatorname{sgn}\pi\ S_{i_{\pi(1)}i_{\pi(2)}}\cdots S_{i_{\pi(n-1)}i_{\pi(n)}}$$
where now the sum is over all permutations $\pi$ of $1, 2, \cdots, n$. The right hand side is precisely the definition of the Pfaffian of the $n\times n$ matrix whose $(\ell,m)$ matrix element is $S_{i_\ell i_m}$. Pfaffians are closely related to determinants. The following Proposition gives their main properties. This Proposition is proven in Appendix B.

Proposition I.18 Let $S = (S_{ij})$ be a skew symmetric matrix of even order $n = 2m$.
a) For all $1\le k\ne\ell\le n$, let $M_{k\ell}$ be the matrix obtained from $S$ by deleting rows $k$ and $\ell$ and columns $k$ and $\ell$. Then,
$$\operatorname{Pf}(S) = \sum_{\ell=1}^{n}\operatorname{sgn}(k-\ell)\,(-1)^{k+\ell}\,S_{k\ell}\operatorname{Pf}\big(M_{k\ell}\big)$$
In particular,
$$\operatorname{Pf}(S) = \sum_{\ell=2}^{n}(-1)^{\ell}\,S_{1\ell}\operatorname{Pf}\big(M_{1\ell}\big)$$
b) If
$$S = \begin{pmatrix} 0 & C \\ -C^t & 0 \end{pmatrix}$$
where $C$ is a complex $m\times m$ matrix, then
$$\operatorname{Pf}(S) = (-1)^{\frac12 m(m-1)}\det(C)$$
c) For any $n\times n$ matrix $B$,
$$\operatorname{Pf}\big(B^tSB\big) = \det(B)\operatorname{Pf}(S)$$
d) $\ \operatorname{Pf}(S)^2 = \det(S)$

Using these properties of Pfaffians (in particular the "expansion along the first row" of Proposition I.18.a) we can easily verify that our conjecture (I.1) was correct.

Proposition I.19 For all even $n\ge2$ and all $1\le i_1,\cdots,i_n\le D$,
$$\int a_{i_1}\cdots a_{i_n}\ d\mu_S(a) = \operatorname{Pf}\big[S_{i_ki_\ell}\big]_{1\le k,\ell\le n}$$

Proof: The proof is by induction on $n$. The statement of the proposition has already been verified for $n = 2$. Integrating by parts,
$$\int a_{i_1}\cdots a_{i_n}\ d\mu_S(a) = \sum_{\ell=1}^{D}S_{i_1\ell}\int\frac{\partial}{\partial a_\ell}\big(a_{i_2}\cdots a_{i_n}\big)\ d\mu_S(a) = \sum_{j=2}^{n}(-1)^j\,S_{i_1i_j}\int a_{i_2}\cdots a_{i_{j-1}}a_{i_{j+1}}\cdots a_{i_n}\ d\mu_S(a)$$
Our induction hypothesis and Proposition I.18.a now imply
$$\int a_{i_1}\cdots a_{i_n}\ d\mu_S(a) = \operatorname{Pf}\big[S_{i_ki_\ell}\big]_{1\le k,\ell\le n}$$

Hence we may use the following as an alternative to Definition I.15.

Definition I.20 Let $S$ be a superalgebra, $\mathcal{V}$ be a finite dimensional complex vector space and $S$ a skew symmetric bilinear form on $\mathcal{V}$. Then the Grassmann Gaussian integral on $\bigwedge_S\mathcal{V}$ with covariance $S$ is the $S$-linear map
$$f(a)\in{\bigwedge}_S\mathcal{V} \;\mapsto\; \int f(a)\ d\mu_S(a)\in S$$
that is determined as follows. Choose a basis $\big\{a_i \mid 1\le i\le D\big\}$ for $\mathcal{V}$. Then
$$\int a_{i_1}a_{i_2}\cdots a_{i_n}\ d\mu_S(a) = \operatorname{Pf}\big[S_{i_ki_\ell}\big]_{1\le k,\ell\le n}$$
where $S_{ij} = S(a_i, a_j)$.
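With moments given by Definition I.20, the four point function computed earlier by integration by parts, $S_{i_1i_2}S_{i_3i_4} - S_{i_1i_3}S_{i_2i_4} + S_{i_1i_4}S_{i_2i_3}$, must agree with $\operatorname{Pf}\big[S_{i_ki_\ell}\big]$. A numeric spot check (illustrative code; the recursive `pfaffian` below implements the row expansion of Proposition I.18.a and its name is invented):

```python
# Check that the Wick sum for the 4-point function equals the Pfaffian of
# the 4 x 4 submatrix of covariances.
import numpy as np

def pfaffian(S):
    """Recursive Pfaffian via expansion along the first row."""
    n = S.shape[0]
    if n % 2 == 1:
        return 0.0
    if n == 0:
        return 1.0
    total = 0.0
    for l in range(1, n):
        keep = [k for k in range(n) if k not in (0, l)]
        total += (-1) ** (l + 1) * S[0, l] * pfaffian(S[np.ix_(keep, keep)])
    return total

rng = np.random.default_rng(2)
A = rng.standard_normal((8, 8))
S = A - A.T                            # random skew symmetric covariance
i = [6, 0, 3, 5]                       # i_1, ..., i_4 (0-indexed)

wick = (S[i[0], i[1]] * S[i[2], i[3]]
        - S[i[0], i[2]] * S[i[1], i[3]]
        + S[i[0], i[3]] * S[i[1], i[2]])
print(np.isclose(wick, pfaffian(S[np.ix_(i, i)])))   # True
```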

Proposition I.21 Let $S$ and $T$ be skew symmetric matrices of order $D$. Then
$$\int f(a)\ d\mu_{S+T}(a) = \int\Big[\int f(a+b)\ d\mu_S(a)\Big]\ d\mu_T(b)$$

Proof: Let $\mathcal{V}$ be a $D$ dimensional vector space with basis $\{a_1,\dots,a_D\}$. Let $\mathcal{V}'$ and $\mathcal{V}''$ be two more copies of $\mathcal{V}$ with bases $\{b_1,\dots,b_D\}$ and $\{c_1,\dots,c_D\}$ respectively. It suffices to consider $f(a) = e^{\sum_ic_ia_i}$. Viewing $f(a)$ as an element of $\bigwedge_{\wedge\mathcal{V}''}\mathcal{V}$,
$$\int f(a)\ d\mu_{S+T}(a) = \int e^{\sum_ic_ia_i}\ d\mu_{S+T}(a) = e^{-\frac12\sum_{ij}c_i(S_{ij}+T_{ij})c_j}$$
Viewing $f(a+b) = e^{\sum_ic_i(a_i+b_i)}$ as an element of $\bigwedge_{\wedge(\mathcal{V}'+\mathcal{V}'')}\mathcal{V}$,
$$\int f(a+b)\ d\mu_S(a) = \int e^{\sum_ic_i(a_i+b_i)}\ d\mu_S(a) = e^{\sum_ic_ib_i}\int e^{\sum_ic_ia_i}\ d\mu_S(a) = e^{\sum_ic_ib_i}\,e^{-\frac12\sum_{ij}c_iS_{ij}c_j}$$
Now viewing $e^{\sum_ic_ib_i}\,e^{-\frac12\sum_{ij}c_iS_{ij}c_j}$ as an element of $\bigwedge_{\wedge\mathcal{V}''}\mathcal{V}'$,
$$\int\Big[\int f(a+b)\ d\mu_S(a)\Big]\ d\mu_T(b) = e^{-\frac12\sum_{ij}c_iS_{ij}c_j}\int e^{\sum_ic_ib_i}\ d\mu_T(b) = e^{-\frac12\sum_{ij}c_iS_{ij}c_j}\,e^{-\frac12\sum_{ij}c_iT_{ij}c_j} = e^{-\frac12\sum_{ij}c_i(S_{ij}+T_{ij})c_j}$$
as desired.

Problem I.11 Let $\mathcal{V}$ and $\mathcal{V}'$ be vector spaces with bases $\{a_1,\dots,a_D\}$ and $\{b_1,\dots,b_{D'}\}$ respectively. Let $S$ and $T$ be $D\times D$ and $D'\times D'$ skew symmetric matrices. Prove that
$$\int\Big[\int f(a,b)\ d\mu_S(a)\Big]\ d\mu_T(b) = \int\Big[\int f(a,b)\ d\mu_T(b)\Big]\ d\mu_S(a)$$

Problem I.12 Let $\mathcal{V}$ be a $D$ dimensional vector space with basis $\{a_1,\dots,a_D\}$ and $\mathcal{V}'$ be a second copy of $\mathcal{V}$ with basis $\{c_1,\dots,c_D\}$. Let $S$ be a $D\times D$ skew symmetric matrix. Prove that
$$\int e^{\sum_ic_ia_i}\,f(a)\ d\mu_S(a) = e^{-\frac12\sum_{ij}c_iS_{ij}c_j}\int f(a - Sc)\ d\mu_S(a)$$
Here $\big(Sc\big)_i = \sum_jS_{ij}c_j$.

I.5 Grassmann Integrals and Fermionic Quantum Field Theories

In a quantum mechanical model, the set of possible states of the system form (the rays in) a Hilbert space $\mathcal{H}$ and the time evolution of the system is determined by a self-adjoint operator $H$ on $\mathcal{H}$, called the Hamiltonian. We shall denote by $\Omega$ a ground state of $H$ (eigenvector of $H$ of lowest eigenvalue). In a quantum field theory, there is additional structure. There is a special family, $\big\{\varphi(\mathbf{x},\sigma) \mid \mathbf{x}\in\mathbb{R}^d,\ \sigma\in S\big\}$, of operators on $\mathcal{H}$, called annihilation operators. Here $d$ is the dimension of space and $S$ is a finite set. You should think of $\varphi(\mathbf{x},\sigma)$ as destroying a particle of spin $\sigma$ at $\mathbf{x}$. The adjoints, $\big\{\varphi^\dagger(\mathbf{x},\sigma) \mid \mathbf{x}\in\mathbb{R}^d,\ \sigma\in S\big\}$, of these operators are called creation operators. You should think of $\varphi^\dagger(\mathbf{x},\sigma)$ as creating a particle of spin $\sigma$ at $\mathbf{x}$. All states in $\mathcal{H}$ can be expressed as linear combinations of products of annihilation and creation operators applied to $\Omega$. The time evolved annihilation and creation operators are
$$e^{iHt}\varphi(\mathbf{x},\sigma)e^{-iHt} \qquad\qquad e^{iHt}\varphi^\dagger(\mathbf{x},\sigma)e^{-iHt}$$

If you are primarily interested in thermodynamic quantities, you should analytically continue these operators to imaginary time, $t = -i\tau$:
$$e^{H\tau}\varphi(\mathbf{x},\sigma)e^{-H\tau} \qquad\qquad e^{H\tau}\varphi^\dagger(\mathbf{x},\sigma)e^{-H\tau}$$
because the density matrix for temperature $T$ is $e^{-\beta H}$ where $\beta = \frac{1}{kT}$. The imaginary time operators (or rather, various inner products constructed from them) are also easier to deal with mathematically rigorously than the corresponding real time inner products. It has turned out tactically advantageous to attack the real time operators by first concentrating on imaginary time and then analytically continuing back.

If you are interested in grand canonical ensembles (thermodynamics in which you adjust the average energy of the system through $\beta$ and the average density of the system through the chemical potential $\mu$) you replace the Hamiltonian $H$ by $K = H - \mu N$, where $N$ is the number operator and $\mu$ is the chemical potential. This brings us to
$$\varphi(\tau,\mathbf{x},\sigma) = e^{K\tau}\varphi(\mathbf{x},\sigma)e^{-K\tau} \qquad\qquad \bar\varphi(\tau,\mathbf{x},\sigma) = e^{K\tau}\varphi^\dagger(\mathbf{x},\sigma)e^{-K\tau} \tag{I.2}$$

Note that $\bar\varphi(\tau,\mathbf{x},\sigma)$ is neither the complex conjugate, nor adjoint, of $\varphi(\tau,\mathbf{x},\sigma)$. In any quantum mechanical model, the quantities you measure (called observables) are represented by operators $\mathcal{O}$ on $\mathcal{H}$. The expected value of the observable $\mathcal{O}$ when the system is in state $\Omega$ is $\langle\Omega, \mathcal{O}\,\Omega\rangle$, where $\langle\,\cdot\,,\,\cdot\,\rangle$ is the inner product on $\mathcal{H}$. In a quantum field theory, all expected values are determined by inner products of the form
$$\Big\langle \Omega,\ T\prod_{\ell=1}^{p}\varphi^{(-)}(\tau_\ell,\mathbf{x}_\ell,\sigma_\ell)\,\Omega\Big\rangle$$
Here the $\varphi^{(-)}$ signifies that both $\varphi$ and $\bar\varphi$ may appear in the product. The symbol $T$ designates the "time ordering" operator, defined (for fermionic models) by
$$T\,\varphi^{(-)}(\tau_1,\mathbf{x}_1,\sigma_1)\cdots\varphi^{(-)}(\tau_p,\mathbf{x}_p,\sigma_p) = \operatorname{sgn}\pi\ \varphi^{(-)}(\tau_{\pi(1)},\mathbf{x}_{\pi(1)},\sigma_{\pi(1)})\cdots\varphi^{(-)}(\tau_{\pi(p)},\mathbf{x}_{\pi(p)},\sigma_{\pi(p)})$$
where $\pi$ is a permutation that obeys $\tau_{\pi(i)}\ge\tau_{\pi(i+1)}$ for all $1\le i < p$. There is also a tie breaking rule to deal with the case when $\tau_i = \tau_j$ for some $i\ne j$, but it doesn't interest us here. Observe that
$$\varphi^{(-)}(\tau_{\pi(1)},\mathbf{x}_{\pi(1)},\sigma_{\pi(1)})\cdots\varphi^{(-)}(\tau_{\pi(p)},\mathbf{x}_{\pi(p)},\sigma_{\pi(p)}) = e^{K\tau_{\pi(1)}}\varphi^{(\dagger)}(\mathbf{x}_{\pi(1)},\sigma_{\pi(1)})\,e^{K(\tau_{\pi(2)}-\tau_{\pi(1)})}\varphi^{(\dagger)}\cdots e^{K(\tau_{\pi(p)}-\tau_{\pi(p-1)})}\varphi^{(\dagger)}(\mathbf{x}_{\pi(p)},\sigma_{\pi(p)})\,e^{-K\tau_{\pi(p)}}$$

The time ordering is chosen so that, when you sub in (I.2), and exploit the fact that $\Omega$ is an eigenvector of $K$, every $e^{K(\tau_{\pi(i)}-\tau_{\pi(i-1)})}$ that appears in $\big\langle\Omega, T\prod_{\ell=1}^{p}\varphi^{(-)}(\tau_\ell,\mathbf{x}_\ell,\sigma_\ell)\,\Omega\big\rangle$ has $\tau_{\pi(i)} - \tau_{\pi(i-1)}\le0$. Time ordering is introduced to ensure that the inner product is well-defined: the operator $K$ is bounded from below, but not from above. Thanks to the $\operatorname{sgn}\pi$ in the definition of $T$, $\big\langle\Omega, T\prod_{\ell=1}^{p}\varphi^{(-)}(\tau_\ell,\mathbf{x}_\ell,\sigma_\ell)\,\Omega\big\rangle$ is antisymmetric under permutations of the $\varphi^{(-)}(\tau_\ell,\mathbf{x}_\ell,\sigma_\ell)$'s.

Now comes the connection to Grassmann integrals. Let $x = (\tau,\mathbf{x})$. Please forget, for the time being, that $x$ does not run over a finite set. Let $\mathcal{V}$ be a vector space with basis $\big\{\psi_{x,\sigma}, \bar\psi_{x,\sigma}\big\}$. Note, once again, that $\bar\psi_{x,\sigma}$ is NOT the complex conjugate of $\psi_{x,\sigma}$. It is just another vector that is totally independent of $\psi_{x,\sigma}$. Then, formally, it turns out that
$$(-1)^p\Big\langle\Omega,\ T\prod_{\ell=1}^{p}\varphi^{(-)}(x_\ell,\sigma_\ell)\,\Omega\Big\rangle = \frac{\int\prod_{\ell=1}^{p}\psi^{(-)}_{x_\ell,\sigma_\ell}\ e^{\mathcal{A}(\psi,\bar\psi)}\ \prod_{x,\sigma}d\psi_{x,\sigma}\,d\bar\psi_{x,\sigma}}{\int e^{\mathcal{A}(\psi,\bar\psi)}\ \prod_{x,\sigma}d\psi_{x,\sigma}\,d\bar\psi_{x,\sigma}}$$
The exponent $\mathcal{A}(\psi,\bar\psi)$ is called the action and is determined by the Hamiltonian $H$. A typical action of interest is that corresponding to a gas of fermions (e.g. electrons), of strictly positive density, interacting through a two-body potential $u(\mathbf{x}-\mathbf{y})$. It is
$$\mathcal{A}(\psi,\bar\psi) = \int\frac{d^{d+1}k}{(2\pi)^{d+1}}\sum_{\sigma\in S}\Big(ik_0 - \tfrac{\mathbf{k}^2}{2m} + \mu\Big)\bar\psi_{k,\sigma}\psi_{k,\sigma} \;-\; \frac{\lambda}{2}\int\prod_{i=1}^{4}\frac{d^{d+1}k_i}{(2\pi)^{d+1}}\ (2\pi)^{d+1}\delta(k_1{+}k_2{-}k_3{-}k_4)\sum_{\sigma,\sigma'\in S}\bar\psi_{k_1,\sigma}\psi_{k_3,\sigma}\,\hat u(k_1-k_3)\,\bar\psi_{k_2,\sigma'}\psi_{k_4,\sigma'}$$
Here $\psi_{k,\sigma}$ is the Fourier transform of $\psi_{x,\sigma}$ and $\hat u(\mathbf{k})$ is the Fourier transform of $u(\mathbf{x})$. The zero component $k_0$ of $k$ is the dual variable to $\tau$ and is thought of as an energy; the final $d$ components $\mathbf{k}$ are the dual variables to $\mathbf{x}$ and are thought of as momenta. So, in $\mathcal{A}$, $\tfrac{\mathbf{k}^2}{2m}$ is the kinetic energy of a particle and the delta function $\delta(k_1+k_2-k_3-k_4)$ enforces conservation of energy/momentum. As above, $\mu$ is the chemical potential, which controls the density of the gas, and $\hat u$ is the Fourier transform of the two-body potential. More generally, when the fermion gas is subject to a periodic potential due to a crystal lattice, the quadratic term in the action is replaced by

$$\int\frac{d^{d+1}k}{(2\pi)^{d+1}}\sum_{\sigma\in S}\big(ik_0 - e(\mathbf{k})\big)\,\bar\psi_{k,\sigma}\psi_{k,\sigma}$$
where $e(\mathbf{k})$ is the dispersion relation minus the chemical potential $\mu$.

I know that quite a few people are squeamish about dealing with infinite dimensional Grassmann algebras and integrals. Infinite dimensional Grassmann algebras, per se, are no big deal. See Appendix A. It is true that the Grassmann "Cartesian measure" $\prod_{x,\sigma}d\psi_{x,\sigma}\,d\bar\psi_{x,\sigma}$ does not make much sense when the dimension is infinite. But this problem is easily dealt with: combine the quadratic part of the action $\mathcal{A}$ with $\prod_{x,\sigma}d\psi_{x,\sigma}\,d\bar\psi_{x,\sigma}$ to form a Gaussian integral. Formally,
$$(-1)^p\Big\langle\Omega,\ T\prod_{\ell=1}^{p}\varphi^{(-)}(\tau_\ell,\mathbf{x}_\ell,\sigma_\ell)\,\Omega\Big\rangle = \frac{\int\prod_{\ell=1}^{p}\psi^{(-)}_{x_\ell,\sigma_\ell}\ e^{\mathcal{A}(\psi,\bar\psi)}\ \prod_{x,\sigma}d\psi_{x,\sigma}\,d\bar\psi_{x,\sigma}}{\int e^{\mathcal{A}(\psi,\bar\psi)}\ \prod_{x,\sigma}d\psi_{x,\sigma}\,d\bar\psi_{x,\sigma}} = \frac{\int\prod_{\ell=1}^{p}\psi^{(-)}_{x_\ell,\sigma_\ell}\ e^{\mathcal{W}(\psi,\bar\psi)}\ d\mu_S(\psi,\bar\psi)}{\int e^{\mathcal{W}(\psi,\bar\psi)}\ d\mu_S(\psi,\bar\psi)} \tag{I.3}$$
where
$$\mathcal{W}(\psi,\bar\psi) = -\frac{\lambda}{2}\int\prod_{i=1}^{4}\frac{d^{d+1}k_i}{(2\pi)^{d+1}}\ (2\pi)^{d+1}\delta(k_1{+}k_2{-}k_3{-}k_4)\sum_{\sigma,\sigma'\in S}\bar\psi_{k_1,\sigma}\psi_{k_3,\sigma}\,\hat u(k_1-k_3)\,\bar\psi_{k_2,\sigma'}\psi_{k_4,\sigma'}$$
and $\int\,\cdot\ d\mu_S(\psi,\bar\psi)$ is the Grassmann Gaussian integral with covariance determined by
$$\int\psi_{k,\sigma}\psi_{p,\sigma'}\ d\mu_S(\psi,\bar\psi) = 0 \qquad\qquad \int\bar\psi_{k,\sigma}\bar\psi_{p,\sigma'}\ d\mu_S(\psi,\bar\psi) = 0$$
$$\int\psi_{k,\sigma}\bar\psi_{p,\sigma'}\ d\mu_S(\psi,\bar\psi) = (2\pi)^{d+1}\delta(k-p)\,\frac{\delta_{\sigma,\sigma'}}{ik_0 - e(\mathbf{k})} \qquad \int\bar\psi_{k,\sigma}\psi_{p,\sigma'}\ d\mu_S(\psi,\bar\psi) = -(2\pi)^{d+1}\delta(k-p)\,\frac{\delta_{\sigma,\sigma'}}{ik_0 - e(\mathbf{k})} \tag{I.4}$$

$$\int \bar\psi_{i_n}\cdots\bar\psi_{i_1}\, \psi_{j_1}\cdots\psi_{j_m}\, d\mu_A(\psi,\bar\psi) = \sum_{\ell=1}^m (-1)^{\ell+1} A_{i_1 j_\ell} \int \bar\psi_{i_n}\cdots\bar\psi_{i_2}\, \psi_{j_1}\cdots \not\psi_{j_\ell} \cdots\psi_{j_m}\, d\mu_A(\psi,\bar\psi)$$

Here, the $\not\psi_{j_\ell}$ signifies that the factor $\psi_{j_\ell}$ is omitted from the integrand.

b) Prove that, if $n \neq m$,

$$\int \bar\psi_{i_n}\cdots\bar\psi_{i_1}\, \psi_{j_1}\cdots\psi_{j_m}\, d\mu_A(\psi,\bar\psi) = 0$$

c) Prove that

$$\int \bar\psi_{i_n}\cdots\bar\psi_{i_1}\, \psi_{j_1}\cdots\psi_{j_n}\, d\mu_A(\psi,\bar\psi) = \det\big[A_{i_k j_\ell}\big]_{1\le k,\ell\le n}$$

d) Let $\mathcal{V}'$ be a second copy of $\mathcal{V}$ with basis $\{\zeta_1,\cdots,\zeta_r,\bar\zeta_1,\cdots,\bar\zeta_r\}$. View $e^{\Sigma_i(\bar\zeta_i\psi_i + \bar\psi_i\zeta_i)}$ as an element of $\bigwedge(\mathcal{V}'\oplus\mathcal{V})$. Prove that

$$\int e^{\Sigma_i(\bar\zeta_i\psi_i + \bar\psi_i\zeta_i)}\, d\mu_A(\psi,\bar\psi) = e^{\Sigma_{i,j}\, \bar\zeta_i A_{ij} \zeta_j}$$
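For $n = m$, part (a) combined with the determinant formula of part (c) is just Laplace (cofactor) expansion of $\det\big[A_{i_k j_\ell}\big]$ along the row belonging to $i_1$. That purely matrix-theoretic fact is easy to check numerically; the sketch below is my own throwaway code (the matrix entries are arbitrary, no Grassmann machinery involved):

```python
from itertools import permutations

def det(M):
    """Determinant via the Leibniz formula; fine for the tiny matrices used here."""
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        # parity of the permutation = parity of its inversion count
        inv = sum(1 for a in range(n) for b in range(a + 1, n) if p[a] > p[b])
        prod = 1
        for r in range(n):
            prod *= M[r][p[r]]
        total += (-1 if inv % 2 else 1) * prod
    return total

def minor(M, row, col):
    return [[M[r][c] for c in range(len(M)) if c != col]
            for r in range(len(M)) if r != row]

# A 3x3 block [A_{i_k j_l}] with arbitrary entries
A = [[2.0, -1.0, 0.5],
     [1.5, 0.25, -2.0],
     [-0.75, 3.0, 1.0]]

# Part (a) with n = m = 3 read through part (c): cofactor expansion along the
# first row, det A = sum_l (-1)^(l+1) A_{1l} det(minor), with 1-based l
lhs = det(A)
rhs = sum((-1) ** l * A[0][l] * det(minor(A, 0, l)) for l in range(3))
assert abs(lhs - rhs) < 1e-12
```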

To save writing, let's rename the generators of the Grassmann algebra from $\{\psi_{k,\sigma},\bar\psi_{k,\sigma}\}$ to $\{a_i\}$. In this new notation, the right hand side of (I.3) becomes

$$G(i_1,\cdots,i_p) = \frac{1}{Z} \int \prod_{\ell=1}^p a_{i_\ell}\, e^{W(a)}\, d\mu_S(a) \qquad\text{where}\quad Z = \int e^{W(a)}\, d\mu_S(a)$$

These are called the Euclidean Green's functions or Schwinger functions. They determine all expectation values in the model. Let $\{c_i\}$ be the basis of a second copy of the vector space with basis $\{a_i\}$. Then all expectation values in the model are also determined by

$$\mathcal{S}(c) = \frac{1}{Z} \int e^{\Sigma_i c_i a_i}\, e^{W(a)}\, d\mu_S(a) \qquad (I.5)$$

(called the generator of the Euclidean Green's functions) or equivalently, by

$$\mathcal{C}(c) = \log \frac{1}{Z} \int e^{\Sigma_i c_i a_i}\, e^{W(a)}\, d\mu_S(a) \qquad (I.6)$$

(called the generator of the connected Green's functions) or equivalently (see Problem I.14, below), by

$$\mathcal{G}(c) = \log \frac{1}{Z} \int e^{W(a+c)}\, d\mu_S(a) \qquad (I.7)$$

(called the generator of the connected, freely amputated Green's functions).

Problem I.14 Prove that

$$\mathcal{C}(c) = -\tfrac{1}{2} \sum_{ij} c_i S_{ij} c_j + \mathcal{G}(Sc)$$

where $(Sc)_i = \sum_j S_{ij} c_j$.

If $S$ were really nice, the Grassmann Gaussian integrals of (I.5–I.7) would be very easy to define rigorously, even though the Grassmann algebra $\bigwedge\mathcal{V}$ is infinite dimensional. The real problem is that $S$ is not really nice. However, one can write $S$ as the limit of really nice covariances. So the problem of making sense of the right hand side of, say, (I.7) may be expressed as the problem of controlling the limit of a sequence of well-defined quantities. The renormalization group is a tool that is used to control that limit. In the version of the renormalization group that I will use, the covariance $S$ is written as a sum

$$S = \sum_{j=1}^\infty S^{(j)}$$

with each $S^{(j)}$ really nice. Let

$$S^{(\le J)} = \sum_{j=1}^J S^{(j)}$$

and

$$\mathcal{G}_J(c) = \log \frac{1}{Z_J} \int e^{W(a+c)}\, d\mu_{S^{(\le J)}}(a) \qquad\text{where}\quad Z_J = \int e^{W(a)}\, d\mu_{S^{(\le J)}}(a)$$

(The normalization constant $Z_J$ is chosen so that $\mathcal{G}_J(0) = 0$.) We have to control the limit of $\mathcal{G}_J(c)$ as $J \to \infty$. We have rigged the definitions so that it is very easy to exhibit the relationship between $\mathcal{G}_J(c)$ and $\mathcal{G}_{J+1}(c)$:

$$\begin{aligned}
\mathcal{G}_{J+1}(c) &= \log \frac{1}{Z_{J+1}} \int e^{W(b+c)}\, d\mu_{S^{(\le J+1)}}(b) \\
&= \log \frac{1}{Z_{J+1}} \int e^{W(b+c)}\, d\mu_{S^{(\le J)}+S^{(J+1)}}(b) \\
&= \log \frac{1}{Z_{J+1}} \int \Big[\int e^{W(a+b+c)}\, d\mu_{S^{(\le J)}}(b)\Big]\, d\mu_{S^{(J+1)}}(a) \qquad\text{by Proposition I.21} \\
&= \log \frac{Z_J}{Z_{J+1}} \int e^{\mathcal{G}_J(c+a)}\, d\mu_{S^{(J+1)}}(a)
\end{aligned}$$

Problem I.15 We have normalized $\mathcal{G}_{J+1}$ so that $\mathcal{G}_{J+1}(0) = 0$. So the ratio $\frac{Z_J}{Z_{J+1}}$ in

$$\mathcal{G}_{J+1}(c) = \log \frac{Z_J}{Z_{J+1}} \int e^{\mathcal{G}_J(c+a)}\, d\mu_{S^{(J+1)}}(a)$$

had better obey

$$\frac{Z_{J+1}}{Z_J} = \int e^{\mathcal{G}_J(a)}\, d\mu_{S^{(J+1)}}(a)$$

Verify by direct computation that this is the case.

I.6 The Renormalization Group

By Problem I.15,

$$\mathcal{G}_{J+1}(c) = \Omega_{S^{(J+1)}}(\mathcal{G}_J)(c) \qquad (I.8)$$

where the "renormalization group map" $\Omega$ is defined in

Definition I.22 Let $\mathcal{V}$ be a vector space with basis $\{c_i\}$. Choose a second copy $\mathcal{V}'$ of $\mathcal{V}$ and denote by $a_i$ the basis element of $\mathcal{V}'$ corresponding to the element $c_i \in \mathcal{V}$. Let $S$ be a skew symmetric bilinear form on $\mathcal{V}'$. The renormalization group map $\Omega_S : \bigwedge_{\mathbb{S}}\mathcal{V} \to \bigwedge_{\mathbb{S}}\mathcal{V}$ (with coefficients in the superalgebra $\mathbb{S}$) is defined by

$$\Omega_S(W)(c) = \log \frac{1}{Z_{W,S}} \int e^{W(c+a)}\, d\mu_S(a) \qquad\text{where}\quad Z_{W,S} = \int e^{W(a)}\, d\mu_S(a)$$

for all $W \in \bigwedge_{\mathbb{S}}\mathcal{V}$ for which $\int e^{W(a)}\, d\mu_S(a)$ is invertible in $\mathbb{S}$.

Problem I.16 Prove that

$$\mathcal{G}_J = \Omega_{S^{(J)}} \circ \Omega_{S^{(J-1)}} \circ \cdots \circ \Omega_{S^{(1)}}(W) = \Omega_{S^{(1)}} \circ \Omega_{S^{(2)}} \circ \cdots \circ \Omega_{S^{(J)}}(W)$$

For the rest of this section we restrict to the coefficient superalgebra $\mathbb{S} = \mathbb{C}$. Observe that we have normalized the renormalization group map so that $\Omega_S(W)(0) = 0$. Define the subspaces

$$\textstyle\bigwedge^{(>0)}\mathcal{V} = \bigoplus_{n=1}^\infty \bigwedge^n\mathcal{V} \qquad\qquad \bigwedge^{(e)}\mathcal{V} = \bigoplus_{\substack{n=2\\ n\ \mathrm{even}}}^\infty \bigwedge^n\mathcal{V}$$

of $\bigwedge\mathcal{V}$ and the projection

$$P^{(>0)} : \textstyle\bigwedge\mathcal{V} \longrightarrow \bigwedge^{(>0)}\mathcal{V} \qquad\qquad W(a) \longmapsto W(a) - W(0)$$

Then

Lemma I.23

i) If $Z_{W,S} \neq 0$, then $Z_{P^{(>0)}W,S} \neq 0$ and

$$\Omega_S(W) = \Omega_S(P^{(>0)}W) \in \textstyle\bigwedge^{(>0)}\mathcal{V}$$

ii) If $Z_{W,S} \neq 0$ and $W \in \bigwedge^{(e)}\mathcal{V}$, then $\Omega_S(W) \in \bigwedge^{(e)}\mathcal{V}$.

Proof: i) is an immediate consequence of

$$Z_{P^{(>0)}W,S} = e^{-W(0)} \int e^{W(a)}\, d\mu_S(a) \qquad\qquad \int e^{(P^{(>0)}W)(c+a)}\, d\mu_S(a) = e^{-W(0)} \int e^{W(c+a)}\, d\mu_S(a)$$

ii) Observe that $W \in \bigwedge\mathcal{V}$ is in $\bigwedge^{(e)}\mathcal{V}$ if and only if $W(0) = 0$ and $W(-a) = W(a)$. Suppose that $Z_{W,S} \neq 0$ and set $W' = \Omega_S(W)$. We already know that $W'(0) = 0$. Suppose also that $W(-a) = W(a)$. Then

$$W'(-c) = \ln \frac{\int e^{W(-c+a)}\, d\mu_S(a)}{\int e^{W(a)}\, d\mu_S(a)} = \ln \frac{\int e^{W(c-a)}\, d\mu_S(a)}{\int e^{W(a)}\, d\mu_S(a)}$$

As $\int a_I\, d\mu_S(a) = 0$ for all $|I|$ odd, we have

$$\int f(-a)\, d\mu_S(a) = \int f(a)\, d\mu_S(a)$$

for all $f(a) \in \bigwedge\mathcal{V}$ and

$$\ln \frac{\int e^{W(c-a)}\, d\mu_S(a)}{\int e^{W(a)}\, d\mu_S(a)} = \ln \frac{\int e^{W(c+a)}\, d\mu_S(a)}{\int e^{W(a)}\, d\mu_S(a)} = W'(c)$$

Hence $W'(-c) = W'(c)$.

Example I.24 If $S = 0$, then $\Omega_{S=0}(W) = W$ for all $W \in \bigwedge^{(>0)}\mathcal{V}$. To see this, observe that $W(c+a) = W(c) + \tilde W(c,a)$ with $\tilde W(c,a)$ a polynomial in $c$ and $a$ with every term having degree at least one in $a$. When $S = 0$, $\int a_I\, d\mu_S(a) = 0$ for all $|I| \ge 1$, so that

$$\int W(c+a)^n\, d\mu_0(a) = \int \big(W(c) + \tilde W(c,a)\big)^n\, d\mu_0(a) = W(c)^n$$

and

$$\int e^{W(c+a)}\, d\mu_0(a) = e^{W(c)}$$

In particular

$$\int e^{W(a)}\, d\mu_0(a) = e^{W(0)} = 1$$

Combining the last two equations,

$$\ln \frac{\int e^{W(c+a)}\, d\mu_0(a)}{\int e^{W(a)}\, d\mu_0(a)} = \ln e^{W(c)} = W(c)$$

We conclude that

$$S = 0,\ W(0) = 0 \implies \Omega_S(W) = W$$

Example I.25 Fix any $1 \le i,j \le D$ and $\lambda \in \mathbb{C}$ and let $W(a) = \lambda a_i a_j$. Then

$$\begin{aligned}
\int e^{\lambda(c_i+a_i)(c_j+a_j)}\, d\mu_S(a) &= e^{\lambda c_i c_j} \int e^{\lambda a_i c_j}\, e^{\lambda c_i a_j}\, e^{\lambda a_i a_j}\, d\mu_S(a) \\
&= e^{\lambda c_i c_j} \int \big(1 + \lambda a_i c_j\big)\big(1 + \lambda c_i a_j\big)\big(1 + \lambda a_i a_j\big)\, d\mu_S(a) \\
&= e^{\lambda c_i c_j} \int \big[1 + \lambda a_i c_j + \lambda c_i a_j + \lambda a_i a_j + \lambda^2 a_i c_j c_i a_j\big]\, d\mu_S(a) \\
&= e^{\lambda c_i c_j} \big[1 + \lambda S_{ij} + \lambda^2 c_j c_i S_{ij}\big]
\end{aligned}$$

Hence, if $\lambda \neq -\frac{1}{S_{ij}}$, $Z_{W,S} = 1 + \lambda S_{ij}$ is nonzero and

$$\frac{\int e^{W(c+a)}\, d\mu_S(a)}{\int e^{W(a)}\, d\mu_S(a)} = \frac{e^{\lambda c_i c_j}\big[1 + \lambda S_{ij} + \lambda^2 c_j c_i S_{ij}\big]}{1 + \lambda S_{ij}} = e^{\lambda c_i c_j}\Big[1 + \tfrac{\lambda^2 S_{ij}}{1+\lambda S_{ij}}\, c_j c_i\Big] = e^{\lambda c_i c_j - \frac{\lambda^2 S_{ij}}{1+\lambda S_{ij}} c_i c_j} = e^{\frac{\lambda}{1+\lambda S_{ij}} c_i c_j}$$

We conclude that

$$W(a) = \lambda a_i a_j,\ \lambda \neq -\tfrac{1}{S_{ij}} \implies \Omega_S(W)(c) = \tfrac{\lambda}{1+\lambda S_{ij}}\, c_i c_j$$

In particular $Z_{0,S} = 1$ and $\Omega_S(0) = 0$ for all $S$.

Definition I.26 Let $\mathcal{E}$ be any finite dimensional vector space and let $f : \mathcal{E} \to \mathbb{C}$ and $F : \mathcal{E} \to \mathcal{E}$. For each ordered basis $B = (e_1,\cdots,e_D)$ of $\mathcal{E}$ define $\tilde f_B : \mathbb{C}^D \to \mathbb{C}$ and $\tilde F_{B;i} : \mathbb{C}^D \to \mathbb{C}$, $1 \le i \le D$, by

$$f(\lambda_1 e_1 + \cdots + \lambda_D e_D) = \tilde f_B(\lambda_1,\cdots,\lambda_D)$$
$$F(\lambda_1 e_1 + \cdots + \lambda_D e_D) = \tilde F_{B;1}(\lambda_1,\cdots,\lambda_D)\, e_1 + \cdots + \tilde F_{B;D}(\lambda_1,\cdots,\lambda_D)\, e_D$$

We say that $f$ is polynomial (rational) if there exists a basis $B$ for which $\tilde f_B(\lambda_1,\cdots,\lambda_D)$ is a polynomial (ratio of two polynomials). We say that $F$ is polynomial (rational) if there exists a basis $B$ for which $\tilde F_{B;i}(\lambda_1,\cdots,\lambda_D)$ is a polynomial (ratio of two polynomials) for all $1 \le i \le D$.

Remark I.27 If $f$ is polynomial (rational) then $\tilde f_B(\lambda_1,\cdots,\lambda_D)$ is a polynomial (ratio of two polynomials) for all bases $B$. If $F$ is polynomial (rational) then $\tilde F_{B;i}(\lambda_1,\cdots,\lambda_D)$ is a polynomial (ratio of two polynomials) for all $1 \le i \le D$ and all bases $B$.

Proposition I.28 Let $\mathcal{V}$ be a finite dimensional vector space and fix any skew symmetric bilinear form $S$ on $\mathcal{V}$. Then

i)
$$Z_{\,\cdot\,,S} : \textstyle\bigwedge^{(>0)}\mathcal{V} \longrightarrow \mathbb{C} \qquad\qquad W(a) \longmapsto \int e^{W(a)}\, d\mu_S(a)$$
is polynomial.

ii)
$$\Omega_S : \textstyle\bigwedge^{(>0)}\mathcal{V} \longrightarrow \bigwedge^{(>0)}\mathcal{V} \qquad\qquad W(a) \longmapsto \ln \frac{\int e^{W(a+c)}\, d\mu_S(c)}{\int e^{W(c)}\, d\mu_S(c)}$$
is rational.

Proof: i) Let $D$ be the dimension of $\mathcal{V}$. If $W(0) = 0$, $e^{W(a)} = \sum_{n=0}^D \frac{1}{n!} W(a)^n$ is polynomial in $W$. Hence, so is $Z_{W,S}$.

ii) As in part (i), $\int e^{W(a+c)}\, d\mu_S(c)$ and $\int e^{W(c)}\, d\mu_S(c)$ are both polynomial. By Example I.25, $\int e^{W(c)}\, d\mu_S(c)$ is not identically zero. Hence

$$\tilde W(a) = \frac{\int e^{W(a+c)}\, d\mu_S(c)}{\int e^{W(c)}\, d\mu_S(c)}$$

is rational and obeys $\tilde W(0) = 1$. Hence

$$\ln \tilde W = \sum_{n=1}^D \tfrac{(-1)^{n-1}}{n} \big(\tilde W - 1\big)^n$$

is polynomial in $\tilde W$ and rational in $W$.

Theorem I.29 Let $\mathcal{V}$ be a finite dimensional vector space.

i) If $S$ and $T$ are any two skew symmetric bilinear forms on $\mathcal{V}$, then $\Omega_S \circ \Omega_T$ is defined as a rational function and

$$\Omega_S \circ \Omega_T = \Omega_{S+T}$$

ii) $\big\{\Omega_S\ \big|\ S \text{ a skew symmetric bilinear form on } \mathcal{V}\big\}$ is an abelian group under composition and is isomorphic to $\mathbb{R}^{D(D-1)/2}$, where $D$ is the dimension of $\mathcal{V}$.

Proof: i) To verify that $\Omega_S \circ \Omega_T$ is defined as a rational function, we just need to check that the range of $\Omega_T$ is not contained in the zero set of $Z_{W,S}$. This is the case because $\Omega_T(0) = 0$, so that $0$ is in the range of $\Omega_T$, and $Z_{0,S} = 1$.

As $Z_{W,T}$, $Z_{\Omega_T(W),S}$ and $Z_{W,S+T}$ are all rational functions of $W$ that are not identically zero,

$$\big\{ W \in \textstyle\bigwedge\mathcal{V}\ \big|\ Z_{W,T} \neq 0,\ Z_{\Omega_T(W),S} \neq 0,\ Z_{W,S+T} \neq 0 \big\}$$

is an open dense subset of $\bigwedge\mathcal{V}$. On this subset

$$\begin{aligned}
\Omega_{S+T}(W) &= \ln \frac{\int e^{W(c+a)}\, d\mu_{S+T}(a)}{\int e^{W(a)}\, d\mu_{S+T}(a)} \\
&= \ln \frac{\int\!\!\int e^{W(a+b+c)}\, d\mu_T(b)\, d\mu_S(a)}{\int\!\!\int e^{W(a+b)}\, d\mu_T(b)\, d\mu_S(a)} \\
&= \ln \frac{\int e^{\Omega_T(W)(a+c)} \big[\int e^{W(b)}\, d\mu_T(b)\big]\, d\mu_S(a)}{\int e^{\Omega_T(W)(a)} \big[\int e^{W(b)}\, d\mu_T(b)\big]\, d\mu_S(a)} \\
&= \ln \frac{\int e^{\Omega_T(W)(a+c)}\, d\mu_S(a)}{\int e^{\Omega_T(W)(a)}\, d\mu_S(a)} \\
&= \Omega_S\big(\Omega_T(W)\big)
\end{aligned}$$

ii) The additive group of $D \times D$ skew symmetric matrices is isomorphic to $\mathbb{R}^{D(D-1)/2}$. By part (i), $S \mapsto \Omega_S$ is a homomorphism from the additive group of $D \times D$ skew symmetric matrices onto $\big\{\Omega_S\ \big|\ S \text{ a } D \times D \text{ skew symmetric matrix}\big\}$. To verify that it is an isomorphism, we just need to verify that the map is one-to-one. Suppose that

$$\Omega_S(W) = \Omega_T(W)$$

for all $W$ with $Z_{W,S} \neq 0$ and $Z_{W,T} \neq 0$. Then, by Example I.25, for each $1 \le i,j \le D$,

$$\frac{\lambda}{1+\lambda S_{ij}} = \frac{\lambda}{1+\lambda T_{ij}}$$

for all $\lambda \neq -\frac{1}{S_{ij}}, -\frac{1}{T_{ij}}$. But this implies that $S_{ij} = T_{ij}$.
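On the quadratic interactions of Example I.25, the group law $\Omega_S \circ \Omega_T = \Omega_{S+T}$ reduces to the statement that the Möbius maps $\lambda \mapsto \lambda/(1+\lambda S)$ compose additively in $S$. That one-line consequence can be checked in exact arithmetic; the values of $\lambda$, $S$, $T$ below are arbitrary:

```python
from fractions import Fraction

def omega(S, lam):
    # Effective coupling of Omega_S(lambda a_i a_j) from Example I.25,
    # valid for lam != -1/S
    return lam / (1 + lam * S)

lam = Fraction(3, 7)
S, T = Fraction(5, 2), Fraction(-4, 9)

# Omega_S(Omega_T(W)) should equal Omega_{S+T}(W) on W = lambda a_i a_j
assert omega(S, omega(T, lam)) == omega(S + T, lam)
# and the order does not matter, as the group is abelian
assert omega(T, omega(S, lam)) == omega(S, omega(T, lam))
```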

I.7 Wick Ordering

Let $\mathcal{V}$ be a $D$-dimensional vector space over $\mathbb{C}$, let $\mathbb{S}$ be a superalgebra and let $S$ be a $D \times D$ skew symmetric matrix. We now introduce a new basis for $\bigwedge_{\mathbb{S}}\mathcal{V}$ that is adapted for use with the Grassmann Gaussian integral with covariance $S$. To this point, we have always selected some basis $\{a_1, \ldots, a_D\}$ for $\mathcal{V}$ and used $\big\{\, a_{i_1} \cdots a_{i_n}\ \big|\ n \ge 0,\ 1 \le i_1 < \cdots < i_n \le D \,\big\}$ as a basis for $\bigwedge_{\mathbb{S}}\mathcal{V}$. The new basis will be denoted

$$\big\{\, {:}a_{i_1} \cdots a_{i_n}{:}\ \big|\ n \ge 0,\ 1 \le i_1 < \cdots < i_n \le D \,\big\}$$

The basis element ${:}a_{i_1} \cdots a_{i_n}{:}$ will be called the Wick ordered product of $a_{i_1} \cdots a_{i_n}$. Let $\mathcal{V}'$ be a second copy of $\mathcal{V}$ with basis $\{b_1, \ldots, b_D\}$. Then ${:}a_{i_1} \cdots a_{i_n}{:}$ is determined by applying $\frac{\partial}{\partial b_{i_1}} \cdots \frac{\partial}{\partial b_{i_n}}$ to both sides of

$${:}e^{\Sigma_i b_i a_i}{:} = e^{\frac{1}{2}\Sigma_{i,j} b_i S_{ij} b_j}\, e^{\Sigma_i b_i a_i}$$

and setting all of the $b_i$'s to zero. This determines Wick ordering as a linear map on $\bigwedge_{\mathbb{S}}\mathcal{V}$. Clearly, the map depends on $S$, even though we have not included any $S$ in the notation. It is easy to see that ${:}1{:} = 1$ and that ${:}a_{i_1} \cdots a_{i_n}{:}$ is
• a polynomial in $a_{i_1}, \cdots, a_{i_n}$ of degree $n$ with degree $n$ term precisely $a_{i_1} \cdots a_{i_n}$
• an even, resp. odd, element of $\bigwedge_{\mathbb{S}}\mathcal{V}$ when $n$ is even, resp. odd.
• antisymmetric under permutations of $i_1, \cdots, i_n$ (because $\frac{\partial}{\partial b_{i_1}} \cdots \frac{\partial}{\partial b_{i_n}}$ is antisymmetric under permutations of $i_1, \cdots, i_n$).
So the linear map $f(a) \mapsto {:}f(a){:}$ is bijective. The easiest way to express $a_{i_1} \cdots a_{i_n}$ in terms of the basis of Wick ordered monomials is to apply $\frac{\partial}{\partial b_{i_1}} \cdots \frac{\partial}{\partial b_{i_n}}$ to both sides of

$$e^{\Sigma_i b_i a_i} = e^{-\frac{1}{2}\Sigma_{i,j} b_i S_{ij} b_j}\, {:}e^{\Sigma_i b_i a_i}{:}$$

and set all of the bi’s to zero.

Example I.30

$${:}a_i{:} = a_i \qquad\qquad a_i = {:}a_i{:}$$
$${:}a_i a_j{:} = a_i a_j - S_{ij} \qquad\qquad a_i a_j = {:}a_i a_j{:} + S_{ij}$$
$${:}a_i a_j a_k{:} = a_i a_j a_k - S_{ij} a_k - S_{ki} a_j - S_{jk} a_i \qquad a_i a_j a_k = {:}a_i a_j a_k{:} + S_{ij}\,{:}a_k{:} + S_{ki}\,{:}a_j{:} + S_{jk}\,{:}a_i{:}$$

By the antisymmetry property mentioned above, ${:}a_{i_1} \cdots a_{i_n}{:}$ vanishes if any two of the indices $i_j$ are the same. However ${:}a_{i_1} \cdots a_{i_n}{:}\ {:}a_{j_1} \cdots a_{j_m}{:}$ need not vanish if one of the $i_k$'s is the same as one of the $j_\ell$'s. For example

$${:}a_i a_j{:}\ {:}a_i a_j{:} = \big(a_i a_j - S_{ij}\big)\big(a_i a_j - S_{ij}\big) = -2 S_{ij}\, a_i a_j + S_{ij}^2$$
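Identities like ${:}a_i a_j{:}\,{:}a_i a_j{:} = -2S_{ij} a_i a_j + S_{ij}^2$ use nothing but anticommutativity, so they can be checked mechanically with a tiny Grassmann-algebra class. The sketch below is my own throwaway implementation (elements are dictionaries mapping sorted tuples of generator indices to coefficients, with signs tracked while sorting), not anything from the notes:

```python
class Grassmann:
    """Element of a finite Grassmann algebra: {sorted index tuple: coeff}."""
    def __init__(self, terms=None):
        self.terms = dict(terms or {})

    @staticmethod
    def gen(i):
        return Grassmann({(i,): 1})

    @staticmethod
    def const(c):
        return Grassmann({(): c})

    def __add__(self, other):
        out = dict(self.terms)
        for mono, c in other.terms.items():
            out[mono] = out.get(mono, 0) + c
        return Grassmann({m: c for m, c in out.items() if c != 0})

    def __mul__(self, other):
        out = {}
        for m1, c1 in self.terms.items():
            for m2, c2 in other.terms.items():
                if set(m1) & set(m2):
                    continue  # repeated generator: product vanishes
                arr = list(m1 + m2)
                sign = 1
                # bubble-sort the indices, tracking the permutation's sign
                for i in range(len(arr)):
                    for j in range(len(arr) - 1 - i):
                        if arr[j] > arr[j + 1]:
                            arr[j], arr[j + 1] = arr[j + 1], arr[j]
                            sign = -sign
                key = tuple(arr)
                out[key] = out.get(key, 0) + sign * c1 * c2
        return Grassmann({m: c for m, c in out.items() if c != 0})

a1, a2 = Grassmann.gen(1), Grassmann.gen(2)
S = 0.5  # the value of S_{12} is arbitrary

wick = a1 * a2 + Grassmann.const(-S)  # :a1 a2: = a1 a2 - S_{12}
square = wick * wick
expect = (a1 * a2) * Grassmann.const(-2 * S) + Grassmann.const(S * S)
assert square.terms == expect.terms
```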

Problem I.17 Prove that

$${:}f(a){:} = \int f(a+b)\, d\mu_{-S}(b) \qquad\qquad f(a) = \int {:}f{:}(a+b)\, d\mu_S(b)$$

Problem I.18 Prove that

$$\frac{\partial}{\partial a_\ell}\, {:}f(a){:} = {:}\frac{\partial}{\partial a_\ell} f(a){:}$$

Problem I.19 Prove that

$$\int {:}g(a)\, a_i{:}\, f(a)\, d\mu_S(a) = \sum_{\ell=1}^D S_{i\ell} \int {:}g(a){:}\, \frac{\partial}{\partial a_\ell} f(a)\, d\mu_S(a)$$

Problem I.20 Prove that

$${:}f{:}(a+b) = {:}f(a+b){:}_a = {:}f(a+b){:}_b$$

Here ${:}\cdot{:}_a$ means Wick ordering of the $a_i$'s and ${:}\cdot{:}_b$ means Wick ordering of the $b_i$'s. Precisely, if $\{a_i\}$, $\{b_i\}$, $\{A_i\}$, $\{B_i\}$ are bases of four vector spaces, all of the same dimension,

$${:}e^{\Sigma_i A_i a_i + \Sigma_i B_i b_i}{:}_a = e^{\Sigma_i B_i b_i}\, {:}e^{\Sigma_i A_i a_i}{:}_a = e^{\frac{1}{2}\Sigma_{ij} A_i S_{ij} A_j}\, e^{\Sigma_i A_i a_i + \Sigma_i B_i b_i}$$
$${:}e^{\Sigma_i A_i a_i + \Sigma_i B_i b_i}{:}_b = e^{\Sigma_i A_i a_i}\, {:}e^{\Sigma_i B_i b_i}{:}_b = e^{\frac{1}{2}\Sigma_{ij} B_i S_{ij} B_j}\, e^{\Sigma_i A_i a_i + \Sigma_i B_i b_i}$$

Problem I.21 Prove that

$${:}f{:}_{S+T}(a+b) = {:}f(a+b){:}_{\substack{a,S\\ b,T}}$$

Here $S$ and $T$ are skew symmetric matrices, ${:}\cdot{:}_{S+T}$ means Wick ordering with respect to $S+T$ and ${:}\cdot{:}_{\substack{a,S\\ b,T}}$ means Wick ordering of the $a_i$'s with respect to $S$ and of the $b_i$'s with respect to $T$.

Proposition I.31 (Orthogonality of Wick Monomials)

a) If $m > 0$,
$$\int {:}a_{i_1} \cdots a_{i_m}{:}\, d\mu_S(a) = 0$$

b) If $m \neq n$,
$$\int {:}a_{i_1} \cdots a_{i_m}{:}\ {:}a_{j_1} \cdots a_{j_n}{:}\, d\mu_S(a) = 0$$

c)
$$\int {:}a_{i_1} \cdots a_{i_m}{:}\ {:}a_{j_m} \cdots a_{j_1}{:}\, d\mu_S(a) = \det\big[S_{i_k j_\ell}\big]_{1 \le k,\ell \le m}$$

Note the order of the indices in ${:}a_{j_m} \cdots a_{j_1}{:}$.

Proof: a), b) Let $\mathcal{V}''$ be a third copy of $\mathcal{V}$ with basis $\{c_1, \ldots, c_D\}$. Then

$$\begin{aligned}
\int {:}e^{\Sigma b_i a_i}{:}\ {:}e^{\Sigma c_i a_i}{:}\, d\mu_S(a) &= e^{\frac{1}{2}\Sigma b_i S_{ij} b_j}\, e^{\frac{1}{2}\Sigma c_i S_{ij} c_j} \int e^{\Sigma b_i a_i}\, e^{\Sigma c_i a_i}\, d\mu_S(a) \\
&= e^{\frac{1}{2}\Sigma b_i S_{ij} b_j}\, e^{\frac{1}{2}\Sigma c_i S_{ij} c_j} \int e^{\Sigma (b_i+c_i) a_i}\, d\mu_S(a) \\
&= e^{\frac{1}{2}\Sigma b_i S_{ij} b_j}\, e^{\frac{1}{2}\Sigma c_i S_{ij} c_j}\, e^{-\frac{1}{2}\Sigma (b_i+c_i) S_{ij} (b_j+c_j)} \\
&= e^{-\Sigma b_i S_{ij} c_j}
\end{aligned}$$

Now apply $\frac{\partial}{\partial b_{i_1}} \cdots \frac{\partial}{\partial b_{i_m}}$ and set all of the $b_i$'s to zero.

$$\frac{\partial}{\partial b_{i_1}} \cdots \frac{\partial}{\partial b_{i_m}}\, e^{-\Sigma b_i S_{ij} c_j}\Big|_{b_i=0} = (-1)^m \Big(\sum_{j_1'} S_{i_1 j_1'}\, c_{j_1'}\Big) \cdots \Big(\sum_{j_m'} S_{i_m j_m'}\, c_{j_m'}\Big)$$

Now apply $\frac{\partial}{\partial c_{j_1}} \cdots \frac{\partial}{\partial c_{j_n}}$ and set all of the $c_i$'s to zero. To get a nonzero answer, it is necessary that $m = n$.

c) We shall prove that

$$\int {:}a_{i_m} \cdots a_{i_1}{:}\ {:}a_{j_1} \cdots a_{j_m}{:}\, d\mu_S(a) = \det\big[S_{i_k j_\ell}\big]_{1 \le k,\ell \le m}$$

This is equivalent to the claimed result. By Problems I.19 and I.18,

$$\int {:}a_{i_m} \cdots a_{i_1}{:}\ {:}a_{j_1} \cdots a_{j_m}{:}\, d\mu_S(a) = \sum_{\ell=1}^m (-1)^{\ell+1} S_{i_1 j_\ell} \int {:}a_{i_m} \cdots a_{i_2}{:}\ {:}\prod_{\substack{1\le k\le m\\ k\neq\ell}} a_{j_k}{:}\ d\mu_S(a)$$

The proof will be by induction on $m$. If $m = 1$, we have

$$\int {:}a_{i_1}{:}\ {:}a_{j_1}{:}\, d\mu_S(a) = S_{i_1 j_1} \int 1\, d\mu_S(a) = S_{i_1 j_1}$$

as desired. In general, by the inductive hypothesis,

$$\int {:}a_{i_m} \cdots a_{i_1}{:}\ {:}a_{j_1} \cdots a_{j_m}{:}\, d\mu_S(a) = \sum_{\ell=1}^m (-1)^{\ell+1} S_{i_1 j_\ell}\, \det\big[S_{i_p j_k}\big]_{\substack{1\le p,k\le m\\ p\neq 1,\ k\neq\ell}} = \det\big[S_{i_p j_k}\big]_{1 \le k,p \le m}$$

Problem I.22 Prove that

$$\int {:}f(a){:}\, d\mu_S(a) = f(0)$$

Problem I.23 Prove that

$$\int \prod_{i=1}^n {:}\prod_{\mu=1}^{e_i} a_{\ell_{i,\mu}}{:}\ d\mu_S(a) = \operatorname{Pf}\big[T_{(i,\mu),(i',\mu')}\big]$$

where

$$T_{(i,\mu),(i',\mu')} = \begin{cases} 0 & \text{if } i = i' \\ S_{\ell_{i,\mu},\,\ell_{i',\mu'}} & \text{if } i \neq i' \end{cases}$$

Here $T$ is a skew symmetric matrix with $\sum_{i=1}^n e_i$ rows and columns, numbered, in order, $(1,1), \cdots, (1,e_1), (2,1), \cdots, (2,e_2), \cdots, (n,e_n)$. The product in the integrand is also in this order. Hint: Use Problems I.19 and I.18 and Proposition I.18.
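Pfaffians are developed systematically in Appendix B. For a quick sanity check of how they behave, the recursive expansion of $\operatorname{Pf}$ along its first row, together with the classical identity $\operatorname{Pf}(T)^2 = \det T$, fits in a few lines of pure Python (my own sketch; the test matrix is arbitrary):

```python
from itertools import permutations

def pf(T):
    # Pfaffian of an even-order skew symmetric matrix, by expansion along
    # the first row: Pf T = sum_j (-1)^j T_{1j} Pf(T with rows/cols 1,j removed)
    n = len(T)
    if n == 0:
        return 1
    total = 0
    for j in range(1, n):
        keep = [k for k in range(n) if k not in (0, j)]
        sub = [[T[r][c] for c in keep] for r in keep]
        total += (-1) ** (j + 1) * T[0][j] * pf(sub)
    return total

def det(M):
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        inv = sum(1 for a in range(n) for b in range(a + 1, n) if p[a] > p[b])
        prod = 1
        for r in range(n):
            prod *= M[r][p[r]]
        total += (-1 if inv % 2 else 1) * prod
    return total

# a 4x4 skew symmetric matrix with arbitrary integer entries
u = [[0, 2, -1, 3], [-2, 0, 4, -5], [1, -4, 0, 6], [-3, 5, -6, 0]]
assert pf(u) ** 2 == det(u)
# for 4x4: Pf = T01*T23 - T02*T13 + T03*T12
assert pf(u) == 2 * 6 - (-1) * (-5) + 3 * 4
```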

I.8 Bounds on Grassmann Gaussian Integrals

We now prove some bounds on Grassmann Gaussian integrals. While it is not really necessary to do so, I will make some simplifying assumptions that are satisfied in applications to quantum field theories. I will assume that the vector space $\mathcal{V}$ generating the Grassmann algebra has basis $\big\{\psi(\ell,\kappa)\ \big|\ \ell \in X,\ \kappa \in \{0,1\}\big\}$, where $X$ is some finite set. Here, $\psi(\ell,0)$ plays the rôle of $\psi_{x,\sigma}$ of §I.5 and $\psi(\ell,1)$ plays the rôle of $\bar\psi_{x,\sigma}$ of §I.5. I will also assume that, as in (I.4), the covariance only couples $\kappa = 0$ generators to $\kappa = 1$ generators. In other words, we let $A$ be a function on $X \times X$ and consider the Grassmann Gaussian integral $\int \cdot\, d\mu_A(\psi)$ on $\bigwedge\mathcal{V}$ with

$$\int \psi(\ell,\kappa)\, \psi(\ell',\kappa')\, d\mu_A(\psi) = \begin{cases} 0 & \text{if } \kappa = \kappa' = 0 \\ A(\ell,\ell') & \text{if } \kappa = 0,\ \kappa' = 1 \\ -A(\ell',\ell) & \text{if } \kappa = 1,\ \kappa' = 0 \\ 0 & \text{if } \kappa = \kappa' = 1 \end{cases}$$

We start off with the simple bound

Proposition I.32 Assume that there is a Hilbert space $\mathcal{H}$ and vectors $f_\ell, g_\ell \in \mathcal{H}$, $\ell \in X$, such that

$$A(\ell,\ell') = \langle f_\ell, g_{\ell'} \rangle_{\mathcal{H}} \qquad \text{for all } \ell, \ell' \in X$$

Then

$$\Big| \int \prod_{i=1}^n \psi(\ell_i,\kappa_i)\, d\mu_A(\psi) \Big| \le \prod_{\substack{1\le i\le n\\ \kappa_i=0}} \|f_{\ell_i}\|_{\mathcal{H}}\ \prod_{\substack{1\le i\le n\\ \kappa_i=1}} \|g_{\ell_i}\|_{\mathcal{H}}$$

Proof: Define

$$F = \big\{ 1 \le i \le n\ \big|\ \kappa_i = 0 \big\} \qquad\qquad \bar F = \big\{ 1 \le i \le n\ \big|\ \kappa_i = 1 \big\}$$

By Problem I.13, if the integral does not vanish, the cardinalities of $F$ and $\bar F$ coincide and there is a sign $\pm$ such that

$$\int \prod_{i=1}^n \psi(\ell_i,\kappa_i)\, d\mu_A(\psi) = \pm \det\big[ A_{\ell_i,\ell_j} \big]_{\substack{i \in F\\ j \in \bar F}}$$

The proposition is thus an immediate consequence of Gram's inequality. For the convenience of the reader, we include a proof of this classical inequality below.

Lemma I.33 (Gram's inequality) Let $\mathcal{H}$ be a Hilbert space and $u_1, \cdots, u_n, v_1, \cdots, v_n \in \mathcal{H}$. Then

$$\Big| \det\big[ \langle u_i, v_j \rangle \big]_{1 \le i,j \le n} \Big| \le \prod_{i=1}^n \|u_i\|\, \|v_i\|$$

Here, $\langle\,\cdot\,,\,\cdot\,\rangle$ and $\|\cdot\|$ are the inner product and norm in $\mathcal{H}$, respectively.

Proof: We start with three reductions. First, we may assume that the $u_1, \cdots, u_n$ are linearly independent. Otherwise the determinant vanishes, because its rows are not independent, and the inequality is trivially satisfied. Second, we may also assume that each $v_j$ is in the span of the $u_i$'s, because, if $P$ is the orthogonal projection onto that span, $\det\big[\langle u_i, v_j\rangle\big]_{1\le i,j\le n} = \det\big[\langle u_i, P v_j\rangle\big]_{1\le i,j\le n}$ while $\prod_{i=1}^n \|u_i\|\, \|P v_i\| \le \prod_{i=1}^n \|u_i\|\, \|v_i\|$. Third, we may assume that $v_1, \cdots, v_n$ are linearly independent. Otherwise the determinant vanishes, because its columns are not independent. Denote by $U$ the span of $u_1, \cdots, u_n$. We have just shown that we may assume that $u_1, \cdots, u_n$ and $v_1, \cdots, v_n$ are two bases for $U$.

Let $\alpha_i$ be the projection of $u_i$ on the orthogonal complement of the subspace spanned by $u_1, \cdots, u_{i-1}$. Then $\alpha_i = u_i + \sum_{j=1}^{i-1} \tilde L_{ij} u_j$ for some complex numbers $\tilde L_{ij}$, and $\alpha_i$ is orthogonal to $u_1, \cdots, u_{i-1}$ and hence to $\alpha_1, \cdots, \alpha_{i-1}$. Set

$$L_{ij} = \begin{cases} \|\alpha_i\|^{-1} & \text{if } i = j \\ 0 & \text{if } i < j \\ \|\alpha_i\|^{-1}\, \tilde L_{ij} & \text{if } i > j \end{cases}$$

Then $L$ is a lower triangular matrix with diagonal entries $L_{ii} = \|\alpha_i\|^{-1}$ such that the linear combinations

$$u_i' = \sum_{j=1}^i L_{ij} u_j, \qquad i = 1, \cdots, n$$

are orthonormal. This is just the Gram-Schmidt orthogonalization. Similarly, let $\beta_i$ be the projection of $v_i$ on the orthogonal complement of the subspace spanned by $v_1, \cdots, v_{i-1}$. By Gram-Schmidt, there is a lower triangular matrix $M$ with diagonal entries $M_{ii} = \|\beta_i\|^{-1}$ such that the linear combinations

$$v_i' = \sum_{j=1}^i M_{ij} v_j, \qquad i = 1, \cdots, n$$

are orthonormal. Since the $v_i'$'s are orthonormal and have the same span as the $v_i$'s, they form an orthonormal basis for $U$. As a result, $u_i' = \sum_j \langle u_i', v_j' \rangle\, v_j'$, so that

$$\sum_j \langle u_i', v_j' \rangle\, \langle v_j', u_k' \rangle = \langle u_i', u_k' \rangle = \delta_{i,k}$$

and the matrix $\big[\langle u_i', v_j' \rangle\big]$ is unitary and consequently has determinant of modulus one. As

$$L\, \big[\langle u_i, v_j \rangle\big]\, M^\dagger = \big[\langle u_i', v_j' \rangle\big]$$

we have

$$\Big| \det\big[ \langle u_i, v_j \rangle \big] \Big| = \big| {\det}^{-1} L \big|\, \big| {\det}^{-1} M \big| = \prod_{i=1}^n \|\alpha_i\|\, \|\beta_i\| \le \prod_{i=1}^n \|u_i\|\, \|v_i\|$$

since

$$\|u_j\|^2 = \|\alpha_j + (u_j - \alpha_j)\|^2 = \|\alpha_j\|^2 + \|u_j - \alpha_j\|^2 \ge \|\alpha_j\|^2$$
$$\|v_j\|^2 = \|\beta_j + (v_j - \beta_j)\|^2 = \|\beta_j\|^2 + \|v_j - \beta_j\|^2 \ge \|\beta_j\|^2$$

Here, we have used that $\alpha_j$ and $u_j - \alpha_j$ are orthogonal, because $\alpha_j$ is the orthogonal projection of $u_j$ on some subspace. Similarly, $\beta_j$ and $v_j - \beta_j$ are orthogonal.
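Gram's inequality is easy to probe numerically. The sketch below (my own pure-Python code; seed, dimensions and counts are arbitrary) checks $\big|\det\langle u_i,v_j\rangle\big| \le \prod_i \|u_i\|\,\|v_i\|$ for a batch of pseudo-random real vectors:

```python
import random
from itertools import permutations

def det(M):
    n = len(M)
    total = 0.0
    for p in permutations(range(n)):
        inv = sum(1 for a in range(n) for b in range(a + 1, n) if p[a] > p[b])
        prod = 1.0
        for r in range(n):
            prod *= M[r][p[r]]
        total += (-1 if inv % 2 else 1) * prod
    return total

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def norm(u):
    return dot(u, u) ** 0.5

random.seed(0)
for _ in range(100):
    n = random.randint(1, 4)
    us = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(n)]
    vs = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(n)]
    gram = [[dot(u, v) for v in vs] for u in us]
    bound = 1.0
    for u, v in zip(us, vs):
        bound *= norm(u) * norm(v)
    assert abs(det(gram)) <= bound + 1e-9
```

Hadamard's inequality of Problem I.24 follows by taking the $v_j$'s to be the standard basis vectors, which is also visible in the check above.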

Problem I.24 Let $a_{ij}$, $1 \le i,j \le n$, be complex numbers. Prove Hadamard's inequality

$$\big| \det[a_{ij}] \big| \le \prod_{i=1}^n \Big( \sum_{j=1}^n |a_{ij}|^2 \Big)^{1/2}$$

from Gram's inequality. (Hint: express $a_{ij} = \langle u_i, e_j \rangle$ for suitable $u_i \in \mathbb{C}^n$, where $e_j = (0, \cdots, 1, \cdots, 0)$, $j = 1, \cdots, n$, is the standard basis for $\mathbb{C}^n$.)

Problem I.25 Let $\mathcal{V}$ be a vector space with basis $\{a_1, \cdots, a_D\}$. Let $S_{\ell,\ell'}$ be a skew symmetric $D \times D$ matrix with

$$S(\ell,\ell') = \langle f_\ell, g_{\ell'} \rangle_{\mathcal{H}} \qquad \text{for all } 1 \le \ell, \ell' \le D$$

for some Hilbert space $\mathcal{H}$ and vectors $f_\ell, g_{\ell'} \in \mathcal{H}$. Set $F_\ell = \|f_\ell\|_{\mathcal{H}}\, \|g_\ell\|_{\mathcal{H}}$. Prove that

$$\Big| \int \prod_{\ell=1}^p a_{i_\ell}\, d\mu_S(a) \Big| \le \prod_{1 \le \ell \le p} \sqrt{F_{i_\ell}}$$

We shall need a similar bound involving a Wick monomial. Let ${:}\cdot{:}$ denote Wick ordering with respect to $d\mu_A$.

Proposition I.34 Assume that there is a Hilbert space $\mathcal{H}$ and vectors $f_\ell, g_\ell \in \mathcal{H}$, $\ell \in X$, such that

$$A(\ell,\ell') = \langle f_\ell, g_{\ell'} \rangle_{\mathcal{H}} \qquad \text{for all } \ell, \ell' \in X$$

Then

$$\Big| \int \prod_{i=1}^m \psi(\ell_i,\kappa_i)\ {:}\prod_{j=1}^n \psi(\ell_j',\kappa_j'){:}\ d\mu_A(\psi) \Big| \le 2^n \prod_{\substack{1\le i\le m\\ \kappa_i=0}} \|f_{\ell_i}\|_{\mathcal{H}} \prod_{\substack{1\le i\le m\\ \kappa_i=1}} \|g_{\ell_i}\|_{\mathcal{H}} \prod_{\substack{1\le j\le n\\ \kappa_j'=0}} \|f_{\ell_j'}\|_{\mathcal{H}} \prod_{\substack{1\le j\le n\\ \kappa_j'=1}} \|g_{\ell_j'}\|_{\mathcal{H}}$$

Proof: By Problem I.17,

$$\begin{aligned}
\int \prod_{i=1}^m \psi(\ell_i,\kappa_i)\ {:}\prod_{j=1}^n \psi(\ell_j',\kappa_j'){:}\ d\mu_A(\psi)
&= \int\!\!\int \prod_{i=1}^m \psi(\ell_i,\kappa_i) \prod_{j=1}^n \big[\psi(\ell_j',\kappa_j') + \phi(\ell_j',\kappa_j')\big]\, d\mu_A(\psi)\, d\mu_{-A}(\phi) \\
&= \sum_{J \subset \{1,\cdots,n\}} \pm \int \prod_{i=1}^m \psi(\ell_i,\kappa_i) \prod_{j \in J} \psi(\ell_j',\kappa_j')\, d\mu_A(\psi)\ \int \prod_{j \notin J} \phi(\ell_j',\kappa_j')\, d\mu_{-A}(\phi)
\end{aligned}$$

There are $2^n$ terms in this sum. For each term, the first factor is, by Proposition I.32, no larger than

$$\prod_{\substack{1\le i\le m\\ \kappa_i=0}} \|f_{\ell_i}\|_{\mathcal{H}} \prod_{\substack{1\le i\le m\\ \kappa_i=1}} \|g_{\ell_i}\|_{\mathcal{H}} \prod_{\substack{j\in J\\ \kappa_j'=0}} \|f_{\ell_j'}\|_{\mathcal{H}} \prod_{\substack{j\in J\\ \kappa_j'=1}} \|g_{\ell_j'}\|_{\mathcal{H}}$$

and the second factor is, again by Proposition I.32 (since $-A(\ell,\ell') = \langle -f_\ell, g_{\ell'} \rangle_{\mathcal{H}}$), no larger than

$$\prod_{\substack{j\notin J\\ \kappa_j'=0}} \|f_{\ell_j'}\|_{\mathcal{H}} \prod_{\substack{j\notin J\\ \kappa_j'=1}} \|g_{\ell_j'}\|_{\mathcal{H}}$$

Problem I.26 Prove, under the hypotheses of Proposition I.34, that

$$\Big| \int \prod_{i=1}^n {:}\prod_{\mu=1}^{e_i} \psi(\ell_{i,\mu},\kappa_{i,\mu}){:}\ d\mu_A(\psi) \Big| \le \prod_{\substack{1\le i\le n,\ 1\le\mu\le e_i\\ \kappa_{i,\mu}=0}} \sqrt{2}\, \|f_{\ell_{i,\mu}}\|_{\mathcal{H}}\ \prod_{\substack{1\le i\le n,\ 1\le\mu\le e_i\\ \kappa_{i,\mu}=1}} \sqrt{2}\, \|g_{\ell_{i,\mu}}\|_{\mathcal{H}}$$

Finally, we specialise to almost the full structure of (I.4). We only replace $\frac{\delta_{\sigma,\sigma'}}{ik_0 - e(\mathbf{k})}$ by a general matrix valued function of the (momentum) $k$. This is the typical structure in quantum field theories. Let $S$ be a finite set. Let $E_{\sigma,\sigma'}(k) \in L^1\big(\mathbb{R}^{d+1}, \frac{dk}{(2\pi)^{d+1}}\big)$ for each $\sigma, \sigma' \in S$, and set

$$C_{\sigma,\sigma'}(x,y) = \int \tfrac{d^{d+1}k}{(2\pi)^{d+1}}\, e^{ik\cdot(y-x)}\, E_{\sigma,\sigma'}(k)$$

Let $\int \cdot\, d\mu_C(\psi)$ be the Grassmann Gaussian integral determined by

$$\int \psi_\sigma(x,\kappa)\, \psi_{\sigma'}(y,\kappa')\, d\mu_C(\psi) = \begin{cases} 0 & \text{if } \kappa = \kappa' = 0 \\ C_{\sigma,\sigma'}(x,y) & \text{if } \kappa = 0,\ \kappa' = 1 \\ -C_{\sigma',\sigma}(y,x) & \text{if } \kappa = 1,\ \kappa' = 0 \\ 0 & \text{if } \kappa = \kappa' = 1 \end{cases}$$

for all $x, y \in \mathbb{R}^{d+1}$ and $\sigma, \sigma' \in S$.

Corollary I.35

$$\sup_{\substack{x_i,\sigma_i,\kappa_i\\ x_j',\sigma_j',\kappa_j'}} \Big| \int \prod_{i=1}^m \psi_{\sigma_i}(x_i,\kappa_i)\ {:}\prod_{j=1}^n \psi_{\sigma_j'}(x_j',\kappa_j'){:}\ d\mu_C(\psi) \Big| \le 2^n \Big( \int \|E(k)\|\, \tfrac{dk}{(2\pi)^{d+1}} \Big)^{(m+n)/2}$$

$$\sup_{x_{i,\mu},\sigma_{i,\mu},\kappa_{i,\mu}} \Big| \int \prod_{i=1}^n {:}\psi_{\sigma_{i,1}}(x_{i,1},\kappa_{i,1}) \cdots \psi_{\sigma_{i,e_i}}(x_{i,e_i},\kappa_{i,e_i}){:}\ d\mu_C(\psi) \Big| \le \Big( 2 \int \|E(k)\|\, \tfrac{dk}{(2\pi)^{d+1}} \Big)^{\frac{1}{2}\Sigma_i e_i}$$

Here $\|E(k)\|$ denotes the norm of the matrix $\big[E_{\sigma,\sigma'}(k)\big]_{\sigma,\sigma'\in S}$ as an operator on $\ell^2\big(\mathbb{C}^{|S|}\big)$.

Proof: Define

$$X = \big\{ (i,\mu)\ \big|\ 1 \le i \le n,\ 1 \le \mu \le e_i \big\}$$

$$A\big((i,\mu), (i',\mu')\big) = C_{\sigma_{i,\mu},\sigma_{i',\mu'}}(x_{i,\mu}, x_{i',\mu'})$$

Let $\big\{ \Psi\big((i,\mu),\kappa\big)\ \big|\ (i,\mu) \in X,\ \kappa \in \{0,1\} \big\}$ be generators of a Grassmann algebra and let $\int \cdot\, d\mu_A(\Psi)$ be the Grassmann Gaussian measure on that algebra with covariance $A$. This construction has been arranged so that

$$\int \psi_{\sigma_{i,\mu}}(x_{i,\mu},\kappa_{i,\mu})\, \psi_{\sigma_{i',\mu'}}(x_{i',\mu'},\kappa_{i',\mu'})\, d\mu_C(\psi) = \int \Psi\big((i,\mu),\kappa_{i,\mu}\big)\, \Psi\big((i',\mu'),\kappa_{i',\mu'}\big)\, d\mu_A(\Psi)$$

and consequently

$$\int \prod_{i=1}^n {:}\psi_{\sigma_{i,1}}(x_{i,1},\kappa_{i,1}) \cdots \psi_{\sigma_{i,e_i}}(x_{i,e_i},\kappa_{i,e_i}){:}\ d\mu_C(\psi) = \int \prod_{i=1}^n {:}\Psi\big((i,1),\kappa_{i,1}\big) \cdots \Psi\big((i,e_i),\kappa_{i,e_i}\big){:}\ d\mu_A(\Psi)$$

Let $\mathcal{H} = L^2\big(\mathbb{R}^{d+1}, \tfrac{dk}{(2\pi)^{d+1}}\big) \otimes \mathbb{C}^{|S|}$ and

$$f_{i,\mu}(k,\sigma) = e^{ik\cdot x_{i,\mu}}\, \sqrt{\|E(k)\|}\ \delta_{\sigma,\sigma_{i,\mu}} \qquad\qquad g_{i,\mu}(k,\sigma) = e^{ik\cdot x_{i,\mu}}\, \frac{E_{\sigma,\sigma_{i,\mu}}(k)}{\sqrt{\|E(k)\|}}$$

If $\|E(k)\| = 0$, set $g_{i,\mu}(k,\sigma) = 0$. Then

$$A\big((i,\mu),(i',\mu')\big) = \langle f_{i,\mu}, g_{i',\mu'} \rangle_{\mathcal{H}}$$

and, since $\sum_{\sigma\in S} \big|E_{\sigma,\sigma_{i,\mu}}(k)\big|^2 \le \|E(k)\|^2$,

$$\|f_{i,\mu}\|_{\mathcal{H}},\ \|g_{i,\mu}\|_{\mathcal{H}} \le \Big\| \sqrt{\|E(k)\|} \Big\|_{L^2} = \Big( \int \|E(k)\|\, \tfrac{dk}{(2\pi)^{d+1}} \Big)^{1/2}$$

The Corollary now follows from Proposition I.34 and Problem I.26.

II. Fermionic Expansions

This chapter concerns an expansion that can be used to exhibit analytic control over the right hand side of (I.7) in fermionic quantum field theory models, when the covariance $S$ is "really nice". It is also used as one ingredient in a renormalization group procedure that controls the right hand side of (I.8) when the covariance $S$ is not so nice.

II.1 Notation and Definitions

Here are the main notations that we shall use throughout this chapter. Let
• $A$ be the Grassmann algebra generated by $\{a_1, \cdots, a_D\}$. Think of $\{a_1, \cdots, a_D\}$ as some finite approximation to the set $\big\{ \psi_{x,\sigma}, \bar\psi_{x,\sigma}\ \big|\ x \in \mathbb{R}^{d+1},\ \sigma \in \{\uparrow,\downarrow\} \big\}$ of fields integrated out in a renormalization group transformation like

$$W(\psi,\bar\psi) \to \widetilde W(\Psi,\bar\Psi) = \log \frac{1}{Z} \int e^{W(\psi+\Psi,\,\bar\psi+\bar\Psi)}\, d\mu_S(\psi,\bar\psi)$$

• $C$ be the Grassmann algebra generated by $\{c_1, \cdots, c_D\}$. Think of $\{c_1, \cdots, c_D\}$ as some finite approximation to the set $\big\{ \Psi_{x,\sigma}, \bar\Psi_{x,\sigma}\ \big|\ x \in \mathbb{R}^{d+1},\ \sigma \in \{\uparrow,\downarrow\} \big\}$ of fields that are arguments of the output of the renormalization group transformation.
• $AC$ be the Grassmann algebra generated by $\{a_1, \cdots, a_D, c_1, \cdots, c_D\}$.
• $S = (S_{ij})$ be a skew symmetric matrix of order $D$. Think of $S$ as the "single scale" covariance $S^{(j)}$ of the Gaussian measure that is integrated out in the renormalization group step. See (I.8).
• $\int \cdot\, d\mu_S(a)$ be the Grassmann Gaussian integral with covariance $S$. It is the unique linear map from $AC$ to $C$ satisfying

$$\int e^{\Sigma c_i a_i}\, d\mu_S(a) = e^{-\frac{1}{2} \Sigma c_i S_{ij} c_j}$$

In particular

$$\int a_i a_j\, d\mu_S(a) = S_{ij}$$

• $\mathcal{M}_r = \big\{ (i_1, \cdots, i_r)\ \big|\ 1 \le i_1, \cdots, i_r \le D \big\}$ be the set of all multi indices of degree $r \ge 0$. For each $I \in \mathcal{M}_r$ set $a_I = a_{i_1} \cdots a_{i_r}$. By convention, $a_\emptyset = 1$.
• the space $(AC)_0$ of "interactions" be the linear subspace of $AC$ of even Grassmann polynomials with no constant term. That is, polynomials of the form

$$W(c,a) = \sum_{\substack{l,r \in \mathbb{N}\\ 1 \le l+r \in 2\mathbb{Z}}}\ \sum_{L \in \mathcal{M}_l}\ \sum_{J \in \mathcal{M}_r} w_{l,r}(L,J)\ c_L\, a_J$$

Usually, in the renormalization group map, the interaction is of the form $W(c+a)$. (See Definition I.22.) We do not require this.

Here are the main objects that shall concern us in this chapter.

Definition II.1

a) The renormalization group map $\Omega : (AC)_0 \to C_0$ is

$$\Omega(W)(c) = \log \frac{1}{Z_{W,S}} \int e^{W(c,a)}\, d\mu_S(a) \qquad\text{where}\quad Z_{W,S} = \int e^{W(0,a)}\, d\mu_S(a)$$

It is defined for all $W$'s obeying $\int e^{W(0,a)}\, d\mu_S(a) \neq 0$. The factor $\frac{1}{Z_{W,S}}$ ensures that $\Omega(W)(0) = 0$, i.e. that $\Omega(W)(c)$ contains no constant term. Since $\Omega(W) = 0$ for $W = 0$,

$$\Omega(W)(c) = \int_0^1 d\varepsilon\, \tfrac{d}{d\varepsilon}\Omega(\varepsilon W)(c) = \int_0^1 d\varepsilon\, \frac{\int W(c,a)\, e^{\varepsilon W(c,a)}\, d\mu_S(a)}{\int e^{\varepsilon W(c,a)}\, d\mu_S(a)} - \int_0^1 d\varepsilon\, \frac{\int W(0,a)\, e^{\varepsilon W(0,a)}\, d\mu_S(a)}{\int e^{\varepsilon W(0,a)}\, d\mu_S(a)} \qquad (II.1)$$

Thus to get bounds on the renormalization group map, it suffices to get bounds on

b) the Schwinger functional $\mathcal{S} : AC \to C$, defined by

$$\mathcal{S}(f) = \frac{1}{\mathcal{Z}(c)} \int f(c,a)\, e^{W(c,a)}\, d\mu_S(a)$$

where $\mathcal{Z}(c) = \int e^{W(c,a)}\, d\mu_S(a)$. Despite our notation, $\mathcal{S}(f)$ is a function of $W$ and $S$ as well as $f$.

c) Define the linear map $R : AC \to AC$ by

$$R(f)(c,a) = \int {:}e^{W(c,a+b) - W(c,a)} - 1{:}_b\ f(c,b)\, d\mu_S(b)$$

where ${:}\cdot{:}_b$ denotes Wick ordering of the $b$-field (see §I.7) and is determined by

$${:}e^{\Sigma A_i a_i + \Sigma B_i b_i + \Sigma C_i c_i}{:}_b = e^{\frac{1}{2}\Sigma B_i S_{ij} B_j}\, e^{\Sigma A_i a_i + \Sigma B_i b_i + \Sigma C_i c_i}$$

where $\{a_i\}$, $\{A_i\}$, $\{b_i\}$, $\{B_i\}$, $\{c_i\}$, $\{C_i\}$ are bases of six isomorphic vector spaces.

If you don't know what a Feynman diagram is, skip this paragraph. Diagrammatically, $\frac{1}{\mathcal{Z}(c)} \int e^{W(c,a)} f(c,a)\, d\mu_S(a)$ is the sum of all connected ($f$ is viewed as a single, connected, vertex) Feynman diagrams with one $f$-vertex and arbitrary numbers of $W$-vertices and $S$-lines. The operation $R(f)$ builds parts of those diagrams. It introduces those $W$-vertices that are connected directly to the $f$-vertex (i.e. that share a common $S$-line with $f$) and it introduces those lines that connect $f$ either to itself or to a $W$-vertex.

Problem II.1 Let

$$F(a) = \sum_{j_1,j_2=1}^D f(j_1,j_2)\, a_{j_1} a_{j_2} \qquad\qquad W(a) = \sum_{j_1,j_2,j_3,j_4=1}^D w(j_1,j_2,j_3,j_4)\, a_{j_1} a_{j_2} a_{j_3} a_{j_4}$$

with $f(j_1,j_2)$ and $w(j_1,j_2,j_3,j_4)$ antisymmetric under permutation of their arguments.

a) Set

$$\mathcal{S}(\lambda) = \frac{1}{Z_\lambda} \int F(a)\, e^{\lambda W(a)}\, d\mu_S(a) \qquad\text{where}\quad Z_\lambda = \int e^{\lambda W(a)}\, d\mu_S(a)$$

Compute $\frac{d^\ell}{d\lambda^\ell}\mathcal{S}(\lambda)\big|_{\lambda=0}$ for $\ell = 0, 1, 2$.

b) Set

$$R(\lambda) = \int {:}e^{\lambda W(a+b) - \lambda W(a)} - 1{:}_b\ F(b)\, d\mu_S(b)$$

Compute $\frac{d^\ell}{d\lambda^\ell}R(\lambda)\big|_{\lambda=0}$ for all $\ell \in \mathbb{N}$.

II.2 The Expansion – Algebra

To obtain the expansion that will be discussed in this chapter, expand the $(\mathbb{1} - R)^{-1}$ of the following Theorem, which shall be proven shortly, in a power series in $R$.

Theorem II.2 Suppose that $W \in (AC)_0$ is such that the kernel of $\mathbb{1} - R$ is trivial. Then, for all $f$ in $AC$,

$$\mathcal{S}(f) = \int (\mathbb{1} - R)^{-1}(f)\, d\mu_S(a)$$

Proposition II.3 For all $f$ in $AC$ and $W \in (AC)_0$,

$$\int f(c,a)\, e^{W(c,a)}\, d\mu_S(a) = \Big[\int f(c,b)\, d\mu_S(b)\Big] \int e^{W(c,a)}\, d\mu_S(a) + \int R(f)(c,a)\, e^{W(c,a)}\, d\mu_S(a)$$

Proof: Subbing in the definition of $R(f)$,

$$\begin{aligned}
\Big[\int f(c,b)\, d\mu_S(b)\Big] \int e^{W(c,a)}\, d\mu_S(a) &+ \int R(f)(c,a)\, e^{W(c,a)}\, d\mu_S(a) \\
&= \int \Big[ \int {:}e^{W(c,a+b) - W(c,a)}{:}_b\ f(c,b)\, d\mu_S(b) \Big]\, e^{W(c,a)}\, d\mu_S(a) \\
&= \int\!\!\int {:}e^{W(c,a+b)}{:}_b\ f(c,b)\, d\mu_S(b)\, d\mu_S(a)
\end{aligned}$$

since ${:}e^{W(c,a+b) - W(c,a)}{:}_b = {:}e^{W(c,a+b)}{:}_b\, e^{-W(c,a)}$. Continuing,

$$\begin{aligned}
\Big[\int f(c,b)\, d\mu_S(b)\Big] \int e^{W(c,a)}\, d\mu_S(a) &+ \int R(f)(c,a)\, e^{W(c,a)}\, d\mu_S(a) \\
&= \int\!\!\int {:}e^{W(c,a+b)}{:}_a\ f(c,b)\, d\mu_S(b)\, d\mu_S(a) \qquad\text{by Problem I.20} \\
&= \int f(c,b)\, e^{W(c,b)}\, d\mu_S(b) \qquad\text{by Problems I.11, I.22} \\
&= \int f(c,a)\, e^{W(c,a)}\, d\mu_S(a)
\end{aligned}$$

Proof of Theorem II.2: For all $g(c,a) \in AC$,

$$\int (\mathbb{1} - R)(g)\, e^{W(c,a)}\, d\mu_S(a) = \mathcal{Z}(c) \int g(c,a)\, d\mu_S(a)$$

by Proposition II.3. If the kernel of $\mathbb{1} - R$ is trivial, then we may choose $g = (\mathbb{1} - R)^{-1}(f)$. So

$$\int f(c,a)\, e^{W(c,a)}\, d\mu_S(a) = \mathcal{Z}(c) \int (\mathbb{1} - R)^{-1}(f)(c,a)\, d\mu_S(a)$$

The left hand side does not vanish for all $f \in AC$ (for example, for $f = e^{-W}$) so $\mathcal{Z}(c)$ is nonzero and

$$\frac{1}{\mathcal{Z}(c)} \int f(c,a)\, e^{W(c,a)}\, d\mu_S(a) = \int (\mathbb{1} - R)^{-1}(f)(c,a)\, d\mu_S(a)$$

II.3 The Expansion – Bounds

Definition II.4 (Norms) For any function $f : \mathcal{M}_r \to \mathbb{C}$, define

$$\|f\| = \max_{1 \le i \le r}\ \max_{1 \le k \le D}\ \sum_{\substack{J \in \mathcal{M}_r\\ j_i = k}} |f(J)| \qquad\qquad |||f||| = \sum_{J \in \mathcal{M}_r} |f(J)|$$

The norm $\|\cdot\|$, which is an "$L^1$ norm with one argument held fixed", is appropriate for kernels, like those appearing in interactions, that become translation invariant when the cutoffs are removed. Any $f(c,a) \in AC$ has a unique representation

$$f(c,a) = \sum_{l,r \ge 0}\ \sum_{\substack{k_1,\cdots,k_l\\ j_1,\cdots,j_r}} f_{l,r}(k_1,\cdots,k_l,\, j_1,\cdots,j_r)\ c_{k_1} \cdots c_{k_l}\, a_{j_1} \cdots a_{j_r}$$

with each kernel $f_{l,r}(k_1,\cdots,k_l,\, j_1,\cdots,j_r)$ antisymmetric under separate permutation of its $k$ arguments and its $j$ arguments. Define

$$\|f(c,a)\|_\alpha = \sum_{l,r \ge 0} \alpha^{l+r}\, \|f_{l,r}\| \qquad\qquad |||f(c,a)|||_\alpha = \sum_{l,r \ge 0} \alpha^{l+r}\, |||f_{l,r}|||$$

In the norms $\|f_{l,r}\|$ and $|||f_{l,r}|||$ on the right hand side, $f_{l,r}$ is viewed as a function from $\mathcal{M}_{l+r}$ to $\mathbb{C}$.

Problem II.2 Define, for all $f : \mathcal{M}_r \to \mathbb{C}$ and $g : \mathcal{M}_s \to \mathbb{C}$ with $r, s \ge 1$ and $r + s > 2$, the contraction $f * g : \mathcal{M}_{r+s-2} \to \mathbb{C}$ by

$$f * g\,(j_1, \cdots, j_{r+s-2}) = \sum_{k=1}^D f(j_1, \cdots, j_{r-1}, k)\, g(k, j_r, \cdots, j_{r+s-2})$$

Prove that

$$\|f * g\| \le \|f\|\, \|g\| \qquad\text{and}\qquad |||f * g||| \le \min\big\{\, |||f|||\, \|g\|,\ \|f\|\, |||g|||\, \big\}$$
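The two inequalities of Problem II.2 are easy to probe numerically before proving them. The sketch below is my own pure-Python code (the dimension, degrees and random kernels are arbitrary); it stores kernels as dictionaries keyed by index tuples and checks both bounds:

```python
import random
from itertools import product

D = 3  # size of the index set {1, ..., D}; value arbitrary

def norm(f, r):
    # ||f|| = max over slot i and fixed value k of sum_{J: j_i = k} |f(J)|
    return max(
        sum(abs(f[J]) for J in product(range(D), repeat=r) if J[i] == k)
        for i in range(r) for k in range(D)
    )

def triple_norm(f, r):
    # |||f||| = sum over all J of |f(J)|
    return sum(abs(f[J]) for J in product(range(D), repeat=r))

def contract(f, r, g, s):
    # (f*g)(j_1..j_{r+s-2}) = sum_k f(j_1..j_{r-1}, k) g(k, j_r..j_{r+s-2})
    out = {}
    for J in product(range(D), repeat=r + s - 2):
        out[J] = sum(f[J[:r - 1] + (k,)] * g[(k,) + J[r - 1:]]
                     for k in range(D))
    return out

random.seed(1)
r, s = 2, 3
f = {J: random.uniform(-1, 1) for J in product(range(D), repeat=r)}
g = {J: random.uniform(-1, 1) for J in product(range(D), repeat=s)}
fg = contract(f, r, g, s)

assert norm(fg, r + s - 2) <= norm(f, r) * norm(g, s) + 1e-9
assert triple_norm(fg, r + s - 2) <= min(
    triple_norm(f, r) * norm(g, s), norm(f, r) * triple_norm(g, s)) + 1e-9
```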

Definition II.5 (Hypotheses) We denote by

$$\text{(HG)} \qquad \Big| \int b_H\, {:}b_J{:}\, d\mu_S(b) \Big| \le \mathrm{F}^{|H|+|J|} \qquad\text{for all } H, J \in \bigcup_{r \ge 0} \mathcal{M}_r$$
$$\text{(HS)} \qquad \|S\| \le \tfrac{\mathrm{F}^2}{\mathrm{D}}$$

the main hypotheses on $S$ that we shall use. Here $\|S\|$ is the norm of Definition II.4 applied to the function $S : (i,j) \in \mathcal{M}_2 \mapsto S_{ij}$. So $\mathrm{F}$ is a measure of the "typical size of fields $b$ in the support of the measure $d\mu_S(b)$" and $\mathrm{D}$ is a measure of the decay rate of $S$. Hypothesis (HG) is typically verified using Proposition I.34 or Corollary I.35. Hypothesis (HS) is typically verified using the techniques of Appendix C.

Problem II.3 Let ${:}\cdot{:}_S$ denote Wick ordering with respect to the covariance $S$.

a) Prove that if

$$\Big| \int b_H\, {:}b_J{:}_S\, d\mu_S(b) \Big| \le \mathrm{F}^{|H|+|J|} \qquad\text{for all } H, J \in \bigcup_{r \ge 0} \mathcal{M}_r$$

then

$$\Big| \int b_H\, {:}b_J{:}_{zS}\, d\mu_{zS}(b) \Big| \le \big( \sqrt{|z|}\, \mathrm{F} \big)^{|H|+|J|} \qquad\text{for all } H, J \in \bigcup_{r \ge 0} \mathcal{M}_r$$

Hint: first prove that $\int b_H\, {:}b_J{:}_{zS}\, d\mu_{zS}(b) = z^{(|H|+|J|)/2} \int b_H\, {:}b_J{:}_S\, d\mu_S(b)$.

b) Prove that if

$$\Big| \int b_H\, {:}b_J{:}_S\, d\mu_S(b) \Big| \le \mathrm{F}^{|H|+|J|} \qquad\text{and}\qquad \Big| \int b_H\, {:}b_J{:}_T\, d\mu_T(b) \Big| \le \mathrm{G}^{|H|+|J|}$$

for all $H, J \in \bigcup_{r \ge 0} \mathcal{M}_r$, then

$$\Big| \int b_H\, {:}b_J{:}_{S+T}\, d\mu_{S+T}(b) \Big| \le \big( \mathrm{F} + \mathrm{G} \big)^{|H|+|J|} \qquad\text{for all } H, J \in \bigcup_{r \ge 0} \mathcal{M}_r$$

Hint: Proposition I.21 and Problem I.21.

Theorem II.6 Assume Hypotheses (HG) and (HS). Let $\alpha \ge 2$ and $W \in (AC)_0$ obey $\mathrm{D}\, \|W\|_{(\alpha+1)\mathrm{F}} \le \tfrac{1}{3}$. Then, for all $f \in AC$,

$$\|R(f)\|_{\alpha\mathrm{F}} \le \tfrac{3}{\alpha}\, \mathrm{D}\, \|W\|_{(\alpha+1)\mathrm{F}}\, \|f\|_{\alpha\mathrm{F}} \qquad\qquad |||R(f)|||_{\alpha\mathrm{F}} \le \tfrac{3}{\alpha}\, \mathrm{D}\, \|W\|_{(\alpha+1)\mathrm{F}}\, |||f|||_{\alpha\mathrm{F}}$$

The proof of this Theorem follows that of Lemma II.11.

Corollary II.7 Assume Hypotheses (HG) and (HS). Let $\alpha \ge 2$ and $W \in (AC)_0$ obey $\mathrm{D}\, \|W\|_{(\alpha+1)\mathrm{F}} \le \tfrac{1}{3}$. Then, for all $f \in AC$,

$$\|\mathcal{S}(f)(c) - \mathcal{S}(f)(0)\|_{\alpha\mathrm{F}} \le \tfrac{\alpha}{\alpha-1}\, \|f\|_{\alpha\mathrm{F}}$$
$$|||\mathcal{S}(f)|||_{\alpha\mathrm{F}} \le \tfrac{\alpha}{\alpha-1}\, |||f|||_{\alpha\mathrm{F}}$$
$$\|\Omega(W)\|_{\alpha\mathrm{F}} \le \tfrac{\alpha}{\alpha-1}\, \|W\|_{\alpha\mathrm{F}}$$

The proof of this Corollary follows that of Lemma II.12. Let

$$W(c,a) = \sum_{l,r \in \mathbb{N}}\ \sum_{\substack{L \in \mathcal{M}_l\\ J \in \mathcal{M}_r}} w_{l,r}(L,J)\ c_L\, a_J$$

where $w_{l,r}(L,J)$ is a function which is separately antisymmetric under permutations of its $L$ and $J$ arguments and that vanishes identically when $l + r$ is zero or odd. With this antisymmetry,

$$W(c,a+b) - W(c,a) = \sum_{\substack{l,r \ge 0\\ s \ge 1}}\ \sum_{\substack{L \in \mathcal{M}_l\\ J \in \mathcal{M}_r\\ K \in \mathcal{M}_s}} \binom{r+s}{s}\, w_{l,r+s}(L,\, J.K)\ c_L\, a_J\, b_K$$

where $J.K = (j_1, \cdots, j_r, k_1, \cdots, k_s)$ when $J = (j_1, \cdots, j_r)$ and $K = (k_1, \cdots, k_s)$. That is, $J.K$ is the concatenation of $J$ and $K$. So

$${:}e^{W(c,a+b) - W(c,a)} - 1{:}_b = \sum_{\ell > 0} \frac{1}{\ell!} \sum_{\substack{l_i,r_i \ge 0\\ s_i \ge 1}}\ \sum_{\substack{L_i \in \mathcal{M}_{l_i}\\ J_i \in \mathcal{M}_{r_i}\\ K_i \in \mathcal{M}_{s_i}}}\ \prod_{i=1}^\ell \binom{r_i+s_i}{s_i}\ {:}\prod_{i=1}^\ell w_{l_i,r_i+s_i}(L_i,\, J_i.K_i)\ c_{L_i}\, a_{J_i}\, b_{K_i}{:}_b$$

with the index $i$ in the second and third sums running from $1$ to $\ell$. Hence

$$R(f) = \sum_{\ell > 0} \frac{1}{\ell!} \sum_{\substack{r,s,l \in \mathbb{N}^\ell\\ s_i \ge 1}}\ \prod_{i=1}^\ell \binom{r_i+s_i}{s_i}\ R^s_{l,r}(f)$$

where

$$R^s_{l,r}(f) = \sum_{\substack{L_i \in \mathcal{M}_{l_i}\\ J_i \in \mathcal{M}_{r_i}\\ K_i \in \mathcal{M}_{s_i}}} \int {:}\prod_{i=1}^\ell w_{l_i,r_i+s_i}(L_i,\, J_i.K_i)\ c_{L_i}\, a_{J_i}\, b_{K_i}{:}_b\ f(c,b)\, d\mu_S(b)$$

Problem II.4 Let $W(a) = \sum_{1 \le i,j \le D} w(i,j)\, a_i a_j$ with $w(i,j) = -w(j,i)$. Verify that

$$W(a+b) - W(a) = \sum_{1 \le i,j \le D} w(i,j)\, b_i b_j + 2 \sum_{1 \le i,j \le D} w(i,j)\, a_i b_j$$

Problem II.5 Assume Hypothesis (HG). Let s, s0, m 1 and ≥

f(b) = fm(H) bH W (b) = ws(K) bK W 0(b) = ws0 0 (K0) bK0 0 H m K s K s0 ∈MP ∈MP ∈MP a) Prove that . . m+s 2 .W (b). f(b) dµS(b) mF − fm S ws b ≤ ||| ||| k k k k Z

b) Prove that $\int:W(b)W'(b):_b\ f(b)\ d\mu_S(b)=0$ if $m=1$ and
$$\Big|\int:W(b)W'(b):_b\ f(b)\ d\mu_S(b)\Big|\le m(m-1)\,\mathrm{F}^{m+s+s'-4}\,|||f_m|||\,\|S\|^2\,\|w_s\|\,\|w'_{s'}\|$$
if $m\ge2$.

Proposition II.8 Assume Hypothesis (HG). Let
$$f^{(p,m)}(c,a)=\sum_{\substack{I\in\mathcal{M}_p\\ H\in\mathcal{M}_m}}f_{p,m}(I,H)\,c_I\,a_H$$
Let $r,s,l\in\mathbb{N}^\ell$ with each $s_i\ge1$. If $m<\ell$, then $R^s_{l,r}(f^{(p,m)})$ vanishes. If $m\ge\ell$,
$$\big\|R^s_{l,r}(f^{(p,m)})\big\|_1\le\ell!\binom m\ell\,\mathrm{F}^m\,\|f_{p,m}\|\,\prod_{i=1}^{\ell}\Big(\|S\|\,\mathrm{F}^{s_i-2}\,\|w_{l_i,r_i+s_i}\|\Big)$$
$$\big|\big|\big|R^s_{l,r}(f^{(p,m)})\big|\big|\big|_1\le\ell!\binom m\ell\,\mathrm{F}^m\,|||f_{p,m}|||\,\prod_{i=1}^{\ell}\Big(\|S\|\,\mathrm{F}^{s_i-2}\,\|w_{l_i,r_i+s_i}\|\Big)$$

Proof: We have

$$R^s_{l,r}(f^{(p,m)})=\pm\sum_{\substack{I\in\mathcal{M}_p\\ L_i\in\mathcal{M}_{l_i},\ J_i\in\mathcal{M}_{r_i}\\ 1\le i\le\ell}}f_{l,r,s}(L_1,\dots,L_\ell,I,J_1,\dots,J_\ell)\ c_{L_1}\cdots c_{L_\ell}\,c_I\ a_{J_1}\cdots a_{J_\ell}$$
with

$$f_{l,r,s}(L_1,\dots,L_\ell,I,J_1,\dots,J_\ell)=\sum_{H\in\mathcal{M}_m}\ \sum_{K_i\in\mathcal{M}_{s_i}}\int:\prod_{i=1}^{\ell}w_{l_i,r_i+s_i}(L_i,J_i,K_i)\,b_{K_i}:_b\ f_{p,m}(I,H)\,b_H\ d\mu_S(b)$$
The integral over $b$ is bounded in Lemma II.9 below. It shows that

$f_{l,r,s}(L_1,\dots,L_\ell,I,J_1,\dots,J_\ell)$ vanishes if $m<\ell$ and is, for $m\ge\ell$, bounded by
$$\big|f_{l,r,s}(L_1,\dots,L_\ell,I,J_1,\dots,J_\ell)\big|\le\ell!\binom m\ell\,\mathrm{F}^{m+\Sigma s_i-2\ell}\ T(L_1,\dots,L_\ell,I,J_1,\dots,J_\ell)$$
where

$$T(L_1,\dots,L_\ell,I,J_1,\dots,J_\ell)=\sum_{H\in\mathcal{M}_m}\ \prod_{i=1}^{\ell}\Big(\sum_{k_i=1}^{D}u_i(L_i,J_i,k_i)\,\big|S_{k_i,h_i}\big|\Big)\,\big|f_{p,m}(I,H)\big|$$
and, for each $i=1,\dots,\ell$,
$$u_i(L_i,J_i,k_i)=\sum_{\tilde K_i\in\mathcal{M}_{s_i-1}}\big|w_{l_i,r_i+s_i}\big(L_i,J_i,(k_i).\tilde K_i\big)\big|$$
Recall that $(k_1).\tilde K=(k_1,\tilde k_2,\dots,\tilde k_s)$ when $\tilde K=(\tilde k_2,\dots,\tilde k_s)$. By construction, $\|u_i\|=\|w_{l_i,r_i+s_i}\|$. By Lemma II.10, below,

$$\|T\|\le\|f_{p,m}\|\,\|S\|^{\ell}\prod_{i=1}^{\ell}\|u_i\|\le\|f_{p,m}\|\,\|S\|^{\ell}\prod_{i=1}^{\ell}\|w_{l_i,r_i+s_i}\|$$
and hence
$$\|f_{l,r,s}\|\le\ell!\binom m\ell\,\mathrm{F}^{m+\Sigma s_i-2\ell}\,\|f_{p,m}\|\,\|S\|^{\ell}\prod_{i=1}^{\ell}\|w_{l_i,r_i+s_i}\|$$
Similarly, the second bound follows from $|||T|||\le|||f_{p,m}|||\,\|S\|^{\ell}\prod_{i=1}^{\ell}\|u_i\|$.

Lemma II.9 Assume Hypothesis (HG). Then

$$\Big|\int\prod_{i=1}^{\ell}:b_{K_i}:\ b_H\ d\mu_S(b)\Big|\le\mathrm{F}^{|H|+\Sigma|K_i|-2\ell}\sum_{\substack{1\le\mu_1,\dots,\mu_\ell\le|H|\\ \text{all different}}}\ \prod_{i=1}^{\ell}\big|S_{k_{i1},h_{\mu_i}}\big|$$

Proof: For convenience, set $j_i=k_{i1}$ and $\tilde K_i=K_i\setminus\{k_{i1}\}$ for each $i=1,\dots,\ell$. By antisymmetry,

$$\int\prod_{i=1}^{\ell}:b_{K_i}:\ b_H\ d\mu_S(b)=\pm\int\prod_{i=1}^{\ell}:b_{\tilde K_i}\,b_{j_i}:\ b_H\ d\mu_S(b)$$

Recall the integration by parts formula (Problem I.19)
$$\int:b_K\,b_j:\ f(b)\ d\mu_S(b)=\sum_{m=1}^{D}S_{j,m}\int:b_K:\ \tfrac{\partial}{\partial b_m}f(b)\ d\mu_S(b)$$
and the definition (Definition I.11)

$$\tfrac{\partial}{\partial b_m}\,b_H=\begin{cases}0&\text{if }m\notin H\\ (-1)^{|J|}\,b_J\,b_K&\text{if }b_H=b_J\,b_m\,b_K\end{cases}$$
of the left partial derivative. Integrating by parts successively with respect to $b_{j_\ell},\dots,b_{j_1}$ gives
$$\int\prod_{i=1}^{\ell}:b_{K_i}:\ b_H\ d\mu_S(b)=\pm\int\prod_{i=1}^{\ell}:b_{\tilde K_i}:\ \prod_{i=1}^{\ell}\Big(\sum_{m=1}^{D}S_{j_i,m}\tfrac{\partial}{\partial b_m}\Big)\,b_H\ d\mu_S(b)$$
and then apply Leibniz's rule

$$\prod_{i=1}^{\ell}\Big(\sum_{m=1}^{D}S_{j_i,m}\tfrac{\partial}{\partial b_m}\Big)\,b_H=\sum_{\substack{1\le\mu_1,\dots,\mu_\ell\le|H|\\ \text{all different}}}\pm\Big(\prod_{i=1}^{\ell}S_{j_i,h_{\mu_i}}\Big)\,b_{H\setminus\{h_{\mu_1},\dots,h_{\mu_\ell}\}}$$
and Hypothesis (HG).

Lemma II.10 Let
$$T(J_1,\dots,J_\ell,I)=\sum_{H\in\mathcal{M}_m}\ \prod_{i=1}^{\ell}\Big(\sum_{k_i=1}^{D}u_i(J_i,k_i)\,\big|S_{k_i,h_i}\big|\Big)\,\big|f_{p,m}(I,H)\big|$$
with $\ell\le m$. Then
$$\|T\|\le\|f_{p,m}\|\,\|S\|^{\ell}\prod_{i=1}^{\ell}\|u_i\|\qquad\qquad|||T|||\le|||f_{p,m}|||\,\|S\|^{\ell}\prod_{i=1}^{\ell}\|u_i\|$$

Proof: For the triple norm,
$$|||T|||=\sum_{\substack{J_i\in\mathcal{M}_{r_i},\,1\le i\le\ell\\ I\in\mathcal{M}_p}}T(J_1,\dots,J_\ell,I)=\sum_{\substack{J_i\in\mathcal{M}_{r_i},\,1\le i\le\ell\\ I\in\mathcal{M}_p,\ H\in\mathcal{M}_m}}\ \prod_{i=1}^{\ell}\Big(\sum_{k_i=1}^{D}u_i(J_i,k_i)\,\big|S_{k_i,h_i}\big|\Big)\,\big|f_{p,m}(I,H)\big|=\sum_{\substack{I\in\mathcal{M}_p\\ H\in\mathcal{M}_m}}\Big(\prod_{i=1}^{\ell}v_i(h_i)\Big)\,\big|f_{p,m}(I,H)\big|$$
where
$$v_i(h_i)=\sum_{k_i}\sum_{J_i\in\mathcal{M}_{r_i}}u_i(J_i,k_i)\,\big|S_{k_i,h_i}\big|$$
Since
$$\sup_{h_i}v_i(h_i)=\sup_{h_i}\sum_{k_i}\Big(\sum_{J_i\in\mathcal{M}_{r_i}}u_i(J_i,k_i)\Big)\big|S_{k_i,h_i}\big|\le\Big(\sup_{k_i}\sum_{J_i\in\mathcal{M}_{r_i}}u_i(J_i,k_i)\Big)\Big(\sup_{h_i}\sum_{k_i}\big|S_{k_i,h_i}\big|\Big)\le\|u_i\|\,\|S\|$$
we have
$$|||T|||\le\prod_{i=1}^{\ell}\|u_i\|\,\|S\|^{\ell}\sum_{\substack{I\in\mathcal{M}_p\\ H\in\mathcal{M}_m}}\big|f_{p,m}(I,H)\big|=|||f_{p,m}|||\,\|S\|^{\ell}\prod_{i=1}^{\ell}\|u_i\|$$
The proof for the double norm, i.e. the norm with one external argument sup'd over rather than summed over, is similar. There are two cases: either the external argument that is being sup'd over is a component of $I$, or it is a component of one of the $J_i$'s. In the former case,
$$\sup_{1\le i_1\le D}\ \sum_{\substack{J_i\in\mathcal{M}_{r_i},\,1\le i\le\ell\\ \tilde I\in\mathcal{M}_{p-1},\ H\in\mathcal{M}_m}}\ \prod_{i=1}^{\ell}\Big(\sum_{k_i=1}^{D}u_i(J_i,k_i)\,\big|S_{k_i,h_i}\big|\Big)\,\big|f_{p,m}\big((i_1).\tilde I,H\big)\big|=\sup_{1\le i_1\le D}\sum_{\substack{\tilde I\in\mathcal{M}_{p-1}\\ H\in\mathcal{M}_m}}\Big(\prod_{i=1}^{\ell}v_i(h_i)\Big)\,\big|f_{p,m}\big((i_1).\tilde I,H\big)\big|\le\|f_{p,m}\|\,\|S\|^{\ell}\prod_{i=1}^{\ell}\|u_i\|$$

In the latter case, if a component of, for example, $J_1$ is to be sup'd over,
$$\sup_{1\le j_1\le D}\ \sum_{\substack{\tilde J_1\in\mathcal{M}_{r_1-1}\\ J_i\in\mathcal{M}_{r_i},\,2\le i\le\ell\\ I\in\mathcal{M}_p,\ H\in\mathcal{M}_m}}\Big(\sum_{k_1=1}^{D}u_1\big((j_1).\tilde J_1,k_1\big)\big|S_{k_1,h_1}\big|\Big)\prod_{i=2}^{\ell}\Big(\sum_{k_i=1}^{D}u_i(J_i,k_i)\,\big|S_{k_i,h_i}\big|\Big)\,\big|f_{p,m}(I,H)\big|$$
$$=\sup_{1\le j_1\le D}\ \sum_{\substack{\tilde J_1\in\mathcal{M}_{r_1-1}\\ I\in\mathcal{M}_p,\ H\in\mathcal{M}_m}}\Big(\sum_{k_1=1}^{D}u_1\big((j_1).\tilde J_1,k_1\big)\big|S_{k_1,h_1}\big|\Big)\prod_{i=2}^{\ell}v_i(h_i)\ \big|f_{p,m}(I,H)\big|$$
$$\le\prod_{i=2}^{\ell}\|u_i\|\,\|S\|^{\ell-1}\ \sup_{1\le j_1\le D}\sum_{\substack{\tilde J_1\in\mathcal{M}_{r_1-1}\\ I\in\mathcal{M}_p,\ H\in\mathcal{M}_m}}\sum_{k_1=1}^{D}u_1\big((j_1).\tilde J_1,k_1\big)\big|S_{k_1,h_1}\big|\,\big|f_{p,m}(I,H)\big|\le\|f_{p,m}\|\,\|S\|^{\ell}\prod_{i=1}^{\ell}\|u_i\|$$
by Problem II.2, twice.

Lemma II.11 Assume Hypotheses (HG) and (HS). Let
$$f^{(p,m)}(c,a)=\sum_{\substack{I\in\mathcal{M}_p\\ H\in\mathcal{M}_m}}f_{p,m}(I,H)\,c_I\,a_H$$
For all $\alpha\ge2$ and $\ell\ge1$,
$$\sum_{\substack{r,s,l\in\mathbb{N}^\ell\\ s_i\ge1}}\frac1{\ell!}\prod_{i=1}^{\ell}\binom{r_i+s_i}{s_i}\,\big\|R^s_{l,r}(f^{(p,m)})\big\|_{\alpha\mathrm{F}}\le\tfrac2\alpha\,\|f^{(p,m)}\|_{\alpha\mathrm{F}}\,\Big[D\,\|W\|_{(\alpha+1)\mathrm{F}}\Big]^{\ell}$$
$$\sum_{\substack{r,s,l\in\mathbb{N}^\ell\\ s_i\ge1}}\frac1{\ell!}\prod_{i=1}^{\ell}\binom{r_i+s_i}{s_i}\,\big|\big|\big|R^s_{l,r}(f^{(p,m)})\big|\big|\big|_{\alpha\mathrm{F}}\le\tfrac2\alpha\,|||f^{(p,m)}|||_{\alpha\mathrm{F}}\,\Big[D\,\|W\|_{(\alpha+1)\mathrm{F}}\Big]^{\ell}$$

Proof: We prove the bound with the norm $\|\cdot\|$. The proof for $|||\cdot|||$ is identical. We may assume, without loss of generality, that $m\ge\ell$. By Proposition II.8, since $\binom m\ell\le2^m$,
$$\tfrac1{\ell!}\big\|R^s_{l,r}(f^{(p,m)})\big\|_{\alpha\mathrm{F}}=\tfrac1{\ell!}\,\alpha^{\Sigma l_i+\Sigma r_i+p}\,\mathrm{F}^{\Sigma l_i+\Sigma r_i+p}\,\big\|R^s_{l,r}(f^{(p,m)})\big\|_1$$
$$\le\alpha^{\Sigma l_i+\Sigma r_i+p}\,\mathrm{F}^{\Sigma l_i+\Sigma r_i+p}\,\binom m\ell\,\mathrm{F}^m\,\|f_{p,m}\|\prod_{i=1}^{\ell}\Big(\|S\|\,\mathrm{F}^{s_i-2}\,\|w_{l_i,r_i+s_i}\|\Big)$$
$$\le\alpha^{\Sigma l_i+\Sigma r_i+p}\,2^m\,\mathrm{F}^{m+p}\,\|f_{p,m}\|\prod_{i=1}^{\ell}\Big(D\,\mathrm{F}^{l_i+r_i+s_i}\,\|w_{l_i,r_i+s_i}\|\Big)=\Big[\tfrac2\alpha\Big]^m\,\|f_{p,m}\|_{\alpha\mathrm{F}}\prod_{i=1}^{\ell}\Big(D\,\alpha^{l_i+r_i}\,\mathrm{F}^{l_i+r_i+s_i}\,\|w_{l_i,r_i+s_i}\|\Big)$$
using Hypothesis (HS), $\|S\|\le\mathrm{F}^2D$, and $\|f_{p,m}\|_{\alpha\mathrm{F}}=\alpha^{m+p}\mathrm{F}^{m+p}\|f_{p,m}\|$. As $\alpha\ge2$ and $m\ge\ell\ge1$,
$$\sum_{\substack{r,s,l\in\mathbb{N}^\ell\\ s_i\ge1}}\frac1{\ell!}\prod_{i=1}^{\ell}\binom{r_i+s_i}{s_i}\,\big\|R^s_{l,r}(f^{(p,m)})\big\|_{\alpha\mathrm{F}}\le\tfrac2\alpha\,\|f^{(p,m)}\|_{\alpha\mathrm{F}}\,\Big[\sum_{\substack{r,l\in\mathbb{N}\\ s\ge1}}\binom{r+s}{s}\,D\,\alpha^{l+r}\,\mathrm{F}^{l+r+s}\,\|w_{l,r+s}\|\Big]^{\ell}$$
$$=\tfrac2\alpha\,\|f^{(p,m)}\|_{\alpha\mathrm{F}}\,\Big[\sum_{l\in\mathbb{N}}\sum_{q\ge1}D\Big(\sum_{s=1}^{q}\binom qs\,\alpha^{q-s}\Big)\alpha^{l}\,\mathrm{F}^{l+q}\,\|w_{l,q}\|\Big]^{\ell}\le\tfrac2\alpha\,\|f^{(p,m)}\|_{\alpha\mathrm{F}}\,\Big[\sum_{q,l\in\mathbb{N}}D\,(\alpha+1)^{q}\,\alpha^{l}\,\mathrm{F}^{l+q}\,\|w_{l,q}\|\Big]^{\ell}\le\tfrac2\alpha\,\|f^{(p,m)}\|_{\alpha\mathrm{F}}\,\Big[D\,\|W\|_{(\alpha+1)\mathrm{F}}\Big]^{\ell}$$

Proof of Theorem II.6:
$$\|R(f)\|_{\alpha\mathrm{F}}\le\sum_{\ell>0}\ \sum_{\substack{r,s,l\in\mathbb{N}^\ell\\ s_i\ge1}}\frac1{\ell!}\prod_{i=1}^{\ell}\binom{r_i+s_i}{s_i}\,\big\|R^s_{l,r}(f)\big\|_{\alpha\mathrm{F}}\le\sum_{m,p}\sum_{\ell>0}\tfrac2\alpha\,\|f^{(p,m)}\|_{\alpha\mathrm{F}}\,\Big[D\,\|W\|_{(\alpha+1)\mathrm{F}}\Big]^{\ell}$$
$$=\tfrac2\alpha\,\|f\|_{\alpha\mathrm{F}}\,\frac{D\,\|W\|_{(\alpha+1)\mathrm{F}}}{1-D\,\|W\|_{(\alpha+1)\mathrm{F}}}\le\tfrac3\alpha\,\|f\|_{\alpha\mathrm{F}}\,D\,\|W\|_{(\alpha+1)\mathrm{F}}$$
since $D\,\|W\|_{(\alpha+1)\mathrm{F}}\le\frac13$ makes the geometric series converge with $\frac1{1-D\|W\|_{(\alpha+1)\mathrm{F}}}\le\frac32$.

The proof for the other norm is similar.

Lemma II.12 Assume Hypothesis (HG). If $\alpha\ge1$ then, for all $g(a,c)\in\mathcal{AC}$,
$$\Big\|\int g(a,c)\ d\mu_S(a)\Big\|_{\alpha\mathrm{F}}\le|||g(a,c)|||_{\alpha\mathrm{F}}$$
$$\Big\|\int\big[g(a,c)-g(a,0)\big]\ d\mu_S(a)\Big\|_{\alpha\mathrm{F}}\le\|g(a,c)\|_{\alpha\mathrm{F}}$$

Proof: Let
$$g(a,c)=\sum_{l,r\ge0}\ \sum_{\substack{L\in\mathcal{M}_l\\ J\in\mathcal{M}_r}}g_{l,r}(L,J)\,c_L\,a_J$$
with $g_{l,r}(L,J)$ antisymmetric under separate permutations of its $L$ and $J$ arguments. Then, using $\big|\int a_J\,d\mu_S(a)\big|\le\mathrm{F}^{|J|}$ from Hypothesis (HG),
$$\Big\|\int g(a,c)\ d\mu_S(a)\Big\|_{\alpha\mathrm{F}}=\Big\|\sum_{l,r\ge0}\sum_{\substack{L\in\mathcal{M}_l\\ J\in\mathcal{M}_r}}g_{l,r}(L,J)\,c_L\int a_J\ d\mu_S(a)\Big\|_{\alpha\mathrm{F}}\le\sum_{l,r\ge0}\alpha^{l}\,\mathrm{F}^{l+r}\sum_{\substack{L\in\mathcal{M}_l\\ J\in\mathcal{M}_r}}\big|g_{l,r}(L,J)\big|\le|||g(a,c)|||_{\alpha\mathrm{F}}$$
Similarly,
$$\Big\|\int\big[g(a,c)-g(a,0)\big]\ d\mu_S(a)\Big\|_{\alpha\mathrm{F}}\le\sum_{\substack{l\ge1\\ r\ge0}}\alpha^{l}\,\mathrm{F}^{l+r}\max_{1\le k\le n}\sum_{\substack{\tilde L\in\mathcal{M}_{l-1}\\ J\in\mathcal{M}_r}}\big|g_{l,r}\big((k).\tilde L,J\big)\big|\le\|g(a,c)\|_{\alpha\mathrm{F}}$$

Proof of Corollary II.7: Set $g=(\mathbb{1}-R)^{-1}f$. Then
$$\|\mathcal{S}(f)(c)-\mathcal{S}(f)(0)\|_{\alpha\mathrm{F}}=\Big\|\int\big[g(a,c)-g(a,0)\big]\ d\mu_S(a)\Big\|_{\alpha\mathrm{F}}\qquad\text{(Theorem II.2)}$$
$$\le\big\|(\mathbb{1}-R)^{-1}(f)\big\|_{\alpha\mathrm{F}}\qquad\text{(Lemma II.12)}$$
$$\le\frac1{1-3D\|W\|_{(\alpha+1)\mathrm{F}}/\alpha}\,\|f\|_{\alpha\mathrm{F}}\le\frac1{1-1/\alpha}\,\|f\|_{\alpha\mathrm{F}}=\tfrac{\alpha}{\alpha-1}\,\|f\|_{\alpha\mathrm{F}}\qquad\text{(Theorem II.6)}$$
The argument for $|||\mathcal{S}(f)|||_{\alpha\mathrm{F}}$ is identical. With the more detailed notation

$$\mathcal{S}(f,W)=\frac{\int f(a,c)\,e^{W(a,c)}\ d\mu_S(a)}{\int e^{W(a,c)}\ d\mu_S(a)}$$
we have, by (II.1) followed by the first bound of this Corollary, with the replacements $W\to\varepsilon W$ and $f\to W$,
$$\|\Omega(W)\|_{\alpha\mathrm{F}}=\Big\|\int_0^1\big[\mathcal{S}(W,\varepsilon W)(c)-\mathcal{S}(W,\varepsilon W)(0)\big]\ d\varepsilon\Big\|_{\alpha\mathrm{F}}\le\int_0^1\tfrac{\alpha}{\alpha-1}\,\|W\|_{\alpha\mathrm{F}}\ d\varepsilon=\tfrac{\alpha}{\alpha-1}\,\|W\|_{\alpha\mathrm{F}}$$

We end this subsection with a proof of the continuity of the Schwinger functional
$$\mathcal{S}(f;W,S)=\frac1{Z(c;W,S)}\int f(c,a)\,e^{W(c,a)}\ d\mu_S(a)\qquad\text{where}\quad Z(c;W,S)=\int e^{W(c,a)}\ d\mu_S(a)$$
and the renormalization group map
$$\Omega(W,S)(c)=\log\frac1{Z_{W,S}}\int e^{W(c,a)}\ d\mu_S(a)\qquad\text{where}\quad Z_{W,S}=\int e^{W(0,a)}\ d\mu_S(a)$$
with respect to the interaction $W$ and covariance $S$.

Theorem II.13 Let $\mathrm{F},D>0$, $0<t,v\le\frac12$ and $\alpha\ge4$. Let $W,V\in(\mathcal{AC})^0$. If, for all $H,J\in\bigcup_{r\ge0}\mathcal{M}_r$,
$$\Big|\int b_H\,{:}b_J{:}_S\ d\mu_S(b)\Big|\le\mathrm{F}^{|H|+|J|}\qquad\Big|\int b_H\,{:}b_J{:}_T\ d\mu_T(b)\Big|\le\big(\sqrt{t}\,\mathrm{F}\big)^{|H|+|J|}$$
$$\|S\|\le\mathrm{F}^2D\qquad\|T\|\le t\,\mathrm{F}^2D$$
$$D\,\|W\|_{(\alpha+2)\mathrm{F}}\le\tfrac16\qquad D\,\|V\|_{(\alpha+2)\mathrm{F}}\le\tfrac v6$$
then
$$|||\mathcal{S}(f;W+V,S+T)-\mathcal{S}(f;W,S)|||_{\alpha\mathrm{F}}\le8(t+v)\,|||f|||_{\alpha\mathrm{F}}$$
$$\|\Omega(W+V,S+T)-\Omega(W,S)\|_{\alpha\mathrm{F}}\le\tfrac{3(t+v)}D$$

Proof: First observe that, for all $|z|\le\frac1t$,
$$\Big|\int b_H\,{:}b_J{:}_{zT}\ d\mu_{zT}(b)\Big|\le\mathrm{F}^{|H|+|J|}\qquad\text{by Problem II.3.a}$$
$$\Big|\int b_H\,{:}b_J{:}_{S+zT}\ d\mu_{S+zT}(b)\Big|\le(2\mathrm{F})^{|H|+|J|}\qquad\text{by Problem II.3.b}$$
$$\|S+zT\|\le2\mathrm{F}^2D\le(2\mathrm{F})^2D$$
Also, for all $|z'|\le\frac1v$,
$$D\,\|W+z'V\|_{(\alpha+2)\mathrm{F}}\le\tfrac13$$
Hence, by Corollary II.7, with $\mathrm{F}$ replaced by $2\mathrm{F}$, $\alpha$ replaced by $\frac\alpha2$, $W$ replaced by $W+z'V$ and $S$ replaced by $S+zT$,
$$|||\mathcal{S}(f;W+z'V,S+zT)|||_{\alpha\mathrm{F}}\le\tfrac{\alpha}{\alpha-2}\,|||f|||_{\alpha\mathrm{F}}\qquad\|\Omega(W+z'V,S+zT)\|_{\alpha\mathrm{F}}\le\tfrac{\alpha}{\alpha-2}\,\|W+z'V\|_{\alpha\mathrm{F}}\le\tfrac{\alpha}{\alpha-2}\,\tfrac1{3D}\tag{II.2}$$
for all $|z|\le\frac1t$ and $|z'|\le\frac1v$.
The linear map $R(\,\cdot\,;\varepsilon W+z'V,S+zT)$ is, for each $0\le\varepsilon\le1$, a polynomial in $z$ and $z'$. By Theorem II.6, the kernel of $\mathbb{1}-R$ is trivial for all $0\le\varepsilon\le1$. Hence, by Theorem II.2 and (II.1), both $\mathcal{S}(f;W+z'V,S+zT)$ and $\Omega(W+z'V,S+zT)$ are analytic functions of $z$ and $z'$ on $|z|\le\frac1t$, $|z'|\le\frac1v$.
By the Cauchy integral formula, if $f(z)$ is analytic and bounded in absolute value by $M$ on $|z|\le r$, then
$$f'(z)=\frac1{2\pi\imath}\int_{|\zeta|=r}\frac{f(\zeta)}{(\zeta-z)^2}\ d\zeta$$
and, for all $|z|\le\frac r2$,
$$|f'(z)|\le\frac1{2\pi}\,\frac M{(r/2)^2}\,2\pi r=\frac{4M}r$$
Hence, for all $|z|\le\frac1{2t}$, $|z'|\le\frac1{2v}$,
$$\Big|\Big|\Big|\tfrac d{dz}\mathcal{S}(f;W+z'V,S+zT)\Big|\Big|\Big|_{\alpha\mathrm{F}}\le4t\,\tfrac{\alpha}{\alpha-2}\,|||f|||_{\alpha\mathrm{F}}\qquad\Big|\Big|\Big|\tfrac d{dz'}\mathcal{S}(f;W+z'V,S+zT)\Big|\Big|\Big|_{\alpha\mathrm{F}}\le4v\,\tfrac{\alpha}{\alpha-2}\,|||f|||_{\alpha\mathrm{F}}$$

$$\Big\|\tfrac d{dz}\Omega(W+z'V,S+zT)\Big\|_{\alpha\mathrm{F}}\le4t\,\tfrac{\alpha}{\alpha-2}\,\tfrac1{3D}\qquad\Big\|\tfrac d{dz'}\Omega(W+z'V,S+zT)\Big\|_{\alpha\mathrm{F}}\le4v\,\tfrac{\alpha}{\alpha-2}\,\tfrac1{3D}$$
By the chain rule, for all $|z|\le1$,
$$\Big|\Big|\Big|\tfrac d{dz}\mathcal{S}(f;W+zV,S+zT)\Big|\Big|\Big|_{\alpha\mathrm{F}}\le4(t+v)\,\tfrac{\alpha}{\alpha-2}\,|||f|||_{\alpha\mathrm{F}}\qquad\Big\|\tfrac d{dz}\Omega(W+zV,S+zT)\Big\|_{\alpha\mathrm{F}}\le4(t+v)\,\tfrac{\alpha}{\alpha-2}\,\tfrac1{3D}$$
Integrating $z$ from $0$ to $1$,

$$|||\mathcal{S}(f;W+V,S+T)-\mathcal{S}(f;W,S)|||_{\alpha\mathrm{F}}\le4(t+v)\,\tfrac{\alpha}{\alpha-2}\,|||f|||_{\alpha\mathrm{F}}\qquad\|\Omega(W+V,S+T)-\Omega(W,S)\|_{\alpha\mathrm{F}}\le4(t+v)\,\tfrac{\alpha}{\alpha-2}\,\tfrac1{3D}$$
As $\alpha\ge4$, $\frac{\alpha}{\alpha-2}\le2$ and the Theorem follows.

II.4 Sample Applications

We now apply the expansion to a few examples. In these examples, the set of Grassmann algebra generators $\{a_1,\dots,a_n\}$ is replaced by $\big\{\psi_{x,\sigma},\bar\psi_{x,\sigma}\ \big|\ x\in\mathbb{R}^{d+1},\ \sigma\in S\big\}$ with $S$ being a finite set (of spin/colour values). See §I.5. To save writing, we introduce the notation
$$\xi=(x,\sigma,b)\in\mathbb{R}^{d+1}\times S\times\{0,1\}\qquad\int d\xi\,\cdots=\sum_{b\in\{0,1\}}\sum_{\sigma\in S}\int d^{d+1}x\,\cdots\qquad\psi_\xi=\begin{cases}\psi_{x,\sigma}&\text{if }b=0\\ \bar\psi_{x,\sigma}&\text{if }b=1\end{cases}$$
Now elements of the Grassmann algebra have the form

$$f(\psi)=\sum_{r=0}^{\infty}\int d\xi_1\cdots d\xi_r\ f_r(\xi_1,\dots,\xi_r)\,\psi_{\xi_1}\cdots\psi_{\xi_r}$$
and the norms $\|f_r\|$ and $\|f(\psi)\|_\alpha$ become
$$\|f_r\|=\max_{1\le i\le r}\,\sup_{\xi_i}\int\prod_{\substack{j=1\\ j\ne i}}^{r}d\xi_j\ \big|f_r(\xi_1,\dots,\xi_r)\big|\qquad\|f(\psi)\|_\alpha=\sum_{r=0}^{\infty}\alpha^r\,\|f_r\|$$
When we need a second copy of the generators, we use $\Psi_\xi$, $\xi\in\mathbb{R}^{d+1}\times S\times\{0,1\}$ (in place of $\{c_1,\dots,c_n\}$). Then elements of the Grassmann algebra have the form
$$f(\Psi,\psi)=\sum_{l,r=0}^{\infty}\int d\xi'_1\cdots d\xi'_l\ d\xi_1\cdots d\xi_r\ f_{l,r}(\xi'_1,\dots,\xi'_l;\xi_1,\dots,\xi_r)\,\Psi_{\xi'_1}\cdots\Psi_{\xi'_l}\,\psi_{\xi_1}\cdots\psi_{\xi_r}$$
and the norms $\|f_{l,r}\|$ and $\|f(\Psi,\psi)\|_\alpha$ are
$$\|f_{l,r}\|=\max_{1\le i\le l+r}\,\sup_{\xi_i}\int\prod_{\substack{j=1\\ j\ne i}}^{l+r}d\xi_j\ \big|f_{l,r}(\xi_1,\dots,\xi_l;\xi_{l+1},\dots,\xi_{l+r})\big|\qquad\|f(\Psi,\psi)\|_\alpha=\sum_{l,r=0}^{\infty}\alpha^{l+r}\,\|f_{l,r}\|$$
All of our Grassmann Gaussian integrals will obey

$$\int\psi_{x,\sigma}\,\psi_{x',\sigma'}\ d\mu_S(\psi)=0\qquad\int\bar\psi_{x,\sigma}\,\bar\psi_{x',\sigma'}\ d\mu_S(\psi)=0\qquad\int\psi_{x,\sigma}\,\bar\psi_{x',\sigma'}\ d\mu_S(\psi)=-\int\bar\psi_{x',\sigma'}\,\psi_{x,\sigma}\ d\mu_S(\psi)$$
Hence, if $\xi=(x,\sigma,b)$, $\xi'=(x',\sigma',b')$,
$$S(\xi,\xi')=\begin{cases}0&\text{if }b=b'=0\\ C_{\sigma,\sigma'}(x,x')&\text{if }b=0,\ b'=1\\ -C_{\sigma',\sigma}(x',x)&\text{if }b=1,\ b'=0\\ 0&\text{if }b=b'=1\end{cases}$$
where
$$C_{\sigma,\sigma'}(x,x')=\int\psi_{x,\sigma}\,\bar\psi_{x',\sigma'}\ d\mu_S(\psi)$$
That our Grassmann algebra is no longer finite dimensional is, in itself, not a big deal. The Grassmann algebras are not the ultimate objects of interest. The ultimate objects of interest are various expectation values. These expectation values are complex numbers that we have chosen to express as the values of Grassmann Gaussian integrals. See (I.3). If the covariances of interest were to satisfy the hypotheses (HG) and (HS), we would be able to easily express the expectation values as limits of integrals over finite dimensional Grassmann algebras using Corollary II.7 and Theorem II.13.
The real difficulty is that for many, perhaps most, models of interest, the covariances (also called propagators) do not satisfy (HG) and (HS). So, as explained in §I.5, we express the covariance as a sum of terms, each of which does satisfy the hypotheses. These terms, called single scale covariances, will, in each example, be constructed by substituting a partition of unity of $\mathbb{R}^{d+1}$ (momentum space) into the full covariance. The partition of unity will be constructed using a fixed "scale parameter" $M>1$ and a function $\nu\in C_0^\infty\big([M^{-2},M^2]\big)$ that takes values in $[0,1]$, is identically $1$ on $[M^{-1/2},M^{1/2}]$ and obeys
$$\sum_{j=0}^{\infty}\nu\big(M^{2j}x\big)=1\tag{II.3}$$
for $0<x<1$.

Problem II.6 Let $M>1$. Construct a function $\nu\in C_0^\infty\big([M^{-2},M^2]\big)$ that takes values in $[0,1]$, is identically $1$ on $[M^{-1/2},M^{1/2}]$ and obeys
$$\sum_{j=0}^{\infty}\nu\big(M^{2j}x\big)=1$$
for $0<x<1$.
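One standard construction (a sketch of one possible solution, not necessarily the one in Appendix D; the smooth step built from $e^{-1/t}$ is an implementation choice) takes $\nu(x)=\theta(x)-\theta(x/M^2)$, where $\theta$ is a smooth nondecreasing step that vanishes for $x\le M^{-3/2}$ and equals $1$ for $x\ge M^{-1/2}$; the sum over scales then telescopes to $1$ on $(0,1)$. Numerically:

```python
import math

M = 2.0

def bump(t):
    return math.exp(-1.0 / t) if t > 0 else 0.0

def step(t):
    # smooth nondecreasing step: 0 for t <= 0, 1 for t >= 1
    return bump(t) / (bump(t) + bump(1.0 - t))

def theta(x):
    lo, hi = M ** -1.5, M ** -0.5
    return step((x - lo) / (hi - lo))

def nu(x):
    # supported in [M^-3/2, M^3/2], a subset of [M^-2, M^2];
    # identically 1 on [M^-1/2, M^1/2]
    return theta(x) - theta(x / M ** 2)

# partition of unity: sum_{j >= 0} nu(M^{2j} x) = 1 for 0 < x < 1
for x in (0.9, 0.5, 0.01, 1e-6):
    total = sum(nu(M ** (2 * j) * x) for j in range(60))
    assert abs(total - 1.0) < 1e-12, (x, total)

assert abs(nu(M ** -0.5) - 1.0) < 1e-12 and abs(nu(M ** 0.5) - 1.0) < 1e-12
assert nu(M ** -2) == 0.0 and nu(M ** 2) == 0.0
print("ok")
```

Because $\nu(M^{2j}x)=\theta(M^{2j}x)-\theta(M^{2(j-1)}x)$, the partial sums collapse to $\theta(M^{2J}x)-\theta(M^{-2}x)$, which tends to $1-0$ for $0<x<1$.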

Example (Gross–Neveu₂) The propagator for the Gross–Neveu model in two space-time dimensions is
$$C_{\sigma,\sigma'}(x,x')=\int\psi_{x,\sigma}\,\bar\psi_{x',\sigma'}\ d\mu_S(\psi)=\int\frac{d^2p}{(2\pi)^2}\ e^{ip\cdot(x'-x)}\ \frac{\big[\not p\big]_{\sigma,\sigma'}+m\,\delta_{\sigma,\sigma'}}{p^2+m^2}$$
where
$$\not p=\begin{pmatrix}ip_0&p_1\\ -p_1&-ip_0\end{pmatrix}$$
is a $2\times2$ matrix whose rows and columns are indexed by $\sigma\in\{\uparrow,\downarrow\}$. This propagator does not satisfy Hypothesis (HG) for any finite $\mathrm{F}$. If it did satisfy (HG) for some finite $\mathrm{F}$, then $C_{\sigma,\sigma'}(x,x')=\int\psi_{x,\sigma}\bar\psi_{x',\sigma'}\,d\mu_S(\psi)$ would be bounded by $\mathrm{F}^2$ for all $x$ and $x'$. This is not

the case: it blows up as $x'-x\to0$. Set
$$\nu_j(p)=\begin{cases}\nu\big(\tfrac{M^{2j}}{p^2}\big)&\text{if }j>0\\[2pt] \nu\big(\tfrac{M^{2j}}{p^2}\big)&\text{if }j=0,\ |p|\ge1\\[2pt] 1&\text{if }j=0,\ |p|<1\end{cases}$$
Then
$$S(x,y)=\sum_{j=0}^{\infty}S^{(j)}(x,y)$$
with
$$S^{(j)}(\xi,\xi')=\begin{cases}0&\text{if }b=b'=0\\ C^{(j)}_{\sigma,\sigma'}(x,x')&\text{if }b=0,\ b'=1\\ -C^{(j)}_{\sigma',\sigma}(x',x)&\text{if }b=1,\ b'=0\\ 0&\text{if }b=b'=1\end{cases}$$
and
$$C^{(j)}_{\sigma,\sigma'}(x,x')=\int\frac{d^2p}{(2\pi)^2}\ e^{ip\cdot(x'-x)}\ \Big[\frac{\not p+m}{p^2+m^2}\Big]_{\sigma,\sigma'}\,\nu_j(p)\tag{II.4}$$
We now check that, for each $0\le j<\infty$, $S^{(j)}$ does satisfy Hypotheses (HG) and (HS). The integrand of $C^{(j)}$ is supported on $M^{j-1}\le|p|\le M^{j+1}$ for $j>0$ and on $|p|\le M^{j+1}$ for $j=0$. This is a region of volume at most $\pi\big(M^{j+1}\big)^2\le\mathrm{const}\,M^{2j}$. By Corollary I.35 and Problem II.7, below, the value of $\mathrm{F}$ for this propagator is bounded by
$$\mathrm{F}_j=2\bigg(\int\Big\|\tfrac{\not p+m}{p^2+m^2}\Big\|\,\nu_j(p)\,\tfrac{d^2p}{(2\pi)^2}\bigg)^{1/2}\le C_{\mathrm{F}}\Big(M^{2j}\,\tfrac1{M^j}\Big)^{1/2}=C_{\mathrm{F}}\,M^{j/2}$$
for some constant $C_{\mathrm{F}}$. Here $\big\|\tfrac{\not p+m}{p^2+m^2}\big\|$ is the matrix norm of $\tfrac{\not p+m}{p^2+m^2}$.

Problem II.7 Prove that
$$\Big\|\tfrac{\not p+m}{p^2+m^2}\Big\|=\tfrac1{\sqrt{p^2+m^2}}$$
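The identity of Problem II.7 is easy to check numerically: with the convention $\not p=\big(\begin{smallmatrix}ip_0&p_1\\ -p_1&-ip_0\end{smallmatrix}\big)$ used above, $(\not p+m)^\dagger(\not p+m)=(p^2+m^2)\,\mathbb{1}$, so every singular value of $\tfrac{\not p+m}{p^2+m^2}$ equals $\tfrac1{\sqrt{p^2+m^2}}$. A sketch (numpy assumed available):

```python
import numpy as np

def pslash(p0, p1):
    return np.array([[1j * p0, p1],
                     [-p1, -1j * p0]])

rng = np.random.default_rng(1)
for _ in range(5):
    p0, p1, m = rng.normal(size=3)
    p2 = p0 ** 2 + p1 ** 2
    A = (pslash(p0, p1) + m * np.eye(2)) / (p2 + m ** 2)
    # operator (spectral) norm = largest singular value
    lhs = np.linalg.norm(A, 2)
    rhs = 1.0 / np.sqrt(p2 + m ** 2)
    assert abs(lhs - rhs) < 1e-12
    # both singular values coincide: (pslash+m)^dagger (pslash+m) is scalar
    H = A.conj().T @ A
    assert np.allclose(H, H[0, 0] * np.eye(2))
print("ok")
```

The key algebraic fact, $\not p^{\,2}=-p^2\,\mathbb{1}$ together with $\not p^\dagger=-\not p$, is what the second assertion verifies.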

By the following Lemma, the value of $D$ for this propagator is bounded by
$$D_j=\tfrac1{M^{2j}}$$
We have increased the value of $C_{\mathrm{F}}$ in $\mathrm{F}_j$ in order to avoid having a const in $D_j$.

Lemma II.14
$$\sup_{x,\sigma}\ \sum_{\sigma'}\int d^2y\ \big|C^{(j)}_{\sigma,\sigma'}(x,y)\big|\le\mathrm{const}\,\tfrac1{M^j}$$

Proof: When $j=0$, the Lemma reduces to $\sup_{x,\sigma}\sum_{\sigma'}\int d^2y\,\big|C^{(0)}_{\sigma,\sigma'}(x,y)\big|<\infty$. This is the case because every matrix element of $\frac{\not p+m}{p^2+m^2}\,\nu_j(p)$ is $C_0^\infty$, so that $C^{(0)}_{\sigma,\sigma'}(x,y)$ is a $C^\infty$ rapidly decaying function of $x-y$. Hence it suffices to consider $j\ge1$. The integrand of (II.4) is supported on a region of volume at most $\pi\big(M^{j+1}\big)^2\le\mathrm{const}\,M^{2j}$ and has every matrix element bounded by
$$\frac{M^{j+1}+m}{M^{2j-2}+m^2}\le\mathrm{const}\,\frac1{M^j}$$

since $M^{j-1}\le|p|\le M^{j+1}$ on the support of $\nu_j(p)$. Hence
$$\sup_{x,y}\ \sum_{\sigma,\sigma'}\big|C^{(j)}_{\sigma,\sigma'}(x,y)\big|\le\sum_{\sigma,\sigma'}\int\Big|\Big[\tfrac{\not p+m}{p^2+m^2}\Big]_{\sigma,\sigma'}\Big|\,\nu_j(p)\,\tfrac{d^2p}{(2\pi)^2}\tag{II.5}$$
$$\le\mathrm{const}\,\tfrac1{M^j}\int\nu_j(p)\ d^2p\le\mathrm{const}\,\tfrac1{M^j}\,M^{2j}\le\mathrm{const}\,M^j$$
To show that $C^{(j)}_{\sigma,\sigma'}(x,y)$ decays sufficiently quickly in $x-y$, we play the usual integration by parts game.

$$|y-x|^4\,C^{(j)}_{\sigma,\sigma'}(x,y)=\int\frac{d^2p}{(2\pi)^2}\ \Big[\frac{\not p+m}{p^2+m^2}\Big]_{\sigma,\sigma'}\nu_j(p)\ \Big(\tfrac{\partial^2}{\partial p_0^2}+\tfrac{\partial^2}{\partial p_1^2}\Big)^2\,e^{ip\cdot(y-x)}$$
$$=\int\frac{d^2p}{(2\pi)^2}\ e^{ip\cdot(y-x)}\ \Big(\tfrac{\partial^2}{\partial p_0^2}+\tfrac{\partial^2}{\partial p_1^2}\Big)^2\,\Big[\frac{\not p+m}{p^2+m^2}\,\nu_j(p)\Big]_{\sigma,\sigma'}$$

The dierence between the degrees of the numerator and denominator of ∂ P (p) obeys ∂pi Q(p)

deg ∂P Q P ∂Q deg Q2 deg P + deg Q 1 2deg Q = deg P deg Q 1 ∂pi − ∂pi − ≤ − − − −  Hence (α0,α1) α α P (p) ∂ 0 ∂ 1 p+m σ,σ0 α α 0 1 p62+m2 0 = 2 2 1+α0+α1 ∂p0 ∂p1 σ,σ (p +m )

(α0,α1)  2 2 1+α0+α1 with P 0 (p) a polynomial of degree at most deg (p + m ) 1 α α = σ,σ − − 0 − 1 ∂ p+m 1+α +α . In words, each acting on 62 2 increases the dierence between the degree 0 1 ∂pi p +m  of the denominator and the degree of the numerator by one. So does each ∂ acting on ∂pi j νj(p), provided you count both M and p as having degree one. For example,

$$\frac{\partial}{\partial p_0}\,\nu\Big(\tfrac{M^{2j}}{p^2}\Big)=-\frac{2p_0\,M^{2j}}{p^4}\ \nu'\Big(\tfrac{M^{2j}}{p^2}\Big)$$
$$\frac{\partial^2}{\partial p_0^2}\,\nu\Big(\tfrac{M^{2j}}{p^2}\Big)=\Big[-\frac{2M^{2j}}{p^4}+\frac{8p_0^2\,M^{2j}}{p^6}\Big]\,\nu'\Big(\tfrac{M^{2j}}{p^2}\Big)+\frac{4p_0^2\,M^{4j}}{p^8}\,\nu''\Big(\tfrac{M^{2j}}{p^2}\Big)$$
In general,

$$\Big(\tfrac{\partial^2}{\partial p_0^2}+\tfrac{\partial^2}{\partial p_1^2}\Big)^2\,\Big[\tfrac{\not p+m}{p^2+m^2}\Big]_{\sigma,\sigma'}\nu_j(p)=\sum_{\substack{n,\ell\in\mathbb{N}\\ 1\le n+\ell\le4}}\frac{P_{\sigma,\sigma',n,\ell}(p)}{(p^2+m^2)^{1+n}}\ \frac{M^{2\ell j}\,Q_{n,\ell}(p)}{p^{4\ell+2(4-n-\ell)}}\ \nu^{(\ell)}\Big(\tfrac{M^{2j}}{p^2}\Big)$$
Here $n$ is the number of derivatives that acted on $\big[\tfrac{\not p+m}{p^2+m^2}\big]_{\sigma,\sigma'}$ and $\ell$ is the number of derivatives that acted on $\nu$. The remaining $4-n-\ell$ derivatives acted on the factors arising from the argument of $\nu$ upon application of the chain rule. The polynomials $P_{\sigma,\sigma',n,\ell}(p)$ and $Q_{n,\ell}(p)$ have degrees at most $1+n$ and $\ell+(4-n-\ell)$, respectively. All together, when you count both $M^j$ and $p$ as having degree one, the degree of the denominator $(p^2+m^2)^{1+n}\,p^{4\ell+2(4-n-\ell)}$, namely $2(1+n)+4\ell+2(4-n-\ell)=10+2\ell$, is at least five larger than the degree of the numerator $P_{\sigma,\sigma',n,\ell}(p)\,M^{2\ell j}\,Q_{n,\ell}(p)$, which is at most $1+n+2\ell+\ell+(4-n-\ell)=5+2\ell$. Recalling that $|p|$ is bounded above and below by $\mathrm{const}\,M^j$ (of course with different constants),

$$\bigg|\frac{P_{\sigma,\sigma',n,\ell}(p)}{(p^2+m^2)^{1+n}}\ \frac{M^{2\ell j}\,Q_{n,\ell}(p)}{p^{4\ell+2(4-n-\ell)}}\ \nu^{(\ell)}\Big(\tfrac{M^{2j}}{p^2}\Big)\bigg|\le\mathrm{const}\,\frac{M^{(1+n)j}\,M^{2\ell j}\,M^{(4-n)j}}{M^{2j(1+n)}\,M^{j(8-2n+2\ell)}}=\mathrm{const}\,\frac1{M^{5j}}$$
and
$$\Big|\Big(\tfrac{\partial^2}{\partial p_0^2}+\tfrac{\partial^2}{\partial p_1^2}\Big)^2\,\Big[\tfrac{\not p+m}{p^2+m^2}\Big]_{\sigma,\sigma'}\nu_j(p)\Big|\le\mathrm{const}\,\tfrac1{M^{5j}}$$
on the support of the integrand. The support of $\big(\tfrac{\partial^2}{\partial p_0^2}+\tfrac{\partial^2}{\partial p_1^2}\big)^2\big[\tfrac{\not p+m}{p^2+m^2}\big]\nu_j(p)$ is contained in the support of $\nu_j(p)$. So the integrand is still supported in a region of volume $\mathrm{const}\,M^{2j}$ and
$$\sup_{x,y}\ \sum_{\sigma,\sigma'}M^{4j}\,|y-x|^4\,\big|C^{(j)}_{\sigma,\sigma'}(x,y)\big|\le\mathrm{const}\,M^{4j}\,\tfrac1{M^{5j}}\,M^{2j}\le\mathrm{const}\,M^j\tag{II.6}$$

Multiplying the $\frac14$ power of (II.5) by the $\frac34$ power of (II.6) gives
$$\sup_{x,y}\ \sum_{\sigma,\sigma'}M^{3j}\,|y-x|^3\,\big|C^{(j)}_{\sigma,\sigma'}(x,y)\big|\le\mathrm{const}\,M^j\tag{II.7}$$

Adding (II.5) to (II.7) gives
$$\sup_{x,y}\ \sum_{\sigma,\sigma'}\big[1+M^{3j}\,|y-x|^3\big]\,\big|C^{(j)}_{\sigma,\sigma'}(x,y)\big|\le\mathrm{const}\,M^j$$

Dividing across,
$$\big|C^{(j)}_{\sigma,\sigma'}(x,y)\big|\le\mathrm{const}\,\frac{M^j}{1+M^{3j}\,|x-y|^3}$$
Integrating,
$$\int d^2y\ \big|C^{(j)}_{\sigma,\sigma'}(x,y)\big|\le\mathrm{const}\int d^2y\ \frac{M^j}{1+M^{3j}\,|x-y|^3}=\mathrm{const}\,\frac1{M^j}\int d^2z\ \frac1{1+|z|^3}\le\mathrm{const}\,\frac1{M^j}$$
We made the change of variables $z=M^j(y-x)$.
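The scaling in this last step, $\int d^2y\,\frac{M^j}{1+M^{3j}|x-y|^3}=\frac1{M^j}\int d^2z\,\frac1{1+|z|^3}$, can be sanity-checked numerically; the radial integral $2\pi\int_0^\infty\frac{\rho\,d\rho}{1+\rho^3}$ has the closed value $\frac{4\pi^2}{3\sqrt3}$. The following sketch uses plain midpoint quadrature; the cutoff $R$ and step count are arbitrary choices of this sketch, not part of the text.

```python
import math

def bound_integral(j, M=2.0, R=400.0, n=200000):
    # \int d^2y  M^j / (1 + M^{3j} |y|^3), computed radially out to |y| = R
    total, h = 0.0, R / n
    for i in range(n):
        rho = (i + 0.5) * h
        total += 2 * math.pi * rho * h * M ** j / (1 + (M ** j * rho) ** 3)
    return total

exact = 4 * math.pi ** 2 / (3 * math.sqrt(3))   # 2*pi * \int_0^oo rho/(1+rho^3)
for j in (1, 2, 3):
    # the integral should equal M^{-j} * \int d^2z (1+|z|^3)^{-1}, so
    # multiplying by M^j removes the j dependence
    ratio = bound_integral(j) * 2.0 ** j
    assert abs(ratio - exact) < 2e-2, (j, ratio, exact)
print("ok")
```

The assertion checks exactly the statement used in Lemma II.14: the $d^2y$ integral of the pointwise bound is $\mathrm{const}/M^j$, with the same constant for every scale $j$.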

To apply Corollary II.7 to this model, we fix some $\alpha\ge2$ and define the norm $\|W\|_j$ of $W(\psi)=\sum_{r>0}\int d\xi_1\cdots d\xi_r\ w_r(\xi_1,\dots,\xi_r)\,\psi_{\xi_1}\cdots\psi_{\xi_r}$ to be
$$\|W\|_j=D_j\,\|W\|_{\alpha\mathrm{F}_j}=\sum_r\big(\alpha C_{\mathrm{F}}\big)^r\,M^{j\frac{r-4}2}\,\|w_r\|$$
Let $J>0$ be a cutoff parameter (meaning that, in the end, the model is defined by taking the limit $J\to\infty$) and define, as in §I.5,

$$\mathcal{G}_J(\Psi)=\log\frac1{Z_J}\int e^{W(\Psi+\psi)}\ d\mu_{S^{(\le J)}}(\psi)\qquad\text{where}\quad Z_J=\int e^{W(\psi)}\ d\mu_{S^{(\le J)}}(\psi)$$
and
$$\Omega_j(W)(\Psi)=\log\frac1{Z_{W,S^{(j)}}}\int e^{W(\Psi+\psi)}\ d\mu_{S^{(j)}}(\psi)\qquad\text{where}\quad Z_{W,S^{(j)}}=\int e^{W(\psi)}\ d\mu_{S^{(j)}}(\psi)$$
Then, by Problem I.16,

$$\mathcal{G}_J=\Omega_{S^{(1)}}\circ\Omega_{S^{(2)}}\circ\cdots\circ\Omega_{S^{(J)}}(W)$$

Set
$$W_j=\Omega_{S^{(j)}}\circ\Omega_{S^{(j+1)}}\circ\cdots\circ\Omega_{S^{(J)}}(W)$$
Suppose that we have shown that $\|W_j\|_j\le\frac13$. To integrate out scale $j-1$ we use

Theorem II.15GN Suppose $\alpha\ge2$ and $M\ge\tfrac{\alpha}{\alpha-1}\big(2\,\tfrac{\alpha+1}\alpha\big)^6$. If $\|W\|_j\le\frac13$ and $w_r$ vanishes for $r\le4$, then $\|\Omega_{j-1}(W)\|_{j-1}\le\|W\|_j$.

Proof: We first have to relate $\|W(\Psi+\psi)\|_\alpha$ to $\|W(\psi)\|_\alpha$, because we wish to apply Corollary II.7 with $W(c,a)$ replaced by $W(\Psi+\psi)$. To do so, we temporarily revert to the old notation with $c$ and $a$ generators, rather than $\Psi$ and $\psi$ generators. Observe that
$$W(c+a)=\sum_m\sum_{I\in\mathcal{M}_m}w_m(I)\,(c+a)_I=\sum_m\sum_{I\in\mathcal{M}_m}\sum_{J\subset I}\pm\,w_m\big(J.(I\setminus J)\big)\,c_J\,a_{I\setminus J}=\sum_{l,r}\sum_{\substack{J\in\mathcal{M}_l\\ K\in\mathcal{M}_r}}\binom{l+r}l\,w_{l+r}(J.K)\,c_J\,a_K$$
In passing from the first line to the second line, we used that $w_m(I)$ is antisymmetric under permutation of its arguments. In passing from the second line to the third line, we renamed $I\setminus J=K$. The $\binom{l+r}l$ arises because, given two ordered sets $J$, $K$ with $|J|+|K|=|I|$, there are $\binom{l+r}l$ ordered sets $I$ with $J\subset I$, $K=I\setminus J$. Hence
$$\|W(c+a)\|_\alpha=\sum_{l,r}\alpha^{l+r}\binom{l+r}l\,\|w_{l+r}\|=\sum_m\alpha^m\,2^m\,\|w_m\|=\|W(a)\|_{2\alpha}$$
Similarly, $\|W(\Psi+\psi)\|_\alpha=\|W(\psi)\|_{2\alpha}$.

To apply Corollary II.7 at scale $j-1$, we need
$$D_{j-1}\,\|W(\Psi+\psi)\|_{(\alpha+1)\mathrm{F}_{j-1}}=D_{j-1}\,\|W(\psi)\|_{2(\alpha+1)\mathrm{F}_{j-1}}\le\tfrac13$$
But
$$D_{j-1}\,\|W\|_{2(\alpha+1)\mathrm{F}_{j-1}}=\sum_r\big(2(\alpha+1)C_{\mathrm{F}}\big)^r\,M^{(j-1)\frac{r-4}2}\,\|w_r\|=\sum_{r\ge6}\Big(2\,\tfrac{\alpha+1}\alpha\Big)^r\,M^{-\frac{r-4}2}\,\big(\alpha C_{\mathrm{F}}\big)^r\,M^{j\frac{r-4}2}\,\|w_r\|$$
$$\le\sum_{r\ge6}\Big(\big(2\,\tfrac{\alpha+1}\alpha\big)\,M^{-\frac16}\Big)^r\,\big(\alpha C_{\mathrm{F}}\big)^r\,M^{j\frac{r-4}2}\,\|w_r\|\le\Big(2\,\tfrac{\alpha+1}\alpha\Big)^6\,\tfrac1M\,\|W\|_j\le\tfrac13$$
as $M\ge\big(2\,\tfrac{\alpha+1}\alpha\big)^6$ and $\|W\|_j\le\tfrac13$; here we used $\frac{r-4}2\ge\frac r6$ for $r\ge6$. By Corollary II.7,
$$\|\Omega_{j-1}(W)\|_{j-1}=D_{j-1}\,\|\Omega_{j-1}(W)\|_{\alpha\mathrm{F}_{j-1}}\le\tfrac{\alpha}{\alpha-1}\,D_{j-1}\,\|W(\Psi+\psi)\|_{\alpha\mathrm{F}_{j-1}}\le\tfrac{\alpha}{\alpha-1}\,\Big(2\,\tfrac{\alpha+1}\alpha\Big)^6\,\tfrac1M\,\|W\|_j\le\|W\|_j$$

Theorem II.15GN is just one ingredient used in the construction of the Gross–Neveu₂ model. It basically reduces the problem to the study of the projection
$$PW(\psi)=\sum_{r=2,4}\int d\xi_1\cdots d\xi_r\ w_r(\xi_1,\dots,\xi_r)\,\psi_{\xi_1}\cdots\psi_{\xi_r}$$
of $W$ onto the part of the Grassmann algebra of degree at most four. A souped up, "renormalized", version of Theorem II.15GN can be used to reduce the problem to the study of the projection $P'W$ of $W$ onto a three dimensional subspace of the range of $P$.

Example (Naive Many-fermion₂) The propagator, or covariance, for many-fermion models is
$$C_{\sigma,\sigma'}(x,x')=\delta_{\sigma,\sigma'}\int\frac{d^3k}{(2\pi)^3}\ e^{ik\cdot(x'-x)}\ \frac1{ik_0-e(\mathbf{k})}$$
where $k=(k_0,\mathbf{k})$ and $e(\mathbf{k})$ is the one particle dispersion relation (a generalisation of $\frac{\mathbf{k}^2}{2m}$) minus the chemical potential (which controls the density of the gas). The subscript on many-fermion₂ signifies that the number of space dimensions is two (i.e. $\mathbf{k}\in\mathbb{R}^2$, $k\in\mathbb{R}^3$). For pedagogical reasons, I am not using the standard many-body Fourier transform conventions. We assume that $e(\mathbf{k})$ is a reasonably smooth function (for example, $C^4$) that has a nonempty, compact, strictly convex zero set, called the Fermi curve and denoted $\mathcal{F}$. We further assume that $\nabla e(\mathbf{k})$ does not vanish for $\mathbf{k}\in\mathcal{F}$, so that $\mathcal{F}$ is itself a reasonably smooth curve. At low temperatures only those momenta with $k_0\approx0$ and $\mathbf{k}$ near $\mathcal{F}$ are important, so we replace the above propagator with
$$C_{\sigma,\sigma'}(x,x')=\delta_{\sigma,\sigma'}\int\frac{d^3k}{(2\pi)^3}\ e^{ik\cdot(x'-x)}\ \frac{U(k)}{ik_0-e(\mathbf{k})}$$

The precise ultraviolet cutoff, $U(k)$, shall be chosen shortly. It is a $C_0^\infty$ function which takes values in $[0,1]$, is identically $1$ for $k_0^2+e(\mathbf{k})^2\le1$ and vanishes for $k_0^2+e(\mathbf{k})^2$ larger than some constant. This covariance does not satisfy Hypothesis (HS) for any finite $D$. If it did, $C_{\sigma,\sigma}(0,x')$ would be $L^1$ in $x'$ and consequently the Fourier transform $\frac{U(k)}{ik_0-e(\mathbf{k})}$ would be uniformly bounded. But $\frac{U(k)}{ik_0-e(\mathbf{k})}$ blows up at $k_0=0$, $e(\mathbf{k})=0$. So we write the covariance as a sum of infinitely many "single scale" covariances, each of which does satisfy (HG) and (HS). This decomposition is implemented through a partition of unity of the set of all $k$'s with $k_0^2+e(\mathbf{k})^2\le1$.
We slice momentum space into shells around the Fermi curve. The $j^{\rm th}$ shell is defined to be the support of
$$\nu^{(j)}(k)=\nu\big(M^{2j}\big[k_0^2+e(\mathbf{k})^2\big]\big)$$
where $\nu$ is the function of (II.3). By construction, $\nu(x)$ vanishes unless $\frac1{M^2}\le x\le M^2$, so that the $j^{\rm th}$ shell is a subset of
$$\Big\{\,k\ \Big|\ \tfrac1{M^{j+1}}\le\big|ik_0-e(\mathbf{k})\big|\le\tfrac1{M^{j-1}}\,\Big\}$$
As the scale parameter $M>1$, the shells near the Fermi curve have $j$ near $+\infty$. Setting
$$C^{(j)}_{\sigma,\sigma'}(x,x')=\delta_{\sigma,\sigma'}\int\frac{d^3k}{(2\pi)^3}\ e^{ik\cdot(x'-x)}\ \frac{\nu^{(j)}(k)}{ik_0-e(\mathbf{k})}$$
and $U(k)=\sum_{j=0}^{\infty}\nu^{(j)}(k)$ we have
$$C_{\sigma,\sigma'}(x,x')=\sum_{j=0}^{\infty}C^{(j)}_{\sigma,\sigma'}(x,x')$$
The integrand of the propagator $C^{(j)}$ is supported on a region of volume at most $\mathrm{const}\,M^{-2j}$ ($|k_0|\le\frac1{M^{j-1}}$ and, as $|e(\mathbf{k})|\le\frac1{M^{j-1}}$ and $\nabla e$ is nonvanishing on $\mathcal{F}$, $\mathbf{k}$ must remain within a distance $\mathrm{const}\,M^{-j}$ of $\mathcal{F}$) and is bounded by $M^{j+1}$. By Corollary I.35, the value of $\mathrm{F}$ for this propagator is bounded by

$$\mathrm{F}_j=2\bigg(\int\frac{\nu^{(j)}(k)}{|ik_0-e(\mathbf{k})|}\,\frac{d^3k}{(2\pi)^3}\bigg)^{1/2}\le C_{\mathrm{F}}\Big(M^j\,\tfrac1{M^{2j}}\Big)^{1/2}=C_{\mathrm{F}}\,\tfrac1{M^{j/2}}\tag{II.8}$$
for some constant $C_{\mathrm{F}}$. Also
$$\sup_{x,y}\ \big|C^{(j)}_{\sigma,\sigma'}(x,y)\big|\le\int\frac{\nu^{(j)}(k)}{|ik_0-e(\mathbf{k})|}\,\frac{d^3k}{(2\pi)^3}\le\mathrm{const}\,\tfrac1{M^j}$$

Each derivative $\frac{\partial}{\partial k_i}$ acting on $\frac{\nu^{(j)}(k)}{ik_0-e(\mathbf{k})}$ increases the supremum of its magnitude by a factor of order $M^j$. So the naive argument of Lemma II.14 gives
$$\big|C^{(j)}_{\sigma,\sigma'}(x,y)\big|\le\mathrm{const}\,\frac{1/M^j}{\big[1+M^{-j}\,|x-y|\big]^4}\quad\Longrightarrow\quad\sup_{x,\sigma}\ \sum_{\sigma'}\int d^3y\ \big|C^{(j)}_{\sigma,\sigma'}(x,y)\big|\le\mathrm{const}\,M^{2j}$$
There is not much point going through this bound in greater detail, because Corollary C.3 gives a better bound. In Appendix C, we express, for any $l_j\in\big[\tfrac1{M^j},\tfrac1{M^{j/2}}\big]$, $C^{(j)}_{\sigma,\sigma'}(x,y)$ as a sum of at most $\mathrm{const}\,\frac1{l_j}$ terms, each of which is bounded in Corollary C.3. Applying that bound, with $l_j=\frac1{M^{j/2}}$, yields the better bound

$$\sup_{x,\sigma}\ \sum_{\sigma'}\int d^3y\ \big|C^{(j)}_{\sigma,\sigma'}(x,y)\big|\le\mathrm{const}\,\tfrac1{l_j}\,M^j\le\mathrm{const}\,M^{3j/2}\tag{II.9}$$
So the value of $D$ for this propagator is bounded by
$$D_j=M^{5j/2}$$

This time we define the norm
$$\|W\|_j=D_j\,\|W\|_{\alpha\mathrm{F}_j}=\sum_r\big(\alpha C_{\mathrm{F}}\big)^r\,M^{-j\frac{r-5}2}\,\|w_r\|$$
Again, let $J>0$ be a cutoff parameter and define, as in §I.5,
$$\mathcal{G}_J(\Psi)=\log\frac1{Z_J}\int e^{W(\Psi+\psi)}\ d\mu_{S^{(\le J)}}(\psi)\qquad\text{where}\quad Z_J=\int e^{W(\psi)}\ d\mu_{S^{(\le J)}}(\psi)$$
and

$$\Omega_j(W)(\Psi)=\log\frac1{Z_{W,S^{(j)}}}\int e^{W(\Psi+\psi)}\ d\mu_{S^{(j)}}(\psi)\qquad\text{where}\quad Z_{W,S^{(j)}}=\int e^{W(\psi)}\ d\mu_{S^{(j)}}(\psi)$$
Then, by Problem I.16,
$$\mathcal{G}_J=\Omega_{S^{(J)}}\circ\cdots\circ\Omega_{S^{(1)}}\circ\Omega_{S^{(0)}}(W)$$
Also call $\mathcal{G}_j=W_j$. If we have integrated out all scales from the ultraviolet cutoff, which in this (infrared) problem is fixed at scale $0$, to $j$ and we have ended up with some interaction $W_j$ that obeys $\|W_j\|_j\le\frac13$, then we integrate out scale $j+1$ using the following analog of Theorem II.15GN.

Theorem II.15MB1 Suppose $\alpha\ge2$ and $M\ge\big(\tfrac{\alpha}{\alpha-1}\big)^2\big(2\,\tfrac{\alpha+1}\alpha\big)^{12}$. If $\|W\|_j\le\frac13$ and $w_r$ vanishes for $r<6$, then $\|\Omega_{j+1}(W)\|_{j+1}\le\|W\|_j$.

Proof: To apply Corollary II.7 at scale $j+1$, we need
$$D_{j+1}\,\|W(\Psi+\psi)\|_{(\alpha+1)\mathrm{F}_{j+1}}=D_{j+1}\,\|W\|_{2(\alpha+1)\mathrm{F}_{j+1}}\le\tfrac13$$
But
$$D_{j+1}\,\|W\|_{2(\alpha+1)\mathrm{F}_{j+1}}=\sum_r\big(2(\alpha+1)C_{\mathrm{F}}\big)^r\,M^{-(j+1)\frac{r-5}2}\,\|w_r\|=\sum_{r\ge6}\Big(2\,\tfrac{\alpha+1}\alpha\Big)^r\,M^{-\frac{r-5}2}\,\big(\alpha C_{\mathrm{F}}\big)^r\,M^{-j\frac{r-5}2}\,\|w_r\|$$
$$\le\sum_{r\ge6}\Big(\big(2\,\tfrac{\alpha+1}\alpha\big)\,M^{-\frac1{12}}\Big)^r\,\big(\alpha C_{\mathrm{F}}\big)^r\,M^{-j\frac{r-5}2}\,\|w_r\|\le\Big(2\,\tfrac{\alpha+1}\alpha\Big)^6\,\tfrac1{M^{1/2}}\,\|W\|_j\le\|W\|_j\le\tfrac13$$
using $\frac{r-5}2\ge\frac r{12}$ for $r\ge6$. By Corollary II.7,
$$\|\Omega_{j+1}(W)\|_{j+1}=D_{j+1}\,\|\Omega_{j+1}(W)\|_{\alpha\mathrm{F}_{j+1}}\le\tfrac{\alpha}{\alpha-1}\,D_{j+1}\,\|W(\Psi+\psi)\|_{\alpha\mathrm{F}_{j+1}}\le\tfrac{\alpha}{\alpha-1}\,\Big(2\,\tfrac{\alpha+1}\alpha\Big)^6\,\tfrac1{M^{1/2}}\,\|W\|_j\le\|W\|_j$$

It looks, in Theorem II.15MB1, like five-legged vertices $w_5$ are marginal and all vertices $w_r$ with $r<5$ have to be renormalized. Of course, by evenness, there are no five-legged vertices, so only the vertices $w_r$ with $r=2,4$ have to be renormalized. But it still looks, contrary to the behaviour of [FST], like four-legged vertices are worse than marginal.
Fortunately, this is not the case. Our bounds can be tightened still further. In the bounds (II.8) and (II.9) the momentum $k$ runs over a shell around the Fermi curve. Effectively, the estimates we have used to count powers of $M^j$ assume that all momenta entering an $r$-legged vertex run independently over the shell. Thus the estimates fail to take into account conservation of momentum. As a simple illustration of this, observe that for the two-legged diagram $B(x,y)=\int d^3z\ C^{(j)}_{\sigma,\sigma}(x,z)\,C^{(j)}_{\sigma,\sigma}(z,y)$, (II.9) yields the bound

$$\sup_x\int d^3y\ |B(x,y)|\le\sup_x\int d^3z\ \big|C^{(j)}_{\sigma,\sigma}(x,z)\big|\int d^3y\ \big|C^{(j)}_{\sigma,\sigma}(z,y)\big|\le\mathrm{const}\,M^{3j/2}\,M^{3j/2}=\mathrm{const}\,M^{3j}$$
But $B(x,y)$ is the Fourier transform of $W(k)=\frac{\nu^{(j)}(k)^2}{[ik_0-e(\mathbf{k})]^2}=C^{(j)}(k)\,C^{(j)}(p)\big|_{p=k}$. Conservation of momentum forces the momenta in the two lines to be the same. Plugging this $W(k)$ and $l_j=\frac1{M^{j/2}}$ into Corollary C.2 yields
$$\sup_x\int d^3y\ |B(x,y)|\le\mathrm{const}\,\tfrac1{l_j}\,M^{2j}\le\mathrm{const}\,M^{5j/2}$$
We exploit conservation of momentum by partitioning the Fermi curve into "sectors".

Example (Many-fermion₂, with sectorization) We start by describing precisely what sectors are, as subsets of momentum space. Let, for $k=(k_0,\mathbf{k})$, $\mathbf{k}'(\mathbf{k})$ be any reasonable "projection" of $\mathbf{k}$ onto the Fermi curve
$$\mathcal{F}=\big\{\,\mathbf{k}\in\mathbb{R}^2\ \big|\ e(\mathbf{k})=0\,\big\}$$
In the event that $\mathcal{F}$ is a circle of radius $k_F$ centered on the origin, it is natural to choose $\mathbf{k}'(\mathbf{k})=k_F\,\frac{\mathbf{k}}{|\mathbf{k}|}$. For general $\mathcal{F}$, one can always construct, in a tubular neighbourhood of $\mathcal{F}$, a $C^\infty$ vector field that is transverse to $\mathcal{F}$, and then define $\mathbf{k}'(\mathbf{k})$ to be the unique point of $\mathcal{F}$ that is on the same integral curve of the vector field as $\mathbf{k}$ is.

Let $j>0$ and set
$$\nu^{(\ge j)}(k)=\begin{cases}1&\text{if }k\in\mathcal{F}\ \text{(i.e. }k_0=0,\ e(\mathbf{k})=0)\\[2pt]\sum_{i\ge j}\nu^{(i)}(k)&\text{otherwise}\end{cases}$$

Let $I$ be an interval on the Fermi surface $\mathcal{F}$. Then
$$s=\big\{\,k\ \big|\ \mathbf{k}'(\mathbf{k})\in I,\ k\in\mathrm{supp}\,\nu^{(\ge j-1)}\,\big\}$$
is called a sector of length $\mathrm{length}(I)$ at scale $j$. Two different sectors $s$ and $s'$ are called neighbours if $s'\cap s\ne\emptyset$. A sectorization of length $l_j$ at scale $j$ is a set $\Sigma_j$ of sectors of length $l_j$ at scale $j$ that obeys
- the set $\Sigma_j$ of sectors covers the Fermi surface
- each sector in $\Sigma_j$ has precisely two neighbours in $\Sigma_j$, one to its left and one to its right
- if $s,s'\in\Sigma_j$ are neighbours then $\tfrac1{16}\,l_j\le\mathrm{length}(s\cap s'\cap\mathcal{F})\le\tfrac18\,l_j$
Observe that there are at most $2\,\mathrm{length}(\mathcal{F})/l_j$ sectors in $\Sigma_j$. In these notes, we fix $l_j=\tfrac1{M^{j/2}}$ and a sectorization $\Sigma_j$ at scale $j$.

[Figure: four neighbouring sectors $s_1$, $s_2$, $s_3$, $s_4$ covering an arc of the Fermi curve $\mathcal{F}$]

Next we describe how we "sectorize" an interaction
$$W_r=\sum_{\substack{\sigma_i\in\{\uparrow,\downarrow\}\\ \kappa_i\in\{0,1\}}}\int w_r\big((x_1,\sigma_1,\kappa_1),\dots,(x_r,\sigma_r,\kappa_r)\big)\ \psi_{\sigma_1}(x_1,\kappa_1)\cdots\psi_{\sigma_r}(x_r,\kappa_r)\ \prod_{i=1}^{r}dx_i$$
where
$$\psi_{\sigma_i}(x_i)=\psi_{\sigma_i}(x_i,\kappa_i)\Big|_{\kappa_i=0}\qquad\bar\psi_{\sigma_i}(x_i)=\psi_{\sigma_i}(x_i,\kappa_i)\Big|_{\kappa_i=1}$$

Let $\mathcal{F}(r,\Sigma_j)$ denote the space of all translation invariant functions
$$f_r\big((x_1,\sigma_1,\kappa_1,s_1),\dots,(x_r,\sigma_r,\kappa_r,s_r)\big):\ \big(\mathbb{R}^3\times\{\uparrow,\downarrow\}\times\{0,1\}\times\Sigma_j\big)^r\to\mathbb{C}$$
whose Fourier transform, $\hat f_r\big((k_1,\sigma_1,\kappa_1,s_1),\dots,(k_r,\sigma_r,\kappa_r,s_r)\big)$, vanishes unless $k_i\in s_i$. An $f_r\in\mathcal{F}(r,\Sigma_j)$ is said to be a sectorized representative for $w_r$ if
$$\hat w_r\big((k_1,\sigma_1,\kappa_1),\dots,(k_r,\sigma_r,\kappa_r)\big)=\sum_{\substack{s_i\in\Sigma_j\\ 1\le i\le r}}\hat f_r\big((k_1,\sigma_1,\kappa_1,s_1),\dots,(k_r,\sigma_r,\kappa_r,s_r)\big)$$
for all $k_1,\dots,k_r\in\mathrm{supp}\,\nu^{(\ge j)}$. It is easy to construct a sectorized representative for $w_r$ by introducing (in momentum space) a partition of unity of $\mathrm{supp}\,\nu^{(\ge j)}$ subordinate to $\Sigma_j$.

Furthermore, if $f_r$ is a sectorized representative for $w_r$, then
$$\int w_r\big((x_1,\sigma_1,\kappa_1),\dots,(x_r,\sigma_r,\kappa_r)\big)\ \psi_{\sigma_1}(x_1,\kappa_1)\cdots\psi_{\sigma_r}(x_r,\kappa_r)\ \prod_{i=1}^{r}dx_i$$
$$=\sum_{\substack{s_i\in\Sigma_j\\ 1\le i\le r}}\int f_r\big((x_1,\sigma_1,\kappa_1,s_1),\dots,(x_r,\sigma_r,\kappa_r,s_r)\big)\ \psi_{\sigma_1}(x_1,\kappa_1)\cdots\psi_{\sigma_r}(x_r,\kappa_r)\ \prod_{i=1}^{r}dx_i$$
for all $\psi_{\sigma_i}(x_i,\kappa_i)$ "in the support of" $d\mu_{C^{(\ge j)}}$, i.e. provided $\psi$ is integrated out using a Gaussian Grassmann measure whose propagator is supported in $\mathrm{supp}\,\nu^{(\ge j)}(k)$. Furthermore, by the momentum space support property of $f_r$,
$$\int f_r\big((x_1,\sigma_1,\kappa_1,s_1),\dots,(x_r,\sigma_r,\kappa_r,s_r)\big)\ \psi_{\sigma_1}(x_1,\kappa_1)\cdots\psi_{\sigma_r}(x_r,\kappa_r)\ \prod_{i=1}^{r}dx_i$$
$$=\int f_r\big((x_1,\sigma_1,\kappa_1,s_1),\dots,(x_r,\sigma_r,\kappa_r,s_r)\big)\ \psi_{\sigma_1}(x_1,\kappa_1,s_1)\cdots\psi_{\sigma_r}(x_r,\kappa_r,s_r)\ \prod_{i=1}^{r}dx_i$$
where
$$\psi_\sigma(x,b,s)=\int d^3y\ \psi_\sigma(y,b)\,\hat\chi^{(j)}_s(x-y)$$
and $\hat\chi^{(j)}_s$ is the Fourier transform of a function that is identically one on the sector $s$. This function is chosen shortly before Proposition C.1. We have expressed the interaction

$$W_r=\sum_{\substack{s_i\in\Sigma_j\\ \sigma_i\in\{\uparrow,\downarrow\}\\ \kappa_i\in\{0,1\}}}\int f_r\big((x_1,\sigma_1,\kappa_1,s_1),\dots,(x_r,\sigma_r,\kappa_r,s_r)\big)\ \prod_{i=1}^{r}\psi_{\sigma_i}(x_i,\kappa_i,s_i)\ \prod_{i=1}^{r}dx_i$$

in terms of a sectorized kernel $f_r$ and new "sectorized" fields, $\psi_\sigma(x,\kappa,s)$, that have propagator
$$C^{(j)}\big((x,s),(y,s')\big)=\int\psi_\sigma(x,0,s)\,\psi_{\sigma'}(y,1,s')\ d\mu_{C^{(j)}}(\psi)=\delta_{\sigma,\sigma'}\int\frac{d^3k}{(2\pi)^3}\ e^{ik\cdot(y-x)}\ \frac{\nu^{(j)}(k)\,\chi^{(j)}_s(k)\,\chi^{(j)}_{s'}(k)}{ik_0-e(\mathbf{k})}$$
The momentum space propagator
$$C^{(j)}_{\sigma,\sigma'}(k,s,s')=\delta_{\sigma,\sigma'}\,\frac{\nu^{(j)}(k)\,\chi^{(j)}_s(k)\,\chi^{(j)}_{s'}(k)}{ik_0-e(\mathbf{k})}$$
vanishes unless $s$ and $s'$ are equal or neighbours, is supported in a region of volume at most $\mathrm{const}\,\frac{l_j}{M^{2j}}$ and has supremum bounded by $\mathrm{const}\,M^j$. By an easy variant of Corollary I.35, the value of $\mathrm{F}$ for this propagator is bounded by

$$\mathrm{F}_j\le C_{\mathrm{F}}\Big(\tfrac1{M^{2j}}\,M^j\,l_j\Big)^{1/2}=C_{\mathrm{F}}\,\sqrt{\tfrac{l_j}{M^j}}$$
for some constant $C_{\mathrm{F}}$. By Corollary C.3,
$$\sup_{x,\sigma,s}\ \sum_{\sigma',s'}\int d^3y\ \big|C^{(j)}\big((x,s),(y,s')\big)\big|\le\mathrm{const}\,M^j$$
so the value of $D$ for this propagator is bounded by
$$D_j=\tfrac1{l_j}\,M^{2j}$$

We are now almost ready to define the norm on interactions that replaces the unsectorized norm $\|W\|_j=D_j\,\|W\|_{\alpha\mathrm{F}_j}$ of the last example. We define a norm on $\mathcal{F}(r,\Sigma_j)$ by
$$\|f\|=\max_{1\le i\le r}\ \max_{x_i,\sigma_i,\kappa_i,s_i}\ \sum_{\substack{\sigma_k,\kappa_k,s_k\\ k\ne i}}\int\prod_{\ell\ne i}dx_\ell\ \big|f\big((x_1,\sigma_1,\kappa_1,s_1),\dots,(x_r,\sigma_r,\kappa_r,s_r)\big)\big|$$
and for any translation invariant function
$$w_r\big((x_1,\sigma_1,\kappa_1),\dots,(x_r,\sigma_r,\kappa_r)\big):\ \big(\mathbb{R}^3\times\{\uparrow,\downarrow\}\times\{0,1\}\big)^r\to\mathbb{C}$$
we define
$$\|w_r\|_{\Sigma_j}=\inf\big\{\,\|f\|\ \big|\ f\in\mathcal{F}(r,\Sigma_j)\ \text{a representative for }w_r\,\big\}$$
The sectorized norm on interactions is

$$\|W\|_{\alpha,j}=D_j\sum_r\big(\alpha\mathrm{F}_j\big)^r\,\|w_r\|_{\Sigma_j}=\sum_r\big(\alpha C_{\mathrm{F}}\big)^r\,l_j^{\frac{r-2}2}\,M^{-j\frac{r-4}2}\,\|w_r\|_{\Sigma_j}$$

Proposition II.16 (Change of Sectorization) Let $j'>j\ge0$. There is a constant $C_S$, independent of $M$, $j$ and $j'$, such that for all $r\ge4$

l r 3 j − wr Σ 0 CS l wr Σj k k j ≤ j0 k k  

Proof: The spin indices $\sigma_i$ and bar/unbar indices $\kappa_i$ play no role, so we suppress them. Let $\varepsilon>0$ and choose $f_r\in\mathcal{F}(r,\Sigma_j)$ such that
$$\hat w_r(k_1,\cdots,k_r)=\sum_{\substack{s_i\in\Sigma_j\\ 1\le i\le r}}\hat f_r\big((k_1,s_1),\cdots,(k_r,s_r)\big)$$
for all $k_1,\cdots,k_r$ in the support of $\nu^{(\ge j)}$ and
$$\|w_r\|_{\Sigma_j}\ge\|f_r\|-\varepsilon$$

Let
$$1=\sum_{s'\in\Sigma_{j'}}\chi_{s'}(k')$$
be a partition of unity of the Fermi curve $\mathcal{F}$ subordinate to the set $\big\{\,s'\cap\mathcal{F}\ \big|\ s'\in\Sigma_{j'}\,\big\}$ of intervals that obeys
$$\sup_{k'}\big|\partial^m_{k'}\chi_{s'}\big|\le\frac{\mathrm{const}_m}{l_{j'}^m}$$
Fix a function $\varphi\in C^\infty_0\big([0,2)\big)$, independent of $j$, $j'$ and $M$, which takes values in $[0,1]$ and which is identically $1$ for $0\le x\le1$. Set
$$\varphi_{j'}(k)=\varphi\big(M^{2(j'-1)}\big[k_0^2+e(\mathbf{k})^2\big]\big)$$
Observe that $\varphi_{j'}$ is identically one on the support of $\nu^{(\ge j')}$ and is supported in the support of $\nu^{(\ge j'-1)}$. Define $g_r\in\mathcal{F}(r,\Sigma_{j'})$ by
$$\hat g_r\big((k_1,s'_1),\cdots,(k_r,s'_r)\big)=\sum_{\substack{s_\ell\in\Sigma_j\\ 1\le\ell\le r}}\hat f_r\big((k_1,s_1),\cdots,(k_r,s_r)\big)\prod_{m=1}^r\chi_{s'_m}(k_m)\varphi_{j'}(k_m)
=\sum_{\substack{s_\ell\cap s'_\ell\ne\emptyset\\ 1\le\ell\le r}}\hat f_r\big((k_1,s_1),\cdots,(k_r,s_r)\big)\prod_{m=1}^r\chi_{s'_m}(k_m)\varphi_{j'}(k_m)$$
Clearly

$$\hat w_r(k_1,\cdots,k_r)=\sum_{\substack{s'_\ell\in\Sigma_{j'}\\ 1\le\ell\le r}}\hat g_r\big((k_1,s'_1),\cdots,(k_r,s'_r)\big)$$
for all $k_\ell$ in the support of $\nu^{(\ge j')}$. Define
$$\mathrm{Mom}_i(s')=\Big\{(s'_1,\cdots,s'_r)\in\Sigma_{j'}^r\ \Big|\ s'_i=s'\ \text{and there exist}\ k_\ell\in s'_\ell,\ 1\le\ell\le r,\ \text{such that}\ \textstyle\sum_\ell(-1)^\ell k_\ell=0\Big\}$$
Here, I am assuming, without loss of generality, that the even (respectively, odd) numbered legs of $w_r$ are hooked to $\bar\psi$'s (respectively $\psi$'s). Then

$$\|g_r\|=\max_{1\le i\le r}\ \sup_{x_i\in\mathbb{R}^3}\ \sum_{\substack{s'\in\Sigma_{j'}\\ \mathrm{Mom}_i(s')}}\int\prod_{\ell\ne i}dx_\ell\ \Big|g_r\big((x_1,s'_1),\cdots,(x_r,s'_r)\big)\Big|$$

Fix any $1\le i\le r$, $s'\in\Sigma_{j'}$ and $x_i\in\mathbb{R}^3$. Then
$$\sum_{\mathrm{Mom}_i(s')}\int\prod_{\ell\ne i}dx_\ell\ \Big|g_r\big((x_1,s'_1),\cdots,(x_r,s'_r)\big)\Big|
\le\Big[\max_{s''\in\Sigma_{j'}}\big\|\hat\chi_{s''}*\hat\varphi_{j'}\big\|\Big]^r\sum_{\mathrm{Mom}_i(s')}\ \sum_{\substack{s_1,\cdots,s_r\\ s_\ell\cap s'_\ell\ne\emptyset}}\int\prod_{\ell\ne i}dx_\ell\ \Big|f_r\big((x_1,s_1),\cdots,(x_r,s_r)\big)\Big|$$
By Proposition C.1, with $j=j'$ and $\phi=\hat\varphi_{j'}$, $\max_{s''\in\Sigma_{j'}}\|\hat\chi_{s''}*\hat\varphi_{j'}\|$ is bounded by a constant independent of $M$, $j'$ and $l_{j'}$. Observe that
$$\sum_{\mathrm{Mom}_i(s')}\ \sum_{\substack{s_1,\cdots,s_r\\ s_\ell\cap s'_\ell\ne\emptyset}}\int\prod_{\ell\ne i}dx_\ell\ \big|f_r\big((x_1,s_1),\cdots,(x_r,s_r)\big)\big|
\le\sum_{\substack{s_1,\cdots,s_r\\ s_i\cap s'\ne\emptyset}}\ \sum_{\substack{\mathrm{Mom}_i(s')\\ s_\ell\cap s'_\ell\ne\emptyset\\ 1\le\ell\le r}}\int\prod_{\ell\ne i}dx_\ell\ \big|f_r\big((x_1,s_1),\cdots,(x_r,s_r)\big)\big|$$
I will not prove the fact that, for any fixed $s_1,\cdots,s_r\in\Sigma_j$, there are at most $C'_S\big[\tfrac{l_j}{l_{j'}}\big]^{r-3}$ elements of $\mathrm{Mom}_i(s')$ obeying $s_\ell\cap s'_\ell\ne\emptyset$ for all $1\le\ell\le r$, but I will try to motivate it below. As there are at most two sectors $s\in\Sigma_j$ that intersect $s'$,
$$\sum_{\substack{s_1,\cdots,s_r\\ s_i\cap s'\ne\emptyset}}\ \sum_{\substack{\mathrm{Mom}_i(s')\\ s_\ell\cap s'_\ell\ne\emptyset}}\int\prod_{\ell\ne i}dx_\ell\ \big|f_r\big|
\le2\,C'_S\Big[\tfrac{l_j}{l_{j'}}\Big]^{r-3}\sup_{s\in\Sigma_j}\sum_{\substack{s_1,\cdots,s_r\\ s_i=s}}\int\prod_{\ell\ne i}dx_\ell\ \big|f_r\big((x_1,s_1),\cdots,(x_r,s_r)\big)\big|
\le2\,C'_S\Big[\tfrac{l_j}{l_{j'}}\Big]^{r-3}\|f_r\|$$
and
$$\|w_r\|_{\Sigma_{j'}}\le\|g_r\|\le2\Big[\max_{s''\in\Sigma_{j'}}\big\|\hat\chi_{s''}*\hat\varphi_{j'}\big\|\Big]^rC'_S\Big[\tfrac{l_j}{l_{j'}}\Big]^{r-3}\|f_r\|
\le C_S\Big[\tfrac{l_j}{l_{j'}}\Big]^{r-3}\big[\|w_r\|_{\Sigma_j}+\varepsilon\big]$$
with $C_S=2\big[\max_{s''\in\Sigma_{j'}}\|\hat\chi_{s''}*\hat\varphi_{j'}\|\big]^4\,C'_S$.

Now, I will try to motivate the fact that, for any fixed $s_1,\cdots,s_r\in\Sigma_j$, there are at most $C'_S\big[\tfrac{l_j}{l_{j'}}\big]^{r-3}$ elements of $\mathrm{Mom}_i(s')$ obeying $s_\ell\cap s'_\ell\ne\emptyset$ for all $1\le\ell\le r$. We may assume that $i=1$. Then $s'_1$ must be $s'$. Denote by $I_\ell$ the interval on the Fermi curve $\mathcal{F}$ that has length $l_j+2l_{j'}$ and is centered on $s_\ell\cap\mathcal{F}$. If $s'_\ell\in\Sigma_{j'}$ intersects $s_\ell$, then $s'_\ell\cap\mathcal{F}$ is contained in $I_\ell$. Every sector in $\Sigma_{j'}$ contains an interval of $\mathcal{F}$ of length $\tfrac34l_{j'}$ that does not intersect any other sector in $\Sigma_{j'}$. At most $\big[\tfrac43\tfrac{l_j+2l_{j'}}{l_{j'}}\big]$ of these "hard core" intervals can be contained in $I_\ell$. Thus there are at most $\big[\tfrac43\tfrac{l_j}{l_{j'}}+3\big]^{r-3}$ choices for $s'_2,\cdots,s'_{r-2}$.

Fix $s'_1,s'_2,\cdots,s'_{r-2}$. Once $s'_{r-1}$ is chosen, $s'_r$ is essentially uniquely determined by conservation of momentum. But the desired bound on $\mathrm{Mom}_i(s')$ demands more. It says, roughly speaking, that both $s'_{r-1}$ and $s'_r$ are essentially uniquely determined. As $k_\ell$ runs over $s'_\ell$ for $1\le\ell\le r-2$, the sum $\sum_{\ell=1}^{r-2}(-1)^\ell k_\ell$ runs over a small set centered on some point $p$. In order for $(s'_1,\cdots,s'_r)$ to be in $\mathrm{Mom}_1(s')$, there must exist $k_{r-1}\in s'_{r-1}\cap\mathcal{F}$ and $k_r\in s'_r\cap\mathcal{F}$ with $k_r-k_{r-1}$ very close to $p$. But $k_r-k_{r-1}$ is a secant joining two points of the Fermi curve $\mathcal{F}$. We have assumed that $\mathcal{F}$ is convex. Consequently, for any given $p\ne0$ in $\mathbb{R}^2$ there exist at most two pairs $(k',q')$ with $k',q'\in\mathcal{F}$, $k'-q'=p$. So, if $p$ is not near the origin, $s'_{r-1}$ and $s'_r$ are almost uniquely determined. If $p$ is close to zero, then $\sum_{\ell=1}^{r-2}(-1)^\ell k_\ell$ must be close to zero and the number of allowed $s'_1,s'_2,\cdots,s'_{r-2}$ is reduced.

Theorem II.15MB2 Suppose $\alpha\ge2$ and $M\ge\big[\tfrac{\alpha}{\alpha-1}\,2C_S\big]^2\big(\tfrac{\alpha+1}{\alpha}\big)^{12}$. If $\|W\|_{\alpha,j}\le\tfrac13$ and $w_r$ vanishes for $r\le4$, then $\|\Omega_{j+1}(W)\|_{\alpha,j+1}\le\|W\|_{\alpha,j}$.

Proof: We first verify that $\|W(\,\cdot+\psi)\|_{\alpha+1,j+1}\le\tfrac13$.
$$\|W(\,\cdot+\psi)\|_{\alpha+1,j+1}=\|W\|_{2(\alpha+1),j+1}=\sum_r\big(2(\alpha+1)C_F\big)^r\,l_{j+1}^{(r-2)/2}\,M^{-(j+1)\frac{r-4}{2}}\,\|w_r\|_{\Sigma_{j+1}}$$
$$\le\sum_{r\ge6}\big(\tfrac{\alpha+1}{\alpha}\big)^r\,l_{j+1}^{(r-2)/2}\,M^{-(j+1)\frac{r-4}{2}}\,C_S\Big[\tfrac{l_j}{l_{j+1}}\Big]^{r-3}(\alpha C_F)^r\,\|w_r\|_{\Sigma_j}$$
$$\le2C_S\sum_{r\ge6}\big(\tfrac{\alpha+1}{\alpha}\big)^r\,M^{-\frac{r-4}{2}}\Big[\tfrac{l_j}{l_{j+1}}\Big]^{\frac{r-4}{2}}(\alpha C_F)^r\,l_j^{(r-2)/2}\,M^{-j\frac{r-4}{2}}\,\|w_r\|_{\Sigma_j}$$
$$=2C_S\sum_{r\ge6}\big(\tfrac{\alpha+1}{\alpha}\big)^r\,M^{-\frac14(r-4)}\,(\alpha C_F)^r\,l_j^{(r-2)/2}\,M^{-j\frac{r-4}{2}}\,\|w_r\|_{\Sigma_j}
\le2C_S\big(\tfrac{\alpha+1}{\alpha}\big)^6\,\tfrac{1}{M^{1/2}}\,\|W\|_{\alpha,j}\le\tfrac13$$
By Corollary II.7,
$$\|\Omega_{j+1}(W)\|_{\alpha,j+1}\le\tfrac{\alpha}{\alpha-1}\,\|W(\,\cdot+\psi)\|_{\alpha+1,j+1}\le\tfrac{\alpha}{\alpha-1}\,2C_S\big(\tfrac{\alpha+1}{\alpha}\big)^6\,\tfrac{1}{M^{1/2}}\,\|W\|_{\alpha,j}\le\|W\|_{\alpha,j}$$

Appendix A: Infinite Dimensional Grassmann Algebras

To generalize the discussion of §I to the infinite dimensional case we need to add topology. We start with a vector space $\mathcal{V}$ that is an $\ell^1$ space. This is not the only possibility. See, for example, [Be]. Let $\mathcal{I}$ be any countable set. We now generate a Grassmann algebra from the vector space
$$\mathcal{V}=\ell^1(\mathcal{I})=\Big\{\,\alpha:\mathcal{I}\to\mathbb{C}\ \Big|\ \sum_{i\in\mathcal{I}}|\alpha_i|<\infty\,\Big\}$$
Equipping $\mathcal{V}$ with the norm $\|\alpha\|=\sum_{i\in\mathcal{I}}|\alpha_i|$ turns it into a Banach space. The algebra will again be an $\ell^1$ space. The index set will be $\mathbb{I}$, the (again countable) set of all finite subsets of $\mathcal{I}$, including the empty set. The Grassmann algebra, with coefficients in $\mathbb{C}$, generated by $\mathcal{V}$ is
$$\mathcal{A}(\mathcal{I})=\ell^1(\mathbb{I})=\Big\{\,\alpha:\mathbb{I}\to\mathbb{C}\ \Big|\ \sum_{I\in\mathbb{I}}|\alpha_I|<\infty\,\Big\}$$
Clearly $\mathcal{A}=\mathcal{A}(\mathcal{I})$ is a Banach space with norm $\|\alpha\|=\sum_{I\in\mathbb{I}}|\alpha_I|$. It is also an algebra under the multiplication
$$(\alpha\beta)_I=\sum_{J\subset I}\mathrm{sgn}(J,I\setminus J)\,\alpha_J\beta_{I\setminus J}$$
The sign is defined as follows. Fix any ordering of $\mathcal{I}$ and view every $I\subset\mathcal{I}$ as being listed in that order. Then $\mathrm{sgn}(J,I\setminus J)$ is the sign of the permutation that reorders $(J,I\setminus J)$ to $I$. The choice of ordering of $\mathcal{I}$ is arbitrary because the map $\{\alpha_I\}\mapsto\{\mathrm{sgn}(I)\alpha_I\}$, with $\mathrm{sgn}(I)$ being the sign of the permutation that reorders $I$ according to the reordering of $\mathcal{I}$, is an isometric isomorphism. The following bound shows that multiplication is everywhere defined and continuous.
$$\|\alpha\beta\|=\sum_{I\in\mathbb{I}}|(\alpha\beta)_I|=\sum_{I\in\mathbb{I}}\Big|\sum_{J\subset I}\mathrm{sgn}(J,I\setminus J)\,\alpha_J\beta_{I\setminus J}\Big|\le\sum_{I\in\mathbb{I}}\sum_{J\subset I}|\alpha_J|\,|\beta_{I\setminus J}|\le\|\alpha\|\,\|\beta\|\qquad(\mathrm{A.1})$$
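As a concrete illustration (my own sketch, not part of the notes), the multiplication rule and the bound (A.1) can be checked for a finite index set by representing an element $\alpha$ by a dictionary mapping increasingly ordered index tuples $I$ to coefficients $\alpha_I$; all helper names here are assumptions of this sketch.

```python
def sign(J, K):
    """Sign of the permutation that reorders the concatenation (J, K)
    into increasing order; J and K are disjoint increasing tuples."""
    merged = J + K
    inversions = sum(1 for a in range(len(merged))
                     for b in range(a + 1, len(merged))
                     if merged[a] > merged[b])
    return -1 if inversions % 2 else 1

def multiply(alpha, beta):
    """(alpha * beta)_I = sum over J subset of I of sgn(J, I\\J) alpha_J beta_{I\\J}."""
    prod = {}
    for J, aJ in alpha.items():
        for K, bK in beta.items():
            if set(J) & set(K):       # repeated generators vanish: a_i a_i = 0
                continue
            I = tuple(sorted(J + K))
            prod[I] = prod.get(I, 0) + sign(J, K) * aJ * bK
    return {I: c for I, c in prod.items() if c != 0}

def norm(alpha):
    """The l^1 norm ||alpha|| = sum_I |alpha_I|."""
    return sum(abs(c) for c in alpha.values())

# generators a_1, a_2 anticommute and square to zero
a1, a2 = {(1,): 1.0}, {(2,): 1.0}
assert multiply(a1, a2) == {(1, 2): 1.0}
assert multiply(a2, a1) == {(1, 2): -1.0}
assert multiply(a1, a1) == {}
```

On random coefficient dictionaries, `norm(multiply(a, b)) <= norm(a) * norm(b)` reproduces (A.1).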

Hence $\mathcal{A}(\mathcal{I})$ is a Banach algebra with identity $1\!\mathrm{l}_I=\delta_{I,\emptyset}$. In other words, $1\!\mathrm{l}$ is the function on $\mathbb{I}$ that takes the value one on $I=\emptyset$ and the value zero on every $I\ne\emptyset$.

Define, for each $i\in\mathcal{I}$, $a_i$ to be the element of $\mathcal{A}(\mathcal{I})$ that takes the value $1$ on $I=\{i\}$ and zero otherwise. Also define, for each $I\in\mathbb{I}$, $a_I$ to be the element of $\mathcal{A}(\mathcal{I})$ that takes the value $1$ on $I$ and zero otherwise. Then

$$a_I=\prod_{i\in I}a_i$$
where the product is in the order of the ordering of $\mathcal{I}$, and
$$\alpha=\sum_{I\in\mathbb{I}}\alpha_Ia_I$$
If $f:\mathbb{C}\to\mathbb{C}$ is any function that is defined and analytic in a neighbourhood of $0$, then the power series $f(\alpha)$ converges for all $\alpha\in\mathcal{A}(\mathcal{I})$ with $\|\alpha\|$ strictly smaller than the radius of convergence of $f$ since, by (A.1),
$$\|f(\alpha)\|=\Big\|\sum_{n=0}^\infty\tfrac{1}{n!}f^{(n)}(0)\,\alpha^n\Big\|\le\sum_{n=0}^\infty\tfrac{1}{n!}\big|f^{(n)}(0)\big|\,\|\alpha\|^n$$
If $f$ is entire, like the exponential of any polynomial, then $f(\alpha)$ is defined on all of $\mathcal{A}(\mathcal{I})$. The following problems give several easy generalizations of the above construction.

Problem A.1 Let $\mathcal{I}$ be any ordered countable set and $\mathbb{I}$ the set of all finite subsets of $\mathcal{I}$ (including the empty set). Each $I\in\mathbb{I}$ inherits an ordering from $\mathcal{I}$. Let $w:\mathcal{I}\to(0,\infty)$ be any strictly positive function on $\mathcal{I}$ and set
$$W_I=\prod_{i\in I}w_i$$
with the convention $W_\emptyset=1$. Define
$$\mathcal{V}=\ell^1(\mathcal{I},w)=\Big\{\,\alpha:\mathcal{I}\to\mathbb{C}\ \Big|\ \sum_{i\in\mathcal{I}}w_i|\alpha_i|<\infty\,\Big\}$$
and "the Grassmann algebra generated by $\mathcal{V}$"
$$\mathcal{A}(\mathcal{I},w)=\ell^1(\mathbb{I},W)=\Big\{\,\alpha:\mathbb{I}\to\mathbb{C}\ \Big|\ \sum_{I\in\mathbb{I}}W_I|\alpha_I|<\infty\,\Big\}$$
The multiplication is $(\alpha\beta)_I=\sum_{J\subset I}\mathrm{sgn}(J,I\setminus J)\,\alpha_J\beta_{I\setminus J}$ where $\mathrm{sgn}(J,I\setminus J)$ is the sign of the permutation that reorders $(J,I\setminus J)$ to $I$. The norm $\|\alpha\|=\sum_{I\in\mathbb{I}}W_I|\alpha_I|$ turns $\mathcal{A}(\mathcal{I},w)$ into a Banach space.

a) Show that $\|\alpha\beta\|\le\|\alpha\|\,\|\beta\|$.

b) Show that if $f:\mathbb{C}\to\mathbb{C}$ is any function that is defined and analytic in a neighbourhood of $0$, then the power series $f(\alpha)=\sum_{n=0}^\infty\tfrac{1}{n!}f^{(n)}(0)\alpha^n$ converges for all $\alpha\in\mathcal{A}$ with $\|\alpha\|$ smaller than the radius of convergence of $f$.

c) Prove that $\mathcal{A}_f(\mathcal{I})=\big\{\,\alpha:\mathbb{I}\to\mathbb{C}\ \big|\ \alpha_I=0\ \text{for all but finitely many}\ I\,\big\}$ is a dense subalgebra of $\mathcal{A}(\mathcal{I},w)$.

Problem A.2 Let $\mathcal{I}$ be any ordered countable set and $\mathbb{I}$ the set of all finite subsets of $\mathcal{I}$. Let
$$\mathcal{S}=\big\{\,\alpha:\mathbb{I}\to\mathbb{C}\,\big\}$$
be the set of all sequences indexed by $\mathbb{I}$. Observe that our standard product $(\alpha\beta)_I=\sum_{J\subset I}\mathrm{sgn}(J,I\setminus J)\,\alpha_J\beta_{I\setminus J}$ is well-defined on $\mathcal{S}$: for each $I\in\mathbb{I}$, $\sum_{J\subset I}$ is a finite sum. We now define, for each integer $n$, a norm on (a subset of) $\mathcal{S}$ by
$$\|\alpha\|_n=\sum_{I\in\mathbb{I}}2^{n|I|}|\alpha_I|$$
It is defined for all $\alpha\in\mathcal{S}$ for which the series converges. Observe that this is precisely the norm of Problem A.1 with $w_i=2^n$ for all $i\in\mathcal{I}$. Also observe that, if $m<n$, then $\|\alpha\|_m\le\|\alpha\|_n$. Define
$$\mathcal{A}_\cap=\big\{\,\alpha\in\mathcal{S}\ \big|\ \|\alpha\|_n<\infty\ \text{for all}\ n\in\mathbb{Z}\,\big\}$$
$$\mathcal{A}_\cup=\big\{\,\alpha\in\mathcal{S}\ \big|\ \|\alpha\|_n<\infty\ \text{for some}\ n\in\mathbb{Z}\,\big\}$$

a) Prove that if $\alpha,\beta\in\mathcal{A}_\cap$ then $\alpha\beta\in\mathcal{A}_\cap$.

b) Prove that if $\alpha,\beta\in\mathcal{A}_\cup$ then $\alpha\beta\in\mathcal{A}_\cup$.

c) Prove that if $f(z)$ is an entire function and $\alpha\in\mathcal{A}_\cap$, then $f(\alpha)\in\mathcal{A}_\cap$.

d) Prove that if $f(z)$ is analytic at the origin and $\alpha\in\mathcal{A}_\cup$ has $|\alpha_\emptyset|$ strictly smaller than the radius of convergence of $f$, then $f(\alpha)\in\mathcal{A}_\cup$.

Before moving on to integration, we look at some examples of Grassmann algebras generated by ordinary functions or distributions on $\mathbb{R}^d$. In these algebras we can have elements like
$$\mathcal{A}(\bar\psi,\psi)=\sum_{\sigma\in S}\int\tfrac{d^{d+1}k}{(2\pi)^{d+1}}\ \big(ik_0-\tfrac{\mathbf{k}^2}{2m}+\mu\big)\,\bar\psi_{k,\sigma}\psi_{k,\sigma}$$
$$-\tfrac{\lambda}{2}\sum_{\sigma,\sigma'\in S}\int\prod_{i=1}^4\tfrac{d^{d+1}k_i}{(2\pi)^{d+1}}\ (2\pi)^{d+1}\delta(k_1+k_2-k_3-k_4)\,\bar\psi_{k_1,\sigma}\psi_{k_3,\sigma}\,\hat u(k_1-k_3)\,\bar\psi_{k_2,\sigma'}\psi_{k_4,\sigma'}$$
from §I.5, that look like they are in an algebra with uncountable dimension. But they really aren't.

Example A.1 (Functions and distributions on $\mathbb{R}^d/L\mathbb{Z}^d$.) There is a natural way to express the space of smooth functions on the torus $\mathbb{R}^d/L\mathbb{Z}^d$ (i.e. smooth functions on $\mathbb{R}^d$ that are periodic with respect to $L\mathbb{Z}^d$) as a space of sequences. The same is true for the space of distributions on $\mathbb{R}^d/L\mathbb{Z}^d$. By definition, a distribution on $\mathbb{R}^d/L\mathbb{Z}^d$ is a map
$$f:\ C^\infty\big(\mathbb{R}^d/L\mathbb{Z}^d\big)\to\mathbb{C},\qquad h\mapsto\langle f,h\rangle$$
that is linear in $h$ and is continuous in the sense that there exist constants $C_f\in\mathbb{R}$, $\nu_f\in\mathbb{N}$ such that
$$\big|\langle f,h\rangle\big|\le C_f\sup_{x\in\mathbb{R}^d/L\mathbb{Z}^d}\Big|\prod_{j=1}^d\big(1-\tfrac{d^2}{dx_j^2}\big)^{\nu_f}h(x)\Big|\qquad(\mathrm{A.2})$$
for all $h\in C^\infty$. Each $L^1$ function $f(x)$ on $\mathbb{R}^d/L\mathbb{Z}^d$ is identified with the distribution
$$\langle f,h\rangle=\int_{\mathbb{R}^d/L\mathbb{Z}^d}dx\ f(x)h(x)\qquad(\mathrm{A.3})$$
which has $C_f=\|f\|_{L^1}$, $\nu_f=0$. The delta "function" supported at $x$ is really the distribution
$$\langle\delta_x,h\rangle=h(x)$$

and has $C_{\delta_x}=1$, $\nu_{\delta_x}=0$.

Let $\gamma\in\mathbb{Z}$. We now define a Grassmann algebra $\mathcal{A}_\gamma(\mathbb{R}^d/L\mathbb{Z}^d)$, $\gamma\in\mathbb{Z}$. It is the Grassmann algebra of Problem A.1 with index set
$$\mathcal{I}=\tfrac{2\pi}{L}\mathbb{Z}^d$$
and weight function
$$w_p^\gamma=\prod_{j=1}^d\big(1+p_j^2\big)^\gamma$$
To each distribution $f$, we associate the sequence
$$\tilde f_p=\big\langle f,e^{-i\langle p,x\rangle}\big\rangle$$
indexed by $\mathcal{I}$. In the event that $f$ is given by integration against an $L^1$ function, then
$$\tilde f_p=\int_{\mathbb{R}^d/L\mathbb{Z}^d}dx\ f(x)\,e^{-i\langle p,x\rangle}$$
is the usual Fourier coefficient. For example, let $q\in\tfrac{2\pi}{L}\mathbb{Z}^d$ and $f(x)=\tfrac{1}{L^d}e^{i\langle q,x\rangle}$. Then
$$\tilde f_p=\begin{cases}1&\text{if }p=q\\ 0&\text{if }p\ne q\end{cases}$$
and
$$\sum_{p\in\frac{2\pi}{L}\mathbb{Z}^d}w_p^\gamma\,|\tilde f_p|=w_q^\gamma$$

is finite for all $\gamma$. Call this sequence (the sequence whose $p^{\rm th}$ entry is one when $p=q$ and zero otherwise) $\psi_q$. We have just shown that $\psi_q\in\mathcal{V}_\gamma(\mathbb{R}^d/L\mathbb{Z}^d)\subset\mathcal{A}_\gamma(\mathbb{R}^d/L\mathbb{Z}^d)$ for all $\gamma\in\mathbb{Z}$. Define, for each $x\in\mathbb{R}^d/L\mathbb{Z}^d$,
$$\psi(x)=\tfrac{1}{L^d}\sum_{q\in\frac{2\pi}{L}\mathbb{Z}^d}e^{i\langle q,x\rangle}\,\psi_q$$
The $p^{\rm th}$ entry in the sequence for $\psi(x)$ is $\tfrac{1}{L^d}\sum_qe^{i\langle q,x\rangle}\delta_{p,q}=\tfrac{1}{L^d}e^{i\langle p,x\rangle}$. As
$$\tfrac{1}{L^d}\sum_{p\in\frac{2\pi}{L}\mathbb{Z}^d}w_p^\gamma\,\big|e^{i\langle p,x\rangle}\big|=\tfrac{1}{L^d}\sum_{p\in\frac{2\pi}{L}\mathbb{Z}^d}w_p^\gamma$$
converges for all $\gamma<-\tfrac12$, $\psi(x)\in\mathcal{V}_\gamma(\mathbb{R}^d/L\mathbb{Z}^d)\subset\mathcal{A}_\gamma(\mathbb{R}^d/L\mathbb{Z}^d)$ for all $\gamma<-\tfrac12$.

By (A.2), for each distribution $f$, there are constants $C_f$ and $\nu_f$ such that

$$|\tilde f_p|=\big|\big\langle f,e^{-i\langle p,x\rangle}\big\rangle\big|\le C_f\sup_{x\in\mathbb{R}^d/L\mathbb{Z}^d}\Big|\prod_{j=1}^d\big(1-\tfrac{d^2}{dx_j^2}\big)^{\nu_f}e^{-i\langle p,x\rangle}\Big|=C_f\prod_{j=1}^d(1+p_j^2)^{\nu_f}$$
Because
$$\sum_{p\in\frac{2\pi}{L}\mathbb{Z}^d}\prod_{j=1}^d(1+p_j^2)^\gamma$$
converges for all $\gamma<-1/2$, the sequence $\big\{\tilde f_p\ \big|\ p\in\tfrac{2\pi}{L}\mathbb{Z}^d\big\}$ is in $\mathcal{V}_\gamma(\mathbb{R}^d/L\mathbb{Z}^d)=\ell^1(\mathcal{I},w^\gamma)\subset\mathcal{A}_\gamma(\mathbb{R}^d/L\mathbb{Z}^d)$ for all $\gamma<-\nu_f-1/2$. Thus, every distribution is in $\mathcal{V}_\gamma$ for some $\gamma$ (often negative). On the other hand, by Problem A.4 below, a distribution is given by integration against a $C^\infty$ function (i.e. is of the form (A.3) for some periodic $C^\infty$ function $f(x)$) if and only if $\tilde f_p$ decays faster than the inverse of any polynomial in $p$, and then it (or rather $\tilde f$) is in $\mathcal{V}_\gamma$ for every $\gamma$.

Problem A.3

a) Let $f(x)\in C^\infty\big(\mathbb{R}^d/L\mathbb{Z}^d\big)$. Define
$$\tilde f_p=\int_{\mathbb{R}^d/L\mathbb{Z}^d}dx\ f(x)\,e^{-i\langle p,x\rangle}$$
Prove that for every $\gamma\in\mathbb{N}$, there is a constant $C_{f,\gamma}$ such that
$$|\tilde f_p|\le C_{f,\gamma}\prod_{j=1}^d(1+p_j^2)^{-\gamma}\qquad\text{for all}\ p\in\tfrac{2\pi}{L}\mathbb{Z}^d$$

b) Let $f$ be a distribution on $\mathbb{R}^d/L\mathbb{Z}^d$. For any $h\in C^\infty\big(\mathbb{R}^d/L\mathbb{Z}^d\big)$, set
$$h_R(x)=\tfrac{1}{L^d}\sum_{\substack{p\in\frac{2\pi}{L}\mathbb{Z}^d\\ |p|<R}}e^{i\langle p,x\rangle}\,\tilde h_p\qquad\text{where}\quad\tilde h_p=\int_{\mathbb{R}^d/L\mathbb{Z}^d}dx\ h(x)\,e^{-i\langle p,x\rangle}$$
Prove that
$$\langle f,h\rangle=\lim_{R\to\infty}\langle f,h_R\rangle$$

Problem A.4 Let $f$ be a distribution on $\mathbb{R}^d/L\mathbb{Z}^d$. Suppose that, for each $\gamma\in\mathbb{Z}$, there is a constant $C_{f,\gamma}$ such that
$$\big|\big\langle f,e^{-i\langle p,x\rangle}\big\rangle\big|\le C_{f,\gamma}\prod_{j=1}^d(1+p_j^2)^{-\gamma}\qquad\text{for all}\ p\in\tfrac{2\pi}{L}\mathbb{Z}^d$$
Prove that there is a $C^\infty$ function $F(x)$ on $\mathbb{R}^d/L\mathbb{Z}^d$ such that
$$\langle f,h\rangle=\int_{\mathbb{R}^d/L\mathbb{Z}^d}dx\ F(x)h(x)$$

Example A.2 (Functions and distributions on $\mathbb{R}^d$). Example A.1 exploited the basis $\big\{e^{i\langle p,x\rangle}\ \big|\ p\in\tfrac{2\pi}{L}\mathbb{Z}^d\big\}$ of $L^2\big(\mathbb{R}^d/L\mathbb{Z}^d\big)$. This is precisely the basis of eigenfunctions of $\prod_{j=1}^d\big(1-\tfrac{d^2}{dx_j^2}\big)$. If instead we use the basis of $L^2(\mathbb{R}^d)$ given by the eigenfunctions of $\prod_{j=1}^d\big(x_j^2-\tfrac{d^2}{dx_j^2}\big)$ we get a very useful identification between tempered distributions and functions on the countable index set
$$\mathcal{I}=\big\{\,i=(i_1,\cdots,i_d)\ \big|\ i_j\in\mathbb{Z}^{\ge0}\,\big\}$$
that we now briefly describe. For more details see [RS, Appendix to V.3]. Define the differential operators
$$A=\tfrac{1}{\sqrt2}\Big(x+\tfrac{d}{dx}\Big)\qquad A^\dagger=\tfrac{1}{\sqrt2}\Big(x-\tfrac{d}{dx}\Big)$$
and the Hermite functions
$$h_\ell(x)=\begin{cases}\tfrac{1}{\pi^{1/4}}\,e^{-\frac12x^2}&\ell=0\\[4pt] \tfrac{1}{\sqrt{\ell!}}\,(A^\dagger)^\ell h_0(x)&\ell>0\end{cases}$$
for $\ell\in\mathbb{Z}^{\ge0}$.

Problem A.5 Prove that

a) $AA^\dagger f=A^\dagger Af+f$

b) $Ah_0=0$

c) $A^\dagger Ah_\ell=\ell h_\ell$

d) $\langle h_\ell,h_{\ell'}\rangle=\delta_{\ell,\ell'}$. Here $\langle f,g\rangle=\int f(x)\overline{g(x)}\,dx$.

e) $x^2-\tfrac{d^2}{dx^2}=2A^\dagger A+1$
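These identities are easy to verify symbolically. The sketch below (my own illustration, not from the notes) represents $h_\ell=p_\ell(x)e^{-x^2/2}$ by the coefficient list of $p_\ell$; on such functions $\sqrt2\,A$ acts on the polynomial part as $p\mapsto p'$ and $\sqrt2\,A^\dagger$ as $p\mapsto 2xp-p'$, so part c) becomes the polynomial identity $(2xp_\ell'-p_\ell'')/2=\ell\,p_\ell$.

```python
from fractions import Fraction

def deriv(p):
    # d/dx of sum_k p[k] x^k, with p a coefficient list [c0, c1, ...]
    return [k * c for k, c in enumerate(p)][1:] or [0]

def add(p, q):
    n = max(len(p), len(q))
    p, q = p + [0] * (n - len(p)), q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def scale(p, c):
    return [c * x for x in p]

def trim(p):
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

def a_dagger(p):
    # sqrt(2)*A† acts on the polynomial part of p(x)e^{-x^2/2} as p -> 2x p - p'
    return add([0] + scale(p, 2), scale(deriv(p), -1))

def a_op(p):
    # sqrt(2)*A acts on the polynomial part of p(x)e^{-x^2/2} as p -> p'
    return deriv(p)

# polynomial parts of (sqrt(2) A†)^l h_0: the (physicists') Hermite polynomials
H = [[1]]
for l in range(1, 6):
    H.append(a_dagger(H[-1]))

# Problem A.5 c): A†A h_l = l h_l becomes (2x H_l' - H_l'')/2 = l H_l
for l, Hl in enumerate(H):
    lhs = trim(scale(a_dagger(a_op(Hl)), Fraction(1, 2)))
    assert lhs == trim(scale(Hl, l))
```

The normalization $1/\sqrt{\ell!}$ and the Gaussian factor drop out of this polynomial identity, which is why exact rational arithmetic suffices.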

Define the multidimensional Hermite functions
$$h_i(x)=\prod_{j=1}^dh_{i_j}(x_j)$$
for $i\in\mathcal{I}$. They form an orthonormal basis for $L^2(\mathbb{R}^d)$ and obey
$$\prod_{j=1}^d\big(x_j^2-\tfrac{d^2}{dx_j^2}\big)^\gamma\,h_i=\prod_{j=1}^d(1+2i_j)^\gamma\,h_i$$
By definition, Schwartz space $\mathcal{S}(\mathbb{R}^d)$ is the set of $C^\infty$ functions on $\mathbb{R}^d$ all of whose derivatives decay faster than any polynomial. That is, a function $f(x)$ is in $\mathcal{S}(\mathbb{R}^d)$ if it is $C^\infty$ and
$$\sup_x\Big|\prod_{j=1}^d\big(x_j^2-\tfrac{d^2}{dx_j^2}\big)^\gamma f(x)\Big|<\infty$$
for all $\gamma\in\mathbb{Z}^{\ge0}$. A tempered distribution on $\mathbb{R}^d$ is a map
$$f:\mathcal{S}(\mathbb{R}^d)\to\mathbb{C},\qquad h\mapsto\langle f,h\rangle$$
that is linear in $h$ and is continuous in the sense that there exist constants $C\in\mathbb{R}$, $\gamma\in\mathbb{N}$ such that
$$\big|\langle f,h\rangle\big|\le C\sup_{x\in\mathbb{R}^d}\Big|\prod_{j=1}^d\big(x_j^2-\tfrac{d^2}{dx_j^2}\big)^\gamma h(x)\Big|$$
To each tempered distribution we associate the function
$$f_i=\langle f,h_i\rangle$$
on $\mathcal{I}$. In the event that the distribution $f$ is given by integration against an $L^2$ function, that is,
$$\langle f,h\rangle=\int F(x)h(x)\,d^dx\qquad(\mathrm{A.4})$$
for some $L^2$ function $F(x)$, then
$$f_i=\int F(x)h_i(x)\,d^dx$$
The distribution $f$ is Schwartz class (meaning that one may choose the $F(x)$ of (A.4) in $\mathcal{S}(\mathbb{R}^d)$) if and only if $f_i$ decays faster than any polynomial in $i$. For any distribution, $f_i$ is bounded by some polynomial in $i$. So, if we define
$$w_i^\gamma=\prod_{j=1}^d(1+2i_j)^\gamma$$
then every Schwartz class function (or rather its $f_i$) is in $\mathcal{V}_\gamma=\ell^1(\mathcal{I},w^\gamma)$ for every $\gamma$ and every tempered distribution is in $\mathcal{V}_\gamma$ for some $\gamma$. The Grassmann algebra generated by $\mathcal{V}_\gamma$ is called $\mathcal{A}_\gamma(\mathbb{R}^d)$. In particular, since the Hermite functions are continuous and uniformly bounded [AS, p.787], every delta function $\delta(x-x_0)$ and indeed every finite measure has $|f_i|\le\mathrm{const}$. Thus, since $\sum_{i\in\mathbb{Z}^{\ge0}}(1+i)^\gamma$ converges for all $\gamma<-1$, all finite measures are in $\mathcal{V}_\gamma$ for all $\gamma<-1$. The sequence representation of Schwartz class functions and of tempered distributions is discussed in more detail in [RS, Appendix to V.3].

Problem A.6 Show that the constant function $f=1$ is in $\mathcal{V}_\gamma$ for all $\gamma<-1$. Hint: the Fourier transform of $h_\ell$ is $(-i)^\ell h_\ell$.

Infinite Dimensional Grassmann Integrals

Once again, let $\mathcal{I}$ be any countable set, $\mathbb{I}$ the set of all finite subsets of $\mathcal{I}$, $\mathcal{V}=\ell^1(\mathcal{I})$ and $\mathcal{A}=\ell^1(\mathbb{I})$. We now define some linear functionals on $\mathcal{A}$. Of course the infinite dimensional analogue of $\int\cdot\ da_D\cdots da_1$ will not make sense. But we can define the analogue of the Grassmann Gaussian integral, $\int\cdot\ d\mu_S$, at least for suitable $S$.

Let $S:\mathcal{I}\times\mathcal{I}\to\mathbb{C}$ be an infinite matrix. Recall that, for each $i\in\mathcal{I}$, $a_i$ is the element of $\mathcal{A}$ that takes the value $1$ on $I=\{i\}$ and zero otherwise, and, for each $I\in\mathbb{I}$, $a_I$ is the element of $\mathcal{A}$ that takes the value $1$ on $I$ and zero otherwise. By analogy with the finite dimensional case, we wish to define a linear functional on $\mathcal{A}$ by
$$\int\prod_{j=1}^pa_{i_j}\ d\mu_S=\mathrm{Pf}\big[S_{i_j,i_k}\big]_{1\le j,k\le p}$$
We now find conditions under which this defines a bounded linear functional. Any $A\in\mathcal{A}$ can be written in the form
$$A=\sum_{p=0}^\infty\ \sum_{\substack{I\subset\mathcal{I}\\ |I|=p}}\alpha_Ia_I$$
We want the integral of this $A$ to be
$$\int A\,d\mu_S=\sum_{p=0}^\infty\ \sum_{\substack{I\subset\mathcal{I}\\ |I|=p}}\alpha_I\int a_I\,d\mu_S=\sum_{p=0}^\infty\ \sum_{\substack{I\subset\mathcal{I}\\ |I|=p}}\alpha_I\,\mathrm{Pf}\big[S_{I_j,I_k}\big]_{1\le j,k\le p}\qquad(\mathrm{A.5})$$
where $I=\{I_1,\cdots,I_p\}$ with $I_1<\cdots<I_p$ in the ordering of $\mathcal{I}$. We now hypothesise that there is a number $C_S$ such that

$$\sup_{\substack{I\subset\mathcal{I}\\ |I|=p}}\Big|\mathrm{Pf}\big[S_{I_j,I_k}\big]_{1\le j,k\le p}\Big|\le C_S^p\qquad(\mathrm{A.6})$$
Such a bound is the conclusion of the following simple Lemma. The bound proven in this Lemma is not tight enough to be of much use in quantum field theory applications. A much better bound for those applications is proven in Corollary I.35.

Lemma A.3 Let $S$ be a bounded linear operator on $\ell^2(\mathcal{I})$. Then
$$\sup_{\substack{I\subset\mathcal{I}\\ |I|=p}}\Big|\mathrm{Pf}\big[S_{I_j,I_k}\big]_{1\le j,k\le p}\Big|\le\|S\|^{p/2}$$

Proof: Recall that Hadamard's inequality (Problem I.24) bounds a determinant by the product of the lengths of its columns. The $k^{\rm th}$ column of the matrix $\big[S_{I_j,I_k}\big]_{1\le j,k\le p}$ has length
$$\sqrt{\sum_{j=1}^p|S_{I_j,I_k}|^2}\le\sqrt{\sum_{j\in\mathcal{I}}|S_{j,I_k}|^2}$$
But the vector $\big(S_{j,I_k}\big)_{j\in\mathcal{I}}$ is precisely the image under $S$ of the $I_k^{\rm th}$ standard unit basis vector, $\big(\delta_{j,I_k}\big)_{j\in\mathcal{I}}$, in $\ell^2(\mathcal{I})$ and hence has length at most $\|S\|$. Hence
$$\Big|\det\big[S_{I_j,I_k}\big]_{1\le j,k\le p}\Big|\le\prod_{k=1}^p\sqrt{\sum_{j\in\mathcal{I}}|S_{j,I_k}|^2}\le\|S\|^p$$
By Proposition I.18.d,
$$\Big|\mathrm{Pf}\big[S_{I_j,I_k}\big]_{1\le j,k\le p}\Big|\le\Big|\det\big[S_{I_j,I_k}\big]_{1\le j,k\le p}\Big|^{1/2}\le\|S\|^{p/2}$$
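Hadamard's inequality, the only analytic input in this proof, is easy to test numerically. The small check below is my own illustration (not from the notes): it compares $|\det A|$ with the product of the Euclidean lengths of the columns of a $3\times3$ matrix.

```python
import math

def det3(A):
    """Determinant of a 3x3 matrix, cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = A
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def hadamard_bound(A):
    """Product of the Euclidean lengths of the columns of A."""
    return math.prod(math.sqrt(sum(x * x for x in col)) for col in zip(*A))

A = [[1.0, 2.0, 0.5],
     [-1.0, 0.0, 3.0],
     [2.0, 1.0, -1.0]]
assert abs(det3(A)) <= hadamard_bound(A)
```

Equality holds exactly when the columns are orthogonal, e.g. for a diagonal matrix.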

Under the hypothesis (A.6), the integral (A.5) is bounded by
$$\Big|\int A\,d\mu_S\Big|\le\sum_{p=0}^\infty\ \sum_{\substack{I\in\mathbb{I}\\ |I|=p}}|\alpha_I|\,C_S^p\le\|A\|\qquad(\mathrm{A.7})$$
provided $C_S\le1$. Thus, if $C_S\le1$, the map $A\in\mathcal{A}\mapsto\int A\,d\mu_S$ is a bounded linear map with norm at most one.

The requirement that $C_S$ be less than one is not crucial. Define, for each $n\in\mathbb{N}$, the Grassmann algebra
$$\mathcal{A}_n=\Big\{\,\alpha:\mathbb{I}\to\mathbb{C}\ \Big|\ \|\alpha\|_n=\sum_{I\in\mathbb{I}}2^{n|I|}|\alpha_I|<\infty\,\Big\}$$
If $2^n\ge C_S$, then (A.7) shows that $A\in\mathcal{A}_n\mapsto\int A\,d\mu_S$ is a bounded linear functional on $\mathcal{A}_n$. Alternatively, we can view $\int\cdot\ d\mu_S$ as an unbounded linear functional on $\mathcal{A}$ with domain of definition the dense subalgebra $\mathcal{A}_n$ of $\mathcal{A}$.

Appendix B: Pfaffians

Definition B.1 Let $S=(S_{ij})$ be a complex $n\times n$ matrix with $n=2m$ even. By definition, the Pfaffian $\mathrm{Pf}(S)$ of $S$ is
$$\mathrm{Pf}(S)=\frac{1}{2^mm!}\sum_{i_1,\cdots,i_n=1}^n\varepsilon^{i_1\cdots i_n}\,S_{i_1i_2}\cdots S_{i_{n-1}i_n}\qquad(\mathrm{B.1})$$
where
$$\varepsilon^{i_1\cdots i_n}=\begin{cases}\hphantom{-}1&\text{if}\ i_1,\cdots,i_n\ \text{is an even permutation of}\ 1,\cdots,n\\ -1&\text{if}\ i_1,\cdots,i_n\ \text{is an odd permutation of}\ 1,\cdots,n\\ \hphantom{-}0&\text{if}\ i_1,\cdots,i_n\ \text{are not all distinct}\end{cases}$$
By convention, the Pfaffian of a matrix of odd order is zero.

Problem B.1 Let $T=(T_{ij})$ be a complex $n\times n$ matrix with $n=2m$ even and let $S=\tfrac12(T-T^t)$ be its skew symmetric part. Prove that $\mathrm{Pf}\,T=\mathrm{Pf}\,S$.

Problem B.2 Let $S=\begin{pmatrix}0&S_{12}\\ S_{21}&0\end{pmatrix}$ with $S_{21}=-S_{12}\in\mathbb{C}$. Show that $\mathrm{Pf}\,S=S_{12}$.

For the rest of this Appendix, we assume that $S$ is a skew symmetric matrix. Let
$$\mathcal{P}_m=\Big\{\,\big\{(k_1,\ell_1),\cdots,(k_m,\ell_m)\big\}\ \Big|\ \{k_1,\ell_1,\cdots,k_m,\ell_m\}=\{1,\cdots,2m\}\,\Big\}$$
be the set of all partitions of $\{1,\cdots,2m\}$ into $m$ disjoint ordered pairs. Observe that the expression $\varepsilon^{k_1\ell_1\cdots k_m\ell_m}S_{k_1\ell_1}\cdots S_{k_m\ell_m}$ is invariant under permutations of the pairs in the partition $\{(k_1,\ell_1),\cdots,(k_m,\ell_m)\}$, since
$$\varepsilon^{k_{\pi(1)}\ell_{\pi(1)}\cdots k_{\pi(m)}\ell_{\pi(m)}}=\varepsilon^{k_1\ell_1\cdots k_m\ell_m}$$
for any permutation $\pi$. Thus,
$$\mathrm{Pf}(S)=\frac{1}{2^m}\sum_{P\in\mathcal{P}_m}\varepsilon^{k_1\ell_1\cdots k_m\ell_m}\,S_{k_1\ell_1}\cdots S_{k_m\ell_m}\qquad(\mathrm{B.1}')$$
Let
$$\mathcal{P}_m^<=\Big\{\,\big\{(k_1,\ell_1),\cdots,(k_m,\ell_m)\big\}\in\mathcal{P}_m\ \Big|\ k_i<\ell_i\ \text{for all}\ 1\le i\le m\,\Big\}$$
Because $S$ is skew symmetric
$$\varepsilon^{k_1\ell_1\cdots k_i\ell_i\cdots k_m\ell_m}\,S_{k_i\ell_i}=\varepsilon^{k_1\ell_1\cdots\ell_ik_i\cdots k_m\ell_m}\,S_{\ell_ik_i}$$
so that
$$\mathrm{Pf}(S)=\sum_{P\in\mathcal{P}_m^<}\varepsilon^{k_1\ell_1\cdots k_m\ell_m}\,S_{k_1\ell_1}\cdots S_{k_m\ell_m}\qquad(\mathrm{B.1}'')$$

Problem B.3 Let $\alpha_1,\cdots,\alpha_r$ be complex numbers and let $S$ be the $2r\times2r$ skew symmetric matrix
$$S=\bigoplus_{m=1}^r\begin{pmatrix}0&\alpha_m\\ -\alpha_m&0\end{pmatrix}$$
All matrix elements of $S$ are zero, except for $r$ $2\times2$ blocks running down the diagonal. For example, if $r=2$,
$$S=\begin{pmatrix}0&\alpha_1&0&0\\ -\alpha_1&0&0&0\\ 0&0&0&\alpha_2\\ 0&0&-\alpha_2&0\end{pmatrix}$$
Prove that $\mathrm{Pf}(S)=\alpha_1\alpha_2\cdots\alpha_r$.

Proposition B.2 Let $S=(S_{ij})$ be a skew symmetric matrix of even order $n=2m$.

a) Let $\pi$ be a permutation and set $S^\pi=(S_{\pi(i)\pi(j)})$. Then,
$$\mathrm{Pf}(S^\pi)=\mathrm{sgn}(\pi)\,\mathrm{Pf}(S)$$

b) For all $1\le k\ne\ell\le n$, let $M_{k\ell}$ be the matrix obtained from $S$ by deleting rows $k$ and $\ell$ and columns $k$ and $\ell$. Then,
$$\mathrm{Pf}(S)=\sum_{\ell=1}^n\mathrm{sgn}(k-\ell)\,(-1)^{k+\ell}\,S_{k\ell}\,\mathrm{Pf}(M_{k\ell})$$
In particular,
$$\mathrm{Pf}(S)=\sum_{\ell=2}^n(-1)^\ell\,S_{1\ell}\,\mathrm{Pf}(M_{1\ell})$$

Proof: a) Recall that $\pi$ is a fixed permutation of $1,\cdots,n$. Making the change of summation variables $j_1=\pi(i_1),\cdots,j_n=\pi(i_n)$,
$$\mathrm{Pf}(S^\pi)=\frac{1}{2^mm!}\sum_{i_1,\cdots,i_n=1}^n\varepsilon^{i_1\cdots i_n}\,S_{\pi(i_1)\pi(i_2)}\cdots S_{\pi(i_{n-1})\pi(i_n)}
=\frac{1}{2^mm!}\sum_{j_1,\cdots,j_n=1}^n\varepsilon^{\pi^{-1}(j_1)\cdots\pi^{-1}(j_n)}\,S_{j_1j_2}\cdots S_{j_{n-1}j_n}$$
$$=\mathrm{sgn}(\pi^{-1})\,\frac{1}{2^mm!}\sum_{j_1,\cdots,j_n=1}^n\varepsilon^{j_1\cdots j_n}\,S_{j_1j_2}\cdots S_{j_{n-1}j_n}
=\mathrm{sgn}(\pi^{-1})\,\mathrm{Pf}(S)=\mathrm{sgn}(\pi)\,\mathrm{Pf}(S)$$

b) Fix $1\le k\le n$. Let $P=\{(k_1,\ell_1),\cdots,(k_m,\ell_m)\}$ be a partition in $\mathcal{P}_m$. We may assume, by reindexing the pairs, that $k=k_1$ or $k=\ell_1$. Then,
$$\mathrm{Pf}(S)=\frac{1}{2^m}\sum_{\substack{P\in\mathcal{P}_m\\ k_1=k}}\varepsilon^{k\ell_1\cdots k_m\ell_m}\,S_{k\ell_1}\cdots S_{k_m\ell_m}+\frac{1}{2^m}\sum_{\substack{P\in\mathcal{P}_m\\ \ell_1=k}}\varepsilon^{k_1k\cdots k_m\ell_m}\,S_{k_1k}\cdots S_{k_m\ell_m}$$
By antisymmetry and a change of summation variable,
$$\sum_{\substack{P\in\mathcal{P}_m\\ \ell_1=k}}\varepsilon^{k_1k\cdots k_m\ell_m}\,S_{k_1k}\cdots S_{k_m\ell_m}=\sum_{\substack{P\in\mathcal{P}_m\\ k_1=k}}\varepsilon^{k\ell_1\cdots k_m\ell_m}\,S_{k\ell_1}\cdots S_{k_m\ell_m}$$
Thus,
$$\mathrm{Pf}(S)=\frac{2}{2^m}\sum_{\substack{P\in\mathcal{P}_m\\ k_1=k}}\varepsilon^{k\ell_1\cdots k_m\ell_m}\,S_{k\ell_1}\cdots S_{k_m\ell_m}
=\frac{2}{2^m}\sum_{\ell=1}^n\ \sum_{\substack{P\in\mathcal{P}_m\\ k_1=k,\ \ell_1=\ell}}\varepsilon^{k\ell\cdots k_m\ell_m}\,S_{k\ell}\cdots S_{k_m\ell_m}$$
Extracting $S_{k\ell}$ from the inner sum,
$$\mathrm{Pf}(S)=\sum_{\ell=1}^{k-1}S_{k\ell}\,\frac{2}{2^m}\sum_{\substack{P\in\mathcal{P}_m\\ k_1=k,\ \ell_1=\ell}}\varepsilon^{k\ell k_2\ell_2\cdots k_m\ell_m}\,S_{k_2\ell_2}\cdots S_{k_m\ell_m}
+\sum_{\ell=k+1}^nS_{k\ell}\,\frac{2}{2^m}\sum_{\substack{P\in\mathcal{P}_m\\ k_1=k,\ \ell_1=\ell}}\varepsilon^{k\ell k_2\ell_2\cdots k_m\ell_m}\,S_{k_2\ell_2}\cdots S_{k_m\ell_m}$$
The following Lemma implies that,
$$\mathrm{Pf}(S)=\sum_{\ell=1}^{k-1}S_{k\ell}\,(-1)^{k+\ell}\,\mathrm{Pf}(M_{k\ell})+\sum_{\ell=k+1}^nS_{k\ell}\,(-1)^{k+\ell+1}\,\mathrm{Pf}(M_{k\ell})$$

Lemma B.3 For all $1\le k\ne\ell\le n$, let $M_{k\ell}$ be the matrix obtained from $S$ by deleting rows $k$ and $\ell$ and columns $k$ and $\ell$. Then,
$$\mathrm{Pf}(M_{k\ell})=\mathrm{sgn}(k-\ell)\,\frac{(-1)^{k+\ell}}{2^{m-1}}\sum_{\substack{P\in\mathcal{P}_m\\ k_1=k,\ \ell_1=\ell}}\varepsilon^{k\ell k_2\ell_2\cdots k_m\ell_m}\,S_{k_2\ell_2}\cdots S_{k_m\ell_m}$$

Proof: For all $1\le k<\ell\le n$,
$$M_{k\ell}=\big(S_{i'j'}\big)_{1\le i,j\le n-2}$$
where, for each $1\le i\le n-2$,
$$i'=\begin{cases}i&\text{if}\ 1\le i\le k-1\\ i+1&\text{if}\ k\le i\le\ell-2\\ i+2&\text{if}\ \ell-1\le i\le n-2\end{cases}$$
By definition,
$$\mathrm{Pf}(M_{k\ell})=\frac{1}{2^{m-1}(m-1)!}\sum_{1\le i_1,\cdots,i_{n-2}\le n-2}\varepsilon^{i_1\cdots i_{n-2}}\,S_{i_1'i_2'}\cdots S_{i_{n-3}'i_{n-2}'}$$
We have
$$\varepsilon^{i_1\cdots i_{n-2}}=(-1)^{k+\ell-1}\,\varepsilon^{k\ell i_1'\cdots i_{n-2}'}$$
for all $1\le i_1,\cdots,i_{n-2}\le n-2$. It follows that
$$\mathrm{Pf}(M_{k\ell})=\frac{(-1)^{k+\ell-1}}{2^{m-1}(m-1)!}\sum_{1\le i_1,\cdots,i_{n-2}\le n-2}\varepsilon^{k\ell i_1'\cdots i_{n-2}'}\,S_{i_1'i_2'}\cdots S_{i_{n-3}'i_{n-2}'}
=\frac{(-1)^{k+\ell-1}}{2^{m-1}(m-1)!}\sum_{1\le k_2,\ell_2,\cdots,k_m,\ell_m\le n}\varepsilon^{k\ell k_2\ell_2\cdots k_m\ell_m}\,S_{k_2\ell_2}\cdots S_{k_m\ell_m}$$
$$=\frac{(-1)^{k+\ell-1}}{2^{m-1}}\sum_{\substack{P\in\mathcal{P}_m\\ k_1=k,\ \ell_1=\ell}}\varepsilon^{k\ell k_2\ell_2\cdots k_m\ell_m}\,S_{k_2\ell_2}\cdots S_{k_m\ell_m}$$
If, on the other hand, $k>\ell$,
$$\mathrm{Pf}(M_{k\ell})=\mathrm{Pf}(M_{\ell k})=\frac{(-1)^{k+\ell+1}}{2^{m-1}}\sum_{\substack{P\in\mathcal{P}_m\\ k_1=\ell,\ \ell_1=k}}\varepsilon^{\ell kk_2\ell_2\cdots k_m\ell_m}\,S_{k_2\ell_2}\cdots S_{k_m\ell_m}
=\frac{(-1)^{k+\ell}}{2^{m-1}}\sum_{\substack{P\in\mathcal{P}_m\\ k_1=k,\ \ell_1=\ell}}\varepsilon^{k\ell k_2\ell_2\cdots k_m\ell_m}\,S_{k_2\ell_2}\cdots S_{k_m\ell_m}$$

Proposition B.4 Let
$$S=\begin{pmatrix}0&C\\ -C^t&0\end{pmatrix}$$
where $C=(c_{ij})$ is a complex $m\times m$ matrix. Then,
$$\mathrm{Pf}(S)=(-1)^{\frac12m(m-1)}\det(C)$$

Proof: The proof is by induction on $m\ge1$. If $m=1$, then, by Problem B.2,
$$\mathrm{Pf}\begin{pmatrix}0&S_{12}\\ S_{21}&0\end{pmatrix}=S_{12}$$
Suppose $m>1$. The matrix elements $S_{ij}$, $i,j=1,\cdots,n=2m$, of $S$ are
$$S_{ij}=\begin{cases}0&\text{if}\ 1\le i,j\le m\\ c_{i,\,j-m}&\text{if}\ 1\le i\le m\ \text{and}\ m+1\le j\le n\\ -c_{j,\,i-m}&\text{if}\ m+1\le i\le n\ \text{and}\ 1\le j\le m\\ 0&\text{if}\ m+1\le i,j\le n\end{cases}$$
For each $k=1,\cdots,m$, we have
$$M_{1,k+m}=\begin{pmatrix}0&C^{1,k}\\ -(C^{1,k})^t&0\end{pmatrix}$$
where $C^{1,k}$ is the matrix of order $m-1$ obtained from the matrix $C$ by deleting row $1$ and column $k$. It now follows from Proposition B.2 b) and our induction hypothesis that
$$\mathrm{Pf}(S)=\sum_{\ell=m+1}^n(-1)^\ell\,S_{1\ell}\,\mathrm{Pf}(M_{1\ell})=\sum_{\ell=m+1}^n(-1)^\ell\,c_{1,\ell-m}\,\mathrm{Pf}(M_{1\ell})
=\sum_{k=1}^m(-1)^{k+m}\,c_{1k}\,\mathrm{Pf}(M_{1,k+m})
=\sum_{k=1}^m(-1)^{k+m}\,c_{1k}\,(-1)^{\frac12(m-1)(m-2)}\det C^{1,k}$$
Collecting factors of minus one,
$$\mathrm{Pf}(S)=(-1)^{m+1}(-1)^{\frac12(m-1)(m-2)}\sum_{k=1}^m(-1)^{k+1}\,c_{1k}\det C^{1,k}
=(-1)^{\frac12m(m-1)}\sum_{k=1}^m(-1)^{k+1}\,c_{1k}\det C^{1,k}
=(-1)^{\frac12m(m-1)}\det(C)$$

Proposition B.5 Let $S=(S_{ij})$ be a skew symmetric matrix of even order $n=2m$.

a) For any matrix $B$ of even order $n=2m$,
$$\mathrm{Pf}\big(B^tSB\big)=\det(B)\,\mathrm{Pf}(S)$$

b) $\mathrm{Pf}(S)^2=\det(S)$

Proof: a) The matrix element of $B^tSB$ with indices $i_1,i_2$ is $\sum_{j_1,j_2}b_{j_1i_1}S_{j_1j_2}b_{j_2i_2}$. Hence
$$\mathrm{Pf}\big(B^tSB\big)=\frac{1}{2^mm!}\sum_{i_1,\cdots,i_n}\varepsilon^{i_1\cdots i_n}\sum_{j_1,\cdots,j_n}b_{j_1i_1}\cdots b_{j_ni_n}\,S_{j_1j_2}\cdots S_{j_{n-1}j_n}
=\frac{1}{2^mm!}\sum_{j_1,\cdots,j_n}\Big[\sum_{i_1,\cdots,i_n}\varepsilon^{i_1\cdots i_n}\,b_{j_1i_1}\cdots b_{j_ni_n}\Big]S_{j_1j_2}\cdots S_{j_{n-1}j_n}$$
The expression
$$\sum_{i_1,\cdots,i_n}\varepsilon^{i_1\cdots i_n}\,b_{j_1i_1}\cdots b_{j_ni_n}$$
is the determinant of the matrix whose $\ell^{\rm th}$ row is the $j_\ell^{\rm th}$ row of $B$. If any two of $j_1,\cdots,j_n$ are equal, this determinant is zero. Otherwise it is $\varepsilon^{j_1\cdots j_n}\det(B)$. Thus
$$\sum_{i_1,\cdots,i_n}\varepsilon^{i_1\cdots i_n}\,b_{j_1i_1}\cdots b_{j_ni_n}=\varepsilon^{j_1\cdots j_n}\det(B)$$
and
$$\mathrm{Pf}\big(B^tSB\big)=\det(B)\,\frac{1}{2^mm!}\sum_{j_1,\cdots,j_n}\varepsilon^{j_1\cdots j_n}\,S_{j_1j_2}\cdots S_{j_{n-1}j_n}=\det(B)\,\mathrm{Pf}(S)$$

b) It is enough to prove the identity for real, nonsingular matrices, since $\mathrm{Pf}(S)^2$ and $\det(S)$ are polynomials in the matrix elements $S_{ij}$, $i,j=1,\cdots,n$, of $S$. So, let $S$ be real and nonsingular and, as in Lemma I.10 (or Problem I.9), let $R$ be a real orthogonal matrix such that $R^tSR=T$ with
$$T=\bigoplus_{i=1}^m\begin{pmatrix}0&\alpha_i\\ -\alpha_i&0\end{pmatrix}$$
for some real numbers $\alpha_1,\cdots,\alpha_m$. Then $\det R=\pm1$, so that, by part (a) and Problem B.3,
$$\pm\,\mathrm{Pf}(S)=\det(R)\,\mathrm{Pf}(S)=\mathrm{Pf}\big(R^tSR\big)=\mathrm{Pf}(T)=\alpha_1\cdots\alpha_m$$
and
$$\det(S)=\det\big(R^tSR\big)=\det T=\alpha_1^2\cdots\alpha_m^2=\mathrm{Pf}(S)^2$$

Problem B.4 Let $S$ be a skew symmetric $D\times D$ matrix with $D$ odd. Prove that $\det S=0$.

Appendix C: Propagator Bounds

The propagator, or covariance, for many-fermion models is the Fourier transform of
$$C_{\sigma,\sigma'}(k)=\frac{\delta_{\sigma,\sigma'}}{ik_0-e(\mathbf{k})}$$
where $k=(k_0,\mathbf{k})$ and $e(\mathbf{k})$ is the one particle dispersion relation minus the chemical potential. For this appendix, the spins $\sigma,\sigma'$ play no role, so we suppress them completely. We also restrict our attention to two space dimensions (i.e. $\mathbf{k}\in\mathbb{R}^2$, $k\in\mathbb{R}^3$) though it is trivial to extend the results of this appendix to any number of space dimensions. We assume that $e(\mathbf{k})$ is a $C^4$ (though $C^{2+\varepsilon}$ would suffice) function that has a nonempty, compact zero set $\mathcal{F}$, called the Fermi curve. We further assume that $\nabla e(\mathbf{k})$ does not vanish for $\mathbf{k}\in\mathcal{F}$, so that $\mathcal{F}$ is itself a reasonably smooth curve. At low temperatures only those momenta with $k_0\approx0$ and $\mathbf{k}$ near $\mathcal{F}$ are important, so we replace the above propagator with
$$C(k)=\frac{U(k)}{ik_0-e(\mathbf{k})}$$
The precise ultraviolet cutoff, $U(k)$, shall be chosen shortly. It is a $C_0^\infty$ function which takes values in $[0,1]$, is identically $1$ for $k_0^2+e(\mathbf{k})^2\le1$ and vanishes for $k_0^2+e(\mathbf{k})^2$ larger than some constant.

We slice momentum space into shells around the Fermi surface. To do this, we fix $M>1$ and choose a function $\nu\in C_0^\infty\big([M^{-2},M^2]\big)$ that takes values in $[0,1]$, is identically $1$ on $[M^{-1/2},M^{1/2}]$ and obeys
$$\sum_{j=0}^\infty\nu\big(M^{2j}x\big)=1$$
for $0<x<1$. The $j^{\rm th}$ shell is defined to be the support of
$$\nu^{(j)}(k)=\nu\big(M^{2j}\big[k_0^2+e(\mathbf{k})^2\big]\big)$$
By construction, the $j^{\rm th}$ shell is a subset of
$$\Big\{\,k\ \Big|\ \tfrac{1}{M^{j+1}}\le\big|ik_0-e(\mathbf{k})\big|\le\tfrac{1}{M^{j-1}}\,\Big\}$$
Setting
$$C^{(j)}(k)=C(k)\,\nu^{(j)}(k)$$

and $U(k)=\sum_{j=0}^\infty\nu^{(j)}(k)$ we have
$$C(k)=\sum_{j=0}^\infty C^{(j)}(k)$$
Given any function $\chi(k')$ on the Fermi curve $\mathcal{F}$, we define
$$C_\chi^{(j)}(k)=C^{(j)}(k)\,\chi\big(k'(\mathbf{k})\big)$$
where, for $k=(k_0,\mathbf{k})$, $k'(\mathbf{k})$ is any reasonable "projection" of $\mathbf{k}$ onto the Fermi curve. In the event that $\mathcal{F}$ is a circle of radius $k_F$ centered on the origin, it is natural to choose $k'(\mathbf{k})=k_F\tfrac{\mathbf{k}}{|\mathbf{k}|}$. For general $\mathcal{F}$, one can always construct, in a tubular neighbourhood of $\mathcal{F}$, a $C^\infty$ vector field that is transverse to $\mathcal{F}$, and then define $k'(\mathbf{k})$ to be the unique point of $\mathcal{F}$ that is on the same integral curve of the vector field as $\mathbf{k}$ is. See [FST1, Lemma 2.1].

[Figure: the projection $\mathbf{k}\mapsto k'(\mathbf{k})$ of a momentum $\mathbf{k}$ onto the Fermi curve $\mathcal{F}$.]

To analyze the Fourier transform of $C^{(j)}(k)$, we further decompose the $j^{\rm th}$ shell into more or less rectangular "sectors". To do so, we fix $l_j\in\big[\tfrac{1}{M^j},\tfrac{1}{M^{j/2}}\big]$ and choose a partition of unity
$$1=\sum_{s\in\Sigma^{(j)}}\chi^{(j)}_s(k')$$
of the Fermi curve $\mathcal{F}$ with each $\chi^{(j)}_s$ supported on an interval of length $l_j$ and obeying
$$\sup_{k'}\big|\partial_{k'}^m\chi^{(j)}_s\big|\le\tfrac{\mathrm{const}}{l_j^m}\qquad\text{for}\ m\le4$$

Here $\partial_{k'}$ is the derivative along the Fermi curve. If $k'(t)$ is a parametrization of the Fermi curve by arc length, with the standard orientation, then $\partial_{k'}f(k')\big|_{k'=k'(t)}=\tfrac{d}{dt}f\big(k'(t)\big)$.

Proposition C.1 Let $\chi(k')$ be a $C^4$ function on the Fermi curve $\mathcal{F}$ which takes values in $[0,1]$, which is supported on an interval of length $l_j\in\big[\tfrac{1}{M^j},\tfrac{1}{M^{j/2}}\big]$ and whose derivatives obey
$$\sup_{k'}\big|\partial_{k'}^n\chi(k')\big|\le\tfrac{K_\chi}{l_j^n}\qquad\text{for}\ n\le4$$
Fix any point $k'_c$ in the support of $\chi$. Let $\hat t$ and $\hat n$ be unit tangent and normal vectors to the Fermi curve at $k'_c$ and set
$$\rho(x,y)=1+M^{-j}|x_0-y_0|+M^{-j}\big|(\mathbf{x}-\mathbf{y})\cdot\hat n\big|+l_j\big|(\mathbf{x}-\mathbf{y})\cdot\hat t\big|$$
Let $\phi$ be a $C_0^4$ function which takes values in $[0,1]$ and set $\phi^{(j)}=\phi\big(M^{2j}[k_0^2+e(\mathbf{k})^2]\big)$. For any function $W(k)$ define
$$W_{\chi,\phi}^{(j)}(x,y)=\int\tfrac{d^3k}{(2\pi)^3}\,e^{ik\cdot(x-y)}\,W(k)\,\phi^{(j)}(k)\,\chi\big(k'(\mathbf{k})\big)$$
There is a constant, $\mathrm{const}$, depending on $K_\chi$, $\phi$ and $e(\mathbf{k})$, but independent of $M$, $j$, $x$ and $y$ such that
$$\big|W_{\chi,\phi}^{(j)}(x,y)\big|\le\mathrm{const}\,\tfrac{l_j}{M^{2j}}\,\rho(x,y)^{-4}\,\max_{\substack{\alpha\in\mathbb{N}^3\\ |\alpha|\le4}}\ \sup_{k\in\mathrm{supp}\,\chi\phi^{(j)}}\Big|\tfrac{l_j^{\alpha_2}}{M^{j(\alpha_0+\alpha_1)}}\,\partial_{k_0}^{\alpha_0}\big(\hat n\cdot\nabla_{\mathbf{k}}\big)^{\alpha_1}\big(\hat t\cdot\nabla_{\mathbf{k}}\big)^{\alpha_2}W(k)\Big|$$
where $\alpha=(\alpha_0,\alpha_1,\alpha_2)$ and $|\alpha|=\alpha_0+\alpha_1+\alpha_2$.

[Figure: a sector of the $j^{\rm th}$ shell, of length $O(l_j)$ along the tangent $\hat t$ and of thickness $O(1/M^j)$ along the normal $\hat n$ at $k'_c$.]

Proof: Use $S$ to denote the support of $\phi^{(j)}(k)\,\chi\big(k'(\mathbf{k})\big)$. If $\phi(x)$ vanishes for $x\ge Q^2$, then $\phi^{(j)}(k)$ vanishes unless $|k_0|\le QM^{-j}$ and $|e(\mathbf{k})|\le QM^{-j}$. If $k\in S$, then $k_0$ lies in an interval of length $2QM^{-j}$, the component of $\mathbf{k}$ tangential to $\mathcal{F}$ lies in an interval of length $\mathrm{const}\,l_j$ and, by Problem C.1 below, the component of $\mathbf{k}$ normal to $\mathcal{F}$ lies in an interval of length $2C'QM^{-j}$. Thus $S$ has volume at most $\mathrm{const}\,M^{-2j}l_j$ and
$$\sup_{x,y}\big|W_{\chi,\phi}^{(j)}(x,y)\big|\le\mathrm{vol}(S)\,\sup_{k\in S}|W(k)|\le\mathrm{const}\,\tfrac{l_j}{M^{2j}}\,\sup_{k\in S}|W(k)|$$
We defined $\rho$ as a sum of four terms. Multiplying out $\rho^4$ gives us a sum of $4^4$ terms, each of the form $\big[\tfrac{x_0-y_0}{M^j}\big]^{\beta_0}\big[\tfrac{(\mathbf{x}-\mathbf{y})\cdot\hat n}{M^j}\big]^{\beta_1}\big[l_j(\mathbf{x}-\mathbf{y})\cdot\hat t\big]^{\beta_2}$ with $|\beta|\le4$. To bound $\sup_{x,y}\big|\rho(x,y)^4\,W_{\chi,\phi}^{(j)}(x,y)\big|$ by $4^4C$, it suffices to bound
$$\Big|\big[\tfrac{x_0-y_0}{M^j}\big]^{\beta_0}\big[\tfrac{(\mathbf{x}-\mathbf{y})\cdot\hat n}{M^j}\big]^{\beta_1}\big[l_j(\mathbf{x}-\mathbf{y})\cdot\hat t\big]^{\beta_2}\,W_{\chi,\phi}^{(j)}(x,y)\Big|
=\Big|\int\tfrac{d^3k}{(2\pi)^3}\,e^{ik\cdot(x-y)}\,\big[\tfrac{1}{M^j}\partial_{k_0}\big]^{\beta_0}\big[\tfrac{1}{M^j}\hat n\cdot\nabla_{\mathbf{k}}\big]^{\beta_1}\big[l_j\,\hat t\cdot\nabla_{\mathbf{k}}\big]^{\beta_2}\,W(k)\,\phi^{(j)}(k)\,\chi\big(k'(\mathbf{k})\big)\Big|$$
by $C$ for all $x,y\in\mathbb{R}^3$ and $\beta\in\mathbb{N}^3$ with $|\beta|\le4$. The volume of the domain of integration is still bounded by $\mathrm{const}\,\tfrac{l_j}{M^{2j}}$, so by the product rule, to prove the desired bound it suffices to prove that
$$\max_{|\beta|\le4}\ \sup_{k\in S}\Big|\big[\tfrac{1}{M^j}\partial_{k_0}\big]^{\beta_0}\big[\tfrac{1}{M^j}\hat n\cdot\nabla_{\mathbf{k}}\big]^{\beta_1}\big[l_j\,\hat t\cdot\nabla_{\mathbf{k}}\big]^{\beta_2}\,\phi^{(j)}(k)\,\chi\big(k'(\mathbf{k})\big)\Big|\le\mathrm{const}$$
Since $l_j\ge\tfrac{1}{M^j}$ and all derivatives of $k'(\mathbf{k})$ to order $4$ are bounded,
$$\max_{|\beta|\le4}\ \sup_k\Big|\big[\tfrac{1}{M^j}\partial_{k_0}\big]^{\beta_0}\big[\tfrac{1}{M^j}\hat n\cdot\nabla_{\mathbf{k}}\big]^{\beta_1}\big[l_j\,\hat t\cdot\nabla_{\mathbf{k}}\big]^{\beta_2}\,\chi\big(k'(\mathbf{k})\big)\Big|
\le\mathrm{const}\max_{\beta_1+\beta_2\le4}\tfrac{l_j^{\beta_2}}{M^{j\beta_1}\,l_j^{\beta_1+\beta_2}}\le\mathrm{const}$$
so, by the product rule, it suffices to prove
$$\max_{|\beta|\le4}\ \sup_{k\in S}\Big|\big[\tfrac{1}{M^j}\partial_{k_0}\big]^{\beta_0}\big[\tfrac{1}{M^j}\hat n\cdot\nabla_{\mathbf{k}}\big]^{\beta_1}\big[l_j\,\hat t\cdot\nabla_{\mathbf{k}}\big]^{\beta_2}\,\phi^{(j)}(k)\Big|\le\mathrm{const}$$
Set $I=\{1,\cdots,|\beta|\}$,
$$d_i=\begin{cases}\tfrac{1}{M^j}\partial_{k_0}&\text{if}\ 1\le i\le\beta_0\\ \tfrac{1}{M^j}\hat n\cdot\nabla_{\mathbf{k}}&\text{if}\ \beta_0+1\le i\le\beta_0+\beta_1\\ l_j\,\hat t\cdot\nabla_{\mathbf{k}}&\text{if}\ \beta_0+\beta_1+1\le i\le|\beta|\end{cases}$$
and, for each $I'\subset I$, $d^{I'}=\prod_{i\in I'}d_i$. By Problem C.2, below,
$$\prod_{i=1}^{|\beta|}d_i\ \phi^{(j)}(k)=\sum_{m=1}^{|\beta|}\ \sum_{(I_1,\cdots,I_m)\in\mathcal{P}_m}\tfrac{d^m\phi}{dx^m}\big(M^{2j}\big[k_0^2+e(\mathbf{k})^2\big]\big)\ \prod_{i=1}^mM^{2j}\,d^{I_i}\big[k_0^2+e(\mathbf{k})^2\big]$$
where $\mathcal{P}_m$ is the set of all partitions of $I$ into $m$ nonempty subsets $I_1,\cdots,I_m$ with, for all $i<i'$, the smallest element of $I_i$ smaller than the smallest element of $I_{i'}$. For all $m\le4$, $\tfrac{d^m\phi}{dx^m}\big(M^{2j}[k_0^2+e(\mathbf{k})^2]\big)$ is bounded by a constant independent of $j$, so to prove the Proposition, it suffices to prove that
$$\max_{|\beta|\le4}\ \sup_{k\in S}\Big|M^{2j}\big[\tfrac{1}{M^j}\partial_{k_0}\big]^{\beta_0}\big[\tfrac{1}{M^j}\hat n\cdot\nabla_{\mathbf{k}}\big]^{\beta_1}\big[l_j\,\hat t\cdot\nabla_{\mathbf{k}}\big]^{\beta_2}\big[k_0^2+e(\mathbf{k})^2\big]\Big|\le\mathrm{const}$$
If $\beta_0\ne0$,
$$M^{2j}\big[\tfrac{1}{M^j}\partial_{k_0}\big]^{\beta_0}\big[\tfrac{1}{M^j}\hat n\cdot\nabla_{\mathbf{k}}\big]^{\beta_1}\big[l_j\,\hat t\cdot\nabla_{\mathbf{k}}\big]^{\beta_2}\big[k_0^2+e(\mathbf{k})^2\big]=\begin{cases}2k_0M^j&\text{if}\ \beta_0=1,\ \beta_1=\beta_2=0\\ 2&\text{if}\ \beta_0=2,\ \beta_1=\beta_2=0\\ 0&\text{otherwise}\end{cases}$$
is bounded, independent of $j$, since $|k_0|\le\mathrm{const}\,\tfrac{1}{M^j}$ on $S$. Thus it suffices to consider $\beta_0=0$. Applying the product rule once again, this time to the derivatives acting on $M^{2j}e(\mathbf{k})^2=[M^je(\mathbf{k})]\,[M^je(\mathbf{k})]$, it suffices to prove
$$\max_{|\beta|\le4}\ \sup_{k\in S}\Big|M^j\big[\tfrac{1}{M^j}\hat n\cdot\nabla_{\mathbf{k}}\big]^{\beta_1}\big[l_j\,\hat t\cdot\nabla_{\mathbf{k}}\big]^{\beta_2}\,e(\mathbf{k})\Big|\le\mathrm{const}\qquad(\mathrm{C.1})$$
If $\beta_1=\beta_2=0$, this follows from the fact that $|e(\mathbf{k})|\le\mathrm{const}\,\tfrac{1}{M^j}$ on $S$. If $\beta_1\ge1$ or $\beta_2\ge2$, it follows from $\tfrac{M^jl_j^2}{M^{j\beta_1}}\le1$. (Recall that $l_j\le\tfrac{1}{M^{j/2}}$.) This leaves only $\beta_1=0$, $\beta_2=1$. If $\hat t\cdot\nabla_{\mathbf{k}}e(\mathbf{k})$ is evaluated at $\mathbf{k}=k'_c$, it vanishes, since $\nabla_{\mathbf{k}}e(k'_c)$ is parallel to $\hat n$. The second derivative of $e$ is bounded so that,
$$M^jl_j\,\sup_{k\in S}\big|\hat t\cdot\nabla_{\mathbf{k}}e(\mathbf{k})\big|=M^jl_j\,\sup_{k\in S}\big|\hat t\cdot\nabla_{\mathbf{k}}e(\mathbf{k})-\hat t\cdot\nabla_{\mathbf{k}}e(k'_c)\big|
\le\mathrm{const}\,M^jl_j\,\sup_{k\in S}\big|\mathbf{k}-k'_c\big|\le\mathrm{const}\,M^jl_j^2\le\mathrm{const}$$
since $l_j\le\tfrac{1}{M^{j/2}}$.

Problem C.1 Prove, under the hypotheses of Proposition C.1, that there are constants $C, C' > 0$ such that if $|k - k'_c| \le C$, then there is a point $p' \in F$ with $(k - p')\cdot\hat t = 0$ and $|k - p'| \le C'\,|e(k)|$.

Problem C.2 Let $f : \mathbb{R}^d \to \mathbb{R}$ and $g : \mathbb{R} \to \mathbb{R}$. Let $1 \le i_1, \dots, i_n \le d$. Prove that

$$\Big(\prod_{\ell=1}^{n} \tfrac{\partial}{\partial x_{i_\ell}}\Big)\, g\big(f(x)\big) = \sum_{m=1}^{n} g^{(m)}\big(f(x)\big) \sum_{(I_1,\dots,I_m)\in\mathcal{P}^{(n)}_m}\ \prod_{p=1}^{m}\Big(\prod_{\ell\in I_p} \tfrac{\partial}{\partial x_{i_\ell}}\Big) f(x)$$

where $\mathcal{P}^{(n)}_m$ is the set of all partitions of $(1, \dots, n)$ into $m$ nonempty subsets $I_1, \dots, I_m$ with, for all $i < i'$, the smallest element of $I_i$ smaller than the smallest element of $I_{i'}$.
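The $n = 2$, $d = 1$ instance of this partition formula is the familiar $\frac{d^2}{dx^2}\, g(f(x)) = g''(f(x))\,f'(x)^2 + g'(f(x))\,f''(x)$, the two terms coming from the partitions $\{\{1\},\{2\}\}$ ($m = 2$) and $\{\{1,2\}\}$ ($m = 1$). A minimal numerical sketch of that instance (the particular $f$, $g$ and test point are our own choices, not from the text):

```python
# Check the n = 2, d = 1 instance of the partition formula of Problem C.2:
#   d^2/dx^2 g(f(x)) = g''(f(x)) f'(x)^2 + g'(f(x)) f''(x)
# f, g and the test point are arbitrary illustrative choices.

def f(x):  return x**3 + 2.0 * x           # f'(x) = 3x^2 + 2, f''(x) = 6x
def g(t):  return t**2 + t                 # g'(t) = 2t + 1,  g''(t) = 2

def second_derivative(h, x, eps=1e-4):
    """Central finite-difference approximation to h''(x)."""
    return (h(x + eps) - 2.0 * h(x) + h(x - eps)) / eps**2

x = 0.7
fp  = 3.0 * x**2 + 2.0                     # f'(x)
fpp = 6.0 * x                              # f''(x)
gp  = 2.0 * f(x) + 1.0                     # g'(f(x))
gpp = 2.0                                  # g''(f(x))

lhs = second_derivative(lambda x: g(f(x)), x)   # left-hand side, numerically
rhs = gpp * fp**2 + gp * fpp                    # right-hand side, exactly
assert abs(lhs - rhs) < 1e-3
```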

Corollary C.2 Under the hypotheses of Proposition C.1,

$$\sup_{x,y}\big|W^{(j)}_{\chi,\varphi}(x,y)\big| \le \mathrm{const}\,\tfrac{l_j}{M^{2j}}\, \sup_{k\in\mathrm{supp}\,\chi\varphi^{(j)}} |W(k)|$$

and

$$\sup_x \int dy\,\big|W^{(j)}_{\chi,\varphi}(x,y)\big|\,,\ \ \sup_y \int dx\,\big|W^{(j)}_{\chi,\varphi}(x,y)\big|
\le \mathrm{const} \max_{\substack{\alpha\in\mathbb{N}^3 \\ |\alpha|\le 4}}\ \sup_{k\in\mathrm{supp}\,\chi\varphi^{(j)}} \Big|\tfrac{l_j^{\alpha_2}}{M^{j(\alpha_0+\alpha_1)}}\, \partial_{k_0}^{\alpha_0}\big(\hat n\cdot\nabla_k\big)^{\alpha_1}\big(\hat t\cdot\nabla_k\big)^{\alpha_2}\, W(k)\Big|$$

Proof: The first claim is an immediate consequence of Proposition C.1, since $\rho \ge 1$. For the second statement, use

$$\sup_x \int dy\,\tfrac{1}{\rho(x,y)^4}\,,\ \ \sup_y \int dx\,\tfrac{1}{\rho(x,y)^4} = \sup_y \int dx\,\tfrac{1}{\rho(x-y,0)^4} = \int dx'\,\tfrac{1}{\rho(x',0)^4}$$

with $x' = x - y$. Subbing in the definition of $\rho$,

$$\int dx\,\tfrac{1}{\rho(x,0)^4} = \int dx\,\tfrac{1}{\big[1+M^{-j}|x_0|+M^{-j}|x\cdot\hat n|+l_j|x\cdot\hat t\,|\big]^4} = \tfrac{M^{2j}}{l_j} \int dz\,\tfrac{1}{\big[1+|z_0|+|z_1|+|z_2|\big]^4} \le \mathrm{const}\,\tfrac{M^{2j}}{l_j}$$

j j 1 We made the change of variables x0 = M z0, x n^ = M z1, x ^t = l z2. · · j 98 Corollary C.3 Under the hypotheses of Proposition C.1,

l sup C(j)(x, y) const j χ ≤ M j

and sup dy C(j)(x, y) , sup dx C(j)(x, y) const M j χ χ ≤ x Z y Z

Proof: Apply Corollary C.2 with $W(k) = \frac{1}{\imath k_0 - e(k)}$ and $\varphi = \nu$. To achieve the desired bounds, we need

$$\max_{|\alpha|\le 4}\ \sup_{k\in\mathrm{supp}\,\chi\nu^{(j)}}\ \Big|\Big[\tfrac1{M^j}\partial_{k_0}\Big]^{\alpha_0}\Big[\tfrac1{M^j}\hat n\cdot\nabla_k\Big]^{\alpha_1}\big[l_j\,\hat t\cdot\nabla_k\big]^{\alpha_2}\, \tfrac{1}{\imath k_0 - e(k)}\Big| \le \mathrm{const}\, M^j$$

In the notation of the proof of Proposition C.1, with $\beta$ replaced by $\alpha$,

$$d^I\, \tfrac{1}{\imath k_0 - e(k)} = \sum_{m=1}^{|\alpha|} (-1)^m m!\ \Big(\tfrac{1}{\imath k_0 - e(k)}\Big)^{m+1} \sum_{(I_1,\dots,I_m)\in\mathcal{P}_m}\ \prod_{i=1}^{m} d^{I_i}\big(\imath k_0 - e(k)\big)$$
$$\phantom{d^I\, \tfrac{1}{\imath k_0 - e(k)}} = M^j \sum_{m=1}^{|\alpha|} (-1)^m m!\ \Big(\tfrac{1/M^j}{\imath k_0 - e(k)}\Big)^{m+1} \sum_{(I_1,\dots,I_m)\in\mathcal{P}_m}\ \prod_{i=1}^{m} M^j\, d^{I_i}\big(\imath k_0 - e(k)\big)$$

On the support of $\chi\nu^{(j)}$, $|\imath k_0 - e(k)| \ge \mathrm{const}\,\frac1{M^j}$, so that $\frac{1/M^j}{\imath k_0 - e(k)}$ is bounded uniformly in $j$. That $M^j\, d^{I_i}(\imath k_0 - e(k))$ is bounded uniformly in $j$ is immediate from (C.1),

$$M^j \Big[\tfrac1{M^j}\partial_{k_0}\Big]^{\beta_0}\Big[\tfrac1{M^j}\hat n\cdot\nabla_k\Big]^{\beta_1}\big[l_j\,\hat t\cdot\nabla_k\big]^{\beta_2}\,(\imath k_0) = \begin{cases} \imath k_0 M^j & \text{if } \beta_0=\beta_1=\beta_2=0 \\ \imath & \text{if } \beta_0=1,\ \beta_1=\beta_2=0 \\ 0 & \text{otherwise} \end{cases}$$

and the fact that $|k_0| \le \mathrm{const}\,\frac1{M^j}$ on the support of $\chi\nu^{(j)}$.

Appendix D: Problem Solutions

§I. Fermionic Functional Integrals

Problem I.1 Let $V$ be a complex vector space of dimension $D$. Let $s \in \Lambda V$. Then $s$ has a unique decomposition $s = s_0 + s_1$ with $s_0 \in \mathbb{C}$ and $s_1 \in \bigoplus_{n=1}^{D} \Lambda^n V$. Prove that, if $s_0 \ne 0$, then there is a unique $s' \in \Lambda V$ with $ss' = 1$ and a unique $s'' \in \Lambda V$ with $s''s = 1$, and furthermore

$$s' = s'' = \tfrac{1}{s_0} + \sum_{n=1}^{D} (-1)^n\, \tfrac{s_1^n}{s_0^{n+1}}$$

Solution. Note that if $n > D$, then $s_1^n = 0$. Define

$$\tilde s = \tfrac{1}{s_0} + \sum_{n=1}^{D} (-1)^n\, \tfrac{s_1^n}{s_0^{n+1}}$$

Then

$$s\tilde s = \tilde s s = (s_0 + s_1)\Big[\tfrac{1}{s_0} + \sum_{n=1}^{D} (-1)^n\, \tfrac{s_1^n}{s_0^{n+1}}\Big]
= \Big[1 + \sum_{n=1}^{D} (-1)^n\, \tfrac{s_1^n}{s_0^{n}}\Big] + \Big[\tfrac{s_1}{s_0} + \sum_{n=1}^{D} (-1)^n\, \tfrac{s_1^{n+1}}{s_0^{n+1}}\Big]$$
$$= \Big[1 + \sum_{n=1}^{D} (-1)^n\, \tfrac{s_1^n}{s_0^{n}}\Big] + \Big[\sum_{n=0}^{D-1} (-1)^n\, \tfrac{s_1^{n+1}}{s_0^{n+1}}\Big]
= 1 + \sum_{n=1}^{D} (-1)^n\, \tfrac{s_1^n}{s_0^{n}} - \sum_{n=1}^{D} (-1)^n\, \tfrac{s_1^n}{s_0^{n}} = 1$$

Thus $\tilde s$ satisfies $s\tilde s = \tilde s s = 1$. If some other $s'$ satisfies $ss' = 1$, then $s' = 1s' = \tilde s s s' = \tilde s 1 = \tilde s$, and if some other $s''$ satisfies $s''s = 1$, then $s'' = s''1 = s''s\tilde s = 1\tilde s = \tilde s$.
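The inverse formula can be checked mechanically in a small Grassmann algebra. The sketch below is our own toy implementation (not from the text): elements are stored as dictionaries mapping a set of generator indices to a coefficient, and we verify $s\tilde s = 1$ for $s = 2 + a_1 + a_2 a_3$ in three generators, where $s_1^2 = 2\, a_1 a_2 a_3 \ne 0$.

```python
# A three-generator Grassmann algebra; elements are stored as
# {frozenset of generator indices: coefficient}.  This toy implementation
# (ours, not from the text) verifies the inverse formula of Problem I.1,
#   s^{-1} = 1/s0 + sum_{n=1}^{D} (-1)^n s1^n / s0^{n+1},
# for s = 2 + a1 + a2 a3, where s1^2 = 2 a1 a2 a3 is nonzero.

def mul(x, y):
    """Grassmann product; the sign counts transpositions needed to sort."""
    out = {}
    for I, a in x.items():
        for J, b in y.items():
            if I & J:                      # repeated generator: term is 0
                continue
            seq = sorted(I) + sorted(J)
            sign = 1
            for p in range(len(seq)):
                for q in range(p + 1, len(seq)):
                    if seq[p] > seq[q]:
                        sign = -sign
            K = I | J
            out[K] = out.get(K, 0.0) + sign * a * b
    return out

def add(x, y):
    out = dict(x)
    for I, a in y.items():
        out[I] = out.get(I, 0.0) + a
    return out

def scale(c, x):
    return {I: c * a for I, a in x.items()}

one = {frozenset(): 1.0}
s0 = 2.0
s1 = {frozenset({1}): 1.0, frozenset({2, 3}): 1.0}   # s1 = a1 + a2 a3
s  = add({frozenset(): s0}, s1)

# s~ = 1/s0 + sum_{n=1}^{3} (-1)^n s1^n / s0^{n+1}
s_inv, power = scale(1.0 / s0, one), one
for n in range(1, 4):
    power = mul(power, s1)
    s_inv = add(s_inv, scale((-1.0) ** n / s0 ** (n + 1), power))

prod = mul(s, s_inv)                       # should be exactly 1
assert abs(prod.get(frozenset(), 0.0) - 1.0) < 1e-12
assert all(abs(a) < 1e-12 for I, a in prod.items() if I)
```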

Problem I.2 Let $V$ be a complex vector space of dimension $D$. Every element $s$ of $\Lambda V$ has a unique decomposition $s = s_0 + s_1$ with $s_0 \in \mathbb{C}$ and $s_1 \in \bigoplus_{n=1}^{D} \Lambda^n V$. Define

$$e^s = e^{s_0} \sum_{n=0}^{D} \tfrac{1}{n!}\, s_1^n$$

Prove that if $s, t \in \Lambda V$ with $st = ts$, then, for all $n \in \mathbb{N}$,

$$(s+t)^n = \sum_{m=0}^{n} \tbinom{n}{m}\, s^m t^{n-m}$$

and

$$e^s e^t = e^t e^s = e^{s+t}$$

Solution. The proof that $(s+t)^n = \sum_{m=0}^{n}\binom{n}{m} s^m t^{n-m}$ is by induction on $n$. The case $n = 1$ is obvious. Suppose that $(s+t)^n = \sum_{m=0}^{n}\binom{n}{m} s^m t^{n-m}$ for some $n$. Since $st = ts$, we have $s\, s^m t^n = s^m t^n s$ for all nonnegative integers $m$ and $n$, so that

$$(s+t)^{n+1} = s(s+t)^n + (s+t)^n t = \sum_{m=0}^{n}\tbinom{n}{m}\, s^{m+1} t^{n-m} + \sum_{m=0}^{n}\tbinom{n}{m}\, s^m t^{n+1-m}$$
$$= \sum_{m=1}^{n+1}\tbinom{n}{m-1}\, s^{m} t^{n+1-m} + \sum_{m=0}^{n}\tbinom{n}{m}\, s^m t^{n+1-m}
= s^{n+1} + \sum_{m=1}^{n}\Big[\tbinom{n}{m-1} + \tbinom{n}{m}\Big] s^m t^{n+1-m} + t^{n+1}$$
$$= s^{n+1} + \sum_{m=1}^{n}\Big[\tfrac{n!}{(m-1)!(n-m+1)!} + \tfrac{n!}{m!(n-m)!}\Big] s^m t^{n+1-m} + t^{n+1}
= \sum_{m=0}^{n+1}\tbinom{n+1}{m}\, s^m t^{n+1-m}$$

For the second part, since $s_0$, $t_0$, $s_1 = s - s_0$ and $t_1 = t - t_0$ all commute with each other, and $s_1^n = t_1^n = 0$ for all $n > D$,

$$e^s e^t = e^t e^s = e^{s_0} e^{t_0} \Big\{\sum_{m=0}^{\infty}\tfrac{1}{m!}\, s_1^m\Big\}\Big\{\sum_{n=0}^{\infty}\tfrac{1}{n!}\, t_1^n\Big\} = e^{s_0} e^{t_0} \sum_{m,n=0}^{\infty}\tfrac{1}{m!\,n!}\, s_1^m t_1^n$$
$$= e^{s_0} e^{t_0} \sum_{N=0}^{\infty}\ \sum_{m=0}^{N}\tfrac{1}{m!(N-m)!}\, s_1^m t_1^{N-m} \qquad \text{where } N = n + m$$
$$= e^{s_0} e^{t_0} \sum_{N=0}^{\infty}\tfrac{1}{N!} \sum_{m=0}^{N}\tbinom{N}{m}\, s_1^m t_1^{N-m} = e^{s_0+t_0} \sum_{N=0}^{\infty}\tfrac{1}{N!}\, (s_1+t_1)^N = e^{s+t}$$
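The identity $e^s e^t = e^{s+t}$ for commuting elements with nilpotent parts can also be illustrated outside Grassmann algebras. Here is a hedged sketch (our own model, not from the text) using strictly upper triangular $3\times 3$ matrices, with $N^3 = 0$ playing the role of $s_1^n = 0$ for $n > D$, and the finite-sum definition of $e^s$ from Problem I.2:

```python
# e^s e^t = e^{s+t} for commuting elements whose nilpotent parts vanish at
# some power, modelled by s = s0*I + N and t = t0*I + 2N with N strictly
# upper triangular (so N^3 = 0 and st = ts).  Illustrative model only.
import math

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matadd(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def matscale(c, A):
    return [[c * x for x in row] for row in A]

I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
N  = [[0.0, 0.1, 0.2], [0.0, 0.0, 0.3], [0.0, 0.0, 0.0]]   # N^3 = 0

def grassmann_exp(s0, N1):
    """e^{s0} * (I + N1 + N1^2/2): exact, because N1^3 = 0
    (the finite-sum definition of e^s in Problem I.2)."""
    N2 = matmul(N1, N1)
    return matscale(math.exp(s0),
                    matadd(matadd(I3, N1), matscale(0.5, N2)))

lhs = matmul(grassmann_exp(0.5, N), grassmann_exp(1.5, matscale(2.0, N)))
rhs = grassmann_exp(2.0, matscale(3.0, N))        # e^{s+t}, s+t = 2 + 3N
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12
           for i in range(3) for j in range(3))
```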

Problem I.3 Use the notation of Problem I.2. Let, for each $\alpha \in \mathbb{R}$, $s(\alpha) \in \Lambda V$. Assume that $s(\alpha)$ is differentiable with respect to $\alpha$ and that $s(\alpha)s(\beta) = s(\beta)s(\alpha)$ for all $\alpha$ and $\beta$. Prove that

$$\tfrac{d}{d\alpha}\, s(\alpha)^n = n\, s(\alpha)^{n-1}\, \tfrac{d}{d\alpha} s(\alpha)
\qquad\text{and}\qquad
\tfrac{d}{d\alpha}\, e^{s(\alpha)} = e^{s(\alpha)}\, \tfrac{d}{d\alpha} s(\alpha)$$

Solution. First, let, for each $\alpha \in \mathbb{R}$, $t(\alpha) \in \Lambda V$ and assume that $t(\alpha)$ is differentiable with respect to $\alpha$. Taking the limit of

$$\tfrac{s(\alpha+h)t(\alpha+h) - s(\alpha)t(\alpha)}{h} = \tfrac{s(\alpha+h) - s(\alpha)}{h}\, t(\alpha+h) + s(\alpha)\, \tfrac{t(\alpha+h) - t(\alpha)}{h}$$

as $h$ tends to zero gives that $s(\alpha)t(\alpha)$ is differentiable and obeys the usual product rule

$$\tfrac{d}{d\alpha}\big[s(\alpha)t(\alpha)\big] = s'(\alpha)t(\alpha) + s(\alpha)t'(\alpha)$$

Differentiating $s(\alpha)s(\beta) = s(\beta)s(\alpha)$ with respect to $\alpha$ gives that $s'(\alpha)s(\beta) = s(\beta)s'(\alpha)$ for all $\alpha$ and $\beta$ too. If, for some $n$, $\frac{d}{d\alpha} s(\alpha)^n = n\, s(\alpha)^{n-1} s'(\alpha)$, then

$$\tfrac{d}{d\alpha}\big[s(\alpha)\,s(\alpha)^n\big] = s'(\alpha)\,s(\alpha)^n + s(\alpha)\cdot n\, s(\alpha)^{n-1} s'(\alpha) = (n+1)\, s(\alpha)^n s'(\alpha)$$

and the first part follows easily by induction. For the second part, write $s(\alpha) = s_0(\alpha) + s_1(\alpha)$ with $s_0(\alpha) \in \mathbb{C}$ and $s_1(\alpha) \in \bigoplus_{n=1}^{D} \Lambda^n V$. Then $s_0(\alpha)$ and $s_1(\beta)$ commute for all $\alpha$ and $\beta$ and

$$\tfrac{d}{d\alpha}\, e^{s(\alpha)} = \tfrac{d}{d\alpha}\Big[e^{s_0(\alpha)} \sum_{n=0}^{D+1}\tfrac{1}{n!}\, s_1(\alpha)^n\Big]
= s_0'(\alpha)\, e^{s_0(\alpha)} \sum_{n=0}^{D+1}\tfrac{1}{n!}\, s_1(\alpha)^n + e^{s_0(\alpha)} \sum_{n=1}^{D+1}\tfrac{1}{(n-1)!}\, s_1(\alpha)^{n-1} s_1'(\alpha)$$
$$= s_0'(\alpha)\, e^{s_0(\alpha)} \sum_{n=0}^{D}\tfrac{1}{n!}\, s_1(\alpha)^n + e^{s_0(\alpha)} \sum_{n=0}^{D}\tfrac{1}{n!}\, s_1(\alpha)^{n} s_1'(\alpha)
= e^{s_0(\alpha)} \Big\{\sum_{n=0}^{D}\tfrac{1}{n!}\, s_1(\alpha)^n\Big\}\big(s_0'(\alpha) + s_1'(\alpha)\big) = e^{s(\alpha)}\, s'(\alpha)$$

Problem I.4 Use the notation of Problem I.2. If $s_0 > 0$, define

$$\ln s = \ln s_0 + \sum_{n=1}^{D} \tfrac{(-1)^{n-1}}{n} \Big(\tfrac{s_1}{s_0}\Big)^n$$

with $\ln s_0 \in \mathbb{R}$.

a) Let, for each $\alpha \in \mathbb{R}$, $s(\alpha) \in \Lambda V$. Assume that $s(\alpha)$ is differentiable with respect to $\alpha$, that $s(\alpha)s(\beta) = s(\beta)s(\alpha)$ for all $\alpha$ and $\beta$ and that $s_0(\alpha) > 0$ for all $\alpha$. Prove that

$$\tfrac{d}{d\alpha} \ln s(\alpha) = \tfrac{s'(\alpha)}{s(\alpha)}$$

b) Prove that if $s \in \Lambda V$ with $s_0 \in \mathbb{R}$, then

$$\ln e^s = s$$

Prove that if $s \in \Lambda V$ with $s_0 > 0$, then

$$e^{\ln s} = s$$

Solution. a)

$$\tfrac{d}{d\alpha} \ln s(\alpha) = \tfrac{s_0'(\alpha)}{s_0(\alpha)} + \sum_{n=1}^{D} (-1)^{n-1}\, \tfrac{s_1(\alpha)^{n-1} s_1'(\alpha)}{s_0(\alpha)^n} - \sum_{n=1}^{D} (-1)^{n-1}\, \tfrac{s_1(\alpha)^n\, s_0'(\alpha)}{s_0(\alpha)^{n+1}}$$
$$= \tfrac{s_0'(\alpha)}{s_0(\alpha)} + \sum_{n=0}^{D-1} (-1)^{n}\, \tfrac{s_1(\alpha)^{n} s_1'(\alpha)}{s_0(\alpha)^{n+1}} + \sum_{n=1}^{D} (-1)^{n}\, \tfrac{s_1(\alpha)^n\, s_0'(\alpha)}{s_0(\alpha)^{n+1}}
= \Big[\tfrac{1}{s_0(\alpha)} + \sum_{n=1}^{D} (-1)^{n}\, \tfrac{s_1(\alpha)^n}{s_0(\alpha)^{n+1}}\Big]\big(s_0'(\alpha) + s_1'(\alpha)\big)$$

since $s_1(\alpha)^D s_1'(\alpha) = \frac{1}{D+1}\frac{d}{d\alpha}\, s_1(\alpha)^{D+1} = 0$ allows the first sum to be extended to $n = D$. By Problem I.1,

$$\tfrac{d}{d\alpha} \ln s(\alpha) = \tfrac{1}{s(\alpha)}\, s'(\alpha) = \tfrac{s'(\alpha)}{s(\alpha)}$$

b) For the first part use (with $\alpha \in \mathbb{R}$)

$$\tfrac{d}{d\alpha} \ln e^{\alpha s} = \tfrac{s\, e^{\alpha s}}{e^{\alpha s}} = s = \tfrac{d}{d\alpha}\,(\alpha s)$$

This implies that $\ln e^{\alpha s} - \alpha s$ is independent of $\alpha$. As it is zero for $\alpha = 0$, it is zero for all $\alpha$. For the second part, set $s' = e^{\ln s}$. Then, by the first part, $\ln s' = \ln s$. So it suffices to prove that $\ln$ is injective. This is done by expanding out $s' = \sum_{\ell=0}^{D} s'_\ell$ and $s = \sum_{\ell=0}^{D} s_\ell$ with $s'_\ell, s_\ell \in \Lambda^\ell V$ and verifying that every $s'_\ell = s_\ell$ by induction on $\ell$. Projecting

$$\ln s'_0 + \sum_{n=1}^{D} \tfrac{(-1)^{n-1}}{n} \Big(\tfrac{s' - s'_0}{s'_0}\Big)^n = \ln s_0 + \sum_{n=1}^{D} \tfrac{(-1)^{n-1}}{n} \Big(\tfrac{s - s_0}{s_0}\Big)^n$$

onto $\Lambda^0 V$ gives $\ln s'_0 = \ln s_0$ and hence $s'_0 = s_0$. Projecting both sides onto $\Lambda^1 V$ gives

$$\tfrac{s'_1}{s'_0} = \tfrac{s_1}{s_0}$$

(note that $\big(\frac{s - s_0}{s_0}\big)^n$ has no component in $\Lambda^m V$ for any $m < n$) and hence $s'_1 = s_1$. Projecting both sides onto $\Lambda^2 V$ gives

$$\tfrac{s'_2}{s'_0} - \tfrac{1}{2}\Big(\tfrac{s'_1}{s'_0}\Big)^2 = \tfrac{s_2}{s_0} - \tfrac{1}{2}\Big(\tfrac{s_1}{s_0}\Big)^2$$

Since $s'_0 = s_0$ and $s'_1 = s_1$,

$$\tfrac{s'_2}{s_0} = \tfrac{s_2}{s_0}$$

and consequently $s'_2 = s_2$. And so on.

Problem I.5 Use the notation of Problems I.2 and I.4. Prove that if $s, t \in \Lambda V$ with $st = ts$ and $s_0, t_0 > 0$, then

$$\ln(st) = \ln s + \ln t$$

Solution. Let $u = \ln s$ and $v = \ln t$. Then $s = e^u$ and $t = e^v$ and, as $uv = vu$,

$$\ln(st) = \ln\big(e^u e^v\big) = \ln e^{u+v} = u + v = \ln s + \ln t$$

Problem I.6 Generalise Problems I.1-I.5 to $\Lambda_S V$ with $S$ a finite dimensional graded superalgebra having $S_0 = \mathbb{C}$.

Solution. The definitions of Problems I.1-I.5 still make perfectly good sense when the Grassmann algebra $\Lambda V$ is replaced by a finite dimensional graded superalgebra $S$ having $S_0 = \mathbb{C}$. Note that, because $S$ is finite dimensional, there is a finite $D_S$ such that $S = \bigoplus_{m=0}^{D_S} S_m$. Furthermore, as was observed in Definitions I.5 and I.6, $S' = \Lambda_S V$ is itself a finite dimensional graded superalgebra, with $S' = \bigoplus_{m=0}^{D_S+D} S'_m$ where $S'_m = \bigoplus_{m_1+m_2=m} S_{m_1} \otimes \Lambda^{m_2} V$. In particular $S'_0 = S_0$. So, just replace

"Let $V$ be a complex vector space of dimension $D$. Every element $s$ of $\Lambda V$ has a unique decomposition $s = s_0 + s_1$ with $s_0 \in \mathbb{C}$ and $s_1 \in \bigoplus_{n=1}^{D} \Lambda^n V$."

by

"Let $S' = \bigoplus_{m=0}^{D'} S'_m$ be a finite dimensional graded superalgebra having $S'_0 = \mathbb{C}$. Every element $s$ of $S'$ has a unique decomposition $s = s_0 + s_1$ with $s_0 \in \mathbb{C}$ and $s_1 \in \bigoplus_{m=1}^{D'} S'_m$."

The other parts of the statements and even the solutions remain the same, except for trivial changes, like replacing $D$ by $D'$.

Problem I.7 Let $V$ be a complex vector space of dimension $D$. Let $s = s_0 + s_1 \in \Lambda V$ with $s_0 \in \mathbb{C}$ and $s_1 \in \bigoplus_{n=1}^{D} \Lambda^n V$. Let $f(z)$ be a complex valued function that is analytic in $|z| < r$. Prove that if $|s_0| < r$, then $\sum_{n=0}^{\infty} \frac{1}{n!} f^{(n)}(0)\, s^n$ converges and

$$\sum_{n=0}^{\infty} \tfrac{1}{n!}\, f^{(n)}(0)\, s^n = \sum_{n=0}^{D} \tfrac{1}{n!}\, f^{(n)}(s_0)\, s_1^n$$

Solution. The Taylor series $\sum_{m=0}^{\infty} \frac{1}{m!} f^{(m)}(0)\, t^m$ converges absolutely and uniformly to $f(t)$ for all $t$ in any compact subset of $\{z \in \mathbb{C} : |z| < r\}$. Hence $\sum_{m=0}^{\infty} \frac{1}{m!} f^{(n+m)}(0)\, s_0^m$ converges to $f^{(n)}(s_0)$ and

$$\sum_{n=0}^{D} \tfrac{1}{n!}\, f^{(n)}(s_0)\, s_1^n = \sum_{n=0}^{D} \sum_{m=0}^{\infty} \tfrac{1}{n!\,m!}\, f^{(n+m)}(0)\, s_0^m s_1^n
= \sum_{N=0}^{\infty}\ \sum_{n=0}^{\min\{N,D\}} \tfrac{1}{n!(N-n)!}\, f^{(N)}(0)\, s_0^{N-n} s_1^n \qquad \text{where } N = m + n$$
$$= \sum_{N=0}^{\infty} \tfrac{1}{N!}\, f^{(N)}(0) \sum_{n=0}^{\min\{N,D\}} \tbinom{N}{n}\, s_0^{N-n} s_1^n
= \sum_{N=0}^{\infty} \tfrac{1}{N!}\, f^{(N)}(0) \sum_{n=0}^{N} \tbinom{N}{n}\, s_0^{N-n} s_1^n
= \sum_{N=0}^{\infty} \tfrac{1}{N!}\, f^{(N)}(0)\, s^N$$

(the extension of the inner sum to $n = N$ is free because $s_1^n = 0$ for $n > D$).

Problem I.8 Let $a_1, \dots, a_D$ be an ordered basis for $V$. Let $b_i = \sum_{j=1}^{D} M_{i,j}\, a_j$, $1 \le i \le D$, be another ordered basis for $V$. Prove that

$$\int \cdot\ da_D \cdots da_1 = \det M \int \cdot\ db_D \cdots db_1$$

In particular, if $b_i = a_{\sigma(i)}$ for some permutation $\sigma \in S_D$,

$$\int \cdot\ da_D \cdots da_1 = \mathrm{sgn}\,\sigma \int \cdot\ db_D \cdots db_1$$

Solution. By linearity, it suffices to verify that

$$\int b_{i_1} \cdots b_{i_\ell}\, da_D \cdots da_1 = 0 \quad \text{unless } \ell = D
\qquad\text{and}\qquad
\int b_1 \cdots b_D\, da_D \cdots da_1 = \det M$$

That $\int b_{i_1} \cdots b_{i_\ell}\, da_D \cdots da_1 = 0$ when $\ell \ne D$ is obvious, because $\int a_{j_1} \cdots a_{j_\ell}\, da_D \cdots da_1$ vanishes when $\ell \ne D$.

$$\int b_1 \cdots b_D\, da_D \cdots da_1 = \sum_{j_1,\dots,j_D=1}^{D} M_{1,j_1} \cdots M_{D,j_D} \int a_{j_1} \cdots a_{j_D}\, da_D \cdots da_1$$

Now $\int a_{j_1} \cdots a_{j_D}\, da_D \cdots da_1 = 0$ unless all of the $j_k$'s are different, so

$$\int b_1 \cdots b_D\, da_D \cdots da_1 = \sum_{\pi\in S_D} M_{1,\pi(1)} \cdots M_{D,\pi(D)} \int a_{\pi(1)} \cdots a_{\pi(D)}\, da_D \cdots da_1$$
$$= \sum_{\pi\in S_D} M_{1,\pi(1)} \cdots M_{D,\pi(D)}\ \mathrm{sgn}\,\pi \int a_{1} \cdots a_{D}\, da_D \cdots da_1
= \sum_{\pi\in S_D} \mathrm{sgn}\,\pi\ M_{1,\pi(1)} \cdots M_{D,\pi(D)} = \det M$$

In particular, if $b_i = a_{\sigma(i)}$ for some permutation $\sigma \in S_D$,

$$\int b_1 \cdots b_D\, da_D \cdots da_1 = \int a_{\sigma(1)} \cdots a_{\sigma(D)}\, da_D \cdots da_1 = \mathrm{sgn}\,\sigma \int a_1 \cdots a_D\, da_D \cdots da_1 = \mathrm{sgn}\,\sigma$$
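Equivalently, the coefficient of $a_1 \cdots a_D$ in $b_1 \cdots b_D$ is $\det M$. A small numerical check of that statement for $D = 3$ (toy Grassmann-algebra implementation and matrix of our own choosing, not from the text):

```python
# Problem I.8 for D = 3: expand b1 b2 b3 with b_i = sum_j M[i][j] a_j in a
# three-generator Grassmann algebra; the coefficient of a1 a2 a3 equals
# det M.  Toy implementation, for illustration only.

def mul(x, y):
    """Grassmann product on {frozenset of indices: coefficient} dicts."""
    out = {}
    for I, a in x.items():
        for J, b in y.items():
            if I & J:                      # repeated generator: term is 0
                continue
            seq = sorted(I) + sorted(J)
            sign = 1
            for p in range(len(seq)):
                for q in range(p + 1, len(seq)):
                    if seq[p] > seq[q]:
                        sign = -sign
            K = I | J
            out[K] = out.get(K, 0.0) + sign * a * b
    return out

M = [[1.0, 2.0, 0.0],
     [3.0, 1.0, 4.0],
     [0.0, 5.0, 1.0]]

# b_i as degree-one Grassmann elements
b = [{frozenset({j + 1}): M[i][j] for j in range(3)} for i in range(3)]
prod = mul(mul(b[0], b[1]), b[2])
top_coeff = prod.get(frozenset({1, 2, 3}), 0.0)   # coefficient of a1 a2 a3

det_M = (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
         - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
         + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))
assert abs(top_coeff - det_M) < 1e-12
```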

Problem I.9 Let
◦ $S$ be a matrix
◦ $\lambda$ be a real number
◦ $\vec v_1$ and $\vec v_2$ be two mutually perpendicular, complex conjugate unit vectors
◦ $S\vec v_1 = \imath\lambda\vec v_1$ and $S\vec v_2 = -\imath\lambda\vec v_2$.
Set

$$\vec w_1 = \tfrac{1}{\sqrt 2\,\imath}\big(\vec v_1 - \vec v_2\big) \qquad\qquad \vec w_2 = \tfrac{1}{\sqrt 2}\big(\vec v_1 + \vec v_2\big)$$

a) Prove that
• $\vec w_1$ and $\vec w_2$ are two mutually perpendicular, real unit vectors
• $S\vec w_1 = \lambda\vec w_2$ and $S\vec w_2 = -\lambda\vec w_1$.
b) Suppose, in addition, that $S$ is a $2\times 2$ matrix. Let $R$ be the $2\times 2$ matrix whose first column is $\vec w_1$ and whose second column is $\vec w_2$. Prove that $R$ is a real orthogonal matrix and that $R^t S R = \begin{pmatrix} 0 & -\lambda \\ \lambda & 0 \end{pmatrix}$.
c) Generalise to the case in which $S$ is a $2r \times 2r$ matrix.

Solution. a) Aside from a factor of $\sqrt 2$, $\vec w_1$ and $\vec w_2$ are the imaginary and real parts of $\vec v_1$, so they are real. I'll use the convention that the complex conjugate is on the left argument of the dot product. Then

$$\vec w_1\cdot\vec w_1 = \tfrac12\big(\vec v_1 - \vec v_2\big)\cdot\big(\vec v_1 - \vec v_2\big) = \tfrac12\big(\vec v_1\cdot\vec v_1 + \vec v_2\cdot\vec v_2\big) = 1$$
$$\vec w_2\cdot\vec w_2 = \tfrac12\big(\vec v_1 + \vec v_2\big)\cdot\big(\vec v_1 + \vec v_2\big) = \tfrac12\big(\vec v_1\cdot\vec v_1 + \vec v_2\cdot\vec v_2\big) = 1$$
$$\vec w_1\cdot\vec w_2 = \tfrac{\imath}{2}\big(\vec v_1 - \vec v_2\big)\cdot\big(\vec v_1 + \vec v_2\big) = \tfrac{\imath}{2}\big(\vec v_1\cdot\vec v_1 - \vec v_2\cdot\vec v_2\big) = 0$$

and

$$S\vec w_1 = \tfrac{1}{\sqrt 2\,\imath}\big(S\vec v_1 - S\vec v_2\big) = \tfrac{1}{\sqrt 2\,\imath}\big(\imath\lambda\vec v_1 + \imath\lambda\vec v_2\big) = \tfrac{\lambda}{\sqrt 2}\big(\vec v_1 + \vec v_2\big) = \lambda\vec w_2$$
$$S\vec w_2 = \tfrac{1}{\sqrt 2}\big(S\vec v_1 + S\vec v_2\big) = \tfrac{1}{\sqrt 2}\big(\imath\lambda\vec v_1 - \imath\lambda\vec v_2\big) = \imath\lambda\cdot\imath\,\vec w_1 = -\lambda\vec w_1$$

b) The matrix $R$ is real, because its column vectors $\vec w_1$ and $\vec w_2$ are real, and it is orthogonal because its column vectors are mutually perpendicular unit vectors. Furthermore,

$$R^t S R = \begin{pmatrix}\vec w_1^{\,t} \\ \vec w_2^{\,t}\end{pmatrix} S\, [\,\vec w_1\ \ \vec w_2\,] = \begin{pmatrix}\vec w_1^{\,t} \\ \vec w_2^{\,t}\end{pmatrix} [\,\lambda\vec w_2\ \ -\lambda\vec w_1\,]
= \lambda\begin{pmatrix}\vec w_1\cdot\vec w_2 & -\vec w_1\cdot\vec w_1 \\ \vec w_2\cdot\vec w_2 & -\vec w_2\cdot\vec w_1\end{pmatrix} = \begin{pmatrix} 0 & -\lambda \\ \lambda & 0 \end{pmatrix}$$

c) Let $r \in \mathbb{N}$ and
◦ $S$ be a $2r \times 2r$ matrix
◦ $\lambda_\ell$, $1 \le \ell \le r$, be real numbers
◦ for each $1 \le \ell \le r$, $\vec v_{1,\ell}$ and $\vec v_{2,\ell}$ be two mutually perpendicular, complex conjugate unit vectors
◦ $\vec v_{i,\ell}$ and $\vec v_{j,\ell'}$ be perpendicular for all $1 \le i, j \le 2$ and all $\ell \ne \ell'$ between $1$ and $r$
◦ $S\vec v_{1,\ell} = \imath\lambda_\ell\vec v_{1,\ell}$ and $S\vec v_{2,\ell} = -\imath\lambda_\ell\vec v_{2,\ell}$, for each $1 \le \ell \le r$.
Set

$$\vec w_{1,\ell} = \tfrac{1}{\sqrt 2\,\imath}\big(\vec v_{1,\ell} - \vec v_{2,\ell}\big) \qquad\qquad \vec w_{2,\ell} = \tfrac{1}{\sqrt 2}\big(\vec v_{1,\ell} + \vec v_{2,\ell}\big)$$

for each $1 \le \ell \le r$. Then, as in part (a), $\vec w_{1,1}, \vec w_{2,1}, \vec w_{1,2}, \vec w_{2,2}, \dots, \vec w_{1,r}, \vec w_{2,r}$ are mutually perpendicular real unit vectors obeying

$$S\vec w_{1,\ell} = \lambda_\ell\, \vec w_{2,\ell} \qquad\qquad S\vec w_{2,\ell} = -\lambda_\ell\, \vec w_{1,\ell}$$

and if $R = [\,\vec w_{1,1}, \vec w_{2,1}, \vec w_{1,2}, \vec w_{2,2}, \dots, \vec w_{1,r}, \vec w_{2,r}\,]$, then

$$R^t S R = \bigoplus_{\ell=1}^{r} \begin{pmatrix} 0 & -\lambda_\ell \\ \lambda_\ell & 0 \end{pmatrix}$$
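A concrete numerical instance of parts (a) and (b) (the matrix $S$, the value $\lambda = 2$ and the eigenvectors are our own choices, not from the text):

```python
# Problem I.9 (a), (b) for a concrete 2x2 case: lam = 2,
# S = [[0, lam], [-lam, 0]], v1 = (1, i)/sqrt(2), v2 = conj(v1),
# so that S v1 = i*lam*v1 and S v2 = -i*lam*v2.  Illustrative choice only.
import math

lam = 2.0
S = [[0.0, lam], [-lam, 0.0]]
s2 = math.sqrt(2.0)
v1 = [1.0 / s2, 1j / s2]
v2 = [z.conjugate() for z in v1]

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

# check S v1 = i*lam*v1
for a, b in zip(matvec(S, v1), [1j * lam * z for z in v1]):
    assert abs(a - b) < 1e-12

w1 = [(a - b) / (s2 * 1j) for a, b in zip(v1, v2)]
w2 = [(a + b) / s2 for a, b in zip(v1, v2)]
# w1, w2 are real, S w1 = lam*w2 and S w2 = -lam*w1
assert all(abs(z.imag) < 1e-12 for z in w1 + w2)
for a, b in zip(matvec(S, w1), [lam * z for z in w2]):
    assert abs(a - b) < 1e-12
for a, b in zip(matvec(S, w2), [-lam * z for z in w1]):
    assert abs(a - b) < 1e-12

# R = [w1 w2] is orthogonal and R^t S R = [[0, -lam], [lam, 0]]
R = [[w1[0].real, w2[0].real], [w1[1].real, w2[1].real]]
Rt_S_R = [[sum(R[k][i] * S[k][l] * R[l][j]
               for k in range(2) for l in range(2))
           for j in range(2)] for i in range(2)]
expected = [[0.0, -lam], [lam, 0.0]]
assert all(abs(Rt_S_R[i][j] - expected[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```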

Problem I.10 Let $P(z) = \sum_{i\ge 0} c_i\, z^i$ be a power series with complex coefficients and infinite radius of convergence and let $f(a)$ be an even element of $\Lambda_S V$. Show that

$$\tfrac{\partial}{\partial a_\ell}\, P\big(f(a)\big) = P'\big(f(a)\big)\, \tfrac{\partial}{\partial a_\ell} f(a)$$

Solution. Write $f(a) = f_0 + f_1(a)$ with $f_0 \in \mathbb{C}$ and $f_1(a) \in \bigoplus_{m=1}^{D} S \otimes \Lambda^m V$. We allow $P(z)$ to have any radius of convergence strictly larger than $|f_0|$. By Problem I.7,

$$\sum_{n=0}^{\infty} c_n\, f(a)^n = \sum_{n=0}^{D} \tfrac{1}{n!}\, P^{(n)}(f_0)\, f_1(a)^n$$

As $\frac{\partial}{\partial a_\ell} f(a) = \frac{\partial}{\partial a_\ell} f_1(a)$, it suffices, by linearity in $P$, to show that

$$\tfrac{\partial}{\partial a_\ell}\, f_1(a)^n = n\, f_1(a)^{n-1}\, \tfrac{\partial}{\partial a_\ell} f_1(a) \qquad\qquad (D.1)$$

For any even $g(a)$ and any $h(a)$, the product rule

$$\tfrac{\partial}{\partial a_\ell}\big[g(a)h(a)\big] = \Big[\tfrac{\partial}{\partial a_\ell} g(a)\Big] h(a) + g(a)\Big[\tfrac{\partial}{\partial a_\ell} h(a)\Big]$$

applies. (D.1) follows easily from the product rule, by induction on $n$.

Problem I.11 Let $V$ and $V'$ be vector spaces with bases $\{a_1, \dots, a_D\}$ and $\{b_1, \dots, b_{D'}\}$ respectively. Let $S$ and $T$ be $D \times D$ and $D' \times D'$ skew symmetric matrices. Prove that

$$\int\Big[\int f(a,b)\, d\mu_S(a)\Big]\, d\mu_T(b) = \int\Big[\int f(a,b)\, d\mu_T(b)\Big]\, d\mu_S(a)$$

Solution. Let $\{c_1, \dots, c_D\}$ and $\{d_1, \dots, d_{D'}\}$ be bases for second copies of $V$ and $V'$, respectively. It suffices to consider $f(a,b) = e^{\Sigma_i c_i a_i + \Sigma_i d_i b_i}$, because all $f(a,b)$'s can be constructed by taking linear combinations of derivatives of $e^{\Sigma_i c_i a_i + \Sigma_i d_i b_i}$ with respect to the $c_i$'s and $d_i$'s. For $f(a,b) = e^{\Sigma_i c_i a_i + \Sigma_i d_i b_i}$,

$$\int\Big[\int f(a,b)\, d\mu_S(a)\Big]\, d\mu_T(b) = \int e^{\Sigma_i d_i b_i}\Big[\int e^{\Sigma_i c_i a_i}\, d\mu_S(a)\Big]\, d\mu_T(b)
= e^{-\frac12 \Sigma_{ij} c_i S_{ij} c_j} \int e^{\Sigma_i d_i b_i}\, d\mu_T(b)
= e^{-\frac12 \Sigma_{ij} c_i S_{ij} c_j}\, e^{-\frac12 \Sigma_{ij} d_i T_{ij} d_j}$$

and

$$\int\Big[\int f(a,b)\, d\mu_T(b)\Big]\, d\mu_S(a) = \int e^{\Sigma_i c_i a_i}\Big[\int e^{\Sigma_i d_i b_i}\, d\mu_T(b)\Big]\, d\mu_S(a)
= e^{-\frac12 \Sigma_{ij} d_i T_{ij} d_j}\, e^{-\frac12 \Sigma_{ij} c_i S_{ij} c_j}$$

are equal.

Problem I.12 Let $V$ be a $D$ dimensional vector space with basis $\{a_1, \dots, a_D\}$ and $V'$ be a second copy of $V$ with basis $\{c_1, \dots, c_D\}$. Let $S$ be a $D \times D$ skew symmetric matrix. Prove that

$$\int e^{\Sigma_i c_i a_i}\, f(a)\, d\mu_S(a) = e^{-\frac12 \Sigma_{ij} c_i S_{ij} c_j} \int f(a - Sc)\, d\mu_S(a)$$

Here $(Sc)_i = \sum_j S_{ij}\, c_j$.

Solution. It suffices to consider $f(a) = e^{\Sigma_i b_i a_i}$, because all $f$'s can be constructed by taking linear combinations of derivatives of $e^{\Sigma_i b_i a_i}$ with respect to the $b_i$'s. For $f(a) = e^{\Sigma_i b_i a_i}$,

$$\int e^{\Sigma_i c_i a_i}\, f(a)\, d\mu_S(a) = \int e^{\Sigma_i (b_i + c_i) a_i}\, d\mu_S(a) = e^{-\frac12 \Sigma_{ij} (b_i+c_i) S_{ij} (b_j+c_j)}
= e^{-\frac12 \Sigma_{ij} c_i S_{ij} c_j}\, e^{-\Sigma_{ij} b_i S_{ij} c_j}\, e^{-\frac12 \Sigma_{ij} b_i S_{ij} b_j}$$

and

$$e^{-\frac12 \Sigma_{ij} c_i S_{ij} c_j} \int f(a - Sc)\, d\mu_S(a) = e^{-\frac12 \Sigma_{ij} c_i S_{ij} c_j} \int e^{\Sigma_i b_i (a_i - \Sigma_j S_{ij} c_j)}\, d\mu_S(a)
= e^{-\frac12 \Sigma_{ij} c_i S_{ij} c_j}\, e^{-\Sigma_{ij} b_i S_{ij} c_j}\, e^{-\frac12 \Sigma_{ij} b_i S_{ij} b_j}$$

are equal.

Problem I.13 Let be a complex vector space with even dimension D = 2r and basis V ψ , , ψ , ψ , , ψ . Here, ψ need not be the complex conjugate of ψ . Let A be an { 1 · · · r 1 · · · r} i i r r matrix and dµ (ψ, ψ) the Grassmann Gaussian integral obeying × · A R ψiψj dµA(ψ, ψ) = ψiψj dµA(ψ, ψ) = 0 ψiψj dµA(ψ, ψ) = Aij Z Z Z a) Prove

ψ ψ ψ ψ dµ (ψ, ψ) in · · · i1 j1 · · · jm A Z m = ( 1)`+1A ψ ψ ψ ψ ψ dµ (ψ, ψ) − i1j` in · · · i2 j1 · · · 6 j` · · · jm A `=1 Z P Here, the ψ signies that the factor ψ is omitted from the integrand. 6 j` j` b) Prove that, if n = m, 6

ψ ψ ψ ψ dµ (ψ, ψ) = 0 in · · · i1 j1 · · · jm A Z 111 c) Prove that

ψin ψi1 ψj1 ψjn dµA(ψ, ψ) = det Aikj` · · · · · · 1 k,` n Z h i ≤ ≤ Σ (ζ¯ ψ +ψ¯ ζ ) d) Let 0 be a second copy of with basis ζ , , ζ , ζ , , ζ . View e i i i i i as V V { 1 · · · r 1 · · · r} an element of 0 . Prove that ∧V V

V Σi(ζ¯iψi+ψ¯iζi) Σi,j ζ¯iAij ζj e dµA(ψ, ψ) = e Z

Solution. Dene ψi if 1 i r ai(ψ, ψ) = ≤ ≤ ψi r if r + 1 i 2r  − ≤ ≤ and 0 A S = At 0  −  Then

f(a) dµS(a) = f a(ψ, ψ) dµA(ψ, ψ) Z Z  a) Translating the integration by parts formula D ∂ ak f(a) dµS(a) = Sk` f(a) dµS(a) ∂a` Z `=1 Z P of Proposition I.17 into the ψ, ψ language gives r ∂ ψk f(ψ, ψ) dµA(ψ, ψ) = Ak` ¯ f(ψ, ψ) dµA(ψ, ψ) ∂ψ` Z `=1 Z Now, just apply this formula to P

(m+1)(n 1) ψ ψ ψ ψ dµ (ψ, ψ) = ( 1) − ψ ψ ψ ψ ψ dµ (ψ, ψ) in · · · i1 j1 · · · jm A − i1 j1 · · · jm in · · · i2 A Z Z with k = i and f(ψ, ψ) = ψ ψ ψ ψ . This gives 1 j1 · · · jm in · · · i2 ψ ψ ψ ψ dµ (ψ, ψ) in · · · i1 j1 · · · jm A Z m (m+1)(n 1) ` 1 = ( 1) − ( 1) − A ψ ψ ψ ψ ψ dµ (ψ, ψ) − − i1j` j1 · · · 6 j` · · · jm in · · · i2 A `=1 Z m (m+1)(n 1)+(Pm 1)(n 1) ` 1 = ( 1) − − − ( 1) − A ψ ψ ψ ψ ψ dµ (ψ, ψ) − − i1j` in · · · i2 j1 · · · 6 j` · · · jm A `=1 Z m ` 1 P = ( 1) − A ψ ψ ψ ψ ψ dµ (ψ, ψ) − i1j` in · · · i2 j1 · · · 6 j` · · · jm A `=1 Z P 112 as desired.

d) Dene ζi if 1 i r bi(ψ, ψ) = ≤ ≤ ζi r if r + 1 i 2r  − − ≤ ≤ Then

1 Σi(ζ¯iψi+ψ¯iζi) Σibiai Σi,j biSij bj Σi,j ζ¯iAij ζj e dµA(ψ, ψ) = e dµS(a) = e− 2 = e Z Z since

1 0 A ζ 1 Aζ 1 ζ, ζ = ζ, ζ − = ζAζ + ζAtζ = ζAζ 2 − At 0 ζ 2 − Atζ 2 − −  −   −   −      h i

n ∂ m ∂ b) Apply `=1 ¯ and `=1 ∂ζ to the conclusion of part (d) and set ζ = ζ = 0. The ∂ζi` j` resulting leftQ hand side is,Qup to a sign, ψin ψi1 ψj1 ψjm dµA(ψ, ψ). Unless m = n, · · · · · · the right hand side is zero. R

c) The proof is by induction on n. For n = 1, by the denition of dµA,

ψi1 ψj1 dµA(ψ, ψ) = Ai1j1 Z as desired. If the result is known for n 1, then, by part (a), −

ψ ψ ψ ψ dµ (ψ, ψ) in · · · i1 j1 · · · jm A Z m = ( 1)`+1A ψ ψ ψ ψ ψ dµ (ψ, ψ) − i1j` in · · · i2 j1 · · · 6 j` · · · jm A `=1 Z m P `+1 (1,`) = ( 1) Ai1j` det M `=1 − P (1,`) where M is the matrix Aiαjβ 1 α,β n with row 1 and column ` deleted. So expansion ≤ ≤ along the rst row,  

m `+1 (1,`) det Aiαjβ 1 α,β n = ( 1) Ai1j` det M ≤ ≤ `=1 −   P gives the desired result.

Problem I.14 Prove that

$$\mathcal{C}(c) = -\tfrac12 \sum_{ij} c_i S_{ij} c_j + \mathcal{G}(-Sc)$$

where $(Sc)_i = \sum_j S_{ij}\, c_j$.

Solution. By Problem I.12, with $f(a) = e^{W(a)}$, and Problem I.5,

$$\mathcal{C}(c) = \log \tfrac1Z \int e^{\Sigma_i c_i a_i}\, e^{W(a)}\, d\mu_S(a)
= \log\Big[\tfrac1Z\, e^{-\frac12 \Sigma_{ij} c_i S_{ij} c_j} \int e^{W(a - Sc)}\, d\mu_S(a)\Big]$$
$$= -\tfrac12 \sum_{ij} c_i S_{ij} c_j + \log\Big[\tfrac1Z \int e^{W(a - Sc)}\, d\mu_S(a)\Big]
= -\tfrac12 \sum_{ij} c_i S_{ij} c_j + \mathcal{G}(-Sc)$$

ZJ Problem I.15 We have normalized J+1 so that J+1(0) = 0. So the ratio in G G ZJ+1

ZJ J (c+a) J+1(c) = log eG dµS(J+1) (a) G ZJ+1 Z had better obey

ZJ+1 J (a) = eG dµ (J+1) (a) ZJ S Z Verify by direct computation that this is the case.

Solution. By Proposition I.21,

W (a) ZJ+1 = e dµS(≤J+1) (a) Z W (a+c) = e dµS(≤J) (a) dµS(J+1) (c) Z h Z i J (a) = ZjeG dµS(J+1) (c) Z h i J (a) = Zj eG dµS(J+1) (c) Z

114 Problem I.16 Prove that

= (J) (J−1) (1) (W ) GJ S ◦ S ◦ · · · ◦ S

= (1) (2) (J) (W ) S ◦ S ◦ · · · ◦ S

Solution. By the denitions of and (W ), GJ S

= (≤J) (W ) GJ S

Now apply Theorem I.29.

Problem I.17 Prove that

$$:f(a): = \int f(a + b)\, d\mu_{-S}(b) \qquad\qquad f(a) = \int :f:(a + b)\, d\mu_S(b)$$

Solution. It suffices to consider $f(a) = e^{\Sigma_i c_i a_i}$. For this $f$,

$$\int f(a + b)\, d\mu_{-S}(b) = \int e^{\Sigma_i c_i (a_i + b_i)}\, d\mu_{-S}(b) = e^{\Sigma_i c_i a_i} \int e^{\Sigma_i c_i b_i}\, d\mu_{-S}(b)
= e^{\Sigma_i c_i a_i}\, e^{\frac12 \Sigma_{i,j} c_i S_{ij} c_j} = \,:f(a):$$

and

$$\int :f:(a + b)\, d\mu_S(b) = \int e^{\Sigma_i c_i (a_i + b_i)}\, e^{\frac12 \Sigma_{i,j} c_i S_{ij} c_j}\, d\mu_S(b)
= e^{\Sigma_i c_i a_i}\, e^{\frac12 \Sigma_{i,j} c_i S_{ij} c_j} \int e^{\Sigma_i c_i b_i}\, d\mu_S(b)$$
$$= e^{\Sigma_i c_i a_i}\, e^{\frac12 \Sigma_{i,j} c_i S_{ij} c_j}\, e^{-\frac12 \Sigma_{i,j} c_i S_{ij} c_j} = e^{\Sigma_i c_i a_i} = f(a)$$

Problem I.18 Prove that ∂ :f(a): = . ∂ f(a) . ∂a` . ∂a` .

115 Solution. By linearity, it suces to consider f(a) = a with I = (i , , i ). I 1 · · · n 1 ∂ ∂ ∂ ∂ 2 Σi,j biSij bj Σibiai ∂a :aI: = ∂a ∂b ∂b e e ` ` i1 · · · in b=0 n ∂ ∂ ∂ 1 Σ b S b Σ b a 2 i,j i ij j i i i = ( 1) ∂b ∂b ∂a e e − i1 · · · in ` b=0 n ∂ ∂ 1 Σ b S b Σ b a 2 i,j i ij j i i i = ( 1) ∂b ∂b e b`e − − i1 · · · in b=0

If ` / I, we get zero. Otherwise, by antisymmetry, we may assume that ` = i . Then ∈ n 1 ∂ n ∂ ∂ 2 Σi,j biSij bj Σibiai ∂a :aI: = ( 1) ∂b ∂b e e ` − − i1 · · · in−1 b=0 n 1 . . = ( 1) − .ai ai . − 1 · · · n−1 . ∂ . = aI . ∂a` .

Problem I.19 Prove that

D ∂ :g(a)ai: f(a) dµS(a) = Si` :g(a): f(a) dµS(a) ∂a` Z `=1 Z P

Solution. It suces to consider g(a) = aI and f(a) = aJ. Observe that

1 1 1 Σmbmam Σm,j bmSmj bj Σmcmam Σm,j bmSmj bj Σm,j (bm+cm)Smj (bj +cj ) e e 2 e dµS(a) = e 2 e− 2 Z Σ b S c 1 Σ c S c = e− m,j m mj j e− 2 m,j m mj j Now apply ∂ to both sides ∂bi

1 D 1 ∂ Σmbmam Σm,j bmSmj bj Σmcmam ΣbmSmj cj ΣcmSmj cj e e 2 e dµS(a) = Si,`c` e− e− 2 ∂bi − Z `=1 D P Σ b a 1 Σ b S b Σ c a = S c e m m m e 2 m,j m mj j e m m m dµ (a) − i,` ` S `=1 Z D P 1 Σmbmam Σm,j bmSmj bj ∂ Σmcmam = Si,` e e 2 e dµS(a) ∂a` `=1 Z Applying P

J ∂ ∂ ( 1)| | ∂bj ∂ck − j I k J ∈ ∈ Q Q J to both sides and setting b = c = 0 gives the desired result. The ( 1)| | is to move − ∂ ∂ ∂ k J ∂c past the ∂b on the left and past the ∂a on the right. ∈ k i ` Q 116 Problem I.20 Prove that

:f:(a + b) = :f(a + b):a = :f(a + b):b

Here : : means Wick ordering of the a ’s and : : means Wick ordering of the b ’s. · a i · b i Precisely, if a , b , A , B are bases of four vector spaces, all of the same dimension, { i} { i} { i} { i} 1 . ΣiAiai+ΣiBibi . ΣiBibi . ΣiAiai . ΣiAiai+ΣiBibi 2 Σij AiSij Aj .e . a = e .e . a = e e 1 . ΣiAiai+ΣiBibi . ΣiAiai . ΣiBibi . ΣiAiai+ΣiBibi 2 Σij BiSij Bj .e . b = e .e . b = e e

Solution. It suces to consider f(a) = eΣiciai . Then

1 1 Σicidi 2 Σij ciSij cj Σici(ai+bi) 2 Σij ciSij cj :f:(a + b) = :f(d):d d=a+b = e e d=a+b = e e and 1 Σici(ai+bi) Σicibi Σiciai Σicibi Σiciai Σij ciSij cj :f(a + b):a = :e :a = e :e :a = e e e 2

Σ c (a +b ) 1 Σ c S c = e i i i i e 2 ij i ij j and 1 Σici(ai+bi) Σiciai Σicibi Σiciai Σicibi Σij ciSij cj :f(a + b):b = :e :b = e :e :b = e e e 2

Σ c (a +b ) 1 Σ c S c = e i i i i e 2 ij i ij j All three are the same.

Problem I.21 Prove that

. . :f:S+T (a + b) = .f(a + b). a,S b,T

Here S and T are skew symmetric matrices, : : means Wick ordering with respect to · S+T . . S + T and . . a,S means Wick ordering of the ai’s with respect to S and of the bi’s with · b,T respect to T .

Solution. It suces to consider f(a) = eΣiciai . Then

1 Σicidi 2 Σij ci(Sij +Tij )cj :f:S+T (a + b) = :f(d):d,S+T d=a+b = e e d=a+b 1 Σici(ai+bi) 2 Σij ci(Sij +Tij )cj = e e 117 and

. . . Σici(ai+bi) . Σiciai . Σicibi . .f(a + b). a,S = .e . a,S = :e :a,S .e . b,T b,T b,T Σ c a 1 Σ c S c Σ c b 1 Σ c T c = e i i i e 2 ij i ij j e i i i e 2 ij i ij j

Σ c (a +b ) 1 Σ c (S +T )c = e i i i i e 2 ij i ij ij j

The two right hand sides are the same.

Problem I.22 Prove that

$$\int :f(a):\, d\mu_S(a) = f(0)$$

Solution. It suffices to consider $f(a) = e^{\Sigma_i c_i a_i}$. Then

$$\int :f(a):\, d\mu_S(a) = \int :e^{\Sigma_i c_i a_i}:\, d\mu_S(a) = \int e^{\Sigma_i c_i a_i}\, e^{\frac12 \Sigma_{ij} c_i S_{ij} c_j}\, d\mu_S(a)$$
$$= e^{\frac12 \Sigma_{ij} c_i S_{ij} c_j} \int e^{\Sigma_i c_i a_i}\, d\mu_S(a) = e^{\frac12 \Sigma_{ij} c_i S_{ij} c_j}\, e^{-\frac12 \Sigma_{ij} c_i S_{ij} c_j} = 1 = f(0)$$

Problem I.23 Prove that

n ei . . 0 0 . a`i,µ . dµS(ψ) = Pf T(i,µ),(i ,µ ) Z i=1 µ=1 Y Q   where

0 if i = i0 T(i,µ),(i0,µ0) = S` ,` 0 0 if i = i0  i,µ i ,µ 6 n Here T is a skew symmetric matrix with i=1 ei rows and columns, numbered, in order

(1, 1), , (1, e1), (2, 1), (2, e2), , (n, enP). The product in the integrand is also in this · · · · · · · · · order.

118 n Solution. The proof is by induction on the number of a’s. That is, on `=1 e`. As P :ai: dµS(ψ) = ai dµS(ψ) = 0 Z Z

:ai1 : :ai2 : dµS(ψ) = ai1 ai2 dµS(ψ) = Si1,i2 Z Z

:ai1 ai2 : dµS(ψ) = 0 Z the induction starts OK. By Problem I.19,

n n ei e1 1 ei . a . dµ (ψ) = S . − a . ∂ . a . dµ (ψ) . `i,µ . S `1e1 ,p . `1,µ . ∂ap . `i,µ . S Z i=1 µ=1 p Z µ=1 i=2 µ=1 Y Q P Q Y Q Dene i

E0 = 0 Ei = ej j=1 X and

lEi−1+µ = `i,µ

Under this new indexing, the last equation becomes

n n Ei E1 1 Ei . a . dµ (ψ) = S . − a . ∂ . a . dµ (ψ) . lj . S lE1 ,p . lj . ∂ap . lj . S Z i=1 j=Ei−1+1 p Z j=1 i=2 j=Ei−1+1 Y Q P Q Y n Q En E1 1 Ei k E 1 . − . . . = ( 1) − 1− S a a dµ (ψ) lE1 ,lk . lj . . lj . S j=E +1 k=E1+1 − j=1 i−1 Z i=2 j6=k P Q Y Q where we have used the product rule (Proposition I.13), Problem I.18 and the observation

. ei . that . µ=1 a`i,µ . has the same parity, even or odd, as ei does. Let Mk` be the matrix obtainedQ from T by deleting rows k and ` and columns k and ` . By the inductive hypothesis

n Ei En . . k E1 1 . al . dµS(ψ) = ( 1) − − Sl ,l Pf ME k j − E1 k 1 Z i=1 j=Ei−1+1 k=E1+1 Y Q P  By Proposition I.18.a, with k replaced by E1,

$$\int \prod_{i=1}^{n} \Big(\prod_{j=E_{i-1}+1}^{E_i} {:}a_{l_j}{:}\Big)\, d\mu_S(\psi) = \mathrm{Pf}(T)$$

since, for all $k > E_1$, $\mathrm{sgn}(E_1, k)\,(-1)^{E_1+k} = (-1)^{k-E_1-1}$.

Problem I.24 Let $a_{ij}$, $1 \le i, j \le n$, be complex numbers. Prove Hadamard's inequality

$$\big|\det[a_{ij}]\big| \le \prod_{i=1}^{n} \Big[\sum_{j=1}^{n} |a_{ij}|^2\Big]^{1/2}$$

from Gram's inequality.

Solution. Set $a_i = (a_{i1}, \dots, a_{in})$ and write $a_{ij} = \langle a_i, e_j\rangle$ with the vectors $e_j = (0, \dots, 1, \dots, 0)$, $j = 1, \dots, n$, forming the standard basis for $\mathbb{C}^n$. Then, by Gram's inequality,

$$\big|\det[a_{ij}]\big| = \big|\det[\langle a_i, e_j\rangle]\big| \le \prod_{i=1}^{n} \|a_i\|\, \|e_i\| = \prod_{i=1}^{n} \Big[\sum_{j=1}^{n} |a_{ij}|^2\Big]^{1/2}$$
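A quick numerical sanity check of Hadamard's inequality for one fixed matrix (the matrix is our own arbitrary choice, not from the text):

```python
# Hadamard's inequality for a fixed 3x3 matrix:
#   |det A| <= prod_i (sum_j |a_ij|^2)^{1/2}
import math

A = [[1.0, -2.0, 3.0],
     [4.0, 0.5, -1.0],
     [2.0, 2.0, 2.0]]

det_A = (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
         - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
         + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))
row_norm_product = math.prod(math.sqrt(sum(a * a for a in row)) for row in A)
assert abs(det_A) <= row_norm_product
```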

Problem I.25 Let be a vector space with basis a , , a . Let S 0 be a skew V { 1 · · · D} `,` symmetric D D matrix with ×

S(`, `0) = f`, g`0 for all 1 `, `0 D h iH ≤ ≤

for some Hilbert space and vectors f`, g`0 . Set F` = f` g` . Prove that H ∈ H k kHk kH n p a dµ (a) F i` S ≤ i` Z `=1 1 ` n Q ≤Y≤

Solution. I will prove both (a) n a dµ (a) F i` S ≤ i` Z `=1 1 ` n Q ≤Y≤

(b) m n . . n ai . aj . dµS(a) 2 Fi Fj k ` ≤ k ` Z k=1 `=1 1 k m 1 ` n Q Q ≤Y≤ ≤Y≤

For (a) Just apply Gram’s inequality to

n

ai` dµS(a) = Pf Sik,i` = det Sik,i` 1 k,` n Z `=1 ≤ ≤ Q h i q  

120 For (b) observe that, by Problem I.17,

m n . . aik . aj` . dµS(a) Z k=1 `=1 Q m Q n = ai aj + bj dµS(a) dµ S(b) k ` ` − Z k=1 `=1 Q Q m  = aik aj` dµS(a) bj` dµ S(b) ± k=1 ` I ` I − I 1, ,n Z ∈ Z 6∈ ⊂{X··· } Q Q Q There are 2n terms. Apply (a) to both factors of each term.

Problem I.26 Prove, under the hypotheses of Proposition I.34, that

n ei . . √ √ . ψ `i,µ, κi,µ . dµA(ψ) 2 f`i,µ 2 g`i,µ µ=1 ≤ k kH k kH Z i=1 1≤i≤n 1≤i≤n Y Q  1≤Yµ≤ei 1≤Yµ≤ei κi,µ=0 κi,µ=1

Solution. Dene S = (i, µ) 1 i n, 1 µ e , κ = 0 ≤ ≤ ≤ ≤ i i,µ S =  (i, µ) 1 i n, 1 µ e , κ = 1 ≤ ≤ ≤ ≤ i i,µ  If the integral does not vanish, the cardinality of S and S coincide and there is a sign ± such that (this is a special case of Problem I.23)

n ei . . . ψ `i,µ, κi,µ . dµA(ψ) = det Mα,β α∈S ± ¯ Z i=1 µ=1 β∈S Y Q    where 0 if i = i0 M(i,µ),(i0,µ0) = A(` , ` 0 0 ) if i = i0  i,µ i ,µ 6 Dene the vectors uα, α S and vβ, β S in Cn+1 by ∈ ∈ 1 if i = n + 1 α ui = 1 if α = (i, µ) for some 1 µ ei ( 0 otherwise ≤ ≤ 1 if i = n + 1 β vi = 1 if β = (i, µ) for some 1 µ ei ( −0 otherwise ≤ ≤ 121 Observe that, for all α S and β S, ∈ ∈ uα = vβ = √2 k k k k 1 if α = (i, µ), β = (i0, µ0) with i = i0 uα vβ = 6 · 0 if α = (i, µ), β = (i0, µ0) with i = i0  Hence, setting F = uα f Cn+1 for α = (i, µ) S α ⊗ `i,µ ∈ ⊗ H ∈ G = vβ g Cn+1 for β = (i, µ) S β ⊗ `i,µ ∈ ⊗ H ∈ we have

Mα,β = Fα, Gβ Cn+1 h i ⊗H and consequently, by Gram’s inequality,

n ei . . . ψ `i,µ, κi,µ . dµA(ψ) = det Mα,β α∈S Z i=1 µ=1 β∈S¯ Y Q   

Fα Cn+1 Gβ Cn+1 ≤ k k ⊗H k k ⊗H α S β S¯ Y∈ Y∈ √2 f` √2 g` ≤ k α kH k β kH α S β S¯ Y∈ Y∈

§II. Fermionic Expansions

Problem II.1 Let

$$F(a) = \sum_{j_1,j_2=1}^{D} f(j_1,j_2)\, a_{j_1} a_{j_2}
\qquad\qquad
W(a) = \sum_{j_1,j_2,j_3,j_4=1}^{D} w(j_1,j_2,j_3,j_4)\, a_{j_1} a_{j_2} a_{j_3} a_{j_4}$$

with $f(j_1,j_2)$ and $w(j_1,j_2,j_3,j_4)$ antisymmetric under permutation of their arguments.

a) Set

$$\mathcal{S}(\lambda) = \tfrac{1}{Z_\lambda} \int F(a)\, e^{\lambda W(a)}\, d\mu_S(a)
\qquad\text{where}\qquad
Z_\lambda = \int e^{\lambda W(a)}\, d\mu_S(a)$$

Compute $\frac{d^\ell}{d\lambda^\ell}\, \mathcal{S}(\lambda)\big|_{\lambda=0}$ for $\ell = 0, 1, 2$.

b) Set

$$R(\lambda) = \int \big:e^{\lambda W(a+b) - \lambda W(a)} - 1\big:_b\ F(b)\, d\mu_S(b)$$

Compute $\frac{d^\ell}{d\lambda^\ell}\, R(\lambda)\big|_{\lambda=0}$ for all $\ell \in \mathbb{N}$.

Solution. a) We have dened

λW (a) D λW (a) F (a) e dµS(a) aj1 aj2 e dµS(a) (λ) = λW (a) = f(j1, j2) λW (a) S e dµS(a) e dµS(a) R j1,j2=1 R X R R Applying Proposition I.17 (integration by parts) to the numerator with k = j1 and using the antisymmetry of w gives

D ∂ λW (a) aj e dµS(a) ∂ak 2 (λ) = f(j1, j2) Sj1,k λW (a) S k e dµS(a) j1,j2=1 R  X P D λW (a) ∂ R aj e λW (a) dµS(a) = f(j , j ) S S 2 ∂ak 1 2 j1,j2 − j1,k eλW (a) dµ (a) j ,j =1 R S  1X2  Xk  D R λW (a) aj2 aj4 aj5 aj6 e dµS(a) = f(j1, j2) Sj1,j2 4λ Sj1,j3 w(j3, j4, j5, j6) λW (a) − e dµS(a) j1,j2=1 j3,j4,j5,j6 R X  X  R (D.2)

123 Setting λ = 0 gives D (0) = f(j , j )S S 1 2 j1,j2 j ,j =1 1X2 Dierentiating once with respect to λ and setting λ = 0 gives

0(0) = 4 f(j , j )S w(j , j , j , j ) a a a a dµ (a) S − 1 2 j1,j3 3 4 5 6 j2 j4 j5 j6 S j , ,j 1X··· 6 Z Integrating by parts and using the antisymmetry of w a second time

0(0) = 12 f(j , j )S S w(j , j , j , j ) a a dµ (a) S − 1 2 j1,j3 j2,j4 3 4 5 6 j5 j6 S j , ,j 1X··· 6 Z = 12 f(j , j )S S w(j , j , j , j )S − 1 2 j1,j3 j2,j4 3 4 5 6 j5,j6 j , ,j 1X··· 6

To determine 00(0) we need the order λ contribution to S

$$\sum_{j_3,\dots,j_6} S_{j_1,j_3}\, w(j_3,j_4,j_5,j_6)\, \frac{\int a_{j_2} a_{j_4} a_{j_5} a_{j_6}\, e^{\lambda W(a)}\, d\mu_S(a)}{\int e^{\lambda W(a)}\, d\mu_S(a)}$$
$$= 3 \sum_{j_3,\dots,j_6} S_{j_1,j_3} S_{j_2,j_4}\, w(j_3,j_4,j_5,j_6)\, \frac{\int a_{j_5} a_{j_6}\, e^{\lambda W(a)}\, d\mu_S(a)}{\int e^{\lambda W(a)}\, d\mu_S(a)} \;-\; \sum_{j_3,\dots,j_6,k} S_{j_1,j_3}\, w(j_3,j_4,j_5,j_6)\, S_{j_2,k}\, \frac{\int a_{j_4} a_{j_5} a_{j_6}\, \frac{\partial}{\partial a_k} e^{\lambda W(a)}\, d\mu_S(a)}{\int e^{\lambda W(a)}\, d\mu_S(a)}$$
$$= 3 \sum_{j_3,\dots,j_6} S_{j_1,j_3} S_{j_2,j_4}\, w(j_3,j_4,j_5,j_6)\, S_{j_5,j_6} \;-\; 3 \sum_{j_3,\dots,j_6,k} S_{j_1,j_3} S_{j_2,j_4}\, w(j_3,j_4,j_5,j_6)\, S_{j_5,k}\, \frac{\int a_{j_6}\, \frac{\partial}{\partial a_k} e^{\lambda W(a)}\, d\mu_S(a)}{\int e^{\lambda W(a)}\, d\mu_S(a)} \;-\; \sum_{j_3,\dots,j_6,k} S_{j_1,j_3}\, w(j_3,j_4,j_5,j_6)\, S_{j_2,k}\, \frac{\int a_{j_4} a_{j_5} a_{j_6}\, \frac{\partial}{\partial a_k} e^{\lambda W(a)}\, d\mu_S(a)}{\int e^{\lambda W(a)}\, d\mu_S(a)}$$
$$= 3 \sum_{j_3,\dots,j_6} S_{j_1,j_3} S_{j_2,j_4}\, w(j_3,j_4,j_5,j_6)\, S_{j_5,j_6} \;-\; 3\lambda \sum_{j_3,\dots,j_6,k} S_{j_1,j_3} S_{j_2,j_4}\, w(j_3,j_4,j_5,j_6)\, S_{j_5,k} \int a_{j_6}\, \tfrac{\partial W}{\partial a_k}(a)\, d\mu_S(a) \;-\; \lambda \sum_{j_3,\dots,j_6,k} S_{j_1,j_3}\, w(j_3,j_4,j_5,j_6)\, S_{j_2,k} \int a_{j_4} a_{j_5} a_{j_6}\, \tfrac{\partial W}{\partial a_k}(a)\, d\mu_S(a) \;+\; O(\lambda^2)$$
Hence
$$\mathcal{S}''(0) = 24 \sum_{j_1,j_2=1}^{D} f(j_1,j_2) \bigg[ \sum_{j_3,\dots,j_6,k} S_{j_1,j_3} S_{j_2,j_4}\, w(j_3,j_4,j_5,j_6)\, S_{j_5,k} \int a_{j_6}\, \tfrac{\partial W}{\partial a_k}(a)\, d\mu_S(a) \bigg] \;+\; 8 \sum_{j_1,j_2=1}^{D} f(j_1,j_2) \bigg[ \sum_{j_3,\dots,j_6,k} S_{j_1,j_3}\, w(j_3,j_4,j_5,j_6)\, S_{j_2,k} \int a_{j_4} a_{j_5} a_{j_6}\, \tfrac{\partial W}{\partial a_k}(a)\, d\mu_S(a) \bigg]$$
$$= 24 \sum_{j_1,j_2=1}^{D} f(j_1,j_2) \bigg[ \sum_{j_3,\dots,j_6,k} S_{j_1,j_3} S_{j_2,j_4}\, w(j_3,j_4,j_5,j_6)\, S_{j_5,k} \int a_{j_6}\, \tfrac{\partial W}{\partial a_k}\, d\mu_S(a) \bigg] \;+\; 16 \sum_{j_1,j_2=1}^{D} f(j_1,j_2) \bigg[ \sum_{j_3,\dots,j_6,k} S_{j_1,j_3}\, w(j_3,j_4,j_5,j_6)\, S_{j_4,j_5}\, S_{j_2,k} \int a_{j_6}\, \tfrac{\partial W}{\partial a_k}\, d\mu_S(a) \bigg] \;+\; 8 \sum_{j_1,j_2=1}^{D} f(j_1,j_2) \bigg[ \sum_{j_3,\dots,j_6,k,\ell} S_{j_1,j_3}\, w(j_3,j_4,j_5,j_6)\, S_{j_2,k} S_{j_4,\ell} \int a_{j_5} a_{j_6}\, \tfrac{\partial^2 W}{\partial a_\ell \partial a_k}\, d\mu_S(a) \bigg]$$
$$= 24 \cdot 4 \cdot 3 \sum_{j_1,\dots,j_{10}} f(j_1,j_2)\, S_{j_1,j_3} S_{j_2,j_4}\, w(j_3,j_4,j_5,j_6)\, S_{j_5,j_7} S_{j_6,j_8}\, w(j_7,j_8,j_9,j_{10})\, S_{j_9,j_{10}}$$
$$\quad + 16 \cdot 4 \cdot 3 \sum_{j_1,\dots,j_{10}} f(j_1,j_2)\, S_{j_1,j_3} S_{j_2,j_7}\, w(j_3,j_4,j_5,j_6)\, S_{j_4,j_5} S_{j_6,j_8}\, w(j_7,j_8,j_9,j_{10})\, S_{j_9,j_{10}}$$
$$\quad + 8 \cdot 4 \cdot 3 \sum_{j_1,\dots,j_{10}} f(j_1,j_2)\, S_{j_1,j_3} S_{j_2,j_7}\, w(j_3,j_4,j_5,j_6)\, S_{j_4,j_8} S_{j_5,j_6}\, w(j_7,j_8,j_9,j_{10})\, S_{j_9,j_{10}}$$
$$\quad - 8 \cdot 4! \sum_{j_1,\dots,j_{10}} f(j_1,j_2)\, S_{j_1,j_3} S_{j_2,j_7}\, w(j_3,j_4,j_5,j_6)\, S_{j_4,j_8} S_{j_5,j_9} S_{j_6,j_{10}}\, w(j_7,j_8,j_9,j_{10})$$
The second and third terms are equal after relabelling the summation indices and using the antisymmetry of $w$, so
$$= 24 \cdot 4 \cdot 3 \sum_{j_1,\dots,j_{10}} f(j_1,j_2)\, S_{j_1,j_3} S_{j_2,j_4}\, w(j_3,j_4,j_5,j_6)\, S_{j_5,j_7} S_{j_6,j_8}\, w(j_7,j_8,j_9,j_{10})\, S_{j_9,j_{10}}$$
$$\quad + 24 \cdot 4 \cdot 3 \sum_{j_1,\dots,j_{10}} f(j_1,j_2)\, S_{j_1,j_3} S_{j_2,j_7}\, w(j_3,j_4,j_5,j_6)\, S_{j_4,j_5} S_{j_6,j_8}\, w(j_7,j_8,j_9,j_{10})\, S_{j_9,j_{10}}$$
$$\quad - 8 \cdot 4! \sum_{j_1,\dots,j_{10}} f(j_1,j_2)\, S_{j_1,j_3} S_{j_2,j_7}\, w(j_3,j_4,j_5,j_6)\, S_{j_4,j_8} S_{j_5,j_9} S_{j_6,j_{10}}\, w(j_7,j_8,j_9,j_{10})$$
In summary, the answer to part (a) is

$$\mathcal{S}(0) = \sum_{j_1,j_2=1}^{D} f(j_1,j_2)\, S_{j_1,j_2}$$
$$\mathcal{S}'(0) = -12 \sum_{j_1,\dots,j_6=1}^{D} f(j_1,j_2)\, S_{j_1,j_3} S_{j_2,j_4}\, w(j_3,j_4,j_5,j_6)\, S_{j_5,j_6}$$
$$\mathcal{S}''(0) = 288 \sum_{j_1,\dots,j_{10}} f(j_1,j_2)\, S_{j_1,j_3} S_{j_2,j_4}\, w(j_3,j_4,j_5,j_6)\, S_{j_5,j_7} S_{j_6,j_8}\, w(j_7,j_8,j_9,j_{10})\, S_{j_9,j_{10}}$$
$$\quad + 288 \sum_{j_1,\dots,j_{10}} f(j_1,j_2)\, S_{j_1,j_3} S_{j_2,j_7}\, w(j_3,j_4,j_5,j_6)\, S_{j_4,j_5} S_{j_6,j_8}\, w(j_7,j_8,j_9,j_{10})\, S_{j_9,j_{10}}$$
$$\quad - 192 \sum_{j_1,\dots,j_{10}} f(j_1,j_2)\, S_{j_1,j_3} S_{j_2,j_7}\, w(j_3,j_4,j_5,j_6)\, S_{j_4,j_8} S_{j_5,j_9} S_{j_6,j_{10}}\, w(j_7,j_8,j_9,j_{10})$$

b) First observe that

$$W(a+b) - W(a) = \sum_{j_1,j_2,j_3,j_4=1}^{D} w(j_1,j_2,j_3,j_4)\Big[\, b_{j_1} b_{j_2} b_{j_3} b_{j_4} + 4\, b_{j_1} b_{j_2} b_{j_3} a_{j_4} + 6\, b_{j_1} b_{j_2} a_{j_3} a_{j_4} + 4\, b_{j_1} a_{j_2} a_{j_3} a_{j_4}\,\Big]$$

is of degree at least one in $b$. By Problem I.19 and Proposition I.31.a, $\int :b_I:\, b_J\, d\mu_S(b)$ vanishes unless the degree of $J$ is at least as large as the degree of $I$. Hence
$$\frac{d^\ell}{d\lambda^\ell} R(\lambda)\Big|_{\lambda=0} = 0 \qquad\text{for all } \ell \ge 3$$
Also
$$R(0) = \int :0:_b\, F(b)\, d\mu_S(b) = 0$$
That leaves the nontrivial cases

$$R'(0) = \int :\big[ W(a+b) - W(a) \big]:_b\, F(b)\, d\mu_S(b) = 6 \sum_{j_1,\dots,j_6} f(j_1,j_2)\, w(j_3,j_4,j_5,j_6) \int :b_{j_3} b_{j_4} a_{j_5} a_{j_6}:_b\ b_{j_1} b_{j_2}\, d\mu_S(b)$$
$$= -12 \sum_{j_1,\dots,j_6} f(j_1,j_2)\, S_{j_1,j_3} S_{j_2,j_4}\, w(j_3,j_4,j_5,j_6)\, a_{j_5} a_{j_6}$$
and
$$R''(0) = \int :\big[ W(a+b) - W(a) \big]^2:_b\, F(b)\, d\mu_S(b) = 16 \sum_{j_1,\dots,j_{10}} f(j_1,j_2)\, w(j_3,j_4,j_5,j_6)\, w(j_7,j_8,j_9,j_{10}) \int :b_{j_3} a_{j_4} a_{j_5} a_{j_6}\, b_{j_7} a_{j_8} a_{j_9} a_{j_{10}}:_b\ b_{j_1} b_{j_2}\, d\mu_S(b)$$
$$= 32 \sum_{j_1,\dots,j_{10}} f(j_1,j_2)\, S_{j_1,j_3} S_{j_2,j_7}\, w(j_3,j_4,j_5,j_6)\, a_{j_4} a_{j_5} a_{j_6}\; w(j_7,j_8,j_9,j_{10})\, a_{j_8} a_{j_9} a_{j_{10}}$$

Problem II.2 Define, for all $f : \mathcal{M}_r \to \mathbb{C}$ and $g : \mathcal{M}_s \to \mathbb{C}$ with $r, s \ge 1$ and $r + s > 2$, the convolution $f * g : \mathcal{M}_{r+s-2} \to \mathbb{C}$ by
$$f * g(j_1, \dots, j_{r+s-2}) = \sum_{k=1}^{D} f(j_1, \dots, j_{r-1}, k)\, g(k, j_r, \dots, j_{r+s-2})$$
Prove that
$$\| f * g \| \le \| f \|\, \| g \| \qquad\text{and}\qquad ||| f * g ||| \le \min\big\{\, ||| f |||\, \| g \|,\ \| f \|\, ||| g |||\,\big\}$$

Solution. By definition,

$$\| f * g \| = \max_{1 \le i \le r+s-2}\ \max_{1 \le \ell \le D} \sum_{\substack{1 \le j_1, \dots, j_{r+s-2} \le D \\ j_i = \ell}} \big| f * g(j_1, \dots, j_{r+s-2}) \big|$$

We consider the case $1 \le i \le r-1$; the case $r \le i \le r+s-2$ is similar. For any $1 \le i \le r-1$,
$$\max_{1 \le \ell \le D} \sum_{\substack{1 \le j_1,\dots,j_{r+s-2} \le D \\ j_i = \ell}} \big| f * g(j_1,\dots,j_{r+s-2}) \big| \le \max_{1 \le \ell \le D} \sum_{\substack{1 \le j_1,\dots,j_{r+s-2} \le D \\ j_i = \ell}} \sum_{k=1}^{D} \big| f(j_1,\dots,j_{r-1},k) \big|\, \big| g(k,j_r,\dots,j_{r+s-2}) \big|$$
$$= \max_{1 \le \ell \le D} \sum_{\substack{1 \le j_1,\dots,j_{r-1} \le D \\ j_i = \ell}} \sum_{k=1}^{D} \big| f(j_1,\dots,j_{r-1},k) \big| \sum_{1 \le j_r,\dots,j_{r+s-2} \le D} \big| g(k,j_r,\dots,j_{r+s-2}) \big|$$
$$\le \max_{1 \le \ell \le D} \sum_{\substack{1 \le j_1,\dots,j_{r-1} \le D \\ j_i = \ell}} \sum_{k=1}^{D} \big| f(j_1,\dots,j_{r-1},k) \big| \Big[ \max_{1 \le k \le D} \sum_{1 \le j_r,\dots,j_{r+s-2} \le D} \big| g(k,j_r,\dots,j_{r+s-2}) \big| \Big]$$
$$\le \max_{1 \le \ell \le D} \sum_{\substack{1 \le j_1,\dots,j_{r-1} \le D \\ j_i = \ell}} \sum_{k=1}^{D} \big| f(j_1,\dots,j_{r-1},k) \big|\ \| g \| \le \| f \|\, \| g \|$$
We prove $||| f * g ||| \le ||| f |||\, \| g \|$; the other case is similar.
$$||| f * g ||| = \sum_{1 \le j_1,\dots,j_{r+s-2} \le D} \big| f * g(j_1,\dots,j_{r+s-2}) \big| \le \sum_{1 \le j_1,\dots,j_{r+s-2} \le D} \sum_{k=1}^{D} \big| f(j_1,\dots,j_{r-1},k) \big|\, \big| g(k,j_r,\dots,j_{r+s-2}) \big|$$
$$= \sum_{k} \sum_{j_1,\dots,j_{r-1}} \big| f(j_1,\dots,j_{r-1},k) \big| \sum_{j_r,\dots,j_{r+s-2}} \big| g(k,j_r,\dots,j_{r+s-2}) \big| \le \sum_{k} \sum_{j_1,\dots,j_{r-1}} \big| f(j_1,\dots,j_{r-1},k) \big| \Big[ \max_{k} \sum_{j_r,\dots,j_{r+s-2}} \big| g(k,j_r,\dots,j_{r+s-2}) \big| \Big]$$
$$\le \sum_{k} \sum_{j_1,\dots,j_{r-1}} \big| f(j_1,\dots,j_{r-1},k) \big|\ \| g \| = ||| f |||\, \| g \|$$
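Both inequalities are easy to test on random tensors. The sketch below is illustrative only (the shapes, the generator seed, and the helper names are arbitrary choices); it realizes $\|\cdot\|$ as the maximal slice sum and $|||\cdot|||$ as the full $\ell^1$ norm, exactly as defined above:

```python
import numpy as np

rng = np.random.default_rng(1)
D, r, s = 5, 3, 2

f = rng.standard_normal((D,) * r)
g = rng.standard_normal((D,) * s)

# (f*g)(j_1..j_{r+s-2}) = sum_k f(j_1..j_{r-1}, k) g(k, j_r..j_{r+s-2})
fg = np.tensordot(f, g, axes=([r - 1], [0]))

def mixed_norm(t):
    # ||t|| = max over slot i and value l of the l1 norm of the slice j_i = l
    return max(np.abs(np.moveaxis(t, i, 0)[l]).sum()
               for i in range(t.ndim) for l in range(D))

def l1_norm(t):
    # |||t||| = sum of |t| over all entries
    return np.abs(t).sum()

assert fg.shape == (D,) * (r + s - 2)
assert mixed_norm(fg) <= mixed_norm(f) * mixed_norm(g) + 1e-12
assert l1_norm(fg) <= min(l1_norm(f) * mixed_norm(g),
                          mixed_norm(f) * l1_norm(g)) + 1e-12
```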

Problem II.3 Let $:\ \cdot\ :_S$ denote Wick ordering with respect to the covariance $S$.
a) Prove that if
$$\Big| \int b_H\, {:b_J:}_S\, d\mu_S(b) \Big| \le F^{|H| + |J|} \qquad\text{for all } H, J \in \bigcup_{r \ge 0} \mathcal{M}_r$$
then
$$\Big| \int b_H\, {:b_J:}_{zS}\, d\mu_{zS}(b) \Big| \le \big( \sqrt{|z|}\, F \big)^{|H| + |J|} \qquad\text{for all } H, J \in \bigcup_{r \ge 0} \mathcal{M}_r$$
b) Prove that if
$$\Big| \int b_H\, {:b_J:}_S\, d\mu_S(b) \Big| \le F^{|H| + |J|} \qquad\text{and}\qquad \Big| \int b_H\, {:b_J:}_T\, d\mu_T(b) \Big| \le G^{|H| + |J|}$$
for all $H, J \in \bigcup_{r \ge 0} \mathcal{M}_r$, then
$$\Big| \int b_H\, {:b_J:}_{S+T}\, d\mu_{S+T}(b) \Big| \le (F + G)^{|H| + |J|} \qquad\text{for all } H, J \in \bigcup_{r \ge 0} \mathcal{M}_r$$

Solution. a) For any $\zeta$ with $\zeta^2 = z$,
$$\int e^{\Sigma_i a_i b_i}\ {:e^{\Sigma_i c_i b_i}:}_{zS}\, d\mu_{zS}(b) = e^{\frac{1}{2} \Sigma_{ij} z c_i S_{ij} c_j} \int e^{\Sigma_i a_i b_i}\, e^{\Sigma_i c_i b_i}\, d\mu_{zS}(b)$$
$$= e^{-\frac{1}{2} \Sigma_{ij} z (a_i + c_i) S_{ij} (a_j + c_j)}\ e^{\frac{1}{2} \Sigma_{ij} z c_i S_{ij} c_j}$$
$$= e^{-\frac{1}{2} \Sigma_{ij} (\zeta a_i + \zeta c_i) S_{ij} (\zeta a_j + \zeta c_j)}\ e^{\frac{1}{2} \Sigma_{ij} (\zeta c_i) S_{ij} (\zeta c_j)}$$
$$= \int e^{\Sigma_i \zeta a_i b_i}\ {:e^{\Sigma_i \zeta c_i b_i}:}_{S}\, d\mu_{S}(b)$$
Differentiating with respect to $a_i$, $i \in H$, and $c_i$, $i \in J$, gives
$$\int b_H\, {:b_J:}_{zS}\, d\mu_{zS}(b) = \zeta^{|H| + |J|} \int b_H\, {:b_J:}_S\, d\mu_S(b) = z^{(|H| + |J|)/2} \int b_H\, {:b_J:}_S\, d\mu_S(b)$$
Note that if $|H| + |J|$ is not even, the integral vanishes. So we need not worry about which square root to take.

b) Let $f(b) = b_J$. Then
$$\int b_H\, {:b_J:}_{S+T}\, d\mu_{S+T}(b) = \int\!\!\int (a+b)_H\ {:f:}_{S+T}(a+b)\, d\mu_S(a)\, d\mu_T(b) \qquad\text{by Proposition I.21}$$
$$= \int\!\!\int (a+b)_H\ {:(a+b)_J:}_{\substack{a,S \\ b,T}}\, d\mu_S(a)\, d\mu_T(b) \qquad\text{by Problem I.21}$$
$$= \sum_{H' \subset H} \sum_{J' \subset J} \pm \int a_{H'}\, {:a_{J'}:}_S\, d\mu_S(a) \int b_{H \setminus H'}\, {:b_{J \setminus J'}:}_T\, d\mu_T(b)$$
Hence
$$\Big| \int b_H\, {:b_J:}_{S+T}\, d\mu_{S+T}(b) \Big| \le \sum_{H' \subset H} \sum_{J' \subset J} F^{|H'| + |J'|}\, G^{|H \setminus H'| + |J \setminus J'|} = (F + G)^{|H| + |J|}$$

Problem II.4 Let $W(a) = \sum_{1 \le i,j \le D} w(i,j)\, a_i a_j$ with $w(i,j) = -w(j,i)$. Verify that
$$W(a+b) - W(a) = \sum_{1 \le i,j \le D} w(i,j)\, b_i b_j + 2 \sum_{1 \le i,j \le D} w(i,j)\, a_i b_j$$

Solution.
$$W(a+b) - W(a) = \sum_{1 \le i,j \le D} w(i,j)\, (a_i + b_i)(a_j + b_j) - \sum_{1 \le i,j \le D} w(i,j)\, a_i a_j$$
$$= \sum_{1 \le i,j \le D} w(i,j)\, b_i b_j + \sum_{1 \le i,j \le D} w(i,j)\, a_i b_j + \sum_{1 \le i,j \le D} w(i,j)\, b_i a_j$$
Now just substitute in
$$\sum_{1 \le i,j \le D} w(i,j)\, b_i a_j = -\sum_{1 \le i,j \le D} w(i,j)\, a_j b_i = -\sum_{1 \le i,j \le D} w(j,i)\, a_i b_j = \sum_{1 \le i,j \le D} w(i,j)\, a_i b_j$$

Problem II.5 Assume Hypothesis (HG). Let $s, s', m \ge 1$ and
$$f(b) = \sum_{H \in \mathcal{M}_m} f_m(H)\, b_H \qquad W(b) = \sum_{K \in \mathcal{M}_s} w_s(K)\, b_K \qquad W'(b) = \sum_{K' \in \mathcal{M}_{s'}} w'_{s'}(K')\, b_{K'}$$
a) Prove that
$$\Big| \int {:W(b):}_b\, f(b)\, d\mu_S(b) \Big| \le m\, F^{m+s-2}\, ||| f_m |||\, \| S \|\, \| w_s \|$$
b) Prove that $\int {:W(b) W'(b):}_b\, f(b)\, d\mu_S(b) = 0$ if $m = 1$ and that, for $m \ge 2$,
$$\Big| \int {:W(b) W'(b):}_b\, f(b)\, d\mu_S(b) \Big| \le m(m-1)\, F^{m+s+s'-4}\, ||| f_m |||\, \| S \|^2\, \| w_s \|\, \| w'_{s'} \|$$

Solution. a) By definition,
$$\int {:W(b):}_b\, f(b)\, d\mu_S(b) = \sum_{H \in \mathcal{M}_m} \sum_{K \in \mathcal{M}_s} w_s(K)\, f_m(H) \int {:b_K:}_b\ b_H\, d\mu_S(b)$$
$$= \sum_{H \in \mathcal{M}_m} \sum_{k=1}^{D} \sum_{\tilde K \in \mathcal{M}_{s-1}} w_s\big( \tilde K.(k) \big)\, f_m(H) \int {:b_{\tilde K}\, b_k:}_b\ b_H\, d\mu_S(b)$$
Recall that $\tilde K.(k)$, the concatenation of $\tilde K$ and $(k)$, is the element of $\mathcal{M}_s$ constructed by appending $k$ to the element $\tilde K$ of $\mathcal{M}_{s-1}$. By Problem I.19 (integration by parts) and Hypothesis (HG),
$$\Big| \int {:b_{\tilde K}\, b_k:}_b\ b_H\, d\mu_S(b) \Big| = \Big| \sum_{\ell=1}^{D} S_{k\ell} \int {:b_{\tilde K}:}_b\ \tfrac{\partial}{\partial b_\ell} b_H\, d\mu_S(b) \Big| = \Big| \sum_{j=1}^{m} (-1)^{j-1} S_{k,h_j} \int {:b_{\tilde K}:}_b\ b_{h_1, \dots, \widehat{h_j}, \dots, h_m}\, d\mu_S(b) \Big| \le \sum_{j=1}^{m} \big| S_{k,h_j} \big|\, F^{s+m-2}$$
We may assume, without loss of generality, that $f_m(H)$ is antisymmetric under permutation of its $m$ arguments. Hence
$$\Big| \int {:W(b):}_b\, f(b)\, d\mu_S(b) \Big| \le \sum_{H \in \mathcal{M}_m} \sum_{k=1}^{D} \sum_{\tilde K \in \mathcal{M}_{s-1}} \big| w_s\big( \tilde K.(k) \big) \big| \sum_{j=1}^{m} \big| S_{k,h_j} \big|\, \big| f_m(H) \big|\, F^{s+m-2}$$
$$= m \sum_{H \in \mathcal{M}_m} \sum_{k=1}^{D} \Big[ \sum_{\tilde K \in \mathcal{M}_{s-1}} \big| w_s\big( \tilde K.(k) \big) \big| \Big]\, \big| S_{k,h_1} \big|\, \big| f_m(H) \big|\, F^{s+m-2}$$
$$\le m\, \| w_s \| \sum_{H \in \mathcal{M}_m} \Big[ \sum_{k=1}^{D} \big| S_{k,h_1} \big| \Big]\, \big| f_m(H) \big|\, F^{s+m-2} \le m\, \| w_s \|\, \| S \| \sum_{H \in \mathcal{M}_m} \big| f_m(H) \big|\, F^{s+m-2} = m\, \| w_s \|\, \| S \|\, ||| f_m |||\, F^{s+m-2}$$
b) By definition,

$$\int {:W(b) W'(b):}_b\, f(b)\, d\mu_S(b) = \sum_{H \in \mathcal{M}_m} \sum_{\substack{K \in \mathcal{M}_s \\ K' \in \mathcal{M}_{s'}}} w_s(K)\, w'_{s'}(K')\, f_m(H) \int {:b_K b_{K'}:}_b\ b_H\, d\mu_S(b)$$
$$= \sum_{H \in \mathcal{M}_m} \sum_{k,k'=1}^{D} \sum_{\substack{\tilde K \in \mathcal{M}_{s-1} \\ \tilde K' \in \mathcal{M}_{s'-1}}} w_s\big( \tilde K.(k) \big)\, w'_{s'}\big( \tilde K'.(k') \big)\, f_m(H) \int {:b_{\tilde K}\, b_k\, b_{\tilde K'}\, b_{k'}:}_b\ b_H\, d\mu_S(b)$$
By Problem I.19 (integration by parts), twice, and Hypothesis (HG),
$$\Big| \int {:b_{\tilde K} b_k b_{\tilde K'} b_{k'}:}_b\ b_H\, d\mu_S(b) \Big| = \Big| \sum_{\ell=1}^{D} S_{k'\ell} \int {:b_{\tilde K} b_k b_{\tilde K'}:}_b\ \tfrac{\partial}{\partial b_\ell} b_H\, d\mu_S(b) \Big|$$
$$= \Big| \sum_{j'=1}^{m} (-1)^{j'-1} S_{k',h_{j'}} \int {:b_{\tilde K} b_k b_{\tilde K'}:}_b\ b_{h_1, \dots, \widehat{h_{j'}}, \dots, h_m}\, d\mu_S(b) \Big| = \Big| \sum_{j'=1}^{m} \sum_{\substack{j=1 \\ j \ne j'}}^{m} \pm\, S_{k',h_{j'}}\, S_{k,h_j} \int {:b_{\tilde K} b_{\tilde K'}:}_b\ b_{H \setminus \{h_j, h_{j'}\}}\, d\mu_S(b) \Big|$$
$$\le \sum_{j'=1}^{m} \sum_{\substack{j=1 \\ j \ne j'}}^{m} \big| S_{k,h_j} \big|\, \big| S_{k',h_{j'}} \big|\, F^{s+s'+m-4}$$
Again, we may assume, without loss of generality, that $f_m(H)$ is antisymmetric under permutation of its $m$ arguments, so that
$$\Big| \int {:W(b) W'(b):}_b\, f(b)\, d\mu_S(b) \Big| \le \sum_{H \in \mathcal{M}_m} \sum_{k,k'=1}^{D} \sum_{\substack{\tilde K \in \mathcal{M}_{s-1} \\ \tilde K' \in \mathcal{M}_{s'-1}}} \big| w_s\big( \tilde K.(k) \big) \big|\, \big| w'_{s'}\big( \tilde K'.(k') \big) \big| \sum_{j'=1}^{m} \sum_{\substack{j=1 \\ j \ne j'}}^{m} \big| S_{k,h_j} \big|\, \big| S_{k',h_{j'}} \big|\, \big| f_m(H) \big|\, F^{s+s'+m-4}$$
$$= m(m-1) \sum_{H \in \mathcal{M}_m} \sum_{k,k'=1}^{D} \Big[ \sum_{\tilde K \in \mathcal{M}_{s-1}} \big| w_s\big( \tilde K.(k) \big) \big| \Big] \Big[ \sum_{\tilde K' \in \mathcal{M}_{s'-1}} \big| w'_{s'}\big( \tilde K'.(k') \big) \big| \Big]\, \big| S_{k,h_1} \big|\, \big| S_{k',h_2} \big|\, \big| f_m(H) \big|\, F^{s+s'+m-4}$$
$$\le m(m-1)\, \| w_s \|\, \| w'_{s'} \| \sum_{H \in \mathcal{M}_m} \Big[ \sum_{k=1}^{D} \big| S_{k,h_1} \big| \Big] \Big[ \sum_{k'=1}^{D} \big| S_{k',h_2} \big| \Big]\, \big| f_m(H) \big|\, F^{s+s'+m-4}$$
$$\le m(m-1)\, \| w_s \|\, \| w'_{s'} \|\, \| S \|^2 \sum_{H \in \mathcal{M}_m} \big| f_m(H) \big|\, F^{s+s'+m-4} = m(m-1)\, \| w_s \|\, \| w'_{s'} \|\, \| S \|^2\, ||| f_m |||\, F^{s+s'+m-4}$$

Problem II.6 Let $M > 1$. Construct a function $\nu \in C_0^\infty\big( [M^{-2}, M^2] \big)$ that takes values in $[0,1]$, is identically 1 on $[M^{-1/2}, M^{1/2}]$ and obeys
$$\sum_{j=0}^{\infty} \nu\big( M^{2j} x \big) = 1$$
for $0 < x < 1$.

Solution. The function
$$\nu_1(x) = \begin{cases} e^{-1/x^2} & \text{if } x > 0 \\ 0 & \text{if } x \le 0 \end{cases}$$
is $C^\infty$. Then
$$\nu_2(x) = \nu_1(-x)\, \nu_1(x+1)$$
is $C^\infty$, strictly positive for $-1 < x < 0$ and vanishes for $x \ge 0$ and $x \le -1$,
$$\nu_3(x) = \frac{\int_{-1}^{x} \nu_2(y)\, dy}{\int_{-1}^{0} \nu_2(y)\, dy}$$
is $C^\infty$, vanishes for $x \le -1$, is strictly increasing for $-1 < x < 0$ and identically one for $x \ge 0$, and
$$\nu_4(x) = \nu_3(1-x)\, \nu_3(x+1)$$
is $C^\infty$, vanishing for $|x| \ge 2$, identically one for $|x| \le 1$ and monotone for $1 \le |x| \le 2$. Observe that, for any $x$, at most two terms in the sum
$$\bar\nu(x) = \sum_{m=-\infty}^{\infty} \nu_4(x + 3m)$$
are nonzero, because the support of $\nu_4$ has width $4 < 2 \times 3$. Hence the sum always converges. On the other hand, as the width of the support is precisely $4 > 3$, at least one term is nonzero, so that $\bar\nu(x) > 0$. As $\bar\nu$ is periodic, it is uniformly bounded away from zero. Furthermore, for $|x| \le 1$, we have $|x + 3m| \ge 2$ for all $m \ne 0$, so that $\bar\nu(x) = \nu_4(x) = 1$ for all $|x| \le 1$. Hence
$$\nu_5(x) = \frac{\nu_4(x)}{\bar\nu(x)}$$
is $C^\infty$, nonnegative, vanishing for $|x| \ge 2$, identically one for $|x| \le 1$ and obeys
$$\sum_{m=-\infty}^{\infty} \nu_5(x + 3m) = \frac{1}{\bar\nu(x)} \sum_{m=-\infty}^{\infty} \nu_4(x + 3m) = 1$$
Let $L > 0$ and set
$$\nu(x) = \nu_5\big( \tfrac{1}{L} \ln x \big)$$
Then $\nu(x)$ is $C^\infty$ on $x > 0$, is supported on $\big| \tfrac{1}{L} \ln x \big| \le 2$, i.e. for $e^{-2L} \le x \le e^{2L}$, is identically one for $\big| \tfrac{1}{L} \ln x \big| \le 1$, i.e. for $e^{-L} \le x \le e^{L}$, and obeys
$$\sum_{m=-\infty}^{\infty} \nu\big( e^{3Lm} x \big) = \sum_{m=-\infty}^{\infty} \nu_5\big( \tfrac{1}{L} \ln x + 3m \big) = 1 \qquad\text{for all } x > 0$$
Finally, just set $M = e^{3L/2}$ and observe that $e^{2L} \le M^2$, $e^{L} \ge M^{1/2}$ and that when $x < 1$ only those terms with $m \ge 0$ contribute to $\sum_{m=-\infty}^{\infty} \nu\big( e^{3Lm} x \big)$.
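The construction can also be carried out numerically. The following sketch is illustrative only: it tabulates $\nu_3$ by trapezoidal quadrature rather than in closed form, fixes $L = 1$ and $M = e^{3L/2}$, and checks the partition-of-unity identity on a sample of points in $(0,1)$:

```python
import numpy as np

def nu1(x):
    # e^{-1/x^2} for x > 0, 0 otherwise: C-infinity but not analytic at 0
    x = np.asarray(x, dtype=float)
    safe = np.where(x > 0, x, 1.0)
    return np.where(x > 0, np.exp(-1.0 / safe**2), 0.0)

def nu2(x):
    x = np.asarray(x, dtype=float)
    return nu1(-x) * nu1(x + 1.0)

# nu3: normalized primitive of nu2 (0 for x <= -1, 1 for x >= 0),
# tabulated once on a grid by trapezoidal quadrature
_g = np.linspace(-1.0, 0.0, 2001)
_v = nu2(_g)
_c = np.concatenate([[0.0], np.cumsum((_v[1:] + _v[:-1]) / 2 * np.diff(_g))])
_c /= _c[-1]

def nu3(x):
    return np.interp(x, _g, _c, left=0.0, right=1.0)

def nu4(x):
    x = np.asarray(x, dtype=float)
    return nu3(1.0 - x) * nu3(x + 1.0)

def nubar(x):
    # periodic sum: reduce mod 3 so three shifts always cover the support
    y = (np.asarray(x, dtype=float) + 1.5) % 3.0 - 1.5
    return sum(nu4(y + 3.0 * m) for m in (-1, 0, 1))

def nu5(x):
    return nu4(x) / nubar(x)

L = 1.0
M = np.exp(1.5 * L)          # M = e^{3L/2}

def nu(x):
    return nu5(np.log(x) / L)

# partition of unity: sum_{j >= 0} nu(M^{2j} x) = 1 for 0 < x < 1
xs = np.linspace(0.01, 0.99, 49)
total = sum(nu(M ** (2 * j) * xs) for j in range(20))
assert np.allclose(total, 1.0)
# identically one on [M^{-1/2}, M^{1/2}], supported inside [M^{-2}, M^2]
assert np.isclose(float(nu(M ** 0.5)), 1.0) and float(nu(M ** 2.5)) == 0.0
```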

Problem II.7 Prove that
$$\Big\| \tfrac{\not p + m}{p^2 + m^2} \Big\| = \tfrac{1}{\sqrt{p^2 + m^2}}$$

Solution. For any matrix $M$, $\| M \|^2 = \| M^* M \|$, where $M^*$ is the adjoint (complex conjugate of the transpose) of $M$. As
$$\not p^{\,*} = \begin{pmatrix} i p_0 & p_1 \\ -p_1 & -i p_0 \end{pmatrix}^* = \begin{pmatrix} -i p_0 & -p_1 \\ p_1 & i p_0 \end{pmatrix} = -\not p$$
and
$$\not p^{\,2} = \begin{pmatrix} i p_0 & p_1 \\ -p_1 & -i p_0 \end{pmatrix} \begin{pmatrix} i p_0 & p_1 \\ -p_1 & -i p_0 \end{pmatrix} = -\begin{pmatrix} p_0^2 + p_1^2 & 0 \\ 0 & p_0^2 + p_1^2 \end{pmatrix} = -p^2\, 1\!\mathrm{l}$$
we have
$$\Big\| \tfrac{\not p + m}{p^2 + m^2} \Big\|^2 = \tfrac{1}{(p^2+m^2)^2} \big\| (\not p^{\,*} + m)(\not p + m) \big\| = \tfrac{1}{(p^2+m^2)^2} \big\| (-\not p + m)(\not p + m) \big\| = \tfrac{1}{(p^2+m^2)^2} \big\| -\not p^{\,2} + m^2 \big\| = \tfrac{1}{(p^2+m^2)^2} \big( p^2 + m^2 \big) \big\| 1\!\mathrm{l} \big\| = \tfrac{1}{p^2 + m^2}$$
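The identity is easy to confirm numerically. The sketch below uses the explicit $2 \times 2$ realization of $\not p$ appearing in the computation above (an assumption of this check) and evaluates the operator norm as the largest singular value:

```python
import numpy as np

def pslash(p0, p1):
    # the 2x2 realization used above: anti-selfadjoint, squares to -(p0^2+p1^2)
    return np.array([[1j * p0, p1], [-p1, -1j * p0]])

p0, p1, m = 0.7, -1.3, 0.5
p2 = p0**2 + p1**2

# pslash^* = -pslash and pslash^2 = -p^2 * identity
assert np.allclose(pslash(p0, p1).conj().T, -pslash(p0, p1))
assert np.allclose(pslash(p0, p1) @ pslash(p0, p1), -p2 * np.eye(2))

# || (pslash + m) / (p^2 + m^2) || = 1 / sqrt(p^2 + m^2)
A = (pslash(p0, p1) + m * np.eye(2)) / (p2 + m**2)
assert np.isclose(np.linalg.norm(A, 2), 1.0 / np.sqrt(p2 + m**2))
```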

Appendix A. Infinite Dimensional Grassmann Algebras

Problem A.1 Let $\mathcal{I}$ be any ordered countable set and $\mathbb{I}$ the set of all finite subsets of $\mathcal{I}$ (including the empty set). Each $I \in \mathbb{I}$ inherits an ordering from $\mathcal{I}$. Let $w : \mathcal{I} \to (0, \infty)$ be any strictly positive function on $\mathcal{I}$ and set
$$W_I = \prod_{i \in I} w_i$$
with the convention $W_\emptyset = 1$. Define
$$\mathcal{V} = \ell^1(\mathcal{I}, w) = \Big\{ \alpha : \mathcal{I} \to \mathbb{C}\ \Big|\ \sum_{i \in \mathcal{I}} w_i\, |\alpha_i| < \infty \Big\}$$
and "the Grassmann algebra generated by $\mathcal{V}$"
$$\mathcal{A}(\mathcal{I}, w) = \ell^1(\mathbb{I}, W) = \Big\{ \alpha : \mathbb{I} \to \mathbb{C}\ \Big|\ \sum_{I \in \mathbb{I}} W_I\, |\alpha_I| < \infty \Big\}$$
The multiplication is
$$(\alpha\beta)_I = \sum_{J \subset I} \mathrm{sgn}(J, I \setminus J)\, \alpha_J\, \beta_{I \setminus J}$$
where $\mathrm{sgn}(J, I \setminus J)$ is the sign of the permutation that reorders $(J, I \setminus J)$ to $I$. The norm $\| \alpha \| = \sum_{I \in \mathbb{I}} W_I\, |\alpha_I|$ turns $\mathcal{A}(\mathcal{I}, w)$ into a Banach space.

a) Show that $\| \alpha\beta \| \le \| \alpha \|\, \| \beta \|$.

b) Show that if $f : \mathbb{C} \to \mathbb{C}$ is any function that is defined and analytic in a neighbourhood of 0, then the power series $f(\alpha) = \sum_{n=0}^{\infty} \frac{1}{n!} f^{(n)}(0)\, \alpha^n$ converges for all $\alpha \in \mathcal{A}$ with $\| \alpha \|$ smaller than the radius of convergence of $f$.

c) Prove that $\mathcal{A}_f(\mathcal{I}) = \big\{ \alpha : \mathbb{I} \to \mathbb{C}\ \big|\ \alpha_I = 0 \text{ for all but finitely many } I \big\}$ is a dense subalgebra of $\mathcal{A}(\mathcal{I}, w)$.

Solution. a)
$$\| \alpha\beta \| = \sum_{I \in \mathbb{I}} W_I\, \big| (\alpha\beta)_I \big| = \sum_{I \in \mathbb{I}} W_I\, \Big| \sum_{J \subset I} \mathrm{sgn}(J, I \setminus J)\, \alpha_J\, \beta_{I \setminus J} \Big| = \sum_{I \in \mathbb{I}} \Big| \sum_{J \subset I} \mathrm{sgn}(J, I \setminus J)\, W_J \alpha_J\, W_{I \setminus J} \beta_{I \setminus J} \Big|$$
$$\le \sum_{I \in \mathbb{I}} \sum_{J \subset I} W_J\, |\alpha_J|\, W_{I \setminus J}\, |\beta_{I \setminus J}| \le \sum_{I, J \in \mathbb{I}} W_J\, |\alpha_J|\, W_I\, |\beta_I| = \| \alpha \|\, \| \beta \|$$

b)
$$\| f(\alpha) \| = \Big\| \sum_{n=0}^{\infty} \tfrac{1}{n!}\, f^{(n)}(0)\, \alpha^n \Big\| \le \sum_{n=0}^{\infty} \tfrac{1}{n!}\, \big| f^{(n)}(0) \big|\, \| \alpha \|^n$$
converges if $\| \alpha \|$ is strictly smaller than the radius of convergence of $f$.

c) $\mathcal{A}_f(\mathcal{I})$ is obviously closed under addition, multiplication and multiplication by complex numbers, so it suffices to prove that $\mathcal{A}_f(\mathcal{I})$ is dense in $\mathcal{A}(\mathcal{I}, w)$. $\mathbb{I}$ is a countable set. Index its elements $I_1, I_2, I_3, \dots$. Let $\alpha \in \mathcal{A}(\mathcal{I}, w)$ and define, for each $n \in \mathbb{N}$, $\alpha^{(n)} \in \mathcal{A}_f(\mathcal{I})$ by
$$\alpha^{(n)}_{I_j} = \begin{cases} \alpha_{I_j} & \text{if } j \le n \\ 0 & \text{otherwise} \end{cases}$$
Then
$$\lim_{n \to \infty} \big\| \alpha - \alpha^{(n)} \big\| = \lim_{n \to \infty} \sum_{j=n+1}^{\infty} W_{I_j}\, \big| \alpha_{I_j} \big| = 0$$
since $\sum_{j=1}^{\infty} W_{I_j}\, \big| \alpha_{I_j} \big|$ converges.
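A finite truncation of this algebra can be implemented directly, which makes the norm estimate of part (a) testable. The sketch below is illustrative only (the dictionary representation, the four-generator index set and the random weights are arbitrary choices); keys are increasingly sorted tuples of generator indices:

```python
import itertools
import math
import random

def perm_sign(seq):
    # sign of the permutation that sorts seq (distinct entries): inversion count
    inv = sum(1 for i in range(len(seq)) for j in range(i + 1, len(seq))
              if seq[i] > seq[j])
    return -1 if inv % 2 else 1

def gr_mul(alpha, beta):
    # (alpha beta)_I = sum_{J subset I} sgn(J, I \ J) alpha_J beta_{I \ J}
    out = {}
    for I, aI in alpha.items():
        for J, bJ in beta.items():
            if set(I) & set(J):
                continue              # repeated generator: a_i a_i = 0
            K = tuple(sorted(I + J))
            out[K] = out.get(K, 0.0) + perm_sign(I + J) * aI * bJ
    return out

def norm(alpha, w):
    # ||alpha|| = sum_I W_I |alpha_I| with W_I = prod_{i in I} w_i
    return sum(math.prod(w[i] for i in I) * abs(c) for I, c in alpha.items())

random.seed(1)
n = 4
w = [random.uniform(0.5, 2.0) for _ in range(n)]
subsets = [s for r in range(n + 1) for s in itertools.combinations(range(n), r)]
alpha = {I: random.gauss(0, 1) for I in subsets}
beta = {I: random.gauss(0, 1) for I in subsets}

# generators anticommute
assert gr_mul({(0,): 1.0}, {(1,): 1.0}) == {(0, 1): 1.0}
assert gr_mul({(1,): 1.0}, {(0,): 1.0}) == {(0, 1): -1.0}
# part (a): the norm is submultiplicative
assert norm(gr_mul(alpha, beta), w) <= norm(alpha, w) * norm(beta, w) + 1e-12
```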

Problem A.2 Let $\mathcal{I}$ be any ordered countable set and $\mathbb{I}$ the set of all finite subsets of $\mathcal{I}$. Let
$$\mathcal{S} = \big\{ \alpha : \mathbb{I} \to \mathbb{C} \big\}$$
be the set of all sequences indexed by $\mathbb{I}$. Observe that our standard product $(\alpha\beta)_I = \sum_{J \subset I} \mathrm{sgn}(J, I \setminus J)\, \alpha_J\, \beta_{I \setminus J}$ is well-defined on $\mathcal{S}$: for each $I \in \mathbb{I}$, $\sum_{J \subset I}$ is a finite sum. We now define, for each integer $n$, a norm on (a subset of) $\mathcal{S}$ by
$$\| \alpha \|_n = \sum_{I \in \mathbb{I}} 2^{n|I|}\, |\alpha_I|$$
It is defined for all $\alpha \in \mathcal{S}$ for which the series converges. Define
$$\mathcal{A}_\cap = \big\{ \alpha \in \mathcal{S}\ \big|\ \| \alpha \|_n < \infty \text{ for all } n \in \mathbb{Z} \big\} \qquad \mathcal{A}_\cup = \big\{ \alpha \in \mathcal{S}\ \big|\ \| \alpha \|_n < \infty \text{ for some } n \in \mathbb{Z} \big\}$$
a) Prove that if $\alpha, \beta \in \mathcal{A}_\cap$ then $\alpha\beta \in \mathcal{A}_\cap$.
b) Prove that if $\alpha, \beta \in \mathcal{A}_\cup$ then $\alpha\beta \in \mathcal{A}_\cup$.
c) Prove that if $f(z)$ is an entire function and $\alpha \in \mathcal{A}_\cap$, then $f(\alpha) \in \mathcal{A}_\cap$.
d) Prove that if $f(z)$ is analytic at the origin and $\alpha \in \mathcal{A}_\cup$ has $|\alpha_\emptyset|$ strictly smaller than the radius of convergence of $f$, then $f(\alpha) \in \mathcal{A}_\cup$.

Solution. a), b) By Problem A.1 with $w_i = 2^n$, $\| \alpha\beta \|_n \le \| \alpha \|_n\, \| \beta \|_n$. Part (a) follows immediately. For part (b), let $\| \alpha \|_m, \| \beta \|_{m'} < \infty$. Then, since $\| \alpha \|_m \le \| \alpha \|_n$ for all $m \le n$,
$$\| \alpha\beta \|_{\min\{m,m'\}} \le \| \alpha \|_{\min\{m,m'\}}\, \| \beta \|_{\min\{m,m'\}} \le \| \alpha \|_m\, \| \beta \|_{m'} < \infty$$
c) For each $n \in \mathbb{Z}$,
$$\| f(\alpha) \|_n = \Big\| \sum_{k=0}^{\infty} \tfrac{1}{k!}\, f^{(k)}(0)\, \alpha^k \Big\|_n \le \sum_{k=0}^{\infty} \tfrac{1}{k!}\, \big| f^{(k)}(0) \big|\, \| \alpha \|_n^k$$
converges since $\| \alpha \|_n$ is finite and $f$ is entire.

d) Let $n \in \mathbb{Z}$ be such that $\| \alpha \|_n < \infty$. Such an $n$ exists because $\alpha \in \mathcal{A}_\cup$. Observe that, for each $m \le n$,
$$\| \alpha \|_m = \sum_{I \in \mathbb{I}} 2^{m|I|}\, |\alpha_I| = \sum_{I \in \mathbb{I}} 2^{-(n-m)|I|}\, 2^{n|I|}\, |\alpha_I| \le |\alpha_\emptyset| + 2^{-(n-m)} \sum_{I \ne \emptyset} 2^{n|I|}\, |\alpha_I| \le |\alpha_\emptyset| + 2^{-(n-m)}\, \| \alpha \|_n$$
Consequently, for all $\alpha \in \mathcal{A}_\cup$,
$$\lim_{m \to -\infty} \| \alpha \|_m = |\alpha_\emptyset|$$
If $|\alpha_\emptyset|$ is strictly smaller than the radius of convergence of $f$, then, for some $m$, $\| \alpha \|_m$ is also strictly smaller than the radius of convergence of $f$ and
$$\| f(\alpha) \|_m \le \sum_{k=0}^{\infty} \tfrac{1}{k!}\, \big| f^{(k)}(0) \big|\, \| \alpha \|_m^k < \infty$$

Problem A.3
a) Let $f(x) \in C^\infty\big( \mathbb{R}^d / L\mathbb{Z}^d \big)$. Define
$$\tilde f_p = \int_{\mathbb{R}^d / L\mathbb{Z}^d} dx\ f(x)\, e^{-i \langle p, x \rangle}$$
Prove that for every $\gamma \in \mathbb{N}$, there is a constant $C_{f,\gamma}$ such that
$$\big| \tilde f_p \big| \le C_{f,\gamma} \prod_{j=1}^{d} \big( 1 + p_j^2 \big)^{-\gamma} \qquad\text{for all } p \in \tfrac{2\pi}{L} \mathbb{Z}^d$$
b) Let $f$ be a distribution on $\mathbb{R}^d / L\mathbb{Z}^d$. For any $h \in C^\infty\big( \mathbb{R}^d / L\mathbb{Z}^d \big)$, set
$$h_R(x) = \tfrac{1}{L^d} \sum_{\substack{p \in \frac{2\pi}{L} \mathbb{Z}^d \\ |p| < R}} e^{i \langle p, x \rangle}\, \tilde h_p \qquad\text{where}\qquad \tilde h_p = \int_{\mathbb{R}^d / L\mathbb{Z}^d} dx\ h(x)\, e^{-i \langle p, x \rangle}$$
Prove that
$$\langle f, h \rangle = \lim_{R \to \infty} \langle f, h_R \rangle$$

Solution. a)
$$\prod_{j=1}^{d} \big( 1 + p_j^2 \big)^{\gamma}\, \tilde f_p = \int_{\mathbb{R}^d/L\mathbb{Z}^d} dx\ f(x) \prod_{j=1}^{d} \big( 1 + p_j^2 \big)^{\gamma}\, e^{-i \langle p, x \rangle} = \int_{\mathbb{R}^d/L\mathbb{Z}^d} dx\ f(x) \prod_{j=1}^{d} \Big( 1 - \tfrac{\partial^2}{\partial x_j^2} \Big)^{\gamma} e^{-i \langle p, x \rangle} = \int_{\mathbb{R}^d/L\mathbb{Z}^d} dx\ e^{-i \langle p, x \rangle} \prod_{j=1}^{d} \Big( 1 - \tfrac{\partial^2}{\partial x_j^2} \Big)^{\gamma} f(x)$$
by integration by parts. Note that there are no boundary terms arising from the integration by parts because $f$ is periodic. Hence
$$\prod_{j=1}^{d} \big( 1 + p_j^2 \big)^{\gamma}\, \big| \tilde f_p \big| \le C_{f,\gamma} \equiv \int_{\mathbb{R}^d/L\mathbb{Z}^d} dx\ \Big| \prod_{j=1}^{d} \Big( 1 - \tfrac{\partial^2}{\partial x_j^2} \Big)^{\gamma} f(x) \Big| < \infty$$

b) By (A.2),
$$\big| \langle f, h - h_R \rangle \big| = \Big| \Big\langle f,\ \tfrac{1}{L^d} \sum_{\substack{p \in \frac{2\pi}{L}\mathbb{Z}^d \\ |p| \ge R}} e^{i \langle p, x \rangle}\, \tilde h_p \Big\rangle \Big| \le C_f \sup_{x \in \mathbb{R}^d/L\mathbb{Z}^d} \Big| \prod_{j=1}^{d} \Big( 1 - \tfrac{d^2}{dx_j^2} \Big)^{\nu_f} \tfrac{1}{L^d} \sum_{\substack{p \in \frac{2\pi}{L}\mathbb{Z}^d \\ |p| \ge R}} e^{i \langle p, x \rangle}\, \tilde h_p \Big| \le \tfrac{C_f}{L^d} \sum_{\substack{p \in \frac{2\pi}{L}\mathbb{Z}^d \\ |p| \ge R}} \prod_{j=1}^{d} \big( 1 + p_j^2 \big)^{\nu_f}\, \big| \tilde h_p \big|$$
By part (a) of this problem, there is, for each $\gamma > 0$, a constant $C_{h,\gamma}$ such that
$$\big| \tilde h_p \big| \le C_{h,\gamma} \prod_{j=1}^{d} \big( 1 + p_j^2 \big)^{-\gamma} \qquad\text{for all } p \in \tfrac{2\pi}{L} \mathbb{Z}^d$$
Hence, choosing $\gamma = \nu_f + 1$,
$$\big| \langle f, h - h_R \rangle \big| \le C_{h,\gamma}\, \tfrac{C_f}{L^d} \sum_{\substack{p \in \frac{2\pi}{L}\mathbb{Z}^d \\ |p| \ge R}} \prod_{j=1}^{d} \big( 1 + p_j^2 \big)^{\nu_f} \big( 1 + p_j^2 \big)^{-\gamma} = C_{h,\gamma}\, \tfrac{C_f}{L^d} \sum_{\substack{p \in \frac{2\pi}{L}\mathbb{Z}^d \\ |p| \ge R}} \prod_{j=1}^{d} \big( 1 + p_j^2 \big)^{-1}$$
Since
$$\sum_{p \in \frac{2\pi}{L}\mathbb{Z}^d} \prod_{j=1}^{d} \big( 1 + p_j^2 \big)^{-1} = \Big[ \sum_{p_1 \in \frac{2\pi}{L}\mathbb{Z}} \tfrac{1}{1 + p_1^2} \Big]^d < \infty$$
we have
$$\lim_{R \to \infty} \sum_{\substack{p \in \frac{2\pi}{L}\mathbb{Z}^d \\ |p| \ge R}} \prod_{j=1}^{d} \big( 1 + p_j^2 \big)^{-1} = 0$$
and hence
$$\lim_{R \to \infty} \langle f, h - h_R \rangle = 0$$
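Part (a) is the familiar statement that smooth periodic functions have rapidly decaying Fourier coefficients, which is easy to see numerically. The sketch below is illustrative only: the test function $e^{\sin x}$ and the grid size are arbitrary choices, and only small values of $\gamma$ are checked because large $\gamma$ would require finer grids before floating-point roundoff intrudes:

```python
import numpy as np

L, N = 2 * np.pi, 512
x = np.arange(N) * L / N
f = np.exp(np.sin(x))                        # a C-infinity, L-periodic test function

fhat = np.fft.fft(f) * (L / N)               # approximates int f(x) e^{-ipx} dx
p = np.fft.fftfreq(N, d=L / N) * 2 * np.pi   # momenta in (2 pi / L) Z

# smoothness: (1 + p^2)^gamma |fhat_p| stays bounded for every gamma
for gamma in (1, 2):
    weighted = (1.0 + p**2) ** gamma * np.abs(fhat)
    assert weighted.max() < 50.0

# the coefficients themselves decay extremely fast
assert np.abs(fhat[np.abs(p) > 20]).max() < 1e-8
```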

Problem A.4 Let $f$ be a distribution on $\mathbb{R}^d / L\mathbb{Z}^d$. Suppose that, for each $\gamma \in \mathbb{N}$, there is a constant $C_{f,\gamma}$ such that
$$\big| \langle f, e^{-i \langle p, \cdot \rangle} \rangle \big| \le C_{f,\gamma} \prod_{j=1}^{d} \big( 1 + p_j^2 \big)^{-\gamma} \qquad\text{for all } p \in \tfrac{2\pi}{L} \mathbb{Z}^d$$
Prove that there is a $C^\infty$ function $F(x)$ on $\mathbb{R}^d / L\mathbb{Z}^d$ such that
$$\langle f, h \rangle = \int_{\mathbb{R}^d/L\mathbb{Z}^d} dx\ F(x)\, h(x)$$

Proof: Define $\tilde F_p = \langle f, e^{-i \langle p, \cdot \rangle} \rangle$ and $F(x) = \tfrac{1}{L^d} \sum_{q \in \frac{2\pi}{L}\mathbb{Z}^d} e^{i \langle q, x \rangle}\, \tilde F_q$. By hypothesis
$$\big| \tilde F_q \big| \le C_{f,\gamma} \prod_{j=1}^{d} \big( 1 + q_j^2 \big)^{-\gamma} \qquad\text{for all } q \in \tfrac{2\pi}{L}\mathbb{Z}^d \text{ and } \gamma \in \mathbb{N}$$
Consequently, all termwise derivatives of the series defining $F$ converge absolutely and uniformly in $x$. Hence $F \in C^\infty\big( \mathbb{R}^d / L\mathbb{Z}^d \big)$. Define the distribution $\varphi$ by
$$\langle \varphi, h \rangle = \int_{\mathbb{R}^d/L\mathbb{Z}^d} dx\ F(x)\, h(x)$$
We wish to show that $\varphi = f$. As
$$\langle \varphi, e^{-i \langle p, \cdot \rangle} \rangle = \int_{\mathbb{R}^d/L\mathbb{Z}^d} dx\ F(x)\, e^{-i \langle p, x \rangle} = \tfrac{1}{L^d} \sum_{q \in \frac{2\pi}{L}\mathbb{Z}^d} \tilde F_q \int_{\mathbb{R}^d/L\mathbb{Z}^d} dx\ e^{i \langle q - p, x \rangle} = \tfrac{1}{L^d} \sum_{q \in \frac{2\pi}{L}\mathbb{Z}^d} \tilde F_q\, L^d\, \delta_{p,q} = \tilde F_p = \langle f, e^{-i \langle p, \cdot \rangle} \rangle$$
we have that $\langle \varphi, h \rangle = \langle f, h \rangle$ for any $h$ that is a finite linear combination of $e^{-i \langle p, \cdot \rangle}$, $p \in \tfrac{2\pi}{L}\mathbb{Z}^d$. It now suffices to apply part (b) of Problem A.3.

Problem A.5 Prove that

a) $A A^\dagger f = A^\dagger A f + f$

b) $A h_0 = 0$

c) $A^\dagger A h_\ell = \ell\, h_\ell$

d) $\langle h_\ell, h_{\ell'} \rangle = \delta_{\ell,\ell'}$. Here $\langle f, g \rangle = \int f(x)\, g(x)\, dx$.

e) $x^2 - \tfrac{d^2}{dx^2} = 2 A^\dagger A + 1\!\mathrm{l}$

Solution. a) By definition and the fact that $\tfrac{d}{dx}(x f) = f + x \tfrac{df}{dx}$,
$$A A^\dagger f = \tfrac{1}{2} \Big( x + \tfrac{d}{dx} \Big) \Big( x - \tfrac{d}{dx} \Big) f = \tfrac{1}{2} \Big( x^2 + \tfrac{d}{dx} x - x \tfrac{d}{dx} - \tfrac{d^2}{dx^2} \Big) f = \tfrac{1}{2} \Big( x^2 + 1 - \tfrac{d^2}{dx^2} \Big) f$$
$$A^\dagger A f = \tfrac{1}{2} \Big( x - \tfrac{d}{dx} \Big) \Big( x + \tfrac{d}{dx} \Big) f = \tfrac{1}{2} \Big( x^2 - \tfrac{d}{dx} x + x \tfrac{d}{dx} - \tfrac{d^2}{dx^2} \Big) f = \tfrac{1}{2} \Big( x^2 - 1 - \tfrac{d^2}{dx^2} \Big) f \tag{D.3}$$
Subtracting,
$$A A^\dagger f - A^\dagger A f = f$$
b)
$$\sqrt{2}\, \pi^{1/4}\, A h_0 = \Big( x + \tfrac{d}{dx} \Big)\, e^{-\frac{1}{2} x^2} = x\, e^{-\frac{1}{2} x^2} - x\, e^{-\frac{1}{2} x^2} = 0$$
c) The proof is by induction on $\ell$. When $\ell = 0$,
$$A^\dagger A h_0 = 0 = 0 \cdot h_0$$
by part (b). If $A^\dagger A h_\ell = \ell\, h_\ell$, then, by part (a),
$$A^\dagger A h_{\ell+1} = \tfrac{1}{\sqrt{\ell+1}}\, A^\dagger A A^\dagger h_\ell = \tfrac{1}{\sqrt{\ell+1}}\, A^\dagger \big( A^\dagger A + 1 \big) h_\ell = \tfrac{1}{\sqrt{\ell+1}}\, A^\dagger \ell\, h_\ell + \tfrac{1}{\sqrt{\ell+1}}\, A^\dagger h_\ell = (\ell+1)\, \tfrac{1}{\sqrt{\ell+1}}\, A^\dagger h_\ell = (\ell+1)\, h_{\ell+1}$$
d) By part (c),
$$(\ell - \ell')\, \langle h_\ell, h_{\ell'} \rangle = \langle \ell h_\ell, h_{\ell'} \rangle - \langle h_\ell, \ell' h_{\ell'} \rangle = \langle A^\dagger A h_\ell, h_{\ell'} \rangle - \langle h_\ell, A^\dagger A h_{\ell'} \rangle = \langle h_\ell, A^\dagger A h_{\ell'} \rangle - \langle h_\ell, A^\dagger A h_{\ell'} \rangle = 0$$
since $A^\dagger$ and $A$ are adjoints. So, if $\ell \ne \ell'$, then $\langle h_\ell, h_{\ell'} \rangle = 0$. We prove that $\langle h_\ell, h_\ell \rangle = 1$ by induction on $\ell$. For $\ell = 0$,
$$\langle h_0, h_0 \rangle = \tfrac{1}{\sqrt{\pi}} \int e^{-x^2}\, dx = 1$$
If $\langle h_\ell, h_\ell \rangle = 1$, then
$$\langle h_{\ell+1}, h_{\ell+1} \rangle = \tfrac{1}{\ell+1}\, \langle A^\dagger h_\ell, A^\dagger h_\ell \rangle = \tfrac{1}{\ell+1}\, \langle h_\ell, A A^\dagger h_\ell \rangle = \tfrac{1}{\ell+1}\, \langle h_\ell, \big( A^\dagger A + 1 \big) h_\ell \rangle = \tfrac{1}{\ell+1}\, \langle h_\ell, (\ell + 1)\, h_\ell \rangle = 1$$
e) Adding the two rows of (D.3) gives
$$A A^\dagger f + A^\dagger A f = \Big( x^2 - \tfrac{d^2}{dx^2} \Big) f \qquad\text{or}\qquad x^2 - \tfrac{d^2}{dx^2} = A A^\dagger + A^\dagger A$$
Part (a) may be expressed as $A A^\dagger = A^\dagger A + 1$. Substituting this in gives part (e).
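Parts (b) through (e) can be checked numerically with the classical closed form $h_\ell(x) = \big( 2^\ell\, \ell!\, \sqrt{\pi} \big)^{-1/2} H_\ell(x)\, e^{-x^2/2}$, where $H_\ell$ is the physicists' Hermite polynomial. The sketch below is illustrative only: the grid, the tolerances, and the use of finite differences for $d^2/dx^2$ are ad hoc choices:

```python
import math

import numpy as np
from numpy.polynomial.hermite import hermval

# quadrature grid; the Hermite functions are negligible beyond |x| = 12
x, dx = np.linspace(-12.0, 12.0, 6001, retstep=True)

def h(l):
    # l-th Hermite function: normalized H_l(x) e^{-x^2/2}
    c = np.zeros(l + 1)
    c[l] = 1.0
    norm = math.sqrt(2.0**l * math.factorial(l) * math.sqrt(math.pi))
    return hermval(x, c) * np.exp(-x**2 / 2) / norm

# d) orthonormality <h_l, h_l'> = delta_{l,l'}
H = np.array([h(l) for l in range(6)])
gram = H @ H.T * dx
assert np.allclose(gram, np.eye(6), atol=1e-8)

# e) (x^2 - d^2/dx^2) h_l = (2l + 1) h_l, with a finite-difference second derivative
for l in range(4):
    hl = h(l)
    d2 = np.gradient(np.gradient(hl, dx), dx)
    resid = x**2 * hl - d2 - (2 * l + 1) * hl
    assert np.max(np.abs(resid)) < 1e-3
```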

Problem A.6 Show that the constant function $f = 1$ is in $\mathcal{V}_\gamma$ for all $\gamma < -1$.

Solution. We first show that, under the Fourier transform convention
$$\hat g(k) = \tfrac{1}{\sqrt{2\pi}} \int g(x)\, e^{-ikx}\, dx$$
we have $\hat h_\ell = (-i)^\ell h_\ell$. The Fourier transform of $h_0$ is
$$\hat h_0(k) = \tfrac{1}{\pi^{1/4}} \tfrac{1}{\sqrt{2\pi}} \int e^{-\frac{1}{2} x^2}\, e^{-ikx}\, dx = \tfrac{1}{\pi^{1/4}} \tfrac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2} k^2} \int e^{-\frac{1}{2} (x + ik)^2}\, dx = \tfrac{1}{\pi^{1/4}} \tfrac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2} k^2} \int e^{-\frac{1}{2} x^2}\, dx = \tfrac{1}{\pi^{1/4}}\, e^{-\frac{1}{2} k^2} = h_0(k)$$
as desired. Furthermore
$$\widehat{\tfrac{dg}{dx}}(k) = \tfrac{1}{\sqrt{2\pi}} \int g'(x)\, e^{-ikx}\, dx = -\tfrac{1}{\sqrt{2\pi}} \int g(x)\, \tfrac{d}{dx} e^{-ikx}\, dx = ik\, \hat g(k)$$
$$\widehat{xg}(k) = \tfrac{1}{\sqrt{2\pi}} \int x\, g(x)\, e^{-ikx}\, dx = i \tfrac{d}{dk}\, \tfrac{1}{\sqrt{2\pi}} \int g(x)\, e^{-ikx}\, dx = i \tfrac{d}{dk}\, \hat g(k)$$
so that
$$\widehat{A^\dagger g} = \tfrac{1}{\sqrt{2}}\, \widehat{\Big( x - \tfrac{d}{dx} \Big) g} = \tfrac{1}{\sqrt{2}} \Big( i \tfrac{d}{dk} - ik \Big) \hat g = -i\, A^\dagger \hat g$$
As $h_\ell = \tfrac{1}{\sqrt{\ell}}\, A^\dagger h_{\ell-1}$, the claim $\hat h_\ell = (-i)^\ell h_\ell$ follows easily by induction.
Now let $f$ be the function that is identically one. Then
$$f_\ell = \langle f, h_\ell \rangle = \int h_\ell(x)\, dx = \sqrt{2\pi}\, \hat h_\ell(0) = \sqrt{2\pi}\, (-i)^\ell\, h_\ell(0)$$
is bounded uniformly in $\ell$ (see [AS, p. 787]). Since $\sum_{i \in \mathbb{N}} (1 + 2i)^{\gamma}$ converges for all $\gamma < -1$, $f$ is in $\mathcal{V}_\gamma$ for all $\gamma < -1$.

Appendix B. Pfaffians

Problem B.1 Let $T = (T_{ij})$ be a complex $n \times n$ matrix with $n = 2m$ even and let $S = \tfrac{1}{2}(T - T^t)$ be its skew symmetric part. Prove that $\mathrm{Pf}\, T = \mathrm{Pf}\, S$.

Solution. Define, for any $m$ matrices $T^{(k)} = \big( T^{(k)}_{ij} \big)$, $1 \le k \le m$, each of size $n \times n$,
$$\mathrm{pf}\big( T^{(1)}, \dots, T^{(m)} \big) = \tfrac{1}{2^m m!} \sum_{i_1, \dots, i_n = 1}^{n} \varepsilon^{i_1 \cdots i_n}\, T^{(1)}_{i_1 i_2} \cdots T^{(m)}_{i_{n-1} i_n}$$
Then
$$\mathrm{pf}\big( T^{(1)t}, T^{(2)}, \dots, T^{(m)} \big) = \tfrac{1}{2^m m!} \sum_{i_1, \dots, i_n = 1}^{n} \varepsilon^{i_1 \cdots i_n}\, T^{(1)t}_{i_1 i_2}\, T^{(2)}_{i_3 i_4} \cdots T^{(m)}_{i_{n-1} i_n} = \tfrac{1}{2^m m!} \sum_{i_1, \dots, i_n = 1}^{n} \varepsilon^{i_1 \cdots i_n}\, T^{(1)}_{i_2 i_1}\, T^{(2)}_{i_3 i_4} \cdots T^{(m)}_{i_{n-1} i_n}$$
$$= \tfrac{1}{2^m m!} \sum_{j_1, j_2, i_3, \dots, i_n = 1}^{n} \varepsilon^{j_2 j_1 i_3 \cdots i_n}\, T^{(1)}_{j_1 j_2}\, T^{(2)}_{i_3 i_4} \cdots T^{(m)}_{i_{n-1} i_n} = -\tfrac{1}{2^m m!} \sum_{j_1, j_2, i_3, \dots, i_n = 1}^{n} \varepsilon^{j_1 j_2 i_3 \cdots i_n}\, T^{(1)}_{j_1 j_2}\, T^{(2)}_{i_3 i_4} \cdots T^{(m)}_{i_{n-1} i_n}$$
$$= -\mathrm{pf}\big( T^{(1)}, T^{(2)}, \dots, T^{(m)} \big)$$
Because $\mathrm{pf}\big( T^{(1)}, \dots, T^{(m)} \big)$ is linear in $T^{(1)}$,
$$\mathrm{pf}\big( \tfrac{1}{2} \{ T^{(1)} - T^{(1)t} \}, T^{(2)}, \dots, T^{(m)} \big) = \tfrac{1}{2}\, \mathrm{pf}\big( T^{(1)}, T^{(2)}, \dots, T^{(m)} \big) - \tfrac{1}{2}\, \mathrm{pf}\big( T^{(1)t}, T^{(2)}, \dots, T^{(m)} \big)$$
$$= \tfrac{1}{2}\, \mathrm{pf}\big( T^{(1)}, T^{(2)}, \dots, T^{(m)} \big) + \tfrac{1}{2}\, \mathrm{pf}\big( T^{(1)}, T^{(2)}, \dots, T^{(m)} \big) = \mathrm{pf}\big( T^{(1)}, T^{(2)}, \dots, T^{(m)} \big)$$
Applying the same reasoning to the other arguments of $\mathrm{pf}\big( T^{(1)}, T^{(2)}, \dots, T^{(m)} \big)$,
$$\mathrm{pf}\big( \tfrac{1}{2} \{ T^{(1)} - T^{(1)t} \}, \dots, \tfrac{1}{2} \{ T^{(m)} - T^{(m)t} \} \big) = \mathrm{pf}\big( T^{(1)}, T^{(2)}, \dots, T^{(m)} \big)$$
Setting $T^{(1)} = T^{(2)} = \cdots = T^{(m)} = T$ gives the desired result.

Problem B.2 Let $S = \begin{pmatrix} 0 & S_{12} \\ S_{21} & 0 \end{pmatrix}$ with $S_{21} = -S_{12} \in \mathbb{C}$. Show that $\mathrm{Pf}\, S = S_{12}$.

Solution. By (B.1) with $m = 1$,
$$\mathrm{Pf} \begin{pmatrix} 0 & S_{12} \\ S_{21} & 0 \end{pmatrix} = \tfrac{1}{2} \sum_{1 \le k, \ell \le 2} \varepsilon^{k\ell}\, S_{k\ell} = \tfrac{1}{2} \big( S_{12} - S_{21} \big) = S_{12}$$

Problem B.3 Let $\alpha_1, \dots, \alpha_r$ be complex numbers and let $S$ be the $2r \times 2r$ skew symmetric matrix
$$S = \bigoplus_{m=1}^{r} \begin{pmatrix} 0 & \alpha_m \\ -\alpha_m & 0 \end{pmatrix}$$
Prove that $\mathrm{Pf}(S) = \alpha_1 \alpha_2 \cdots \alpha_r$.

Solution. All matrix elements of $S$ are zero, except for $r$ $2 \times 2$ blocks running down the diagonal. For example, if $r = 2$,
$$S = \begin{pmatrix} 0 & \alpha_1 & 0 & 0 \\ -\alpha_1 & 0 & 0 & 0 \\ 0 & 0 & 0 & \alpha_2 \\ 0 & 0 & -\alpha_2 & 0 \end{pmatrix}$$
By the expansion of the Pfaffian over pairings,
$$\mathrm{Pf}(S) = \sum_{P \in \mathcal{P}_r} \varepsilon^{k_1 \ell_1 \cdots k_r \ell_r}\, S_{k_1 \ell_1} \cdots S_{k_r \ell_r} = \sum_{\substack{1 \le k_1 < k_2 < \cdots < k_r \le 2r \\ k_i < \ell_i}} \varepsilon^{k_1 \ell_1 \cdots k_r \ell_r}\, S_{k_1 \ell_1} \cdots S_{k_r \ell_r}$$
Since $S_{k\ell} = 0$ unless $k$ and $\ell$ lie in the same diagonal block, the only pairing that gives a nonzero contribution is $(k_i, \ell_i) = (2i - 1, 2i)$, $1 \le i \le r$. Hence
$$\mathrm{Pf}(S) = \varepsilon^{1, 2, \cdots, (2r-1), 2r}\, S_{1,2} \cdots S_{2r-1, 2r} = \alpha_1 \alpha_2 \cdots \alpha_r$$
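Both this identity and the relation $\mathrm{Pf}(S)^2 = \det S$ are easy to test numerically. The sketch below is illustrative only; it computes the Pfaffian by the standard recursive minor expansion along the first row (an implementation choice, not the expansion used in the text):

```python
import numpy as np

def pfaffian(S):
    # recursive minor expansion of the Pfaffian along the first row
    n = S.shape[0]
    if n % 2:
        return 0.0
    if n == 0:
        return 1.0
    total = 0.0
    for k in range(1, n):
        keep = [i for i in range(1, n) if i != k]
        total += (-1) ** (k - 1) * S[0, k] * pfaffian(S[np.ix_(keep, keep)])
    return total

# block-diagonal skew matrix built from alpha_1, ..., alpha_r
alphas = [2.0, -3.0, 0.5]
S = np.zeros((6, 6))
for i, a in enumerate(alphas):
    S[2 * i:2 * i + 2, 2 * i:2 * i + 2] = [[0.0, a], [-a, 0.0]]
assert np.isclose(pfaffian(S), np.prod(alphas))   # Pf(S) = alpha_1 ... alpha_r

# Pf(A)^2 = det(A) for a random skew-symmetric matrix
rng = np.random.default_rng(2)
R = rng.standard_normal((6, 6))
A = R - R.T
assert np.isclose(pfaffian(A) ** 2, np.linalg.det(A))
```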

Problem B.4 Let $S$ be a skew symmetric $D \times D$ matrix with $D$ odd. Prove that $\det S = 0$.

Solution. As $S^t = -S$,
$$\det S = \det S^t = \det(-S) = (-1)^D \det S$$
As $D$ is odd, $\det S = -\det S$ and $\det S = 0$.

Appendix C. Propagator Bounds

Problem C.1 Prove, under the hypotheses of Proposition C.1, that there are constants $C, C' > 0$ such that if $|k - k_c^0| \le C$, then there is a point $p' \in \mathcal{F}$ with $(k - p') \cdot \hat t = 0$ and $|k - p'| \le C'\, |e(k)|$.

Solution. We are assuming that $\nabla e$ does not vanish on $\mathcal{F}$. As $\nabla e(k_c^0)$ is parallel to $\hat n$, $\nabla e(k_c^0) \cdot \hat n$ is nonzero. Hence $\nabla e(k) \cdot \hat n$ is bounded away from zero, say by $C_2$, on a neighbourhood of $k_c^0$. This neighbourhood contains a square centered on $k_c^0$, with sides parallel to $\hat n$ and $\hat t$, such that $\mathcal{F}$ is a graph over the sides parallel to $\hat t$. [Figure: the square centered on $k_c^0$, with the directions $\hat n$ and $\hat t$, the point $p'$ and the Fermi curve $\mathcal{F}$.] Pick $C$ to be half the length of the sides of this square. Then any $k$ within a distance $C$ of $k_c^0$ is also in this square. For each $k$ in this square there is an $\varepsilon$ such that $k + \varepsilon \hat n \in \mathcal{F}$ and the line joining $k$ to $k + \varepsilon \hat n$ also lies in the square. Then
$$|e(k)| = \big| e(k) - e(k + \varepsilon \hat n) \big| = \Big| \int_0^\varepsilon dt\ \tfrac{d}{dt}\, e(k + t \hat n) \Big| = \Big| \int_0^\varepsilon dt\ \nabla e(k + t \hat n) \cdot \hat n \Big| \ge C_2\, |\varepsilon| = C_2\, \big| k - (k + \varepsilon \hat n) \big|$$
Pick $p' = k + \varepsilon \hat n$.

Problem C.2 Let $f : \mathbb{R}^d \to \mathbb{R}$ and $g : \mathbb{R} \to \mathbb{R}$. Let $1 \le i_1, \dots, i_n \le d$. Prove that
$$\Big[ \prod_{\ell=1}^{n} \tfrac{\partial}{\partial x_{i_\ell}} \Big]\, g\big( f(x) \big) = \sum_{m=1}^{n} \sum_{(I_1, \dots, I_m) \in \mathcal{P}^{(n)}_m} g^{(m)}\big( f(x) \big) \prod_{p=1}^{m} \Big[ \prod_{\ell \in I_p} \tfrac{\partial}{\partial x_{i_\ell}} \Big]\, f(x)$$
where $\mathcal{P}^{(n)}_m$ is the set of all partitions of $(1, \dots, n)$ into $m$ nonempty subsets $I_1, \dots, I_m$ with, for all $i < i'$, the smallest element of $I_i$ smaller than the smallest element of $I_{i'}$.

Solution. The proof is easy by induction on $n$. When $n = 1$,
$$\tfrac{\partial}{\partial x_{i_1}}\, g\big( f(x) \big) = g'\big( f(x) \big)\, \tfrac{\partial}{\partial x_{i_1}} f(x)$$
which agrees with the desired conclusion, since $\mathcal{P}^{(1)}_1 = \big\{ (1) \big\}$: $m$ must be one and $I_1$ must be $(1)$. Once the result is known for $n - 1$,
$$\Big[ \prod_{\ell=1}^{n} \tfrac{\partial}{\partial x_{i_\ell}} \Big]\, g\big( f(x) \big) = \tfrac{\partial}{\partial x_{i_n}} \sum_{m=1}^{n-1} \sum_{(I_1, \dots, I_m) \in \mathcal{P}^{(n-1)}_m} g^{(m)}\big( f(x) \big) \prod_{p=1}^{m} \Big[ \prod_{\ell \in I_p} \tfrac{\partial}{\partial x_{i_\ell}} \Big]\, f(x)$$
$$= \sum_{m=1}^{n-1} \sum_{(I_1, \dots, I_m) \in \mathcal{P}^{(n-1)}_m} g^{(m+1)}\big( f(x) \big)\, \Big[ \tfrac{\partial}{\partial x_{i_n}} f(x) \Big] \prod_{p=1}^{m} \Big[ \prod_{\ell \in I_p} \tfrac{\partial}{\partial x_{i_\ell}} \Big]\, f(x)$$
$$\quad + \sum_{m=1}^{n-1} \sum_{(I_1, \dots, I_m) \in \mathcal{P}^{(n-1)}_m} g^{(m)}\big( f(x) \big) \sum_{q=1}^{m} \Big[ \tfrac{\partial}{\partial x_{i_n}} \prod_{\ell \in I_q} \tfrac{\partial}{\partial x_{i_\ell}}\, f(x) \Big] \prod_{\substack{p=1 \\ p \ne q}}^{m} \Big[ \prod_{\ell \in I_p} \tfrac{\partial}{\partial x_{i_\ell}} \Big]\, f(x)$$
$$= \sum_{m'=1}^{n} \sum_{(I'_1, \dots, I'_{m'}) \in \mathcal{P}^{(n)}_{m'}} g^{(m')}\big( f(x) \big) \prod_{p=1}^{m'} \Big[ \prod_{\ell \in I'_p} \tfrac{\partial}{\partial x_{i_\ell}} \Big]\, f(x)$$
For the first sum after the second equals sign, we made the changes of variables $m' = m + 1$, $I'_1 = I_1, \dots, I'_{m'-1} = I_m$, $I'_{m'} = (n)$. They account for all terms of $(I'_1, \dots, I'_{m'}) \in \mathcal{P}^{(n)}_{m'}$ for which $n$ is the only element of some $I'_\ell$. For each $1 \le q \le m$, the $q^{\rm th}$ term in the second sum after the second equals sign accounts for all terms of $(I'_1, \dots, I'_{m'}) \in \mathcal{P}^{(n)}_{m'}$ for which $n$ appears in $I'_q$ and is not the only element of $I'_q$.
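This multivariate Faà di Bruno formula can be verified symbolically for small $n$. The sketch below is illustrative only and assumes SymPy is available; it takes $g = \exp$ (so that $g^{(m)}(f) = e^f$ for every $m$, which still tests the partition structure, since each partition contributes a distinct product of derivatives of $f$) and an arbitrary smooth test function $f$:

```python
import sympy as sp

x = sp.symbols('x1 x2 x3')
f = sp.sin(x[0] * x[1]) + x[0] * x[1] * x[2]   # arbitrary smooth test function

def partitions(seq):
    # all partitions of a list into nonempty blocks (Bell-number many)
    if not seq:
        yield []
        return
    head, rest = seq[0], seq[1:]
    for part in partitions(rest):
        for i in range(len(part)):
            yield [b + [head] if j == i else b for j, b in enumerate(part)]
        yield part + [[head]]

# left side: the full third mixed partial of g(f) with g = exp
lhs = sp.diff(sp.exp(f), *x)

# right side: sum over partitions of {1,2,3} of e^f times block derivatives of f
rhs = sp.exp(f) * sum(
    sp.prod([sp.diff(f, *[x[i] for i in block]) for block in part])
    for part in partitions([0, 1, 2])
)
assert sp.simplify(sp.expand(lhs - rhs)) == 0
```

There are $5$ partitions of a three-element set, matching the Bell number $B_3 = 5$.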

References

[AR] A. Abdesselam, V. Rivasseau, Explicit Fermionic Tree Expansions, Letters in Mathematical Physics 44 (1998), 77.

[AS] M. Abramowitz and I. A. Stegun, eds., Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, Dover, 1964.

[Be] F. A. Berezin, The Method of Second Quantization, Academic Press, 1966.

[BS] F. Berezin, M. Shubin, The Schrödinger Equation, Kluwer, 1991. Supplement 3: D. Leites, Quantization and supermanifolds.

[FMRS] J. Feldman, J. Magnen, V. Rivasseau and R. Sénéor, A Renormalizable Field Theory: The Massive Gross-Neveu Model in Two Dimensions, Communications in Mathematical Physics 103 (1986), 67-103.

[FST] J. Feldman, M. Salmhofer, E. Trubowitz, Perturbation Theory around Non-nested Fermi Surfaces, I: Keeping the Fermi Surface Fixed, Journal of Statistical Physics 84 (1996), 1209-1336.

[GK] K. Gawedzki, A. Kupiainen, Gross-Neveu model through convergent perturbation expansions, Communications in Mathematical Physics 102 (1986), 1.

[RS] M. Reed and B. Simon, Methods of Modern Mathematical Physics, I: Functional Analysis, Academic Press, 1972.

The "abstract" fermionic expansion discussed in §II is similar to that in

• J. Feldman, H. Knörrer, E. Trubowitz, A Nonperturbative Representation for Fermionic Correlation Functions, Communications in Mathematical Physics 195 (1998), 465-493.

and is a simplified version of the expansion in

• J. Feldman, J. Magnen, V. Rivasseau and E. Trubowitz, An Infinite Volume Expansion for Many Fermion Green's Functions, Helvetica Physica Acta 65 (1992), 679-721.

The latter also contains a discussion of sectors. Both are available over the web at http://www.math.ubc.ca/~feldman/research.html
