Master’s Thesis
Renormalization of Factorially Divergent Series: Quantum Field Theory in Zero Dimensions
Bren Schaap

Supervisor: Prof. dr. Ronald Kleiss

Dedicated to my personal group

Preliminary remarks

This research is based almost entirely on information from the book Quantum Field Theory: A Diagrammatic Approach [1] by Ronald Kleiss, and is a continuation of the bachelor's theses [2, 3, 4] of Dirk van Buul, Ilija Milutin and Mila Keijer. The subject of this research was revitalized by Michael Borinsky through his work [5, 6, 7], and he was the one who pointed Ronald Kleiss in the right direction. I would therefore like to thank all of them for introducing me to this fascinating subject.

Contents
1 Introduction to zero-dimensional QFT
  1.1 Random numbers
    1.1.1 Green's functions
    1.1.2 The Schwinger-Dyson equations
    1.1.3 Connected Green's functions
  1.2 Feynman diagrams
    1.2.1 Diagrammatic equations
2 Renormalization
  2.1 Renormalizing ϕ³ theory
  2.2 Factorially divergent series
    2.2.1 Retrieving α_f and c_f
  2.3 The factorially divergent nature of Green's functions
  2.4 Asymptotic behaviour of renormalized ϕ³ theory
    2.4.1 The improvement factor
    2.4.2 Asymptotic behaviour of the tadpole
  2.5 Freedom of choice
  2.6 Physical interpretation
3 Real fields
  3.1 ϕ⁴ theory
  3.2 ϕ^Q theory for odd Q
  3.3 ϕ^Q theory for even Q
  3.4 Renormalizing ϕ^123 theory
4 QED
  4.1 Bald QED
  4.2 Counterterm QED
    4.2.1 Computing the counterterm
    4.2.2 Everything is in the counterterm
    4.2.3 The improvement factor of Counterterm QED
  4.3 Quenched QED
    4.3.1 The improvement factor of Quenched QED
  4.4 Furry QED
    4.4.1 The improvement factor of Furry QED
5 Combination theories
  5.1 ϕ_z^{3/4} theory
    5.1.1 Unphysical z and the saddlepoint approximation
    5.1.2 Physical z
    5.1.3 The improvement factor
    5.1.4 Determining z
  5.2 ϕ_z^{3/6} theory
  5.3 Pure ϕ^{3/4} theory
6 QCD
  6.1 Asymptotics of H_{0,0}
    6.1.1 Unphysical z > 0
    6.1.2 Physical z = −1
  6.2 Renormalizing xQCD
    6.2.1 The improvement factor for xQCD
7 Higgs
  7.1 Asymptotics of H_{0,0}
  7.2 Renormalizing Higgs theory
Conclusion
Appendices
  A Factorially divergent pullbacks
  B Computing the counterterm in CQED (alternative method)
  C Saddlepoint approximation in QCD
Bibliography
Chapter 1
Introduction to zero-dimensional QFT
“Look yonder,” said my Guide, “[...] I conduct thee downward to the lowest depth of existence, even to the realm of Pointland, the Abyss of No dimensions.” — Edwin A. Abbott, Flatland
It might come as a surprise to us four-dimensional creatures that zero dimensions can be so interesting. Zero-dimensional spacetimes, also known as “points”, contain neither space nor time, and yet hundreds of pages have been written about them. This thesis is an addition to the subcollection of writings about quantum field theory (QFT) in zero dimensions. The sole purpose of this first chapter is to lay the theoretical foundations that are necessary for this research. For those familiar with QFT in zero dimensions, it is merely a demonstration of notation. For others who are familiar with QFT in four dimensions, it is a good reminder of the fundamental basics of QFT. For everyone else, it is a minimal introduction to QFT in zero dimensions. In the first section we introduce the probabilistic interpretation of quantum field theory; this includes random numbers, probability density functions, and expectation values. In the second section we introduce the famous Feynman diagrams, along with their associated Feynman rules.
1.1 Random numbers
Zero-dimensional quantum field theory concerns itself with everything there is to know about all possible zero-dimensional spacetimes that are filled with random (complex) numbers. For any given spacetime, these numbers are generated by a set of stochastic variables {ϕ_i}, called “fields”, which we can group together into one vector \vec\varphi. These fields obey a (spacetime-specific¹) probability density which we will write as

  P(\vec\varphi) = \frac{1}{N}\,\exp\!\left(-\frac{1}{\hbar}\,S(\vec\varphi)\right)    (1.1)

where the action S(\vec\varphi) is a function of the fields and N is the normalization constant. Every theory is fully specified by its corresponding action. When we observe the values of the fields in one zero-dimensional spacetime, the resulting numbers look random and nothing can be said about the underlying probability. This is in contrast to a spacetime where these numbers change over time and where we can probe the underlying probability density by repeated measurements.² To emulate this behaviour in zero dimensions and get a good definition of probability, we can observe a collection of independent and identically distributed spacetimes. In other words: we cannot say anything about a single value of a field, but we can say things about expectation values.
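The ensemble-of-spacetimes picture can be made concrete numerically. The sketch below is not part of the thesis; it assumes the free (Gaussian) action S(ϕ) = µϕ²/2, for which P(ϕ) is a normal density with variance ℏ/µ, and estimates an expectation value from many independent "spacetimes":

```python
import math
import random

random.seed(1)
hbar, mu = 1.0, 2.0

# For the free action S(phi) = mu*phi**2/2, the density P(phi) ∝ exp(-S/hbar)
# is a Gaussian with variance hbar/mu: one draw per zero-dimensional "spacetime".
sigma = math.sqrt(hbar / mu)
spacetimes = [random.gauss(0.0, sigma) for _ in range(200_000)]

# A single value says nothing, but the ensemble estimates expectation values:
g2 = sum(phi * phi for phi in spacetimes) / len(spacetimes)
print(g2)  # close to the exact <phi**2> = hbar/mu = 0.5
```

The statistical error shrinks like the inverse square root of the number of sampled spacetimes, which is exactly the "repeated measurements" intuition of the text.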
1.1.1 Green’s functions
We will look at the expectation values G_{\vec n} = \langle \prod_i \varphi_i^{n_i} \rangle, which are called the Green's functions. Trivially, G_{\vec 0} = ⟨1⟩ = 1. To make life a little easier we define

  H_{\vec n} \equiv \int \Big(\prod_i \varphi_i^{n_i}\Big)\, \exp\!\left(-\frac{1}{\hbar}\,S(\vec\varphi)\right) d\vec\varphi    (1.2)

with the integral implicitly running from −∞ to +∞, so that

  G_{\vec n} = \frac{H_{\vec n}}{H_{\vec 0}}    (1.3)

and H_{\vec 0} is an explicit way of writing N. In general, the integrals H_{\vec n} cannot be solved analytically. In this research we solve these integrals perturbatively to get a series expansion in ℏ. To be able to say things about the collection of Green's functions we define the path integral
¹For comparison: to our best understanding, the four-dimensional spacetime we live in is generated by the action of the Standard Model.
²This is completely analogous to how one would test whether a die is fair or not.
  Z(\vec J) \equiv \frac{1}{N}\int \exp\!\left(-\frac{1}{\hbar}S(\vec\varphi) + \frac{1}{\hbar}\vec J\cdot\vec\varphi\right) d\vec\varphi
           = \frac{1}{N}\int \prod_i\,\sum_{k_i\ge0}\frac{1}{k_i!}\left(\frac{J_i\varphi_i}{\hbar}\right)^{k_i} \exp\!\left(-\frac{1}{\hbar}S(\vec\varphi)\right) d\vec\varphi
           = \sum_{\vec k}\,\prod_i \frac{1}{k_i!}\left(\frac{J_i}{\hbar}\right)^{k_i} G_{\vec k}    (1.4)

where the bookkeeping device J_i is called a source and the sum runs over all vectors \vec k with integer entries k_i ≥ 0. This is our first of many encounters where we use a series expansion (in this case \exp(x) = \sum_{k\ge0} \frac{1}{k!}x^k) to simplify an integral. Every Green's function G_{\vec n} can be retrieved from the path integral by applying the correct operator:
  G_{\vec n} = \left[\prod_i \left(\hbar\,\frac{\partial}{\partial J_i}\right)^{n_i} Z(\vec J)\right]_{\vec J=\vec 0}.

1.1.2 The Schwinger-Dyson equations∗

We will now deduce properties of the collection of Green's functions from the path integral. Let us define \frac{\partial}{\partial\vec J} as the vector with ith component \frac{\partial}{\partial J_i} and consider the equality

  \left[\frac{\partial S}{\partial\varphi_i}\!\left(\hbar\frac{\partial}{\partial\vec J}\right) - J_i\right] Z(\vec J)
    = \frac{1}{N}\int \left[\frac{\partial S}{\partial\varphi_i}\!\left(\hbar\frac{\partial}{\partial\vec J}\right) - J_i\right] \exp\!\left(-\frac{1}{\hbar}S(\vec\varphi)+\frac{1}{\hbar}\vec J\cdot\vec\varphi\right) d\vec\varphi
    = \frac{1}{N}\int \left[\frac{\partial S}{\partial\varphi_i}(\vec\varphi) - J_i\right] \exp\!\left(-\frac{1}{\hbar}S(\vec\varphi)+\frac{1}{\hbar}\vec J\cdot\vec\varphi\right) d\vec\varphi
    = -\hbar\,\frac{1}{N}\int \frac{\partial}{\partial\varphi_i} \exp\!\left(-\frac{1}{\hbar}S(\vec\varphi)+\frac{1}{\hbar}\vec J\cdot\vec\varphi\right) d\vec\varphi
    = 0

where in the last step the integral over ϕ_i vanishes because the probability density goes to zero at its end points. From this we find a Schwinger-Dyson equation (SDe)

  \frac{\partial S}{\partial\varphi_i}\!\left(\hbar\frac{\partial}{\partial\vec J}\right) Z(\vec J) = J_i\, Z(\vec J)    (1.5)

for every field in the theory. From the SDe's we can obtain a relationship between the individual Green's functions. For this we need to plug in the path integral as expansion of Green's functions and group together the terms with the same powers of J_i. This method is very powerful in cases where computing H_{\vec n} is computationally heavy: instead of having to compute an integral for every Green's function you consider, it suffices to calculate a few integrals and use the (in general relatively simple) SDe's to compute the rest of the Green's functions.

∗This subsection is not essential to understanding the main line of reasoning in this thesis.
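For the free theory the SDe can be verified explicitly. A minimal sketch (not part of the thesis; Python with sympy), using the standard Gaussian result Z(J) = exp(J²/2µℏ) obtained by completing the square in the path integral:

```python
import sympy as sp

J, mu, hbar = sp.symbols('J mu hbar', positive=True)

# Free theory S = mu*phi**2/2: completing the square in the path integral
# gives Z(J) = exp(J**2/(2*mu*hbar)).
Z = sp.exp(J**2 / (2 * mu * hbar))

# Equation (1.5) with dS/dphi = mu*phi and phi -> hbar*d/dJ:
lhs = mu * hbar * sp.diff(Z, J)
rhs = J * Z
print(sp.simplify(lhs - rhs))  # 0
```

The same check can be run for an interacting action by truncating Z(J) to a finite order in the coupling, which is essentially how the SDe's are used perturbatively in the text.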
1.1.3 Connected Green's functions

While the Green's functions fully describe P(\vec\varphi), the average quantum field theorist will use connected Green's functions, since these are directly related to quantities we can measure, such as cross sections and decay times. The connected Green's functions are defined by²

  \ln\!\big(Z(\vec J)\big) = \sum_{\vec k}\,\prod_i \frac{1}{k_i!}\left(\frac{J_i}{\hbar}\right)^{k_i} C_{\vec k}    (1.6)

where this time normalization requires C_{\vec 0} = 0. Every Green's function can be written in terms of connected Green's functions, and vice versa. For instance, for a theory with only one field, C_1 = G_1 = ⟨ϕ⟩ corresponds to the mean, C_2 = G_2 - G_1^2 = ⟨ϕ²⟩ − ⟨ϕ⟩² corresponds to the variance, etc. The connected Green's functions are the main subject of study in this research. Just as the Schwinger-Dyson equations yield a set of equations for the Green's functions, it is also possible to obtain a set of equations for the connected Green's functions. We shall do this in section 1.2.1, but first we must climb onto the shoulders of Richard P. Feynman.
2Using the series expansion of ln(1 + x).
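The relation between ordinary and connected Green's functions can be checked symbolically. A minimal sketch (not from the thesis; Python with sympy, ℏ set to 1 for brevity) that extracts the C_k from ln Z as in equation 1.6:

```python
import sympy as sp

J, G1, G2, G3 = sp.symbols('J G1 G2 G3')

# Z(J) as a truncated source expansion (hbar = 1 for brevity)
Z = 1 + G1*J + G2*J**2/2 + G3*J**3/6

# C_k is the coefficient of J**k/k! in ln Z, cf. equation (1.6)
lnZ = sp.expand(sp.series(sp.log(Z), J, 0, 4).removeO())
C = [sp.expand(sp.factorial(k) * lnZ.coeff(J, k)) for k in range(4)]

print(C[2])  # equals G2 - G1**2, the variance
print(C[3])  # equals G3 - 3*G1*G2 + 2*G1**3
```

Note that C_0 = 0 comes out automatically, matching the normalization requirement below equation 1.6.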
1.2 Feynman diagrams
If we consider only real-valued fields3 we can write the most general expression of the action as
  S(\vec\varphi) = \sum_{\vec k}\,\lambda_{\vec k}\,\prod_i \frac{1}{k_i!}\,\varphi_i^{k_i}
• The constant term λ_{\vec 0} does not contribute to the theory because it gets absorbed by the normalization constant.
• The constants λ_{ê_i} are called counterterms and we will encounter an example in section 4.2.
• The quadratic terms with \vec k = 2ê_i are called mass terms, with µ_i ≡ λ_{2ê_i} a positive real number referred to as the “mass” of field ϕ_i.
• The quadratic terms with \vec k = ê_i + ê_j (i ≠ j) are normally not considered.
• All higher-order terms are called interaction terms, for which λ_{\vec k} is called the coupling constant.

Let us now consider a theory described by an action which only contains mass and interaction terms

  S(\vec\varphi) = \frac{1}{2}\sum_i \mu_i\varphi_i^2 + \sum_{\vec k}\lambda_{\vec k}\,\prod_i\frac{1}{k_i!}\varphi_i^{k_i}\;\theta\!\Big(\textstyle\sum_i k_i \ge 3\Big)    (1.7)

where the function θ(x) is 1 if x is true and 0 if x is false. What would we do if we wanted to calculate a connected Green's function by hand? This is where the famous Feynman diagrams come in: every Green's function can be expressed perturbatively in terms of Feynman diagrams. Instead of working with symbolic equations, it is possible to write diagrammatic equations that make sense visually. A Feynman diagram is a graph — built up out of lines and vertices — which stands for an expression that can be determined using Feynman rules. Every type of line and every type of vertex has a factor associated with it. For the theory of equation 1.7 the building blocks are
  [\text{line of type } 1] \leftrightarrow \frac{\hbar}{\mu_1}\,,\quad [\text{line of type } 2] \leftrightarrow \frac{\hbar}{\mu_2}\,,\ \text{etc.}
  [\text{vertex}] \leftrightarrow -\frac{\lambda_{1,1,1}}{\hbar}\,,\quad [\text{vertex}] \leftrightarrow -\frac{\lambda_{3,3,5,8}}{\hbar}\,,\ \text{etc.}    (1.8)
where λ_{a,b,c} is shorthand for λ_{\vec k} with \vec k = ê_a + ê_b + ê_c. The Feynman rules state that the value of the whole Feynman diagram is equal to the product of its building blocks, multiplied by the sm factor of the diagram.
³We will encounter complex-valued fields in chapter 4.
⁴ê_i is the “unit vector” with a one in the i-th place and zeros everywhere else.
The sm factor

Every Feynman diagram D is weighted by a factor sm(D): the product of the symmetry factor s(D) and the multiplicity m(D). The multiplicity of a diagram is the number of distinct ways in which distinct labels can be assigned to external lines⁵ of the same type. To get the symmetry factor of a diagram that has at least one external line, multiply:

• a factor 1/k! for every set of k internal lines that may be permuted without changing the diagram;
• a factor 1/p! for every set of p disjunct connected pieces that may be permuted without changing the diagram.
Computing a connected Green's function is like computing the matrix element M for a scattering or decay process. To compute a connected Green's function C_{\vec n},⁶ take the sum of all connected Feynman diagrams with exactly n_i external lines of type i. In general, this sum has an infinite number of terms, but terms containing more building blocks contribute less to the final result. This means that in practice a computation is an approximation for which the error decreases the more “higher-order” terms are computed. To be more specific: the perturbative expansion of a connected Green's function is equal to the value of the lowest-level diagrams (called tree level) multiplied by a power series in terms of dimensionless parameters. This is because higher-order (also called loop-order) terms are the same as tree-level diagrams but with loops added to them (see figure 1.1 for an example), and every loop evaluates to such a dimensionless parameter by the Feynman rules.
Figure 1.1: Examples of a tree diagram and a (four-)loop diagram.
⁵An internal line is a line that is connected to vertices at both ends. An external line is a line that is not an internal line.
⁶Also include semi-connected diagrams (all parts in a semi-connected diagram are connected to at least one external line) to get G_{\vec n}. To get H_{\vec n}, include disconnected diagrams as well.
1.2.1 Diagrammatic equations∗

Feynman diagrams are a powerful tool to construct equations that relate connected Green's functions to each other. There are two main types of such equations: 1) equations that look at subdiagrams of a connected Green's function, and 2) equations that morph one connected Green's function into another. Applications of both of these types can be found in section 4.2.1.
Subdiagrams

A connected Green's function is a sum of an infinite number of diagrams. By grouping these diagrams in different ways, different relationships emerge. A peculiar feature of this infinity is that every connected Green's function can be found as a subdiagram in every other connected Green's function.² To make it possible to draw an infinite number of diagrams, we introduce a shaded area in our Feynman diagrams; a shaded area with lines attached to it is shorthand for all possible connected Feynman diagrams with precisely that number of external lines. Look for example at figure 1.2. Here we see that C₅ (second diagram on the right) can be found in C₃ (diagram on the left). Such a diagrammatic equation leads to the algebraic equation
  C_3 = \frac{-\lambda_3\hbar^2}{\mu^3} + \frac{-\lambda_4}{\mu}\,C_5 + \dots

which can then be used in an algebraic derivation.
Figure 1.2: Example of how C3 can contain C5.
Morphing
As seen in equation 1.8, every line comes with a factor \mu_i^{-1} and every vertex comes with a factor \lambda_{\vec n}. In other words, a diagram D with a lines of type i and b vertices of type \vec n is proportional to the factor \mu_i^{-a}\lambda_{\vec n}^{\,b}. This means that there exist operators

  -\mu_i\,\frac{\partial}{\partial\mu_i}\,D = a\cdot D \qquad\text{and}\qquad \lambda_{\vec n}\,\frac{\partial}{\partial\lambda_{\vec n}}\,D = b\cdot D

that “count” the number of lines and vertices of a certain type. Because these operations are diagram dependent, we can also apply them to a sum of diagrams such as a connected Green's function.

∗This subsection is not essential to understanding the main line of reasoning in this thesis.
²At least in general. This is not true for theories in which subsets of fields do not interact with each other.
The counting operators can be used to apply a morphing operation (such as adding, removing or replacing a component) in all possible ways. For instance, all possible diagrams D′ where one line of type i in D is replaced by a line of type j can be generated with the operation
  -\frac{\mu_i^2}{\mu_j}\,\frac{\partial}{\partial\mu_i}\,D

and all possibilities of adding an external line of type j to a vertex of type \vec n can be generated with the operation

  \frac{\lambda_{\vec n+\hat e_j}}{\mu_j}\,\frac{\partial}{\partial\lambda_{\vec n}}\,D.

Of course one can think of a vast number of operations, but one in particular will be of great importance to us later. It is possible to add a vertex to a diagram by placing it in the middle of a line. One can think of this as cutting the line open, thereby splitting the line in two, and then gluing it together again with the added vertex. The corresponding operation
  \frac{-\lambda}{\hbar}\left(-\hbar\,\frac{\partial}{\partial\mu_i}\right) D = \lambda\,\frac{\partial}{\partial\mu_i}\,D

can be accompanied by attaching a component C to the new vertex

  C\,\lambda\,\frac{\partial}{\partial\mu_i}\,D

such as an external line. As mentioned before, these operations can be applied to connected Green's functions. The operations in which the number of external lines changes, as in the previous examples, can be used to morph one into another, thus creating an equation that relates two connected Green's functions.
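The counting operators are easy to verify on a single monomial. A minimal sketch (not from the thesis; sympy), with an assumed example diagram of a = 3 lines and b = 2 vertices:

```python
import sympy as sp

mu, lam, hbar = sp.symbols('mu lam hbar', positive=True)
a, b = 3, 2  # an assumed diagram: a lines of type i, b vertices of type n

# value of such a diagram by the Feynman rules: (hbar/mu)**a * (-lam/hbar)**b
D = (hbar / mu)**a * (-lam / hbar)**b

print(sp.simplify(-mu * sp.diff(D, mu) / D))   # 3: the line-counting operator
print(sp.simplify(lam * sp.diff(D, lam) / D))  # 2: the vertex-counting operator
```

Because differentiation is linear, exactly the same identities hold term by term on any sum of diagrams, which is what justifies applying the operators to whole connected Green's functions.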
Chapter 2
Renormalization
The main goal of this thesis is to investigate the transformation of connected Green's functions under renormalization. Renormalization is the process in which the parameters of a theory (e.g. µ, λ) are expressed in terms of measured quantities. The best way to learn about renormalization is via an example; in section 2.1 we will first introduce the concept of renormalization by renormalizing ϕ³ theory. After that we will set up the mathematical framework of factorially divergent series (2.2) that is necessary to understand the behaviour of connected Green's functions at large loop order (2.3) and the difference between the original and the renormalized ones (2.4). In section 2.5 we again renormalize ϕ³ theory, but this time using a different renormalization scheme. Section 2.6 concludes this chapter with a brief physical interpretation of the results.
2.1 Renormalizing ϕ³ theory
ϕ³ theory is a good example for learning how renormalization works because it includes many facets also present in other theories. We start with the action

  S(\varphi) = \frac{1}{2}\,\mu\varphi^2 + \frac{1}{3!}\,\lambda_3\varphi^3    (2.1)

including one mass term and one three-point interaction term. This action leads to the Feynman rules

  [\text{line}] \leftrightarrow \frac{\hbar}{\mu}\,, \qquad [\text{vertex}] \leftrightarrow -\frac{\lambda_3}{\hbar}    (2.2)

and these Feynman rules combine into the dimensionless parameter
  u = \frac{\lambda_3^2\,\hbar}{\mu^3} \;\propto\; [\text{one-loop diagram}]    (2.3)

that represents an added loop. The dot indicates a vertex that can be attached to a line.
Computing the connected Green's functions

To calculate the desired connected Green's functions, we must first compute the integrals¹ H_n perturbatively. We find

  H_n = \int \varphi^n \exp\!\left(-\frac{1}{\hbar}S(\varphi)\right) d\varphi
      = \sum_{k\ge0}\frac{1}{k!}\left(\frac{-\lambda_3}{6\hbar}\right)^{k} \int \varphi^{n+3k}\exp\!\left(-\frac{\mu}{2\hbar}\varphi^2\right) d\varphi
      \propto \sum_{k\ge0}\left(\frac{-\lambda_3}{6\hbar}\right)^{k}\left(\frac{\hbar}{\mu}\right)^{p/2}\frac{p!}{k!\,(p/2)!\,2^{p/2}}\;\theta(p = n+3k\ \text{even})    (2.4)

where we first used the series expansion of the exponential for the interaction term and then the integral

  \int \varphi^{p}\, e^{-\frac{1}{2\hbar}\mu\varphi^2}\, d\varphi = \sqrt{\frac{2\pi\hbar}{\mu}}\,\left(\frac{\hbar}{\mu}\right)^{p/2}\frac{p!}{(p/2)!\,2^{p/2}}\;\theta(p\ \text{even})    (2.5)

for the mass term. Often it is more convenient to substitute ℏ in favour of u, and doing so results in the expression

  H_n \propto \left(\frac{-6\mu}{\lambda_3}\right)^{n} \sum_{k\ge\lceil n/2\rceil} \frac{u^k}{36^k}\,\frac{p!}{(2k-n)!\,(p/2)!\,2^{p/2}}\;\theta(p = 6k-2n)    (2.6)

by redefining k → 2k − n in equation 2.4.
1We will ignore the fact that all integrands, including the normalization, diverge for ϕ → −∞.
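The sum of equation 2.6 is easy to evaluate with exact rational arithmetic. A minimal sketch (not from the thesis; Python's fractions module), leaving out the overall prefactor:

```python
from fractions import Fraction
from math import factorial

def h_coeff(n, k):
    """Coefficient of u**k in the sum of eq. (2.6),
    with the overall prefactor (-6*mu/lambda_3)**n left out."""
    p = 6 * k - 2 * n
    if 2 * k - n < 0 or p < 0:
        return Fraction(0)
    return Fraction(factorial(p),
                    36**k * factorial(2 * k - n) * factorial(p // 2) * 2**(p // 2))

# the expansion for n = 0 starts 1 + (5/24) u + (385/1152) u**2 + ...
print([h_coeff(0, k) for k in range(3)])  # [Fraction(1, 1), Fraction(5, 24), Fraction(385, 1152)]
```

Keeping the coefficients as exact fractions avoids any rounding issues when these series are later divided, multiplied and inverted.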
Taking all H_n and using the steps from section 1.1 leads to the connected Green's functions of ϕ³ theory

  C_1 = \gamma_1\,\frac{-\lambda_3\hbar}{\mu^2}\,t_1(u) = [\text{one-loop diagram}] + \dots
  C_{n\ge2} = \gamma_n\,\frac{\hbar}{\mu}\left(\frac{-\lambda_3\hbar}{\mu^2}\right)^{n-2} t_n(u) = \gamma_n\,[\text{tree-level diagrams}] + \dots    (2.7)

where γ_n is the sum of the sm factors of the tree-level diagrams² and t_n(u) = 1 + O(u) is the “tail” of the connected Green's function.
Renormalization

We now choose to reparameterize the theory by introducing the parameters µ̂ and λ̂₃ in such a way that the two connected Green's functions

  C_2 = \frac{\hbar}{\mu}\,t_2(u) \equiv \frac{\hbar}{\hat\mu}
  C_3 = -\frac{\lambda_3\hbar^2}{\mu^3}\,t_3(u) \equiv -\frac{\hat\lambda_3\hbar^2}{\hat\mu^3}    (2.8)

become free of loop corrections. One could think of this as performing a measurement³ of these two quantities, after which C₂ and C₃ are simply numbers. We would now like to express all connected Green's functions in terms of the new parameters. For this we define a new dimensionless parameter

  \hat u \equiv \frac{C_3^2}{C_2^3} \equiv \frac{\hat\lambda_3^2\,\hbar}{\hat\mu^3}    (2.9)

which we can express in terms of the old dimensionless parameter as

  \hat u(u) = u\,\frac{t_3(u)^2}{t_2(u)^3} \equiv u\,\hat\rho(u)    (2.10)

with ρ̂(u) = 1 + O(u) another series in u. Equation 2.10 can be inverted to yield an equation for u in terms of the new parameter û

  u(\hat u) = \hat u\,\rho(\hat u)    (2.11)

by tweaking the coefficients of ρ(û) order by order such that û(ûρ(û)) = û. This leads to the expressions

  C_1 = \gamma_1\,\frac{-\lambda_3\hbar}{\mu^2}\,t_1(u) \equiv \gamma_1\,\frac{-\hat\lambda_3\hbar}{\hat\mu^2}\,\hat t_1(\hat u)
  C_{n\ge2} = \gamma_n\,\frac{\hbar}{\mu}\left(\frac{-\lambda_3\hbar}{\mu^2}\right)^{n-2} t_n(u) \equiv \gamma_n\,\frac{\hbar}{\hat\mu}\left(\frac{-\hat\lambda_3\hbar}{\hat\mu^2}\right)^{n-2}\hat t_n(\hat u)    (2.12)

in terms of the new tails t̂_n(û).
²Strictly speaking C₁ does not have a tree-level contribution, so γ₁ = 1/2 is the sm factor of the one-loop diagram.
³This is only a thought experiment, since there is no notion of “measuring” in zero dimensions.
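The whole chain — the sums of equation 2.6, the Green's functions, the connected Green's functions and the tails — can be carried out to low order symbolically. The sketch below is not from the thesis (sympy, in units µ = λ₃ = 1 so that ℏ = u); the printed values follow from this author's low-order arithmetic with the formulas above, so treat them as a cross-check rather than quoted thesis results:

```python
import sympy as sp

u = sp.symbols('u')
N = 5  # keep terms through u**(N-1)

def S(n):
    """The sum in eq. (2.6); the prefactor (-6*mu/lambda_3)**n is applied below,
    in units mu = lambda_3 = 1 (so that hbar = u)."""
    tot = sp.Integer(0)
    for k in range((n + 1) // 2, N + n):
        p = 6 * k - 2 * n
        tot += (u**k * sp.factorial(p)
                / (sp.Integer(36)**k * sp.factorial(2 * k - n)
                   * sp.factorial(p // 2) * sp.Integer(2)**(p // 2)))
    return tot

def trunc(expr):
    return sp.expand(sp.series(expr, u, 0, N).removeO())

# Green's functions G_n = (-6)**n * S_n / S_0, then connected ones (section 1.1.3)
G = {n: trunc(sp.Integer(-6)**n * S(n) / S(0)) for n in (1, 2, 3)}
C2 = sp.expand(G[2] - G[1]**2)
C3 = sp.expand(G[3] - 3 * G[1] * G[2] + 2 * G[1]**3)

t2 = sp.expand(C2 / u)         # tree-level value of C2 is  u
t3 = sp.expand(C3 / (-u**2))   # tree-level value of C3 is -u**2
print(t2.coeff(u, 1), t3.coeff(u, 1))  # 1 4

rho_hat = trunc(t3**2 / t2**3)         # eq. (2.10)
print(rho_hat.coeff(u, 1))             # 5
```

If this low-order arithmetic is right, ρ̂₁ = 2·4 − 3·1 = 5, and hence ρ₁ = −5 by the inversion relation derived in section 2.2 — numbers that become relevant for the improvement factor of section 2.4.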
16 Comparing the tails
What did this renormalization procedure yield? Of course the expressions of C2 and C3 are much simpler now, but there is more. When we write
  t_n(u) = \sum_{k\ge0} t_n^{(k)}\,u^k\,, \qquad \hat t_n(\hat u) = \sum_{k\ge0} \hat t_n^{(k)}\,\hat u^k    (2.13)
we can compare the coefficients t_n^{(k)} and \hat t_n^{(k)} with each other. In figure 2.1 the ratio t_n^{(k)}/\hat t_n^{(k)} is plotted as a function of the loop order k.⁴ We notice three things:
1. the coefficients of the new tails are, for n ≥ 2, smaller than the coefficients of the original tails;
2. the ratios seem to tend asymptotically toward a constant value;
3. the behaviour for n = 1 seems to be fundamentally different from that for n ≥ 2.
The first property is a nice extra that renormalization gives us. The second and third properties will be discussed in section 2.4 but first we will need to construct a mathematical framework in which we can analyse asymptotic behaviour.
Figure 2.1: The ratio of the tail coefficients in ϕ3 theory.
⁴The computations in this research were performed, and the plots were made, using Maple™.
2.2 Factorially divergent series
A formal power series f(x) = \sum_{n\ge0} f_n x^n is called factorially divergent (fd) when the coefficients f_n grow as

  f_n \approx a_f\, c_f^{\,n}\, \Gamma(\alpha_f n + \beta_f)\cdot\big(1 + O(\tfrac{1}{n})\big)    (2.14)

for large n and some constants a_f, c_f ∈ ℝ⁺_{≠0}, α_f ∈ ℕ and β_f ∈ ℝ. As we will see in section 2.3, Green's functions are, in general, factorially divergent. Extensive work was done by Michael Borinsky [5, 6, 7] on factorially divergent series with α_f = 1. To explicitly distinguish this special case, we will call series with α_f > 1 factorially super divergent (fsd). Where Borinsky mainly focuses on computing the O(1/n) corrections using rigorous mathematics, we are only interested in (intuitively understanding) the leading behaviour. It is possible to asymptotically compare fd's to each other when we define the following relations:

  f(x) \prec g(x) \quad\text{if}\quad \lim_{n\to\infty} \frac{f_n}{g_n} = 0    (2.15)
  f(x) \succ g(x) \quad\text{if}\quad g(x) \prec f(x)    (2.16)
  f(x) \sim g(x) \quad\text{if}\quad \lim_{n\to\infty} \frac{f_n}{g_n} \in \mathbb{R}_{\neq0}    (2.17)
  f(x) \simeq g(x) \quad\text{if}\quad \lim_{n\to\infty} \frac{f_n}{g_n} = 1    (2.18)

from which it can be verified that f(x) ≺ g(x) if [α_f < α_g] or [α_f = α_g ∧ c_f < c_g] or [α_f = α_g ∧ c_f = c_g ∧ β_f < β_g]. The corollary f(x) + g(x) ≃ f(x) if g(x) ≺ f(x) directly shows the power of these relations. We will call a series t(x) properly factorially divergent (pfd) if t_0 = 1 and t_1 ≠ 0. We define the leading term of an fd series f(x) as L(f, x) = f_k x^k with f_k ≠ 0 and f_j = 0 for all j < k.

The fd shift

Let g(x) be fd; then the expression x g(x) has coefficients

  (xg)_n = g_{n-1} \approx \frac{a_g}{c_g}\, c_g^{\,n}\, \Gamma(\alpha_g n - \alpha_g + \beta_g)
  \;\Rightarrow\; a_{xg} = \frac{a_g}{c_g}\,,\quad c_{xg} = c_g\,,\quad \beta_{xg} = \beta_g - \alpha_g    (2.19)

so that multiplying with x “shifts” β_g by α_g, such that x g(x) ≺ g(x).

The pfd product

Let f(x) and g(x) be pfd; then their product is asymptotically equal to their sum

  f(x)\cdot g(x) = \sum_{n\ge0} x^n \sum_{k=0}^{n} f_k\, g_{n-k} \simeq \sum_{n\ge0} x^n \big(f_0 g_n + f_n g_0\big) = f(x) + g(x)    (2.20)

because the sum over k is dominated by f_n and/or g_n. We get the expression

  f(x)^p \simeq p\cdot f(x) \qquad \forall p\in\mathbb{R}    (2.21)

by generalizing the multiplication.

The pfd pullback

After reparameterization we get expressions of the form f(xg(x)), called pullbacks. For pfd's f(x) and g(x) we have derived the asymptotic behaviour of the factorially divergent pullback as

  f(xg(x)) \simeq \begin{cases} f_1\, x g(x) + f(x)\exp\!\big(\tfrac{g_1}{c_f}\big) & \text{if } \alpha_f = 1\\[2pt] f_1\, x g(x) + f(x) & \text{if } \alpha_f > 1\end{cases}    (2.22)

(see appendix A for the full derivation). Computing the actual expression for a pullback to large loop order is computationally heavy. Therefore we have used Horner's method [8] to optimize efficiency.

The pfd inversion⁵

If we start with the equation y = x f(x), where f(x) is pfd, then it is always⁶ possible to find a pfd series g(y) such that x = y g(y) solves the equation x = x f(x)\, g(x f(x)). We will be interested in two properties of the series g(y): the first-order term g₁ and the asymptotic behaviour.

⁵Sometimes called “reversion”.
First of all we can find g₁ by looking only at terms up to O(x²) in the equation

  x = x f(x)\, g(x f(x))
    = x\big(1 + f_1 x + O(x^2)\big)\Big(1 + g_1 x\big(1 + O(x)\big) + O(x^2)\Big)
    = x\big(1 + f_1 x + g_1 x + O(x^2)\big)
  \;\Rightarrow\; g_1 = -f_1    (2.23)

which gives us a remarkably simple relationship.

For the asymptotic behaviour, we can make use of previous results. We start by observing that y = x f(x) = y g(y) f(y g(y)) implies that

  1 = g(y)\, f(y g(y)) \simeq g(y) + f(y g(y)) \simeq \begin{cases} g(y) + f(y)\exp\!\big(\tfrac{g_1}{c_f}\big) & \text{if } \alpha_f = 1\\[2pt] g(y) + f(y) & \text{if } \alpha_f > 1 \end{cases}

where we threw away the pullback term f₁ y g(y) in the last step, since we know for sure that f₁ y g(y) ≺ g(y). Rearranging gives us

  g(y) \simeq \begin{cases} -f(y)\exp\!\big(-\tfrac{f_1}{c_f}\big) & \text{if } \alpha_f = 1\\[2pt] -f(y) & \text{if } \alpha_f > 1 \end{cases}    (2.24)

the asymptotic behaviour of g(y). Note that this means that g(y) ∼ f(y). Computing the actual expression of the inverse to large loop order is computationally heavy. Therefore we have used Maple's powseries:-reversion() command to optimize efficiency.

⁶This is guaranteed by the Lagrange inversion theorem.

2.2.1 Retrieving α_f and c_f

In order to use equations 2.22 and 2.24, we have to be able to retrieve f₁, c_f and α_f from a given series expansion f(x). Since f₁ is a low-order term, it can be computed by hand (see e.g. equation 2.37). Retrieving c_f and α_f from a series is nothing more than taking the large-n behaviour of the ratio

  \frac{f_{n+1}}{f_n} = c_f\,\frac{\Gamma(\alpha_f n + \beta_f + \alpha_f)}{\Gamma(\alpha_f n + \beta_f)} \approx c_f\cdot(\alpha_f n)^{\alpha_f}    (2.25)

and reading off c_f and α_f. The approximation (n+k)!/n! ≈ n^k for large n will turn out to be very useful to compute these ratios quickly.

2.3 The factorially divergent nature of Green's functions

The goal of this section is to demonstrate that Green's functions in ϕ³ theory are factorially divergent. The steps followed in this section hold for most theories described in this research, but in chapter 5 we will see that the definition of a factorially divergent series (equation 2.14) does not cover all zero-dimensional theories.
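Before applying this machinery to ϕ³ theory, the ratio test of equation 2.25 can be tried numerically on a synthetic fd series with known parameters (a minimal sketch, not from the thesis; the parameter values below are assumptions of the sketch):

```python
from math import gamma

# synthetic fd coefficients f_n = a * c**n * Gamma(alpha*n + beta)
a, c, alpha, beta = 1.0, 1.5, 1, 0.5
f = {n: a * c**n * gamma(alpha * n + beta) for n in range(1, 61)}

# eq. (2.25): f_{n+1}/f_n ~ c * (alpha*n)**alpha for large n
n = 50
ratio = f[n + 1] / f[n]
print(ratio / (alpha * n)**alpha)  # close to c = 1.5, up to an O(1/n) correction
```

Dividing out (α n)^α at a few different n values quickly reveals both α_f (from the growth of the ratio itself) and c_f (from the plateau of the divided ratio).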
Our road map is to first prove that H₀ is fd and then take this result to show, step by step, that all series in a theory are fd. Taking equation 2.6 with n = 0 and using Stirling's approximation

  n! \approx \sqrt{2\pi n}\left(\frac{n}{e}\right)^{n} \qquad\text{or}\qquad \Gamma(z) \approx \sqrt{\frac{2\pi}{z}}\left(\frac{z}{e}\right)^{z}

for large factorials, we get

  H_0 \propto \sum_{k\ge0} \frac{(6k)!}{36^k\,(2k)!\,(3k)!\,2^{3k}}\, u^k
      \simeq \sum_{k\ge0} \frac{1}{\sqrt{2\pi k}}\left(\frac{6^6}{36\cdot2^3\cdot2^2 3^3}\right)^{k}\left(\frac{k}{e}\right)^{k} u^k
      \simeq \sum_{k\ge0} \frac{1}{2\pi}\left(\frac{3}{2}\right)^{k}\Gamma(k)\, u^k    (2.26)

as proof of H₀ being fd with a_f = 1/2π, c_f = 3/2, α_f = 1 and β_f = 0. To go from H₀ to H_n we write H_n = \sum_{k\ge\lceil n/2\rceil} H_n^{(k)} u^k and calculate

  \lim_{k\to\infty}\frac{H_n^{(k)}}{H_0^{(k)}} = \left(\frac{-6\mu}{\lambda_3}\right)^{n} \frac{(6k)^{-2n}}{(2k)^{-n}(3k)^{-n}2^{-n}} = \left(\frac{-2\mu}{\lambda_3}\right)^{n}    (2.27)

implying that H_n ∼ H₀ ∀n, which means that all H_n are fd. The next step is to go from H_n to G_n. Writing \mathcal{H}_n(u) for the tail of H_n, we find

  G_n = \frac{H_n}{H_0} = L(H_n,u)\,\frac{\mathcal{H}_n(u)}{H_0} \simeq H_n - L(H_n,u)\cdot H_0 \sim H_n    (2.28)

because L(H_n, u) ∝ u^{⌈n/2⌉}, so that G_n is similar to H₀ as well. Going from G_n to C_n is a bit more subtle. C_n is equal to G_n plus a sum of products of G_m's with m < n. Since L(G_m, u) is at least O(u), we infer that (just like in equation 2.28) C_n ≃ G_n, which means also C_n ∼ H₀. So, indeed, all Green's functions are fd, as we wanted to show. As a bonus we also discovered that all (connected) Green's functions are asymptotically similar to one another. Because taking out the leading term to get the tails t_n(u) only shifts β_f, and all series in a theory are essentially a combination of these t_n's, all series in a theory have the same α_f and c_f.

2.4 Asymptotic behaviour of renormalized ϕ³ theory

The goal of this section is to predict the asymptotic behaviour as seen in figure 2.1 and verify this numerically. It is clear that the behaviour of C₁ is different from that of C_{n≥2}. Therefore we will first focus on C_{n≥2} (section 2.4.1) and then look at C₁ (section 2.4.2). To apply our fd knowledge, we must determine the mutual comparison between all tails t_n(u) and u(û).
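As a numerical aside, the conclusion of the previous section — that the coefficients of H₀ grow like (1/2π)(3/2)^k Γ(k) — can be checked directly. A minimal sketch (not from the thesis); working with logarithms avoids floating-point overflow of the huge factorials:

```python
from math import exp, lgamma, log, pi

def log_h0_coeff(k):
    """log of the exact coefficient of u**k in H_0 (eq. 2.6 with n = 0)."""
    return (lgamma(6 * k + 1) - k * log(36) - lgamma(2 * k + 1)
            - lgamma(3 * k + 1) - 3 * k * log(2))

def log_asym(k):
    """log of the predicted a_f * c_f**k * Gamma(k), a_f = 1/2pi, c_f = 3/2."""
    return -log(2 * pi) + k * log(1.5) + lgamma(k)

ratios = {k: exp(log_h0_coeff(k) - log_asym(k)) for k in (10, 40, 160)}
print(ratios)  # the ratios creep towards 1 as k grows, at a rate O(1/k)
```

The slow 1/k approach of the ratio to unity is exactly the O(1/n) correction in the definition of equation 2.14.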
We know that all connected Green's functions are similar to each other, and with the help of the general expressions for the connected Green's functions (equation 2.7) we find

  \forall n,m:\; C_n \sim C_m
  \;\Rightarrow\; \forall n\ge2:\; u\,t_1(u) \sim u^{n-1}\, t_n(u)
  \;\Rightarrow\; \forall n\ge4:\; t_1(u) \sim t_2(u) \prec t_3(u) \prec t_n(u)

where we substituted ℏ in favour of u. Immediately we also find

  \hat\rho(u) = \frac{t_3(u)^2}{t_2(u)^3} \simeq 2\,t_3(u) - 3\,t_2(u) \sim t_3(u)
  \;\Rightarrow\; u(\hat u) = \hat u\,\rho(\hat u) \prec t_{n\ge3}(\hat u)

from the pfd inversion. This means that the asymptotic behaviour of an expression will be dominated by the tail with the largest n.

2.4.1 The improvement factor

Our starting point for determining the asymptotic behaviour of C_{n≥2} after renormalization is the definition of the renormalized tails from equation 2.12. Substituting the definitions of µ̂ and λ̂₃ gives us

  \hat t_{n\ge2}(\hat u) \equiv \frac{\hat\mu}{\mu}\left(\frac{\lambda_3\hat\mu^2}{\hat\lambda_3\mu^2}\right)^{n-2} t_n(u(\hat u)) = \frac{t_2(u(\hat u))^{\,n-3}}{t_3(u(\hat u))^{\,n-2}}\; t_n(u(\hat u))    (2.29)

in which we recognize pfd products and pfd pullbacks. Using our fd knowledge we find

  \hat t_{n\ge4}(\hat u) \simeq (n-3)\,t_2(u(\hat u)) - (n-2)\,t_3(u(\hat u)) + t_n(u(\hat u))
    \simeq \big[(n-3)\,t_2^{(1)} - (n-2)\,t_3^{(1)} + t_n^{(1)}\big]\, u(\hat u) + \big[(n-3)\,t_2(\hat u) - (n-2)\,t_3(\hat u) + t_n(\hat u)\big]\exp\!\Big(\frac{\rho_1}{c_f}\Big)
    \simeq t_n(\hat u)\,\exp\!\Big(\frac{\rho_1}{c_f}\Big)    (2.30)

as our desired result. Alternatively the derivation⁷

  \hat t_{n\ge4}(\hat u) = \left[\frac{t_2(u)^{\,n-3}}{t_3(u)^{\,n-2}}\, t_n(u)\right]_{u=u(\hat u)} \simeq \big[t_n(u)\big]_{u=u(\hat u)} \simeq t_n(\hat u)\,\exp\!\Big(\frac{\rho_1}{c_f}\Big)    (2.31)

is a bit less rigorous but somewhat easier to follow. The fact that t̂_n(û) ∼ t_n(û) for n ≥ 4 means that we indeed predict that the ratio of the coefficients of the tails tends asymptotically to a constant value, as was already suggested by figure 2.1. This brings us to the improvement factor⁸ I for ϕ³ theory

  I[\varphi^3] \equiv \lim_{k\to\infty}\frac{t_n^{(k)}}{\hat t_n^{(k)}} = \exp\!\Big(-\frac{\rho_1}{c_f}\Big) = \exp\!\Big(\frac{\hat\rho_1}{c_f}\Big)    (2.32)

which turns out to be a computable constant that is independent of n. Of course we want to verify that we found the correct asymptotic value. For this we must compute ρ̂₁ and c_f.

⁷Note that this derivation holds only because t_n(u) ≻ u(û).

Determining ρ̂₁

ρ̂₁ is a low-order quantity, which means that it can be calculated by hand.
Recapitulating the definition of û(u), we have

  \hat u(u) = \frac{C_3^2}{C_2^3} = u\,\frac{t_3(u)^2}{t_2(u)^3}    (2.33)

on the one hand and