Bogoliubov-Parasiuk-Hepp-Zimmermann Renormalization in Configuration Space
Dissertation approved by the Faculty of Physics and Earth Sciences of Universität Leipzig for the academic degree of Doctor rerum naturalium (Dr. rer. nat.), submitted by Diplom-Physiker Steffen Pottel, born 09 July 1984 in Neubrandenburg (Germany). Reviewers: Professor Klaus Sibold and Professor Raimar Wulkenhaar. Date of award: 18 June 2018.

Bibliographic Data
Bogoliubov-Parasiuk-Hepp-Zimmermann Renormalization in Configuration Space (German title: Bogoliubov-Parasiuk-Hepp-Zimmermann Renormierung im Konfigurationsraum)
Pottel, Steffen
Universität Leipzig, Dissertation, 2017
111 pages, 0 figures, 98 references

Abstract: In the present work, the momentum space renormalization scheme of Bogoliubov, Parasiuk, Hepp and Zimmermann is transferred to a position space formalism and extended to analytic spacetimes in the framework of algebraic quantum field theory. The proof of the scheme does not rely on any momentum space arguments. Nevertheless, the relation between the two formulations is analyzed, and differences as well as limitations under Fourier transformation are pointed out. Furthermore, normal products, which constitute a generalization of Wick ordering, Zimmermann identities and the linear field equation are derived within the new renormalization prescription.

Acknowledgement

I would like to thank Klaus Sibold for countless discussions and the opportunity to work on numerous topics in modern physics. His passion for renormalization theory and his veneration for Wolfhart Zimmermann served as inspiration for the present work. I am grateful for the support by my second supervisor, Rainer Verch. Further, I would like to thank the members of the "Quantum Field Theory and Gravitation" group as well as the "Elementary Particle Theory" group at the Institute for Theoretical Physics for their support and for creating a productive working environment.
In particular, the scientific coffee breaks organized by Gandalf Lechner and Ko Sanders are much appreciated. I would like to thank Jan Zschoche for many helpful discussions and for proofreading parts of the present work. I am grateful for the kind hospitality at the Max Planck Institute for Mathematics in the Sciences and for the financial support by the International Max Planck Research School "Mathematics in the Sciences". Above all, my family and Camilla deserve my sincerest thanks for their considerable patience and unlimited support.

Contents

1 Introduction
2 Preliminaries
  2.1 The Wave Equation on Analytic Spacetimes
  2.2 Quantization of Fields
3 BPHZ Renormalization in Configuration Space
  3.1 Regularization
  3.2 Convergence of the R-operation
    3.2.1 Space Decomposition
    3.2.2 Reordering
    3.2.3 Recursion
  3.3 Recovering Time-Ordered Products
4 Relation to the Momentum Space Method
  4.1 Infrared Scaling
  4.2 Limit of Constant Coupling
  4.3 Additional Subtractions
5 Normal Products
  5.1 Zimmermann Identity
  5.2 Normal Products
  5.3 Field Equation
6 Conclusion
Appendix
  A Analytic Continuation
  B Taylor Operator

Chapter 1
Introduction

In laboratory experiments, scientists aim at extracting quantitative information about a physical system via observables. The system is prepared in a state and should produce the same result every time the experiment is repeated. This reproducibility need not be exact for every measurement, nor is it restricted to the exact same system. Specifically, a classical mechanical system is completely determined by its position and momentum at any given fixed time. In the Hamiltonian formulation, this corresponds to a point in phase space, and the observables are represented by (smooth) functions.
The system is said to be in a completely prepared state, since every measurement reproduces the same values any time it is repeated. For systems with a large particle number, an accurate preparation is often possible only with respect to certain macroscopic parameters like total energy, volume or total particle number. In theoretical approaches, one considers a large number of copies of the system, which mathematically corresponds to the assignment of a distribution in phase space, such that a measurement may be identified with the expectation value of a random variable, the observable.

The latter experiences a sudden shift on microscopic scales, where quantum effects set in. Namely, one does not want to change the notion of a system but rather lets the observables account for the quantum character. For instance, in quantum mechanics, the simultaneous measurement of conjugate observables, i.e. Fourier transform duals like position and momentum, is possible only up to a minimal uncertainty. Absorption and emission processes of photons on a microscopic level demand the incorporation of special relativity, and thus the study of quantum fields.

In the conventional approach to quantum field theory [IZ80] on Minkowski space, one relies greatly on its large symmetry group, which leads to a distinguished ground state, the particle interpretation of excitations and a preferred representation, the Fock space, based on asymptotic conditions [LSZ55, LSZ57]. However, the physically most interesting models seem to be impenetrable, since the involved quantum fields fulfill nonlinear partial differential equations, for which there is no general well-posedness theory available and, hence, generally no control over correlation functions. One usually evades the problem by using perturbation theory and approximates the interacting theory by a formal power series in the nonlinearity about the free theory, i.e. the model with linear equation of motion.
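The generic trouble with such formal power series can be previewed in a zero-dimensional toy model, where the "path integral" is an ordinary integral. The following sketch (the coupling g = 0.01 and the truncation orders are illustrative choices, not taken from the text) expands Z(g) = (2π)^(-1/2) ∫ exp(-x²/2 - g x⁴) dx perturbatively in g; the partial sums first approach the exact value and then run away, as expected of an asymptotic series.

```python
import math
import numpy as np

# Zero-dimensional toy model with a divergent (asymptotic) perturbation
# series: Z(g) = (2*pi)^(-1/2) * Integral exp(-x^2/2 - g*x^4) dx.
# The coupling g and the truncation orders are illustrative choices.
g = 0.01

# "exact" value by brute-force quadrature on a fine grid
x = np.linspace(-10.0, 10.0, 200001)
dx = x[1] - x[0]
z_exact = np.sum(np.exp(-x**2 / 2 - g * x**4)) * dx / np.sqrt(2 * np.pi)

def partial_sum(order):
    """Perturbative expansion sum_n (-g)^n (4n-1)!! / n! up to 'order'.

    The coefficient (4n-1)!! is the Gaussian moment <x^(4n)>.
    """
    total = 0.0
    for n in range(order + 1):
        double_fact = math.prod(range(1, 4 * n, 2))  # (4n-1)!!; empty product = 1 for n = 0
        total += (-g) ** n * double_fact / math.factorial(n)
    return total

# Near the optimal truncation order the error is small ...
print(abs(partial_sum(6) - z_exact))
# ... but the series is only asymptotic: high orders diverge badly.
print(abs(partial_sum(25) - z_exact))
```

The ratio of successive terms grows linearly in the order, so for every fixed g > 0 there is an optimal truncation beyond which adding terms makes the approximation worse.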
This formal expansion suffers, apart from the lack of knowledge regarding convergence of the series in four and higher dimensions, from some shortcomings related to physics. Undeniably, the self-interaction of the system is not fully reproduced when the formation of bound states is neglected or long range interactions cannot be treated satisfactorily. But even at finite order of the perturbative expansion, almost all physical quantities are ill-defined, so that the approximation turns out to be too rough to cope with the properties of quantum fields. Due to Heisenberg's uncertainty principle, the latter have to be delocalized in spacetime and are promoted to distributions, by which physical quantities that are nonlinear in the field are afflicted with products of distributions defined at the same point. Nevertheless, one may proceed and introduce a regularization, with which the physical quantities attain finite values, but this raises the question whether the regularization is consistent with all necessary physical principles. If it can be implemented coherently for all physical quantities and respects a set of physically reasonable axioms, the regularization prescription is called a renormalization scheme.

In this regard, Feynman, Tomonaga and Schwinger were the pioneers with their work on loop corrections in quantum electrodynamics. Their results were studied more constructively by Dyson [Dys49a], leading to a broad and intense development in the field of renormalization theory [Vel76], which vests perturbative quantum field theory with high predictive power. Ultimately, the questionable recipe of simply summing the perturbation series up to some finite order and calling this a prediction is justified only by its overwhelming success: experiments can be described with an accuracy of parts per million.
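The subtraction idea behind such renormalization prescriptions can be caricatured in one dimension: if an integrand behaves like f(x)/x² near x = 0, subtracting from f its Taylor polynomial at the singular point, up to the degree of divergence, renders the integral finite. The sketch below (with the illustrative choice f = cos, whose degree-1 Taylor polynomial at 0 is the constant 1) is only an analogy for Taylor subtraction, not the actual R-operation on Feynman graphs.

```python
import numpy as np

# Toy subtraction: Integral_eps^1 cos(x)/x^2 dx diverges as eps -> 0,
# but subtracting the Taylor polynomial of cos at the singular point
# (here just the constant 1, since the linear term vanishes) leaves a
# bounded integrand, ~ -1/2 near x = 0, and hence a finite integral.
def integrand_raw(x):
    return np.cos(x) / x**2          # non-integrable at x = 0

def integrand_subtracted(x):
    return (np.cos(x) - 1.0) / x**2  # bounded near x = 0

def integrate(f, eps, n=200000):
    """Crude Riemann sum of f over [eps, 1]."""
    x = np.linspace(eps, 1.0, n)
    return np.sum(f(x)) * (x[1] - x[0])

# The unsubtracted integral grows like 1/eps as the cutoff is removed,
for eps in (1e-2, 1e-3):
    print(eps, integrate(integrand_raw, eps))

# while the subtracted one converges to a finite value.
for eps in (1e-3, 1e-4):
    print(eps, integrate(integrand_subtracted, eps))
```

The ambiguity of renormalization shows up even here: one may subtract the Taylor polynomial of f about a different point, or add any finite multiple of the deleted terms back, and still obtain a finite answer, which is the one-dimensional shadow of the counterterm freedom discussed next.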
The existence of several renormalization prescriptions indicates that the problem of rendering perturbatively defined correlation functions among quantum fields well-defined does not have a unique solution. Indeed, it can be proved [Hep69] that any two renormalization schemes are related to each other by a finite change in the choice of ambiguities, the so-called counterterms. The first example of such an equivalence was established between Bogoliubov-Parasiuk-Hepp renormalization, abbreviated BPH, and analytic renormalization.

The idea of Bogoliubov and Parasiuk [BP57] is to employ a variation of the Hadamard regularization of singular integrals [RS81, Chapter 5] for the subtraction of divergent contributions, abbreviated as the R-operation. The variation of the Hadamard regularization is necessary due to the structure of the quantities in question. Specifically, the (time-ordered) correlation functions, evaluated in the ground state, allow for an expansion in weights over so-called Feynman graphs, where the inner or interaction vertices may be treated as functions of spacetime rather than distributions. The aforementioned divergent contributions may occur at points of coinciding vertices in the integration over all configurations of such Feynman graphs in spacetime. For their analysis, it is convenient to consider the Fourier transform of the correlation functions, since the complexity of the involved functions is reduced and the question of renormalizability changes over to the study of integrations over free (loop) momenta. The latter may be best understood in regard to the convolution theorem: the (generalized) convolutions of weights at incident edges of an integrated graph become standard pointwise products (including energy-momentum conservation at that vertex), and any pair of paths which are disjoint with respect to their edge sets but share the same endpoints transforms into a convolution. After choosing a basis of loops, the number of