Mathematical Aspects of Classical Field Theory
16 pages · PDF, 1020 KB
Recommended publications
-
Modern Coding Theory: The Statistical Mechanics and Computer Science Point of View
Modern Coding Theory: The Statistical Mechanics and Computer Science Point of View. Andrea Montanari (Stanford University, [email protected]) and Rüdiger Urbanke (EPFL, ruediger.urbanke@epfl.ch). February 12, 2007. arXiv:0704.2857v1 [cs.IT]. Abstract: These are the notes for a set of lectures delivered by the two authors at the Les Houches Summer School on 'Complex Systems' in July 2006. They provide an introduction to the basic concepts in modern (probabilistic) coding theory, highlighting connections with statistical mechanics. We also stress common concepts with other disciplines dealing with similar problems that can be generically referred to as 'large graphical models'. While most of the lectures are devoted to the classical channel coding problem over simple memoryless channels, we present a discussion of more complex channel models. We conclude with an overview of the main open challenges in the field. 1 Introduction and Outline. The last few years have witnessed an impressive convergence of interests between disciplines which are a priori well separated: coding and information theory, statistical inference, statistical mechanics (in particular, mean field disordered systems), as well as theoretical computer science. The underlying reason for this convergence is the importance of probabilistic models and/or probabilistic techniques in each of these domains. This has long been obvious in information theory [53], statistical mechanics [10], and statistical inference [45]. In the last few years it has also become apparent in coding theory and theoretical computer science. In the first case, the invention of Turbo codes [7] and the re-invention of Low-Density Parity-Check (LDPC) codes [30, 28] has motivated the use of random constructions for coding information in robust/compact ways [50]. -
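The "simple memoryless channels" mentioned in the abstract can be illustrated with a minimal simulation of a binary symmetric channel, where each transmitted bit is flipped independently with crossover probability p. This is a generic textbook sketch, not code from the lecture notes; the function name `bsc_transmit` and its parameters are illustrative.

```python
import random

def bsc_transmit(bits, p, rng=random.Random(0)):
    """Send a bit sequence through a binary symmetric channel:
    each bit is flipped independently with crossover probability p."""
    return [b ^ (rng.random() < p) for b in bits]

word = [1, 0, 1, 1, 0, 0, 1, 0]
received = bsc_transmit(word, p=0.1)
errors = sum(a != b for a, b in zip(word, received))
print(f"sent {word}, received {received}: {errors} bit flip(s)")
```

Channel coding then amounts to adding redundancy to `word` so that the receiver can undo the flips with high probability.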
Foundations of Probability Theory and Statistical Mechanics
Chapter 6. Foundations of Probability Theory and Statistical Mechanics. Edwin T. Jaynes, Department of Physics, Washington University, St. Louis, Missouri. 1. What Makes Theories Grow? Scientific theories are invented and cared for by people; and so have the properties of any other human institution: vigorous growth when all the factors are right; stagnation, decadence, and even retrograde progress when they are not. And the factors that determine which it will be are seldom the ones (such as the state of experimental or mathematical techniques) that one might at first expect. Among factors that have seemed, historically, to be more important are practical considerations, accidents of birth or personality of individual people; and above all, the general philosophical climate in which the scientist lives, which determines whether efforts in a certain direction will be approved or deprecated by the scientific community as a whole. However much the "pure" scientist may deplore it, the fact remains that military or engineering applications of science have, over and over again, provided the impetus without which a field would have remained stagnant. We know, for example, that Archimedes' work in mechanics was at the forefront of efforts to defend Syracuse against the Romans; and that Rumford's experiments which led eventually to the first law of thermodynamics were performed in the course of boring cannon. The development of microwave theory and techniques during World War II, and the present high level of activity in plasma physics are more recent examples of this kind of interaction; and it is clear that the past decade of unprecedented advances in solid-state physics is not entirely unrelated to commercial applications, particularly in electronics. -
Quantum Field Theory
Quantum Field Theory. Frank Wilczek, Institute for Advanced Study, School of Natural Science, Olden Lane, Princeton, NJ 08540. I discuss the general principles underlying quantum field theory, and attempt to identify its most profound consequences. The deepest of these consequences result from the infinite number of degrees of freedom invoked to implement locality. I mention a few of its most striking successes, both achieved and prospective. Possible limitations of quantum field theory are viewed in the light of its history. I. SURVEY. Quantum field theory is the framework in which the regnant theories of the electroweak and strong interactions, which together form the Standard Model, are formulated. Quantum electrodynamics (QED), besides providing a complete foundation for atomic physics and chemistry, has supported calculations of physical quantities with unparalleled precision. The experimentally measured value of the magnetic dipole moment of the muon, $(g-2)_{\mathrm{exp}} = 233\,184\,600\,(1680) \times 10^{-11}$ (1), for example, should be compared with the theoretical prediction $(g-2)_{\mathrm{theor}} = 233\,183\,478\,(308) \times 10^{-11}$ (2). In quantum chromodynamics (QCD) we cannot, for the foreseeable future, aspire to comparable accuracy. Yet QCD provides different, and at least equally impressive, evidence for the validity of the basic principles of quantum field theory. Indeed, because in QCD the interactions are stronger, QCD manifests a wider variety of phenomena characteristic of quantum field theory. These include especially running of the effective coupling with distance or energy scale and the phenomenon of confinement. -
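A quick arithmetic check, not part of Wilczek's text, shows what "unparalleled precision" means here: the experiment–theory difference in the two quoted values is small compared with the combined one-sigma uncertainty, so the numbers agree within errors. All values are in units of $10^{-11}$, as quoted above.

```python
import math

# (g-2) values for the muon in units of 1e-11, as quoted in the text:
# central value, with the one-sigma uncertainty given in parentheses.
exp_val, exp_err = 233_184_600, 1680
th_val, th_err = 233_183_478, 308

diff = exp_val - th_val              # 1122
sigma = math.hypot(exp_err, th_err)  # combined uncertainty, about 1708
print(f"difference = {diff}, i.e. {diff / sigma:.2f} combined sigma")
```

Since the difference is well under one combined sigma, the measurement and the prediction are statistically consistent at the precision quoted.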
Randomized Algorithms Week 1: Probability Concepts
600.664: Randomized Algorithms. Professor: Rao Kosaraju, Johns Hopkins University. Week 1: Probability Concepts. 1.1 Motivation for Randomized Algorithms. In a randomized algorithm you can toss a fair coin as a step of computation. Alternatively, a bit taking values 0 and 1 with equal probabilities can be chosen in a single step. More generally, in a single step an element out of n elements can be chosen with equal probabilities (uniformly at random). In such a setting, our goal can be to design an algorithm that minimizes run time. Randomized algorithms are often advantageous for several reasons. They may be faster than deterministic algorithms. They may also be simpler. In addition, there are problems that can be solved with randomization for which we cannot design efficient deterministic algorithms directly. In some of those situations, we can design attractive deterministic algorithms by first designing randomized algorithms and then converting them into deterministic algorithms by applying standard derandomization techniques. Deterministic Algorithm: worst-case running time, i.e. the number of steps the algorithm takes on the worst input of length n. Randomized Algorithm: 1) expected number of steps the algorithm takes on the worst input of length n; 2) w.h.p. (with high probability) bounds. As an example of a randomized algorithm, we now present the classic quicksort algorithm and derive its performance later on in the chapter. We are given a set S of n distinct elements and we want to sort them. Below is the randomized quicksort algorithm. Algorithm RandQuickSort(S = {a1, a2, ..., an}): If |S| ≤ 1 then output S; else: choose a pivot element ai uniformly at random (u.a.r.) from S; split the set S into two subsets S1 = {aj | aj < ai} and S2 = {aj | aj > ai} by comparing each aj with the chosen ai; recurse on sets S1 and S2; output the sorted set S1, then ai, and then the sorted S2. -
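The RandQuickSort pseudocode above translates directly into runnable code. The sketch below follows the stated steps (random pivot, three-way split on distinct elements, recursion); the name `rand_quicksort` is illustrative, not taken from the scribe notes.

```python
import random

def rand_quicksort(s):
    """Randomized quicksort on a list of distinct elements,
    following the RandQuickSort pseudocode: random pivot, split, recurse."""
    if len(s) <= 1:
        return list(s)
    pivot = random.choice(s)                 # pivot chosen uniformly at random
    s1 = [a for a in s if a < pivot]         # elements smaller than the pivot
    s2 = [a for a in s if a > pivot]         # elements larger than the pivot
    return rand_quicksort(s1) + [pivot] + rand_quicksort(s2)

print(rand_quicksort([5, 3, 8, 1, 9, 2]))  # [1, 2, 3, 5, 8, 9]
```

Whatever pivots the coin tosses produce, the output is the sorted set; only the running time is random, which is exactly the quantity the notes analyze in expectation.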
Electromagnetic Field Theory
Electromagnetic Field Theory. Bo Thidé, Swedish Institute of Space Physics and Department of Astronomy and Space Physics, Uppsala University, Sweden, and School of Mathematics and Systems Engineering, Växjö University, Sweden. Upsilon Books, Communa AB, Uppsala, Sweden. Also available: Electromagnetic Field Theory Exercises by Tobia Carozzi, Anders Eriksson, Bengt Lundborg, Bo Thidé and Mattias Waldenvik, freely downloadable from www.plasma.uu.se/CED. This book was typeset in LaTeX 2ε (based on TeX 3.14159 and Web2C 7.4.2) on an HP Visualize 9000/360 workstation running HP-UX 11.11. Copyright © 1997, 1998, 1999, 2000, 2001, 2002, 2003 and 2004 by Bo Thidé, Uppsala, Sweden. All rights reserved. ISBN X-XXX-XXXXX-X. Downloaded from http://www.plasma.uu.se/CED/Book. Version released 19th June 2004 at 21:47. Preface. The current book is an outgrowth of the lecture notes that I prepared for the four-credit course Electrodynamics that was introduced in the Uppsala University curriculum in 1992, to become the five-credit course Classical Electrodynamics in 1997. To some extent, parts of these notes were based on lecture notes prepared, in Swedish, by Bengt Lundborg, who created, developed and taught the earlier, two-credit course Electromagnetic Radiation at our faculty. Intended primarily as a textbook for physics students at the advanced undergraduate or beginning graduate level, it is hoped that the present book may be useful for research workers -
L∞ Algebras for Extended Geometry from Borcherds Superalgebras
Commun. Math. Phys. 369, 721–760 (2019). Communications in Mathematical Physics. DOI: https://doi.org/10.1007/s00220-019-03451-2. L∞ Algebras for Extended Geometry from Borcherds Superalgebras. Martin Cederwall, Jakob Palmkvist, Division for Theoretical Physics, Department of Physics, Chalmers University of Technology, 412 96 Gothenburg, Sweden. E-mail: [email protected]; [email protected]. Received: 24 May 2018 / Accepted: 15 March 2019. Published online: 26 April 2019. © The Author(s) 2019. Abstract: We examine the structure of gauge transformations in extended geometry, the framework unifying double geometry, exceptional geometry, etc. This is done by giving the variations of the ghosts in a Batalin–Vilkovisky framework, or equivalently, an L∞ algebra. The L∞ brackets are given as derived brackets constructed using an underlying Borcherds superalgebra B(g_{r+1}), which is a double extension of the structure algebra g_r. The construction includes a set of "ancillary" ghosts. All brackets involving the infinite sequence of ghosts are given explicitly. All even brackets above the 2-brackets vanish, and the coefficients appearing in the brackets are given by Bernoulli numbers. The results are valid in the absence of ancillary transformations at ghost number 1. We present evidence that in order to go further, the underlying algebra should be the corresponding tensor hierarchy algebra. Contents: 1. Introduction (722); 2. The Borcherds Superalgebra (723); 3. Section Constraint and Generalised Lie Derivatives (728); 4. Derivatives, Generalised Lie Derivatives and Other Operators (730); 4.1 The derivative (730); 4.2 Generalised Lie derivative from "almost derivation" (731); 4.3 "Almost covariance" and related operators ... -
Probability Theory Review for Machine Learning
Probability Theory Review for Machine Learning. Samuel Ieong. November 6, 2006. 1 Basic Concepts. Broadly speaking, probability theory is the mathematical study of uncertainty. It plays a central role in machine learning, as the design of learning algorithms often relies on probabilistic assumptions about the data. This set of notes attempts to cover some basic probability theory that serves as a background for the class. 1.1 Probability Space. When we speak about probability, we often refer to the probability of an event of uncertain nature taking place. For example, we speak about the probability of rain next Tuesday. Therefore, in order to discuss probability theory formally, we must first clarify what the possible events are to which we would like to attach probability. Formally, a probability space is defined by the triple (Ω, F, P), where • Ω is the space of possible outcomes (or outcome space), • F ⊆ 2^Ω (the power set of Ω) is the space of (measurable) events (or event space), • P is the probability measure (or probability distribution) that maps an event E ∈ F to a real value between 0 and 1 (think of P as a function). Given the outcome space Ω, there are some restrictions as to what subsets of 2^Ω can be considered an event space F: • The trivial event Ω and the empty event ∅ are in F. • The event space F is closed under (countable) union, i.e., if α, β ∈ F, then α ∪ β ∈ F. • The event space F is closed under complement, i.e., if α ∈ F, then (Ω \ α) ∈ F. -
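For a finite outcome space, the three closure conditions on an event space can be checked mechanically. The sketch below is only an illustration (the helper names `powerset` and `is_event_space` are not from the notes): it verifies that the full power set of a small Ω, and also the trivial event space {∅, Ω}, satisfy all three conditions.

```python
from itertools import chain, combinations

def powerset(omega):
    """All subsets of omega, as frozensets (the full event space 2^Omega)."""
    items = list(omega)
    return {frozenset(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))}

def is_event_space(omega, F):
    """Check the three conditions: Omega and the empty event are in F,
    F is closed under union, and F is closed under complement."""
    omega = frozenset(omega)
    if omega not in F or frozenset() not in F:
        return False
    if any(a | b not in F for a in F for b in F):
        return False
    return all(omega - a in F for a in F)

omega = {1, 2, 3}
print(is_event_space(omega, powerset(omega)))                   # True
print(is_event_space(omega, {frozenset(), frozenset(omega)}))   # True
```

Dropping the complement {2, 3} while keeping {1} would make the check fail, which is exactly the closure-under-complement restriction stated above.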
Perturbative Algebraic Quantum Field Theory
Perturbative algebraic quantum field theory. Klaus Fredenhagen (II. Inst. f. Theoretische Physik, Universität Hamburg, Luruper Chaussee 149, D-22761 Hamburg, Germany, [email protected]) and Katarzyna Rejzner (Department of Mathematics, University of Rome Tor Vergata, Via della Ricerca Scientifica 1, I-00133 Rome, Italy, [email protected]). arXiv:1208.1428v2 [math-ph] 27 Feb 2013. These notes are based on the course given by Klaus Fredenhagen at the Les Houches Winter School in Mathematical Physics (January 29 - February 3, 2012) and the course QFT for mathematicians given by Katarzyna Rejzner in Hamburg for the Research Training Group 1670 (February 6-11, 2012). Both courses were meant as an introduction to the modern approach to perturbative quantum field theory and are aimed both at mathematicians and physicists. Contents: 1. Introduction (3); 2. Algebraic quantum mechanics (5); 3. Locally covariant field theory (9); 4. Classical field theory (14); 5. Deformation quantization of free field theories (21); 6. Interacting theories and the time ordered product (26); 7. Renormalization (26); A. Distributions and wavefront sets (35). 1 Introduction. Quantum field theory (QFT) is at present, by far, the most successful description of fundamental physics. Elementary physics is, to a large extent, explained by a specific quantum field theory, the so-called Standard Model. All the essential structures of the standard model are nowadays experimentally verified. Outside of particle physics, quantum field theoretical concepts have been successfully applied also to condensed matter physics. In spite of its great achievements, quantum field theory also suffers from several longstanding open problems. The most serious problem is the incorporation of gravity. -
The Packing Problem in Statistics, Coding Theory and Finite Projective Spaces: Update 2001
The packing problem in statistics, coding theory and finite projective spaces: update 2001. J.W.P. Hirschfeld, School of Mathematical Sciences, University of Sussex, Brighton BN1 9QH, United Kingdom, http://www.maths.sussex.ac.uk/Staff/JWPH, [email protected]. L. Storme, Department of Pure Mathematics, Ghent University, Krijgslaan 281, B-9000 Gent, Belgium, http://cage.rug.ac.be/~ls, [email protected]. Abstract: This article updates the authors' 1998 survey [133] on the same theme that was written for the Bose Memorial Conference (Colorado, June 7-11, 1995). That article contained the principal results on the packing problem, up to 1995. Since then, considerable progress has been made on different kinds of subconfigurations. 1 Introduction. 1.1 The packing problem. The packing problem in statistics, coding theory and finite projective spaces regards the determination of the maximal or minimal sizes of given subconfigurations of finite projective spaces. This problem is not only interesting from a geometrical point of view; it also arises when coding-theoretical problems and problems from the design of experiments are translated into equivalent geometrical problems. The geometrical interest in the packing problem and the links with problems investigated in other research fields have given this problem a central place in Galois geometries, that is, the study of finite projective spaces. In 1983, a historical survey on the packing problem was written by the first author [126] for the 9th British Combinatorial Conference. A new survey article stating the principal results up to 1995 was written by the authors for the Bose Memorial Conference [133]. -
Adinkras for Mathematicians (arXiv:1111.6055v1 [math.CO])
Adinkras for Mathematicians. Yan X Zhang, Massachusetts Institute of Technology. November 28, 2011. Abstract: Adinkras are graphical tools created for the study of representations in supersymmetry. Besides having inherent interest for physicists, adinkras offer many easy-to-state and accessible mathematical problems of algebraic, combinatorial, and computational nature. We use a more mathematically natural language to survey these topics, suggest new definitions, and present original results. 1 Introduction. In a series of papers starting with [8], different subsets of the "DFGHILM collaboration" (Doran, Faux, Gates, Hübsch, Iga, Landweber, Miller) have built and extended the machinery of adinkras. Following the spirit of Feynman diagrams, adinkras are combinatorial objects that encode information about the representation theory of supersymmetry algebras. Adinkras have many intricate links with other fields such as graph theory, Clifford theory, and coding theory. Each of these connections provides many problems that can be compactly communicated to a (non-specialist) mathematician. This paper is a humble attempt to bridge the language gap and generate communication. We redevelop the foundations in a self-contained manner in Sections 2 and 4, using different definitions and constructions that we consider to be more mathematically natural for our purposes. Using our new setup, we prove some original results and make new interpretations in Sections 5 and 6. We wish that these purely combinatorial discussions will equip the readers with a mental model that allows them to appreciate (or to solve!) the original representation-theoretic problems in the physics literature. We return to these problems in Section 7 and reconsider some of the foundational questions of the theory. -
The Use of Coding Theory in Computational Complexity
The Use of Coding Theory in Computational Complexity. Joan Feigenbaum, AT&T Bell Laboratories, Murray Hill, NJ, USA, [email protected]. Abstract: The interplay of coding theory and computational complexity theory is a rich source of results and problems. This article surveys three of the major themes in this area: the use of codes to improve algorithmic efficiency; the theory of program testing and correcting, which is a complexity-theoretic analogue of error detection and correction; and the use of codes to obtain characterizations of traditional complexity classes such as NP and PSPACE; these new characterizations are in turn used to show that certain combinatorial optimization problems are as hard to approximate closely as they are to solve exactly. Introduction. Complexity theory is the study of efficient computation. Faced with a computational problem that can be modelled formally, a complexity theorist seeks first to find a solution that is provably efficient and, if such a solution is not found, to prove that none exists. Coding theory, which provides techniques for robust representation of information, is valuable both in designing efficient solutions and in proving that efficient solutions do not exist. This article surveys the use of codes in complexity theory. The improved upper bounds obtained with coding techniques include bounds on the number of random bits used by probabilistic algorithms and the communication complexity of cryptographic protocols. In these constructions, codes are used to design small sample spaces that approximate the behavior