Supersymmetry, Supergravity, and Superstring Phenomenology
Total pages: 16
File type: PDF, size: 1020 KB
Recommended publications
Modern Coding Theory: The Statistical Mechanics and Computer Science Point of View
Modern Coding Theory: The Statistical Mechanics and Computer Science Point of View
Andrea Montanari (Stanford University, [email protected]) and Rüdiger Urbanke (EPFL, ruediger.urbanke@epfl.ch). February 12, 2007. arXiv:0704.2857v1 [cs.IT].
Abstract: These are the notes for a set of lectures delivered by the two authors at the Les Houches Summer School on ‘Complex Systems’ in July 2006. They provide an introduction to the basic concepts in modern (probabilistic) coding theory, highlighting connections with statistical mechanics. We also stress common concepts with other disciplines dealing with similar problems that can be generically referred to as ‘large graphical models’. While most of the lectures are devoted to the classical channel coding problem over simple memoryless channels, we present a discussion of more complex channel models. We conclude with an overview of the main open challenges in the field.
1 Introduction and Outline: The last few years have witnessed an impressive convergence of interests between disciplines which are a priori well separated: coding and information theory, statistical inference, statistical mechanics (in particular, mean-field disordered systems), as well as theoretical computer science. The underlying reason for this convergence is the importance of probabilistic models and/or probabilistic techniques in each of these domains. This has long been obvious in information theory [53], statistical mechanics [10], and statistical inference [45]. In the last few years it has also become apparent in coding theory and theoretical computer science. In the first case, the invention of Turbo codes [7] and the re-invention of Low-Density Parity-Check (LDPC) codes [30, 28] has motivated the use of random constructions for coding information in robust/compact ways [50].
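The memoryless channel coding problem at the center of these lectures can be made concrete with a toy experiment. The sketch below (an illustration in Python, not code from the notes; the function names are invented) pushes a rate-1/5 repetition code through a binary symmetric channel and decodes by majority vote:

```python
import random

def bsc(bits, p, rng):
    """Memoryless binary symmetric channel: flip each bit independently with probability p."""
    return [b ^ (rng.random() < p) for b in bits]

def encode_repetition(bits, n):
    """Rate-1/n repetition code: repeat each information bit n times."""
    return [b for b in bits for _ in range(n)]

def decode_repetition(received, n):
    """Majority vote over each block of n received bits."""
    return [int(sum(received[i:i + n]) > n // 2) for i in range(0, len(received), n)]

rng = random.Random(0)
message = [rng.randint(0, 1) for _ in range(10_000)]
received = bsc(encode_repetition(message, 5), p=0.1, rng=rng)
decoded = decode_repetition(received, 5)
errors = sum(m != d for m, d in zip(message, decoded))
print(f"bit error rate after decoding: {errors / len(message):.4f}  (raw channel: 0.1)")
```

For crossover probability p = 0.1 the decoded error rate drops to roughly 0.009 per bit (the probability that 3 or more of the 5 repetitions flip), at the cost of a five-fold rate loss; LDPC and Turbo codes achieve far better trade-offs, which is the subject of the lectures.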
Geometric Analysis of Shapes and Its Application to Medical Image Analysis
Geometric Analysis of Shapes and Its Application to Medical Image Analysis
by Anirban Mukhopadhyay (under the direction of Suchendra M. Bhandarkar)
Abstract: Geometric analysis of shapes plays an important role in the way the visual world is perceived by modern computers. To this end, low-level geometric features provide the most obvious and important cues towards understanding the visual scene. A novel intrinsic geometric surface descriptor, termed the Geodesic Field Estimate (GFE), is proposed. Also proposed is a parallel algorithm, well suited for implementation on Graphics Processing Units, for efficient computation of the shortest geodesic paths. Another low-level geometric descriptor, termed the Biharmonic Density Estimate, is proposed to provide an intrinsic geometric scale space signature for multiscale surface feature-based representation of deformable 3D shapes. The computer vision and graphics communities rely on mid-level geometric understanding as well to analyze a scene. Symmetry detection and partial shape matching play an important role as mid-level cues. A comprehensive framework for detection and characterization of partial intrinsic symmetry over 3D shapes is proposed. To identify prominent overlapping symmetric regions, the proposed framework is decoupled into Correspondence Space Voting followed by a Transformation Space Mapping procedure. Moreover, a novel multi-criteria optimization framework for matching of partially visible shapes in multiple images using joint geometric embedding is also proposed. The ultimate goal of geometric shape analysis is to address high-level applications of the modern world. This dissertation has focused on three different application scenarios. In the first scenario, a novel approach for the analysis of the non-rigid Left Ventricular (LV) endocardial surface from Multi-Detector CT images, using a generalized isometry-invariant Bag-of-Features (BoF) descriptor, is proposed and implemented.
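The shortest geodesic paths underlying the GFE are commonly approximated on a triangle mesh by running a shortest-path algorithm over the mesh's edge graph. The sketch below (a minimal serial illustration with a hypothetical toy mesh, not the parallel GPU algorithm proposed in the dissertation) uses Dijkstra's algorithm with Euclidean edge lengths:

```python
import heapq, math

def dijkstra(vertices, edges, source):
    """Single-source shortest paths over a mesh edge graph.

    vertices: list of (x, y, z) positions; edges: list of (i, j) index pairs.
    Returns approximate geodesic distances from `source` to every vertex.
    """
    adj = {i: [] for i in range(len(vertices))}
    for i, j in edges:
        w = math.dist(vertices[i], vertices[j])  # Euclidean edge length
        adj[i].append((j, w))
        adj[j].append((i, w))
    dist = [math.inf] * len(vertices)
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale heap entry
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

# Toy tetrahedron mesh (hypothetical data, for illustration only)
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(dijkstra(verts, edges, source=0))
```

Edge-graph path lengths only upper-bound the true surface geodesics; exact or smoothed geodesic methods refine them, which is where efficient parallel computation becomes valuable.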
An Introduction to Supersymmetry
An Introduction to Supersymmetry
Ulrich Theis, Institute for Theoretical Physics, Friedrich-Schiller-University Jena, Max-Wien-Platz 1, D-07743 Jena, Germany. [email protected]
This is a write-up of a series of five introductory lectures on global supersymmetry in four dimensions given at the 13th “Saalburg” Summer School 2007 in Wolfersdorf, Germany.
Contents: 1 Why supersymmetry? · 2 Weyl spinors in D=4 · 3 The supersymmetry algebra · 4 Supersymmetry multiplets · 5 Superspace and superfields · 6 Superspace integration · 7 Chiral superfields · 8 Supersymmetric gauge theories · 9 Supersymmetry breaking · 10 Perturbative non-renormalization theorems · A Sigma matrices
1 Why supersymmetry? When the Large Hadron Collider at CERN takes up operations soon, its main objective, besides confirming the existence of the Higgs boson, will be to discover new physics beyond the standard model of the strong and electroweak interactions. It is widely believed that what will be found is a (at energies accessible to the LHC softly broken) supersymmetric extension of the standard model. What makes supersymmetry such an attractive feature that the majority of the theoretical physics community is convinced of its existence? First of all, under plausible assumptions on the properties of relativistic quantum field theories, supersymmetry is the unique extension of the algebra of Poincaré and internal symmetries of the S-matrix. If new physics is based on such an extension, it must be supersymmetric. Furthermore, the quantum properties of supersymmetric theories are much better under control than in non-supersymmetric ones, thanks to powerful non-renormalization theorems.
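The unique extension referred to here enlarges the Poincaré algebra by fermionic charges Q. For orientation (a standard textbook form of the four-dimensional N = 1 algebra in Weyl-spinor notation, stated here for reference rather than quoted from the lectures):

```latex
\{ Q_\alpha, \bar{Q}_{\dot{\beta}} \} = 2\,\sigma^{\mu}_{\alpha\dot{\beta}}\,P_\mu, \qquad
\{ Q_\alpha, Q_\beta \} = \{ \bar{Q}_{\dot{\alpha}}, \bar{Q}_{\dot{\beta}} \} = 0, \qquad
[ Q_\alpha, P_\mu ] = 0 .
```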
Karen Keskulla Uhlenbeck
The Norwegian Academy of Science and Letters has decided to award the Abel Prize for 2019 to Karen Keskulla Uhlenbeck, University of Texas at Austin, “for her pioneering achievements in geometric partial differential equations, gauge theory and integrable systems, and for the fundamental impact of her work on analysis, geometry and mathematical physics.”
Karen Keskulla Uhlenbeck is a founder of modern Geometric Analysis. Her perspective has permeated the field and led to some of the most dramatic advances in mathematics in the last 40 years. Geometric analysis is a field of mathematics where techniques of analysis and differential equations are interwoven with the study of geometrical and topological problems. Specifically, one studies objects such as curves, surfaces, connections and fields which are critical points of functionals representing geometric quantities such as energy and volume. For example, minimal surfaces are critical points of the area and harmonic maps are critical points of the Dirichlet energy. Uhlenbeck’s major contributions include foundational results on minimal surfaces and harmonic maps, Yang-Mills theory, and integrable systems.
[...] by earlier work of Morse, guarantees existence of minimisers of geometric functionals and is successful in the case of 1-dimensional domains, such as closed geodesics. Uhlenbeck realised that the condition of Palais-Smale fails in the case of surfaces due to topological reasons. The papers of Uhlenbeck, co-authored with Sacks, on the energy functional for maps of surfaces into a Riemannian manifold, have been extremely influential and describe in detail what happens when the Palais-Smale condition is violated. A minimising sequence of mappings converges outside a finite set of singular points and, by using rescaling arguments, they describe the behaviour near the singularities as bubbles or instantons, which are the standard solutions of the minimising map from the 2-sphere to [...]
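The two functionals named above have compact standard definitions (stated here for reference, not part of the prize citation): for a map $u \colon M \to N$ between Riemannian manifolds and an immersed surface $\Sigma \subset N$,

```latex
E(u) = \frac{1}{2} \int_M |du|^2 \, d\mathrm{vol}_M , \qquad
A(\Sigma) = \int_\Sigma d\mathrm{vol}_\Sigma ,
```

with harmonic maps the critical points of the Dirichlet energy E and minimal surfaces the critical points of the area A.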
Clifford Algebras, Spinors and Supersymmetry. Francesco Toppan
IV Escola do CBPF, Rio de Janeiro, 15-26 July 2002
Algebraic Structures and the Search for the Theory of Everything: Clifford algebras, spinors and supersymmetry
Francesco Toppan, CCP - CBPF, Rua Dr. Xavier Sigaud 150, cep 22290-180, Rio de Janeiro (RJ), Brazil
Abstract: These lecture notes are intended to cover a small part of the material discussed in the course “Estruturas algebricas na busca da Teoria do Todo” (“Algebraic structures in the search for the Theory of Everything”). The Clifford algebras, necessary to introduce the Dirac equation for free spinors in any arbitrary signature space-time, are fully classified and explicitly constructed with the help of simple, but powerful, algorithms which are here presented. The notion of supersymmetry is introduced and discussed in the context of Clifford algebras.
1 Introduction: The basic motivation of the course consisted in familiarizing graduate students with some of the algebraic structures which are currently investigated by theoretical physicists in the attempt to find a consistent and unified quantum theory of the four known interactions. From both aesthetic and practical considerations, the classification of mathematical and algebraic structures is a preliminary and necessary requirement. Indeed, a very ambitious, but conceivable, hope for a unified theory is that no free parameter (or, less ambitiously, just a few) has to be fixed as an external input due to phenomenological requirements. Rather, all possible parameters should be predicted by the stringent consistency requirements put on such a theory. An example of this can be immediately given. It concerns the dimensionality of the space-time.
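One simple but powerful construction of the advertised kind is the standard Pauli-matrix recursion, which builds Euclidean gamma matrices in any dimension by repeated tensor products. The sketch below (an illustration of that textbook recursion, not necessarily the specific algorithms of these notes) produces matrices satisfying the Clifford relation {Γ_i, Γ_j} = 2δ_ij:

```python
import numpy as np

# Pauli matrices: gamma matrices for D = 2 (with sigma_3 appended, also D = 3)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def gammas(D):
    """Euclidean gamma matrices in D dimensions via the Pauli tensor-product recursion."""
    if D == 1:
        return [np.array([[1]], dtype=complex)]
    if D == 2:
        return [s1, s2]
    g = gammas(D - 2)
    eye = np.eye(g[0].shape[0])
    # Tensor the old set with sigma_1, then append two new anticommuting elements.
    new = [np.kron(s1, gi) for gi in g]
    new.append(np.kron(s2, eye))
    new.append(np.kron(s3, eye))
    return new

# Check {Gamma_i, Gamma_j} = 2 delta_ij in D = 6
G = gammas(6)
for i, gi in enumerate(G):
    for j, gj in enumerate(G):
        anti = gi @ gj + gj @ gi
        assert np.allclose(anti, 2 * (i == j) * np.eye(gi.shape[0]))
print(f"D = 6: {len(G)} gamma matrices of size {G[0].shape[0]}")
```

The doubling of matrix size with every two dimensions, visible in the recursion, is exactly the growth pattern that the classification of Clifford algebras organizes.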
The Superalgebra of the Supersymmetric Quantum Mechanics
CBPF-NF-019/07. Representations of the 1D N-Extended Supersymmetry Algebra
Francesco Toppan, CBPF, Rua Dr. Xavier Sigaud 150, 22290-180, Rio de Janeiro (RJ), Brazil. E-mail: [email protected]
Abstract: I review the present status of the classification of the irreducible representations of the algebra of the one-dimensional N-extended supersymmetry (the superalgebra of the Supersymmetric Quantum Mechanics) realized by linear derivative operators acting on a finite number of bosonic and fermionic fields.
1 The Superalgebra of the Supersymmetric Quantum Mechanics: The superalgebra of the Supersymmetric Quantum Mechanics (the 1D N-extended supersymmetry algebra) is given by $N$ odd generators $Q_i$ ($i = 1, \ldots, N$) and a single even generator $H$ (the Hamiltonian). It is defined by the (anti)commutation relations
$$\{Q_i, Q_j\} = 2\,\delta_{ij} H, \qquad [Q_i, H] = 0. \tag{1}$$
The knowledge of its representation theory is essential for the construction of off-shell invariant actions which can arise as a dimensional reduction of higher-dimensional supersymmetric theories and/or can be given by 1D supersymmetric sigma-models associated to some d-dimensional target manifold (see [1] and [2]). Two main classes of representations of (1) are considered in the literature: i) the non-linear realizations and ii) the linear representations. Non-linear realizations of (1) are only limited and partially understood (see [3] for recent results and a discussion). Linear representations, on the other hand, have been recently clarified and the program of their classification can be considered largely completed. In this work I will review the main results of the classification of the linear representations and point out which are the open problems.
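Relations (1) can be checked concretely in the smallest interesting case: for N = 2 the Pauli matrices give a two-state realization in which H comes out as the identity. The sketch below (an illustrative numerical verification, not taken from the paper) confirms the (anti)commutators:

```python
import numpy as np

# An N = 2 toy realization: Q1 = sigma_1, Q2 = sigma_2, H = identity
Q = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex)]
H = np.eye(2, dtype=complex)

for i in range(2):
    for j in range(2):
        anti = Q[i] @ Q[j] + Q[j] @ Q[i]            # {Q_i, Q_j}
        assert np.allclose(anti, 2 * (i == j) * H)  # = 2 delta_ij H
    comm = Q[i] @ H - H @ Q[i]                      # [Q_i, H]
    assert np.allclose(comm, 0)
print("relations (1) verified for the N = 2 Pauli realization")
```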
The Packing Problem in Statistics, Coding Theory and Finite Projective Spaces: Update 2001
The packing problem in statistics, coding theory and finite projective spaces: update 2001
J.W.P. Hirschfeld, School of Mathematical Sciences, University of Sussex, Brighton BN1 9QH, United Kingdom. http://www.maths.sussex.ac.uk/Staff/JWPH, [email protected]
L. Storme, Department of Pure Mathematics, Krijgslaan 281, Ghent University, B-9000 Gent, Belgium. http://cage.rug.ac.be/~ls, [email protected]
Abstract: This article updates the authors’ 1998 survey [133] on the same theme that was written for the Bose Memorial Conference (Colorado, June 7-11, 1995). That article contained the principal results on the packing problem, up to 1995. Since then, considerable progress has been made on different kinds of subconfigurations.
1 Introduction. 1.1 The packing problem: The packing problem in statistics, coding theory and finite projective spaces regards the determination of the maximal or minimal sizes of given subconfigurations of finite projective spaces. This problem is not only interesting from a geometrical point of view; it also arises when coding-theoretical problems and problems from the design of experiments are translated into equivalent geometrical problems. The geometrical interest in the packing problem and the links with problems investigated in other research fields have given this problem a central place in Galois geometries, that is, the study of finite projective spaces. In 1983, a historical survey on the packing problem was written by the first author [126] for the 9th British Combinatorial Conference. A new survey article stating the principal results up to 1995 was written by the authors for the Bose Memorial Conference [133].
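A smallest nontrivial instance of the packing problem is finding the largest arc (a set of points, no three collinear) in the plane PG(2, q). The brute-force sketch below (purely illustrative; the survey's methods are far more refined) recovers the classical answer q + 1 = 4 for q = 3 in a few seconds:

```python
from itertools import combinations, product

q = 3  # work in PG(2, 3): 13 points

def normalize(v):
    """Scale a nonzero vector over GF(q), q prime, so its first nonzero entry is 1."""
    first = next(x for x in v if x != 0)
    inv = next(a for a in range(1, q) if (a * first) % q == 1)
    return tuple((inv * x) % q for x in v)

# Points of PG(2, q) as normalized homogeneous coordinates
points = sorted({normalize(v) for v in product(range(q), repeat=3) if any(v)})

def collinear(a, b, c):
    """Three points are collinear iff the 3x3 determinant vanishes mod q."""
    det = (a[0] * (b[1] * c[2] - b[2] * c[1])
           - a[1] * (b[0] * c[2] - b[2] * c[0])
           + a[2] * (b[0] * c[1] - b[1] * c[0]))
    return det % q == 0

def is_arc(pts):
    return not any(collinear(*t) for t in combinations(pts, 3))

# Search downward from |PG(2, q)| for the largest arc
for k in range(len(points), 0, -1):
    if any(is_arc(s) for s in combinations(points, k)):
        print(f"largest arc in PG(2,{q}) has {k} points")
        break
```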
Adinkras for Mathematicians
Adinkras for Mathematicians
Yan X Zhang, Massachusetts Institute of Technology. November 28, 2011. arXiv:1111.6055v1 [math.CO].
Abstract: Adinkras are graphical tools created for the study of representations in supersymmetry. Besides having inherent interest for physicists, adinkras offer many easy-to-state and accessible mathematical problems of algebraic, combinatorial, and computational nature. We use a more mathematically natural language to survey these topics, suggest new definitions, and present original results.
1 Introduction: In a series of papers starting with [8], different subsets of the “DFGHILM collaboration” (Doran, Faux, Gates, Hübsch, Iga, Landweber, Miller) have built and extended the machinery of adinkras. Following the spirit of Feynman diagrams, adinkras are combinatorial objects that encode information about the representation theory of supersymmetry algebras. Adinkras have many intricate links with other fields such as graph theory, Clifford theory, and coding theory. Each of these connections provides many problems that can be compactly communicated to a (non-specialist) mathematician. This paper is a humble attempt to bridge the language gap and generate communication. We redevelop the foundations in a self-contained manner in Sections 2 and 4, using different definitions and constructions that we consider to be more mathematically natural for our purposes. Using our new setup, we prove some original results and make new interpretations in Sections 5 and 6. We wish that these purely combinatorial discussions will equip the readers with a mental model that allows them to appreciate (or to solve!) the original representation-theoretic problems in the physics literature. We return to these problems in Section 7 and reconsider some of the foundational questions of the theory.
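The simplest adinkra topology is an N-dimensional hypercube: vertices are length-N bit-strings, bosons and fermions alternate by bit parity, and each of the N edge colors flips one fixed bit. A minimal sketch of this underlying colored graph (heights and dashings, which carry the actual representation data, are omitted; the names are invented):

```python
from itertools import product

def hypercube_adinkra(N):
    """Build the N-cube chromotopology: vertices are bit-strings,
    and an edge of color i connects strings differing in bit i."""
    vertices = list(product((0, 1), repeat=N))
    edges = []
    for v in vertices:
        for i in range(N):
            if v[i] == 0:  # add each edge exactly once
                w = v[:i] + (1,) + v[i + 1:]
                edges.append((v, w, i))  # (vertex, vertex, color)
    return vertices, edges

verts, edges = hypercube_adinkra(3)
bosons = [v for v in verts if sum(v) % 2 == 0]    # even parity -> bosonic
fermions = [v for v in verts if sum(v) % 2 == 1]  # odd parity -> fermionic
print(len(bosons), "bosons,", len(fermions), "fermions,", len(edges), "colored edges")
```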
Supersymmetry
Supersymmetry. Josh Kantor.
1. Introduction: The purpose of these notes is to give a short and (overly?) simple description of supersymmetry for mathematicians. Our description is far from complete and should be thought of as a first pass at the ideas that arise from supersymmetry. Fundamental to supersymmetry is the mathematics of Clifford algebras and spin groups. We will describe the mathematical results we are using, but we refer the reader to the references for proofs. In particular [4], [1], and [5] all cover spinors nicely.
2. Spin and Clifford Algebras: We will first review the definition of spin, spinors, and Clifford algebras. Let $V$ be a vector space over $\mathbb{R}$ or $\mathbb{C}$ with some nondegenerate quadratic form. The Clifford algebra of $V$, $\mathcal{C}l(V)$, is the algebra generated by $V$ and $1$, subject to the relations $v \cdot v = \langle v, v \rangle \, 1$, or equivalently $v \cdot w + w \cdot v = 2 \langle v, w \rangle$. Note that elements of $\mathcal{C}l(V)$ can be written as polynomials in $V$, and this gives a splitting $\mathcal{C}l(V) = \mathcal{C}l(V)^0 \oplus \mathcal{C}l(V)^1$. Here $\mathcal{C}l(V)^0$ is the set of elements of $\mathcal{C}l(V)$ which can be written as a linear combination of products of even numbers of vectors from $V$, and $\mathcal{C}l(V)^1$ is the set of elements which can be written as a linear combination of products of odd numbers of vectors from $V$. More succinctly, $\mathcal{C}l(V)$ is just the quotient of the tensor algebra of $V$ by the ideal generated by $v \otimes v - \langle v, v \rangle \, 1$.
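These relations can be exercised with a toy multiplication table for basis blades, assuming an orthonormal basis with $e_i^2 = +1$ (the positive-definite case; the notes allow any nondegenerate form). The sketch below verifies the generator relation and the even/odd splitting:

```python
def blade_mul(a, b):
    """Geometric product of two basis blades of Cl(R^n) with e_i^2 = +1.
    A blade is a sorted tuple of generator indices; returns (sign, blade)."""
    coeff = 1
    out = list(a)
    for i in b:
        # slide e_i leftward to its sorted position; each swap is a factor -1
        coeff *= (-1) ** sum(1 for j in out if j > i)
        if i in out:
            out.remove(i)  # e_i * e_i = +1
        else:
            out.append(i)
            out.sort()
    return coeff, tuple(out)

# Verify e_i e_j + e_j e_i = 2 delta_ij on the generators of Cl(R^3)
for i in range(1, 4):
    for j in range(1, 4):
        s_ij, b_ij = blade_mul((i,), (j,))
        s_ji, b_ji = blade_mul((j,), (i,))
        if i == j:
            assert b_ij == b_ji == () and s_ij + s_ji == 2
        else:
            assert b_ij == b_ji and s_ij + s_ji == 0  # distinct generators anticommute

# The product of two even-degree blades is even: the splitting Cl(V)^0 + Cl(V)^1
s, b = blade_mul((1, 2), (2, 3))
assert len(b) % 2 == 0
print("Clifford relations and Z2-grading verified; e12 * e23 =", s, b)
```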
The Use of Coding Theory in Computational Complexity
The Use of Coding Theory in Computational Complexity
Joan Feigenbaum, AT&T Bell Laboratories, Murray Hill, NJ, USA. jf@research.att.com
Abstract: The interplay of coding theory and computational complexity theory is a rich source of results and problems. This article surveys three of the major themes in this area: the use of codes to improve algorithmic efficiency; the theory of program testing and correcting, which is a complexity-theoretic analogue of error detection and correction; and the use of codes to obtain characterizations of traditional complexity classes such as NP and PSPACE. These new characterizations are in turn used to show that certain combinatorial optimization problems are as hard to approximate closely as they are to solve exactly.
Introduction: Complexity theory is the study of efficient computation. Faced with a computational problem that can be modelled formally, a complexity theorist seeks first to find a solution that is provably efficient and, if such a solution is not found, to prove that none exists. Coding theory, which provides techniques for robust representation of information, is valuable both in designing efficient solutions and in proving that efficient solutions do not exist. This article surveys the use of codes in complexity theory. The improved upper bounds obtained with coding techniques include bounds on the number of random bits used by probabilistic algorithms and the communication complexity of cryptographic protocols. In these constructions, codes are used to design small sample spaces that approximate the behavior [...]
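A concrete instance of the error detection and correction this survey builds on (an illustrative sketch, not from the article): the Hamming(7,4) code maps 4 message bits to 7, and its parity-check matrix reads off the position of any single flipped bit:

```python
import numpy as np

# Hamming(7,4): generator G and parity-check H over GF(2);
# the columns of H are the binary representations of 1..7
G = np.array([[1, 1, 0, 1],
              [1, 0, 1, 1],
              [1, 0, 0, 0],
              [0, 1, 1, 1],
              [0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def encode(msg4):
    return G @ np.array(msg4) % 2

def correct(word7):
    """Syndrome decoding: the syndrome, read as binary, is the error position."""
    s = H @ word7 % 2
    pos = int(s[0]) * 4 + int(s[1]) * 2 + int(s[2])
    if pos:  # nonzero syndrome -> flip the offending bit
        word7 = word7.copy()
        word7[pos - 1] ^= 1
    return word7

codeword = encode([1, 0, 1, 1])
received = codeword.copy()
received[4] ^= 1  # corrupt one bit in transit
assert np.array_equal(correct(received), codeword)
print("single-bit error corrected")
```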
Algorithmic Coding Theory
Algorithmic Coding Theory*
Atri Rudra, State University of New York at Buffalo
1 Introduction: Error-correcting codes (or just codes) are clever ways of representing data so that one can recover the original information even if parts of it are corrupted. The basic idea is to judiciously introduce redundancy so that the original information can be recovered when parts of the (redundant) data have been corrupted. Perhaps the most natural and common application of error-correcting codes is for communication. For example, when packets are transmitted over the Internet, some of the packets get corrupted or dropped. To deal with this, multiple layers of the TCP/IP stack use a form of error correction called CRC Checksum [Peterson and Davis, 1996]. Codes are used when transmitting data over the telephone line or via cell phones. They are also used in deep space communication and in satellite broadcast. Codes also have applications in areas not directly related to communication. For example, codes are used heavily in data storage. CDs and DVDs can work even in the presence of scratches precisely because they use codes. Codes are used in Redundant Array of Inexpensive Disks (RAID) [Chen et al., 1994] and error-correcting memory [Chen and Hsiao, 1984]. Codes are also deployed in other applications such as paper bar codes; for example, the bar code used by UPS, called MaxiCode [Chandler et al., 1989].
(* To appear as a book chapter in the CRC Handbook on Algorithms and Complexity Theory, edited by M. Atallah and M. Blanton. Please send your comments to [email protected].)
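The CRC checksum mentioned above is an error-detecting (not error-correcting) code: the sender appends a checksum and the receiver recomputes it to spot corruption. A minimal sketch using the standard-library CRC-32 (illustrating the idea only; actual TCP/IP layers use specific CRC variants and framing):

```python
import zlib

def send(payload: bytes) -> bytes:
    """Append a CRC-32 checksum (4 bytes, big-endian) to the payload."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def receive(frame: bytes) -> bytes:
    """Verify the checksum; raise if the frame was corrupted in transit."""
    payload, checksum = frame[:-4], int.from_bytes(frame[-4:], "big")
    if zlib.crc32(payload) != checksum:
        raise ValueError("checksum mismatch: frame corrupted")
    return payload

frame = send(b"hello, world")
assert receive(frame) == b"hello, world"

corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]  # flip one bit
try:
    receive(corrupted)
except ValueError as e:
    print(e)
```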