Limits on Efficient Computation in the Physical World


Limits on Efficient Computation in the Physical World

by Scott Joel Aaronson

Bachelor of Science (Cornell University) 2000

A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Computer Science in the Graduate Division of the University of California, Berkeley.

Committee in charge:
Professor Umesh Vazirani, Chair
Professor Luca Trevisan
Professor K. Birgitta Whaley

Fall 2004

Copyright 2004 by Scott Joel Aaronson

Abstract

More than a speculative technology, quantum computing seems to challenge our most basic intuitions about how the physical world should behave. In this thesis I show that, while some intuitions from classical computer science must be jettisoned in the light of modern physics, many others emerge nearly unscathed; and I use powerful tools from computational complexity theory to help determine which are which.

In the first part of the thesis, I attack the common belief that quantum computing resembles classical exponential parallelism, by showing that quantum computers would face serious limitations on a wider range of problems than was previously known. In particular, any quantum algorithm that solves the collision problem—that of deciding whether a sequence of n integers is one-to-one or two-to-one—must query the sequence Ω(n^{1/5}) times. This resolves a question that was open for years; previously no lower bound better than constant was known. A corollary is that there is no “black-box” quantum algorithm to break cryptographic hash functions or solve the Graph Isomorphism problem in polynomial time. I also show that relative to an oracle, quantum computers could not solve NP-complete problems in polynomial time, even with the help of nonuniform “quantum advice states”; and that any quantum algorithm needs Ω(2^{n/4}/n) queries to find a local minimum of a black-box function on the n-dimensional hypercube. Surprisingly, the latter result also leads to new classical lower bounds for the local search problem. Finally, I give new lower bounds on quantum one-way communication complexity, and on the quantum query complexity of total Boolean functions and recursive Fourier sampling.
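To make the collision problem concrete, here is a minimal classical sketch (my own illustration; the function names are not from the thesis). A sequence promised to be either one-to-one or two-to-one can be distinguished classically with roughly √n random queries, thanks to the birthday paradox; Brassard, Høyer, and Tapp's quantum algorithm improves this to O(n^{1/3}) queries, so the Ω(n^{1/5}) bound pins the quantum advantage down to such polynomial factors.

```python
import random

def classify_exhaustively(seq):
    """Classify with n queries a sequence promised to be 1-to-1 or 2-to-1."""
    counts = {}
    for v in seq:
        counts[v] = counts.get(v, 0) + 1
    return "2-to-1" if all(c == 2 for c in counts.values()) else "1-to-1"

def birthday_test(seq):
    """Classical randomized test using about 3*sqrt(n) queries.

    If seq is 2-to-1, the birthday paradox makes a repeated value likely
    among Theta(sqrt(n)) distinct positions; if seq is 1-to-1, distinct
    positions can never yield equal values, so the error is one-sided.
    """
    n = len(seq)
    budget = min(n, 3 * int(n ** 0.5) + 1)  # loose constant, for illustration
    seen = set()
    for i in random.sample(range(n), budget):  # distinct positions
        if seq[i] in seen:
            return "2-to-1"   # genuine collision witnessed
        seen.add(seq[i])
    return "1-to-1"           # no collision found: probably one-to-one

n = 10_000
two_to_one = [i // 2 for i in range(n)]   # each value appears exactly twice
random.shuffle(two_to_one)
print(classify_exhaustively(two_to_one))  # 2-to-1
print(birthday_test(two_to_one))          # 2-to-1 with high probability
```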
The second part of the thesis studies the relationship of the quantum computing model to physical reality. I first examine the arguments of Leonid Levin, Stephen Wolfram, and others who believe quantum computing to be fundamentally impossible. I find their arguments unconvincing without a “Sure/Shor separator”—a criterion that separates the already-verified quantum states from those that appear in Shor’s factoring algorithm. I argue that such a separator should be based on a complexity classification of quantum states, and go on to create such a classification. Next I ask what happens to the quantum computing model if we take into account that the speed of light is finite—and in particular, whether Grover’s algorithm still yields a quadratic speedup for searching a database. Refuting a claim by Benioff, I show that the surprising answer is yes. Finally, I analyze hypothetical models of computation that go even beyond quantum computing. I show that many such models would be as powerful as the complexity class PP, and use this fact to give a simple, quantum-computing-based proof that PP is closed under intersection. On the other hand, I also present one model—wherein we could sample the entire history of a hidden variable—that appears to be more powerful than standard quantum computing, but only slightly so.
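For the speed-of-light question, a rough accounting (my own illustration, not the thesis's argument; constants and log factors suppressed) shows what is at stake: n items stored at bounded density in d spatial dimensions occupy a region of radius Θ(n^{1/d}), so a naive use of Grover's algorithm that traverses the database on every iteration forfeits the speedup in low dimension.

```latex
% Naive locality accounting for Grover search over n items in d dimensions.
% Packing at bounded density forces a region of radius R = \Theta(n^{1/d}).
% A robot that crosses the region once per Grover iteration takes
\[
   T_{\text{naive}}
     \;=\; \underbrace{\Theta(\sqrt{n})}_{\text{iterations}}
           \cdot \underbrace{\Theta(n^{1/d})}_{\text{travel per iteration}}
     \;=\; \Theta\bigl(n^{\frac{1}{2}+\frac{1}{d}}\bigr),
\]
% which for d = 2 is \Theta(n): no better than classical exhaustive search.
% Chapter 14's recursive search nevertheless achieves
% O(\sqrt{n} \cdot \mathrm{polylog}\, n) for d = 2 and O(\sqrt{n}) for d >= 3,
% which is how the thesis answers Benioff's objection affirmatively.
```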
Professor Umesh Vazirani, Dissertation Committee Chair

Contents

List of Figures
List of Tables
1  “Aren’t You Worried That Quantum Computing Won’t Pan Out?”
2  Overview
   2.1  Limitations of Quantum Computers
        2.1.1  The Collision Problem
        2.1.2  Local Search
        2.1.3  Quantum Certificate Complexity
        2.1.4  The Need to Uncompute
        2.1.5  Limitations of Quantum Advice
   2.2  Models and Reality
        2.2.1  Skepticism of Quantum Computing
        2.2.2  Complexity Theory of Quantum States
        2.2.3  Quantum Search of Spatial Regions
        2.2.4  Quantum Computing and Postselection
        2.2.5  The Power of History
3  Complexity Theory Cheat Sheet
   3.1  The Complexity Zoo Junior
   3.2  Notation
   3.3  Oracles
4  Quantum Computing Cheat Sheet
   4.1  Quantum Computers: N Qubits
   4.2  Further Concepts

I  Limitations of Quantum Computers

5  Introduction
   5.1  The Quantum Black-Box Model
   5.2  Oracle Separations
6  The Collision Problem
   6.1  Motivation
        6.1.1  Oracle Hardness Results
        6.1.2  Information Erasure
   6.2  Preliminaries
   6.3  Reduction to Bivariate Polynomial
   6.4  Lower Bound
   6.5  Set Comparison
   6.6  Open Problems
7  Local Search
   7.1  Motivation
   7.2  Preliminaries
   7.3  Relational Adversary Method
   7.4  Snakes
   7.5  Specific Graphs
        7.5.1  Boolean Hypercube
        7.5.2  Constant-Dimensional Grid Graph
8  Quantum Certificate Complexity
   8.1  Summary of Results
   8.2  Related Work
   8.3  Characterization of Quantum Certificate Complexity
   8.4  Quantum Lower Bound for Total Functions
   8.5  Asymptotic Gaps
        8.5.1  Local Separations
        8.5.2  Symmetric Partial Functions
   8.6  Open Problems
9  The Need to Uncompute
   9.1  Preliminaries
   9.2  Quantum Lower Bound
   9.3  Open Problems
10 Limitations of Quantum Advice
   10.1  Preliminaries
         10.1.1  Quantum Advice
         10.1.2  The Almost As Good As New Lemma
   10.2  Simulating Quantum Messages
         10.2.1  Simulating Quantum Advice
   10.3  A Direct Product Theorem for Quantum Search
   10.4  The Trace Distance Method
         10.4.1  Applications
   10.5  Open Problems
11 Summary of Part I

II  Models and Reality

12 Skepticism of Quantum Computing
   12.1  Bell Inequalities and Long-Range Threads
13 Complexity Theory of Quantum States
   13.1  Sure/Shor Separators
   13.2  Classifying Quantum States
   13.3  Basic Results
   13.4  Relations Among Quantum State Classes
   13.5  Lower Bounds
         13.5.1  Subgroup States
         13.5.2  Shor States
         13.5.3  Tree Size and Persistence of Entanglement
   13.6  Manifestly Orthogonal Tree Size
   13.7  Computing With Tree States
   13.8  The Experimental Situation
   13.9  Conclusion and Open Problems
14 Quantum Search of Spatial Regions
   14.1  Summary of Results
   14.2  Related Work
   14.3  The Physics of Databases
   14.4  The Model
         14.4.1  Locality Criteria
   14.5  General Bounds
   14.6  Search on Grids
         14.6.1  Amplitude Amplification
         14.6.2  Dimension At Least 3
         14.6.3  Dimension 2
         14.6.4  Multiple Marked Items
         14.6.5  Unknown Number of Marked Items
   14.7  Search on Irregular Graphs
         14.7.1  Bits Scattered on a Graph
   14.8  Application to Disjointness
   14.9  Open Problems
15 Quantum Computing and Postselection
   15.1  The Class PostBQP
   15.2  Fantasy Quantum Mechanics
   15.3  Open Problems
16 The Power of History
   16.1  The Complexity of Sampling Histories
   16.2  Outline of Chapter
   16.3  Hidden-Variable Theories
         16.3.1  Comparison with Previous Work
         16.3.2  Objections
   16.4  Axioms for Hidden-Variable Theories
         16.4.1  Comparing Theories
   16.5  Impossibility Results
   16.6  Specific Theories
         16.6.1  Flow Theory
         16.6.2  Schrödinger Theory
   16.7  The Computational Model
         16.7.1  Basic Results
   16.8  The Juggle Subroutine
   16.9  Simulating SZK
   16.10 Search in N^{1/3} Queries
   16.11 Conclusions and Open Problems
17 Summary of Part II

Bibliography