Computational Complexity: An Empirical View
Maël Pégny


To cite this version: Maël Pégny. Computational Complexity: An Empirical View. 39th Annual Convention of the Society for the Study of Artificial Intelligence and the Simulation of Behaviour (AISB 2013), Apr 2013, Exeter, United Kingdom. hal-01470374.

HAL Id: hal-01470374, https://hal.archives-ouvertes.fr/hal-01470374. Submitted on 17 Feb 2017.

Abstract. Computational complexity theory (CCT) is usually construed as the mathematical study of the complexity of computational problems. In recent years, research work on unconventional computational models has called into question the purely mathematical nature of CCT, and has revealed potential relations between its subject matter and the empirical sciences. In particular, recent debates surrounding quantum computing have raised the possibility of a new computational model, one based on quantum mechanics, which may be exponentially more efficient than any previously known machine. In this paper, I will show how those recent debates suggest an alternative, empirical view of CCT. I will then examine the fundamental arguments in favor of this view, its consequences for our conception of CCT as a scientific field, and how further philosophical investigation should proceed.

1 Introduction: unconventional computing and philosophy

In the last decades, computer scientists have shown a growing interest in the study of unconventional computational models. Those models stem from a broadened understanding of computation. In his original paper, A. Turing made it clear that his model was explicitly inspired by human, pen-and-paper computation (see [28], section 9). In many unconventional computational models, computation is no longer reduced to the discrete process of pen-and-paper computation: computation is any process that allows the encoding of an input value into the state of a given system, the processing of that information according to a given specification, and the reading of an output value. Physical, chemical and biological processes can be harnessed to perform computations in this broadened sense. Algorithms can no longer be conceived as abstract mathematical methods whose conception and theoretical properties are completely independent of implementation: algorithms now become abstract models of empirical processes.

This new approach to computing raises new challenges for computer scientists. In the case of Turing machines, and other historical models, empirical realizability was not a pending issue: the possibility of implementing these models by pen-and-paper computation was built into their very design. Many unconventional computational models, on the contrary, raise more serious issues of empirical realizability. Furthermore, defining the set of computable functions, or computational costs, according to these models might depend on considerations of empirical realizability. In this broadened framework of computation, it becomes natural to think that not only logical and mathematical arguments, but also empirical ones, are relevant to decide whether a given model is realistic, and what its exact computational power is.

To the extent of my knowledge, there is a striking asymmetry in the philosophical literature² between the treatment of computability issues and that of complexity issues, when it comes to analyzing the role of empirical considerations. Foundational issues of computability, especially the debate on the possibility of hypercomputation, have been the object of intensive scrutiny (see [12], [6], [27], [21], [9], to mention only a few references). The relations between computational complexity and empirical considerations have received somewhat less attention (see [22], [23], [16], [17], [30], [2], for a few remarkable exceptions). In particular, how this empirical turn might affect our conception of CCT as a discipline has not been the subject of a systematic philosophical treatment. The aim of this work is to synthesize the dispersed remarks present in the literature, and to provide such a systematic treatment.

After articulating the orthodox, mathematical conception of CCT (section 1), I use recent analyses of the Extended Church-Turing Thesis (ECTT) to present an empirical interpretation of CCT (section 2). I then examine how research on unconventional computational models, especially quantum computing, can support arguments in favor of this empirical interpretation (section 3). Finally, I discuss how this interpretation offers a new vision of the subject-matter of CCT, and of its unity as a scientific field: CCT can be seen as an empirical science, founded on empirical hypotheses (section 4).

2 The subject-matter of CCT, and the nature of computation

What is the object of computational complexity theory (abbreviated as CCT³)? In the literature, CCT is usually defined as a mathematical science quantifying and studying the difficulty of mathematical, computational problems ([13], [5]). Two remarks must be made to properly understand this definition.

First, complexity properties are, prima facie, properties of algorithms. They become intrinsic properties of a given problem through optimality results, which demonstrate that no algorithm can do better than a given, optimal one. It is this quantification over all possible algorithms, and the quest for optimality results, that distinguishes CCT from Algorithmic Analysis.

Second, and foremost, CCT as a domain rests on three distinct hypotheses concerning the nature of its subject matter and the nature of its results:

• Hardware-independence. CCT concepts and results are independent of low-level hardware details. This is one of the central reasons why the number of steps needed by a given algorithm is described up to a constant factor, using the O notation. The exact number of steps taken by an algorithm on a given machine could be sensitive to hardware details, but its gross magnitude should not show that sensitivity (a toy illustration follows at the end of this excerpt).

• Model-independence. To be well defined, computational costs must be considered within a given computational model, i.e. a mathematical model describing precisely how algorithmic instructions are to be written, and how they are to be executed. But the main concepts and results of CCT are independent of any particular choice of model, e.g. Turing machine or register machine.

• Relevance for implementation. Despite hardware- and model-independence, CCT results are relevant to determine the concrete values of time and memory size needed by a concrete device when it executes a given algorithm.

The ECTT, in its usual phrasing, states that any reasonable computational model can be simulated by a probabilistic Turing machine with at most polynomial overhead in time. Model-independence matters here for two reasons. The first is that, without it, complexity properties could not be attributed to computational tasks in an absolute sense, but only relatively to a given model. The second reason is that the model-independence of polynomial time is one of the main arguments for the identification of polynomial-time complexity with tractability, and of superpolynomial time complexity with intractability (see, for instance, [19]).

The obvious escape clause in the ECTT is the adjective "reasonable". It suggests the existence of computational models that go beyond the limits drawn by the ECTT, but are characterized as "unreasonable". Yet without any specification of what a reasonable computational model should look like, the ECTT is not only informal, but also not well-defined.

In recent years, several computer scientists and physicists, including P. Shor, U. Vazirani, E. Bernstein and D. Aharonov, suggested that "empirically realizable"⁴ should be substituted for "reasonable" in the above phrasing (see [26], [7], [4]). The suggestion is quite natural, since empirical realizability is a necessary condition of implementation.

A straightforward identification of "reasonable" with "empirically realizable" might nevertheless be questionable. As P. van Emde Boas noticed in [29], a computational model might violate […]

Footnotes:
¹ Université de Paris-1, CEA-Larsim, France, email: [email protected]
² By "philosophical literature", I do not only mean the literature written by professional philosophers, but also the foundational considerations published by physicists, computer scientists, and logicians.
³ I will actually focus my attention on uniform computational complexity theory. The reason for this restriction is simply that I am only interested in computational models that can actually be implemented. If one thinks of the characterization of non-uniform complexity classes by resource-bounded Turing machines receiving advice, one immediately faces the problem of the actual origin of such advice. In full generality, it is unclear whether non-uniform computational models can be considered as empirically realizable models, or if they are to be considered as abstract tools used by the theorist. Even though that might be a very interesting philosophical issue, I do not think that such a short, synthetic work is the proper place to discuss such details.
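To make the hardware-independence hypothesis concrete, here is the toy sketch announced above (my own illustration, not the paper's): the same linear-scan algorithm is charged under two invented machine models; the exact step counts differ, but only by a constant factor, which is exactly what the O notation abstracts away.

```python
# Toy illustration (not from the paper): one algorithm costed under two
# hypothetical machine models. Exact counts are hardware-sensitive; the
# gross magnitude (here, linear growth) is not.

def steps_model_a(n: int) -> int:
    # Model A charges 1 step per element scanned (comparison only).
    return n

def steps_model_b(n: int) -> int:
    # Model B charges 3 steps per element (load, compare, advance).
    return 3 * n

for n in (10, 100, 1_000, 10_000):
    a, b = steps_model_a(n), steps_model_b(n)
    print(f"n={n:>6}  model A: {a:>6}  model B: {b:>6}  B/A = {b / a:.1f}")
# B/A stays at the constant 3.0, so both counts are O(n).
```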
Recommended publications
  • The Multiplicative Weights Update Method: a Meta-Algorithm and Applications
THEORY OF COMPUTING, Volume 8 (2012), pp. 121–164. www.theoryofcomputing.org. RESEARCH SURVEY. The Multiplicative Weights Update Method: A Meta-Algorithm and Applications. Sanjeev Arora*, Elad Hazan, Satyen Kale. Received: July 22, 2008; revised: July 2, 2011; published: May 1, 2012.

Abstract: Algorithms in varied fields use the idea of maintaining a distribution over a certain set and use the multiplicative update rule to iteratively change these weights. Their analyses are usually very similar and rely on an exponential potential function. In this survey we present a simple meta-algorithm that unifies many of these disparate algorithms and derives them as simple instantiations of the meta-algorithm. We feel that since this meta-algorithm and its analysis are so simple, and its applications so broad, it should be a standard part of algorithms courses, like "divide and conquer." ACM Classification: G.1.6. AMS Classification: 68Q25. Key words and phrases: algorithms, game theory, machine learning.

1 Introduction

The Multiplicative Weights (MW) method is a simple idea which has been repeatedly discovered in fields as diverse as Machine Learning, Optimization, and Game Theory. The setting for this algorithm is the following. A decision maker has a choice of n decisions, and needs to repeatedly make a decision and obtain an associated payoff. The decision maker's goal, in the long run, is to achieve a total payoff which is comparable to the payoff of that fixed decision that maximizes the total payoff with the benefit of hindsight.

*This project was supported by a David and Lucile Packard Fellowship and NSF grants MSPA-MCS 0528414 and CCR-0205594.
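As a concrete companion to the setting just described, here is a minimal Python sketch of the multiplicative weights rule in its common costs-in-[-1, 1] form (my own illustration; the cost process below is invented, and the survey's meta-algorithm is more general):

```python
import random

# Minimal sketch of the Multiplicative Weights update rule (standard form,
# not the survey's full meta-algorithm): maintain one weight per decision,
# play proportionally to the weights, penalize costly decisions.

def multiplicative_weights(n_decisions, n_rounds, cost_fn, eta=0.1):
    weights = [1.0] * n_decisions
    total_cost = 0.0
    for t in range(n_rounds):
        total = sum(weights)
        probs = [w / total for w in weights]      # play decision i w.p. w_i / sum_j w_j
        choice = random.choices(range(n_decisions), probs)[0]
        costs = cost_fn(t)                        # cost vector m(t), entries in [-1, 1]
        total_cost += costs[choice]
        # Multiplicative update: shrink each weight in proportion to its cost.
        weights = [w * (1 - eta * m) for w, m in zip(weights, costs)]
    return total_cost

# Invented example: decision 0 is cheaper on average than decision 1, so the
# algorithm's total cost approaches that of the best fixed decision.
random.seed(0)
print(multiplicative_weights(
    2, 1000, lambda t: [0.1 * random.random(), 0.5 + 0.1 * random.random()]))
```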
  • Limits on Efficient Computation in the Physical World
Limits on Efficient Computation in the Physical World. Scott Joel Aaronson. Bachelor of Science (Cornell University) 2000. A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Computer Science in the Graduate Division of the University of California, Berkeley. Committee in charge: Professor Umesh Vazirani (Chair), Professor Luca Trevisan, Professor K. Birgitta Whaley. Fall 2004. Copyright 2004 by Scott Joel Aaronson.

Abstract. More than a speculative technology, quantum computing seems to challenge our most basic intuitions about how the physical world should behave. In this thesis I show that, while some intuitions from classical computer science must be jettisoned in the light of modern physics, many others emerge nearly unscathed; and I use powerful tools from computational complexity theory to help determine which are which. In the first part of the thesis, I attack the common belief that quantum computing resembles classical exponential parallelism, by showing that quantum computers would face serious limitations on a wider range of problems than was previously known. In particular, any quantum algorithm that solves the collision problem (that of deciding whether a sequence of n integers is one-to-one or two-to-one) must query the sequence $\Omega(n^{1/5})$ times.
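For readers unfamiliar with the collision problem named in the abstract, here is a small classical sketch (my own illustration): sample positions until the same value shows up at two distinct indices, which succeeds after roughly $\sqrt{n}$ queries on a two-to-one sequence by the birthday bound, whereas the thesis lower-bounds the quantum query cost at $\Omega(n^{1/5})$.

```python
import random

# Classical sketch of the collision problem (my illustration): distinguish a
# one-to-one sequence from a two-to-one one by sampling until some value is
# seen at two distinct positions.

def looks_two_to_one(seq, max_queries=None, rng=random):
    n = len(seq)
    max_queries = max_queries or n
    seen = {}                                 # value -> first index observed
    for _ in range(max_queries):
        i = rng.randrange(n)                  # one "query" to the sequence
        if seq[i] in seen and seen[seq[i]] != i:
            return True                       # collision at two distinct indices
        seen.setdefault(seq[i], i)
    return False                              # no collision found: guess one-to-one

random.seed(1)
one_to_one = list(range(16))
two_to_one = [i // 2 for i in range(16)]
random.shuffle(two_to_one)
# With high probability: False for the one-to-one input, True for the other.
print(looks_two_to_one(one_to_one), looks_two_to_one(two_to_one))
```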
  • Expander Flows, Geometric Embeddings and Graph Partitioning
Expander Flows, Geometric Embeddings and Graph Partitioning. Sanjeev Arora, Princeton University; Satish Rao and Umesh Vazirani, UC Berkeley. We give an $O(\sqrt{\log n})$-approximation algorithm for the sparsest cut, edge expansion, balanced separator, and graph conductance problems. This improves the $O(\log n)$-approximation of Leighton and Rao (1988). We use a well-known semidefinite relaxation with triangle inequality constraints. Central to our analysis is a geometric theorem about projections of point sets in $\mathbb{R}^d$, whose proof makes essential use of a phenomenon called measure concentration. We also describe an interesting and natural "approximate certificate" for a graph's expansion, which involves embedding an n-node expander in it with appropriate dilation and congestion. We call this an expander flow. Categories and Subject Descriptors: F.2.2 [Theory of Computation]: Analysis of Algorithms and Problem Complexity; G.2.2 [Mathematics of Computing]: Discrete Mathematics and Graph Algorithms. General Terms: Algorithms, Theory. Additional Key Words and Phrases: graph partitioning, semidefinite programs, graph separators, multicommodity flows, expansion, expanders.

1. INTRODUCTION

Partitioning a graph into two (or more) large pieces while minimizing the size of the "interface" between them is a fundamental combinatorial problem. Graph partitions or separators are central objects of study in the theory of Markov chains and geometric embeddings, and are a natural algorithmic primitive in numerous settings, including clustering, divide and conquer approaches, PRAM emulation, VLSI layout, and packet routing in distributed networks. Since finding optimal separators is NP-hard, one is forced to settle for approximation algorithms (see Shmoys [1995]). Here we give new approximation algorithms for some of the important problems in this class.
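For reference, the cut quantities named in the first sentence are usually defined as follows (a standard normalization, assumed here; the paper's own constants may differ):

```latex
% Standard definitions (one common normalization) of edge expansion and
% sparsest cut for a graph G = (V, E); the result above gives an
% O(\sqrt{\log n})-approximation for objectives of this kind.
\[
  h(G) = \min_{\substack{S \subseteq V \\ 0 < |S| \le |V|/2}}
         \frac{|E(S, V \setminus S)|}{|S|},
  \qquad
  \phi(G) = \min_{\emptyset \ne S \subsetneq V}
            \frac{|E(S, V \setminus S)|}{\min(|S|,\, |V \setminus S|)}.
\]
```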
  • Computational Pseudorandomness, the Wormhole Growth Paradox, and Constraints on the Ads/CFT Duality
    Computational Pseudorandomness, the Wormhole Growth Paradox, and Constraints on the AdS/CFT Duality Adam Bouland Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, 617 Soda Hall, Berkeley, CA 94720, U.S.A. [email protected] Bill Fefferman Department of Computer Science, University of Chicago, 5730 S Ellis Ave, Chicago, IL 60637, U.S.A. [email protected] Umesh Vazirani Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, 671 Soda Hall, Berkeley, CA 94720, U.S.A. [email protected] Abstract The AdS/CFT correspondence is central to efforts to reconcile gravity and quantum mechanics, a fundamental goal of physics. It posits a duality between a gravitational theory in Anti de Sitter (AdS) space and a quantum mechanical conformal field theory (CFT), embodied in a map known as the AdS/CFT dictionary mapping states to states and operators to operators. This dictionary map is not well understood and has only been computed on special, structured instances. In this work we introduce cryptographic ideas to the study of AdS/CFT, and provide evidence that either the dictionary must be exponentially hard to compute, or else the quantum Extended Church-Turing thesis must be false in quantum gravity. Our argument has its origins in a fundamental paradox in the AdS/CFT correspondence known as the wormhole growth paradox. The paradox is that the CFT is believed to be “scrambling” – i.e. the expectation value of local operators equilibrates in polynomial time – whereas the gravity theory is not, because the interiors of certain black holes known as “wormholes” do not equilibrate and instead their volume grows at a linear rate for at least an exponential amount of time.
  • Quantum Computing: A Gentle Introduction / Eleanor Rieffel and Wolfgang Polak
QUANTUM COMPUTING: A Gentle Introduction. Eleanor Rieffel and Wolfgang Polak. The MIT Press, Cambridge, Massachusetts; London, England. ©2011 Massachusetts Institute of Technology.

Library of Congress Cataloging-in-Publication Data: Rieffel, Eleanor, 1965–. Quantum computing: a gentle introduction / Eleanor Rieffel and Wolfgang Polak. (Scientific and engineering computation). Includes bibliographical references and index. ISBN 978-0-262-01506-6 (hardcover: alk. paper). 1. Quantum computers. 2. Quantum theory. I. Polak, Wolfgang, 1950–. II. Title. QA76.889.R54 2011. 004.1 dc22 2010022682.

Contents: Preface (xi). 1 Introduction (1). I QUANTUM BUILDING BLOCKS (7). 2 Single-Qubit Quantum Systems (9): 2.1 The Quantum Mechanics of Photon Polarization (9); 2.1.1 A Simple Experiment (10); 2.1.2 A Quantum Explanation (11); 2.2 Single Quantum Bits (13); 2.3 Single-Qubit Measurement (16); 2.4 A Quantum Key Distribution Protocol (18); 2.5 The State Space of a Single-Qubit System (21); 2.5.1 Relative Phases versus Global Phases (21); 2.5.2 Geometric Views of the State Space of a Single Qubit (23); 2.5.3 Comments on General Quantum State Spaces […]
  • Flows, Cuts and Integral Routing in Graphs - an Approximation Algorithmist's Perspective
Flows, Cuts and Integral Routing in Graphs - an Approximation Algorithmist's Perspective. Julia Chuzhoy*.

Abstract. Flow, cut and integral graph routing problems are among the most extensively studied in Operations Research, Optimization, Graph Theory and Computer Science. We survey known algorithmic results for these problems, including classical results and more recent developments, and discuss the major remaining open problems, with an emphasis on approximation algorithms. Mathematics Subject Classification (2010). Primary 68Q25; Secondary 68Q17, 68R05, 68R10. Keywords. Maximum flow, minimum cut, network routing, approximation algorithms, hardness of approximation, graph theory.

1. Introduction

In this survey we consider flow, cut, and integral routing problems in graphs. These three types of problems are among the most extensively studied in Operations Research, Optimization, Graph Theory, and Computer Science. Problems of these types naturally arise in many applications, and algorithms for solving them are among the most valuable and powerful tools in algorithm design and analysis. In the classical maximum s-t flow problem, we are given an n-vertex graph $G = (V, E)$, that can be either directed or undirected, with non-negative capacities $c(e)$ on edges $e \in E$, and two special vertices: s, called the source, and t, called the destination. Let $\mathcal{P}$ be the set of all paths connecting s to t in G. An s-t flow f is an assignment of non-negative values $f(P)$ to all paths $P \in \mathcal{P}$, such that for each edge $e \in E$, the flow through e does not exceed its capacity $c(e)$, that is, $\sum_{P:\, e \in P} f(P) \le c(e)$.
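To complement the path-based definition above, here is a compact, self-contained Edmonds-Karp sketch over the equivalent edge-based formulation (my own illustration, including the example graph; the survey itself works with the path formulation and far more general variants):

```python
from collections import deque

# Edmonds-Karp max s-t flow sketch (my illustration). The graph is given as an
# adjacency dict u -> {v: capacity}; node names below are invented.

def max_st_flow(capacity, s, t):
    # Residual capacities, with reverse edges initialized to 0.
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting s-t path in the residual graph.
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow                  # no augmenting path left: flow is maximum
        # Recover the path, augment by its bottleneck capacity.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck

G = {'s': {'a': 3, 'b': 2}, 'a': {'b': 1, 't': 2}, 'b': {'t': 3}, 't': {}}
print(max_st_flow(G, 's', 't'))  # 5, matching the capacity of the cut at s
```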
  • Classical Verification and Blind Delegation of Quantum
Classical Verification and Blind Delegation of Quantum Computations, by Urmila M. Mahadev. A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Computer Science in the Graduate Division of the University of California, Berkeley. Committee in charge: Professor Umesh Vazirani (Chair), Professor Satish Rao, Professor Nikhil Srivastava. Spring 2018. Copyright 2018 by Urmila M. Mahadev.

Abstract. In this dissertation, we solve two open questions. First, can the output of a quantum computation be verified classically? We give the first protocol for provable classical verification of efficient quantum computations, depending only on the assumption that the learning with errors problem is post-quantum secure. The second question, which is related to verifiability and is often referred to as blind computation, asks the following: can a classical client delegate a desired quantum computation to a remote quantum server while hiding all data from the server? This is especially relevant to proposals for quantum computing in the cloud. For classical computations, this task is achieved by the celebrated result of fully homomorphic encryption ([23]). We prove an analogous result for quantum computations by showing that certain classical homomorphic encryption schemes, when used in a different manner, are able to homomorphically evaluate quantum circuits. While we use entirely different techniques to construct the verification and homomorphic encryption protocols, they both rely on the same underlying cryptographic primitive of trapdoor claw-free functions.
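For orientation, the cryptographic primitive named in the last sentence can be sketched as follows (a standard informal rendering, not the dissertation's exact definition, which adds further properties such as an adaptive hardcore bit):

```latex
% Informal shape of a trapdoor claw-free function pair (standard rendering;
% the dissertation's formal, LWE-based definition is more demanding).
\[
  f_0, f_1 : \mathcal{X} \to \mathcal{Y}, \quad \text{injective}, \quad
  \operatorname{im} f_0 = \operatorname{im} f_1 .
\]
% A "claw" is a pair (x_0, x_1) with f_0(x_0) = f_1(x_1). Without the
% trapdoor, no efficient (even quantum) adversary can find a claw; with the
% trapdoor, both functions can be inverted efficiently.
```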
  • Depth-Efficient Proofs of Quantumness
Depth-efficient proofs of quantumness. Zhenning Liu∗ and Alexandru Gheorghiu†, ETH Zürich.

Abstract. A proof of quantumness is a type of challenge-response protocol in which a classical verifier can efficiently certify the quantum advantage of an untrusted prover. That is, a quantum prover can correctly answer the verifier's challenges and be accepted, while any polynomial-time classical prover will be rejected with high probability, based on plausible computational assumptions. To answer the verifier's challenges, existing proofs of quantumness typically require the quantum prover to perform a combination of polynomial-size quantum circuits and measurements. In this paper, we show that all existing proofs of quantumness can be modified so that the prover need only perform constant-depth quantum circuits (and measurements) together with log-depth classical computation. We thus obtain depth-efficient proofs of quantumness whose soundness can be based on the same assumptions as existing schemes (factoring, discrete logarithm, learning with errors or ring learning with errors).

1 Introduction

Quantum computation currently occupies the noisy intermediate-scale (NISQ) regime [Pre18]. This means that existing devices have a relatively small number of qubits (on the order of 100), perform operations that are subject to noise, and are not able to operate fault-tolerantly. As a result, these devices are limited to running quantum circuits of small depth. Despite these limitations, there have been two demonstrations of quantum computational advantage [AAB+19, ZWD+20]. Both cases involved running quantum circuits whose results cannot be efficiently reproduced by classical computers, based on plausible complexity-theoretic assumptions [AA11, HM17, BFNV19].
  • Quantum Leap: A New Proof Supports a 25-Year-Old Claim of the Unique Power of Quantum Computing
news. Science | DOI:10.1145/3290407. Don Monroe. Quantum Leap: A new proof supports a 25-year-old claim of the unique power of quantum computing.

HOPES FOR QUANTUM computing have long been buoyed by the existence of algorithms that would solve some particularly challenging problems with exponentially fewer operations than any known algorithm for conventional computers. Many experts believe, but have been unable to prove, that these problems will resist even the cleverest non-quantum algorithms. Recently, researchers have shown the strongest evidence yet that even if conventional computers were made much more powerful, they probably still could not efficiently solve some problems that a quantum computer could. That such problems exist is a long-standing conjecture about the greater capability of quantum computers. "It was really the first big conjecture in quantum complexity theory," said computer scientist Umesh Vazirani of the University of California, Berkeley, who proposed the conjecture with then-student Ethan Bernstein in the 1993 paper (updated in 1997) that established the field. That work, now further validated, challenged the cherished thesis that any general computer can simulate any other efficiently, since quantum computers will sometimes be out of reach of conventional emulation. […] computational model, then or now, "that violates the extended Church-Turing thesis," Vazirani said. "It overturned this basic fabric of computer science, and said: 'here's a new kid on the block, and it's […]'"

Quantum Resources. Conventional "classical" computers store information as bits that can be in one of two states, denoted 0 and 1. In contrast, a quantum degree of free[dom] […]
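Where the sidebar breaks off, the contrast it is drawing can be made concrete in a few lines of numpy (my own illustration, not the article's):

```python
import numpy as np

# My illustration of the sidebar's contrast: a classical bit holds exactly one
# of two values, while a qubit holds a unit vector of complex amplitudes and,
# when measured, yields 0 or 1 with squared-amplitude probability (Born rule).

classical_bit = 1                        # always exactly 0 or 1

qubit = np.array([0.6, 0.8j])            # amplitudes for |0> and |1>
assert np.isclose(np.linalg.norm(qubit), 1.0)

probs = np.abs(qubit) ** 2               # P(0) = 0.36, P(1) = 0.64
outcome = np.random.choice([0, 1], p=probs)
print(probs, outcome)
```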
  • Adam Bouland
Adam Bouland
Department of Computer Science, Stanford University
172 Gates Computer Science, Stanford, CA 94305
[email protected], http://theory.stanford.edu/~abouland/

Academic Appointments: Assistant Professor of Computer Science, Stanford University (July 2021-present). Postdoctoral Researcher, UC Berkeley/Simons Institute for the Theory of Computing, advised by Umesh Vazirani (Sept. 2017-June 2021).

Education: Massachusetts Institute of Technology, Cambridge, MA. Ph.D. in Computer Science, September 2017, advised by Scott Aaronson. University of Cambridge, Cambridge, UK. M.Phil. in Advanced Computer Science, 2011, advised by Anuj Dawar; M.A.St. in Mathematics, 2010. Yale University, New Haven, CT. B.S. Computer Science & Mathematics, Physics, 2009. Summa Cum Laude, Distinction in Both Majors.

Awards: NSF Graduate Research Fellowship, 2011-2016. Marshall Scholar, UK Government, 2009-2011. Howard L. Schultz Prize, Yale Physics Department, 2009. Junior Inductee into Phi Beta Kappa (top 10 in class of >1000), Yale Chapter, 2007.

Additional Positions: Technical Advisor, QC Ware, Palo Alto, CA, June 2018-present. Near-term quantum algorithm development for quantum software startup. Research Visitor: F. U. Berlin, June 2018 (Host: Jens Eisert); U. Bristol, Aug. 2016 (Host: Ashley Montanaro); Tokyo Institute of Technology, Dec. 2016 (Host: Tomoyuki Morimae); Joint Center for Quantum Information and Computer Science (QuICS), University of Maryland, Aug. 2015 (Host: Stephen Jordan); Centre for Quantum Technologies (CQT), Singapore, Jan.-Apr. 2014, Jun.-Aug. 2015 (Host: Miklos Santha). Undergraduate Research, Yale, Stanford, 2008-2009: designed algorithms to improve multi-way sparse cuts in graphs, advised by Daniel Spielman; created software to analyze cosmic microwave background anisotropies and galaxy cluster surveys.
  • (Sub)Exponential Advantage of Adiabatic Quantum Computation with No Sign Problem
(Sub)Exponential advantage of adiabatic quantum computation with no sign problem. András Gilyén∗, Umesh Vazirani†. November 20, 2020.

Abstract. We demonstrate the possibility of (sub)exponential quantum speedup via a quantum algorithm that follows an adiabatic path of a gapped Hamiltonian with no sign problem. This strengthens the superpolynomial separation recently proved by Hastings [Has20]. The Hamiltonian that exhibits this speed-up comes from the adjacency matrix of an undirected graph, and we can view the adiabatic evolution as an efficient $O(\mathrm{poly}(n))$-time quantum algorithm for finding a specific "EXIT" vertex in the graph given the "ENTRANCE" vertex. On the other hand, we show that if the graph is given via an adjacency-list oracle, there is no classical algorithm that finds the "EXIT" with probability greater than $\exp(-n^{\delta})$ using at most $\exp(n^{\delta})$ queries, for $\delta = \frac{1}{5} - o(1)$. Our construction of the graph is somewhat similar to the "welded-trees" construction of Childs et al. [CCD+03], but uses additional ideas of Hastings [Has20] for achieving a spectral gap and a short adiabatic path.

1 Introduction

Adiabatic quantum computing [FGGS00] is an interesting model of computation that is formulated directly in terms of Hamiltonians, the quantum analog of constraint satisfaction problems (CSPs). The computation starts in the known ground state of an initial Hamiltonian, and slowly (adiabatically) transforms the acting Hamiltonian into a final Hamiltonian whose ground state encapsulates the answer to the computational problem in question. The final state of the computation is guaranteed, by the quantum adiabatic theorem, to have high overlap with the desired ground state, as long as the running time of the adiabatic evolution is polynomially large in the inverse of the smallest spectral gap of any Hamiltonian along the adiabatic path [AR04].
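The adiabatic evolution the introduction describes is conventionally written as a linear interpolation (the textbook form, assumed here; the paper's actual path comes from its graph-adjacency construction):

```latex
% Textbook adiabatic interpolation. H_init has an easy-to-prepare ground
% state; H_final encodes the problem; the schedule s runs from 0 to 1.
\[
  H(s) = (1 - s)\, H_{\mathrm{init}} + s\, H_{\mathrm{final}},
  \qquad s = t/T,\quad t \in [0, T].
\]
% By the adiabatic theorem, the evolution tracks the ground state provided
% the total time T scales polynomially in 1/\gamma_{\min}, where \gamma_{\min}
% is the smallest spectral gap of H(s) along the path.
```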