Quantum Leap: A New Proof Supports a 25-Year-Old Claim of the Unique Power of Quantum Computing


Science | DOI:10.1145/3290407    Don Monroe

Quantum Leap
A new proof supports a 25-year-old claim of the unique power of quantum computing.
[Image by Andrij Borys Associates, based on a graphic from the University of Strathclyde]

Hopes for quantum computing have long been buoyed by the existence of algorithms that would solve some particularly challenging problems with exponentially fewer operations than any known algorithm for conventional computers. Many experts believe, but have been unable to prove, that these problems will resist even the cleverest non-quantum algorithms. Recently, researchers have shown the strongest evidence yet that even if conventional computers were made much more powerful, they probably still could not efficiently solve some problems that a quantum computer could.

That such problems exist is a long-standing conjecture about the greater capability of quantum computers. "It was really the first big conjecture in quantum complexity theory," said computer scientist Umesh Vazirani of the University of California, Berkeley, who proposed the conjecture with then-student Ethan Bernstein in the 1993 paper (updated in 1997) that established the field.

That work, now further validated, challenged the cherished thesis that any general computer can simulate any other efficiently, since quantum computers will sometimes be out of reach of conventional emulation. Quantum computation is the only computational model, then or now, "that violates the extended Church-Turing thesis," Vazirani said. "It overturned this basic fabric of computer science, and said: 'here's a new kid on the block, and it's completely different and able to do totally different things.'"

Quantum Resources
Conventional "classical" computers store information as bits that can be in one of two states, denoted 0 and 1. In contrast, a quantum degree of freedom, such as the spin of an electron or the polarization of a photon, can exist concurrently in a weighted mixture of two states. A set of, say, 50 of these "qubits" thus can represent all 2^50 (~10^15) combinations of the individual states. Manipulations of this assembly can be viewed as simultaneously performing a quadrillion calculations.

Performing a vast number of computations does not do much good, however, unless a unique answer can be extracted. Interrogating the qubits forces them into some specific combination of 0s and 1s, with probabilities that depend on their post-calculation weights. Critically, however, different initial configurations make quantum contributions to the weight that are complex numbers, which can cancel each other out as well as reinforce each other. The challenge is devising an algorithm for which this cancellation occurs for all configurations except the desired solution, so the eventual measurement reveals this answer.

Soon after Bernstein and Vazirani's work, mathematician Peter Shor, working at AT&T Research in New Jersey as it spun off from Bell Labs, presented an algorithm that achieved this goal for determining the factors of a large integer. The security of public key cryptography schemes depends on this factorization being impractically time consuming, so the potential for a rapid quantum solution attracted a lot of attention.

Inspired by this and other concrete examples, researchers have been striving to assemble ever-larger systems of physical qubits in the lab that can preserve the delicate complex amplitudes of the qubits long enough to perform a calculation, and to correct the inevitable errors. In recent years, several competing implementations have gotten big enough (dozens of qubits) to achieve "quantum supremacy," meaning solving selected problems faster than a conventional computer.

ACM Member News: Preserving History in a Digital Library
Edward A. Fox, a professor of computer science at Virginia Polytechnic Institute and State University (Virginia Tech), recalls joining ACM more than 50 years ago. Fox first became a member of ACM in 1967, while an undergraduate at the Massachusetts Institute of Technology (MIT). During his first year as a member, he launched MIT's ACM Student Chapter.
In 2017, Fox was named an ACM Fellow, cited for his contributions to information retrieval and digital libraries, the latter a field he helped to launch. "A lot of people don't know what a digital library is," Fox says, "so a way to think of it is as an information system tailored to a community of people."
Fox has served in numerous positions and capacities within ACM over the years. He is currently co-chair (with Michael Nelson) of the ACM Publications Board's Digital Library and Technology Committee, which works closely with the technical and publishing staff to review services offered by ACM in the context of competing and complementary primary and secondary online resources.
He first became interested in computer science in the mid-1960s when, as a junior in high school, he attended a computer course during a study program at Columbia University. He went on to earn his bachelor's degree in electrical engineering from MIT, and both his master's and Ph.D. degrees in computer science from Cornell University.
In the future, Fox hopes the technologies of information retrieval, digital libraries, and archiving will be even better integrated, as a means of helping to preserve our history and achievements for the future.
—John Delaney

Classifying Complexity
Assessing comparative execution times is complicated by the fact that algorithms continually improve, sometimes dramatically. To compare techniques that have yet to be devised and machines that have yet to be built, computer scientists rely not on coding but on formal methods known as computational complexity theory.

These mathematical arguments can determine if an answer can be assured given access to specific resources, such as computational time or the depth of a corresponding circuit. An algorithm that is guaranteed to finish in "polynomial time," meaning the runtime increases no faster than some fixed power of the size of the input, can be regarded as efficient. In contrast, many problems, notably those that require exhaustively searching many combinatorial possibilities, are only known to yield to methods whose execution time grows exponentially or worse with the size of the input.

Complexity theory divides problems into "complexity classes," depending on the resources they need. Some of the best-known problems belong to the class P, consisting of problems whose solution can be found in polynomial time. A much larger class is NP, which includes problems for which a proposed solution can be verified as correct in polynomial time. NP includes such classic challenges as the traveling salesman problem and the graph vertex coloring problem, which researchers have been unable to show belong to P. Many experts strongly suspect that polynomial-time algorithms for many problems in NP have not been found because they do not exist, in which case P≠NP. This important question, regarded by many as the most important open question in theoretical computer science, remains unresolved, and a $1-million prize from the Clay Mathematics Institute awaits its answer.

Bernstein and Vazirani defined a new complexity class called BQP (Bounded Quantum Polynomial), which has access to quantum resources. BQP is closely analogous to the conventional class BPP (Bounded Probabilistic Polynomial), which has access to a perfect random-number generator and must not give a wrong answer too often. Currently, some problems having only stochastic solutions are known, but it is hoped that deterministic, "de-randomized" algorithms will eventually be found for them.

Consulting the Oracle
The relationship of the quantum class BQP to various conventional classes, however, continues to be studied, long after Bernstein and Vazirani suggested it includes problems beyond the scope of conventional techniques. "We have our conjectures and we can feel strongly about them, but every so often they are wrong," Vazirani said. "A proof is really something to be celebrated."

The new proof of separation does not apply to the pure versions of BQP and the other complexity classes addressed by the Vazirani-Bernstein conjecture … separations. "They are a way for us to understand what kinds of problems are hard to prove and what kinds of results might be possible, but they're not a definite proof technique," he said. "We didn't prove a separation between these two classes," Raz agreed. "I can't imagine that [such a separation] will be proved in our lifetime." "Already there were oracle separations of BQP and NP, BQP and P, and other classes," Raz said; "we show that there is one problem that BQP will solve better than PH."

In addition to choosing the right oracle, he and Tal had to choose a problem that reveals quantum computation's strength—and classical computation's weakness—but they only needed one example. They adapted an earlier suggestion by Scott Aaronson (then at the Massachusetts Institute of Technology) in which the computer must determine if one sequence of bits is (approximately) the Fourier transform of another. Computing such frequency spectra is a natural task for quantum computations, and Shor's algorithm exploits precisely this strength to identify periodicities that expose prime factors of the target. "The basic ability to do Fourier transformation," Fortnow said, "that's the heart of the power of quantum, at least most of the algorithms we know."

"The hard part is to give the lower bound for the polynomial hierarchy," Raz said. To show that no such algorithm, even with access to the oracle, could solve it efficiently, he and Tal tweaked Aaronson's suggestion so they …
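The problem Raz and Tal adapted from Aaronson asks whether one Boolean function is (approximately) the Fourier transform of another; in the literature it is known as "Forrelation." Below is a minimal classical sketch of the underlying quantity with an illustrative test case; the sizes, thresholds, and function names are mine, not anything from the article or from the Raz-Tal proof.

```python
# A toy classical sketch of the "Forrelation"-style quantity: how strongly g agrees with
# the Boolean Fourier transform of f.  Illustrative only; not the construction in the proof.
import numpy as np

def forrelation(f, g, signs):
    # Phi(f, g) = N^(-3/2) * sum over x, y of f(x) * (-1)^(x.y) * g(y)
    return float(f @ signs @ g) / len(f) ** 1.5

n = 6
N = 2 ** n
# signs[x, y] = (-1)^(x.y), with x.y the inner product of the n-bit strings x and y
signs = np.array([[(-1) ** bin(x & y).count("1") for y in range(N)] for x in range(N)])

rng = np.random.default_rng(0)
f = rng.choice([-1, 1], size=N)
g_aligned = np.where(signs @ f >= 0, 1, -1)   # g tracks the sign of f's Fourier transform
g_random = rng.choice([-1, 1], size=N)

print(forrelation(f, g_aligned, signs))   # roughly sqrt(2/pi) ~ 0.8: "Fourier-correlated"
print(forrelation(f, g_random, signs))    # near 0: an uncorrelated pair
```

A quantum computer can estimate this quantity with very few queries to f and g, while the Raz-Tal lower bound shows that classical algorithms in the polynomial hierarchy (relative to the oracle) cannot.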
Recommended publications
  • Simulating Quantum Field Theory with a Quantum Computer
    Simulating quantum field theory with a quantum computer. John Preskill, Lattice 2018, 28 July 2018.
    This talk has two parts: (1) near-term prospects for quantum computing; (2) opportunities in quantum simulation of quantum field theory. Exascale digital computers will advance our knowledge of QCD, but some challenges will remain, especially concerning real-time evolution and properties of nuclear matter and quark-gluon plasma at nonzero temperature and chemical potential. Digital computers may never be able to address these (and other) problems; quantum computers will solve them eventually, though I'm not sure when. The physics payoff may still be far away, but today's research can hasten the arrival of a new era in which quantum simulation fuels progress in fundamental physics.
    [Slide: "Frontiers of Physics" lists three frontiers: short distance (Higgs boson, neutrino masses, supersymmetry, dark matter, quantum gravity, string theory), long distance (large-scale structure, cosmic microwave background, phases of quantum matter, dark energy, gravitational waves), and complexity ("More is different," many-body entanglement, quantum computing, quantum spacetime).]
    A quantum computer can simulate efficiently any physical process that occurs in Nature. (Maybe. We don't actually know for sure.) [Slide: example systems include particle collision, molecular chemistry, entangled electrons, superconductor, black hole, early universe.]
    Two fundamental ideas: (1) quantum complexity (why we think quantum computing is powerful); (2) quantum error correction (why we think quantum computing is scalable). A complete description of a typical quantum state of just 300 qubits requires more bits than the number of atoms in the visible universe. Why we think quantum computing is powerful: we know examples of problems that can be solved efficiently by a quantum computer, where we believe the problems are hard for classical computers.
  • The Multiplicative Weights Update Method: a Meta-Algorithm and Applications
    THEORY OF COMPUTING, Volume 8 (2012), pp. 121–164. www.theoryofcomputing.org. RESEARCH SURVEY. The Multiplicative Weights Update Method: A Meta-Algorithm and Applications. Sanjeev Arora, Elad Hazan, Satyen Kale. Received: July 22, 2008; revised: July 2, 2011; published: May 1, 2012.
    Abstract: Algorithms in varied fields use the idea of maintaining a distribution over a certain set and use the multiplicative update rule to iteratively change these weights. Their analyses are usually very similar and rely on an exponential potential function. In this survey we present a simple meta-algorithm that unifies many of these disparate algorithms and derives them as simple instantiations of the meta-algorithm. We feel that since this meta-algorithm and its analysis are so simple, and its applications so broad, it should be a standard part of algorithms courses, like "divide and conquer." ACM Classification: G.1.6. AMS Classification: 68Q25. Key words and phrases: algorithms, game theory, machine learning.
    1 Introduction. The Multiplicative Weights (MW) method is a simple idea which has been repeatedly discovered in fields as diverse as Machine Learning, Optimization, and Game Theory. The setting for this algorithm is the following. A decision maker has a choice of n decisions, and needs to repeatedly make a decision and obtain an associated payoff. The decision maker's goal, in the long run, is to achieve a total payoff which is comparable to the payoff of that fixed decision that maximizes the total payoff with the benefit of hindsight. (Footnote: This project was supported by a David and Lucile Packard Fellowship and NSF grants MSPA-MCS 0528414 and CCR-0205594.)
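As a concrete illustration of the meta-algorithm the abstract describes, here is a minimal sketch of one standard formulation (costs in [-1, 1] and the update w_i <- w_i(1 - eta * cost_i)); the toy example and parameters are mine, not taken from the survey.

```python
import numpy as np

def multiplicative_weights(cost_rounds, eta=0.1):
    """Run the MW meta-algorithm: keep one weight per decision, play the weighted
    distribution each round, then scale each weight by (1 - eta * cost)."""
    n = len(cost_rounds[0])
    w = np.ones(n)
    total_cost = 0.0
    for m in cost_rounds:                    # m: costs in [-1, 1], one entry per decision
        p = w / w.sum()                      # current distribution over decisions
        total_cost += float(p @ m)           # expected cost suffered this round
        w = w * (1.0 - eta * np.asarray(m))  # multiplicative update
    return total_cost, w

# Toy example: 10 decisions over 500 rounds; decision 3 is slightly cheaper on average,
# so its weight should dominate and the algorithm's cost should track that best decision.
rng = np.random.default_rng(1)
rounds = [rng.uniform(-0.8, 0.8, 10) - 0.2 * np.eye(10)[3] for _ in range(500)]
cost, weights = multiplicative_weights(rounds, eta=0.05)
print(round(cost, 2), int(weights.argmax()))
```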
  • Limits on Efficient Computation in the Physical World
    Limits on Efficient Computation in the Physical World, by Scott Joel Aaronson. Bachelor of Science (Cornell University) 2000. A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Computer Science in the Graduate Division of the University of California, Berkeley. Committee in charge: Professor Umesh Vazirani (Chair), Professor Luca Trevisan, Professor K. Birgitta Whaley. Fall 2004.
    Abstract: More than a speculative technology, quantum computing seems to challenge our most basic intuitions about how the physical world should behave. In this thesis I show that, while some intuitions from classical computer science must be jettisoned in the light of modern physics, many others emerge nearly unscathed; and I use powerful tools from computational complexity theory to help determine which are which. In the first part of the thesis, I attack the common belief that quantum computing resembles classical exponential parallelism, by showing that quantum computers would face serious limitations on a wider range of problems than was previously known. In particular, any quantum algorithm that solves the collision problem—that of deciding whether a sequence of n integers is one-to-one or two-to-one—must query the sequence Ω(n^{1/5}) times.
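For reference, the collision problem mentioned in the abstract is easy to state classically; this small sketch (mine, not from the thesis) simply checks which case a sequence falls into, whereas the quantum lower bound concerns how few positions of the sequence must be queried to decide.

```python
from collections import Counter

def collision_type(seq):
    """Classify a sequence under the collision problem's promise:
    one-to-one (all values distinct) vs. two-to-one (every value appears exactly twice)."""
    counts = Counter(seq).values()
    if all(c == 1 for c in counts):
        return "one-to-one"
    if all(c == 2 for c in counts):
        return "two-to-one"
    return "promise violated"

print(collision_type([1, 2, 3, 4]))          # one-to-one
print(collision_type([7, 2, 7, 2, 5, 5]))    # two-to-one
print(collision_type([3, 1, 4, 1, 5, 9]))    # promise violated (only the value 1 repeats)
```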
  • Richard C. Brower, Boston University with Many Slides from Kate Clark, NVIDIA
    Past and Future of QCD on GPUs. Richard C. Brower, Boston University, with many slides from Kate Clark, NVIDIA. High Performance Computing in High Energy Physics, CCNU, Wuhan, China, Sept 19, 2018.
    Optimize the intersection of Application (QCD), Algorithms (AMG), and Architecture (GPU). Questions to address: How do we put quantum fields on the computer? How do we maximize flops/$ and bandwidth/$ at minimum energy? How do we implement the fastest algorithms: Dirac solvers, symplectic integrators, etc.?
    Standard lattice QCD formulation: the path integral Z = ∫ DA Dψ̄ Dψ exp( −∫ d³x dt [ (1/2g²) F_{μν}² + ψ̄(∂_μ − iA_μ + m)ψ ] ), handled in four steps: (1) complex (Euclidean) time for probability, it → x₄; (2) lattice finite differences, (∂_μ − iA_μ)ψ(x) → (ψ(x+μ̂) e^{iaA_μ} − ψ(x))/a; (3) the fermionic integral ∫ dψ̄ dψ exp(−ψ̄_x D_{xy}(A) ψ_y) → Det[D]; (4) bosonic (pseudo-fermion) form, Det[D] → ∫ dφ† dφ exp(−φ†_x [1/D]_{xy} φ_y).
    Lattice Dirac operator: ψ_{ia}(x) → [(1 − γ_μ)/2]_{ij} U_μ^{ab}(x) ψ_{jb}(x + μ̂), with color index a = 1,2,3, spin index i = 1,2,3,4, and μ = 1,2,…,d. SU(3) gauge links: U_μ^{ab}(x) = e^{iaA_μ^{ab}(x)}, connecting site x to x + μ̂.
    Quark propagation on a hypercubic lattice. Dominant linear algebra: the matrix solver for the Dirac operator. Gauge evolution: Hamiltonian evolution in Monte Carlo time, in the quark vacuum. Analysis. (Other formulations: SUSY (Catterall et al.), random lattices (Christ et al.), simplicial sphere (Brower et al.).)
    Byte/flop in the Dirac solver is the main bottleneck: bandwidth to memory rather than raw floating-point throughput. The Wilson Dirac/DW operator (single precision) moves 1440 bytes per 1320 flops.
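The bandwidth-bound point above can be sanity-checked with a one-line arithmetic-intensity estimate; the 900 GB/s memory bandwidth below is an illustrative placeholder, not a number from the slides.

```python
# Arithmetic intensity of the single-precision Wilson-Dirac stencil quoted above.
bytes_per_site = 1440
flops_per_site = 1320
intensity = flops_per_site / bytes_per_site        # ~0.92 flops per byte moved
bandwidth_gb_s = 900                               # illustrative GPU memory bandwidth
# Roofline-style ceiling: the kernel cannot exceed intensity * bandwidth, regardless of peak flops.
print(round(intensity, 2), round(intensity * bandwidth_gb_s), "Gflop/s ceiling")
```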
  • CS286.2 Lectures 5-6: Introduction to Hamiltonian Complexity, QMA-Completeness of the Local Hamiltonian Problem
    CS286.2 Lectures 5-6: Introduction to Hamiltonian Complexity, QMA-completeness of the Local Hamiltonian problem. Scribe: Jenish C. Mehta.
    The Complexity Class BQP. The complexity class BQP is the quantum analog of the class BPP. It consists of all languages that can be decided in quantum polynomial time. More formally,
    Definition 1. A language L ∈ BQP if there exists a classical polynomial-time algorithm A that maps inputs x ∈ {0,1}* to quantum circuits C_x on n = poly(|x|) qubits, where the circuit is considered a sequence of unitary operators each on 2 qubits, i.e. C_x = U_T U_{T−1} ... U_1 where each U_i ∈ L(C² ⊗ C²), such that:
    i. Completeness: x ∈ L ⇒ Pr(C_x accepts |0^n⟩) ≥ 2/3
    ii. Soundness: x ∉ L ⇒ Pr(C_x accepts |0^n⟩) ≤ 1/3
    We say that the circuit "C_x accepts |y⟩" if the first output qubit measured in C_x|y⟩ is 0. More specifically, letting P_1 = |0⟩⟨0|_1 be the projection of the first qubit on state |0⟩, Pr(C_x accepts |y⟩) = ‖(P_1 ⊗ I_{n−1}) C_x |y⟩‖².
    The Complexity Class QMA. The complexity class QMA (or BQNP, as Kitaev originally named it) is the quantum analog of the class NP. More formally,
    Definition 2. A language L ∈ QMA if there exists a classical polynomial-time algorithm A that maps inputs x ∈ {0,1}* to quantum circuits C_x on n + q = poly(|x|) qubits, such that:
    i. Completeness: x ∈ L ⇒ ∃|y⟩ ∈ C^{2^q}, ‖|y⟩‖₂ = 1, such that Pr(C_x accepts |0^n⟩ ⊗ |y⟩) ≥ 2/3
    ii. …
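A small numpy sketch of the acceptance probability both definitions use, Pr(C accepts |y⟩) = ‖(P_1 ⊗ I) C |y⟩‖²; the two-qubit circuit here is a toy stand-in of my own, not anything from the notes.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard on one qubit
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

C = CNOT @ np.kron(H, I2)        # toy circuit C = U2 U1 acting on 2 qubits
y = np.zeros(4); y[0] = 1.0      # input state |00>

P1 = np.kron(np.diag([1.0, 0.0]), I2)            # project the first qubit onto |0>
amp = P1 @ (C @ y)
print(float(np.vdot(amp, amp).real))             # acceptance probability; 0.5 for this circuit
```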
  • Taking the Quantum Leap with Machine Learning
    Taking the Quantum Leap with Machine Learning. Zack Barnes, University of Washington. Bellevue College Mathematics and Physics Colloquium Series, January 15, 2019.
    Overview: (1) Introduction: What is Quantum Computing? What is Machine Learning? Quantum Power in Theory. (2) Quantum Algorithms: HHL; Quantum Recommendation. (3) Current Research: Quantum Supremacy(?). (4) Conclusion.
    What is Quantum Computing? "Quantum computing focuses on studying the problem of storing, processing and transferring information encoded in quantum mechanical systems." [Ciliberto, Carlo et al., 2018] The unit of quantum information is the qubit, or quantum binary integer.
    What is Machine Learning? "Machine learning is the scientific study of algorithms and statistical models that computer systems use to progressively improve their performance on a specific task." (Wikipedia) Supervised: uses labeled examples to predict future events. Unsupervised: not classified or labeled.
  • Interactive Proofs for Quantum Computations
    Innovations in Computer Science 2010. Interactive Proofs For Quantum Computations. Dorit Aharonov, Michael Ben-Or, Elad Eban. School of Computer Science, The Hebrew University of Jerusalem, Israel.
    Abstract: The widely held belief that BQP strictly contains BPP raises fundamental questions: Upcoming generations of quantum computers might already be too large to be simulated classically. Is it possible to experimentally test that these systems perform as they should, if we cannot efficiently compute predictions for their behavior? Vazirani has asked [21]: If computing predictions for Quantum Mechanics requires exponential resources, is Quantum Mechanics a falsifiable theory? In cryptographic settings, an untrusted future company wants to sell a quantum computer or perform a delegated quantum computation. Can the customer be convinced of correctness without the ability to compare results to predictions?
    To provide answers to these questions, we define Quantum Prover Interactive Proofs (QPIP). Whereas in standard Interactive Proofs [13] the prover is computationally unbounded, here our prover is in BQP, representing a quantum computer. The verifier models our current computational capabilities: it is a BPP machine, with access to few qubits. Our main theorem can be roughly stated as: "Any language in BQP has a QPIP, and moreover, a fault tolerant one" (providing a partial answer to a challenge posted in [1]). We provide two proofs. The simpler one uses a new (possibly of independent interest) quantum authentication scheme (QAS) based on random Clifford elements. This QPIP, however, is not fault tolerant. Our second protocol uses a polynomial codes QAS due to Ben-Or, Crépeau, Gottesman, Hassidim, and Smith [8], combined with quantum fault tolerance and secure multiparty quantum computation techniques.
  • Quantum Supremacy
    Quantum Supremacy. Practical QS: perform some computational task on a well-controlled quantum device, which cannot be simulated in a reasonable time by the best-known classical algorithms and hardware. Theoretical QS: perform a computational task efficiently on a quantum device, and prove that task cannot be efficiently classically simulated. Since proving this seems to be beyond the capabilities of our current civilization, we lower the standards for theoretical QS. One seeks to provide formal evidence that classical simulation is unlikely. For example: 3-SAT is NP-complete, so it cannot be efficiently classically solved unless P = NP. Theoretical QS (weakened): perform a computational task efficiently on a quantum device, and prove that task cannot be efficiently classically simulated unless "the polynomial hierarchy collapses to the third level."
    A common feature of QS arguments is that they consider sampling problems, rather than decision problems. They allow us to characterize the complexity of sampling measurements of quantum states. Which is more difficult? Task A: deciding if a circuit outputs 1 with probability at least 2/3, or at most 1/3. Task B: sampling from the output of an n-qubit circuit in the computational basis. Sampling from distributions is generically more difficult than approximating observables, since we can use samples to estimate observables, but not the other way around. One can imagine quantum systems whose local observables are easy to classically compute, but for which sampling the full state is computationally complex. By moving from decision problems to sampling problems, we make the task of classical simulation much more difficult.
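The one-way reduction mentioned above (samples let you estimate observables, but observable estimates do not let you sample) is easy to see in code; the three-bit "device" below is a stand-in of my own, not anything from the slides.

```python
import random

def estimate_prob_first_bit(sampler, shots=10_000):
    """Estimate the observable Pr[first output bit = 1] from raw samples."""
    return sum(sampler()[0] for _ in range(shots)) / shots

# Stand-in sampler for a device whose output is 111 or 000 with equal probability.
toy_sampler = lambda: random.choice([(1, 1, 1), (0, 0, 0)])
print(estimate_prob_first_bit(toy_sampler))   # close to 0.5
# The reverse direction fails: knowing a few marginals like this one is not enough to
# reproduce samples from the full joint distribution over all 2^n outcomes.
```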
  • Quantum Algorithms for Classical Lattice Models
    Quantum algorithms for classical lattice models. G. De las Cuevas (Institut für Theoretische Physik, Universität Innsbruck; Institut für Quantenoptik und Quanteninformation der Österreichischen Akademie der Wissenschaften, Innsbruck), W. Dür (Universität Innsbruck), M. Van den Nest (Max-Planck-Institut für Quantenoptik, Garching) and M. A. Martin-Delgado (Departamento de Física Teórica I, Universidad Complutense, Madrid). New Journal of Physics 13 (2011) 093021 (35pp). Received 15 April 2011; published 9 September 2011. doi:10.1088/1367-2630/13/9/093021. http://iopscience.iop.org/1367-2630/13/9/093021.
    Abstract. We give efficient quantum algorithms to estimate the partition function of (i) the six-vertex model on a two-dimensional (2D) square lattice, (ii) the Ising model with magnetic fields on a planar graph, (iii) the Potts model on a quasi-2D square lattice and (iv) the Z2 lattice gauge theory on a 3D square lattice.
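For scale, here is a brute-force evaluation of the Ising partition function with local fields on a tiny graph (a toy example of my own, not from the paper); the quantum algorithms above target the regime where this 2^n sum is out of reach.

```python
import itertools, math

def ising_partition_function(edges, fields, beta=1.0):
    """Z = sum over spin configurations s in {-1,+1}^n of exp(-beta * E(s)),
    with E(s) = -sum_{(i,j)} J_ij * s_i * s_j - sum_i h_i * s_i."""
    n = len(fields)
    Z = 0.0
    for s in itertools.product((-1, 1), repeat=n):
        E = -sum(J * s[i] * s[j] for i, j, J in edges) - sum(h * si for h, si in zip(fields, s))
        Z += math.exp(-beta * E)
    return Z

# A 4-site cycle (a single 2x2 plaquette) with uniform couplings J = 1 and a small field h = 0.1.
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0)]
print(ising_partition_function(edges, [0.1] * 4))
```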
  • Readout Rebalancing for Near Term Quantum Computers
    Readout Rebalancing for Near Term Quantum Computers. Rebecca Hicks (Physics Department, University of California, Berkeley), Christian W. Bauer and Benjamin Nachman (Physics Division, Lawrence Berkeley National Laboratory). (Dated: October 16, 2020)
    Readout errors are a significant source of noise for near term intermediate scale quantum computers. Mismeasuring a qubit as a |1⟩ when it should be |0⟩ occurs much less often than mismeasuring a qubit as a |0⟩ when it should have been |1⟩. We make the simple observation that one can improve the readout fidelity of quantum computers by applying targeted X gates prior to performing a measurement. These X gates are placed so that the expected number of qubits in the |1⟩ state is minimized. Classical post processing can undo the effect of the X gates so that the expectation value of any observable is unchanged. We show that the statistical uncertainty following readout error corrections is smaller when using readout rebalancing. The statistical advantage is circuit- and computer-dependent, and is demonstrated for the W state, a Grover search, and for a Gaussian state. The benefit in statistical precision is most pronounced (and nearly a factor of two in some cases) when states with many qubits in the excited state have high probability.
    I. INTRODUCTION. Quantum computers hold great promise for a variety of scientific and industrial applications. However, existing noisy intermediate-scale quantum (NISQ) computers [1] … the simple observation that one can improve the readout fidelity of quantum computers by applying targeted X gates prior to performing a measurement. These X gates are placed so that the expected number of qubits in the |1⟩ state is minimized.
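A minimal sketch of the classical post-processing step the abstract describes: bitstrings measured after the extra X gates are XOR-ed with the same flip mask, so the recovered counts refer to the original circuit. The function and counts below are illustrative, not the authors' code.

```python
def undo_rebalancing(counts, flip_mask):
    """counts: dict mapping measured bitstrings (e.g. '0110') to shot counts.
    flip_mask: '1' in positions where an X gate was applied just before measurement."""
    undone = {}
    for bits, shots in counts.items():
        original = "".join(str(int(b) ^ int(m)) for b, m in zip(bits, flip_mask))
        undone[original] = undone.get(original, 0) + shots
    return undone

# If the ideal output is mostly '111', flipping all three qubits means the hardware mostly
# reads '000', which suffers fewer 1->0 readout errors; the flips are then undone in software.
measured = {"000": 950, "001": 30, "010": 20}   # hypothetical counts taken after applying X to all qubits
print(undo_rebalancing(measured, "111"))        # {'111': 950, '110': 30, '101': 20}
```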
  • Expander Flows, Geometric Embeddings and Graph Partitioning
    Expander Flows, Geometric Embeddings and Graph Partitioning. SANJEEV ARORA, Princeton University; SATISH RAO and UMESH VAZIRANI, UC Berkeley.
    We give a O(√log n)-approximation algorithm for the sparsest cut, edge expansion, balanced separator, and graph conductance problems. This improves the O(log n)-approximation of Leighton and Rao (1988). We use a well-known semidefinite relaxation with triangle inequality constraints. Central to our analysis is a geometric theorem about projections of point sets in R^d, whose proof makes essential use of a phenomenon called measure concentration. We also describe an interesting and natural "approximate certificate" for a graph's expansion, which involves embedding an n-node expander in it with appropriate dilation and congestion. We call this an expander flow.
    Categories and Subject Descriptors: F.2.2 [Theory of Computation]: Analysis of Algorithms and Problem Complexity; G.2.2 [Mathematics of Computing]: Discrete Mathematics and Graph Algorithms. General Terms: Algorithms, Theory. Additional Key Words and Phrases: graph partitioning, semidefinite programs, graph separators, multicommodity flows, expansion, expanders.
    1. INTRODUCTION. Partitioning a graph into two (or more) large pieces while minimizing the size of the "interface" between them is a fundamental combinatorial problem. Graph partitions or separators are central objects of study in the theory of Markov chains, geometric embeddings and are a natural algorithmic primitive in numerous settings, including clustering, divide and conquer approaches, PRAM emulation, VLSI layout, and packet routing in distributed networks. Since finding optimal separators is NP-hard, one is forced to settle for approximation algorithms (see Shmoys [1995]). Here we give new approximation algorithms for some of the important problems in this class.
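To make the quantities in the abstract concrete, here is a brute-force edge-expansion/sparsest-cut computation on a tiny graph (one common normalization; the example is mine, not from the paper). The O(√log n) algorithm matters precisely because this enumeration over all cuts is exponential.

```python
import itertools

def expansion(edges, n, S):
    """Edge expansion of the cut (S, V\\S): crossing edges divided by min(|S|, |V\\S|)."""
    Sset = set(S)
    crossing = sum(1 for u, v in edges if (u in Sset) != (v in Sset))
    return crossing / min(len(Sset), n - len(Sset))

def sparsest_cut_bruteforce(edges, n):
    best = None
    for k in range(1, n // 2 + 1):
        for S in itertools.combinations(range(n), k):
            val = expansion(edges, n, S)
            if best is None or val < best[0]:
                best = (val, S)
    return best

# A 6-cycle: the sparsest cut removes 2 edges and splits the vertices 3/3, giving expansion 2/3.
edges = [(i, (i + 1) % 6) for i in range(6)]
print(sparsest_cut_bruteforce(edges, 6))
```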
  • Computational Pseudorandomness, the Wormhole Growth Paradox, and Constraints on the Ads/CFT Duality
    Computational Pseudorandomness, the Wormhole Growth Paradox, and Constraints on the AdS/CFT Duality Adam Bouland Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, 617 Soda Hall, Berkeley, CA 94720, U.S.A. [email protected] Bill Fefferman Department of Computer Science, University of Chicago, 5730 S Ellis Ave, Chicago, IL 60637, U.S.A. [email protected] Umesh Vazirani Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, 671 Soda Hall, Berkeley, CA 94720, U.S.A. [email protected] Abstract The AdS/CFT correspondence is central to efforts to reconcile gravity and quantum mechanics, a fundamental goal of physics. It posits a duality between a gravitational theory in Anti de Sitter (AdS) space and a quantum mechanical conformal field theory (CFT), embodied in a map known as the AdS/CFT dictionary mapping states to states and operators to operators. This dictionary map is not well understood and has only been computed on special, structured instances. In this work we introduce cryptographic ideas to the study of AdS/CFT, and provide evidence that either the dictionary must be exponentially hard to compute, or else the quantum Extended Church-Turing thesis must be false in quantum gravity. Our argument has its origins in a fundamental paradox in the AdS/CFT correspondence known as the wormhole growth paradox. The paradox is that the CFT is believed to be “scrambling” – i.e. the expectation value of local operators equilibrates in polynomial time – whereas the gravity theory is not, because the interiors of certain black holes known as “wormholes” do not equilibrate and instead their volume grows at a linear rate for at least an exponential amount of time.