Efficiently Generating Ground States Is Hard for Postselected Quantum
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
- Simulating Quantum Field Theory with a Quantum Computer
John Preskill, Lattice 2018, 28 July 2018.

This talk has two parts: (1) near-term prospects for quantum computing, and (2) opportunities in quantum simulation of quantum field theory. Exascale digital computers will advance our knowledge of QCD, but some challenges will remain, especially concerning real-time evolution and the properties of nuclear matter and quark-gluon plasma at nonzero temperature and chemical potential. Digital computers may never be able to address these (and other) problems; quantum computers will solve them eventually, though I'm not sure when. The physics payoff may still be far away, but today's research can hasten the arrival of a new era in which quantum simulation fuels progress in fundamental physics.

[Slide: "Frontiers of Physics", spanning short distance, long distance, and complexity: Higgs boson, large-scale structure, "More is different", neutrino masses, cosmic microwave background, many-body entanglement, supersymmetry, phases of quantum matter, dark matter, quantum gravity, dark energy, quantum computing, string theory, gravitational waves, quantum spacetime.]

[Slide images: particle collision, molecular chemistry, entangled electrons, superconductor, black hole, early universe.] A quantum computer can simulate efficiently any physical process that occurs in Nature. (Maybe. We don't actually know for sure.)

Two fundamental ideas: (1) quantum complexity, why we think quantum computing is powerful; (2) quantum error correction, why we think quantum computing is scalable. A complete description of a typical quantum state of just 300 qubits requires more bits than the number of atoms in the visible universe.

Why we think quantum computing is powerful: we know examples of problems that can be solved efficiently by a quantum computer, where we believe the problems are hard for classical computers.
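The 300-qubit claim is quick arithmetic: a generic 300-qubit pure state has 2^300 complex amplitudes, which already dwarfs the roughly 10^80 atoms usually quoted for the visible universe (that atom count is the standard order-of-magnitude estimate, not a figure from the talk). A one-line check:

```python
amplitudes = 2 ** 300                 # complex amplitudes in a generic 300-qubit state
atoms_in_visible_universe = 10 ** 80  # standard order-of-magnitude estimate
print(f"{amplitudes:.3e}", amplitudes > atoms_in_visible_universe)
# ~2.037e+90, True
```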
- Randomised Computation: TMs Taking Advice and the Karp-Lipton Theorem
INFR11102: Computational Complexity, 29/10/2019. Lecture 13: More on circuit models; Randomised Computation. Lecturer: Heng Guo.

1 TMs taking advice

An alternative way to characterize P/poly is via TMs that take advice.

Definition 1. For functions $F : \mathbb{N} \to \mathbb{N}$ and $A : \mathbb{N} \to \mathbb{N}$, the complexity class $\mathrm{DTime}[F]/A$ consists of languages $L$ such that there exists a TM $M$ with time bound $F(n)$ and a sequence $\{a_n\}_{n \in \mathbb{N}}$ of "advices" satisfying:
• $|a_n| \le A(n)$;
• for $|x| = n$, $x \in L$ if and only if $M(x, a_n) = 1$.

The following theorem explains the notation P/poly, namely "polynomial-time with polynomial advice".

Theorem 1. $\mathrm{P/poly} = \bigcup_{c,d \in \mathbb{N}} \mathrm{DTime}[n^c]/n^d$.

Proof. If $L \in \mathrm{P/poly}$, then it can be computed by a family $\mathcal{C} = \{C_1, C_2, \dots\}$ of Boolean circuits. Let $a_n$ be the description of $C_n$, and the polynomial-time machine $M$ just reads this description and simulates it. Hence $L \in \bigcup_{c,d \in \mathbb{N}} \mathrm{DTime}[n^c]/n^d$. For the other direction, if a language $L$ can be computed in polynomial time with polynomial advice, say by a TM $M$ with advices $\{a_n\}$, then we can construct circuits $\{D_n\}$ to simulate $M$, as in the theorem $\mathrm{P} \subset \mathrm{P/poly}$ from the last lecture. Hence, $D_n(x, a_n) = 1$ if and only if $x \in L$. The final circuit $C_n$ just does exactly what $D_n$ does, except that $C_n$ "hardwires" the advice $a_n$. Namely, $C_n(x) := D_n(x, a_n)$. Hence, $L \in \mathrm{P/poly}$.

2 Karp-Lipton Theorem

Dick Karp and Dick Lipton showed that NP is unlikely to be contained in P/poly [KL80].
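The "hardwiring" step at the end of the proof is just partial application: $C_n$ is $D_n$ with its advice input fixed to the constant string $a_n$. A toy Python sketch of that idea (the decision rule, the `advice_table`, and the helper names are invented for illustration; they are not part of the notes):

```python
# Toy illustration of hardwiring advice into a circuit family.
# D plays the role of the advice-taking machine/circuit D_n: it receives the
# input x (of length n) and an advice string a_n that depends only on n.
def D(x: str, advice: str) -> bool:
    # Arbitrary example rule: accept iff x appears in the advice table.
    return x in advice.split(",")

# Advice sequence {a_n}: one string per input length n, with |a_n| <= poly(n).
advice_table = {3: "101,110", 4: "1010"}

def hardwire(n: int):
    a_n = advice_table[n]          # fix the advice for length n ...
    return lambda x: D(x, a_n)     # ... giving C_n(x) := D_n(x, a_n)

C_3 = hardwire(3)                  # "circuit" for inputs of length 3
print(C_3("101"), C_3("111"))      # True False
```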
- Lecture 10: Learning DNF, AC0, Juntas (Feb 15, 2007; Lecturer: Ryan O'Donnell, Scribe: Elaine Shi)
Analysis of Boolean Functions (CMU 18-859S, Spring 2007). Lecture 10: Learning DNF, AC0, Juntas. Feb 15, 2007. Lecturer: Ryan O'Donnell. Scribe: Elaine Shi.

1 Learning DNF in Almost Polynomial Time

From previous lectures, we have learned that if a function $f$ is $\epsilon$-concentrated on some collection $\mathcal{S}$, then we can learn the function using membership queries in $\mathrm{poly}(|\mathcal{S}|, 1/\epsilon) \cdot \mathrm{poly}(n) \cdot \log(1/\delta)$ time. In the last lecture, we showed that a DNF of width $w$ is $\epsilon$-concentrated on a set of size $n^{O(w/\epsilon)}$, and concluded that width-$w$ DNFs are learnable in time $n^{O(w/\epsilon)}$. Today, we shall improve this bound by showing that a DNF of width $w$ is $\epsilon$-concentrated on a collection of size $w^{O(w \log \frac{1}{\epsilon})}$. We shall hence conclude that poly(n)-size DNFs are learnable in almost polynomial time. Recall that in the last lecture we introduced Håstad's Switching Lemma, and we showed that DNFs of width $w$ are $\epsilon$-concentrated on degrees up to $O(w \log \frac{1}{\epsilon})$.

Theorem 1.1 (Håstad's Switching Lemma). Let $f$ be computable by a width-$w$ DNF. If $(I, X)$ is a random restriction with $*$-probability $\rho$, then for all $d \in \mathbb{N}$,
$$\Pr_{I,X}\bigl[\mathrm{DT\text{-}depth}(f_{X \to I}) > d\bigr] \le (5 \rho w)^d.$$

Theorem 1.2. If $f$ is a width-$w$ DNF, then
$$\sum_{|U| \ge O(w \log \frac{1}{\epsilon})} \hat{f}(U)^2 \le \epsilon.$$

To show that a DNF of width $w$ is $\epsilon$-concentrated on a collection of size $w^{O(w \log \frac{1}{\epsilon})}$, we also need the following theorem:

Theorem 1.3. If $f$ is a width-$w$ DNF, then
$$\sum_{U} \left(\tfrac{1}{20w}\right)^{|U|} \bigl|\hat{f}(U)\bigr| \le 2.$$

Proof: Let $(I, X)$ be a random restriction with $*$-probability $\frac{1}{20w}$.
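For intuition, learning from Fourier concentration comes down to estimating $\hat{f}(U)$ for the sets $U$ in the collection and outputting the sign of the resulting truncated expansion. A minimal sketch of that estimation step, assuming query access to $f$ on $\{-1,+1\}^n$ (the sampling-based estimator, the threshold, and all names here are illustrative choices, not the exact procedure from the lecture):

```python
import itertools, random

def chi(U, x):
    """Character chi_U(x) = prod_{i in U} x_i, for x in {-1,+1}^n."""
    p = 1
    for i in U:
        p *= x[i]
    return p

def estimate_coefficient(f, n, U, samples=2000):
    """Estimate hat{f}(U) = E_x[f(x) chi_U(x)] by random sampling (query access to f)."""
    total = 0
    for _ in range(samples):
        x = [random.choice((-1, 1)) for _ in range(n)]
        total += f(x) * chi(U, x)
    return total / samples

def learn_low_degree(f, n, degree, threshold=0.1):
    """Keep estimated coefficients of degree <= `degree` that look significant."""
    coeffs = {}
    for d in range(degree + 1):
        for U in itertools.combinations(range(n), d):
            c = estimate_coefficient(f, n, U)
            if abs(c) >= threshold:
                coeffs[U] = c
    # Hypothesis: sign of the truncated Fourier expansion.
    return lambda x: 1 if sum(c * chi(U, x) for U, c in coeffs.items()) >= 0 else -1

# Example: a tiny width-2 DNF-like function, the OR of the first two coordinates
# in the +/-1 convention (+1 = True); its Fourier spectrum lives on degree <= 2.
f = lambda x: 1 if (x[0] == 1 or x[1] == 1) else -1
h = learn_low_degree(f, n=4, degree=2)
print(h([1, -1, -1, -1]), h([-1, -1, 1, 1]))   # expected (w.h.p.): 1 -1
```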
- Quantum Entanglement Producing and Precision Measurement with Spinor BECs
QUANTUM ENTANGLEMENT PRODUCING AND PRECISION MEASUREMENT WITH SPINOR BECS, by Zhen Zhang. A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy (Physics) in the University of Michigan, 2015. Doctoral Committee: Professor Luming Duan, Chair; Associate Professor Hui Deng; Professor Georg A. Raithel; Professor Duncan G. Steel; Assistant Professor Kai Sun. © Zhen Zhang 2015.

To my parents and my husband.

ACKNOWLEDGMENTS

I am greatly indebted to my adviser, Professor Luming Duan, for mentoring me over the last six years. He is a wise professor with sharp insights and broad knowledge, and also a kind and supportive supervisor. He offered me a lot of help and advice, both in my research and for my career. It has been my great honor working with him during my doctoral study. I would also like to thank my undergraduate research advisers, Professor Mailin Liang, Professor Wusheng Dai, and Professor Mi Xie at Tianjin University, China, for guiding me into the world of physics research and offering initial scientific training. I am also grateful to all the other professors who gave me advice and help and imparted their knowledge and enthusiasm through classroom teaching or otherwise during the ten years of undergraduate and graduate study. I also benefited tremendously from my group mates and visitors. In particular, Zhexuan Gong, who gave me a warm welcome and help when I joined the group; Yang-Hao Chan, who taught me cold atom physics at the very beginning of my research; Jiang-Min Zhang, who shared with me a lot of knowledge and experience both in research and in personal life; and Dong-Ling Deng and Sheng-Tao Wang, who discussed many problems with me.
Multi-Photon Boson-Sampling Machines Beating Early Classical Computers
Hui Wang1,2*, Yu He1,2*, Yu-Huai Li1,2*, Zu-En Su1,2, Bo Li1,2, He-Liang Huang1,2, Xing Ding1,2, Ming-Cheng Chen1,2, Chang Liu1,2, Jian Qin1,2, Jin-Peng Li1,2, Yu-Ming He1,2,3, Christian Schneider3, Martin Kamp3, Cheng-Zhi Peng1,2, Sven Höfling1,3,4, Chao-Yang Lu1,2,$, and Jian-Wei Pan1,2,#
1 Shanghai Branch, National Laboratory for Physical Sciences at Microscale and Department of Modern Physics, University of Science and Technology of China, Shanghai, 201315, China
2 CAS-Alibaba Quantum Computing Laboratory, CAS Centre for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, China
3 Technische Physik, Physikalisches Institut and Wilhelm Conrad Röntgen-Center for Complex Material Systems, Universität Würzburg, Am Hubland, D-97074 Würzburg, Germany
4 SUPA, School of Physics and Astronomy, University of St. Andrews, St. Andrews KY16 9SS, United Kingdom
* These authors contributed equally to this work. $ [email protected], # [email protected]

Boson sampling is considered a strong candidate to demonstrate "quantum computational supremacy" over classical computers. However, previous proof-of-principle experiments suffered from small photon numbers and low sampling rates owing to the inefficiencies of the single-photon sources and multi-port optical interferometers. Here, we develop two central components for high-performance boson sampling: robust multi-photon interferometers with 99% transmission rate, and actively demultiplexed single-photon sources from a quantum-dot-micropillar with simultaneously high efficiency, purity and indistinguishability. We implement and validate 3-, 4-, and 5-photon boson sampling, and achieve sampling rates of 4.96 kHz, 151 Hz, and 4 Hz, respectively, which are over 24,000 times faster than the previous experiments, and over 220 times faster than obtaining one sample by calculating the matrix permanent using the first electronic computer (ENIAC) and the first transistorized computer (TRADIC) in human history.
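The classical benchmark mentioned here amounts to evaluating permanents of n x n submatrices, which by Ryser's formula costs on the order of 2^n operations per amplitude. A minimal sketch of that calculation (the standard Ryser formula, shown only to illustrate the classical cost, not code from the experiment):

```python
import numpy as np
from itertools import combinations

def permanent(A):
    """Permanent of a square matrix via Ryser's formula; this version is
    O(2^n * n^2). Classical boson-sampling simulation needs |Perm|^2 of
    n x n submatrices of the interferometer unitary."""
    n = A.shape[0]
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            row_sums = A[:, list(cols)].sum(axis=1)
            total += (-1) ** k * np.prod(row_sums)
    return (-1) ** n * total

# Sanity check on the 3x3 all-ones matrix: permanent = 3! = 6.
print(permanent(np.ones((3, 3))))   # 6.0
```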
Quantum Computer-Aided Design of Quantum Optics Hardware
Jakob S. Kottmann,1,2,* Mario Krenn,1,2,3,† Thi Ha Kyaw,1,2 Sumner Alperin-Lea,1,2 and Alán Aspuru-Guzik1,2,3,4,‡
1 Chemical Physics Theory Group, Department of Chemistry, University of Toronto, Canada.
2 Department of Computer Science, University of Toronto, Canada.
3 Vector Institute for Artificial Intelligence, Toronto, Canada.
4 Canadian Institute for Advanced Research (CIFAR) Lebovic Fellow, Toronto, Canada.
(Dated: May 4, 2021)

The parameters of a quantum system grow exponentially with the number of involved quantum particles. Hence, the associated memory requirement to store or manipulate the underlying wavefunction goes well beyond the limit of the best classical computers for quantum systems composed of a few dozen particles, leading to serious challenges in their numerical simulation. This implies that the verification and design of new quantum devices and experiments are fundamentally limited to small system size. It is not clear how the full potential of large quantum systems can be exploited. Here, we present the concept of quantum computer designed quantum hardware and apply it to the field of quantum optics. Specifically, we map complex experimental hardware for high-dimensional, many-body entangled photons into a gate-based quantum circuit. We show explicitly how digital quantum simulation of boson sampling experiments can be realized. We then illustrate how to design quantum-optical setups for complex entangled photonic systems, such as high-dimensional Greenberger-Horne-Zeilinger states and their derivatives. Since photonic hardware is already on the edge of quantum supremacy and the development of gate-based quantum computers is rapidly advancing, our approach promises to be a useful tool for the future of quantum device design.
- Models of Optical Quantum Computing
Nanophotonics 2017; 6(3): 531–541. Review article, Open Access. Hari Krovi*. Models of optical quantum computing. DOI 10.1515/nanoph-2016-0136. Received August 2, 2016; accepted November 9, 2016.

Abstract: I review some work on models of quantum computing, optical implementations of these models, as well as the associated computational power. In particular, we discuss the circuit model and cluster state implementations using quantum optics with various encodings such as dual rail encoding, Gottesman-Kitaev-Preskill encoding, and coherent state encoding. Then we discuss intermediate models of optical computing such as boson sampling and its variants. Finally, we review some recent work in optical implementations of adiabatic quantum computing and analog optical computing. We also provide a brief description of the relevant aspects from complexity theory needed to understand the results surveyed.

… element method [9], and search on graphs for marked vertices [10]. All these examples provide evidence that this model of computing is potentially more powerful than classical computing. However, it should also be pointed out that there is currently no evidence that quantum computers can solve NP-hard problems in polynomial time. The class NP stands for nondeterministic polynomial time (defined more explicitly in the next section) and consists of problems whose solution can be checked in polynomial time by deterministic classical computers. The importance of this class stems from the fact that several practical problems lie in this class. Despite the lack of evidence of an exponential quantum speed-up for problems in this class, there are a lot of examples where one has a polynomial speed-up (such as a square-root speed-up) for several …
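One of the encodings listed in the abstract, dual-rail encoding, is easy to make concrete: a qubit is a single photon shared between two optical modes (|0>_L = photon in the first mode, |1>_L = photon in the second), and a beamsplitter acting on that pair of modes performs a single-qubit rotation. A minimal numpy sketch restricted to the one-photon subspace (one common phase convention; the 50:50 angle and the example state are arbitrary choices for illustration, not taken from the review):

```python
import numpy as np

# Dual-rail qubit: one photon shared between two modes.
# |0>_L = photon in mode a -> basis vector [1, 0]
# |1>_L = photon in mode b -> basis vector [0, 1]
ket0 = np.array([1.0, 0.0])

def beamsplitter(theta):
    """Beamsplitter of mixing angle theta, restricted to the one-photon
    subspace of two modes; it acts as a rotation on the dual-rail qubit."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

# A 50:50 beamsplitter (theta = pi/4) maps |0>_L to an equal superposition,
# playing a Hadamard-like role in this encoding.
bs = beamsplitter(np.pi / 4)
print(bs @ ket0)              # ~ [0.707, 0.707]
print(np.abs(bs @ ket0) ** 2) # photon-detection statistics: [0.5, 0.5]
```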
- BQP and the Polynomial Hierarchy
BQP and The Polynomial Hierarchy, based on "BQP and The Polynomial Hierarchy" by Scott Aaronson. Deepak Sirone J., 17111013; Hemant Kumar, 17111018. Dept. of Computer Science and Engineering, Indian Institute of Technology Kanpur.

Abstract. The problem of comparing two complexity classes boils down to either finding a problem which can be solved using the resources of one class but cannot be solved in the other, thereby showing they are different, or showing that the resources needed by one class can be simulated within the resources of the other class, hence implying a containment. When the relation between the resources provided by two classes, such as the classes BQP and PH, is not well known, researchers try to separate the classes in the oracle query model as a first step. This paper tries to break the ice about the relationship between BQP and PH, which has been open for a long time, by presenting evidence that quantum computers can solve problems outside of the polynomial hierarchy. The first result shows that there exists a relation problem which is solvable in BQP, but not in PH, with respect to an oracle; this gives evidence of a separation between BQP and PH. The second result shows an oracle relation problem separating BQP from BPP_path and SZK.

1 Introduction

The problem of comparing the complexity classes BQP and the polynomial hierarchy has been identified as one of the grand challenges of the field. The paper "BQP and the Polynomial Hierarchy" by Scott Aaronson proves an oracle separation result for BQP and the polynomial hierarchy in the form of two main results: …
- Research Statement (Bill Fefferman, University of Maryland/NIST)
Since the discovery of Shor's algorithm in the mid 1990's, it has been known that quantum computers can efficiently solve integer factorization, a problem of great practical relevance with no known efficient classical algorithm [1]. The importance of this result is impossible to overstate: the conjectured intractability of the factoring problem provides the basis for the security of the modern internet. However, it may still be a few decades before we build universal quantum computers capable of running Shor's algorithm to factor integers of cryptographically relevant size. In addition, we have little complexity theoretic evidence that factoring is computationally hard. Consequently, Shor's algorithm can only be seen as the first step toward understanding the power of quantum computation, which has become one of the primary goals of theoretical computer science. My research focuses not only on understanding the power of quantum computers of the indefinite future, but also on the desire to develop the foundations of computational complexity to rigorously analyze the capabilities and limitations of present-day and near-term quantum devices which are not yet fully scalable quantum computers. Furthermore, I am interested in using these capabilities and limitations to better understand the potential for cryptography in a fundamentally quantum mechanical world.

1 Comparing quantum and classical nondeterministic computation

Starting with the foundational paper of Bernstein and Vazirani, it has been conjectured that quantum computers are capable of solving problems whose solutions cannot be found, or even verified, efficiently on a classical computer [2].
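For context on the factoring claim: Shor's algorithm only needs the quantum computer for order finding; the classical post-processing that turns an order into a factor is elementary number theory. A small sketch of that classical step (standard textbook material, not part of the research statement; the quantum order-finding subroutine is replaced by brute force, and N is assumed to be an odd composite that is not a prime power, like 15):

```python
import math, random

def order(a, N):
    """Multiplicative order of a mod N, found by brute force here; this is
    the step a quantum computer performs efficiently in Shor's algorithm."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical_postprocessing(N):
    """Recover a nontrivial factor of N from the order of a random base."""
    while True:
        a = random.randrange(2, N)
        g = math.gcd(a, N)
        if g > 1:
            return g                      # lucky draw: a already shares a factor
        r = order(a, N)
        if r % 2 == 0 and pow(a, r // 2, N) != N - 1:
            # a^(r/2) is a nontrivial square root of 1 mod N, so this gcd is proper.
            return math.gcd(pow(a, r // 2, N) - 1, N)

print(shor_classical_postprocessing(15))  # 3 or 5
```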
NP-Complete Problems and Physical Reality
Scott Aaronson*

Abstract. Can NP-complete problems be solved efficiently in the physical universe? I survey proposals including soap bubbles, protein folding, quantum computing, quantum advice, quantum adiabatic algorithms, quantum-mechanical nonlinearities, hidden variables, relativistic time dilation, analog computing, Malament-Hogarth spacetimes, quantum gravity, closed timelike curves, and "anthropic computing." The section on soap bubbles even includes some "experimental" results. While I do not believe that any of the proposals will let us solve NP-complete problems efficiently, I argue that by studying them, we can learn something not only about computation but also about physics.

1 Introduction

"Let a computer smear—with the right kind of quantum randomness—and you create, in effect, a 'parallel' machine with an astronomical number of processors . . . All you have to do is be sure that when you collapse the system, you choose the version that happened to find the needle in the mathematical haystack." —From Quarantine [31], a 1992 science-fiction novel by Greg Egan

If I had to debate the science writer John Horgan's claim that basic science is coming to an end [48], my argument would lean heavily on one fact: it has been only a decade since we learned that quantum computers could factor integers in polynomial time. In my (unbiased) opinion, the showdown that quantum computing has forced—between our deepest intuitions about computers on the one hand, and our best-confirmed theory of the physical world on the other—constitutes one of the most exciting scientific dramas of our time.
- Boson Sampling for Molecular Vibronic Spectra (arXiv:1412.8427v1 [quant-ph], 29 Dec 2014)
Joonsuk Huh,* Gian Giacomo Guerreschi, Borja Peropadre, Jarrod R. McClean, and Alán Aspuru-Guzik†
Department of Chemistry and Chemical Biology, Harvard University, Cambridge, Massachusetts 02138, United States
(Dated: December 30, 2014)

Quantum computers are expected to be more efficient in performing certain computations than any classical machine. Unfortunately, the technological challenges associated with building a full-scale quantum computer have not yet allowed the experimental verification of such an expectation. Recently, boson sampling has emerged as a problem that is suspected to be intractable on any classical computer, but efficiently implementable with a linear quantum optical setup. Therefore, boson sampling may offer an experimentally realizable challenge to the Extended Church-Turing thesis, and this remarkable possibility motivated much of the interest around boson sampling, at least in relation to complexity-theoretic questions. In this work, we show that the successful development of a boson sampling apparatus would not only answer such inquiries, but also yield a practical tool for difficult molecular computations. Specifically, we show that a boson sampling device with a modified input state can be used to generate molecular vibronic spectra, including complicated effects such as Duschinsky rotations.

I. INTRODUCTION

Quantum mechanics allows the storage and manipulation of information in ways that are not possible according to classical physics. At a glance, it appears evident that the set of operations characterizing a quantum computer is strictly larger than the operations possible in classical hardware. This speculation is at the basis of quantum speedups that have been achieved for oracular and search problems [1, 2].
- 0.1 Axt, Loop Program, and Grzegorczyk Hierarchies
Computable functions can have some quite complex definitions. For example, a loop programmable function might be given via a loop program that has depth of nesting of the loop-end pair, say, equal to 200. Now this is complex! Or a function might be given via an arbitrarily complex sequence of primitive recursions, with the restriction that the computed function is majorized by some known function, for all values of the input (for the concept of majorization see the Subsection on the Ackermann function). But does such definitional (and therefore "static") complexity have any bearing on the computational (dynamic) complexity of the function? We will see that it does, and we will connect definitional and computational complexities quantitatively. Our study will be restricted to the class $\mathrm{PR}$, which we will subdivide into an infinite sequence of increasingly inclusive subclasses $S_i$: a so-called hierarchy of classes of functions.

0.1.0.1 Definition. A sequence $(S_i)_{i \ge 0}$ of subsets of $\mathrm{PR}$ is a primitive recursive hierarchy provided all of the following hold:
(1) $S_i \subseteq S_{i+1}$, for all $i \ge 0$;
(2) $\mathrm{PR} = \bigcup_{i \ge 0} S_i$.
The hierarchy is proper or nontrivial iff $S_i \ne S_{i+1}$ for all but finitely many $i$. If $f \in S_i$ then we say that its level in the hierarchy is $\le i$. If $f \in S_{i+1} - S_i$, then its level is equal to $i + 1$. The first hierarchy that we will define is due to Axt and Heinermann [[5] and [1]].
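To make the "depth of nesting of the loop-end pair" measure concrete: in a loop program the only iteration construct is "Loop x … end", which repeats its body x times, and the only arithmetic is assignment, zeroing, and successor. A hypothetical Python rendering (for-loops standing in for Loop-end pairs; the function and variable names are illustrative, not the formal syntax from the text):

```python
def add(x, y):
    """Nesting depth 1: a single Loop-end pair."""
    z = x
    for _ in range(y):       # Loop y ... end
        z = z + 1            # successor is the only arithmetic step allowed
    return z

def mul(x, y):
    """Nesting depth 2: a Loop-end pair whose body contains another one."""
    z = 0
    for _ in range(y):       # Loop y
        w = z
        for _ in range(x):   #   Loop x
            w = w + 1        #     w := w + 1, one successor step at a time
        z = w                #   end
    return z                 # end

print(add(3, 4), mul(3, 4))  # 7 12
```

Deeper nesting buys faster-growing functions, which is exactly the kind of definitional ("static") complexity the hierarchies discussed in the text stratify.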