Efficiently Generating Ground States Is Hard for Postselected Quantum Computation


Yuki Takeuchi, Yasuhiro Takahashi, and Seiichiro Tani
NTT Communication Science Laboratories, NTT Corporation, 3-1 Morinosato Wakamiya, Atsugi, Kanagawa 243-0198, Japan
arXiv:2006.12125v1 [quant-ph] 22 Jun 2020

Although quantum computing is expected to outperform universal classical computing, an unconditional proof of this assertion seems to be hard because an unconditional separation between BQP and BPP implies P ≠ PSPACE. Because of this, the quantum-computational-supremacy approach has been actively studied; it shows that if the output probability distributions from a family of quantum circuits can be efficiently simulated in classical polynomial time, then the polynomial hierarchy collapses to its second or third level. Since it is widely believed that the polynomial hierarchy does not collapse, this approach shows one kind of quantum advantage under a plausible assumption. On the other hand, the limitations of universal quantum computing are also actively studied. For example, it is believed to be impossible to generate ground states of any local Hamiltonians in quantum polynomial time. In this paper, we give evidence for this impossibility by applying an argument used in the quantum-computational-supremacy approach. More precisely, we show that if ground states of any 3-local Hamiltonians can be approximately generated in quantum polynomial time with postselection, then the counting hierarchy collapses to its first level. Our evidence is superior to the existing findings in the sense that we reduce the impossibility to an unlikely relation between classical complexity classes. Furthermore, our argument can be used to give evidence that at least one 3-local Hamiltonian exists such that its ground state cannot be represented by a polynomial number of bits, which may be related to a gap between QMA and QCMA.

Quantum computing is expected to outperform classical computing. Indeed, quantum advantages have already been shown in terms of query complexity [1] and communication complexity [2]. Regarding time complexity, it is also believed that universal quantum computing has advantages over its classical counterparts. For example, although an efficient quantum algorithm, i.e., Shor's algorithm, exists for integer factorization [3], there is no known classical algorithm that can do so efficiently. However, an unconditional proof that there is no such classical algorithm seems to be hard, because an unconditional separation between BQP and BPP implies P ≠ PSPACE. Whether P ≠ PSPACE is a long-standing problem in the field of computer science.
To give evidence of quantum advantage in terms of computational time, a sampling approach has been actively studied. This approach is to show that if the output probability distributions from a family of (non-universal) quantum circuits can be efficiently simulated in classical polynomial time, then the polynomial hierarchy (PH) collapses to its second or third level. Since it is widely believed that PH does not collapse, this approach shows one kind of quantum advantage (under a plausible complexity-theoretic assumption). This type of quantum advantage is called quantum computational supremacy [4]. The quantum-computational-supremacy approach is remarkable because it reduces the impossibility of an efficient classical simulation of quantum computing to unlikely relations between classical complexity classes. Since classical complexity classes have been studied for a longer time than quantum complexity classes, unlikely relations between classical complexity classes would be more unlikely than those involving quantum complexity classes. As sub-universal quantum computing models showing quantum computational supremacy, several models have been proposed, such as boson sampling [5–7], instantaneous quantum polynomial time (IQP) [8, 9] and its variants [10–13], deterministic quantum computation with one quantum bit (DQC1) [14, 15], the Hadamard-classical circuit with one qubit (HC1Q) [16], and quantum random circuit sampling [17–20]. A proof-of-principle demonstration of quantum computational supremacy has recently been achieved using quantum random circuit sampling with 53 qubits [21]. Regarding other models, small-scale experiments have been performed toward the goal of demonstrating quantum computational supremacy [22–27].

On the other hand, the limitations of universal quantum computing are also actively studied (e.g., see Refs. [28–30]). Understanding these limitations is important to clarify how to make good use of universal quantum computers. For example, it is believed to be impossible to generate ground states of any local Hamiltonians in quantum polynomial time, while their heuristic generation has been studied using quantum annealing [31], variational quantum eigensolvers (VQE) [32], and quantum approximate optimization algorithms (QAOA) [33]. Since deciding whether the ground-state energy of a given 2-local Hamiltonian is low or high with polynomial precision is a QMA-complete problem [34], if efficient generation of the ground states is possible, then BQP = QMA, which seems to be unlikely. This unlikeliness can be strengthened using the complexity class P^{QMA[log]}, which includes QMA [35]. Simulating a single-qubit measurement on ground states of 5-local Hamiltonians is P^{QMA[log]}-complete [36]. Therefore, if efficient generation is possible, then BQP = P^{QMA[log]}, which would be even more unlikely than BQP = QMA. As with the gap between quantum and classical computing in terms of time complexity, it is hard to unconditionally show the impossibility of efficiently generating the ground states.

In this paper, we utilize a technique from the quantum-computational-supremacy approach to give new evidence for this impossibility. More precisely, in Theorem 1, we show that if the ground states of any 3-local Hamiltonians can be approximately generated in quantum polynomial time with postselection, then the counting hierarchy (CH) collapses to its first level, i.e., CH = PP. In Theorem 2, we consider a different notion of approximation and show that if the probability distributions obtained from the ground states can be approximately generated in quantum polynomial time with postselection, then CH = PP. Theorem 2 studies the hardness of approximately generating the ground states from a different perspective, because it is closely related to the hardness of approximately generating the probability distributions. Furthermore, by using a similar argument, we show that if the ground states of any 3-local Hamiltonians can be uniquely represented by a polynomial number of bits, then CH collapses to its second level, i.e., CH = PP^{PP}. This result seems to give additional evidence to support the conclusion that QMA is strictly larger than QCMA. Our results differ from the existing ones on the impossibility of efficient ground-state generation in the sense that we reduce the impossibility to unlikely relations between classical complexity classes, as in the quantum-computational-supremacy approach.
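To make the central object concrete, here is a small Python sketch that builds a toy 3-local Hamiltonian on a handful of qubits as a sum of Hermitian terms, each acting on at most three (possibly non-adjacent) qubits and normalized to have operator norm at most one, and then extracts its ground state by exact diagonalization. The instance, the qubit and term counts, and all function names are illustrative assumptions rather than anything taken from the paper; note also that brute-force diagonalization scales exponentially with the number of qubits, which is exactly why efficient ground-state generation is not expected to be possible in general.

```python
import numpy as np


def embed(term, targets, n):
    """Embed a k-qubit operator acting on the qubits in `targets` into the full n-qubit space."""
    k = len(targets)
    others = [q for q in range(n) if q not in targets]
    # Operator on the target qubits tensored with identity on the rest,
    # with tensor factors currently ordered as: targets..., others...
    full = np.kron(term, np.eye(2 ** (n - k))).reshape([2] * (2 * n))
    order = list(targets) + others          # physical qubit owned by each current axis
    inv = np.argsort(order)                 # inverse permutation: where qubit j currently sits
    perm = list(inv) + [n + i for i in inv] # apply the same reordering to row and column axes
    return full.transpose(perm).reshape(2 ** n, 2 ** n)


rng = np.random.default_rng(0)
n, t = 6, 8  # toy instance: 6 qubits, 8 three-local terms (purely illustrative sizes)

H = np.zeros((2 ** n, 2 ** n), dtype=complex)
for _ in range(t):
    targets = sorted(rng.choice(n, size=3, replace=False))
    a = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
    term = (a + a.conj().T) / 2                 # Hermitian term on 3 qubits
    term /= np.linalg.norm(term, ord=2)         # operator norm bounded by one
    H += embed(term, targets, n)

energies, states = np.linalg.eigh(H)            # exact diagonalization (exponential cost)
ground_energy, ground_state = energies[0], states[:, 0]
print("ground-state energy:", round(float(ground_energy), 6))
print("ground-state vector has", ground_state.size, "amplitudes (2^n grows exponentially)")
```

The point of the sketch is only to show what "a ground state of a 3-local Hamiltonian" means operationally; the hardness results above concern whether such states can be prepared efficiently on a (postselected) quantum computer, not whether they can be written down for tiny instances.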
FIG. 1: (a) A quantum circuit U_x with an input state |0⟩^{⊗m} |g⟩^{⊗m′} used to decide whether x ∈ L or x ∉ L, where L is in postQMA. Let o and p be the output and postselection registers, respectively. If o = p = 1, we conclude that x ∈ L. On the other hand, if p = 1 and o = 0, then x ∉ L. The output probability distribution of the nm′ + m qubits is denoted by {p_z}_{z ∈ {0,1}^{nm′+m}}. Each meter symbol represents a Z-basis measurement. (b) The same quantum circuit as in Fig. 1(a) except that |g⟩ is replaced with an approximate state ρ_approx. The output and postselection registers are denoted by o′ and p′, respectively.

Preliminaries.—Before we explain our results, we briefly review the preliminaries required to understand our argument. We use several complexity classes, all of which are sets of decision problems. Here, decision problems are mathematical problems that can be answered by YES or NO. We mainly use the complexity classes CH, postBQP, postQCMA, and postQMA, where the latter three are postselected versions of BQP, QCMA, and QMA, respectively. We assume that readers know the major complexity classes, such as P, PP, PSPACE, and PH (for their definitions, see Ref. [37]).

The class CH is the union of the classes C_kP over all non-negative integers k, i.e., CH = ∪_{k≥0} C_kP, where C_0P = P and C_{k+1}P = PP^{C_kP} for all k ≥ 0. We say that CH collapses to its k-th level when CH = C_kP. The first-level collapse of CH seems especially unlikely. This is because, by Toda's theorem [38], PH ⊆ P^{PP} ⊆ CH; therefore, if CH = PP, then PH ⊆ PP. Since there exists an oracle relative to which PH (more precisely, P^{NP}) is not contained in PP [39], the inclusion PH ⊆ PP seems to be unlikely.

The complexity class postQMA is defined as follows [40, 41]: a language L is in postQMA if and only if there exist […]. By combining results in Refs. [40, 42], one obtains the following lemma (the proof is given in Appendix A): every language in postQMA can be decided by postselected quantum circuits if a polynomial number of copies of a ground state |g⟩ (i.e., a minimum-eigenvalue state) of an appropriate 3-local Hamiltonian is given (see Fig. 1(a)). Note that a 3-local Hamiltonian H = Σ_{i=1}^{t} H^{(i)} with a polynomial t is the sum of polynomially many Hermitian operators {H^{(i)}}_{i=1}^{t}, each of which acts on at most three (possibly geometrically nonlocal) qubits. The operator norm ‖H^{(i)}‖ is upper-bounded by one for any 1 ≤ i ≤ t.

By removing ρ_x and ρ from the definition of postQMA, the complexity class postBQP is defined. Since PP = postBQP [43], readers can replace PP with postBQP if they are not familiar with the definition of PP. Furthermore, the class postQCMA is defined by replacing each quantum state ρ_x and ρ with a polynomial number of classical bits.
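Postselection, as used in Fig. 1, simply means conditioning on the postselection register reading 1 and then examining the output register. The Python sketch below makes this conditioning step concrete for a stand-in circuit: it uses a random unitary in place of U_x (and omits the ground-state input entirely), computes the Z-basis output distribution {p_z}, and evaluates Pr[o = 1 | p = 1]. The qubit count, the random unitary, and the 1/2 decision threshold are illustrative assumptions, not the construction from the paper; the sketch only shows the operational meaning of the conditioning that postBQP and postQMA formalize.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the circuit U_x: a random unitary on a few qubits.
n_qubits = 4                 # qubit 0: output register o, qubit 1: postselection register p
dim = 2 ** n_qubits
a = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
u, _ = np.linalg.qr(a)       # approximately Haar-random unitary via QR decomposition

state = np.zeros(dim, dtype=complex)
state[0] = 1.0               # start from |0...0>
state = u @ state            # final state of the placeholder circuit

probs = np.abs(state) ** 2   # Z-basis outcome distribution {p_z}


def bit(z, idx, width):
    """Value of qubit `idx` in the basis label z (qubit 0 taken as the leftmost bit)."""
    return (z >> (width - 1 - idx)) & 1


# Postselect on p = 1, then look at the output bit o.
p_post = sum(probs[z] for z in range(dim) if bit(z, 1, n_qubits) == 1)
p_joint = sum(probs[z] for z in range(dim)
              if bit(z, 1, n_qubits) == 1 and bit(z, 0, n_qubits) == 1)

conditional = p_joint / p_post   # Pr[o = 1 | p = 1]
print(f"Pr[p=1]       = {p_post:.4f}")
print(f"Pr[o=1 | p=1] = {conditional:.4f}")
print("decision:", "x in L" if conditional > 0.5 else "x not in L")
```

In the actual argument the conditional acceptance probability is promised to be bounded away from 1/2 on both sides, which is what allows a single postselected run, as in Fig. 1, to decide membership.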