Simulating Hamiltonian Dynamics on a Small Quantum Computer

Andrew Childs
Department of Combinatorics & Optimization and Institute for Quantum Computing, University of Waterloo
Based in part on joint work with Dominic Berry, Richard Cleve, Robin Kothari, and Rolando Somma.

"... nature isn't classical, dammit, and if you want to make a simulation of nature, you'd better make it quantum mechanical, and by golly it's a wonderful problem, because it doesn't look so easy."
Richard Feynman, Simulating physics with computers (1981)

Outline

• The Hamiltonian simulation problem
• Three approaches to simulation: product formulas, quantum walk, fractional queries
• Comparison of methods for small-scale simulations

Hamiltonian simulation

Problem: Given* a Hamiltonian H, a time t, and an error tolerance ε (say, with respect to trace distance), find a quantum circuit that performs the unitary operation $e^{-iHt}$ (on an unknown quantum state) with error at most ε.

*For an efficient simulation, H should be concisely specified. More generally, H can be time-dependent.

Applications:
• Simulating physics
• Implementing continuous-time quantum algorithms (quantum walk, adiabatic optimization, linear equations, ...)

Local and sparse Hamiltonians

Local Hamiltonians [Lloyd 96]: $H = \sum_{j=1}^m H_j$, where each $H_j$ acts on k = O(1) qubits.

Sparse Hamiltonians [Aharonov, Ta-Shma 03]: at most d nonzero entries per row, with d = poly(log N) (where H is N × N). In any given row, the location of the jth nonzero entry and its value can be computed efficiently (or is given by a black box).

Note: a k-local Hamiltonian with m terms is d-sparse with $d = 2^k m$.

What should we simulate?

Consider a two-dimensional grid of n qubits, with interactions only between nearest neighbors (at most 2n terms).
• Initialize the system in a product state
• Evolve for time t
• Measure a local observable for each spin
This is probably hard to do with a classical computer for more than about 30 qubits.

Related question: How can we apply quantum simulation to solve computational problems in quantum chemistry, materials science, etc.? (See, e.g., the talks by Alán and Matthias.)

Overview of simulation methods

Product formulas
• Decompose the Hamiltonian into a sum of terms and recombine by alternating between them
• With high-order formulas, complexity can be close to linear in t, but with an exponential prefactor

Quantum walk
• Implement a unitary operation whose spectrum is related to the Hamiltonian; use phase estimation to adjust the eigenvalues
• Complexity is linear in t with no large prefactor; also improved dependence on sparsity

Fractional queries
• Perform a compressed simulation of a very accurate product formula
• Strictly improves over product formulas, with no large prefactors
• Dependence on error is poly(log(1/ε)) instead of poly(1/ε)

Product formulas

Simulating a sum of terms

Suppose we want to simulate $H = \sum_{i=1}^m H_i$. Combine individual simulations with the Lie product formula:

  $\lim_{r \to \infty} \bigl(e^{-iAt/r} e^{-iBt/r}\bigr)^r = e^{-i(A+B)t}$

  $\bigl(e^{-iAt/r} e^{-iBt/r}\bigr)^r = e^{-i(A+B)t} + O(t^2/r)$

To ensure error at most ε, take $r = O\bigl((\|H\| t)^2 / \epsilon\bigr)$. [Lloyd 96]

High-order product formulas

To get a better approximation, use higher-order formulas:

  $\bigl(e^{-iAt/r} e^{-iBt/r}\bigr)^r = e^{-i(A+B)t} + O(t^2/r)$

  $\bigl(e^{-iAt/2r} e^{-iBt/r} e^{-iAt/2r}\bigr)^r = e^{-i(A+B)t} + O(t^3/r^2)$

  ...
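These first two formulas are easy to check numerically. The sketch below uses small random Hermitian matrices as stand-ins for A and B (an illustrative choice, not tied to any particular model) and confirms that doubling r roughly halves the first-order error and quarters the second-order error; scipy.linalg.expm supplies the matrix exponentials.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def random_hermitian(n):
    """Random Hermitian matrix with unit spectral norm (a stand-in for a Hamiltonian term)."""
    m = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    h = (m + m.conj().T) / 2
    return h / np.linalg.norm(h, 2)

A, B = random_hermitian(4), random_hermitian(4)
t = 1.0
exact = expm(-1j * (A + B) * t)

for r in (10, 20, 40, 80):
    first = np.linalg.matrix_power(expm(-1j * A * t / r) @ expm(-1j * B * t / r), r)
    second = np.linalg.matrix_power(
        expm(-1j * A * t / (2 * r)) @ expm(-1j * B * t / r) @ expm(-1j * A * t / (2 * r)), r)
    err1 = np.linalg.norm(first - exact, 2)   # should shrink roughly like t^2 / r
    err2 = np.linalg.norm(second - exact, 2)  # should shrink roughly like t^3 / r^2
    print(f"r = {r:3d}   first-order error = {err1:.2e}   second-order error = {err2:.2e}")
```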
Systematic expansions to arbitrary order are known [Suzuki 92]. Using the kth-order expansion, the number of exponentials required for an approximation with error at most ε is at most

  $5^{2k} m^2 \|H\| t \left( \frac{m \|H\| t}{\epsilon} \right)^{1/2k}$

With k large, this is only slightly superlinear in t. Sublinear simulation is impossible (the "no-fast-forwarding theorem"). [Berry, Ahokas, Cleve, Sanders 07]

Choose k to minimize $5^{2k} m^2 \|H\| t \, (m \|H\| t / \epsilon)^{1/2k}$. Taking

  $k \approx \tfrac{1}{2} \sqrt{\log_5 (m \|H\| t / \epsilon)}$  gives  $O\!\left( m^2 \|H\| t \, 5^{2\sqrt{\log_5 (m \|H\| t / \epsilon)}} \right)$

This is subpolynomial (but superlogarithmic) in 1/ε; closer to polynomial (cf. the best known classical factoring algorithms).

[Plot: the optimal order parameter $\tfrac{1}{2}\sqrt{\log_5 x}$ as a function of $x = m\|H\|t/\epsilon$ from $10^0$ to $10^{12}$, showing when it helps to use higher-order formulas.]

Sparse Hamiltonians and coloring

Strategy [Childs, Cleve, Deotto, Farhi, Gutmann, Spielman 03; Aharonov, Ta-Shma 03]: Color the edges of the graph of H. Then the simulation breaks into small pieces that are easy to handle.

[Figure: a sparse graph decomposed as a sum of subgraphs, one per edge color.]

A sparse graph can be efficiently colored using only local information [Linial 87], so this gives efficient simulations. Can produce a $d^2$-coloring locally, for an overhead of $d^4$.

Star decompositions

[Figure: a graph decomposed into "galaxies", each a disjoint union of star graphs.]

Strategy: Color the edges so that each color forms a "galaxy" (every component is a star graph). Simulate each galaxy by brute force and recombine.

Tradeoff vs. edge coloring:
• The decomposition has fewer terms
• Each term is harder to simulate (2nd neighbors)
Total overhead is $d^3$. [Childs, Kothari 10]

Quantum walk
[Childs, arXiv:0810.0312] [Berry and Childs, arXiv:0910.4157]

Hamiltonian simulation by quantum walk

Another way to simulate an N × N Hamiltonian H is to implement a related discrete-time (Szegedy) quantum walk. Expand the space from $\mathbb{C}^N$ to $\mathbb{C}^{N+1} \otimes \mathbb{C}^{N+1}$. The walk operator is the product of two reflections:

• Swap: $S |j,k\rangle = |k,j\rangle$
• Reflection about $\mathrm{span}\{|\psi_1\rangle, \ldots, |\psi_N\rangle\}$, i.e., $2TT^\dagger - I$ where $T|j\rangle = |\psi_j\rangle$ and

  $|\psi_j\rangle := |j\rangle \otimes \left( \sum_{k=1}^N \sqrt{\frac{H_{jk}^*}{\|H\|_1}} \, |k\rangle + \nu_j |N+1\rangle \right), \qquad \|H\|_1 := \max_j \sum_{k=1}^N |H_{jk}|.$

[Childs 10]

To simulate H for time t:
1. Apply T to the input state $|\phi\rangle$.
2. Perform phase estimation with $U = iS(2TT^\dagger - I)$, estimating a phase $e^{\pm i \arcsin \lambda}$ for the component of $T|\phi\rangle$ corresponding to an eigenvector of H with eigenvalue λ.
3. Use the estimate of arcsin λ to estimate λ, and apply the phase $e^{-i\lambda t}$.
4. Uncompute the phase estimation and T, giving an approximation of $e^{-iHt} |\phi\rangle$.

Theorem: $O(\|H\|_1 t / \sqrt{\epsilon})$ steps of this walk suffice to simulate H for time t with error at most ε (in trace distance). [Childs 10]
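The spectral relationship that steps 2 and 3 rely on can be checked in isolation with a few lines of linear algebra. In the sketch below the states defining the isometry T are simply random (an illustrative simplification; in the algorithm they are built from the rows of H so that the discriminant $T^\dagger S T$ equals $H/\|H\|_1$). The check: every eigenvalue ν of $T^\dagger S T$ shows up as an eigenvalue $e^{i \arcsin \nu}$ of the walk operator $iS(2TT^\dagger - I)$, which is exactly what phase estimation exploits.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5
dim = N + 1  # each of the two registers has dimension N + 1

# Isometry T with columns T|j> = |j> (x) |phi_j>.  In the algorithm the states |phi_j>
# are built from the rows of H (so that T^dag S T = H / ||H||_1); here they are random.
T = np.zeros((dim * dim, N), dtype=complex)
for j in range(N):
    phi = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
    phi /= np.linalg.norm(phi)
    ej = np.zeros(dim)
    ej[j] = 1.0
    T[:, j] = np.kron(ej, phi)

# Swap operator S|j,k> = |k,j>.
S = np.zeros((dim * dim, dim * dim))
for j in range(dim):
    for k in range(dim):
        S[k * dim + j, j * dim + k] = 1.0

h = T.conj().T @ S @ T                                 # "discriminant": Hermitian, norm <= 1
U = 1j * S @ (2 * T @ T.conj().T - np.eye(dim * dim))  # walk operator iS(2TT^dag - I)

walk_eigs = np.linalg.eigvals(U)       # U is unitary, so these lie on the unit circle
for nu in np.linalg.eigvalsh(h):       # real eigenvalues of the discriminant
    predicted = np.exp(1j * np.arcsin(np.clip(nu, -1.0, 1.0)))
    gap = np.min(np.abs(walk_eigs - predicted))
    print(f"nu = {nu:+.4f}   distance from e^(i arcsin nu) to nearest walk eigenvalue: {gap:.1e}")
```

Each printed distance should be at machine precision; recovering arcsin ν by phase estimation and converting it back to ν is step 3 of the procedure above.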
Faster simulation of sparse Hamiltonians

Perform the quantum walk steps by brute force (query all d neighbors):

  $O(d \|H\|_1 t / \sqrt{\epsilon}) \le O(d^{3/2} \|H\| t / \sqrt{\epsilon}) \le O(d^2 \|H\|_{\max} t / \sqrt{\epsilon})$

This is exactly linear in t; it also scales better in d, but worse in ε.

Improved alternative: using only two queries, prepare

  $|j'\rangle := |j\rangle \otimes \frac{1}{\sqrt{d}} \sum_{k=1}^N \left( \sqrt{\frac{H_{jk}^*}{\|H\|_{\max}}} \, |k,0\rangle + \sqrt{1 - \frac{|H_{jk}|}{\|H\|_{\max}}} \, |k,1\rangle \right), \qquad \|H\|_{\max} := \max_{j,k} |H_{jk}|.$

Resulting complexity: $O\!\left( \|H\|_1 t / \sqrt{\epsilon} + d \|H\|_{\max} t \right) \le O\!\left( d \|H\|_{\max} t / \sqrt{\epsilon} \right)$. [Berry, Childs 12]

Fractional queries
[Berry, Childs, Cleve, Kothari, and Somma, arXiv:1312.1414]

Fractional- and continuous-query models

A black box hides a string $x \in \{0,1\}^n$. Quantum query: $Q |i,b\rangle = (-1)^{b x_i} |i,b\rangle$.

[Circuit diagrams: a discrete-query algorithm alternating unitaries $U_0, U_1, \ldots$ with full queries Q; the same algorithm with fractional queries $Q^{1/2}$, then $Q^{1/4}$, and so on; and, in the limit, continuous-time evolution $e^{-i\pi(H_Q + H_D)t}$.]

The continuous-query model is useful for designing algorithms [Farhi, Goldstone, Gutmann 07]. Is it more powerful than the discrete query model? No: a t-query fractional-query algorithm can be simulated with $O\bigl(t \tfrac{\log t}{\log\log t}\bigr)$ discrete queries. [Cleve, Gottesman, Mosca, Somma, Yonge-Mallo 09]

Simulating fractional queries

Fractional-query gadget:

[Circuit: an ancilla prepared in $|0\rangle$ undergoes $R_\alpha$, controls a query Q on the target, then P and $R_\alpha$ are applied and the ancilla is measured; on success, the gadget applies $Q^\alpha$ to the target.]

  $R_\alpha = \frac{1}{\sqrt{c+s}} \begin{pmatrix} \sqrt{c} & \sqrt{s} \\ \sqrt{s} & -\sqrt{c} \end{pmatrix}, \qquad P = \begin{pmatrix} 1 & 0 \\ 0 & i \end{pmatrix}, \qquad c = \cos\frac{\pi\alpha}{2}, \quad s = \sin\frac{\pi\alpha}{2}$

A "segment" implementing $U_m Q^{\alpha_m} U_{m-1} \cdots U_1 Q^{\alpha_1} U_0$ replaces each fractional query by this gadget, with one ancilla per query.

[Circuit: the rotations $R_{\alpha_1}, \ldots, R_{\alpha_m}$ are applied to the ancillas up front, the controlled queries are interleaved with $U_0, \ldots, U_m$ on the target register, and the measurements are deferred to the end.]

Behavior of a segment

Truncating the ancillas to Hamming weight $k = O\bigl(\tfrac{\log(1/\epsilon)}{\log\log(1/\epsilon)}\bigr)$ introduces error at most ε. By rearranging the circuit, k queries suffice. But this still only succeeds with constant probability.

Correcting faults

[Diagram: a chain of segments 1, 2, ..., t, each succeeding with probability 3/4; a fault (probability 1/4) is corrected by undoing the faulty segment and retrying, and the correction steps may themselves fault.]

[Image: Rube Goldberg, Professor Butts and the Self-Operating Napkin]

[Cleve, Gottesman, Mosca, Somma, Yonge-Mallo 09]

Oblivious amplitude amplification

Suppose U implements V with amplitude sin θ:

  $U |0\rangle |\psi\rangle = \sin\theta \, |0\rangle V|\psi\rangle + \cos\theta \, |1\rangle |\phi\rangle$

A segment (without its final measurement) implements the ideal evolution with amplitude 1/2.

To perform V with amplitude close to 1, one could try amplitude amplification, but the input state $|\psi\rangle$ is unknown!

Using ideas from [Marriott, Watrous 05], we can show that a $|\psi\rangle$-independent reflection suffices to do effective amplitude amplification. With this oblivious amplitude amplification, we can perform the ideal evolution exactly with only three segments (one backward).
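This final claim, that one round of oblivious amplitude amplification converts a segment with amplitude 1/2 into the ideal evolution exactly, can be checked numerically. The sketch below is a toy model, not the actual segment circuit: a block-constructed unitary U satisfying $U|0\rangle|\psi\rangle = \sin\theta\,|0\rangle V|\psi\rangle + \cos\theta\,|1\rangle|\psi\rangle$ with $\sin\theta = 1/2$ stands in for the segment, and R is the $|\psi\rangle$-independent reflection about the ancilla-$|0\rangle$ subspace.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8                                # dimension of the system register

# A random target unitary V (QR of a random complex matrix), purely for illustration.
V, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))

# Toy "segment": U|0>|psi> = sin(theta)|0>V|psi> + cos(theta)|1>|psi>, with sin(theta) = 1/2.
s, c = 0.5, np.sqrt(3.0) / 2.0
A = np.array([[s, -c], [c, s]])      # ancilla rotation, A|0> = s|0> + c|1>
block = np.kron(np.diag([1.0, 0.0]), V) + np.kron(np.diag([0.0, 1.0]), np.eye(n))
U = block @ np.kron(A, np.eye(n))

# Reflection about the ancilla-|0> subspace, and one round of oblivious amplitude amplification.
R = 2 * np.kron(np.diag([1.0, 0.0]), np.eye(n)) - np.eye(2 * n)
W = -U @ R @ U.conj().T @ R @ U      # three uses of the segment, one of them backward

psi = rng.standard_normal(n) + 1j * rng.standard_normal(n)
psi /= np.linalg.norm(psi)
out = W @ np.kron([1.0, 0.0], psi)
ideal = np.kron([1.0, 0.0], V @ psi)                             # the ideal evolution |0> V|psi>
print("distance from |0>V|psi>:", np.linalg.norm(out - ideal))   # ~1e-15: exact up to rounding
```

With $\sin\theta = 1/2$, a single amplification round rotates the amplitude to $\sin 3\theta = 1$, so the output matches $|0\rangle V|\psi\rangle$ to machine precision for any input $|\psi\rangle$, using three segments with one of them run backward.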
