
Simulating Hamiltonian dynamics on a small quantum computer

Andrew Childs
Department of Combinatorics & Optimization and Institute for Quantum Computing, University of Waterloo

Based in part on joint work with Dominic Berry, Richard Cleve, Robin Kothari, and Rolando Somma.

"... nature isn't classical, dammit, and if you want to make a simulation of nature, you'd better make it quantum mechanical, and by golly it's a wonderful problem, because it doesn't look so easy."
Richard Feynman, Simulating physics with computers (1981)

Outline
• The Hamiltonian simulation problem
• Three approaches to simulation
  - Product formulas
  - Quantum walk
  - Fractional queries
• Comparison of methods for small-scale simulations

Hamiltonian simulation

Problem: Given* a Hamiltonian H, a time t, and an error tolerance ε (say, with respect to trace distance), find a quantum circuit that performs the unitary operation e^{-iHt} (on an unknown quantum state) with error at most ε.

* For an efficient simulation, H should be concisely specified.

More generally, H can be time-dependent.

Applications:
• Simulating physics
• Implementing continuous-time quantum algorithms (quantum walk, adiabatic optimization, linear equations, ...)

Local and sparse Hamiltonians

Local Hamiltonians [Lloyd 96]

H = Σ_{j=1}^m H_j, where each H_j acts on k = O(1) qubits

Sparse Hamiltonians [Aharonov, Ta-Shma 03]

• At most d nonzero entries per row, d = poly(log N) (where H is N × N)
• In any given row, the location of the jth nonzero entry and its value can be computed efficiently (or is given by a black box)

Note: A k-local Hamiltonian with m terms is d-sparse with d = 2^k m

What should we simulate?

Consider a two-dimensional grid of n qubits, with interactions only between nearest neighbors (at most 2n terms).
• Initialize the system in a product state
• Evolve for time t
• Measure a local observable for each spin
This is probably hard to do with a classical computer for more than about 30 qubits.
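As a rough illustration of the size of such an instance, the sketch below (which assumes a transverse-field-Ising-style ZZ coupling purely as an example; the talk does not fix a particular model) enumerates the nearest-neighbor terms on an L × L grid and checks the count against the 2n bound.

```python
import itertools

L = 5                      # grid side length; n = L*L qubits
n = L * L

def site(r, c):
    """Map grid coordinates to a qubit index."""
    return r * L + c

# Nearest-neighbor (2-local) terms: one coupling per grid edge.
# Assumption: an Ising-style ZZ coupling is used purely as an example.
edges = []
for r, c in itertools.product(range(L), range(L)):
    if c + 1 < L:
        edges.append((site(r, c), site(r, c + 1)))   # horizontal edge
    if r + 1 < L:
        edges.append((site(r, c), site(r + 1, c)))   # vertical edge

print(f"n = {n} qubits, {len(edges)} two-qubit terms (bound: 2n = {2*n})")
# For L = 5: 40 two-qubit terms, consistent with the 'at most 2n terms' count.
```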

Related question: How can we apply quantum simulation to solve computational problems in quantum chemistry, materials science, etc.? (See, e.g., the talks by Alán and Matthias.)

Overview of simulation methods

Product formulas
• Decompose the Hamiltonian into a sum of terms and recombine by alternating between them
• With high-order formulas, complexity can be close to linear in t, but with an exponential prefactor

Quantum walk
• Implement a unitary operation whose spectrum is related to the Hamiltonian; use phase estimation to adjust the eigenvalues
• Complexity is linear in t with no large prefactor; also improved dependence on sparsity

Fractional queries
• Perform a compressed simulation of a very accurate product formula
• Strictly improves over product formulas, with no large prefactors
• Dependence on error is poly(log(1/ε)) instead of poly(1/ε)

Product formulas

Simulating a sum of terms

Suppose we want to simulate H = Σ_{i=1}^m H_i. Combine individual simulations with the Lie product formula:

lim_{r→∞} (e^{iAt/r} e^{iBt/r})^r = e^{i(A+B)t}

(e^{iAt/r} e^{iBt/r})^r = e^{i(A+B)t} + O(t²/r)

To ensure error at most ε, take r = O((‖H‖t)²/ε). [Lloyd 96]
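A minimal numerical sketch of this bound, using two random 4 × 4 Hermitian terms A and B chosen only for illustration; the error of the first-order formula should fall off roughly as 1/r.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def random_hermitian(dim):
    """Random Hermitian matrix standing in for a Hamiltonian term."""
    M = rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))
    return (M + M.conj().T) / 2

A, B = random_hermitian(4), random_hermitian(4)
t = 1.0
exact = expm(-1j * (A + B) * t)

for r in [1, 10, 100, 1000]:
    step = expm(-1j * A * t / r) @ expm(-1j * B * t / r)
    approx = np.linalg.matrix_power(step, r)
    err = np.linalg.norm(approx - exact, 2)
    print(f"r = {r:5d}   error = {err:.2e}")
# The error decreases roughly as 1/r, so r = O((‖H‖t)^2/ε) suffices for error ε.
```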

High-order product formulas

To get a better approximation, use higher-order formulas:

(e^{iAt/r} e^{iBt/r})^r = e^{i(A+B)t} + O(t²/r)

(e^{iAt/2r} e^{iBt/r} e^{iAt/2r})^r = e^{i(A+B)t} + O(t³/r²)

... Systematic expansions to arbitrary order are known [Suzuki 92]
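The same kind of check for the symmetric (second-order) formula above, again with random illustrative terms; its error should fall as 1/r² rather than 1/r.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)

def random_hermitian(dim):
    """Random Hermitian matrix standing in for a Hamiltonian term."""
    M = rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))
    return (M + M.conj().T) / 2

A, B = random_hermitian(4), random_hermitian(4)
t = 1.0
exact = expm(-1j * (A + B) * t)

for r in [10, 100, 1000]:
    first = np.linalg.matrix_power(
        expm(-1j * A * t / r) @ expm(-1j * B * t / r), r)
    second = np.linalg.matrix_power(
        expm(-1j * A * t / (2 * r)) @ expm(-1j * B * t / r)
        @ expm(-1j * A * t / (2 * r)), r)
    print(f"r = {r:5d}   first-order error = {np.linalg.norm(first - exact, 2):.2e}"
          f"   second-order error = {np.linalg.norm(second - exact, 2):.2e}")
# First-order error ~ t^2/r; symmetric (Strang/Suzuki) error ~ t^3/r^2.
```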

Using the kth-order expansion, the number of exponentials required for an approximation with error at most ε is at most

5^{2k} m² ‖H‖t (m‖H‖t/ε)^{1/2k}

With k large, this is only slightly superlinear in t. Sublinear simulation is impossible ("no-fast-forwarding theorem"). [Berry, Ahokas, Cleve, Sanders 07]

High-order product formulas

Choose k to minimize 5^{2k} m² ‖H‖t (m‖H‖t/ε)^{1/2k}:

k ≈ (1/2) √(log₅(m‖H‖t/ε)) gives O(m² ‖H‖t · 5^{2√(log₅(m‖H‖t/ε))})

This is subpolynomial (but superlogarithmic) in 1/ε; closer to polynomial (cf. best known classical factoring algorithms).
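A quick numerical evaluation of the exponential count 5^{2k} m² ‖H‖t (m‖H‖t/ε)^{1/2k} for illustrative parameter values (the parameter choices and the neglect of constant factors are assumptions of this sketch, not part of the talk).

```python
import math

m, normH_t, eps = 100, 1000.0, 1e-6   # illustrative parameters only
x = m * normH_t / eps

def exponential_count(k):
    """Bound 5^(2k) m^2 ‖H‖t (m‖H‖t/ε)^(1/2k) for the kth-order formula."""
    return 5 ** (2 * k) * m ** 2 * normH_t * x ** (1 / (2 * k))

k_star = max(1, round(0.5 * math.sqrt(math.log(x, 5))))
for k in range(1, 8):
    tag = "  <- near-optimal" if k == k_star else ""
    print(f"k = {k}: {exponential_count(k):.3e}{tag}")
```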

When does it help to use higher-order formulas?

[Plot: comparison of formula orders as a function of x = m‖H‖t/ε, for x from 10² to 10¹².]

Sparse Hamiltonians and coloring

Strategy [Childs, Cleve, Deotto, Farhi, Gutmann, Spielman 03; Aharonov, Ta-Shma 03]: Color the edges of the graph of H. Then the simulation breaks into small pieces that are easy to handle.

[Figure: the graph of H decomposed as a sum of edge-color classes.]

A sparse graph can be efficiently colored using only local information [Linial 87], so this gives efficient simulations.
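A toy version of the decomposition idea: a simple greedy edge coloring (not the local d²-coloring scheme cited above) that splits a small sparse symmetric matrix into 1-sparse pieces, one per color.

```python
import numpy as np

rng = np.random.default_rng(2)

# A small random sparse symmetric matrix standing in for H (illustrative only).
N = 8
H = np.zeros((N, N))
for _ in range(10):
    j, k = rng.integers(N, size=2)
    if j != k:
        H[j, k] = H[k, j] = rng.standard_normal()

# Greedy edge coloring: give each edge {j,k} the smallest color not yet used
# at either endpoint.  Each color class is a matching, i.e. a 1-sparse piece.
colors_at = [set() for _ in range(N)]
pieces = {}
for j in range(N):
    for k in range(j + 1, N):
        if H[j, k] != 0:
            c = 0
            while c in colors_at[j] or c in colors_at[k]:
                c += 1
            colors_at[j].add(c)
            colors_at[k].add(c)
            piece = pieces.setdefault(c, np.zeros((N, N)))
            piece[j, k] = piece[k, j] = H[j, k]

print(f"{len(pieces)} colors used")
offdiag = H - np.diag(np.diag(H))
assert np.allclose(sum(pieces.values()), offdiag)   # pieces recombine to H (off-diagonal part)
```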

Can produce a d²-coloring locally ⇒ overhead of d⁴

Star decompositions

[Figure: decomposition of a graph into galaxies (collections of star graphs).]

Strategy: Color the edges so that each color forms a "galaxy" (every component is a star graph). Simulate each galaxy by brute force and recombine.

Tradeoff vs. edge coloring:
• Decomposition has fewer terms
• Each term is harder to simulate (2nd neighbors)
Total overhead is d³ [Childs, Kothari 10]

Quantum walk

[Childs, arXiv:0810.0312] [Berry and Childs, arXiv:0910.4157]

Hamiltonian simulation by quantum walk

Another way to simulate an N × N Hamiltonian H is to implement a related discrete-time (Szegedy) quantum walk.

Expand the space from C^N to C^{N+1} ⊗ C^{N+1}. The walk operator is the product of two reflections:

• Swap: S|j,k⟩ = |k,j⟩
• Reflect about span{|ψ₁⟩, ..., |ψ_N⟩}, where
  |ψ_j⟩ := |j⟩ ⊗ ( Σ_{k=1}^N √(H*_{jk}/‖H‖₁) |k⟩ + ν_j |N+1⟩ ),   ‖H‖₁ := max_j Σ_{k=1}^N |H_{jk}|
  i.e., the reflection is 2TT† - 1, where T|j⟩ = |ψ_j⟩
[Childs 10]
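A numerical sanity check of this construction, restricted for simplicity to a real Hamiltonian with nonnegative entries (so the square roots are unambiguous): build T from the states |ψ_j⟩ above and confirm the discriminant relation T†ST = H/‖H‖₁ that the walk relies on.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative Hamiltonian: real symmetric with nonnegative entries.
N = 6
H = rng.random((N, N))
H = (H + H.T) / 2

norm1 = np.abs(H).sum(axis=1).max()          # ‖H‖_1 = max_j Σ_k |H_jk|
dim = N + 1                                  # expanded local dimension

# Isometry T|j> = |ψ_j> = |j> ⊗ ( Σ_k sqrt(H_jk/‖H‖_1)|k> + ν_j |N+1> ).
T = np.zeros((dim * dim, N))
for j in range(N):
    col = np.zeros(dim)
    col[:N] = np.sqrt(H[j] / norm1)
    col[N] = np.sqrt(max(0.0, 1.0 - H[j].sum() / norm1))   # ν_j fixes normalization
    T[:, j] = np.kron(np.eye(dim)[j], col)

# Swap operator S|j,k> = |k,j> on the expanded space.
S = np.zeros((dim * dim, dim * dim))
for j in range(dim):
    for k in range(dim):
        S[k * dim + j, j * dim + k] = 1

assert np.allclose(T.T @ T, np.eye(N))        # T is an isometry
assert np.allclose(T.T @ S @ T, H / norm1)    # discriminant relation T†ST = H/‖H‖_1
print("Isometry and discriminant relation verified.")
```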

Hamiltonian simulation by quantum walk

To simulate H for time t:
1. Apply T to the input state |ψ⟩.
2. Perform phase estimation with U = iS(2TT† - 1), estimating a phase e^{±i arcsin λ} for the component of T|ψ⟩ corresponding to an eigenvector of H with eigenvalue λ.

3. Use the estimate of arcsin λ to estimate λ, and apply the phase e^{-iλt}.
4. Uncompute the phase estimation and T, giving an approximation of e^{-iHt}|ψ⟩.

Theorem: O(‖H‖₁ t/√ε) steps of this walk suffice to simulate H for time t with error at most ε (in trace distance). [Childs 10]

Faster simulation of sparse Hamiltonians

Perform quantum walk steps by brute force (query all d neighbors):
O(d‖H‖₁ t/√ε) ≤ O(d^{3/2}‖H‖ t/√ε) ≤ O(d²‖H‖_max t/√ε)
This is exactly linear in t; it also scales better in d, but worse in ε.

Improved alternative: using only two queries, prepare

|ψ'_j⟩ := |j⟩ ⊗ (1/√d) Σ_{k=1}^N ( √(H*_{jk}/‖H‖_max) |k,0⟩ + √(1 - |H_{jk}|/‖H‖_max) |k,1⟩ ),   ‖H‖_max := max_{j,k} |H_{jk}|

Resulting complexity: O(‖H‖t/√ε + d‖H‖_max t) ≤ O(d‖H‖_max t/√ε)   [Berry, Childs 12]

Fractional queries

[Berry, Childs, Cleve, Kothari, and Somma, arXiv:1312.1414]

Fractional- and continuous-query models

Black box hides a string x ∈ {0,1}^n
Quantum query: Q|i,b⟩ = (-1)^{b x_i} |i,b⟩

[Circuit: discrete queries Q interleaved with unitaries U₀, U₁, ...]
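A small sketch of the query operator and a fractional power of it, for an arbitrarily chosen hidden string x (the circuits below interleave such queries with unitaries): Q is diagonal with entries (-1)^{b x_i}, and Q^α replaces each -1 eigenvalue by e^{iπα}.

```python
import numpy as np

x = np.array([0, 1, 1, 0])            # hidden string (illustrative)
n = len(x)

# Q acts on |i, b>: Q|i,b> = (-1)^(b * x_i) |i,b>.
diag = np.array([(-1.0) ** (b * x[i]) for i in range(n) for b in (0, 1)])
Q = np.diag(diag)

alpha = 0.5
Q_alpha = np.diag(np.where(diag > 0, 1.0, np.exp(1j * np.pi * alpha)))

assert np.allclose(Q @ Q, np.eye(2 * n))                       # Q^2 = I
assert np.allclose(np.linalg.matrix_power(Q_alpha, 2), Q)      # (Q^(1/2))^2 = Q
```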

[Circuit: fractional queries Q^{1/2} interleaved with unitaries U₀, ..., U₃]

[Circuit: fractional queries Q^{1/4} interleaved with unitaries U₀, ..., U₇]

[Circuit: in the continuous limit, the algorithm is a Hamiltonian evolution e^{iπ(H_Q + H_D)t}]

Useful for designing algorithms [Farhi, Goldstone, Gutmann 07]

More powerful than the discrete query model? No: can simulate a t-query fractional-query algorithm with O(t log t / log log t) discrete queries [Cleve, Gottesman, Mosca, Somma, Yonge-Mallo 09]

Simulating fractional queries

Fractional-query gadget:

[Circuit: an ancilla |0⟩ passes through R_α, a control on Q applied to |ψ⟩, then P and R_α, and is measured; on outcome 0 the target becomes Q^α|ψ⟩]

c = cos(πα/2),  s = sin(πα/2)
R_α = (1/√(c+s)) [ √c  √s ; √s  -√c ],   P = [ 1  0 ; 0  i ]
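A numerical check of the gadget as reconstructed above (with Q = Z on a single target qubit, an arbitrary choice for this sketch): after post-selecting the ancilla on |0⟩, the ±1 eigenspaces of Q acquire phases e^{±iπα/2}, i.e. a fractional query up to a global phase, with success amplitude 1/(c+s) ≥ 1/√2.

```python
import numpy as np

alpha = 0.3
c, s = np.cos(np.pi * alpha / 2), np.sin(np.pi * alpha / 2)

R = np.array([[np.sqrt(c), np.sqrt(s)],
              [np.sqrt(s), -np.sqrt(c)]]) / np.sqrt(c + s)
P = np.diag([1.0, 1j])
Q = np.diag([1.0, -1.0])                      # query on the target qubit (Q = Z here)

I2 = np.eye(2)
ctrl_Q = np.block([[I2, np.zeros((2, 2))],    # apply Q only when the ancilla is |1>
                   [np.zeros((2, 2)), Q]])

# Gadget: R_alpha, controlled-Q, P, R_alpha on the ancilla; then project the ancilla onto |0>.
circuit = np.kron(R, I2) @ np.kron(P, I2) @ ctrl_Q @ np.kron(R, I2)
bra0 = np.kron(np.array([[1.0, 0.0]]), I2)    # <0| on the ancilla
ket0 = np.kron(np.array([[1.0], [0.0]]), I2)  # ancilla starts in |0>
effective = bra0 @ circuit @ ket0             # operator applied to the target on success

expected = np.diag([np.exp(1j * np.pi * alpha / 2),
                    np.exp(-1j * np.pi * alpha / 2)]) / (c + s)
assert np.allclose(effective, expected)
print(f"success amplitude 1/(c+s) = {1/(c+s):.3f} (>= 1/sqrt(2))")
```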

"Segment" implementing U_m Q^{α_m} U_{m-1} ··· U_1 Q^{α_1} U_0:

[Circuit: m ancillas, each prepared with R_{α_j} and controlling a query Q, interleaved with U_0, ..., U_m, followed by P and R_{α_j} on each ancilla and measurement]

Behavior of a segment


Truncating the ancillas to Hamming weight k = O(log(1/ε)/log log(1/ε)) introduces error at most ε
By rearranging the circuit, k queries suffice
But this still only succeeds with constant probability
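An illustrative calculation of the truncation level, using the heuristic that the discarded amplitude falls like 1/k! (an assumption made only for this sketch): the resulting k grows like log(1/ε)/log log(1/ε).

```python
import math

def smallest_k(eps):
    """Smallest k with 1/k! <= eps (heuristic stand-in for the truncation error)."""
    k = 1
    while 1.0 / math.factorial(k) > eps:
        k += 1
    return k

for eps in [1e-3, 1e-6, 1e-9, 1e-12]:
    k = smallest_k(eps)
    scale = math.log(1 / eps) / math.log(math.log(1 / eps))
    print(f"eps = {eps:.0e}:  k = {k:2d},  log(1/eps)/loglog(1/eps) = {scale:5.1f}")
# k grows like log(1/eps)/loglog(1/eps), up to a constant factor.
```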

Correcting faults

[Figure: a sequence of segments 1, ..., t; each succeeds with probability 3/4, and a fault (probability 1/4) is undone and the segment retried]

Rube Goldberg, Professor Butts and the Self-Operating Napkin
[Cleve, Gottesman, Mosca, Somma, Yonge-Mallo 09]

Oblivious amplitude amplification

Suppose U implements V with amplitude sin θ:  U|0⟩|ψ⟩ = sin θ |0⟩ V|ψ⟩ + cos θ |1⟩|Φ⟩
(Here U is a segment, without the final measurement, and V is the ideal evolution.)

To perform V with amplitude close to 1: use amplitude amplification?

But the input state is unknown!

Using ideas from [Marriott, Watrous 05], we can show that a |ψ⟩-independent reflection suffices to do effective amplitude amplification.

With this oblivious amplitude amplification, we can perform the ideal evolution exactly with only three segments (one backward).
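A minimal numerical check of the three-segment identity, with a toy U built from a random single-qubit V and success amplitude sin θ = 1/2 (this particular construction of U is an assumption of the sketch, not the algorithm's).

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)
M = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
V = expm(1j * (M + M.conj().T))                # random unitary standing in for the ideal evolution

theta = np.pi / 6                              # sin(theta) = 1/2, so sin(3*theta) = 1
s, c = np.sin(theta), np.cos(theta)
B = np.array([[s, c], [c, -s]])                # ancilla rotation setting the amplitude
I2 = np.eye(2)

U = np.kron(B, I2) @ np.kron(I2, V)            # U|0>|psi> = s|0>V|psi> + c|1>V|psi>
R = np.kron(np.diag([1.0, -1.0]), I2)          # reflection about the ancilla-|0> subspace
W = -U @ R @ U.conj().T @ R @ U                # three segments, one of them backward

psi = rng.standard_normal(2) + 1j * rng.standard_normal(2)
psi /= np.linalg.norm(psi)
start = np.kron(np.array([1.0, 0.0]), psi)

assert np.allclose(W @ start, np.kron(np.array([1.0, 0.0]), V @ psi))
print("Oblivious amplitude amplification: ideal evolution performed exactly.")
```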

Hamiltonian simulation using fractional queries

We reduce Hamiltonian simulation to fractional-query simulation.

Suppose H = H₁ + H₂ where H₁, H₂ have eigenvalues 0 and π.
Write e^{i(H₁+H₂)t} ≈ (e^{iH₁t/r} e^{iH₂t/r})^r for very large r (increasing r does not affect the query complexity, and only weakly affects the gate complexity).
This is a fractional-query algorithm with oracles e^{iH₁} and e^{iH₂}.

Package them as a single oracle Q = |1⟩⟨1| ⊗ e^{iH₁} + |2⟩⟨2| ⊗ e^{iH₂}.
This may not be diagonal in the standard basis, but the fractional-query simulation doesn't require that.
To give a complete simulation, we decompose the Hamiltonian into a sum of terms, each with eigenvalues 0 and π (up to an overall shift and rescaling). (Start by coloring the edges to make the Hamiltonian 1-sparse and then further refine the decomposition.)

Query and gate complexity

Query complexity of this approach: O(τ log(τ/ε)/log log(τ/ε)), where τ := d² ‖H‖_max t
Gate complexity is not much larger: O(τ [log(τ/ε)/log log(τ/ε)] (log(τ/ε) + n)), where H acts on n qubits
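Plugging in illustrative numbers, with the constants hidden in the O(·) ignored (an approximation made only for this estimate):

```python
import math

d, Hmax, t, eps = 4, 1.0, 100.0, 1e-6      # illustrative parameters only
tau = d ** 2 * Hmax * t                    # tau = d^2 ‖H‖_max t

queries = tau * math.log(tau / eps) / math.log(math.log(tau / eps))
n = 30                                     # number of qubits, for the gate-count estimate
gates = queries * (math.log(tau / eps) + n)

print(f"tau = {tau:.0f}, ~{queries:.2e} queries, ~{gates:.2e} gates (constants ignored)")
```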

Local Hamiltonians

Recall: A k-local Hamiltonian with m terms is d-sparse with d = 2^k m. We can reduce the overhead below d² given a nice decomposition.
Using the decomposition into m terms, each k-local (and hence 2^k-sparse), we can replace τ by τ̃ := 2^k m ‖H‖_max t.

Lower bounds

No-fast-forwarding theorem [BACS 07]: Ω(t)
Main idea:
• Query complexity of computing the parity of n bits is Ω(n).
• There is a Hamiltonian that can compute parity by running for time O(n).

New lower bound: Ω(log(1/ε)/log log(1/ε))
Main idea:
• Query complexity of parity is Ω(n) even for unbounded error.
• The same Hamiltonian as above computes parity with unbounded error by running for any positive time. Running for constant time gives the parity with probability Θ(1/n!).

Comparison of sparse Hamiltonian simulations

Product formulas
• Query complexity: d³‖H‖t (d‖H‖t/ε)^{o(1)}
• Handles time-dependent Hamiltonians ✓

Quantum walk
• Query complexity: d‖H‖_max t/√ε
• Best known scaling with evolution time t and sparsity d ✓

Fractional queries
• Query complexity: O(τ log(τ/ε)/log log(τ/ε)), where τ := d²‖H‖_max t
• Best known scaling with error ε ✓
• Handles time-dependent Hamiltonians ✓

How should we simulate small systems?

Consider a sum of m 2-local terms of constant norm acting on n qubits.

Product formulas (should use next-lowest order, k = 1)
• Gate complexity O(m^{5/2} t^{3/2}/√ε); with m = O(n): O(n^{5/2} t^{3/2}/√ε)

Quantum walk
• Gate complexity O(mnt/√ε); with m = O(n): Õ(n² t/√ε)

Fractional queries
• Gate complexity O(mt [log(mt/ε)/log log(mt/ε)] (log(mt/ε) + n)); with m = O(n): Õ(n² t)

Open questions
• Improvements to methods; (optimal?) tradeoffs between evolution time, error, and locality/sparsity
• Careful estimates of gate complexity for small systems: what approach is best for particular simulations of interest?
• Investigate parallel algorithms
• Improved simulation of specific kinds of Hamiltonians
• Better understanding of applications to problems in quantum chemistry, etc.
• Real implementations!