
Exponentially More Precise Algorithms for Quantum Simulation of Quantum Chemistry

The Harvard community has made this article openly available.

Citable link: http://nrs.harvard.edu/urn-3:HUL.InstRepos:38811452

Terms of Use: This article was downloaded from Harvard University's DASH repository, and is made available under the terms and conditions applicable to Other Posted Material, as set forth at http://nrs.harvard.edu/urn-3:HUL.InstRepos:dash.current.terms-of-use#LAA

Abstract

In quantum chemistry, the "electronic structure problem" refers to the process of numerically solving Schrodinger's equation for a given molecular system. These calculations rank among the most CPU-intensive computations performed today, yet they are also crucial to the design of new drugs and materials. The idea behind quantum simulation of quantum chemistry is that a quantum computer could perform such calculations in a manner that is exponentially more efficient than what a classical computer could accomplish. Indeed, quantum simulation is quite possibly the application most naturally suited to a quantum computer. In this work we introduce novel algorithms for the simulation of quantum chemistry that are exponentially more precise than previous algorithms in the literature. Previous algorithms are based on the Trotter-Suzuki decomposition and scale like the inverse of the desired precision, while our algorithms represent the first practical application of the recently developed truncated Taylor series approach, and they achieve scaling that is logarithmic in the inverse of the precision. We explore algorithms for both second-quantized and first-quantized encodings of the chemistry Hamiltonian, where the second-quantized encoding requires O(N) qubits for an N-orbital system, and the highly compressed first-quantized encoding requires O(η) qubits, where η ≪ N is the number of electrons in the system. We first introduce a second-quantized algorithm that uses pre-computed molecular integrals, resulting in a gate count that scales like Õ(N⁸t). Next we introduce algorithms that compute molecular integrals on the fly by carefully discretizing these integrals.
Indeed, figuring out how to evaluate the integrals in an efficient manner is one of the main challenges of applying the truncated Taylor series approach to the chemistry problem: the complexity of the general algorithm is traditionally formulated in terms of the number of queries made to an oracle whose existence is simply assumed, but in this case, we actually construct the oracle for our specific problem. The evaluation of the oracle itself requires evaluating molecular integrals, which adds additional complexity to an algorithm. We find that the gate count of the second-quantized on-the-fly algorithm scales like Õ(N⁵t). We also find that the first-quantized on-the-fly algorithm, which uses the configuration interaction (CI) matrix representation of the Hamiltonian, requires at most Õ(η²N³t) gates. In general, the discovery of exponentially more precise algorithms will allow for the simulation of larger molecular systems than was previously possible under Trotter-based methods. In addition, the results here can be generalized to the simulation of other fermionic systems beyond the electronic structure problem.

Acknowledgements

First off, I would like to thank Professor Aspuru-Guzik for his generosity and support over the time I've been working in his lab; I've learned so much there about what it means to do interdisciplinary research in chemistry, computer science, and physics. I also owe a lot to Ryan Babbush, who has very patiently dealt with my many questions and explained things to me, and from whom I've learned a lot about chemistry. I'd like to thank Ian Kivlichan for many helpful conversations, and I'd like to thank all of our other collaborators on this project, Dominic Berry and Peter Love, as well. I'd also like to thank Jhonathan Romero Fontalvo for all his help and patience with running Psi4 and other quantum chemistry packages; Thomas Markovich and Jarrod McClean for help with Psi4 as well; and Rafa Bombarelli, Jorge Aguilera, and Tim Hirzel for selecting and providing molecular input files to run these quantum chemistry packages on.

Contents

Abstract

Acknowledgements

1 Introduction
  1.1 Motivation
  1.2 Organization
  1.3 Citations to Previous Work

2 Quantum Computation
  2.1 Asymptotic Notation
  2.2 Quantum Mechanics
    2.2.1 States and Wave Functions
    2.2.2 Time Evolution
  2.3 Quantum Computing
    2.3.1 Quantum Circuit Model
      2.3.1.1 A Grab Bag of Useful Gates
    2.3.2 Adiabatic Model
  2.4 Summary

3 Quantum Simulation of Quantum Chemistry
  3.1 The Chemistry Hamiltonian
    3.1.1 First-Quantized vs. Second-Quantized
  3.2 The Electronic Structure Problem
  3.3 The Canonical Algorithm
    3.3.1 State Preparation
    3.3.2 Hamiltonian Evolution
    3.3.3 Trotterization
      3.3.3.1 Simulation of Sparse Hamiltonians
    3.3.4 Phase Estimation Algorithm
  3.4 Summary

4 Truncated Taylor Series Algorithm
  4.1 Overview of Algorithm
  4.2 Integral Hamiltonians


5 The Integrand Oracle
  5.1 Basis Function Circuit Construction
  5.2 Integrand Circuit Complexity
  5.3 Integrand Circuit Construction
  5.4 Summary

6 Second-quantized Algorithms
  6.1 Hamiltonian Oracle
  6.2 Simulating Hamiltonian Evolution with Pre-Computed Integrals: the Database Algorithm
  6.3 Simulating Evolution Under Integral Hamiltonians: the On-the-Fly Algorithm

7 First-Quantized Algorithm
  7.1 CI Matrix Encoding
  7.2 CI Matrix Decomposition
    7.2.1 Decomposition into 1-sparse matrices
    7.2.2 Decomposition into h_ij and h_ijkℓ
    7.2.3 Discretizing the Integrals
    7.2.4 Decomposition into Unitary Matrices
  7.3 CI Matrix Hamiltonian Oracle
  7.4 Simulating Hamiltonian Evolution: the On-the-Fly Algorithm

8 Algorithms for Real Molecules
  8.1 Second-Quantized Database Algorithm
  8.2 Future Directions

Bibliography

Chapter 1

Introduction

1.1 Motivation

This thesis explores the interplay between quantum chemistry and quantum computa- tion, and how advances in the latter can inform the way we approach the former.

In quantum chemistry, an important problem is solving Schrodinger's equation for molecular systems to obtain the energy eigenvalues of the system, as this allows us to understand, from first principles, the way these systems function and evolve. Solving these equations is quite difficult for large, multi-electron systems, and many existing methods rely on approximations. Being able to carry out these computations allows us to study properties of chemical reactions like the ground-state, excited-state, and transition-state energies of reacting molecules. In particular, being able to understand electronic structure can help with the design of new drugs and materials.

Meanwhile, the term "quantum computation" refers to using properties of quantum mechanical systems to develop a computing paradigm that is altogether conceptually different from classical computation. The hope is that quantum effects such as the superposition of quantum bits (qubits) and entanglement between qubits can allow for algorithms that are more efficient than classically possible. The idea of a quantum computer originated with Richard Feynman in 1982 [1]. More specifically, Feynman's first, hypothetical quantum computer was a quantum simulator, as this would have been the most natural application for a quantum computer. Quantum simulation is defined as the use of a controllable quantum system—in this case, the quantum computer—to investigate the behavior of another, less accessible quantum system—in this case, the model chemistry. In our specific case, this would allow us to observe how a given wave function evolves under the chemistry Hamiltonian. Feynman's 1982 paper showed that a

classical Turing machine could not simulate quantum phenomena without experiencing an exponential slowdown, while a hypothetical quantum computer would not face such a problem.

This leads us, next, to the question of quantum simulation, specifically, for quantum chemistry. Feynman left open the question of being able to simulate fermionic systems, where fermions are the antisymmetric, half-integer-spin particles, like electrons, that make up the matter of everyday life. This remained an open question until 1996, when Lloyd and Abrams demonstrated that it could in fact be done [2, 3]. The 1996 paper also introduced the Trotter-Suzuki decomposition for evolving a given state under a given Hamiltonian. Meanwhile, the first algorithm for the simulation of quantum chemistry, specifically, was developed in 2005 by Aspuru-Guzik et al. [4]. It relies on being able to prepare a suitable state and to evolve this state under the chemistry Hamiltonian, and then uses phase estimation to determine the ground-state energy of the state, as the energy eigenvalue is encoded in the phase acquired by the time-evolved state. This phase estimation algorithm, together with Trotterization for the Hamiltonian evolution itself, has become the standard approach to quantum simulation of quantum chemistry.

Recent experimental advances have made the possibility of small, fault-tolerant quantum computers seem viable in the near future [5–8]. This has in turn made quantum simulation of quantum chemistry even more appealing, both because of its industrial importance and because of its natural applicability to a quantum computer. One of the obstacles to simulating very large molecules in the canonical approach is the complexity of Trotterization for the Hamiltonian evolution step. The tightest known analytical bound on the number of quantum gates required for Trotterization of the chemistry Hamiltonian is Õ(N⁸t/ε^{o(1)}) [9], where ε is the desired precision and N is the number of spin orbitals in the molecule, although numerical simulations indicate that the bound may be closer to Õ(N⁶√(t³/ε)) gates for real molecules [10]. Meanwhile, a recent 2015 paper proposes carrying out the evolution step for a general Hamiltonian that can be written as a sum of weighted unitary matrices by Taylor expanding the propagator, truncating the Taylor series, and performing "oblivious amplitude amplification" to deterministically project out the desired, evolved state [11]. This algorithm, which we refer to as the "truncated Taylor series approach," requires exponentially fewer gates than Trotterization, but it is described for general sparse Hamiltonians, and its complexity is quantified in terms of the number of queries made to a black-box oracle that is assumed to return elements of the Hamiltonian. Actually constructing this oracle for specific Hamiltonians is a highly non-trivial problem and can add additional complexity to an algorithm.

The work detailed in this thesis applies the truncated Taylor series approach to the chemistry Hamiltonian. This entails constructing an oracle for the chemistry Hamiltonian, which in turn depends on being able to evaluate the molecular integrals that appear in the Hamiltonian. We provide constructions for both the first- and second-quantized encodings of the chemistry Hamiltonian, where the second-quantized representation is expressed in terms of operators that raise and lower electrons between different orbitals, and the more compressed first-quantized representation expresses molecular wavefunctions in a basis of Slater determinants. In particular, if we let N be the number of spin orbitals in the molecule, and if we let η ≪ N be the number of electrons in the molecule, the number of qubits required for the second-quantized representation scales like O(N), while the number of qubits required for the first-quantized representation scales like O(η). We first introduce a second-quantized algorithm that uses pre-computed molecular integrals, as it is the algorithm that is easiest to implement experimentally, resulting in a gate count that scales like Õ(N⁸t). Next we introduce a second-quantized algorithm that evaluates molecular integrals on the fly and scales like Õ(N⁵t), and a first-quantized on-the-fly algorithm that scales like Õ(η²N³t). Finally we make a foray into investigating the scaling of these algorithms for real molecules, noting that the second-quantized database algorithm may potentially scale like Õ(N⁶t) for real molecules. The hope is that the discovery of exponentially more precise algorithms will allow for the simulation of larger molecular systems than was previously possible.
We also note that the results here can be generalized to the simulation of a wide class of fermionic systems that are defined by simplified versions of the chemistry Hamiltonian, and we note as well that the bounds are likely to be even tighter for real molecules.

1.2 Organization

This thesis is organized as follows:

Chapter 2 provides a brief review of quantum mechanics and quantum computing, and of the quantum computing notation that will appear throughout this thesis.

Chapter 3 introduces the chemistry Hamiltonian and the chemistry problem. This chapter will also consider quantum simulation, Trotterization, and phase estimation, and the role that each plays in the canonical quantum simulation algorithm.

Chapter 4 summarizes the truncated Taylor series approach from [11] in language that will be re-used in subsequent chapters, and it provides motivation for the on-the-fly algorithms, which evaluate the molecular integrals that appear as elements in the chemistry Hamiltonian on the fly. Note that the general Taylor series algorithm assumes the existence of operators that we will actually need to construct for the specific case of the chemistry Hamiltonian in subsequent chapters.

Chapter 5 describes the construction of the integrand oracle for molecular integrals. This oracle plays a crucial role in the on-the-fly algorithms for both the first-quantized and second-quantized representations, as it will be used to construct the operators that we need in order to apply the general Taylor series approach to the specific case of the chemistry Hamiltonian with on-the-fly integration.

Chapter 6 introduces the second-quantized representation of the Hamiltonian and provides a mapping between the qubits of a quantum computer and the fermionic system represented by this Hamiltonian. It also introduces and bounds both the database algorithm and the on-the-fly algorithm for this encoding of the Hamiltonian.

Chapter 7 does the equivalent for the first-quantized encoding of the Hamiltonian by first describing the configuration interaction (CI) matrix representation of the Hamiltonian, and then describing how this Hamiltonian can be decomposed into sparse matrices, which can then be used to construct a CI matrix oracle that returns a given element of a given sparse matrix. As with the second-quantized representation, we will use this oracle to construct operators that allow us to apply the general Taylor series algorithm to this encoding of the Hamiltonian. We then describe and bound the on-the-fly algorithm.

Chapter 8 takes a foray into applying these algorithms to real molecules. As with Trotterization, we expect the true scaling for real molecules to be better than the analytic bounds we derived in the earlier chapters. Here we investigate the second-quantized database algorithm and show that it may exhibit better scaling in terms of N by a factor of N².

1.3 Citations to Previous Work

The work detailed in this thesis is largely based on the following two manuscripts:

R. Babbush, D. W. Berry, I. D. Kivlichan, A. Y. Wei, P. J. Love, A. Aspuru-Guzik. Exponentially more precise quantum simulation of fermions I: Quantum chemistry in second quantization. New Journal of Physics. Volume 18, Number 2: 023023. 2016. arXiv:1506.01020.

R. Babbush, D. W. Berry, I. D. Kivlichan, A. Y. Wei, P. J. Love, A. Aspuru-Guzik. Exponentially more precise quantum simulation of fermions II: Quantum chemistry in the CI matrix representation. 2015. arXiv:1506.01029.

Chapter 2

Quantum Computation

This chapter introduces the language that will be employed in the rest of this thesis, namely that of quantum mechanics, of computer science, and of quantum computing. First we will briefly introduce asymptotic notation, which will give us a way to com- pare resource consumption among algorithms. Then we briefly introduce the postulates of quantum mechanics, along with terminology like “quantum state,” “wave function,” “Hamiltonian,” and “time evolution,” some of which will be important for understand- ing quantum computing, and some of which will be important for understanding the chemistry problem. Finally we introduce quantum computing, primarily in the quan- tum circuit model, and we introduce gate notation along with several important basic gates in the quantum circuit model. Much of the information on quantum computing that is presented in this chapter can be found in a textbook like [12], and a more detailed presentation of quantum mechanics can be found in a textbook like [13] or [14].

2.1 Asymptotic Notation

Certain branches of computer science focus on designing algorithms to solve computational problems. To compare the efficiency of algorithms, we generally consider resources such as time, space, and energy. For quantum algorithms specifically, we often consider resources like running time, qubit count, and gate count when we work within the quantum circuit model of quantum computing, and we would like to be able to quantify these resources in a coarse-grained manner. This is exactly the function that asymptotic notation serves.

In computer science, asymptotic notation, also sometimes referred to as "Big O notation," allows us to summarize the essentials of the growth of a function. We can use big O notation to set upper bounds on the way a function behaves.

Let f(x) and g(x) be two functions. Then f(x) is O(g(x)) if there exist a constant c and an x₀ in the domain of f(x) such that f(x) ≤ c·g(x) for all x > x₀. We say f(x) is Õ(g(x)) if this relationship holds up to polylogarithmic factors.

We can also lower bound resources in a similar way. We say that f(x) is Ω(g(x)) if there exist a constant c and an x₀ in the domain of f(x) such that f(x) ≥ c·g(x) for all x > x₀.

Finally, we say that f(x) is Θ(g(x)) if it is both O(g(x)) and Ω(g(x)).
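To make the definitions concrete, here is a small numerical sketch (added for illustration; the function and constants are arbitrary choices, not from the thesis): f(x) = 3x² + 10x is O(x²), witnessed by c = 4 and x₀ = 10.

```python
# Numerical sanity check of the big-O definition:
# f(x) = 3x^2 + 10x is O(x^2), with witnesses c = 4 and x0 = 10.
def f(x):
    return 3 * x**2 + 10 * x

def g(x):
    return x**2

c, x0 = 4, 10

# f(x) <= c * g(x) holds for every x beyond x0 ...
assert all(f(x) <= c * g(x) for x in range(x0 + 1, 1000))

# ... but not for small x, which is why the definition only demands
# the inequality past some threshold x0.
assert f(5) > c * g(5)
```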

2.2 Quantum Mechanics

Here we provide a brief overview of the basic postulates and formalism of quantum mechanics, but instead of talking about quantum mechanics in full generality, we will introduce just what is needed to understand the quantum computing notation in the section that follows this one. Note that in quantum computing we always work in a vector space with two basis elements, which makes this system much simpler than general quantum mechanical systems.

2.2.1 States and Wave Functions

In quantum mechanics we often talk about a given state |ψ⟩ that lives in a complex vector space H equipped with an inner product ⟨·|·⟩ : H × H → ℂ; such a vector space is also known as a Hilbert space. In quantum computing we are concerned, in particular, with qubits (quantum bits), which are defined over a two-dimensional vector space. Letting |0⟩ and |1⟩ denote the two orthonormal basis vectors for this two-dimensional space, we can write our state as

$$|\psi\rangle = a|0\rangle + b|1\rangle \tag{2.1}$$

where a and b are complex numbers, and the condition that ⟨ψ|ψ⟩ = 1, or equivalently |a|² + |b|² = 1, means that |ψ⟩ is normalized. Note that a classical bit must be either 0 or 1, but a qubit can exist in such a superposition of two states.

We note that a = ⟨0|ψ⟩ and b = ⟨1|ψ⟩, and we call a and b amplitudes since we can associate each of a and b with probability amplitudes. In particular, we interpret |a|² and |b|² as probabilities that sum to 1, where |a|² is the probability that we obtain state |0⟩ when we perform a measurement on the qubit, and |b|² is the probability that we obtain state |1⟩. Thus we see that computing with qubits is in some sense inherently probabilistic.
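As an added numerical sketch (the state is an arbitrary illustrative choice), we can check normalization and the Born-rule probabilities with NumPy:

```python
import numpy as np

# A normalized single-qubit state |psi> = a|0> + b|1>.
a, b = 3 / 5, 4j / 5
psi = np.array([a, b])

# Normalization: <psi|psi> = |a|^2 + |b|^2 = 1.
assert np.isclose(np.vdot(psi, psi).real, 1.0)

# Born rule: measuring yields |0> with probability |a|^2 and |1> with |b|^2.
p0, p1 = abs(a) ** 2, abs(b) ** 2
assert np.isclose(p0 + p1, 1.0)
```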

Although qubits live in a vector space that is two-dimensional, we note that the real world is continuous. When we do quantum chemistry, for example, we will work with states |ψ⟩ that live in infinite-dimensional vector spaces. One basis we use for this infinite-dimensional vector space is the position basis {|x⟩}, where we label each basis element by a position. Whenever we use the term "wave function," we are referring to the amplitude ⟨x|ψ⟩, which we can also write as ψ(x), a function of x.

As a final note, we can also build a new Hilbert space out of existing Hilbert spaces by taking the tensor product of the two spaces. Thus, if we had |ψ₁⟩ ∈ H₁ and |ψ₂⟩ ∈ H₂, taking the tensor product would give us the new state |ψ⟩ = |ψ₁⟩ ⊗ |ψ₂⟩ ∈ H₁ ⊗ H₂. In quantum computing, this will allow us to go from vector spaces of dimension 2 to vector spaces of dimension 2ⁿ, where n is a positive integer greater than 1. For example, if we had H₁ spanned by {|0⟩, |1⟩} and H₂ spanned by {|0⟩, |1⟩}, then H₁ ⊗ H₂ would be spanned by {|00⟩, |01⟩, |10⟩, |11⟩}.

2.2.2 Time Evolution

We can apply a linear operator to our state |ψ⟩, resulting in a new state |ψ′⟩. This operator, U, must be unitary, meaning that we must have UU† = 1, in order for the operation to respect conservation of probability. That is, all operators that map a state |ψ⟩ to a state |ψ′⟩ must take the following form:

$$|\psi'\rangle = U|\psi\rangle \tag{2.2}$$

In quantum computing, some examples of useful unitary operators include the Pauli matrices:

$$\sigma^x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad \sigma^y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \qquad \sigma^z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \tag{2.3}$$

As we will see later, the Pauli matrices generate single-qubit rotations. Additionally, we note that σx flips a bit, and σz gives the phase −1 to the state |1⟩ while leaving the state |0⟩ intact.
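These properties are easy to verify numerically; the following sketch (added for illustration) checks unitarity, the bit flip, and the phase flip:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]])

# Each Pauli matrix is unitary: U U^dagger = I.
for U in (sx, sy, sz):
    assert np.allclose(U @ U.conj().T, np.eye(2))

# sigma_x flips a bit; sigma_z phases |1> by -1 and leaves |0> alone.
ket0, ket1 = np.array([1, 0]), np.array([0, 1])
assert np.allclose(sx @ ket0, ket1)
assert np.allclose(sz @ ket1, -ket1)
assert np.allclose(sz @ ket0, ket0)
```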

An additional, important unitary operation is the Hadamard transform, which is often used as an initial step in quantum algorithms because, applied to each of n qubits, it maps |0…0⟩ onto an equal-weight superposition of 2ⁿ orthogonal states. In two dimensions it looks like the following:

$$H = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} \tag{2.4}$$
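The following added sketch verifies this behavior numerically, both for one qubit and for three qubits:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# Acting on |0>, the Hadamard produces the equal superposition (|0> + |1>)/sqrt(2).
out = H @ np.array([1, 0])
assert np.allclose(out, np.array([1, 1]) / np.sqrt(2))

# Applied to each of 3 qubits of |000>, H (x) H (x) H yields an equal-weight
# superposition over all 2^3 = 8 basis states.
H3 = np.kron(np.kron(H, H), H)
state = H3 @ np.eye(8)[0]
assert np.allclose(state, np.full(8, 1 / np.sqrt(8)))
```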

We are in particular interested in an operator that evolves a state over time, mainly because the quantum simulation problem is concerned with finding an efficient way to evolve a state under the chemistry Hamiltonian over time. Such an operation, which we call time evolution, is described by the Schrodinger equation:

$$i\hbar \frac{d|\psi\rangle}{dt} = H|\psi\rangle \tag{2.5}$$

where ℏ is the reduced Planck constant and H is known as the Hamiltonian of the system. The Hamiltonian is a linear operator whose role is precisely to tell us how the system evolves over time, as governed by the Schrodinger equation. Note, in particular, that the Hamiltonian must always be Hermitian (that is, we must always have H = H†) for time evolution to be unitary. Because the Hamiltonian is Hermitian, by the spectral theorem it has a spectral decomposition into eigenstates; these eigenstates are known as the "energy eigenstates" of the system, and the corresponding eigenvalue of a given eigenstate is known as the "energy" of that eigenstate. In particular, we call the lowest eigenvalue the "ground-state energy," and it corresponds to a state known as the "ground state."

Letting t₁ be the initial time and t₂ the final time, the solution to Schrodinger's equation is given by

$$|\psi(t_2)\rangle = \exp\left[\frac{-iH(t_2 - t_1)}{\hbar}\right] |\psi(t_1)\rangle \tag{2.6}$$

as can be verified by directly plugging this solution into the Schrodinger equation. Thus we see that time evolution under a Hamiltonian must be effected by the following unitary operator:

$$U(t_2, t_1) \equiv \exp\left[\frac{-iH(t_2 - t_1)}{\hbar}\right] \tag{2.7}$$

In general (in units where ℏ = 1) the operator e^{−iHt} is also known as the "propagator."
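As an added sketch (with a toy Hamiltonian chosen arbitrarily, in units where ℏ = 1), we can build the propagator from the spectral decomposition guaranteed by the spectral theorem and check that it is unitary and preserves norms:

```python
import numpy as np

# A toy Hermitian Hamiltonian.
H = np.array([[1.0, 0.5], [0.5, -1.0]])
assert np.allclose(H, H.conj().T)  # Hermitian, as required

# Propagator U = exp(-iHt) built from the spectral decomposition
# H = V diag(E) V^dagger, so U = V diag(exp(-i E t)) V^dagger.
t = 0.7
energies, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * energies * t)) @ V.conj().T

# Time evolution is unitary, so total probability is conserved.
assert np.allclose(U @ U.conj().T, np.eye(2))
psi = np.array([0.6, 0.8])
assert np.isclose(np.linalg.norm(U @ psi), 1.0)
```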

2.3 Quantum Computing

Thus we can already see that a quantum algorithm will differ from a classical algorithm because of the fundamental differences between a qubit and a classical bit. For example, the fact that a qubit is a superposition of states allows quantum operators to act simultaneously on the different states in the superposition, and quantum algorithms take advantage of this fact to achieve speed-up. In addition, the Hilbert space can potentially be quite large, of dimension 2^N, because we have the ability to construct new spaces by taking the tensor product of existing states. For our space of dimension 2^N, we can represent each basis vector in the form |d_N d_{N−1} … d_2 d_1⟩ where d_i ∈ {0, 1}. Such a Hilbert space contains certain entangled states, like the Bell state (|00⟩ + |11⟩)/√2, which cannot be written as the tensor product of two separate single-qubit states. This gives us a quantum effect known as entanglement, a concept that simply does not exist in classical computation.
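As an added sketch, one way to see that the Bell state is entangled is to reshape its amplitudes into a 2×2 matrix: a product state always yields a rank-1 matrix, while the Bell state yields rank 2.

```python
import numpy as np

# The Bell state (|00> + |11>)/sqrt(2), reshaped into a 2x2 amplitude matrix
# whose (i, j) entry is the amplitude of |ij>.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
M = bell.reshape(2, 2)

# A product state |a> (x) |b> has a rank-1 amplitude matrix; the Bell state's
# matrix has two nonzero singular (Schmidt) values, so it is entangled.
schmidt = np.linalg.svd(M, compute_uv=False)
assert np.count_nonzero(schmidt > 1e-12) == 2
```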

However, we also noted that quantum algorithms are inherently probabilistic, and fre- quently to obtain our final state we will need to measure the qubit, which will return our desired state with some given probability.

2.3.1 Quantum Circuit Model

The work described in this thesis takes place within the quantum circuit model of quan- tum computation, as opposed to other models like the adiabatic model. In the quantum circuit model we denote operators by gates acting on qubit registers, and we consider the complexity of algorithms in terms of the gate count that the algorithm requires, or in terms of the number of qubit registers that we need. This is in direct analogy to classical circuits, where the NAND gate is universal and can be used to construct any logical circuit. Similarly, in the quantum circuit model, we can represent unitary operators as gates, and there are certain gate sets that are universal.

2.3.1.1 A Grab Bag of Useful Gates

As an example of operators that can be denoted by gates, we recall the Pauli matrices from the previous section (Eq. (2.3)). We also recall how we obtained the time evolution operator from a Hermitian Hamiltonian by exponentiating the Hamiltonian; similarly, we can obtain rotation operators by exponentiating the Pauli matrices. A rotation operator rotates a state |0⟩ onto a linear combination of |0⟩ and |1⟩ specified by some angle θ:

$$R_x(\theta) \equiv \exp\left[-i\theta\sigma^x/2\right] = \begin{pmatrix} \cos(\theta/2) & -i\sin(\theta/2) \\ -i\sin(\theta/2) & \cos(\theta/2) \end{pmatrix} \tag{2.8}$$

$$R_y(\theta) \equiv \exp\left[-i\theta\sigma^y/2\right] = \begin{pmatrix} \cos(\theta/2) & -\sin(\theta/2) \\ \sin(\theta/2) & \cos(\theta/2) \end{pmatrix} \tag{2.9}$$

$$R_z(\theta) \equiv \exp\left[-i\theta\sigma^z/2\right] = \begin{pmatrix} e^{-i\theta/2} & 0 \\ 0 & e^{i\theta/2} \end{pmatrix} \tag{2.10}$$
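In the spirit of the truncated Taylor series that appears later in this thesis, the following added sketch (with an arbitrary angle) approximates exp(−iθσx/2) by a truncated power series and checks it against the closed form of Eq. (2.8):

```python
import math
import numpy as np

sx = np.array([[0, 1], [1, 0]])
theta = 1.2

# Truncated Taylor series for the matrix exponential exp(A),
# A = -i (theta/2) sigma_x: sum_{k < 20} A^k / k!.
A = -1j * (theta / 2) * sx
Rx = sum(np.linalg.matrix_power(A, k) / math.factorial(k) for k in range(20))

# Compare against the closed form of Eq. (2.8).
closed_form = np.array([
    [np.cos(theta / 2), -1j * np.sin(theta / 2)],
    [-1j * np.sin(theta / 2), np.cos(theta / 2)],
])
assert np.allclose(Rx, closed_form)
```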

Then we can depict a rotation gate acting on |ψ⟩ to give us R_y(θ)|ψ⟩ in the following manner:

|ψ⟩ ──[R_y(θ)]── R_y(θ)|ψ⟩

Similarly, we can depict a Hadamard gate, which we introduced in the last section in Eq. (2.4), as a wire passing through a box labeled H: ──[H]──

Sometimes it is enough to have our final state on the right side of the circuit diagram, but other times we want to first make a measurement on our qubit to obtain the desired final state with some probability. In that case, we depict the act of measurement using a meter symbol on the qubit wire.

Next we introduce controlled operations, which either apply or don’t apply a given operation to a target qubit depending on whether or not a control qubit is set: if the control qubit is 1, then U is applied to the target qubit, and if not, the target qubit is left alone. The controlled operation U is represented by the following circuit:

|c⟩ ──●── |c⟩
|t⟩ ──[U]── U^c |t⟩
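A controlled-U is a block-diagonal matrix: the identity on the control-|0⟩ block and U on the control-|1⟩ block. The following added sketch builds controlled-σx, which is the CNOT gate, and checks both branches:

```python
import numpy as np

# Controlled-U as a block matrix: identity when the control is |0>,
# U when the control is |1>. With U = sigma_x this is the CNOT gate.
sx = np.array([[0, 1], [1, 0]])
CU = np.block([
    [np.eye(2), np.zeros((2, 2))],
    [np.zeros((2, 2)), sx],
])

ket = lambda bits: np.eye(4)[int(bits, 2)]  # |control target>
assert np.allclose(CU @ ket("00"), ket("00"))  # control 0: target untouched
assert np.allclose(CU @ ket("10"), ket("11"))  # control 1: target flipped
```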

Finally, another operator frequently used in quantum algorithms is the quantum Fourier transform (QFT). For an orthonormal basis |0⟩, …, |N−1⟩ and a given basis state |j⟩, the QFT effects the following transformation:

$$|j\rangle \mapsto \frac{1}{\sqrt{N}} \sum_{k=0}^{N-1} e^{2\pi i jk/N} |k\rangle \tag{2.11}$$

Recall that the regular discrete Fourier transform maps a vector of complex numbers x_0, …, x_{N−1} to the vector of complex numbers

$$y_k \equiv \frac{1}{\sqrt{N}} \sum_{j=0}^{N-1} x_j e^{2\pi i jk/N} \tag{2.12}$$

For an arbitrary state |ψ⟩, the quantum Fourier transform performs the classical Fourier transform in superposition:

$$|\psi\rangle = \sum_{j=0}^{N-1} x_j |j\rangle \;\mapsto\; \sum_{k=0}^{N-1} y_k |k\rangle \tag{2.13}$$

This is a unitary operation that turns any quantum state into its Fourier representation, and it can be effected using a series of Hadamard gates and controlled rotations. (See [12] for more details.)
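As an added numerical sketch, we can build the QFT matrix of Eq. (2.11) directly, confirm that it is unitary, and confirm that it agrees with NumPy's discrete Fourier transform up to normalization (NumPy's `ifft` uses the same sign convention as Eq. (2.12) but divides by N rather than √N):

```python
import numpy as np

N = 8
j, k = np.meshgrid(np.arange(N), np.arange(N))
QFT = np.exp(2j * np.pi * j * k / N) / np.sqrt(N)  # matrix of Eq. (2.11)

# The QFT is unitary ...
assert np.allclose(QFT @ QFT.conj().T, np.eye(N))

# ... and acts on amplitude vectors exactly as Eq. (2.12):
# y = sqrt(N) * ifft(x) in NumPy's convention.
x = np.random.default_rng(0).normal(size=N) + 0j
assert np.allclose(QFT @ x, np.sqrt(N) * np.fft.ifft(x))
```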

Although there are many more gates and operators that are commonly employed in quantum algorithms, we will conclude our discussion here, as we believe this is an adequate introduction for what follows in subsequent chapters.

2.3.2 Adiabatic Model

Finally, we will comment briefly on another model of quantum computing, the adiabatic model. This model relies on the adiabatic theorem from quantum mechanics, which states that a system evolving under a time-dependent Hamiltonian will remain in the same energy level over the course of the evolution if the evolution is slow enough. This model is frequently used to solve optimization problems: we first design a Hamiltonian whose ground state encodes the solution to the problem, and then prepare the easily obtained ground state of some simpler Hamiltonian. By slowly interpolating along a parametrized path between the two Hamiltonians, we can obtain the ground state of the more complex Hamiltonian to within some margin of error.

2.4 Summary

In this chapter we have provided a brief survey of some quantum computing tools that we will need in the rest of this thesis. In the following chapter, we will detail the problem that will concern us in the rest of the thesis, that of quantum simulating quantum chemistry. We will also outline the existing canonical algorithm for this problem as a chance to employ some of the notation that we have introduced in this chapter. Then in the remaining chapters we will improve upon one step of the canonical algorithm, the time evolution step.

Chapter 3

Quantum Simulation of Quantum Chemistry

In this chapter we introduce the chemistry Hamiltonian and the electronic structure problem, presenting the chemistry Hamiltonian in both second-quantized and first- quantized forms. We survey the current, standard algorithm for quantum simulation of quantum chemistry, which involves preparing an initial state, time-evolving this state under the chemistry Hamiltonian via Trotterization, and then using the quantum phase estimation algorithm to obtain eigenvalues of the chemistry Hamiltonian [15]. Note that further information on electronic structure theory can be found in a textbook like [16].

3.1 The Chemistry Hamiltonian

The electronic structure Hamiltonian for a molecular system is given by the following expression:

$$H = -\sum_i \frac{\nabla_{\vec{r}_i}^2}{2} - \sum_{i,j} \frac{Z_i}{|\vec{R}_i - \vec{r}_j|} + \sum_{i,j>i} \frac{1}{|\vec{r}_i - \vec{r}_j|} \tag{3.1}$$

Here we are working in units where ℏ = 1 = k, where k stands for the Coulomb constant and ℏ is the reduced Planck constant. We also note that the R⃗_i denote the coordinates of the nuclei, the Z_i denote the magnitudes of the nuclear charges, and the r⃗_i denote the coordinates of the electrons. We can see from the form of the Hamiltonian that the first term represents the kinetic energy of the electrons, the second term represents energies resulting from the attractive interactions between the electrons and the nuclei, and the last term represents energies resulting from repulsive interactions between the electrons themselves. We also note that we work within the Born-Oppenheimer approximation, where it is assumed that the motion of the nuclei and the electrons can be

analyzed separately (i.e., we can consider their wave functions separately) due to the large difference in mass between the two types of particles. The electronic structure problem then concerns studying the motion of the electrons with the nuclei held fixed.

3.1.1 First-Quantized vs. Second-Quantized

Before describing the first-quantized and second-quantized encodings of the chemistry Hamiltonian, we will first discuss the difference between the first-quantized approach to quantum mechanics, and the second-quantized approach. We can think of these as two different ways of viewing the same thing, and we will restrict our treatment to the context of quantum chemistry (as opposed to, say, quantum field theory).

All of the quantum mechanics that we introduced in Chapter 2 was expressed in the first-quantized formulation, where states are encoded by wave functions ψ(\vec{x}). In the context of quantum chemistry, we work in a vector space given by an orthonormal basis

{φ_i(\vec{x})}, where the φ_i(\vec{x}) are known as "spin orbitals" and can either contain an electron or not. Here \vec{x} = (\vec{r}, σ), where \vec{r} = (x, y, z) is the spatial coordinate of the electron and σ represents the spin of the electron. We will generally denote the number of orbitals by N and the number of electrons by η, with η ≪ N. We note that electrons are indistinguishable, so we can exchange them, and there is no sense of a single electron belonging to any one spin orbital. However, an electron is also a type of particle known as a fermion, and a fermionic wave function acquires a minus sign under a single exchange. Fermions also obey the "Pauli exclusion principle," which states that two fermions cannot occupy the same quantum state simultaneously. Thus the basis we tend to use in quantum chemistry is a basis of what are known as Slater determinants. A Slater determinant, which we can construct from a collection of η occupied spin orbitals {φ_{χ_0}(\vec{x}), φ_{χ_1}(\vec{x}), ..., φ_{χ_{η−1}}(\vec{x})}, is given by the following:

\frac{1}{\sqrt{\eta!}}
\begin{vmatrix}
\varphi_{\chi_0}(\vec{x}_0) & \varphi_{\chi_1}(\vec{x}_0) & \cdots & \varphi_{\chi_{\eta-1}}(\vec{x}_0) \\
\varphi_{\chi_0}(\vec{x}_1) & \varphi_{\chi_1}(\vec{x}_1) & \cdots & \varphi_{\chi_{\eta-1}}(\vec{x}_1) \\
\vdots & \vdots & \ddots & \vdots \\
\varphi_{\chi_0}(\vec{x}_{\eta-1}) & \varphi_{\chi_1}(\vec{x}_{\eta-1}) & \cdots & \varphi_{\chi_{\eta-1}}(\vec{x}_{\eta-1})
\end{vmatrix}    (3.2)

Because the determinant is antisymmetric under the exchange of any two rows, corresponding to the minus sign that we must account for whenever we exchange two electrons, we see that an advantage of using a basis of Slater determinants is that it enforces antisymmetry at the level of the wave function. Whenever we write the wave function as a linear combination of Slater determinants, where the coefficients in the linear combination are determined through approximate calculations, we work in what is known as the configuration interaction (CI) matrix representation; in this thesis, we will use the terms "CI matrix representation" and "first-quantized representation" somewhat interchangeably. Note that when we say "approximate calculations," we are referring to the solution of a mean-field procedure like Hartree-Fock, which returns an approximate solution to Schrodinger's equation by approximating the interactions in a given many-body system and iteratively solving the resulting equations to within some convergence tolerance.
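The antisymmetry argument above is easy to check numerically. The following sketch (illustrative only; the 1D "orbitals" and the coordinate values are made up for the example) builds the Slater matrix of Eq. (3.2) and confirms that swapping two electron coordinates flips the sign of the determinant, and that placing two electrons at the same coordinate annihilates it, as the Pauli exclusion principle demands.

```python
import math
import numpy as np

# Toy 1D "spin orbitals" standing in for the phi_chi(x) of Eq. (3.2);
# any linearly independent single-particle functions work for this check.
orbitals = [lambda x: np.exp(-x**2 / 2),
            lambda x: x * np.exp(-x**2 / 2),
            lambda x: (x**2 - 1) * np.exp(-x**2 / 2)]

def slater(coords):
    """Value of the Slater determinant of Eq. (3.2) at the given
    electron coordinates, including the 1/sqrt(eta!) prefactor."""
    eta = len(coords)
    M = np.array([[orbitals[b](coords[a]) for b in range(eta)]
                  for a in range(eta)])
    return np.linalg.det(M) / math.sqrt(math.factorial(eta))

# Exchanging two electrons flips the sign of the wave function...
assert np.isclose(slater([0.3, -0.7, 1.1]), -slater([-0.7, 0.3, 1.1]))
# ...and two electrons at the same coordinate give zero amplitude.
assert np.isclose(slater([0.3, 0.3, 1.1]), 0.0)
```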

Now we introduce the second-quantized representation, where we enforce antisymmetry through the way we define the operators rather than the way we define the states and wave functions. We define a vector space with vectors |k⟩ that correspond to Slater determinants, so that

|k\rangle = |k_1, k_2, \ldots, k_N\rangle \quad \text{where} \quad k_i =
\begin{cases}
1 & \varphi_i \text{ occupied} \\
0 & \varphi_i \text{ unoccupied}
\end{cases}    (3.3)

That is, we enumerate all the spin orbitals: when orbital φ_i appears occupied in the given Slater determinant, k_i is 1; otherwise it is 0. We define a vacuum state |0, 0, ..., 0⟩ in which there are no electrons in any spin orbital.

Now we are ready to define the raising (or creation) operator a_i^† and its Hermitian conjugate, the lowering (or annihilation) operator a_i. We define the operators so that

a_j^\dagger \, |f_N \ldots f_{j+1} \, 0 \, f_{j-1} \ldots f_1\rangle = (-1)^{\sum_{s=1}^{j-1} f_s} \, |f_N \ldots f_{j+1} \, 1 \, f_{j-1} \ldots f_1\rangle    (3.4)

a_j \, |f_N \ldots f_{j+1} \, 1 \, f_{j-1} \ldots f_1\rangle = (-1)^{\sum_{s=1}^{j-1} f_s} \, |f_N \ldots f_{j+1} \, 0 \, f_{j-1} \ldots f_1\rangle    (3.5)

a_j^\dagger \, |f_N \ldots f_{j+1} \, 1 \, f_{j-1} \ldots f_1\rangle = 0    (3.6)

a_j \, |f_N \ldots f_{j+1} \, 0 \, f_{j-1} \ldots f_1\rangle = 0    (3.7)

We note that the creation operator a_i^† creates an electron in the i-th orbital, and the annihilation operator a_i removes the electron from the i-th orbital. The phase factor automatically enforces antisymmetry. A result of the enforced antisymmetry is that the raising and lowering operators obey the following anti-commutation relations:

\{a_i^\dagger, a_j\} = \delta_{ij}, \qquad \{a_i^\dagger, a_j^\dagger\} = \{a_i, a_j\} = 0    (3.8)

where the "anti-commutator" of two operators is \{a, b\} \equiv ab + ba. We also note that the operator

N_i = a_i^\dagger a_i    (3.9)

counts the number of electrons in spin orbital i and is known as the number operator.
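These definitions can be verified in a few lines. The sketch below (an illustration, not part of the thesis's construction) realizes a_j as a Jordan-Wigner-style matrix on N = 4 orbitals, where a leading string of Pauli-Z operators supplies the (−1) phase of Eqs. (3.4)-(3.7), and checks the anti-commutation relations of Eq. (3.8) together with the number operator of Eq. (3.9).

```python
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
sm = np.array([[0.0, 1.0],      # single-mode annihilator: |1> -> |0>
               [0.0, 0.0]])     # (|0> = unoccupied, |1> = occupied)

def annihilator(j, N):
    """Matrix for a_j on N spin orbitals (j = 0..N-1). The leading Z
    string implements the phase (-1)^(sum of earlier occupations)."""
    ops = [Z] * j + [sm] + [I2] * (N - 1 - j)
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

N = 4
a = [annihilator(j, N) for j in range(N)]
acomm = lambda A, B: A @ B + B @ A

# Eq. (3.8): {a_i^dag, a_j} = delta_ij, {a_i^dag, a_j^dag} = {a_i, a_j} = 0
assert np.allclose(acomm(a[1].T, a[1]), np.eye(2**N))
assert np.allclose(acomm(a[1].T, a[2]), 0.0)
assert np.allclose(acomm(a[1], a[2]), 0.0)

# Eq. (3.9): N_i = a_i^dag a_i has eigenvalues 0 and 1, the occupations
num = a[0].T @ a[0]
assert set(np.round(np.linalg.eigvalsh(num))) == {0.0, 1.0}
```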

Now we are ready to state the second-quantized version of the chemistry Hamiltonian, where we rewrite the Hamiltonian in our new formulation. The second-quantized version of Eq. (3.1) is given by the following:

H = \sum_{i,j} h_{ij} \, a_i^\dagger a_j + \frac{1}{2} \sum_{i,j,k,\ell} h_{ijk\ell} \, a_i^\dagger a_j^\dagger a_k a_\ell    (3.10)

where hij and hijk` represent what are known as the one- and two-electron integrals, and which are given by the following expressions:

h_{ij} = \int \varphi_i^*(\vec{r}) \left( -\frac{\nabla^2}{2} - \sum_q \frac{Z_q}{|\vec{R}_q - \vec{r}|} \right) \varphi_j(\vec{r}) \, d\vec{r}    (3.11)

h_{ijk\ell} = \int \frac{\varphi_i^*(\vec{r}_1) \, \varphi_j^*(\vec{r}_2) \, \varphi_\ell(\vec{r}_1) \, \varphi_k(\vec{r}_2)}{|\vec{r}_1 - \vec{r}_2|} \, d\vec{r}_1 \, d\vec{r}_2    (3.12)

Note that electron number is always conserved because each term has an equal number of raising operators and lowering operators. We also notice that no more than two electrons are moved at a time; this is due to what are known as the Slater-Condon rules, which specify where the non-zero matrix elements lie in a Hamiltonian expressed as a matrix over Slater determinants. In particular, they state that matrix elements involving the movement of more than two electrons must vanish. (We will return to the Slater-Condon rules in Chapter 7 when we consider the first-quantized encoding in more depth.)

3.2 The Electronic Structure Problem

The objective of the chemistry problem, then, is to find the eigenvalues of the chemistry Hamiltonian under the approximations we have stated; one example of such an eigenvalue is the ground state energy. That is, we’d like to solve the equation

H \, |\psi\rangle = E \, |\psi\rangle    (3.13)

Currently such computations are performed approximately on classical computers and are extremely computationally expensive. (As we have alluded to previously, one way to do this is iteratively: we start with a trial solution for |ψ⟩, obtain an estimate for E from this trial |ψ⟩, refine our estimate for |ψ⟩ based on our estimate for E, and repeat.) The idea behind quantum simulation of quantum chemistry is that such a problem is naturally suited to a quantum computer because it concerns a system that is inherently quantum mechanical.

In particular, consider a Hamiltonian H with eigenvalues {E_k} and eigenvectors {|k⟩}, and consider a state |ψ⟩ that we will then evolve under H. We can express |ψ⟩ in the eigenbasis of H like so:

|\psi\rangle = \sum_k c_k \, |k\rangle    (3.14)

Then, if we evolve the state |ψ⟩ under H for time t, the resulting state will be

U(t) \, |\psi\rangle = e^{-iHt} \, |\psi\rangle = \sum_k c_k \, e^{-iE_k t} \, |k\rangle    (3.15)

We note that if we could start with an initial state that is already an eigenstate of H, then after time evolution the corresponding eigenvalue will be encoded in the phase of that state, and we can use an existing quantum algorithm, the phase estimation algorithm, to retrieve that phase.
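This phase-kickback observation can be verified directly. The sketch below (a toy example, with a random Hermitian matrix standing in for the chemistry Hamiltonian) evolves an eigenstate and confirms that the only effect is the phase e^{−iE_k t} of Eq. (3.15).

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
H = (A + A.conj().T) / 2                 # random Hermitian "Hamiltonian"
E, V = np.linalg.eigh(H)                 # eigenvalues E_k, eigenvectors |k>

t = 1.7
U = V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T   # e^{-iHt}

k = 2
psi = V[:, k]                            # start in the eigenstate |k>
# Eq. (3.15): evolution attaches only the phase e^{-i E_k t}
assert np.allclose(U @ psi, np.exp(-1j * E[k] * t) * psi)
```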

3.3 The Canonical Algorithm

The canonical algorithm consists of three steps: first, we prepare an initial state close to the eigenstate for which we would like to retrieve the eigenvalue. Next we perform time evolution under the chemistry Hamiltonian through Trotterization. Finally we read out the eigenvalue encoded in the phase of the evolved state using the quantum phase estimation algorithm.

3.3.1 State Preparation

First we will describe state preparation, following the treatment in [15] and based on concepts from [4, 17, 18]. Being able to prepare the initial state that we wish to time evolve is a formidable challenge, and one that we will not focus on much in this thesis. For example, if we wanted our algorithm to return the ground state energy of the Hamiltonian, we would want our initial state to have as much overlap with the ground state of the chemistry Hamiltonian as possible. One means of preparing the initial state is through adiabatic quantum computing: we start with the Hartree-Fock wave function |ψ_HF⟩, obtained through a classical calculation that takes polynomial time. Letting H_HF denote the Hartree-Fock Hamiltonian, to increase the overlap between |ψ_HF⟩ and the ground state |ψ_0⟩ of H, we would parametrize a path between the two Hamiltonians like so:

H(s) = (1 - s) H_{HF} + s H    (3.16)

for s ∈ [0, 1], to adiabatically evolve |ψ_HF⟩ to |ψ_0⟩ with quantifiable error.

3.3.2 Hamiltonian Evolution

Now we will describe existing methods for the specific problem that this thesis is con- cerned with, namely that of effecting time evolution under the chemistry Hamiltonian. We will survey existing methods and point out how they provide motivation for the work described in this thesis.

3.3.3 Trotterization

In this section we consider how to effect the time evolution |ψ⟩ ↦ e^{−iHt} |ψ⟩ using Trotter-based methods that were first applied in [2]. The Trotter decomposition takes inspiration from the fact that if H = \sum_{k=1}^{L} H_k with [H_j, H_k] = 0 for all j, k, then

e^{-iHt} = e^{-iH_1 t} \, e^{-iH_2 t} \cdots e^{-iH_L t}    (3.17)

Here [a, b] is the commutator ab − ba, so saying that [a, b] = 0 is equivalent to saying that the operators a and b commute.

In general, however, matrices do not commute. The key insight behind the Trotter decomposition is that, for matrices A and B,

\lim_{n \to \infty} \left( e^{iAt/n} \, e^{iBt/n} \right)^n = e^{i(A+B)t}    (3.18)

Thus for some finite n, we can break up the time evolution to get the following approximation:

e^{iHt} \approx \left( e^{iH_1 t/n} \cdots e^{iH_L t/n} \right)^n    (3.19)

incurring error given by the Baker-Campbell-Hausdorff formula, which is stated for a Lie algebra defined by commutators:

e^{(A+B)t} = e^{At} \, e^{Bt} \, e^{-\frac{1}{2}[A,B]t^2} + O(t^3)    (3.20)

This is the inspiration for Trotter-based time evolution algorithms. For the chemistry problem in particular, if we analytically bound the Baker-Campbell-Hausdorff error as a function of N, the number of spin orbitals, we would find that the Trotter-based time evolution methods scale like \tilde{O}(N^8 t/\epsilon^{o(1)}) [9], where \epsilon denotes the error of the simulation. Numerical simulations have improved this bound for real molecules, so that the algorithm scales like \tilde{O}(N^6 \sqrt{t^3/\epsilon}) for certain real molecules [10].
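The first-order behavior of the Trotter error is easy to see numerically. In this sketch (random Hermitian matrices, not an actual chemistry Hamiltonian), the distance between the Trotterized and exact propagators shrinks roughly like 1/n, which is the source of the inverse-error scaling discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_herm(d):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (A + A.conj().T) / 4          # scaled down to modest norm

def u(H, t):
    """e^{iHt} for Hermitian H, via its eigendecomposition."""
    E, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(1j * E * t)) @ V.conj().T

A, B = rand_herm(8), rand_herm(8)
t = 1.0
exact = u(A + B, t)                      # e^{i(A+B)t}, as in Eq. (3.18)

errors = []
for n in (1, 10, 100):
    trotter = np.linalg.matrix_power(u(A, t / n) @ u(B, t / n), n)
    errors.append(np.linalg.norm(trotter - exact, 2))
print(errors)   # shrinks roughly like 1/n, per the BCH error of Eq. (3.20)
```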

3.3.3.1 Simulation of Sparse Hamiltonians

The Trotter-based methods that we mentioned in the previous section have been formulated for the simulation of Hamiltonians that take the form of sparse matrices, where the non-zero entries of these sparse matrices are in turn given by an oracle that the algorithm can query. When we use the term "sparse matrix," we mean that a matrix is d-sparse if there are at most d non-zero entries in each row and column. In particular, [19] showed that a d-sparse Hamiltonian can be decomposed into a sum of Θ(d²) 1-sparse Hamiltonians through a graph coloring argument, and the decomposed Hamiltonian can in turn be simulated efficiently, with an oracle, using Trotter-based methods. Toloui and Love [20] showed that the chemistry Hamiltonian, specifically, in the configuration interaction (CI) matrix encoding, can be decomposed into a sum of Θ(d) 1-sparse matrices and likewise simulated efficiently under Trotter-based methods. We will use this decomposition result when we outline our first-quantized algorithm, but in doing so we will also move away from Trotter-based methods. We note again that the Trotter-Suzuki decomposition has been used in nearly all quantum algorithms for quantum chemistry up until now [9, 10, 20–31], with a couple of exceptions like an adiabatic algorithm from 2014 [32].

Recent work has also been done on performing time evolution using methods other than Trotterization, although the results have been demonstrated for general Hamiltonians that again assume the existence of an oracle that can be queried. In 2009, Cleve et al. showed that under a query model that allowed for fractional queries, time evolution could be performed probabilistically in a manner that was exponentially more precise than Trotterization [33], although the number of ancilla qubits required still exhibited the 1/ε scaling of Trotterization. This fractional query model was subsequently improved so that it would not require an exponential number of ancilla qubits [34], and it was also applied explicitly to the simulation of sparse Hamiltonians [34, 35]. The algorithm of [34, 35] was also an improvement over [33] in that it was made deterministic through a procedure known as oblivious amplitude amplification. Most recently, [11] demonstrated that the same exponential improvement in both qubit and query count could be achieved, again for a sparse Hamiltonian given by an oracle, by Taylor expanding the propagator and truncating the Taylor series in place of the Trotter decomposition, without needing to resort to the fractional query model. This Taylor series algorithm has also been made deterministic through oblivious amplitude amplification, and it is less complicated than the fractional query model.

We shall adapt the Taylor series algorithm to the chemistry Hamiltonian in subsequent chapters, providing a specific use case for these exponentially more precise algorithms that have been developed for general Hamiltonians assuming the existence of a black box oracle. That is to say, we will explicitly construct the oracles that we need for the chemistry Hamiltonian, and we will also explicitly construct abstract operators whose existence is likewise assumed in these more general results.

3.3.4 Phase Estimation Algorithm

For completeness, in this section we will examine the phase estimation algorithm, which can be used to read out energies encoded in the phase of a time-evolved state. In the most general case of the phase estimation problem, given an operator U with eigenvector |u⟩ and corresponding eigenvalue e^{2πiφ}, we would like to estimate φ. Note that in the case of the chemistry Hamiltonian, our eigenvector |u⟩ is the initial state, which we would like to be as close to the ground state of the chemistry Hamiltonian as possible, and the corresponding eigenvalue encodes the ground state energy of the chemistry Hamiltonian.

For the phase estimation algorithm, we will need the quantum Fourier transform, which we introduced in the previous chapter. Recall that for a given basis state |j⟩, the QFT effects the following transformation:

|j\rangle \mapsto \frac{1}{\sqrt{N}} \sum_{k=0}^{N-1} e^{2\pi i jk/N} \, |k\rangle

The steps involved in the phase estimation algorithm are the following:

1. Start with the initial state |0⟩^{⊗t} |u⟩.

2. Apply a Hadamard gate to each of the t ancilla qubits to obtain the superposition \frac{1}{\sqrt{2^t}} \sum_{j=0}^{2^t-1} |j\rangle |u\rangle.

3. Apply powers of U controlled on the ancilla register to obtain the state \frac{1}{\sqrt{2^t}} \sum_{j=0}^{2^t-1} e^{2\pi i j \phi} |j\rangle |u\rangle.

4. Apply the inverse Fourier transform to the ancilla register and measure it to probabilistically obtain the desired phase.

The corresponding circuit is the standard one: the ancilla register, initialized to |0⟩, passes through Hadamard gates, controls the applications of U on the target register |u⟩, and then passes through the inverse Fourier transform FT† before being measured; the target register is left in |u⟩.
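The readout statistics of this procedure can be simulated classically as a sanity check. The sketch below (a direct classical simulation for a single eigenstate, not a quantum implementation) applies the inverse QFT to the state of Step 3 and recovers a 4-bit phase exactly when φ is an exact multiple of 1/2^t.

```python
import numpy as np

def qpe_distribution(phi, t):
    """Readout probabilities of t-bit phase estimation for an eigenvalue
    e^{2 pi i phi}, simulated directly from the state in Step 3."""
    T = 2**t
    j = np.arange(T)
    state = np.exp(2j * np.pi * j * phi) / np.sqrt(T)    # after controlled-U's
    m = j[:, None]
    iqft = np.exp(-2j * np.pi * m * j / T) / np.sqrt(T)  # inverse QFT matrix
    return np.abs(iqft @ state)**2

probs = qpe_distribution(phi=13 / 16, t=4)
print(np.argmax(probs))   # -> 13, i.e. the binary phase 0.1101 = 13/16
```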

3.4 Summary

In this chapter we have outlined the canonical algorithm for quantum simulation of quantum chemistry, and we have surveyed recent developments to provide motivation for the work that follows. In Chapter 4 we describe the Taylor series algorithm, which is exponentially more precise than Trotterization, in its full generality, and in subsequent chapters we examine how to adapt the Taylor series algorithm to the chemistry Hamiltonian.

Chapter 4

Truncated Taylor Series Algorithm

The truncated Taylor series algorithm is an algorithm for Hamiltonian evolution that is exponentially more precise than Trotterization. In this chapter we summarize the truncated Taylor series algorithm in its full generality using results from [11] so that subsequent chapters can elaborate on how to adapt this algorithm to the more specific case of the chemistry Hamiltonian. We also describe how the general algorithm can be adapted for Hamiltonians that can be expressed as an integral over some dimension (be it time or space), as this will provide motivation for quantum chemistry algorithms involving on-the-fly evaluation of molecular integrals that we will describe in later chap- ters.

4.1 Overview of Algorithm

The truncated Taylor series algorithm is formulated for a Hamiltonian that can be written in the form

H = \sum_{\gamma=1}^{\Gamma} W_\gamma H_\gamma    (4.1)

where the W_γ are complex-valued scalars¹ and each H_γ is a unitary operator, with some means of selectively applying a given H_γ available to the algorithm. Recall that the Hamiltonian is a Hermitian operator, so there must be some available means to decompose the Hamiltonian into a sum of unitaries; this is a procedure that we will need to specify for the two representations of the chemistry Hamiltonian that we describe in subsequent chapters. What we would like to do is perform time evolution under the time evolution operator U = e^{−iHt} for time t, with error ε.

¹ In the original treatment in [11], the W_γ are required to be real, non-negative scalars. We break with this convention to better conform with the way that we decompose the chemistry Hamiltonian in future chapters. In practice this is just a difference in whether we decide to absorb signs and factors of i into the H_γ.

We begin by dividing t into r time segments of size t/r. For each segment we then Taylor expand the propagator and truncate the Taylor series at order K:

U_r \equiv e^{-iHt/r} \approx \sum_{k=0}^{K} \frac{(-iHt/r)^k}{k!} = \sum_{k=0}^{K} \sum_{\gamma_1, \ldots, \gamma_k = 1}^{\Gamma} \frac{(-it/r)^k}{k!} \, W_{\gamma_1} \cdots W_{\gamma_k} \, H_{\gamma_1} \cdots H_{\gamma_k}    (4.2)

We select K so that each of the r segments contributes error less than /r. From the Taylor series error formula, we know that truncating at order K incurs error

O\!\left( \frac{(\|H\| t/r)^{K+1}}{(K+1)!} \right)    (4.3)

Thus, provided that r ≥ ||H||t, we can choose

K \in O\!\left( \frac{\log(r/\epsilon)}{\log\log(r/\epsilon)} \right)    (4.4)

to achieve our desired precision.
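To get a feel for Eq. (4.4), one can compute the smallest K meeting the error bound of Eq. (4.3) directly. This sketch (an illustrative calculation, with ‖H‖t/r set to its largest allowed value of 1) shows K growing only slowly as the per-segment error budget shrinks:

```python
import math

def truncation_order(x, eps):
    """Smallest K with x^(K+1)/(K+1)! <= eps, where x = ||H|| t / r,
    per the error bound of Eq. (4.3)."""
    K = 0
    while x**(K + 1) / math.factorial(K + 1) > eps:
        K += 1
    return K

# with r >= ||H|| t we have x <= 1, and K grows like log/loglog, Eq. (4.4)
for eps in (1e-3, 1e-6, 1e-12):
    print(eps, truncation_order(1.0, eps))
# -> K = 6, 9, 14: quadrupling the number of error digits barely doubles K
```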

Having approximated the propagator U_r, the next question is how we will actually implement evolution under this propagator in order to obtain the state U_r |ψ⟩ for starting state |ψ⟩. To do this we will use a procedure from [35] known as oblivious amplitude amplification, which will allow us to deterministically apply a sum of unitary operators to a given state. While the truncated form of U_r, which we will refer to as \tilde{U}, is not exactly unitary because of the Taylor series truncation, it is almost unitary, and this allows us to proceed within reasonable error bounds. First we note that \tilde{U} can be written in the form

\tilde{U} = \sum_j \beta_j V_j    (4.5)

where

j \equiv (k, \gamma_1, \cdots, \gamma_k), \qquad \beta_j \equiv \frac{t^k}{r^k k!} \, W_{\gamma_1} \cdots W_{\gamma_k}, \qquad V_j \equiv (-i)^k \, H_{\gamma_1} \cdots H_{\gamma_k}    (4.6)

Already we can comment on the gate count required for this implementation: we need an ancillary selection register |j⟩ = |k⟩ |γ_1⟩ ... |γ_K⟩ where 0 ≤ k ≤ K and 1 ≤ γ_v ≤ Γ for all v. We can encode each |γ_v⟩ in binary using Θ(log Γ) qubits, and we will do this for a total of K such |γ_v⟩, although only k of these K registers will be in use for any given value of |k⟩. Finally, we encode |k⟩ itself in unary for reasons that will become clear later in this section. This means that each |k⟩ = |1^k 0^{K−k}⟩, which in turn requires Θ(K) qubits. Thus, letting J denote the total number of ancilla qubits required for the selection register |j⟩,

J \in \Theta(K \log \Gamma) = O\!\left( \frac{\log\Gamma \, \log(r/\epsilon)}{\log\log(r/\epsilon)} \right)    (4.7)

Why do we refer to |j⟩ as the selection register? [11] next assumes that there exists an abstract operator called select(V) so that

select(V) \, |j\rangle |\psi\rangle = |j\rangle \, V_j \, |\psi\rangle    (4.8)

In subsequent chapters we will actually implement select(V ) by first constructing a circuit that we will refer to as select(H), with

select(H) \, |\gamma\rangle |\psi\rangle = |\gamma\rangle \, H_\gamma \, |\psi\rangle    (4.9)

This construction will differ depending on the specific encoding of the chemistry Hamiltonian that we use (either second-quantized or first-quantized). For now we note that we can construct select(V) by applying select(H) k times and multiplying by −i a total of k times. To obtain our k desired applications of select(H), we perform K applications of select(H), with each application controlled on the corresponding, successive qubit in the unary encoding of |k⟩. The gate count of select(H) will again depend on its specific construction, which we will detail in subsequent chapters. Thus, letting H denote the gate count of select(H) for now, where H depends on the encoding of the Hamiltonian that we choose, we see that select(V) must have a gate count that scales like O(H K).

Now that we have constructed select(V), we also need an operator prepare(β) so that

prepare(\beta) \, |0\rangle^{\otimes J} = \frac{1}{\sqrt{s}} \sum_j \sqrt{\beta_j} \, |j\rangle    (4.10)

where s is a normalization factor so that the probabilities sum to 1.

To actually implement prepare(β) in the most general case, we would first like to construct a circuit prepare(W). The actual implementation of prepare(W) will again differ depending on the encoding of the chemistry Hamiltonian that we use, and we again defer the actual construction of this circuit to subsequent chapters. The prepare(W) operator ought to effect the following:

prepare(W) \, |0\rangle^{\otimes \log \Gamma} = \frac{1}{\sqrt{\Lambda}} \sum_{\gamma=1}^{\Gamma} \sqrt{W_\gamma} \, |\gamma\rangle    (4.11)

where

\Lambda \equiv \sum_{\gamma=1}^{\Gamma} |W_\gamma|    (4.12)

and the complexity of the normalization factor Λ depends on the encoding of the chemistry Hamiltonian that we use as well.

Then, to effect prepare(β) assuming the existence of prepare(W), we will first perform K rotations to obtain the state

\left[ \sum_{q=0}^{K} \frac{(\Lambda t/r)^q}{q!} \right]^{-1/2} \sum_{k=0}^{K} \sqrt{\frac{(\Lambda t/r)^k}{k!}} \, |k\rangle    (4.13)

Specifically, using the convention that R_y(\theta) \equiv \exp[-i\theta\sigma_y/2], we do this by applying R_y(\theta_1) to the first qubit of the unary encoding of k, and then R_y(\theta_k) to the k-th qubit controlled on the (k−1)-th qubit, for k ∈ [2, K], where

\theta_k \equiv 2 \arcsin\!\left( \sqrt{ 1 - \frac{(\Lambda t/r)^{k-1}}{(k-1)!} \left[ \sum_{q=k-1}^{K} \frac{(\Lambda t/r)^q}{q!} \right]^{-1} } \right)    (4.14)

After performing these rotations, we apply prepare(W) once for each of the K registers |γ_1⟩, ..., |γ_K⟩. (Technically we only need to perform prepare(W) k times, since the registers past the k-th one are not used.) Again, the gate count of prepare(W) will depend on its specific construction. Thus, letting W denote the gate count of prepare(W) for now, we see that prepare(β) must have a gate count that scales like O(W K). Figure 4.1 provides a circuit diagram for prepare(β).
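The rotation cascade can be checked classically. The sketch below (illustrative only; it takes the tail sum in Eq. (4.14) to run from q = k−1, which is what makes the resulting amplitudes normalize) computes the amplitude with which the unary register ends in |1^k 0^{K−k}⟩ and compares it against the target state of Eq. (4.13).

```python
import math
import numpy as np

K = 6
x = math.log(2)                   # Lambda*t/r, with r = Lambda*t/ln(2)
term = [x**q / math.factorial(q) for q in range(K + 1)]
tail = [sum(term[q:]) for q in range(K + 1)]   # tail[q] = sum_{m >= q} term_m

def theta(k):
    """Rotation angle of Eq. (4.14), for k = 1..K."""
    return 2 * math.asin(math.sqrt(1 - term[k - 1] / tail[k - 1]))

def amplitude(k):
    """Amplitude of the unary state |1^k 0^(K-k)> after the cascade."""
    a = 1.0
    for m in range(1, k + 1):     # the first k rotations all "continue"
        a *= math.sin(theta(m) / 2)
    if k < K:                     # rotation k+1 then "stops" the cascade
        a *= math.cos(theta(k + 1) / 2)
    return a

target = np.sqrt(np.array(term) / tail[0])     # amplitudes of Eq. (4.13)
assert np.allclose([amplitude(k) for k in range(K + 1)], target)
```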

Using the two operators select(V ) and prepare(β) we can define an operator W so that

W \equiv (\text{prepare}(\beta) \otimes I)^T \, \text{select}(V) \, (\text{prepare}(\beta) \otimes I)

W \, |0\rangle^{\otimes J} |\psi\rangle = \frac{1}{s} \, |0\rangle^{\otimes J} \, \tilde{U} |\psi\rangle + \sqrt{1 - \frac{1}{s^2}} \, |\Phi\rangle    (4.15)

Figure 4.1: The circuit for the operator prepare(β): a cascade of rotations R_y(θ_1), ..., R_y(θ_K) on the unary register, each rotation after the first controlled on the preceding qubit, followed by an application of prepare(W) to each of the K registers |0⟩^{⊗ log Γ}. An expression for θ_k is given in Eq. (4.14). The actual implementation of prepare(W) will depend on the encoding of the chemistry Hamiltonian that we use.

where |Φ⟩ is a state whose ancilla register is orthogonal to |0⟩^{⊗J}. Note that we have departed from the convention of [11] by allowing the W_γ to be complex scalars, so our definition of W differs from the definition there by a complex conjugation. The circuit implementing W is shown in Figure 4.2.

Figure 4.2: The circuit implementing W: prepare(β) acts on the ancilla registers, the ancilla then serve as the control register for select(V) acting on the target register, and finally prepare(β)^T is applied to the ancilla.

Now, obtaining our desired state \tilde{U} |ψ⟩ is a simple matter of applying the projection operator P ≡ (|0⟩⟨0|)^{⊗J} ⊗ I:

P \, W \, |0\rangle^{\otimes J} |\psi\rangle = \frac{1}{s} \, |0\rangle^{\otimes J} \, \tilde{U} |\psi\rangle    (4.16)

The value of s can be adjusted by setting r, the number of segments. In particular,

Table 4.1: Parameters and bounds

Parameter | Description | Bound
Λ | normalization factor, Eq. (4.12) | depends on encoding
r | number of time segments, Eq. (4.17) | Λt/ln(2)
K | truncation point for Taylor series, Eq. (4.4) | O(log(r/ε)/log log(r/ε))
Γ | number of terms in unitary decomposition, Eq. (4.1) | depends on encoding
J | number of ancilla qubits in selection register, Eq. (4.7) | Θ(K log Γ)

to conform with the way oblivious amplitude amplification is defined in [11], we choose s = 2. This is equivalent to choosing

r = Λt/ ln(2) (4.17)

since

s = \sum_j |\beta_j| = \sum_{k=0}^{K} \frac{(\ln 2)^k}{k!} \approx 2    (4.18)

Now, letting R = I − 2P be the reflection operator, we define the amplification operator

G ≡ −WRW†RW (4.19)

If \tilde{U} were perfectly unitary, we would obtain

G \, |0\rangle |\psi\rangle = |0\rangle \, \tilde{U} |\psi\rangle    (4.20)

but it is not perfectly unitary because of our Taylor series truncation. Instead, we get an approximation whose error we will bound by δ = ε/r for each segment. To prove this error bound, we note that

P \, G \, |0\rangle |\psi\rangle = |0\rangle \left( \frac{3}{s} \tilde{U} - \frac{4}{s^3} \tilde{U} \tilde{U}^\dagger \tilde{U} \right) |\psi\rangle    (4.21)

Under the conditions of oblivious amplitude amplification, with |s − 2| = O(δ) and \|\tilde{U} - U_r\| = O(δ), we must have \|\tilde{U}\tilde{U}^\dagger - I\| = O(δ) and

\| P \, G \, |0\rangle |\psi\rangle - |0\rangle \, U_r |\psi\rangle \| = O(\delta)    (4.22)

Finally, the gate count of the entire algorithm is r times the cost of select(V ) plus r times the cost of prepare(β), and thus the whole algorithm scales like O(r[H +W ]K).

Now we summarize the steps involved in the general Taylor series algorithm:

1. Express the Hamiltonian as a sum of weighted unitaries, as in Eq. (4.1).

Table 4.2: Operators and gate counts

Operator | Description | Gate Count
select(H) | apply specified terms from decomposition, Eq. (4.9) | depends on encoding
select(V) | apply specified product of terms, Eq. (4.8) | depends on encoding
prepare(W) | prepare weighted superposition, Eq. (4.11) | depends on encoding
prepare(β) | prepare weighted superposition, Eq. (4.10) | depends on encoding
W | probabilistically simulate for time t/r, Eq. (4.15) | depends on encoding
P | projection operator | Θ(K log Γ)
G | amplification to implement sum of unitaries, Eq. (4.19) | depends on encoding
(PG)^r | entire algorithm | depends on encoding

2. Subdivide the simulation into r time segments of duration t/r, with r = Λt/ln(2), where t is the total simulation time and Λ = \sum_{\gamma=1}^{\Gamma} |W_\gamma|.

3. Expand the evolution for time t/r, as in Eq. (4.2).

4. For each segment, we do the following:

(a) Apply prepare(W) to obtain the state \frac{1}{\sqrt{\Lambda}} \sum_{\gamma=1}^{\Gamma} \sqrt{W_\gamma} \, |\gamma\rangle.

(b) Apply a series of controlled rotations by θ_k, where θ_k is as given in Eq. (4.14). Overall, Steps (a) and (b) combined give us the operator prepare(β), which gives us the state in Eq. (4.13).

(c) Use the ancilla prepared in Steps (a) and (b) as controls for the operation select(V), which performs K controlled applications of select(H) along with K phase shifts, and which is described in Eq. (4.8).

(d) Apply prepare(β)^T to clear out the ancilla registers.

(e) Apply oblivious amplitude amplification to obtain the desired state with unit probability.
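As a classical sanity check of the truncation at the heart of these steps (not of the quantum circuits themselves), the sketch below builds a toy two-qubit Hamiltonian as a weighted sum of Pauli-term unitaries, forms the truncated series \tilde{U} of Eq. (4.2) for each of r = ⌈Λt/ln 2⌉ segments, and verifies that chaining the segments reproduces e^{−iHt} to well within the truncation bound. The particular weights and Pauli terms are made up for the example.

```python
import math
import numpy as np

# Toy decomposition H = sum_gamma W_gamma H_gamma into unitary Pauli terms
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
I = np.eye(2)
terms = [np.kron(Z, Z), np.kron(X, I), np.kron(I, X)]
W = [0.9, 0.4, 0.4]
H = sum(w * h for w, h in zip(W, terms))

Lam = sum(abs(w) for w in W)                 # Lambda of Eq. (4.12)
t = 3.0
r = math.ceil(Lam * t / math.log(2))         # Eq. (4.17), rounded up
K = 9

# truncated Taylor series for one segment, Eq. (4.2)
U_tilde = sum(np.linalg.matrix_power(-1j * H * t / r, k) / math.factorial(k)
              for k in range(K + 1))

# exact e^{-iHt} via eigendecomposition, for comparison
E, V = np.linalg.eigh(H)
exact = V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T

total_error = np.linalg.norm(np.linalg.matrix_power(U_tilde, r) - exact, 2)
assert total_error < 1e-6
```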

Table 4.1 lists relevant parameters along with their bounds, and Table 4.2 lists relevant operators and their gate counts, although we note that many parameters and bounds will depend on the exact encoding of the chemistry Hamiltonian that we adapt the algorithm to, which entails being able to decompose the Hamiltonian into a sum of unitaries, and being able to construct operators whose existence we have otherwise simply assumed. Already we can say that this algorithm is exponentially more precise than the \tilde{O}(N^8 t/\epsilon^{o(1)}) scaling of Trotterization because it is polylogarithmic in the inverse of the error. What remains is to determine the algorithm's dependence on our parameters of interest for the chemistry problem, N and η, which in turn depends on how we encode the Hamiltonian.

In subsequent sections we will adapt this algorithm to specific encodings of the chemistry Hamiltonian to obtain scalings in terms of N and η, and we will reproduce Table 4.1 and Table 4.2 with all the currently undetermined parameters, like H and W, filled in. In doing so we will provide decompositions of the chemistry Hamiltonian that match the form given in Eq. (4.1), and we will provide implementations of select(H) and prepare(W) that differ depending on the decomposition. For the on-the-fly algorithms, which evaluate molecular integrals that appear in the Hamiltonian on the fly, implementing select(H) and prepare(W) will require constructing an integrand oracle that can be queried by these operators; the construction of this oracle is detailed in Chapter 5.

The following section describes how to adapt the Taylor series algorithm to Hamiltonians that can be written as an integral over some dimension; in this case, the dimension that concerns us will be spatial coordinates. This will provide the motivation for the on-the-fly algorithms that we describe later in Chapters 5, 6, and 7.

4.2 Integral Hamiltonians

An integral Hamiltonian is one that can be written as an integral over a dimension such as space or time. We draw inspiration for how to handle integral Hamiltonians in the on-the-fly algorithms of Chapters 5, 6, and 7 from the treatment of time-dependent Hamiltonians in [11]. The propagator for a time-dependent Hamiltonian contains an integral over time; this ends up looking quite similar to the propagator for a Hamiltonian that can be written as an integral over space.

Recall, from Chapter 2, that the Schrodinger equation governing time evolution is given by i \frac{d|\psi\rangle}{dt} = H |\psi\rangle. When H is not time-dependent, we can directly integrate this equation over a given time t to say that

|\psi(t)\rangle = e^{-iHt} \, |\psi(0)\rangle    (4.23)

However, when the Hamiltonian is some time-dependent operator H(t), we can't simply integrate this equation, because the Hamiltonian at different times does not necessarily commute with itself, and we recall that in general e^{A+B} \neq e^A e^B. For time-dependent H(t) we instead write the time-evolved state as

|\psi(t)\rangle = T \exp\!\left[ -i \int_0^{t} H(t') \, dt' \right] |\psi(0)\rangle    (4.24)

where T, the "time-ordering operator," is a notational way of indicating that the terms in the integrand must be evaluated in a certain order because they don't commute.

The way to deal with a time-dependent Hamiltonian in the Taylor series algorithm is to simply discretize the integral so that we don't need to worry about the time-ordering operator T. Specifically, for a given time segment (one of r time segments), we again Taylor expand and truncate the Taylor series as before, and then we further discretize the integral into µ subintervals of size t/(rµ):

U_r \equiv T \exp\!\left[ -i \int_0^{t/r} H(t) \, dt \right]
    \approx \sum_{k=0}^{K} \frac{(-i)^k}{k!} \int_0^{t/r} T \, H(t_k) \cdots H(t_1) \, dt_1 \cdots dt_k
    \approx \sum_{k=0}^{K} \frac{(-it/r)^k}{\mu^k k!} \sum_{j_1, \ldots, j_k = 0}^{\mu-1} H(t_{j_k}) \cdots H(t_{j_1})    (4.25)

We will use Eq. (4.25) to guide our treatment of Hamiltonians that can be expressed as an integral over space. For a Hamiltonian that is itself an integral over a spatial region Z, we write

H = \int_{Z} \mathcal{H}(\vec{z}) \, d\vec{z}    (4.26)

For both the first-quantized and second-quantized chemistry Hamiltonian, the integral is defined over all space, but the integrand decays exponentially, so we can approximate Z as a finite volume \mathcal{V}. Then the integral can be approximated as

H = \int_{Z} \mathcal{H}(\vec{z}) \, d\vec{z} \approx \frac{\mathcal{V}}{\mu} \sum_{\rho=1}^{\mu} \mathcal{H}(\vec{z}_\rho)    (4.27)

Note that the one-electron integrals h_{ij} are defined over 3D space \vec{x} = (x, y, z), and the two-electron integrals are defined over 6D space (\vec{x}, \vec{y}) = (x_1, y_1, z_1, x_2, y_2, z_2), but here we incorporate the six coordinates into the vector \vec{z}. In practice, carrying out the algorithm would involve discretizing each integration coordinate separately, but for pedagogical purposes we work just with the vector \vec{z} and total volume \mathcal{V}.
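A one-dimensional analogue makes the two approximations in Eq. (4.27) concrete: truncating the domain to a finite volume, then replacing the integral by a sum over µ grid points. The sketch below (a toy integrand, e^{−z²}, chosen only because its integral √π is known in closed form) shows both errors becoming negligible quickly.

```python
import numpy as np

def riemann(mu, L=6.0):
    """(V/mu) * sum of the integrand over mu midpoint grid points,
    the 1D analogue of Eq. (4.27) with volume V = 2L."""
    z = -L + (np.arange(mu) + 0.5) * (2 * L / mu)
    return (2 * L / mu) * np.sum(np.exp(-z**2))

exact = np.sqrt(np.pi)                  # integral of e^{-z^2} over all space
for mu in (8, 32, 128):
    print(mu, abs(riemann(mu) - exact))
# the domain-truncation error is exponentially small in L, and the grid
# error falls off rapidly with mu
```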

Now we again write out the propagator, which will be an integral over space, and then Taylor expand and truncate the Taylor series at order K to obtain the same scaling as in Eq. (4.4) in terms of r and ε. Then we discretize the integral over space:

\[ U_r \approx \exp\!\left[ -i\, \frac{t}{r} \int_{\mathcal{Z}} H(\vec{z})\, d\vec{z} \right] \tag{4.28} \]
\[ \approx \sum_{k=0}^{K} \frac{(-it/r)^k}{k!} \left( \int_{\mathcal{Z}} H(\vec{z})\, d\vec{z} \right)^{\!k} \tag{4.29} \]
\[ = \sum_{k=0}^{K} \frac{(-it/r)^k}{k!} \int_{\mathcal{Z}^k} H(\vec{z}_1) \cdots H(\vec{z}_k)\, d^k\vec{z} \tag{4.30} \]
\[ \approx \sum_{k=0}^{K} \frac{(-it\mathcal{V})^k}{r^k \mu^k k!} \left( \sum_{\rho=1}^{\mu} H(\vec{z}_\rho) \right)^{\!k} \tag{4.31} \]
\[ = \sum_{k=0}^{K} \frac{(-it\mathcal{V})^k}{r^k \mu^k k!} \sum_{\rho_1,\dots,\rho_k=1}^{\mu} H(\vec{z}_{\rho_1}) \cdots H(\vec{z}_{\rho_k}) \tag{4.32} \]
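This chain of approximations can be exercised numerically: for a toy matrix-valued integrand, the truncated, discretized propagator for a single segment should agree with the exact evolution under the discretized Hamiltonian. A sketch, with illustrative parameters:

```python
import numpy as np

def H_density(z):
    """Toy 2x2 integrand; H = ∫ H(z) dz over the truncated volume."""
    g = np.exp(-z**2)
    return np.array([[g, 0.5 * g], [0.5 * g, -g]])

V, mu, K, t, r = 10.0, 2000, 6, 1.0, 10
zs = np.linspace(-5.0, 5.0, mu, endpoint=False) + V / (2 * mu)
H_disc = (V / mu) * sum(H_density(z) for z in zs)    # (V/mu) sum_rho H(z_rho)

# The nested sums over rho_1..rho_k collapse to powers of H_disc,
# so the truncated segment is sum_k (-it/r)^k H_disc^k / k!
U = np.zeros((2, 2), dtype=complex)
term = np.eye(2, dtype=complex)
for k in range(K + 1):
    U += term
    term = term @ H_disc * (-1j * t / r) / (k + 1)

evals, P = np.linalg.eigh(H_disc)                    # exact segment propagator
U_exact = P @ np.diag(np.exp(-1j * evals * t / r)) @ P.conj().T
print(np.max(np.abs(U - U_exact)))                   # small truncation error
```

With $\|H\| t/r$ well below 1, the order-K remainder is tiny, which is the regime the segmenting is designed to enforce.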

This is similar to the original Taylor series algorithm, except that we now work with the integrand $H(\vec{z})$, evaluated at some point in space, rather than the original Hamiltonian $H$. We must now be able to write each integrand, most generally, in the form

\[ H(\vec{z}) = \sum_{\gamma=1}^{\Gamma} w_\gamma(\vec{z})\, H_\gamma(\vec{z}) \tag{4.33} \]
and thus

\[ U_r \approx \sum_{k=0}^{K} \frac{(-it\mathcal{V})^k}{r^k \mu^k k!} \sum_{\gamma_1,\dots,\gamma_k=1}^{\Gamma}\; \sum_{\rho_1,\dots,\rho_k=1}^{\mu} w_{\gamma_1}(\vec{z}_{\rho_1}) \cdots w_{\gamma_k}(\vec{z}_{\rho_k})\, H_{\gamma_1}(\vec{z}_{\rho_1}) \cdots H_{\gamma_k}(\vec{z}_{\rho_k}) \tag{4.34} \]

This looks like the Taylor series expansion for constant Hamiltonians, except that there is now a double sum and we work with integrand elements rather than Hamiltonian elements. Additional space complexity results from the extra inner sum, as it requires additional ancillary registers to keep track of the grid points $\vec{z}_{\rho_i}$. As before, we need to encode $|k\rangle$ using one register with Θ(K) qubits, and K registers with Θ(log Γ) qubits each to encode $|\gamma_1\rangle, \dots, |\gamma_k\rangle$. We also need K grid-point registers for $\vec{z}_{\rho_1}, \dots, \vec{z}_{\rho_k}$, so in total we need Θ(K log(µΓ)) additional qubits to store the indices and grid points.

As before, we will again need to construct select (H) and prepare (W), or rather their counterparts select (H) and prepare (w) for the integrands $H_\gamma(\vec{z})$ and $w_\gamma(\vec{z})$, where $H_\gamma = \int H_\gamma(\vec{z})\, d\vec{z}$ and $W_\gamma = \int w_\gamma(\vec{z})\, d\vec{z}$, according to the encoding of the Hamiltonian that we choose. Both constructions will make use of an integrand oracle that returns the value of the one- and two-electron chemistry integrands evaluated at a given point. In the next chapter, Chapter 5, we will construct the integrand oracle. Then, in Chapters

6 and 7, we will detail the full on-the-fly algorithms and their full gate counts.

Chapter 5

The Integrand Oracle

The chemistry Hamiltonian contains one-electron and two-electron integrals of the form

\[ h_{ij} = \int \varphi_i^*(\vec{r}) \left( -\frac{\nabla^2}{2} - \sum_q \frac{Z_q}{|\vec{R}_q - \vec{r}|} \right) \varphi_j(\vec{r})\, d\vec{r}, \tag{5.1} \]
\[ h_{ijk\ell} = \int \frac{\varphi_i^*(\vec{r}_1)\, \varphi_j^*(\vec{r}_2)\, \varphi_\ell(\vec{r}_1)\, \varphi_k(\vec{r}_2)}{|\vec{r}_1 - \vec{r}_2|}\, d\vec{r}_1\, d\vec{r}_2, \tag{5.2} \]
where the integrals are evaluated over all space. Here, $\varphi_i(\vec{r})$ represents the basis function of the i-th spin orbital, evaluated at point $\vec{r}$. The one-electron integrals result from the kinetic term and the nuclear attraction term in the chemistry Hamiltonian, and the two-electron integrals result from the inter-electron repulsion terms.

A major question in developing quantum algorithms for quantum chemistry is how to evaluate these integrals. For the second-quantized encoding of the chemistry Hamiltonian, we first introduce a database algorithm that uses pre-computed molecular integrals, as this is easier to implement experimentally and pedagogically sensible to introduce first. We then introduce a second-quantized algorithm that evaluates these integrals on the fly by discretizing them, and a first-quantized algorithm that evaluates them on the fly in the same way. Both on-the-fly algorithms rely on a circuit that evaluates and returns the value of a given integrand sampled at a given spatial coordinate.

In this chapter we introduce this circuit, which allows us to sample the integrands at a specific point; it will be used to construct select (H) and prepare (w) for the on-the-fly algorithms in Chapters 6 and 7. We will also bound the gate count of this circuit by considering the complexity of the discretized two-electron integral.


5.1 Basis Function Circuit Construction

We would like to introduce a circuit that samples the above integrands at a specific grid point $\vec{z}_\rho$. First we will construct an oracle that evaluates a specific basis function $\varphi_j(\vec{z}_\rho)$:

\[ Q_{\varphi_j} |\rho\rangle |0\rangle^{\otimes \log M} = |\rho\rangle |\tilde{\varphi}_j(\vec{z}_\rho)\rangle, \tag{5.3} \]

where $\tilde{\varphi}_j(\vec{z}_\rho)$ is the binary encoding of $\varphi_j(\vec{z}_\rho)$, requiring log M qubits. We will ultimately need N of these oracles, one for each of the N basis functions $\varphi_1(\vec{z}), \dots, \varphi_N(\vec{z})$. We claim that it is possible to construct these oracles without adding too much additional complexity to the Taylor series algorithm.

We note that it is possible to classically evaluate the basis functions $\varphi_j(\vec{z})$ with complexity that is polylogarithmic in 1/ε. This is because Gaussian functions can be classically evaluated with complexity polylogarithmic in 1/ε, and molecular spin-orbital basis functions have historically been represented as sums of Gaussians [16]. The STO-nG basis set, for example, represents each spin orbital as a linear combination of n Gaussians [16]. Thus we can construct $Q_{\varphi_j}$ simply by building a reversible classical circuit that evaluates and sums Gaussian functions. This may not be the optimal way to construct the quantum circuit $Q_{\varphi_j}$, but it will not negatively impact the complexity of our overall algorithm because its complexity will be polylogarithmic in 1/ε.
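As an illustration of the classical computation that $Q_{\varphi_j}$ would perform reversibly, the sketch below evaluates a contracted s-type Gaussian at a point. The exponents and contraction coefficients are the commonly quoted STO-3G hydrogen 1s parameters; treat the specific values as illustrative:

```python
import numpy as np

# Commonly quoted STO-3G hydrogen 1s parameters (illustrative).
ALPHAS = np.array([3.42525091, 0.62391373, 0.16885540])
COEFFS = np.array([0.15432897, 0.53532814, 0.44463454])

def phi_sto3g(r, center=np.zeros(3)):
    """Sum of three normalized s-type Gaussians evaluated at the point r."""
    d2 = np.sum((np.asarray(r) - center) ** 2)
    norms = (2.0 * ALPHAS / np.pi) ** 0.75      # s-primitive normalization
    return float(np.sum(COEFFS * norms * np.exp(-ALPHAS * d2)))

print(phi_sto3g([0.0, 0.0, 0.0]))   # peak value at the center
print(phi_sto3g([0.0, 0.0, 10.0]))  # exponential decay far from the center
```

The exponential decay visible here is exactly what later justifies truncating the integration domain at a logarithmically growing $x_{max}$.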

Next we construct a circuit that allows us to apply any one of the N basis functions $\varphi_j(\vec{z})$ by combining the N circuits $Q_{\varphi_j}$, with the choice of spin orbital $\varphi_j$ controlled on a register encoding the index j:

\[ Q_\varphi = \prod_{j=1}^{N} |j\rangle\langle j| \otimes Q_{\varphi_j}. \tag{5.4} \]

We claim that the depth of this circuit is O(N polylog(Nt/ε)). The register encoding the index j requires O(log N) qubits, while the fact that each $Q_{\varphi_j}$ must compute sums of Gaussians contributes, by the analysis in the previous paragraph, a factor of O(polylog(Nt/ε)) per $Q_{\varphi_j}$. Finally, each $Q_{\varphi_j}$ needs a register to encode $\vec{z}$, the point at which we evaluate the integrand. The integrals are technically defined over all space, but to bound the complexity of the $\vec{z}$ register, and of the integrand oracle in general, we truncate the domain of integration in a manner that preserves error ε. We can do this because the integrands decay exponentially over space. Specifically, we will demonstrate in the next section that we can pick $\vec{z}_{max}$ so that $|\vec{z}_{max}| \in O(\log(Nt/\epsilon))$. An example implementation of $Q_\varphi$ for four basis functions is shown in Figure 5.1.

[Circuit diagram: control qubits $|j_1\rangle$ and $|j_2\rangle$ select which of $Q_{\varphi_0}, Q_{\varphi_1}, Q_{\varphi_2}, Q_{\varphi_3}$ acts on the registers $|z\rangle|0\rangle$, producing $|\varphi_j(\vec{z})\rangle$.]

Figure 5.1: An oracle that returns the value of a particular basis function at a particular position $\vec{z}$, depending on the state of an ancilla register $|j\rangle$ that selects the oracle. Here j is represented in binary, where $j_1$ refers to the first bit of j and $j_2$ to the second. This example is only valid when there are four basis functions.
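Classically, a product of operations controlled on the index register, as in Eq. (5.4) and Figure 5.1, is equivalent to the block-diagonal matrix $\sum_j |j\rangle\langle j| \otimes Q_{\varphi_j}$. The sketch below builds this matrix from four stand-in "basis-function" unitaries and checks that acting on $|j\rangle \otimes |\psi\rangle$ applies the j-th block:

```python
import numpy as np

rng = np.random.default_rng(7)

def random_unitary(n):
    """Haar-ish random unitary via QR, as a stand-in for a Q_phi_j oracle."""
    q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return q

Qs = [random_unitary(2) for _ in range(4)]     # four stand-in basis-function oracles
select = np.zeros((8, 8), dtype=complex)
for j, Qj in enumerate(Qs):                    # sum_j |j><j| (x) Q_j, block diagonal
    proj = np.zeros((4, 4))
    proj[j, j] = 1.0
    select += np.kron(proj, Qj)

psi = np.array([1.0, 0.0], dtype=complex)
for j in range(4):                             # |j> (x) |psi>  ->  |j> (x) Q_j |psi>
    ej = np.zeros(4)
    ej[j] = 1.0
    assert np.allclose(select @ np.kron(ej, psi), np.kron(ej, Qs[j] @ psi))
print("select is unitary:", np.allclose(select.conj().T @ select, np.eye(8)))
```

Since exactly one control branch fires for each index value, the sequential controlled gates of Figure 5.1 and the block-diagonal sum are the same operator.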

5.2 Integrand Circuit Complexity

Now we describe how to use the circuit $Q_\varphi$ constructed in the previous section to actually compute the integrands. Specifically, we discuss how to compute two-electron integrands of the form

\[ h_{ijk\ell}(\vec{x}, \vec{y}) = \frac{\varphi_i^*(\vec{x})\, \varphi_j^*(\vec{y})\, \varphi_\ell(\vec{x})\, \varphi_k(\vec{y})}{|\vec{x} - \vec{y}|}. \tag{5.5} \]

The complexity of computing the two-electron integrands dominates that of the one-electron integrands, so we focus on just the two-electron integrands. First we bound the complexity that results from discretizing and evaluating the integrals; then, in the next section, we describe how to call the $Q_\varphi$ circuit each time we need to evaluate an integrand.

We start by changing variables to $\vec{\xi} = \vec{x} - \vec{y}$ to avoid the singularities that arise when two electrons occupy the same point in space. The integral then becomes

\[ \int \frac{\varphi_i^*(\vec{x})\, \varphi_j^*(\vec{x} - \vec{\xi})\, \varphi_\ell(\vec{x})\, \varphi_k(\vec{x} - \vec{\xi})}{|\vec{\xi}|}\, d^3\vec{\xi}\, d^3\vec{x} \tag{5.6} \]

Next, changing variables to spherical coordinates with $\xi = |\vec{\xi}|$, we have
\[ \int \varphi_i^*(\vec{x})\, \varphi_j^*(\vec{x} - \vec{\xi})\, \varphi_\ell(\vec{x})\, \varphi_k(\vec{x} - \vec{\xi})\, \xi \sin\theta\, d\xi\, d\theta\, d\phi\, d^3\vec{x} \tag{5.7} \]
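The point of the change of variables is that the $1/|\vec{\xi}|$ singularity becomes integrable, so pointwise sampling of the integrand is well behaved. As a numerical sanity check (a Monte Carlo stand-in for the deterministic grid described below), sampling $\vec{x}$ and $\vec{y}$ from the density $\varphi^2$ of a normalized s-Gaussian $\varphi(r) = (2a/\pi)^{3/4} e^{-a r^2}$ turns the repulsion integral into $\mathbb{E}[1/|\vec{x}-\vec{y}|]$, whose analytic value is $2\sqrt{a/\pi}$:

```python
import numpy as np

# Two-electron self-repulsion of a normalized s-Gaussian, estimated by sampling
# the 1/|x - y| integrand pointwise; the singularity is integrable, so the
# estimator stays finite. Analytic value for exponent a: 2 * sqrt(a / pi).
a = 1.0
rng = np.random.default_rng(0)
n = 400_000
std = 1.0 / np.sqrt(4.0 * a)                 # per-axis std of the density phi^2
x = rng.normal(scale=std, size=(n, 3))
y = rng.normal(scale=std, size=(n, 3))
estimate = np.mean(1.0 / np.linalg.norm(x - y, axis=1))
print(estimate, 2.0 * np.sqrt(a / np.pi))    # Monte Carlo vs analytic
```

This is only a check that pointwise evaluation of the repulsion integrand is well posed; the algorithm itself uses the deterministic discretization analyzed next.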

Now we will truncate the integral at some finite $x_{max}$ and upper bound the integral by discretizing it and then upper bounding the discretized version. We start by letting $\varphi_{max}$ and $\varphi'_{max}$ denote the maximum values, over the truncated domain of integration, of any spin-orbital function and of the derivative of any spin-orbital function, respectively. We also discretize $\vec{x}$ in units of $\delta x$ in each direction and note that the maximum value of ξ is also $x_{max}$. Finally, we discretize the other dimensions as $\delta\xi = \delta x$, $\delta\theta = \delta\phi = \delta x / x_{max}$.

Now we can bound the discretized value of the integral in Eq. (5.7). We note that the maximum value the integrand can take is upper bounded by $x_{max}\, \varphi_{max}^4$. Considering the integral as a discretized sum, each term in the sum will thus be bounded by $O(x_{max}\, \varphi_{max}^4\, \delta x^4\, (\delta x / x_{max})^2) = O(\varphi_{max}^4\, \delta x^6 / x_{max})$. There are $O((x_{max}/\delta x)^6)$ terms, so the total integral will scale as
\[ O(\varphi_{max}^4\, x_{max}^5) \tag{5.8} \]

We would like to express this in terms of N. We note that $\varphi_{max}$ is determined by the model chemistry and is thus independent of N, so we need to find the dependence of $x_{max}$ on N.

First we note that there are $O(N^4)$ two-electron integral terms that will ultimately be summed in the Hamiltonian, so the allowable simulation error for each integral must be $O(\epsilon/(N^4 t))$ in order for the sum to have error ε. Next, we use the fact that we can choose standard basis functions $\varphi_j(\vec{z})$ that decay exponentially as a function of $\vec{z}$, which allows us to choose an $x_{max}$ that is logarithmic in the allowable error for the integral. Finally, we also note that molecular volume grows as O(N), so the distance between basis functions on different atoms must grow as $O(N^{1/3})$. In total we obtain the scaling

\[ x_{max} \in O\!\left( N^{1/3} \log(Nt/\epsilon) \right) \tag{5.9} \]

We further note that in a model chemistry, each individual orbital $\varphi_j(\vec{z})$ is non-negligible only on a region that grows as O(log N). We can then modify the grid used to discretize the integral so that it includes only points where the orbitals take non-negligible values. We can do this at unit cost by specifying the center of each spin orbital in an additional register used to query the circuit $Q_\varphi$. This means we can choose the coordinates of $\vec{x}$ to be centered on the region where the orbital is non-negligible. Then

\[ x_{max} \in O(\log(Nt/\epsilon)) \tag{5.10} \]

We briefly note that using spherical coordinates is advantageous only in the region where $\vec{\xi}$ is near zero. In regions where $\vec{x}$ is large, the spherical-coordinates version of the integral has an extra factor of ξ that increases the complexity. Specifically, where it is acceptable to use Cartesian coordinates, we obtain a scaling of $O(\varphi_{max}^4\, x_{max}^3)$ for the integral. Thus it makes sense to switch to Cartesian coordinates when the minimum value of $|\vec{\xi}|$ such that $\varphi_j(\vec{x} - \vec{\xi})$ is non-negligible is $\Omega(\log(Nt/\epsilon))$.

Next we determine the appropriate grid size for discretizing the integrals in order to achieve error ε. First we consider the analysis for the case of Cartesian coordinates. Then we have a six-dimensional block with sides of length $\delta x$. Because the error in the integrand scales as the maximum derivative of the integrand times $\delta x$, the error on the six-dimensional block scales as this maximum derivative times $\delta x^7$. We also note that the total number of blocks will be $O((x_{max}/\delta x)^6)$, so multiplying the two factors, the overall error scaling will be $x_{max}^6\, \delta x$ times the maximum derivative of the integrand.

Thus we now need to find a way to bound the maximum derivative of the integrand. In Cartesian coordinates, where $|\vec{\xi}| = \Omega(x_{max})$ and the integrand scales like $O(\varphi_{max}^4 / x_{max})$, the derivative of the integrand with respect to a component of $\vec{x}$, or with respect to a component of $\vec{\xi}$ in the numerator, scales like
\[ O\!\left( \frac{\varphi'_{max}\, \varphi_{max}^3}{x_{max}} \right) \tag{5.11} \]

The derivative of the integrand with respect to a component of $\vec{\xi}$ in the denominator scales like
\[ O\!\left( \frac{\varphi_{max}^4}{x_{max}^2} \right) \tag{5.12} \]
Then, considering both types of derivatives, the total error we get from performing the discretization in Cartesian coordinates is
\[ O\!\left( (\varphi'_{max} + \varphi_{max}/x_{max})\, \varphi_{max}^3\, x_{max}^5\, \delta x \right) \tag{5.13} \]

In spherical coordinates we can start by rescaling the angular variables:
\[ \theta' \equiv x_{max}\theta, \qquad \phi' \equiv x_{max}\phi \tag{5.14} \]

Again each block has volume $\delta x^6$, and there are $O((x_{max}/\delta x)^6)$ such blocks, so the total error will again be the maximum derivative of the integrand multiplied by $x_{max}^6\, \delta x$. With the rescaled angular variables, we find that the derivative of the integrand with respect to any component of $\vec{x}$, or with respect to ξ, θ′, or φ′ appearing in any of the spin orbitals, is given by Eq. (5.11), while the derivative of the integrand with respect to ξ or θ′ as it appears in the factor $\xi \sin(\theta'/x_{max})$ scales like Eq. (5.12).

Thus the spherical and Cartesian cases end up incurring the same error from discretization. Using the expression in Eq. (5.13), we see that we need to choose grid size $\delta x$ so that
\[ \delta x \in \Theta\!\left( \frac{\epsilon}{N^4 t\, (\varphi'_{max} + \varphi_{max}/x_{max})\, \varphi_{max}^3\, x_{max}^5} \right) \tag{5.15} \]
Then the total number of terms in the sum scales as
\[ O\!\left( \left( \frac{x_{max}}{\delta x} \right)^{\!6} \right) = \Theta\!\left( \left( \frac{N^4 t}{\epsilon}\, (\varphi'_{max} + \varphi_{max}/x_{max})\, \varphi_{max}^3\, x_{max}^6 \right)^{\!6} \right) \tag{5.16} \]
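The analysis above is first order in the grid spacing: the discretization error shrinks like one power of $\delta x$. A 1D left-endpoint Riemann sum of a smooth toy integrand shows the same behaviour, with the error roughly halving each time the grid is refined:

```python
import numpy as np

f = lambda z: np.exp(-z**2) * np.cos(z)   # smooth toy integrand on [0, 4]

def left_sum(n):
    """Left-endpoint Riemann sum of f over [0, 4] with n grid points."""
    z = np.linspace(0.0, 4.0, n, endpoint=False)
    return np.sum(f(z)) * (4.0 / n)

ref = left_sum(2_000_000)                 # fine-grid reference value
errs = [abs(left_sum(n) - ref) for n in (100, 200, 400)]
ratios = [e1 / e2 for e1, e2 in zip(errs, errs[1:])]
print(ratios)                             # both ratios close to 2: error ∝ dx
```

This is only a 1D illustration of the error model; the six-dimensional case of Eq. (5.15) compounds the same per-block bound over $O((x_{max}/\delta x)^6)$ blocks.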

[Circuit diagram: index registers $|i\rangle, |j\rangle, |\ell\rangle, |k\rangle$ control four applications of $Q_\varphi$ to the coordinate registers $|\vec{x}\rangle$ and $|\vec{\xi}\rangle$, producing $|\varphi_i(\vec{x})\rangle$, $|\varphi_j(\vec{x}-\vec{\xi})\rangle$, $|\varphi_\ell(\vec{x})\rangle$, and $|\varphi_k(\vec{x}-\vec{\xi})\rangle$; the circuit R produces $|\xi\sin\theta\rangle$, and the multiplier M combines the five values into $|h_{ij\ell k}(\vec{x}, \vec{x}-\vec{\xi})\rangle$.]

Figure 5.2: Circuit to sample the integrand of Eq. (5.17). The circuit combines four copies of Qϕ with R and M. The target registers for Qϕ and R are denoted by boxes, and the control registers are denoted by circles and ovals.

Although this expression is large, the number of qubits we use scales as its logarithm, which gives us a factor of $O(\log(Nt/\epsilon))$.

5.3 Integrand Circuit Construction

Finally we describe how to compute the integrand in Eq. (5.7) using the circuit Qϕ.

First we note that we need to call $Q_\varphi$ four times, because four basis functions appear in the integrand, and these four calls need access to the registers encoding $\vec{x}$ and $\vec{\xi}$. We will also define a circuit R so that
\[ R\, |\vec{\xi}\rangle |0\rangle = |\vec{\xi}\rangle |\xi \sin\theta\rangle \tag{5.17} \]
and a reversible circuit M that multiplies five registers together. This allows us to compute
\[ \varphi_i^*(\vec{x})\, \varphi_j^*(\vec{x} - \vec{\xi})\, \varphi_\ell(\vec{x})\, \varphi_k(\vec{x} - \vec{\xi})\, \xi \sin\theta \tag{5.18} \]

The operation of the full circuit is depicted in Figure 5.2.
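A classical emulation of this pipeline (four basis-function evaluations, the $\xi\sin\theta$ register from R, and the multiplier M) reproduces Eq. (5.18). The Gaussian basis functions and exponents below are illustrative stand-ins, taken real so the conjugations are trivial:

```python
import numpy as np

ALPHAS = [1.0, 0.5, 0.3, 0.8]            # hypothetical Gaussian exponents

def phi(j, r):
    """Stand-in (real) basis function j: a single normalized s Gaussian."""
    a = ALPHAS[j]
    return (2.0 * a / np.pi) ** 0.75 * np.exp(-a * np.dot(r, r))

def integrand(i, j, l, k, x, xi_vec):
    """Eq. (5.18): four oracle calls, the R register, and the multiplier M."""
    xi = np.linalg.norm(xi_vec)
    theta = np.arccos(xi_vec[2] / xi)
    r_val = xi * np.sin(theta)           # what the circuit R would write out
    vals = [phi(i, x), phi(j, x - xi_vec), phi(l, x), phi(k, x - xi_vec)]
    return np.prod(vals) * r_val         # M multiplies the five registers

print(integrand(0, 1, 2, 3, np.array([0.1, 0.2, 0.3]), np.array([0.4, 0.0, 0.3])))
```

In the quantum version each of these function evaluations is a reversible circuit acting on its own output register, and M writes the product into a final register.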

Next we describe how to compute the integrand of the one-electron integrals using a construction equivalent to $Q_\varphi$, although the complexity will be dominated by the computation of the two-electron integrals. First we need to construct N circuits in the spirit of Eq. (5.3) so that
\[ Q_{\nabla^2 \varphi_j} |\rho\rangle |0\rangle^{\otimes \log M} = |\rho\rangle |\widetilde{\nabla^2 \varphi_j}(\vec{z}_\rho)\rangle \tag{5.19} \]

Again, like the basis-function oracles, these can be constructed without adding additional complexity to the algorithm. We can also combine the $Q_{\nabla^2 \varphi_j}$ into a circuit $Q_{\nabla^2 \varphi}$:
\[ Q_{\nabla^2 \varphi} = \prod_{j=1}^{N} |j\rangle\langle j| \otimes Q_{\nabla^2 \varphi_j}. \tag{5.20} \]
We can combine this oracle with a routine that computes the nuclear Coulomb interactions, giving us a one-electron version of the $Q_\varphi$ routine. As in the two-electron case, we can switch to spherical coordinates, centered at the nuclei, to avoid singularities. We can switch between the one- and two-electron routines by selecting on the register $|\gamma\rangle = |i, j, k, \ell\rangle$, where $|\gamma\rangle$ is as described in the previous section.

The full integrand circuit has a total gate count dominated by the circuit $Q_\varphi$, so its gate count is $\widetilde{O}(N)$.

5.4 Summary

We have described how to construct a circuit that samples a one- or two-electron integrand evaluated at a given point.

In Chapters 6 and 7 we will outline adaptations of the Taylor series algorithm of Chapter 4 that compute the molecular integrals on the fly using the integrand oracle described in this chapter. In particular, the integrand oracle will be used to construct the operators select (H) and prepare (w) (the integrand versions of select (H) and prepare (W)), and it will make a significant contribution to the complexity of the on-the-fly algorithms.

Chapter 6

Second-quantized Algorithms

In Chapter 3, we introduced the chemistry Hamiltonian,

\[ H = -\sum_i \frac{\nabla_{\vec{r}_i}^2}{2} - \sum_{i,j} \frac{Z_i}{|\vec{R}_i - \vec{r}_j|} + \sum_{i,\, j>i} \frac{1}{|\vec{r}_i - \vec{r}_j|} \tag{6.1} \]
where the $\vec{R}_i$ are the coordinates of the nuclei, the $Z_i$ are the nuclear charges, and the $\vec{r}_i$ are the coordinates of the electrons. We represent the system in a basis of N single-particle spin-orbital functions φ, so that $\varphi_i(\vec{r}_j)$ denotes the i-th spin orbital evaluated at the position of the j-th electron.

In second-quantized form, the Hamiltonian is rewritten in terms of raising and lowering operators that automatically enforce fermionic antisymmetry (see Chapter 3 for more explanation). The second-quantized chemistry Hamiltonian is the following:

\[ H = \sum_{i,j} h_{ij}\, a_i^\dagger a_j + \frac{1}{2} \sum_{i,j,k,\ell} h_{ijk\ell}\, a_i^\dagger a_j^\dagger a_k a_\ell \tag{6.2} \]
where $h_{ij}$ and $h_{ijk\ell}$ denote the following one- and two-electron integrals:

\[ h_{ij} = \int \varphi_i^*(\vec{r}) \left( -\frac{\nabla^2}{2} - \sum_q \frac{Z_q}{|\vec{R}_q - \vec{r}|} \right) \varphi_j(\vec{r})\, d\vec{r} \tag{6.3} \]
\[ h_{ijk\ell} = \int \frac{\varphi_i^*(\vec{r}_1)\, \varphi_j^*(\vec{r}_2)\, \varphi_\ell(\vec{r}_1)\, \varphi_k(\vec{r}_2)}{|\vec{r}_1 - \vec{r}_2|}\, d\vec{r}_1\, d\vec{r}_2 \tag{6.4} \]

and the operators $a_i^\dagger$ and $a_j$ obey the fermionic anticommutation relations:

\[ \{a_i, a_j^\dagger\} = \delta_{ij}, \qquad \{a_i^\dagger, a_j^\dagger\} = \{a_i, a_j\} = 0 \tag{6.5} \]


In general the second-quantized Hamiltonian contains $O(N^4)$ terms, and this encoding of the Hamiltonian requires Θ(N) qubits, one for each spin orbital.

Having established the form of the second-quantized Hamiltonian, we proceed to describe how to adapt the Taylor series algorithm of Chapter 4 to it. This involves first mapping second-quantized states onto the appropriate qubit encoding, and then finding a way to construct the operators select (H) and prepare (W) (for the database algorithm) or prepare (w) (for the on-the-fly algorithm). We will present two ways to construct the operators, resulting in two different algorithms: the first uses pre-computed molecular integrals $h_{ij}$ and $h_{ijk\ell}$, and its complexity scales like $\widetilde{O}(N^8 t)$; we refer to it as the database algorithm. The second algorithm computes molecular integrals on the fly using the integrand oracle of Chapter 5, and it scales like $\widetilde{O}(N^5 t)$.

6.1 Hamiltonian Oracle

In Chapter 4, we noted that the Taylor series algorithm can be applied to a Hamiltonian that can be written in the form $H = \sum_{\gamma=1}^{\Gamma} W_\gamma H_\gamma$, and that some means of applying each $H_\gamma$ must be available in order to implement select (H). Looking again at Eq. (6.2), we see that the Hamiltonian is already in the desired form. We also trivially have a way to apply the desired $H_\gamma$: simply apply the desired sequence of raising and lowering operators, $a_i^\dagger a_j$ or $a_i^\dagger a_j^\dagger a_k a_\ell$.

However, we note that the raising and lowering operators act on fermions, which are indistinguishable particles with antisymmetric wavefunctions. Qubits are distinguishable and have no such symmetry. The main challenge in constructing the Hamiltonian oracle select (H), then, will be finding a mapping from the fermionic operators to operators that can act on qubits.

The qubit raising and lowering operators are easy to specify:

\[ \sigma_j^+ = |1\rangle\langle 0|_j = \frac{1}{2}\left(\sigma_j^x - i\sigma_j^y\right) \tag{6.6} \]
\[ \sigma_j^- = |0\rangle\langle 1|_j = \frac{1}{2}\left(\sigma_j^x + i\sigma_j^y\right) \tag{6.7} \]

where $\sigma_j^x$, $\sigma_j^y$, and $\sigma_j^z$ denote the Pauli matrices acting on the j-th tensor factor. However, these qubit operators do not satisfy the fermionic anticommutation relations, so we need to apply either the Jordan-Wigner transformation [36, 37] or the Bravyi-Kitaev transformation [38–40] in order to enforce antisymmetry.

An additional requirement on the fermionic raising and lowering operators is that $a_j$ and $a_j^\dagger$ introduce a phase depending on the parity of the occupancies of all orbitals with index less than j. Specifically, if we let $f_j \in \{0, 1\}$ denote the occupancy of orbital j, then we need to have
\[ a_j^\dagger\, |f_N \dots f_{j+1}\, 0\, f_{j-1} \dots f_1\rangle = (-1)^{\sum_{s=1}^{j-1} f_s}\, |f_N \dots f_{j+1}\, 1\, f_{j-1} \dots f_1\rangle \tag{6.8} \]
\[ a_j\, |f_N \dots f_{j+1}\, 1\, f_{j-1} \dots f_1\rangle = (-1)^{\sum_{s=1}^{j-1} f_s}\, |f_N \dots f_{j+1}\, 0\, f_{j-1} \dots f_1\rangle \tag{6.9} \]
\[ a_j^\dagger\, |f_N \dots f_{j+1}\, 1\, f_{j-1} \dots f_1\rangle = 0 \tag{6.10} \]
\[ a_j\, |f_N \dots f_{j+1}\, 0\, f_{j-1} \dots f_1\rangle = 0 \tag{6.11} \]

The Jordan-Wigner transformation accomplishes this by mapping the occupancy of each spin orbital j directly onto the state of qubit j, so the Jordan-Wigner operators are N-local tensor products of up to N Pauli operators. Specifically,
\[ a_j^\dagger \equiv \sigma_j^+ \bigotimes_{s=1}^{j-1} \sigma_s^z = \frac{1}{2}\left(\sigma_j^x - i\sigma_j^y\right) \otimes \sigma_{j-1}^z \otimes \cdots \otimes \sigma_1^z \tag{6.12} \]
\[ a_j \equiv \sigma_j^- \bigotimes_{s=1}^{j-1} \sigma_s^z = \frac{1}{2}\left(\sigma_j^x + i\sigma_j^y\right) \otimes \sigma_{j-1}^z \otimes \cdots \otimes \sigma_1^z \tag{6.13} \]
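The Jordan-Wigner operators of Eqs. (6.12) and (6.13) can be built explicitly for a small register and checked against the fermionic anticommutation relations of Eq. (6.5):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sminus = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma^- = (sigma^x + i sigma^y)/2

def lowering(j, n):
    """Jordan-Wigner a_j on n qubits, with qubit 1 the rightmost tensor factor."""
    ops = [sz] * (j - 1) + [sminus] + [I2] * (n - j)  # sites 1..n, in order
    out = np.array([[1.0 + 0j]])
    for op in ops:                                    # each kron puts the new site on the left
        out = np.kron(op, out)
    return out

n = 4
a = [lowering(j, n) for j in range(1, n + 1)]
anti = lambda A, B: A @ B + B @ A
ok = all(np.allclose(anti(a[i], a[j].conj().T), (i == j) * np.eye(2**n))
         and np.allclose(anti(a[i], a[j]), 0.0)
         for i in range(n) for j in range(n))
print("fermionic anticommutation relations hold:", ok)
```

The Z strings are what supply the parity-dependent phases of Eqs. (6.8) and (6.9); dropping them would leave ordinary qubit operators that commute on distinct sites instead of anticommuting.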

However, we cannot directly apply the operators constructed in Eqs. (6.12) and (6.13), because the operators $\sigma^+$ and $\sigma^-$ are not unitary. We can fix this by adding four qubits to the selection register, indicating whether the $\sigma^x$ or the $i\sigma^y$ part of each $\sigma^+$ or $\sigma^-$ operator should be applied. We define new, unitary fermionic operators $a_{j,q}^\dagger$ and $a_{j,q}$ with $q \in \{0, 1\}$ so that

\[ a_{j,0}^\dagger \equiv \sigma_j^x \bigotimes_{s=1}^{j-1} \sigma_s^z, \qquad a_{j,1}^\dagger \equiv -i\,\sigma_j^y \bigotimes_{s=1}^{j-1} \sigma_s^z, \tag{6.14} \]
\[ a_{j,0} \equiv \sigma_j^x \bigotimes_{s=1}^{j-1} \sigma_s^z, \qquad a_{j,1} \equiv i\,\sigma_j^y \bigotimes_{s=1}^{j-1} \sigma_s^z \tag{6.15} \]
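On a single site (j = 1, so the Z string is empty), the splitting is easy to verify directly: $a_{1,0} = \sigma^x$ and $a_{1,1} = i\sigma^y$ are unitary, and their average recovers the non-unitary $a_1 = \sigma^-$:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
a10, a11 = sx, 1j * sy                   # Eq. (6.15) at j = 1 (empty Z string)
a1 = 0.5 * (a10 + a11)                   # recovers sigma^- = |0><1|
print(np.allclose(a1, [[0, 1], [0, 0]]))                                # True
print(all(np.allclose(U.conj().T @ U, np.eye(2)) for U in (a10, a11)))  # True
```

This is the whole point of the extra q qubits: each branch selected by q is unitary, so it can appear inside select (H), while the sum over q reassembles the original ladder operators.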

Then, using our newly defined operators, we can rewrite the second-quantized Hamiltonian to obtain the following:
\[ H = \sum_{q_1, q_2} \sum_{i,j} \frac{h_{ij}}{4}\, a_{i,q_1}^\dagger a_{j,q_2} + \sum_{q_1, q_2, q_3, q_4} \sum_{i,j,k,\ell} \frac{h_{ijk\ell}}{32}\, a_{i,q_1}^\dagger a_{j,q_2}^\dagger a_{k,q_3} a_{\ell,q_4} \tag{6.16} \]

Whenever we want to apply select (H), we use a register with four additional qubits and apply our newly defined operators in sequence:
\[ |ijk\ell\rangle |q_1 q_2 q_3 q_4\rangle |\psi\rangle \mapsto |ijk\ell\rangle |q_1 q_2 q_3 q_4\rangle\, a_{\ell,q_4} |\psi\rangle \tag{6.17} \]
\[ \mapsto |ijk\ell\rangle |q_1 q_2 q_3 q_4\rangle\, a_{k,q_3} a_{\ell,q_4} |\psi\rangle \tag{6.18} \]
\[ \mapsto |ijk\ell\rangle |q_1 q_2 q_3 q_4\rangle\, a_{j,q_2}^\dagger a_{k,q_3} a_{\ell,q_4} |\psi\rangle \tag{6.19} \]
\[ \mapsto |ijk\ell\rangle |q_1 q_2 q_3 q_4\rangle\, a_{i,q_1}^\dagger a_{j,q_2}^\dagger a_{k,q_3} a_{\ell,q_4} |\psi\rangle \tag{6.20} \]

The Jordan-Wigner transformation itself can be accomplished in O(1) time, and the number of gates it requires, and by extension the number of gates that select (H) requires, will be O(N), since the Jordan-Wigner operators are N-local. Meanwhile, each query takes O(1) time, since each of $a_{j,0}$, $a_{j,1}$, $a_{j,0}^\dagger$, and $a_{j,1}^\dagger$ can be applied in parallel.

We note that under the Bravyi-Kitaev transformation each fermionic operator acts on only O(log N) qubits [38–40], so select (H) could likewise be implemented with O(log N) gates. However, the Bravyi-Kitaev transformation is more complicated, and the complexity of our overall algorithm would not change, because the implementation of prepare (W) requires at least O(N) qubits.

We also note that it is possible to implement select (H) by storing all the Pauli strings that appear in the Hamiltonian in a database and then accessing the appropriate string each time we want to apply select (H). However, searching the database would have time complexity Ω(Γ), where Γ is $O(N^4)$, so we would prefer to perform the Jordan-Wigner transformation dynamically. We mention the database approach as an aside because it is simpler to construct and may be better suited to early experimental implementations.

6.2 Simulating Hamiltonian Evolution with Pre-Computed Integrals: the Database Algorithm

Now we outline the steps involved in applying the Taylor series algorithm to the second-quantized Hamiltonian for the database algorithm, which uses a database of pre-computed molecular integrals. This will be similar to the procedure we detailed in Chapter 4, except that here we completely specify all bounds, since we can now explicitly construct prepare (W) and select (H); note that select (H) was just constructed in the previous section.

Table 6.1: Parameters and bounds for the database algorithm

Parameter | Description | Bound
Λ | normalization factor, Eq. (4.12) | $O(N^4)$
r | number of time segments, Eq. (4.17) | $\Lambda t / \ln(2)$
K | truncation point for Taylor series, Eq. (4.4) | $O\!\left(\frac{\log(r/\epsilon)}{\log\log(r/\epsilon)}\right)$
Γ | number of terms in unitary decomposition, Eq. (4.1) | $O(N^4)$
J | number of ancilla qubits in selection register, Eq. (4.7) | $\Theta(K \log \Gamma)$

To construct prepare (W), we note that because we have assumed the existence of a database of integrals, it can be implemented classically with O(Γ) gates. Thus the gate count for prepare (β) scales like O(KΓ), though this can further be parallelized to depth O(K) + O(Γ).

Following the treatment in Chapter 4, the steps in the algorithm are as follows:

1. Express the Hamiltonian as a weighted sum of unitary operators, $H = \sum_{\gamma=1}^{\Gamma} W_\gamma H_\gamma$. This has already been done in the previous section by the form of the second-quantized Hamiltonian, as in Eq. (6.16).

2. Subdivide the simulation into $r = \Lambda t / \ln(2)$ time segments, where t is the total simulation time and $\Lambda = \sum_{\gamma=1}^{\Gamma} |W_\gamma|$.

3. Expand the evolution for time t/r, as in Eq. (4.2).

4. For each segment, we do the following:

(a) Apply prepare (W) to obtain the state $\sqrt{\frac{1}{\Lambda}} \sum_{\gamma=1}^{\Gamma} \sqrt{W_\gamma}\, |\gamma\rangle$.

(b) Apply a series of controlled rotations by $\theta_k$, where $\theta_k$ is as given in Eq. (4.14). Overall, Steps (a) and (b) combined give us the operator prepare (β), which produces the state in Eq. (4.13).

(c) Use the ancillae prepared in Steps (a) and (b) as controls for the operation select (V), which performs K controlled applications of select (H) along with K phase shifts, and which is described in Eq. (4.8).

(d) Apply prepare (β)$^T$ to clear out the ancilla registers.

(e) Apply oblivious amplitude amplification to obtain the desired state with unit probability.
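The overall guarantee behind these steps, that composing r truncated-Taylor segments of size $\ln(2)/\Lambda$ tracks $e^{-iHt}$, can be sketched numerically. This classical check multiplies the segment matrices directly rather than implementing the prepare and select circuits, and the weighted Pauli decomposition is a stand-in for Eq. (6.16):

```python
import math
from functools import reduce
import numpy as np

# Stand-in weighted Pauli decomposition H = sum_g W_g H_g (cf. Eq. (6.16))
paulis = [np.eye(2, dtype=complex),
          np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]]),
          np.array([[1, 0], [0, -1]], dtype=complex)]
rng = np.random.default_rng(1)
terms = [reduce(np.kron, [paulis[i] for i in row])
         for row in rng.integers(0, 4, size=(6, 3))]   # Gamma = 6 strings on 3 qubits
weights = rng.uniform(0.1, 1.0, size=6)                # W_g > 0
H = sum(w * P for w, P in zip(weights, terms))

Lam, t, K = weights.sum(), 2.0, 8
r = int(np.ceil(Lam * t / np.log(2)))                  # segment size at most ln(2)/Lam
seg = sum(np.linalg.matrix_power(-1j * (t / r) * H, k) / math.factorial(k)
          for k in range(K + 1))                       # truncated Taylor segment
U = np.linalg.matrix_power(seg, r)                     # compose the r segments

evals, evecs = np.linalg.eigh(H)
U_exact = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T
err = np.max(np.abs(U - U_exact))
print(r, err)
```

Because each segment's exponent has norm at most ln(2), a modest truncation order K already makes the per-segment error, and hence the total error over all r segments, very small; this is the logarithmic precision dependence the chapter is after.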

Now we will reproduce Table 4.1 and Table 4.2 with the bounds completely filled in for the database algorithm. These appear in Table 6.1 and Table 6.2.

As we mentioned in Chapter 4, the gate count of the entire algorithm is r times the cost of implementing select (V) plus r times the cost of implementing prepare (β).

Table 6.2: Operators and gate counts for the database algorithm

Operator | Description | Gate Count
select (H) | apply specified terms from decomposition, Eq. (4.9) | O(N)
select (V) | apply specified product of terms, Eq. (4.8) | O(NK)
prepare (W) | prepare weighted superposition, Eq. (4.11) | O(Γ)
prepare (β) | prepare weighted superposition, Eq. (4.10) | O(KΓ)
W | probabilistically simulate for time t/r, Eq. (4.15) | O(KΓ)
P | projection operator | Θ(K log Γ)
G | amplification to implement sum of unitaries, Eq. (4.19) | O(KΓ)
(PG)^r | entire algorithm | O(rKΓ)

In the database algorithm, the gate cost of implementing select (V) is O(NK), and the gate cost of implementing prepare (β) is O(KΓ). Thus, the total cost of the database algorithm scales like

\[ O(rK\Gamma) = O\!\left( N^4 \Lambda t\, \frac{\log(Nt/\epsilon)}{\log\log(Nt/\epsilon)} \right) = \widetilde{O}(N^8 t) \tag{6.22} \]

In terms of ε, this is already exponentially more precise than Trotter-based algorithms. In the next section we will outline the on-the-fly algorithm, which is even more efficient than the database algorithm in terms of its dependence on N.

6.3 Simulating Evolution Under Integral Hamiltonians: the On-the-Fly Algorithm

We note that the most costly part of the database algorithm was implementing prepare (W), which prepares a superposition of states weighted by integrals that we classically pre-computed and stored in a database. In the on-the-fly algorithm, we discretize the integrals and numerically sample them on the fly.

In the last section of Chapter 4, we described how to adapt the Taylor series algorithm for a Hamiltonian H that is itself an integral over a spatial region $\mathcal{Z}$: $H = \int_{\mathcal{Z}} H(\vec{z})\, d\vec{z}$. In this case, the Hamiltonian is of the form $H = \sum_{\gamma=1}^{\Gamma} W_\gamma H_\gamma$ where
\[ W_\gamma = \int_{\mathcal{Z}} w_\gamma(\vec{z})\, d\vec{z} \tag{6.23} \]

Then we can discretize the integrals like so:

\[ W_\gamma \approx \frac{\mathcal{V}}{\mu} \sum_{\rho=1}^{\mu} w_\gamma(\vec{z}_\rho) \tag{6.24} \]

where $\vec{z}_\rho$ is the point in the domain of integration located at grid point ρ, and where $\mathcal{V}$ is the truncated volume that we choose to perform the integral over (recall that the one- and two-electron integrals were originally defined over all space).

In Eq. (4.28) of Chapter 4, we showed that for a Hamiltonian that can generally be written as $H = \int_{\mathcal{Z}} H(\vec{z})\, d\vec{z}$ and then discretized as $H \approx \frac{\mathcal{V}}{\mu} \sum_{\rho=1}^{\mu} H(\vec{z}_\rho)$, the propagator can be discretized as $U_r \approx \sum_{k=0}^{K} \frac{(-it\mathcal{V})^k}{r^k \mu^k k!} \sum_{\rho_1,\dots,\rho_k=1}^{\mu} H(\vec{z}_{\rho_1}) \cdots H(\vec{z}_{\rho_k})$. For our particular second-quantized encoding,
\[ H(\vec{z}) = \sum_{\gamma=1}^{\Gamma} w_\gamma(\vec{z}) H_\gamma \tag{6.25} \]
and
\[ U_r \approx \sum_{k=0}^{K} \frac{(-it\mathcal{V})^k}{r^k \mu^k k!} \sum_{\gamma_1,\dots,\gamma_k=1}^{\Gamma}\; \sum_{\rho_1,\dots,\rho_k=1}^{\mu} w_{\gamma_1}(\vec{z}_{\rho_1}) \cdots w_{\gamma_k}(\vec{z}_{\rho_k})\, H_{\gamma_1} \cdots H_{\gamma_k} \tag{6.26} \]
We note that the coordinate we have denoted $\vec{z}$ ought to include two spatial coordinates, since the two-electron integrals are integrals over two spatial coordinates while the one-electron integrals are defined over one spatial coordinate. We can handle this by defining all the integrals to be over two spatial coordinates; for the one-electron integrals we simply normalize at the end by dividing by the total volume of the second integration coordinate.

We can now proceed with the truncated Taylor series algorithm. We have already constructed select (H) in the first section of this chapter, so all that remains is to construct the equivalent of prepare (W). Note, however, that we will construct not prepare (W) but rather prepare (w), which creates a superposition of states weighted by the integrands $w_\gamma(\vec{z}_\rho)$.

To construct prepare (w), we will first need a circuit sample (w) that returns $\tilde{w}_\gamma(\vec{z}_\rho)$, the binary representation of $w_\gamma(\vec{z}_\rho)$, as follows:
\[ \text{sample}(w)\, |\gamma\rangle |\rho\rangle |0\rangle^{\otimes \log M} = |\gamma\rangle |\rho\rangle |\tilde{w}_\gamma(\vec{z}_\rho)\rangle \tag{6.27} \]

To construct sample (w), we use the integrand oracle that we defined in Chapter 5, converting γ to $(i, j, k, \ell)$ and converting the sampling point ρ to $\vec{x}$ and $\vec{\xi}$, to obtain $h_{ijk\ell}(\vec{x}, \vec{x} - \vec{\xi}) = w_\gamma(\vec{z})$. This allows us to implement sample (w) with gate count O(N log N). However, we shall eventually see that although prepare (w) has a lower gate count than prepare (W), the on-the-fly algorithm requires more segments than the database algorithm, because we must discretize space as well as time.

After constructing sample (w), our next step in constructing prepare (w) will be to further decompose the $w_\gamma(\vec{z}_\rho)$ returned by sample (w). The reason we need this decomposition is that the on-the-fly algorithm obtains its speed-up over the database algorithm by computing state coefficients $w_\gamma$ rather than storing and accessing them, a speed-up of $O(\sqrt{\Gamma\mu})$ over O(Γ) (recall from Chapter 5 that µ indicates how finely we discretize the integral). This would not actually be an improvement in the worst case, since discretizing the integral also increases the dimension, the number of coefficients we need to sum over; that is, we would need to compare µ and Γ in those two expressions. By decomposing the $w_\gamma(\vec{z})$ so that the terms differ only by a phase, we can avoid the worst-case bound.

Specifically, we will decompose each $w_\gamma(\vec{z})$ into terms that differ only by a sign as follows:
\[ w_\gamma(\vec{z}) \approx \zeta \sum_{m=1}^{M} w_{\gamma,m}(\vec{z}), \qquad w_{\gamma,m}(\vec{z}) \in \{-1, +1\} \tag{6.28} \]

where ζ is the precision of the decomposition, and
\[ \zeta \in \Theta\!\left( \frac{\epsilon}{\Gamma \mathcal{V} t} \right), \qquad M \in \Theta\!\left( \max_{\vec{z},\gamma} |w_\gamma(\vec{z})| / \zeta \right) \tag{6.29} \]

so that
\[ H = \frac{\zeta \mathcal{V}}{\mu} \sum_{\gamma=1}^{\Gamma} \sum_{m=1}^{M} \sum_{\rho=1}^{\mu} w_{\gamma,m}(\vec{z}_\rho)\, H_\gamma \tag{6.30} \]
By the way we have chosen ζ,

\[ \left| w_\gamma(\vec{z}) - \zeta \sum_{m=1}^{M} w_{\gamma,m}(\vec{z}) \right| \leq \zeta \tag{6.31} \]
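This sign decomposition is easy to check numerically. The sketch below chooses each $w_{\gamma,m}$ by the threshold rule of Eq. (6.34) below; with this particular convention the reconstruction error stays within about $2\zeta$ (a centred threshold convention tightens it to the $\zeta$ of Eq. (6.31)):

```python
import numpy as np

# Approximate a bounded coefficient w by zeta times a sum of M signs, each sign
# +1 when w exceeds the threshold (2m - M)*zeta and -1 otherwise. With this
# convention the reconstruction error is at most about 2*zeta.
def sign_decompose(w, zeta, M):
    return [1.0 if w > (2 * m - M) * zeta else -1.0 for m in range(1, M + 1)]

zeta, M = 0.01, 200                       # resolves coefficients in [-2, 2]
errors = []
for w in (-1.737, -0.2, 0.0, 0.555, 1.999):
    signs = sign_decompose(w, zeta, M)
    errors.append(abs(w - zeta * sum(signs)))
print(errors)                             # each error is at most 2 * zeta
```

The point of the decomposition is visible here: all M terms have identical magnitude, so a uniform superposition over m, carrying only a conditional phase, encodes the coefficient.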

Since all the amplitudes are identical up to a phase, we can easily implement prepare (w) so that it performs the following:
\[ \text{prepare}(w)\, |0\rangle^{\otimes \log L} = \sqrt{\frac{1}{\lambda}} \sum_{\ell=1}^{L} \sqrt{\frac{\zeta \mathcal{V}}{\mu}\, w_{\gamma,m}(\vec{z}_\rho)}\; |\ell\rangle \tag{6.32} \]
where $|\ell\rangle = |\gamma\rangle |m\rangle |\rho\rangle$, $L \in \Theta(\Gamma M \mu)$, and

\[ \lambda \equiv L\zeta\mathcal{V}/\mu \in \Theta\!\left( \Gamma \mathcal{V} \max_{\vec{z},\gamma} |w_\gamma(\vec{z})| \right) \tag{6.33} \]
is a normalization factor whose magnitude depends on how many slices we discretize the integral into.

We accomplish the decomposition of wγ(~z) into the wγ,m(~z) on the fly, determining whether wγ,m(~z) for a given m should be 1 or −1. Since the superposition we desire in prepare (w) will be weighted by the square root of this coefficient, we want to either Chapter 6. Second-Quantized Algorithms 47 have or not have a phase of i. Using Eq. (6.31), we dynamically return a phase as follows:

$$|\ell\rangle\,|\tilde{w}_\gamma(\vec{z}_\rho)\rangle \mapsto \begin{cases} |\ell\rangle\,|\tilde{w}_\gamma(\vec{z}_\rho)\rangle & \tilde{w}_\gamma(\vec{z}_\rho) > (2m - M)\,\zeta \\ i\,|\ell\rangle\,|\tilde{w}_\gamma(\vec{z}_\rho)\rangle & \tilde{w}_\gamma(\vec{z}_\rho) \le (2m - M)\,\zeta \end{cases} \tag{6.34}$$

where $|\ell\rangle = |\gamma\rangle|m\rangle|\rho\rangle$, and where we obtain $|\tilde{w}_\gamma(\vec{z}_\rho)\rangle$ by calling sample(w). Then we erase the register $|\tilde{w}_\gamma(\vec{z}_\rho)\rangle$ by applying sample(w) again.
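As a purely classical sanity check of this sign decomposition, the following sketch rounds a coefficient to the nearest multiple of $2\zeta$ and then applies the threshold rule of Eq. (6.34). (Here $w$, $\zeta$, and $M$ are illustrative scalars, not the actual quantum registers; we use $\ge$ at the threshold boundary, a convention chosen so that the rounded value is reproduced exactly.)

```python
import math

def sign_decompose(w, zeta, M):
    """Emulate Eq. (6.28): round w to the nearest multiple of 2*zeta, then
    assign w_m = +1 or -1 by the threshold rule of Eq. (6.34). We use >= at
    the boundary so the rounded value is reproduced exactly."""
    w_rounded = 2 * zeta * round(w / (2 * zeta))
    return [+1 if w_rounded >= (2 * m - M) * zeta else -1
            for m in range(1, M + 1)]

zeta = 0.05
w_max = 1.0
M = math.ceil(w_max / zeta)          # M ~ max|w| / zeta, as in Eq. (6.29)

for w in [-0.73, -0.2, 0.0, 0.31, 0.98]:
    signs = sign_decompose(w, zeta, M)
    recon = zeta * sum(signs)        # zeta * sum_m w_m, as in Eq. (6.28)
    assert abs(w - recon) <= zeta + 1e-12   # the guarantee of Eq. (6.31)
```
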

Now we can comment on the additional qubits required for the on-the-fly algorithm, as opposed to the database algorithm. As before, we need $\Theta(K)$ qubits to encode $|k\rangle$, and $K$ registers with $\Theta(\log\Gamma)$ qubits each to encode $|\gamma_1\rangle, \dots, |\gamma_K\rangle$. We also need to encode $K$ grid-point registers $\vec{z}_{\rho_1}, \dots, \vec{z}_{\rho_K}$, which in total requires $\Theta(K\log(M\mu))$ additional qubits.

Finally, we will outline the steps involved in the on-the-fly algorithm, and then comment on the scaling of the whole algorithm. The steps are the following:

1. Decompose the Hamiltonian with discretized integrals so that we have $H = \frac{\zeta\mathcal{V}}{\mu}\sum_{\gamma=1}^{\Gamma}\sum_{m=1}^{M}\sum_{\rho=1}^{\mu} w_{\gamma,m}(\vec{z}_\rho)\,H_\gamma$.

2. Subdivide the simulation into $r$ time segments, where $r = \lambda t/\ln(2)$, with $t$ the total simulation time and $\lambda = L\zeta\mathcal{V}/\mu$.

3. Expand the evolution for time t/r, as in Eq. (6.26).

4. For each segment, we do the following:

(a) Apply prepare(w) to obtain the state $\sqrt{1/\lambda}\,\sum_{\ell=1}^{L}\sqrt{\frac{\zeta\mathcal{V}}{\mu}\,w_{\gamma,m}(\vec{z}_\rho)}\,|\ell\rangle$, where $|\ell\rangle = |\gamma\rangle|m\rangle|\rho\rangle$ and we weight the states by $i$ or $-i$. Note that the operator prepare(w) utilizes the operator sample(w) to sample an integrand, which in turn uses the integrand oracle from Chapter 5.

(b) Apply a series of controlled rotations by $\theta_k$, where $\theta_k$ is as given in Eq. (4.14). Overall, Steps (a) and (b) combined give us the operator prepare(β), which gives us a state similar to the one depicted in Eq. (4.13), except with $\Lambda$ replaced by $\lambda$.

(c) Use the ancilla prepared in Steps (a) and (b) as controls for the operation select(V), which performs $K$ controlled applications of select(H) along with $K$ phase shifts, and which is described in Eq. (4.8).

(d) Apply prepare(β)$^T$ to clear out the ancilla registers.

(e) Apply oblivious amplitude amplification to obtain the desired state with unit probability.
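The segment count $r$ and truncation order $K$ used in these steps can be computed classically before the simulation runs. A sketch under stated assumptions: `taylor_params` is an illustrative helper, the numbers for $\lambda$, $t$, $\epsilon$ are arbitrary, and the stopping rule is one common way to realize the $K \in O(\log(r/\epsilon)/\log\log(r/\epsilon))$ criterion, namely taking the smallest $K$ whose Taylor remainder term for an exponent of $\ln 2$ falls below the per-segment error budget $\epsilon/r$.

```python
import math

def taylor_params(lam, t, eps):
    """Number of time segments r (lambda * t / ln 2, Eq. (4.17)-style) and a
    Taylor truncation order K: the smallest K whose remainder term
    (ln 2)^(K+1) / (K+1)! is below the per-segment error budget eps / r."""
    r = max(1, math.ceil(lam * t / math.log(2)))
    K = 0
    while math.log(2) ** (K + 1) / math.factorial(K + 1) > eps / r:
        K += 1
    return r, K

r, K = taylor_params(lam=1e4, t=1.0, eps=1e-6)
assert 14000 < r < 15000
assert K < 25     # K grows only like log(r/eps) / log log(r/eps)
```

Note how mild the dependence on the target precision is: shrinking $\epsilon$ by several orders of magnitude increases $K$ by only a few, which is exactly the exponential improvement over Trotter-based scaling.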

Table 6.3: Parameters and bounds for on-the-fly algorithm

Parameter | Description | Bound
Γ | number of terms in unitary decomposition | $O(N^4)$
λ | normalization factor, Eq. (6.33) | $\Theta(\Gamma\mathcal{V}\max_{\vec{z},\gamma}|w_\gamma(\vec{z})|)$
M | number of terms in decomposition of $w_\gamma(\vec{z})$ | $\Theta(\max_{\vec{z},\gamma}|w_\gamma(\vec{z})|/\zeta)$
L | number of terms in final decomposition | $\Theta(\Gamma M\mu)$
r | number of time segments, Eq. (4.17) | $\lambda t/\ln(2)$
K | truncation point for Taylor series, Eq. (4.4) | $O\!\left(\frac{\log(r/\epsilon)}{\log\log(r/\epsilon)}\right)$
J | number of ancilla qubits in selection register | $\Theta(K\log(\mu L))$

Table 6.4: Operators and gate counts for on-the-fly algorithm

Operator | Description | Gate Count
select(H) | apply specified terms from decomposition, Eq. (4.9) | $O(N)$
select(V) | apply specified product of terms, Eq. (4.8) | $O(NK)$
prepare(w) | prepare weighted superposition, Eq. (6.32) | $\tilde{O}(N)$
prepare(β) | prepare weighted superposition, Eq. (4.10) | $\tilde{O}(NK)$
W | probabilistically simulate for time $t/r$, Eq. (4.15) | $\tilde{O}(NK)$
P | projection operator | $\Theta(K\log(\mu L))$
G | amplification to implement sum of unitaries, Eq. (4.19) | $\tilde{O}(NK)$
$(PG)^r$ | entire algorithm | $\tilde{O}(rNK)$

The main difference between the on-the-fly algorithm and the database algorithm is the use of prepare (w) versus prepare (W ), and the fact that there are more terms in the discretized integral.

Now we will reproduce Table 4.1 and Table 4.2 with the bounds completely filled in for the on-the-fly algorithm. These appear in Table 6.3 and Table 6.4.

Now we bound the total complexity of the on-the-fly algorithm by noting that it scales like
$$\tilde{O}(rNK) = \tilde{O}(NK\lambda t) \tag{6.35}$$

and we note that

$$\lambda \in \Theta\!\left(\Gamma\mathcal{V}\max_{\vec{z},\gamma}|w_\gamma(\vec{z})|\right) \in O\!\left(\Gamma\,\varphi_{\max}^4\,x_{\max}^5\right) \in O\!\left(N^4\,[\log(Nt/\epsilon)]^5\right) \tag{6.36}$$

using results from Chapter 5, where we bounded the two-electron integrals, to bound $\mathcal{V}\max_{\vec{z},\gamma}|w_\gamma(\vec{z})|$. Then the overall gate count of the on-the-fly algorithm scales like

$$\tilde{O}(N^5 K t) = \tilde{O}(N^5 t) \tag{6.37}$$

This is the lowest gate count of any algorithm for second-quantized quantum chemistry simulation in the literature.

In the next section we move on to the first-quantized encoding, which requires less space than the second-quantized encoding. We present another on-the-fly algorithm, again using the integrand oracle that we constructed in Chapter 5 in order to apply the Taylor series algorithm from Chapter 4 to the first-quantized Hamiltonian.

Chapter 7

First-Quantized Algorithm

Now we will adapt the Taylor series algorithm to the CI matrix encoding of the first-quantized chemistry Hamiltonian, which uses a basis of Slater determinants to represent the wave function. By using the Slater-Condon rules, which specify which matrix elements are non-zero in this basis, we can obtain a 1-sparse decomposition of the chemistry Hamiltonian. We further decompose this into a sum of 1-sparse unitaries, allowing us to write our Hamiltonian in the starting form required for the Taylor series algorithm. We will also use this sparse decomposition to construct an oracle that returns the integrand of a given element of the CI matrix; note that we will again use the integrand oracle constructed in Chapter 5. Then we outline a Taylor series algorithm that computes molecular integrals on the fly by discretizing those integrals, using this oracle to construct required operators like select(H).

7.1 CI Matrix Encoding

As described in Chapter 3, we know that the chemistry Hamiltonian looks like

$$H = -\sum_i \frac{\nabla^2_{\vec{r}_i}}{2} - \sum_{i,j} \frac{Z_i}{|\vec{R}_i - \vec{r}_j|} + \sum_{i,j>i} \frac{1}{|\vec{r}_i - \vec{r}_j|} \tag{7.1}$$
where the $\vec{R}_i$ are the coordinates of the nuclei, the $Z_i$ are the nuclear charges, and the $\vec{r}_i$ are the coordinates of the electrons. We represent the system in a basis of $N$ single-particle spin orbitals $\varphi$, so that $\varphi_i(\vec{r}_j)$ represents the $i$-th spin orbital when occupied by the $j$-th electron.

In Chapter 6 we described the second-quantized encoding, which required $\Theta(N)$ qubits, one for each spin orbital. We note that more spatially efficient encodings exist because

many states that are represented in the second-quantized encoding are in fact inaccessible to the system, due to symmetries in the Hamiltonian. For example, $\eta$, the number of electrons in the system, is a good quantum number, since the second-quantized Hamiltonian's number operator $\nu$, whose expectation value gives us $\eta$, commutes with the Hamiltonian:
$$\nu = \sum_{i=1}^{N} a_i^\dagger a_i, \qquad [H, \nu] = 0, \qquad \eta = \langle\nu\rangle \tag{7.2}$$
To put this in more combinatorial terms, the second-quantized encoding includes $2^N$ possible states, but there are at most $\xi = \binom{N}{\eta}$ total configurations of the electrons. Thus the first-quantized encoding represents states in a manner that is more efficient in terms of the number of qubits required, but it is worth noting that, with the exception of [20, 26, 27], all prior quantum algorithms for quantum chemistry work with the second-quantized Hamiltonian.

We will instead work in a basis of Slater determinants, which give us wave functions for a particular η-electron configuration that are explicitly antisymmetrized in space and spin, and in which the electrons are indistinguishable because fermions are indistinguishable.

We will index each particular spin orbital with $\chi_i \in \{1, \dots, N\}$, although we note again that in spite of the fact that there are $\eta$ occupied spin orbitals, we cannot explicitly associate a specific electron with a specific $\chi_i$, because electrons are interchangeable. We will denote each possible state as $|\alpha\rangle = |\chi_0, \dots, \chi_{\eta-1}\rangle$, where $\alpha \in \{1, \dots, N^\eta\}$. Specifically, the way we antisymmetrize the Slater determinants is by defining them as follows:

$$\langle\vec{r}_0, \dots, \vec{r}_{\eta-1}|\alpha\rangle = \langle\vec{r}_0, \dots, \vec{r}_{\eta-1}|\chi_0, \chi_1, \dots, \chi_{\eta-1}\rangle = \frac{1}{\sqrt{\eta!}}\begin{vmatrix} \varphi_{\chi_0}(\vec{r}_0) & \varphi_{\chi_1}(\vec{r}_0) & \cdots & \varphi_{\chi_{\eta-1}}(\vec{r}_0) \\ \varphi_{\chi_0}(\vec{r}_1) & \varphi_{\chi_1}(\vec{r}_1) & \cdots & \varphi_{\chi_{\eta-1}}(\vec{r}_1) \\ \vdots & \vdots & \ddots & \vdots \\ \varphi_{\chi_0}(\vec{r}_{\eta-1}) & \varphi_{\chi_1}(\vec{r}_{\eta-1}) & \cdots & \varphi_{\chi_{\eta-1}}(\vec{r}_{\eta-1}) \end{vmatrix} \tag{7.3}$$

Here taking the determinant ensures that the resulting function is antisymmetric under the exchange of any two electrons (the exchange of any two rows), and it ensures that there is zero probability of two electrons ever occupying the exact same spin orbital (which would correspond to two identical columns in the determinant), in turn enforcing the Pauli exclusion principle.
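Both properties can be checked directly with a tiny classical example. In the following sketch the "orbitals" are arbitrary stand-in functions, not real molecular orbitals, and the determinant is evaluated by the Leibniz permutation sum, which is adequate for a handful of electrons:

```python
import math
from itertools import permutations

def det(matrix):
    """Determinant via the Leibniz permutation sum (fine for tiny matrices)."""
    n = len(matrix)
    total = 0.0
    for perm in permutations(range(n)):
        sign = 1
        for a in range(n):                 # parity via inversion count
            for b in range(a + 1, n):
                if perm[a] > perm[b]:
                    sign = -sign
        prod = 1.0
        for row in range(n):
            prod *= matrix[row][perm[row]]
        total += sign * prod
    return total

# Illustrative stand-ins for the spin orbitals phi_chi:
orbitals = [lambda r: 1.0, lambda r: r, lambda r: math.cos(r)]

def slater(orbs, coords):
    """<r_0,...,r_{eta-1} | chi_0,...,chi_{eta-1}>, per Eq. (7.3)."""
    eta = len(coords)
    m = [[orbs[col](coords[row]) for col in range(eta)] for row in range(eta)]
    return det(m) / math.sqrt(math.factorial(eta))

coords = [0.3, 1.1, 2.4]
swapped = [1.1, 0.3, 2.4]   # exchange electrons 0 and 1: sign must flip
assert abs(slater(orbitals, coords) + slater(orbitals, swapped)) < 1e-12

# Two electrons in one orbital => two identical columns => amplitude 0
doubled = [orbitals[0], orbitals[0], orbitals[2]]
assert abs(slater(doubled, coords)) < 1e-12
```
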

This encoding of the wave function uses $\eta$ registers to encode the spin orbitals, which means that it requires $\Theta(\eta\log N) = \tilde{O}(\eta)$ qubits. It is also possible to use an even more compressed representation of the wave function that would enumerate all the possible Slater determinants in binary, thus requiring $\Theta(\log\xi)$ qubits.
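The three qubit counts can be compared concretely; a sketch with illustrative values of $N$ and $\eta$ ($\texttt{qubit\_counts}$ is our hypothetical helper name):

```python
import math

def qubit_counts(N, eta):
    """Qubits needed by: second-quantized encoding (one per spin orbital),
    first-quantized encoding (eta registers of ceil(log2 N) qubits), and a
    fully compressed encoding enumerating the C(N, eta) determinants."""
    second = N
    first = eta * math.ceil(math.log2(N))
    compressed = math.ceil(math.log2(math.comb(N, eta)))
    return second, first, compressed

s, f, c = qubit_counts(N=128, eta=16)
assert s == 128 and f == 112
assert c <= f <= s    # each encoding is at least as compact as the previous
```
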

7.2 CI Matrix Decomposition

When we express the chemistry Hamiltonian in a basis of Slater determinants, we get what is known as the configuration interaction (CI) matrix, whose elements are sums of one- and two-electron integrals, $h_{ij}$ and $h_{ijk\ell}$, computed according to the Slater-Condon rules [16]. The Slater-Condon rules will motivate our 1-sparse decomposition, and we state them in this section, denoting

$$H^{\alpha\beta} = \langle\alpha|H|\beta\rangle \tag{7.4}$$
as the matrix element between states $|\alpha\rangle$ and $|\beta\rangle$. We choose to list the occupied orbitals in ascending order, and we note that changing the order of the orbitals by an odd permutation incurs a minus sign by fermionic antisymmetry. Then the Slater-Condon rules are as follows [16]:

1. If $|\alpha\rangle$ and $|\beta\rangle$ contain the same spin orbitals $\chi_i$, then

$$H^{\alpha\beta} = \sum_{i=1}^{\eta} h_{\chi_i\chi_i} + \sum_{i=1}^{\eta-1}\sum_{j=i+1}^{\eta}\left(h_{\chi_i\chi_j\chi_i\chi_j} - h_{\chi_i\chi_j\chi_j\chi_i}\right) \tag{7.5}$$

2. If $|\alpha\rangle$ and $|\beta\rangle$ differ by one spin orbital, which we will call $k$ for $|\alpha\rangle$ and $\ell$ for $|\beta\rangle$,

$$H^{\alpha\beta} = h_{k\ell} + \sum_{i=1}^{\eta}\left(h_{k\chi_i\ell\chi_i} - h_{k\chi_i\chi_i\ell}\right) \tag{7.6}$$

3. If $|\alpha\rangle$ and $|\beta\rangle$ differ by two spin orbitals, so that $|\alpha\rangle$ has spin orbitals $i$ and $k$, and $|\beta\rangle$ has spin orbitals $j$ and $\ell$,

$$H^{\alpha\beta} = h_{ijk\ell} - h_{ij\ell k} \tag{7.7}$$

4. If $|\alpha\rangle$ and $|\beta\rangle$ differ by more than two spin orbitals,

$$H^{\alpha\beta} = 0 \tag{7.8}$$

We note that because of the Slater-Condon rules, the Hamiltonian is $d$-sparse, and there do exist ways to simulate arbitrary sparse Hamiltonians using Trotterization (this cannot be said of arbitrary Hamiltonians in general). By the Slater-Condon rules, the sparsity of this Hamiltonian is
$$d = \binom{\eta}{2}\binom{N-\eta}{2} + \binom{\eta}{1}\binom{N-\eta}{1} + 1 = \frac{\eta^4}{4} - \frac{\eta^3 N}{2} + \frac{\eta^2 N^2}{4} + O\!\left(\eta^2 N + \eta N^2\right) \tag{7.9}$$

$N$ is always greater than $\eta$, so $d \in O(\eta^2 N^2)$. We note that this is sparser than the second-quantized Hamiltonian, which was $d$-sparse with $d \in O(N^4)$.
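The count behind Eq. (7.9) is pure combinatorics — doubles, singles, and the diagonal element — and can be checked directly (`ci_sparsity` is our illustrative helper name):

```python
from math import comb

def ci_sparsity(N, eta):
    """Non-zero entries per row of the CI matrix: double excitations,
    single excitations, and the diagonal element, per Eq. (7.9)."""
    return comb(eta, 2) * comb(N - eta, 2) + eta * (N - eta) + 1

d = ci_sparsity(N=100, eta=10)
assert d == comb(10, 2) * comb(90, 2) + 10 * 90 + 1
# d is of order eta^2 N^2 / 4, far below the second-quantized O(N^4):
assert d < 100 ** 4
```
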

We recall that for the Taylor series algorithm we want a Hamiltonian that is decomposed into the form $H = \sum_{\gamma=1}^{\Gamma} W_\gamma H_\gamma$, where the $H_\gamma$ are unitary and there is some means for selectively applying a given $H_\gamma$. We will decompose the CI matrix Hamiltonian into the appropriate form in four steps, as follows:

1. Decompose the Hamiltonian into $O(\eta^2 N^2)$ 1-sparse matrices.

2. Decompose these 1-sparse matrices into 1-sparse matrices with entries that are proportional to a sum of molecular integrals $h_{ij}$ and $h_{ijk\ell}$.

3. Discretize the integrals in the 1-sparse matrices, as we did for the integrals appearing in the second-quantized Hamiltonian, so that we can formulate an on-the-fly Taylor series algorithm.

4. Further decompose the discretized integrals into sums of unitary matrices.

7.2.1 Decomposition into 1-sparse matrices

In [19], Aharonov and Ta-Shma describe how to simulate an arbitrary $d$-sparse Hamiltonian, given by an oracle, using Trotter-based methods. They depict the Hamiltonian matrix as an undirected graph in which each state corresponds to a node, and each matrix element $H^{\alpha\beta} = (H^{\beta\alpha})^* \neq 0$ corresponds to an edge connecting nodes $|\alpha\rangle$ and $|\beta\rangle$. An edge coloring of a graph using $\Gamma$ colors is equivalent to the division of that graph into $\Gamma$ sets of graphs of degree 1, so an edge coloring is equivalent to a decomposition of the Hamiltonian into $\Gamma$ 1-sparse matrices. Aharonov and Ta-Shma demonstrate how to decompose an arbitrary $d$-sparse Hamiltonian into $\Theta(d^2)$ terms, and how to efficiently perform the simulation under an oracle, a result that was later tightened to $d^2$ terms in [35]. In [20], Toloui and Love showed how the chemistry CI matrix in particular can be efficiently simulated using Trotter-based methods, using a decomposition with $d = O(N^4)$ terms.

According to the Slater-Condon rules described in the previous section, a matrix element of the CI matrix is non-zero only if the two basis elements differ by two or fewer spin orbitals, which in turn means that if we depict the Hamiltonian as a graph, two vertices are connected if and only if they are related by a single excitation operator or a double excitation operator. That is, for $i, j < \eta$ and $p, q < N$,
$$a_i^p a_j^q |\alpha\rangle = |\beta\rangle \tag{7.10}$$
where $a_i^p$ excites the $i$-th occupied spin orbital, $\chi_i$, by $p$ modulo $N$ spin orbital indices. To satisfy the Pauli exclusion principle, we also require that a matrix element that might potentially excite a valid Slater determinant to a Slater determinant with more than one electron in the same spin orbital must be zero in value.

As in [20], we can color the Hamiltonian using only $d$ colors by assigning distinct colors to each edge associated with a distinct excitation operator. Every term of a particular color will be disconnected, which in turn means that that term in our matrix will be 1-sparse, since different operators acting on the same Slater determinant must map it to different Slater determinants. As for diagonal matrix entries, we can assign them all the same color, since they are automatically 1-sparse. Finally, what remains is ensuring that the resulting 1-sparse matrices are Hermitian, which means ensuring that $H^{\alpha\beta}$ and $(H^{\beta\alpha})^*$ have the same color. We can ensure this by performing our simulation under the Hamiltonian $\sigma_x \otimes H$, which is bipartite and has the same sparsity as $H$. Since it is bipartite, we can choose to act on $|\alpha\rangle$ with $a_i^p a_j^q$ if $|\alpha\rangle$ is on the left partition of the graph, and with $(a_i^p a_j^q)^\dagger = a_i^{N-p} a_j^{N-q}$ if $|\alpha\rangle$ is on the right partition of the graph. We note that we can recover a simulation under $H$ from a simulation under $\sigma_x \otimes H$ because $e^{-i(\sigma_x\otimes H)t}\,|+\rangle|\psi\rangle = |+\rangle\,e^{-iHt}|\psi\rangle$ [35].
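This last identity can be verified numerically on a small dense example. The sketch below uses a series-expansion matrix exponential in plain Python (the particular $2\times 2$ Hermitian $H$ is arbitrary illustrative data):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

def expm(A, terms=40):
    """exp(A) by its Taylor series; adequate for the tiny matrices here."""
    n = len(A)
    result = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    power = [row[:] for row in result]
    for k in range(1, terms):
        power = [[x / k for x in row] for row in matmul(power, A)]
        result = [[result[i][j] + power[i][j] for j in range(n)]
                  for i in range(n)]
    return result

H = [[1.0, 0.5 - 0.25j],
     [0.5 + 0.25j, -0.3]]                 # a small Hermitian matrix
sx_H = [[0, 0, H[0][0], H[0][1]],         # sigma_x (tensor) H = [[0, H], [H, 0]]
        [0, 0, H[1][0], H[1][1]],
        [H[0][0], H[0][1], 0, 0],
        [H[1][0], H[1][1], 0, 0]]

t = 0.7
psi = [0.6, 0.8]                                  # arbitrary normalized state
plus_psi = [x / 2 ** 0.5 for x in psi + psi]      # |+> (tensor) |psi>

lhs = matvec(expm([[-1j * t * x for x in row] for row in sx_H]), plus_psi)
evolved = matvec(expm([[-1j * t * x for x in row] for row in H]), psi)
rhs = [x / 2 ** 0.5 for x in evolved + evolved]   # |+> (tensor) e^{-iHt}|psi>

assert all(abs(a - b) < 1e-9 for a, b in zip(lhs, rhs))
```

The check works because $|+\rangle$ is a $+1$ eigenvector of $\sigma_x$, so every power of $\sigma_x \otimes H$ acts on $|+\rangle|\psi\rangle$ exactly as the corresponding power of $H$ acts on $|\psi\rangle$.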

7.2.2 Decomposition into $h_{ij}$ and $h_{ijk\ell}$

Thus far we have decomposed the CI matrix into $O(\eta^2 N^2)$ 1-sparse matrices, similar to the treatment in [20]. Next we depart from [20] by further decomposing the 1-sparse matrices into a sum of 1-sparse matrices $H_\gamma^B$, where each $H_\gamma^B$ has entries proportional to $h_{ij}$ and $h_{ijk\ell}$. (Here we use the superscript $B$ to denote the fact that we have performed this second decomposition.) According to the Slater-Condon rules in Eq. (7.5) and Eq. (7.6), the diagonal entries, corresponding to two basis elements that do not differ in any spin orbitals, are composed of a sum of $O(\eta^2)$ integral terms. For the entries of the CI matrix where $|\alpha\rangle$ and $|\beta\rangle$ differ by one spin orbital, we have $O(\eta N)$ sparse matrices where each entry is a sum of $O(\eta)$ integral terms, giving us a total of $O(\eta^2 N)$ terms after our second decomposition. Finally, for the entries of the CI matrix where $|\alpha\rangle$ and $|\beta\rangle$ differ by two spin orbitals, we have $O(\eta^2 N^2)$ such matrices, where each matrix is a sum of two integral terms. This contributes $O(\eta^2 N^2)$ terms to the second decomposition. Thus, in total, we see that the decomposition into the 1-sparse matrices $H_\gamma^B$ can be achieved with $\Gamma = O(\eta^2 N^2)$ terms. Thus we have decomposed our Hamiltonian into the following:
$$H = \sum_{\gamma=1}^{\Gamma} H_\gamma^B \tag{7.11}$$

7.2.3 Discretizing the Integrals

Now, just as we did for the on-the-fly algorithm for the second-quantized Hamiltonian, we will discretize the integrals that appear in the CI matrix encoding of the first-quantized Hamiltonian, so that we can likewise devise an on-the-fly algorithm for the CI matrix encoding.

In Eq. (4.28) of Chapter 4, we showed that for a Hamiltonian that can generally be written as $H = \int_Z H(\vec{z})\,d\vec{z}$ and then discretized as $H \approx \frac{\mathcal{V}}{\mu}\sum_{\rho=1}^{\mu} H(\vec{z}_\rho)$, the propagator can be discretized as
$$U_r \approx \sum_{k=0}^{K} \frac{(-it\mathcal{V})^k}{r^k\mu^k k!} \sum_{\rho_1,\dots,\rho_k=1}^{\mu} H(\vec{z}_{\rho_1})\cdots H(\vec{z}_{\rho_k})$$
For our particular CI matrix encoding,
$$H(\vec{z}) = \sum_{\gamma=1}^{\Gamma} H_\gamma^B(\vec{z}) \tag{7.12}$$
with
$$H_\gamma^B = \int H_\gamma^B(\vec{z})\,d\vec{z} \tag{7.13}$$
so that in discretized form,
$$H \approx \frac{\mathcal{V}}{\mu}\sum_{\gamma=1}^{\Gamma}\sum_{\rho=1}^{\mu} H_\gamma^B(\vec{z}_\rho) \tag{7.14}$$
The bounding of the error due to the discretization of the integrals follows the analysis we gave in Chapter 4.

7.2.4 Decomposition into Unitary Matrices

The truncated Taylor series algorithm requires that we be able to represent our Hamiltonian as a weighted sum of unitary matrices. We follow the treatment in [35] to further decompose our 1-sparse matrices into unitary matrices; [35] gives a procedure for decomposing 1-sparse matrices into a sum of self-inverse matrices with eigenvalues $\pm 1$, a result that is slightly stronger than the one we need. Letting $\zeta$ be the precision of the decomposition, we will decompose each $H_\gamma^B(\vec{z}_\rho)$ into a sum of $M \in \Theta\!\left(\max_{\gamma,\vec{z}}\|H_\gamma^B(\vec{z})\|_{\max}/\zeta\right)$ 1-sparse unitary matrices, so that
$$H_\gamma^B(\vec{z}_\rho) \approx \tilde{H}_\gamma^B(\vec{z}_\rho) \equiv \zeta\sum_{m=1}^{M} C_\gamma^m(\vec{z}_\rho) \tag{7.15}$$

To effect this decomposition, we first round $H_\gamma^B(\vec{z}_\rho)$ to the nearest multiple of $2\zeta$ to give us the approximation $\tilde{H}_\gamma^B(\vec{z}_\rho)$, so that $\|H_\gamma^B(\vec{z}_\rho) - \tilde{H}_\gamma^B(\vec{z}_\rho)\|_{\max} \le \zeta$. Next we define $C_\gamma(\vec{z}_\rho) \equiv \tilde{H}_\gamma^B(\vec{z}_\rho)/\zeta$, with $\|C_\gamma(\vec{z}_\rho)\|_{\max} \le 1 + \|H_\gamma^B(\vec{z}_\rho)\|_{\max}/\zeta$. Then we can decompose each $C_\gamma(\vec{z}_\rho)$ into $\|C_\gamma(\vec{z}_\rho)\|_{\max}$ 1-sparse matrices with entries in $\{0, -2, 2\}$, indexed by $m$, as follows:
$$C_{\gamma,ij}^m(\vec{z}_\rho) \equiv \begin{cases} +2 & C_{\gamma,ij}(\vec{z}_\rho) \ge 2m \\ -2 & C_{\gamma,ij}(\vec{z}_\rho) \le -2m \\ 0 & \text{otherwise} \end{cases} \tag{7.16}$$

This allows us to remove zero eigenvalues by further subdividing each $C_\gamma^m(\vec{z}_\rho)$ into $C_\gamma^{m,1}(\vec{z}_\rho)$ and $C_\gamma^{m,2}(\vec{z}_\rho)$, which have entries in $\{0, +1, -1\}$. In particular, for a given all-zero column $\beta$ in $C_\gamma^m(\vec{z}_\rho)$, we can pick some row $\alpha$ and then choose $C_\gamma^{m,1}(\vec{z}_\rho)$ to have a $+1$ in the $(\alpha, \beta)$ position, and choose $C_\gamma^{m,2}(\vec{z}_\rho)$ to have a $-1$ in the $(\alpha, \beta)$ position.
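The whole decomposition can be exercised on a toy symmetric 1-sparse matrix. In the sketch below (in the spirit of Eqs. (7.15) and (7.16), with $\zeta$ scaled out and the empty rows padded on the diagonal, one of the valid choices of "some row $\alpha$"), every resulting piece is a signed permutation matrix, hence unitary, and the pieces sum back to the original matrix:

```python
def one_sparse_to_unitaries(values, pairing, M):
    """Decompose a 1-sparse symmetric integer matrix (even entries; zeta is
    scaled out) into 2*M signed-permutation matrices with entries in
    {0, +1, -1}. pairing[i] is the column of row i's non-zero entry (an
    involution), and values[i] is that entry, with values[i] == values[pairing[i]]."""
    n = len(pairing)
    unitaries = []
    for m in range(1, M + 1):
        for fill in (+1, -1):            # s = 1 pads with +1, s = 2 with -1
            U = [[0] * n for _ in range(n)]
            for i in range(n):
                if values[i] >= 2 * m:
                    U[i][pairing[i]] = 1
                elif values[i] <= -2 * m:
                    U[i][pairing[i]] = -1
                else:
                    U[i][i] = fill       # pad empty row/column, keeping unitarity
            unitaries.append(U)
    return unitaries

pairing = [1, 0, 2, 4, 3]                # involution: (0 1), fixed point 2, (3 4)
values = [4, 4, -2, 6, 6]
us = one_sparse_to_unitaries(values, pairing, M=3)

n = len(pairing)
for U in us:                             # every piece is a signed permutation
    for i in range(n):
        for j in range(n):
            assert sum(U[i][k] * U[j][k] for k in range(n)) == (i == j)

for i in range(n):                       # and the pieces sum back to the matrix
    assert sum(U[i][pairing[i]] for U in us) == values[i]
```
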

Having decomposed each 1-sparse matrix into a sum of unitary 1-sparse matrices with entries in $\{0, -1, 1\}$, we now have our fully decomposed Hamiltonian, which is a sum of $O(\mu\eta^2 N^2 M)$ unitary 1-sparse matrices. We will simplify the notation somewhat by letting $\ell = (s, m, \gamma)$ and $H_\ell(\vec{z}_\rho) \equiv C_\gamma^{m,s}(\vec{z}_\rho)$:

$$H = \frac{\zeta\mathcal{V}}{\mu}\sum_{\ell=1}^{L}\sum_{\rho=1}^{\mu} H_\ell(\vec{z}_\rho) \tag{7.17}$$

Thus our decomposed Hamiltonian is of the form $H = \sum_{\gamma=1}^{\Gamma} W_\gamma H_\gamma$, but here all the $W_\gamma$ are the same.

7.3 CI Matrix Hamiltonian Oracle

We recall from Chapter 4, and from the algorithms for the second-quantized representation of the chemistry Hamiltonian in Chapter 6, that we need to be able to construct the operator select(H), or rather its on-the-fly variant, where
$$H_\ell = \int H_\ell(\vec{z})\,d\vec{z}, \qquad H_\ell(\vec{z}) = H_\ell(\vec{z})^\dagger \tag{7.18}$$

Since the Hamiltonian can be expressed as an integral, the matrix elements $H^{\alpha\beta}$ can be expressed as integrals as well:
$$H^{\alpha\beta} = \int H^{\alpha\beta}(\vec{z})\,d\vec{z} \tag{7.19}$$
so we will develop a mechanism to compute the integrand $H^{\alpha\beta}(\vec{z})$ in order to help us construct select(H).

Specifically, we want select (H) to operate as follows:

$$\textsc{select}(H)\,|\ell\rangle|\rho\rangle|\psi\rangle = |\ell\rangle|\rho\rangle\,H_\ell(\vec{z}_\rho)\,|\psi\rangle \tag{7.20}$$
where $\vec{z}_\rho$ is the value of $\vec{z}$ at grid point $\rho$ in the discretized integration domain.

To implement select(H) we will use the integrand oracle that we constructed in Chapter 5; we recall that this circuit has complexity $\tilde{O}(N)$. We will ultimately construct select(H) by constructing two intermediate oracles. The first of these, $Q^{\mathrm{col}}$, returns information on the non-zero entries of a given 1-sparse Hermitian matrix in the decomposition from Eq. (7.11). We will also construct an oracle $Q^{\mathrm{val}}$ that returns the value of integrands for entries in the CI matrix. It does this by using the integrand oracle to obtain the $h_{ij}$ and $h_{ijk\ell}$, and then performing the further decomposition of integrand matrix elements into entries $C_\gamma^{m,s}(\vec{z}_\rho)$ that are in $\{0, -1, 1\}$ on the fly, as specified in the previous section on matrix element decompositions.

To construct $Q^{\mathrm{col}}$, we recall that we keep track of the index $\gamma = (i, p, j, q, \kappa)$, which tells us about the sparsity structure of the matrix $H_\gamma^B$, since $|\beta\rangle = a_i^p a_j^q|\alpha\rangle$ for valid Slater determinants $|\alpha\rangle$, $|\beta\rangle$. $\kappa$, meanwhile, is an index associated with the second decomposition into $h_{ij}$ and $h_{ijk\ell}$ in Section 7.2.2. $Q^{\mathrm{col}}$ acts by taking a register containing the color $|\gamma\rangle = |ipjq\kappa\rangle$, together with a row index $|\alpha\rangle$, and returning a column index $|\beta\rangle$, so that $(\alpha, \beta)$ gives the location of the non-zero element in row $\alpha$ of the matrix $H_\gamma^B$:

$$Q^{\mathrm{col}}\,|\gamma\rangle|\alpha\rangle|0\rangle^{\otimes\eta\log N} = |\gamma\rangle|\alpha\rangle\, a_i^p a_j^q|\alpha\rangle = |\gamma\rangle|\alpha\rangle|\beta\rangle \tag{7.21}$$

We query $Q^{\mathrm{val}}$ with a register $|\ell\rangle = |s\rangle|m\rangle|\gamma\rangle$, a register $|\rho\rangle$ specifying $\vec{z}_\rho$, and registers $|\alpha\rangle, |\beta\rangle$ specifying the two Slater determinants in the non-zero entry of the Hamiltonian (which we can retrieve from $Q^{\mathrm{col}}$). $|\ell\rangle$ allows us to specify a specific term in the 1-sparse decomposition, since $\gamma$ gives us the color of the original 1-sparse matrix, $m \le M$ indexes the term in the further decomposition of the $H_\gamma^B(\vec{z}_\rho)$ into $C_\gamma^m(\vec{z}_\rho)$, and $s \in \{1, 2\}$ tells us whether we have $C_\gamma^{m,1}(\vec{z}_\rho)$ or $C_\gamma^{m,2}(\vec{z}_\rho)$. Specifically, $Q^{\mathrm{val}}$ acts as follows:

$$Q^{\mathrm{val}}\,|\ell\rangle|\rho\rangle|\alpha\rangle|\beta\rangle = H_\ell^{\alpha\beta}(\vec{z}_\rho)\,|\ell\rangle|\rho\rangle|\alpha\rangle|\beta\rangle \tag{7.22}$$

To determine the complexity of this operator, we note that determining which spin orbitals differ between the Slater determinants $|\alpha\rangle$ and $|\beta\rangle$ can be done in $O(\log N)$ time (the time required to read their values), and we know from Chapter 5 that the integrand oracle has complexity $\tilde{O}(N)$, which means that $Q^{\mathrm{val}}$ overall has complexity $\tilde{O}(N)$. Finally, to ensure that $|\alpha\rangle$ and $|\beta\rangle$ represent valid Slater determinants, we can construct $Q^{\mathrm{col}}$ so that whenever $|\alpha\rangle$ or $|\beta\rangle$ is invalid, it gives phase $-1$ to the first half of the $|\ell\rangle$ register and $+1$ to the second half, so that the corresponding matrix element has zero value.

Now we can use $Q^{\mathrm{col}}$ and $Q^{\mathrm{val}}$ to construct select(H), which applies the term $H_\ell(\vec{z}_\rho)$ to the wave function $|\psi\rangle = \sum_\alpha c_\alpha|\alpha\rangle$. First we call $Q^{\mathrm{col}}$ to obtain the columns corresponding to the non-zero entries of the Hamiltonian:

$$|\ell\rangle|\rho\rangle|\psi\rangle|0\rangle^{\otimes\eta\log N} \mapsto \sum_\alpha c_\alpha\, Q^{\mathrm{col}}\,|\ell\rangle|\rho\rangle|\alpha\rangle|0\rangle^{\otimes\eta\log N} = \sum_\alpha c_\alpha\,|\ell\rangle|\rho\rangle|\alpha\rangle|\beta\rangle \tag{7.23}$$

Now we apply $Q^{\mathrm{val}}$ to each $|\alpha\rangle, |\beta\rangle$ pair, effectively resulting in a phase $k_\alpha = \pm 1$, since we originally constructed $C_\gamma^{m,s}(\vec{z}_\rho) \in \{0, -1, 1\}$:

$$\sum_\alpha c_\alpha\,|\ell\rangle|\rho\rangle|\alpha\rangle|\beta\rangle \mapsto \sum_\alpha c_\alpha\, Q^{\mathrm{val}}\,|\ell\rangle|\rho\rangle|\alpha\rangle|\beta\rangle = \sum_\alpha c_\alpha k_\alpha\,|\ell\rangle|\rho\rangle|\alpha\rangle|\beta\rangle \tag{7.24}$$

Now we clear the ancilla register by swapping the locations of $|\alpha\rangle$ and $|\beta\rangle$ and applying $Q^{\mathrm{col}}$ again, noting that $Q^{\mathrm{col}}$ is self-inverse:

$$\sum_\alpha c_\alpha k_\alpha\,|\ell\rangle|\rho\rangle|\alpha\rangle|\beta\rangle \mapsto \sum_\alpha c_\alpha k_\alpha\,|\ell\rangle|\rho\rangle\,\mathrm{SWAP}\,|\alpha\rangle|\beta\rangle = \sum_\alpha c_\alpha k_\alpha\,|\ell\rangle|\rho\rangle|\beta\rangle|\alpha\rangle \tag{7.25}$$
$$\sum_\alpha c_\alpha k_\alpha\,|\ell\rangle|\rho\rangle|\beta\rangle|\alpha\rangle \mapsto \sum_\alpha c_\alpha k_\alpha\, Q^{\mathrm{col}}\,|\ell\rangle|\rho\rangle|\beta\rangle|\alpha\rangle = \sum_\alpha c_\alpha k_\alpha\,|\ell\rangle|\rho\rangle|\beta\rangle|0\rangle^{\otimes\eta\log N} = |\ell\rangle|\rho\rangle\, H_\ell(\vec{z}_\rho)|\psi\rangle\,|0\rangle^{\otimes\eta\log N} \tag{7.26}$$

since $H_\ell(\vec{z}_\rho)|\psi\rangle = \sum_\alpha c_\alpha k_\alpha|\beta\rangle$, where applying the Hamiltonian has mapped each Slater determinant $|\alpha\rangle$ to some other Slater determinant $|\beta\rangle$ with a phase $k_\alpha = \pm 1$. The circuit for select(H) is shown in Figure 7.1.
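The register bookkeeping of Eqs. (7.23)–(7.26) can be emulated classically with a toy 1-sparse term. In the sketch below, the pairing (a signed involution on basis labels) and the phases are arbitrary illustrative data standing in for $Q^{\mathrm{col}}$ and $Q^{\mathrm{val}}$:

```python
# A toy 1-sparse Hermitian term H_ell: a signed involution on basis labels,
# with phase[alpha] == phase[beta] (Hermiticity for real +/-1 entries).
pairing = {0: 2, 2: 0, 1: 1, 3: 3}
phase = {0: -1, 2: -1, 1: 1, 3: 1}

def q_col(alpha):
    """Q^col: compute the paired column index beta (Eq. (7.21));
    applying it again recomputes/erases, since the pairing is an involution."""
    return pairing[alpha]

def select_H(psi):
    """Apply H_ell to psi = {alpha: c_alpha} by the three steps of
    Eqs. (7.23)-(7.26): Q^col, Q^val phase, then SWAP + Q^col to clear."""
    out = {}
    for alpha, c in psi.items():
        beta = q_col(alpha)      # step 1: |alpha>|0> -> |alpha>|beta>
        c = c * phase[alpha]     # step 2: Q^val applies the phase k_alpha
        # step 3: SWAP the registers; Q^col on |beta> recomputes |alpha>
        # and erases it, leaving |beta>|0>
        assert q_col(beta) == alpha
        out[beta] = out.get(beta, 0) + c
    return out

psi = {0: 0.5, 1: 0.5, 2: 0.5, 3: 0.5}
# Direct matrix application H[beta][alpha] = k_alpha gives the same result:
assert select_H(psi) == {2: -0.5, 0: -0.5, 1: 0.5, 3: 0.5}
```
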

7.4 Simulating Hamiltonian Evolution: the On-the-Fly Al- gorithm

Now we outline the on-the-fly algorithm for the CI matrix encoding. We note that applying the Taylor series algorithm requires constructing the operators select(H) and

Figure 7.1: The circuit implementing select(H), which applies the term $H_\ell(\vec{z}_\rho)$ labeled by $\ell = (\gamma, m, s)$ in the unitary 1-sparse decomposition to the wavefunction $|\psi\rangle$. Note that the "×"s denote applying a SWAP operator on the two indicated registers.

prepare(W). We note that we just constructed select(H) in the previous section on the CI matrix oracle, while constructing prepare(W) is trivial, since all the $W_\gamma$ are the same for our CI matrix encoding Hamiltonian. Specifically, we can simply apply a Hadamard operator to each ancilla qubit every time we want to apply prepare(W). (Recall that Hadamard operators applied to each qubit of a register prepare an equally weighted superposition of all basis states.)

As with the second-quantized on-the-fly algorithm, the additional qubits we need for the on-the-fly algorithm include the $\Theta(K)$ qubits to encode $|k\rangle$, and $K$ registers with $\Theta(\log\Gamma)$ qubits each to encode $|\gamma_1\rangle, \dots, |\gamma_K\rangle$. We also need to encode the grid-point registers $\vec{z}_{\rho_1}, \dots, \vec{z}_{\rho_K}$, which requires $\Theta(K\log(M\mu))$ additional qubits.

Figure 7.2: The circuit for prepare(β) as described in Eq. (7.29). An expression for $\theta_k$ is given in Eq. (7.28).

Finally, the steps involved in the algorithm are the following:

1. Decompose the Hamiltonian so that we have $H = \frac{\zeta\mathcal{V}}{\mu}\sum_{\ell=1}^{L}\sum_{\rho=1}^{\mu} H_\ell(\vec{z}_\rho)$.

2. Subdivide the simulation into time segments of size $t/r$, where $r = \zeta L\mathcal{V}t/\ln(2)$.

3. Expand the evolution for time $t/r$ and truncate the Taylor series at order $K$, like so:
$$U_r \approx \sum_{k=0}^{K} \frac{(-it\zeta\mathcal{V})^k}{r^k\mu^k k!} \sum_{\ell_1,\dots,\ell_k=1}^{L}\;\sum_{\rho_1,\dots,\rho_k=1}^{\mu} H_{\ell_1}(\vec{z}_{\rho_1})\cdots H_{\ell_k}(\vec{z}_{\rho_k}) \tag{7.27}$$
Note that we have now written the segment in the form $\tilde{U} = \sum_j \beta_j V_j$, where $j = (k, \ell_1, \dots, \ell_k, \rho_1, \dots, \rho_k)$, $V_j = (-i)^k H_{\ell_1}(\vec{z}_{\rho_1})\cdots H_{\ell_k}(\vec{z}_{\rho_k})$, and $\beta_j = \frac{t^k\zeta^k\mathcal{V}^k}{r^k\mu^k k!}$.

4. For each segment, do the following:

(a) Apply Hadamard gates to every qubit in the $|\ell_v\rangle$ and $|\rho_v\rangle$ registers.

(b) Apply a series of controlled rotations by $\theta_k$, where $\theta_k$ is given by the following expression, and we apply $R_y(\theta_k) \equiv \exp[-i\theta_k\sigma^y/2]$:

$$\theta_k \equiv 2\arcsin\!\left(\sqrt{1 - \frac{(\zeta L\mathcal{V}t)^{k-1}}{r^{k-1}(k-1)!}\left(\sum_{s=k-1}^{K}\frac{(\zeta L\mathcal{V}t)^s}{r^s s!}\right)^{-1}}\;\right) \tag{7.28}$$

Overall, Steps (a) and (b) combined give us the operator prepare(β), which prepares the following state:

$$\left(\sum_{k=0}^{K}\frac{(\zeta L\mathcal{V}t)^k}{r^k k!}\right)^{-1/2}\sum_{k=0}^{K}\sqrt{\frac{(\zeta L\mathcal{V}t)^k}{r^k k!}}\;|k\rangle \tag{7.29}$$

We depict the circuit for prepare(β) in Figure 7.2.

(c) Use the ancilla prepared in Steps (a) and (b) as controls for the operation select(V), which performs $K$ controlled applications of select(H) along with $K$ phase shifts to account for factors of $i$.

(d) Apply prepare(β)$^T$ to clear out the ancilla registers.

(e) Apply oblivious amplitude amplification to obtain the desired state with unit probability.
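The angles of Eq. (7.28) can be checked classically against the target state of Eq. (7.29): writing $p_k = (\zeta L\mathcal{V}t)^k/(r^k k!)$ and tail sums $T_k = \sum_{s\ge k} p_s$, Eq. (7.28) amounts to choosing $\sin^2(\theta_k/2) = T_k/T_{k-1}$, which makes the sequentially controlled rotations of Figure 7.2 output exactly the amplitudes $\sqrt{p_k/S}$. A sketch with illustrative $K$ and $x = \zeta L\mathcal{V}t/r$:

```python
import math

def prepare_beta_angles(x, K):
    """Rotation angles theta_k of Eq. (7.28) for x = zeta*L*V*t/r, chosen so
    the unary-encoded state carries amplitudes sqrt(p_k / S), p_k = x^k/k!."""
    p = [x ** k / math.factorial(k) for k in range(K + 1)]
    tails = [sum(p[k:]) for k in range(K + 1)]        # T_k = sum_{s >= k} p_s
    thetas = [2 * math.asin(math.sqrt(tails[k] / tails[k - 1]))
              for k in range(1, K + 1)]
    return thetas, p

K, x = 5, math.log(2)        # per-segment exponent is ln 2
thetas, p = prepare_beta_angles(x, K)
S = sum(p)

# Walk the circuit of Fig. 7.2: qubit k is rotated only if qubit k-1 is |1>.
amp = 1.0
for k in range(K + 1):
    stop = math.cos(thetas[k] / 2) if k < K else 1.0  # amplitude of |k ones>
    assert abs((amp * stop) ** 2 - p[k] / S) < 1e-12
    if k < K:
        amp *= math.sin(thetas[k] / 2)
```
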

Now we will reproduce Table 4.1 and Table 4.2 with the bounds completely filled in for the on-the-fly algorithm. These appear in Table 7.1 and Table 7.2.

Finally, we bound the total complexity of the on-the-fly algorithm by noting that it scales like $\tilde{O}(rNK)$, where $r \in O\!\left(\eta^2 N^2 t\,\mathcal{V}\max_{\gamma,\vec{z}}\|H_\gamma^B(\vec{z})\|_{\max}\right)$ and $\mathcal{V}\max_{\gamma,\vec{z}}\|H_\gamma^B(\vec{z})\|_{\max} = O(\varphi_{\max}^4 x_{\max}^5)$ by our analysis of the complexity of the integrand oracle in Chapter 5. Thus, using the results from that section,

$$r \in O\!\left(\eta^2 N^2 t\,[\log(Nt/\epsilon)]^5\right) \tag{7.30}$$

Table 7.1: Parameters and bounds for on-the-fly algorithm

Parameter | Description | Bound
Γ | number of terms in original decomposition | $O(\eta^2 N^2)$
M | number of terms in decomposition of $H_\gamma^B(\vec{z})$ | $\Theta(\max_{\vec{z},\gamma}\|H_\gamma^B(\vec{z})\|_{\max}/\zeta)$
L | number of terms in final decomposition | $O(\eta^2 N^2\max_{\gamma,\vec{z}}\|H_\gamma^B(\vec{z})\|_{\max}/\zeta)$
r | number of time segments, Eq. (4.17) | $\zeta L\mathcal{V}t/\ln(2)$
K | truncation point for Taylor series, Eq. (4.4) | $O\!\left(\frac{\log(r/\epsilon)}{\log\log(r/\epsilon)}\right)$
J | number of ancilla qubits in selection register | $\Theta(K\log(\mu L))$

Table 7.2: Operators and gate counts for on-the-fly algorithm

Operator | Description | Gate Count
select(H) | apply specified terms from decomposition, Eq. (7.20) | $O(N)$
select(V) | apply specified product of terms, Eq. (4.8) | $O(NK)$
prepare(β) | prepare weighted superposition, Eq. (4.10) | $O(K\log(\mu L))$
W | probabilistically simulate for time $t/r$, Eq. (4.15) | $\tilde{O}(NK)$
P | projection operator | $\Theta(K\log(\mu L))$
G | amplification to implement sum of unitaries, Eq. (4.19) | $\tilde{O}(NK)$
$(PG)^r$ | entire algorithm | $\tilde{O}(rNK)$

Thus the overall algorithm scales like

$$\tilde{O}(NKr) = \tilde{O}(\eta^2 N^3 t) \tag{7.31}$$

We note that $\eta \ll N$, so the CI matrix encoding algorithm may represent both the most compressed and the most efficient algorithm for quantum simulation of chemistry in the literature.

Chapter 8

Algorithms for Real Molecules

We have presented exponentially more precise algorithms for the simulation of the chemistry Hamiltonian using the results of [11], which effect evolution under a general Hamiltonian, assuming the existence of an oracle, by using a truncated Taylor series.

First we considered the second-quantized Hamiltonian, which requires $O(N)$ qubits to encode. We introduced two algorithms for this encoding: the first, the database algorithm, uses a database to store molecular integrals and scales like $\tilde{O}(N^8 t)$, while the second computes molecular integrals on the fly and scales like $\tilde{O}(N^5 t)$.

Next we considered the first-quantized Hamiltonian in the CI matrix encoding, which requires $O(\eta)$ qubits to encode. We introduced an algorithm that computes molecular integrals on the fly and scales like $\tilde{O}(\eta^2 N^3 t)$. All of these results represent an exponential improvement over Trotter-based methods, which scale like $O(N^8 t/\mathrm{poly}(\epsilon))$. In addition, we note that $\eta \ll N$, so the CI matrix encoding algorithm may represent the most compressed and most efficient algorithm for quantum simulation of chemistry in the literature.

In this chapter we make a first foray into investigating the performance of these algo- rithms for real molecules. We note that just as the scaling of the Trotterization algorithm was tighter for real molecules than the analytical bound, we also believe that the bounds that we have derived here should be tighter for real molecules.

8.1 Second-Quantized Database Algorithm

We focus on the second-quantized database algorithm, which is also the algorithm that is easiest to implement experimentally. We believe that the bound for the normalization factor $\Lambda \in O(N^4)$ from the database algorithm is likely very loose. Recall that we obtained $\Lambda$ by summing the two-electron integrals, a sum that analytically scales like $O(N^4)$. We believe that for real molecules this sum should scale like $O(N^2)$ instead, based on locality arguments such as those detailed in [16, 30]. Namely, although we nominally sum over all four indices $i, j, k, \ell$ in $h_{ijk\ell}$, the total volume of the molecule increases with $N$ while the volume of an individual orbital does not, so each individual orbital has non-negligible overlap with only a constant number of other orbitals, which means that when we sum the $h_{ijk\ell}$ we really only need to sum over two of the indices.
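This locality argument can be illustrated with a deliberately crude toy model, not real integrals: a 1-D chain of $N$ orbitals in which $h_{ijk\ell}$ is taken to factor into exponentially decaying pair overlaps, with the bounded Coulomb factor dropped. Under those assumptions the sum over all four indices collapses to a product of two pair sums, each $O(N)$:

```python
import math

def toy_lambda(N, decay=1.0):
    """Toy locality model on a 1-D chain of N orbitals: the two-electron
    integral h_{ijkl} is modeled as exp(-|i-j|) * exp(-|k-l|), so only
    pairs (i, j) and (k, l) of nearby orbitals contribute to the sum."""
    overlap = sum(math.exp(-decay * abs(i - j))
                  for i in range(N) for j in range(N))
    return overlap ** 2          # Lambda = sum over i, j, k, l of |h_{ijkl}|

# Empirical log-log slope between N and 8N is close to 2, not 4:
slope = math.log(toy_lambda(160) / toy_lambda(20)) / math.log(8)
assert 1.8 < slope < 2.2
```

This is only a caricature of the real locality structure, but it shows the mechanism by which the nominal $O(N^4)$ sum can behave like $O(N^2)$ in practice.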

We have performed numerical studies indicating this relationship by computing $\Lambda$ for a wide variety of real molecules. We used the electronic structure package Psi4 [41] to obtain the one- and two-electron integrals $h_{ij}$ and $h_{ijk\ell}$, given an input file specifying the Cartesian coordinates of the atoms in the molecule. We ran the computation in the STO-3G basis, without freezing core orbitals, with a convergence tolerance of $10^{-10}$, performing restricted open-shell Hartree-Fock (ROHF) computations. We then summed the resulting integrals to obtain $\Lambda$, and we plotted $\Lambda$ versus $N$, and versus $\eta$, on a regular plot and on a log-log plot, to determine possible relationships.

The molecules that we used to generate the plots come from several sources. The larger molecules, clustered around $N = 100$ to $300$, come from a collection of 1070 candidate battery molecules with molecular mass under 400 and $\eta < 200$ that were screened for usage as organic LEDs [42]. The molecules with $N < 100$ come from a set of 463 molecules retrieved from a dump of the CCCBDB molecular geometry database [43]. Finally, we also ran the computations for 27 elements from columns 1, 2, 13, 14, 15, 16, 17, and 18, and rows 1, 2, 3, and 4 of the periodic table (these elements were available in the STO-3G basis, without having to resort to the use of a more exotic basis).

We obtain the plot in Figure 8.1 for $\Lambda$ vs $N$ on normally scaled axes, and the plot in Figure 8.2 for $\Lambda$ vs $N$ on a log-log plot using 10 as the base of the log (that is, Figure 8.2 is the same plot as Figure 8.1, but with differently scaled axes). In Figure 8.3 and Figure 8.4 we do the same for $\Lambda$ vs $\eta$. Note in particular that the slope of the log-log plot for $\Lambda$ vs $N$ appears to be around 2.22.

Thus, given the numerical evidence, we believe that it should be possible to prove the $\tilde{O}(N^2)$ scaling of $\Lambda$ analytically in certain basis sets. This would mean that the database algorithm scales like $\tilde{O}(N^6 t)$ rather than $\tilde{O}(N^8 t)$, which in turn makes the database algorithm even more alluring for experimental implementation.

Figure 8.1: Plot of Λ vs N for real molecules. Here we computed Λ by summing all the h_ij and h_ijkl terms for a given real molecule.

Figure 8.2: Log-log plot of Λ vs N for real molecules, indicating that Λ ∈ O(N^2). Here we computed Λ by summing all the h_ij and h_ijkl terms for a given real molecule.

8.2 Future Directions

With Trotter-based algorithms, we saw how an analytical bound of Õ(N^8 t/ε^{o(1)}) gates [9] was reduced to Õ(N^6 √(t^3/ε)) gates [10] for real molecules. We believe that an important next step for our Taylor-series-based algorithms is to perform full numerical simulations of all of the algorithms introduced in this thesis, as we believe that the scaling of these algorithms for real molecules will be even better than the analytical bounds

Figure 8.3: Plot of Λ vs η for real molecules, given for completeness. Here we computed Λ by summing all the h_ij and h_ijkl terms for a given real molecule.

Figure 8.4: Log-log plot of Λ vs η for real molecules, given for completeness. Here we computed Λ by summing all the h_ij and h_ijkl terms for a given real molecule.

indicate. Ultimately, we are interested in applying these algorithms to real molecules as quantum computers draw ever closer to reality, with quantum simulation standing out as one of the most useful and most natural applications of a quantum computer.
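The precision advantage over Trotter-based methods can be made concrete by tabulating the two scalings side by side. The sketch below uses the Õ(N^6 √(t^3/ε)) empirical Trotter scaling and the Õ(N^8 t log(1/ε)) database scaling with all constant and polylogarithmic factors set to one, so the absolute numbers are meaningless; only the trend as ε shrinks is the point:

```python
import math

def trotter_gates(N, t, eps):
    """Empirical Trotter scaling for real molecules, ~ N^6 * sqrt(t^3/eps)
    (constant factors suppressed; illustrative only)."""
    return N**6 * math.sqrt(t**3 / eps)

def taylor_gates(N, t, eps):
    """Database (Taylor-series) algorithm, ~ N^8 * t * log(1/eps)
    (again up to constants and polylog factors)."""
    return N**8 * t * math.log(1 / eps)

# The Taylor-series approach pulls ahead as the target precision tightens:
for eps in (1e-3, 1e-6, 1e-9, 1e-12):
    ratio = trotter_gates(30, 1.0, eps) / taylor_gates(30, 1.0, eps)
    print(f"eps={eps:.0e}  trotter/taylor = {ratio:.3g}")
```

Because the Trotter cost grows like 1/√ε while the Taylor-series cost grows only like log(1/ε), the ratio increases without bound as ε → 0, which is precisely the "exponentially more precise" claim.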

We will also briefly note that the analyses applied to the chemistry Hamiltonian in this thesis can also be applied to the quantum simulation of Hamiltonians modeling other systems that closely resemble the second-quantized chemistry Hamiltonian. In particular, the Fermi-Hubbard model from solid-state physics, which may have implications for studies of high-temperature superconductivity, is described by a Hamiltonian that looks very similar to the chemistry Hamiltonian, except that it contains O(N) terms rather than O(N^4) terms, making it an even better candidate for early experimental implementation on quantum computers with limited qubit availability.

Bibliography

[1] R. P. Feynman. Simulating physics with computers. International Journal of Theoretical Physics, 21:467–488, June 1982. URL http://link.springer.com/article/10.1007%2FBF02650179.

[2] S. Lloyd. Universal quantum simulators. Science, 273(5278):1073–1078, August 1996. URL http://science.sciencemag.org/content/273/5278/1073.abstract.

[3] D. S. Abrams and S. Lloyd. Simulation of many-body Fermi systems on a universal quantum computer. Physical Review Letters, 79(13), September 1997. URL http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.79.2586.

[4] A. Aspuru-Guzik, A. D. Dutoi, P. J. Love, and M. Head-Gordon. Simulated quantum computation of molecular energies. Science, 309(5741):1704–1707, September 2005. URL http://science.sciencemag.org/content/309/5741/1704.

[5] R. Barends, J. Kelly, A. Megrant, A. Veitia, D. Sank, E. Jeffrey, T. C. White, J. Mutus, A. G. Fowler, B. Campbell, Y. Chen, Z. Chen, B. Chiaro, A. Dunsworth, C. Neill, P. O'Malley, P. Roushan, A. Vainsencher, J. Wenner, A. N. Korotkov, A. N. Cleland, and J. M. Martinis. Superconducting quantum circuits at the surface code threshold for fault tolerance. Nature, 508:500–503, February 2014. URL http://www.nature.com/nature/journal/v508/n7497/full/nature13171.html.

[6] J. Kelly, R. Barends, A. G. Fowler, A. Megrant, E. Jeffrey, T. C. White, D. Sank, J. Y. Mutus, B. Campbell, Y. Chen, Z. Chen, B. Chiaro, A. Dunsworth, I.-C. Hoi, C. Neill, P. J. J. O'Malley, C. Quintana, P. Roushan, A. Vainsencher, J. Wenner, A. N. Cleland, and J. M. Martinis. State preservation by repetitive error detection in a superconducting quantum circuit. Nature, 519:66–69, January 2015. URL http://www.nature.com/nature/journal/v519/n7541/full/nature14270.html.

[7] D. Nigg, M. Muller, E. A. Martinez, P. Schindler, M. Hennrich, T. Monz, M. A. Martin-Delgado, and R. Blatt. Quantum computations on a topologically encoded qubit. Science, 345(6194):302–305, July 2014. URL http://science.sciencemag.org/content/345/6194/302.

[8] A. D. Corcoles, E. Magesan, S. J. Srinivasan, A. W. Cross, M. Steffen, J. M. Gambetta, and J. M. Chow. Demonstration of a quantum error detection code using a square lattice of four superconducting qubits. Nature Communications, 6(6979), January 2015. URL http://www.nature.com/ncomms/2015/150429/ncomms7979/full/ncomms7979.html.

[9] M. B. Hastings, D. Wecker, B. Bauer, and M. Troyer. Improving quantum algorithms for quantum chemistry. Quantum Information and Computation, 15(1), 2015. URL http://arxiv.org/pdf/1403.1539v2.pdf.

[10] D. Poulin, M. B. Hastings, D. Wecker, N. Wiebe, A. C. Doherty, and M. Troyer. The Trotter step size required for accurate quantum simulation of quantum chemistry. Quantum Information and Computation, 15:361–384, 2015. URL http://arxiv.org/pdf/1406.4920.pdf.

[11] D. W. Berry, A. M. Childs, R. Cleve, R. Kothari, and R. D. Somma. Simulating Hamiltonian dynamics with a truncated Taylor series. Physical Review Letters, 114, March 2015. URL http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.114.090502.

[12] M. A. Nielsen and I. L. Chuang. Quantum Computation and Quantum Information, Cambridge University Press, 2010.

[13] D. J. Griffiths. Introduction to Quantum Mechanics, Prentice Hall, 1995.

[14] J. S. Townsend. A Modern Approach to Quantum Mechanics, University Science Books, 2012.

[15] J. D. Whitfield, J. Biamonte, and A. Aspuru-Guzik. Simulation of electronic structure Hamiltonians using quantum computers. Molecular Physics, 109(5), January 2010. URL http://arxiv.org/pdf/1001.3855v3.pdf.

[16] T. Helgaker, P. Jorgensen, and J. Olsen. Molecular Electronic Structure Theory, Wiley, 2002.

[17] E. Farhi, J. Goldstone, S. Gutmann, and M. Sipser. Science, 292(472), 2000.

[18] L. A. Wu, M. S. Byrd, and D. A. Lidar. Physical Review Letters, 89(057904), 2002.

[19] D. Aharonov and A. Ta-Shma. Adiabatic quantum state generation and statistical zero knowledge. Proceedings of the Thirty-fifth Annual ACM Symposium on Theory of Computing, STOC'03:20–29, 2003. URL http://dl.acm.org/citation.cfm?doid=780542.780546.

[20] B. Toloui and P. J. Love. Quantum algorithms for quantum chemistry based on the sparsity of the CI-matrix. December 2013. URL http://arxiv.org/pdf/1312.2579v2.pdf.

[21] N. C. Jones, J. D. Whitfield, P. L. McMahon, M.-H. Yung, R. Van Meter, A. Aspuru-Guzik, and Y. Yamamoto. Faster quantum chemistry simulation on fault-tolerant quantum computers. New Journal of Physics, 14(115023), November 2012. URL http://iopscience.iop.org/article/10.1088/1367-2630/14/11/115023/meta.

[22] L. Veis and J. Pittner. Quantum computing applied to calculations of molecular energies: CH2 benchmark. Journal of Chemical Physics, 133(194106), November 2010. URL http://scitation.aip.org/content/aip/journal/jcp/133/19/10.1063/1.3503767.

[23] Y. Wang, F. Dolde, J. Biamonte, R. Babbush, V. Bergholm, S. Yang, I. Jakobi, P. Neumann, A. Aspuru-Guzik, J. D. Whitfield, and J. Wrachtrup. Quantum simulation of helium hydride cation in a solid-state spin register. ACS Nano, 9(8):7769–7774, April 2015. URL http://pubs.acs.org/doi/10.1021/acsnano.5b01651.

[24] Z. Li, M.-H. Yung, H. Chen, D. Lu, J. D. Whitfield, X. Peng, A. Aspuru-Guzik, and J. Du. Solving quantum ground-state problems with nuclear magnetic resonance. Scientific Reports, 1(88), 2011. URL http://www.nature.com/articles/srep00088.

[25] M.-H. Yung, J. Casanova, A. Mezzacapo, J. McClean, L. Lamata, A. Aspuru-Guzik, and E. Solano. From transistor to trapped-ion computers for quantum chemistry. Scientific Reports, 4(3589), December 2013. URL http://www.nature.com/articles/srep03589.

[26] I. Kassal, S. P. Jordan, P. J. Love, M. Mohseni, and A. Aspuru-Guzik. Proceedings of the National Academy of Sciences, 105(18681), 2008.

[27] J. D. Whitfield. Communication: Spin-free quantum computational simulations and symmetry adapted states. Journal of Chemical Physics, 139(021105), 2013. URL http://scitation.aip.org/content/aip/journal/jcp/139/2/10.1063/1.4812566.

[28] J. D. Whitfield. Unified views of quantum simulation algorithms for chemistry. February 2015. URL http://arxiv.org/pdf/1502.03771v1.pdf.

[29] D. Wecker, B. Bauer, B. K. Clark, M. B. Hastings, and M. Troyer. Gate-count estimates for performing quantum chemistry on small quantum computers. Physical Review A, 90(022305), August 2014.

[30] J. R. McClean, R. Babbush, P. J. Love, and A. Aspuru-Guzik. Exploiting locality in quantum computation for quantum chemistry. Journal of Physical Chemistry Letters, 5(24):4368–4380, November 2014. URL http://pubs.acs.org/doi/abs/10.1021/jz501649m.

[31] R. Babbush, J. McClean, D. Wecker, A. Aspuru-Guzik, and N. Wiebe. Chemical basis of Trotter-Suzuki errors in quantum chemistry simulation. Physical Review A, 91(022311), February 2015.

[32] R. Babbush, P. J. Love, and A. Aspuru-Guzik. Adiabatic quantum simulation of quantum chemistry. Scientific Reports, 4(6603), May 2014. URL http://www.nature.com/articles/srep06603.

[33] R. Cleve, D. Gottesman, M. Mosca, R. D. Somma, and D. L. Yonge-Mallo. Efficient discrete-time simulations of continuous-time quantum query algorithms. Proceedings of the Forty-first Annual ACM Symposium on Theory of Computing, STOC'09:406–416, November 2009. URL http://dl.acm.org/citation.cfm?doid=1536414.1536471.

[34] D. W. Berry, R. Cleve, and S. Gharibian. Quantum Information and Computation, 14, 2014.

[35] D. W. Berry, A. M. Childs, R. Cleve, R. Kothari, and R. D. Somma. Exponential improvement in precision for simulating sparse Hamiltonians. Proceedings of the Forty-sixth Annual ACM Symposium on Theory of Computing, STOC’14:283–292, 2014. URL http://dl.acm.org/citation.cfm?doid=2591796.2591854.

[36] P. Jordan and E. Wigner. Über das Paulische Äquivalenzverbot. Zeitschrift für Physik, 47(9):631–651, September 1928. URL http://link.springer.com/article/10.1007%2FBF01331938.

[37] R. Somma, G. Ortiz, J. E. Gubernatis, E. Knill, and R. Laflamme. Simulating physical phenomena by quantum networks. Physical Review A, 65(042323), April 2002. URL http://journals.aps.org/pra/abstract/10.1103/PhysRevA.65.042323.

[38] S. B. Bravyi and A. Y. Kitaev. Fermionic quantum computation. Annals of Physics, 298(1):210–226, May 2002. URL http://www.sciencedirect.com/science/article/pii/S0003491602962548.

[39] J. T. Seeley, M. J. Richard, and P. J. Love. The Bravyi-Kitaev transformation for quantum computation of electronic structure. Journal of Chemical Physics, 137(224109), 2012. URL http://scitation.aip.org/content/aip/journal/jcp/137/22/10.1063/1.4768229.

[40] A. Tranter, S. Sofia, J. Seeley, M. Kaicher, J. McClean, R. Babbush, P. V. Coveney, F. Mintert, F. Wilhelm, and P. J. Love. The Bravyi-Kitaev transformation: Properties and applications. International Journal of Quantum Chemistry, July 2015. URL http://onlinelibrary.wiley.com/doi/10.1002/qua.24969/abstract.

[41] J. M. Turney, A. C. Simmonett, R. M. Parrish, E. G. Hohenstein, F. A. Evangelista, J. T. Fermann, B. J. Mintz, L. A. Burns, J. J. Wilke, M. L. Abrams, N. J. Russ, M. L. Leininger, C. L. Janssen, E. T. Seidl, W. D. Allen, H. F. Schaefer, R. A. King, E. F. Valeev, C. D. Sherrill, and T. D. Crawford. Psi4: an open-source ab initio electronic structure program. Wiley Interdisciplinary Reviews: Computational Molecular Science, 2(4):556–565, October 2011. URL http://onlinelibrary.wiley.com/doi/10.1002/wcms.93/abstract.

[42] R. Gomez-Bombarelli, J. Aguilera-Iparraguirre, T. D. Hirzel, D. Duvenaud, D. Maclaurin, M. A. Blood-Forsythe, H. S. Chae, M. Einzinger, D.-G. Ha, T. Wu, G. Markopoulous, S. Jeon, H. Kang, H. Miyazaki, M. Numata, S. Kim, W. Huang, S. I. Hong, S. L. Buchwald, M. Baldo, R. P. Adams, and A. Aspuru-Guzik. Design of efficient molecular organic light-emitting diodes by a high-throughput virtual screening and experimental approach. Submitted, 2016.

[43] R. D. Johnson et al. NIST computational chemistry comparison and benchmark database. Release 17b, September 2015.