Los Alamos National Lab 2019 Summer School

Introduction to Hamiltonian Complexity

Elizabeth Crosson

University of New Mexico

Many-Body Quantum Physics

The energy of a quantum mechanical system is an observable that corresponds to a Hermitian operator H called the Hamiltonian. H generates the time evolution of the system by the Schrödinger equation, iħ (d/dt)|ψ(t)⟩ = H |ψ(t)⟩.

By a “many-body quantum system” we mean a quantum system with interacting particles (e.g. spins, fermions, bosons, etc.) in the regime where the number of particles n is large. The axioms of QM tell us that the full system Hilbert space is a tensor product of single-particle Hilbert spaces: ℋ = ℋ_1 ⊗ ℋ_2 ⊗ ... ⊗ ℋ_n.

The limit n → ∞ is called the thermodynamic limit. Often we describe the asymptotic scaling of physical quantities with n by “big-O” notation: g(n) is O(f(n)) if, for sufficiently large n, g(n) is no larger than a constant times f(n).

Ground States and Local Hamiltonians

Many-body quantum systems can be extremely complicated, but if they are in thermal equilibrium at low temperature then their properties are determined by ground states (the states of lowest energy):

lim_{β → ∞} e^{-βH} / tr(e^{-βH}) = (1/g) Σ_{j=1}^{g} |E_0^{(j)}⟩⟨E_0^{(j)}|,

where g is the degeneracy of the ground space and the |E_0^{(j)}⟩ are the ground states. Often g = 1 in the absence of symmetry and the ground state is unique.

Another simplifying observation is that interactions in nature are few-body i.e. involve a few particles at a time. For example, in a classical system of many charged particles the interactions are governed by Coulomb's law which describes a potential between two charges at a time, and we sum these interactions to get the full energy.

We restrict our attention to local Hamiltonians, H = Σ_{i=1}^{m} H_i. Precisely speaking, the local terms have the form H_i = h_i ⊗ I, where h_i acts on at most k particles and I is the identity on the remaining particles. Leaving these identity factors implicit simplifies the notation.
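
As an illustration, here is a minimal numerical sketch in Python with numpy (the helper name embed, the example chain length, and the restriction to terms on contiguous qubits are my own choices) of how a k-local term h_i ⊗ I is represented on the full 2^n-dimensional space:

import numpy as np

def embed(h, first_site, n):
    """Embed a term h acting on consecutive qubits starting at first_site
    into the full 2^n-dimensional space, tensoring identities on the rest."""
    k = int(np.log2(h.shape[0]))
    left = np.eye(2**first_site)
    right = np.eye(2**(n - first_site - k))
    return np.kron(np.kron(left, h), right)

# Example: H = sum_i Z_i Z_{i+1} on a chain of n = 4 qubits (2-local terms).
Z = np.diag([1.0, -1.0])
H = sum(embed(np.kron(Z, Z), i, 4) for i in range(3))   # a 16 x 16 matrix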

Locality: Spatial vs Combinatorial

The word “locality” is overloaded. We just called Coulomb’s law “local”, but of course it also acts far across space, decaying in strength with the square of the distance. To be more precise, a system is spatially local if the interactions involving each particle are confined to a Euclidean ball of some fixed radius around the particle.

Spatially local systems are important in the quantum description of matter (e.g. crystals, magnets, conducting metals), which often involves spins / bosons / fermions interacting with near-neighbors on a lattice:

But we can also consider more general kinds of connectivity, where connections are long-range but still involve a few particles at a time. To be precise this could be called “combinatorial locality”.

One justification for considering combinatorially local Hamiltonians that are not spatially local is that we can simulate the time evolution of such systems using a quantum computer.

Examples of Local Hamiltonians

The [classical, 1D, ferromagnetic] Ising model describes a chain of spin-½ particles on a line whose interaction energetically favors alignment along the Z direction:

H = -Σ_{i=1}^{n-1} Z_i Z_{i+1}.

The ground space of this model can be determined by inspection to be span{|00...0⟩, |11...1⟩}. This is essentially a classical model: all the local terms commute and can be simultaneously diagonalized in a tensor product basis (the computational basis), and the ground space is spanned by unentangled states.
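
Because every term is diagonal in the computational basis, this is easy to check numerically. A minimal sketch (the sign convention Z|0⟩ = +|0⟩ and the chain length n = 4 are my choices):

import numpy as np

n = 4
# Energy of basis state |x>: E(x) = -sum_i (-1)^(x_i + x_{i+1}).
diag = np.zeros(2**n)
for idx in range(2**n):
    bits = [(idx >> (n - 1 - i)) & 1 for i in range(n)]
    diag[idx] = -sum((-1)**(bits[i] + bits[i + 1]) for i in range(n - 1))

ground_energy = diag.min()                      # -(n - 1)
print(np.nonzero(diag == ground_energy)[0])     # [0, 15]: the states |0000> and |1111>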

We can make this model more interesting by adding a transverse field in the X direction. This is called the 1D ferromagnetic transverse Ising model:

H = -Σ_{i=1}^{n-1} Z_i Z_{i+1} - h Σ_{i=1}^{n} X_i.

This 1D ferromagnetic TIM is more distinctly quantum. The local terms no longer commute, the ground state is quite entangled, and the analytical solution was an important milestone result in mathematical physics.

Examples of Local Hamiltonians

1D spin models are often called “spin chains.” An example of a distinctly quantum spin chain whose ground state is relatively easy to analyze is the ferromagnetic Heisenberg model:

H = -Σ_{i=1}^{n-1} (X_i X_{i+1} + Y_i Y_{i+1} + Z_i Z_{i+1}).

The ground states of this model maximize the total angular momentum, S_tot² = (Σ_{i=1}^{n} S_i)², where S_i = (X_i, Y_i, Z_i)/2,

and so (with a bit of work) one sees that the ground space is the symmetric subspace, spanned by the states:

|S_m⟩ = (n choose m)^{-1/2} Σ_{x ∈ {0,1}^n : |x| = m} |x⟩,   m = 0, 1, ..., n,

where the sum is over n-bit strings of Hamming weight m. To read this, note that {0,1}^n is the set of n-bit strings, and here we use |x| to denote the Hamming weight of the string x (the number of 1’s in the string). Note that the ground states with m = 0 and m = n are product states, while for m = n/2 the ground state is entangled (i.e. spins in disjoint regions of the chain are entangled).
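
A quick numerical check of that last remark (a sketch; the dicke helper and the choice of half-chain entanglement entropy as the diagnostic are mine):

import numpy as np
from itertools import combinations
from math import comb

def dicke(n, m):
    """|S_m>: uniform superposition over all n-bit strings of Hamming weight m."""
    psi = np.zeros(2**n)
    for ones in combinations(range(n), m):
        psi[sum(1 << (n - 1 - i) for i in ones)] = 1.0
    return psi / np.sqrt(comb(n, m))

def half_chain_entropy(psi, n):
    """Entanglement entropy (in bits) between the first n//2 spins and the rest."""
    s = np.linalg.svd(psi.reshape(2**(n // 2), -1), compute_uv=False)
    p = s[s > 1e-12]**2
    return float(-(p * np.log2(p)).sum())

n = 6
print(half_chain_entropy(dicke(n, 0), n))        # ~0: the m = 0 state is a product state
print(half_chain_entropy(dicke(n, n // 2), n))   # > 0: the m = n/2 state is entangled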

Hamiltonian Complexity

Analytical solutions describing ground states of strongly interacting many-body quantum systems are extremely rare. This motivates the use of classical and quantum computers in finding ground states.

The field of Hamiltonian complexity characterizes the difficulty of finding ground states of local Hamiltonians using the theoretical tools of computational complexity.

This is done using a formal model of computation that allows us to describe the asymptotic time and space requirements used to solve these problems.

Using a formal notion of computational reduction (which tells us when a problem A is at least as difficult as another problem B) we will define complexity classes, such as P and NP, and we will relate the hardness of local Hamiltonian problems to other problems that appear in contexts which are distinct from quantum physics.

Constraint Satisfaction Problems

In part to establish some notation and terminology, we begin with a classical subset of local Hamiltonian problems called Boolean constraint satisfaction problems. Consider a collection of Boolean variables, x_1, ..., x_n ∈ {0, 1}.

In the Boolean context, we think of “1” as true and “0” as false. We express propositions (statements) by combining these variables with Boolean functions like AND (“∧”), OR (“∨”), and NOT (“¬”).

For example, x_1 ∧ x_2 is read “x_1 and x_2”, which is true if x_1 is true and x_2 is true, and it is false otherwise. Any Boolean function can be described by a truth table (a list of inputs and outputs).

By combining these basic functions we can form more complicated propositions, e.g. (x_1 ∨ ¬x_2) ∧ (x_2 ∨ x_3) ∧ ¬(x_1 ∧ x_3).

Constraint Satisfaction Problems

A generalized SAT problem asks us to decide whether there exists (“∃”) an assignment to the variables which suffices to make a collection of propositions true simultaneously:

∃ x_1, ..., x_n : C_1 ∧ C_2 ∧ ... ∧ C_m.

Here C_i is the i-th proposition, which is a statement about some k of the variables x_1, ..., x_n. Generalized SAT is the problem of deciding the truth value of existentially quantified Boolean formulas of the above form.

If each C_i is an OR statement involving k variables or their negations, e.g. C_i = (x_1 ∨ ¬x_5 ∨ x_7),

then the existential formula is said to be in “conjunctive normal form” and the problem is called k-SAT, which may be more recognizable than generalized SAT. What is an upper bound on the time needed to solve generalized SAT?
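
One upper bound comes from brute force: try all 2^n assignments and evaluate the propositions on each, which takes time 2^n · poly(n, m). A minimal sketch for the CNF (k-SAT) case, using my own clause encoding (literal +i means x_i, -i means ¬x_i):

from itertools import product

def brute_force_sat(n, clauses):
    """Decide a CNF instance by trying all 2^n assignments."""
    for assignment in product([False, True], repeat=n):
        def literal(lit):
            value = assignment[abs(lit) - 1]
            return value if lit > 0 else not value
        if all(any(literal(lit) for lit in clause) for clause in clauses):
            return True          # found a satisfying assignment
    return False

# (x1 OR NOT x2) AND (x2 OR x3) AND (NOT x1 OR NOT x3)
print(brute_force_sat(3, [(1, -2), (2, 3), (-1, -3)]))   # True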

Boolean CSPs as Local Hamiltonians

We can also express Boolean formulas using our notation for quantum systems by promoting the Boolean variables to qubits, x_i → |x_i⟩ ∈ {|0⟩, |1⟩}.

In this setting we construct a Hamiltonian with one local term H_i for each proposition C_i in the Boolean formula. We want to map satisfying assignments to low energy states, so H_i will have energy 0 when C_i is satisfied.

If we have a 3-SAT problem then each clause forbids exactly one configuration of its three variables, e.g. the clause (x_1 ∨ x_2 ∨ x_3) forbids only x_1 = x_2 = x_3 = 0.

Therefore our Hamiltonian assigns a higher energy to spin configurations that include this forbidden assignment:

H_i = |000⟩⟨000|_{1,2,3},

so that H = Σ_{i=1}^{m} H_i is a many-body Hamiltonian whose energy on a computational basis state |x⟩ is the number of clauses violated by the assignment x.
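
A small sketch of this mapping (same hypothetical clause encoding as before; each clause contributes a diagonal projector onto its single forbidden local configuration):

import numpy as np

n = 3
clauses = [(1, 2, 3), (-1, 2, -3)]    # (x1 OR x2 OR x3) AND (NOT x1 OR x2 OR NOT x3)

# Diagonal of H = sum_i H_i in the computational basis: the energy of |x>
# equals the number of clauses that the assignment x violates.
diag = np.zeros(2**n)
for idx in range(2**n):
    bits = [(idx >> (n - 1 - j)) & 1 for j in range(n)]
    for clause in clauses:
        satisfied = any(bits[abs(l) - 1] == (1 if l > 0 else 0) for l in clause)
        diag[idx] += 0 if satisfied else 1

print(diag.min() == 0)    # True exactly when the formula is satisfiable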

Therefore the ground energy of H is zero if and only if the formula is satisfiable.

Boolean Circuits

The Boolean connectives AND, OR, and NOT also form a universal set of logic gates for Boolean circuits, e.g.

(Diagram: a small example circuit built from AND, OR, and NOT gates.)

In this example, the inputs are the variables x_i, the output is c, and a and b are intermediate gate outputs.

Universality means that arbitrary Boolean functions can be expressed using such Boolean circuits. If the circuit outputs a single bit then we call it a Boolean verifier circuit.

The formal model of classical computation we consider today is based on Boolean verifier circuits (instead of Turing machines) because they generalize more naturally to the quantum setting.

Problems and Languages

We are building towards formal definitions of computational complexity classes. For example, P is informally the set of problems that can be solved in a time that scales polynomially with the size of the problem description. This is regarded as the set of problems that can be solved efficiently by a deterministic classical computer.

Before we can formally define P, we need to formally define our notion of what a computational problem is. This is done using the notion of a (formal) language.

Definition (language): consider an alphabet Σ (e.g. Σ = {0, 1}) and the set Σ* of all strings of any length formed by letters of Σ, e.g. Σ* = {ε, 0, 1, 00, 01, 10, 11, 000, ...}.

A subset L ⊆ Σ* is called a language. Languages are very general: for example they could describe the set of all 3-SAT instances with a satisfying assignment, or the set of all local Hamiltonians with ground state energy 0, or the set of triangles with area A, etc.

Circuit Verifiers

Definition (Circuit Verifier): a circuit verifier is a family of Boolean circuits {V_r} satisfying:

1. Each circuit V_r has size poly(r) (size = number of gates in the circuit)

2. The output of each V_r is a single bit, representing YES/NO or accept/reject.

3. (uniformity) There is a simple rule to go from the pattern of gates in V_r to those in V_{r+1}.

As is common in many references, we state property (3) informally to save time. The point of property (3) is to prevent us from embedding the hardness of the problem in the computation of the circuits V_r themselves.

We don’t notate the input size of the verifiers at this stage, because the relation of r to the input size will be crucial to defining different complexity classes.

Classical Complexity Theory

Definition (polynomial time). L ∈ P if there exists a polynomial p and a verifier {V_r} such that for every string x: x ∈ L ⟺ V_{p(|x|)}(x) = 1.

Here L is a language and |x| is the length of a string. P is regarded as the set of problems (languages) that can be efficiently solved (recognized) with a deterministic classical computer.

Although P includes problems that run in time n^k for some huge constant k on inputs of size n, which is not a truly feasible computation, the definition includes these problems for two reasons:

1) History shows that once a problem is known to be in P, subsequent optimizations can lower its run time.

2) Abstracting away from “minor” details allows us to do more with the theory.

Classical Complexity Theory

Now we turn to the class NP. The name refers to “nondeterministic polynomial time.”

Informally, NP is the class of problems with efficiently checkable proofs (which may be hard to find).

If a 3-SAT instance is satisfiable then a description of the satisfying assignment is an efficiently checkable proof of this fact.

Similarly, if a positive integer N has a prime factor less than M then a description of one such factor is an efficiently checkable proof of this fact.

For the class NP it does not matter where the proof comes from, only that it exists. It may as well come from an all-powerful wizard, who hands you a string (a “witness”) to help you see that the fact is true.

Classical Complexity Theory

Definition (nondeterministic polynomial time). L ∈ NP if there exist polynomials p and q, and a verifier {V_r} such that: if x ∈ L, then there exists a witness y ∈ {0,1}^{q(|x|)} with V_{p(|x|)}(x, y) = 1; if x ∉ L, then V_{p(|x|)}(x, y) = 0 for every y.

NP is the set of problems with efficiently verifiable proofs. The string y in the definition is often called a witness. We call the case x ∈ L a YES instance, and say that for every YES instance there is a witness that allows the verifier to read the input and the witness, and decide in poly time that x ∈ L.

The case x ∉ L is a NO instance, and for a NO instance any witness y will be rejected by the verifier. This captures the fact that the verifier cannot be fooled or cheated by a false proof.

Classical Complexity Theory

The fact that the verifier always rejects NO instances is crucial to making NP an interesting definition.

Consider a 3-SAT problem. If it has a satisfying assignment, Merlin can give that assignment to me and I can check it in polynomial time by “plug and chug.” But if there is no satisfying assignment, then there is nothing he could give me that would lead me to falsely conclude the formula is satisfiable. I’ll find a violated clause.

Similarly, I may ask “does the integer N have a factor less than R?”. If the answer is yes then Merlin can give me the factor and I can check it with division. If the answer is no, then no integer he gives me will lead me to falsely conclude that the answer is yes, since I can do division efficiently and see the remainder is non-zero.

Classical Complexity Theory

From the definition, NP contains all the languages in P (any witness will do). Therefore P ⊆ NP.

But some of the problems in NP, like 3-SAT, are believed not to be in P. This is unsurprising since we do not have all-powerful wizard assistants. Therefore we conjecture P ≠ NP.

But this remains one of the greatest unsolved problems in modern science. The Clay Institute offers 1 million USD for a proof of P ≠ NP (note that they do not consider claimed proofs of P = NP).

Complexity theory is full of conjectures that are strongly believed but we are unable to prove. For example, the exponential time hypothesis (ETH) states that there are constants s_k > 0 such that any classical algorithm for solving k-SAT requires time 2^{s_k n}. The strong ETH further asserts that s_k → 1 as k → ∞.

Classical Complexity Theory

Since NP contains lots of languages that are easy to recognize, like the ones in P, we would like some way to talk about the hardest problems in NP.

To do this we introduce a notion of reducing one problem to another.

Definition (poly-time reduction): A reduction from language A to language B is an efficiently computable function f with the property x ∈ A ⟺ f(x) ∈ B.

Why can’t we take B = {0}, let f be the zero function, and reduce every problem to recognizing 0?

The answer is that the reduction needs to be efficiently computable. We can’t map every x ∈ A to the zero bit (and every x ∉ A to something else) unless we already know how to recognize the language A in polynomial time.

Classical Complexity Theory

A language L is called NP-hard if every language L’ in NP is poly-time reducible to L.

Informally, if a problem is NP-hard then it is “at least as difficult as all of the problems in NP.”

What about problems that are NP-hard but are not in NP? The class NEXP is defined analogously to NP, but allows for exponentially large witnesses and verifiers. We can prove NP ⊊ NEXP, so NEXP is a strictly more powerful class.

The point is that many problems in NEXP are NP-hard, but not representative of problems in NP.

Therefore we are most interested in the NP-hard problems that are themselves contained in NP, and these languages are called NP-complete.

Classical Complexity Theory

From these definitions alone, it is not obvious that any NP-complete languages exist.

The theory of NP-completeness was launched by the Cook-Levin theorem, which states that 3-SAT is NP-complete.

After this, many more NP-complete languages were found by reducing 3-SAT to them.

To simplify some details, we will prove today that generalized SAT is NP-complete. The reduction from generalized SAT to 3-SAT is then left as an exercise, or can be found in many texts.

Classical Complexity Theory

Let L ∈ NP. The Cook-Levin theorem is based on a poly-time reduction from L to generalized SAT.

Each x ∈ L will be mapped to a satisfiable SAT instance with poly(|x|) variables and clauses.

Each x ∉ L will be mapped to an unsatisfiable SAT instance with poly(|x|) variables and clauses.

For each gate in the circuit, we can write down a Boolean function on the inputs and outputs that is TRUE if and only if the inputs match the outputs.

Therefore logic gates can be enforced by Boolean propositions. The Cook-Levin proof is based on using propositions to enforce the history of a valid classical computation.

Classical Complexity Theory

Suppose the state of the circuit at time t is the bit string z^t = z^t_1 z^t_2 ... z^t_r, so that z^t_i is the state of the i-th bit at time t.

If two bits i, j pass through a gate g at time t, then the output bit is determined by the input bits, and there is a corresponding Boolean proposition C_{g,t}(z^t_i, z^t_j, z^{t+1}_i, z^{t+1}_j)

That is true if and only if the inputs match the outputs. The entire history of the circuit can be enforced by the conjunction (AND) of many such clauses.
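
For a concrete (hypothetical) example, the constraint that an AND gate with inputs a, b and output c operated correctly is the proposition c ↔ (a ∧ b), which can be written as the conjunction of three OR-clauses. A quick exhaustive check:

from itertools import product

def and_gate_clauses(a, b, c):
    """CNF encoding of the gate constraint  c <-> (a AND b):
       (NOT a OR NOT b OR c) AND (a OR NOT c) AND (b OR NOT c)."""
    return (not a or not b or c) and (a or not c) and (b or not c)

for a, b, c in product([False, True], repeat=3):
    assert and_gate_clauses(a, b, c) == (c == (a and b))
print("clauses hold exactly when the gate output matches its inputs")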

The reason k-SAT is NP-complete only for k ≥ 3 is because universal circuits need gates with at least two inputs and one output.

Classical Complexity Theory

The clauses enforce the correct operation of all the gates in the circuit, and also enforce the initial state to be x. But what about the part of the input that corresponds to the witness y? This is unconstrained.

The deterministic part of the circuit is just a proposition: start from this state, go to this state, then this state, and so on until you output the bit 1.

But the nondeterministic part of the circuit is the witness y. Therefore we ask whether there exists some y that can make the overall proposition true (“take x,y as input, operate the circuit correctly, output 1”)

This quantifier is what makes this a SAT problem. The search for an acceptable witness is the same as the search for a satisfying assignment. This is the Cook-Levin theorem.

Classical Complexity Theory


If the input x has size |x| and the verifier circuit runs for time T = poly(|x|), then our reduction to 3-SAT uses an additional T · poly(|x|) variables (one for each bit of the circuit’s state at each time step) to represent the time evolution of the initial state through the circuit.

This is an example of how polynomial blowups naturally arise from reductions.

Once we know 3-SAT is NP-complete, we can find other NP-complete problems by reducing 3-SAT to them. The theory of NP-completeness arose in the late 60s/early 70s as computer scientists realized the connection between dozens of difficult combinatorial problems they had studied, which did not appear to have efficient algorithms but had concise, efficiently checkable proofs. Karp, “Reducibility Among Combinatorial Problems”, 1972.

Classical Complexity Theory

SUBSET SUM is NP-complete: given a set S of n integers, decide whether there is a non-empty subset of S whose elements sum to zero.

K-COLORING is NP-complete for k ≥ 3: a k-coloring of a graph with n vertices assigns one of k colors to each vertex of the graph, in such a way that no two vertices sharing an edge have the same color. In particular, 3-COLORING is NP-complete.

MAXCUT is NP-complete. The max cut of a graph is the subset of vertices that maximizes the number of edges leaving the set. The decision version (“is there a subset that cuts more than M edges?”) is NP-complete.

FACTORING is an example of a problem that is in NP, believed not to be in P, but also not believed to be NP-complete. Another such problem is GRAPH ISOMORPHISM (deciding whether two graphs are isomorphic).

P also has complete problems, which tend to look like things that are straightforward and laborious. Evaluating a Boolean circuit on a fully specified input (the circuit value problem) is P-complete.

As humans, we try to solve special cases of NP problems by using creativity or luck to get the witness y. If P = NP this creativity would be unnecessary: we could find any proof with a polynomial-length description by plugging and chugging for poly time.

Classical Complexity Theory

Solving 3-SAT is as difficult as solving any problem in NP. But any 3-SAT instance can be expressed as a local Hamiltonian problem: asking whether a particular Hamiltonian (which happens to be diagonal in the Pauli Z basis) has a ground state energy of 0 or not.

Therefore finding quantum ground states is at least as hard, in the worst-case, as performing nondeterministic classical computation.

But what if I have a general quantum Hamiltonian: what polynomial-sized bit string should I use to convince myself that the ground state energy is 0?

In fact, we believe that in the worst cases of this problem no such bit string succinctly describing an entangled quantum ground state exists. This conveys that NP is not all-powerful, which is crucial to making it interesting.

It turns out that the hardness of the local Hamiltonian problem can be characterized by relating it to nondeterministic quantum computation, and this will be our goal in the next lecture.

Part II: Quantum Complexity Theory

Classical Randomized Complexity

Randomized complexity: classical randomized computation can be thought of as Boolean circuits with an unlimited supply of fair coins.

Alternatively, randomized classes can be thought of in terms of counting paths over unconstrained inputs in a deterministic circuit.

Definition (Bounded-Error Probabilistic Polynomial-time). L ∈ BPP if there exist polynomials p and q, and a verifier {V_r} such that, for y drawn uniformly at random from {0,1}^{q(|x|)}: if x ∈ L then Pr_y[V_{p(|x|)}(x, y) = 1] ≥ 2/3; if x ∉ L then Pr_y[V_{p(|x|)}(x, y) = 1] ≤ 1/3.

Classical Randomized Complexity

The acceptance probability for a BPP verifier is an average over all strings in the register containing y.

In defining the probabilistic analogue of NP, we want YES instances to have a witness with acceptance probability at least 2/3. In NO instances, we want the maximum acceptance probability for any witness to be less than 1/3.

In both cases we are interested in the Maximum of an Average. Therefore the probabilistic analogue of NP was named MA.

Theoretical computer scientists like to have fun, and so the inventor of this class decided instead that MA would stand for “Merlin-Arthur.” The idea is that the all-powerful wizard Merlin gives a witness to the mortal Arthur, who proceeds to check it using a BPP machine. ☺

Classical Randomized Complexity

For randomized classes, the probability to output 1 when x ∈ L is called the completeness, and the probability to output 1 when x ∉ L is called the soundness. (These terms come from logic: an axiomatic system is complete if you can prove everything true, and sound if you cannot prove anything false.)

The values of 2/3 and 1/3 are just chosen for convenience; any constants bounded away from ½ would be equivalent, because we can use parallel repetition (running the verifier many times and taking a majority vote) to show that BPP with thresholds (2/3, 1/3) equals BPP with thresholds (1 - 2^{-poly(n)}, 2^{-poly(n)}).

Technically BPP does not have any known complete problems. This issue also occurs for BQP and QMA, the most important quantum complexity classes. The solution is to define promise languages: L = (L_yes, L_no), with L_yes, L_no ⊆ Σ* and L_yes ∩ L_no = ∅,

and only require the BPP machine to decide whether x ∈ L_yes or x ∈ L_no, promised that one of these two possibilities is the case. We don’t care what happens when x is outside of L_yes ∪ L_no.

Quantum Complexity Theory

BQP: Bounded-Error Quantum Polynomial-Time. The quantum analog of P (or BPP), this is the set of problems which we regard as efficiently solvable with a quantum computer.

QMA: Quantum Merlin Arthur. The quantum analog of NP (or MA), the hard problems in this class represent problems we are unlikely to be able to solve, even with a quantum computer.

BPP ⊆ BQP, and Shor’s proof that FACTORING ∈ BQP provided evidence that BPP ≠ BQP.

The other major development that kickstarted quantum complexity theory was Kitaev’s generalization of the Cook-Levin theorem to the quantum setting in 1999.

This cornerstone result states that approximating the ground state energy of a local Hamiltonian to reasonably high precision is QMA-complete. Therefore it is very unlikely that it can be solved efficiently in general, even with a quantum computer.

Quantum Complexity Theory

To formally define BQP, we only need to replace our classical circuit verifiers with quantum circuit verifiers (uniform families of quantum circuits) {V_r}.

The classical inputs are replaced with qubit inputs (the string x is loaded as the computational basis state |x⟩), the Boolean gates are replaced with (local) unitary gates, and the output is the measurement of a single qubit in the computational basis.

Definition (Bounded-Error Quantum Polynomial Time). L ∈ BQP if there exists a polynomial p and a quantum circuit verifier {V_r} such that: if x ∈ L then Pr[V_{p(|x|)} accepts |x⟩] ≥ 2/3; if x ∉ L then Pr[V_{p(|x|)} accepts |x⟩] ≤ 1/3.

For what range of completeness and soundness c, s do we have BQP(c, s) = BQP(2/3, 1/3)?

Quantum Complexity Theory

To go from NP to MA, we allowed the verifier Arthur to be a BPP circuit instead of a P circuit. But QMA does more than just giving Arthur a quantum computer.

In QMA, Merlin is not restricted to sending a classical witness, but rather he can send an arbitrarily complex quantum state, which Arthur plugs into his quantum computer to verify.

Definition (Quantum Merlin Arthur). L ∈ QMA if there exist polynomials p and q, and a quantum circuit verifier {V_r} such that: if x ∈ L then there exists a q(|x|)-qubit witness state |ψ⟩ with Pr[V accepts (|x⟩, |ψ⟩)] ≥ 2/3; if x ∉ L then Pr[V accepts (|x⟩, |ψ⟩)] ≤ 1/3 for every witness state |ψ⟩.

Quantum Complexity Theory

For what range of completeness and soundness c, s do we have QMA(c, s) = QMA(2/3, 1/3)?

It is possible to amplify the completeness and soundness for QMA as we have done for BPP, MA, and BQP, but the proof for QMA is more complicated because we only get one copy of Merlin’s state.

If we are in a YES instance (there exists some state Merlin would like to send to convince us), then Merlin could send us many copies of the state to use for parallel repetition.

But if it is a NO instance, and we allow Merlin to send us a state that he claims to be many copies of the witness, what is to stop him from somehow entangling the copies and gaining an advantage?

Quantum Complexity Theory

Definition (the k-local Hamiltonian problem)

Input: a set of m positive semi-definite Hermitian matrices H_1, ..., H_m, each acting on k of the n qubits. Each matrix entry is specified with poly(n) bits. The operator norms are bounded by a constant for all i. Each matrix comes with a specification of the k qubits on which it acts (out of the total of n qubits). Also given as input are two numbers a, b (each described by poly(n) bits) with b - a ≥ 1/poly(n).

Output: Is the smallest eigenvalue of H = Σ_{i=1}^{m} H_i smaller than a, or larger than b?

Note that we are promised that the ground energy is below a or above b (and b - a is the promise gap). As with the other bounded-error classes, all the complete problems for QMA are promise problems.
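
To make the problem concrete, here is a toy instance (random 2-local terms on a chain; the thresholds a, b are arbitrary choices of mine). Brute-force diagonalization like this takes time exponential in n, which is exactly why the complexity of the problem is interesting:

import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 2

def embed(h, first_site, n):
    """Place a term h acting on k consecutive qubits starting at first_site."""
    k = int(np.log2(h.shape[0]))
    return np.kron(np.kron(np.eye(2**first_site), h),
                   np.eye(2**(n - first_site - k)))

def random_psd(dim):
    """A random positive semi-definite k-local term with operator norm 1."""
    a = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    h = a.conj().T @ a
    return h / np.linalg.norm(h, ord=2)

H = sum(embed(random_psd(2**k), i, n) for i in range(n - k + 1))
ground_energy = np.linalg.eigvalsh(H)[0]    # exact diagonalization: cost exponential in n
a, b = 0.1, 0.2                             # thresholds with b - a >= 1/poly(n)
print(ground_energy)                        # the decision problem: is this < a or > b?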

Whether the ground state energy is non-zero or exactly zero, without a promise, is undecidable. If the promise gap is exponentially small (the precise local Hamiltonian problem) then it is complete for QMA with exponentially small bounded error, which turns out to be equal to PSPACE (2016).

Quantum Complexity Theory

Proof Strategy for the Quantum Cook-Levin Theorem

To put LH in QMA, we challenge Merlin to send us the ground state, and then we check it using phase estimation. In the NO instance he can’t cheat because of the variational principle.

Here we need to show that finding ground state energies can be as difficult as doing nondeterministic quantum computation. To do this we will map an arbitrary quantum circuit with a constrained output and an unconstrained input register (i.e. a QMA verifier) into the ground state of a local Hamiltonian, in such a way that the ground state energy will be sensitive to the acceptance probability of the QMA verifier.

Like the Cook-Levin tableau, these ground states will record the history of a quantum computation in a way that allows us to check validity with local constraints. These are called Feynman-Kitaev history states.

Quantum Complexity Theory

Suppose the QMA verifier runs the sequence of local unitary gates U_1, U_2, ..., U_T on the input and the witness. The history of the computational steps looks like this:

|ψ_0⟩ , |ψ_1⟩ = U_1 |ψ_0⟩ , |ψ_2⟩ = U_2 U_1 |ψ_0⟩ , ... , |ψ_T⟩ = U_T ··· U_1 |ψ_0⟩.

In the classical Cook-Levin proof, we would put each time step on its own set of bits. If we did this in the quantum case, the time steps might look like this:

|ψ_0⟩ ⊗ |ψ_1⟩ ⊗ ··· ⊗ |ψ_T⟩.

Again in the classical proof, each gate acts on a few input bits and a few output bits. We can check that the inputs match the correct outputs using a local constraint that only acts on those bits.

We want local constraints that distinguish the state |ψ_t⟩ ⊗ |ψ_{t+1}⟩ = |ψ_t⟩ ⊗ U_{t+1}|ψ_t⟩ from some other state |ψ_t⟩ ⊗ |φ⟩. What is the problem with this if the |ψ_t⟩’s are n-qubit states and we check them with a k-local operator?

Quantum Complexity Theory

The problem occurs if |ψ_{t+1}⟩ and |φ⟩ are both highly entangled states (which is the generic case).

Local observables are only sensitive to local reduced density matrices. If both states are highly entangled, then in both cases the RDMs will be very close to maximally mixed, and so local observables tell us nothing.

Entanglement fundamentally breaks the proof strategy of the classical Cook-Levin theorem.

However, paraphrasing the comments of Mike and Ike on quantum cloning and QKD, whatever quantum takes away with one hand, it gives us something new and beautiful with the other.

In particular, the solution to locally checking the time steps of a quantum computation is to entangle them, instead of recording them on separate registers in a tensor product.

Quantum Complexity Theory

Distilling the previous example, even locally checking the identity gate is impossible to do across a tensor product. This would require distinguishing |ψ⟩ ⊗ |ψ⟩ from |ψ⟩ ⊗ |φ⟩ using a k-local operator, when |ψ⟩, |φ⟩ are arbitrary n-qubit quantum states.

But notice that if we have the state (|0⟩|ψ⟩ + |1⟩|φ⟩)/√2, then the RDM of the first qubit (a local RDM) tells us a lot about the relation of |ψ⟩ and |φ⟩: its off-diagonal element is ⟨φ|ψ⟩/2.
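
A quick numerical check of that last claim (random states; the basis ordering convention that puts the extra qubit first is my choice):

import numpy as np

rng = np.random.default_rng(1)
n = 4

def random_state(dim):
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

psi, phi = random_state(2**n), random_state(2**n)
chi = np.concatenate([psi, phi]) / np.sqrt(2)        # (|0>|psi> + |1>|phi>) / sqrt(2)

# Reduced density matrix of the extra (first) qubit:
mat = chi.reshape(2, 2**n)
rho = mat @ mat.conj().T

print(np.isclose(rho[0, 1], np.vdot(phi, psi) / 2))  # off-diagonal element = <phi|psi>/2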

This gives us hope for local constraints if we entangle the time steps of the computation,

|Ψ⟩ = (1/√(T+1)) Σ_{t=0}^{T} |t⟩ ⊗ U_t ··· U_1 |ψ_0⟩,

which is the baseline form of what is called a Feynman-Kitaev history state.

Quantum Complexity Theory

The notation |ψ_0⟩ = |x⟩ ⊗ |y⟩ refers to the input and the witness. We are also free to consider the history state of a circuit applied to an arbitrary initial state |ξ⟩,

|Ψ⟩ = (1/√(T+1)) Σ_{t=0}^{T} |t⟩ ⊗ |ξ_t⟩,

where |ξ_t⟩ = U_t ··· U_1 |ξ⟩ is the state of the circuit at time t. Note the Hilbert space is now ℂ^{T+1} ⊗ (ℂ²)^{⊗n}, where the first factor is a qudit of dimension T + 1 called the clock.

If someone hands you this state, then how would you check the output of the circuit? To check the output we could collapse the clock register, and hope we get |t = T⟩, which happens with probability 1/(T+1). Then if we look at the computational register we’ll see the state of the circuit at t = T.
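
A numerical sketch of a history state for a toy 2-qubit circuit (the circuit and basis conventions are my own), confirming that the clock collapses to |T⟩ with probability 1/(T+1):

import numpy as np

# Toy circuit: a Hadamard on qubit 0, then a CNOT with control 0 and target 1.
Hg = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
gates = [np.kron(Hg, np.eye(2)), CNOT]
T = len(gates)

psi = np.zeros(4); psi[0] = 1.0              # initial state |00>
states = [psi]
for U in gates:
    states.append(U @ states[-1])            # |psi_t> = U_t ... U_1 |psi_0>

# History state (1/sqrt(T+1)) sum_t |t> (x) |psi_t>, with the clock as the outer block index.
history = np.concatenate(states) / np.sqrt(T + 1)

p_T = np.linalg.norm(history[T * 4:(T + 1) * 4])**2
print(np.isclose(p_T, 1 / (T + 1)))          # True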

How could we modify the history state to increase the probability of seeing the output of the circuit?

Quantum Complexity Theory

Given the history state corresponding to a verifier circuit, the probability of acceptance (output of 1) is given by the expectation value

Pr[accept] = (T+1) ⟨Ψ| ( |T⟩⟨T| ⊗ |1⟩⟨1|_out ) |Ψ⟩.

These equations are correct, but when we map a QMA verifier to a local Hamiltonian we want to assign a higher energy to inputs that the verifier rejects. Therefore we include an energy penalty for rejection:

H_out = |T⟩⟨T| ⊗ |0⟩⟨0|_out.

Quantum Complexity Theory

Similarly, the Hamiltonian in our reduction will include terms that enforce the input of the computation. If the input is x = x_1 ... x_n, then we include

H_in = Σ_i |0⟩⟨0|_clock ⊗ |1 - x_i⟩⟨1 - x_i|_i    (a projector onto the wrong value of the i-th input bit at clock time 0),

which assigns higher energy to states that do not have the intended input bit at t = 0. This covers the case when the string x is input to the verifier. If instead we define QMA verifiers in terms of circuits that are efficiently computable from x, the input constraint may just check for ancillas in a standard state like |0⟩.

In any case, just as in the Cook-Levin proof, the local constraint terms do not act on the registers that hold the witness at t = 0. Rather, we will design the overall Hamiltonian so that it has lower energy iff an acceptable witness exists.

Quantum Complexity Theory

So far we have written down a few local terms that check the inputs and outputs of a history state. But what about the main problem of creating ground states that look like history states?

Our solution to this problem will be closely related to a much easier problem, which is to construct a Hamiltonian on just the clock register ℂ^{T+1} with a unique ground state given by

|ψ_clock⟩ = (1/√(T+1)) Σ_{t=0}^{T} |t⟩,

which is a uniform superposition state of a particle on a line with T + 1 sites. From physics, this looks like a low energy state of a particle hopping on a line (in a higher energy state we would expect the magnitude and phase of the wave function to oscillate rapidly across space). This propagation Hamiltonian is:

H_prop = Σ_{t=1}^{T} Π_t ,   where   Π_t = (1/2) ( |t⟩ - |t-1⟩ ) ( ⟨t| - ⟨t-1| ).

Quantum Complexity Theory

The terms Π_t are most useful as projectors, but we can expand them out as

Π_t = (1/2) ( |t⟩⟨t| + |t-1⟩⟨t-1| - |t⟩⟨t-1| - |t-1⟩⟨t| ).

Consider an arbitrary state |φ⟩ = Σ_{t=0}^{T} φ_t |t⟩, for which the expectation value is

⟨φ| H_prop |φ⟩ = (1/2) Σ_{t=1}^{T} |φ_t - φ_{t-1}|²,

and so the states that minimize ⟨H_prop⟩ have amplitude distributions that are as “flat as possible.” This is one of many ways to see that the uniform superposition is the ground state of H_prop.
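
A quick numerical confirmation (the clock size T = 8 is an arbitrary choice) that this hopping Hamiltonian has the uniform superposition as its zero-energy ground state:

import numpy as np

T = 8
H_prop = np.zeros((T + 1, T + 1))
for t in range(1, T + 1):
    # (1/2)(|t><t| + |t-1><t-1| - |t><t-1| - |t-1><t|)
    H_prop[t, t] += 0.5
    H_prop[t - 1, t - 1] += 0.5
    H_prop[t, t - 1] -= 0.5
    H_prop[t - 1, t] -= 0.5

vals, vecs = np.linalg.eigh(H_prop)
uniform = np.ones(T + 1) / np.sqrt(T + 1)
print(np.isclose(vals[0], 0.0), np.isclose(abs(uniform @ vecs[:, 0]), 1.0))  # True True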

Quantum Complexity Theory

To go from a single particle to the history state, define a unitary

W = Σ_{t=0}^{T} |t⟩⟨t| ⊗ U_t U_{t-1} ··· U_1    (with the empty product at t = 0 equal to the identity).

Append a register of n qubits in the state |ψ_0⟩ to our single-particle ground state, and note that

W ( |ψ_clock⟩ ⊗ |ψ_0⟩ ) = (1/√(T+1)) Σ_{t=0}^{T} |t⟩ ⊗ U_t ··· U_1 |ψ_0⟩

is a ground state of W (H_prop ⊗ I) W†. The rotated propagation terms have the form

W (Π_t ⊗ I) W† = (1/2) ( |t⟩⟨t| ⊗ I + |t-1⟩⟨t-1| ⊗ I - |t⟩⟨t-1| ⊗ U_t - |t-1⟩⟨t| ⊗ U_t† ).

Quantum Complexity Theory

Because of the unitary equivalence, we can move freely between the single hopping particle Hamiltonian and the propagation Hamiltonian that enforces history state ground states.

To reduce the notation, redefine H_prop to act on ℂ^{T+1} ⊗ (ℂ²)^{⊗n}, with

H_prop = Σ_{t=1}^{T} (1/2) ( |t⟩⟨t| ⊗ I + |t-1⟩⟨t-1| ⊗ I - |t⟩⟨t-1| ⊗ U_t - |t-1⟩⟨t| ⊗ U_t† ).

By itself, H_prop only enforces the correct propagation of the gates in the circuit, but it does not check the input. Therefore its ground space is 2^n-fold degenerate (one history state for each initial state of the computational register). Combining it with H_in

singles out an input, and enforces the corresponding history state to be the unique ground state.
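
To see the whole structure at once, here is a toy assembly of H_in + H_prop + H_out for a one-qubit, one-gate circuit with input |0⟩. This is only the structure of the construction on a tiny example, not Kitaev's actual 5-local version, and the conventions are my own. The ground energy is ~0 when the circuit accepts (the gate X flips the output to 1) and strictly positive when it rejects (the identity gate leaves the output at 0):

import numpy as np

def toy_kitaev(U):
    """H_in + H_prop + H_out for a 1-qubit circuit with single gate U,
    input |0> at clock time 0, and acceptance = measuring 1 at clock time T = 1.
    Basis ordering: clock qubit (dimension 2) tensor computational qubit."""
    P0 = np.diag([1.0, 0.0])                   # |0><0|
    P1 = np.diag([0.0, 1.0])                   # |1><1|
    hop = np.zeros((2, 2)); hop[1, 0] = 1.0    # |1><0| on the clock
    H_prop = 0.5 * np.kron(np.eye(2), np.eye(2)) \
             - 0.5 * (np.kron(hop, U) + np.kron(hop.T, U.conj().T))
    H_in = np.kron(P0, P1)        # penalize input qubit = 1 at clock time 0
    H_out = np.kron(P1, P0)       # penalize output qubit = 0 at clock time T
    return H_in + H_prop + H_out

X = np.array([[0.0, 1.0], [1.0, 0.0]])
print(np.linalg.eigvalsh(toy_kitaev(X))[0])          # ~0     (accepting circuit)
print(np.linalg.eigvalsh(toy_kitaev(np.eye(2)))[0])  # ~0.29  (rejecting circuit)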

Quantum Complexity Theory

The ground energy of H_in + H_prop is 0. We now want to add H_out and show that the ground energy changes by an amount that is related to the acceptance probability of the circuit.

Standard perturbation theory would tell us that the first order shift in the ground energy caused by perturbing the Hamiltonian with H_out corresponds to the expectation of H_out in the original ground state, and is therefore related to the acceptance probability.

But if we leave the part of the input register containing the witness unconstrained, H_in + H_prop would have a degenerate ground space (spanned by the history states for all possible witness inputs). This exponential degeneracy is too much for perturbation theory to handle.

Quantum Complexity Theory

We begin with a QMA verifier that has completeness and soundness amplified exponentially close to 1 and 0. In the YES instance, we can check variationally that the ground energy of H = H_in + H_prop + H_out is very near zero. In the NO instance, we need to show the ground energy is pushed up by some amount that is at least 1/poly(n).

To solve this problem and analyze the ground energy of H = H_in + H_prop + H_out in a NO instance, Kitaev proved the following “geometrical lemma.”

Lemma. Let A, B be positive semi-definite operators whose zero eigenspaces (kernels) satisfy ker A ∩ ker B = {0}, and let v denote the smaller of the minimum non-zero eigenvalues of A and B. Then

A + B ≥ 2 v sin²(θ/2),

where θ is the angle between ker A and ker B.

Quantum Complexity Theory

In our setting, we take A = H_in + H_prop and B = H_out. Therefore the ground energy of H = A + B is lower bounded by 2 v sin²(θ/2).

To compute the angle between the kernels, we use the fact that the cosine of the angle is the maximum inner product between unit vectors in the respective kernels.

This question asks, “what is the maximum overlap between valid history states, and states that are accepted at time t = T, given that this is a NO instance?” The answer is (1/T) times 1 – soundness.

The end result is that the ground energy is at least 1/poly(T) in a NO instance (Kitaev’s analysis gives a bound scaling like 1/T³), which separates the YES and NO cases.

Quantum Complexity Theory

So far we have given a reduction from QMA verifier acceptance probabilities to the ground energy of a Hamiltonian. But is our Hamiltonian local?

If we represent the clock states as binary strings, then we need at most O(log T) = O(log n) qubits to represent the clock. Therefore terms like |t⟩⟨t-1| ⊗ U_t are O(log n)-local.

This is where Kitaev’s 1999 proof stopped. He showed that the O(log n)-local Hamiltonian problem is QMA-complete. (Note that this also requires showing that the O(log n)-local Hamiltonian problem is in QMA.)

Quantum Complexity Theory

A bit about history: the form of H_prop was introduced by Feynman around 1985 in the context of classical computers built from microscopic components. He called the clock a “pointer” and did not look at ground states, but rather the time evolution generated by such a Hamiltonian.

Feynman’s gates were classical and reversible, in part because quantum logic gates were not yet widely conceptualized.

None of the quantum computer scientists read Feynman’s old papers, so it was left to the physicist Kitaev to find this gem.

As the story goes, Kitaev agreed to give a talk on “Quantum NP” and worked out the details (defining quantum NP and proving the quantum Cook-Levin theorem) on the long flight over to give the talk.

Quantum Complexity Theory

The reduction to a 5-local Hamiltonian came soon after Kitaev’s original proof, and it is based on encoding the clock in unary: |t⟩ → |1 1 ... 1 0 0 ... 0⟩ (t ones followed by T - t zeros, on T clock qubits).

A new part of the Hamiltonian ensures that all clock qubit states with energy below 1 are valid unary encodings.

The propagation terms are now encoded as follows: for 1 < t < T,

(1/2) ( |110⟩⟨110| + |100⟩⟨100| )_{t-1, t, t+1} ⊗ I - (1/2) ( |110⟩⟨100|_{t-1, t, t+1} ⊗ U_t + |100⟩⟨110|_{t-1, t, t+1} ⊗ U_t† ),

which suffices to make them 5-local (each term acts on 3 clock qubits plus the 2 computational qubits of the gate U_t). See “Quantum NP: A survey” for more unary encoding details.

Quantum Complexity Theory

There is a general method to reduce k-local Hamiltonians to r-local Hamiltonians with r < k using “perturbative gadgets.” These gadgets are r-local Hamiltonians whose low-energy physics resembles the target k-local Hamiltonian. Some terms in the gadget have norm poly(n), while others have norm 1. The desired higher-order interaction terms then appear at higher orders in perturbation theory.

Using these kinds of tricks, it’s possible to show that the local Hamiltonian problem is QMA-complete even for Hamiltonians of the form: