On the relation between quantum computation and classical statistical mechanics

by

Joseph Geraci

A thesis submitted in conformity with the requirements for the degree of Doctor of Philosophy, Graduate Department of Mathematics

Copyright © 2008 by Joseph Geraci

Abstract

On the relation between quantum computation and classical statistical mechanics

Joseph Geraci
Doctor of Philosophy
Graduate Department of Mathematics
University of Toronto
2008

We provide a quantum algorithm for the exact evaluation of the Potts partition function for a certain class of restricted instances of graphs that correspond to irreducible cyclic codes. We use the same approach to demonstrate that quantum computers can provide an exponential speed up over the best classical algorithms for the exact evaluation of the weight enumerator polynomial for a family of classical cyclic codes. In addition, we provide an efficient quantum approximation algorithm for a function (the signed-Euler generating function) closely related to the Ising partition function, and demonstrate that this problem is BQP-complete. We accomplish the above for the Potts partition function by using a series of links between Gauss sums, classical coding theory, graph theory and the partition function. We exploit the fact that there exists an efficient quantum approximation algorithm for Gauss sums and the fact that this problem is equivalent in complexity to evaluating the discrete logarithm. A theorem of McEliece allows one to turn the Gauss sum approximation into an exact evaluation of the Potts partition function. Stripping the statistical-physics interpretation from this result leaves one with the result for the weight enumerator polynomial. The result for the approximation of the signed-Euler generating function was accomplished by fashioning a new mapping between quantum circuits and graphs, which provided a way of relating the cycle structure of graphs with quantum circuits. Using a slight variant of this mapping, we present the final result of this thesis: a way of testing families of quantum circuits for their classical simulatability. We thus provide an efficient way of deciding whether a quantum circuit provides any additional computational power over classical computation, and this is achieved by exploiting the fact that planar instances of the Ising partition function (with no external magnetic field) can be computed efficiently classically.

Acknowledgements

I would like to first thank my supervisor Daniel Lidar¹, who provided me with opportunity beyond my expectations and a project with so much potential. The first portion of this thesis would surely have ended up in the Pacific Ocean if it were not for him, as he continued to have faith in my vision long after I had lost it. He was an excellent supervisor who led me but left me to create as I saw fit. I not only learned a great amount of physics from him, but his example led me to learn and aspire to what it takes to be a great scientist. And he gave me the opportunity to dwell in the subconscious of the USA, better known as Los Angeles. The experience still lingers within me as the residue of a warm dream with images of beaches, dolphins, mountains, freeways, guns and palm trees.

My choice to enter graduate level mathematics was due to my Master's supervisor, I.M. Sigal, who saw in me the potential to pursue a career in mathematics. I would surely never have continued in mathematics if it wasn't for his help and inspiration. He was an excellent mentor and he inspired me to finally push myself. His greatness still inspires me and my gratitude goes out to him.

Another professor of mathematics who inspired me was Man-Duen Choi. He was an excellent instructor and I regret never taking full advantage of having access to a mathematician of such a calibre when I was an undergraduate, being that I was a lazy sod. He was there for me when, during my Ph.D., I was going through some personal difficulties, and he helped me immensely. I thank him very much.

One mathematician was with me throughout my whole mathematical education, and that is Professor Catherine Sulem. Recently she described me during my earlier days as a "papillon". The truth is that I was undisciplined and distracted as an undergrad. Dr. Sulem's guidance, however, helped me to avoid disaster. Her excellence in mathematics and teaching was also a great inspiration to me. I am indebted to her.
I would like to thank my fellow graduate students and the other friends I have made (here and in California) who have made this experience a great one. Many thanks go to Itamar Halevy and Ravi Minhas for their guidance and friendship. I must give special thanks to the whole staff of the University of Toronto mathematics department, especially Marie Bachtis and Ida Bulat. They have put up with me gracefully for years and have helped me in so many ways that it is impossible to thank them enough. My parents, Peter and Franca Geraci, have always encouraged me in my studies. They, along with my brother Tony Geraci and his wife Mary, provided a great support system for me while I was away. Their visits were appreciated more than they could know. I thank them all. Lastly, a special thanks to my amazing and generous mother-in-law LouAnn Leon, who enabled me to an extent that I had never expected. I thank her with all of my being and consider myself blessed to have her in my life. This and any future accomplishments I enjoy will be imbued with her spirit.

¹ Thanks go to the people at ARO/DTO (grant W911NF-05-1-0440) for their financial support.

Dedication

This thesis is dedicated to my lovely wife Summer Nudel, without whom I would have never been able to cross the desert, both literally and figuratively, that lay ahead of me. I will always be indebted to you for the transformation that my life underwent over the last ten years and I thank you and love you with full surrender.

Contents

1 Introduction
  1.1 Quantum Computation
    1.1.1 Models of Quantum Computation
  1.2 A few definitions from Complexity Theory
  1.3 Statistical Physics
    1.3.1 Ising spin model
    1.3.2 The Potts Model
  1.4 My contributions

2 Review of previous work
  2.1 Relation between the Potts partition function and knot invariants
  2.2 Graphs, quantum circuits and classical simulations

3 A review of some key concepts from coding theory
  3.1 Introduction
  3.2 Cyclic Codes
    3.2.1 A Pause for Cyclotomic Cosets
    3.2.2 An application of cyclotomic cosets
  3.3 Codes continued
  3.4 Gauss sums and their relationship to the weight spectrum of linear codes

4 An evaluation of the Weight Enumerator via Quantum Computation
  4.0.1 A Theorem on the evaluation of certain Weight Enumerators
  4.0.2 Overview of the Algorithm to Obtain the Exact Weight Enumerator of a Code in ICQ!

5 A quantum algorithm for the Potts partition function
  5.1 Structure of this chapter
  5.2 A Theorem about QC and instances of the Potts Model
    5.2.1 Main Theorem
    5.2.2 Background
    5.2.3 The relationship to linear codes
    5.2.4 Testing the graph for membership in the ICCC! class
    5.2.5 Proof of the Main Theorem
    5.2.6 Proof of the Corollary
    5.2.7 Reducing the Computational Cost of the Algorithm via Permutation Symmetry
  5.3 Classical and Quantum Complexity of the Scheme
  5.4 Detailed Summary
  5.5 Examples and Discussion
    5.5.1 Example
    5.5.2 Degenerate Cyclic Codes
  5.6 Conclusions, Future Directions and Critical Analysis

6 Additive Approximation of the Signed-Euler Generating Function
  6.1 Introduction
    6.1.1 Generating function of Eulerian subgraphs
  6.2 QWGTs and their relation to the Ising partition function
  6.3 A relationship between hypergraphs and quantum circuits via QWGTs
    6.3.1 The Mapping
  6.4 BQP-completeness
    6.4.1 Examples
  6.5 Future work: Approximating the Ising partition function
  6.6 Conclusion

7 On classically simulatable quantum circuits
  7.1 Introduction
    7.1.1 Definitions from Graph Theory
  7.2 Once again - The Mapping
  7.3 Determination of the edge interaction distribution and consequences
  7.4 Proof of the main theorem
    7.4.1 "The Test" and consequences for the structure of quantum circuits
    7.4.2 Quantum circuits corresponding to a class of sparse graphs
  7.5 The Next Step
    7.5.1 On the existence of edge interactions
    7.5.2 Computing the Ising partition function
  7.6 Conclusion and Critical Analysis
  7.7 Proof of the Lemma

8 Conclusion

9 Appendix
  9.1 A Classical Algorithm for the Computation of Coset Leaders and Coset Size
  9.2 Matroids
    9.2.1 Generator matrix of a cyclic code and the cycle matroid matrix
  9.3 Characters
  9.4 Discrete Log
  9.5 Samples of Mathematica Notebooks

Bibliography

Chapter 1

Introduction

1.1 Quantum Computation

A large portion of the people who will ever attempt to read any part of this thesis will most likely know very little about quantum computation. Thus I feel inclined to include an introduction on the subject, brief as it may be. "Quantum Computation" has become an umbrella term for several pursuits, including algorithms and complexity, models of quantum computation, and theoretical and experimental realizations of quantum information processing systems. Few will disagree that it also includes "Quantum Information Theory", which contains efforts geared towards understanding quantum cryptography and the generalization of classical information to quantum information. We will not discuss quantum cryptography except to mention that practical implementations of quantum key distribution systems are already in use and that MagiQ Research Labs already offers commercial products. Of course there is still much work to be done on this front, but much progress has been made. A full-scale quantum computer, however, is another matter: as things look now, an exceptional amount of work remains before a commercial device is available.

The work in this thesis assumes that one has access to a full-blown ideal quantum computer. This thesis includes most of the work I have done in understanding the link between classical statistical physics, which I will define below, and quantum computation. This includes possible approaches to use quantum computers to compute quantities more efficiently than with classical resources, as well as a direct comparison between the quantum circuit model and certain instances from statistical physics. In other words, I work in algorithms and complexity theory. People in this business attempt to understand how powerful ideal quantum computers will be, and they try to discover situations where having quantum resources will provide a distinct advantage. The earliest practical examples are Shor's algorithm for prime factorization [98] and Grover's search algorithm [72]. Given a number, Shor's algorithm provides its prime decomposition exponentially faster than any known classical algorithm, and given any item that you wish to find in some database, Grover's algorithm locates it with a quadratic speed up over the best classical algorithm. Interestingly, it is known that a classical computer can never achieve this quadratic speed up, and also that no quantum algorithm can do better [19].

Enough advertising! Let us define what a quantum computer is. Most simply, it is a computational device that relies specifically on quantum mechanical phenomena to perform its operations. The machine that I am writing this thesis on requires transistors for its operation, and in order to understand how transistors work one needs quantum mechanics. This is because transistors are made from semiconductors, and semiconductors cannot be fully understood via classical physics. So is my Mac a quantum computer? No, of course not. Though it requires a device that works by the principles of quantum mechanics, the actual information processing is classical. This means that each bit in a classical computer is either in an "off" or "on" state, either "0" or "1". A two bit computer, for example, is in one of the four following states: 00, 01, 10, or 11. A 64 bit computer has access to 2^64 states, and at any one moment it is in exactly one of those states. Imagine for a moment that your bits were states of some physical object governed by quantum mechanics, like an electron. The laws of quantum mechanics tell us that a two-state spin system like the electron (or spin-1/2 particle, for you physicists) can be in the state

|ψ⟩ = α|↑⟩ + β|↓⟩.

The parameters α and β indicate to what degree one state is favored over the other. |ψ⟩ is then in a superposition of the up and down states, and thus in some respect a combination of the two. This is a very important resource: one has an object which is in some strange hybrid state, not in one state or the other until a measurement is performed on it. Quantum mechanics tells us that after some measurement process the state collapses into a classical state, i.e., into |↑⟩ with probability |α|^2 or into |↓⟩ with probability |β|^2. If you replace |↑⟩ with |0⟩ and |↓⟩ with |1⟩, you have a superposition of 0's and 1's. This means that if our computer had access to n electrons for its bits, for example, then this computer can be in a superposition of all n-bit states, i.e.,

|ψ⟩ = ∑_{i ∈ {0,1}^n} c_i |i⟩,

where i ranges over all 2^n bit strings of length n and the c_i ∈ ℂ satisfy ∑_{i ∈ {0,1}^n} |c_i|^2 = 1. We call each bit in this case a qubit. Thus a 3 qubit computer can be in some superposition of all 8 possible states, whereas a classical computer will be in only one of them. When we perform a measurement on our quantum computer the superposition of all the states will collapse to a single state, and thus you retrieve a readable state.
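The collapse rule just described can be simulated classically for a single qubit. The following Python sketch is my own illustration, not part of the thesis; the amplitudes 0.6 and 0.8 are an arbitrary choice satisfying the normalization condition.

```python
import random

def measure(alpha, beta, shots=100_000):
    """Simulate repeated projective measurement of |psi> = alpha|0> + beta|1>.

    Each shot collapses to outcome 0 with probability |alpha|^2 and to
    outcome 1 with probability |beta|^2 (the Born rule)."""
    p0 = abs(alpha) ** 2
    counts = {0: 0, 1: 0}
    for _ in range(shots):
        counts[0 if random.random() < p0 else 1] += 1
    return counts

# alpha = 0.6, beta = 0.8 gives outcome probabilities 0.36 and 0.64.
counts = measure(0.6, 0.8)
print(counts[0] / 100_000, counts[1] / 100_000)  # close to 0.36 and 0.64
```

Each individual shot yields a single readable classical bit; the superposition is only visible statistically, over many preparations and measurements.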

Mathematically, a qubit lives in the Hilbert space ℂ^2 and a system of n qubits lives in (ℂ^2)^⊗n. Thus the states of different qubits are tensored together, and by convention the two qubit state |01⟩ is in fact |0⟩ ⊗ |1⟩. Before going on to discuss the difficulties involved we need to introduce one more very important quantum resource: quantum entanglement. Entanglement is best illustrated by an example, but it is a very special feature of the quantum world that has to do with how certain states share information even though they are separated in space-time. There is, in some sense, a non-locality to these states. We will not even attempt to go into the fascinating but tangled philosophical world of entanglement. Mathematically, an n qubit state |ψ⟩ is entangled if and only if it cannot be written as a tensor product |φ_1⟩ ⊗ |φ_2⟩ ⊗ ··· ⊗ |φ_n⟩. For example, |01⟩ is obviously not entangled, but try as you might, no such decomposition exists for the state

(1/√2)(|01⟩ + |10⟩).
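For two qubits the "try as you might" can be made mechanical: a state ∑ c_ij |ij⟩ factors as a tensor product exactly when its 2×2 amplitude matrix [[c00, c01], [c10, c11]] has rank 1, i.e. zero determinant. The sketch below is my own illustration of this standard criterion, not a construction from the thesis.

```python
import math

def is_product(c00, c01, c10, c11, tol=1e-12):
    """Return True if the two-qubit state with these amplitudes is a
    product state. Rank 1 of the amplitude matrix is equivalent to a
    vanishing determinant c00*c11 - c01*c10."""
    return abs(c00 * c11 - c01 * c10) < tol

# |01> = |0> (x) |1> is a product state:
print(is_product(0, 1, 0, 0))      # True
# The state (|01> + |10>)/sqrt(2) admits no such decomposition:
s = 1 / math.sqrt(2)
print(is_product(0, s, s, 0))      # False
```

For more than two qubits the analogous test is rank-based as well (a Schmidt decomposition across each bipartition), but the determinant form above only applies to the 2×2 case.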

Now, keep this state in mind and consider the following scenario. Let's say we create this two qubit state in the laboratory, we give one of the qubits to Alice (qubit 1) and the other to Bob (qubit 2), and somehow the qubits are sufficiently isolated from the environment. Now Alice and Bob leave the laboratory and go their separate ways, but they do not "look" at their qubits and so the state is preserved. This type of state is a resource for quantum communication and computation. A good discussion of this is given in [85]. Let's say that Alice is ready to measure her qubit and after measurement she obtains the state |0⟩. She will know that Bob's qubit will collapse to the state |1⟩. If she instead got |1⟩ then Bob's qubit will collapse to |0⟩. This is precisely what the state (1/√2)(|01⟩ + |10⟩) tells you will happen: if Alice "sees" a 0 (i.e., 0 is in the first position) then Bob must "see" a 1 (the second position), and vice versa. Superposition and entanglement are the quantum mechanical phenomena that I was referring to. Note that it is these resources taken together that are important. Classical systems may have access to superposition, but it is entanglement that allows efficient physical representations of superpositions [104]. Thus any machine that can make use of these two resources directly to compute some quantity is a quantum computer. Realistically, however, these two resources that provide so much additional power are also the source of much of the difficulty when attempting to fashion an actual quantum computer. Quantum states are very fragile in the sense that the environment may destroy them. Quantum error correction is the business of protecting quantum systems for a sufficiently long time so that this destructive decoherence does not interfere with the computation. This is an extremely important part of quantum computation, as it is what will ensure fault-tolerant computation.
We have the theoretical underpinnings of a machine that will use superposition and entanglement, but how do we extract any information from this system? An algorithm for this type of system will involve a series of steps where the computer processes the quantum state, evolving it from an initial state to a final one, after which a measurement is performed. The algorithm will have to ensure that with some high probability the "correct" state is favored, i.e., the state that corresponds to the correct answer. In Grover's search algorithm, for example, the algorithm is designed so that with some high probability the correct item is located. One may repeat the algorithm several times to shrink the uncertainty to a very small quantity. Thus quantum computation is a variant of probabilistic computation. The goal of a quantum algorithm is to amplify the probability of obtaining the correct answer. This is done by using the fact that quantum states can interfere, i.e., the complex amplitudes (for example α and β in the example above) are altered as states evolve. This is where the wave analogy of quantum states is very useful, as you can now imagine interference being much like the constructive and destructive interference that water waves are subject to. Now, once one has an algorithm which is guaranteed to provide the correct answer with sufficiently high probability, one repeats the algorithm a few times and, due to the Chernoff bound [85], the probability of an error decreases exponentially.
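The repeat-and-take-a-majority-vote argument can be simulated directly. In the sketch below (my own illustration, not from the thesis) the "algorithm" is just a coin that is right with probability 2/3, the conventional bounded-error threshold; the observed error rate of the majority vote shrinks rapidly as the number of repetitions grows, as the Chernoff bound predicts.

```python
import random
from collections import Counter

def noisy_algorithm(p_correct=2/3):
    """Stand-in for a bounded-error algorithm whose correct answer is 1;
    it returns the right bit with probability p_correct."""
    return 1 if random.random() < p_correct else 0

def majority_vote(repeats):
    """Run the algorithm `repeats` times (odd, so no ties) and vote."""
    runs = [noisy_algorithm() for _ in range(repeats)]
    return Counter(runs).most_common(1)[0][0]

def error_rate(repeats, trials=2000):
    """Empirical probability that the majority vote is wrong."""
    return sum(majority_vote(repeats) != 1 for _ in range(trials)) / trials

for r in (1, 11, 31):
    print(r, error_rate(r))  # error shrinks as r grows
```

With a single run the error sits near 1/3; with 31 repetitions it drops to a few percent, and in general it decays exponentially in the number of repetitions.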

Now it is time for an analogy for lay people. Warning: you cannot actually do the following with a quantum computer; it is just an analogy of how quantum and classical computers differ! Let's take the stock market as our playground, and let us imagine that we have two players, Joe and Summer. Joe will have access to one stockbroker, and Summer will have access to a whole group of stockbrokers that can exist in a quantum superposition. Now, Joe sends his stockbroker out there and tells him to make some money. Joe's stockbroker will do his best and decide what to do with the money. If Joe is lucky and his stockbroker is relatively competent he will make some money, but the chances that he will make a huge gain in the short term are slim. Summer, on the other hand, has this "magical" team on her side that exists in a superposition. Summer's goal is to make a huge amount of money, of course. The trick for her is to find a way for her stockbrokers to go out into the market and simultaneously invest in as many different ways as possible, and then to get them all to interfere with each other, the way ripples on a pond behave, so that the big winner is amplified, i.e., her stockbroker state at the end will be in a superposition such that the highest probability amplitude will be the coefficient of the "big winner". Thus, when Summer's stockbroker state collapses at the end (after she measures her quantum state of stockbrokers) she will end up with the one who found the "great" investment opportunity with her investment intact (with some high probability), and all the rest will vanish.

This analogy is meant to highlight what is happening in a quantum algorithm. We have access to a superposition of an exponential number of states, and these states can interfere with each other in interesting ways as the amplitudes are complex. And entanglement is what allows quantum computers to have the ability to represent superpositions efficiently. Having this we may evolve an n qubit state so that the desired state will result with some high probability after the measurement at the end of the computation.

1.1.1 Models of Quantum Computation

There are several models of quantum computation: the quantum circuit model, topological quantum computation, adiabatic quantum computation and measurement based quantum computation, to mention just a few. We shall give a very brief description of each, leaving the quantum circuit model for last as this thesis contains work that utilizes this model directly. For topological quantum computation [87] the goal is to have a device that is governed by a 2+1 dimensional topological quantum field theory populated by special two dimensional particles known as non-abelian anyons. The device will be able to manipulate n such anyons such that there will be some starting configuration

(s_1, s_2, . . . , s_n) that evolves to some final configuration (f_1, f_2, . . . , f_n) in such a way that the evolution forms a "braid". The topological class of all such trajectories is equivalent to the braid group on n strands. Thus the device evolves from some starting quantum state to another. A measurement at the end will collapse the state to some final readable output. This style of quantum computer will have the advantage of being fault tolerant, because if some "noise" were to deform the pathway of an anyon it would make no difference to the computation, since a deformed path is still homotopic to any other path in its topological class. In other words, we are using topological data, and in topology a deformation of an object does not matter. Coffee mugs and donuts should come to mind. The reader familiar with the Jones polynomial should note that the power of quantum computation is exactly equivalent to the ability to compute additive approximations of the Jones polynomial [25]. The main setback with this method is that non-abelian anyons are thus far only theoretical entities and have not yet been observed in nature. The power of this model of computation is equivalent to that of the quantum circuit model [86].

Adiabatic quantum computation is based on the quantum adiabatic theorem. Let’s say you have a quantum system and assume that the system is in the energy ground state given by the eigenstate ψ(x, t0). Now form the Hamiltonian

H(s) = (1 − s)H_1 + sH_2

and assume that it has a unique ground state for all s. Then the quantum adiabatic theorem ensures that if the system evolves slowly enough, i.e., if you let s vary from 0 to 1 slowly (such that s = 0 corresponds to t_0 and s = 1 corresponds to t_1), then the quantum system will end up in the ground state ψ(x, t_1) of H_2.

Actually, the system ends up very close to it in the l_2 norm. So if the system starts in the ground state of H_1, it will end up in the ground state of H_2 [26]. Computationally this can be useful in the following way. Imagine you design a quantum system so that your initial Hamiltonian is known and its ground state is easy to prepare, i.e., one can efficiently set the quantum system to be in the ground state of H_1.
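"Slowly enough" is governed by the spectral gap of H(s) along the interpolation, which can be computed in closed form for a toy two-level system. The choices below (H_1 = −σ_x, whose ground state is easy to prepare, and H_2 = −σ_z, whose ground state |0⟩ plays the role of the answer) are my own illustration, not a construction from the thesis.

```python
import math

def eigenvalues(s):
    """Spectrum of H(s) = (1-s)*(-sigma_x) + s*(-sigma_z), i.e. the
    2x2 matrix [[-s, -(1-s)], [-(1-s), s]]; eigenvalues are +/- r."""
    r = math.hypot(s, 1 - s)
    return -r, r

def spectral_gap(s):
    lo, hi = eigenvalues(s)
    return hi - lo

# The adiabatic run time scales with the inverse of the minimum gap
# along the path; here the gap is smallest at s = 1/2.
gaps = [spectral_gap(k / 10) for k in range(11)]
print(min(gaps))  # sqrt(2), attained at s = 0.5
```

For hard problem instances the minimum gap of the interpolating Hamiltonian can shrink very quickly with system size, which is exactly why the required evolution time, and hence the algorithm's cost, can blow up.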

Further, your design ensures that the ground state of H_2 encodes the answer to the problem you wish to "compute". You evolve slowly and eventually you will end up very close to that ground state. You measure and obtain your answer. One open problem concerns how to make adiabatic quantum computation fault tolerant. Proposals exist at the time of writing; for example my supervisor, Daniel Lidar, has proposed a method that can guard against decoherence and certain control errors [29]. But unlike the circuit model and the measurement model, there is no complete theory of fault tolerance for adiabatic quantum computation. Again, it is known that the power of adiabatic quantum computation is equivalent to the quantum circuit model [5].

Measurement based quantum computation (MQC) is a relatively new and surprising approach [101], often referred to as the "one-way quantum computer." There are actually several models of MQC ([96] for example) but I shall only briefly describe the one-way model. Imagine that you have a device where you are able to create and store a highly entangled state. One can imagine this state as a grid, where the vertices of the grid represent qubits. Once you have this state, an algorithm is "run" on it, meaning that a sequence of one qubit measurements is made on the state. The name "one-way" refers to the fact that the sequence of measurements destroys the entanglement in the state. You can imagine having a bingo card in front of you, with the measurements likened to blotting out the called numbers. Don't take this analogy too far. In any case, the record of all the results of your measurements is the actual computation, i.e., you deduce your "answer" from the sequence of measurement results.
This model is fully capable of keeping up with the quantum circuit model and does indeed provide a resource for universal quantum computation, as long as the initial state is sufficiently entangled.

The quantum circuit model is the easiest to understand. This is because the architecture of this model is what we are used to thinking of when we think of circuits. A flow of information comes into the system, goes through "gates" that transform it, and out comes an answer. With quantum computation there are some differences. Now imagine several parallel "wires" coming into the circuit that carry the qubits, i.e., each wire represents a qubit. Each wire comes into contact with some operation and then flows out of it. Please refer to the following figure.

Figure 1.1: A three gate circuit.

Here we have a diagram of a simple three gate quantum circuit. As illustrated, each gate can act on any number of qubits. Gate B, for example, acts only on qubits one and two. A very important type of gate for computation, classical or quantum, is the controlled gate: a gate that performs an action on a target qubit n depending on the state of a control qubit k. The most famous of these is the controlled-NOT gate. It has the action that if the control qubit is |1⟩ then the target qubit will be "flipped", i.e., a NOT operation will be performed on the target qubit. The controlled-NOT's fame is due to a theorem that says that if you have this two qubit gate and you are able to perform arbitrary one-qubit operations, then you have universal quantum computation [85]. What does this really mean? Note the following important mathematical facts about quantum circuits.

1. A quantum circuit is equivalent to a unitary matrix U and hence it is reversible.

2. The matrices corresponding to gates that are connected in series, like gates A, B and C, are multiplied together to obtain U.

3. The matrix of a gate that consists of several gates which lie one on top of the other is equal to the tensor product of the corresponding matrices.

4. Control gates are equal to the direct sum of the identity matrix and the gate that operates on the target qubit.

5. A measurement may be made at any time on any qubit, but this process is not reversible unless it reveals no information about the system. Measurement is, in general, the interface between quantum and classical information, and all intermediate measurements can be moved to the end of the circuit [85].
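Facts 2-4 can be checked concretely for a tiny circuit. The Python sketch below is my own illustration (it fixes the convention that the leftmost tensor factor is qubit one): it builds the unitary of "NOT on qubit one, then controlled-NOT from qubit one to qubit two" and applies it to |00⟩.

```python
def matmul(a, b):
    """Plain matrix product (fact 2: serial gates multiply)."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def kron(a, b):
    """Tensor product of matrices (fact 3: stacked gates tensor)."""
    return [[a[i][j] * b[k][l] for j in range(len(a[0])) for l in range(len(b[0]))]
            for i in range(len(a)) for k in range(len(b))]

I = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]            # the NOT gate

# Fact 4: a controlled gate is the direct sum of the identity and the
# target gate; for controlled-NOT this is diag(I, X):
CNOT = [[1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 1, 0]]

# X on qubit one (tensored with I on qubit two), then CNOT:
# |00> -> |10> -> |11>.
U = matmul(CNOT, kron(X, I))
state00 = [[1], [0], [0], [0]]  # column vector for |00>
print(matmul(U, state00))       # [[0], [0], [0], [1]], i.e. |11>
```

U here is a permutation matrix, hence unitary and reversible, which is fact 1 in miniature; a real simulator would of course use complex amplitudes and a linear algebra library rather than nested lists.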

Every quantum circuit is equal to some unitary matrix. Thus any gate set that is universal for quantum computation will have to be able to approximate every unitary matrix to arbitrary precision. This does not mean it can be done efficiently for any unitary matrix [85]. Those that can be approximated efficiently correspond to computations that quantum computers can handle practically, i.e., the problem scales only polynomially in the instance size. We say these efficiently solvable decision problems are in BQP which stands for Bounded-Error Quantum Polynomial-Time.

1.2 A few definitions from Complexity Theory

In this thesis one will come across acronyms such as NP and BPP, which are complexity classes. Here we provide a very brief and informal introduction to complexity theory. A complexity class is a set of problems that are collected according to their difficulty. A problem's difficulty usually refers to the amount of time (asymptotically) that it would require to solve instances of that problem, but it could also be characterized by how much memory would be required. Below we define a few (of the vast number of) complexity classes that are relevant to this thesis. There are different tasks that may characterize a complexity class. For example, there are decision classes, where the problem is to determine a YES or NO answer to some question. On the other hand, some complexity classes are characterized by problems that count the number of YES answers to some question. These are referred to as decision and counting classes respectively. For more formal definitions and an excellent resource for complexity theory see [21].

Definition. P (Polynomial Time) is the class of decision problems that a classical computer can solve in polynomial time.

Definition. NP (Non-Deterministic Polynomial Time) is the class of decision problems for which a classical computer can verify the correctness of a YES answer in polynomial time.

Whether P = NP is one of the most coveted open questions in mathematics, and it remains completely open despite decades of effort.

Definition. BPP (Bounded-error Probabilistic Polynomial Time) is the class of decision problems that a probabilistic classical computer can solve in polynomial time with probability of error at most 1/3. These classical machines are allowed to have access to randomness to solve their problems, hence the term “prob- abilistic”.

P ⊆ BPP, but it is not known whether the inclusion is proper; the two classes may be equal.

Definition. BQP (Bounded-error Quantum Polynomial Time) is the class of decision problems that a quantum computer can solve in polynomial time with probability of error at most 1/3.

Think of this class as the quantum analogue of P. It is hoped that the inclusion P ⊆ BQP is proper, and even though it is considered highly unlikely that NP ⊆ BQP, it is believed that BQP contains problems outside of NP.

Definition. RP (Random Polynomial Time) is the class of decision problems for which a probabilistic classical computer, running in polynomial time, returns YES on a YES instance with probability at least 1/2 and never returns YES on a NO instance. If it returns YES then the answer is certainly YES, but if it returns NO the answer may still be YES. Thus a NO answer may be incorrect.

Definition. PP (Probabilistic Polynomial Time) is the class of decision problems for which a probabilistic classical computer gives the correct YES or NO answer with probability strictly greater than 1/2.

It is known that P ⊆ BPP ⊆ BQP ⊆ PP. The distinction is that the probabilities in PP may be so close to 1/2 that a majority vote over many samples will not be able to distinguish between the YES and NO instances, whereas the bounds that BPP and BQP are subject to allow one to distinguish YES from NO instances with far fewer iterations, i.e., the gap away from 1/2 allows for this distinguishability.

Definition. #P (Sharp P) is the counting class associated with problems in NP. Its problems are characterized by functions which count the number of YES instances (solutions) of an NP problem.

Even though #P is a class of counting problems, one can compare it with the classes above by defining a new class P^{#P}, the symbol for a classical computer with access to an oracle (or “wizard”) that can access the power of a machine that can solve #P problems. In this case we have P^{NP} ⊆ P^{#P}, where P^{NP} is the class of problems solvable in polynomial time by a classical computer that has access to an oracle that can solve NP problems. Again, one does not know if this is a proper inclusion.

Definition. PSPACE (Polynomial Space) is the class of all decision problems which may be solved by a classical computer with access to memory that grows polynomially in the length of the problem instance.

It is known that P ⊆ NP ⊆ P^{#P} ⊆ PSPACE and P^{NP} ⊆ PSPACE [21]. One hope is that the techniques offered by quantum computation may help to resolve certain open problems in complexity theory. One goal of the work in this thesis is to better understand the standing of BQP amongst these other classes.
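As a small illustration of the amplification argument above (not from the thesis; the specific numbers are arbitrary choices), the following sketch simulates majority voting for a machine with a BPP-style constant error bound versus one with a PP-style gap that is exponentially small in the instance size:

```python
# Illustrative sketch: majority voting amplifies a bounded gap away from 1/2,
# as enjoyed by BPP/BQP machines, but achieves essentially nothing when the
# gap may shrink exponentially with instance size, as is allowed in PP.
import random

def majority_vote(p_correct, repetitions, trials=2000, seed=1):
    """Estimate how often a majority vote over `repetitions` independent runs
    of a machine that answers correctly with probability p_correct is right."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        correct = sum(rng.random() < p_correct for _ in range(repetitions))
        if correct > repetitions / 2:
            wins += 1
    return wins / trials

# A BPP-style machine: error at most 1/3, i.e., correct with probability 2/3.
# A fixed number of repetitions already yields high confidence.
bpp_confidence = majority_vote(2 / 3, repetitions=51)

# A PP-style machine on a 60-bit instance: correct with probability
# 1/2 + 2^{-60}.  The same 51 repetitions leave the vote at chance level.
pp_confidence = majority_vote(0.5 + 2**-60, repetitions=51)

print(bpp_confidence, pp_confidence)  # roughly 0.99 versus roughly 0.5
```

With a constant gap, a fixed number of repetitions suffices by a Chernoff-bound argument; with an exponentially small gap, the number of repetitions needed to see the bias grows exponentially.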

1.3 Statistical Physics

Classical statistical physics, or statistical mechanics, is a branch of physics that applies probability theory to large ensembles of particles (or other objects). It would be hopeless to try to track the dynamics of an individual member of an ensemble, as the number of particles in a sample of gas, for example, is staggering. The whole point is to make macroscopic predictions about systems consisting of microscopic entities, and the main assumption is that in equilibrium such a system is not biased about which configuration it will be in. Your kitchen, for example, can be messy in many different ways. Each way is considered a configuration, and an unattended but used kitchen will drift into a mess. It may even drift into orderliness, but as there are so many ways for it to be messy, the orderly configuration is unlikely to occur by chance.

The title of this thesis is slightly misleading. I have been studying two particular simple models of statistical physics and their connections to quantum computation, namely the Ising model and the Potts model. The Ising model was originally meant to be a simple model for ferromagnets, like iron, but it was eventually shown that under the right interpretation it can also be a model for simple gases [69]. The Ising model is defined on a graph made from |E| edges and |V| vertices (or nodes). Each vertex represents a particle's spin and each edge carries an interaction energy connecting two spins. The spins can be either up or down, and each interaction can be either ferromagnetic or anti-ferromagnetic. Associated with the whole model is an energy functional called the Hamiltonian, which we define below. A ferromagnetic interaction energetically favors spins that are the same, or aligned (the energy is minimized for an edge whose vertices carry equal spins), and an anti-ferromagnetic interaction energetically favors spins pointing in opposite directions.
One goal is to find which state, i.e., which spin configuration, minimizes the Hamiltonian. The other goal is to calculate a quantity called the partition function Z. Knowing this quantity is equivalent to having complete statistical knowledge of the system in equilibrium. Z plays a central role in statistical physics, since many thermodynamic quantities can be derived from it [69]. The Potts model is a generalization of the Ising model in that a spin is not confined to one of two states but can be in one of q ∈ Z states. Thus, the Ising model defined on a graph with |V| = 10 can be in 2^10 states, whereas the 3-state Potts model can be in 3^10 states. The Ising and Potts models are excellent toy models for studying the phenomenon of phase transitions. A phase transition is a sharp change in some characteristic of a physical system which occurs under a small change in some variable. The most obvious example of this is the change of state that occurs as the temperature drops below freezing and water changes from liquid to solid. Note that despite the simplifications inherent in the Ising and Potts models, calculating the ground state and Z is intractable in most cases [118]. A brief mathematical introduction to the partition function of these two models follows.

1.3.1 Ising spin model

Let Γ = (E, V) be a finite, arbitrary undirected graph with |E| edges and |V| vertices. In the Ising model, each vertex i is associated with a classical spin (σ_i = ±1) and each edge (i, j) ∈ E with a bond (J_{ij} = ±J). The Hamiltonian of the spin system is

H(σ) = −∑_{(i,j)∈E} J_{ij} σ_i σ_j .   (1.1)

The probability of the spin configuration σ = {σ_i}_{i=1}^{|V|} in thermal equilibrium at temperature T is given by the Gibbs distribution: P(σ) = (1/Z) W(σ), where the Boltzmann weight is W(σ) = exp[−βH(σ)], β = 1/kT, k is the Boltzmann constant, and Z is the partition function:

Z_{J_{ij}}(β) = ∑_{σ} exp[−βH(σ)].   (1.2)

The sum is taken over all possible spin configurations, and so for the 10-vertex example given above there are 2^10 terms in the sum. There are various situations where the sum can be simplified or even solved analytically, but in most cases this is not possible. In fact, for general non-planar graphs even good approximations of Z are unlikely, since it is known that such approximations would have serious and unexpected consequences in complexity theory. For example, it is known that if there is a fully polynomial approximation scheme for the fully anti-ferromagnetic Ising model, then NP = RP (random polynomial time). This is believed to be highly unlikely [118].

1.3.2 The Potts Model

Let Γ = (E, V) be as above. The q-state Potts model is a generalization of the Ising model in which a q-state spin resides on each vertex. In the Ising model q = 2, whereas in the Potts model q ≥ 2. The edge connecting vertices i and j has weight J_{ij}, which is also the interaction strength between the corresponding spins. The Potts model Hamiltonian for a particular spin configuration σ = (σ_1, ..., σ_{|V|}) is

H(σ) = −∑_{⟨i,j⟩} J_{ij} δ_{σ_i σ_j} ,   (1.3)

where the summation is over nearest neighbors, and where δ_{σ_i σ_j} = 1 (0) if σ_i = σ_j (σ_i ≠ σ_j). Thus only nearest-neighbor parallel spins contribute to the energy. The probability P(σ) of finding the spin in the Potts model in some configuration σ at a given temperature T is, as in the Ising model, given by the Gibbs distribution

P(σ) = e^{−βH(σ)} / Z(β).   (1.4)

Again, the normalization factor is the partition function

Z_{J_{ij}}(β) = ∑_{σ} e^{−βH(σ)}.   (1.5)

When β|H(σ)| ≪ 1 for all configurations, the probability distribution becomes flat: P(σ) ≈ 1/Z(β), so that at high temperatures randomness dominates.

The partition function can be rewritten as a polynomial:

Z(β) = ∑_{σ} e^{β ∑_{⟨i,j⟩} J_{ij} δ_{σ_i σ_j}} = ∑_{σ} ∏_{⟨i,j⟩} e^{βJ_{ij} δ_{σ_i σ_j}}

= ∑_{σ} ∏_{⟨i,j⟩} (1 + v_{ij}(β) δ_{σ_i σ_j}),   (1.6)

where

v_{ij}(β) = e^{βJ_{ij}} − 1.   (1.7)

Now let us consider the case when the interactions J_{ij} are a constant J. Then the Hamiltonian (1.3) of this system can be written as

H(σ) = −J|U(σ)|,   (1.8)

where U(σ) is the subset of edges whose vertices have the same spin in the configuration σ, and |U(σ)| is the number of such edges. If we let

y = e^{−J/kT}   (1.9)

we can write the Potts partition function as

Z(y) = ∑_{σ} y^{−|U(σ)|}.   (1.10)

This form will be very important when presenting the algorithm for the Potts partition function in this thesis.
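As a sanity check of the equivalences above, the following brute-force sketch (illustrative only; the graph, coupling, and temperature are arbitrary choices, not from the thesis) evaluates the Potts partition function of a triangle in the three forms (1.5), (1.6) and (1.10) and confirms that they agree:

```python
# Brute-force check that the direct Boltzmann sum (1.5), the product
# form (1.6), and, for constant coupling J, the y-form (1.10) all give
# the same Potts partition function on a tiny graph.
import itertools
import math

edges = [(0, 1), (1, 2), (0, 2)]  # a triangle graph
n, q = 3, 3                       # 3 vertices, 3-state Potts model
J, beta = 1.0, 0.7                # arbitrary constant coupling and 1/kT

v = math.exp(beta * J) - 1.0      # Eq. (1.7) with constant J
y = math.exp(-beta * J)           # Eq. (1.9) with beta = 1/kT

Z_boltzmann = Z_product = Z_y_form = 0.0
for sigma in itertools.product(range(q), repeat=n):
    same = sum(sigma[i] == sigma[j] for i, j in edges)  # |U(sigma)|
    H = -J * same                                       # Eq. (1.3)/(1.8)
    Z_boltzmann += math.exp(-beta * H)                  # Eq. (1.5)
    prod = 1.0
    for i, j in edges:
        prod *= 1.0 + v * (sigma[i] == sigma[j])        # Eq. (1.6)
    Z_product += prod
    Z_y_form += y ** (-same)                            # Eq. (1.10)

assert abs(Z_boltzmann - Z_product) < 1e-9
assert abs(Z_boltzmann - Z_y_form) < 1e-9
print(Z_boltzmann)
```

All three sums agree term by term, since for each configuration e^{βJ·|U(σ)|} = (1 + v)^{|U(σ)|} = y^{−|U(σ)|}; the brute-force loop simply makes this explicit.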

1.4 My contributions

The motivation for the work presented in this thesis was the following question my supervisor posed to me: For what instances of the Ising or Potts model can a quantum computer provide a speed up over classical machines for the evaluation of Z? An “instance” in this case is the following data: the graph, the edge interactions, and the type of evaluation, i.e., types of approximations or exact evaluations. Most of the work that I have done in an attempt to answer this question is included in this thesis and is as follows.

1. Using connections that I have found between classical coding theory, graph theory and the partition function, I constructed a class of graphs for which a quantum computer can return the exact value of Z more efficiently (polynomial speed up) than any known classical algorithm. Consequently, I have identified a class of classical linear codes for which a quantum computer can find the weight spectrum faster than any known classical algorithm, with an exponential speed up in one of the parameters.

2. By constructing a new mapping between quantum circuits and hypergraphs, I give an additive approximation scheme for a function that is closely related to the Ising partition function. With some work, this may be used to obtain an algorithm for Z itself, even for non-planar instances.

3. By using the same mapping, I demonstrate that any family of quantum circuits whose members correspond to a certain family of planar graph instances can be classically simulated. The same structure also provides another candidate for an approximation scheme for the Ising partition function.

The algorithm for the Potts partition function has the potential of being extended to a larger class of graphs, and to an approximation method for an even larger class of graphs. This part of my thesis may be interpreted as providing the graph instances of the Potts model for which the resources required for Shor's algorithm [98] translate into a quantum speedup for the exact evaluation of the Potts partition function. This is due to the fact that my methods rely on approximations of either Gauss sums or Zeta functions ([115, 64]); these resources are equivalent in complexity to the ability to compute discrete log and may be used to implement prime factorization algorithms [80]. As for the work dealing with the Ising Z, a quote of Polya comes to mind: “Look around when you have got your first mushroom or made your first discovery: they grow in clusters.” There seems to be some potential lurking here for really understanding the exact power of quantum computation in terms of the classical Ising model, but work will have to be done to dig this “mushroom” up. In another direction, the methods used here may be extended to knot invariants like the Kauffman bracket, or to graph invariants.

Chapter 2

Review of previous work

A wealth of results has been obtained since the dramatic early results [98, 72] on quantum speedups relative to classical algorithms. A relatively unexplored field is quantum algorithms for problems in classical statistical mechanics. The earliest contribution to this subject [31] obtained a modest speedup in that it avoided critical slowing down [103] in the problem of sampling from the Gibbs distribution for Ising spin glass models. Subsequently Ref. [28] raised the question of providing a classification of classical statistical physics problems in terms of their quantum computational complexity. In this thesis we shed light on this classification by considering the problem of evaluating the Potts model partition function Z for classical spin systems on graphs. It is known that under particular conditions even certain approximations for Z are unlikely to be efficient, barring an NP = RP surprise [118]. In chapter 5 we present a class of sparse graphs (which we call ICCC!) for which exact quantum evaluation of Z is possible with a polynomial speedup in the size of the graph and an exponential speedup in the number of per-spin states, over the best classical algorithms available to date.

The Potts partition functions of graphs in ICCC! are equivalent to the weight enumerators of certain linear codes. The evaluation of weight enumerators (in this case) involves the evaluation of Gauss sums or Zeta functions. The evaluation of Gauss sums is in general hard, and equivalent to the calculation of discrete log [116]. This suggests that ICCC! includes cases that are unlikely to be solved as efficiently on classical computers.

2.1 Relation between the Potts partition function and knot invariants

There is a rich inter-relation between classical statistical mechanics and topology, in particular the theory of the classification of knots. The first such connection was established by Jones [114], who discovered the second knot invariant (the Jones polynomial, a Laurent polynomial; the first being the Alexander polynomial) during his investigation of the topological properties of braids [113]. It is known that the classical evaluation of the Jones polynomial is #P-hard [44].

A direct connection between knots and models of classical statistical mechanics was established by Kauffman [66]. Knot invariants are, in turn, also tightly related to graph theory; e.g., the graph coloring problem can be considered an instance of evaluation of the Kauffman bracket polynomial, via the Tutte polynomial [66, 92]. The q-state Potts partition function on a graph Γ is connected to the Tutte polynomial T_Γ for the same graph via

Z_Γ(v) = q^n T_Γ((q + v)/v, v + 1),   (2.1)

where, as in Eq. (1.7), v + 1 = e^{βJ}. This means that the Potts partition function is equivalent to some easily computed function times the Tutte polynomial along the hyperbola H_q = {(x, y) : (x − 1)(y − 1) = q}. But for planar graphs, when q > 2 the Tutte polynomial is #P-hard to evaluate at points along H_q [118]. For a review of the connection between the Potts partition function and the various polynomials mentioned above, see [66, 118] and also [110, 28]. It immediately follows from Eq. (2.1) and complexity results concerning the Tutte polynomial that the evaluation of the Potts partition function is also #P-hard. It is not known whether there is an fpras (fully polynomial randomized approximation scheme) [118] for the q-state fully ferromagnetic Potts partition function, but it is known that if there is an fpras for the fully anti-ferromagnetic Potts partition function then NP = RP [118], and therefore it seems unlikely that an fpras will be found for this case.
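The Potts–Tutte correspondence can be checked numerically on a small example. The sketch below (illustrative, not from the thesis) uses the triangle graph K_3, whose Tutte polynomial is the known T(x, y) = x² + x + y, together with the common normalization Z(q, v) = q·v^{n−1}·T_Γ((q+v)/v, v+1) for a connected graph on n vertices; conventions for the prefactor vary across the literature, so this particular normalization is an assumption here rather than the thesis's Eq. (2.1) verbatim:

```python
# Numerical check of the Potts-Tutte correspondence on the triangle K3,
# assuming the standard prefactor q * v^(n-1) for a connected n-vertex graph:
#   Z(q, v) = q * v^(n-1) * T((q + v)/v, v + 1)
import itertools

edges = [(0, 1), (1, 2), (0, 2)]
n = 3

def potts_Z(q, v):
    """Z = sum_sigma prod_edges (1 + v*delta), as in Eq. (1.6)."""
    total = 0
    for sigma in itertools.product(range(q), repeat=n):
        w = 1
        for i, j in edges:
            w *= 1 + v * (sigma[i] == sigma[j])
        total += w
    return total

def tutte_K3(x, y):
    # The Tutte polynomial of the triangle graph.
    return x**2 + x + y

for q in (2, 3, 4):
    for v in (1, 2, 3):
        lhs = potts_Z(q, v)
        rhs = q * v**(n - 1) * tutte_K3((q + v) / v, v + 1)
        assert abs(lhs - rhs) < 1e-9, (q, v, lhs, rhs)
print(potts_Z(3, 1))  # 66
```

For instance, at q = 3 and v = 1 the brute-force sum gives Z = 66, matching 3 · T(4, 2) = 3 · 22.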

The first connection between knots and quantum field theory was established by Witten, who showed that the Jones polynomial can be expressed in terms of a topological quantum field theory [42]. Recently this connection was extended to the possibility of efficient evaluation of the Jones polynomial by Freedman and co-workers, after showing that quantum computers can efficiently simulate topological quantum field theory [86]. More specifically, there are recent results demonstrating the efficacy of quantum computers in approximating the Jones polynomial at primitive roots of unity [87, 25, 97]. In Ref. [87] tools from topological quantum field theory [42] were utilized and it was shown that approximating the Jones polynomial at primitive roots of unity is BQP-complete, but no explicit algorithm was provided. More recently in [25], a combinatorial approach was taken which yielded an explicit quantum algorithm and which extended the results in [87] to all primitive roots of unity. This leads one to hypothesize that quantum computers will also be efficient at estimating partition functions. Indeed, an immediate corollary of the results in [87, 25, 97, 71] is that the Potts partition function over any planar graph can be approximated efficiently on a quantum computer at certain imaginary temperatures (see also [28]). This follows by noting that in order to obtain an equality between the Potts partition function and the Jones polynomial (up to multiplication by an easily computed

function), the Jones variable t and the temperature T must be related by t = −e^{±J/k_B T} [66]. With t a root of unity (t = e^{2πi/r}) we then find:

T = ± iJr / (k_B π(2 + r)),   r ∈ N.

This result is of interest mainly in light of quantum Monte Carlo simulations [53], where one retrieves real time dynamics from a simulation in terms of imaginary time, via analytic continuation. Perhaps a similar extrapolation can be achieved here between imaginary and real temperature dynamics. While this is interesting, here we are concerned with thermodynamics, and hence evaluations of the Potts partition function at physically relevant, real temperatures.

Most closely related to our work is the very recent result due to Aharonov et al. [24] who – generalizing Temperley-Lieb algebra representations used in [25] – provided a quantum algorithm for the additive approximation of the Potts partition function (and other points of the Tutte plane) for any planar graph with any set of weights on the edges. These results are the most impressive to date in the context of approximate evaluations of the Potts partition function, but are also subject to certain caveats. To quote from the abstract of Ref. [24]: “Additive approximations are tricky; the range of the possible outcomes might be smaller than the size of the approximation window, in which case the outcome is meaningless. Unfortunately, ruling out this possibility is difficult: If we want to argue that our algorithms are meaningful, we have to provide an estimate of the scale of the problem, which is difficult here exactly because no efficient algorithm for the problem exists!”. And: “The case of the Potts model parameters deserves special attention. Unfortunately, despite being able to handle non-unitary representations, our methods of proving universality seem to be non-applicable for the physical Potts model parameters. We can provide only weak evidence that our algorithms are non-trivial in this case, by analyzing their performance for instances for which classical efficient algorithms exist. The characterization of the quality of the algorithm for the Potts parameters is thus left as an important open problem.” Finally, quoting from Section 1.5 of Ref. [24]: “Proving anything about the complexity of our algorithm for the Potts model remains a very important open problem. It is still possible that this case of the Tutte polynomial, with our additive approximation window, can be solved by an efficient classical algorithm.” To summarize, Ref.
[24] leaves as an open problem the complexity of physical instances (real temperature, positive partition function) under the restriction of an additive approximation. Nor is it clear whether the algorithm found in Ref. [24] provides a quantum speedup. The authors state: “We believe that the main achievement here is that we demonstrate how to handle non-unitary representations, and in particular, we are able to prove universality using non-unitary matrices.”

Recently Ref. [82] gave a scheme for studying the partition function of classical spin systems, including the Potts and Ising models. Their approach involves transforming the problem of evaluating the partition function into the evaluation of a probability amplitude of a quantum mechanical system, and then using classical techniques to extract the pertinent information. In essence, their method involves moving into a quantum mechanical formalism to obtain a classical result. The scheme is therefore classical and not a quantum algorithm.

We also mention some work that began years ago but has been unavailable to the scientific community until recently. In [6], Nayak, Schulman and Vazirani give a quantum algorithm for sampling from the Gibbs distribution for the fully ferromagnetic Ising model. They provide a method to approximate the following state:

∑_{σ ∈ {+1,−1}^n} √(p_G(σ)) |σ⟩.

Here p_G(σ) is the probability for the fully ferromagnetic Ising model over the graph G to be in the state σ. Once one has such a state, a measurement of this state is essentially a sample from a distribution that is close to the Gibbs distribution. Thus, this algorithm provides an approximate simulation of the classical Ising model on a quantum computer. This problem is a special case of the one addressed in [31], where the Gibbs distribution was sampled on a quantum computer for the case of Ising spin glasses. Moreover, the Fourier transform technique used in [6] was generalized in [28] to the Ising spin glass case, and forms the basis for some of the results presented in the second part of this thesis.

In addition, two purely classical results should be mentioned here. One is a state of the art result by Hartmann [7], who provides an algorithm which is well suited to large ferromagnetic systems for either the Potts or Ising model. We do not know the exact complexity of this algorithm, however. The approach taken in our work is to utilize the connection between classical coding theory and the partition function. For this reason we mention the classical algorithm given in [54] for calculating the Zeta function of certain curves. This is also a state of the art algorithm and it can be used to find the Potts partition function via the scheme we present in this thesis, though it is slower than using quantum resources.

A quantum algorithm for finding the Zeta function of a curve is given in [64]. One could replace the role that the Gauss sum estimation [115] plays in our scheme with this quantum algorithm for the Zeta function. It seems that using Gauss sums is more efficient but further work is required to make this conclusive.

Finally, we mention that it was recently shown that one can construct interesting classes of graphs for which the Potts model can be computed analytically [102]. These so-called n-ladder graphs are recursively defined.

2.2 Graphs, quantum circuits and classical simulations

In this thesis I also present a result on the connection between graphs and quantum circuits, as well as how this mapping bears on the classical simulatability of a certain family of quantum circuits. From its early days quantum computing was perceived as a means to efficiently simulate physics problems [106, 109], and a host of results have been derived along these lines for quantum [111, 32, 13, 36, 20, 30, 47, 14, 86, 67, 1, 38, 94] and classical systems [31, 60, 33, 11, 10, 81, 28, 24, 56, 107, 52, 8]. A natural problem relating quantum computation and statistical mechanics is to understand for which instances quantum computers provide a speedup over their classical counterparts for the evaluation of partition functions [31, 28]. For the Potts model, results obtained in [24] provide insight into this problem when the evaluation is an additive approximation. We provided a class of examples for which there is a quantum speedup when one seeks an exact evaluation of the Potts partition function [56].

Relationships between quantum computation and graph theory are emerging beyond applications to graph theoretic problems. For example, in the one-way quantum computation setting, quantum graph states and their relationship to entanglement are already well known [76]. Here one sees a correspondence between a graph and a quantum state in which the vertices correspond to the qubits of the state and the edges correspond to pairs of qubits. For quantum circuits one may make use of a graph to represent the circuit or architecture [27]. For example, if we let Γ = (V, E) be a graph, then the set of vertices V may represent the individual qubits (or input into the circuit), and the edges E correspond to any pair of qubits that may be acted upon by a two-qubit gate. In another related approach, Ref. [51] instructs one to “regard each gate as a vertex, and for each input/output wire add a new vertex to the open edge of the wire.” Using this correspondence, I.L. Markov et al. prove statements about families of quantum circuits that are classically simulatable.

In chapter 6 I address the connection between quantum computing and statistical mechanics in the context of the Ising model partition function Z. I present a mapping between graph (and edge interaction distribution) instances of the Ising model and quantum circuits that I first introduced in [55]. Using this mapping I prove a theorem about a certain class of quantum circuits which may be classically simulated via these same constraints. Restricted classes of quantum circuits which can be efficiently simulated classically have been known since the Gottesman-Knill theorem [85]. This theorem states that a quantum circuit using only the following elements can be simulated efficiently on a classical computer: (1) preparation of qubits in computational basis states, (2) quantum gates from the Clifford group (Hadamard, controlled-NOT, and Pauli gates), and (3) measurements in the computational basis. Such “stabilizer circuits” on n qubits can be simulated in O(n log n) time using the graph state formalism [9]. Other early results include [70], where the notion of matchgates was introduced and the problem of efficiently simulating a certain class of quantum circuits was reduced to the problem of evaluating the Pfaffian. This was subsequently shown to correspond to a physical model of noninteracting fermions in one dimension, and extended to noninteracting fermions with arbitrary pairwise interactions [39, 15, 35] (see further generalizations in Refs. [16, 63]), and to Lie-algebraic generalized mean-field Hamiltonians [105]. Criteria for efficient classical simulation of quantum computation can also be given in terms of upper bounds on the amount of entanglement generated in the course of the quantum evolution [48].

A result that is more directly related to mine is given in Ref. [107], but within the measurement-based quantum computation (MQC) paradigm. MQC relies on the preparation of a multi-qubit entangled resource state known as the cluster state. It is known that MQC with access to cluster states is universal for quantum computation. Reference [107] considers planar code states, which are closely related to cluster states in that a sequence of Pauli measurements applied to the two-dimensional cluster state can result in a planar code state. MQC with planar code states consists of a sequence of measurements {M_1, M_2, ..., M_n, M}, where the M_i are one-qubit measurements and M is a final measurement done on the remaining qubits in some basis which depends on the results of the M_i. Reference [107] demonstrates that planar code states are not a sufficient resource for universal quantum computation (and can be classically simulated). This fact is attributed to the exact solvability of the Ising partition function on planar graphs. My results complement the work in [107], as they are provided in terms of the circuit model, and generalize to Ising model instances that correspond to graphs which are not necessarily subgraphs of a 2D grid. Other conceptually related work uses the connection between graphs and quantum circuits and the formalism of tensor network contractions to show that any polynomial-sized quantum circuit of 1- and 2-qubit gates which has log depth, and in which the 2-qubit gates are restricted to act at bounded range, may be classically efficiently simulated [51, 63, 95]. A tensor network is a product of tensors associated with the vertices of some graph G such that every edge of G represents a summation (contraction) over a matching pair of indices. We also use a relationship between quantum circuits and graphs, but one whose construction is quite different [55]. Finally, Ref.
[112] connects matchgates and tensor network contractions to notions of efficient simulation.

Chapter 3

A review of some key concepts from coding theory

3.1 Introduction

This is the first mathematically technical chapter. We shall introduce the mathematical background required for two of the results presented in this thesis, namely a way to use an ideal quantum computer to evaluate the weight enumerator polynomial from coding theory for a specific class of linear codes, and the intimately related Potts partition function for a related class of graphs.

3.2 Cyclic Codes

Definition. We shall denote a finite field Fq, where q is a power of some prime number, as GF (q), or a Galois field of order q.

Recall that a Galois field always contains p^k elements, where p is a prime number, and any two finite fields with p^k elements are isomorphic. Also recall that GF(q)* = GF(q) − {0} is a cyclic group, i.e., it is generated by one of its non-zero elements. Thus, any element in GF(q)* may be written as γ^m, where γ is a generator of GF(q)* and m ∈ Z^+.

For completeness we introduce inner products over finite fields. Take the finite field GF(q), where q is a power of a prime and is also a square (i.e., of the form v^2 for some integer v). Then GF(√q) is a subfield of GF(q). Define the conjugate s̄ of s ∈ GF(q) as

s̄ = s^{√q}.

Then the inner product of two vectors (u_1, ..., u_n) and (v_1, ..., v_n) in GF(q)^n is given by

(u_1, ..., u_n) · (v_1, ..., v_n) = ∑_{i=1}^{n} u_i v̄_i .

Definition. A linear code C is a k-dimensional subspace of the vector space F_q^n and is referred to as an [n, k] code. The code is said to be of length n and of dimension k.

Definition. A linear code C is a cyclic code if for any word (c_0, c_1, ..., c_{n−1}) ∈ C, also (c_{n−1}, c_0, c_1, ..., c_{n−2}) ∈ C. If C contains no subspace (other than {0}) which is closed under cyclic shifts, then it is irreducible cyclic.

Definition. A ring is a set R which is an abelian group (R, +) with 0 as the identity, together with an associative multiplication (R, ×) that has an identity element.

Definition. An ideal I is a subset of a ring R which is itself an additive subgroup of (R, +) and has the property that when x ∈ R and a ∈ I, then xa and ax are also in I.

Definition. A principal ideal is an ideal in which every element is of the form ar, where r ∈ R.

Thus, a principal ideal is generated by the one element a, and a principal ideal ring is a ring in which every ideal is principal. There is an isomorphism between powers of finite fields F_q^n and a certain ring of polynomials. Let (x^n − 1) be the principal ideal in the polynomial ring F_q[x] generated by x^n − 1. Then the residue class ring F_q[x]/(x^n − 1) is isomorphic to F_q^n, since it consists of the polynomials

{a_0 + a_1 x + ··· + a_{n−1} x^{n−1} | a_i ∈ F_q, 0 ≤ i < n}.

Taking multiplication modulo x^n − 1, we can make the following identification:

(a_0, a_1, ..., a_{n−1}) ∈ F_q^n  ⟷  a_0 + a_1 x + ··· + a_{n−1} x^{n−1} ∈ F_q[x]/(x^n − 1).   (3.1)

This implies the following theorem.

Theorem 1. A linear code C in F_q^n is cyclic ⟺ C is an ideal in F_q[x]/(x^n − 1). [61]

Proof. In one direction, if C is an ideal in F_q[x]/(x^n − 1) and c(x) = a_0 + a_1 x + ··· + a_{n−1} x^{n−1} is a codeword, then by definition xc(x) ∈ C as well, and so (a_{n−1}, a_0, a_1, ..., a_{n−2}) ∈ C. In the other direction, one just has to note that since C is cyclic, xc(x) is in C for every c(x) ∈ C, which means that x^k c(x) is in C for every k. But C is linear by assumption, so if h(x) is any polynomial then h(x)c(x) is in C, and thus C is an ideal.
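The shift-equals-multiplication-by-x step in this proof is easy to see concretely; the following minimal sketch (illustrative, not from the thesis) checks it on a length-5 word:

```python
# Multiplying a codeword polynomial by x modulo x^n - 1 effects exactly one
# cyclic shift of its coefficient vector, since x^n = 1 in the quotient ring.

def times_x_mod(coeffs):
    """Multiply a_0 + a_1 x + ... + a_{n-1} x^{n-1} by x modulo x^n - 1.
    The top coefficient wraps around to the constant term."""
    return [coeffs[-1]] + coeffs[:-1]

c = [1, 0, 1, 1, 0]                        # a_0..a_4, i.e. 1 + x^2 + x^3
assert times_x_mod(c) == [0, 1, 0, 1, 1]   # x + x^3 + x^4: one cyclic shift

# n applications of the shift return the original word, since x^n = 1.
w = c
for _ in range(len(c)):
    w = times_x_mod(w)
assert w == c
```

This is precisely the correspondence (3.1) in action: closure of C under cyclic shifts is the same as closure under multiplication by x in F_q[x]/(x^n − 1).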

3.2.1 A Pause for Cyclotomic Cosets

We now pause to introduce a useful mathematical object that will be used in the algorithm presented in the next chapter. This does not reduce the complexity of the algorithm asymptotically, but for certain instances it can contribute significantly.

Let S = {0, 1, 2, ..., N − 1} and let p be prime such that gcd(N, p) = 1. The p-cyclotomic cosets of this set are given by the collection of subsets

{0}, {1, p, p^2, ..., p^{r−1}}, ..., {a, ap, ap^2, ..., ap^{s−1}},

where the elements are computed mod N and s is the minimal exponent such that a(p^s − 1) ≡ 0 mod N, i.e., s is the smallest integer before one begins to get repeats in the coset. (The same is true for r.)

As an example consider N = 16 and p = 3. One obtains

{0}, {1, 3, 9, 11}, {2, 6}, {4, 12}, {5, 15, 13, 7}, {8}, {10, 14}.

One sees that this defines an equivalence relation, i.e., for $g, f \in S$ we have that $g \sim f$ if $g = f \cdot p^l \bmod N$ for some $l$. Each equivalence class is known as a cyclotomic coset (or class) and is referred to as $C_j$, where $j$ is the coset leader, i.e., the smallest coset representative. For the example given above we have $C_0 = \{0\}$ (as always), $C_1 = \{1, 3, 9, 11\}$, $C_2 = \{2, 6\}$, $C_4 = \{4, 12\}$, $C_5 = \{5, 15, 13, 7\}$, $C_8 = \{8\}$, and $C_{10} = \{10, 14\}$.
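These cosets are easy to reproduce computationally. The following sketch (our own helper names, not code from this thesis) computes the $p$-cyclotomic cosets of $\{0, \dots, N-1\}$ and also cross-checks the coset count against the counting formula $N_C = \sum_{f \mid N} \phi(f)/\mathrm{ord}_p f$ that is stated later as Eq. (3.14):

```python
from math import gcd

def cyclotomic_cosets(p, N):
    """Return the p-cyclotomic cosets of {0, ..., N-1} (gcd(p, N) = 1 assumed)."""
    seen, cosets = set(), []
    for a in range(N):
        if a not in seen:
            coset, x = [], a
            while x not in coset:          # stop at the first repeat
                coset.append(x)
                x = (x * p) % N
            seen.update(coset)
            cosets.append(coset)
    return cosets

def phi(f):                                # Euler totient (naive)
    return sum(1 for i in range(1, f + 1) if gcd(i, f) == 1)

def mult_order(p, f):                      # smallest s with p^s = 1 mod f
    if f == 1:
        return 1
    s, x = 1, p % f
    while x != 1:
        x, s = (x * p) % f, s + 1
    return s

# The N = 16, p = 3 example from the text:
cosets = cyclotomic_cosets(3, 16)
assert sorted(sorted(c) for c in cosets) == [
    [0], [1, 3, 9, 11], [2, 6], [4, 12], [5, 7, 13, 15], [8], [10, 14]]

# Cross-check the coset count against the formula N_C = sum phi(f)/ord_p(f):
for p, N in [(3, 16), (2, 15), (2, 21), (5, 12)]:
    nc = sum(phi(f) // mult_order(p, f) for f in range(1, N + 1) if N % f == 0)
    assert len(cyclotomic_cosets(p, N)) == nc
```

Each coset's smallest element is its leader, matching the $C_j$ labels above.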

3.2.2 An application of cyclotomic cosets

Factorization of $X^N - 1$

Proofs of all claims can be found in [74]. Take $p$ to be prime.

Definition. Consider an element $\alpha$ in the finite field extension $GF(p^l)$ of $GF(p)$. The minimal polynomial of $\alpha$ is the monic, irreducible polynomial $M(x)$ of least degree such that $M(\alpha) = 0$.

The following is a classical result, and it is an extension of the fact that $X^{p^n} - X$ is equal to the product of all monic polynomials, irreducible over $GF(p)$, whose degree divides $n$. The idea is that once one has the cyclotomic cosets of $S$, one can find a factorization of $X^N - 1$ into a product of monic polynomials as well.

Theorem 2.

$$M_s(X) \equiv \prod_{\eta \in C_s} (X - \alpha^\eta)$$
is the minimal polynomial of $\alpha^s$ over $GF(p)$, where $\alpha$ is a primitive $N$th root of unity in a suitable extension $GF(p^k)$ of $GF(p)$.

Corollary 1.

$$X^N - 1 = \prod_s M_s(X),$$
where $s$ runs over any set of representatives of the $p$-cyclotomic cosets modulo $N$ over $GF(p)$.
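Theorem 2 and Corollary 1 can be illustrated concretely. The sketch below (our own conventions, not from the thesis: $N = 15$, $p = 2$, with $GF(2^4)$ realized on the primitive polynomial $x^4 + x + 1$) builds one minimal polynomial per 2-cyclotomic coset and checks that their product recovers $X^{15} - 1$ over $GF(2)$:

```python
MOD = 0b10011                      # x^4 + x + 1, primitive over GF(2)

def gmul(a, b):                    # multiplication in GF(16)
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:
            a ^= MOD
    return r

alpha = [1]                        # alpha[m] = alpha^m; alpha = x is a primitive 15th root of unity
for _ in range(14):
    alpha.append(gmul(alpha[-1], 2))

def poly_mul(u, v):                # polynomials over GF(16), lowest degree first
    w = [0] * (len(u) + len(v) - 1)
    for i, a in enumerate(u):
        for j, b in enumerate(v):
            w[i + j] ^= gmul(a, b)
    return w

# 2-cyclotomic cosets of {0, ..., 14}
cosets, seen = [], set()
for a in range(15):
    if a not in seen:
        c, x = [], a
        while x not in c:
            c.append(x)
            x = (2 * x) % 15
        seen.update(c)
        cosets.append(c)

# M_s(X) = prod_{eta in C_s} (X - alpha^eta); note -1 = +1 in characteristic 2
product = [1]
for c in cosets:
    m = [1]
    for eta in c:
        m = poly_mul(m, [alpha[eta], 1])
    # Theorem 2: the coefficients of each M_s land in the base field GF(2)
    assert all(coef in (0, 1) for coef in m)
    product = poly_mul(product, m)

# Corollary 1: the product over all cosets is X^15 - 1 (= X^15 + 1 over GF(2))
assert product == [1] + [0] * 14 + [1]
```

Here the five cosets yield factors of degrees 1, 4, 4, 2, 4, matching the classical factorization of $X^{15} - 1$ over $GF(2)$.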

Detailed examples of using cyclotomic cosets for finding factorizations are provided in [74].

3.3 Codes continued

Note that $\mathbb{F}_q[x]/(x^n - 1)$ is a principal ideal ring, and therefore the elements of every cyclic code $C$ are just multiples of $g(x)$, the monic polynomial of lowest degree in $C$; $g(x)$ is called the generator polynomial of $C$. We see that $g(x)$ divides $x^n - 1$, since otherwise $g(x)$ could not be the monic polynomial of lowest degree in $C$. This is where the factorization of $x^n - 1$ becomes important. First let us explain what it means to generate a code, by making use of a simple relationship between $g(x)$ and a special matrix well known in the theory of error correcting codes, called the generator matrix. Note that we can write $g(x) = g_0 + g_1 x + \cdots + g_{n-k} x^{n-k}$. We can then write the $k \times n$ generator matrix of the code as
$$\begin{pmatrix} g_0 & g_1 & \cdots & g_{n-k} & 0 & \cdots & 0 \\ 0 & g_0 & \cdots & g_{n-k-1} & g_{n-k} & \cdots & 0 \\ \vdots & & \ddots & & & \ddots & \vdots \\ 0 & \cdots & 0 & g_0 & g_1 & \cdots & g_{n-k} \end{pmatrix}.$$
In this way, the row space of this matrix is $C$.

The previous arguments all point to the fact that if one is able to factorize $x^n - 1$ into irreducible polynomials, then one can generate every cyclic code of length $n$ over $\mathbb{F}_q$. If we can write $x^n - 1 = w_1(x)w_2(x)\cdots w_t(x)$ as the decomposition of $x^n - 1$ into irreducible factors, then we can generate $2^t - 2$ different cyclic codes by taking any non-trivial product of the factors $w_i(x)$ as the generator polynomial. If, for example, you take $w_i(x)$ to be the generator polynomial, you obtain what is known as a maximal cyclic code, and if you choose $\frac{x^n - 1}{w_i(x)}$ then you obtain an irreducible cyclic code. It is clear from Theorem 2 and Corollary 1 that $t$ is the number of cyclotomic cosets modulo $n$. The primary object of interest for us is the weight enumerator polynomial, often referred to as the weight enumerator.
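To make the generator-matrix construction concrete, here is a small sketch using our own example (the binary $[7,4]$ cyclic code with $g(x) = 1 + x + x^3$, a factor of $x^7 - 1$ over $GF(2)$); it builds the $k \times n$ matrix from shifts of $g$ and checks that its row space is closed under cyclic shifts, as Theorem 1 requires:

```python
from itertools import product

n, g = 7, [1, 1, 0, 1]        # g(x) = 1 + x + x^3 divides x^7 - 1 over GF(2)
k = n - (len(g) - 1)          # k = 4

# k x n generator matrix: row i holds the coefficients of x^i g(x)
G = [[0] * i + g + [0] * (n - len(g) - i) for i in range(k)]

# The row space over GF(2) is the code C
code = set()
for coeffs in product([0, 1], repeat=k):
    word = tuple(sum(c * row[j] for c, row in zip(coeffs, G)) % 2
                 for j in range(n))
    code.add(word)
assert len(code) == 2 ** k    # the k rows are linearly independent

# The code is cyclic: shifting any codeword yields a codeword
for w in code:
    assert (w[-1],) + w[:-1] in code
```

The same construction works for any generator polynomial dividing $x^n - 1$.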

Definition. Let $C$ be a linear code of length $n$ and let $A_i$ be the number of vectors in $C$ having $i$ non-zero entries (Hamming weight $i$). Then the weight enumerator of $C$ is the bivariate polynomial

$$A(x, y) = \sum_{i=0}^{n} A_i\, x^{n-i} y^i.$$
The set $\{A_i\}$ is called the weight spectrum of the code.
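As a worked example (again the $[7,4]$ binary cyclic code generated by $g(x) = 1 + x + x^3$; this brute-force sketch is ours, not the thesis's algorithm), the weight spectrum can be tabulated directly:

```python
from itertools import product
from collections import Counter

n = 7
g = (1, 1, 0, 1, 0, 0, 0)           # g(x) = 1 + x + x^3 as a length-7 coefficient vector

def poly_mod_mul(p, q):
    """Multiply two length-n GF(2) coefficient vectors modulo x^n - 1."""
    w = [0] * n
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            w[(i + j) % n] ^= a & b
    return tuple(w)

# All codewords are multiples p(x) g(x) with deg p < k = 4
code = {poly_mod_mul(p + (0, 0, 0), g) for p in product((0, 1), repeat=4)}
spectrum = Counter(sum(w) for w in code)

# The weight spectrum of the [7,4] Hamming code: A_0 = 1, A_3 = A_4 = 7, A_7 = 1
assert dict(spectrum) == {0: 1, 3: 7, 4: 7, 7: 1}
```

The corresponding enumerator is $A(x,y) = x^7 + 7x^4y^3 + 7x^3y^4 + y^7$.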

Associated with any $[n, k]$ linear code $C$ is its $[n, n-k]$ dual code $C^\perp$. The relation between the weight enumerator $A$ of a code $C$ over the field $\mathbb{F}_{q^k}$ and the weight enumerator $A^\perp$ of the dual code $C^\perp$ is given by the MacWilliams identity [100]:

$$A^\perp(x, y) = q^{-k^2}\, A\big(x + (q^k - 1)y,\; x - y\big). \quad (3.2)$$

Informally, we mention a practical issue. A code $C$ consists of an alphabet that one may use to send a message. When the receiver obtains a word from the sender, it is likely that some error occurred along the way. Under certain circumstances one may use the parity check matrix $M$ to determine the error and to correct it. $M$ is precisely the generator matrix for the dual code $C^\perp$, and any word $c$ in $C$ satisfies $Mc = 0$. See [61] for details.
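The identity is easy to confirm on a small code. The sketch below is our own check, written with the generic prefactor $|C|^{-1}$ (so that a dimension-$k$ code over an alphabet of size $Q$ gets prefactor $Q^{-k}$); it applies the MacWilliams transform to the weight spectrum of the $[7,4]$ binary Hamming code and compares against a brute-force enumeration of the dual:

```python
from itertools import product
from math import comb

n, Q = 7, 2
g = (1, 1, 0, 1, 0, 0, 0)                      # g(x) = 1 + x + x^3

def mulmod(p, q):                              # GF(2)[x] multiplication mod x^n - 1
    w = [0] * n
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            w[(i + j) % n] ^= a & b
    return tuple(w)

code = {mulmod(p + (0, 0, 0), g) for p in product((0, 1), repeat=4)}
A = [sum(1 for w in code if sum(w) == i) for i in range(n + 1)]

# Dual code by brute force: all v with <v, c> = 0 mod 2 for every c in C
dual = [v for v in product((0, 1), repeat=n)
        if all(sum(x * y for x, y in zip(v, c)) % 2 == 0 for c in code)]
A_dual = [sum(1 for v in dual if sum(v) == i) for i in range(n + 1)]

# MacWilliams: A_perp(x, y) = |C|^{-1} A(x + (Q-1) y, x - y)
def macwilliams(A, size):
    out = []
    for j in range(n + 1):                     # coefficient of x^{n-j} y^j
        s = sum(A[i]
                * sum(comb(n - i, a) * (Q - 1) ** a
                      * comb(i, j - a) * (-1) ** (j - a)
                      for a in range(j + 1))
                for i in range(n + 1))
        out.append(s // size)
    return out

assert macwilliams(A, len(code)) == A_dual     # dual = simplex code, A_4 = 7
```

The dual here is the $[7,3]$ simplex code, whose seven non-zero words all have weight 4.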

For convenience, we also introduce the check polynomial and parity check matrix.

Definition. The polynomial $h(x) = \frac{x^n - 1}{g(x)}$ in an $[n, k]$ cyclic code is called the check polynomial.

It has earned this name due to the following fact. If a word $(v_0, v_1, \dots, v_{n-1}) \in C$, then

$$(v_0 + v_1 x + \cdots + v_{n-1}x^{n-1})\,h(x) = 0 \bmod x^n - 1.$$

This follows from the observation that every word in C is equal to a polynomial p(x) multiplied by the generator polynomial g(x) and thus we have that

$$(v_0 + v_1 x + \cdots + v_{n-1}x^{n-1})\,h(x) = p(x)g(x)h(x) = p(x)(x^n - 1) = 0 \bmod x^n - 1.$$
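This annihilation property can be verified mechanically. A minimal sketch, using our own small instance $n = 7$, $g(x) = 1 + x + x^3$ and $h(x) = (x^7 - 1)/g(x) = 1 + x + x^2 + x^4$ over $GF(2)$:

```python
from itertools import product

n = 7
g = (1, 1, 0, 1, 0, 0, 0)          # g(x) = 1 + x + x^3
h = (1, 1, 1, 0, 1, 0, 0)          # h(x) = 1 + x + x^2 + x^4 = (x^7 - 1)/g(x)

def mulmod(p, q):                  # GF(2)[x] multiplication modulo x^n - 1
    w = [0] * n
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            w[(i + j) % n] ^= a & b
    return tuple(w)

# g(x) h(x) = x^7 - 1, which is 0 modulo x^7 - 1
assert mulmod(g, h) == (0,) * n

# every codeword v(x) = p(x) g(x) satisfies v(x) h(x) = 0 mod x^7 - 1
for p in product((0, 1), repeat=4):
    v = mulmod(p + (0, 0, 0), g)
    assert mulmod(v, h) == (0,) * n
```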

Definition. The parity check matrix $H$ of a code $C$ is the generator matrix for the code dual to $C$. If $c \in C$ then $Hc = 0$.

The computation of the weight enumerator polynomial is known to be a #P-hard problem [21, 118]. This should not be surprising, as the weight enumerator is an instance of the Tutte polynomial (as are the Jones polynomial from knot theory and the Potts partition function) [118]. We shall address the computation of the weight enumerator via quantum computation in Chapter 4, and then extend the technique so that it applies to the Potts partition function in Chapter 5.

3.4 Gauss sums and their relationship to the weight spectrum of linear codes

We now briefly introduce characters over finite fields and Gauss sums, as these provide a vital link between quantum computation and the weights of words in a certain subset of the set of all irreducible cyclic codes. Ultimately, we wish to provide a quantum algorithm for the exact evaluation of the weight enumerator for a restricted class of codes by making use of a quantum algorithm for Gauss sums [115]. This result is presented in [56], but in the guise of evaluating the Potts partition function of statistical physics.

Given a field $\mathbb{F}_{q^k}$, there are a multiplicative and an additive group associated with it. Namely, the multiplicative group is $\mathbb{F}_{q^k}^* = \mathbb{F}_{q^k} \setminus \{0\}$ and the additive group is $\mathbb{F}_{q^k}$ itself. Associated with each group are canonical homomorphisms from the group to the complex numbers, named the additive and multiplicative characters.

The multiplicative character $\chi$ is a function on the elements of $\mathbb{F}_{q^k}^*$, while the additive character is a function on $\mathbb{F}_{q^k}$ parameterized by $\beta \in \mathbb{F}_{q^k}$.

Definition. Let $e_\beta$ and $\chi_j$ be an additive and a multiplicative character, respectively. Then the Gauss sum $G(\chi_j, e_\beta)$ is defined as:

$$G(\chi_j, e_\beta) = \sum_{x \in \mathbb{F}^*} \chi_j(x)\, e_\beta(x). \quad (3.3)$$

A Gauss sum is thus a function of the field $\mathbb{F}_{q^k}$, the multiplicative character $\chi$ and the parameter $\beta$, and can always be written as
$$G_{\mathbb{F}_{q^k}}(\chi, \beta) = \sqrt{q^k}\, e^{i\gamma}, \quad (3.4)$$
where $\gamma$ is a function of $\chi$ and $\beta$. It is in general quite difficult to find the angle $\gamma$. The complexity of estimating this quantity via classical computation is not known, but it can be shown to be equivalent in complexity to evaluating the discrete logarithm [115]. There is a trace function over finite fields that we now define.

Definition. Let $q$ be prime, $k$ a positive integer, and let $\mathbb{F}_{q^k}$ be the finite field with $q^k - 1$ non-zero elements.

The trace is a mapping $\mathrm{Tr} : \mathbb{F}_{q^k} \to \mathbb{F}_q$, defined as follows. Let $\xi \in \mathbb{F}_{q^k}$. Then
$$\mathrm{Tr}(\xi) = \sum_{j=0}^{k-1} \xi^{q^j}. \quad (3.5)$$

The canonical form of an additive character is given by

$$e_\beta(a) = e^{(2\pi i/q)\,\mathrm{Tr}(\beta a)}$$
and the canonical form of a multiplicative character is given by

$$\chi_j(\alpha^m) = e^{\frac{2\pi i j m}{q^k - 1}},$$

where any non-zero element of $\mathbb{F}_{q^k}$ may be written as $\alpha^m$ for some positive integer $m$, i.e., $\alpha$ is a generator of the multiplicative group of this finite field.
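With the canonical characters above, Eq. (3.4) can be checked numerically in the simplest setting $k = 1$, where the field is $\mathbb{Z}_p$ and the trace is trivial. The sketch below (our own illustration, not part of the algorithm) confirms that $|G(\chi_j, e_\beta)| = \sqrt{p}$ for every nontrivial $\chi_j$ and non-zero $\beta$:

```python
import cmath
import math

def gauss_sum(p, g, j, beta):
    """G(chi_j, e_beta) over F_p, with chi_j(g^m) = exp(2 pi i j m/(p-1))."""
    logg = {pow(g, m, p): m for m in range(p - 1)}   # discrete logs base g
    return sum(cmath.exp(2j * math.pi * j * logg[x] / (p - 1))   # chi_j(x)
               * cmath.exp(2j * math.pi * beta * x / p)          # e_beta(x)
               for x in range(1, p))

# |G| = sqrt(p) for every nontrivial character and non-zero beta
for p, g in [(7, 3), (11, 2), (13, 2)]:              # (prime, generator) pairs
    for j in range(1, p - 1):
        for beta in (1, 2):
            assert abs(abs(gauss_sum(p, g, j, beta)) - math.sqrt(p)) < 1e-9
```

The hard part, as the text notes, is the phase $\gamma$, not the magnitude.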

We deal specifically with irreducible cyclic codes. Let $\alpha$ generate the multiplicative (cyclic) group $\mathbb{F}_{q^k}^* = \mathbb{F}_{q^k} \setminus \{0\}$.

Theorem 3. Each of the $q^k$ words of an $[n, k]$ irreducible cyclic code may be uniquely associated with an element $\tau \in \mathbb{F}_{q^k}$ and may be written as

$$\big(\mathrm{Tr}(\tau),\ \mathrm{Tr}(\tau\alpha^N),\ \mathrm{Tr}(\tau\alpha^{2N}),\ \dots,\ \mathrm{Tr}(\tau\alpha^{(n-1)N})\big), \quad (3.6)$$
where $k$ is the smallest integer such that $q^k = 1 \bmod n$.

For a proof of this statement see [61]. In order to obtain $A(x, y)$ we need to find the weight spectrum $\{A_i\}$. One step in this direction is the following theorem, which connects the weights of irreducible cyclic code words to Gauss sums. Let $w(x)$ be the Hamming weight of the code word associated with $x \in \mathbb{F}_{q^k}^*$.

Theorem 4. (McEliece Formula) Let $w(\xi)$ for $\xi \in \mathbb{F}_{q^k}^*$ be the weight of the code word given by Eq. (3.6), let $q^k = 1 + nN$ where $q$ is prime and $k$, $n$ and $N$ are positive integers, let $d = \gcd(N, (q^k - 1)/(q - 1))$, and let the multiplicative character $\bar\chi$ be given by $\bar\chi(\alpha) = \exp(2\pi i/d)$, where $\alpha$ generates $\mathbb{F}_{q^k}^*$. ($\bar\chi$ is called the character of order $d$.) Then the weight of each word in an irreducible cyclic code is given by

$$w(\xi) = \frac{q^k(q-1)}{qN} - \frac{q-1}{qN} \sum_{a=1}^{d-1} \bar\chi(\xi)^{-a}\, G_{\mathbb{F}_{q^k}}(\bar\chi^a, 1). \quad (3.7)$$

For a proof of this see [12, 78].
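The formula can be verified end to end on a small instance. The sketch below uses our own example, not one from the thesis: $q = 2$, $k = 4$, $n = 5$, $N = 3$ (so $q^k - 1 = nN$), with $GF(16)$ built on the primitive polynomial $x^4 + x + 1$. It computes the $d - 1$ Gauss sums directly, evaluates Eq. (3.7) for every $\xi = \alpha^m$, and compares against the Hamming weight of the trace word of Eq. (3.6):

```python
import cmath
from math import gcd, pi

MOD = 0b10011                       # x^4 + x + 1, primitive over GF(2)

def gmul(a, b):                     # multiplication in GF(16)
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:
            a ^= MOD
    return r

alog, log = [], {}
e = 1
for m in range(15):                 # alpha = x generates GF(16)*
    alog.append(e)
    log[e] = m
    e = gmul(e, 2)

def tr(x):                          # absolute trace: x + x^2 + x^4 + x^8
    t = 0
    for _ in range(4):
        t ^= x
        x = gmul(x, x)
    return t

q, k, n, N = 2, 4, 5, 3             # 2^4 - 1 = 15 = n * N
d = gcd(N, (q**k - 1) // (q - 1))   # character order d = 3

def chi(x, a):                      # chibar^a, chibar of order d
    return cmath.exp(2j * pi * ((a * log[x]) % d) / d)

# Gauss sums G(chibar^a, 1) = sum_{x != 0} chibar^a(x) (-1)^Tr(x)
G = {a: sum(chi(x, a) * (-1) ** tr(x) for x in range(1, 16))
     for a in range(1, d)}

for m in range(15):
    xi = alog[m]
    s = sum(chi(xi, -a) * G[a] for a in range(1, d))          # Eq. (3.7)
    w_formula = round((q**k * (q - 1) / (q * N) - (q - 1) / (q * N) * s).real)
    # direct Hamming weight of the word of Eq. (3.6)
    w_direct = sum(tr(gmul(xi, alog[(j * N) % 15])) for j in range(n))
    assert w_formula == w_direct    # weight is 4 if m % 3 == 0, else 2
```

For this instance the code is the $[5,4]$ binary even-weight code: the five words with $\log_\alpha \xi \equiv 0 \bmod 3$ have weight 4, and the remaining ten have weight 2.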

The main difficulty in using this theorem is that even estimating Gauss sums is computationally difficult. Fortunately, it has been shown that this is an application for which quantum computers are efficient [115]. Specifically, in order to approximate $\gamma$ to within an error $\epsilon$, the computational cost is $O(\frac{1}{\epsilon}(\log(q^k))^2)$. We now review this algorithm, due to van Dam and Seroussi. The following is an outline of the essentials of the proof; we refer the reader to [115] for a complete description as well as a discussion of the complexity of estimating Gauss sums.

Theorem 5. (Quantum Amplitude Amplification) Let $f : S \to \{0, 1\}$ be a function for which we know the total weight $\|f\|_1$ but not the values $x \in S$ for which $f(x) = 1$. Then the corresponding state
$$|f\rangle = \frac{1}{\sqrt{\|f\|_1}} \sum_{x \in S} f(x)\,|x\rangle$$
can be efficiently and exactly prepared on a quantum computer, where we have to make a number of queries to $f$ of the order $O\big(\sqrt{|S|/\|f\|_1}\big)$.

This is an essential ingredient in Grover's quantum search algorithm. For a proof and details see [72]. It follows from Eq. (9.1) and Shor's discrete log algorithm [98] that given $g$, $q^k$ and $j$, we can efficiently create the state $|\chi_j\rangle$. The following lemma is essential in this regard. First note that for any set $S$ we define
$$|S\rangle \equiv \frac{1}{\sqrt{|S|}} \sum_{x \in S} |x\rangle.$$

Lemma 3.4.1. For a finite field $\mathbb{F}_{q^k}$ and the triplet $(q^k, g, r)$ (the specification of a multiplicative character $\chi_r$), the state
$$|\chi_r\rangle = \frac{1}{\sqrt{q^k - 1}} \sum_{x \in \mathbb{F}_{q^k}^*} \chi_r(x)\,|x\rangle$$
and its Fourier transform $|\hat\chi_r\rangle$ can be created in $\mathrm{polylog}(q^k)$ time steps on a quantum computer.

Proof. We first create the state

$$|\mathbb{F}_{q^k}^*\rangle|\hat 1\rangle = \frac{1}{q^k - 1} \sum_{x \in \mathbb{F}_{q^k}^*} \sum_{j=0}^{q^k - 2} \zeta_{q^k-1}^{j}\, |x\rangle|j\rangle$$
by using Grover's amplitude amplification on $\mathbb{F}_{q^k}$ and the Fourier transform. Next, in superposition over all $x \in \mathbb{F}_{q^k}^*$, we calculate $\log_g(x)$ and subtract $r \log_g(x)$.

$$|\mathbb{F}_{q^k}^*\rangle|\hat 1\rangle \;\longrightarrow\; \frac{1}{q^k - 1} \sum_{x \in \mathbb{F}_{q^k}^*} \sum_{j=0}^{q^k - 2} \zeta_{q^k-1}^{j}\, |x\rangle|j - r\log_g(x)\rangle \quad (3.8)$$
$$= \frac{1}{q^k - 1} \sum_{x \in \mathbb{F}_{q^k}^*} \sum_{j=0}^{q^k - 2} \zeta_{q^k-1}^{j}\,\zeta_{q^k-1}^{r\log_g(x)}\, |x\rangle|j\rangle \quad (3.9)$$
$$= \frac{1}{\sqrt{q^k - 1}} \sum_{x \in \mathbb{F}_{q^k}^*} \zeta_{q^k-1}^{r\log_g(x)}\, |x\rangle|\hat 1\rangle \quad (3.10)$$
$$= |\chi_r\rangle|\hat 1\rangle. \quad (3.11)$$

To get $|\hat\chi_r\rangle$ we just need to apply the Fourier transform.

The technique used in the above proof is known as the phase kickback trick. Now we are ready for the following.

Theorem 6. (Algorithm for approximating Gauss sums) Consider $\mathbb{F}_{q^k}$, a nontrivial multiplicative character $\chi_r$ and $\beta \in \mathbb{F}_{q^k}^*$. If we apply the quantum Fourier transform over this field to $|\chi_r\rangle$, followed by a phase change

$$|y\rangle \;\longrightarrow\; \chi_r^2(y)\,|y\rangle, \quad (3.12)$$
then we generate an overall phase change given by

$$|\chi_r\rangle = \frac{1}{\sqrt{q^k - 1}} \sum_{x \in \mathbb{F}_{q^k}^*} \chi_r(x)\,|x\rangle \;\longrightarrow\; \frac{G_{\mathbb{F}_{q^k}}(\chi_r, \beta)}{\sqrt{q^k}}\, |\chi_r\rangle.$$

Proof. After a Fourier transform we have
$$|\hat\chi_r\rangle = \frac{1}{\sqrt{q^k(q^k - 1)}} \sum_{y \in \mathbb{F}_{q^k}^*} \Big( \sum_{x \in \mathbb{F}_{q^k}} \chi_r(x)\, \zeta_q^{\mathrm{Tr}(\beta x y)} \Big) |y\rangle$$
$$= \frac{1}{\sqrt{q^k(q^k - 1)}} \sum_{y \in \mathbb{F}_{q^k}^*} G_{\mathbb{F}_{q^k}}(\chi_r, \beta y)\, |y\rangle$$
$$= \frac{1}{\sqrt{q^k(q^k - 1)}} \sum_{y \in \mathbb{F}_{q^k}^*} \chi_r(y^{-1})\, G_{\mathbb{F}_{q^k}}(\chi_r, \beta)\, |y\rangle.$$
Then
$$|\hat\chi_r\rangle = \frac{G_{\mathbb{F}_{q^k}}(\chi_r, \beta)}{\sqrt{q^k(q^k - 1)}} \sum_{y \in \mathbb{F}_{q^k}^*} \chi_r(y^{-1})\, |y\rangle.$$
Now we know that we can efficiently (and exactly) create the phase change given by (3.12). Doing so gives us
$$|\hat\chi_r\rangle \;\longrightarrow\; \frac{G_{\mathbb{F}_{q^k}}(\chi_r, \beta)}{\sqrt{q^k(q^k - 1)}} \sum_{y \in \mathbb{F}_{q^k}^*} \chi_r(y^{-1})\,\chi_r^2(y)\, |y\rangle = \frac{G_{\mathbb{F}_{q^k}}(\chi_r, \beta)}{\sqrt{q^k}}\, |\chi_r\rangle,$$
since $|\chi_r\rangle = \frac{1}{\sqrt{q^k - 1}} \sum_{y \in \mathbb{F}_{q^k}^*} \chi_r(y)\, |y\rangle$ and $\chi_r(y^{-1})\chi_r(y) = 1$. Thus, the coefficient of $|\chi_r\rangle$ is just $e^{i\gamma}$. It is well known that one can efficiently estimate the phase of such a function to within an expected error of $O(1/n)$, where $n$ is the number of copies of $e^{i\gamma}|\chi_r\rangle$ we sample [85]. Therefore we arrive at an estimate of $\gamma$ and hence of the Gauss sum in question.
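For the prime-field case $k = 1$ (trivial trace, $\beta = 1$), the statement of Theorem 6 can be checked by direct linear algebra. The following classical simulation, added here purely for illustration, builds $|\chi_r\rangle$, applies the Fourier transform over $\mathbb{Z}_p$ and the phase change (3.12), and confirms that the output is $(G/\sqrt{p})\,|\chi_r\rangle$:

```python
import numpy as np

p, g, r = 7, 3, 1                  # F_7, generator g = 3, character index r
logg = {pow(g, m, p): m for m in range(p - 1)}
chi = np.zeros(p, dtype=complex)
for x in range(1, p):
    chi[x] = np.exp(2j * np.pi * r * logg[x] / (p - 1))

state = chi / np.sqrt(p - 1)       # |chi_r>; amplitude 0 on |0>

# Fourier transform over Z_p: |x> -> p^{-1/2} sum_y exp(2 pi i x y / p) |y>
F = np.exp(2j * np.pi * np.outer(np.arange(p), np.arange(p)) / p) / np.sqrt(p)
after = F @ state

after[1:] *= chi[1:] ** 2          # the phase change (3.12): |y> -> chi_r^2(y)|y>

# overall phase G(chi_r, 1)/sqrt(p), as in Theorem 6
G = sum(chi[x] * np.exp(2j * np.pi * x / p) for x in range(1, p))
assert np.allclose(after, (G / np.sqrt(p)) * state)
assert abs(abs(G) - np.sqrt(p)) < 1e-9
```

The quantum algorithm, of course, never writes down these vectors; the point of Theorem 6 is that the phase $e^{i\gamma}$ can be kicked out and sampled.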

This leads to the following theorem about the time complexity of the algorithm, which is the culmination of the first part of the paper [115].

Theorem 7. For any $\epsilon > 0$, there is a quantum algorithm that estimates the phase $\gamma$ in $G_{\mathbb{F}_{q^k}}(\chi_r, \beta) = \sqrt{q^k}\,e^{i\gamma}$, with expected error $E(|\gamma - \tilde\gamma|) < \epsilon$. The time complexity of this algorithm is bounded by $O(\frac{1}{\epsilon}\,\mathrm{polylog}(q^k))$ [115].

Note that the “poly” in polylog refers to a quadratic polynomial.

Now, to continue, define the function

$$S(\iota) = \frac{q^k(q-1)}{qN} - \frac{q-1}{qN} \sum_{a=1}^{d-1} \bar\chi(\alpha^\iota)^{-a}\, \sqrt{q^k}\, e^{i\gamma_a}. \quad (3.13)$$

This equation is just the expansion of the formula for $w(\xi)$, where now we take $\alpha$ to be the primitive element in $\mathbb{F}_{q^k}$ (i.e., any non-zero element of the field may be written as $\alpha^\iota$). This means that if we were able to find the range of $S(\iota)$, we would have all the weights of the corresponding code. Of course, it does look as though we have to evaluate a number of words exponential in $k$, the dimension of the code. This is not the case in all situations, however, and this is where cyclotomic cosets will play a role. Note the following proposition.

Proposition 1. In an $[n, k]$ irreducible cyclic code there are at most $N$ words of different non-zero weight, where $N = (q^k - 1)/n$.

Proof. For any irreducible cyclic code we have the relation $q^k - 1 = nN$ over the field $\mathbb{F}_q$. The length of each word is $n$, and any cyclic permutation of a word preserves the Hamming weight. Therefore, for each word there are $n - 1$ other words of equal weight. As there are $q^k - 1$ words of non-zero weight, if we assume that every word that does not arise from the cyclic permutation of another word is of a different weight, then there are $(q^k - 1)/n$ words of different weight. Since, however, there is the possibility of repeats in weight among words which are not cyclic permutations of each other, there are at most $N$ different weights.

This means that it is in fact $N$, and not $n$, which will determine the complexity of finding the weight spectrum $\{A_i\}$. The first restriction that we make on our codes is that we only consider families of codes where $N$ grows polynomially in $k$. In this way, we may claim that our algorithm for the exact evaluation of the weight enumerator is efficient, as will be shown below.

It turns out that cyclotomic cosets are a help here. This occurs because each element in a given coset has the same value of $S(\iota)$. This is due to the fact that the mapping $x \mapsto x^{q^j}$ is a permutation of $\mathbb{F}_{q^k}$ (the Frobenius automorphism), and in fact this mapping is an automorphism of $\mathbb{Z}_N$ when $q$ and $N$ are relatively prime [100]. Let us assume that we have all $d - 1$ Gauss sums necessary to compute $S(\iota)$ (via a quantum computation, for example). Let us call these Gauss sums $\Lambda_a$. We then must convince ourselves that $S(g) = S(f)$ whenever $g = fq^j$ for some integer $j$. We have

$$S(g) \sim \sum_{a=1}^{d-1} e^{-\frac{2\pi i (fq^j) a}{d}}\, \Lambda_a = \sum_{a=1}^{d-1} e^{-\frac{2\pi i f (q^j a)}{d}}\, \Lambda_a.$$

One can show that $\gcd(q^j, d) = 1$ and therefore the mapping

$$e^{\frac{2\pi i f}{d}} \;\mapsto\; e^{\frac{2\pi i f q^j}{d}}$$
is just a permutation of the cyclic group of order $d$ generated by the primitive root of unity, i.e., the above mapping is an automorphism. (One also uses here that $\Lambda_{q^j a \bmod d} = \Lambda_a$, since a Gauss sum is invariant under the Frobenius substitution $\chi \to \chi^q$.) This means that the sum does not change, and therefore we have that $S(g) = S(f)$. That is, $S(\iota)$ is invariant over individual cosets.

Definition. Let the Euler totient function be denoted by φ(f) and define it to be the number of positive integers less than f which are relatively prime to f.

Definition. $s = \mathrm{ord}_q f$ is defined to be the smallest positive integer $s$ such that $q^s = 1 \bmod f$; $s$ is called the multiplicative order of $q$ mod $f$.

It is known that the number of cyclotomic cosets is equal to

$$N_C = \sum_{f \mid N} \frac{\phi(f)}{\mathrm{ord}_q f} \quad (3.14)$$

[100]. There are many instances where $N_C \ll N$; asymptotically this does not make an exponential difference, but the savings can be significant. Take for example $N = 358701$: the number of 2-cyclotomic cosets is 546. One can clearly see that this has the potential for a large speed up in the task of evaluating weight enumerators. We include a classical algorithm for cyclotomic cosets in the appendix. In the next chapter we string together the facts presented here into a scheme to obtain the exact weight enumerator polynomial for a certain class of linear codes.

Chapter 4

An evaluation of the Weight Enumerator via Quantum Computation

In this chapter we demonstrate a way to compute the weight enumerator for a certain restricted family of the duals of irreducible cyclic codes. It is likely that these techniques can be extended to a larger class of codes; we leave this for future work. We omit several details in this chapter and reserve the full treatment for the next chapter, where we use the techniques discussed to obtain a scheme for the exact evaluation of the Potts partition function. Here we just outline the approach; in the next chapter no details are spared.

4.0.1 A Theorem on the evaluation of certain Weight Enumerators

Earlier, we discussed a theorem presented in [115] that gives a poly-logarithmic algorithm for the estimation of a Gauss sum. The algorithm is for an approximation of the angle γ in

$$G_{\mathbb{F}_{q^k}}(\chi, \beta) = \sqrt{q^k}\, e^{i\gamma}, \quad (4.1)$$
up to an error $\epsilon$. This means that if $\gamma_a$ is the actual angle, then the quantum algorithm returns $\gamma$ such that $|\gamma_a - \gamma| < \epsilon$. The smaller we wish to make $\epsilon$, the more times we have to run our quantum algorithm; i.e., if we want $\epsilon$ accuracy we have to run the algorithm $1/\epsilon$ times. How can we use this result to obtain the exact weight spectrum? Clearly the error would propagate when we attempt to find the range of $S(\iota)$. Fortunately, there is a theorem which gives us some information about the weights of words in irreducible cyclic codes.

Theorem 8. (McEliece [119]) All the weights of an $[n, k]$ irreducible cyclic code are divisible by $q^{\theta_{n,k} - 1}$, where $\theta_{n,k}$ is given by
$$\theta_{n,k} = \frac{1}{q-1} \min_{0 < j < N} S'(jn), \quad (4.2)$$
where $S'(x)$ is the sum of the digits of $x$ in base $q$.

Since the weights are integers, this theorem gives us a clue as to the distance between weights. What this means is that if we can make $\epsilon$ small enough, we will be able to guarantee that the range of $S(\iota)$ consists of the actual weights, even though we are using an approximation of the Gauss sum. In the next chapter we show that
$$\epsilon \le \frac{q^{\theta_{n,k} - 1}}{4\sqrt{q^k}}$$
is sufficient. Further, it can be shown that for any fixed $\epsilon < 1$, there is a family of cyclic codes which conforms to the restrictions required to obtain the exact weight enumerator. There is a polynomial speed up in the dimension $k$ and an exponential speed up in $q$ over the best classical algorithms. See [56, 84, 79] for details. For completeness, we mention the justification for this claim of algorithmic speed up. Note that in [79], M. Moisio gives an algorithm for computing the weight distribution of binary index 2 irreducible cyclic codes. The algorithm is efficient, owing to the fact that there is an efficient way of solving the Diophantine equation necessary for this case. As indicated in [84], the weight distributions of irreducible cyclic codes are intimately related to Gauss sums (as these functions are related to the number of rational points on Hasse-Davenport curves). Thus, for the index 2 cases explored in [79], M. Moisio used a special form that Gauss sums take in this situation, as well as information from the solution of the particular Diophantine equation. Now, index 2 refers to the fact that the dimension $k$ of the code is equal to $\phi(N)/2$, where $\phi$ is the

Euler totient function. Asymptotically it is well known that $N^{1-\epsilon} < \phi(N) < N$ [88], and thus we essentially have $k \sim N$. This means that the situations we are able to handle are computationally much more difficult than these, and the ability of quantum computers to approximate Gauss sums provides a very significant advantage. In fact, the assumption that the lengths of the codes considered in this chapter grow exponentially with $k$ makes it very unlikely that any approach devoid of computations of zeta functions or Gauss sums will be sufficient. We now give a formal definition of the class of codes to which this applies, and a theorem that summarizes the results.

Definition. Given a constant $\epsilon < 1$, $\mathrm{ICQ}_\epsilon$ is the class of irreducible cyclic codes of dimension $k$ and length $n$, such that
$$n = \frac{q^k - 1}{\alpha k^s} \quad (4.3)$$
(where $\alpha \in \mathbb{R}$ is chosen so that $n \in \mathbb{N}$ and where $s \in \mathbb{R}$ determines the complexity and the instances of codes considered) and
$$\theta_{n,k} = \frac{1}{q-1} \min_{0 < j < N} S'(jn) \quad (4.4)$$
(where $S'(x)$ is the sum of the digits of $x$ in base $q$) so that

$$\epsilon \le \frac{q^{\theta_{n,k} - 1}}{4\sqrt{q^k}}. \quad (4.5)$$

$\mathrm{ICQ}_\epsilon$ also includes the cyclic $[n, n-k]$ dual codes and all equivalent codes [61].

Theorem 9. A quantum computer can return the exact weight enumerator polynomial $A(x, y)$ for codes in $\mathrm{ICQ}_\epsilon$. For each family $\mathrm{ICQ}_\epsilon$ ($\epsilon$ fixed), the overall running time is $O(k^{2s}(\log q)^2)$ and the success probability is at least $1 - \delta$, where $\delta = [2((q^k - 1)\epsilon^2)]^{-1}$.

This theorem imposes a restriction on the fundamental relationship $nN = q^k - 1$, in that we impose that asymptotically $N = O(k^s)$. This essentially means that we consider codes whose lengths grow exponentially in $k$. This is a good restriction, for it makes brute force classical computation infeasible. We do not supply a proof of the theorem, as it is essentially the same as the proof given in the next chapter for the Potts partition function. We do, however, supply an overview of the algorithm for computing the weight enumerator of a code in $\mathrm{ICQ}_\epsilon$. The success probability arises from the fact that the evaluation depends on a quantum algorithm and is thus ultimately probabilistic. See [115, 85] for details.

4.0.2 Overview of the Algorithm to Obtain the Exact Weight Enumerator of a Code in $\mathrm{ICQ}_\epsilon$

The ability to test whether codes are members of $\mathrm{ICQ}_\epsilon$ requires the ability to compute discrete logarithms, and fortunately that is a task quantum computers can perform efficiently. It is widely believed that no efficient classical algorithm for the discrete logarithm exists, but this is not known with certainty. The next chapter includes such a test; we do not present it here, but instead just assume that we know that a code does indeed belong to $\mathrm{ICQ}_\epsilon$.

1. Let $N = O(k^s)$, where $s$ is a constant integer that determines the complexity of the algorithm. Take $C$ as our irreducible cyclic code of length $n = \frac{q^k - 1}{N}$ and dimension $k$ (or the dual code).

2. Find the $q$-cyclotomic cosets of $\{0, 1, \dots, N-1\}$. This step requires at most linear time in $N$. (See the next section.)

3. Using the quantum algorithm for Gauss sums [115], we are able to estimate the weights of the words.

Use the Gauss sum algorithm to return the phases $\gamma_1, \dots, \gamma_{d-1}$ [Eq. (4.1)] and then input these values into the function $S(\iota)$. According to the McEliece Formula (Theorem 4), we have to make $d - 1$ (where $d = \gcd(N, \frac{q^k - 1}{q - 1})$) calls to the quantum oracle, and we can use these evaluations for each representative $i$ of the $q$-cyclotomic cosets of $\{0, 1, \dots, N-1\}$. This step has time complexity $O(dk^2(\log q)^2)$ [115, 56].

4. Let $b_1, b_2, \dots, b_{N_C}$ be the coset representatives of the $N_C$ cosets. Each coset has a cardinality $v_i$; i.e., $b_i$ belongs to coset $i$, which has $v_i$ elements. We evaluate $\omega_i = S(b_i)$ for each $b_i$, remembering that each $\omega_i$ occurs $v_i$ times. We end up with a list $(\omega_1, \omega_2, \dots, \omega_{N_C})$ as well as a list $(v_1, v_2, \dots, v_{N_C})$ of multiplicities. This step has an $O((d-1)\cdot N_C)$ time cost.

5. Now perform a tally of repeats of the $\omega_i$ for each $i \in \{1, \dots, N_C\}$. This returns a set of indices $\Lambda_i \equiv \{j_i\} \subseteq \{1, \dots, N_C\}$. We add the corresponding $v_{j_i}$, which yields $a_i = \sum_{j \in \Lambda_i} v_j$, the number of words of weight $\omega_i$ up to cyclic permutations. To account for cyclic permutations, due to the fact that we are working over cyclic codes, we have $A_i = n a_i$, which is the desired weight spectrum. The tally has an $O(\sqrt{N_C})$ time cost using Grover's quantum search algorithm [72]. (This has no effect on the overall complexity.)

6. Combining the previous steps, we have now determined the weight spectrum $A_i$ in time $O(k^{2s}(\log q)^2)$ (by modestly taking $N_C = O(k^s)$, i.e., essentially ignoring the contribution of the cyclotomic cosets). This means that we have the coefficients for $A(x, y)$ as well as the exponents, and thus are done.

Chapter 5

A quantum algorithm for the Potts partition function

5.1 Structure of this chapter

In Section 5.2 we define the class of graphs our quantum algorithm applies to, and present our main theorem. In Section 5.3 we compare the computational complexity of our algorithm with the state of the art in classical algorithms. In Section 5.4 we give a brief summary of the entire algorithm. In Section 5.5 we provide several illustrative examples of graphs and codes our algorithm applies to. Finally, in Section 5.6 we conclude and discuss future directions. The appendix provides pertinent background on matroids; irreducible cyclic codes and Gauss sums have already been reviewed.

5.2 A Theorem about QC and instances of the Potts Model

5.2.1 Main Theorem

We present here a polynomial time quantum algorithm for the exact evaluation of the $q$-state (fully ferromagnetic or anti-ferromagnetic) Potts partition function $Z$ for a certain class of graphs. This class of graphs, which we call "Irreducible Cyclic Cocycle Code" ($\mathrm{ICCC}_\epsilon$) graphs, comprises graphs whose incidence matrices generate certain cyclic codes. This and the other concepts used below are given precise definitions in Section 5.2.2. The key ingredients used are the connection of $Z$ to the weight enumerators of codes [2] and a quantum algorithm for the approximation of Gauss sums [115].

The overall structure of the algorithm is the following:

1. Given a graph $\Gamma = (E, V)$, first determine whether $\Gamma$ belongs to the $\mathrm{ICCC}_\epsilon$ class. This decision problem can be solved efficiently using the quantum discrete log algorithm [98]. If $\Gamma \in \mathrm{ICCC}_\epsilon$, proceed to step 2; otherwise the algorithm may not evaluate $Z_\Gamma$ efficiently.

2. Identify the linear code C(Γ) for which we shall determine the weight spectrum.

3. Using the quantum Gauss sum estimation algorithm, find the weight spectrum of the words in $C$. This step is believed to be classically hard, but its exact complexity is unknown. It is known, however, that it is at least as hard as computing the discrete logarithm [115]. This is the most expensive step of the algorithm, due to the large number of words one has to deal with: the number of possible spin configurations grows exponentially in the number of vertices.

4. Take a tally of the weight spectrum obtained in the previous step. Grover’s search algorithm can be used to give an additional quadratic speed up but this does not help in reducing the overall complexity since the computational cost of step 3 is greater than that of the current step.

5. Using the relation given by equation (5.5) between the weight spectrum of a code and $Z$, use the tally from the previous step to obtain $Z$ (for graphs in $\mathrm{ICCC}_\epsilon$).

We now give the main theorem, after the definition of the family of graphs for which the scheme applies.

Definition. ($\mathrm{ICCC}_\epsilon$) Given a constant $\epsilon < 1$, $\mathrm{ICCC}_\epsilon$ is the family of graphs whose cycle matroid matrix (CMM) representation generates a cyclic code whose dual is irreducible cyclic of dimension $k$ and length $n$, such that
$$n = \frac{q^k - 1}{\alpha k^{s(k)}} \quad (5.1)$$
(where $\alpha \in \mathbb{R}$ is chosen so that $n \in \mathbb{N}$ and where $s(k)$ is an arbitrary function whose role will be clarified below) and
$$\theta_{n,k} = \frac{1}{q-1} \min_{0 < j < N} S'(jn) \quad (5.2)$$
(where $S'(x)$ is the sum of the digits of $x$ in base $q$) so that
$$\epsilon \le \frac{q^{\theta_{n,k} - 1}}{4\sqrt{q^k}}. \quad (5.3)$$

Below we define the concepts entering this definition and clarify the role of $\theta_{n,k}$ and of the bound on $\epsilon$.

We work in units such that the Boltzmann constant kB = 1.

Theorem 10. (Main Theorem) Let $\Gamma = (E, V)$ be a graph, $n = |E|$ and $k = |E| - |V| + c(\Gamma)$, where $c(\Gamma)$ is the number of connected components of $\Gamma$. A quantum computer can return the exact $q$-state fully anti-ferromagnetic or ferromagnetic Potts partition function $Z_\Gamma$ for graphs in $\mathrm{ICCC}_\epsilon$. For each family ($\epsilon$ fixed), the overall running time is $O(\frac{1}{\epsilon}\, k^{2\max[1, s(k)]}(\log q)^2)$ and the success probability is at least $1 - \delta$, where $\delta = [2((q^k - 1)\epsilon^2)]^{-1}$.

Some remarks:

1. The function $s(k)$ determines the complexity of the scheme. If $s(k) = c \in \mathbb{R}$ (constant) then we have a polynomial time algorithm for the exact evaluation of $Z$ for each family $\mathrm{ICCC}_\epsilon$. This restriction is reflected in the graphs by enforcing that $n = O(q^k/k^s)$, i.e., that the numbers of edges ($n$) and vertices ($n - k$) are close. We have numerically solved for the number of edges $|E|$ as a function of the number of vertices $|V|$, given by the corresponding transcendental equation $|E| = |V| - c(\Gamma) + \log_q\big(|E|\,(|E| - |V| + c(\Gamma))^s + 1\big)$ [Eq. (5.1)]. A numerical fit reveals that to an excellent approximation

$$|E| = |V| + a + b \log |V|, \quad (5.4)$$

where the constants $a$ and $b$ depend on $q$ and $s$; both increase slowly with $s$, and decrease with $q$, as shown in Fig. 5.1. By direct substitution of Eq. (5.4) into the above transcendental equation it can be seen that the analytical solution will have a correction of order $\log\log(|V|)$ to the right-hand side of Eq. (5.4). The fact that there are logarithmically more edges than vertices in the graphs that are members of $\mathrm{ICCC}_\epsilon$ is the reason we call these graphs sparse. The important point is that there are families of graphs for which there exist exact polynomial-time evaluation schemes via the methods presented in this chapter. As we show below, in these cases we also obtain polynomial speed ups over the best classical algorithms available.

2. Note that if we have an efficient evaluation for $\mathrm{ICCC}_{\epsilon'}$, then we also have an efficient evaluation for $\mathrm{ICCC}_\epsilon$, provided $\epsilon > \epsilon'$.

3. We provide a discussion of the computational complexity, both classical and quantum, in Section 5.3. As argued there, we obtain a polynomial speed up in the difference between the number of edges and vertices, and an exponential speed up in $q$, over the best current classical algorithm for the $\mathrm{ICCC}_\epsilon$ class of graphs.

Corollary 2. For a given graph $\Gamma$ whose CMM is the direct sum of the CMMs of two graphs $\Gamma_1$ and $\Gamma_2$ in $\mathrm{ICCC}_\epsilon$, a quantum computer will be able to return $Z_\Gamma$ with a running time equal to the sum of the running times required to obtain $Z_{\Gamma_1}$ and $Z_{\Gamma_2}$.

Proofs of the main theorem and the corollary are provided in Sections 5.2.5 and 5.2.6.

5.2.2 Background

Theorem 10 connects the problem of estimating the Potts partition function to a quantum algorithm for Gauss sums, via weight enumerators of irreducible cyclic codes. In somewhat more detail, the connections we need are as follows. In [2], it was shown that the Potts partition function can be written as the weight enumerator of the cocycle code of the graph $\Gamma$, over which the Potts model is defined. Weight enumerators of irreducible cyclic codes are related to Gauss sums via the McEliece theorem [68].

[Figure 5.1: Coefficients $a$ and $b$ as a function of $s$, for different values of $q$. Here $c(\Gamma) = 1$. See text for details.]

Cycle Matroid Matrix Representation of a Graph

A connected component of a graph is a maximal subset of vertices which are all connected to each other via paths along the graph's edges. We denote the number of connected components by c(Γ). The incidence matrix of a finite graph Γ(E, V) is a |V| × |E| binary matrix where column c represents edge c, with non-zero entries in rows i and j if and only if vertices i and j are the boundaries of edge c. Every finite graph Γ also gives rise to a cycle matroid matrix (CMM) [117], which essentially captures the presence and locations of cycles in the graph.

Definition. The cycle matroid matrix of a graph Γ = (E, V), CMM(Γ), is formed as follows: write down the incidence matrix of Γ using −1 for the ith and +1 for the jth row, where i < j. Then apply elementary row operations and Gaussian reduction to obtain a (|V| − c(Γ)) × |E| matrix of the form [I_{|V|−c(Γ)} | X], where I_a is the a × a identity matrix and X is a (|V| − c(Γ)) × (|E| − |V| + c(Γ)) matrix. This is CMM(Γ) (see Prop. 4.7.14 of [57]).

We give more details on cycle matroids in the appendix. As an example consider the square [|V| = |E| = 4, c(Γ) = 1] and its incidence matrix

    [ -1  0  0  1 ]
    [  1 -1  0  0 ]
    [  0  1 -1  0 ]
    [  0  0  1 -1 ]

Applying elementary row operations and Gaussian reduction one obtains the CMM

    [ 1  0  0 -1 ]
    [ 0  1  0 -1 ]
    [ 0  0  1 -1 ]

which is indeed of the form [I_{|V|−c(Γ)} | X] with dimensions as in the definition, i.e., X is (4 − 1) × (4 − 4 + 1). Over Z_2 one would replace all the −1's with +1's. The column space of this matrix represents the cycle structure of the graph, where a cycle (or circuit) is a path in the graph for which the first vertex of the path is the same as the last. Any set of columns that are linearly dependent indicates a cycle. The first three columns in the CMM of the square are linearly independent, but together with the fourth column they become linearly dependent, since there is a cycle in the graph involving the corresponding four edges.
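The reduction above can be carried out mechanically. The following sketch (our own illustration, not from the thesis; the helper name `rref_gf2` is ours) works over Z_2, where the −1's become +1's as noted above, and recovers the [I_3 | X] form for the square:

```python
# Sketch: row-reduce the square graph's incidence matrix over GF(2) and
# check that the result has the block form [I_3 | X] described in the text.

def rref_gf2(rows):
    """Row-reduce a binary matrix (list of 0/1 lists) over GF(2)."""
    m = [r[:] for r in rows]
    pivot = 0
    for col in range(len(m[0])):
        for r in range(pivot, len(m)):
            if m[r][col]:
                m[pivot], m[r] = m[r], m[pivot]
                for rr in range(len(m)):
                    if rr != pivot and m[rr][col]:
                        m[rr] = [a ^ b for a, b in zip(m[rr], m[pivot])]
                pivot += 1
                break
    return [r for r in m if any(r)]  # drop zero rows

# Incidence matrix of the square (4 vertices, 4 edges), signs dropped mod 2.
incidence = [
    [1, 0, 0, 1],
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 1],
]
cmm = rref_gf2(incidence)
print(cmm)  # -> [[1, 0, 0, 1], [0, 1, 0, 1], [0, 0, 1, 1]], i.e. [I_3 | X]
```

Note that the fourth column is the sum of the first three, reflecting the single 4-cycle of the square.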

What is the equivalence class of graphs with the same CMM? This is answered by the following:

Definition. Two graphs G and G′ are called 2-isomorphic if there exists a 1–1 correspondence between the edges of G and G′ such that the cycle (or circuit) relationships are preserved.

Thus all 2-isomorphic graphs have the same CMM (up to elementary row and column operations).

5.2.3 The relationship to linear codes

Recall the following definitions.

Definition. Let F_q be a finite field with q prime. A linear code C is a k-dimensional subspace of the vector space F_q^n and is referred to as an [n, k] code. The code is said to be of length n and of dimension k.

In our case q is the number of possible states per spin in the Potts model.

Definition. A k × n matrix whose rows are a basis for C is called a generator matrix for C.

Recall from Definition 5.2.2 that CMM(Γ) is a (|V| − c(Γ)) × |E| matrix. The |E| columns of CMM(Γ) reflect the cycle structure of the given graph via linear independence in the vector space F_q^n. We now view the |V| − c(Γ) rows of the CMM as generating an [n = |E|, k = |V| − c(Γ)] “cocycle code” C:

Definition. The cocycle code C(Γ) of a graph Γ is the row space of CMM(Γ). [2]

In this chapter our only concern with the weight spectrum is its connection to the Potts partition function, but in coding theory it can be used to reveal information about the efficiency of a code [61]. The connection between equation (1.10) and the cocycle code of the graph Γ for the Potts model is given in the following theorem, proved in [2].

Theorem 11. Let A(x, y) be the weight enumerator of the [n = |E|, k = |V| − c(Γ)] cocycle code C(Γ) of the graph Γ = (E, V), and let the number of states per spin (vertex) in the corresponding Potts model be a prime q. Then

Z_Γ(y) = y^{−n} q^{c(Γ)} A(1, y).   (5.5)

We take q to be prime and not a power of a prime to simplify matters. In this manner the cocycle code has words whose entries are in F_q, as will the corresponding irreducible cyclic code in the trace representation over F_{q^r}. The connection between the Potts partition function and weight enumerators can also be understood via a previous result which shows that Z is equivalent to the Tutte polynomial (under certain restrictions) and that the weight polynomial of a linear code is also equivalent to the Tutte polynomial [43]. We also note that a relation similar to Eq. (5.5) was established in [28] for the Ising spin glass partition function and so-called quadratically signed weight enumerators, along with a discussion of computational complexity.
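To make Eq. (5.5) concrete, here is a minimal numerical check (our own sketch, not part of the thesis) for the square graph with q = 3: the cocycle codeword attached to a spin configuration has support on the unsatisfied edges, each codeword arises from q^{c(Γ)} configurations, and the identity Z_Γ(y) = y^{−n} q^{c(Γ)} A(1, y) with y = e^{−βJ} follows.

```python
# Sanity check of Eq. (5.5) on the square graph with q = 3 states per spin.
from itertools import product
import math

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # the square
q, n, c = 3, len(edges), 1
y = 0.5  # y = exp(-beta*J) at some arbitrary temperature

# Weight enumerator A(1, y) of the cocycle code: one codeword per spin
# configuration modulo a global shift; its weight is the number of
# "unsatisfied" edges (endpoints carrying different spins).
codewords = {tuple((s[i] - s[j]) % q for i, j in edges)
             for s in product(range(q), repeat=4)}
A = sum(y ** sum(1 for e in w if e != 0) for w in codewords)

# Direct evaluation of the Potts partition function in the same variable:
# each spin configuration contributes y^(u - n), u = number of unsatisfied edges.
Z_direct = sum(y ** (sum(1 for i, j in edges if s[i] != s[j]) - n)
               for s in product(range(q), repeat=4))

assert len(codewords) == q ** 4 // q ** c       # q^{c} configurations per word
assert math.isclose(y ** (-n) * q ** c * A, Z_direct)
```

The assertion holds for any y > 0, since the map from configurations to codewords is exactly q^{c(Γ)}-to-one.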

5.2.4 Testing the graph for membership in the ICCC_ε class

We now have the tools to address the issue of whether a graph should be accepted as input into the main algorithm, i.e., whether a graph belongs to the ICCC_ε class. This is handled as follows.

- Input: A graph Γ with |E| edges and |V| vertices, the given Galois field of q^k elements, and ε.

- Output: Accept or Reject. Let n = |E| and k = |E| − |V| + c(Γ) as in the main theorem.

- Overall Complexity: O(|E| · k^2 log k log log k), due to the ability to take the discrete log |E| times efficiently with a quantum computer [98].

1. Compute θ_{n,k} as given in Definition 5.2.1.

2. Find CMM(Γ). It is an (n − k) × n matrix of the form [I_{n−k} | X], where X is an (n − k) × k matrix. Form the k × n (transposed parity check) matrix H^T = [−X^T | I_k]. H^T generates an [n, k] code C⊥(Γ) that is dual to the cocycle code C(Γ).

3. Determine if ε ≤ q^{θ_{n,k}−1}/(4√(q^k)) and if k is the multiplicative order of q mod n (i.e., k is the smallest integer such that q^k = 1 mod n). If both are true then go to the next step. Otherwise skip the next step and continue.

4. Main Loop:

(a) Fix a basis of GF(q^k) over GF(q) and consider the columns of H^T as coordinate vectors of elements g_i of GF(q^k).

(b) Calculate the discrete logarithms log(g_i) of each g_i with respect to a fixed primitive element g of GF(q^k) (every element in the field can be written as g^l for some l), using Shor's algorithm [98] on a quantum computer.

(c) Accept or Reject Γ based on the fact that C⊥ is (equivalent to) an irreducible cyclic code if and only if the numbers log(g_i) are, in some order, consecutive integer multiples of N := (q^k − 1)/n. This is due to the fact that by definition the generator matrix of an irreducible cyclic code is equivalent to (1 g^{Nj} g^{2Nj} ... g^{(n−1)Nj}) where gcd(n, j) = 1 [61].

5. If Step (c) failed: using elementary row operations, transform H^T into a block diagonal matrix if possible. If not possible then Reject. If possible then go to Step (c), input each sub-matrix, and continue.
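The acceptance test of step 4(c) can be illustrated classically on a toy instance (our own sketch; the discrete logs are brute-forced here, standing in for Shor's algorithm). For GF(8) and the binary [7, 3] irreducible cyclic code we have N = (2^3 − 1)/7 = 1, and the column logs are indeed consecutive multiples of N times a unit j:

```python
# Illustration of step 4(c): brute-force discrete logs in GF(2^3) and check
# that the logs of the generator-matrix columns are consecutive multiples
# of N = (q^k - 1)/n (times some j coprime to n).

q, k, n = 2, 3, 7
N = (q ** k - 1) // n   # = 1

def gf8_mul(a, b):
    """Multiply bit-vector elements of GF(2^3) modulo x^3 + x + 1."""
    r = 0
    for i in range(3):
        if (b >> i) & 1:
            r ^= a << i
    for i in (4, 3):                 # reduce degrees 4 and 3
        if (r >> i) & 1:
            r ^= 0b1011 << (i - 3)   # x^3 + x + 1
    return r

g = 0b010  # x, a primitive element of GF(8)
powers = [1]
for _ in range(q ** k - 2):
    powers.append(gf8_mul(powers[-1], g))
dlog = {v: e for e, v in enumerate(powers)}

# Columns of H^T: coordinate vectors of g^(N*j*t), t = 0..n-1, with j = 3.
j = 3
cols = [powers[(N * j * t) % (q ** k - 1)] for t in range(n)]
logs = sorted(dlog[cvec] for cvec in cols)

# Accept iff the logs are {t*N*j mod (q^k - 1)}: here N*j generates Z_7,
# so the sorted logs are exactly 0, 1, ..., 6.
assert logs == sorted((N * j * t) % (q ** k - 1) for t in range(n))
```

The check is run on a code built to pass; on a non-cyclic H^T the logs would fail to line up, and step 5 would attempt a block decomposition.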

5.2.5 Proof of the Main Theorem

As stated in Theorem 10, we are essentially interested in obtaining the weight spectrum of [n, k] irreducible cyclic codes. The number of words with different non-zero weight is at most N, where N = (q^k − 1)/n. This result is given as Proposition (1). Now let w(x) be the Hamming weight of the code word associated with x ∈ F*_{q^k}. The McEliece Theorem connects the weights of words of irreducible cyclic codes to Gauss sums.

Theorem 12. (McEliece Formula) Let w(y) for y ∈ F*_{q^k} be the weight of the code word given by Eq. (3.6), let q^k = 1 + nN where q is prime and k, n and N are positive integers, let d = gcd(N, (q^k − 1)/(q − 1)), and let the multiplicative character χ̄ be given by χ̄(γ) = exp(2πi/d), where γ generates F*_{q^k}. (χ̄ is called the character of order d.) Then the weight of each word in an irreducible cyclic code is given by

w(y) = q^k(q − 1)/(qN) − (q − 1)/(qN) Σ_{a=1}^{d−1} χ̄(y)^{−a} G_{F_{q^k}}(χ̄^a, 1).   (5.6)

For a proof of this see [12].
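The role of the Gauss sums in Eq. (5.6) can be previewed with a small classical computation (our own illustration): over a prime field F_p the nontrivial Gauss sums have modulus √p = q^{k/2} with k = 1, so each sum is fixed by a single phase γ — the quantity the quantum algorithm of Theorem 13 estimates.

```python
# Classical illustration: a Gauss sum over F_13 with a multiplicative
# character of order d = 4 has absolute value sqrt(13), so only its phase
# gamma is unknown.
import cmath, math

p = 13
g = 2  # a primitive root mod 13
dlog = {pow(g, e, p): e for e in range(p - 1)}

d = 4  # order of the multiplicative character (d divides p - 1)
chi = lambda x: cmath.exp(2j * cmath.pi * dlog[x] / d)   # chi(g^e) = e^{2 pi i e/d}
psi = lambda x: cmath.exp(2j * cmath.pi * x / p)         # additive character

G = sum(chi(x) * psi(x) for x in range(1, p))
assert math.isclose(abs(G), math.sqrt(p), rel_tol=1e-9)
gamma = cmath.phase(G)  # the phase a quantum computer would estimate
```

Classically, computing γ this way costs time linear in the field size; the quantum algorithm below achieves polylogarithmic cost.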

The important feature here is that if we had the ability to efficiently estimate G_{F_{q^k}}(χ, β), then we would be able to find the weights of the words in an irreducible cyclic code efficiently under the restrictions mentioned in Theorem 10. This would in turn allow us to find the weight spectrum {A_i} of the code. Recall the following theorem.

Theorem 13. (van Dam & Seroussi [115]) For any ε > 0, there is a quantum algorithm that estimates the phase γ in G_{F_{q^k}}(χ, β) = q^{k/2} e^{iγ}, with expected error E(|γ − γ̃|) < ε. The time complexity of this algorithm is bounded by O((1/ε) · (log(q^k))^2). [115]

The Gauss sum algorithm allows one to estimate γ in Eq. (4.1) to within any accuracy ε, i.e., the algorithm returns γ̃ such that |γ̃ − γ| < ε. The hope is that if one can approximate γ precisely enough then one would get an exact evaluation of the weight. In fact an essential step here is to use a quantum computer to obtain a list of approximate angles {γ̃_t} for t = 1, . . . , d − 1, for d given above. The next theorem gives a minimum distance between weights so that we can choose an appropriate error that will allow one to distinguish between weights, which allows us to obtain accurate coefficients for A(1, y) and hence exact values for the exponents.

Theorem 14. (McEliece [119]) All the weights of an [n, k] irreducible cyclic code are divisible by q^{θ_{n,k}−1}, where θ_{n,k} is given in Definition 5.2.1.

The Proof

We are now ready to prove Theorem 10.

Proof. Assume that a given graph Γ = (E, V) is a member of ICCC_ε, where n = |E| and k = |E| − |V| + c(Γ). Hence it is given that ε ≤ q^{θ_{n,k}−1}/(4√(q^k)) ≡ ε_0. We want to obtain Z_Γ for either the fully ferromagnetic or anti-ferromagnetic Potts model. It follows from Definition 5.2.1 that the dual of the cocycle code of Γ is an irreducible [n, k] cyclic code. We must demonstrate that we can obtain the weight enumerator A(1, y) of this dual code within the claimed number of steps. As mentioned above, since nN = q^k − 1 there are then at most N different weights, with at least n words of each weight (see chapter 3). In order to find the spectrum {A_i}, we are faced with the computational task of finding the range of

S(i) = q^k(q − 1)/(qN) − (q − 1)/(qN) Σ_{a=1}^{d−1} χ̄(α^i)^{−a} q^{k/2} e^{iγ̃_a}   (5.7)

(where again d = gcd(N, (q^k − 1)/(q − 1)) and i ∈ {0, . . . , N − 1}) and then performing a tally.

The proof consists of five main parts:

1. Proof that with ε bounded by ε_0 as given, it is possible to distinguish between weights of the words of the code that corresponds to the given graph. This ability allows for an exact evaluation of Z_Γ.

2. We need to justify our asymptotic approach and show that for a fixed error ε there are a countable number of graphs in ICCC_ε.

3. Proof that the success probability δ is as stated in the Theorem.

4. Proof that the running time is as stated in the Theorem.

5. A transformation from the dual (irreducible cyclic) code to the cocycle code of the graph whose Potts partition function we are evaluating.

Let us now prove each of these five parts.

1. The first question we must address is the following: how small do we need to make the error ε in the phases returned in the Gauss sum approximation algorithm so that we will be able to distinguish between weights? We now show that ε ≤ ε_0 is sufficient, and hence that for every member of the class ICCC_ε it is possible to distinguish between weights.

Let w_ε(y) be the approximated weight returned by the quantum Gauss sum algorithm. It follows from Theorem 14 that two consecutive weights are separated by a distance that is an integer multiple of q^{θ_{n,k}−1}. Hence, a sufficient condition for being able to associate w_ε(y) with the correct weight w(y) (and not another neighboring weight) is:

|w(y) − w_ε(y)| < q^{θ_{n,k}−1}/2.   (5.8)

Let the error between the actual phase γ_a and the approximated phase γ̃_a be ε, i.e.,

|γ_a − γ̃_a| < ε.

Let us derive a bound on ε. Taking w(y) given in Theorem 12 and the necessary bound given in equation (5.8), we find that we need the inequality

|Σ_a χ̄(y)^{−a} e^{iγ_a} − Σ_a χ̄(y)^{−a} e^{iγ̃_a}| < (qN/(q − 1)) · q^{θ_{n,k}−1}/(2√(q^k))   (5.9)

to be satisfied. Now, we have

|Σ_a χ̄(y)^{−a} e^{iγ_a} − Σ_a χ̄(y)^{−a} e^{iγ̃_a}| ≤ Σ_a |e^{iγ_a} − e^{iγ̃_a}|
  ≤ Σ_a (|cos(γ_a) − cos(γ̃_a)| + |sin(γ_a) − sin(γ̃_a)|)
  ≤ 2(d − 1) max_a |γ_a − γ̃_a| < 2(d − 1)ε,

where the last inequality follows from the Mean Value Theorem of elementary calculus. Therefore, if we impose

ε < qN/((q − 1)(d − 1)) · q^{θ_{n,k}−1}/(4√(q^k))   (5.10)

then inequality (5.9) is satisfied. Consider the factor qN/((q − 1)(d − 1)). Noting from N ≤ αk^s that d = gcd(N, (q^k − 1)/(q − 1)) ≤ αk^s = N, it follows that 1 < qN/((q − 1)(d − 1)) ≤ (q/(q − 1)) N = O(k^{s(k)}). Thus we can replace the bound (5.10) by the tighter bound

ε < q^{θ_{n,k}−1−k/2}/4 = ε_0,   (5.11)

and this ε is definitely small enough to satisfy the required bound given in equation (5.8). Hence, if ε < ε_0 it is possible to resolve the weights w_ε(y) for different words y. This, in turn, gives us the ability to exactly reconstruct the weight enumerator A, and from there the partition function Z_Γ.

2. We prove the following lemma.

Lemma 5.2.1. Given a fixed ε < 1 there are countably many graphs in ICCC_ε, i.e., there are infinitely many corresponding irreducible cyclic codes [n_i, k_i] such that {θ_{n_i,k_i}} satisfies

q^{θ_{n_i,k_i}−1−k_i/2}/4 > ε.   (5.12)

What this means is that there is at least one family of graphs, for a given fixed ε, for which one will be able to obtain the exact Potts partition function. This also justifies the complexity arguments used herein.

Proof. We shall construct one such family and show that it satisfies the required relations. For simplicity take 4ε instead of ε. We must construct one family of graphs for which the corresponding irreducible cyclic codes, [n_i, k_i], satisfy

θ_{n_i,k_i} + log_q(ε^{−1}) > 1 + k_i/2.   (5.13)

Take q fixed and consider the following countable set of irreducible cyclic codes: {[q^m − 1, k_m]}_{m=1,2,3,...}. First we must note that

θ_{q^m−1,k_m} = m.   (5.14)

(Footnote: when d = 2, we have in fact ε < k^{s(k)} ε_0 where ε_0 = q^{θ_{n,k}−1}/(4√(q^k)), and in this case the computational cost of the algorithm (see Theorem 10) is scaled down from O((1/ε_0) k^{2s(k)} (log q)^2) to O((1/ε_0) k^{s(k)} (log q)^2), where the upper bound (5.11) still applies. This means that within the family ICCC_{ε_0} some instances can be solved faster than others by a factor of k^{s(k)}, at fixed ε_0.)

This follows from the properties of addition in base q: the k digits of q^k − 1 in base q are all (q − 1), and adding integer multiples of q^k − 1 will not decrease the digit sum. I.e.,

S′(η(q^m − 1)) ≥ m(q − 1)   ∀ η ∈ N,

where S′ denotes the base-q digit sum.

This is important to keep in mind when we consider extending this family later in this proof. Now we must demonstrate that there is at least one k_m that satisfies Eq. (5.13). Because we are dealing with irreducible cyclic codes we must have

q^{k_m} = 1 mod n = 1 mod (q^m − 1).

This is trivially satisfied by k_m = m, and indeed Eq. (5.13) becomes m − m/2 > 1 + log_q ε, which is clearly true for any ε < 1. This family is computationally trivial, however, being that N = 1.

Let us now extend this family to include many interesting instances. Let us first consider a fixed code [q^m − 1, k_m] (i.e., N = 1, m fixed). Let us next generate a family of codes {[η_j(q^m − 1), k_{m_j}]}_{j=1,2,...,M} by taking integer multiples η_j of q^m − 1, and picking k_{m_j} ≥ k_m such that η_j(q^m − 1)N = q^{k_{m_j}} − 1 (this is just the irreducible cyclic code condition nN = q^k − 1). We obtain a finite set of codes (M < ∞) because it follows from Eq. (5.12) that eventually the {k_{m_j}}_j will become too large for the fixed error ε, for each m. We then do this for every m ∈ N, paying special attention to the integer multiples η_j. The η_j are selected in this construction so that two conditions are satisfied: (i) the corresponding {k_{m_j}}_j are sufficiently small to ensure that Eq. (5.12) is satisfied, (ii) that {N_{m_j}}_{m,j} is bounded by O(k^s).

Regarding (i), the steps above are conveniently summarized as the following loop:

Given ε:

1. For m = 1, 2, ...

2. Repeat for j = 1, 2, ...

   n := j(q^m − 1)

   calculate k_{m_j} = ord_q(n)

   if q^{θ_{n,k_{m_j}}−1−k_{m_j}/2} < ε then reject k_{m_j}, else accept k_{m_j} and let η_j ≡ j.

   Until j = M
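The enumeration part of the loop can be sketched classically (our own illustration; the θ_{n,k}/ε acceptance test is omitted, since θ_{n,k} is defined in Definition 5.2.1 outside this section): we list candidate codes [n, k] with n = j(q^m − 1), k = ord_q(n), and N = (q^k − 1)/n.

```python
# Sketch of the candidate enumeration in the loop above (no theta/epsilon
# filter): returns triples (n, k, N) with n = j*(q^m - 1), k = ord_q(n).
import math

def ord_mod(q, n):
    """Multiplicative order of q modulo n (assumes gcd(q, n) == 1)."""
    k, acc = 1, q % n
    while acc != 1:
        acc = (acc * q) % n
        k += 1
    return k

def candidates(q, m, j_max):
    out = []
    for j in range(1, j_max + 1):
        n = j * (q ** m - 1)
        if math.gcd(q, n) != 1:
            continue  # order of q mod n undefined
        k = ord_mod(q, n)
        out.append((n, k, (q ** k - 1) // n))
    return out

print(candidates(2, 3, 3))  # -> [(7, 3, 1), (21, 6, 3)]
```

For q = 2, m = 3 this recovers the trivial [7, 3] code with N = 1 and the extension [21, 6] with N = 3 (j = 2 is skipped since gcd(2, 14) ≠ 1).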

Note that we are guaranteed to find such a non-empty finite set {k_{m_j}}_j due to the fact that if gcd(q, η_j(q^m − 1)) = 1, then there exists k_{m_j} ∈ N such that q^{k_{m_j}} = 1 mod η_j(q^m − 1) (see, e.g., Th. 7-1 of [49]).

Regarding (ii), we still need to show that there exist solutions N_{m_j} to q^{k_{m_j}} − 1 = n N_{m_j} that scale as O(k^s). To see why such solutions exist, consider solving q^k − 1 = nN with N = αk^s and n = η(q^m − 1), where α ∈ R (we have dropped the subscripts for simplicity). The solution is

m = log_q[(q^k − 1)/(αηk^s) + 1].   (5.15)

In the loop above, only those m's satisfying Eq. (5.15) are acceptable in terms of the scaling of our algorithm. However, note that asymptotically Eq. (5.15) yields m = k − s log_q k − log_q αη. This means that for every value of k and s it is possible to adjust α such that m is an integer, by letting s log_q k = log_q αη. At this point we have constructed an infinite family of pairs [n, k_{m_j}] [where n = η_j(q^m − 1) and where m satisfies Eq. (5.15)], each of which defines a graph which is a member of the set ICCC_ε.

Finally, we should mention without proof that one can “fill” this family of graphs by considering the multitude of cases which do not conform to the restrictions in this construction, but which do obey relation (5.13) and the asymptotic conditions given in Definition 5.2.1. Moreover, the graphs we have constructed are quite sparse, but they are only a subset of ICCC_ε. There are many more interesting graphs that can be handled by this fixed error bound. For example, graphs which are the direct sum of many copies of a smaller graph are excluded from this family. Further, one may accept an error ε that decreases polynomially in k, for example, and define a family of graphs in that way. We do not pursue this here.

3. In the van Dam–Seroussi algorithm (Theorem 1 in [115]), a prepared state must go through a phase estimation. In [85] it is demonstrated that if the number of qubits used in phase estimation is t = log 1/ε + log(2 + 1/(2δ)) then the probability of success is at least 1 − δ. Ref. [116][p. 7] states that for the Gauss sum algorithm t = 2 log(q^k − 1). After some elementary algebra we obtain δ = [2((q^k − 1)^2 ε − 2)]^{−1}. By the Chernoff bound, for fixed problem size k, we only need to pick ε such that the probability of failure δ is less than 1/2.

4. (a) We have that if α is a generator of F*_{q^k} and if i = j mod n, then the code words associated with α^i and α^j are cyclic permutations of each other, and therefore are of the same weight. Let us denote by [α^i] the (equivalence) class of all words {α^j}_j with i = j mod n. In this step we wish to find the weight of [α^i]. This weight is given by

S(i) = q^k(q − 1)/(qN) − (q − 1)/(qN) Σ_{a=1}^{d−1} χ̄(α^i)^{−a} q^{k/2} e^{iγ̃_a}.   (5.16)

Hence (up to irrelevant classical computations) the computational cost of computing S(i) is d − 1 times the cost of computing γ̃_a. For any graph in ICCC_ε, obtaining these d − 1 phases has a (quantum) cost of O(dk^2(log q)^2), where d is bounded above by N. This comes from the complexity of computing the Gauss sum d times. (Recall that one has to repeat this algorithm 1/ε times in order to ensure that we obtain a sufficiently close approximation.)

(b) How many times must we compute S(i)? The number of times is the number of different equivalence classes {[α^i]}. Each equivalence class [α^i] is clearly of size n, and there are q^k − 1 words. Recall that nN = q^k − 1 for non-degenerate irreducible cyclic codes, and hence N is the number of different equivalence

classes. (Actually the answer to “How many times must we compute S(i)?” is that one must only do this for the number of cyclotomic cosets of N – see subsection 5.2.7.)

(Footnote: The Chernoff bound states that P(Σ_{i=1}^n X_i ≤ n/2) ≤ e^{−2ε²n} for independent and identically distributed random variables X_1, . . . , X_n, each taking the value 1 with probability 1/2 + ε and 0 with probability 1/2 − ε. This means that the probability of an error occurring decreases exponentially in the number of repetitions of the algorithm.)

(c) For a given S(i) we must compute a sum over d terms. The cost of computing each such term is constant once we have obtained the phases γ̃_a [which we have, in step (a)]. Combining this with step (b), we see that the total cost of computing all S(i)'s is (d − 1)N.

At this point the total computational cost is therefore max[O(dk^2(log q)^2), O(dN)]. We choose N = O(k^{s(k)}), so if one takes s(k) to be a constant, then the algorithm is polynomial in k. Thus, the overall time complexity is O(d · k^{max[2,s(k)]}(log q)^2). Being that d ≤ N = O(k^{s(k)}), the complexity is ultimately O(k^{2 max[1,s(k)]}(log q)^2).

(d) We now have the list {S(i)}. Next, a tally of all the weights has to be done, which has complexity O(k^{s(k)/2}) using quantum counting [46]. The tally will return all the weights and the counts of each weight (see Section 5.4), which are respectively the exponents and coefficients of the polynomial A(1, y), the weight enumerator of the dual of the cocycle code. Note that this step does not affect the overall complexity of the algorithm, as it has a smaller running time than the previous steps.
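Classically, the tally is just a grouping of equal weights with their coset multiplicities (the numbers below are hypothetical, and the quantum-counting speedup is not modelled):

```python
# Sketch of the tally: group equal S(i) values and sum the multiplicities
# of the cosets producing them (hypothetical toy numbers).
omegas = [3.0, 5.0, 3.0, 7.0, 5.0]   # weights S(b_i) from coset representatives
mults = [2, 1, 4, 1, 3]              # coset sizes v_i

tally = {}
for w, v in zip(omegas, mults):
    tally[w] = tally.get(w, 0) + v

print(tally)  # -> {3.0: 6, 5.0: 4, 7.0: 1}
```

The resulting weights and counts are the exponents and coefficients of A(1, y), up to the cyclic-permutation factor discussed in Section 5.4.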

5. Note that so far we have dealt with the [n, k] irreducible cyclic code that is the dual of the cocycle code of Γ, i.e., we have used n = |E| and k = |E| − |V| + c(Γ). However, recall that Γ = Γ(E, V) and hence corresponds to the [n, n − k] = [|E|, |V| − c(Γ)] code, i.e., the cocycle code of the graph Γ as desired. (This correspondence means that we can obtain information about interesting graphs by considering codes of smaller dimension.) Thus, in order to complete the proof we need the weight enumerator of the [n, n − k] cocycle code itself, so that we can apply Theorem 11. The relation between the weight enumerator A of a code C over the field F_{q^k}, and the weight enumerator A⊥ of the dual code C⊥, is given by the MacWilliams Theorem [61]:

A⊥(1, x) = q^{k(k−n)} [1 + (q^k − 1)x]^n A(1, y),   (5.17)

where

x ≡ (1 − y)/(1 + (q^k − 1)y).   (5.18)

Applying the MacWilliams theorem and Barg's theorem [specifically Eq. (5.5)] to A⊥(1, x), we arrive at the partition function

Z(x) = x^{−n} q^{c(Γ)} A⊥(1, x).   (5.19)
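As a sanity check on the MacWilliams step, here is the identity in its standard one-variable q-ary form (alphabet size Q in place of the normalization used in the chapter), verified on the binary [3, 1] repetition code and its [3, 2] dual — our own illustration, not the thesis's computation:

```python
# Standard one-variable MacWilliams identity over an alphabet of size Q:
#   A_dual(y) = Q^{-dim C} (1 + (Q-1)y)^n A(x),  x = (1 - y)/(1 + (Q-1)y).
# Checked on the binary [3,1] repetition code {000, 111}.

Q, ncode = 2, 3
A = lambda t: 1 + t ** 3            # weight enumerator of {000, 111}
A_dual = lambda t: 1 + 3 * t ** 2   # dual code {000, 011, 101, 110}

y = 0.3
x = (1 - y) / (1 + (Q - 1) * y)
lhs = A_dual(y)
rhs = Q ** (-1) * (1 + (Q - 1) * y) ** ncode * A(x)
assert abs(lhs - rhs) < 1e-12
```

The identity holds for all y, as can be seen by expanding (1/2)[(1 + y)^3 + (1 − y)^3] = 1 + 3y^2.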

Recall that y = e^{−βJ} (where β = 1/(k_B T)); thus we have the following final expression for the partition function as a function of β:

Z(x(β)) = q^{c(Γ)+k(k−n)} [(q^k − 1) + x(β)^{−1}]^n A(1, y(β)).   (5.20)

It is simple to verify that given any temperature T ≥ 0, and for both positive and negative J, Z(x(β)) is always positive, as it should be.

5.2.6 Proof of the Corollary

We now give the proof of Corollary 2.

Proof. Assume that we are given a graph Γ(E, V) whose CMM is the direct sum of the CMMs of two graphs Γ_1 and Γ_2 in ICCC_ε (we call such a graph Γ a “composite graph”). Let C be the code that corresponds to the graph Γ, i.e., C is the cocycle code of Γ. Let C_1 and C_2 be the corresponding cocycle codes of Γ_1 and Γ_2. This means that we may apply our algorithm to each of these sub-graphs and obtain their weight enumerators. To do this we need to obtain the weight enumerators of C_1 and C_2, which we can do efficiently.

By definition C = C_1 ⊕ C_2. If the respective lengths and dimensions of C_1 and C_2 are [m, l] and [m′, l′], then C is an [m + m′, l + l′] linear code and its weight enumerator will be W = W_1 W_2 [61]. Thus, once one obtains the weight enumerators of the sub-graphs, one has the weight enumerator of Γ, and by using the arguments already outlined one can see that we can efficiently compute Z_Γ.

The above corollary allows the scheme outlined in this chapter to be efficiently applied to many graphs, because if one knows the generator matrices for C_1 and C_2 then one can efficiently construct the generator matrix for C by just taking the direct sum of the matrices. This gives a way of constructing examples of graphs for which the Potts partition function can be efficiently approximated. On the other hand (recall subsection 5.2.4), we can efficiently check if a generator matrix decomposes into a direct sum of smaller matrices and we can efficiently check if these matrices generate codes whose duals are irreducible cyclic.
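The factorization W = W_1 W_2 used above is easy to confirm by direct enumeration on a toy composite code (our own sketch; generators chosen purely for illustration):

```python
# Check that the weight enumerator of a direct sum C = C1 (+) C2 factors as
# W = W1 * W2, by enumerating the row space of the block-diagonal generator.
from itertools import product

def enum_weights(gen, q=2):
    """Weight distribution {weight: count} of the row space of `gen` over GF(q)."""
    k, n = len(gen), len(gen[0])
    counts = {}
    for coeffs in product(range(q), repeat=k):
        word = [sum(c * row[i] for c, row in zip(coeffs, gen)) % q
                for i in range(n)]
        w = sum(1 for e in word if e)
        counts[w] = counts.get(w, 0) + 1
    return counts

g1 = [[1, 1, 1]]             # [3,1] repetition code: W1 = 1 + y^3
g2 = [[1, 1]]                # [2,1] repetition code: W2 = 1 + y^2
gsum = [[1, 1, 1, 0, 0],     # generator of C1 (+) C2
        [0, 0, 0, 1, 1]]

# W = W1*W2 = 1 + y^2 + y^3 + y^5, i.e. one word of each weight 0, 2, 3, 5.
assert enum_weights(gsum) == {0: 1, 2: 1, 3: 1, 5: 1}
```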

5.2.7 Reducing the Computational Cost of the Algorithm via Permutation Symmetry

With regards to the scheme presented in this chapter, we take q-cyclotomic cosets. We are guaranteed that gcd(N, q) = 1, which ensures that in our case the cyclotomic cosets are disjoint. That gcd(N, q) = 1 is due to the fact that there are solutions x, y ∈ Z to Nx + qy = 1 (Thm. 2-4 of [49]). For example, since N = (q^k − 1)/n, one can take x = n(q − 1) and y = 1 + q^{k−1} − q^k, which are both integers. The relevance of the q-cyclotomic cosets of {0, . . . , N − 1} is that each element in a given coset has the same value of S(i). This is because the mapping x ↦ x^q is a permutation of F_{q^k} and the additive characters obey the identity exp(2πi Tr(b^q)/q) = exp(2πi Tr(b)/q) for all b ∈ F_{q^k}. Hence S(i) is invariant under the mapping x ↦ x^q. (See the appendix for details on additive characters and the trace function Tr.) Therefore we only have to evaluate S(i) for one i in each coset. The computational cost of computing the coset representatives and the number of elements in each coset is linear in N [45]. This has the potential of significantly speeding up the algorithm, though by how much will clearly depend on the number of cosets generated by each instance. The number of cosets is given by [100]

N_C = Σ_{f|N} φ(f)/ord_q f   (5.21)

where φ(f) is the Euler totient function and ord_q f is the multiplicative order of q mod f, both defined in chapter 3. Note that N_C replaces N in the overall computational cost of our algorithm, and N_C ≤ N. While this can lead to a significant speedup in some cases, for the sake of simplicity and of having uniform bounds we will not pursue this further here.

As an illustration of the power of using cyclotomic cosets, consider the following numerical example. Let q = 2, 1/ε ≥ 8192, and consider a binary [113, 85] code which is the dual to a binary [113, 28] irreducible cyclic code (i.e., 28 is the smallest integer such that 2^28 = 1 mod 113). This corresponds to either the fully ferromagnetic or fully anti-ferromagnetic Ising model on a graph with 113 edges and 86 vertices. Now note that nN = 2^28 − 1, which implies that N = 2375535. Without the use of cyclotomic cosets this value of N would set our computational cost, in that it is the number of times that S(i) must be queried. However, it turns out that there are N_C = 85439 cyclotomic cosets, and this is the actual number of queries to S(i). Note that there are instances where N ≪ n and cyclotomic cosets are not required. For example, consider the binary [13981, 20] irreducible cyclic code. Here n = 13981 and N = 75. Physically this corresponds to either the fully anti-ferromagnetic or ferromagnetic Ising model over a connected graph with 13981 edges and 13962 vertices (considering the dual code).
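The coset machinery, together with the count (5.21), can be checked on a small instance (our own sketch): for q = 2 and N = 15 there are five 2-cyclotomic cosets of {0, ..., 14}, matching Σ_{f|15} φ(f)/ord_2 f = 5.

```python
# q-cyclotomic cosets of {0,...,N-1} under x -> q*x mod N, with the count
# cross-checked against Eq. (5.21): N_C = sum over f | N of phi(f)/ord_q(f).
import math

def cyclotomic_cosets(q, N):
    seen, cosets = set(), []
    for i in range(N):
        if i in seen:
            continue
        coset, x = [], i
        while x not in coset:
            coset.append(x)
            x = (q * x) % N
        seen.update(coset)
        cosets.append(coset)
    return cosets

def ord_mod(q, f):
    """Multiplicative order of q modulo f (ord is 1 when f == 1)."""
    if f == 1:
        return 1
    k, acc = 1, q % f
    while acc != 1:
        acc = (acc * q) % f
        k += 1
    return k

def phi(f):
    return sum(1 for a in range(1, f + 1) if math.gcd(a, f) == 1)

q, N = 2, 15
cosets = cyclotomic_cosets(q, N)
NC = sum(phi(f) // ord_mod(q, f) for f in range(1, N + 1) if N % f == 0)
assert len(cosets) == NC == 5   # {0}, {1,2,4,8}, {3,6,12,9}, {5,10}, {7,14,13,11}
```

Only one S(i) evaluation per coset is needed, so here 5 queries replace 15.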

5.3 Classical and Quantum Complexity of the Scheme

Assuming one knew that a given graph was a member of ICCC_ε, then classically one could proceed as follows, using a state-of-the-art algorithm ZETA for the computation of zeta functions of the family of curves C_α : y^q − y = αx^N [54]. Here N is as given in the relation nN = q^k − 1 and the index α is in one-to-one correspondence with the code words in the given cocycle code (specifically α ∈ F_{q^k}). The connection between the weights of words of an irreducible cyclic code and the number of rational points on the curves C_α is well known, as is the connection between the zeta functions of such curves and Gauss sums [84]. The complexity of using ZETA to compute the N = αk^{s(k)} different weights is O(k^{6s(k)+3+ε′} (q/2)^{5+ε′}) [54], and a tally of these weights will take O(k^{s(k)}) operations (ε′ is a small real number – unrelated to ε, which parameterizes the class of graphs in question). The overall complexity of finding the range of S(i) will therefore be

classical cost = O(k^{6s(k)+3+ε′} (q/2)^{5+ε′}),   (5.22)

assuming that we know that a given graph is a member of ICCC_ε. As far as we know this is the fastest classical algorithm for the problem we have considered here.

For a quantum computer we do not need to assume that testing for membership is efficient: we know that this can be done efficiently using the discrete log algorithm [98]. Above we showed that the overall complexity of finding Z is bounded by O(k^{2 max[1,s(k)]}(log q)^2). This should be contrasted with the best classical result available, (5.22). For example, if we take s = 2 (both classical and quantum methods are polynomial when

we take s(k) to be a constant), we obtain an O(k^{11}) improvement and an exponential speedup in q. One could imagine fixing a graph and calculating the partition function for increasing values of q. In this situation we have an exponential speedup over the best classical algorithm available. Note that there is a quantum algorithm for finding zeta functions of curves which is exponentially faster in q than the classical algorithm in [54] (as is ours). This is given in [64]. The use of this algorithm instead of the Gauss sum approximation algorithm is left for a future publication.

Figure 5.2: A diagrammatic overview of the algorithm. (Box shapes do not have a meaning.)

On a final note, the classification ICCC_ε we have chosen is meant to highlight the boundary between BQP and P by fixing the acceptable error in the Gauss sum phases. One could opt for a perhaps more natural class of graphs by bounding the way that 1/ε grows instead. For example, one could restrict the class of graphs in such a way that 1/ε ∼ q^{k/2−θ_{n,k}+1} grows polynomially in k, in particular such that

1/ε < k^{5s(k)+1}.

For this class of graphs one would also have a speedup in the quantum case.

5.4 Detailed Summary

For convenience we recollect our definitions and provide a diagram of our scheme. We are considering the q-state Potts model (fully ferromagnetic or fully anti-ferromagnetic) over a graph Γ = (E, V), with q prime. This includes the Ising model (q = 2). Every graph Γ has a cycle matroid M(Γ) associated with it, and every cycle matroid has a (|V| − c(Γ)) × |E| matrix representation G (the CMM), where c(Γ) is the number of connected components of Γ. The columns of G encode the dependence structure of the graph and the row space of G generates the cocycle code of length |E| and dimension |V| − c(Γ). The length and dimension of the dual code are respectively n = |E| and k = |E| − |V| + c(Γ).

Following is a detailed synopsis of the algorithm for computing the partition function.

1. Given a graph, efficiently determine if it belongs to ICCC_ε (Definition 5.2.1). This step appears to be hard on a classical computer in general, since it is equivalent to computing a discrete log.

2. If the CMM G = [I_{|V|−c(Γ)} | X] is the matrix representation over F_q of the cycle matroid M(Γ) of Γ, then the row space of H = [−X^T | I_{|E|−|V|+c(Γ)}] will be the code C(Γ).

3. Let N = O(k^s) where s is a constant integer that determines the complexity of the algorithm. Take C(Γ) as an irreducible cyclic code of length n = (q^k − 1)/N and dimension k, i.e., we only consider graphs Γ where C(Γ) is an irreducible [n, k] cyclic code.

4. If we can evaluate the weight enumerator of C(Γ) we will have successfully approximated the Potts partition function over the corresponding graph Γ. To do so:

(a) Find the q-cyclotomic cosets of {0, 1, . . . , N − 1}. This step requires at most linear time in N.

(b) Using the quantum algorithm for Gauss sums [115] we are able to estimate the weights of the words. The error in the Gauss sum algorithm can be high in this setting, and therefore we have to restrict the class of graphs further in order to obtain exact evaluations. Use the Gauss sum algorithm to return the phases γ_1, . . . , γ_{d−1} [Eq. (4.1)] and then input these values into the function S(i) [Eq. (5.7)]. According to the McEliece Theorem (Th. 12) we have to make d − 1 (where d = gcd(N, (q^k − 1)/(q − 1))) calls to the quantum oracle and we can use these evaluations for each representative i of the q-cyclotomic cosets of {0, 1, . . . , N − 1}. This step has time complexity O(d k^2 (log q)^2).

(c) Let b_1, b_2, . . . , b_{N_C} be the coset representatives from the N_C cosets. Now each coset has cardinality v_i, i.e., b_i belongs to coset i which has v_i elements. We evaluate ω_i = S(b_i) for each b_i, remembering that each ω_i occurs v_i times. We end up with a list (ω_1, ω_2, . . . , ω_{N_C}) as well as a list (v_1, v_2, . . . , v_{N_C}) of multiplicities.

(d) Now perform a tally of repeats of the ω_i for each i ∈ {1, . . . , N_C}. This returns a set of indices Λ_i ≡ {j_i} ⊆ {1, . . . , N_C}. We add the corresponding v_{j_i}, which yields a_i = Σ_{j∈Λ_i} v_j, the number of words of weight ω_i up to cyclic permutations. To account for cyclic permutations, due to the fact that we are working over cyclic codes, we have A_i = n a_i, which is the desired weight spectrum.

Chapter 5. A quantum algorithm for the Potts partition function

Figure 5.3: A graph corresponding to a [4, 2] linear code over GF (3).

5. Now that we have determined the weight spectrum A_i in time O(k^{2s} (log q)^2) we have the coefficients for A(1, y) and so via the MacWilliams identity (5.17) we finally obtain the partition function (5.20).
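Steps 4(a) and 4(c)-(d) are purely classical bookkeeping; only S(i) requires the quantum Gauss sum oracle. A minimal sketch of the classical part, with the oracle passed in as a stand-in function argument (the function names here are our own, purely illustrative):

```python
from collections import Counter

def q_cyclotomic_cosets(q, N):
    # Partition {0, ..., N-1} into orbits of the map i -> q*i (mod N).
    seen, cosets = set(), []
    for i in range(N):
        if i in seen:
            continue
        coset, j = [], i
        while j not in seen:
            seen.add(j)
            coset.append(j)
            j = (q * j) % N
        cosets.append(sorted(coset))
    return cosets

def weight_spectrum(cosets, S, n):
    # Steps 4(c)-(d): evaluate S on one representative b_i per coset (S stands
    # in for the Gauss-sum-based weight oracle), tally the multiplicities v_i,
    # and scale by n for cyclic permutations as in step (d): A_i = n * a_i.
    tally = Counter()
    for coset in cosets:
        tally[S(coset[0])] += len(coset)
    return {w: n * a for w, a in tally.items()}

print(q_cyclotomic_cosets(2, 7))  # [[0], [1, 2, 4], [3, 5, 6]]
```
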

5.5 Examples and Discussion

In this section we provide the reader with some simple examples for illustrative purposes.

5.5.1 Example

Consider the graph depicted in Fig. 5.3. This graph depicts three spins, one of which has a self-interaction. It can be verified that this graph corresponds to the dual of a [4, 2] irreducible cyclic code over GF (3), i.e., q = 3. The generator matrix for this code is given by

[ 0 1 1 1 ]
[ 1 0 1 2 ]

We see that the corresponding graph must have 4 edges and 3 vertices (if the graph is connected), and this is the reason for having the spin with the self-interaction. The second, third, and fourth columns correspond to a triangle (as they sum to zero modulo 3) and the first column is the loop at one of the vertices. The self-interaction can be removed once the partition function has been obtained via a simple procedure described below.

We need to find the weight spectrum for this code. After forming the weight enumerator using MacWilliams identity, we apply Barg’s theorem which will give the q = 3 Potts partition function for this graph.

1. Using a quantum computer we evaluate the necessary Gauss sums. From the identity q^k − 1 = nN (necessarily satisfied by irreducible cyclic codes) we see that N = 2. This means that there can be at most two different weights (in fact the number of non-zero cyclotomic cosets is one).

2. Compute the number of times that one must repeat the quantum algorithm for Gauss sums in order to obtain an acceptable accuracy. We see that this number is given by

1/ε = 4 √(3^2) θ_{n,k}^{-1}.

Since 4 = 11 in base 3, we have that θ_{n,k} = (1/2)[1 + 1] = 1 and so 1/ε = 12. This means that the algorithm must be repeated 13 times to ensure the desired accuracy.

3. After evaluating the Gauss sums and plugging them into equation (5.6), we obtain two weights: 0 and 3.

4. As only one word can have zero weight, the remaining 3^2 − 1 = 8 words have weight 3. This means that we have the weight enumerator A(1, y) = 1 + 8y^3.

5. Using relation (5.20) derived earlier, we find that for this graph

Z(x(β)) = (1/27) [ 8 + x(β)^{-1} [1 + 8y^3(β)] ]

where x(β) = (1 − y(β))/(1 + 8y(β)), y = e^{−βJ}, and β = 1/(k_B T).

6. At this point we can remove the self-interaction by dividing Z(x(β)) by y. This is due to the following theorems.

Theorem 15. Let T be the Tutte polynomial. If e is a loop then

T(M; x, y) = y T(M − e; x, y)

where M − e is the matroid (or graph) with the loop deleted [118].

Theorem 16.

A(1, y) = y^{n−k} (1 − y)^k T( M; (1 + (q − 1)y)/(1 − y), 1/y )

This is known as Greene's identity and one can see either [2] or [118] for details. Putting these theorems together one finds that

A_{M−e}(1, y) = y^{n−k} (1 − y)^k T( M − e; (1 + (q − 1)y)/(1 − y), 1/y )   (5.23)
             = y^{n−k} (1 − y)^k y T( M; (1 + (q − 1)y)/(1 − y), 1/y )   (5.24)
             = y A_M(1, y)   (5.25)

and therefore we find that the partition function for the triangle is then given by

Z(x(β)) = (1/27) [ 8 + x(β)^{-1} ((1 − x)/(1 + (q^k − 1)x)) [1 + 8y^3(β)] ]
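The weight enumerator A(1, y) = 1 + 8y^3 used in this example can be checked by brute force over the 9 codewords of the generator matrix from Section 5.5.1 (this check is our own illustration):

```python
from collections import Counter
from itertools import product

# Generator matrix of the [4, 2] code over GF(3) from the example.
G = [(0, 1, 1, 1), (1, 0, 1, 2)]

weights = Counter()
for a, b in product(range(3), repeat=2):        # all 3^2 = 9 messages
    word = tuple((a * g1 + b * g2) % 3 for g1, g2 in zip(*G))
    weights[sum(1 for c in word if c != 0)] += 1

print(sorted(weights.items()))  # [(0, 1), (3, 8)] -> A(1, y) = 1 + 8y^3
```
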

Figure 5.4: Chaining of the graph in Fig. 5.3

Figure 5.5: A ladder graph illustrating a recursively defined graph. This graph corresponds to a [17, 8] binary linear code.

One should note that due to Corollary (2), we could form a string of these triangle graphs as shown in Fig. 5.4, and easily compute the partition function by multiplying the above partition function with itself three times (the number of copies of the triangle in the chain). This property is shared by all instances of the Tutte polynomial defined over direct sums of matroids.

We can extend this to certain types of recursively defined graphs [102] by forming chains made of multiple copies of different graphs. We note however that recursively defined graphs [102] do not always fit into our construction because they may not be members of ICCC_ε. For example, consider Fig. 5.5. This is known as a ladder graph and it is an example of a recursively defined graph. This graph corresponds to a [17, 8] binary linear code which is not irreducible cyclic, nor dual to one [2].

5.5.2 Degenerate Cyclic Codes

Here we introduce an approach to construct examples that will help to classify the types of graphs that our scheme is tailored for. The motivation for this is to clarify the relationship between graphs and codes in the sense used in our scheme. The problem is the fact that many of the irreducible cyclic codes have duals that are not graphic in the sense of cycle matroids. We ask the following question: Given an irreducible cyclic code whose dual is not graphic (and hence does not correspond to a physical Potts model), can we find another code whose dual has a weight spectrum that is simply related to the original code, and which is graphic? We provide some arguments in favor of this idea.

There exist codes whose words consist of several repetitions of a code of smaller length. Of particular interest to us is a class of degenerate codes related to irreducible cyclic codes in the following way. Lemma IV.2 in [18] states that a code of length n is degenerate if w(x) | x^r − 1 [i.e., w(x) divides x^r − 1] for some r | n, where w(x) is the check polynomial (see Chapter 3). In the case of irreducible cyclic codes the check polynomial is the denominator of the generator polynomial introduced earlier, given by g(x) = (x^r − 1)/w(x). This means that if we have an [r, k] irreducible cyclic code with check polynomial w(x), we find some n such that w(x) | x^n − 1 and r | n. We then have a degenerate linear [n, k] code generated by (x^n − 1)/w(x). The words in the degenerate code will look like (c′, c′, . . . , c′) where c′ is a word in the non-degenerate code. This means that once we know the weight distribution of the [r, k] code, we can easily construct the weight enumerator of the [n, k] code, since the weights of the words of length n will be n/r times the weights of the corresponding words of length r. This construction allows one to loosen the constraints on the dimension and length, and therefore on the number of vertices and edges of the corresponding graph. In other words, for many of the codes whose corresponding cycle matroids are not graphic we may use this construction to map these instances to graphic matroids. The definition of ICCC_ε can be easily tailored to include these graphs, as will be done in future work.

As an example consider the [5, 4] irreducible cyclic code whose check polynomial is w(x) = 1 + x + x^2 + x^3 + x^4. The dual of this code is non-graphic, because it requires forming a cycle of five edges with only two vertices. Now notice that w(x) | x^{15} − 1 and 5 | 15. In this way we form the [15, 4] code generated by (x^{15} − 1)/w(x). The dual of this code is a [15, 11] code and the corresponding graph is given by Fig. 5.6. The weight enumerator of the [5, 4] code is A(1, y) = 1 + 10y^2 + 5y^4 and the weight enumerator of the degenerate code is 1 + 10y^6 + 5y^{12}. Note that the exponents are just multiplied by n/r = 3. The structure of this graph gives one a clue as to the structure of the types of graphs addressed by our approach. They will be graphs which consist of several repetitions of simple cycles of different lengths. In the example above all the simple cycles have length six, as can be seen in Fig. 5.6. As one explores codes with higher n, one finds that there will be multiple simple cycles of different lengths that will form the corresponding graph. The reason to believe this to be true in general comes from the fact that the weights of the code C correspond to the sizes of sets of linearly dependent columns of the generator matrix of the code dual to C. For example, the minimum weight of a code C is the size of the smallest set of linearly dependent columns of the code's parity check matrix, which can be used as the generator matrix of the code dual to C. On the other hand, the lengths of the cycles (numbers of edges) are given by the weights or sums of the weights.

Figure 5.6: Example of a graph corresponding to a [15, 11] code related to the [5, 4] irreducible cyclic code.
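The two enumerators in this example can be verified directly: over GF(2) the generator polynomial is g(x) = (x^5 − 1)/w(x) = x + 1, so the [5, 4] code is exactly the even-weight words of length 5, and each degenerate word is three concatenated copies of a short word (this brute-force check is our own illustration):

```python
from collections import Counter
from itertools import product

# The [5, 4] binary cyclic code generated by g(x) = x + 1 is the set of
# even-weight words of length 5; its weight enumerator is 1 + 10y^2 + 5y^4.
short = [w for w in product((0, 1), repeat=5) if sum(w) % 2 == 0]
enum_short = Counter(sum(w) for w in short)
print(sorted(enum_short.items()))   # [(0, 1), (2, 10), (4, 5)]

# The degenerate [15, 4] code repeats each word n/r = 3 times, so every
# weight is multiplied by 3, giving 1 + 10y^6 + 5y^12.
enum_degen = Counter(3 * sum(w) for w in short)
print(sorted(enum_degen.items()))   # [(0, 1), (6, 10), (12, 5)]
```
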

The relation between codes and graphs is not yet well understood and future work in this regard based on our approach will hopefully reveal new results that will have applications to both statistical mechanics and knot theory.

5.6 Conclusions, Future Directions and Critical Analysis

In this chapter we have given a quantum algorithm for the exact evaluation of the fully ferromagnetic or anti-ferromagnetic Potts partition function Z under the restriction to certain sparse graphs (with logarithmically more edges than vertices). The methods we used exploit the connection between coding theory and statistical physics. The motivation for this work is an ongoing effort to identify instances of classical statistical mechanics for which quantum computers will have an advantage over classical machines.

The approach we described involves using the link between classical coding theory and the Potts model via the weight enumerator polynomial A. One should note that A is another instance of the Tutte polynomial and so this connection is not surprising. The weight enumerator encodes information about all the different Hamming weights of the code words in a linear code, and the weight of a code word can be given by a formula involving a sum of Gauss sums when dealing with a specific type of linear code. Since there exists an efficient algorithm to approximate Gauss sums via quantum computation [115] we were able to efficiently calculate the weights of code words for certain codes. Much of this chapter dealt with the necessary restrictions that one must impose in order to achieve this last step. For example, once an error ε in the Gauss sum algorithm is accepted, we demonstrated that there is a family of graphs for which one can find the exact partition function, and therefore the error does not scale within this family. Given a graph Γ, one can map the graph to a corresponding linear code via the incidence structure of Γ. The Potts partition function of Γ (with either fully ferromagnetic or anti-ferromagnetic interactions) is given by some easily computed function times the weight enumerator of the corresponding code. Due to the symmetries inherent in the mathematical structure of linear codes we were able to provide an efficient method to exactly determine Z for a class of graphs (ICCC_ε) which has a well defined correspondence to a subset of linear codes.

In [90] it was shown that the exact evaluation of weight enumerators for binary linear codes is hard for the polynomial hierarchy. As our approach involved the exact evaluation of weight enumerators, it is not surprising that we had to make restrictions on the class of graphs so as to make our scheme efficient. The vantage that coding theory gives to this particular problem, however, allows one to utilize the fact that certain graphs have properties that a quantum computer can take advantage of to provide a speed up.

Notice that the related results in [25, 24] concern additive approximations; the methods used in this chapter can be extended to a wider class of graphs if one relaxes the requirement of exact evaluation and instead similarly considers additive approximations of Z. An open question is which instances of the Potts partition function are amenable to an fpras (fully polynomial randomized approximation scheme). The methods used in [25, 24] have proven to be quite powerful. There is hope to extend some of these methods to non-planar graphs. One idea is to extend the algorithm in [25] to the Jones polynomial for virtual knots and then use some correspondence between the virtual knots and non-planar graphs. Another approach may involve seeing things in a new light. Note that the Jones polynomial is the Euler characteristic of a certain chain complex [77]. One can explore how effective quantum computers will be at approximating Euler characteristics in general. Perhaps there is a way of exploiting this in order to obtain knowledge about the Potts partition function.

One may also consider strengthening the results given here by exploiting theorems about the minimal distance of cyclic codes. For example, there are theorems that guarantee a lower bound for the weight between any two words. By enforcing that the generator polynomial of the code be of a certain form, one would be guaranteed a certain distance between words and therefore the error in the Gauss sum approximation will be of little consequence for certain graphs [61]. As already mentioned in the Introduction, another potentially promising approach is to consider the scheme we have presented here but to replace the Gauss sum algorithm with the quantum algorithm for obtaining zeta functions [64]. Work has to be done on understanding the exact cost of this algorithm when one is restricted to curves that are pertinent for the evaluation of the Potts model. Corollary 2 deals with the combination of graphs via a direct sum of codes and gives one a way of "tiling" graphs for which one knows the partition function. This gives a quick way of obtaining the partition function of certain graphs that are made of many repeats of a simpler graph. There are other ways of combining codes that may allow one to study the partition function of new graphs, for example the concatenation or direct product of two codes [61]. The coding theoretic approach does give us a way of evaluating the partition function of instances of the Potts model at arbitrary temperatures, but precisely which kinds of graphs are involved is a question for future research. Indeed, the identification of the physical instances represented by the graphs for which our algorithm is efficient will shed light on the question that motivated this work in the first place [28]: what is the quantum computational complexity of classical statistical mechanics?

Chapter 6

Additive Approximation of the Signed-Euler Generating Function

6.1 Introduction

In this chapter we provide a simple construction which makes a direct connection between quantum circuits and graphs via their incidence structure. As an application we construct a function related to the generating function of Eulerian subgraphs and discuss its relationship to the Ising partition function. We show that quantum computers can provide additive approximations of this related generating function, which we call the signed generating function of Eulerian subgraphs, E′(Γ, λ). We demonstrate that it is a BQP-complete problem (when we allow it to be defined over hypergraphs, which are a generalization of graphs), as it is intimately related to quadratically signed weight enumerators (QWGTs) [41] via this construction. It is well known that the Ising partition function Z may be expressed in terms of the generating function of Eulerian subgraphs, E(G, λ). We provide some ideas for future use of E′(Γ, λ) for efficient additive approximations of Z. Recently in [24], an additive approximation algorithm for the Tutte polynomial was given which solved instances shown to be BQP-complete. As the Ising partition function is just a specialization of the Tutte polynomial, the complexity of additive approximations of certain non-planar instances of the Ising partition function is an interesting open problem.

6.1.1 Generating function of Eulerian subgraphs

An Eulerian subgraph of a graph Γ is a set of edges that forms a tour (a path that begins and ends at the same vertex) in which every vertex is of even degree. The generating function of Eulerian subgraphs of Γ is given by

E(Γ, x) = Σ_a x^{wt(a)}

where the sum is over all Eulerian subgraphs a and wt is a weight function (in this case the number of edges in the subgraph). This brings us to another expression for the Ising partition function which was discovered by van der Waerden and is given by

Z(β) = 2^{|V|} Π_{{i,j}∈E} cosh(βJ_{ij}) E(Γ, tanh(βJ_{ij}))   (6.1)

It is this form of the partition function that motivates this work [4, 118].
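Equation (6.1) can be sanity-checked on a small graph. For the triangle with uniform coupling J, the only Eulerian subgraphs are the empty set and the full edge set, so E(Γ, x) = 1 + x^3; a sketch (our own illustration) comparing (6.1) against the direct sum over spin configurations:

```python
import math
from itertools import product

edges = [(0, 1), (1, 2), (0, 2)]   # triangle
K = 0.7                            # K = beta * J

# Direct evaluation: Z = sum over spin configurations s in {-1, +1}^3.
Z_direct = sum(
    math.exp(K * sum(s[i] * s[j] for i, j in edges))
    for s in product((-1, 1), repeat=3)
)

# Van der Waerden form: Z = 2^|V| * cosh(K)^|E| * E(Gamma, tanh(K)),
# with E(Gamma, x) = 1 + x^3 for the triangle.
lam = math.tanh(K)
Z_vdw = 2**3 * math.cosh(K)**3 * (1 + lam**3)

print(abs(Z_direct - Z_vdw) < 1e-9)  # True
```
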

6.2 QWGTs and their relation to the Ising partition function

Definition. A Quadratically Signed Weight Enumerator (QWGT) is a bi-variate polynomial of the form [41]

S(A, B, x, y) = Σ_{b: Ab=0} (−1)^{b^t B b} x^{|b|} y^{n−|b|},   (6.2)

where A and B are 0,1-matrices with B of dimension n × n and A of dimension m × n. The variable b in the summand ranges over 0,1-column vectors of dimension n. All calculations involving A, B or b are done modulo 2.
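A brute-force evaluator makes the definition concrete (exponential in n, consistent with the #P-completeness noted below; the function name and the tiny example matrices are our own):

```python
from itertools import product

def qwgt(A, B, x, y):
    # S(A, B, x, y) = sum over b with Ab = 0 (mod 2) of
    # (-1)^(b^t B b) * x^|b| * y^(n - |b|).
    n = len(B)
    total = 0
    for b in product((0, 1), repeat=n):
        if any(sum(Ai * bi for Ai, bi in zip(row, b)) % 2 for row in A):
            continue  # b is not in ker A
        sign = (-1) ** (sum(b[i] * B[i][j] * b[j]
                            for i in range(n) for j in range(n)) % 2)
        w = sum(b)
        total += sign * x**w * y**(n - w)
    return total

# Tiny example: A = [1 1], B = diag(1, 0); ker A = {00, 11}.
print(qwgt([[1, 1]], [[1, 0], [0, 0]], 2, 3))  # 9 - 4 = 5
```
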

Note that the evaluation of a QWGT, given that x and y are natural numbers, is in general a #P problem, and, as it includes evaluations of the weight enumerator of binary linear codes, it is in fact #P-complete [41]. We shall now review in some detail how QWGTs were constructed in [41]. Let G be a quantum circuit and U(G) the corresponding unitary operator. Note that a universal gate set can be achieved by allowing arbitrary rotations about any product of Pauli operators, i.e.

e^{−i σ_b θ/2} = cos(θ/2) I − i sin(θ/2) σ_b

where σ_b = Π_{i=1}^{n} σ_{b_i}^{(i)} such that σ_00 = I, σ_01 = σ_X, σ_11 = σ_Y and σ_10 = σ_Z [37]. This means that b is a binary vector whose length is 2n, twice the number of qubits, and the superscript (i) represents the qubit which is operated on by the corresponding Pauli matrix. It is possible to express our unitary operator as a product of real gates and we can do this as follows.

Take the product U(G) = G_N G_{N−1} · · · G_1 where each gate is of the form

G_k = (1/γ)(α ± iβ σ_{b_k})

where again b_k is a binary vector of length 2n, but each b_k can only contain an odd number of 11's, i.e., each gate can only have an odd number of Pauli Y operators σ_Y. Note that in this case α/γ = cos(θ/2) and β/γ = sin(θ/2) and that γ = √(α^2 + β^2) (as this will ensure the unitarity of the G_k). As a further modification, which will allow us to have simple multiplication rules for our gates, define

σ̃_{b_k} = (−i)^{|b|_Y} σ_{b_k}

where |b|_Y is the number of σ_Y's occurring in σ_{b_k}.

We can now write

G_k = (1/γ)(α + β σ̃_{b_k}).   (6.3)

Now define C to be the block diagonal matrix whose blocks consist of

[ 0 1 ]
[ 0 0 ]

Then the property that b_k has an odd number of 11's is given by b^t C b = 1, and thus we have the multiplication rule

σ̃_{b_1} σ̃_{b_2} = (−1)^{b_1^t C b_2} σ̃_{b_1+b_2},   (6.4)

where the addition in the subscript is bit by bit modulo 2.
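The rule (6.4) can be checked exhaustively on a single qubit, where b = (u, v) with 00 → I, 01 → X, 11 → Y, 10 → Z and b_1^t C b_2 = u_1 v_2 for the single 2 × 2 block; this verification is our own illustration, not part of the original derivation:

```python
# Pauli matrices as 2x2 complex lists; encoding (u, v): 00->I, 01->X, 11->Y, 10->Z.
I = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Y = [[0, -1j], [1j, 0]]
Z = [[1, 0], [0, -1]]
PAULI = {(0, 0): I, (0, 1): X, (1, 1): Y, (1, 0): Z}

def sigma_tilde(b):
    # sigma~_b = (-i)^{|b|_Y} sigma_b; on one qubit |b|_Y = 1 iff b == (1, 1).
    phase = -1j if b == (1, 1) else 1
    return [[phase * e for e in row] for row in PAULI[b]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

ok = True
for b1 in PAULI:
    for b2 in PAULI:
        lhs = matmul(sigma_tilde(b1), sigma_tilde(b2))
        # b1^t C b2 = u1 * v2 for the single block C = [[0, 1], [0, 0]].
        sign = (-1) ** (b1[0] * b2[1])
        b3 = ((b1[0] + b2[0]) % 2, (b1[1] + b2[1]) % 2)
        rhs = [[sign * e for e in row] for row in sigma_tilde(b3)]
        ok &= all(abs(lhs[i][j] - rhs[i][j]) < 1e-12
                  for i in range(2) for j in range(2))
print(ok)  # True: (6.4) holds for all 16 pairs
```
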

Let H be the (2n × N) matrix whose columns are the b_k. H is a polynomial-size representation of the quantum circuit where each column represents a gate and every pair of rows represents a qubit. We then have the following expansion.

U(G) = Π_{k=N}^{1} G_k   (6.5)
     = Π_{k=N}^{1} (1/γ)(α + β σ̃_{b_k})   (6.6)
     = (1/γ^N) Σ_a (−1)^{a^t lwtr(H^t C H) a} α^{|a|} β^{N−|a|} σ̃_{Ha}   (6.7)

Now note that if we only sum over the a's such that CHa = 0 then we assure that

⟨00···0| U(G) |00···0⟩ = (1/γ^N) Σ_a (−1)^{a^t lwtr(H^t C H) a} α^{|a|} β^{N−|a|}

is always non-zero, for this omits X and Y gates from our sum.

As a simple example to illustrate the correspondence between the matrix representation H of the circuit and the actual operation of the circuit, consider

H = [ 1 1 1 ]
    [ 0 0 1 ]
    [ 0 1 1 ]
    [ 1 0 0 ]
    [ 1 1 1 ]
    [ 1 1 0 ]

Using gates of the form e^{−i σ_b θ/2}, this matrix represents a circuit which operates in the following way:

e^{−i Z^{(1)} ⊗ X^{(2)} ⊗ Y^{(3)} θ/2} e^{−i Z^{(1)} ⊗ Z^{(2)} ⊗ Y^{(3)} θ/2} e^{−i Y^{(1)} ⊗ Z^{(2)} ⊗ Z^{(3)} θ/2}

where the superscripts represent which qubit is being acted upon. Thus each column encodes each exponentiated operator, i.e., each gate. When using our proposed gate set, we would have

(1/γ^3) [(αI − iβ Z^{(1)} ⊗ X^{(2)} ⊗ Y^{(3)})(αI − iβ Z^{(1)} ⊗ Z^{(2)} ⊗ Y^{(3)})(αI − iβ Y^{(1)} ⊗ Z^{(2)} ⊗ Z^{(3)})]

In [28] it was shown that the Ising partition function can be expressed in terms of a QWGT. Let A be the incidence matrix of a graph g, i.e.,

A_{v,(i,j)} = { 1 if v ∈ {i, j} and (i, j) ∈ E
             { 0 else.   (6.8)

Then we have 

Z_w(λ) = (2^{|V|} / (1 − λ^2)^{|E|/2}) Σ_{a ∈ ker A} (−1)^{a^t B a} λ^{|a|} = (2^{|V|} / (1 − λ^2)^{|E|/2}) Σ_{a ∈ ker A} (−1)^{a·w} λ^{|a|}   (6.9)

       = (2^{|V|} / (1 − λ^2)^{|E|/2}) S(A, dg(w), λ, 1)   (6.10)

where w = (w_{12}, w_{13}, . . .) (w gives the distribution of ferromagnetic (w_{ij} = 0) or anti-ferromagnetic (w_{ij} = 1) interactions along the edges of the given graph), λ = tanh(βJ) (the "temperature"), V is the set of vertices, E is the set of edges and B = dg(w) is the diagonal matrix formed by putting w on the diagonal and zeros everywhere else. This form of Z will be considered later in this thesis.
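Equation (6.9) can be sanity-checked on the triangle with one anti-ferromagnetic edge, w = (1, 0, 0); over GF(2), ker A contains only the empty subgraph and the full triangle, so the signed sum is 1 − λ^3 (the numerical check is our own):

```python
import math
from itertools import product

edges = [(0, 1), (1, 2), (0, 2)]
w = (1, 0, 0)          # edge (0, 1) anti-ferromagnetic, the rest ferromagnetic
K = 0.7                # K = beta * J > 0

# Direct evaluation with couplings J_ij = J * (-1)^w_ij.
Z_direct = sum(
    math.exp(K * sum((-1) ** w[e] * s[i] * s[j] for e, (i, j) in enumerate(edges)))
    for s in product((-1, 1), repeat=3)
)

# Eq. (6.9): Z = 2^|V| / (1 - lam^2)^(|E|/2) * sum_{a in ker A} (-1)^(a.w) lam^|a|.
# For the triangle, ker A = {000, 111}, so the signed sum is 1 - lam^3.
lam = math.tanh(K)
Z_qwgt = 2**3 / (1 - lam**2) ** (3 / 2) * (1 - lam**3)

print(abs(Z_direct - Z_qwgt) < 1e-9)  # True
```
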

6.3 A relationship between hypergraphs and quantum circuits via QWGTs

The following mapping between hypergraphs and quantum circuits was first introduced in order to find a way to compute the Ising partition function. It was extended to obtain a class of quantum circuits which can be simulated classically in [55], and is based on equation (6.10). This mapping involves interpreting the matrix representation H of the quantum circuit, as outlined above, as coming from the incidence matrix of a hypergraph. In this way we have a many-to-one mapping from quantum circuits to a hypergraph. First we define hypergraphs.

Definition. A hypergraph is a generalization of a graph where edges are replaced by hyperedges. Let V = {v_1, v_2, . . . , v_k} be the set of vertices and let E = {e_1, e_2, . . . , e_n} be the set of hyperedges. Each e_i = {v_{i1}, v_{i2}, . . . , v_{im}} is a collection of vertices where each v_{ij} ∈ V.

The standard reference for hypergraphs is [22]. Note that graphs are just a special case of hypergraphs where each edge just consists of two vertices and that the incidence matrix is defined in the same manner as above. For the description of the mapping in the next subsection, we shall restrict our quantum circuits so that the corresponding hypergraphs are ordinary graphs. We shall point out when this restriction is important.

6.3.1 The Mapping

The motivation for this mapping is to obtain a QWGT equal to a matrix element of the unitary matrix of a quantum circuit that looks something like the generating function of Eulerian subgraphs E(Γ, λ). If we were able to efficiently approximate E(Γ, λ) then according to equation (6.1) we would have an efficient method of approximating the Ising partition function. It turns out that if we take the ansatz

G_k = (1/√(λ^2 + 1)) (λ + σ̃_{b_k})   (6.11)

for the gate set we obtain

U(G) = Π_{k=N}^{1} (1/√(λ^2 + 1)) (λ + σ̃_{b_k})   (6.12)
     = (1/(λ^2 + 1)^{N/2}) Π_{k=N}^{1} (λ + σ̃_{b_k})   (6.13)
     = (1/(λ^2 + 1)^{N/2}) Σ_a (−1)^{a^t lwtr(H^t C H) a} λ^{|a|} σ̃_{Ha}.   (6.14)

Now, ignoring the normalization we have

⟨00···0| U(G) |00···0⟩ ∝ Σ_{a ∈ ker(CH)} (−1)^{a^t lwtr(H^t C H) a} λ^{|a|}

Let us make the following assumptions:

1. Take CH to be a binary matrix with only two 1's per column, i.e., CH will be identified with the incidence matrix of the given graph Γ. For hypergraphs, this is not necessary, as the columns of a hypergraph incidence matrix may be populated by more than two 1's, as this corresponds to edges consisting of multiple vertices.

2. H is a matrix of dimension 2n × N with one (11) and at most one (01) per column, i.e., one Y operation and at most one X operation per gate respectively. The source of this restriction will be explained below. For example a column may look like

( 1 1 0 0 0 1 1 0 0 0 1 0 )^T

H encodes the quantum circuit. For hypergraphs, these restrictions vanish; however, it is necessary that there be an odd number of (11)'s per column, as our gate set depends on this restriction.

These assumptions will provide the basis for a natural mapping between quantum circuits and graphs. Consider one additional assumption.

3. a^t lwtr(H^t C H) a = 0 mod 2, ∀ a ∈ ker(CH).   (6.15)

This ensures that the matrix element ⟨00···0| U(G) |00···0⟩ is equal to the generating function of Eulerian subgraphs. This is achieved as follows. First we need to associate the incidence matrix A with a matrix CH. The only thing we need to do is to create a matrix with double the number of rows of A, with row 2i − 1 occupied by the ith row of A, and each even row the zero vector. Thus we obtain

CH = [ A_{11} A_{12} . . . A_{1N} ]
     [ 0      0      . . . 0      ]
     [ A_{21} A_{22} . . . A_{2N} ]
     [ .      .      . . . .      ]
     [ 0      0      . . . 0      ]

As far as the graph is concerned, this amounts to adding isolated vertices, which does not add any cycles. We now have the 2n × N matrix CH as our representation for Γ. By the action of C, we see that CH gives us some freedom in our choice of H, which is the matrix representation of the quantum circuit. Specifically we have

H = [ x_1     x_2     . . . x_N     ]
    [ A_{11}  A_{12}  . . . A_{1N}  ]
    [ x_{N+1} x_{N+2} . . . x_{2N}  ]
    [ A_{21}  A_{22}  . . . A_{2N}  ]
    [ .       .       . . . .       ]
    [ A_{n1}  A_{n2}  . . . A_{nN}  ]

The x_i must be selected according to the constraints mentioned above. We see that column k of H will only have two A_{ik}'s which are equal to 1, as these come from an incidence matrix, and by definition an incidence matrix has only two 1's per column. One 1 is possible, as this represents a loop, i.e., an edge that begins and terminates at the same vertex. Further, by the QWGT formalism constructed above, we must have an odd number of Y ((11) entry in the column) operations per gate. Hence, there must be one 11 per column in the matrix H. There are only two positions where we could select an x_i in column k to be 1 such that it will be followed by an A_{jk} that is equal to 1. Thus each column (or gate) must have only one Y operation. By the same reasoning we see that there is only one possible place to put an X operation, i.e., only one way to place a (01) in column k. So there can be at most one X operation per gate. There is no restriction as to the number of Z operations per gate, as we have the freedom of putting a 1 before any A_{ik} that is set to 0. By turning the x_i on or off (1 or 0 respectively) we obtain different circuits. This provides a degree of freedom that allows one to choose a quantum circuit that may satisfy the final assumption, which ensures that the sum Σ_{a ∈ ker(CH)} (−1)^{a^t lwtr(H^t C H) a} λ^{|a|} is equal to Σ_{a ∈ ker(CH)} λ^{|a|}, which is E(Γ, λ) as desired. Without the restriction given by

a^t lwtr(H^t C H) a = 0 mod 2, ∀ a ∈ ker(CH),

we actually have that

⟨00···0| U(G) |00···0⟩ = (1/(λ^2 + 1)^{|E|/2}) Σ_{a ∈ ker(CH)} (−1)^{h_a} λ^{|a|}.

This means that knowledge of the matrix element ⟨00···0| U(G) |00···0⟩ amounts to knowledge of

E′(Γ, λ) = Σ_{a ∈ ker(CH)} (−1)^{h_a} λ^{|a|}   (6.16)

where the h_a refer to a^t lwtr(H^t C H) a. We shall call E′(Γ, λ) the signed generating function of Eulerian subgraphs, as the sum is over all subgraphs whose vertices have even degree. Specifically, as CH is associated with the incidence matrix of a graph, the whole null space of CH consists of the characteristic vectors of all Eulerian subgraphs [3].
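E′(Γ, λ) can be computed by brute force for small H. The sketch below (our own code, taking lwtr to include the diagonal) is applied to the three-gate example H of Section 6.2, for which ker(CH) turns out to be trivial, so E′ = 1; that value is our own computation, not a claim from the text:

```python
from itertools import product

def signed_euler(H, lam):
    # E'(Gamma, lam) = sum over a in ker(CH) of (-1)^(h_a) * lam^|a|,
    # h_a = a^t lwtr(H^t C H) a mod 2, C = blockdiag([[0, 1], [0, 0]]).
    rows, N = len(H), len(H[0])
    # C shifts each 2-row block up: row 2i of CH is row 2i+1 of H, odd rows are 0.
    CH = [H[r + 1] if r % 2 == 0 else [0] * N for r in range(rows)]
    M = [[sum(H[r][i] * CH[r][j] for r in range(rows)) % 2 for j in range(N)]
         for i in range(N)]
    L = [[M[i][j] if j <= i else 0 for j in range(N)] for i in range(N)]
    total = 0
    for a in product((0, 1), repeat=N):
        if any(sum(CH[r][i] * a[i] for i in range(N)) % 2 for r in range(rows)):
            continue  # a is not in ker(CH)
        h = sum(a[i] * L[i][j] * a[j] for i in range(N) for j in range(N)) % 2
        total += (-1) ** h * lam ** sum(a)
    return total

# The 3-qubit, 3-gate example H from Section 6.2 (gates Z X Y, Z Z Y, Y Z Z):
H = [[1, 1, 1], [0, 0, 1], [0, 1, 1], [1, 0, 0], [1, 1, 1], [1, 1, 0]]
print(signed_euler(H, 0.5))  # 1.0: ker(CH) = {000}, only the empty term survives
```
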

6.4 BQP-completeness

BQP (Bounded-Error Quantum Polynomial-Time) is the class of decision problems which are solvable by a quantum computer in polynomial time with a probability of error bounded above by 1/4. (This value can actually be any number 1/2 − c where c is some constant, due to the aforementioned Chernoff bound.) The classical analogue of this class is BPP (Bounded-Error Probabilistic Polynomial-Time). BQP-complete problems are those decision problems which represent the most difficult problems in BQP. Any problem in BQP may be reduced to a BQP-complete problem via a polynomial reduction [21]. Alternatively, a BQP-complete problem is one which belongs to BQP and is also BQP-hard. BQP-hardness refers to the fact that the ability to efficiently solve the problem implies the ability to efficiently solve any problem in BQP. In other words, no problem in BQP is more difficult than a BQP-hard problem. In our case we show that knowledge of E′ is enough to decide any problem in BQP by showing that E′ can be used to approximate a matrix element of the corresponding unitary matrix of the quantum circuit.

Theorem 17. Additive approximation of E′(Γ, λ) over hypergraphs is BQP-complete.

Before proving this theorem we would like to note that we include evaluations of E′(Γ, λ) for hypergraphs Γ for completeness, but that this may not be necessary. The restriction to graphs forces the corresponding gates to allow an X operation on one qubit, and forces one to have a Y operation on another qubit, but an arbitrary number of Z operations on the remaining qubits. As these gates correspond to exponentiated Pauli operators, these are multi-qubit operations and thus it is easy to implement entanglement under this restriction, as well as controlled gates. Thus, from the results in [108, 89] we see that the quantum circuits corresponding to ordinary graphs may be capable of universal quantum computation. In addition, as our mapping depends on the sum over all simple cycles of a given graph, any one-qubit operation may be inserted without affecting the sum, as these correspond to adding loops, i.e., an edge that begins and ends at the same vertex. This will be explored in future work, as there can be some fascinating consequences. One may be able to use planarity as the defining quality of a quantum computer's power.

Proof. It is clear from the above that an approximation of the matrix element ⟨00···0|U(G)|00···0⟩ will give an approximation to E$(Γ, λ). Recall from [34, 99] that via the Hadamard test one can obtain an additive approximation of this matrix element. This means that one may obtain the following: with some probability of success bounded below (say by .75), an additive approximation returns m such that

⟨00···0|U(G)|00···0⟩ − ∆·p < m < ⟨00···0|U(G)|00···0⟩ + ∆·p

where p is a polynomially small parameter (in the problem size) and ∆ is the approximation scale of the problem. Note that if ∆ = O(⟨00···0|U(G)|00···0⟩) then the approximation will be an fpras [24]. (This intuitive form can be easily derived via the definition of an additive approximation provided in [75].) Now, assuming that we have at our disposal the universal gate set given by θ = ±2 arccos(4/5) rotations of products of Pauli operators (which correspond to γ = 5, α = 4 and β = 3 in the general gate set), then with an overhead of polylog(|E|/ε) gates we may approximate our gates G_k to accuracy O(ε/|E|) [41]. This means that we may indeed approximate the signed generating function of Eulerian subgraphs via the Hadamard test and so this problem is in BQP.
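The Hadamard-test estimate can be sketched numerically. The following is a minimal illustration, not one of the thesis's actual circuits: we sample the ancilla of a Hadamard test for a single rotation by θ = 2 arccos(4/5) and check that the sample mean additively approximates Re⟨0|U|0⟩, with error shrinking like 1/√shots as the Chernoff bound dictates.

```python
import numpy as np

rng = np.random.default_rng(0)

def hadamard_test(U, psi, shots):
    """Estimate Re<psi|U|psi> by sampling the Hadamard-test ancilla.

    P(ancilla = 0) = (1 + Re<psi|U|psi>) / 2, so the sample mean of
    (+1 for outcome 0, -1 for outcome 1) is an additive approximation.
    """
    p0 = (1 + np.real(np.vdot(psi, U @ psi))) / 2
    outcomes = rng.random(shots) < p0          # True -> ancilla measured 0
    return 2 * outcomes.mean() - 1

theta = 2 * np.arccos(4 / 5)                   # rotation angle from the gate set
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
psi = np.array([1.0, 0.0])                     # |0>
est = hadamard_test(U, psi, shots=200_000)
exact = np.real(np.vdot(psi, U @ psi))         # cos(theta) = 2*(4/5)^2 - 1 = 7/25
assert abs(est - exact) < 0.02
```

With 200,000 shots the standard error is roughly 0.002, so the estimate lands well within the additive window.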

Now, the other direction. We must demonstrate that knowledge of E$(Γ, λ) is enough to simulate any quantum circuit. First, any quantum circuit corresponds to a hypergraph under the scheme presented above. Since BQP is a decision class, all we have to do is convert a quantum circuit into its decision-making counterpart. This may be done as follows [23].

Figure 6.1: A circuit illustrating the procedure to apply a quantum circuit U to a decision problem.

Let U be an n-qubit quantum circuit for some decision problem and, without loss of generality, assume that the “yes” or “no” answer is given by the output of the first qubit U_1 after the application of U on a qubit register set to |0⟩^⊗n, i.e., after a measurement U_1 is either |0⟩ or |1⟩, where these correspond to the decisions “no” and “yes” respectively. The remaining qubits are ignored and assumed to be extraneous. Now, take an ancilla qubit U_A set to |0⟩ and adjoin it to U. Next, CNOT the output of U_1 with U_A and encode the answer as the state |ψ⟩. Apply the inverse of the circuit, U†, to all the output qubits except |ψ⟩, to uncompute the outputs of U to |00···0⟩. In this way, one arrives at the state |00···0⟩|ψ⟩, which will either be |00···0⟩|0⟩ or |00···0⟩|1⟩, effectively “deciding” the decision problem. This is due to a simple observation. Let the circuit given in Fig. 6.1 be denoted by Q and make the following designation

Q|0⟩^⊗(n+1) = |0⟩^⊗n |0⟩_1 ⇒ reject

and

Q|0⟩^⊗(n+1) = |0⟩^⊗n |1⟩_1 ⇒ accept.

This means that the observation ⟨0|^⊗(n+1) Q |0⟩^⊗(n+1) = 1 means to reject and the observation ⟨0|^⊗(n+1) Q |0⟩^⊗(n+1) = 0 means to accept.

Thus one can assume, with no loss of generality, that any quantum circuit that solves some decision problem either outputs |00···0⟩|0⟩ or |00···0⟩|1⟩ (see Fig. 6.1). This argument depends on the quantum circuit being able to output the correct answer with certainty. This is of no matter, as a similar argument can be made for a circuit which outputs the correct answer with some constant probability above a half [17]. Thus, knowledge that ⟨00···0|U|00···0⟩ = η implies that the control qubit will be |0⟩ and thus |ψ⟩ = |0⟩. (One could just consider the normalized quantity (1/η)⟨00···0|U|00···0⟩ instead, and thus mathematically everything reduces to observing whether (1/η)⟨00···0|U|00···0⟩ is either 1 or 0.) This means that knowledge of E$(Γ, λ) can be used to effectively decide the decision problem for any quantum circuit, as it is proportional to the matrix element ⟨00···0|U|00···0⟩. In our case, a natural decision problem would be to decide if E$(Γ, λ) is bounded above by some constant, or to decide its sign. The result of this decision would correspond to |ψ⟩ being either |0⟩ or |1⟩ in Fig. 6.1.
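The uncomputation trick of Fig. 6.1 can be checked on a toy case. In the sketch below U = X ⊗ H is a hypothetical two-qubit "decision" circuit (not one arising from our mapping) that answers "yes" with certainty on its first qubit; after the CNOT into the ancilla and the application of U†, the work register returns to |00⟩ and the answer survives in the ancilla.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
I2 = np.eye(2)

# Hypothetical decision circuit: first qubit ends up |1> ("yes") with
# certainty; the second qubit is extraneous junk.
U = np.kron(X, H)

def cnot_first_to_ancilla(n_total):
    """CNOT with the first (most significant) qubit as control and the
    last qubit (the ancilla) as target, on n_total qubits."""
    dim = 2 ** n_total
    P = np.zeros((dim, dim))
    for i in range(dim):
        msb = (i >> (n_total - 1)) & 1
        j = i ^ msb                       # flip the ancilla bit iff control = 1
        P[j, i] = 1
    return P

state = np.zeros(8); state[0] = 1.0       # |0>|0>|0>: two work qubits + ancilla
state = np.kron(U, I2) @ state            # run the circuit on the work register
state = cnot_first_to_ancilla(3) @ state  # copy the answer into the ancilla
state = np.kron(U.conj().T, I2) @ state   # uncompute the work register

# The work register is back to |00>; the ancilla holds the answer: |00>|1>.
assert np.allclose(state, [0, 1, 0, 0, 0, 0, 0, 0])
```

As the text notes, this only works exactly because the toy circuit decides with certainty; a bounded-error circuit needs the argument of [17].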

Important Caveat: Upon careful inspection of the above argument one may find something amiss. We are referring to the idea that perhaps knowledge of E$(Γ, λ) may in fact not be enough to solve all decision problems in BQP, since only one λ is specified. Recall that our universal gate set consists of rotations about products of Pauli operators, with two angles at our disposal, namely ±2 arccos(4/5). When we use the gate set given by the ansatz (6.11), the temperature λ plays the role of the angle, and so it seems that there is only one angle available for the gate set G_k. In fact, gates of the form (6.11) consist of multiple angles [41]. In our scheme, we must allow hyperedges in order to have access to both rotational angles. This occurs because the number of Y operations per gate is what determines which of the two angles is implemented [41].¹ A quick calculation verifies the following claims.

1. If one chooses λ = 4/3, then one indeed recovers the rotational angles θ = ±2 arccos(4/5). This occurs because taking this θ gives us the gate set (4/5)I + (3/5)σ̃_{b_k}. Referring to equation (6.11), in this case we will have λ/√(λ²+1) = 4/5 and 1/√(λ²+1) = 3/5, as claimed. However, in the scheme outlined in this chapter, this is not acceptable when one moves to the Ising model, as this particular choice of λ means that the physical quantity βJ becomes complex. In fact one would need βJ = (log 2 + iπ)/2. This is of no matter for the quantity E$(Γ, λ), but is not acceptable when considering its interpretation as a partition function. However, this does mean that the gate set given by our ansatz is capable of universal quantum computation [41].

2. If one chooses, for example, λ = 3/4, then one has access to the two rotational angles θ = ±2 arcsin(4/5) and thus we may indeed claim universality for our gate set given by equation (6.11). This is due to results proved in [73]. Further, all quantities are now physically acceptable. One can see this by making the observation that for λ = 3/4, λ/√(λ²+1) = 3/5 and 1/√(λ²+1) = 4/5, and thus we mimic the general gate set given in equation (6.3) with θ = ±2 arcsin(4/5).
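Both claims are easy to verify numerically. The sketch below uses a single-qubit σ_z in place of the Pauli product σ̃_{b_k} and takes σ̃ = iσ so that the gate is unitary; these are illustrative simplifications of the general gate set, not the thesis's construction itself.

```python
import numpy as np

Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

for lam, (c, s) in [(4/3, (4/5, 3/5)), (3/4, (3/5, 4/5))]:
    norm = np.sqrt(lam**2 + 1)
    # the (cos, sin) pair claimed in the two items above
    assert np.isclose(lam / norm, c) and np.isclose(1 / norm, s)
    G = (lam * I2 + 1j * Z) / norm          # gate of the form (6.11), sigma~ = i*sigma
    assert np.allclose(G.conj().T @ G, I2)  # unitary
    # G = cos(phi) I + i sin(phi) Z = exp(i phi Z): a rotation about Z
    phi = np.arccos(c)
    assert np.allclose(G, np.cos(phi) * I2 + 1j * np.sin(phi) * Z)

# lam = 4/3 corresponds to cos(phi) = 4/5, i.e. theta = 2*arccos(4/5);
# lam = 3/4 corresponds to sin(phi) = 4/5, i.e. theta = 2*arcsin(4/5).
```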

This means that even though we may tune the temperature λ to whatever we wish, at certain values we have a gate set which is universal for quantum computation due to the results in [73]. Thus, certain evaluations of E$ are BQP-hard, as required. What is of interest is the change in hardness that occurs when we restrict evaluations from general hypergraphs, to arbitrary graphs, and then to planar graphs. Keep in mind that it is well known that almost any entangling gate is universal [108], and thus the restriction to ordinary graphs may not be much of a restriction. Again, we shall explore this issue in the future.

¹This may in fact be unnecessary to point out if one considers the results in [73]. It is likely that hypergraphs need not be included in the above theorem of BQP-completeness, but some work must be done in order to make this claim with certainty.

6.4.1 Examples

Here are two simple yet instructive examples.

1) Let the given quantum circuit be encoded by

H =
[ 1 0 0 0 0 0 ]
[ 1 0 0 1 0 0 ]
[ 0 1 0 0 0 0 ]
[ 0 1 0 0 1 0 ]
[ 0 0 1 1 1 1 ]
[ 0 0 1 0 1 1 ]
[ 0 0 0 1 0 0 ]
[ 1 1 1 1 0 0 ]

The incidence matrix (ignoring isolated vertices) can be retrieved easily from H and it is given by

Figure 6.2: The graph obtained from the simple circuit given by H.

[ 1 0 0 1 0 0 ]
[ 0 1 0 0 1 0 ]
[ 0 0 1 0 1 1 ]
[ 1 1 1 1 0 0 ]

The corresponding graph is given in Fig. 6.2.
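As a quick illustration, the edge list of the graph in Fig. 6.2 can be read off the incidence matrix: each column names the endpoints of an edge, and a column with a single 1 corresponds to a loop (cf. the earlier remark that one-qubit operations insert loops).

```python
# Incidence matrix of the graph in Fig. 6.2 (one column per edge; the rows
# carrying a 1 name the edge's endpoints).
CH = [
    [1, 0, 0, 1, 0, 0],
    [0, 1, 0, 0, 1, 0],
    [0, 0, 1, 0, 1, 1],
    [1, 1, 1, 1, 0, 0],
]

edges = []
for j in range(len(CH[0])):
    ends = [i for i in range(len(CH)) if CH[i][j] == 1]
    edges.append(tuple(ends))

# Two parallel edges between vertices 0 and 3, and a single-endpoint
# column (a loop at vertex 2).
print(edges)  # [(0, 3), (1, 3), (2, 3), (0, 3), (1, 2), (2,)]
```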

2) A simple demonstration of a controlled 2-qubit gate is given by the sign-flip operator [40], e^{−i(π/4) σ_z¹⊗σ_z²}, which acts in the following way:

|0b⟩ → e^{−i(π/4) σ_z²}|0b⟩

and

|1b⟩ → e^{+i(π/4) σ_z²}|1b⟩.

In our gate set we would have the corresponding gate, αI − iβ σ_z¹⊗σ_z², with the action

|0b⟩ → (αI − iβσ_z²)|0b⟩

and

|1b⟩ → (αI + iβσ_z²)|1b⟩.
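The action of both gates can be confirmed numerically. Since σ_z ⊗ σ_z is diagonal, each gate is a diagonal phase gate whose phase on |ab⟩ depends only on the parity of a + b; the sketch below checks this for the sign-flip gate and for a gate from our set with the illustrative values α = 4/5, β = 3/5.

```python
import numpy as np

Z = np.diag([1.0, -1.0])
ZZ = np.kron(Z, Z)                               # diagonal: +1, -1, -1, +1

theta = np.pi / 4
U = np.diag(np.exp(-1j * theta * np.diag(ZZ)))   # e^{-i (pi/4) Z(x)Z}

# |ab> picks up the phase e^{-i pi/4 (-1)^(a+b)}: on |0b> the gate acts as
# e^{-i sigma_z pi/4} on the second qubit, on |1b> as e^{+i sigma_z pi/4}.
phase = np.exp(-1j * np.pi / 4)
assert np.allclose(np.diag(U), [phase, phase.conj(), phase.conj(), phase])

# The analogous gate in our set, alpha*I - i*beta*Z(x)Z, is also diagonal,
# with entries alpha -/+ i*beta according to the parity of |ab>.
alpha, beta = 4/5, 3/5
G = alpha * np.eye(4) - 1j * beta * ZZ
assert np.allclose(np.diag(G), [alpha - 1j*beta, alpha + 1j*beta,
                                alpha + 1j*beta, alpha - 1j*beta])
```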

6.5 Future work: Approximating the Ising partition function

Recall the following form of the Ising partition function

Z(β) = 2^{|V|} ∏_{{i,j}∈E} cosh(βJ_{ij}) · E(Γ, tanh(βJ_{ij}))

and note the difference between the function we are able to approximate via quantum computation, E$(Γ, λ), and the actual generating function of Eulerian subgraphs E(Γ, λ). We have that

E$(Γ, λ) = Σ_{a∈Ker(CH)} (−1)^{a^t lwtr(H^t CH) a} λ^{|a|}.

If we wanted to use this for the approximation of the Ising partition function, as previously mentioned we would require that a^t lwtr(H^t CH) a = 0 mod 2 for all a ∈ Ker(CH). If this requirement were met, then we would run the quantum approximation algorithm with gate sets corresponding to λ = tanh(βJ_{ij}). This would require O(|E|) different approximations, as indicated by the product over all edges in the above formula. In effect we would have a polynomial additive approximation of the Ising partition function for any set of edge interactions and for any graph. But alas, there is a problem. The equation that must be solved (equation (6.15)) in order to ensure that

E$(Γ, λ) = E(Γ, λ) in fact determines which particular quantum circuit must be used for the computation. By brute force this could require a number of calculations exponential in the number of vertices. Future work will involve studying this approach to see if one can in fact guarantee a priori that equation (6.15) is satisfied for certain non-planar graphs. For example, if one has knowledge about the parity (number of edges) of all the Eulerian subgraphs, then this may be used to efficiently find the representation H of the quantum circuit required. The cubic lattice is an example where every Eulerian subgraph contains an even number of edges. This issue also arises in a very similar approach outlined in [55], but which deals explicitly with equation (6.10). Other applications of E$(Γ, λ) will be explored, as well as an extension to a two-variable function. We will also attempt to use the methods here to find the instances of the Ising model for which evaluation of the partition function is BQP-complete.
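The parity condition just mentioned is easy to test by brute force on small graphs. The sketch below enumerates the kernel of the incidence matrix over GF(2) for the 4-cycle, a tiny bipartite stand-in for the cubic lattice, and confirms that every Eulerian subgraph has an even number of edges.

```python
from itertools import product

# Incidence matrix of the 4-cycle (vertices 0..3, edges 01, 12, 23, 30).
A = [
    [1, 0, 0, 1],
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 1],
]

def in_kernel(a):
    # a is an Eulerian (even) subgraph iff every vertex meets an even
    # number of selected edges, i.e. A a = 0 over GF(2).
    return all(sum(r[j] * a[j] for j in range(len(a))) % 2 == 0 for r in A)

eulerian = [a for a in product([0, 1], repeat=4) if in_kernel(a)]
print(eulerian)                                  # empty subgraph and the full 4-cycle
assert all(sum(a) % 2 == 0 for a in eulerian)    # each has an even number of edges
```

As in any bipartite graph, every cycle is even, so the parity condition holds for every element of the kernel.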

6.6 Conclusion

We provided a new way of relating quantum circuits to graphs and vice-versa via an incidence structure of the circuit or graph. We also provided a generating function related to the generating function of Eulerian subgraphs and demonstrated that additive approximations of it for hypergraphs are BQP-complete. Connections to the Ising spin glass partition function were made, and a discussion of future work dealing with additive approximations of the Ising partition function was provided.

Chapter 7

On classically simulatable quantum circuits

7.1 Introduction

We use the mapping introduced in the previous chapter to develop criteria for the class of quantum circuits that can be efficiently classically simulated. This mapping can also be used to construct a quantum algorithm for the additive approximation of Z. However, there are two issues. The instances we are able to handle are constrained by the fact that if one wants to know the interaction energy at each edge of a given graph instance, the amount of computation scales in the number of cycles (if one were to proceed naively), which can be exponential in the number of edges. The other issue is the fact that our mapping may fail to provide information about the edge interactions. We are hoping to remedy this shortcoming in future work by providing an efficient means of calculating the pertinent interaction energies. This will have consequences for a better understanding of universal quantum circuits and it will also improve our understanding of the relationship between the classical Ising model and quantum computation.

The strength of this approach is that one is forced to deal with “physical” edge interactions and thus our scheme may be able to overcome this particular shortcoming inherent in other approaches [24, 83]. This would, again, involve finding an efficient way to calculate acceptable edge interactions. It may be the case that this difficulty arises specifically because we are forced to consider only real interaction energies.

The structure of this chapter is as follows. We review the relationship between QWGT’s and Z and then introduce an ansatz that allows one to associate graph instances of the Ising model with circuit instances of the quantum circuit model. A theorem on simulatable quantum circuits is presented, followed by a proof which depends on the fact that there are algorithms for the efficient evaluation of Z for planar instances of the Ising model. We provide an interesting example of a class of quantum circuits which can be shown to be simulatable and compare it directly to a result recently given in [83]. We conclude with some suggestions for future work, including the possibility of a quantum algorithm for the additive approximation of Z, using a different approach than the one given in the previous chapter.

7.1.1 Definitions from Graph Theory

Here we review some definitions and theorems from graph theory that are essential for the results presented in this chapter.

Definition. A subgraph h of a given graph g is called a minor (or child) if it can be obtained from g by a sequence of edge deletions and contractions.

A graph is planar simply when it can be drawn on the plane in such a way that no edges cross. This is characterized by Kuratowski’s Theorem [58] given by

Theorem 18. A graph is non-planar if and only if it contains at least one of two forbidden minors, namely K3,3 and K5.

Figure 7.1: K5: one of the forbidden minors for planarity.

Figure 7.2: K3,3: the other forbidden minor for planarity.

Therefore, if you begin to delete and contract the edges of a graph and you find one of these minors, then your graph is not planar; hence they are “forbidden minors” for planarity.
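For intuition, the forbidden minors can be checked against the edge bounds that follow from Euler's formula: a simple planar graph satisfies |E| ≤ 3|V| − 6, and |E| ≤ 2|V| − 4 when it is also triangle-free. These are only necessary conditions (a complete planarity test must search for K5 and K3,3 minors), but they already rule out both forbidden graphs; the helper below is a hypothetical illustration.

```python
# Necessary (not sufficient) planarity checks from Euler's formula V - E + F = 2.
def may_be_planar(v, e, triangle_free=False):
    if v < 3:
        return True
    return e <= (2 * v - 4 if triangle_free else 3 * v - 6)

assert not may_be_planar(5, 10)                      # K5: 10 > 3*5 - 6 = 9
assert not may_be_planar(6, 9, triangle_free=True)   # K3,3: 9 > 2*6 - 4 = 8
assert may_be_planar(4, 6)                           # K4 is planar: 6 <= 6
```

Note that K3,3 passes the general bound (9 ≤ 12) and is only caught by the triangle-free refinement, which is why a genuine minor search is needed in general.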

We will also need to know the graph K4 which is like K5 except that it only has four vertices instead of five. We will also meet outerplanar graphs so we define them as well.

Definition. K4 is the complete graph on four vertices. Thus, it has four vertices and six edges, where every vertex is connected to every other vertex.

Definition. For any planar drawing of a graph, there are regions bounded by the cycles of the graph and an unbounded region outside of all the cycles. An outerplanar graph is a planar graph that can be drawn on the plane with no edge crossings in such a way that every vertex lies on the boundary of the unbounded region.

In this chapter we give a criterion for graph (and therefore circuit) membership based on the existence of a solution for a system of linear equations over GF(2). We want to be guaranteed that this membership has an ordering in the sense that if g is a member, then so is every minor h of g. Sets with this property are called downwardly closed. Here is the definition.

Definition. The set of graphs ΓK is downwardly closed with respect to minor ordering if whenever g is a member of ΓK , then so is any minor of g.

For our purposes, we want to be certain that membership in our set ΓK is not obstructed by an infinite set of graphs. If we are guaranteed that this is so, then we know that we can, in principle, test for membership in polynomial time (this is due to the fact that searching for a given minor only requires polynomial time [58]). The following theorem due to Robertson and Seymour [93] gives us just that.

Theorem 19. Every downwardly closed set of graphs (possibly infinite) may be characterized by a finite set of forbidden minors called the obstruction set.

This theorem says that any set of graphs which is downwardly closed has a finite set of minors which are forbidden. This means that if you knew what the minors were, for example K3,3 and K5 for planarity, you could check any graph for membership in that set.

7.2 Once again - The Mapping

We shall review the essential elements of this construction, as there are some differences from the treatment in the previous chapter. However, many details will be left out, and we refer the reader to the previous chapter for them. A natural question at this point is whether one can use the machinery just mentioned above to approximate the partition function. If we were able to somehow approximate S(A, dg(w), λ, 1), then we would have an approximation of the partition function for the Ising model over the corresponding graph g and for some edge interaction distribution w. In other words, can we approximate

S(A, dg(w), λ, 1) = Σ_{a∈ker A} (−1)^{a·w} λ^{|a|},

as this is equal to the partition function up to an easily computed coefficient, as written above in equation (6.10)? Note that the sum here is taken over vectors that are in the null space of A, which here means that only subgraphs having an even number of bonds emanating from all vertices are allowed, i.e., the sum is taken over all even subgraphs or, equivalently, over all Eulerian subgraphs.
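On small instances this sum can be evaluated by brute force, which is useful for sanity-checking the formulas in this chapter. The sketch below enumerates ker(A) over GF(2) for a triangle; it is exponential in the number of edges and meant only as an illustration.

```python
from itertools import product

def even_subgraph_sum(A, w, lam):
    """Brute-force S(A, dg(w), lam, 1) = sum over a in ker(A) (mod 2) of
    (-1)^(a.w) * lam^|a|: the signed sum over Eulerian subgraphs."""
    m = len(A[0])
    total = 0.0
    for a in product([0, 1], repeat=m):
        if any(sum(r[j] * a[j] for j in range(m)) % 2 for r in A):
            continue                                   # not an even subgraph
        sign = (-1) ** (sum(x * y for x, y in zip(a, w)) % 2)
        total += sign * lam ** sum(a)
    return total

# Triangle: every vertex must meet an even number of chosen edges, so
# ker(A) contains only the empty subgraph and the full triangle.
A = [[1, 0, 1],
     [1, 1, 0],
     [0, 1, 1]]
print(even_subgraph_sum(A, w=[0, 0, 0], lam=0.5))   # 1 + lam^3 = 1.125
print(even_subgraph_sum(A, w=[1, 0, 0], lam=0.5))   # 1 - lam^3 = 0.875
```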

We begin by asking ourselves if there exists some ansatz for the gate set Gk such that

U(G) = ∏_{k=N}^{1} G_k = (1/(λ²+1)^{N/2}) Σ_a (−1)^{a·w} λ^{|a|} σ̃_{Ha} ?

Indeed, we can almost get this form. Taking

G_k = (1/√(λ²+1)) (λ + σ̃_{b_k})     (7.1)

as in the last chapter, we get the previously calculated unitary U(G) given by equation (6.14). We make the same assumptions as in the last chapter except for the last one. We repeat them for convenience.

1. Take CH to be a binary matrix with only two 1’s per column, i.e., CH will be identified with the incidence matrix of the given graph g.

2. H is a matrix of dimension 2n × N with one (11) and one (01) per column, i.e., one Y operation and one X operation per gate respectively (see [55]). For example, a column may look like

(110001100010)^T.

H encodes the quantum circuit.

3. w is an edge distribution which solves

a^t lwtr(H^t CH) a = a^t B a = a·w mod 2  ∀ a ∈ ker(CH);  B = dg(w).     (7.2)

The above assumptions provide a natural mapping between quantum circuits and graph instances of the Ising model. For details refer to [55].

7.3 Determination of the edge interaction distribution and conse-

quences

The above identification between quantum circuits and instances of the Ising spin model depends on the ability to know the particular edge interaction w that satisfies

Σ_{a∈ker A} (−1)^{a·w} λ^{|a|} = Σ_{a∈ker CH} (−1)^{a^t lwtr(H^t CH) a} λ^{|a|}.     (7.3)

This is where the assumption that w must satisfy

a^t lwtr(H^t CH) a = a^t B a = a·w mod 2  ∀ a ∈ ker(CH)

comes from. This is a system of linear equations over GF(2). The number of equations is equal to the number of simple cycles of the given graph, and the number of unknowns is equal to the number of edges. Note that this is a sufficient condition but not a necessary one, as equation (7.3) may be satisfied in another way. This is briefly discussed later in this chapter.

Before proceeding, let us take stock of what we have when we know a w that satisfies equation (7.2). Under this assumption, one is now able to write down a relationship between a matrix element of the unitary matrix representing the quantum circuit of a given graph instance of the Ising model and the partition function. Specifically we have

⟨00···0|U(G)|00···0⟩ = (1/(λ²+1)^{N/2}) Σ_{a∈ker CH} (−1)^{a·w} λ^{|a|} = [(1 − λ²)^{|E|/2} / ((1 + λ²)^{|E|/2} 2^{|V|})] Z_w(λ).     (7.4)

Equation (7.4) just comes from applying the inner product on the left and right to U(G) given by equation (6.14). This equation has two consequences. First, if we are able to determine ⟨00···0|U(G)|00···0⟩, then we are able to determine the partition function Z_w(λ). Note that estimating ⟨00···0|U(G)|00···0⟩ in general is BQP-complete [99] and thus something we could do with a quantum computer. Alternatively, if we had a way of classically computing Z_w(λ), then we would be able to classically simulate the quantum circuit G (if it were solving a decision problem) [23].

Using mathematical software, it was demonstrated that the mapping used here, when we use the sufficient condition given by (7.2), can only be satisfied for graphs that do not have K3,3 or K5 as a minor. We have also identified that K4 is a forbidden minor, in addition to K3,3 with one edge missing. We used a script written in Mathematica to demonstrate the existence of a satisfying interaction energy configuration w for K3,3 minus two edges. A sample of the code is included in the appendix, in addition to code which demonstrates that K3,3 fails. It is very important to note that the existence of a w that satisfies the system of linear equations (7.2) is not the only criterion for the existence of some w which satisfies equation (7.3). Hence, it is sufficient but not necessary. What is true, however, is that restricting ourselves to the system (7.2) allows us to give a precise analysis of our situation. It is very likely that for more general non-planar graphs equation (7.4) may be satisfied, but for the majority of this chapter we restrict ourselves to assuming that w satisfies the system (7.2). We shall briefly expand on this issue in section 7.5. We leave this matter for future work, however.

In the remainder of the chapter we will prove the following theorem, provide an efficient classical test of whether a given quantum circuit can be simulated classically, and present an approach for the computation of the Ising partition function in section 7.5.

Let us clarify what we mean by “determined classically”. First recall that a uniform family of circuits is a sequence of circuits, one for each input length n, that can be efficiently generated by a Turing machine, in our case a quantum Turing machine [85].

Definition. We will say that a uniform family of quantum circuits C_n (n-qubit circuits), which solve problems in BQP (with a probability of success bounded below by .75), are classically simulatable (or that their output may be determined classically) if the matrix element |⟨00···0|U(C_i)|00···0⟩| of each corresponding circuit can be obtained to k digits of precision in time poly(n, k) by classical means.

Our definition is a modified version of the one given in [63] where they also include a discussion on how this definition can be weakened.

Theorem 20 (Main Theorem). Let ΓK be the class of graphs for which there exists a solution w to equation (7.2), and let QK be the related class of quantum circuits. The output of a quantum circuit which solves a decision problem and belongs to QK may be determined classically in time polynomial in the number of qubits.

7.4 Proof of the main theorem

In this section we prove a theorem about a class of quantum circuits that can be efficiently simulated classically, assuming that they solve some decision problem.¹ These quantum circuits are characterized by the graphs that they correspond to via the mapping outlined above. We also wish to say something more specific about this family of quantum circuits without referring to the corresponding class of graphs. Note that we do not provide an algorithm that would allow one to simulate a given quantum circuit, i.e., we assume that the existence of a satisfying w to equation (7.2) implies that we have knowledge of it. We only use the machinery outlined above to prove that there exist classical simulations of the family of quantum circuits in question and to provide a test for a given quantum circuit. In section 7.4.2 we do, however, provide a simple construction of a family of graphs that correspond to quantum circuits for which knowledge of w is easy to retrieve and for which the evaluation of the partition function can be computed efficiently, as they are planar [65]. In this way, we provide a concrete criterion for the classical simulatability of quantum circuits. For the benefit of the reader we summarize the proof informally:

1. Input a quantum circuit C_q.

2. Transform C_q into a matrix whose columns represent Pauli operations (that are to be exponentiated, i.e., each column is a representation of a gate of the form given by equation (7.1)) and every pair of rows are the qubits being acted upon, as described in the previous section. This matrix is called H. The following constraint must be respected: every column must have one Y operation and can have at most one X operation. (This constraint comes from the fact that one wants CH to be an incidence matrix for a graph. Without this restriction one has a correspondence between quantum circuits and hypergraphs as in [55].)

¹One should note that this restriction to decision-making circuits does not alter the power of the circuit model, as any computational problem can be cast as a sequence of decisions. For example, decide whether the n’th bit of some computation is 0 or 1 and then repeat for all n.

3. From this, construct a corresponding incidence matrix CH of a graph g. If the incidence matrix has more than two ones per column, then one has a hypergraph. As given in the theorem, we restrict QK to be quantum circuits that correspond to planar graphs.

4. Show that our mapping defines what is known as a “downwardly closed set” [93] of graphs, which means that we may apply the Robertson-Seymour Theorem. This theorem guarantees that there is a finite set of graphs (the obstruction set) for which we can test whether or not g has any members of this set as a graph minor [58] (cubic complexity in the number of quantum gates).

5. Define a set of graphs (corresponding to the given quantum circuits) ΓK via this obstruction set, i.e., a graph is a member of ΓK if and only if it has none of the obstruction graphs as a minor, including K3,3, K5 and K4. We call the corresponding family of quantum circuits QK.

6. Due to the fact that these graphs are planar, the partition function Z of any graph in ΓK can be computed efficiently by a classical computer [118].

7. Using equation (7.4), show that knowledge of Z can be used to determine the outcome of a quantum circuit H for a decision problem.

8. Conclude: Quantum circuits in QK which solve a decision problem can be classically simulated.

We now present the proof of the main theorem in detail.

Proof. We need to demonstrate that we may define a set of quantum circuits, QK, that are related to graph instances of the Ising model and that they may be classically simulated. Without loss of generality, we begin by assuming that the gate set available to us consists of rotations (by an irrational multiple of π) about products of Pauli operators, which can be shown to be universal [41]. (It is classically cost-efficient to swap between different universal gate sets due to a theorem of Solovay and Kitaev [85]; furthermore, as mentioned in the previous chapter, we may efficiently simulate the gates given by (7.1) via the gates given by rotations about Pauli operators.) Thus, given a quantum circuit C, we have an efficient representation of it in terms of a matrix H which has a number of rows equal to twice the number of qubits and a number of columns equal to the number of gates, as described above in section 6.2. As outlined previously, we want to draw a connection between H and the incidence matrix of some graph g. This is possible if we impose certain initial restrictions on the allowed circuits that we consider, namely that every gate of C (or column of H) has at most one X and exactly one Y operation, as previously mentioned (the remaining qubits may be operated on by Z operations as needed). These restrictions allow the matrix CH to have the form of an incidence matrix of a graph, and thereby we have a relationship between the given circuit and a graph. (The precise relationship between CH and H was given at the end of section 6.2.) Now, recall equation (7.4), which gives the following relationship between the partition function and a matrix element of the unitary matrix of the corresponding quantum circuit:

⟨00···0|U(G)|00···0⟩ = [(1 − λ²)^{|E|/2} / ((1 + λ²)^{|E|/2} 2^{|V|})] Z_w(λ).

In order for this equation to be true, it is sufficient that equation (7.2) be satisfied. This means that there is a binary vector w that satisfies a set of linear equations over GF(2). We may write this system as

Mw = α

where M is the matrix whose rows are the elements a_i of the null space of the incidence matrix of the graph given as the Ising instance, and α is the vector whose entries are a_i^t lwtr(H^t CH) a_i.
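When a satisfying w exists, it can be found by Gaussian elimination over GF(2); the following is a minimal sketch on a hypothetical two-constraint system, not one of the K3,3/K5 instances tested in the appendix.

```python
def solve_gf2(M, alpha):
    """Gaussian elimination over GF(2): return a solution w of M w = alpha
    (mod 2), or None if none exists (the instance then falls outside Gamma_K)."""
    rows = [r[:] + [b] for r, b in zip(M, alpha)]   # augmented matrix
    n = len(M[0])
    pivot_cols = []
    i = 0
    for col in range(n):
        piv = next((k for k in range(i, len(rows)) if rows[k][col]), None)
        if piv is None:
            continue
        rows[i], rows[piv] = rows[piv], rows[i]
        for k in range(len(rows)):
            if k != i and rows[k][col]:
                rows[k] = [x ^ y for x, y in zip(rows[k], rows[i])]
        pivot_cols.append(col)
        i += 1
    # a row 0 ... 0 | 1 means the system is inconsistent
    if any(all(x == 0 for x in r[:-1]) and r[-1] for r in rows):
        return None
    w = [0] * n                            # free variables set to 0
    for r_i, col in enumerate(pivot_cols):
        w[col] = rows[r_i][-1]
    return w

# Tiny hypothetical system: two cycle constraints on three edges.
M = [[1, 1, 0],
     [0, 1, 1]]
alpha = [1, 0]
w = solve_gf2(M, alpha)
assert w is not None
assert all(sum(m * x for m, x in zip(row, w)) % 2 == a
           for row, a in zip(M, alpha))
```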

Let ΓK be the set of all graphs for which there exists a solution w to Mw = α. We now introduce a lemma that will allow us to introduce an ordering on the elements of ΓK.

Lemma 7.4.1. If a graph g is a member of ΓK, then so is g \ e_j or g/e_j, i.e., the deletion or contraction of an arbitrary edge e_j from a graph in ΓK is also in ΓK.

The proof of this lemma is technical and an outline of it is given in section 7.7. Given this lemma, we now see that we have what is called a minor ordering, and thus we can apply the Robertson-Seymour theorem. Another way of expressing this theorem is to say that any graph may be tested for membership in a given downwardly closed set of graphs by just searching the graph for a finite set of minors. The complexity of doing this, given knowledge of the minors, can be shown to be cubic in the number of edges. The Robertson-Seymour theorem states that in our case there may in fact be obstructions (of which K5, K3,3 and K4 have been mentioned), but the whole set of obstructions is finite. As we are proving an existence theorem about classically simulatable quantum circuits, we just need to rely on two efficient procedures. First, that we can check whether or not a graph is a member of ΓK (which we restrict to planar instances by definition), and next, that we can evaluate the partition function on graphs that belong to this set. We may then conclude with confidence that any graph instance, with any corresponding set of edge interactions w (whatever they may be), that belongs to ΓK can be handled efficiently by a classical computer, i.e., we may compute the Ising partition function for any of these instances efficiently with a classical computer. This is due to Kasteleyn, who gave a classical algorithm for the Ising partition function of any planar graph in the absence of an external magnetic field [65]. According to our definition of classically simulatable quantum circuits, all we need is to be able to obtain the evaluation in time polynomial in the number of qubits (which translates to the number of vertices) to exponential accuracy, which is achieved by the algorithm given in [65]. One could weaken our definition and allow weaker evaluations as discussed in [63], and thus expand the class of quantum circuits which are classically simulatable.

As outlined in section 6.2, we may relate each graph in ΓK with a set of quantum circuits, i.e., we take the incidence matrix of a particular graph instance and then transform it into the form CH. We then retrieve some H from it, which is a matrix representation of the quantum circuit. In this way we have the mapping ΓK → QK, where QK is a set of quantum circuits. Now, as stated above,

⟨00···0|U(G)|00···0⟩ ∝ Z_w(λ).

For any graph in ΓK we have an efficient way of classically determining Z_w(λ), and therefore we are able to determine the matrix element ⟨00···0|U(G)|00···0⟩ for any quantum circuit in QK efficiently. Following [23], we can now show that knowledge of this matrix element is enough to determine the output of a quantum circuit which is being used to solve a decision problem. As we did this in the previous chapter, we do not repeat the proof.

Figure 7.3: A circuit illustrating the procedure given below to apply a quantum circuit U to a decision problem.

Thus any quantum circuit in QK which solves a decision problem may be simulated classically.

Note that this technique can be used to prove the simulatability of quantum circuits which correspond to many classes of graphs for which the Ising partition function has efficient classical evaluation schemes, e.g., graphs of bounded treewidth. We suspect that many of the results obtained in [51] may be reproduced in this way.

7.4.1 “The Test” and consequences for the structure of quantum circuits

The above results imply that we can identify a class of quantum circuits that can be classically simulated via the connection to graph instances of the Ising model. These instances are necessarily planar, as we can then compute the partition function classically. However, our results indicate that it is possible that not all planar graphs can be handled by the given mapping. In section 7.5 we will give evidence that strongly suggests that all planar graphs can be handled, and many non-planar instances as well. Due to algorithms for planarity testing, given a quantum circuit one can efficiently test whether it belongs to the family QK of classically simulatable quantum circuits. Further, we can test for known minors efficiently as well. Using computer software, we know that in addition to K3,3 and K5, the graphs K4 and K3,3 with one edge deleted are also forbidden (see the appendix for details). This can be accomplished as follows.

“The Test”

INPUT: A quantum circuit q.

1. Construct the matrix representation H of q, as described above.

2. Apply the mapping q ↦ Γ, where Γ is the corresponding hypergraph, and retrieve the incidence matrix A from the circuit representation H.

3. Compute T = ∆ − AAᵗ, where Aᵗ is the transpose of A and ∆ is the matrix of all zeroes except with diagonal elements equal to diag(AAᵗ). In other words, change the sign of the off-diagonal elements of AAᵗ and set all the diagonal elements of AAᵗ to zero. This is the adjacency matrix T of Γ.

4. Input T to the algorithm given in [59] to decide if Γ is planar or not.

OUTPUT: If Γ is non-planar then q does not belong to QK .

5. If Γ is planar: test if K3,3 with one edge missing and K4 are minors of Γ.

OUTPUT: If Γ has either of these minors then q does not belong to QK.
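For a concrete sense of steps 3–4, here is a minimal sketch in Python (function names are ours; it assumes numpy and networkx are available, and it does not cover the minor-testing step 5, for which no standard library routine exists):

```python
import numpy as np
import networkx as nx

def adjacency_from_incidence(A):
    """Step 3 of "The Test": T = Delta - A A^t, where Delta is diagonal
    with entries diag(A A^t).  For a 0/1 incidence matrix, A A^t has
    vertex degrees on the diagonal and edge counts off the diagonal,
    so |T| recovers the adjacency matrix of the underlying graph."""
    M = A @ A.T
    return np.diag(np.diag(M)) - M

def passes_planarity_step(A):
    """Step 4: run a planarity test on the graph encoded by T."""
    T = np.abs(adjacency_from_incidence(A))
    graph = nx.from_numpy_array(T)
    is_planar, _ = nx.check_planarity(graph)
    return is_planar
```

Feeding in the incidence matrix of K5 returns False (so the corresponding circuit is not certified to lie in QK), while the incidence matrix of any cycle returns True.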

The complexity of testing whether the two given graphs are minors is cubic in the number of vertices of Γ [93]. Is there an obvious property that one can check by just glancing at the quantum circuit? Perhaps more thought on this issue may reveal some canonical way of doing this, but for now we mention the now obvious fact. Examining the close relationship between the circuit representation H and the incidence matrix CH, one can give the restriction

# of gates ≤ 3(number of qubits) − 3

when the universal gate set consists of rotations about products of Pauli operations.

This follows from the Eulerian criterion of planarity, |E| ≤ 3|V| − 3, where |E| is the number of edges and |V| is the number of vertices. We shall expand on this issue in the next section by giving a criterion for the simulatability of quantum circuits (unrelated to the Eulerian criterion given here) by studying a specific class of graphs.

7.4.2 Quantum circuits corresponding to a class of sparse graphs

Our motivation in this section is to present a result on simulatable quantum circuits comparable to the recent results presented in [83] and [63]. Both papers present results that depend on quantum gates being restricted to nearest neighbour qubit operations. Via our construction we derive a similar result, but with an interesting extension. We begin with a simple example.

Recent work in [83] demonstrates that any circuit built out of X-rotations and nearest neighbour Z ⊗ Z rotations can be efficiently simulated. We present a similar but more general result. Our construction immediately demonstrates that this is in fact a subset of the types of circuits that can be efficiently simulated classically. Consider for the sake of argument (and rigor) that one is restricted to a class of planar graphs, Γpc, for which the number of even subgraphs scales polynomially. This restriction ensures that one can actually implement a classical simulation of the corresponding quantum circuits, as knowledge of an acceptable w can be found efficiently on a classical computer. Let us call the corresponding set of quantum circuits (under the scheme presented here) Qpc. Upon inspection of the incidence matrix of a typical graph in Γpc, one will see that even though the majority of incident vertices will be nearest neighbour, there will be several that are not, no matter how one labels the vertices. For example, consider the graph given in Fig. 7.4.

The incidence matrix is given by

1 0 0 0 0 0
1 1 0 0 0 1
0 1 1 0 0 0
0 0 1 1 1 0
0 0 0 1 0 0
0 0 0 0 1 1

Figure 7.4: A graph with only one cycle.

One possible circuit representation H is given by

1 0 0 0 0 0
1 0 0 0 0 0
0 0 0 0 0 1
1 1 0 0 0 1
0 1 1 0 0 0
0 1 1 0 0 0
0 0 0 1 0 0
0 0 1 1 1 0
0 0 0 0 0 0
0 0 0 1 0 0
0 0 0 0 1 0
0 0 0 0 1 1

Note that the fifth and sixth columns correspond to gates of the form

e^{−iθ(X^{(4)} ⊗ Y^{(6)})} and e^{−iθ(Y^{(2)} ⊗ X^{(6)})}

respectively. The superscripts indicate which qubit is being operated on, and thus one can clearly see that non-nearest neighbour interactions are possible. This example demonstrates that our construction may extend the results in [83]. By linking together simple graphs like the one shown (like a necklace, for example) we realize immediately that, firstly, the edge interaction w can be efficiently found; secondly, the partition function can be computed efficiently classically; and thus, via the construction in this chapter, we can simulate the corresponding quantum circuits efficiently. Note however that the restriction can only be relieved slightly, as most of the interactions will in fact remain nearest neighbour. We will make this rigorous in future work. However, taking this example as motivation, what follows is a more rigorous construction which will be used to pursue a better understanding of classically simulable circuits.

Under the scheme presented here, we have found that solutions for the edge interaction energy w exist when the following minors are absent from a graph: K3,3 and K5, as well as K4 and K3,3 with one edge deleted (see the appendix for an example of an algorithm that was used to test this fact). This class of minors still allows for interesting graphs. For example, one is still allowed to consider outerplanar graphs, as defined above. (The graph in Fig. 7.4 is outerplanar, for example.) Specifically, K2,3 is a forbidden minor for outerplanar graphs, where K2,3 is like K3,3 except that one side of the bipartite graph has two vertices instead of three. K2,3, however, is not a forbidden minor for the existence of a solution w. It is clear now that those graphs for which a solution w to equation (7.2) exists necessarily contain the outerplanar graphs, since the obstruction set for outerplanar graphs consists of K3,3, K5, K4 and K2,3, but solutions for w exist for graphs that have K2,3 as a minor, so graphs outside of the set of outerplanar graphs are allowed. Thus Outerplanar Graphs ⊂ Γw. We now define a simple subclass of outerplanar graphs which have a polynomial (in the number of vertices) number of even subgraphs. This class of graphs corresponds to quantum circuits that can be classically simulated, as w can be found efficiently and because the partition function can also be computed efficiently (as they are planar and one has the algorithm given in [65]). This class of graphs is by no means an exhaustive characterization of all planar graphs which have a polynomial number of even subgraphs.

Definition. A basis of the null space of the incidence matrix CH is referred to as a cycle basis.

Definition. Let Γpc be those outerplanar graphs for which the number of vertices is equal to |V| and the number of edges is equal to |E| = |V| + O(k log |V|), where k ∈ ℝ+.

Theorem 21. Γpc has a polynomial, in |V|, number of even subgraphs.

Proof. This follows from a few simple observations. First, a cycle basis corresponds to the set of simple cycles of a given graph in Γpc, i.e., connected even subgraphs. The dimension of the null space (or the number of elements of the cycle basis) in this case will be equal to the number of edges minus the rank of CH by elementary linear algebra. Thus asymptotically one has,

nullity = |V| + O(k log |V|) − rank(CH) = O(k log |V|),

which means that the number of elements of the cycle basis will be O(k log |V|). Next, note that the null space allows all possible sums of the basis and therefore we are left with O(|V|^k) elements, as claimed.
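The counting in this proof can be checked directly. A sketch (helper names are ours) that computes the cycle-basis size as the GF(2) nullity of an incidence matrix, assuming numpy:

```python
import numpy as np

def gf2_rank(M):
    """Rank of a 0/1 matrix over GF(2), by Gaussian elimination."""
    M = np.array(M, dtype=np.uint8) % 2
    rank = 0
    rows, cols = M.shape
    for c in range(cols):
        # find a pivot row with a 1 in column c
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]          # row addition mod 2
        rank += 1
    return rank

def cycle_basis_size(incidence):
    """nullity = #edges - rank over GF(2): the size of a cycle basis."""
    A = np.array(incidence, dtype=np.uint8)
    return A.shape[1] - gf2_rank(A)
```

On the incidence matrix of the graph of Fig. 7.4 (six vertices, six edges, one cycle) this returns 1; for a graph in Γpc it stays O(k log |V|), so summing all subsets of the basis leaves only O(|V|^k) even subgraphs.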

One should imagine graphs in Γpc as being sparse graphs consisting of cycles strung together along trees without too many branching points. This is due to the relationship between |E| and |V| given above. One can see that a branch without a cycle always adds an additional vertex (one edge has two vertices), and the only way that the relationship between vertices and edges can be honoured is if the number of branches is kept less than the number of cycles. That is, there will need to be more cycles than edges that do not terminate at a cycle. Now, notice the following simple observation. The incidence matrix of these structures, like the example above in Fig. 7.4, will have columns that consist of nearest neighbour consecutive “1’s” for the majority of positions. This rule is broken when a tree branches and when one runs into a simple cycle. By the above, however, we realize that this can only happen O(k log |V|) times. As |E| is the number of gates and |V| is the number of qubits in the corresponding quantum circuit, we conclude that our construction provides the following insight into quantum circuits: a quantum circuit consisting of gates of the form e^{−iθ(X^{(i)} ⊗ Y^{(j)})}, which act on nearest neighbour qubits except for O(k log(# of qubits)) gates, which may act on qubits i and j such that |i − j| ≥ 2, is classically simulable. Future work will concentrate on understanding the exact role played by the complexity of finding the edge interaction w. We will also examine whether there are non-trivial methods for finding the solution w to equation (7.2). We explore this issue from a different perspective next.

7.5 The Next Step

7.5.1 On the existence of edge interactions

Here we shall discuss some consequences and ideas for future work. First, we must deal with the aforementioned issue of equating

∑_{a ∈ ker A} (−1)^{a · w} λ^{|a|}  with  ∑_{a ∈ ker CH} (−1)^{aᵗ lwtr(HᵗCH) a} λ^{|a|}.

During the course of this chapter we assumed that for all vectors a (which are elements of the null space of the incidence matrix of the given graph) being summed over, there was a w such that

a · w = aᵗ lwtr(HᵗCH) a (mod 2).

This assumption was taken as a simplification. Here we note that though this is sufficient for equation (7.3) to be satisfied, it is not necessary. In fact, we outline another way that equality can occur. The following construction demonstrates that it is very likely that the number of cases which do not have a solution w for the interaction energy is very much smaller than in the case we analyzed, given by equation (7.2). We demonstrate that the number of ways in which a satisfying w can occur is many times greater than when restricted by equation (7.2). Note that in equation (7.3) the powers of the λ’s are the weights of the null vectors a, that is, the number of ones in a. Thus it is possible for an equality to occur for a given term in the sum (in the two equations) for different a’s as long as the weights of the a’s are equal. This gives us the constraint for the following. One can organize all the a’s into bins in terms of weights from 1 to |E| = N. Let us now take bin r, i.e., the set of vectors of weight r. Let a_{r1}, . . . , a_{rn} be all the null vectors of CH of weight r. One could have, for example,

{a_{r1}ᵗ lwtr(HᵗCH) a_{r1} = a_{r2} · w} ∧ {a_{r2}ᵗ lwtr(HᵗCH) a_{r2} = a_{r3} · w} ∧ · · · ∧ {a_{rn}ᵗ lwtr(HᵗCH) a_{rn} = a_{r1} · w}.

All this example demonstrates is a way for the powers of the (−1) in equation (7.3) to be equal for different a’s (but of the same weight, of course), that is, if a_{r1}ᵗ lwtr(HᵗCH) a_{r1} = a_{r2} · w AND a_{r2}ᵗ lwtr(HᵗCH) a_{r2} = a_{r3} · w AND so on. Another example would be some permutation of this, say

{a_{r1}ᵗ lwtr(HᵗCH) a_{r1} = a_{r1} · w} ∧ {a_{r2}ᵗ lwtr(HᵗCH) a_{r2} = a_{r2} · w} ∧ · · · ∧ {a_{rn}ᵗ lwtr(HᵗCH) a_{rn} = a_{rn} · w}.

(Such statements are called conjunctive statements as they are ANDed together.)

If one were to “OR” this with all other possibilities, one would have a criterion for the existence of a satisfying w for the null vectors of weight r. One would then have to AND all these statements together over all of the possible weights to arrive at the actual criterion for a satisfying w. This, however, demonstrates that the chance that a graph has a satisfying edge interaction w is much greater than under our stringent condition given by equation (7.2). Loosely speaking, this is due to the fact that there are many conjunctive statements that can be satisfied. In fact, it seems likely that some class of non-planar graphs may have a satisfying w. In future work, we shall attempt to understand what kinds of graphs do in fact have a satisfying w under this much weaker condition.

7.5.2 Computing the Ising partition function

We also wish to discuss another direction that the work presented in this chapter can take. An fpras for the fully-ferromagnetic Ising partition function was presented in [91]. It is well known that having an fpras for the non-ferromagnetic Ising model implies that NP = RP (randomized polynomial time), which would be quite unexpected [118]. It should therefore be no surprise that no fpras for this problem has been found, even with quantum resources. However, additive approximation schemes seem likely, and in fact one was given in [24] for the related Potts model partition function, even though the instances that they were able to account for are not known to be BQP-complete and the hardness is in fact unknown. The following reveals the inner workings of some future work in this direction. As outlined above, we have equated a matrix element of a quantum circuit with the value of the partition function of the Ising model for a corresponding graph instance. Thus we have

⟨0···0| U(G) |0···0⟩ ∝ Zw(λ).

This means that if we could approximate the matrix element, we would have an approximation for the Ising partition function. Due to the Hadamard test, it is well known that a polynomial estimation of this matrix element is BQP-complete. (See [99] for a good description of the Hadamard test.) Specifically, by making 1/ε² measurements, one can obtain either Re ⟨0···0| U(G) |0···0⟩ or Im ⟨0···0| U(G) |0···0⟩ to precision ε. This process results in an additive approximation. More work must be done, however, in order to claim that we have an additive approximation algorithm for the Ising partition function. We can make the following statement: given that we have an oracle that, upon input of a graph, outputs a set of interaction energies {w} which satisfy equation (7.3), then a quantum computer can provide an additive approximation for the Ising partition function Zw at temperature λ. This is due to the fact that if one knew the interaction energy distribution of a given graph that satisfied equation (7.3), then one would know that the matrix element ⟨0···0| U |0···0⟩ is equal to the partition function. Implementing the Hadamard test on the corresponding quantum circuit, one would obtain this matrix element and therefore an additive approximation of Zw. Taking the evidence given directly above, we feel that certain families of non-planar graphs will have some satisfying w, but note that finding a w which satisfies equation (7.2) scales exponentially for general graphs, as this would require working with all the Eulerian subgraphs of a graph. Thus, under the scheme presented in this chapter, a quantum computer allows us to extract some information about the partition function, but not all of it.
The quantum computer actually computes something, but we are forbidden to know the exact instance that the computer is solving, in particular we are unable to know the edge distribution Jij of the Hamiltonian given by equation (1.1). The information we can actually extract is an estimation of

∑_{a ∈ ker CH} (−1)^{aᵗ lwtr(HᵗCH) a} λ^{|a|},

and understanding this sum will be the source of future work. This is the partition function under certain conditions, e.g., when equation (7.3) is satisfied. Is there some easy way of determining when this function actually does correspond to a partition function? Can we use this as some type of graph invariant? Is there an interesting function, other than a partition function, that this corresponds to (we can easily extend this to a bivariate equation)? We also intend to understand whether our scheme can be adapted so that some set of interaction energies {w} can be found efficiently, so that we can study precisely over which instances of the Ising model a quantum computer can provide a speed up over classical machines for the approximation of the partition function. Specifically, and more ambitiously, we wish to find the instances of the Ising model for which approximations are BQP-complete.
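The Hadamard-test estimation discussed above is easy to prototype classically for small circuits. A sketch (names ours, assuming numpy): the test measures its ancilla in state 0 with probability (1 + Re⟨0···0|U|0···0⟩)/2, so averaging O(1/ε²) runs estimates the real part to additive precision ε. Here we simulate the sampling directly from the unitary:

```python
import numpy as np

def hadamard_test_estimate(U, shots=200_000, seed=0):
    """Simulate the Hadamard test for Re<0...0|U|0...0>.
    The ancilla reads 0 with probability p0 = (1 + Re U[0,0]) / 2;
    averaging the +/-1 outcomes over `shots` runs gives an additive
    approximation with standard error ~ 1/sqrt(shots)."""
    rng = np.random.default_rng(seed)
    p0 = (1.0 + U[0, 0].real) / 2.0
    outcomes = rng.random(shots) < p0   # simulated ancilla bits
    return 2.0 * outcomes.mean() - 1.0
```

For example, with U = e^{−iθZ} the exact value being estimated is cos θ; the analogous circuit with the phase gate S† inserted on the ancilla yields the imaginary part.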

7.6 Conclusion and Critical Analysis

We provided a construction that allows one to determine, in polynomial time, if a given quantum circuit corresponds to a planar instance of the classical Ising model using the mapping previously introduced in Chapter 6 (and in [55]). This was then used to conclude that any family of quantum circuits which solve decision problems and are restricted to certain planar instances can be classically simulated. We also propose how the same machinery may be used to give an additive approximation quantum algorithm for non-planar instances of the Ising model.

The methods used to demonstrate the classical simulatability of certain families of quantum circuits relied upon the existence of a minor ordering. It is likely that this is not necessary if one were to analyze the construction given in section 7.5.1. It seems that it would not be too difficult to demonstrate that many non-planar instances have a satisfying edge interaction. This would tidy up the proof significantly.

As for using this approach to actually obtain an algorithm for Z, it is likely that some interesting results lie ahead. However, it seems that this mapping may be unable to exploit the full power of the quantum circuit model in this direction, as information about the edge interactions is not efficiently represented. We plan to pursue this further in the future. As a final comment, we mention that it will be of great interest to truly understand the boundary between universal gate sets for quantum computation and the quantum circuits we constructed in section 7.4.2. How much more powerful are quantum computers over their classical counterparts? We believe the techniques presented in this chapter have the potential to reveal some interesting information about the actual standing of BQP in the complexity hierarchy.

7.7 Proof of the Lemma

Here we present the proof of the following lemma.

Lemma 7.7.1. If a graph g is a member of ΓK, then so are g \ e_j and g/e_j, i.e., the deletion or contraction of an arbitrary edge e_j from a graph in ΓK yields a graph that is also in ΓK.

Proof. Assume that g ∈ ΓK. Recall that a graph g is an element of ΓK if there exists some solution w to the set of linear equations over GF(2)

M^g w = α^g,

where M^g is the matrix whose rows are elements, a_i, of the null space of the incidence matrix of the graph g

(given as the Ising instance), and α^g is the vector whose entries are the a_iᵗ lwtr(HᵗCH) a_i. From elementary linear algebra we know that a solution exists if α^g may be written as a linear combination of columns of M^g, or in other words if

Rank[M^g | α^g] = Rank[M^g].

We must demonstrate that after we either delete or contract an edge, and arrive at the subgraph h, we have

Rank[M^h | α^h] = Rank[M^h].
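This rank condition is mechanically checkable over GF(2). A minimal sketch (helper names are ours), assuming numpy:

```python
import numpy as np

def gf2_rank(M):
    """Rank of a 0/1 matrix over GF(2), by Gaussian elimination."""
    M = np.array(M, dtype=np.uint8) % 2
    rank = 0
    rows, cols = M.shape
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]          # row addition mod 2
        rank += 1
    return rank

def has_gf2_solution(M, alpha):
    """M w = alpha is solvable over GF(2) iff Rank[M | alpha] = Rank[M]."""
    aug = np.concatenate([np.atleast_2d(M), np.reshape(alpha, (-1, 1))], axis=1)
    return gf2_rank(aug) == gf2_rank(M)
```

This is the computation one would run to decide membership in ΓK once the null-space matrix and the right-hand side have been assembled.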

We shall demonstrate this with an edge deletion, as the case of a contraction is similar. We begin with a given graph g which is a member of ΓK. Let H = {h_ij}. This means that

(HᵗCH)^g =
⎡ h11 h21 · · · hv1 ⎤ ⎡ h21 h22 · · · h2s · · · h2N ⎤
⎢ h12 h22 · · · hv2 ⎥ ⎢  0   0  · · ·  0  · · ·  0  ⎥
⎢         ⋮         ⎥ ⎢ h41 h42 · · · h4s · · · h4N ⎥
⎢ h1s h2s · · · hvs ⎥ ⎢  0   0  · · ·  0  · · ·  0  ⎥
⎢         ⋮         ⎥ ⎢             ⋮               ⎥
⎣ h1N h2N · · · hvN ⎦ ⎣  0   0  · · ·  0  · · ·  0  ⎦    (7.5)

Using Einstein notation, this equals

⎡ h_{2i−1,1}h^{2i,1} h_{2i−1,1}h^{2i,2} · · · h_{2i−1,1}h^{2i,s} · · · h_{2i−1,1}h^{2i,N} ⎤
⎢ h_{2i−1,2}h^{2i,1} h_{2i−1,2}h^{2i,2} · · · h_{2i−1,2}h^{2i,s} · · · h_{2i−1,2}h^{2i,N} ⎥
⎢                                        ⋮                                               ⎥
⎣ h_{2i−1,N}h^{2i,1} h_{2i−1,N}h^{2i,2} · · · h_{2i−1,N}h^{2i,s} · · · h_{2i−1,N}h^{2i,N} ⎦    (7.6)

Keep in mind that if we were to delete an edge from g, this would correspond to losing a column from CH, which would correspond to losing, say, the sth column from matrix (7.6). Taking the lower triangular portion of matrix (7.6) and calculating, we find that the mth element of the vector α^g is

α^g_m = a_{m2}[a_{m1} h_{2i−1,2}h^{2i,1}] + a_{m3}[a_{m1} h_{2i−1,3}h^{2i,1} + a_{m2} h_{2i−1,3}h^{2i,2}] + · · · + a_{mk}[a_{m1} h_{2i−1,k}h^{2i,1} + · · · + a_{m(k−1)} h_{2i−1,k}h^{2i,k−1}] + · · · + a_{mN}[a_{m1} h_{2i−1,N}h^{2i,1} + · · · + a_{m(N−1)} h_{2i−1,N}h^{2i,N−1}]

where a_{mi} is the ith element of the mth null vector of CH, i.e., the matrix element M_{m,i}. Now, let

ξ^s_m = a_{m(s+1)} a_{ms} h_{2i−1,s+1}h^{2i,s} + a_{m(s+2)} a_{ms} h_{2i−1,s+2}h^{2i,s} + · · · + a_{mN} a_{ms} h_{2i−1,N}h^{2i,s}.

This is the portion of α^g_m that would vanish if we were to omit the edge that corresponds to the sth column of the matrix (7.6). This means that if we remove this edge, we will end up with the subgraph h, and we can write

α^g_m = α^h_m + ξ^s_m.    (7.7)

This equation is saying that the mth entry of the right hand side of

M^g w = α^g

is given by the mth entry of the right hand side of

M^h w = α^h

(the corresponding system of equations for the graph h) plus the term ξ^s_m. From the assumption that g ∈ ΓK and by construction, we have

α^g = ( α^h_1 + ξ^s_1, α^h_2 + ξ^s_2, . . . , α^h_K + ξ^s_K )ᵗ = ∑_i δ_i c_i,

where the δ_i are coefficients in GF(2) and the c_i are columns of M^g, i.e., Rank[M^g | α^g] = Rank[M^g]. Thus we have

α^h = ∑_i δ_i c_i − ξ^s.

How does the matrix M change as we go from g to h by this edge deletion? If the edge is a dangling edge, i.e., not part of a cycle, then we lose a column (column s), but if the edge deletion causes the breaking of P cycles, then M will lose P rows (in addition to column s), as the rows encode the cycle structure of the

graph. In this case, the dimension (or length) of α^h will be P less than the dimension of α^g, and the c_i will

also be shorter by P entries. We call these shortened c_i, c_i′. Further, and most importantly, ξ^s will vanish, as mentioned. After taking this into consideration we can now conclude that

α^h = ∑_{i≠s} δ_i c_i′,

where M^h = [c_1′ c_2′ · · · c_{N−1}′]. Thus,

Rank[M^h | α^h] = Rank[M^h].

The proof for edge contractions is similar. The main difference is that an edge contraction does not cause the loss of a cycle except when the edge in question belongs to a cycle of length three. Thus in general, the contraction case is simpler except when dealing with cycles of length three. In this case the proof carries over in the same way.

Chapter 8

Conclusion

In this thesis we explored several connections between quantum computation and classical statistical physics. The motivation for the work presented herein was the following question posed by my supervisor: For which instances of the Ising or Potts model does a quantum computer provide an algorithmic speed up over classical machines for the evaluation of the partition function Z? These models are of interest because they have been used to describe the behaviour of ferromagnets as well as other phenomena from condensed matter physics. In particular, almost all thermodynamic quantities of the system may be derived from the partition function, e.g., the Helmholtz free energy is given by

F = −k_B T ln Z,

and derivatives of F with respect to thermodynamic variables correspond to measurable quantities such as the magnetization and heat capacity. Another reason for the great attention these models have received is that they undergo phase transitions, that is, regions where the variation of some parameter results in a sharp change of some thermodynamic quantity. A common example of a phase transition is water turning from liquid to solid as the temperature drops below zero degrees Celsius. The Potts and Ising models have therefore been very useful for modelling systems outside of condensed matter physics, e.g., cardiac dynamics [62] and neural networks [50]. From a computational perspective, these models have provided interesting algorithmic challenges. We know that the complexity of evaluating the partition function is #P-hard [118], and thus any hope of obtaining algorithms for the exact evaluation of Z (in general) is vanishingly small. However, approximation algorithms are very desirable. It turns out, however, that even certain types of approximations are too much to ask for. For example, it is well known that a fully polynomial approximation scheme for the anti-ferromagnetic Ising partition function is impossible unless NP = RP, which is highly unlikely (RP is the class of decision problems solvable in randomized polynomial time). We may refer to the motivating question above and interpret it as also asking what kinds of approximations a quantum computer may perform more efficiently than a classical computer. We addressed this issue partially in Chapter 6, where we provided a quantum algorithm for the additive approximation of a function that is closely related to the Ising partition function.
We demonstrated that quantum computers will be able to efficiently provide additive approximation schemes for the so-called signed-Euler generating function, but that in order to use this information to approximate Z, another computation is necessary which may very well be hard in many instances. Future work will be focused on using the paradigm presented in Chapter 6 to study precisely for which non-planar instances we can obtain an additive approximation of Z. Ultimately, the main goal of research in this area is to identify instances of either the Potts or Ising model for which evaluation of Z is BQP-complete, i.e., complete for quantum computation. This has not yet been achieved and we hope to pursue it further.

The essential challenge of the motivating question above was to understand when certain symmetries are present in the instances so that a quantum computer would be able to gain some advantage. We began by reviewing some work in chapter 4, where we provided a quantum algorithm for the exact evaluation of the weight enumerator polynomial for classical linear codes. This may seem to have nothing to do with statistical physics, but the weight enumerator polynomial is an instance of the Tutte polynomial, as is the Potts partition function, and both are #P-hard to evaluate [118]. We showed that when we restrict our class of linear codes to a certain family of cyclic codes related to irreducible cyclic codes, the weight spectrum of the words has a structure that we may exploit for a quantum speed up over the best classical methods. In the same manner, in chapter 5 we provided an algorithm that provides a speed up over the best classical algorithm for the evaluation of the Potts Z for either the fully ferromagnetic or anti-ferromagnetic cases for a special class of graphs. The speed up was essentially due to the fact that we restricted the graph instances to those that were intimately related to the aforementioned family of cyclic codes. We took advantage of the fact that the cycle structure of the graphs corresponds to the weight spectrum of the codes. Technically, the speed up was due to a series of fortunate connections: a well known relationship between Gauss sums and weights of words from irreducible cyclic codes, the previously mentioned connection between the Potts partition function and weight enumerators, and an efficient quantum algorithm for the estimation of Gauss sums. It was these connections which led to the initial suspicion that there may be a way to use a quantum computer to obtain some type of evaluation of the Potts partition function.
Eventually, we realized that certain graphs that are related to irreducible cyclic codes do in fact have a structure that quantum computers can take advantage of. This structure is not completely understood but it involves the cycle structure of the graphs. More specifically, it involves the relationships between the lengths of the different cycles in the graph as these lengths correspond to the weight spectrum of the given code. This is the aforementioned “symmetry” that quantum computers can “see”.

In chapter 6, we presented a new mapping between quantum circuits and graphs. Without any restrictions the map is actually one between quantum circuits and hypergraphs, where a hypergraph is a generalization of a graph in which edges are replaced by hyperedges; hyperedges are allowed to consist of several vertices instead of two. The incidence structure of a graph (or hypergraph) is an encoding of the relationship between vertices and edges, and one way of encoding this information is in the aptly named incidence matrix. We demonstrated that any quantum circuit can be encoded in this way in a many-to-one fashion, i.e., several quantum circuits may be mapped to the same graph. The essential character of this mapping is that an element of the unitary matrix that corresponds to the quantum circuit is related to a sum over all of the Eulerian subgraphs of the corresponding graph. After reviewing the fact that knowledge of this matrix element is equivalent to the decision making power of a quantum circuit in general, and demonstrating that this element is proportional to a function (the signed-Eulerian subgraph generating function) which is closely related to the Ising partition function, we conclude that computing this function is complete for quantum computation, i.e., it is BQP-complete. Further, we speculate how this particular result may be turned into an algorithm for certain non-planar instances of the Ising partition function. Loosely speaking, one can say that quantum computers are able to “see” all of the Eulerian subgraphs of a graph, and this is the essential symmetry exploited here. The hard fact, however, is that extracting useful information from this ability seems to be non-trivial when it comes to applying it to the computation of the partition function.

In chapter 7, we utilized the aforementioned mapping between quantum circuits and graphs to construct a test that determines whether families of quantum circuits are efficiently classically simulatable. We did this by exploiting two facts. First, by choosing the correct ansatz we were able to prove that a matrix element of the unitary matrix corresponding to a given quantum circuit is essentially equal to the Ising partition function. Next, we used the fact that there is an efficient classical algorithm for the evaluation of the Ising partition function for planar graphs. Thus, via the mapping, any family of quantum circuits (which solve some decision problem) that correspond to a certain family of planar graphs can be classically simulated. We then outlined how this particular approach may be used to construct a quantum algorithm for the Ising partition function and we discussed difficulties to be overcome.

In this thesis we presented two opposing but complementary directions in quantum computation: the ability to use quantum resources to solve problems more efficiently than is possible classically and, in the other direction, an understanding of when classical resources suffice to simulate quantum circuits. Both directions are vibrant and active research areas today. In this work we focused our attention on classical statistical physics and were able to demonstrate that it is indeed intimately connected to quantum computation. On the one hand, certain symmetries inherent in a special class of instances of the Potts model are more “visible” to quantum computers; on the other, knowledge of the Ising partition function is equivalent to the decision-making power of a certain restricted class of quantum circuits. The work presented herein has the potential to be used to construct efficient algorithms for the approximation of the partition function of non-planar instances of the Ising model. More important, from a theoretical point of view, is the potential to characterize the power of the quantum circuit model in terms of the Ising model, i.e., to classify which instances of the Ising model are BQP-complete. Ultimately, the machinery presented herein has the potential to reveal how powerful quantum computers will actually be. Will quantum computers really be more powerful than their classical counterparts and, if so, what will be the actual advantage?

Chapter 9

Appendix

9.1 A Classical Algorithm for the Computation of Coset Leaders and Coset Size

The algorithm for the calculation of the cyclotomic cosets themselves is quite simple; it is essentially a sieve method of the kind commonly used in number theoretic algorithms such as those for prime factorization.

CosetLeaders(N, p)
    Array A (size N), initialized to unmarked
    for i = 0 to N − 1 do
        if A_i = unmarked do
            output “New coset leader = i”
            a ← i, s ← 0
            while A_a = unmarked do
                mark A_a
                increment s
                a ← a × p (mod N)
            end while
            output “Coset size = s”
        end if
    end for
end CosetLeaders
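As a cross-check of the sieve above, here is a short Python transcription (my own sketch, not code from the thesis; the name `coset_leaders` is ad hoc):

```python
def coset_leaders(N, p):
    """Cyclotomic cosets of p modulo N, via the sieve described above.

    Returns a list of (leader, size) pairs, leaders in increasing order.
    """
    marked = [False] * N
    cosets = []
    for i in range(N):
        if not marked[i]:          # i is a new coset leader
            a, s = i, 0
            while not marked[a]:   # sieve out the rest of i's coset
                marked[a] = True
                s += 1
                a = (a * p) % N
            cosets.append((i, s))
    return cosets

# 2-cyclotomic cosets mod 15: {0}, {1,2,4,8}, {3,6,12,9}, {5,10}, {7,14,13,11}
print(coset_leaders(15, 2))
```

Each element is visited exactly once by the inner loop, which is the source of the linear running time discussed below.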

The outer loop scans for coset leaders, which here are unmarked numbers of the form ap^0, while the inner loop sieves out the other coset members, i.e., ap^k for k = 1 to s − 1, where s is the size of a particular coset. Since, as explained in section II, the cosets partition 1 to N − 1, and s is the smallest integer such that a(p^s − 1) ≡ 0 (mod N), on termination the inner loop has returned to the original coset leader ap^0 after marking every other member. While the algorithm features nested loops, its running time is linear in N, since the inner loop is activated only once per coset, and the number of iterations for a particular coset is equal to the size of that coset. In fact, it is easy to see that every element in A is read only twice (once in an unmarked state, and once in a marked state) and of course marked only once (as well as unmarked once, during initialization). It should be noted that while the algorithm is soft-O(N), in terms of general complexity it is not polynomial with respect to the input size, but only pseudo-polynomial, since N and p are given as (presumably) binary numbers. This is of course the best that can be done for enumeration problems of this sort, which have very succinct inputs consisting of only 1 or 2 numbers but outputs that consist of relatively long lists (the number of cosets can approach N/2, as in the example given in section II). As well, like other sieve algorithms, the storage requirements can be a bit onerous for large N, but this can be helped a bit by, for example, implementing A as a bit-array. Such optimizations make the problem feasible for N up to several billion on one of today’s ordinary household computers.

9.2 Matroids

Definition. A matroid M on a set E is the pair (E, I) where I is a collection of subsets of E with the following properties:

1. The empty set is in I.

2. Hereditary Property: If A ∈ I and B ⊂ A, then B ∈ I.

3. Exchange Property: If A and B are in I and A has more elements than B, then ∃ a ∈ A such that a ∉ B but B ∪ {a} ∈ I.

The collection of sets in I are called the independent sets and E is referred to as the ground set.

Definition. The cycle matroid of a graph Γ has the set of all edges of Γ as the ground set E, together with I as the subsets of E which do not contain a cycle. So the independent sets are collections of edges which do not contain cycles.

Recall that in graph theory one refers to such an edge set (the above independent set) as a forest. In matroid theory a matrix representation is a matrix whose column vectors have the same dependence relations as the matroid it is representing. More precisely, the column vectors represent the matroid elements, and the usual notion of linear dependence determines the dependent sets and therefore the independent sets as well. Thus, the matrix can be said to generate the matroid. As an example, imagine the triangle graph of three nodes with three edges A, B, and C. The cycle matroid consists of each of the edges individually and any collection of two edges. The set of all three edges forms a cycle, so it cannot be included. We require our matrix representation to encode this independence structure of the edges. One may work over any field here because we are only concerned with graphic matroids, i.e., matroids which can be represented as the cycle matroid of some graph. (Graphic matroids are representable over any field [117].) Now, if we think of columns 1, 2 and 3 as edges A, B and C respectively, we can take the following matrix as a representation in F2:

1 0 1
0 1 1

Since addition is mod 2 here, a cycle is any collection of columns that sums to the 0-vector. We can take all collections where this does not happen, and these collections will form I. In this way, this matrix is a representation of the cycle matroid for the triangle graph. In matroid theory one has the familiar notion of a base.

Definition. A base of a matroid M = (E, I) is a maximal independent subset of E.

It is not a coincidence that the left part of the matrix is the 2 × 2 identity matrix. In general one can form a representation (known as the standard matrix representation) where one begins with an r × r identity matrix, where r is the size of a base of M, and appends to it columns that capture the dependence structure of the matroid in question. In this way, the columns of the identity matrix represent the chosen base of M. So M is isomorphic to the matroid induced on the columns of the matrix by linear dependence. A more precise explanation can be found in [117]. What is important for us is that such a matrix representation is possible.
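To see the triangle example in executable form, the following Python sketch (my own illustration; the names `gf2_rank` and `COLS` are not from the thesis) recovers the independent sets of the triangle's cycle matroid from the F2 matrix representation above:

```python
from itertools import combinations

# Columns of the F2 representation above: edges A, B, C of the triangle.
COLS = {'A': (1, 0), 'B': (0, 1), 'C': (1, 1)}

def gf2_rank(vectors):
    """Rank over F2: encode each vector as a bitmask and eliminate."""
    rows = [sum(bit << k for k, bit in enumerate(v)) for v in vectors]
    rank = 0
    while rows:
        r = rows.pop()
        if r == 0:
            continue
        rank += 1
        pivot = r & -r                      # lowest set bit of r
        rows = [x ^ r if x & pivot else x for x in rows]
    return rank

# A column subset is independent exactly when its GF(2) rank equals its size.
independent = [set(s) for k in range(len(COLS) + 1)
               for s in combinations(COLS, k)
               if gf2_rank([COLS[e] for e in s]) == k]
```

The seven independent sets are every subset of {A, B, C} except the full edge set, which is the triangle's unique cycle — exactly the independence structure described in the text.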

9.2.1 Generator matrix of a cyclic code and the cycle matroid matrix

There is an alternative (but equivalent) way of constructing the generator matrix of a cyclic code which will immediately show its usefulness in its relationship with the cycle matroid matrix representation. Let C be

an [n, k] cyclic code and let g(x) be the generator polynomial. Now, divide x^{n−k+i} by g(x) for 0 ≤ i ≤ k − 1. We have

x^{n−k+i} = q_i(x) g(x) + r_i(x),

where deg r_i(x) < deg g(x) = n − k, or r_i(x) = 0. What this means is that we have a set of linearly independent code words. Namely, we have the k code words given by

x^{n−k+i} − r_i(x) = q_i(x) g(x)

in C. More explicitly, take the remainder polynomials r_i(x) after applying the division algorithm and, using the correspondence (3.1) above, form the k × (n − k) matrix R and append the k × k identity matrix to it. The rows of R are the coefficients of the r_i(x), and one then has the k × n generator matrix [I_k | R]. This is precisely the form of the matrix representation for matroids discussed above. Thus, we have a correspondence between the generator matrix for an irreducible cyclic code and the matrix representation for the cycle matroid of a graph.
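The division-algorithm construction can be sketched in Python with polynomials encoded as bitmasks (bit i holding the coefficient of x^i). This is my own illustration, run on the standard [7, 4] cyclic code with g(x) = 1 + x + x^3 rather than on a code from the thesis; `gf2_mod` and `generator_matrix` are hypothetical helper names:

```python
def gf2_mod(a, g):
    """Remainder of polynomial a modulo g over GF(2), bitmask encoding."""
    dg = g.bit_length() - 1
    while a and a.bit_length() - 1 >= dg:
        a ^= g << (a.bit_length() - 1 - dg)
    return a

def generator_matrix(n, k, g):
    """Rows encode x^{n-k+i} - r_i(x) = q_i(x) g(x), laid out as [I_k | R]."""
    rows = []
    for i in range(k):
        r = gf2_mod(1 << (n - k + i), g)                        # r_i(x)
        rows.append([1 if j == i else 0 for j in range(k)] +    # I_k part
                    [(r >> j) & 1 for j in range(n - k)])       # row of R
    return rows

g = 0b1011                        # g(x) = 1 + x + x^3
G = generator_matrix(7, 4, g)     # generator matrix of the [7, 4] cyclic code
```

Each row corresponds to the codeword x^{n−k+i} − r_i(x), which is divisible by g(x) by construction.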

9.3 Characters

A character of a finite group (G, ∗) is a homomorphism Φ from G to the multiplicative group C∗ of non-zero complex numbers. We are interested in two types of characters, namely the multiplicative and additive characters. Let F ≡ F_{q^k} (where k is a positive integer) be a finite field as defined previously, and let F∗ be the multiplicative group of F. Let g be a primitive element of F (i.e., g generates F∗). Let

ζ_n = e^{2πi/n}

denote a primitive nth root of unity. Let x = g^m ∈ F∗. A multiplicative character χ_j(x) is a mapping from the set of powers {m} in x = g^m to powers of roots of unity. Specifically, the group of multiplicative characters χ = {χ_j}_j consists of the elements

χ_j(x) = χ_j(g^m) = ζ_{q^k−1}^{jm},    m = 0, …, q^k − 2;  j = 0, …, q^k − 2.

Let a ∈ F. An additive character e_β(a) is a mapping from F to powers of roots of unity via the trace function. Specifically, the group of additive characters e = {e_β}_β consists of the elements

e_β(a) = ζ_q^{Tr(βa)}    ∀ a, β ∈ F,

where the trace is defined in Eq. (3.5).
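For the prime-field case k = 1 (where the trace is the identity map) these definitions are easy to verify numerically. The sketch below is my own illustration for F_7 with the primitive element g = 3; the names `e_add` and `chi` are ad hoc:

```python
import cmath

q = 7         # a prime, so F = F_q and Tr(a) = a
g = 3         # 3 generates the multiplicative group of F_7

def zeta(n):
    """A primitive nth root of unity."""
    return cmath.exp(2j * cmath.pi / n)

def e_add(beta, a):
    """Additive character e_beta(a) = zeta_q^{Tr(beta a)}."""
    return zeta(q) ** (beta * a)

def chi(j, x):
    """Multiplicative character chi_j(g^m) = zeta_{q-1}^{jm}, chi_j(0) = 0."""
    if x % q == 0:
        return 0
    m = next(m for m in range(q - 1) if pow(g, m, q) == x % q)
    return zeta(q - 1) ** (j * m)

# Orthogonality: non-trivial characters sum to zero over the group.
s_add = sum(e_add(1, a) for a in range(q))
s_mul = sum(chi(1, x) for x in range(1, q))
```

Both sums vanish (up to floating-point error), which is the orthogonality property that the Gauss-sum machinery of the thesis relies on.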

9.4 Discrete Log

For every non-zero x ∈ F∗ the discrete logarithm with respect to a primitive element g ∈ F is given by

log_g(x) = log_g(g^m) = m mod (q^k − 1).
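For small fields this logarithm can be computed by brute force; a minimal Python sketch (my own, with a hypothetical `dlog` helper) for the prime-field case:

```python
def dlog(x, g, p):
    """Discrete log in F_p*: the m in [0, p-2] with g^m == x (mod p)."""
    val = 1
    for m in range(p - 1):
        if val == x % p:
            return m
        val = (val * g) % p
    raise ValueError("x is not a power of g modulo p")

# In F_7 with primitive element 3: 3^4 = 81 = 4 (mod 7), so log_3(4) = 4.
```

Brute force of course takes exponential time in the input size; the point of the thesis is precisely that quantum computers can do this efficiently.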

This means that every multiplicative character can be written

χ_j(x) = χ_j(g^m) = ζ_{q^k−1}^{j log_g(x)}    (9.1)

for x ≠ 0, and χ_j(0) = 0.

9.5 Samples of Mathematica Notebooks

In the following pages is an example of a program that I wrote in Mathematica to check whether there is a satisfying edge interaction for a given graph. One example illustrates the procedure for the graph K_{3,3}, which is of fundamental importance as it is one of the minors that characterize a graph as non-planar; as expected, the test fails. Also included is the test I implemented for the simple example of K_{3,3} with two edges removed, which was successful. I used programs like this extensively to better understand the mapping between quantum circuits and graphs.
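The linear-algebra core of these notebooks is easy to reproduce outside Mathematica. The Python sketch below (my own, hypothetical reimplementation) builds the vertex–edge incidence matrix of K_{3,3} and confirms that its GF(2) null space — the cycle space whose basis vectors the notebook stores in `Nll` — has dimension |E| − |V| + 1 = 4:

```python
from itertools import product

# Edges of K_{3,3}: parts {0, 1, 2} and {3, 4, 5}.
edges = [(u, v) for u, v in product(range(3), range(3, 6))]

# 6 x 9 vertex-edge incidence matrix.
inc = [[1 if v in e else 0 for e in edges] for v in range(6)]

def gf2_rank(matrix):
    """Rank over GF(2), rows encoded as bitmasks, by elimination."""
    rows = [sum(b << j for j, b in enumerate(r)) for r in matrix]
    rank = 0
    while rows:
        r = rows.pop()
        if r == 0:
            continue
        rank += 1
        pivot = r & -r
        rows = [x ^ r if x & pivot else x for x in rows]
    return rank

rank = gf2_rank(inc)           # |V| - 1 = 5 for a connected graph
cycle_dim = len(edges) - rank  # nullity = |E| - rank = 4
```

The four-dimensional null space matches the four vectors returned by `NullSpace[c, Modulus -> 2]` in the notebook that follows.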

(* Sample Notebook for K_{3,3} *)

a = CompleteGraph[3, 3]
  Graph:<9, 6, Undirected>

ShowLabeledGraph[a]
  [labeled drawing of K_{3,3}: vertices 1, 2, 3 in one part, 4, 5, 6 in the other]

b = IncidenceMatrix[a]
MatrixForm[b]
  1 1 1 0 0 0 0 0 0
  0 0 0 1 1 1 0 0 0
  0 0 0 0 0 0 1 1 1
  1 0 0 1 0 0 1 0 0
  0 1 0 0 1 0 0 1 0
  0 0 1 0 0 1 0 0 1

Printed by Mathematica for Students

(* This will add zero rows to the incidence matrix in order to obtain CH. *)
AddZero[M_] := Insert[M, Table[0, {i, 1, Dimensions[M][[2]]}],
   Table[{j + 1}, {j, 1, Dimensions[M][[1]]}]]   (* a zero row after each row *)

c = AddZero[b]
MatrixForm[c]
  1 1 1 0 0 0 0 0 0
  0 0 0 0 0 0 0 0 0
  0 0 0 1 1 1 0 0 0
  0 0 0 0 0 0 0 0 0
  0 0 0 0 0 0 1 1 1
  0 0 0 0 0 0 0 0 0
  1 0 0 1 0 0 1 0 0
  0 0 0 0 0 0 0 0 0
  0 1 0 0 1 0 0 1 0
  0 0 0 0 0 0 0 0 0
  0 0 1 0 0 1 0 0 1
  0 0 0 0 0 0 0 0 0

(* Now we find the kernel. *)
Nll = NullSpace[c, Modulus -> 2]
  {{1, 0, 1, 0, 0, 0, 1, 0, 1}, {1, 1, 0, 0, 0, 0, 1, 1, 0},
   {1, 0, 1, 1, 0, 1, 0, 0, 0}, {1, 1, 0, 1, 1, 0, 0, 0, 0}}

(* This next loop creates a matrix h that can be used to form the quantum
   circuit H, by putting the rows of the incidence matrix of the graph on the
   even rows of a zero matrix of the same size as c. *)
Dimensions[c]
  {12, 9}
h = ConstantArray[0, {Dimensions[c][[1]], Dimensions[c][[2]]}];
For[i = 1, i <= Dimensions[b][[1]], i++, h[[2 i]] = b[[i]]]
For[i = 1, i <= Dimensions[b][[2]], i++,
  For[j = 1, j <= Dimensions[b][[1]], j++,
    If[h[[2 j, i]] == 1, h[[2 j - 1, i]] = 1; Break[]]]]
MatrixForm[h]
  1 1 1 0 0 0 0 0 0
  1 1 1 0 0 0 0 0 0
  0 0 0 1 1 1 0 0 0
  0 0 0 1 1 1 0 0 0
  0 0 0 0 0 0 1 1 1
  0 0 0 0 0 0 1 1 1
  0 0 0 0 0 0 0 0 0
  1 0 0 1 0 0 1 0 0
  0 0 0 0 0 0 0 0 0
  0 1 0 0 1 0 0 1 0
  0 0 0 0 0 0 0 0 0
  0 0 1 0 0 1 0 0 1

(* The above double loop constructs the primary quantum circuit H. *)

H = h;
Ht = Transpose[H]

(* We form Ht.CH here. *)
S = Ht.c
  {{1, 1, 1, 0, 0, 0, 0, 0, 0}, {1, 1, 1, 0, 0, 0, 0, 0, 0},
   {1, 1, 1, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 1, 1, 1, 0, 0, 0},
   {0, 0, 0, 1, 1, 1, 0, 0, 0}, {0, 0, 0, 1, 1, 1, 0, 0, 0},
   {0, 0, 0, 0, 0, 0, 1, 1, 1}, {0, 0, 0, 0, 0, 0, 1, 1, 1},
   {0, 0, 0, 0, 0, 0, 1, 1, 1}}

(* The following loop extracts the strictly lower triangular matrix from S.
   First define ltS as a square zero matrix. *)
ltS = ConstantArray[0, {Dimensions[S][[1]], Dimensions[S][[1]]}];
For[i = 1, i <= Dimensions[S][[1]], i++,
  For[j = 1, i - j > 0, j++, ltS[[i, j]] = S[[i, j]]]]
MatrixForm[ltS]
  0 0 0 0 0 0 0 0 0
  1 0 0 0 0 0 0 0 0
  1 1 0 0 0 0 0 0 0
  0 0 0 0 0 0 0 0 0
  0 0 0 1 0 0 0 0 0
  0 0 0 1 1 0 0 0 0
  0 0 0 0 0 0 0 0 0
  0 0 0 0 0 0 1 0 0
  0 0 0 0 0 0 1 1 0

(* Now we need to construct the whole kernel by taking all possible sums. *)
ss = ConstantArray[0, {2^Dimensions[Nll][[1]] - 1, Dimensions[Nll][[2]]}];
Dimensions[Nll][[1]]
  4
pnll = Subsets[Nll]
  (* the 16 subsets of the four null-space vectors *)
Dimensions[pnll][[1]]
  16
For[i = 2, i <= 2^Dimensions[Nll][[1]], i++,
  For[k = 1, k <= Dimensions[pnll[[i]]][[1]], k++,
    ss[[i - 1]] = Sum[pnll[[i]][[x]], {x, 1, k}]]]
ss
  {{1, 0, 1, 0, 0, 0, 1, 0, 1}, {1, 1, 0, 0, 0, 0, 1, 1, 0},
   {1, 0, 1, 1, 0, 1, 0, 0, 0}, {1, 1, 0, 1, 1, 0, 0, 0, 0},
   {2, 1, 1, 0, 0, 0, 2, 1, 1}, {2, 0, 2, 1, 0, 1, 1, 0, 1},
   {2, 1, 1, 1, 1, 0, 1, 0, 1}, {2, 1, 1, 1, 0, 1, 1, 1, 0},
   {2, 2, 0, 1, 1, 0, 1, 1, 0}, {2, 1, 1, 2, 1, 1, 0, 0, 0},
   {3, 1, 2, 1, 0, 1, 2, 1, 1}, {3, 2, 1, 1, 1, 0, 2, 1, 1},
   {3, 1, 2, 2, 1, 1, 1, 0, 1}, {3, 2, 1, 2, 1, 1, 1, 1, 0},
   {4, 2, 2, 2, 1, 1, 2, 1, 1}}

(* Now we have to solve a[i].w == a[i].ltS.a[i] for w, the edge distribution.
   We skip the vector 0,0,...,0, for which equality follows automatically.
   First we have to obtain the RHS of the above. *)
bb = ConstantArray[0, {Dimensions[ss][[1]], 1}];
For[i = 1, i <= Dimensions[ss][[1]], i++, bb[[i]] = Mod[ss[[i]].ltS.ss[[i]], 2]]
bb
  {0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0}

(* The following returns one solution to the equation; all others are arrived
   at by adding it to the null space. *)
LinearSolve[ss, bb, Modulus -> 2]
  LinearSolve::nosol : Linear equation encountered that has no solution.

(* Here we form all possible binary strings of length 9, which represent all
   possible edge interactions for K_{3,3}. We evaluate ss.w for all such
   possible edge interactions. We find a modular structure: there are only 16
   different possible values of ss.w. Then we check whether it is possible to
   find an H that satisfies the right-hand condition
   Mod[ss[[i]].ltS.ss[[i]], 2]. The www below are the edge interactions. *)

www = Tuples[{0, 1}, 9];
www[[33]]
  {0, 0, 0, 1, 0, 0, 0, 0, 0}
Dimensions[www][[1]]
  512
ccc = ConstantArray[0, {512, 15}];
For[t = 1, t <= 512, t++, ccc[[t]] = Mod[ss.www[[t]], 2]]
Dimensions[ccc][[1]]
  512
Count[ccc, {1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1}]
  32
vv = ConstantArray[0, {512, 1}];
For[r = 1, r <= Dimensions[ccc][[1]], r++, vv[[r]] = Count[ccc, ccc[[r]]]]
vv
  (* a list of 512 entries, every one equal to 32 *)

(* This implies that there are only sixteen possibilities, as 16 x 32 = 512.
   Now we need to find these 16 critters. We store them in the matrix xx. *)
ccc = Sort[ccc];
xx = ConstantArray[0, {16, 15}];
For[z = 1, z <= 16, z++, xx[[z]] = ccc[[z 32]]]
xx
  {{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
   {0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1},
   {0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1},
   {0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0},
   {0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1},
   {0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0},
   {0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0},
   {0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1},
   {1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1},
   {1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0},
   {1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0},
   {1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1},
   {1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0},
   {1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1},
   {1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1},
   {1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0}}
NullSpace[ss, Modulus -> 2]
  {{1, 1, 0, 1, 1, 0, 0, 0, 1}, {0, 1, 0, 0, 1, 0, 0, 1, 0},
   {1, 0, 0, 1, 0, 0, 1, 0, 0}, {0, 0, 0, 1, 1, 1, 0, 0, 0},
   {1, 1, 1, 0, 0, 0, 0, 0, 0}}

(* The following loop transforms H to a circuit with variables v[i]. *)
a = 1;
For[i = 1, i <= Dimensions[b][[2]], i++,
  For[j = 1, j <= Dimensions[b][[1]], j++,
    If[H[[2 j, i]] == 1, H[[2 j, i]] = 1, H[[2 j - 1, i]] = v[a]; a++]]]
MatrixForm[H]
  (* H with the fixed 1s of h and symbolic entries v[1], ..., v[36] in the
     remaining positions of the odd rows *)

(* Now we calculate ltr H.CH, where H is the transformed H from above. *)
Ht = Transpose[H]
S = Ht.c
ltS = ConstantArray[0, {Dimensions[S][[1]], Dimensions[S][[1]]}];
For[i = 1, i <= Dimensions[S][[1]], i++,
  For[j = 1, i - j > 0, j++, ltS[[i, j]] = S[[i, j]]]]
MatrixForm[ltS]
  (* strictly lower triangular, with entries such as 1 + v[7], 1 + v[11],
     v[13] + v[15], v[17] + v[19], ... *)
For[i = 1, i <= Dimensions[ss][[1]], i++, bb[[i]] = ss[[i]].ltS.ss[[i]]]
MatrixForm[bb]
  (* fifteen polynomial expressions in the v[i], e.g.
     2 + v[11] + 2 v[25] + v[28] + 2 v[33] + 2 v[35] *)

Solve[bb[[1]] == 0 && bb[[2]] == 0 && ... && bb[[15]] == 0,
  Modulus == 2, Mode -> Modular]
  Solve::smod : Unable to solve equations for modulus.
Dimensions[xx][[1]]
  16
For[t = 1, t <= Dimensions[xx][[1]], t++,
  Solve[bb[[1]] == xx[[t]][[1]] && bb[[2]] == xx[[t]][[2]] && ... &&
    bb[[15]] == xx[[t]][[15]], Modulus == 2, Mode -> Modular]]
  Solve::smod : Unable to solve equations for modulus.
  General::stop : Further output of Solve::smod will be suppressed during this calculation.

(* The loop was restarted at t = 4 and again at t = 7 after the suppressed
   output; each case produced the same Solve::smod message. *)

xx 8

0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1 %% && xx 9 ! " 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1 %% &&

! " (* Here is the instance for xx[[8]]. *)

Solve bb 1 ( 0 && bb 2 ( 1 && bb 3 ( 1 && bb 4 ( 1 && bb 5 ( 1 && bb 6 ( 1 && bb 7 ( 1 && bb 8 ( 0 && bb 9 ( 0 && bb 10 ( 0 && bb 11 ( 0 && bb 12 ( 0 && bb 13 ( 0 && bb% 14%% &(& 1 && bb 15%% &(& 1 && Modular%% &(&2, Mode %%%Modular&& ; %% && %% && %% && %% && %% && ! Here%% is&&the instance%% &for& xx 9 %.%! && %% && %% && %% && " #&

! %% && $

Printed by Mathematica for Students 114

Solve bb 1 ( 1 && bb 2 ( 0 && bb 3 ( 0 && bb 4 ( 0 && bb 5 ( 1 && bb 6 ( 1 && bb 7 ( 1 && bb 8 ( 0 && bb 9 ( 0 && bb 10 ( 0 && bb 11 ( 1 && bb 12 ( 1 && bb 13 ( 1 && bb% 14%% &(& 0 && bb 15%% &(& 1 && Modular%% &(&2, Mode %%%Modular&& ; %% && %% && %% && %% && %% && Solve::smod : Unable to solve equations for modulus. ! %% && %% && %% && %% && %% && %% && " #&

For t # 10, t & Dimensions xx 1 , t'', Solve bb 1 ( xx t 1 && bb 2 ( xx t 2 && bb 3 ( xx t 3 && bb 4 ( xx t 4 && bb 5 ( xx t 5 && bb% 6 ( xx t 6 %&&&bb%% &7& ( xx t 7 && bb %8 %%(&xx& t %% 8&&%%&&&&bb 9 %%( xx&& t %% 9&&%%&&&& bb%%10&& ( xx%% t&&%% 10&& && bb%% &11& (%xx% &&t%% &&11 &&%% && %% &&%% && bb%%12&& ( xx%% t&&%% 12&& && bb%% &13& (%%xx&&%t% && 13 && bb%%14&& ( xx%% t&&%% 14&& && bb%% &15& (%xx% &&t%% &&15 && Modular ( 2, Mode%% %&&Modular%% &&%% && %% && %% &&%% && %% && %% &&%% && %% && %% &&%% && Solve::smod : Unable to solve equations for modulus. ! %% && %% &&%% && %% && %% &&%% && Solve::smo" d : Unable to solve#&e&quations for modulus. !

For t # 13, t & Dimensions xx 1 , t'', Solve bb 1 ( xx t 1 && bb 2 ( xx t 2 && bb 3 ( xx t 3 && bb 4 ( xx t 4 && bb 5 ( xx t 5 && bb% 6 ( xx t 6 %&&&bb%% &7& ( xx t 7 && bb %8 %%(&xx& t %% 8&&%%&&&&bb 9 %%( xx&& t %% 9&&%%&&&& bb%%10&& ( xx%% t&&%% 10&& && bb%% &11& (%xx% &&t%% &&11 &&%% && %% &&%% && bb%%12&& ( xx%% t&&%% 12&& && bb%% &13& (%%xx&&%t% && 13 && bb%%14&& ( xx%% t&&%% 14&& && bb%% &15& (%xx% &&t%% &&15 && Modular ( 2, Mode%% %&&Modular%% &&%% && %% && %% &&%% && %% && %% &&%% && %% && %% &&%% && Solve::smod : Unable to solve equations for modulus. ! %% && %% &&%% && %% && %% &&%% && " #&& For t # 15, t & Dimensions xx 1 , t'', Solve bb 1 ( xx t 1 && bb 2 ( xx t 2 && bb 3 ( xx t 3 && bb 4 ( xx t 4 && bb 5 ( xx t 5 && bb% 6 ( xx t 6 %&&&bb%% &7& ( xx t 7 && bb %8 %%(&xx& t %% 8&&%%&&&&bb 9 %%( xx&& t %% 9&&%%&&&& bb%%10&& ( xx%% t&&%% 10&& && bb%% &11& (%xx% &&t%% &&11 &&%% && %% &&%% && bb%%12&& ( xx%% t&&%% 12&& && bb%% &13& (%%xx&&%t% && 13 && bb%%14&& ( xx%% t&&%% 14&& && bb%% &15& (%xx% &&t%% &&15 && Modular ( 2, Mode%% %&&Modular%% &&%% && %% && %% &&%% && %% && %% &&%% && %% && %% &&%% && ! Cases%% &8&,12 and%% 14&&%seem% &&to have%% solutions&& %%. &To&%%resolve&& the issue of" complexity I have#&& reduced the equations in complexity by hand and verified their correctness. We call the new system hh.! !


Printed by Mathematica for Students 115

MatrixForm[hh]

v[11] + v[28]
v[7] + v[27] + v[11] + v[16]
v[7] + v[15]
v[12] + v[32]
v[23] + v[28]
1 + v[12] + v[15] + v[16] + v[19] + v[20]
v[28]
1 + v[12] + v[15] + v[16] + v[23] + v[24] + v[27]
v[19] + v[27]
v[12] + v[20]
1 + v[7] + v[15] + v[24] + v[32]
1 + v[11] + v[16] + v[20] + v[32]
1 + v[7] + v[19] + v[23] + v[28]
1 + v[11] + v[19] + v[20] + v[23] + v[24] + v[27]
v[24] + v[32]

Solve[hh[[1]] == 0 && hh[[2]] == 1 && hh[[3]] == 1 && hh[[4]] == 1 && hh[[5]] == 1 &&
  hh[[6]] == 1 && hh[[7]] == 1 && hh[[8]] == 0 && hh[[9]] == 0 && hh[[10]] == 0 &&
  hh[[11]] == 0 && hh[[12]] == 0 && hh[[13]] == 0 && hh[[14]] == 1 && hh[[15]] == 1,
  Mode -> Modular]

{{Modulus -> 3, v[7] -> 1, v[15] -> 0, v[11] -> 1, v[16] -> 0, v[12] -> 0, v[27] -> 0,
  v[19] -> 0, v[23] -> 2, v[24] -> 0, v[32] -> 1, v[20] -> 0, v[28] -> 2}}

Solve[hh[[1]] == 1 && hh[[2]] == 0 && hh[[3]] == 0 && hh[[4]] == 0 && hh[[5]] == 1 &&
  hh[[6]] == 1 && hh[[7]] == 1 && hh[[8]] == 0 && hh[[9]] == 0 && hh[[10]] == 0 &&
  hh[[11]] == 1 && hh[[12]] == 1 && hh[[13]] == 1 && hh[[14]] == 0 && hh[[15]] == 1,
  Mode -> Modular]

{}

(* The only solution that was possible in this setting was over GF(3), which is not acceptable. The situation was even more straightforward with K_5, as there were no solutions over any GF(p). Thus, non-planar instances are not accepted. I have been able to implement several planar graphs that did have solutions. *)



(* Sample notebook for K_{3,3} minus two edges *)

a = CompleteGraph[3, 3]

Graph:<9, 6, Undirected>

a = DeleteEdge[a, {1, 6}]

Graph:<8, 6, Undirected>

a = DeleteEdge[a, {2, 5}]

Graph:<7, 6, Undirected>

ShowLabeledGraph[a]

(* ShowLabeledGraph[a] draws the graph with vertex labels 1 through 6. *)

b = IncidenceMatrix[a]

{{1, 1, 0, 0, 0, 0, 0}, {0, 0, 1, 1, 0, 0, 0}, {0, 0, 0, 0, 1, 1, 1},
 {1, 0, 1, 0, 1, 0, 0}, {0, 1, 0, 0, 0, 1, 0}, {0, 0, 0, 1, 0, 0, 1}}


MatrixForm[b]

1 1 0 0 0 0 0
0 0 1 1 0 0 0
0 0 0 0 1 1 1
1 0 1 0 1 0 0
0 1 0 0 0 1 0
0 0 0 1 0 0 1

(* This will add zeros to the incidence matrix in order to obtain CH *)
AddZero[M_] := Insert[M, Table[0, {i, 1, Dimensions[M][[2]]}],
   Table[{-j}, {j, 1, Dimensions[M][[1]]}]]

c = AddZero[b]

{{1, 1, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0}, {0, 0, 1, 1, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0},
 {0, 0, 0, 0, 1, 1, 1}, {0, 0, 0, 0, 0, 0, 0}, {1, 0, 1, 0, 1, 0, 0}, {0, 0, 0, 0, 0, 0, 0},
 {0, 1, 0, 0, 0, 1, 0}, {0, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 1, 0, 0, 1}, {0, 0, 0, 0, 0, 0, 0}}

MatrixForm[c]

1 1 0 0 0 0 0
0 0 0 0 0 0 0
0 0 1 1 0 0 0
0 0 0 0 0 0 0
0 0 0 0 1 1 1
0 0 0 0 0 0 0
1 0 1 0 1 0 0
0 0 0 0 0 0 0
0 1 0 0 0 1 0
0 0 0 0 0 0 0
0 0 0 1 0 0 1
0 0 0 0 0 0 0
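The effect of AddZero can be mirrored outside the notebook; the following Python sketch (an illustration, not part of the original Mathematica session) interleaves a zero row after each row of the incidence matrix, producing the 12 x 7 matrix CH shown above.

```python
# Incidence matrix b of K_{3,3} minus the edges {1,6} and {2,5},
# transcribed from the notebook output above.
b = [
    [1, 1, 0, 0, 0, 0, 0],
    [0, 0, 1, 1, 0, 0, 0],
    [0, 0, 0, 0, 1, 1, 1],
    [1, 0, 1, 0, 1, 0, 0],
    [0, 1, 0, 0, 0, 1, 0],
    [0, 0, 0, 1, 0, 0, 1],
]

def add_zero(m):
    """Return a copy of m with a zero row inserted after every row,
    mimicking the notebook's AddZero."""
    width = len(m[0])
    out = []
    for row in m:
        out.append(list(row))
        out.append([0] * width)
    return out

c = add_zero(b)
```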

(* Now we find the kernel. *)

Nll = NullSpace[c, Modulus -> 2]

{{0, 0, 1, 1, 1, 0, 1}, {1, 1, 0, 0, 1, 1, 0}}

(* This next loop creates a matrix h that can be used to form the quantum circuit H, by using the incidence matrix of the graph and putting its rows on the even rows of a zero matrix of the same size as c. *)

Dimensions[c]

{12, 7}

h = ConstantArray[0, {Dimensions[c][[1]], Dimensions[c][[2]]}]

{{0, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0},
 {0, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0},
 {0, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0}}

For[i = 1, i <= Dimensions[b][[1]], i++, h[[2 i]] = b[[i]]]
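NullSpace[c, Modulus -> 2] is ordinary Gaussian elimination over GF(2); the kernel of c is the cycle space of the graph. A minimal Python sketch (illustrative only, not the notebook's code) that recovers a basis of the same space:

```python
def nullspace_gf2(m):
    """Basis of the null space of matrix m over GF(2),
    via Gauss-Jordan elimination on a copy of m."""
    rows = [r[:] for r in m]
    ncols = len(rows[0])
    pivots = {}  # pivot column -> row index in the reduced matrix
    r = 0
    for col in range(ncols):
        pivot = next((i for i in range(r, len(rows)) if rows[i][col] % 2), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col] % 2:
                rows[i] = [(a + b) % 2 for a, b in zip(rows[i], rows[r])]
        pivots[col] = r
        r += 1
    basis = []
    for free in range(ncols):
        if free in pivots:
            continue
        v = [0] * ncols
        v[free] = 1
        for col, prow in pivots.items():
            v[col] = rows[prow][free] % 2  # -1 == 1 mod 2
        basis.append(v)
    return basis

# c is the zero-padded incidence matrix CH from above.
b = [[1,1,0,0,0,0,0],[0,0,1,1,0,0,0],[0,0,0,0,1,1,1],
     [1,0,1,0,1,0,0],[0,1,0,0,0,1,0],[0,0,0,1,0,0,1]]
c = []
for row in b:
    c.append(row[:]); c.append([0]*7)
kernel = nullspace_gf2(c)
```

The basis returned may differ from the notebook's, but it spans the same two-dimensional cycle space.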


h

{{0, 0, 0, 0, 0, 0, 0}, {1, 1, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0}, {0, 0, 1, 1, 0, 0, 0},
 {0, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 1, 1, 1}, {0, 0, 0, 0, 0, 0, 0}, {1, 0, 1, 0, 1, 0, 0},
 {0, 0, 0, 0, 0, 0, 0}, {0, 1, 0, 0, 0, 1, 0}, {0, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 1, 0, 0, 1}}

For[i = 1, i <= Dimensions[b][[2]], i++,
 For[j = 1, j <= Dimensions[b][[1]], j++,
  If[h[[2 j, i]] == 1, h[[2 j - 1, i]] = 1; Break[]]]]

MatrixForm[h]

1 1 0 0 0 0 0
1 1 0 0 0 0 0
0 0 1 1 0 0 0
0 0 1 1 0 0 0
0 0 0 0 1 1 1
0 0 0 0 1 1 1
0 0 0 0 0 0 0
1 0 1 0 1 0 0
0 0 0 0 0 0 0
0 1 0 0 0 1 0
0 0 0 0 0 0 0
0 0 0 1 0 0 1

(* The above double loop constructs the primary quantum circuit H. *)

H = h;
Ht = Transpose[H]

{{1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0}, {1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0},
 {0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0}, {0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1},
 {0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0}, {0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0},
 {0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1}}

(* We form Ht.CH here. *)

S = Ht.c

{{1, 1, 0, 0, 0, 0, 0}, {1, 1, 0, 0, 0, 0, 0}, {0, 0, 1, 1, 0, 0, 0}, {0, 0, 1, 1, 0, 0, 0},
 {0, 0, 0, 0, 1, 1, 1}, {0, 0, 0, 0, 1, 1, 1}, {0, 0, 0, 0, 1, 1, 1}}
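As a cross-check (Python, not part of the notebook), building h from b, transposing, and multiplying by the padded matrix c reproduces the block-structured 7 x 7 matrix S printed above.

```python
b = [[1,1,0,0,0,0,0],[0,0,1,1,0,0,0],[0,0,0,0,1,1,1],
     [1,0,1,0,1,0,0],[0,1,0,0,0,1,0],[0,0,0,1,0,0,1]]

# c: rows of b on odd rows (1-indexed), zero rows between them
c = []
for row in b:
    c.append(row[:]); c.append([0]*7)

# h: rows of b on even rows (1-indexed), then a 1 placed just above
# the first 1 of each column, as in the notebook's double loop
h = [[0]*7 for _ in range(12)]
for i, row in enumerate(b):
    h[2*i + 1] = row[:]
for col in range(7):
    for j in range(6):
        if h[2*j + 1][col] == 1:
            h[2*j][col] = 1
            break

Ht = [list(col) for col in zip(*h)]  # transpose of H
# matrix product S = Ht . c
S = [[sum(x * y for x, y in zip(hrow, ccol)) for ccol in zip(*c)] for hrow in Ht]
```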

(* The following loop extracts the lower triangular matrix from S. First define ltS = lwtr(S) as a square zero matrix. *)

ltS = ConstantArray[0, {Dimensions[S][[1]], Dimensions[S][[1]]}];
ltS

{{0, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0},
 {0, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0}}

For[i = 1, i <= Dimensions[S][[1]], i++,
 For[j = 1, i - j > 0, j++, ltS[[i, j]] = S[[i, j]]]]



MatrixForm[ltS]

0 0 0 0 0 0 0
1 0 0 0 0 0 0
0 0 0 0 0 0 0
0 0 1 0 0 0 0
0 0 0 0 0 0 0
0 0 0 0 1 0 0
0 0 0 0 1 1 0
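The strict lower triangle extracted by the loop above can be sketched in one line of Python (illustrative only):

```python
# S as printed in the notebook above
S = [[1,1,0,0,0,0,0],[1,1,0,0,0,0,0],[0,0,1,1,0,0,0],[0,0,1,1,0,0,0],
     [0,0,0,0,1,1,1],[0,0,0,0,1,1,1],[0,0,0,0,1,1,1]]

def strict_lower(m):
    """Strictly lower triangular part: keep m[i][j] only when j < i."""
    n = len(m)
    return [[m[i][j] if j < i else 0 for j in range(n)] for i in range(n)]

ltS = strict_lower(S)
```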

(* Now we need to construct the whole kernel by taking all possible sums. *)

ss = ConstantArray[0, {2^Dimensions[Nll][[1]] - 1, Dimensions[Nll][[2]]}];
ss

{{0, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0}}

!! " ! " ! "" Dimensions Nll 1

2 % &%% && pnll # Subsets Nll

, 0, 0, 1, 1, 1, 0, 1 , 1, 1, 0, 0, 1, 1, 0 , 0, 0, 1, 1, 1, 0, 1 , 1, 1, 0, 0, 1, 1, 0 % & Dimensions pnll !!" !! "" !! "" !! " ! """ 4 % &

! "

For i # 2, i & 2^Dimensions Nll 1 , i'', For k # 1, k & Dimensions pnll i 1 , ss i $ 1 # Sum pnll i x , x, 1, k $ 1 , k'' ss % % &%% && 0,%0, 1, 1, 1, 0, 1 , 1,%1, 0,%%0,&1&,&%1%, &0&, 1%,%1, 1,&&1, 2, 1,% 1 %% &&%% && " #& &&
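Forming the whole kernel as all nonempty subset sums of the null-space basis can be written as follows (Python, illustrative; like the notebook, the sums are left unreduced, so the last vector contains a 2):

```python
from itertools import combinations

nll = [[0, 0, 1, 1, 1, 0, 1], [1, 1, 0, 0, 1, 1, 0]]

# all nonempty subsets of the basis, summed componentwise
# (not reduced mod 2, matching the ss printed in the notebook)
ss = []
for k in range(1, len(nll) + 1):
    for subset in combinations(nll, k):
        ss.append([sum(col) for col in zip(*subset)])
```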

(* Now we have to solve a_i . w = a_i . ltS . a_i for w, the edge distribution. We skip the vector (0, 0, ..., 0), for which equality follows automatically. *)
(* First we have to obtain the RHS of the above. *)

bb = ConstantArray[0, {Dimensions[ss][[1]], 1}]

{{0}, {0}, {0}}

For[i = 1, i <= Dimensions[ss][[1]], i++, bb[[i]] = Mod[ss[[i]].ltS.ss[[i]], 2]]
bb

{0, 0, 1}


(* The following returns one solution to the equation. All others are arrived at by adding it to the null space. *)

LinearSolve[ss, bb, Modulus -> 2]

LinearSolve::nosol : Linear equation encountered that has no solution.

LinearSolve[{{0, 0, 1, 1, 1, 0, 1}, {1, 1, 0, 0, 1, 1, 0}, {1, 1, 1, 1, 2, 1, 1}},
 {0, 0, 1}, Modulus -> 2]
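Why LinearSolve fails here is easy to see mod 2: the third row of ss is the sum of the first two, while the right-hand side does not respect that relation. A Python check (illustrative):

```python
ss = [[0, 0, 1, 1, 1, 0, 1], [1, 1, 0, 0, 1, 1, 0], [1, 1, 1, 1, 2, 1, 1]]
bb = [0, 0, 1]

row_sum = [(a + b) % 2 for a, b in zip(ss[0], ss[1])]
third = [x % 2 for x in ss[2]]

# mod 2 the rows satisfy ss[0] + ss[1] == ss[2] ...
dependent = (row_sum == third)
# ... so any solution would force bb[0] + bb[1] == bb[2] (mod 2), which fails
consistent = ((bb[0] + bb[1]) % 2 == bb[2] % 2)
```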

(* Here we form all possible binary strings of length 7, which represent all possible edge interactions for K_{3,3}. We evaluate ss.w = ccc for all such possible edge interactions. We find there is a modulo structure in that there are only 4 different possible ss.w's. Then we shall check if it is possible to find an H that satisfies the right-hand condition Mod[ss[[i]].ltS.ss[[i]], 2]. *)

Dimensions[Nll][[1]]

2

(* The www are the edge interactions. *)
www = Tuples[{0, 1}, 7];
www[[33]]

{0, 1, 0, 0, 0, 0, 0}

Dimensions[www][[1]]

128

ccc = ConstantArray[0, {128, 3}];
(* The number of edge interactions, by the number of null vectors plus the trivial null vector. *)
For[t = 1, t <= 128, t++, ccc[[t]] = Mod[ss.www[[t]], 2]]
Dimensions[ccc][[1]]

128

vv = ConstantArray[0, {128, 1}];

For[r = 1, r <= Dimensions[ccc][[1]], r++, vv[[r]] = Count[ccc, ccc[[r]]]]
vv

{32, 32, 32, ..., 32}  (* 128 entries, all equal to 32 *)
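Because ss has rank 2 over GF(2), the map w -> ss.w (mod 2) takes exactly 4 values, each hit by 2^(7-2) = 32 of the 128 binary strings. A Python version of the count (illustrative only):

```python
from itertools import product
from collections import Counter

ss = [[0, 0, 1, 1, 1, 0, 1], [1, 1, 0, 0, 1, 1, 0], [1, 1, 1, 1, 2, 1, 1]]

# tally the image vector ss.w mod 2 over all 2^7 edge interactions w
images = Counter()
for w in product([0, 1], repeat=7):
    img = tuple(sum(r * x for r, x in zip(row, w)) % 2 for row in ss)
    images[img] += 1
```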

(* This implies that there are only four possibilities, as 4 x 32 = 128. Now we need to find these 4 critters. We store them in the matrix xx. *)

ccc = Sort[ccc];
xx = ConstantArray[0, {4, 3}];
For[z = 1, z <= 4, z++, xx[[z]] = ccc[[z*32]]]


xx

{{0, 0, 0}, {0, 1, 1}, {1, 0, 1}, {1, 1, 0}}

!! " ! " ! " ! "" NullSpace ss, Modulus % 2

{{0, 0, 1, 0, 0, 0, 1}, {1, 0, 0, 0, 0, 1, 0}, {1, 0, 1, 0, 1, 0, 0},
 {0, 0, 1, 1, 0, 0, 0}, {1, 1, 0, 0, 0, 0, 0}}

(* The following loop transforms H to a circuit with variables v[i]. *)

a = 1;
For[i = 1, i <= Dimensions[b][[2]], i++,
 For[j = 1, j <= Dimensions[b][[1]], j++,
  If[H[[2 j, i]] == 1, H[[2 j, i]] = 1, H[[2 j - 1, i]] = v[a]; a++]]]

MatrixForm[H]

1     1     v[9]  v[13] v[17] v[21] v[25]
1     1     0     0     0     0     0
v[1]  v[5]  1     1     v[18] v[22] v[26]
0     0     1     1     0     0     0
v[2]  v[6]  v[10] v[14] 1     1     1
0     0     0     0     1     1     1
0     v[7]  0     v[15] 0     v[23] v[27]
1     0     1     0     1     0     0
v[3]  0     v[11] v[16] v[19] 0     v[28]
0     1     0     0     0     1     0
v[4]  v[8]  v[12] 0     v[20] v[24] 0
0     0     0     1     0     0     1

(* Now we calculate ltr(H.CH), where H is the transformed H from above. *)

Ht = Transpose[H]

{{1, 1, v[1], 0, v[2], 0, 0, 1, v[3], 0, v[4], 0},
 {1, 1, v[5], 0, v[6], 0, v[7], 0, 0, 1, v[8], 0},
 {v[9], 0, 1, 1, v[10], 0, 0, 1, v[11], 0, v[12], 0},
 {v[13], 0, 1, 1, v[14], 0, v[15], 0, v[16], 0, 0, 1},
 {v[17], 0, v[18], 0, 1, 1, 0, 1, v[19], 0, v[20], 0},
 {v[21], 0, v[22], 0, 1, 1, v[23], 0, 0, 1, v[24], 0},
 {v[25], 0, v[26], 0, 1, 1, v[27], 0, v[28], 0, 0, 1}}

S = Ht.c

{{1, 1 + v[3], v[1], v[1] + v[4], v[2], v[2] + v[3], v[2] + v[4]},
 {1 + v[7], 1, v[5] + v[7], v[5] + v[8], v[6] + v[7], v[6], v[6] + v[8]},
 {v[9], v[9] + v[11], 1, 1 + v[12], v[10], v[10] + v[11], v[10] + v[12]},
 {v[13] + v[15], v[13] + v[16], 1 + v[15], 1, v[14] + v[15], v[14] + v[16], v[14]},
 {v[17], v[17] + v[19], v[18], v[18] + v[20], 1, 1 + v[19], 1 + v[20]},
 {v[21] + v[23], v[21], v[22] + v[23], v[22] + v[24], 1 + v[23], 1, 1 + v[24]},
 {v[25] + v[27], v[25] + v[28], v[26] + v[27], v[26], 1 + v[27], 1 + v[28], 1}}

ltS = ConstantArray[0, {Dimensions[S][[1]], Dimensions[S][[1]]}];

For[i = 1, i <= Dimensions[S][[1]], i++,
 For[j = 1, i - j > 0, j++, ltS[[i, j]] = S[[i, j]]]]


MatrixForm[ltS]

0             0             0             0             0        0        0
1+v[7]        0             0             0             0        0        0
v[9]          v[9]+v[11]    0             0             0        0        0
v[13]+v[15]   v[13]+v[16]   1+v[15]       0             0        0        0
v[17]         v[17]+v[19]   v[18]         v[18]+v[20]   0        0        0
v[21]+v[23]   v[21]         v[22]+v[23]   v[22]+v[24]   1+v[23]  0        0
v[25]+v[27]   v[25]+v[28]   v[26]+v[27]   v[26]         1+v[27]  1+v[28]  0

For[i = 1, i <= Dimensions[ss][[1]], i++, bb[[i]] = ss[[i]].ltS.ss[[i]]]
bb

{2 + v[15] + 2 v[18] + v[20] + 2 v[26] + 2 v[27],
 2 + v[7] + 2 v[17] + v[19] + 2 v[21] + 2 v[23],
 3 + v[7] + 2 v[9] + v[11] + 2 v[13] + 2 v[15] + v[16] + 2 v[17] + 2 v[18] +
   2 (v[17] + v[19]) + 2 (v[18] + v[20]) + 2 v[21] + 2 v[22] + 2 v[23] + v[24] +
   2 v[25] + 2 v[26] + 2 v[27] + 2 (2 + v[23] + v[27]) + 2 v[28]}

Solve[bb[[1]] == 0 && bb[[2]] == 0 && bb[[3]] == 0, Mode -> Modular]

Solve::svars : Equations may not give solutions for all "solve" variables.

{{Modulus -> 2, v[11] -> 1 + v[16] + v[19] + v[24], v[7] -> v[19], v[15] -> v[20]}}

(* There is a solution! *)


Bibliography

[1] A. Aspuru-Guzik, A.D. Dutoi, P.J. Love, and M. Head-Gordon. Simulated quantum computation of molecular energies. Science, 309:1704, 2005.

[2] A. Barg. On some polynomials related to Weight Enumerators of Linear Codes. SIAM J. Discrete Math., 15:155, 2002.

[3] A. Galluccio and M. Loebl. On the theory of Pfaffian Orientations II. T-joins, K-cuts, and duality of enumeration. The Electronic Journal of Combinatorics, 6:1, 1999.

[4] A. Galluccio, M. Loebl and J. Vondrak. New Algorithm for the Ising Problem: Partition Function for Finite Lattice Graphs. Phys. Rev. Lett., 84, 2000.

[5] A. Mizel, D.A. Lidar, M. Mitchell. Simple proof of equivalence between adiabatic quantum computation and the circuit model. Phys. Rev. Lett., 99:070502, 2007. eprint quant-ph/0609067.

[6] A. Nayak, L.J. Schulman and U. Vazirani. A quantum algorithm for the ferromagnetic Ising model. Unpublished manuscript, available at http://www.math.uwaterloo.ca/~anayak/papers/NayakSV08.pdf.

[7] A.K. Hartmann. Calculation of partition functions by measuring component distributions. Phys. Rev. Lett., 94:050601, 2005.

[8] Alejandro Perdomo, Colin Truncik, Ivan Tubert-Brohman, Geordie Rose and Alán Aspuru-Guzik. On the construction of model Hamiltonians for adiabatic quantum computing and its application to finding low energy conformations of lattice protein models. 2008. preprint arXiv:0801.3625.

[9] Simon Anders and Hans J. Briegel. Fast simulation of stabilizer circuits using a graph-state representation. Physical Review A (Atomic, Molecular, and Optical Physics), 73(2):022334, 2006.

[10] B. Georgeot and D.L. Shepelyansky. Exponential Gain in Quantum Computing of Quantum Chaos and Localization. Phys. Rev. Lett., 86:2890, 2001.

[11] B. Georgeot and D.L. Shepelyansky. Stable Quantum Computation of Unstable Classical Chaos. Phys. Rev. Lett., 86:5393, 2001.

[12] B.C. Berndt, J. Evans and K.S. Williams. Gauss and Jacobi Sums. Wiley-Interscience, New York, 1998.

[13] B.M. Boghosian and W. Taylor. Quantum lattice-gas models for the many-body Schrodinger equation in d dimensions. Phys. Rev. E, 57:54, 1998.

[14] B.M. Terhal and D.P. DiVincenzo. Problem of equilibration and the computation of correlation func- tions on a quantum computer. Phys. Rev. A, 61:022301, 2000.

[15] B.M. Terhal and D.P. DiVincenzo. Classical simulation of noninteracting-fermion quantum circuits, 2001.

[16] S. Bravyi. Lagrangian representation for fermionic linear optics. Quantum Inf. Comput., 5:216, 2005.

[17] C. H. Bennett, E. Bernstein, G. Brassard, and U. Vazirani. Strengths and weaknesses of quantum computing. SIAM J. Comp., 26, 1997.

[18] C. Martinez-Perez and W. Willems. Is the Class of Cyclic Codes Asymptotically Good? IEEE Trans. Information Theory, 52, 2006.

[19] C. Zalka. Grover’s quantum searching algorithm is optimal. preprint quant-ph/9711070.

[20] C. Zalka. Simulating Quantum Systems on a Quantum Computer. Proc. Roy. Soc. London Ser. A, 454:313, 1998.

[21] C.H. Papadimitriou. Computational Complexity. Addison Wesley Longman, Reading, Massachusetts, 1995.

[22] Claude Berge. Graphs and Hypergraphs. North-Holland Publishing Co., 1973.

[23] C.M. Dawson, H.L. Haselgrove, A.P. Hines, D. Mortimer, M.A. Nielsen, and T.J. Osborne. Quantum computing and polynomial equations over the finite field Z2. Quantum Information and Computation, 5, 2005.

[24] D. Aharonov, I. Arad, E. Eban and Z. Landau. Polynomial Quantum Algorithms for Additive approx- imations of the Potts model and other Points of the Tutte Plane. 2007. preprint quant-ph/0702008.

[25] D. Aharonov, V. Jones and Z. Landau. A Polynomial Quantum Algorithm for Approximating the Jones Polynomial. 2006. preprint quant-ph/0511096.

[26] D. Aharonov, W. van Dam, J. Kempe, Z. Landau, S. Lloyd, and O. Regev. Adiabatic Quantum Computation is Equivalent to Standard Quantum Computation. SIAM J. on Computing, 37:166, 2007. preprint quant-ph/0405098.

[27] D. Cheung, D. Maslov and S. Severini. Translation techniques between quantum circuit architectures. 2007. http://www.iqc.ca/~sseverin/qipabs.pdf.

[28] D.A. Lidar. On the Quantum Computational Complexity of the Ising Spin Glass Partition Function and of Knot Invariants. New J. Phys., 6:167, 2004.

[29] D.A. Lidar. Towards Fault Tolerant Adiabatic Quantum Computation. Phys. Rev. Lett., 100:160506, 2008.

[30] D.A. Lidar and H. Wang. Calculating the Thermal Rate Constant with Exponential Speed-Up on a Quantum Computer. Phys. Rev. E, 59:2429, 1999.

[31] D.A. Lidar and O. Biham. Simulating Ising spin glasses on a quantum computer. Phys. Rev. E, 56:3661, 1997.

[32] D.A. Meyer. Quantum mechanics of lattice gas automata. I. One particle plane waves and potentials. Phys. Rev. E, 55:5261, 1997.

[33] D.A. Meyer. Quantum computing classical physics. Proc. Roy. Soc. London Ser. A, 360:395, 2002.

[34] D.Aharonov and I. Arad. The BQP-hardness of approximating the Jones Polynomial. 2006. preprint quant-ph/0605181.

[35] David P. DiVincenzo and Barbara M. Terhal. Classical simulation of noninteracting-fermion quantum circuits. Foundations of Physics, 35(12):1967, 2005.

[36] D.S. Abrams and S. Lloyd. Simulation of Many-Body Fermi Systems on a Universal Quantum Computer. Phys. Rev. Lett., 79:2586, 1997.

[37] E. Bernstein and U. Vazirani. Quantum Complexity Theory. In Proceedings of the 25th Annual ACM Symposium on Theory of Computing, page 11, New York, NY, 1993. ACM.

[38] E. Jané, G. Vidal, W. Dür, P. Zoller and J.I. Cirac. Simulation of quantum dynamics with quantum optical systems. 2002. arXiv:quant-ph/0207011.

[39] E. Knill. Fermionic Linear Optics and Matchgates, 2001. eprint quant-ph/0108010.

[40] E. Knill and R. Laflamme. On the power of one bit of quantum information. 1998. preprint quant-ph/9802037.

[41] E. Knill and R. Laflamme. Quantum computing and quadratically signed weight enumerators. Inf. Proc. Lett., 79:173, 2001. eprint quant-ph/9909094.

[42] E. Witten. Topological quantum field theory. Comm. Math. Phys., 117, 1988.

[43] F. Jaeger. The Tutte Polynomial and Link Polynomials. Proc. Amer. Math. Soc., 103:647, 1988.

[44] F. Jaeger, D. Vertigan, D. Welsh. On the Computational Complexity of the Jones and Tutte polynomials. Math. Proc. Cambridge Philos. Soc., 108:35, 1990.

[45] Frank van Bussel and Joseph Geraci. A Note on Cyclotomic Cosets and an Algorithm for finding Coset Representatives and Size. 2007. preprint arXiv:cs/0703129.

[46] G. Brassard, P. Hoyer and A. Tapp. Quantum Counting. 1998. preprint quant-ph/9805082.

[47] G. Ortiz, J.E. Gubernatis, E. Knill, and R. Laflamme . Quantum Algorithms for Fermionic Simulations. Phys. Rev. A, 64:022319, 2001.

[48] G. Vidal. Efficient Classical Simulation of Slightly Entangled Quantum Computations. Phys. Rev. Lett., 91:147902, 2003.

[49] G.E. Andrews. Number Theory. Dover Publications Inc., 1994.

[50] I. Kanter. Potts-glass models of neural networks. Phys. Rev. A, 37, 1988.

[51] I.L. Markov and Y. Shi. Simulating quantum computation by contracting tensor networks. 2007. preprint quant-ph/0511069.

[52] Ivan Kassal, Stephen P. Jordan, Peter J. Love, Masoud Mohseni and Alán Aspuru-Guzik. Quantum algorithms for the simulation of chemical dynamics. 2008. preprint arXiv:0801.2986.

[53] J. Bonca and J.E. Gubernatis. Real-Time Dynamics from Imaginary-Time Quantum Monte Carlo Simulations: Test on Oscillator Chains. 1995. preprint cond-mat/9510098.

[54] J. Denef and F. Vercauteren. Counting Points on Cab Curves using Monsky-Washnitzer Cohomology. 2004. http://citeseer.ist.psu.edu/denef04counting.html.

[55] J. Geraci. A BQP-complete problem related to the Ising model partition function via a new connection between quantum circuits and graphs. 2008. preprint arXiv:0801.4833.

[56] J. Geraci and D.A. Lidar. On the Exact Evaluation of Certain Instances of the Potts Partition Function by Quantum Computers. Communications in Mathematical Physics, 279:735, 2008.

[57] J. Gross and J. Yellen. Graph theory and its applications. Discrete mathematics and its applications. CRC Press, Florida, 1999.

[58] J. Gross and J. Yellen. Graph theory and its applications. Discrete mathematics and its applications. CRC Press, USA, 1999.

[59] J. Hopcroft and R. Tarjan. Efficient Planarity Testing. Journal of the ACM, 21(4), 1974.

[60] J. Yepez. A quantum lattice-gas model for computation of fluid dynamics. Phys. Rev. E, 63:046702, 2001.

[61] J.H. van Lint. Introduction to Coding Theory. Springer-Verlag, 1982.

[62] J.J. Rice, G. Stolovitzky, Y. Tu, and P. de Tombe. Ising model of cardiac thin filament activation with nearest-neighbor cooperative interactions. Biophysical Journal, 84, 2003.

[63] Richard Jozsa and Akimasa Miyake. Matchgates and classical simulation of quantum circuits. 2008. eprint arXiv:0804.4050.

[64] K. Kedlaya. Quantum Computation of zeta functions of curves. 2005. preprint math.NT/0411623.

[65] P.W. Kasteleyn. Graph Theory and Crystal Physics - Graph Theory and Theoretical Physics. London Academic Press, London, 1967.

[66] L.H. Kauffman. Knots and Physics, volume 1 of Knots and Everything. World Scientific, Singapore, 2001.

[67] L.-A. Wu, M.S. Byrd, and D.A. Lidar. Polynomial-Time Simulation of Pairing Models on a Quantum Computer. Phys. Rev. Lett., 89:057904, 2002.

[68] L.Baumert and R. McEliece. Weights of Irreducible Cyclic Codes. Inform. and Control, 20:158, 1972.

[69] L.E. Reichl. A Modern Course in Statistical Physics. John Wiley & Sons, New York, 1998.

[70] L.G. Valiant. Quantum circuits that can be simulated classically in polynomial time. SIAM J. on Computing, 31:1229, 2002.

[71] L.H. Kauffman and S.J. Lomonaco. q-Deformed spin networks, knot polynomials and anyonic topo- logical computation. J. of Knot Theory and its Ramifications, 16:267, 2007.

[72] L.K. Grover. A fast quantum mechanical algorithm for database search. In Proceedings of the 28th Annual ACM Symposium on the Theory of Computing, page 212. ACM, New York, NY, 1996.

[73] L.M. Adleman, J. Demarrais and M.A. Huang. Quantum computability. SIAM J. COMPUT., 26(5):1524–1540, 1997.

[74] L.R. Vermani. Elements of Algebraic Coding Theory. Chapman and Hall Mathematics, 1996.

[75] M. Bordewich, M. Freedman, L. Lovász and D. Welsh. Approximate counting and quantum computation. Combinatorics, Probability and Computing, 14:737, 2005.

[76] M. Hein, J. Eisert and H.J. Briegel. Multi-party entanglement in graph states. 2003. preprint quant-ph/0307130.

[77] M. Khovanov. A categorification of the Jones polynomial. Duke Math. J., 101:359, 2000.

[78] M. Moisio. Exponential Sums, Gauss Sums and Cyclic Codes. 1997. www.uwasa.fi/~mamo/vaitos.pdf.

[79] M. Moisio. Two recursive algorithms for computing the weight distribution of certain irreducible cyclic codes. IEEE Trans. Information Theory, 45:1244, 1999.

[80] M. Stefanak, W. Merkel, W.P. Schleich, D. Haase and H. Maier. Factorization with Gauss sums: scaling properties and ghost factors. New J. Phys., 9:370, 2007.

[81] M. Terraneo, B. Georgeot, D.L. Shepelyansky. Strange attractor simulated on a quantum computer. Eur. Phys. J. D, 22:127, 2003.

[82] M. Van den Nest, W. Dür and H.J. Briegel. Classical spin models and the quantum stabilizer formalism. 2006. quant-ph/0610157.

[83] M. Van den Nest, W. Dür, R. Raussendorf and H.J. Briegel. Quantum algorithms for spin models and simulable gate sets for quantum computation. 2008. eprint arXiv:0805.1214v1.

[84] M. Van der Vlugt. Hasse-Davenport Curves, Gauss Sums and Weight Distributions of Irreducible Cyclic Codes. J. Number Theory, 55:145, 1995.

[85] M.A. Nielsen and I.L. Chuang. Quantum Computation and Quantum Information. Cambridge Uni- versity Press, Cambridge, UK, 2000.

[86] M.H. Freedman, A. Kitaev and Z. Wang. Simulation of topological field theories by quantum computers. Commun. Math. Phys., 227:587, 2002. eprint quant-ph/0001071.

[87] M.H. Freedman, A. Kitaev, M.J. Larsen, and Z. Wang. Topological Quantum Computation. Bull. Amer. Math. Soc., 40:31, 2003. eprint quant-ph/0101025.

[88] Milton Abramowitz and Irene A. Stegun. Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Dover, New York, ninth dover printing, tenth gpo printing edition, 1964.

[89] M.J. Bremner, C.M. Dawson, J.L. Dodd, A. Gilchrist, A.W. Harrow, D. Mortimer, M.A. Nielsen and T.J. Osborne. A practical scheme for quantum computation with any two-qubit entangling gate. 2002. preprint quant-ph/0207072v1.

[90] M.N. Vyalyi. Hardness of approximating the weight enumerator of a binary linear code. 2003. eprint cs.CC/0304044.

[91] M.R. Jerrum, A. Sinclair. Polynomial-time approximation algorithms for the Ising model. Proc. 17th ICALP, EATCS, page 462, 1990.

[92] N. Alon, A.M. Frieze, D. Welsh. Polynomial Time Randomised Approximation Schemes for Tutte-Grothendieck Invariants: The Dense Case. Electronic Colloquium on Computational Complexity, 1:Report TR94-005, 1994.

[93] N. Robertson and P.D. Seymour. Graph Minors. XX. Wagner's conjecture. Journal of Combinatorial Theory, Series B, 92, 2004.

[94] N. Schuch, M. Wolf, F. Verstraete and J. Cirac. Simulation of Quantum Many-Body Systems with Strings of Operators and Monte Carlo Tensor Contractions. Phys. Rev. Lett., 100, 2008.

[95] Nadav Yoran and Anthony J. Short. Classical simulation of limited-width cluster-state quantum computation. Physical Review Letters, 96(17):170503, 2006.

[96] M.A. Nielsen. Universal quantum computation using only projective measurement, quantum memory, and preparation of the |0⟩ state, 2001. preprint quant-ph/0108020.

[97] P. Wocjan and J. Yard. The Jones polynomial: quantum algorithms and applications in quantum complexity theory. 2006. quant-ph/0603069.

[98] P.W. Shor. Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. SIAM J. on Comp., 26:1484, 1997.

[99] P.W. Shor and S.P. Jordan. Estimating Jones polynomials is a complete problem for one clean qubit. 2007. preprint arXiv:0707.2831.

[100] R. Lidl and H. Niederreiter. Finite Fields, volume 20 of Encyclopedia of Mathematics. Cambridge University Press, Cambridge, 1997.

[101] R. Raussendorf, D.E. Browne and H.J. Briegel. A one-way quantum computer. Phys. Rev. Lett., 86:5188, 2001.

[102] R. Shrock. Exact Potts model partition functions on ladder graphs. Physica A, 283, 2000.

[103] R.H. Swendsen and J.-S. Wang. Nonuniversal critical dynamics in Monte Carlo simulations. Phys. Rev. Lett., 58:86, 1987.

[104] Richard Jozsa and Noah Linden. On the role of entanglement in quantum computational speed-up. 2002. preprint quant-ph/0201143v2.

[105] Rolando Somma, Howard Barnum, Gerardo Ortiz and Emanuel Knill. Efficient solvability of Hamiltonians and limits on the power of some quantum computational models. Physical Review Letters, 97(19):190501, 2006.

[106] R.P. Feynman. Simulating Physics with Computers. Intl. J. Theor. Phys., 21:467, 1982.

[107] S. Bravyi and R. Raussendorf. On measurement-based quantum computation with the toric code states. Physical Review A, 76(2), 2007.

[108] S. Lloyd. Almost any quantum logic gate is universal. Phys. Rev. Lett., 75:346, 1995.

[109] S. Lloyd. Universal Quantum Simulators. Science, 273:1073, 1996.

[110] S. Nechaev. Statistics of knots and entangled random walks. eprint cond-mat/9812205.

[111] S. Wiesner. Simulations of Many-Body Quantum Systems by a Quantum Computer. eprint quant- ph/9603028.

[112] Sergey Bravyi. Contraction of matchgate tensor networks on non-planar graphs. 2008. arXiv:0801.2989.

[113] V.F.R. Jones. A Polynomial Invariant for Knots via von Neumann Algebras. Bull. Amer. Math. Soc., 12:103, 1985.

[114] V.F.R. Jones. On Knot Invariants Related to Some Statistical Mechanical Models. Pacific J. Math., 137:311, 1989.

[115] W. van Dam and G. Seroussi. Efficient Quantum Algorithms for Estimating Gauss Sums. 2002. eprint quant-ph/0207131.

[116] W. van Dam and G. Seroussi. Quantum algorithms for estimating Gauss sums and calculating discrete logarithms. 2002.

[117] D.J.A Welsh. Matroid Theory. Academic Press Inc, London, 1976.

[118] D.J.A. Welsh. Complexity: Knots, Colourings and Counting. London Mathematical Society Lecture Note Series 186. Cambridge University Press, London, 1993.

[119] Y. Aubry and P. Langevin. On the weights of irreducible cyclic codes. 2005. http://iml.univ-mrs.fr/~aubry/LNCS.pdf.