<<

Technische Universit¨atM¨unchen Computational Science and Engineering (Int. Master’s Program)

Master’s Thesis Contraction Schemes for 2D Problems in Quantum Simulation

Ing. Javier Enciso

First examiner: Univ.-Prof. Dr. Thomas Huckle Second examiner: Univ.-Prof. Dr. Hans-Joachim Bungartz Assistant advisor: Dipl.-Math. Konrad Waldherr

Thesis handed in on 30.09.2010

Declaration

I hereby declare that this thesis is entirely the result of my own work except where otherwise indicated. I have only used the resources given in the list of references.

M¨unchen, 30.09.2010 Javier Enciso Contents

1 Introduction 1 1.1 Overview ...... 1 1.2 Basic Notions ...... 1 1.3 Problem Definition ...... 4 1.4 Challenges ...... 5

2 Quantum Simulation 7 2.1 Overview ...... 7 2.2 Quantum Dynamics ...... 7 2.3 Spin Interaction ...... 9 2.4 Ising Model ...... 11 2.5 Numerical Solution ...... 12

3 Ground State Calculation 19 3.1 Overview ...... 19 3.2 Data Representation ...... 19 3.3 Contraction ...... 23 3.4 Numerical Ansatz ...... 24

4 Numerical Experiments 29 4.1 Overview ...... 29 4.2 Experimental Setup ...... 29 4.3 Results ...... 35 4.4 Errors ...... 38

5 Conclusions and Outlook 41 5.1 Overview ...... 41 5.2 Conclusions ...... 41 5.3 Further Improvements ...... 42

A Computer Program 43 A.1 Function Specification ...... 43 A.2 Code Example ...... 43 A.3 Licence ...... 45

i B Configuration Management 47 B.1 Overview ...... 47 B.2 Hardware Configuration ...... 47 B.3 Software Configuration ...... 47

Bibliography 51

Index 53

ii List of Figures

1.1 Dimensional and time requirements ...... 5

2.1 1D lattice with n sites ...... 8 2.2 2D lattice with m × n sites ...... 9 2.3 Bloch sphere ...... 10 2.4 1D lattice with external field ...... 12 2.5 Sparsity pattern in Hamiltonians ...... 13 2.6 Sparsity pattern after exact diagonalisation ...... 14 2.7 Integration via Monte Carlo method ...... 16 2.8 Decomposition and sweep directions in DMRG ...... 18

4.1 Setup for numerical experiments ...... 30 4.2 Sparsity pattern in numerical experiments ...... 30 4.3 Tensor blocks ...... 31 4.4 Index set arrangement for six ...... 33 4.5 Simulation results ...... 36 4.6 Energy level after 100 iterations ...... 37

B.1 Profiling sample output ...... 49

iii iv List of Tables

3.1 Mapping components of final state ...... 21

4.1 Tensor configurations ...... 34 4.2 Statistics for six tensors ...... 35 4.3 Statistics for four tensors ...... 36 4.4 Statistics for three tensors ...... 36 4.5 Statistics for two tensors ...... 37 4.6 Statistics for ground state calculation ...... 37 4.7 Condition number for selected Hamiltonians ...... 39

A.1 Function description ...... 44

B.1 Hardware configuration ...... 48 B.2 Software configuration ...... 48

v vi List of Notations

C Complex number (2) ⊗ Kronecker product operator (2)

H Hamiltonian (4)

Ln 1D lattice with n sites (8)

Lm×n 2D lattice with m × n sites (8)

σ0, . . . , σ3 (9)

In n × n Identity (9) MC Monte Carlo (14)

QMC Quantum Monte Carlo (14)

DoF Degrees of Freedom (14)

sinc x Sinc Function (14)

D&C Divide and Conquer (17)

DMRG Density Matrix Renormalisation Group (17)

R(H, |ψi) Rayleigh Quotient (24)

ALS Alternating Least Squares (25)

CPU Central Process Unit (47)

GPU Graphic Process Unit (47)

RAM Random Access Memory (47)

HD Hard Disk (47)

GB Giga Bytes (47)

vii viii Acknowledgements

Thanks are due to the Colombian Government, Colfuturo and the German Academic Exchange Service (DAAD) for the financial support through the Scholarship-loan Programme 2008 and Scholarship A/08/72528 respectively. Thanks to the people at the education and Public Outreach Department (ePOD) at the European Southern Observatory (ESO) for their outstanding collaboration during the internship and the writing process. Thanks to Univ.-Prof. Dr. Thomas Huckle at the Institute for Scien- tific Computing for his continuous advice and extraordinary support along the two years of the Master Programme. Thanks to Dipl.-Math. Konrad Waldherr for his valuable discussions and sharp thoughts that smoothed the development of the scripts. Thanks to Wilmer Hern´andezfor the superb visual ideas, spooky sound effects, and mind-blowing animation presented during the talk at the Scientific Computing in Computer Science colloquium. I thank my parents, brother, and sister for their omniscient support and good-will. Last but not least, special thanks are due to my beautiful wife for her kind support, lovely understanding, and invaluable companion in the time being. Perhaps my memory is not good enough to remember all people that made this academic project possible, however, my most sincere gratitude is with all of them, after all you know how you are. This document was produced with LATEX, vector graphics were created with Inkscape, and the drawings were processed with GIMP.

ix x Abstract

Quantum simulation describes a great variety of process observed in labo- ratories [16] and daily life [5], however, as soon as the computer programs involve more particles, time and storage turn out to be critical issues that constraint the execution of larger simulations. Tensor Contraction Schemes for 2D Problems in Quantum Simulation is based on the ideas of Huckle and Waldherr [11] and is a courageous attempt to describe many-body sys- tems accurately without the intrinsic dimensional problems. To achieve this goal, two well-established techniques are applied to represent large vectors implicitly and, through a variational-based numerical method, minimise the related optimisation problem. Remarkably, the contraction schemes demand linear storage and time resources only. As result, the proposed scheme was able to produce solutions up to the first significant digit for 12-particle sys- tem, taking no more than few iterations. Finally, there are still open issues regarding the accuracy in the vector representation and the local optimality of the iterative method that must to be addressed in future investigations.

xi Chapter 1

Introduction

“Number is the ruler of forms and ideas, and the cause of gods and daemons” Pythagoras

1.1 Overview

From the scientific crisis that leaded the definition of a new set of ideas to explain really to the difficulties in the realisation of the quantum simula- tions, first chapter is intended to provide to the non-experienced reader a general introduction into the field of quantum simulation. In section 1.2, the basic introduction to the realm of quantum mechanics is presented, mak- ing emphasis in the clever notation used by physicists. In section 1.3, the definition of the problem that motivated the implementation of the contrac- tion schemes is described. Finally in section 1.4, the main challenges in the implementation of the numerical scheme are listed.

1.2 Basic Notions

Newtonian mechanics provide reasonable explanation for the motion of bod- ies we experience in daily life [5]. From planes flying around to the tidal effects that govern the sees and the beautiful formation of the rainbow, nonetheless, the notions underlying classical mechanics straggle in the accu- rate description of the really fast/small systems, since then a new paradigm in science to reconcile reason with reality was needed. In 1907, the first problem was addressed by [6, 21] ele- gantly. Thanks to the theoretical contributions done by the German physi- cist one hundred years ago, modern marvels such as the Global Positioning System[3] have been possible. Einstein’s brilliant insight was able to explain the of objects subject to really intense gravitational fields, with ve- locities approaching the cosmic barrier of the speed of light. Certainly, his

1 moment of genius redefined mainstream science and paved the highway for the upcoming theories that would defy common sense. The second problem, related to the infinite small objects like electrons and photons, posed serious difficulties for the scientific community of the late nineteenth and early twentieth century. Among them Max Planck, who in 1900 settled the foundation of the latter know Quantum Mechanics, a new conceptual framework so powerful, that no other theory can describe at the exquisite level of detail and astonishing accuracy the phenomena at the atomic scale depicted by the quantum formalism. Quantum mechanics introduced several concepts far beyond common sense. For example, according to Heisenberg’s uncertainty principle [10], it is not possible to know particle’s position and momentum simultaneously without disturbing the other. Heisenberg’s observations implied not only profound changes in physics but a revolution in the understanding of real- ity. Additionally, quantum superposition allows particles to be in all possible states at the same time, in other words, a quantum coin might be heads and tails at the same time and the real outcome will be known after measure- ment only. Althought no formal definition for a state of a system has been presented, it will be developed thoroughly in the following section.

1.2.1 Quantum State Contrary to the intuitive notion, a quantum state is a abstract mathematical construction. It does not describe any physical or measurable property of the system, instead it supports the formalism of quantum mechanics and allows the evolution of the system through the application of unitary operations. The foundation for the evolution of quantum systems will be presented in the next section. In mathematical terms, a quantum state |ψi is a complex row vector that lays on a finite Hilbert space H. The d = dim (H) of the Hilbert space is also replicated by the number of components of the state |ψi, which grows exponentially with the number of particles in the system. A quantum state |ψi can be expressed as the linear combination of a set P P 2 of orthonormal states |ψi = i ci|kii where ci ∈ C and i |ci| = 1. The orthogonality of the support vectors |kii implies that the inner product hkj|kii = 0 for all i, j ∈ N and i 6= j. Analogously, the normality of the basis |kii is verified by hki|kii = 1.

Dirac Notation: In 1939, Paul Dirac introduced a clever notation that became standard de facto in the field of quantum computation and quantum information [13]. The so-called Bra-Ket notation uses the symbol |·i to represent column vector and h·| for its complex conjugate. 1 The linear combination of the computational basis |0i = and |1i = 0

2 0 can represent non-trivial quantum states |ψi = α|0i + β|1i, where 1 2 2 α, β ∈ C and α + β = 1. The interaction of between the state of particles is modelled via the Kronecker product,

|ψi ⊗ |φi = α|0i + β|1i ⊗ γ|0i + δ|1i = αγ|00i + αδ|01i + βγ|10iβδ|11i ,

where the vector |00i is

|00i = |0i ⊗ |0i 1 1 = ⊗ 0 0 1 0 =   . 0 0

The same analysis holds for the remaining vectors |01i, |10i, and |11i, 0 0 0 1 0 0 which correspond to column vectors  ,   , and   respectively. 0 1 0 0 0 1 As we have seen, Dirac Notation can easily represent large vectors in a compact way, which turns out to be useful considering the explosive grow in the number of components when dealing with many-body systems.

Ground States

Ground states are quantum states where the minimum level of energy is achieved. They are vital in quantum simulation because they characterise quantum systems without any external influence, making possible to en- gineer strategies for their controlled evolution. Assuming the rules for the evolution of a quantum system are summarised by a matrix, the ground state is the eigenvector related to the minimum eigenvalue of the given matrix. As direct consequence of their origin, ground states might not be unique. When there are multiple ground states, they are called degenerated states and they are derived from the multiplicity of the minimum eigenvalue.

3 1.2.2 Hamiltonian A Hamiltonian H is an operator used in quantum mechanics to account for the total system’s energy. It considers the interaction between neighbour particles as well as the contribution from external fields to produce the rules that allow the evolution in time of the many-body system. In simpler terms, a Hamiltonian H takes the form of Hermitian matrix which operates on quantum states at certain time to produce the system’s state at the next time step. Hamiltonians in quantum simulation are characterised by their high spar- sity. They are created out of the linear combination of Kronecker products of 2 × 2 matrices. As expected from the definition of the Kronecker prod- uct, Hamiltonian’s dimension d = 2p, where p is the number of interacting particles, grows exponentially with the number of particles making virtually impossible to store such matrices for even moderately sized systems.

1.3 Problem Definition

Ground state calculation involves the solution of the minimisation problem

hψ|H|ψi min (1.1) ⊗2p |ψi∈C hψ|ψi where the quantum state |ψi is given by the Kronecker product of states |ψ(1) . . . ψ(q)i, the Hamiltonian H is a highly sparse matrix, and the number of tensors q is smaller or equal to the number of particles p. Because the number of components in the state |ψi increases exponen- tially with the number of particles p, we are aiming to implement a numerical scheme capable of

1. constructing a quantum state |ψi able to describe systems with large number of particles,

2. calculating the inner product hψ|φi between two arbitrary non-trivial states, and

3. minimising problem (1.1) without storing neither huge Hamiltonians nor large vectors explicitly.

To some extend, the experimental results would be comparable to those produced in the exact solution of the problem. Leaving the threshold oc- curring about 15 particles, it is not feasible to store the problem in the simulation’s machine, hence only conjunctures about the correctness of the outcome will be available.

4 (a) Dimensional requirements (b) Time requirements

Figure 1.1: Dimensional and time requirements associated with quantum simulation. The exponential nature of the problem makes it intractable under normal assumptions and forces to find alternative solutions.

1.4 Challenges

Understanding the problem’s domain is the first implicit challenge. Albeit the foundation of quantum mechanics relies on the mathematics of linear al- gebra and complex numbers, the incredible diversity of ideas and approaches make it an ideal scenario to test the ability to grasp concepts on the fly, ap- propriate them, and integrate them in the solution of complex problems. The high dimensionality of the related minimisation problem is the sec- ond challenge. The number of elements in matrices and vectors increases exponentially with the number of particles involved, therefore even prob- lems dealing with 50 sites cannot be represented by traditional means (e.g., full matrix). One can think in compressed formats such coordinate form, compressed sparse row, diagonalwise storage, etc., to store sparse matrices and vectors and circumvent the issue, however, no one knows beforehand about the sparsity of the resulting vectors, hence compressed formats might be insufficient to treat dense vectors. In Figure 1.1 time and storage requirements for the simulation of many- body system is presented. As the draw suggest, there is an exponential relationship between the number of particles and the performance indicators, therefore it is needed a new approach that let the simulations go beyond tens of particles without losing accuracy and within reasonable times.

5 6 Chapter 2

Quantum Simulation

“I think I can safely say that nobody understands quantum me- chanics” Richard Feynman

2.1 Overview

Since ancient times, human beings have been always fascinating by the true of nature of matter and pursued their understanding trough simpler and sim- pler principles, powerful enough, to accurately describe the vast majority of phenomena seen in daily life. The idea of indivisible blocks of matter was first proposed by early Indian and Greek philosophers and took centuries until chemists and physicists unveiled the multiple facets of the atom. Years later, a gang of physicists drastically changed the way reality was thought. They introduced a complete set of new ideas and invented quantum me- chanics, now nature is telling us an unheard story far beyond our wildest imagination. Along the lines of this chapter, conceptual elements that enable scien- tists to perform quantum simulations are discussed. In section 2.2, further details about the quantum dynamics are presented as well as the basic setup considered in the time evolution of the atomic systems. In section 2.3, the interaction between atoms reveals how the Hamiltonian is built. In section 2.4, the simplest model for particle interaction shows how elegant mathe- matics can describe nature at atomic level. Finally, in section 3.4, a concrete review on the major numerical approaches used in quantum simulation is presented.

2.2 Quantum Dynamics

Quantum dynamics refers to the evolution in time of quantum systems. Understanding the behaviour of matter at atomic scale enables researches

7 0 1 k-1 k k+1 n-2 n-1 Figure 2.1: 1D lattice with n sites and open boundary conditions. Every intermediate site has exactly two neighbours. The boundary sites have only one neighbour each. Labels are assigned from left to right. to predict material’s macroscopic properties. For example in the design of topological insulators [12] – materials that behaves like insulators within their interior and conductors in their boundaries – it is worth to know the external conditions that allow conventional materials exhibit such remark- able properties, in other words, it is important to know the conditions for state transitions. The numerical study of the state evolution of atoms and molecules is a demanding computational task. Scientists have been puzzled by the amount of resources needed to simulate relative small systems. Luckily, physicists came about with an elegant insight to solve the atomic conundrum. Instead of simulating hundreds of thousands of individual particles, they focused in small samples of the material only and then inferred the general proper- ties out of the local observations, this approach is also known as Statistical Physics and describes nuclear reactions accurately, shedding light on prob- lems from a variety of disciplines including biology, chemistry, neurology among others.

2.2.1 Lattice Ideally, atoms and molecules are assumed to be localed in a uniform struc- ture called Lattice. A 1D lattice Ln, as shown in Figure 2.1, is a collection of n sites where every site has two neighbours except the boundary ones. Since the particles placed on the lattice’s boundaries do not interact each other, the 1D lattice has open boundary conditions. In contrast, when there are interactions between the boundary elements, the lattice is said to have pe- riodic boundary conditions. As a consequence of the two types of boundary conditions, 1D Hamiltonians of this study differ in one addend only, the one that accounts for the bound interaction between the first and last particles. Naturally, the 1D case might be extended to the second dimension. A 2D lattice Lm×n with m × n sites and open boundary conditions is shown in Figure 2.2. This is the standard arrangement of atoms utilised in the subsequent chapters and whenever a new lattice is introduced, it is assumed to be a 2D one. As a convention, every site p within the lattice is labeled by

p = i · j + j (2.1) where i and j are the abscissa and ordinate respectively. For instance, given a rectangular lattice L2×3, the label for a particle located at (1, 2) is 4 = 1 · 2 + 2.

8 m(n-1) m(n-1)+1 mn-2 mn-1

0 1 n-2 n-1 Figure 2.2: 2D lattice with m × n sites and open boundary conditions, distributed in m rows and n columns. The label for each site is assigned from left to right and from bottom to top, starting from the left-most lower and ending at the right-most upper corner.

Finally, although out of reach for the present study, the particles are allowed to be placed in the 3D scenario. If that is the case, labels for the 3D lattice are given from the left-most lower to the right-most upper corners, and from front to back.

2.3 Spin Interaction

Spin is a fundamental property present in elementary particles as well in atoms and molecules. It suggests some sort of rotation around an axis and, from the macroscopic point of view, it explains phase transitions in materials; for example from iron to a permanent magnet. For the sake of the non-physicist reader, the spin will be regarded as a binary quantity which after measurement is either 1 or 0. Nonetheless, spin superposition is allowed and destroyed after the measurement, producing one of the two initial outcomes only. In a geometrical sense, the spin – a quantum state – can be modelled as a unit vector circumscribed by the Bloch sphere [20]. The Bloch sphere, shown in Figure 2.31, is a graphical representation of a two-level pure state2 |ψi, where the projection onto the z-axis is the outcome of the measurement

1Source: Wikipedia contributors. “Bloch sphere”. Wikipedia, The Free Encyclopedia. Wikipedia, The Free Encyclopedia, 2 Aug. 2010. Web. 18 Aug. 2010. 2A pure quantum state can be written by a single ket vector, or as a sum of basis states.

9 z 0

ψ θ

φ y

x

1 Figure 2.3: Bloch sphere is a geometrical representation of a two-level pure state. The projection onto the z-axis is the outcome after measurement, it can be 0 or 1. process. In general the spin is fully defined by

|ψi = cos(θ/2)|0i + eiφ sin(θ/2)|1i (2.2) where 0 ≤ θ ≤ π, 0 ≤ φ ≤ 2π, and eiφ = cos φ + i sin φ. Spin interacts with neighbour particles and external forces to produce new states. The mutual influence of the two-level system is realised by the application of the Pauli Matrices. Pauli Matrices (see equations (2.3)), named after Wolfgang Pauli, are represented by the Greek letter σi, where i ∈ {0, 1, 2, 3}. They are 2 × 2 complex Hermitian3 and Unitary4 matrices, capable to rotate vectors that represent the quantum states in any direction. This fact is incredibly useful since by the rotation, the energy of the system is unchanged and the conservation laws of physics remain safe under the numerical treatment.

For the quantum simulations developed in latter sections, σ3-matrix – neighbour interaction –, σ1-matrix – external interaction – and σ0-matrix – no interaction – will be taken into account only. The linear combination of the Kronecker product of the 2 × 2 matrices reproduces the evolution in time of the quantum systems faithfully.

3A Hermitian matrix A satisfies A = A†. 4 † † A Unitary matrix U of size n × n satisfies UU = U U = In.

10 1 0 σ := (2.3a) 0 0 1 0 1 σ := (2.3b) 1 1 0 0 −i σ := (2.3c) 2 i 0 1 0  σ := (2.3d) 3 0 −1

2.4 Ising Model

By early 20th century, German physicists Wilhelm Lenz presented a mathe- matical model to explain ferromagnetism – the mechanism by certain mate- rials transform into permanent magnets – in a bare but convincing manner. Few years latter, one of his doctoral student named Ernst Ising solved the problem for the 1D case and wrote his name in the physics textbooks. Nowa- days, the Ising Model is one of the standard test problems where a variety of novel numerical methods are examined looking for the fastest and most accurate technique in quantum simulation. In its simplest form, the Ising model represents a 1D lattice Ln with open boundary conditions (see Figure 2.4). On each site, the particle’s spin is affected by the interaction with its neighbour, as well as the external field. The Hamiltonian H for an Ising-like system is given by

M X (k) (k) H = αkQ1 ⊗ ... ⊗ QN , (2.4) k=1 where N is the number of sites, the αk is the strength of the magnetic (k) field and Qi is one of the Pauli matrices shown in (2.3). For instance the Ising-like Hamiltonian for a 4-particle systems with open boundary conditions looks like

H = α1σ3 ⊗ σ3 ⊗ σ0 ⊗ σ0 +

α2σ0 ⊗ σ3 ⊗ σ3 ⊗ σ0 +

α3σ0 ⊗ σ0 ⊗ σ3 ⊗ σ3 +

α4σ1 ⊗ σ0 ⊗ σ0 ⊗ σ0 +

α5σ0 ⊗ σ1 ⊗ σ0 ⊗ σ0 +

α6σ0 ⊗ σ0 ⊗ σ1 ⊗ σ0 +

α7σ0 ⊗ σ0 ⊗ σ0 ⊗ σ1 .

11 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

0 1 k-1 k k+1 n-2 n-1

------

Figure 2.4: 1D lattice with open boundary conditions under the influence of an external field. There are n sites, labeled from 0 to n − 1, subject to a perpendicular magnetic field. The interaction between neighbour particles plus to the external field transforms the particle’s spin. This configuration is the setup for the Ising model.

Evidently, the resulting Hamiltonian H is a Hermitian-Unitary 16 × 16 matrix with sparsity index ∼ 70%5 and a well defined pattern around the main diagonal (see Figure 2.5a). For the interested reader, an award-wining introduction to the Ising model [4] is also available.

2.5 Numerical Solution

Once we have established the mathematical model, we can think in the best numerical technique to solve the eigenvalue problem associated with the Hamiltonian that describes the evolution in time of the quantum system and find the minimum eigenvalue, which corresponds to the ground state. Far for being a complete survey on the numerical methods used in quan- tum simulation, the idea behind the following lines is to elaborate on the major approaches employed by physicists, from the early stages to current developments, to solve the optimisation problem.

2.5.1 Exact Diagonalisation Exact diagonalisation is a numerical technique that roots its origin in the well-known Gaussian elimination algorithm. By the successive application of elementary rows-wise operations, the hope is to obtain an explicit expres- sion for the calculation of eigenvalues and eigenvectors, therefore ground states.

5The sparsity index is one minus the number of non-zero elements divided by the total number of the entries.

12 (a) 4-particle Hamiltonian (b) 5-particle Hamiltonian (c) 6-particle Hamiltonian

Figure 2.5: Sparsity pattern observed in three different sized Hamiltonians. Figure 2.5a shows a 16×16 Ising-like Hamiltonian with sparsity index ∼ 70%. Figures 2.5b and 2.5c show a 32×32 and 64×64 Ising-like Hamiltonian with sparsity index ∼ 83% and ∼ 90% respectively. The resulting Hamiltonian is Hermitian and built upon the linear combination of the Kronecker product between Pauli matrices.

To refresh the memory of the distracted reader – not to mention to improve author’s writing skills –, the eigenvalue problem (2.5) deals with finding the λ-values that satisfy

Ax = λx , (2.5) where A is a square matrix, x is a row-vector and λ is the eigenvalue. Although by the application of the Gaussian elimination it is possible to find all eigenvalues of the matrix A, only the minimum is relevant for the calculation of ground states. The unexpected feature turns out to be inconvenient due to the computational cost implied in the computation of unnecessary values. To begin with, equation 2.5 is taken into a more convenient way, say

(A − λI)x = 0 . (2.6) −1 Assuming the inverse matrix (A − λI) exists, solving the so-called characteristic polynomial

det (A − λI) = 0 , (2.7) will end up with the eigenvalues of the matrix A. This na¨ıve approach is useful for small systems only. In Figure 2.6 a 60×60 sparse matrix6 is used for the application of the Gaussian algorithm.

6Note that for the simulation of quantum systems described up to now, the size of the matrix is given by 2p, where p is the number of particles; therefore a 60×60 sparse matrix presented in this example corresponds to a fictional system.

13 (a) Sparse matrix (b) Reduced matrix

Figure 2.6: Sparsity pattern observed in a given matrix is contrasted against the one after the application of the Gaussian elimination. It might be the case that after some reduction operations, new non-zero entries appear. This is a collateral effect caused by the Gaussian algorithm.

The idea is to replace the non-zero elements in the lower triangular matrix by zeros through basic arithmetic operations between rows, then through backward substitution, find the solution of the related system. Despite the simplicity of the approach, there are serious performance issues one can easily foresee. The obvious ones refer to the necessity building exponentially sized matrices and the cubic complexity of the elimination procedure. The unexpected inclusion of non-zero entries is another collateral effect. Since the given matrix is sparse by construction, during the successive col- umn reduction – adding zeros in the lower triangular matrix – one might add non-zero elements in entries whose previous value was zero. This issue increases the total procedure’s overheat, making its application a non-viable alternative in the realm of quantum simulation.

2.5.2 Quantum Monte Carlo

Quantum Monte Carlo (QMC) is a whole family of stochastic methods based on the guidelines provided by the Monte Carlo (MC) methods. The fun- damental idea underlying MC is sampling – random selection of individu- als from a given population –. Usually, the high-dimensionality of certain problems makes MC the method of choice due to its haphazard nature and always-increasing accuracy. MC integration, for instance, is a technique to estimate the area under

14 the curve. Althought this particular simple example lacks of the dimension- ality course – the exponential increase of degrees of freedom (DoF) with the size of the problem –, it shows the concept of sampling perfectly and the correlation between number of samples and the accuracy of the stochastic method. Given the unnormalised sinc function [14] defined by

 sin x if x 6= 0 sinc x := x (2.8) 1 otherwise, the problem consists in approximate the area under the curve in the closed interval [0, π],

Z π sinc x dx. (2.9) 0 It is important to note that the integral (2.9) does not have analytical solution, hence a numerical method must be employed to find the value of the area below the curve. To this end, an off-the-shelf numerical software produces the following result

Z π sinc x dx ≈ 1.8519, (2.10) 0 which might be used to verify the correctness of the MC method that will be applied soon. The integration domain of problem (2.9) is [0, π] and the values taken by function (2.8) range from 0 to 1. In other words, the area of interest is bounded by a rectangle of width ` = π and height h = 1, therefore one can safely state that the resulting area must be a value between 0 and π = ` · h. Not too impressive till now. Let’s take a bunch of grains of sand and throw them over the rectangle where the function (2.8) lays on (see Figure 2.7). Assuming no hurricanes near by and you got a good shot, some grains will land in the area below the curve whereas the remaining will do so out of the region of interest. Finally, one can count the number of grains that belong to each of the two areas and derive a simple expression to solve problem (2.9), say

Z π p sinc x dx ≈ blue π (2.11) 0 pblue + pred where pblue and pred are the grains below and above the curve respectively. The relative error for 102 samples is ∼ 10−1 whereas for 104 samples is ∼ 10−4, showing the intuitive notion that the accuracy of the result is proportional to the number of samples. This fact is the major disadvantage of MC based methods although it can be counteracted by the use of multiple processes units, taking advantage of its embarrassingly parallel behavior.

15 (a) 102 Samples (b) 103 Samples

(c) 104 Samples (d) 105 Samples

Figure 2.7: Integration via Monte Carlo method. Intuitively, the more sam- ples the better the approximation is. The area below the curve is estimated by the ratio between the number of blue and the total number of points (blue + red).

16 Once understood the concept underlying MC methods, it can be extrap- olated to quantum simulation, what is known as QMC. Here the problem is to find the ground state – eigenvector – for a given system of interacting par- ticles using randomness to generate solutions that, after several iterations, might converge toward the desired state. The application of QMC does not guarantee optimal solutions, however they tend to be cheap and fast. For a complete review on the application of QMC for many-electron systems, the interested reader might refer to the work by Foulkes et al. [8].

2.5.3 Density Matrix Renormalisation Group Density Matrix Renormalisation Group (DMRG) is a numerical technique used to calculate ground states in quantum systems. First envisioned by American physicist Steven White [19], it was applied on the solution of 1D problems and since then it has been the method of choice among the scientific community due to its high-accuracy and numerous freely-available implementations in major programming languages [1] [2] [7]. Albeit DMRG was introduced for the simulation of toy-like problems, remarkable progress has been achieved since its initial conception. In 2004, Verstraete et al. extended the original work by White through the solution of quantum-many body systems with periodic boundary conditions [18] and two and higher [17]. DMRG relies on the principle of divide et impera – divide and conquer, briefly D&C –. In other words, the idea is to divide the original problem into smaller ones such that the solution of the simplified versions converge towards the optimum i.e., the lowest energy state. DMRG’s procedure starts by the initialisation of the system’s state. As weird as it might sound, the best approach consists in the assignation of normalised random values for the probabilities associated to each possible outcome. In this way, there exists a better chance to obtain the ground state, avoiding local minima. In Figure 2.87 a schematic representation of the DMRG algorithm is depicted. The lattice L10 is divided into two parts. The left- and right- part contain 4 and 5 sites respectively (see Figure 2.8a). The fifth site, which acts as a pivot, does not belong to any of the two sub-blocks and is chosen arbitrarily, favouring the stochastic nature of the entire procedure. Again, by the introduction of probabilistic elements in the algorithm, DMRG circumvents potential local minima. The next step is the propagation of the relevant information from each sub-block to the pivot. Assuming the size of the sub-blocks reside within the tractable zone (e.g., small enough to be solved by traditional means), the

7Source: Wikipedia contributors. “Density matrix renormalization group.” Wikipedia, The Free Encyclopedia. Wikipedia, The Free Encyclopedia, 15 Jul. 2010. Web. 27 Aug. 2010.

17 (a) Lattice decomposition (b) Sweep directions

Figure 2.8: Lattice decomposition in DMRG method. The original system is divided into two blocks, the ground state is calculated for each sub-block and then hte information is propagated to the pivot. Both the starting pivot as the sweep direction are picked at random, incorporating some fuzziness that eventually will avoid local minima and lead to the ground state. objective is to calculate the ground state of left- and right-parts and then estimate the resulting state for the pivot. In case that the size of the sub- blocks is too big, the standard approach consists in the recursive application of the same idea: dividing the block into two sub-blocks by choosing a pivot randomly and calculating the respective ground states. In the last step, after solving the system for a given pivot, a sweeping is performed over each site (see Figure 2.8b). One more time, the sweeping direction is completely arbitrary, it might be from left to right or on the other way around. In any case, the underlying principle is to incorporate some sort of fuzziness that eventually will take over the local minima and lead to the desired ground state. Finally for a complete review of the DMRG technique, the proactive reader is strongly advised to follow the work by Schollw¨ock [15].

18 Chapter 3

Ground State Calculation

“Do not worry about your problems with mathematics, I assure you mine are far greater” Albert Einstein

3.1 Overview

Ground state calculation has never been an easy task, mainly because of the exponential size of practical applications and the finite, although always increasing, computing capabilities. As seen in the previous chapter, there are several interesting approaches to deal with quantum simulations, from the traditional exact solution of the related eigenvalue problem to the more sophisticated stochastic algorithms that take advantage of randomness to come up with faster and nearly optimal solutions. The contraction scheme used to calculate the ground state is based on the ideas of Huckle and Waldherr [11]. It combines the best from two well- known techniques to solve the associated minimisation problem, linearly in the number of particles. Instead of overwhelming formulae, the propose of this chapter is to illustrate, through small examples, the application of the concepts involved. In section 3.2 a convenient data representation allows cheap calculations of vector- and matrix-vector products. Finally in section 3.4, a variational-principle-based numerical method for the solution of the generalised eigenvalue problem is presented.

3.2 Data Representation

To calculate the ground state in a quantum system a numerical technique must be able to deal with vectors and matrices in higher dimensions. In this section we elaborate on how to represent long vectors and matrices without losing accuracy and enormous storage requirements.

19 3.2.1 State Representation States are represented by 2p-D arrays, which implies a storage problem as soon as p – the number of particles – is too big. To work around this issue, the Kronecker product representation is used as alternative to efficiently construct larger vectors. For instance, the state of a 3-particle system is represented by an 8- component array

  ψ0 ψ1    .  |ψi =  .  ,   ψ6 ψ7 P ∗ where ψi ∈ C and hψ|ψi = i ψi ψi = 1. Alternatively, the same 8- component array might be represented using only three tensors, with six elements in total, say

|ψi = |ψ(1)ψ(2)ψ(3)i = |ψ(1)i ⊗ |ψ(2)i ⊗ |ψ(3)i (1)! (2)! (3)! ψ0 ψ0 ψ0 = (1) ⊗ (2) ⊗ (3) ψ1 ψ1 ψ1  (1) (2) (3) ψ0 ψ0 ψ0  (1) (2) (3) ψ0 ψ0 ψ1   .  =  .  ,    (1) (2) (3) ψ1 ψ1 ψ0  (1) (2) (3) ψ1 ψ1 ψ1

(m) where ψk ∈ C , m is the number of the vector and k is the array index. Immediately, one can see the relationship between the elements in the resulting vector and the binary concatenation of the indexes of the preceding components (see Table 3.1). The first component of |ψi is calculated through the product of the elements located at position 0, 0, 0 in vectors |ψ(1)i, |ψ(2)i, |ψ(3)i respectively. The second component corresponds to the product of elements 0, 0, 1, and so forth. Note that the components related to the least significant bit are contained in |ψ(3)i, analogously the most significant ones refer to values in |ψ(1)i. In a broader sense, tensor vectors |ψ(m)i might have more than two components and does not have to be equally sized, for instance tensor |ψ(1)i might have four elements whilst tensor |ψ(2)i and |ψ(3)i eight components

20 Table 3.1: Mapping components of final state. The individual components of the final state are obtained by the multiplication of the respective elements of tensor vectors |ψ(m)i. By using this approach, there is no need to store all final components explicitly, instead they can be calculated at any time.

|ψi |ψ(1)i |ψ(2)i |ψ(3)i 0 0 0 0 1 0 0 1 2 0 1 0 3 0 1 1 4 1 0 0 5 1 0 1 6 1 1 0 7 1 1 1 each. To this end, a new notation comes in to facilitate the representation of long vectors and allow their cheap computation. Suppose again an 8-component vector represented by the Kronecker product of two tensors with four and two components each

|ψi = |ψ(1)ψ(2)i = |ψ(1)i ⊗ |ψ(2)i ψ(1) 00 !  (1) (2) ψ01  ψ0 =  (1) ⊗ (2) ψ10  ψ1 (1) ψ11  (1) (2) ψ00 ψ0  (1) (2) ψ01 ψ0   .  =  .  .    (1) (2) ψ11 ψ0  (1) (2) ψ11 ψ1 Instead of focusing on the components, the idea is to concentrate on the index sets, or bit positions in the binary representation, that allow the calculation of the components in the resulting vector |ψi

ψ = ψ(1) ψ(2) , {i2,i1,i0} {i2,i1} {i0} where ik={0,1,2} takes the value of 0 or 1. Note the relationship between the subscript k and the particle it represents within the quantum system. As it will be discussed in section 3.3, it is handy to have such representation

21 for tensors because it allows the computation of vector- and matrix-vector products without the explicit representation of them. In general, it is assumed that any quantum state |ψi can be represented by the linear combination of Kronecker products between tensor of different size

J K X O (k) |ψi = |ψ(j) i , (3.1) j=1 k=1 where K ≤ p – number of particles – and the number of addends J < ∞. Note that the number of addends J determines the level of precision in the approximation of the state |ψi. Equation (3.1) shows the general form one can use to effectively repre- sent larger vectors by means of the tensor Kronecker product. This is an efficient approach and perfectly suited for quantum simulation because of its easy decomposition that allows the calculation of vector- and matrix-vector products using only the information contained in the index sets.

3.2.2 Hamiltonian Representation Due to the storage constraints Hamiltonians are never built explicitly, in- stead they are represented in compact form by the linear combination of Kronecker products of small matrices

M X (m) (m) H = αmQ0 ⊗ ... ⊗ Qp−1 , (3.2) m=1 (m) where Qk is one of the Pauli matrices, M is the total number of bounds in the quantum system, and p the number of particles. The method developed in this chapter works with the Hamiltonian given in Equation (3.2), therefore we will stick to this definition unless otherwise stated. One remarkable feature of the compact form is that it enables the calculation of the matrix-vector product by addressing the tensor’s index set elements and the Pauli matrix’s subscript k. For example, assume the 16-component vector given by

|ψi = |ψ(1)ψ(2)i ψ = ψ(1) ψ(2) , {i3,i2,i1,i0} {i2,i0} {i3,i1} has to be multiplied by the Hamiltonian

M X (m) (m) H = αmQ0 ⊗ ... ⊗ Q3 , m=1

22 which results in the state

|φi = H|ψi M X (m) (m) (1) (2) = αmQ0 ⊗ ... ⊗ Q3 |ψ ψ i m=1 M ! M ! X (m) (m) (1) X (m) (m) (2) = αmQ0 ⊗ Q2 |ψ i αmQ1 ⊗ Q3 |ψ i m=1 m=1 |φi = |φ(1)φ(2)i φ = φ(1) φ(2) . {i3,i2,i1,i0} {i2,i0} {i3,i1} Naturally, note that the size of the matrices is given the tensor’s number of components, therefore problems associated with storage can be skipped and the matrix-vector product is realised efficiently.

3.3 Contraction Scheme

The contraction scheme is used to calculate the inner product between two tensor vectors. It relies on the binary index set representation to choose the multiplicands selectively out of the tensor components, avoiding explicit formulation of the vectors. For example, assume two 4-component vectors |ψi = |ψ(1)i and |φi = |φ(1)i ⊗ |φ(2)i = |φ(1)φ(2)i, and the inner product

hψ|φi = hψ(1)|φ(1)φ(2)i 1 1 1 X X (1)∗ (1) (2) X (1)∗ (2) = ψ φ φ γ = ψ φ {i1,i0} {i1} {i0} ∴ {i1} {i1,i0} {i0} i1=0 i0=0 i0=0 1 X (1) = γ φ . {i1} {i1} i1=0 Too fast? Let’s try to elaborate on the contraction scheme itself. It begins by defining the index sets of the tensors, in this case the index (1) (1) (2) set for tensors |ψ i, |φ i, and |φ i are {i1, i0}, {i1}, and {i0} respec- tively. To get a full contraction – or getting a scalar value from the inner product–, it is important that the union of index sets in a group of ten- S sors (e.g., {i1, i0} ∅ = {i1, i0}) equals the union of its counterpart (e.g., S {i1} {i0} = {i1, i0}). If so, the result of the contraction scheme is a scalar that represents the inner product hψ|φi. Otherwise, the outcome of the contraction is another tensor, whose values represent the components of the missing elements in the index set.

23 Next step involves the selection of the inner-most index to contract, (1) (2) i0. The tensors involved are |ψ i and |φ i, they are taken apart and the summation of the conjugate products is calculated and stored in an auxiliary tensor γ{i1}. From the management of the indexes, there are two relevant facts to bare in mind after the contraction of the first index:

1. The summation index set {i0} is obtained through the intersection of T the preceding index sets, i.e., {i0} = {i1, i0} {i0}. If the summation index set is empty, it means that the given tensors cannot be con- tracted and a new tensor, with common index set elements must be chosen.

2. The contraction index set {i1} is obtained through the symmetric dif- S ference of the preceding index sets, i.e., {i1} = {i1, i0}\{i0} {i0}\{i1, i0}. If the contraction index set is empty, there might be two possible situ- ations. Either the contraction is done or the two selected tensors have the same index sets and the result of the contraction is a scalar that will be multiplied to the final result.

The contraction scheme is repeated until the inner product shows up. The complexity of the procedure is linear in the number of tensors, nonethe- less, it differs from the direct calculation since explicit storage of the entire vectors is not needed.

3.4 Numerical Ansatz

The calculation of the ground state follows a two-step numerical procedure. Firstly, the Hamiltonian’s compact form (see Equation (3.2)) and the ten- sor representation of the quantum state are reformulated as a minimisation problem. Secondly, the minimisation problem is solved through a numerical scheme, which is based on the variational principle and is applied iteratively until convergence.

3.4.1 Rayleigh Quotient The Rayleigh quotient is a numerical scheme used to find eigenvalue approx- imations for Hermitian matrices. It is defined as

hψ|H|ψi R(H, |ψi) := , (3.3) hψ|ψi where H is complex Hermitian and |ψi is a non-zero vector. Note that in the context of quantum simulation H is real, therefore its Hermitian character is reduced to being symmetric.

24 The range of values obtained via the Rayleigh quotient is bounded by the minimum and maximum eigenvalues

λmin ≤ R(H, |ψi) ≤ λmax , (3.4)

2p for all non-zero |ψi ∈ C . Since the calculation of the ground state is all about finding the minimum eigenvalue, and considering the structure of the Hermitian matrix H, the Rayleigh quotient is a perfectly well-suited scheme for the problem of interest. In its final form, the minimisation problem reduces to

hψ(1) . . . ψ(q)|H|ψ(1) . . . ψ(q)i λmin min (3.5) u (1) (q) ⊗2p (1) (q) (1) (q) |ψ ...ψ i∈C hψ . . . ψ |ψ . . . ψ i where q ≤ p is the number of tensor. As a matter of fact, the smaller the number of tensors, the better the approximation of the eigenvalue. If q = 1, the resulting eigenvalue is the exact one, however having a unique vector in the Rayleigh quotient poses a storage problem that makes its consideration out of any practical application.

3.4.2 Alternating Least Squares Alternating Least Squares (ALS) is a numerical technique used to solve overdetermined systems of equations. It is based on the Least Squares method, which primary application area is data fitting, and its practical application is due to the ability to optimise one by one the variables in- volved along the minimisation of the Rayleigh quotient. To solve minimisation problem (3.5), a number of preliminary algebraic transformation are needed. In fact, the idea is to minimise each state vector |ψ(i)i one at a time, say

(L) (i) (R) (L) (i) (R) (i) hψ ψ ψ |H|ψ ψ ψ i λmin = min |ψ(i)i hψ(L)ψ(i)ψ(R)|ψ(L)ψ(i)ψ(R)i hψ(L)ψ(i)ψ(R)| PM α H(m) ⊗ H(m) ⊗ H(m)|ψ(L)ψ(i)ψ(R)i = min m=1 m L i R |ψ(i)i hψ(L)ψ(i)ψ(R)|ψ(L)ψ(i)ψ(R)i PM α hψ(L)|H(m)|ψ(L)ihψ(R)|H(m)|ψ(R)ihψ(i)|H(m)|ψ(i)i = min m=1 m L R i |ψ(i)i hψ(L)|ψ(L)ihψ(R)|ψ(R)ihψ(i)|ψ(i)i PM α β hψ(i)|H(m)|ψ(i)i = min m=1 m m i |ψ(i)i γhψ(i)|ψ(i)i hψ(i)| PM αmβm H(m)|ψ(i)i = min m=1 γ i |ψ(i)i hψ(i)|ψ(i)i

25 where |ψ(L)i = |ψ(1) . . . ψ(i−1)i and |ψ(R)i = |ψ(i+1) . . . ψ(q)i. Through the previous manipulation, the original problem is converted into the standard eigenvalue problem

(i) (∗) (i) (i) hψ |Hi |ψ i λmin = min (3.6) |ψ(i)i hψ(i)|ψ(i)i

(∗) PM αmβm (m) where Hi = m=1 γ Hi . Keep in mind that there is no need to build neither huge matrices nor long vectors. For instance, the size of the (∗) square matrix Hi is 2 to the power of the number of elements (cardinality) in tensor |ψ(i)i index set, making problem (3.6) solvable by any standard eigenvalue algorithm. Additionally, the solution of the standard eigenvalue problem gives the initial values used along the main procedure. There is, however, a major caveat to the ALS approach. Although it is a good technique for local optimisation (i.e., solving the minimisation problem one at a time), it lacks of certainty when dealing with global optimisation (i.e., finding the ground state). A useful heuristic to avoid getting stuck in local minima consists in the random selection of states as the initial ones. Besides circumventing the locality issue, through the stochastic selection of the initial states the hope is to converge faster to the optimal solution. Another problem of ALS relates to the accuracy of the representation of the state |ψi. So far it has been assumed the ground state |ψmini is repre- sented by |ψ(1) . . . ψ(q)i faithfully, however in practice, this is not the trend. There are families of states that cannot be represented as the Kronecker product between them, they are also known as mixed states and play major roles in a variety of quantum phenomena, hence their relevance. To better describe any quantum state, assume a generalised quantum state

J X |ψi = |ψ(1) . . . ψ(q)i + |φ(1) . . . φ(r)i , j=1 where q, r ≤ p and the parameter J < ∞ determines the precision in the representation of the state |ψi. The simplest of such generalisations (e.g., J = 1)

(1) (q) (1) (r) |ψi u |ψ . . . ψ i + |φ . . . φ i , transforms the standard eigenvalue problem (3.6) into its generalised version

26 (i) (∗) (i) (i) (i) (i) hψ |Hi |ψ i + hu|ψ i + hψ |ui + η λmin = min (3.7a) |ψ(i)i hψ(i)|ψ(i)i + hv|ψ(i)i + hψ(i)|ui + ρ ! H(∗) |ui |ψ(i)i hψ(i)| 1 i hu| η 1 = min (3.7b)    (i)  |ψ(i)i |vi |ψ i hψ(i)| 1 I hv| ρ 1 where

|ui = H|φ(1) . . . φ(r)i η = hφ(1) . . . φ(r)|H|φ(1) . . . φ(r)i |vi = hψ(1) . . . ψ(i−1)ψ(i+1) . . . ψ(q)|φ(1) . . . φ(r)i ρ = hφ(1) . . . φ(r)|φ(1) . . . φ(r)i .

Note how vector |vi is built. It is constructed upon the partial con- traction of two tensor vectors, where vector |ψ(i)i is excluded. This unique feature is integrated into the proposed contraction scheme and its realisation comes from the binary index set representation. With the previous elements in mind, the Tensor Contraction Scheme for 2D Problems in Quantum Simulation is ready to act. In the following chap- ter, a number of scenarios will be taken into account to test the strengths and weaknesses of the contractions, favouring those where the best approx- imations are obtained.

27 28 Chapter 4

Numerical Experiments

“Mathematics is man’s attempt on understanding nature” Charles Darwin

4.1 Overview

The objective of the numerical results after simulations is to provide insights for the correct understanding of complex phenomena, enabling scientists to test hypothesis and draw conclusions. In the process, a number of conjec- tures has to be proven right, or eventually wrong, to end up with powerful theorems that generalise the findings carried out along the numerical study. In the following sections several results argue for the benefits of the proposed method. In section 4.2, the experimental setup and combination of parameters involved during the quantum simulation is presented. Next in section 4.3, the results after the execution of the computer programs are summarised graphically. Finally in section 4.4, an analysis on the origin and consequences of the errors accounts for the limitations of the contraction schemes.

4.2 Experimental Setup

To verify the properties of the proposed method for ground state calculation, a 12-particle chain of atoms was considered. The chain was affected by a perpendicular magnetic field and only open boundary conditions were allowed. The effective Hamiltonian H for the system depicted in Figure 4.1 was a 4.096 × 4.096 symmetric matrix, with high concentration of non-zero values along the main diagonal and some marginal band-like structures in both the upper and lower triangular areas (see Figure 4.2), making its traditional – full – storage an over demanding and inefficient task.

29 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

0 1 k-1 k k+1 10 11

------

Figure 4.1: Setup for numerical experiments. A chain of 12 interacting atoms was affected by an external magnetic field, altering the spin direction and allowing the evolution in time of the entire system.

Figure 4.2: Sparsity pattern of the Hamiltonian H used in numerical ex- periments. The Hamiltonian H for the 12-particle system was a symmetric matrix that contained 4096 rows and the same number of columns; in total ∼ 1.6e7 entries, from which 53248 were non-zero values.

30 (a) 6 Tensor blocks

(b) 4 Tensor blocks

(c) 3 Tensor blocks

(d) 2 Tensor blocks

Figure 4.3: Each tensor represents the state of the particles located at the different sites. By grouping them in alternative configurations, variation in the value of the ground state and the common statistics (min, max, average, etc.) was found. The number of blocks determines the size of the associated tensors, for instance when considering six blocks, each tensor have four components; on the other hand, when considering two blocks only, the tensors have 64 components.

Several parameters affected the result directly, among them, number of tensors, index set composition, and number of starting and main iterations. They were investigated to find their contribution in the final result, aiming to optimise their values for future experiments. Firstly, the 12 particles were grouped into six, four, three, and two sets of tensors (see Figure 4.3) to compare by which grouping the best solution is achieved in less iterations. To accomplish this task, a sweep from 1 to 30 iterations was performed over the four groups of tensors. The second parameter of interest was the index set composition, in par- ticular which indexes belong to which tensor. Take for instance the setup given by Figure 4.3d where two tensors are used to calculate the ground state of the system only. Implicitly, it is assumed the index sets are

ψ = ψ(1) ψ(2) , {i11,...,i0} {i11,...,i6} {i5,...,i0}

however, it is also reasonable to think about non-sequential arrangements1 like

1Non-sequential arrangements are those constituted by non-neighbouring particles.

31 ψ = ψ(1) ψ(2) {i11,...,i0} {i11,...,i6} {i5,...,i0} = ψ(1) ψ(2) {i11,i9,...,i3,i1} {i10,i8,...,i2,i0} = ψ(1) ψ(2) {i9,i8,i5,i4,i1,i0} {i11,i10,i7,i6,i3,i2} . .

In total there are 66 allowed combinations for this example. In fact, n there exist different ways to arrange n indexes by groups of k tensors k with the same number of components. For simplicity, tensors with different sizes were not considered in the simulations. Once tested the effect of the number of blocks in the ground state rep- resentation, it was investigated the contribution of the different index set arrangements. In the case of six tensors, for example, four different index set configurations were evaluated (see Figure 4.4). Analogously, each of the four different tensors was tested against four index set arrangements chosen at random. The idea was to figure out the best combination among the proposed ones and them compare in the long run. The index arrangements are summarised in Table 4.1.

32 (a) First arrangement (b) Second arrangement

(c) Third arrangement (d) Fourth arrangement

Figure 4.4: Random index set arrangement for six tensors. Each site repre- sents an index element and they are grouped to build 4-component tensors. Note that the fourth arrangement considers periodic boundary conditions for the interaction between the index set elements, however, they do not reflect the mutual exchange within the system’s particles necessarily.

33 Table 4.1: Tensor configurations. Each set corresponds to the index set for a tensor involved along the calculation of the (1) (2) ground state. For instance, the tensor |ψi = |ψ ψ i might be represented by the index sets {i0, . . . , i5} and {i6, . . . , i11} respectively, in other words, ψ = ψ(1) ψ(2) . {i0,...,i11} {i0,...,i5} {i6,...,i11}

Test 1 2 3 4 {i0, i1} {i0, i4} {i0, i4} {i0, i3} {i2, i3} {i1, i5} {i1, i5} {i1, i2} {i , i } {i , i } {i , i } {i , i } 6 Tensors 4 5 2 3 2 6 4 7 {i6, i7} {i6, i10} {i3, i7} {i5, i6}

34 {i8, i9} {i7, i11} {i8, i9} {i8, i11} {i10, i11} {i8, i9} {i10, i11} {i9, i10} {i0, i1, i2} {i0, i4, i8} {i0, i1, i2} {i0, i2, i3} {i , i , i } {i , i , i } {i , i , i } {i , i , i } 4 Tensors 3 4 5 1 5 9 4 5 6 4 6 7 {i6, i7, i8} {i2, i6, i10} {i8, i9, i10} {i8, i10, i11} {i9, i10, i11} {i3, i7, i11} {i3, i7, i11} {i1, i5, i9} {i0, i1, i2, i3} {i0, i1, i4, i5} {i0, i1, i2, i3} {i0, i3, i4, i7} 3 Tensors {i4, i5, i6, i7} {i2, i3, i6, i7} {i4, i5, i8, i9} {i1, i2, i5, i6} {i8, i9, i10, i11} {i8, i9, i10, i11} {i6, i7, i10, i11} {i8, i9, i10, i11} {i , i , i , i , i , i } {i , i , i , i , i , i } {i , i , i , i , i , i } {i , i , i , i , i , i } 2 Tensors 0 1 2 3 4 5 0 2 4 6 8 10 0 1 4 5 8 9 0 3 4 7 8 11 {i6, i7, i8, i9, i10, i11} {i1, i3, i5, i7, i9, i11} {i2, i3, i6, i7, i10, i11} {i1, i2, i5, i6, i9, i10} Table 4.2: Statistics for six tensors and four arrangements for the index sets. The best results were obtained for the index set configuration {i0, i1}, {i2, i3}, {i4, i5}, {i6, i7}, and {i8, i9}, {i10, i11}.

Arrangement      1         2         3         4
Average       -14.63    -14.46    -14.52    -14.52
Median        -14.63    -14.46    -14.53    -14.53
Std. Dev.      0.0038    0.0048    0.0133    0.0180
Min.          -14.63    -14.46    -14.53    -14.53
Max.          -14.61    -14.43    -14.47    -14.45

Finally, the long run consisted of 100 iterations in which the four best configurations were compared. Although a finite number of iterations does not provide the solid arguments of an analytical proof, it certainly offers fertile ground for conjectures and innovative ideas.

4.3 Results

Simulations of the system described in section 4.2 were executed on a personal laptop (see appendix B for further configuration details); the exact solution was calculated with Matlab's eigenvalue function. Figure 4.5 shows the energy levels (eigenvalues) for four different numbers of tensors (six, four, three, and two), each of them with four different index set arrangements (see Table 4.1 for the entire configuration). As a general trend, there was a correlation between the number of tensors and the error in the initial solution: the fewer the tensors, the closer the initial guess to the optimal solution. In Tables 4.2, 4.3, 4.4, and 4.5 a summary of the main statistics is presented. As expected, the best results in terms of proximity to the optimal value were obtained for the tests with two tensors. In the long run, after 100 iterations, the best results corresponded to the combination of two tensors with the index sets {i0, i1, i2, i3, i4, i5} and {i6, i7, i8, i9, i10, i11}, namely

\[
|\psi\rangle_{\{i_{11},\dots,i_0\}} = |\psi^{(1)}\rangle_{\{i_0,i_1,i_2,i_3,i_4,i_5\}} \, |\psi^{(2)}\rangle_{\{i_6,i_7,i_8,i_9,i_{10},i_{11}\}},
\]
which after nine iterations of the ALS method produced -14.90 as the minimum eigenvalue. On the other hand, since the exact solution is -14.92, there is plenty of room for further improvements. Note also that the contribution of the different index set arrangements was minimal; in fact, no better results were obtained by varying the index set constitution.
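For orientation, the exact reference value can be computed along the following lines. This is a minimal sketch built on an illustrative 1D Ising-type Hamiltonian with unit couplings; the actual 2D setup and parameters of section 4.2 are not reproduced here, so the printed value will differ from -14.92.

% Hedged sketch: exact minimum eigenvalue via Matlab's eig, usable as
% reference for the ALS results. Illustrative 1D Ising chain with a
% transverse field; NOT the exact experimental Hamiltonian of section 4.2.
p  = 6;                  % number of particles, H is 2^p x 2^p
sz = [1 0; 0 -1];        % Pauli matrix sigma_z
sx = [0 1; 1 0];         % Pauli matrix sigma_x
H  = zeros(2^p);
for k = 1:p-1            % nearest-neighbour sigma_z sigma_z couplings
    H = H - kron(eye(2^(k-1)), kron(kron(sz, sz), eye(2^(p-k-1))));
end
for k = 1:p              % transverse field on each site
    H = H - kron(eye(2^(k-1)), kron(sx, eye(2^(p-k))));
end
Emin = min(eig(H));      % exact ground-state energy of the small system
disp(Emin);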

(a) Six Tensors (b) Four Tensors (c) Three Tensors (d) Two Tensors

Figure 4.5: Simulation results considering different blocking and index set arrangements. The lowest horizontal line corresponds to the optimal value. Evidently, all configurations tended to reach a steady state after a few iterations; however, the best results were due to the use of two tensors.

Table 4.3: Statistics for four tensors and four arrangements for the index sets. The best results were obtained for the index set configuration {i0, i1, i2}, {i3, i4, i5}, {i6, i7, i8}, and {i9, i10, i11}.

Arrangement      1         2         3         4
Average       -14.69    -14.48    -14.65    -14.55
Median        -14.69    -14.48    -14.65    -14.57
Std. Dev.      0.0022    0.0208    0.0034    0.0279
Min.          -14.69    -14.48    -14.65    -14.57
Max.          -14.67    -14.41    -14.63    -14.49

Table 4.4: Statistics for three tensors and four arrangements for the index sets. The best results were obtained for the index set configuration {i0, i1, i2, i3}, {i4, i5, i6, i7}, and {i8, i9, i10, i11}.

Arrangement      1         2         3         4
Average       -14.77    -14.69    -14.69    -14.66
Median        -14.77    -14.69    -14.69    -14.66
Std. Dev.      0.0076    0.0124    0.0105    0.0082
Min.          -14.77    -14.69    -14.69    -14.66
Max.          -14.73    -14.64    -14.65    -14.61

Table 4.5: Statistics for two tensors and four arrangements for the index sets. The best results were obtained for the index set configuration {i0, i1, i2, i3, i4, i5} and {i6, i7, i8, i9, i10, i11}.

Arrangement      1         2         3         4
Average       -14.90    -14.50    -14.63    -13.90
Median        -14.90    -14.51    -14.63    -13.90
Std. Dev.      0.0136    0.0140    0.0144    0.0146
Min.          -14.90    -14.51    -14.63    -13.91
Max.          -14.83    -14.44    -14.58    -13.84

Figure 4.6: Energy level after 100 iterations. The long-run execution showed the superiority of two tensors; the worst results were due to the use of six tensors.

Table 4.6: Statistics for the ground state calculation using different numbers of tensors and the best index set configurations. The best results were obtained for the index set configuration {i0, i1, i2, i3, i4, i5} and {i6, i7, i8, i9, i10, i11}. The row "Min. at" indicates the iteration at which the minimum was reached.

Blocks           6         4         3         2
Average       -14.63    -14.69    -14.77    -14.90
Median        -14.63    -14.69    -14.77    -14.90
Std. Dev.      0.0021    0.0012    0.0042    0.0075
Min.          -14.63    -14.69    -14.77    -14.90
Max.          -14.61    -14.67    -14.73    -14.83
Min. at          38        40        26         9

4.4 Errors

In numerical analysis, error is an implicit ingredient one has to deal with when following any numerical recipe for the solution of mathematical problems. In a narrower sense, in the ground state calculation using the contraction scheme there are two major types of numerical errors: rounding and vector representation errors.
Rounding errors are due to the limitation of digital computers in representing continuous numerical values internally. For example, the decimal number $1/3 = 0.\overline{3}$, with infinitely many digits, might be approximated up to a certain finite number of decimal places, using one of the well-established standard² ways to perform the round-off. Although present in most of the calculations, rounding errors are disregarded here because of the cancellation that tends to occur over their successive accumulation [9].
Vector representation errors, on the other hand, play a starring role in the whole process. They are critical because, along the minimisation algorithm, one blindly assumes the ground state can be written faithfully as

\[
|\psi^{*}\rangle = |\psi^{(1)} \cdots \psi^{(q)}\rangle + |\phi^{(1)} \cdots \phi^{(r)}\rangle;
\]
however, the opposite is true. Ground states are rarely that simple; usually quantum simulation involves more elaborate states that can hardly be reproduced by two addends. In the ideal case, it would be wonderful to have more terms in the summation that accounts for the final ground state; nonetheless, they come at a price, and the researcher must be aware of the compromise between the efficiency of the overall procedure and the accuracy of the solution.
Once the causes of the errors have been identified, it is time to evaluate their effect on the final solution. Although there is no easy way to know in advance the minimum eigenvalue of an arbitrarily large Hamiltonian, it is feasible to calculate the ratio between the maximum and minimum eigenvalues for small systems and to conjecture about its asymptotic behaviour. This ratio is also known as the condition number κ(A), defined by

\[
\kappa(A) := \frac{\lambda_{\max}(A)}{\lambda_{\min}(A)}, \qquad (4.1)
\]
where $\lambda_{\max}(A)$ and $\lambda_{\min}(A)$ are the maximal and minimal eigenvalues of $A$. The condition number determines how well-posed problem (2.6) really is; in other words, it is a good indicator of how drastically the solution changes after slight variations in the problem definition or the initial conditions.

²The standard ways to perform the round-off include: truncation, round up, round down, round to −∞, and round to +∞.

Table 4.7: Evaluation of the condition number κ(A) for selected Hamiltonians. In these particular examples, they are specified by the number of particles that constitute the system of interest, ranging from 6 to 11. Note the increasing value of the condition number for larger systems; nonetheless, there is an anomaly in its value for the 10-particle system.

Particles     6        7        8        9       10       11
κ(A)       114.49   383.46   407.48   803.45   535.62   2.86e3

In Table 4.7 the condition number for selected Hamiltonians is presented. Note the correlation between the number of particles and the condition number itself: the larger the number of particles, the greater κ(A). This holds for most of the given examples; however, for the 10-particle system the value of the ratio is smaller than expected, adding more elements to the conjecture about their relationship. Nonetheless, the values in Table 4.7 are large enough to safely state that the calculation of the minimum eigenvalue in quantum simulation is an ill-posed problem, hence the variations of the final solution after slight modifications of the initial conditions.
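To make the procedure behind Table 4.7 reproducible in spirit, the following sketch evaluates (4.1) for a few small systems. The Hamiltonian is again the illustrative Ising chain of the sketch in section 4.3, not the thesis' experimental Hamiltonians, and taking the magnitudes of the extreme eigenvalues is an assumption about the intended sign convention.

% Hedged sketch: condition number kappa(A) of (4.1) for small systems.
% Magnitudes of the extreme eigenvalues are used (assumption).
sz = [1 0; 0 -1];                % Pauli sigma_z
sx = [0 1; 1 0];                 % Pauli sigma_x
for p = 6:8                      % small particle numbers only
    H = zeros(2^p);
    for k = 1:p-1                % coupling terms
        H = H - kron(eye(2^(k-1)), kron(kron(sz, sz), eye(2^(p-k-1))));
    end
    for k = 1:p                  % field terms
        H = H - kron(eye(2^(k-1)), kron(sx, eye(2^(p-k))));
    end
    lambda = eig(H);
    kappa  = max(abs(lambda)) / min(abs(lambda));
    fprintf('p = %d: kappa = %.2f\n', p, kappa);
end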

Chapter 5

Conclusions and Outlook

“$10^{50}$ is a long way from infinity.” Daniel Shanks

5.1 Overview

Quantum simulation's final goal is to understand nature at its most fundamental scale, broadening the spectrum of applications and increasing the level of control, which eventually will lead to unheard-of scientific developments for the benefit of humankind. To start the quantum revolution, it is imperative to provide better tools for the study of natural phenomena at the atomic scale, without sacrificing accuracy and within reasonable time windows.
In this last chapter, several insights resulting from the quantum simulations are highlighted. In section 5.2, the major conclusions of the study are presented. Finally, in section 5.3, important recommendations for future work are described, establishing the starting point for upcoming improvements that can be applied to the contraction schemes.

5.2 Conclusions

Section 1.4 introduced three challenges that the numerical scheme used in quantum simulation must overcome to solve the associated minimisation problem (1.1) efficiently. In the following paragraphs, they are revisited one by one, evaluating their successful accomplishment.
The first challenge concerned the representation of long vectors without storing them explicitly. To solve this issue, a two-step procedure was necessary. Firstly, the original (unknown) vector was written as the Kronecker product of smaller tensors. Secondly, its components were calculated on the fly by using the binary index representation of the smaller tensors. To add more accuracy, not only the Kronecker product of tensors but also the linear

combination of such products was used in the vector representation, allowing better approximations with modest storage requirements.
The second challenge had to do with the calculation of inner products based on this representation. The scalar product is a major step within the minimisation process; therefore it is vital to have an efficient algorithm for its calculation. The contraction scheme made it possible to compute exact inner products between arbitrarily long vectors written in compact form (linear combinations of Kronecker products) without adding extra overhead to the general procedure.
The third challenge called for the Rayleigh quotient minimisation, taking into account the given vector representation and the storage constraints that prohibit the explicit formulation of the problem. The solution to the optimisation conundrum came from the well-known ALS technique, which turned out to produce close-to-optimal solutions even for a small number of iterations. Additionally, the data showed that for larger tensors the initial solution was closer to the optimal one.
Briefly speaking, the main achievement of the present work was the approximation of the minimum eigenvalue by the combination of two well-known techniques (binary index representation and ALS), which under other circumstances (i.e., explicit formulation) is not tractable. Also, it was shown numerically that the best approximation was achieved after a few iterations and that variations in the index set representation did not improve the results of the scheme.
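To make the on-the-fly component evaluation of the first challenge concrete, here is a minimal sketch (not taken from the thesis scripts; the mapping between the global component index and the factor indices is the standard decomposition for the Kronecker product of two vectors):

% Minimal sketch: evaluate component number c of kron(a, b) without
% forming the full Kronecker product (c is 0-based here).
a  = [1; 2; 3];              % first factor
b  = [4; 5];                 % second factor
c  = 4;                      % component of kron(a, b) to evaluate
nb = length(b);
ia = floor(c / nb);          % 0-based index into a
ib = mod(c, nb);             % 0-based index into b
component = a(ia + 1) * b(ib + 1);   % no full-length vector is stored

% Check against the explicit product (feasible only for small examples)
full_vec = kron(a, b);
disp([component, full_vec(c + 1)]);  % both entries print 12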

5.3 Further Improvements

Happily, this is not the end but only a transition. There is plenty of room for further improvements that might lead to broader application areas within the realm of quantum simulation, or even to better solutions. For instance, the ALS method should be applied to Hamiltonians given by other quantum models (e.g., the Heisenberg model or the Hubbard model) to probe for possible numerical instability. In the same manner, ALS might encounter local minima along the iterative minimisation process. To address the locality problem, it might be advantageous to apply simultaneous optimisation, which eventually can be executed in parallel, in the hope of shorter execution times and better solutions. Another open issue relates to the use of heterogeneous tensors, i.e., tensors with different sizes. It might be the case that, depending on the specific quantum model, the solution given by heterogeneous tensors is better than that observed for the homogeneous counterpart.

Appendix A

Computer Program

“I have not failed. I’ve just found 10,000 ways that won’t work” Thomas Edison

A.1 Function Specification

The Matlab functions described in Table A.1 were specifically developed to implement the Tensor Contraction Schemes for 2D Problems in Quantum Simulation. Further documentation is embedded within the source code; it includes specifications of the input and output parameters and developer's notes on the code itself. Additionally, each function file contains a short example that shows how to use it from Matlab's console.

A.2 Code Example

Users can take advantage of the scripts developed to simulate Ising-like systems with a large number of particles. Three simple steps are needed to run a simulation:

1. Initialisation: defines the number of particles, the Hamiltonian, the alpha parameters, and the tensor composition. It is important that the number of particles matches the number of items in the construction of the tensor blocks.

2. Initial Guess: calculates the initial solution based on the tensor structure by solving the standard eigenvalue problem (3.6). It was seen experimentally that the larger the blocks, the closer to the optimal solution.

3. Minimisation: executes the minimisation of the Rayleigh quotient via ALS, using the generalised version of the eigenvalue problem (3.7).

Table A.1: Description of the functions developed to implement the Tensor Contraction Schemes for 2D Problems in Quantum Simulation. Further details and examples of application are provided within the code.

Function            Description
build_operator      returns one row of the Hamiltonian
compute_matten      returns the matrix-tensor product
conj_tensor         returns the complex conjugate of a tensor
contract_block      returns the contraction indices over two blocks
copy_except         returns a copy of a tensor without a block
dot_tensor          returns the inner product between two vectors
index_gen           returns a list of decimal values
init_guess          calculates the starting solution
makefis             makes the full index set of a tensor
make_hamiltonian    returns the effective Hamiltonian
map_hamiltonian     returns an Ising-like Hamiltonian cell array
minals              minimises a tensor using ALS
mineig              returns the minimum eigenvalue and eigenvector
numblocks           returns the number of blocks in a tensor
shift_gen           returns a scalar contribution of the index set
simpletest          simple test example
sum_tensor          returns the sum of two equal-index tensors
symmdiff            returns the symmetric difference between two sets
tensor              returns a tensor object
test                simulates a 12-particle system

The result (minimum eigenvalue) after each iteration is returned and taken as the starting point for the next one.

The following simple test shows the minimal code needed to simulate a 4-particle system over 10 iterations. There are two equally sized tensors, each of them containing two indices. Remember that the output corresponds to the minimum eigenvalue obtained at every iteration. For a more elaborate script, including a graphical comparison of the tests, please refer to the file test.m.

%%%%%%%%%%%%%%%%%%
% Initialisation %
%%%%%%%%%%%%%%%%%%
p     = 4;    % Number of particles
siter = 10;   % Iterations of the starting solution
miter = 10;   % Iterations of the main procedure

Hmap  = map_hamiltonian(p);  % Hamiltonian definition
M     = size(Hmap, 1);       % Number of bonds (rows of Hmap)
alpha = ones(1, M);          % Alpha parameters

ten_x(1) = tensor([0 1]);    % First tensor: indices i0, i1
ten_x(2) = tensor([2 3]);    % Second tensor: indices i2, i3

%%%%%%%%%%%%%%%%%
% Initial Guess %
%%%%%%%%%%%%%%%%%
ten_x0 = init_guess(Hmap, ten_x, alpha, siter);

%%%%%%%%%%%%%%%%
% Minimisation %
%%%%%%%%%%%%%%%%
z = minals(Hmap, ten_x0, alpha, miter);
disp(z);

A.3 Licence

The Scripts for Tensor Contraction Schemes for 2D Problems in Quantum Simulation are free software: you can redistribute them and/or modify them under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or any later version.

The Scripts for Tensor Contraction Schemes for 2D Problems in Quantum Simulation are distributed in the hope that they will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with the Scripts for Tensor Contraction Schemes for 2D Problems in Quantum Simulation. If not, see http://www.gnu.org/licenses/.

Appendix B

Configuration Management

“Not everything that can be counted counts, and not everything that counts can be counted” Albert Einstein

B.1 Overview

Far from being an exhaustive hardware and software configuration management guide, this appendix is intended to provide a minimal guideline that ensures the reproducibility of the results obtained in the present work. In section B.2, the characteristics of the hardware are presented. Finally, in section B.3, the versions of the applications and the operating system, together with the profiling options, are described.

B.2 Hardware Configuration

Although hardware tends to be a limiting factor in scientific computing, in particular when massive amounts of data are processed, in the present numerical experiments only commercial off-the-shelf hardware was involved. Table B.1 sums up the configuration of hardware used for the contraction schemes.

B.3 Software Configuration

It is important to keep a record of the software used along the simulations, to be able to reproduce the results on other machines. Table B.2 shows the versions of the numerical software and the operating system.
Although the software per se can make a difference, there are other optimisation heuristics that might be applied in the implementation of the algorithm.

Table B.1: Technical specification of the hardware used in the numerical experiments. The configuration corresponds to a commercial off-the-shelf laptop; any user trying to replicate the results is encouraged to use her own personal computer.

Device   Specification
CPU      Intel Core 2 Duo T5800
RAM      4 GB
HD       320 GB
GPU      ATI Radeon HD 3470

Table B.2: Versions of the software used in the numerical experiments.

Application        Version
Operating System   MS Windows Vista Home
Matlab             R2007a

In the following paragraphs, several optimisation hints, including a way to test the code's efficiency, are discussed.

B.3.1 Code Optimisation

In addition to the numerical methods applied for the efficient simulation of quantum systems, there are a number of tricks that might be incorporated into the code to produce faster simulations. Albeit no general rule of thumb is known to speed up a given code, the following approaches were taken into account during the development phase, aiming to improve the overall performance; a minimal sketch illustrating them follows the list.

Array preallocation: before entering loops, arrays are explicitly declared, indicating the number of elements that will be stored. This tells Matlab how big the data structure is, so that consecutive space in memory can be reserved for the data, making its retrieval faster.

Vectorisation: instead of using for loops to operate over an array's components, it is preferable to use functions that act on entire arrays. A classical example is the use of the function sum to add up the elements, in contrast to using a loop to traverse the array element-wise.

Referencing wildcards: enable programmers to access multiple elements in arrays without invoking them directly; for instance, the colon operator : selects ranges of array elements faster than referencing them individually.

Figure B.1: Profiling sample output for Matlab scripts. At the top of the table appear the functions with the highest time requirements. Self time corresponds to the time spent by a function itself, without considering the time used by its child functions.

Other options: there are plenty of language-specific shortcuts which, applied properly, not only reduce the execution time but also optimise the storage resources. Among them are efficient ways to obtain the number of elements in arrays, find the minimum of a matrix, convert arrays into column vectors, or even include optimised code from other languages.
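The following minimal sketch contrasts the first three hints (illustrative only, not taken from the thesis scripts):

% Preallocation: y is declared with its final size before the loop
n = 1e6;
x = rand(1, n);
y = zeros(1, n);
for k = 1:n
    y(k) = x(k)^2;
end

% Vectorisation: one array operation replaces the whole loop above
y2 = x.^2;

% Referencing wildcard: the colon operator selects a range at once
head = x(1:100);

% Built-in reduction instead of an element-wise accumulation loop
total = sum(x);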

Finding the best tuning for the code is a demanding task and depends on the specific problem to solve. It might be the case that the optimisation parameters found for a given quantum system do not hold for another family of problems; then the tuning process must be restarted and applied continuously. Truly, this is a never-ending task.

B.3.2 Profiling

Profiling is a diagnostic tool that helps developers find possible bottlenecks in the code. Matlab's profiling tool reports on the number of calls, the total time, and the self time of each function; in Figure B.1 a sample of its output is presented. The first column indicates the name of the analysed function; it may include not only the developer's functions but also built-in functions that are called in the scripts. The second and third columns show the number of calls and the total time spent by the function, respectively. The fourth column reports the time spent in the function without the time spent in its child functions; it also includes overhead resulting from the profiling process. Profiling-guided optimisation involves three steps. Firstly, the scripts are executed under the surveillance of the profiling tool. Secondly, the files with the highest self times are identified and optimised, focusing on the hints given

previously. Finally, the profiler is run again, until no further improvements in the code are possible. Any code can be optimised as much as the developer wishes; however, there is a compromise between the time spent on the code and the real gain. It is up to the researcher to evaluate the resource investment for the optimisation of the code.
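In practice, such a session might look like the following sketch (profile on, profile viewer, and profile off are standard Matlab commands; test.m is the script from appendix A):

profile on       % start collecting call statistics
test;            % run the script under the profiler's surveillance
profile viewer   % open the report: calls, total time, and self time
profile off      % stop collecting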

Bibliography

[1] A. Albuquerque, F. Alet, P. Corboz, P. Dayal, A. Feiguin, S. Fuchs, L. Gamper, E. Gull, S. Gurtler, and A. Honecker. The ALPS project release 1.3: Open-source software for strongly correlated systems. Journal of Magnetism and Magnetic Materials, 310(2):1187–1193, March 2007.

[2] G. Alvarez. The density matrix renormalization group for strongly correlated electron systems: A generic implementation. Computer Physics Communications, 180(9):1572–1578, 2009.

[3] Neil Ashby. Relativity and the global positioning system. Physics Today, 55(5):41–47, 2002.

[4] Barry A. Cipra. An introduction to the Ising model. Am. Math. Monthly, 94(10):937–959, 1987.

[5] Rene Dugas. A History of Mechanics (Dover Classics of Science and Mathematics). Dover Publications, June 1988.

[6] A. Einstein. Über das Relativitätsprinzip und die aus demselben gezogenen Folgerungen. Jahrbuch der Radioaktivität und Elektronik, 4:411–462, 1908.

[7] R. Fazio, S. Montangero, G. De Chiara, D. Rossini, M. Rizzi, and D. Bleh. Parallel open well-adapting DMRG enhanced research with power. Online. http://www.qti.sns.it.

[8] W. M. C. Foulkes, L. Mitas, R. J. Needs, and G. Rajagopal. Quantum Monte Carlo simulations of solids. Reviews of Modern Physics, 73(1):33–83, Jan 2001.

[9] David Goldberg. What every computer scientist should know about floating-point arithmetic. ACM Comput. Surv., 23(1):5–48, March 1991.

[10] Werner Heisenberg. The Physical Principles of the Quantum Theory. Dover Publications, March 2003.

[11] Thomas Huckle and Konrad Waldherr. Unifying large-scale tensor approximations—concepts and algorithms, 2010. In progress.

[12] Joel E. Moore. The birth of topological insulators. Nature, 464(7286):194–198, March 2010.

[13] Michael A. Nielsen and Isaac L. Chuang. Quantum Computation and Quantum Information. Cambridge University Press, 1 edition, October 2000.

[14] Frank W. J. Olver, Daniel W. Lozier, Ronald F. Boisvert, and Charles W. Clark, editors. NIST Handbook of Mathematical Functions. Cambridge University Press, 1 har/cdr edition, May 2010.

[15] U. Schollw¨ock. The density-matrix renormalization group. Rev. Mod. Phys., 77(1):259–315, Apr 2005.

[16] C. H. Tseng, S. Somaroo, Y. Sharf, E. Knill, R. Laflamme, T. F. Havel, and D. G. Cory. Quantum simulation of a three-body-interaction Hamiltonian on an NMR quantum computer. Phys. Rev. A, 61(1):012302, Dec 1999.

[17] F. Verstraete and J. I. Cirac. Renormalization algorithms for quantum many-body systems in two and higher dimensions, 2004.

[18] F. Verstraete, D. Porras, and J. I. Cirac. Density matrix renormaliza- tion group and periodic boundary conditions: A quantum information perspective. Physical Review Letters, 93(22):227205+, Nov 2004.

[19] Steven R. White. Density matrix formulation for quantum renormal- ization groups. Phys. Rev. Lett., 69(19):2863–2866, Nov 1992.

[20] Wikipedia. Bloch sphere — Wikipedia, the free encyclopedia, 2010. [Online; accessed 18-August-2010].

[21] Wikipedia. Introduction to — Wikipedia, the free encyclopedia, 2010. [Online; accessed 28-June-2010].

Index

ALS, 25
Array Preallocation, 48
Bloch Sphere, 9
Bra-Ket Notation, 2
Characteristic Polynomial, 12
Condition Number, 38
Contraction
    Full, 23
    Partial, 23
    Scheme, 23
Dimensionality Curse, 14
Dirac Notation, 2
Divide and Conquer, 17
DMRG, 17
Eigenvalue Problem
    Generalised, 25
    Standard, 25
Error
    Rounding, 38
    Vector Representation, 38
Exact Diagonalisation, 12
Ferromagnetism, 11
Function
    sinc, 14
Gaussian Elimination, 12
Hamiltonian, 4
Index
    Arrangement, 29
    Contraction, 23
    Set, 19
    Summation, 23
Ising Model, 11
Lattice, 8
Least Squares, 25
Matrix
    Hermitian, 9
    Normal, 38
    Pauli, 9
    Unitary, 9
MC, 14
Monte Carlo Methods, 14
Open Boundary Conditions, 8
Optimisation
    Global, 25
    Local, 25
Periodic Boundary Conditions, 8
Phase Transition, 9
Profiling, 49
QMC, 14
Quantum
    Dynamics, 7
    Mechanics, 1
    Monte Carlo, 14
    Simulation, 1
    Superposition, 1
Rayleigh Quotient, 24
Referencing Wildcards, 48
Sampling, 14
Self Time, 48
Spin, 9
State
    Degenerate, 3
    Ground, 3
    Mixed, 25
    Pure, 9
    Quantum, 2
Statistical Physics, 7
Uncertainty Principle, 1
Vectorisation, 48