
Universidade Estadual de Campinas

Instituto de Física Gleb Wataghin

Bruno Ricardi de Abreu

Quenches numéricos de desordem no modelo Bose-Hubbard

Numerical quenches of disorder in the Bose-Hubbard model

CAMPINAS
2018

Bruno Ricardi de Abreu

Numerical quenches of disorder in the Bose-Hubbard model

Quenches numéricos de desordem no modelo Bose-Hubbard

Tese apresentada ao Instituto de Física Gleb Wataghin da Universidade Estadual de Campinas como parte dos requisitos exigidos para a obtenção do título de Doutor em Ciências.

Thesis presented to the Gleb Wataghin Institute of Physics of the University of Campinas in partial fulfillment of the requirements for the degree of Doctor in Sciences.

Orientador: Silvio Antonio Sachetto Vitiello

Este exemplar corresponde à versão final da tese defendida pelo aluno Bruno Ricardi de Abreu, e orientada pelo Prof. Dr. Silvio Antonio Sachetto Vitiello.

Campinas
2018

Agência(s) de fomento e nº(s) de processo(s): CNPq, 141252/2014-0; CNPq, 232682/2014-3
ORCID: https://orcid.org/0000-0002-9067-779X

Ficha catalográfica Universidade Estadual de Campinas Biblioteca do Instituto de Física Gleb Wataghin Lucimeire de Oliveira Silva da Rocha - CRB 8/9174

Abreu, Bruno Ricardi de, 1990-
Ab86n Numerical quenches of disorder in the Bose-Hubbard model / Bruno Ricardi de Abreu. – Campinas, SP : [s.n.], 2018.

Orientador: Silvio Antonio Sachetto Vitiello.
Tese (doutorado) – Universidade Estadual de Campinas, Instituto de Física Gleb Wataghin.

1. Superfluidez. 2. Bose-Hubbard, Modelo de. 3. Monte Carlo quântico, Método de. 4. Sistemas desordenados. 5. Átomos ultrafrios. I. Vitiello, Silvio Antonio Sachetto, 1950-. II. Universidade Estadual de Campinas. Instituto de Física Gleb Wataghin. III. Título.

Informações para Biblioteca Digital

Título em outro idioma: Quenches numéricos de desordem no modelo Bose-Hubbard
Palavras-chave em inglês:
Superfluidity
Bose-Hubbard model
Quantum Monte Carlo method
Disordered systems
Ultracold atoms
Área de concentração: Física
Titulação: Doutor em Ciências
Banca examinadora:
Silvio Antonio Sachetto Vitiello
Ricardo Luis Doretto
Marcos Cesar de Oliveira
Raimundo Rocha dos Santos
José Abel Hoyos Neto
Data de defesa: 08-08-2018
Programa de Pós-Graduação: Física

MEMBROS DA COMISSÃO JULGADORA DA TESE DE DOUTORADO DE BRUNO RICARDI DE ABREU RA: 80858 APRESENTADA E APROVADA AO INSTITUTO DE FÍSICA “GLEB WATAGHIN”, DA UNIVERSIDADE ESTADUAL DE CAMPINAS, EM 08/08/2018.

COMISSÃO JULGADORA:

- Prof. Dr. Silvio Antonio Sachetto Vitiello (Orientador) - IFGW/UNICAMP
- Prof. Dr. Ricardo Luis Doretto - IFGW/UNICAMP
- Prof. Dr. Marcos Cesar de Oliveira - IFGW/UNICAMP
- Prof. Dr. Raimundo Rocha dos Santos - Instituto de Física - UFRJ
- Prof. Dr. José Abel Hoyos Neto - Instituto de Física/São Carlos

A Ata de Defesa, assinada pelos membros da Comissão Examinadora, consta no processo de vida acadêmica do aluno.

CAMPINAS
2018

Acknowledgements

My entire career as a physics student and, in particular, the work that is presented in this dissertation could not possibly have been made without the unconditional love and support that I have always received from my family. I feel like just thanking them on these lines does not even fairly compensate for the most elemental source of motivation that they represent to me. Even so, if not by other means, I express here my gratefulness for having them in my life. It is an extraordinarily comforting pleasure to be sure that they will always be by my side no matter the different courses that my life could take as a consequence of my choices.

Just as important as them for the construction of my career as a physicist, the development of this work, and my formation as a citizen and human being during my time as a student at Unicamp is my long-term advisor Silvio Vitiello. Along our journey he has consistently been aware of my feelings, tempering my thoughts when they were too fast and confusing, hastening my ideas when I was moving too slowly, and wisely advising me in a number of situations of life with distinguished discernment and sagacity. I am deeply thankful for his unrestricted patience and perseverance during this period.

I am also very grateful to the people that work at IFGW/Unicamp and made this project possible: from the faculty, with highly skilled professors that taught me physics at the finest level, to the staff that provided fundamental support, such as access to scientific books and articles through the library (BIF), scheduling of classrooms for presentations whenever needed, and so many other things, including good, hot coffee and snacks. I am completely sure that they made academic life here a lot easier for me. I must also recognize that this research used the computing resources and assistance of the John David Rogers Computing Center (CCJDR) in IFGW, whose staff has been extremely supportive as well.
Part of this work was done in the United States, more specifically at the University of Illinois at Urbana-Champaign (UIUC), Institute for Condensed Matter Theory (ICMT), where I was a visiting scholar with Professor David Ceperley. He has demonstrated that, much more than the extraordinary scientist that his career attests to, he is an excellent human being. He helped me by being kind and supportive, keeping my hopes alive during what was, beyond any doubt, the hardest period of my life so far. I am deeply grateful for his support, which of course extended to academic and research life. During this period I also met Ushnish Ray, an exceptionally talented scientist who guided me through the subtleties of the subjects and methods that were used throughout this work. I am thankful for his patience, consideration, and collaboration.

Last, but not least, I shall say that I have been exceedingly lucky in finding new friends and keeping old ones during these years, both in Brazil and in the United States. Their friendship is priceless and makes life worth living. During my time in the US, I met my beloved Kinsey, who has, since then, sweetened my life with unequivocal love, support, and kindness, softening my heart whenever my hard thoughts were overcoming my feelings.

I am thankful for financial support from the Conselho Nacional de Desenvolvimento Científico e Tecnológico – CNPq under grants No. 141242/2014-0 and No. 232682/2014-3, which concern both a regular doctorate scholarship and the Science Without Borders program.

Resumo

Neste trabalho as propriedades das fases superfluida (SF) e vidro de Bose (BG) do modelo Bose-Hubbard desordenado em três dimensões são investigadas usando simulações de Monte Carlo quântico. O diagrama de fases é construído utilizando desordem Gaussiana nas energias de ocupação, e dois tipos adicionais de distribuição, exponencial e uniforme, são estudados com respeito às suas influências quantitativas e qualitativas no estabelecimento do super-fluxo que caracteriza o estado superfluido. A estatística de observáveis do sistema, pertinente a distribuições de probabilidade sobre o ensemble de desordem, é estudada para diversos valores de interação entre átomos e tamanhos da rede, onde fortes efeitos de tamanho são observados. Estes efeitos estão relacionados ao mecanismo que dirige a transição SF-BG e corroboram o entendimento do caráter percolativo da transição. Apesar disso, ambos os parâmetros de ordem, a fração de superfluido e a compressibilidade, permanecem auto-promediantes por toda a fase superfluida. Nos arredores do contorno SF-BG, efeitos de tamanho são dominantes, mas ainda sugerem que a auto-promediação persiste. Estes resultados são relevantes para experimentos com gases atômicos ultrafrios, onde um procedimento sistemático de promediação sobre realizações de desordem não é tipicamente possível, e também para cálculos numéricos que precisam necessariamente considerar efeitos de tamanho quando o sistema apresenta pequenas quantidades de superfluido.

Palavras-chave: superfluidez, Monte Carlo quântico, modelo de Bose-Hubbard, desordem, vidro de Bose, percolação, auto-promediação, modelo de Bose-Hubbard desordenado, gases atômicos ultrafrios

Abstract

In this work the properties of the superfluid (SF) and Bose-glass (BG) phases in the three-dimensional disordered Bose-Hubbard model are investigated using Quantum Monte Carlo simulations. The phase diagram is generated using Gaussian disorder on the on-site potential, and two additional types of distributions, namely exponential and uniform, are studied regarding both their qualitative and quantitative influence on the establishment of the superflow that characterizes the superfluid state. Statistics pertaining to probability distributions of observables over the disorder ensemble are studied for a range of interaction strengths and system sizes, where strong finite-size effects are observed. These effects are related to the mechanism that drives the SF-BG transition and corroborate the understanding of the percolation character of the transition. Despite this, both order parameters, the superfluid fraction and the compressibility, remain self-averaging throughout the superfluid phase. Close to the superfluid-Bose-glass phase boundary, finite-size effects dominate but still suggest that self-averaging holds. These results are pertinent to experiments with ultracold atomic gases, where a systematic disorder-averaging procedure is typically not possible, and also to numerical calculations that must necessarily address finite-size effects when the system exhibits small amounts of superfluid.

Keywords: superfluidity, Quantum Monte Carlo, Bose-Hubbard model, disorder, Bose-glass, percolation, self-averaging, disordered Bose-Hubbard model, Stochastic Series Expansion, ultracold atomic gases

List of Figures

1.1 Basis functions from the nearly free particle to the atomic limit
1.2 Bloch waves constructed for a one-dimensional lattice
1.3 Coarse-graining picture in real and momentum space
1.4 Typical potential of soft-core particles
1.5 Cartoon of Bose-Hubbard model terms
1.6 Scheme of the superfluid and Mott-insulating phases of the Bose-Hubbard model
1.7 Superfluid fraction as the response to boundary motion
1.8 Phase diagram of the single-band Bose-Hubbard model
1.9 Finite-temperature phase diagram of the BHM

2.1 Division of a system into “correlation volumes”
2.2 Distribution of local critical parameters in a disordered system
2.3 Renormalization flux scheme for a clean critical point
2.4 Renormalization flux scheme for class 1
2.5 Renormalization flux scheme for class 2a
2.6 Renormalization flux scheme for class 2b
2.7 Renormalization flux scheme for class 2c
2.8 Quantum-to-classical mapping of quenched disorder
2.9 Local values of an observable within a sample of the disorder ensemble
2.10 Illustration of the Theorem of inclusions

3.1 Illustration of the addition of disorder to an experimental setup
3.2 Distribution of Bose-Hubbard terms for speckle disorder
3.3 Different types of diagonal-disorder distributions
3.4 Renormalization flux scheme for the DBHM
3.5 Illustration of the DBHM phase diagram
3.6 Phase diagram of the DBHM obtained by SMFT
3.7 Phase diagram of the DBHM obtained by LMF
3.8 Phase diagrams of the DBHM at unit filling
3.9 Phase diagram of the DBHM at half-filling
3.10 Crossover between the low-κ and high-κ BG
3.11 Onset of superfluidity in terms of chemical potential
3.12 Percolation picture of the SF-BG transition

4.1 Illustration of a random experiment to calculate the area of a lake
4.2 Examples of one-dimensional Brownian bridges
4.3 Transformation of random variables

5.1 Illustration of the configuration space created in Handscomb's method
5.2 Configuration space after truncation of the series
5.3 Bond decomposition for the BHM
5.4 World-line picture and scattering vertices
5.5 Example of bonds for 3 lattice sites with PBC
5.6 Example of diagonal update
5.7 Illustration of the insertion of a worm for loop update
5.8 Possible movements of the worm on a scattering vertex
5.9 Example for the estimate of equal-time correlation functions

6.1 Thermodynamic quantities calculated via ED and SSE
6.2 Dependence of Bose-Hubbard terms on the lattice depth
6.3 Physical properties and grand-canonical phase diagrams for s = 10 and 14
6.4 Physical properties and grand-canonical phase diagrams for s = 18
6.5 Illustration of the LDA procedure
6.6 Radial distributions of physical quantities for s = 10
6.7 Radial distributions of physical quantities for s = 18
6.8 Phase diagram for ρ = 0.5
6.9 Phase diagram for ρ = 0.75
6.10 Phase diagram for ρ = 1.25
6.11 Phase diagram for ρ = 1.0

7.1 Example of the evolution of averages with the number of samples
7.2 Example of the evolution of variances with the number of samples
7.3 Example of the evolution of probability distributions with the number of samples
7.4 Sample fluctuations of order parameters over the phase diagram
7.5 Relative moments of ρ_s along Δ/U = 0.5
7.6 Relative moments of κ along Δ = 0.5
7.7 Probability distributions and quantile-quantile plots of ρ_s for Δ/U = 0.5
7.8 Probability distributions and quantile-quantile plots of κ for Δ/U = 0.5
7.9 Distribution of the samples' standard deviation
7.10 Probability distributions of order parameters before and after re-scaling energy shifts
7.11 Probability distributions obtained from shuffling the random potential

8.1 Three-dimensional maps of local properties for different samples
8.2 Local properties integrated over one spatial direction
8.3 Relation between the wave function and the random potential
8.4 Maps for a sample with average superfluid fraction
8.5 Comparison of ρ_s for different distributions of the random potential

9.1 Scaling of average values of the order parameters
9.2 Magnitude of disorder fluctuations for different lattice sizes
9.3 Scaling of P_L for U/t = 22.0, Δ/U = 0.5
9.4 Scaling of P_L for U/t = 62.0, Δ/U = 0.5
9.5 Scaling of P_L for U/t = 72.0, Δ/U = 0.5
9.6 Scaling of ρ_s for U/t = 72.0, Δ/U = 0.5
9.7 Comparison of the integrated random potential for L = 12
9.8 Scaling of D for U/t = 62.0, Δ/U = 0.5
9.9 Scaling of D for U/t = 72.0, Δ/U = 0.5

A.1 Illustration of the standard deviation of a distribution
A.2 Illustration of skewed distributions
A.3 Illustration of distributions with different kurtoses

C.1 Example of an experimental setup

Contents

Introduction

I Theory

1 The Bose-Hubbard model
1.1 The art of coarse-graining
1.1.1 The ferromagnet paradigm
1.2 Coarse-graining of an excessively microscopic model: derivation of the Bose-Hubbard Hamiltonian
1.2.1 A very general Hamiltonian
1.2.2 Addition of a lattice: Bloch waves
1.2.3 Choice of an appropriate basis set: Wannier functions
1.2.4 Further simplifications: energy bands
1.2.5 The single-band standard Bose-Hubbard Hamiltonian
1.3 Physical properties of the model
1.3.1 Insights from the double-well potential
1.3.2 Superfluid and Mott-insulating phases
1.3.3 Energy spectrum and excitations
1.3.4 Order parameters and phase diagram
1.3.5 Finite-temperature effects

2 General effects of disorder on continuous phase transitions
2.1 Harris' criterion
2.2 Chayes' criterion
2.3 Fate of critical points under addition of disorder
2.4 Self-averaging of observables
2.5 Theorem of inclusions

3 The disordered Bose-Hubbard model
3.1 Addition of disorder to the Bose-Hubbard Hamiltonian
3.1.1 Correlation between disorder distributions in Hamiltonian terms
3.1.2 Off-diagonal-disorder Bose-Hubbard Hamiltonian
3.1.3 Diagonal-disorder Bose-Hubbard Hamiltonian
3.1.4 Types of diagonal-disorder distributions
3.2 Expected effects of disorder
3.2.1 Violation of Harris' criterion
3.2.2 Character of the Griffiths singularities
3.2.3 Exigency of an intervening phase
3.3 Phase diagrams
3.3.1 Commensurate and incommensurate fillings
3.3.2 Reentrant behavior of the superfluid phase
3.4 The Bose-glass
3.4.1 Onset of superfluidity and the percolation picture
3.4.2 Order parameters
3.5 Finite-temperature effects

II Methods

4 Generic numerical methods
4.1 Exact diagonalization
4.1.1 Selection of a suitable basis set
4.1.2 Direct numerical diagonalization
4.1.3 Example for the DBHM
4.2 Monte Carlo
4.2.1 A picturesque random experiment
4.2.2 Estimators
4.3 Sampling techniques
4.3.1 Transformation of random variables
4.3.2 Acceptance-rejection
4.3.3 Metropolis algorithm

5 Stochastic Series Expansion
5.1 Handscomb's method
5.2 Extended sampling
5.3 Diagonal update
5.4 Loop update
5.5 Observables
5.5.1 Z-sector
5.5.2 G-sector
5.6 CSSER

III Applications

6 Preliminary results
6.1 Comparison between QMC and exact diagonalization
6.2 Phase diagrams for the DBHM
6.2.1 Grand-canonical maps
6.2.2 Trapped systems
6.2.3 Fixed-filling maps

7 Aspects of the disorder ensemble
7.1 Definition of disorder-statistical quantities
7.2 Disorder equilibration
7.3 Size of fluctuations over the phase diagram
7.4 Differences in probability distributions
7.5 Origins of non-Gaussian behavior

8 Features of the random potential
8.1 Local properties in different samples
8.2 Effects of different disorder distributions

9 Finite-size scaling of quantities
9.1 Disorder averages and fluctuations
9.2 Probability distributions
9.3 Relative variances and the self-averaging query

10 Concluding remarks

References

Appendices

A Statistics toolbox

B Central Limit Theorem

C Example of an experimental setup

Index of acronyms

Introduction

As my initial assertion, I shall advise the reader of this dissertation that I have deliberately opted, in writing this document, for the path of completeness and protraction rather than the path of conciseness, even though I have been advised not to do so. My motivation for this choice comes from the fact that, on several occasions while studying and researching subjects related to the topics that I will discuss here, I have struggled to find references that present methodologies, motivations, and conclusions – elements of the actual process of making science – in a more detailed manner, which I believe is a common dilemma in the realm of scientific papers and articles. Most of my understanding of the physics of bosonic cold-atom systems was only achieved after reading dissertations that discuss the subject at a more basic level, even though the ultimate knowledge is condensed into articles from several authors. I accordingly want my work to be presented in a fashion that is similar to those that were so important to me. Furthermore, I strongly believe that the subtleties I traversed along my journey to obtain my doctorate are worth documenting because they can help others avoid obstacles that I have found, which hopefully can facilitate the progress of this field. For the reader that is interested in a more concise documentation of the subject, I suggest referring to my paper [1] and also to those that I have cited within this dissertation. As a matter of fact, I have also opted for a quite detailed bibliography, citing not only those works that have directly formed and contributed to my knowledge but also the original ones. The work that is going to be presented here is entirely a result of my efforts to make a relevant contribution to condensed matter physics.
This enterprise started with my advisor Professor Silvio Vitiello at IFGW/Unicamp in 2014, and gained a more concrete aspect when I visited Professor David Ceperley at the Institute for Condensed Matter Theory (ICMT) of the University of Illinois at Urbana-Champaign (UIUC) during the 2015/16 school year. A large portion of the results were obtained after I came back to Brazil, specifically in 2017. I should also say that my collaborator Ushnish Ray played an extremely relevant role in obtaining these results. In fact, if it were not for formality, as he is not a professor yet, I would call him an advisor as well. I believe that the conditions for performing scientific research, particularly in condensed matter physics, are great in Campinas and excellent in Urbana-Champaign; I therefore strongly suggest both places for those interested in pursuing academic life. In this introduction, I will present my motivations for studying the subject of this thesis and also discuss why and how this work can be relevant to the scientific community. I will then explain how the dissertation is organized and give some details on the subject of each chapter.

Motivation

Phases of matter are inherently a subject of interest to the human mind. They have been an object of study since the early days of civilization, playing religious, philosophical, and scientific roles throughout the ages. Although very far away from what scientists nowadays find a reasonable description of what a phase of matter is, how it can be changed and, from a more practical point of view, how it can be turned into something useful to society, these notions long ago crossed several minds in different contexts, with diverse concerns and various beliefs. For instance, the social importance of alchemists dates back at least to the ancient Egyptian civilization. Similar ideas proliferated in Greek philosophy, chiefly in the context of the so-called fundamental elements. In modern ages, the capability of manipulating precious metals was indeed a source of power and distinction. Despite the enormous distance in time and scientific knowledge, there is a common piece of interest that persists: some minds seem to be attracted by the possibility of changing one phase of matter into another. In this sense, condensed-matter physicists can be seen as the alchemists of the scientific era. In technical terms, this comprises the understanding of phase transitions and, more generally, critical phenomena. For that, a deep acquaintance with phases of matter in their own right is primordial.

During my undergraduate studies at IFGW/Unicamp I was lucky to be exposed, relatively early, to a tool that I think is incredibly valuable in learning the scientific method: computational physics. At least in Brazil, it is a common complaint among students that there is a sort of gap between learning theoretical and experimental physics, which is unfortunate since empiricism is vital for the development of science. Computational physics is able to partially fill this gap, mostly because computers are nowadays largely accessible.
One can therefore use simulations to promote the embracement of concepts that are presented in classes and books. On a bigger scale, they can actually be used to endorse scientific development by giving new insights to both theory and experiment. For these reasons I have found motivation in directing my career toward the use of computational tools to study condensed matter physics. Even though classical phases of matter and phase transitions are extremely interesting and there are a lot of open questions regarding their mechanisms, my attention was completely taken by quantum phases of matter when I was first introduced to the phenomenon of superfluidity in 2008, the very first year of my physics career. Since then, it has been my goal to study macroscopic quantum phenomena, which are wonderfully enthralling. Fortunately, to make these two resolutions of mine converge, there exists a powerful method to investigate quantum systems using computers: Quantum Monte Carlo. I have therefore dedicated myself to learning and applying such a tool to continuum systems, more specifically 4He, during my Master's degree, and to lattice systems during my doctorate, which comprises the work that is presented here. In addition, we have witnessed in the last twenty years or so a prodigious development of what is generally called the field of synthetic materials, chiefly boosted with the help of ultracold atomic gases and artificial lattices that can experimentally realize standard models of condensed matter physics with large precision and great control. This allows for a direct test of our understanding of theoretical concepts and of the ingredients that we should consider in describing the features of a problem, such as the role of interactions between particles, which lie at the heart of many-body, collective phenomena.
Moreover, such realizations make it possible to study the interplay between these different ingredients, which gives physicists the possibility to distinguish their importance in describing the observed properties of the system, a task that is extremely complex when considering strongly correlated, non-perturbative, and/or disordered systems. Regarding applications, these systems will potentially lead to the development of new materials that can exploit the physics of the quantum world on a macroscopic, everyday scale. The most prominent examples are perhaps superfluids and superconductors, which can offer a dramatic change of our common perspective on energy and heat transport, for instance. Concurrently with the development of these experimental techniques, which allow one to manipulate on the order of 10⁵ atoms, there has been a substantial enhancement of the available computational power, especially with the aid of supercomputers. This increase in resources now allows us to simulate systems that are actually under the same conditions as the experimental ones, regarding control parameters such as temperature, volume or number of particles, interaction strength, disorder, and lattice geometry, for instance. It is then possible to directly benchmark theory and experiment, which greatly improves our ability to test theoretical ideas and also propose new experimental instances. In spite of that, the resources are finite and there is a huge number of problems that society faces today that would have interesting and relevant consequences if they could be addressed. We therefore must use such resources wisely. As a picturesque example, we should not use supercomputers to simulate systems under experimental conditions if results from a smaller system that could be obtained from a laptop are already sufficiently precise! The work that is presented in this dissertation was intended to make contact with such ideas.
I have dedicated my efforts to studying systems formed by bosonic atoms in a lattice at very low temperatures, which are suitably described by the so-called Bose-Hubbard model. These systems, which are nowadays largely reproduced in laboratories around the world, typically exhibit a superfluid and an insulating phase resulting from the competition between the two primordial ingredients of the model: interaction between atoms and diffusion of atoms throughout the lattice. The specific problem that I chose to address was then what the effects of the addition of disorder to the lattice, which is a new ingredient, are, from both qualitative and quantitative perspectives. At the time I made this decision I was extremely captivated by the possibility of experimentally realizing my own calculations! Even though this is still possible, as I will try to make clear, during my studies on this system my main research goal has become to help both experimentalists and theorists not to waste their time and resources. Hopefully I will convince the reader that I was indeed able to partially fulfill my ambitions.

Purpose of this work

The qualitative consequences of the addition of quenched disorder to the Bose-Hubbard model were established in 1988-89 with the seminal works by Fisher et al. [2,3]. Perhaps the most significant effect that was then predicted is the existence of the Bose-glass phase, which exhibits peculiar features that I shall discuss along this dissertation. Since then, this model has received a lot of attention and its properties have been explored using renormalization group (RG) approaches and numerical techniques, as well as experiments [3–13].

The static nature of the quenched type of disorder has the important consequence of demanding that the free energy of the system be averaged over the different static realizations, making it much more technically challenging than its counterpart, the so-called annealed disorder [14]. In the latter case, disorder can be handled by averaging the partition function over the disorder degrees of freedom, which are thus in thermal equilibrium with the remaining degrees of freedom. Conversely, in order to properly address a certain physical property of the quenched-disorder system that is encapsulated in an observable X, one must consider the statistics of X – averages, variances, and ultimately probability distributions – over a number of different disorder instances arising from realizations of the random, disordered potential to which the system is subjected. In other words, for a certain set of control parameters, these different disorder instances constitute what we shall call a disorder ensemble. The purpose of this work can be condensed into systematically studying the consequences of exploiting this ensemble. Even though the disordered Bose-Hubbard model has been largely discussed in the scientific literature from both experimental and theoretical perspectives, I have not found a work that addresses the question of how significant the effects of considering different disorder instances can be for the superfluid properties of the model. At the same time, I have not found a discussion of the effects of considering different types of disorder distributions from which the random potentials are taken. Furthermore, even though the question of self-averaging of observables in disordered systems is vital, I have not found it addressed in the literature. These questions, which are going to be clarified along this dissertation, make this work original.
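The bookkeeping implied by the disorder ensemble can be made concrete with a minimal sketch. The snippet below draws quenched realizations of a Gaussian on-site potential and accumulates the statistics of an observable X over them; the lattice size, disorder strength, and the placeholder observable are illustrative assumptions only, not the Quantum Monte Carlo estimators actually used in this work.

```python
import numpy as np

rng = np.random.default_rng(42)

L = 8            # linear lattice size (illustrative)
n_samples = 500  # number of quenched disorder realizations
delta = 0.5      # disorder strength (illustrative units)

def observable(epsilon):
    """Placeholder for a physical estimator such as the superfluid
    fraction; here just a smooth scalar function of the potential."""
    return 1.0 / (1.0 + epsilon.var())

samples = []
for _ in range(n_samples):
    # one quenched instance: the random on-site energies are drawn once
    # and held fixed while the observable is "measured"
    epsilon = rng.normal(0.0, delta, size=(L, L, L))
    samples.append(observable(epsilon))
samples = np.asarray(samples)

# statistics over the disorder ensemble: the quenched average is taken
# over measured observables, not over partition functions
mean = samples.mean()
var = samples.var(ddof=1)
relative_variance = var / mean**2  # its decay with L signals self-averaging
print(f"<X> = {mean:.4f}, var(X) = {var:.3e}, R_X = {relative_variance:.3e}")
```

Repeating this for increasing L and checking whether the relative variance R_X vanishes is, in essence, the self-averaging test carried out in Part III.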
The practical relevance of studying these problems is to quantify, for both experiments and numerical calculations, to what extent one must consider the averaging of physical properties over the disorder ensemble and, additionally, to quantify the difference arising from considering more idealistic types of disorder distributions, such as the uniform case, and distributions that are closely related to what is found in experiments, such as the speckle-field one. In order to accomplish that, I have used Quantum Monte Carlo, specifically the Stochastic Series Expansion method, to simulate the disordered Bose-Hubbard model for a range of physical parameters that covers the superfluid phase and part of the Bose-glass phase, obtaining the order parameters and other thermodynamic properties of the system over the disorder ensemble.
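For illustration, the three types of diagonal-disorder distribution discussed in this work can be sampled as below; drawing them with zero mean and a common scale parameter delta is an assumption made here so that the cases are directly comparable, and the exact parametrizations used in the calculations are those defined in Chapter 3.

```python
import numpy as np

rng = np.random.default_rng(0)
delta = 0.5    # common disorder scale (illustrative units)
n = 100_000    # number of on-site energies drawn per distribution

# uniform: the idealized box distribution on [-delta, delta]
uniform = rng.uniform(-delta, delta, size=n)

# Gaussian: standard deviation set to delta
gaussian = rng.normal(0.0, delta, size=n)

# exponential: one-sided, as produced by optical speckle fields,
# shifted here to zero mean so the three cases are comparable
exponential = rng.exponential(delta, size=n) - delta

for name, eps in (("uniform", uniform),
                  ("gaussian", gaussian),
                  ("exponential", exponential)):
    print(f"{name:12s} mean = {eps.mean():+.3f}  std = {eps.std():.3f}")
```

Note that at equal delta the distributions have different widths and asymmetries (the exponential case is strongly skewed), which is precisely why their quantitative influence on the superflow must be compared rather than assumed.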

Organization of the text

This dissertation is divided into three parts that are meant to be independent, contingent on the interests of the reader. Part I – Theory contains the theoretical framework that I believe is relevant to discuss the physics of the disordered Bose-Hubbard model and is organized in three chapters. Part II – Methods encloses the numerical methods and techniques that I have used to simulate the model and obtain its physical properties, and is composed of two chapters. Finally, Part III – Applications presents and discusses the results, obtained from the methods previously described, in the light of the theoretical perspective. It contains five chapters. I must tell the reader beforehand that this is where my original contributions to the scientific community are placed. In addition, I have included at the end of the document a section with Appendices containing details that, even though not vital to explaining the results and discussing the physics of the system,

I strongly believe are necessary and relevant if one is interested in reproducing the calculations that I have performed. Before describing the contents of each chapter in more detail, I shall notify the reader that, even though I have tried to present the subjects of Parts I and II in a closed, complete form, this is certainly a very difficult task. I therefore recommend consulting the references that I have cited along the text in case the reader requires further details or clearer explanations. I shall also make a couple of technical remarks. Most of the figures that I have included are in color and, in spite of my efforts to make them meaningful even when printed in black and white, I am convinced that printing the whole document in gray scale will entail some loss of quality. Furthermore, given that the text is written in LaTeX with the hyperlink package, I recommend reading it from a digital device rather than from a hard copy.

Overview

Chapter 1 – The Bose-Hubbard model broadly discusses a few features of condensed matter physics that are relevant to establish the importance of a model that describes a certain physical system. This is done through the common, textbook paradigm of the ferromagnetic phase transition. I then present a derivation of the Bose-Hubbard Hamiltonian by sequentially introducing the constituting elements of bosonic cold-atom systems: a periodic potential (lattice), interactions, diffusion and so on. I also give explicit details on the approximations that are made to obtain the standard, single-band Bose-Hubbard model (BHM). Notice that, in this chapter, I do not consider disorder in the system. After deriving the Hamiltonian, I discuss the physical properties of the system using insights from an analytical solution of a double-well potential and present phase diagrams, also deriving expressions for the energy spectra in both the strongly and weakly correlated regimes. Chapter 2 – General effects of disorder on continuous phase transitions, as the name itself suggests, discusses the effects of adding quenched disorder to a clean model that exhibits a second-order phase transition. Even though the BHM fits in this category, the discussion extends to other systems as well. I present two widely known criteria – Harris' and Chayes' – that, under very general assumptions, establish the relevance of the disorder operator to the problem. I then discuss the different possibilities that one could expect regarding the critical properties of the system in the light of renormalization-group arguments. Subsequently, I discuss the self-averaging question that is crucial to this dissertation and, finally, the Theorem of Inclusions which, even though quite general, was initially derived in the context of the BHM. Chapter 3 – The disordered Bose-Hubbard model (DBHM), which closes Part I, collects the concepts from the two previous chapters in an attempt to predict the features of the DBHM.
I discuss the different ways that one can actually add disorder to the BHM, giving more details for the case that is central to this dissertation: quenched, diagonal disorder. I also present the three types of disorder distributions that are going to be used in Part III. The relevance of disorder to the system is then discussed, with details on the new intervening phase, the Bose glass, in a fashion similar to the initial chapter: excitations, phase diagrams and other aspects.

The percolation character of the superfluid/Bose-glass transition, a feature that is going to be important for the analysis of the results that I have obtained, is particularly discussed as well. Chapter 4 – Generic numerical methods, which opens Part II, presents relevant numerical techniques that were used to obtain the results of this work. Specifically, I discuss direct numerical diagonalization, necessary to obtain physical properties of the model and also to benchmark results from more abstract methods such as Quantum Monte Carlo, which is also discussed at a very basic level. I then discuss the question of numerically sampling distributions, from a more direct framework, namely transformation of variables, to the far-reaching, powerful Metropolis algorithm. Chapter 5 – Stochastic Series Expansion (SSE), closing Part II, presents the particular method that was used to simulate the DBHM. I start with a derivation of Handscomb's method, which is the historical root of SSE, and then generalize it to obtain SSE according to what was originally done by the creator of the method, A. Sandvik. I then discuss the sampling procedures with explicit examples of the diagonal update process and the loop update that is realized via the directed-loop algorithm. Finally, I present how relevant observables are calculated within the framework and language of SSE. Chapter 6 – Preliminary results, which opens Part III, where my original contributions start to be presented, discusses basic features of the DBHM, such as the phase diagrams obtained for the Gaussian type of disorder. I also include a comparison between results obtained from Quantum Monte Carlo and exact diagonalization of the DBHM. This chapter is the only one in the entire dissertation that addresses the question of the trapped, non-homogeneous systems that are found in experiments.
In particular, using the Local Density Approximation, I present estimates of the features of atomic clouds, such as the shell structure of phases, that are corroborated in experimental systems. I then discuss different types of maps that one can consider when establishing the phase diagram of the DBHM. Chapter 7 – Aspects of the disorder ensemble discusses the features of exploring the random potential to a larger degree by considering simulations for several samples with different disorder realizations. I explicitly address the question of equilibrium in the disorder sense, and subsequently analyze statistical features of the order parameters of the DBHM, with special attention to the fluctuations over the disorder ensemble and the shape of their probability distributions. I then present the relation between the deviation from Gaussian behavior of the superfluid order parameter and the percolation mechanism of the superfluid/Bose-glass transition. Chapter 8 – Features of the random potential presents a more detailed discussion of the role of the peculiarities of the different realizations of the random potential in determining the physical properties of the system. In particular, I show that the wave function of this bosonic system, which is strictly related to the establishment of a superfluid, is directly connected to the formation of puddles where the random potential is negative, which corroborates the understanding of the percolation mechanism and the consequent deviation from Gaussian behavior of the superfluid order parameter of the system. Additionally, I present results for different disorder distributions – Gaussian, box and exponential – where quantitative differences are observed and related to the particular shape of the distributions. I argue that these differences are a consequence of the energetic balance between the interaction energy of the atoms and the occupation energy of lattice sites coming from the random potential.
Chapter 9 – Finite-size scaling of quantities discusses the influence of the lattice size on the properties and features that were presented in the previous three chapters. It is shown that, as the size of the lattice is increased, different realizations of the random potential start to look more similar from the perspective of their physical properties. In other words, the order parameters of the system are self-averaging quantities, which is explicitly shown by the scaling of statistical quantities of the disorder ensemble to the thermodynamic limit. Here, we have performed simulations with lattice sizes that are comparable to what is found in experiments. Chapter 10 – Concluding remarks summarizes and contextualizes the results obtained in Part III of this dissertation. I also discuss some prospects on topics that I think would be interesting to study in the future. Appendix A defines all the statistical quantities that were used in this dissertation, while Appendix B discusses the Central Limit Theorem and Appendix C presents an example of an experimental setup for ultracold atoms in disordered optical lattices.

Part I

Theory

Chapter 1

The Bose-Hubbard model

One remarkable feature of condensed matter physics is that it can encapsulate a huge amount of physical knowledge into what we call models. To construct these models, the art of coarse-graining is far-reaching, being quite popular in the field. In this chapter I will present a suitable derivation of the Bose-Hubbard Model (BHM) that simulates soft-core bosons in an optical lattice, discussing the meaning of the physical parameters and the scope of the model. I will also discuss pertinent physical properties and present the phase diagram for this system.

1.1 The art of coarse-graining

I will start this discussion with the paradigmatic example of a ferromagnet. From a layman's point of view this may not be the best option, since the bare concept of magnetization already carries a significant amount of physical knowledge, but for the purposes of what condensed-matter physics really wants to clarify, it is consensually the most pictorial and intuitive case. Moreover, I am assuming that a layman who lays eyes on this thesis is not that much of a layman. The following discussion is strongly based on Ref. [15], in similarity to what is found in Refs. [16, 17].

1.1.1 The ferromagnet paradigm

A ferromagnet is a certain material – a crystal, for instance – that exhibits a net, finite magnetization in its natural state. This means that even when it is left alone, without any kind of fields, external interferences or, in general, in the absence of any perturbations, the material is magnetized. If you come along with a compass and put it close to a ferromagnet, the needle will do some crazy movements. The nature of the constituents of such a material, in particular the way they interact, is responsible for the macroscopic emergent behavior that is the magnetization. They fabulously arrange themselves in order to produce this unique, important physical property. Furthermore, we also know that if we heat up a piece of ferromagnet too much, it loses this property. It becomes a paramagnet. There is no finite magnetization unless we provide an external magnetic field. By changing the temperature, it is then possible to transform one phase into the other. The same material, with exactly the same constituents, has different physical properties according to a certain control parameter.

In order to describe the emergence of spontaneous magnetization in a crystal, we need to specify the constituting parts that form the material. Perhaps the most fundamental picture that we can draw, since I do not intend to go into particle-physics depths, is that the composing elements are atomic nuclei and electrons interacting via the Coulomb force. I will call this approach number 1. It is very fundamental, microscopic. If we could derive the magnetization of a ferromagnet from there, we would definitely have a successful theory. On the other hand, this route is hardly practical and actually it is not smart. We know more than this. For instance, from solid state physics, we can think of electrons in a prescribed crystal lattice with an effective interaction (approach 2). Parameters specifying the interactions, as well as the band structure, crystal fields and so on, are widely known. This route is more suitable because the formation of crystal lattices and inner atomic shells is remote from the spontaneous magnetization that we want to describe. There is a little bit more that we can say that will further simplify our approaches. The source of magnetization is undoubtedly the electronic spins of incomplete electronic shells (d and f shells in Fe, Ni and Co, for instance). We also know that the exchange effect, which combines the Coulomb interaction and Pauli exclusion, tends to align spins through an effective short-range interaction, since this lowers the energy of the system. With that in mind, a proper route would be to consider classical spins, one in each unit cell of a crystal lattice, with a specified spin-spin interaction whose parameters are adjusted to simulate what approach 2 would imply. This is approach 3. The quantum nature and the electronic motion are ignored. However, the physics of interest lies in how a large number of spins behave together, namely, the net magnetization. Being a little bit crude on the unit-cell scale cannot matter much.
The approaches 1, 2 and 3 constitute models for a ferromagnet. They are representations in terms of parameters that comprise interactions between the elemental constituents of a system that we want to describe. The transition from a more microscopic level of description to a less refined one is what we call the coarse-graining procedure. In each step, we need to know a lot of physics to put as much information as we can into parameters that will mimic the physics that lies within lower scales that we suspect are likely to not play a central role in the observed macroscopic physical properties of the system.

1.2 Coarse-graining of an excessively microscopic model: derivation of the Bose-Hubbard Hamiltonian

Models are usually specified by their Hamiltonian, which encloses the form with which the components of the model interact and, therefore, governs the dynamics and thermodynamics of the described system. I will now start a series of simplifying hypotheses that will transform a very complicated Hamiltonian into the pragmatic Bose-Hubbard model.

1.2.1 A very general Hamiltonian

The use of field operators is a reasonable way to start modeling a system from the most general perspective because it requires the minimum amount of physical knowledge about the system, which can be seen from its definition:

\[
\hat{\psi}^\dagger(\vec{x}) = \sum_{\nu} \langle \vec{x} | \nu \rangle \, \hat{a}^\dagger_{\nu}, \tag{1.2.1}
\]

where $|\nu\rangle$ is a single-particle eigenstate and $\hat{a}^\dagger_\nu$ is an operator that creates a particle in that state. Thus, $\hat{\psi}^\dagger(\vec{x})$ creates a particle at position $\vec{x}$ in any possible single-particle state. We do not know anything about the quantum state of this particle except for its position. It is plausible to presume that the particles have a kinetic degree of freedom and that they interact in pairs via some potential $U(\vec{x}_1, \vec{x}_2)$. In the scope of this thesis, three-body and higher-order interactions will not play any important role. I will also consider that the particles can be subjected to an external field $V(\vec{x})$ that is a one-body potential. The Hamiltonian operator that describes these particles, in second-quantized form, can then be written as

\[
\hat{H} = \int d^d x \, \hat{\psi}^\dagger(\vec{x}) \left[ -\frac{\hbar^2}{2m} \vec{\nabla}^2 + V(\vec{x}) \right] \hat{\psi}(\vec{x}) + \frac{1}{2} \int d^d x_1 \int d^d x_2 \, \hat{\psi}^\dagger(\vec{x}_1) \hat{\psi}^\dagger(\vec{x}_2) U(\vec{x}_1, \vec{x}_2) \hat{\psi}(\vec{x}_2) \hat{\psi}(\vec{x}_1). \tag{1.2.2}
\]
From the above discussion, and taking into account the integrations in configuration space, this Hamiltonian describes particles at any point of space, occupying any possible single-particle states. It could describe any system whose particles interact pairwise, so it is really very general. The first term is composed of one-body operators, so it essentially constitutes many problems at the single-particle level that are likely to be soluble using textbook tools of quantum mechanics, since the wave functions would be separable. The second term introduces interactions, characterizing the many-body problem. It clearly makes the Hamiltonian non-diagonal, and most of the time diagonalizing such a general form is just unfeasible. Actually, it is hard to even get any physical insight from this form. We need to be more specific in what we want to describe in order to obtain a more tractable form.

1.2.2 Addition of a lattice: Bloch waves

The Bose-Hubbard model is a lattice model. In many problems of condensed matter physics, the existence of a lattice is a consequence of the mechanism of spontaneous symmetry breaking that underlies phase transitions. For instance, a liquid freezes into a solid as we reduce its temperature. The solid is rigid, and its macroscopic physical properties differ from the liquid because its constituents exhibit a periodic arrangement throughout the entire space that the system occupies. This periodic arrangement, which is called a lattice, arises from the interactions between the particles and is responsible for the stiffness, or rigidity, of the solid. Particles in a liquid do not possess an organized spatial structure. In contrast to the solid, the knowledge of the position of a certain particle does not bring information about the position of all other particles. In other words, the density of particles does not exhibit long-range correlations. The symmetry behind such a phase transition is that of spatial translation and rotation. Although a question of its own interest and importance, very often the formation of a lattice does not concern the description of a physical problem. In the field of synthetic materials, this is frequently the case because the lattice itself is prescribed, and does not necessarily arise from the dynamics of the components of the system. Conversely, it determines the dynamics of the system and, that being so, it can be seen as a fundamental ingredient of the model describing such a material. In this case, the lattice is part of the model. This is precisely the case of cold-atom systems in optical lattices, which are the main experimental realization of the model that is central to this thesis.
A practical manner of including a lattice in the Hamiltonian written in the last section is to consider that this structure can be encapsulated by a periodic one-body potential $V(\vec{x})$ to which the particles are subjected. By periodic we mean that

\[
V(\vec{x}) = V(\vec{x} + \vec{a}), \tag{1.2.3}
\]
where $\vec{a}$ is any linear combination of the translation vectors of the lattice. This periodicity results in a band structure, where the eigenstates of the Hamiltonian can be grouped into bands. Consider a single particle in one spatial dimension, such that the periodicity condition can be written as $V(x) = V(x + ma)$, where $a$ is the lattice parameter and $m \in \mathbb{Z}$. A natural route to deal with periodic functions is to cast their Fourier series expansion,

\[
V(x) = \sum_{n=-\infty}^{+\infty} \tilde{V}(q_n) e^{i q_n x}, \tag{1.2.4}
\]
where $q_n = 2\pi n/a$ are the reciprocal lattice vectors. Using this form, it is interesting to notice the action of the single-particle Hamiltonian $H(x) = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + V(x)$ on a plane wave $f_k(x) = \frac{1}{\sqrt{L}} e^{ikx}$:

\[
\begin{aligned}
H(x) f_k(x) &= \left[ -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + \sum_{n=-\infty}^{+\infty} \tilde{V}(q_n) e^{i q_n x} \right] \frac{1}{\sqrt{L}} e^{ikx} \\
&= \frac{1}{\sqrt{L}} \left[ \frac{\hbar^2 k^2}{2m} e^{ikx} + \sum_{n=-\infty}^{+\infty} \tilde{V}(q_n) e^{i(q_n+k)x} \right] \\
&= \frac{\hbar^2 k^2}{2m} f_k(x) + \sum_{n=-\infty}^{+\infty} \tilde{V}(q_n) f_{k+q_n}(x),
\end{aligned}
\tag{1.2.5}
\]

so that the resulting function belongs to the subspace $\mathcal{S}_k \equiv \{f_k, f_{k+q_1}, f_{k-q_1}, f_{k+q_2}, f_{k-q_2}, \ldots\}$, and it is clear that the action of $H(x)$ on any member of this subspace is a closed operation. This means that these functions span the subspace, so that a general solution for the eigenfunctions of the Hamiltonian $H(x)$ can be written as

\[
\psi_k(x) = \sum_n \tilde{u}_n(k) \frac{1}{\sqrt{L}} e^{i(q_n+k)x} \equiv e^{ikx} u_k(x), \tag{1.2.6}
\]
where
\[
u_k(x) = \frac{1}{\sqrt{L}} \sum_n \tilde{u}_n(k) e^{i q_n x} \tag{1.2.7}
\]
is called a Bloch wave. Notice that it represents a plane wave modulated by a periodic function. In addition, two subspaces $\mathcal{S}_k$ and $\mathcal{S}_{k'}$ are equal if, and only if, $k = k' + n(2\pi/a)$; therefore the set of $k$ values in the range $-\pi/a < k < +\pi/a$ provides unique labels for the corresponding set of subspaces $\{\mathcal{S}_k\}$. This set is the so-called first Brillouin zone. The time-independent Schrödinger equation for $\psi_k(x)$ then reads

\[
-\frac{\hbar^2}{2m} \frac{d^2}{dx^2}\psi_k(x) + V(x)\psi_k(x) = E_k \psi_k(x), \tag{1.2.8}
\]
so that

\[
-\frac{\hbar^2}{2m} \sum_n \tilde{u}_n(k) \frac{1}{\sqrt{L}} \frac{d^2}{dx^2} e^{i(q_n+k)x} + V(x) \sum_n \tilde{u}_n(k) \frac{1}{\sqrt{L}} e^{i(q_n+k)x} = E_k \sum_n \tilde{u}_n(k) \frac{1}{\sqrt{L}} e^{i(q_n+k)x},
\]

\[
\sum_n \frac{\hbar^2 (k+q_n)^2}{2m} \tilde{u}_n(k) \frac{1}{\sqrt{L}} e^{i(q_n+k)x} + \sum_n \tilde{u}_n(k) V(x) \frac{1}{\sqrt{L}} e^{i(q_n+k)x} = E_k \sum_n \tilde{u}_n(k) \frac{1}{\sqrt{L}} e^{i(q_n+k)x}, \tag{1.2.9}
\]
and taking the inner product with $\frac{1}{\sqrt{L}} e^{i(k+q_m)x}$ we obtain

\[
\sum_n \left[ \frac{\hbar^2 (k+q_m)^2}{2m} \delta_{mn} + \tilde{V}(q_m - q_n) \right] \tilde{u}_n(k) = E_k \, \tilde{u}_m(k). \tag{1.2.10}
\]

This is a matrix equation that can be diagonalized to obtain the required solutions $\{\psi^n_k(x), E^n_k\}$, where $n$ denotes a band. As stated above, the periodic potential brings in this structure of eigenpairs $\{\psi_i, E_i\}$, where $i = i(k, n)$ indexes the band $n$ and the crystal momentum $k$, that characterize the solution of the Schrödinger equation for the system. We can then rewrite the single-particle field operator using these basis functions, in terms of momentum, as
\[
\hat{\psi}(\vec{x}) = \sum_n \sum_{\vec{k}} \psi^n_{\vec{k}}(\vec{x}) \, \hat{b}_{n,\vec{k}}, \tag{1.2.11}
\]
where now $\hat{b}_{n,\vec{k}}$ annihilates a particle with momentum $\vec{k}$ in the energy band $n$. The Hamiltonian (1.2.2) then becomes

\[
\hat{H} = \sum_{n,\vec{k}} E_{n,\vec{k}} \, \hat{b}^\dagger_{n,\vec{k}} \hat{b}_{n,\vec{k}} + \sum_{\substack{n_1,n_2,n_3,n_4 \\ \vec{k}_1,\vec{k}_2,\vec{k}_3,\vec{k}_4}} U^{\vec{k}_1,\vec{k}_2,\vec{k}_3,\vec{k}_4}_{n_1,n_2,n_3,n_4} \, \hat{b}^\dagger_{n_1,\vec{k}_1} \hat{b}^\dagger_{n_2,\vec{k}_2} \hat{b}_{n_3,\vec{k}_3} \hat{b}_{n_4,\vec{k}_4}. \tag{1.2.12}
\]
Although we have now included the lattice potential in the problem, this form of the Hamiltonian is no less complicated than the previous one, so we have not gained much from a coarse-graining perspective. For such purposes, the choice of basis functions is very important, since it can potentially help in analyzing the structure of the continuous Hamiltonian. To obtain the last equation (1.2.12) we used an expansion in Bloch waves which, even though they carry the information of the lattice, are still completely delocalized functions and therefore approach the nearly-free-particle regime. More localized functions, usually obtained from the atomic limit, are suitable for deeper lattices. This situation is depicted in Fig. 1.1.
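Equation (1.2.10) becomes a small matrix eigenvalue problem once the sum over reciprocal vectors is truncated. A minimal numerical sketch, assuming units $\hbar = m = a = 1$ and a cosine potential $V(x) = V_0 \cos(2\pi x)$ whose only nonzero Fourier components are $\tilde{V}(\pm q_1) = V_0/2$ (illustrative choices of mine, not values from the thesis):

```python
# Sketch of Eq. (1.2.10) for a 1D cosine lattice, in assumed units
# hbar = m = a = 1: diagonalize the truncated plane-wave matrix at each k
# in the first Brillouin zone to obtain the lowest energy bands.
import numpy as np

def bloch_bands(k, v0=4.0, n_cut=10):
    """Band energies E_k^n at crystal momentum k; q_n = 2*pi*n."""
    q = 2.0 * np.pi * np.arange(-n_cut, n_cut + 1)
    h = np.diag(0.5 * (k + q) ** 2)          # kinetic term, hbar^2/(2m) = 1/2
    v = 0.5 * v0                             # Fourier component V~(q_m - q_n) = V0/2
    h += v * (np.eye(len(q), k=1) + np.eye(len(q), k=-1))
    return np.linalg.eigvalsh(h)             # eigenvalues sorted by band index

ks = np.linspace(-np.pi, np.pi, 101)         # first Brillouin zone
bands = np.array([bloch_bands(k)[:3] for k in ks])
gap = bands[:, 1].min() - bands[:, 0].max()  # gap between bands n = 0 and n = 1
print(f"lowest band width = {np.ptp(bands[:, 0]):.3f}, band gap = {gap:.3f}")
```

Plotting `bands` against `ks` reproduces the familiar band-structure picture; deepening `v0` flattens the lowest band and widens the gap, which anticipates the atomic limit of Fig. 1.1.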

Figure 1.1: Basis functions for different physical situations, ranging from the free-particle regime (left) to the atomic limit (right). As the lattice depth is turned on, the band structure emerges; the bands become energy levels in the atomic limit, where the band description is no longer needed. Figure from Ref. [18].

1.2.3 Choice of an appropriate basis set: Wannier functions

We now know that any linear combination of Bloch waves is a solution to the problem of particles in an underlying periodic potential. This gives us the freedom to choose the relative phase between these states in whatever fashion we may want. A more localized wave function can be constructed as follows:
\[
w^n_j(\vec{x}) = \sum_{\vec{k}} e^{-i\vec{k}\cdot\vec{x}_j} \, \psi^n_{\vec{k}}(\vec{x}), \tag{1.2.13}
\]
where $\vec{x}_j$ is any point in space around which we may want to localize the function. A convenient choice is the location of maximal weight of the state, which should coincide with a minimum of the lattice potential. This gives a maximally localized state. For localization to be guaranteed, energy bands cannot overlap, which will hold in the cases we are going to deal with. These so-called Wannier functions have the very useful feature of being orthogonal to each other. A comparison between such states and the former Bloch waves is shown in Fig. 1.2. The extended character of Bloch waves, compared to the localization of Wannier states, is evident. Using this set of functions to write the field operators in terms of lattice sites,

\[
\hat{\psi}(\vec{x}) = \sum_j \sum_n w^n_j(\vec{x}) \, \hat{b}_{n,j}, \tag{1.2.14}
\]
we obtain the following form for the Hamiltonian (1.2.2):
\[
\hat{H} = -\sum_{n,m} \sum_{i \neq j} t^{nm}_{ij} \hat{b}^\dagger_{n,i} \hat{b}_{m,j} + \sum_{nm} \sum_i \epsilon^{nm}_i \hat{b}^\dagger_{n,i} \hat{b}_{m,i} + \frac{1}{2} \sum_{ijkl} \sum_{n_1 n_2 n_3 n_4} U^{n_1 n_2 n_3 n_4}_{ijkl} \hat{b}^\dagger_{n_1,i} \hat{b}^\dagger_{n_2,j} \hat{b}_{n_3,k} \hat{b}_{n_4,l}, \tag{1.2.15}
\]
where
\[
t^{nm}_{ij} \equiv -\int \bar{w}^n_i(\vec{x}) \left[ -\frac{\hbar^2}{2m} \nabla^2 + V(\vec{x}) \right] w^m_j(\vec{x}) \, d\vec{x}, \tag{1.2.16}
\]

Figure 1.2: Examples of Bloch waves and the resulting Wannier state for an $L = 20$ lattice. Top: Bloch waves for two different wave vectors within the first Brillouin zone. Note that they extend over the whole system. Bottom: the resulting Wannier state centered around the tenth lattice site. It rapidly vanishes away from its center. A deeper lattice would exhibit an even more localized state.

\[
\epsilon^{nm}_i \equiv \int \bar{w}^n_i(\vec{x}) \left[ -\frac{\hbar^2}{2m} \nabla^2 + V(\vec{x}) \right] w^m_i(\vec{x}) \, d\vec{x}, \tag{1.2.17}
\]
and
\[
U^{n_1 n_2 n_3 n_4}_{ijkl} \equiv \int \int \bar{w}^{n_1}_i(\vec{x}_1) \bar{w}^{n_2}_j(\vec{x}_1) U(\vec{x}_1, \vec{x}_2) w^{n_3}_k(\vec{x}_2) w^{n_4}_l(\vec{x}_2) \, d\vec{x}_1 d\vec{x}_2. \tag{1.2.18}
\]
The bar over the functions ($\bar{w}$) denotes their complex conjugates. Note that off-diagonal terms in the single-particle sector arise from the fact that the Wannier states are not eigenstates of that one-body Hamiltonian.
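The Wannier construction of Eq. (1.2.13), illustrated in Fig. 1.2, can be sketched numerically by summing phase-fixed lowest-band Bloch waves over the first Brillouin zone. The units ($\hbar = m = a = 1$), the potential $V(x) = -V_0 \cos(2\pi x)$ (minimum at $x = 0$) and the simple phase convention below are illustrative assumptions of this sketch, not a maximally-localized construction:

```python
# Sketch, under assumed units hbar = m = a = 1: lowest-band Bloch waves for
# V(x) = -v0*cos(2*pi*x), phase-fixed at the potential minimum x = 0, then
# summed over the first Brillouin zone to produce a Wannier-like state.
import numpy as np

L, v0, n_cut = 20, 8.0, 8
q = 2.0 * np.pi * np.arange(-n_cut, n_cut + 1)
x = np.linspace(-L / 2, L / 2, 2000, endpoint=False)

def bloch_lowest(k):
    """Lowest-band Bloch wave psi_k(x) from the plane-wave matrix."""
    h = np.diag(0.5 * (k + q) ** 2)
    h -= 0.5 * v0 * (np.eye(len(q), k=1) + np.eye(len(q), k=-1))  # V~(+-q1) = -v0/2
    vals, vecs = np.linalg.eigh(h)
    psi = np.exp(1j * np.outer(x, k + q)) @ vecs[:, 0].astype(complex)
    return psi / psi[len(x) // 2]        # heuristic phase fix: psi_k(0) = 1

ks = 2.0 * np.pi * (np.arange(L) - L // 2) / L   # L wave vectors in the first BZ
wannier = sum(bloch_lowest(k) for k in ks) / L
dx = x[1] - x[0]
prob = np.abs(wannier) ** 2 / (np.sum(np.abs(wannier) ** 2) * dx)
central = np.sum(prob[np.abs(x) < 0.5]) * dx     # weight within one lattice constant
print(f"weight within one lattice constant of the center: {central:.3f}")
```

The resulting state is strongly peaked at the chosen minimum, in line with Fig. 1.2; increasing `v0` concentrates even more of the weight there.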

1.2.4 Further simplifications: energy bands

At first glance, equation (1.2.15) may seem more complicated than the previous field Hamiltonian, since we ended up creating yet another non-diagonal term. However, as can be seen from the summation over $ij$, which denote the minima of the lattice potential, we are starting to face a more lattice-like form of the Hamiltonian, which is our goal. It is very important to notice that the creation operators that we are handling now create particles in states localized at the bottom of the lattice wells and in a certain energy band. Considerations about these energy bands can greatly simplify the problem. For instance, in the case of fermions, it is often possible to eliminate all bands except for the valence and conduction ones. For hard-core bosons a similar analysis holds, because their interactions somehow mimic Pauli's exclusion principle. For the cases that are important to this thesis, namely soft-core bosons, one single band is capable of accommodating all the particles that compose the dynamics of the system. Transitions to other bands can be achieved by thermal excitation. Thus, provided that the temperature is low enough, we can safely disregard all energy bands $n > 0$. In more precise terms, this is valid when
\[
k_B T \ll \sqrt{ \frac{\hbar^2}{m} \left| \frac{\partial^2 V(\vec{x})}{\partial \vec{x}^2} \right|_{\vec{x}_i} }, \tag{1.2.19}
\]
where $T$ is the temperature and $k_B$ the Boltzmann constant. This comes from the fact that close to a minimum of the potential ($\vec{x}_i$) we can approximate it harmonically, so that the right-hand side of the last equation is a measure of the separation between the lowest energy band ($n = 0$) and the closest one ($n = 1$). Furthermore, this single-band form of the Hamiltonian can be written as a hierarchy of terms corresponding to summations over sites, over nearest-neighbor sites $\langle \ldots \rangle$, over second-nearest-neighbor sites $\langle\langle \ldots \rangle\rangle$ and so on,

\[
\hat{H} = \left[ \sum_i \epsilon_i \hat{b}^\dagger_i \hat{b}_i - \sum_{\langle ij \rangle} t_{ij} \hat{b}^\dagger_i \hat{b}_j - \sum_{\langle\langle ij \rangle\rangle} t_{ij} \hat{b}^\dagger_i \hat{b}_j + \ldots \right] + \left[ \frac{1}{2} \sum_i U_i \hat{n}_i(\hat{n}_i - 1) + \sum_{\langle ij \rangle} U_{ij} \hat{n}_i \hat{n}_j + \ldots \right], \tag{1.2.20}
\]

where $\hat{n}_i = \hat{b}^\dagger_i \hat{b}_i$ is the number operator. For the lattice character of the Hamiltonian to become even more prominent, it is essential to realize that disregarding higher-order energy bands is equivalent to ignoring the dynamics that takes place at length scales smaller than the lattice constant, as shown in Fig. 1.3. This procedure lies at the core of coarse-graining techniques. Integrating out high-momentum modes is also a fundamental tool in the context of the Renormalization Group (RG), which has found enormous success in describing the mechanisms behind phase transitions [19–21].

1.2.5 The single-band standard Bose-Hubbard Hamiltonian

Up to this point, we have simplified a very general Hamiltonian to a lattice form that, in principle, could be more tractable. Although quite useful, these simplifications were made upon very mild and broad considerations, namely the introduction of a periodic one-body potential and the coarse-graining over short length scales that are not relevant to the problem. Further simplifications will need more specific assumptions. In order to obtain the standard form of the Bose-Hubbard Hamiltonian, we will consider three of them. The first one concerns the range of the interactions between the bosons that we want to describe. In cold-atom systems, soft-core bosons are paradigmatic. Their interaction potentials can come in different shapes, but must possess the following attributes: being short-ranged and

Figure 1.3: Coarse-graining scheme for ignoring higher-order energy bands. On the left, real-space length scales $a$ (lattice constant) and $l$. On the right, the corresponding momentum-space scales. As we can see, taking into account only the first Brillouin zone (indicated in red) is equivalent to accounting only for the dynamics that takes place at length scales larger than a lattice constant in real space. Larger momentum scales are essentially integrated out. Figure from Ref. [18].

predominantly repulsive. Note that, according to equation (1.2.18), the interaction term is a result of the interaction potential integrated either over different Wannier states or over the same ones. Since it is short-ranged, we expect that the former case cannot contribute much (recall that the Wannier states themselves are localized!), whereas the latter one is relevant. This results in positive interaction-energy terms regardless of how far away from each other the occupied lattice sites are. The interaction is usually composed of a repulsive part that originates from the Coulomb forces between the electronic clouds of the atoms, and an attractive part that arises from the dipole-dipole interaction. Such potentials very often have the well-known Lennard-Jones shape, as in Fig. 1.4, which illustrates both attributes. This amounts to the fact that we can ignore interaction terms in the Hamiltonian (1.2.20) that couple different sites, keeping only the local term:

\[
\hat{H} = \left[ \sum_i \epsilon_i \hat{b}^\dagger_i \hat{b}_i - \sum_{\langle ij \rangle} t_{ij} \hat{b}^\dagger_i \hat{b}_j - \sum_{\langle\langle ij \rangle\rangle} t_{ij} \hat{b}^\dagger_i \hat{b}_j + \ldots \right] + \frac{1}{2} \sum_i U_i \hat{n}_i(\hat{n}_i - 1). \tag{1.2.21}
\]

The second assumption regards the diffusion of particles throughout the lattice or, in other words, the behavior of the terms defined in (1.2.16). They are commonly called hopping terms, which can be understood from the fact that they account for the energy of atoms hopping, or jumping, to different lattice sites. By their definition, they are tunneling amplitudes between quantum states localized on different sites or, more precisely, matrix elements of the underlying potential plus the kinetic term between different Wannier states. Intuitively, the deeper the lattice, the more difficult it is for a particle to tunnel to a different site. Also, tunneling over larger distances

Figure 1.4: Illustration of a typical soft-core bosonic interaction potential. Its important features are strong repulsion and short-rangedness (see text). Figure from the web, source not specified.

has to be harder than tunneling to a close site. Actually, from a textbook calculation of a particle facing a repulsive box potential, it can be seen that these dependencies are exponential. As we have a lattice model, we never want the lattice to be too shallow, otherwise it would not make any sense to have a lattice at all. Therefore, hopping terms to sites beyond nearest neighbors are, to a very good approximation, negligible. Our Hamiltonian then becomes:

\[
\hat{H} = \sum_i \epsilon_i \hat{b}^\dagger_i \hat{b}_i - \sum_{\langle ij \rangle} t_{ij} \hat{b}^\dagger_i \hat{b}_j + \frac{1}{2} \sum_i U_i \hat{n}_i(\hat{n}_i - 1). \tag{1.2.22}
\]
The third and last consideration is perhaps the most obvious one: we have no reason (yet!) to distinguish between lattice sites. Just like the particles of the problem, lattice sites are identical. In other words, our system is homogeneous and isotropic, so all spatial directions are equivalent. It is also a clean system, in the sense that there are no impurities or defects. Maybe at some point it will make sense to relax these conditions, but surely not right at the start. This amounts to having no lattice dependence in any of the terms defined by equations (1.2.16) to (1.2.18):
\[
\hat{H} = \epsilon \sum_i \hat{b}^\dagger_i \hat{b}_i - t \sum_{\langle ij \rangle} \hat{b}^\dagger_i \hat{b}_j + \frac{U}{2} \sum_i \hat{n}_i(\hat{n}_i - 1). \tag{1.2.23}
\]
The first term is no more than an overall energy that depends on the total number of particles $N$. If $N$ is fixed, it is a constant and therefore irrelevant. However, in the more general case of the grand-canonical ensemble, where the system is in contact with a particle reservoir, it can become important, since it is possible to attribute a chemical potential $\mu$ that will then control the average number of atoms in the system. We finally arrive at the Hamiltonian of the so-called single-band standard Bose-Hubbard model:

\hat{H} = -t \sum_{\langle ij \rangle} \hat{b}^\dagger_i \hat{b}_j + \frac{U}{2} \sum_i \hat{n}_i (\hat{n}_i - 1) - \mu \sum_i \hat{n}_i.   (1.2.24)
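For very small systems, the Hamiltonian (1.2.24) can be diagonalized exactly in the occupation-number basis, which is a useful sanity check for the approximate treatments discussed later. The sketch below is not part of the thesis and all parameters are illustrative; it builds the matrix of the Bose-Hubbard Hamiltonian on a three-site periodic chain and diagonalizes it with NumPy.

```python
import itertools
import numpy as np

# Illustrative parameters: a periodic chain with L >= 3 sites and N bosons.
L, N = 3, 3
t, U, mu = 1.0, 4.0, 0.0

# Fock basis: all occupation tuples (n_0, ..., n_{L-1}) with sum N.
basis = [s for s in itertools.product(range(N + 1), repeat=L) if sum(s) == N]
index = {s: i for i, s in enumerate(basis)}
D = len(basis)

H = np.zeros((D, D))
for i, s in enumerate(basis):
    # Diagonal part: on-site interaction and chemical potential.
    H[i, i] = sum(0.5 * U * m * (m - 1) - mu * m for m in s)
    # Off-diagonal part: hopping b_a^dagger b_b between nearest neighbors.
    for j in range(L):
        k = (j + 1) % L
        for (a, b) in [(j, k), (k, j)]:          # both hopping directions
            if s[b] > 0:
                sp = list(s)
                sp[b] -= 1
                sp[a] += 1
                # matrix element -t * sqrt(n_b * (n_a + 1))
                H[index[tuple(sp)], i] += -t * np.sqrt(s[b] * (s[a] + 1))

E = np.linalg.eigvalsh(H)
print("basis size:", D, " ground-state energy:", E[0])
```

For L = 3, N = 3 the basis has 10 states; hopping lowers the ground-state energy below the best diagonal (Fock-state) energy, as expected from second-order perturbation theory.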

Figure 1.5: Cartoon of the interaction term 푈 and hopping term 푡 of the Bose-Hubbard model. Although atoms appear in two colors in the figure, they are indistinguishable for our purposes. Figure from the OLAQUI website.

1.3 Physical properties of the model

A lot of information about the behavior of a system described by a certain model can be obtained by analyzing solutions of the Hamiltonian in regimes where they are more easily obtained. Most of the time, for interacting systems, these are either the weakly interacting regime or the strongly interacting regime. For instance, solutions can be obtained by perturbatively treating the appropriate terms in the Hamiltonian. This procedure leads to a better understanding of the physics of the model and can also bring insight into what to expect in non-perturbative regimes. But before that, we can learn a lot by considering the simplest situation that the Bose-Hubbard model could possibly describe: a one-dimensional double-well potential.

1.3.1 Insights from the double-well potential

We consider here a system composed of two lattice sites with a certain number of particles N, such that the Bose-Hubbard Hamiltonian reads

\hat{H} = -t \left( \hat{b}^\dagger_1 \hat{b}_2 + \hat{b}^\dagger_2 \hat{b}_1 \right) + \frac{U}{2} \hat{n}_1 (\hat{n}_1 - 1) + \frac{U}{2} \hat{n}_2 (\hat{n}_2 - 1).   (1.3.1)

In the non-interacting limit, U = 0, \hat{H} can be diagonalized by defining the operators

\hat{b}_+ = \frac{1}{\sqrt{2}} \left( \hat{b}_1 + \hat{b}_2 \right)   (1.3.2)

\hat{b}_- = \frac{1}{\sqrt{2}} \left( \hat{b}_1 - \hat{b}_2 \right)   (1.3.3)

that destroy bosons in the usual symmetric and anti-symmetric states. In such states, the number of particles in each well is not determined, and the Hamiltonian is given by

\hat{H} = -t\, \hat{b}^\dagger_+ \hat{b}_+ + t\, \hat{b}^\dagger_- \hat{b}_-,   (1.3.4)

so these are clearly the eigen-states of the problem. In the opposite case, when t = 0,

\hat{H} = \frac{U}{2} \hat{n}_1 (\hat{n}_1 - 1) + \frac{U}{2} \hat{n}_2 (\hat{n}_2 - 1),   (1.3.5)

and the eigen-states are number states in each well. These two situations point out that there is a competition between localized states, with well-defined occupation numbers, and delocalized states, where the occupation number of each well cannot be determined. We can go a little further by considering the Heisenberg equations for \hat{b}_i,

i\hbar \frac{d\hat{b}_1}{dt} = \left[ \hat{b}_1, \hat{H} \right] = U \hat{n}_1 \hat{b}_1 - t \hat{b}_2   (1.3.6)

i\hbar \frac{d\hat{b}_2}{dt} = \left[ \hat{b}_2, \hat{H} \right] = U \hat{n}_2 \hat{b}_2 - t \hat{b}_1,   (1.3.7)

where \hat{H} is defined by equation (1.3.1) and we have used the standard bosonic commutation relations. By considering the action of the annihilation operator on an occupation-number state,

\hat{b}_i |n_i\rangle = \sqrt{n_i}\, |n_i - 1\rangle,   (1.3.8)

we can see that, in the large occupation-number limit, it is possible to substitute the operator \hat{b}_i by a complex number b_i = \sqrt{n_i}\, e^{i\varphi_i}, where we have introduced a phase \varphi_i. With that and some manipulation of the Heisenberg equations, we obtain

\frac{dn_1}{dt} = -\frac{dn_2}{dt} = \frac{2t}{\hbar} \sqrt{n_1 n_2}\, \sin(\varphi_1 - \varphi_2),   (1.3.9)

which is commonly found in the context of Josephson junctions [22]. By linearizing this last equation it is possible to find the expected magnitude of phase and number oscillations [23], which are given by

\langle (\varphi_1 - \varphi_2)^2 \rangle = \sqrt{\frac{U + 2t/N}{2tN}},   (1.3.10)

\langle (n_1 - n_2)^2 \rangle = \sqrt{\frac{2tN}{U + 2t/N}},   (1.3.11)

as one would expect from the fact that n_i and \varphi_i are canonically conjugated variables. The most important aspect here is that increasing the tunneling amplitude t relative to the interaction energy U suppresses fluctuations in the relative phase between the wells while increasing fluctuations in the number of bosons in each well. As we will see in the following subsections, this is the fundamental mechanism of the superfluid/Mott-insulator transition of the Bose-Hubbard model.
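The crossover expressed by (1.3.10) and (1.3.11) can be checked directly by exact diagonalization of the two-site Hamiltonian (1.3.1), whose Hilbert space has only N + 1 states. The sketch below is illustrative (not from the thesis): it computes the number fluctuation ⟨(n_1 − n_2)^2⟩ in the ground state and shows that it grows as t/U increases.

```python
import numpy as np

def ground_state_fluctuations(t, U, N):
    """Two-site Bose-Hubbard: <(n1 - n2)^2> in the ground state."""
    dim = N + 1                          # basis |n, N - n>, n = 0..N
    H = np.zeros((dim, dim))
    for m in range(dim):
        n1, n2 = m, N - m
        H[m, m] = 0.5 * U * (n1 * (n1 - 1) + n2 * (n2 - 1))
        if m + 1 < dim:                  # b1^dag b2: |n, N-n> -> |n+1, N-n-1>
            amp = -t * np.sqrt((n1 + 1) * n2)
            H[m + 1, m] = amp
            H[m, m + 1] = amp
    w, v = np.linalg.eigh(H)
    gs = v[:, 0]
    n_diff = 2 * np.arange(dim) - N      # n1 - n2
    return float(np.sum(gs**2 * n_diff**2))

N = 20
weak = ground_state_fluctuations(t=0.01, U=1.0, N=N)    # interaction dominates
strong = ground_state_fluctuations(t=10.0, U=1.0, N=N)  # tunneling dominates
print(weak, strong)   # number fluctuations grow with t/U
```

For t ≪ U the ground state is close to the balanced Fock state |N/2, N/2⟩ and the fluctuations are tiny; for t ≫ U they approach the binomial value of the fully delocalized state, consistent with (1.3.11).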

1.3.2 Superfluid and Mott-insulating phases

From the analysis of the double-well potential we can infer that the Bose-Hubbard model exhibits two phases or, in other words, two ground states with different physical properties, whose features depend on the relative magnitude of the terms t and U, which control, respectively, the diffusion of particles throughout the lattice and the interaction energy. The transition between these phases is governed by the fluctuations that these terms induce, namely density and phase fluctuations. This is a paradigm of a quantum phase transition [24], since the fluctuations, rather than being of the more usual thermal character, have their origin in Heisenberg's uncertainty principle, which holds between canonically conjugated variables. Here, they are the amplitude and phase of the underlying bosonic field that we have coarse-grained to obtain the lattice model.

The superfluid

As we are considering bosonic particles, we are not affected by Pauli's exclusion principle: Bose statistics allows multiple particles to occupy a single quantum state. When the system is subjected to sufficiently low temperatures, as in the cases we are interested in, we expect the bosons to condense into the lowest-energy quantum state. Moreover, since the system is homogeneous, we expect such a state to extend over the whole system, because there is no preferential region for the particles to condense in. This means that there should exist a global wave-function that is completely delocalized, which is then called a macroscopic wave-function. The picture described above captures the essence of the widely known phenomenon of Bose-Einstein condensation [25].

The suppression of phase fluctuations between lattice sites that we found in the double-well model directly suggests that a global wave-function arises when we increase the relative strength of the tunneling amplitude t. By spreading throughout the lattice, the particles reduce the total kinetic energy, populating this low-energy quantum state. The system is then said to exhibit phase coherence. Being delocalized, this state does not give any information about the position of the bosons; therefore, occupation numbers of lattice sites are not well defined. The amplitude of a global wave-function is related to the density of particles; thus, if the phase of the wave-function is completely known, our ignorance about its amplitude, and consequently about the density, is as large as possible. In this scenario, where diffusion dominates over interaction, a suitable basis of single-particle states is given by plane waves, because we are in the nearly free-particle regime. The lowest-energy state is then the one of zero momentum, and it is into this state that the bosons completely condense in the absence of interactions.
When interactions come into play, quantum depletion happens. The effect of interactions, either attractive or repulsive, is to somehow localize particles: in the case of attraction, atoms want to be close together, whereas in the case of repulsion they do not want to get too close. Either way, this localization means that we know something about the density of particles and hence, by the uncertainty principle, we cannot completely know their momentum. Particles that were occupying the zero-momentum state then start to be depleted from it, occupying higher-momentum states in a random fashion. The fraction of particles that remain in the lowest-energy state is called the condensate fraction. Eventually the interaction is strong enough to deplete most of the particles, which evenly occupy other available states, so that the condensate fraction vanishes and we lose the Bose-Einstein condensate. Such a disruption can also be achieved when other quantum states are thermally occupied. We are not going to deal with thermal effects in this thesis.

A striking consequence of Bose-Einstein condensation, in three-dimensional systems, is superfluidity. Because the condensed particles occupy a single quantum state, they cannot carry any entropy. This indicates that there has to be a separation between these particles and the others that are not condensed, which lies at the heart of the two-fluid model [26–28]. Such a decoupling can be simply put in the relation

\rho = \rho_N + \rho_S,   (1.3.12)

where \rho stands for the total density of particles, \rho_N for particles in the "normal" state, and \rho_S for particles in the "superfluid" state. Since this latter component of the fluid does not carry entropy, its energy is below the minimum potential energy required to convert into any other kind of energy, such as heat. Therefore, the superfluid is not subject to dissipation-related effects such as friction, being a zero-viscosity fluid. This dissipationless mass flux leads to intriguing effects like everlasting currents. Moreover, the superfluid is an ideal heat conductor.

The above description of the onset of superfluidity, although quite comprehensible, is far from general. There is a whole field of active academic and technological research in which superfluidity takes place in a huge number of different situations. One important note is that, when dealing with charged particles, the very same phenomenon is called superconductivity. A lot more can be learned from the literature [25, 29–31], and we will discuss other features of a superfluid along the rest of this text.

The Mott insulator

The opposite situation happens when particles are completely localized. If the interaction is strong enough, the particles do not want to hop to sites that are already occupied. They would rather stay alone! In the primary case, each lattice site is occupied by exactly one particle, because the net energy for a particle to leave a site and double-occupy its neighbor, namely U − t, is positive. Notice that this only happens when the whole lattice is occupied: if a single lattice site is empty, bosons will be able to hop to it and the system becomes a superfluid again. In general, the same phenomenon of localization happens whenever there is a commensurate filling of the lattice sites, meaning that they are occupied by an integer number of particles. These so-called Mott-insulator states constitute the second phase of the Bose-Hubbard model.

In such a phase, occupation numbers on lattice sites are well defined, which means that the particles occupy multiple single-particle states evenly throughout the lattice. All of them are occupied, but none of them is macroscopically occupied, in complete contrast to the situation that leads to the superfluid. Since in this case we completely know the amplitude of the bosonic field, we know nothing about its phase; therefore there is no way to have phase coherence. A picture exhibiting the main features of these two phases is shown in Fig. 1.6.

Figure 1.6: Scheme of the two phases of the Bose-Hubbard model in three dimensions. a) Superfluid phase. On the left, a cartoon of atoms that favorably hop throughout the lattice, lowering the total kinetic energy. As a consequence, they are completely delocalized and occupy a single quantum state that extends all over the lattice. On the right, time-of-flight experiments show that a matter-wave interference pattern appears, indicating that the phase of the macroscopic wave-function is well defined. b) Mott-insulator phase. Each lattice site is occupied by a fixed number of atoms (one), and no matter-wave interference can be seen in the experiment, indicating that the density is well known but the phase of the wave-function is not defined. Figure from Ref. [32].

1.3.3 Energy spectrum and excitations

Quantum many-body theory provides a large set of tools that can be used to obtain the energy spectrum of quantum systems in a wide range of situations. One of such tools is perturbation theory. In what follows, we will use ideas from perturbation theory to address the nature of the excitations of the Bose-Hubbard model, now that we know the features of the ground state. The main difficulty in handling the Bose-Hubbard Hamiltonian (1.2.24) arises from the interaction term, which prevents factoring the many-body Hamiltonian into a sum of single-particle terms [18]. However, as we have discussed, an appropriate choice of basis functions can transform it into a more suitable form, according to the regime we are interested in. This choice of basis should be made so as to maximize the overlap between the low-energy eigen-states of the Hamiltonian and a certain number of basis functions. For instance, suppose that {|\Psi_i\rangle} are eigen-states of \hat{H}, i.e.,

\hat{H} |\Psi_i\rangle = E_i |\Psi_i\rangle.   (1.3.13)

These states can be represented in any other basis {|휑푖⟩} by transforming

|\Psi_i\rangle = \sum_{j=0}^{M} |\varphi_j\rangle \langle \varphi_j | \Psi_i \rangle,   (1.3.14)

where M is the largest number of basis functions needed to capture the eigen-states |\Psi_i\rangle of interest. Between two different bases, the one with the smaller M is the more suitable one. In the context of perturbation theory, the Hamiltonian is usually written in the form

\hat{H} = \hat{H}_0 + \lambda \hat{H}_1,   (1.3.15)

where \lambda is "small", so that the chosen basis can be the eigen-states of \hat{H}_0, which are supposedly accessible. Another possibility, which we will use here, is to substitute many-particle interactions by particles interacting with a constant, fluctuation-free, mean field that results from averaging the interactions between particles. This kind of technique belongs to the broad framework of mean-field theories.

Weakly interacting regime

In this regime (U/t ≪ 1), as we have already pointed out, the ground state and its low-lying excitations should be adequately captured by a basis of plane waves, which correspond to the solutions of the non-interacting problem. Therefore, we write once again our bosonic operators as

\hat{b}_j = \frac{1}{\sqrt{V}} \sum_{\vec{p} \in \mathrm{FBZ}} e^{i \vec{p} \cdot \vec{x}_j}\, \hat{b}_{\vec{p}},   (1.3.16)

where FBZ indicates the First Brillouin Zone and \vec{x}_j denotes the position of the j-th lattice site. With that, the Bose-Hubbard Hamiltonian (1.2.24) becomes

\hat{H} = -t \sum_{\langle ij \rangle} \hat{b}^\dagger_i \hat{b}_j + \frac{U}{2} \sum_i \hat{n}_i (\hat{n}_i - 1) - \mu \sum_i \hat{b}^\dagger_i \hat{b}_i

= -t \sum_{\langle ij \rangle} \hat{b}^\dagger_i \hat{b}_j + \frac{U}{2} \sum_i \hat{b}^\dagger_i \hat{b}^\dagger_i \hat{b}_i \hat{b}_i - \mu \sum_i \hat{b}^\dagger_i \hat{b}_i   (1.3.17)

= -\frac{t}{V} \sum_{\langle ij \rangle} \sum_{\vec{p}, \vec{q}} \hat{b}^\dagger_{\vec{p}} \hat{b}_{\vec{q}}\, e^{-i \vec{p} \cdot \vec{r}_i} e^{i \vec{q} \cdot \vec{r}_j} + \frac{U}{2V^2} \sum_i \sum_{\vec{p}, \vec{q}, \vec{r}, \vec{s}} e^{-i (\vec{p} + \vec{q} - \vec{r} - \vec{s}) \cdot \vec{r}_i}\, \hat{b}^\dagger_{\vec{p}} \hat{b}^\dagger_{\vec{q}} \hat{b}_{\vec{r}} \hat{b}_{\vec{s}} - \frac{\mu}{V} \sum_i \sum_{\vec{p}, \vec{q}} \hat{b}^\dagger_{\vec{p}} \hat{b}_{\vec{q}}\, e^{-i (\vec{p} - \vec{q}) \cdot \vec{r}_i}.

Now, since the first summation is over nearest neighbors, we have \vec{r}_j = \vec{r}_i + \hat{r}, where \hat{r} is one of the unit vectors along each spatial direction, namely {\hat{x}, -\hat{x}, \hat{y}, -\hat{y}, \hat{z}, -\hat{z}}.

The last equation then reads

\hat{H} = -\frac{t}{V} \sum_i \sum_{\hat{r} \in \{\hat{x}, \hat{y}, \hat{z}\}} \sum_{\vec{p}, \vec{q}} \left[ \hat{b}^\dagger_{\vec{p}} \hat{b}_{\vec{q}}\, e^{-i (\vec{p} - \vec{q}) \cdot \vec{r}_i} e^{i \vec{q} \cdot \hat{r}} + \mathrm{h.c.} \right] + \frac{U}{2V^2} \sum_i \sum_{\vec{p}, \vec{q}, \vec{r}, \vec{s}} e^{-i (\vec{p} + \vec{q} - \vec{r} - \vec{s}) \cdot \vec{r}_i}\, \hat{b}^\dagger_{\vec{p}} \hat{b}^\dagger_{\vec{q}} \hat{b}_{\vec{r}} \hat{b}_{\vec{s}} - \frac{\mu}{V} \sum_i \sum_{\vec{p}, \vec{q}} \hat{b}^\dagger_{\vec{p}} \hat{b}_{\vec{q}}\, e^{-i (\vec{p} - \vec{q}) \cdot \vec{r}_i}

= -t \sum_{\hat{r} \in \{\hat{x}, \hat{y}, \hat{z}\}} \sum_{\vec{p}, \vec{q}} \left[ \hat{b}^\dagger_{\vec{p}} \hat{b}_{\vec{q}}\, \delta(\vec{p} - \vec{q})\, e^{i \vec{q} \cdot \hat{r}} + \mathrm{h.c.} \right] + \frac{U}{2V} \sum_{\vec{p}, \vec{q}, \vec{r}, \vec{s}} \delta(\vec{p} + \vec{q} - \vec{r} - \vec{s})\, \hat{b}^\dagger_{\vec{p}} \hat{b}^\dagger_{\vec{q}} \hat{b}_{\vec{r}} \hat{b}_{\vec{s}} - \mu \sum_{\vec{p}, \vec{q}} \delta(\vec{p} - \vec{q})\, \hat{b}^\dagger_{\vec{p}} \hat{b}_{\vec{q}}

= \sum_{\vec{p}} \hat{b}^\dagger_{\vec{p}} \hat{b}_{\vec{p}} \left[ -2t \sum_{\hat{r} \in \{\hat{x}, \hat{y}, \hat{z}\}} \cos(\vec{p} \cdot \hat{r}) \right] + \frac{U}{2V} \sum_{\vec{p}, \vec{q}, \vec{r}, \vec{s}} \delta(\vec{p} + \vec{q} - \vec{r} - \vec{s})\, \hat{b}^\dagger_{\vec{p}} \hat{b}^\dagger_{\vec{q}} \hat{b}_{\vec{r}} \hat{b}_{\vec{s}} - \mu \sum_{\vec{p}} \hat{b}^\dagger_{\vec{p}} \hat{b}_{\vec{p}}

= \sum_{\vec{p}} \left[ \epsilon(\vec{p}) - \mu \right] \hat{b}^\dagger_{\vec{p}} \hat{b}_{\vec{p}} + \frac{U}{2V} \sum_{\vec{p}, \vec{q}, \vec{r}, \vec{s}} \delta(\vec{p} + \vec{q} - \vec{r} - \vec{s})\, \hat{b}^\dagger_{\vec{p}} \hat{b}^\dagger_{\vec{q}} \hat{b}_{\vec{r}} \hat{b}_{\vec{s}},   (1.3.18)

where we have defined

\epsilon(\vec{p}) \equiv zt - 2t \sum_{\hat{r} \in \{\hat{x}, \hat{y}, \hat{z}\}} \cos(\vec{p} \cdot \hat{r}),   (1.3.19)

with z the coordination number, and transformed \mu \to \mu + zt so that \epsilon(\vec{p} = 0) = 0; the resulting bandwidth is 2zt. Once again, the interaction term complicates affairs. It comprises the scattering of two particles with incoming momenta \vec{r} and \vec{s} into outgoing states of momenta \vec{p} and \vec{q}, such that momentum conservation is ensured by the delta function. We will then use the Hartree-Fock approximation to truncate this term, allowing only scattering into the same states, either directly, \vec{p} \leftrightarrow \vec{r} and \vec{q} \leftrightarrow \vec{s}, which is called the Hartree process, or by exchange, \vec{p} \leftrightarrow \vec{s} and \vec{q} \leftrightarrow \vec{r}, which is called the Fock process, so that

\sum_{\vec{p}, \vec{q}, \vec{r}, \vec{s}} \delta(\vec{p} + \vec{q} - \vec{r} - \vec{s})\, \hat{b}^\dagger_{\vec{p}} \hat{b}^\dagger_{\vec{q}} \hat{b}_{\vec{r}} \hat{b}_{\vec{s}} \approx \sum_{\vec{p}, \vec{q}} \left[ \hat{b}^\dagger_{\vec{p}} \hat{b}^\dagger_{\vec{q}} \hat{b}_{\vec{p}} \hat{b}_{\vec{q}} + \hat{b}^\dagger_{\vec{p}} \hat{b}^\dagger_{\vec{q}} \hat{b}_{\vec{q}} \hat{b}_{\vec{p}} \right].   (1.3.20)

We are now, inspired by mean-field theory, going to assume that the macroscopically occupied zero-momentum state is devoid of fluctuations, such that

\hat{b}^\dagger_0 \hat{b}_0 = \langle \hat{n}_0 \rangle + \mathcal{O}(\delta \hat{n}_0) \approx N_0,   (1.3.21)

therefore we have

\sum_{\vec{p}, \vec{q}} \left[ \hat{b}^\dagger_{\vec{p}} \hat{b}^\dagger_{\vec{q}} \hat{b}_{\vec{p}} \hat{b}_{\vec{q}} + \hat{b}^\dagger_{\vec{p}} \hat{b}^\dagger_{\vec{q}} \hat{b}_{\vec{q}} \hat{b}_{\vec{p}} \right] \approx \hat{b}^\dagger_0 \hat{b}^\dagger_0 \hat{b}_0 \hat{b}_0 + 4 N_0 \sum_{\vec{p} \neq 0} \hat{b}^\dagger_{\vec{p}} \hat{b}_{\vec{p}} + \sum_{\vec{p} \neq 0, \vec{q} \neq 0} \left[ \hat{b}^\dagger_{\vec{p}} \hat{b}^\dagger_{\vec{q}} \hat{b}_{\vec{p}} \hat{b}_{\vec{q}} + \hat{b}^\dagger_{\vec{p}} \hat{b}^\dagger_{\vec{q}} \hat{b}_{\vec{q}} \hat{b}_{\vec{p}} \right]

= N_0 (N_0 - 1) + 4 N_0 \sum_{\vec{p} \neq 0} \hat{b}^\dagger_{\vec{p}} \hat{b}_{\vec{p}} + 2 \sum_{\vec{p} \neq 0, \vec{q} \neq 0} \left[ \hat{b}^\dagger_{\vec{p}} \hat{b}_{\vec{p}} \hat{b}^\dagger_{\vec{q}} \hat{b}_{\vec{q}} - \hat{b}^\dagger_{\vec{p}} \hat{b}_{\vec{q}}\, \delta(\vec{p} - \vec{q}) \right].   (1.3.22)

Bringing this term into the Hamiltonian (1.3.18) and ignoring terms of order \mathcal{O}(N/V), we obtain

\hat{H} = \frac{N_0^2 U}{2V} + \sum_{\vec{p} \neq 0} \left[ \epsilon(\vec{p}) + 2 n_0 U \right] \hat{b}^\dagger_{\vec{p}} \hat{b}_{\vec{p}} + \frac{U}{V} \sum_{\vec{p} \neq 0, \vec{q} \neq 0} \hat{b}^\dagger_{\vec{p}} \hat{b}_{\vec{p}} \hat{b}^\dagger_{\vec{q}} \hat{b}_{\vec{q}} - \mu \sum_{\vec{p}} \hat{b}^\dagger_{\vec{p}} \hat{b}_{\vec{p}},   (1.3.23)

where n_0 = N_0/V is the condensate density. Although we are neglecting fluctuations around the macroscopic zero-momentum condensate state, it is important to consider Hartree-Fock interactions at non-zero momenta. We then allow for first-order fluctuations around a thermally averaged occupation, such that

\hat{b}^\dagger_{\vec{p}} \hat{b}_{\vec{p}} = \langle \hat{n}_{\vec{p}} \rangle + \delta \hat{n}_{\vec{p}} + \mathcal{O}(\delta \hat{n}_{\vec{p}}^2) \approx \langle \hat{n}_{\vec{p}} \rangle + \delta \hat{n}_{\vec{p}},   (1.3.24)

which gives

\hat{b}^\dagger_{\vec{p}} \hat{b}_{\vec{p}} \hat{b}^\dagger_{\vec{q}} \hat{b}_{\vec{q}} \approx \langle \hat{n}_{\vec{p}} \rangle \hat{b}^\dagger_{\vec{q}} \hat{b}_{\vec{q}} + \langle \hat{n}_{\vec{q}} \rangle \hat{b}^\dagger_{\vec{p}} \hat{b}_{\vec{p}} - \langle \hat{n}_{\vec{p}} \rangle \langle \hat{n}_{\vec{q}} \rangle,   (1.3.25)

and then

\sum_{\vec{p} \neq 0, \vec{q} \neq 0} \hat{b}^\dagger_{\vec{p}} \hat{b}_{\vec{p}} \hat{b}^\dagger_{\vec{q}} \hat{b}_{\vec{q}} \approx 2 \left[ \sum_{\vec{p} \neq 0} \langle \hat{n}_{\vec{p}} \rangle \right] \sum_{\vec{q} \neq 0} \hat{b}^\dagger_{\vec{q}} \hat{b}_{\vec{q}} - \sum_{\vec{p} \neq 0, \vec{q} \neq 0} \langle \hat{n}_{\vec{p}} \rangle \langle \hat{n}_{\vec{q}} \rangle = 2 N_{th} \sum_{\vec{p} \neq 0} \hat{b}^\dagger_{\vec{p}} \hat{b}_{\vec{p}} - N_{th}^2,   (1.3.26)

where N_{th} is the number of particles outside the condensate, N = N_0 + N_{th}. Substituting this term into equation (1.3.23), we finally arrive at the Hartree-Fock Hamiltonian,

\hat{H}_{HF} = \frac{N_0^2 U}{2V} - \frac{N_{th}^2 U}{V} + \sum_{\vec{p} \neq 0} \left[ \epsilon(\vec{p}) + 2 n U \right] \hat{b}^\dagger_{\vec{p}} \hat{b}_{\vec{p}} - \mu \sum_{\vec{p}} \hat{b}^\dagger_{\vec{p}} \hat{b}_{\vec{p}},   (1.3.27)

where n = n_{th} + n_0. Given a certain fixed condensate number N_0, we can calculate the appropriate chemical potential by noting that the condensate does not carry entropy, therefore

\mu = \left. \frac{\partial \langle \hat{H} \rangle}{\partial N} \right|_S = \left. \frac{\partial \langle \hat{H} \rangle}{\partial N_0} \right|_{\langle \hat{n}_{\vec{p}} \rangle,\, \vec{p} \neq 0} = U (n_0 + 2 n_{th}) = U (2n - n_0).   (1.3.28)

The total number of non-condensed particles 푁푡ℎ is given by

N_{th} = \sum_{\vec{p} \neq 0} \frac{1}{e^{\beta (\epsilon(\vec{p}) + 2nU - \mu)} - 1} = \frac{1}{2} \sum_{\vec{p} \neq 0} \left[ \coth\left( \frac{\beta E_{HF}(\vec{p})}{2} \right) - 1 \right],   (1.3.29)

where \beta = 1/k_B T and E_{HF}(\vec{p}) \equiv \epsilon(\vec{p}) + 2nU - \mu. The last pair of equations constitutes a self-consistent problem for n_0 that can be solved numerically by iterating from some initial guess, which can be seen more easily by expressing them as

n_0 = n - \frac{1}{2V} \sum_{\vec{p} \neq 0} \left[ \coth\left( \frac{\beta (\epsilon(\vec{p}) + U n_0)}{2} \right) - 1 \right].   (1.3.30)

Once n_0 is found, the thermodynamic observables can be calculated using the usual methods of statistical mechanics.
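The fixed-point iteration for equation (1.3.30) can be sketched as follows. This is illustrative and not from the thesis; all parameters are arbitrary choices, the dispersion uses the convention ε(p) = zt − 2t Σ cos(p·r̂) of (1.3.19), and the identity 1/(eˣ − 1) = [coth(x/2) − 1]/2 replaces the explicit coth.

```python
import numpy as np

# Illustrative parameters (not from the thesis): 3D cubic lattice.
t, U, n, beta = 1.0, 0.5, 1.0, 2.0   # hopping, interaction, density, 1/k_B T
L = 10                               # linear lattice size
V = L**3
z = 6                                # coordination number in 3D

# Dispersion on the momentum grid p = 2*pi*m/L; eps(0) = 0, bandwidth 2*z*t.
p = 2 * np.pi * np.arange(L) / L
px, py, pz = np.meshgrid(p, p, p, indexing="ij")
eps = (z * t - 2 * t * (np.cos(px) + np.cos(py) + np.cos(pz))).ravel()
eps = eps[eps > 1e-12]               # exclude the p = 0 condensate mode

# Iterate eq. (1.3.30), using 1/(exp(x) - 1) = (coth(x/2) - 1)/2.
n0 = n                               # initial guess: fully condensed
for _ in range(200):
    n_th = np.sum(1.0 / np.expm1(beta * (eps + U * n0))) / V
    n0_new = max(n - n_th, 0.0)
    if abs(n0_new - n0) < 1e-10:
        break
    n0 = 0.5 * (n0 + n0_new)         # damped update for stability
print("condensate density n0 =", n0)
```

The damped update is a common trick for fixed-point problems of this kind; at these illustrative parameters the thermal depletion is small and the iteration converges in a handful of steps.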

Strongly interacting regime

In contrast to the regime discussed above, here (U/t ≫ 1) we keep the interaction term exact and treat the kinetic Hamiltonian perturbatively, an approach called site-decoupled mean-field theory. We then approximate the off-diagonal bosonic operators by

\hat{b}^\dagger_i \hat{b}_j \approx \left( \langle \hat{b}^\dagger_i \rangle + \delta \hat{b}^\dagger_i \right) \left( \langle \hat{b}_j \rangle + \delta \hat{b}_j \right) \approx \langle \hat{b}^\dagger_i \rangle \hat{b}_j + \hat{b}^\dagger_i \langle \hat{b}_j \rangle - \langle \hat{b}^\dagger_i \rangle \langle \hat{b}_j \rangle,   (1.3.31)

where we kept only first-order terms. If we write \langle \hat{b}_i \rangle = \alpha, the Bose-Hubbard Hamiltonian (1.2.24) reads

\hat{H} = -t \sum_{\langle ij \rangle} \hat{b}^\dagger_i \hat{b}_j + \frac{U}{2} \sum_i \hat{n}_i (\hat{n}_i - 1) - \mu \sum_i \hat{n}_i

\approx -t \sum_{\langle ij \rangle} \left( \langle \hat{b}^\dagger_i \rangle \hat{b}_j + \hat{b}^\dagger_i \langle \hat{b}_j \rangle - \langle \hat{b}^\dagger_i \rangle \langle \hat{b}_j \rangle \right) + \frac{U}{2} \sum_i \hat{n}_i (\hat{n}_i - 1) - \mu \sum_i \hat{n}_i

= \sum_i \left[ -zt \left( \alpha^* \hat{b}_i + \hat{b}^\dagger_i \alpha - |\alpha|^2 \right) + \frac{U}{2} \hat{n}_i (\hat{n}_i - 1) - \mu \hat{n}_i \right] \equiv \sum_i \hat{h}_i,   (1.3.32)

where the site decoupling becomes clear. By using a truncated basis in Fock space, we can diagonalize each \hat{h}_i starting from a certain \alpha_0, which will give eigen-states |\varphi_i\rangle and eigen-energies E_i that can be used to compute the new \alpha = \alpha_1, given by

\alpha_1 = \frac{1}{Z} \sum_i \left\langle \varphi_i \right| \hat{b}\, e^{-\beta E_i} \left| \varphi_i \right\rangle.   (1.3.33)

This procedure is then repeated until the solution converges, when observables can finally be calculated. Moreover, in the limit t → 0, the site-decoupled version of the Hamiltonian attributes a local energy

\epsilon(n) = \frac{U}{2} n (n - 1) - \mu n,   (1.3.34)

and each site is occupied by the non-negative integer number n of bosons that minimizes \epsilon(n). Therefore, for all values n − 1 < \mu/U < n, exactly n bosons occupy each site, which is in agreement with the previous considerations about the Mott-insulator phase.
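The self-consistency loop for α described above can be sketched as follows. This is an illustrative simplification, not the thesis's implementation: it works at zero temperature, where the thermal average (1.3.33) reduces to a ground-state expectation value, assumes a real α, and the truncation nmax and all parameters are arbitrary choices.

```python
import numpy as np

def mft_alpha(t, U, mu, z=6, nmax=8, iters=500):
    """Self-consistent order parameter alpha = <b> of the site-decoupled
    mean-field Hamiltonian h_i, evaluated in the ground state (T = 0)."""
    n = np.arange(nmax + 1)
    b = np.diag(np.sqrt(n[1:]), k=1)          # truncated annihilation operator
    diag = 0.5 * U * n * (n - 1) - mu * n     # local energies eps(n)
    alpha = 1.0                                # initial guess
    for _ in range(iters):
        # h = -z*t*(alpha*b^dag + alpha*b - alpha^2) + diagonal part
        h = np.diag(diag) - z * t * alpha * (b.T + b) \
            + z * t * alpha**2 * np.eye(nmax + 1)
        w, v = np.linalg.eigh(h)
        gs = v[:, 0]                           # ground state of the site
        alpha_new = float(gs @ b @ gs)         # new <b>
        if abs(alpha_new - alpha) < 1e-12:
            break
        alpha = alpha_new
    return abs(alpha)

# Deep inside the n = 1 Mott lobe alpha iterates to zero;
# at large t it converges to a finite value (superfluid).
print(mft_alpha(t=0.001, U=1.0, mu=0.5), mft_alpha(t=0.1, U=1.0, mu=0.5))
```

A vanishing fixed point α = 0 reproduces the Mott insulator, consistent with the t → 0 analysis of (1.3.34), while a finite α signals the superfluid.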

Comments on the excitations

The nature and features of the excitations of ground states can often suggest suitable choices of order parameters to identify the phases of a model. As we can see from the form of the Hamiltonians in second quantization, in the Bose-Hubbard model excitations will have either particle, hole, or particle-hole character. This means that we could excite the ground state either by adding a particle to the system, removing a particle from the system, or moving a particle from one site to another and leaving a hole behind.

In the weakly interacting regime, which is expected to represent the superfluid phase, the Hartree-Fock approach led us to a gapless Hamiltonian, with a dispersion relation, equation (1.3.19), that vanishes at small momenta. The primary excitations in this phase are merely sound waves or, in other words, density waves. They comprise particles moving collectively and in phase throughout the lattice. Moreover, since any particle that is added to the system can freely hop throughout the lattice, adding (or removing) a particle does not come with an energetic cost, and the total density of particles fluctuates around a mean value fixed by the chemical potential. The Hartree-Fock-Bogolyubov-Popov theory, which allows for other types of scattering in the interaction term of the Hamiltonian, is capable of identifying excitations known as Bogolyubov quasi-particles, which have particle-hole-like features [33, 34].

On the other hand, in the strongly interacting regime, which represents the Mott-insulator phase, any excitation costs a finite amount of energy and ultimately destroys the insulating state. The Hamiltonian is said to be gapped. Consider, for instance, the case where we have exactly one particle per lattice site, namely unit filling. If t = 0, the only way we could possibly excite this state is by bringing a new particle into the system. However, as we concluded in the last section, exactly one particle on every site minimizes the Hamiltonian for a finite interval of chemical potentials. This means that, in order to bring another particle to a site and thereby excite the ground state, we need to pay a finite amount of energy, which is exactly the difference in chemical potential between the intervals with n and n + 1 particles. The other possibility regards the case where t is finite but small. Here, excitations could comprise particles hopping to neighboring sites and leaving a hole behind. However, when a particle moves to an occupied site, there is an energy cost U that comes from the interaction, which is definitely larger than the energetic gain t of the hop. This means that hopping is energetically costly, in agreement with our considerations about the localization of particles in the Mott-insulator phase.

1.3.4 Order parameters and phase diagram

The discussion about the physical properties of the Bose-Hubbard model that we have had so far allows us to establish a considerably precise picture of the shape of the phase diagram in terms of the model's parameters. We are also now able to appropriately define the order parameters that identify each of the phases found in the strong- and weak-coupling regimes.

The superfluid fraction \rho_s

The primary identity of a superfluid state is its superfluid fraction, which is simply defined as the ratio of the density of particles in the superfluid state to the total density of particles in the system. Although simple in concept, its analytical calculation is totally impractical in almost all cases covering interacting systems. However, as we will see later, there is an efficient and relatively simple way to access it numerically in computer simulations.

A fundamental convenience of \rho_s is the fact that it can be seen as an order parameter from the point of view of Landau's theory of phase transitions. This means that it is possible to identify a certain type of symmetry of the Hamiltonian that is broken by the ground state of the ordered phase, which here is the superfluid state. This symmetry is that of the wave-function's complex phase. Because we cannot determine it from the physical perspective of measurements, the entity that governs the quantum dynamics of the system, namely its Hamiltonian, cannot depend upon it.

The Hamiltonian is then symmetric under unitary transformations of the wave-function's phase. Nevertheless, as we have already seen, the superfluid state is characterized by phase coherence, so this symmetry is spontaneously broken along the phase transition and the ground state "chooses" a particular phase for the wave-function of the system, which is now macroscopic. Yet another important point of this choice of order parameter comes along with the fact that we can identify the broken symmetry: it is also possible to identify a field conjugated to the phase of the wave-function that enters the calculation of the free energy of the system and therefore plays a crucial role in the thermodynamic observables. Although we will not pursue this route here, it is widely used in theoretical approaches to superfluids. It will also inspire the way in which we will later calculate superfluid fractions in our numerical simulations. Broadly speaking, the superfluid fraction can be calculated as the linear response of the free energy when the boundaries of the system are moved, as shown in Fig. 1.7. Since the superfluid decouples from the normal fluid, carrying no entropy, it does not respond to the boundary motion. In systems enclosed by periodic boundary conditions, as will be the case in this dissertation, this response is strictly related to twists in the boundaries that are encapsulated by the so-called winding numbers [35, 36].

Figure 1.7: Cartoon characterizing the calculation of the superfluid fraction 휌푠 as a response to the motion of the boundaries of the system.

The compressibility per particle \kappa

The discussion about the excitations in the regimes of strong or weak interactions allows us to identify yet another order parameter that, despite not being that useful from the standpoint of Landau's theory, is still capable of pointing out in which phase the ground state of the system is for a certain set of physical parameters of the model. This quantity is the compressibility of the system. In the context of thermodynamics, it is usually defined as the ratio of the change in the total volume of the system, V, to an applied external isotropic pressure p,

\kappa \equiv -\frac{1}{V} \left( \frac{\partial V}{\partial p} \right)_{T,N},   (1.3.35)

where N is the number of particles in the system. Note that a change in the volume at fixed number of particles implies a change in the density of particles. However, this is only appropriate when working in the canonical ensemble. In this thesis, we will always work in the grand-canonical ensemble, allowing for a bath of particles as well as a thermal bath. More importantly, the lattice in the Bose-Hubbard model, for our purposes of describing cold-atom systems, is a physical ingredient of the model. Even if we apply a pressure field to atoms in the lattice, we will not "squeeze" the lattice but the atoms, therefore the volume of the system cannot be changed. Alternatively, when the system is put in contact with a particle reservoir, this pressure field allows more particles to come into the system, amounting to a change in its density. This leads to a different definition of compressibility, since we can also change the density by controlling the chemical potential \mu.

On a formal footing, consider a system in contact with a thermal reservoir and a particle reservoir, so that the natural thermodynamic variables are temperature, volume and chemical potential, (T, V, \mu), which characterizes the grand-canonical ensemble. Consider now the Gibbs-Duhem relation,

N d\mu = -S dT + V dp,   (1.3.36)

therefore

\left[ \frac{\partial \mu}{\partial (V/N)} \right]_T = \frac{V}{N} \left[ \frac{\partial p}{\partial (V/N)} \right]_T.   (1.3.37)

A little exercise on the derivatives shows that

\left[ \frac{\partial}{\partial (V/N)} \right]_V = \frac{1}{V} \left[ \frac{\partial}{\partial (1/N)} \right]_V = -\frac{N^2}{V} \left( \frac{\partial}{\partial N} \right)_V   (1.3.38)

and also

\left[ \frac{\partial}{\partial (V/N)} \right]_N = N \left( \frac{\partial}{\partial V} \right)_N,   (1.3.39)

therefore the left-hand side of equation (1.3.37) can be written as

\left[ \frac{\partial \mu}{\partial (V/N)} \right]_T = -\frac{N^2}{V} \left( \frac{\partial \mu}{\partial N} \right)_{T,V},   (1.3.40)

while the right-hand side becomes

\frac{V}{N} \left[ \frac{\partial p}{\partial (V/N)} \right]_T = V \left( \frac{\partial p}{\partial V} \right)_{T,N} = -\frac{1}{\kappa},   (1.3.41)

so that

-\frac{N^2}{V} \left( \frac{\partial \mu}{\partial N} \right)_{T,V} = -\frac{1}{\kappa},   (1.3.42)

which gives

\kappa = \frac{V}{N^2} \left( \frac{\partial N}{\partial \mu} \right)_{T,V}.   (1.3.43)

In terms of the density of the system, \rho = N/V, we have

\kappa = \frac{V}{N^2} \left( \frac{\partial N}{\partial \mu} \right)_{T,V} = \frac{1}{\rho} \frac{1}{N} \left( \frac{\partial (\rho V)}{\partial \mu} \right)_{T,V} = \frac{1}{\rho^2} \left( \frac{\partial \rho}{\partial \mu} \right)_{T,V},   (1.3.44)

showing that the compressibility can also be understood in terms of a variation in the density of the system caused by a change in the chemical potential. In this sense, gapped phases are incompressible (\kappa = 0), while gapless phases are compressible (\kappa is finite). As we have already discussed, in the Mott-insulator phase changes in the chemical potential do not change the density of the system because they are energetically costly. Thus, the insulator is incompressible, and we expect the total number of particles in this phase to be constant. On the other hand, as particles are free to hop around the lattice in the superfluid regime, the addition of particles comes with small increments in the chemical potential, which increases the density, rendering the system compressible. Here, the total number of particles fluctuates around an average value that is determined uniquely by the chemical potential. Since one phase is compressible and the other is not, the compressibility can be used as an order parameter for the phase transition. Although this is not usual for the standard Bose-Hubbard model, it will be very important when we bring disorder in. We stress once more that, from now on, compressibility should be understood as a measure of the capability of adding or removing particles to or from the system or, in other words, of changing its density. Table 1.1 summarizes the order parameters and associated phases of the Bose-Hubbard model.

Phase                 | Superfluid fraction \rho_s | Compressibility \kappa
----------------------|----------------------------|-----------------------
Superfluid (SF)       | finite                     | finite
Mott insulator (MI)   | zero                       | zero

Table 1.1: Identification of the phases of the Bose-Hubbard model in terms of the chosen order parameters.
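In the grand-canonical ensemble, the derivative (∂N/∂μ)_{T,V} in (1.3.43) equals β(⟨N²⟩ − ⟨N⟩²), which is how the compressibility is typically estimated from particle-number fluctuations in simulations. The toy check below is illustrative only (a single decoupled site, i.e. the t = 0 limit, with arbitrary parameters): it verifies that the fluctuation formula agrees with a direct finite-difference derivative of ⟨N⟩ with respect to μ.

```python
import numpy as np

# Single decoupled site (t = 0) in the grand-canonical ensemble.
U, beta, nmax = 1.0, 5.0, 60   # illustrative interaction, 1/k_B T, truncation

def site_stats(mu):
    """Return <n> and <n^2> with Boltzmann weights exp(-beta*(eps(n) - mu*n))."""
    n = np.arange(nmax + 1)
    e = 0.5 * U * n * (n - 1) - mu * n
    w = np.exp(-beta * (e - e.min()))       # shifted for numerical stability
    w /= w.sum()
    return float(n @ w), float((n**2) @ w)

mu = 0.3
N1, N2 = site_stats(mu)
fluct = beta * (N2 - N1**2)                 # beta * Var(N) = dN/dmu

h = 1e-5                                    # finite-difference cross-check
dN_dmu = (site_stats(mu + h)[0] - site_stats(mu - h)[0]) / (2 * h)

print(fluct, dN_dmu)                        # the two routes must agree
```

Dividing either result by ⟨N⟩² (times the volume) gives κ as in (1.3.43); a vanishing variance of N is then the fluctuation signature of the incompressible Mott insulator.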

Phase diagram

Taking into account the discussion about the physical properties of the model in its two different phases, it is intuitive to think that the phase diagram will have certain regions of constant density, corresponding to the insulating phase. When the excitation gap somehow closes, we obtain the superfluid state. The corresponding shape of the phase diagram, obtained using site-decoupled mean-field theory [3], is shown in Fig. 1.8. For t = 0, which corresponds to the vertical axis of the graph, it is possible to see the extended regions of constant density for different chemical potentials that we previously mentioned. As the atoms are allowed to tunnel to different sites, eventually the energy gained by adding a particle and letting it hop overcomes the cost of the repulsion energy between pairs on the same site. This shows that, as we increase t, the extension of the dome in the vertical direction shrinks. At some point, we reach the tip of the dome, where the involved energies are perfectly balanced.

A very interesting feature of the phase diagram is that there are two possible scenarios for the phase transition to happen. In the first one, it takes place by controlling the density of particles. This is the case when we cross the dome in any direction, at any point except for the tip of the dome. At the tip, a pure quantum phase transition (QPT) takes place. Level curves of constant density are perpendicular to the tip, therefore crossing the dome in that direction implies a phase transition at constant density. It is then driven exclusively by changing the ratio U/t between the interaction energy and the tunneling amplitude.

Figure 1.8: Phase diagram of the standard single-band Bose-Hubbard model. In gray, regions of constant commensurate density, called domes, correspond to the Mott-insulator phase (MI). Outside the domes, atoms are free to hop around the lattice, delocalizing themselves and constituting the superfluid phase (SF). Figure from the web, source could not be determined.

1.3.5 Finite temperature effects

The consideration of finite-temperature effects can be important for experiments with ultracold atomic gases where, in spite of the name ultracold, thermal fluctuations can lead to the destruction of long-range order in the superfluid phase and also to the closing of the energy gap in the insulating phase. Theoretically, the loophole of the standard Bose-Hubbard model in this case would be the breakdown of the single-band picture. To overcome this caveat, multiple-band Bose-Hubbard models are often used [37–39]. However, considering that the bandwidth in the weakly-interacting regime is given by Eqs. 1.3.19 and 1.3.27, 퐸푏푤 = 2푧푡, where 푧 is the number of nearest neighbors, finite-temperature effects can be important even within the single-band picture. The destruction of the long-range order that characterizes the superfluid state would then lead to a transition to a normal liquid, where the total superfluid fraction vanishes even though the system is still compressible. As we will see in Chapter 3, this comprises a major difficulty in identifying the disordered Bose-glass phase in experimental systems. Alternatively, starting from the Mott-insulator state, if thermal fluctuations are comparable to the energy gap, 푘퐵푇 ∼ 퐸푔푎푝 ∼ 푈, they can render the system compressible, once more transitioning it to a normal liquid state. This scenario is described in Fig. 1.9, which shows the finite-temperature phase diagram obtained from Quantum Monte Carlo (QMC) calculations [40].

Figure 1.9: Finite-temperature phase diagram of the standard single-band Bose-Hubbard model at unit filling. NL stands for normal liquid, the phase of the system when thermal fluctuations disrupt either the long-range order of the SF phase or the energy gap of the MI state. The gray bar loosely defines the MI domain. Figure from Ref. [40].

Chapter 2

General effects of disorder on continuous phase transitions

One of the most important pieces of the process of describing nature with the tools that natural science provides is identifying the relevant ingredients that a certain line of thought, or theory, should contain in order to properly account for an observed phenomenon. It is also one of the most difficult ones. One reason is that new ingredients can easily increase the complexity of the theory. On the other hand, they can lead to insights on phenomena that the previous theory would never cover. This increase in complexity is ultimately the price that must be paid to keep the engines of science, the mechanism of asymptotic search for the truth, going. As we did at the beginning of the last chapter, when we start to think about a certain problem, we make as few assumptions about it as possible. We do not want previous knowledge to “contaminate” what could be seen in the future. Good premises are the ones that do not, beforehand, complicate affairs too much or, from another perspective, do not restrict our reasoning to quite particular situations. Not less importantly, they need to be glaringly reasonable and, ideally, to have proved valuable on other occasions. The assumption of a clean, symmetric lattice, where every lattice site is equivalent, fairly fulfills these requirements. This high degree of symmetry enormously facilitates approaches that aim to describe the physics of bosonic cold-atom systems. Any type of preferential regime, such as anisotropies, should only be addressed once there is a clear understanding of what happens in the case of a perfect (hyper)-cubic lattice. This is because a higher-symmetry instance can bring valuable insights to what would happen in a more complicated case that, without it, would stand entirely on its own. It is much harder to argue and find explanations when we do not know what to expect.
In light of such ideas, Chapter 1 provided a fairly comprehensive, although still incomplete, background on the physics of bosonic atoms in a lattice. However, the main topic of this dissertation regards the so-called dirty-boson problem [41]. In this case, the underlying periodic potential that composes the lattice is somehow randomly perturbed, generating a random potential. This situation arises in a variety of physical systems, being ubiquitous in realistic descriptions of nature, which can hardly accommodate completely pure, unperturbed systems. Actually, physical systems predominantly exhibit features such as defects and impurities, which add a new component to the physics of the pure, clean system. The description, in very general terms, of the dynamics and thermodynamics of such disordered systems is the subject of this chapter. As we shall see, more than just describing nature in a manner that approaches reality, disordered systems have in fact led to a whole new branch of physics. They have been extensively studied over the last 50 years using theoretical approaches and numerical techniques, as well as experiments. Although an enormous amount of knowledge about particular systems and models has been accumulated, it is still quite hard to identify a unifying theory that can account for the effects of the addition of disorder to clean systems. Perhaps the most successful route is the one that employs scaling-theory ideas, rooted in renormalization group (RG) techniques, which have undoubtedly attracted great attention since the seminal work by Harris [42]. Although the language used in this approach can become highly technical, and in spite of this not being the scope of the present dissertation, the basic concepts discussed next can bring valuable insights and lead to a much better understanding of the physics of the disordered Bose-Hubbard model.

2.1 Harris’ criterion

In 1974, Harris presented a heuristic argument that allows for the derivation of a stability condition for a critical point when disorder is added to a clean system. Following his own recent review [43], which is considerably more accessible than the original work [42], I will now derive this condition, which accounts both for thermal phase transitions, viz. the ones driven by thermal fluctuations, and for quantum phase transitions – driven by quantum fluctuations. To start with, recall that a phase transition is marked by pronounced and tenacious phenomena that take place at, or close to, the critical point of the system. These are called critical phenomena [15]. This point corresponds, in the language of Section 1.1.1, to the Curie temperature 푇푐 below which a paramagnet becomes a ferromagnetic material. As we have seen, this requires that a net fraction of the spins point in the same direction. In more precise terms, in the ferromagnetic phase, the spin-spin correlations extend throughout the system. Exactly at the critical point, the correlation length 휉, which is a measure of the extension of these correlations, is effectively just as large as the system, therefore infinite. This divergence of the correlation length is a necessary condition for the existence of the ferromagnetic phase. Actually, a large number of phases of matter are described in terms of correlations between the compounding particles or entities of the system, and locating a critical point is often done by searching for the point in parameter space where such correlations diverge. This is at the heart of RG techniques. The very procedure of coarse-graining discussed in the last chapter can be thought of as the substitution of a number of spins within a block by a single effective block-spin, if we take the block size in one dimension to be exactly the correlation length of the system. This immediately leads to the scenario of scale invariance at critical points.
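As a purely schematic illustration of the block-spin idea just mentioned (hypothetical code, not from the thesis), the following snippet performs one majority-rule coarse-graining step on a lattice of ±1 spins, replacing each 푏 × 푏 block by a single block-spin:

```python
import numpy as np

def block_spin(spins, b):
    """Majority-rule coarse-graining: each b x b block of +/-1 spins is
    replaced by one block-spin carrying the sign of the block's sum."""
    L = spins.shape[0]
    assert L % b == 0, "linear size must be a multiple of the block size"
    blocks = spins.reshape(L // b, b, L // b, b).sum(axis=(1, 3))
    # ties (sum == 0) are broken towards +1 here, an arbitrary convention
    return np.where(blocks >= 0, 1, -1)

rng = np.random.default_rng(0)
lattice = rng.choice([-1, 1], size=(8, 8))
coarse = block_spin(lattice, 2)   # 8x8 lattice -> 4x4 block-spin lattice
print(coarse.shape)               # (4, 4)
```

Iterating this step with 푏 set to the correlation length is the intuition behind the scale invariance discussed above; a full RG treatment would also renormalize the couplings at each step.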
Harris used this division into blocks of “volume” 휉푑 (see Fig. 2.1), where 푑 is the spatial dimension of the system, to derive his criterion. Suppose that the clean system undergoes a phase transition from a higher-symmetry phase (HSP) to a lower-symmetry phase (LSP) at temperature 푇 = 푇푐. These two phases are often

Figure 2.1: Left: Division of the system into “correlation volumes” and the associated critical parameters 푇푖. Right: Probability distribution of the critical parameters. Figure from Ref. [43].

Figure 2.2: Left: The width Δ푇푐 of the distribution (red) decreases slower than (푇 − 푇푐). Right: Δ푇푐 decreases faster than (푇 − 푇푐). Figure from Ref. [43].

called, respectively, the disordered phase and the ordered phase, which can obviously lead to tremendous confusion in the context of disordered systems. I will prefer the HSP-LSP vocabulary, even though it may be very hard or even impossible to ascertain the symmetry being broken at the critical point of certain systems. What happens if, in the vicinity of the critical point (preferentially in the HSP), we start adding small amounts of disorder to the system? It is not far-fetched to think that the system will develop regions that undergo the same phase transition at different critical parameters. If we divide the system into 휉푑-blocks, we can actually attribute to each block a critical parameter 푇푖 that is a consequence of the average disorder effect within that block. Since the disorder is random, this leads to a distribution of critical temperatures, also shown in Fig. 2.1. By comparing the width of this distribution, Δ푇푐, to the distance between the overall temperature 푇 and the clean critical temperature 푇푐, namely (푇 − 푇푐), it is possible to obtain the two distinct situations shown in Fig. 2.2. In the simplest case, which corresponds to the right panel of Fig. 2.2, as the overall temperature

to which the system is submitted is lowered towards the critical point 푇푐, the width of the histogram decreases faster than (푇 − 푇푐). This means that the 휉푑-blocks cannot possibly undergo their individual phase transitions before the clean critical point itself is reached. Consequently, the phase transition remains sharp, and the critical properties do not change when disorder is added. In other words, the clean critical point is stable against disorder, remaining in the same universality class. On the other hand, the situation described by the left panel of Fig. 2.2 is more interesting. As the temperature of the system is lowered, (푇 − 푇푐) eventually becomes smaller than Δ푇푐, which means that a significant fraction of the 휉푑-blocks will have already traversed their local critical points. It is then evident that the critical behavior, in this case, has to change to accommodate this feature. The reasoning is not fully self-contained because, as one may notice, the concept of a sharp phase transition only makes sense in the context of infinite systems – in the thermodynamic limit. Nevertheless, the system is said to be unstable against disorder, which raises four main questions that we shall formulate shortly. This scenario can be put into equations if we employ critical exponents. In particular, 휈 is the exponent associated with the correlation length 휉 of a certain quantity, commonly the order parameter of the transition, and it is implicitly defined by

휉 ∼ (푇 − 푇푐)^{−휈}, (2.1.1)

which encompasses the divergence of 휉 at the critical point previously discussed. By inverting this equation, we can derive the scaling behavior of (푇 − 푇푐) in terms of 휉,

(푇 − 푇푐) ∼ 휉^{−1/휈}. (2.1.2)

Now recall that the local critical temperature 푇푖 of each 휉푑-block is a result of the averaged disorder effect over a region of “volume” 휉푑. Thus, by the Central Limit Theorem (see Appendix B), the width Δ푇푐 of the distribution of the 푇푖 scales like

Δ푇푐 ∼ 1/√(휉푑) = 휉^{−푑/2}. (2.1.3)
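This 휉^{−푑/2} narrowing is easy to check with a toy numerical experiment (an illustrative sketch with made-up Gaussian site disorder, not from the original text): block-averaging the disorder over regions of linear size 휉 makes the spread of the resulting local critical parameters shrink exactly at the Central-Limit rate.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 2                         # spatial dimension of the toy system
# one random number per lattice site, standing in for the local disorder
site_disorder = rng.normal(0.0, 1.0, size=(1024, 1024))

widths = {}
for xi in (4, 8, 16, 32):
    n = 1024 // xi
    # average the site disorder over xi^d blocks -> one local T_i per block
    local_tc = site_disorder.reshape(n, xi, n, xi).mean(axis=(1, 3))
    widths[xi] = local_tc.std()
    # the rescaled width, widths[xi] * xi**(d/2), stays roughly constant
    print(xi, widths[xi], widths[xi] * xi ** (d / 2))
```

Doubling 휉 halves the width here (푑 = 2), in agreement with Eq. 2.1.3.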

Basically, this comprises the intuitive fact that the larger the region over which the disorder is averaged to obtain 푇푖, the narrower the resulting distribution. It is now possible to assign the condition

(푇 − 푇푐) > Δ푇푐 (2.1.4) to the case where the critical point is stable against disorder, which allows us to derive the stability criterion

휈푑 > 2, (2.1.5)

which is the main result of the work by Harris [42]. When this condition is not fulfilled, following T. Vojta [14], a few questions become relevant:

1. Are the bulk phases qualitatively changed?

2. Is the phase transition still sharp, or does it become smeared, since different regions can cross their local critical points independently?

3. If it is sharp, does its order change?

4. If it remains continuous, does the critical behavior change?

Harris’ criterion itself cannot answer all of these questions, but research over the years has found many answers. Despite their relevance, discussing such details is outside the scope of this dissertation, and one can find excellent explanations in the literature. In particular, a huge effort has been made by T. Vojta et al. to make these concepts more accessible, and they definitely comprise my favorite references [14, 44–47].

A few remarks

Although the derivation presented in this section may seem quite general, there are a few points that deserve further explanation. In the first place, Harris’ criterion, as presented, is valid only for uncorrelated, or at least short-range-correlated, disorder. Spatial correlations in the random potential would immediately conflict with the application of the Central Limit Theorem. It has been shown that, for correlations of the type |⃗푟 − ⃗푟′|^{−푎}, the criterion reads [48]

min(푑, 푎)휈 > 2. (2.1.6)

However, as we shall make explicit, the disorder used throughout this thesis is of the uncorrelated type. More important to notice is the fact that, although originally derived for thermal phase transitions, Harris’ criterion holds in exactly the same form for quantum phase transitions. This is the case because the dimension 푑 that enters the equation is the one over which the disorder degrees of freedom are averaged. Although not previously mentioned explicitly, the disorder under consideration here is of the random-mass, static type, commonly referred to as quenched disorder. Formally, this means that the random potential 훿푟(⃗푥) enters the Wilson-Ginzburg-Landau (WGL) form of the free energy of the system as

퐹[푚(⃗푥)] = ∫ 푑^푑푥 {−ℎ푚(⃗푥) + [푟 + 훿푟(⃗푥)]푚^2(⃗푥) + [∇푚(⃗푥)]^2 + 푢푚^4(⃗푥)}, (2.1.7)

where 푚(⃗푥) is the order parameter, ℎ is its conjugate field, and 푟, 푢 are coefficients. This means that the random potential couples to the square of the order parameter, hence the random-mass nomenclature. This contrasts with the random-field type of disorder, where the randomness is incorporated as a spatial dependency of the field, ℎ = ℎ(⃗푥). The final remark regards the fact that the critical exponent entering the criterion is the one of the clean phase transition – the one with no disorder. The consideration of critical points at non-zero disorder is the subject of the next section and comprises the so-called Chayes’ criterion [49].

2.2 Chayes’ criterion

The fate of a phase transition or, more generally, the shape of a phase diagram when Harris’ criterion is violated is a subject of tremendous interest in condensed matter physics. This is because, in this case, one cannot simply ignore the effects of disorder even when it comes in considerably small amounts. As we shall see later, a general classification of such effects only became possible recently [47], and it might very well be the case that it was enabled by the intense research on system-specific effects of disorder during the 1990s and 2000s. Notwithstanding that, a somewhat generic result has been derived by Chayes et al. [49] that has important consequences for understanding the features of systems where disorder is relevant. In particular, it can help to establish the topology of the phase diagram of a disordered system. I will not derive it here because, even though the line of thought is similar to that of Harris’ argument, the derivation is considerably more technical and some of the assumptions are, to put it mildly, difficult to translate directly into system-specific conditions. Instead, I will just enunciate it and briefly comment on its implications. Chayes’ criterion establishes that, if an appropriate finite-size correlation length diverges at a non-trivial value of disorder with an exponent 휈, then 휈 must satisfy

휈 ≥ 2/푑. (2.2.1)

As can be seen, the result is quite delicate because it can easily be mistaken for Harris’ criterion, which is in a completely different context. To avoid this confusion, let us elucidate the meanings of some of the enunciated terms. In the first place, it mentions the existence of a properly defined finite-size correlation length, here denoted by 휉퐹푆. This is not the same quantity as the intrinsic correlation length 휉 that we have been using so far, which is defined in terms of the decay of correlation functions. Alternatively, to define 휉퐹푆, it is necessary to identify a system-specific, finite-volume event that is exponentially unlikely in the HSP, but typical on long length scales at the critical point or inside the LSP. 휉퐹푆 is then the natural length scale beyond which such events become unlikely in the HSP. For some systems the equivalence between 휉 and 휉퐹푆 can be proved; in other cases, however, a decoupling is possible [50]. As we shall see, for the disordered Bose-Hubbard model – the subject of this dissertation – 휉퐹푆 can be defined in direct relation to 휉, therefore I will not discuss other cases further. The second point regards the fact that we are now considering a critical point at a non-trivial value of disorder. This immediately excludes the clean critical point assumed in Harris’ criterion. Conversely, the HSP and LSP here are disordered phases that could, in principle, have completely different properties from their bulk, clean counterparts. This extremely important result will be useful later when we discuss the possible existence of other disordered phases between the HSP and the LSP. Furthermore, the concept of the unlikely event used to define 휉퐹푆 will be of key importance to discuss, in very general terms, the fate of clean critical points under the addition of disorder.

Figure 2.3: Renormalization flux scheme for a system with clean critical point at 푔 = 푔푐 (blue, four-pointed star). Red, five-pointed stars indicate the clean, attractive fixed points.

2.3 Fate of critical points under addition of disorder

Harris’ criterion allows us to divide disorder into two main classes1:

1. 휈푑 > 2, Harris’ criterion is satisfied, disorder is said to be irrelevant.

2. 휈푑 < 2, Harris’ criterion is violated, disorder is said to be relevant.

This terminology anticipates the use of renormalization-group language, which we shall now employ to deepen this classification. In order to do so, we quantify the amount of disorder in the system in terms of the operator Δ, often called the disorder strength. It can be defined in terms of the random potential 훿푟(⃗푥) used in Eq. 2.1.7. Specifically, if 훿푟(⃗푥) originates from a probability distribution 푃[훿푟(⃗푥)], an extremely useful, common and intuitive definition for Δ is given by

Δ^2 = ∫ 푑^푑푥 [훿푟(⃗푥)]^2 푃[훿푟(⃗푥)], (2.3.1)

i.e. Δ is the standard deviation of the disorder distribution. Suppose now that we have a clean system, controlled by a parameter 푔, that undergoes a phase transition at 푔 = 푔푐. For instance, in the ferromagnetic paradigm of Section 1.1.1, 푔 would correspond to the exchange integral 퐽, which quantifies the coupling between neighboring spins, divided by the temperature 푇. The two bulk, clean phases that 푔푐 separates are going to be denoted the HSP, for which 푔 > 푔푐, and the LSP, for which 푔 < 푔푐. In RG terms, 푔푐 is a repulsive fixed point, whereas the HSP and the LSP are controlled respectively by attractive fixed points at 푔 = 0 and 푔 = ∞, as shown in Fig. 2.3. The addition of disorder introduces a new ingredient in this parameter space, namely Δ. The relevance of disorder can then be translated into the behavior of this new operator under coarse-graining, viz. at large length scales. When Harris’ criterion is fulfilled, possible inhomogeneities created by disorder become less and less important as the system is renormalized at each RG step. This means that, under coarse-graining, Δ → 0, hence it becomes an irrelevant operator. This is in direct agreement

1 The marginal case 휈푑 = 2 needs further consideration [14].

Figure 2.4: Renormalization flux scheme for a system with clean critical point at 푔 = 푔푐 when 휈푑 > 2, i.e. Harris’ criterion is fulfilled.

with the previous argument that, in this case, the width of the distribution of the local critical parameters decreases faster than the overall distance to the critical point. With inhomogeneities being renormalized out, the critical behavior is not expected to change, therefore the clean critical point remains stable, with the same set of critical exponents (same universality class). This situation is shown in Fig. 2.4. On the other hand, when 휈푑 < 2, inhomogeneities generated by the disorder profile do not decrease under coarse-graining, and a richer scenario sets in. These individual regions could, in principle, undergo the clean phase transition independently, since there is a finite probability that some of them cross their local critical points before the bulk, clean critical point is reached. The existence of such regions was first noticed in the context of ferromagnetic materials by Griffiths [51], which originated the name Griffiths singularities. However, works by McCoy et al. also pointed out this possibility [52–54]. It was later proved that they can give rise to non-analyticities in the free energy and derived quantities of the system, therefore destroying the clean critical point [55–58]. Following T. Vojta [14], three possible outcomes can be classified in terms of the contribution of these rare-regions to the generalized susceptibility

휒 = (1/푉) ∫ 푑^푑푟 푋(⃗푟), (2.3.2)

where 푉 is the volume of the system and 푋 is the differential susceptibility defined by the functional derivative

푋(⃗푟) ≡ 푋(⃗푥 − ⃗푥′) = 훿푚(⃗푥)/훿ℎ(⃗푥′) = 훿^2퐹[푚(⃗푥)]/훿[ℎ(⃗푥′)]^2, (2.3.3)

where 푚(⃗푥) is the order parameter of the clean phase transition, ℎ(⃗푥) its conjugate field, and 퐹 is the WGL form of the free energy defined in Eq. 2.1.7. In continuous phase transitions – also called second-order phase transitions – this quantity is singular at the critical point. An estimate of the susceptibility of the system when the disorder profile is able to generate such rare regions is given by

휒푅푅 ≈ ∫ 푑퐿푅푅 푤(퐿푅푅) 휒푖(퐿푅푅), (2.3.4)

where 푤(퐿푅푅) is the probability of finding a rare-region of size 퐿푅푅 and 휒푖(퐿푅푅) is its contribution to the susceptibility. The functional form of 푤(퐿푅푅) is governed by combinatorics and is generally given by

푤(퐿푅푅) ∼ exp(−푝퐿푅푅^푑) (2.3.5)

for the uncorrelated type of disorder, where 푝 is a non-universal constant. This allows for the following cases:

(a) 휒푖(퐿푅푅) increases slower than exponentially with 퐿푅푅.

(b) 휒푖(퐿푅푅) increases at least exponentially with 퐿푅푅, remaining finite for finite 퐿푅푅.

(c) 휒푖(퐿푅푅) increases and diverges at some finite 퐿푅푅.
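The competition in Eq. 2.3.4 between the exponentially rare weight 푤(퐿푅푅) and the growth of 휒푖(퐿푅푅) can be made concrete with a quick numerical integration (an illustrative sketch with made-up parameters, not from the thesis): a power-law 휒푖 leaves the estimate finite as the cutoff grows, mimicking case (a), while a 휒푖 growing exponentially in the rare-region volume with a rate larger than 푝 overwhelms 푤 and the estimate grows without bound, mimicking case (b).

```python
import numpy as np

def chi_rr(chi_i, p=1.0, d=3, L_max=6.0, n=200000):
    """Crude Riemann-sum estimate of Eq. 2.3.4 with w(L) = exp(-p L^d)."""
    L = np.linspace(1e-3, L_max, n)
    return np.sum(np.exp(-p * L**d) * chi_i(L)) * (L[1] - L[0])

# case (a): power-law chi_i -> the exponentially small weight wins and
# the rare-region contribution saturates as the cutoff L_max increases
case_a = [chi_rr(lambda L: L**4, L_max=Lm) for Lm in (4.0, 6.0)]

# case (b)-like: chi_i growing exponentially in the rare-region volume
# with rate 2 > p = 1; the estimate keeps exploding with the cutoff,
# signaling a divergent integral
case_b = [chi_rr(lambda L: np.exp(2.0 * L**3), L_max=Lm) for Lm in (4.0, 6.0)]

print(case_a, case_b)
```

The case (a) numbers agree to many digits between the two cutoffs, whereas the case (b) numbers grow by dozens of orders of magnitude, which is the numerical signature of the divergence discussed in the text.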

In case (a), the net contribution of the rare-regions to the total susceptibility vanishes; therefore they cannot possibly give rise to any kind of non-analytic behavior in thermodynamic quantities. This means that the critical point of the disordered system must be located exactly at the same point as that of the corresponding clean system. However, the existence of such regions must change the critical behavior of the system, which is then expected to fall into a new universality class. Furthermore, the fact that there is a finite probability of finding regions that can locally support the characteristics of the ordered phase can give rise to different physical properties, intermediate between the HSP and the LSP. In particular, for a gapped-to-gapless type of quantum phase transition, this ultimately leads to the creation of a new intervening phase, called the Griffiths phase (GP). In this case, even though the two bulk, clean phases HSP and LSP continue to exist, with a sharp phase transition between them, the clean critical point is destroyed and substituted by a new one that, although of the conventional type, belongs to another universality class, as described in Fig. 2.5. This has to be the case in order to reconcile with Chayes’ criterion. This case is of primary interest for this dissertation, as it encapsulates the situation of the Bose-Hubbard model in three dimensions. In general, Griffiths phases exhibit intermediate properties between the two clean phases. However, this assertion can be highly system-specific, and we shall only discuss it later in the context of the Bose-glass phase. In (b), the estimate of the susceptibility of the rare-regions diverges, being dominated by the contribution of the largest region.
In this case, the disorder strength Δ is renormalized without limit under coarse-graining, reflecting the fact that the inhomogeneities arising from the disorder potential appear larger at larger length scales. This corresponds to the emergence of a new attractive fixed point at Δ = ∞, called an infinite-randomness fixed point (IRFP). Along with the previous attractive fixed points that correspond to the bulk, clean phases at 푔 = 0 and 푔 = ∞, this also gives rise to a Griffiths phase. The clean critical point is destroyed and substituted by a new, multicritical point, while the LSP-GP and GP-HSP transitions belong to different universality classes with exponents fulfilling Chayes’ criterion, as described in Fig. 2.6. Notice that,

Figure 2.5: Renormalization flux scheme for a system with clean critical point at 푔 = 푔푐 when 휈푑 < 2 and condition (a), when Griffiths singularities are exponentially weak. The critical behavior is still of the conventional type, but the universality class changes. The Griffiths phase is controlled by a finite-disorder fixed point Δ푐 that characterizes the average disorder strength at large length scales.

differently from the previous case, here the LSP can actually persist regardless of the value of Δ, as long as we are close enough to 푔 = 0.

Finally, in (c), finite-sized rare-regions can actually undergo the phase transition on their own, since they give rise to divergences in the susceptibility. Consequently, the clean phase transition is completely destroyed in a process called smearing, shown in Fig. 2.7. Since different regions cross their local critical points independently, a sharp phase transition is inconceivable. As a result, a Griffiths region (GR) is formed, in which the singularities smear out the sharpness of the clean phase transition. However, individual contributions can be extremely small and very hard to detect in experimental systems. As we shall discuss later in this chapter, quantum phase transitions are more likely to exhibit such behavior.

These three cases can be made more comprehensible if we consider the effective dimension of the rare-regions, 푑푅푅, and the lower critical dimension 푑푐^− of the clean system [59]. If 푑푅푅 < 푑푐^−, the individual regions cannot possibly undergo the phase transition independently of the bulk system, which corresponds to (a). In the marginal case 푑푅푅 = 푑푐^−, they still cannot do so, but they almost can! – this corresponds to (b). When 푑푅푅 > 푑푐^−, the regions can exhibit ordering behavior regardless of the lack of long-range order in the bulk system, resulting in a smeared transition. Naturally, although universality classes can simplify finding the appropriate 푑푐^− for a specific system, asserting 푑푅푅 requires further analysis and can depend strongly on features such as anisotropies, thus being a quite subtler task.

The fate of phase transitions in the presence of disorder can be summarized in terms of the classifications discussed in this section, as shown in Table 2.1, mostly from Ref. [14]. Recall that class 1 fulfills Harris’ criterion, whereas class 2 violates it.

Figure 2.6: Renormalization flux scheme for a system with clean critical point at 푔 = 푔푐 when 휈푑 < 2 and condition (b), when Griffiths singularities have a strong power-law character. Consequently, there is the emergence of a Griffiths phase, which can be seen in the diagram by the presence of an attractive fixed point with infinite disorder strength.

Class   Rare-region dimension   Griffiths singularities   Global transition
1       —                       —                         conventional
2a      푑푅푅 < 푑푐^−            weakly exponential        conventional
2b      푑푅푅 = 푑푐^−            strong power-law          infinite randomness
2c      푑푅푅 > 푑푐^−            RRs undergo PT            smeared

Table 2.1: Classification of phase transitions in disordered systems according to the classes discussed along Section 2.3. PT stands for phase transition.

2.4 Self-averaging of observables

As mentioned earlier in the remarks of Section 2.1, the type of disorder pertinent to this dissertation results in static randomness in the local terms of the Hamiltonian of the model – called quenched disorder. This is in contrast to the counterpart that considers itinerant defects or impurities that may not be local – called annealed disorder. The main difference between these two types stems from the statistical treatment that needs to be employed to obtain thermodynamic quantities of the system. In the latter case, the disorder degrees of freedom are in thermal equilibrium with the system, therefore the partition function 풵푎푛 of the disordered system can be obtained by averaging over these degrees of freedom,

풵푎푛 = [풵], (2.4.1)

Figure 2.7: Left: Renormalization flux scheme for a system with clean critical point at 푔 = 푔푐 when 휈푑 < 2 and condition (c), when Griffiths singularities undergo the phase transition, giving rise to the Griffiths region (GR). The clean critical point is destroyed, occasioning the smearing of the phase transition, which is no longer sharp. Right: Order parameter 푚 as a function of the control parameter 푇 for a paramagnetic (PM) to ferromagnetic (FM) smeared phase transition; figure from Ref. [14]. The tail of the ordered phase corresponds to the GR.

where [...] denotes the disorder average2. On the other hand, as disorder is static, “frozen”, in the quenched case every single disorder realization is different. To obtain average thermodynamic quantities we then have to perform averages over the free energies associated with each disorder realization. This imposes the technical challenge of having to average over the logarithm of the partition function rather than over the partition function itself. From a broader point of view, as we shall make explicit in the next chapter, the quenched type of disorder gives rise to an ensemble of Hamiltonians, called the disorder ensemble, where each Hamiltonian is equivalent but not identical. The analysis of the fluctuations within such an ensemble is a central subject of this thesis. Even though the general results established in the last few sections make no distinction between thermal and quantum phase transitions, at this point it is worth noticing a key difference between them. When a system is described by classical mechanics – quantum behavior is negligible – there is a complete decoupling of its kinetic and configurational degrees of freedom. In other words, the kinetic and potential energies commute, which allows us to write the partition function of the system as

풵푐푙푎푠푠푖푐푎푙 = ∫ 푑푝 푑푞 exp[−훽퐻(푝, 푞)] = ∫ 푑푝 exp[−훽푇(푝)] ∫ 푑푞 exp[−훽푉(푞)] ≡ 풵푘푖푛풵푝표푡. (2.4.2)
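The difference between annealed and quenched averaging can be seen already in a toy ensemble of two-level systems (a hypothetical example, not from the thesis): averaging 풵 over disorder and then taking the logarithm is not the same as averaging ln 풵, and by Jensen's inequality the quenched free energy is the lower of the two logarithms.

```python
import numpy as np

rng = np.random.default_rng(2)
beta = 1.0
# quenched disorder: each realization is a two-level system whose level
# splitting eps is drawn once and then frozen
eps = rng.uniform(0.0, 2.0, size=100_000)
Z = 1.0 + np.exp(-beta * eps)     # partition function of each realization

annealed = np.log(Z.mean())       # ln [Z]: disorder equilibrates with system
quenched = np.log(Z).mean()       # [ln Z]: average over free energies
print(quenched, annealed)         # quenched <= annealed (Jensen's inequality)
```

Because ln is concave, [ln 풵] ≤ ln[풵] always holds, with equality only when the disorder distribution is trivial; this is why the quenched average cannot be reduced to the much simpler annealed one.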

The kinetic part of the partition function, 풵푘푖푛, cannot possibly give rise to any type of singular behavior in the free energy or derived quantities of the system, since it is composed of strictly smooth, well-behaved, analytical functions (Gaussians). Consequently, any singular behavior must come from the potential part, 풵푝표푡, which is therefore responsible for phase transitions within

2This will become the standard notation for disorder averages, but I shall make it explicit again in a more proper context. 60 the realm of possibility. In practical terms, only spatial fluctuations need to be consider in the description of classical phase transitions. Contrarily, in quantum systems factorization is not possible because kinetic and potential operators do not, in general, commute. In this case, we can apply a Trotter decomposition [60],

\[
\exp(A + B) = \lim_{N \to \infty} \left[ e^{A/N}\, e^{B/N} \right]^{N} \qquad (2.4.3)
\]

to write the partition function as

\[
\mathcal{Z}_{\mathrm{quantum}} = \operatorname{Tr} e^{-\beta H} = \lim_{N \to \infty} \operatorname{Tr} \prod_{i=1}^{N} \left[ e^{-\beta T/N}\, e^{-\beta V/N} \right] \equiv \int \mathcal{D}[q(\tau)]\, \exp\{-S[q(\tau)]\}, \qquad (2.4.4)
\]

where 푞(휏) is the trajectory of the system in space-imaginary-time. This originates from the Trotter decomposition that slices 훽, the inverse temperature, introducing an extra coordinate 휏, the imaginary time, which corresponds to an infinite dimension at zero temperature [61]. Therefore, it plays the role of an extra dimension in quantum phase transitions. This quantum-to-classical mapping allows us to conclude that a quantum phase transition in 푑 spatial dimensions is equivalent to a classical phase transition in (푑 + 1) dimensions³. Since we need to account for fluctuations in space and imaginary time, a correct description in terms of the order parameter 푚 is given by a generalization of the previous WGL free energy,

\[
S[m(\vec{x},\tau)] = \int d^{d}x \int d\tau \left\{ -h m + \left[ r + \delta r(\vec{x}) \right] m^2 + (\nabla m)^2 + \frac{1}{c^2}\left( \frac{\partial m}{\partial \tau} \right)^{2} + u m^4 + \ldots \right\}, \qquad (2.4.5)
\]

where 푐 comprises the propagation speed of fluctuations of the order parameter⁴.

In both classical and quantum cases, there is an important consequence for the thermodynamics of the system when spatial correlations are present in the random potential 훿푟(⃗푥). When the correlation is short-ranged, in RG terms it can be integrated out since, close to the phase transition, the large-length-scale behavior prevails. However, medium-ranged to long-ranged correlations can have the significant effect of increasing the size of possible Griffiths singularities, enhancing the effects of these rare regions. In this sense, correlated-disordered systems are more susceptible to exhibiting features pertinent to the Griffiths phases and regions previously discussed.

In the specific case of quantum systems, even when disorder is completely uncorrelated in space, the perfect correlations in imaginary time stemming from the static character of the quenched disorder profile cannot be integrated out. In practical terms, the extra dimension (imaginary time) produces a highly anisotropic (푑 + 1)-dimensional system. If the disorder is point-like in space, being

³There are a few assumptions that must be granted for the mapping to work properly. It does not work for real-time quantities such as transport properties, only for thermodynamical quantities. Also, the resulting action 푆[푞(휏)] has to be real so as to be interpreted as a functional free energy. Imaginary actions are sometimes related to Berry phases [62] and give rise to the sign problem in Quantum Monte Carlo [63].
⁴This is a quite primary form of the quantum-WGL free energy. It breaks down if the system has gapless modes, where the time-frequency dependence of the order parameter becomes non-analytic [64].
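As an aside, the convergence of the Trotter decomposition in Eq. 2.4.3 is easy to verify numerically. The sketch below is an illustration only (not part of the original calculations): two small random symmetric matrices stand in for −훽푇 and −훽푉, and the Trotterized product is compared against the exact exponential of their sum for increasing 푁.

```python
import numpy as np

def expm_sym(M):
    """Matrix exponential of a real symmetric matrix via eigendecomposition."""
    w, v = np.linalg.eigh(M)
    return (v * np.exp(w)) @ v.T

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)); A = (A + A.T) / 2  # stand-in for -beta*T
B = rng.standard_normal((4, 4)); B = (B + B.T) / 2  # stand-in for -beta*V

exact = expm_sym(A + B)  # exp(A + B), computed directly
for N in (1, 10, 100, 1000):
    trotter = np.linalg.matrix_power(expm_sym(A / N) @ expm_sym(B / N), N)
    print(N, np.linalg.norm(trotter - exact))  # error decreases roughly as 1/N
```

Because 퐴 and 퐵 do not commute, the 푁 = 1 product differs visibly from the exact result, while the error shrinks as the number of slices grows, exactly as in the imaginary-time path-integral construction.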

Figure 2.8: Scheme of quenched disorder in a 푑-dimensional quantum system, where the point-like, static disorder gives rise to a highly anisotropic system in (푑 + 1) dimensions with rod-like defects along the imaginary-time direction 휏. Figure from Ref. [41].

uncorrelated over every spatial dimension, the quantum system is mapped to a classical system that has rod-like defects, as shown in Fig. 2.8. This means that, even in the thermodynamic limit, every single disorder realization is in principle completely different, even though equivalent in the sense that they have the same physical parameters, in particular the disorder strength Δ. Since these degrees of freedom cannot be decoupled in the quantum scenario, it may be the case that each single disorder realization has a different set of thermodynamical quantities. Therefore, understanding the distributions of observables over this disorder ensemble is of key importance in this situation.

A central quantity that encloses the size of fluctuations over the disorder ensemble, viz. from sample to sample⁵, of a physical property related to an observable 푋 is the relative variance of 푋, 풟푋(퐿), defined as

\[
\mathcal{D}_X(L) \equiv \frac{(\Delta X)^2}{[X]^2}, \qquad (2.4.6)
\]

where (Δ푋)² is the disorder-variance and [푋] the disorder-average of 푋, and 퐿 is the length of the system along one of the spatial dimensions. To be clear, by disorder-average we mean the standard, statistical, single-weight average of 푋 over the different realizations of the random potential, with the same meaning holding for every other statistical quantity that we shall employ along this dissertation. I shall draw attention whenever there is a distinction between disorder-like statistics and the statistical noise of quantities within a single disorder realization. The scaling of 풟푋(퐿) to the thermodynamic limit is of paramount importance for practical purposes [65, 66]. We can initially divide this scaling into two possibilities:

⁵The term sample is going to be used throughout the text with the meaning of a particular disorder realization, i.e. a particular configuration of the random potential 훿푟(⃗푥).

1. 풟푋 (퐿) −→ 0 when 퐿 → ∞.

2. 풟푋 (퐿) ̸−→ 0 when 퐿 → ∞.

In the first case, with a vanishing relative variance for increasing system sizes, one single sample in the thermodynamic limit is enough to capture all of the physics related to 푋, being representative of the whole ensemble. 푋 is then said to be a self-averaging observable. This has a tremendous importance for experiments, since it guarantees reproducibility, as well as for numerical calculations, since it ensures that larger systems will necessarily lead to better, more refined disorder statistics. This class can be further divided into two situations:

(a) 풟푋(퐿) ∼ 1/퐿^푑, with 푑 the spatial dimension of the system.

(b) 풟푋(퐿) ∼ 1/퐿^푎, with 푎 < 푑.

When the relative variance of 푋 scales as in 1(a), 푋 exhibits strong self-averaging. This is a consequence of the Central Limit Theorem (see Appendix B) that controls the disorder-variance in this case. When 풟푋(퐿) scales as in 1(b), 푋 exhibits weak self-averaging.

A more tenuous situation arises when the relative variance does not vanish in the thermodynamic limit, i.e. in case 2. This means that every single disorder realization, even for the infinite-sized system, is unique, and sample-to-sample fluctuations cannot be made smaller by increasing the system size. More importantly, depending on the width of the distribution, the process of disorder averaging, which is imperative in this case, can be extremely costly. One may need several large samples to obtain reasonable accuracy. It is therefore unclear to what extent one can ascribe universal properties to such systems. Table 2.2 summarizes the cases discussed above.

Class   Scaling of 풟(퐿)       Type of self-averaging
1a      ∼ 1/퐿^푑                strong (SSA)
1b      ∼ 1/퐿^푎, 푎 < 푑        weak (WSA)
2       ̸−→ 0                   non-self-averaging (NSA)

Table 2.2: Classification of the scaling behavior of the relative variance of a physical property over the disorder ensemble.
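The strong-self-averaging scaling of class 1a can be illustrated with a toy disorder ensemble. In the sketch below (a hypothetical observable invented for illustration, not one computed in this thesis), each sample's 푋 is the spatial average of uncorrelated local contributions on an 퐿³ lattice, so the Central Limit Theorem forces 풟푋(퐿) ∼ 1/퐿³:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_samples = 3, 500  # spatial dimension and number of disorder realizations

def relative_variance(L):
    """D_X(L) for a toy observable: X is the spatial average of i.i.d. site values.

    The constant offset 5.0 just makes the disorder-average [X] nonzero so the
    relative variance (Eq. 2.4.6) is well defined."""
    X = 5.0 + rng.standard_normal((n_samples, L**d)).mean(axis=1)
    return X.var() / X.mean() ** 2

for L in (4, 8, 16):
    print(L, relative_variance(L))  # drops by ~2^d = 8 per doubling of L
```

A log-log fit of these values against 퐿 recovers an exponent close to 푑 = 3; an observable of class 1b or 2 would require correlated or broadly distributed local contributions instead.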

The subject of self-averaging also depends on which part of the phase diagram one is performing disorder averages in. Close to critical points, as correlation lengths of order parameters diverge, this is a subtle issue. Consider, for instance, the already employed division of the system into “correlation volumes”, viz. blocks of linear size 휉 along each spatial dimension. For a single sample of the disorder ensemble, the estimated value 푋̄ of an observable 푋 is given by the average of the local values 푋푖,

\[
\bar{X} = \frac{X_1 + X_2 + X_3 + \ldots + X_{N-1} + X_N}{N} = \frac{1}{N} \sum_{i=1}^{N} X_i, \qquad (2.4.7)
\]

Figure 2.9: Division of the system into “correlation volumes”. Each block has a local value 푋푖 of a certain observable 푋. The estimated value of 푋 for a single sample is then given by the average of these local values.

where 푁 is the total number of 휉-blocks, 푁 = 푉/휉^푑, with 푉 = 퐿^푑 the volume of the system⁶, as shown in Fig. 2.9. The disorder-averaged value of 푋, [푋], is therefore the average over different samples of the individual values 푋̄,

\[
[X] = [\bar{X}] = \frac{1}{N_s} \sum_{j=1}^{N_s} \bar{X}_j, \qquad (2.4.8)
\]

where 푁푠 is the number of samples. This means that [푋] is given by the sampling distribution of the sample mean of 푋, in statistical language. Consequently, according to the Central Limit Theorem, the variance of this distribution is controlled by the number of uncorrelated values that were used to construct each 푋̄, namely the number of 휉-blocks. Additionally, this distribution is Gaussian-like. Away from the critical point, the linear size of the system 퐿 is such that 퐿 ≫ 휉, so it is guaranteed that there is a minimum number of blocks for the CLT to hold. This leads to the strong-self-averaging (SSA) scenario of class 1a.

As a critical point is approached, finite-sized systems may have sizes comparable to 휉, 퐿 ∼ 휉, which could lead to a breakdown of the CLT because the number of uncorrelated local values 푋푖 in each sample is close to unity. Nonetheless, in this case, as one increases the system size, since 휉 is large but finite, eventually the number of 휉-blocks increases and SSA sets in. This implies strong finite-size effects close to the phase transition, as we shall observe later in the case of the superfluid parameter of the disordered Bose-Hubbard model. At the critical point, as 휉 diverges, 휉 → ∞, there is no reason at all for the CLT to work. In this case, in general, it is expected that observables related to the order parameters of the phase transition do not self-average, which constitutes a serious issue for the study of critical phenomena

⁶Of course, the equality holds only for cubic-like systems.

in finite-sized disordered systems. Even though the CLT is an important tool that describes the mechanisms of strong self-averaging behavior, it is fundamental to notice that, when Griffiths singularities are relevant to the problem, viz. inside Griffiths phases or regions, the distributions of the inhomogeneities throughout the system certainly have a strong dependence on the particular realization of the random potential. These inhomogeneities are therefore distributed differently from sample to sample, constituting a primary reason for the breakdown of self-averaging of observables regardless of whether the system is close to a critical point or not. Both the existence and breakdown of self-averaging have been speculated and confirmed in several disordered systems [4, 66–78], and some research has been done in an attempt to unify the mechanisms behind it [65, 79–85]. However, to the best of my knowledge, this still remains an open question that could be further elucidated by studying the disorder-averaging process in different systems.

2.5 Theorem of inclusions

One last result that, in spite of being quite generic, is central and was first derived within the context of the Bose-Hubbard model regards the character of phase transitions driven solely by disorder. It is particularly useful when considering bounded disorder distributions⁷. So far we have discussed the general effects of disorder on clean critical points. When disorder is relevant to the problem, it can dramatically change the phase diagram of the system, ultimately leading to the existence of different phases of matter that are ubiquitous in disordered systems. The Theorem of Inclusions [7, 86] proves that, when a transition driven by disorder takes place, it is always possible to find regions of the competing phases inside each other, which precludes transitions from gapless to gapped states.

Consider then a disorder-driven phase transition between phases 1 and 2. By disorder-driven it is meant that the phase boundary depends upon the properties of the disorder distribution, collectively gathered in the variable ϒ⃗ that includes all microscopic parameters except for the disorder bound 퐸. Physically, we expect this phase boundary, parametrized by 퐸 = 퐸푐(ϒ⃗), to be continuous, which allows us to write

\[
E_c(\Upsilon_i') = E_c(\Upsilon_i) + A_i(\Upsilon_i' - \Upsilon_i) \qquad (2.5.1)
\]

for each component 푖 of ϒ⃗, where 퐴푖 is some constant scalar; this also allows for the representation in Fig. 2.10. Note that, as Phase 2 has a larger 퐸 than Phase 1, it is always possible to find local regions in the system that look like those from Phase 1. Conversely, for a particular disorder realization ϒ⃗*, it is always possible to find arbitrarily large regions that are representable by another disorder realization ϒ⃗′ such that 퐸푐(ϒ⃗′) < 퐸푐(ϒ⃗*), which thus look like domains of the phase possible at larger disorder bounds (Phase 2). In other words, the generic, random character of the disorder potential implies that there exist arbitrarily large regions in which a disorder realization generated by ϒ⃗ can be considered as a typical realization of a different ϒ⃗′.

⁷A bounded disorder distribution is such that 푃푏표푢푛푑푒푑(휖) = 0 for 휖 > 퐸, where 퐸 is the disorder bound.

Figure 2.10: The phase boundary between two phases is parametrized by 퐸 = 퐸푐(ϒ⃗). It is then possible to find arbitrarily large regions of either phase inside the other (see text).

Consequently, there is always a finite probability of finding arbitrarily large regions of Phase 1 in Phase 2 and vice versa. This immediately rules out transitions between gapped and gapless disordered phases, except for a fundamental exception that is given by the rule: when the critical boundary 퐸푐 does not depend on ϒ⃗. This would correspond to a vertical phase boundary in Fig. 2.10 and is precisely the mechanism of the Griffiths type of transitions that was previously mentioned. In the case of a gapped system, 퐸푐 = 퐸푔푎푝 is the only possible relation regardless of the details of the disorder distribution, where 퐸푔푎푝 is the width of the gap. Such a disorder-driven transition is therefore only possible at the precise value where the gap is destroyed, which culminates in the presence of extremely rare regions that mimic a regular, pure system subjected to an external field.

Another important observation is that, for unbounded disorder distributions, the gapped phase is completely destroyed in the thermodynamic limit. However, in this case finite-size effects can be extremely large, especially for small to moderate disorder strengths Δ. This makes the identification of gapped/gapless phases a very difficult task, constituting a major difficulty in numerical calculations as well as experiments.

Chapter 3

The disordered Bose-Hubbard model

The theoretical background that is desirable in the scope of this dissertation can now be made complete, after the discussion of the Bose-Hubbard model in Chapter 1 and the general effects of disorder on phase transitions in Chapter 2, with the examination of the disordered Bose-Hubbard model (DBHM) that is the topic of the present chapter. I will discuss a few types of disorder distributions and how we can incorporate them in the terms of the Hamiltonian that comprises the model. Next, I shall introduce the phase diagram for the diagonal-disorder, three-dimensional case, and discuss in reasonable detail the emergence of the Bose-glass phase. In particular, I will discuss the features of the superfluid/Bose-glass phase transition, which is a central subject along this dissertation.

3.1 Addition of disorder to the Bose-Hubbard Hamiltonian

The addition of disorder to a system can be made in different forms. As previously discussed in the remarks of Section 2.1 and in Section 2.4, annealed disorder comprises the case when the defects or impurities are in thermal equilibrium with the system. This type of disorder is often time dependent in the sense that it can be itinerant, moving throughout the system. Even though important for a wide range of systems, from superconductors to proteins, passing through ferromagnetic materials and neural networks, this is not the kind of disorder that is going to be considered here. Instead, we are going to introduce quenches of disorder to the BHM. Quenched disorder has the feature of being static, localized. In the case of a lattice system, each quench, or each disorder realization, constitutes a different lattice in the sense that the physical parameters of the Hamiltonian, in this case 푡, 푈 and 휇, have a spatial dependence that is peculiar to every single different quench. Of course, for systems with a certain instance of symmetry, the degree to which this peculiarity amounts to different physical properties can be significantly less pronounced. Recall the Bose-Hubbard Hamiltonian of Eq. 1.2.24,

\[
\hat{H} = -t \sum_{\langle ij \rangle} \hat{b}_i^{\dagger} \hat{b}_j + \frac{U}{2} \sum_i \hat{n}_i(\hat{n}_i - 1) - \mu \sum_i \hat{n}_i. \qquad (3.1.1)
\]
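For concreteness, this Hamiltonian can be written down explicitly for a very small system. The sketch below is an illustrative exact diagonalization on two sites with a truncated Fock basis (it is not the method used in this thesis, which relies on quantum Monte Carlo; the function name and parameter values are mine):

```python
import numpy as np
from itertools import product

def bose_hubbard_two_site(t, U, mu, n_max=4):
    """Clean two-site Bose-Hubbard Hamiltonian, Eq. 3.1.1, in the
    truncated Fock basis |n1, n2> with 0 <= n_i <= n_max."""
    basis = list(product(range(n_max + 1), repeat=2))
    index = {state: k for k, state in enumerate(basis)}
    H = np.zeros((len(basis), len(basis)))
    for (n1, n2), k in index.items():
        # on-site interaction and chemical-potential (diagonal) terms
        H[k, k] = U / 2 * (n1 * (n1 - 1) + n2 * (n2 - 1)) - mu * (n1 + n2)
        # hopping b1^dag b2: |n1, n2> -> |n1 + 1, n2 - 1>
        if n2 > 0 and n1 < n_max:
            H[index[(n1 + 1, n2 - 1)], k] -= t * np.sqrt(n2 * (n1 + 1))
        # hopping b2^dag b1: |n1, n2> -> |n1 - 1, n2 + 1>
        if n1 > 0 and n2 < n_max:
            H[index[(n1 - 1, n2 + 1)], k] -= t * np.sqrt(n1 * (n2 + 1))
    return H

H = bose_hubbard_two_site(t=1.0, U=4.0, mu=0.0)
E0 = np.linalg.eigvalsh(H).min()
print(E0)  # ground-state energy; equals -t = -1.0 here (one-particle sector)
```

Since the hopping term conserves the total particle number, 퐻 is block diagonal in the particle-number sectors; for these parameters the lowest eigenvalue comes from the one-particle sector, where the two degenerate sites hybridize into symmetric/antisymmetric combinations with energies ∓푡.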

Figure 3.1: (a) Red beams are counter-propagating lasers that form the standing-wave pattern that constitutes the clean lattice. The green beam is another laser that goes through a speckled holographic lens and is superposed on the clean lattice, generating the disordered lattice. (b) Zoom in on the optical lattice. Δ is the disorder strength. Figure from Ref. [87].

By including site-dependent parameters that arise from disorder added to the lattice, we would then have

\[
\hat{H} = -\sum_{\langle ij \rangle} t_{ij}\, \hat{b}_i^{\dagger} \hat{b}_j + \frac{1}{2} \sum_i U_i\, \hat{n}_i(\hat{n}_i - 1) - \sum_i \mu_i\, \hat{n}_i, \qquad (3.1.2)
\]

where now 푡푖푗 is the hopping amplitude for a particle to go from site 푗 to site 푖, 푈푖 is the energy associated with a pair of atoms in site 푖, and 휇푖 is a local chemical potential, i.e. it constitutes an energy term for occupation of site 푖. This last term can be further written as 휇푖 = 휇 − 휖푖, where 휖푖 is the energy associated with the occupation of site 푖 and 휇 is the overall chemical potential that controls the total number of particles in the system. In other words, 휖푖 is a local chemical-potential shift that arises from the disorder that is added to the lattice. As one may suspect, this is actually the only manner of adding random-mass disorder to a lattice¹. The experimental situation, described in Fig. 3.1, clearly illustrates this concept.

3.1.1 Correlation between disorder distributions in Hamiltonian terms

An immediate consequence of this type of disorder is that there must be some kind of correlation between the disorder distributions of the terms 푡푖푗, 푈푖 and 휖푖. As it is possible to see from the definitions of the Bose-Hubbard terms in Eqs. 1.2.16, 1.2.17 and 1.2.18, they are given by matrix elements between Wannier functions that characterize the quantum state related to the occupation of each site, Eq. 1.2.13. These, in turn, are constructed from Bloch waves that take into account the periodic potential that constitutes the optical lattice. Therefore, introducing a random potential generates spatial dependence in all terms, and these dependences are correlated, meaning that the disorder distributions on different terms of the Hamiltonian have to be obtained from the primary disorder distribution of the random potential. This is a possibility that has only recently been considered. Previous calculations, in general, were made with uncorrelated disorder distributions. In order to construct these distributions “from scratch”, a generalized set of Wannier states has to be employed. In the case of the speckled type of disorder that is extremely relevant to ultracold

¹Other types of disorder can be considered by using random fields or introducing defects and impurities.

Figure 3.2: Distributions of Bose-Hubbard terms obtained from generalized Wannier states constructed for the speckled-field type of disorder on a 46³ lattice. (a) Occupation energy terms 휖푖, consisting of the speckle disorder distribution that resembles an exponential form. (b) Associated hopping terms 푡푖푗. (c) Associated on-site interaction energies 푈푖. The respective standard deviations are given by 휎(휖) = 0.95Δ, 휎(푡) = 0.0088Δ and 휎(푈) = 0.0047Δ. Figures from Ref. [18].

atomic systems, this has been done in great detail in Ref. [88]. Fig. 3.2 shows the incidence of terms within a certain interval for different disorder strengths on a 46³ lattice.

It is clear that the most relevant disorder distribution is the one on the local occupation energy terms 휖푖, which is directly related to the random potential. Secondly, the hopping terms are symmetrically distributed around an average value and do not appear to be strongly influenced by the disorder strength Δ, in the sense that the distributions change neither their shapes nor their averages considerably for different values of Δ. However, the relative width of the distribution is noticeably large, which means that there can be relatively large and small hopping amplitudes throughout the lattice. Lastly, the on-site interaction energy is definitely the one that suffers the least amount of change when disorder is included. The distributions of these terms exhibit very weak dependence upon the disorder strength and are concurrently quite narrow. For such reasons, disorder in this term is usually disregarded and the interaction energy is taken to be homogeneous throughout the lattice. This is somewhat intuitive because this term is fundamentally related to the atomic nature of the bosons. One possibility for the DBHM Hamiltonian would then be

\[
\hat{H} = -\sum_{\langle ij \rangle} t_{ij}\, \hat{b}_i^{\dagger} \hat{b}_j + \frac{U}{2} \sum_i \hat{n}_i(\hat{n}_i - 1) - \sum_i (\mu - \epsilon_i)\, \hat{n}_i, \qquad (3.1.3)
\]

where the distribution of the 푡푖푗 is obtained from the distribution of the underlying random potential, which is directly related to the 휖푖. This has been extensively studied in the context of ultracold atoms, both computationally and experimentally [12, 89–92].

3.1.2 Off-diagonal disorder Bose-Hubbard Hamiltonian

The distributions shown in Fig. 3.2 also point to an interesting fact. Although the random potential has an exponential-like form related to the speckle field, the resulting distribution of hopping terms is remarkably normal, Gaussian-like. This is expected for any reasonable kind of random potential based on general arguments that involve the Central Limit Theorem (see Appendix B). Even though the use of Gaussian distributions in these terms predates the construction of generalized Wannier states for optical lattices, this observation can be taken as a further motivation to study Hubbard models with Gaussian disorder in the hopping amplitudes,

\[
\hat{H} = -\sum_{\langle ij \rangle} t_{ij}\, \hat{b}_i^{\dagger} \hat{b}_j + \frac{U}{2} \sum_i \hat{n}_i(\hat{n}_i - 1) - \mu \sum_i \hat{n}_i, \qquad (3.1.4)
\]

where 푡푖푗 is sampled from

\[
P(t_{ij}) \sim \exp\left[ -\frac{(t_{ij} - \langle t \rangle)^2}{2\Delta_t^2} \right], \qquad (3.1.5)
\]

with ⟨푡⟩ being the average hopping amplitude and Δ푡 the associated disorder strength. In fact, research has been done within the realm of this so-called off-diagonal disordered Bose-Hubbard model [93, 94]. However, as it completely disregards the nature of the underlying random potential, it is difficult to relate it to ultracold atomic systems where the optical lattice is a key ingredient.

Other possibilities include different types of disorder distributions. In particular, one could consider the binary distribution that is usually employed in percolating lattices, also called bond disorder [95]. In this case, hoppings to certain randomly distributed sites are prohibited. As far as I am aware, this route has not been pursued by many since, for such purposes, less intricate models can be cast that enormously simplify both numerical and analytical calculations.

3.1.3 Diagonal disorder Bose-Hubbard Hamiltonian

It has been shown that, for the small to intermediate disorder strengths that are relevant in this dissertation, the consideration of off-diagonal disorder does not have any qualitative effect, and quantitative deviations are quite small, at most of the order of 10% [88, 89]. This is usually within the experimental uncertainty in ultracold atomic systems. Moreover, these terms need to be calculated for every single disorder instance, which adds yet another cost to the already delicate subject of quenched disorder in such systems. For these reasons, the most common type of disorder that is considered in several works is the diagonal form [5, 6, 73, 96–103]

\[
\hat{H} = -t \sum_{\langle ij \rangle} \hat{b}_i^{\dagger} \hat{b}_j + \frac{U}{2} \sum_i \hat{n}_i(\hat{n}_i - 1) - \sum_i (\mu - \epsilon_i)\, \hat{n}_i. \qquad (3.1.6)
\]

This is the form that is going to be used throughout this dissertation. The distribution from which the occupation energies 휖푖 are sampled can be chosen to meet both experimental significance and numerical convenience. Three different types that are going to be considered here are discussed in what follows.

3.1.4 Types of diagonal-disorder distributions

The main difference between the disorder distributions that are going to be employed is their boundedness. We can consider either bounded or unbounded distributions. In the former case, the energy shifts that are attributed to lattice sites have both maximum and minimum possible values, since the probability distribution from which they are sampled is cut exactly at the bound. In the latter, there is no such limit. However, physical distributions require relatively large shifts to be unlikely, i.e. the distribution must have a vanishing probability-density tail. Notice that, in the case of ultracold atomic systems, unbounded disorder needs to be considered because the disorder distribution comes from an incident laser, so that the disorder strength is related to its power. Bounding the disorder shifts then sounds somewhat nonphysical, even though it can be a good approximation.

In order to compare the effects of different disorder distributions on physical properties, it is imperative to consider distributions with the same standard deviation, which defines the disorder strength Δ (see Section 2.3). In several works that consider the uniform, bounded type of disorder, Δ has often been taken as the disorder bound instead of the disorder strength. I stress that this will not be the case here. Furthermore, since the overall chemical potential is not included in the occupation energy, the distributions must have zero mean.

Box disorder

This is the most common type of disorder distribution, found over a wide range of works in the literature. The energy shifts 휖 are uniformly distributed according to the probability density function

\[
P_{\text{box}}(\epsilon) =
\begin{cases}
\dfrac{1}{2\sqrt{3}\,\Delta}, & \text{if } -\sqrt{3}\,\Delta < \epsilon < \sqrt{3}\,\Delta \\[4pt]
0, & \text{otherwise,}
\end{cases} \qquad (3.1.7)
\]

where the bounded character of such a distribution is made evident.

Gaussian disorder

In this case, the energy shifts are sampled from a Gaussian distribution, which has the following density function

\[
P_{\text{gauss}}(\epsilon) = \frac{1}{\sqrt{2\pi}\,\Delta} \exp\left\{ -\frac{\epsilon^2}{2\Delta^2} \right\}. \qquad (3.1.8)
\]

This is the distribution that I have used the most to produce the results exhibited along this dissertation. Whenever not clearly specified, the reader shall assume that Gaussian disorder is being considered. A few reasons for its use include the fact that it has not yet been largely exploited in the literature, and also that it is an unbounded type of distribution, which has more physical significance than a bounded one. Moreover, we will be interested in the peculiar details of each distribution that will lead to quantitative differences in relevant physical properties.

Exponential disorder

This type of distribution is closely related to the speckled field widely used in disordered ultracold atomic gases. It is defined by the probability density function

\[
P_{\text{exp}}(\epsilon) = \frac{1}{\Delta} \exp\left\{ -\frac{(\epsilon + \Delta)}{\Delta} \right\}, \qquad (3.1.9)
\]

defined for 휖 ≥ −Δ, which ensures zero mean and standard deviation Δ. These three different distributions are shown in Fig. 3.3. Besides the already discussed boundedness feature, another remarkable aspect is that the exponential form of disorder is not symmetric around the mean value. This is expected, since it is supposed to represent the effect of a laser that is superposed on the optical lattice, and it will have important consequences for the physical properties of the DBHM.
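Generating a quench from any of the three distributions above, with identical zero mean and standard deviation Δ as required for a fair comparison, is straightforward. A minimal sketch (function and variable names are mine, chosen for illustration):

```python
import numpy as np

def sample_disorder(kind, delta, size, rng):
    """Draw site energy shifts eps_i with zero mean and standard deviation delta."""
    if kind == "box":    # Eq. 3.1.7: uniform on (-sqrt(3)*delta, sqrt(3)*delta)
        return rng.uniform(-np.sqrt(3) * delta, np.sqrt(3) * delta, size)
    if kind == "gauss":  # Eq. 3.1.8: normal with standard deviation delta
        return rng.normal(0.0, delta, size)
    if kind == "exp":    # Eq. 3.1.9: exponential, shifted by -delta to zero mean
        return rng.exponential(delta, size) - delta
    raise ValueError(f"unknown disorder type: {kind}")

rng = np.random.default_rng(7)
L, delta = 32, 1.0
for kind in ("box", "gauss", "exp"):
    eps = sample_disorder(kind, delta, L**3, rng)  # one quench on an L^3 lattice
    print(kind, eps.mean(), eps.std())             # both ~0 and ~delta
```

Note that for the exponential case the standard exponential sampler already has standard deviation Δ, so subtracting Δ is all that is needed to center it, reproducing the −Δ shift in the argument of Eq. 3.1.9.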

3.2 Expected effects of disorder

Now that we have chosen a Hamiltonian for the disordered Bose-Hubbard model, viz. Eq. 3.1.6, it is time to consider the possible manners in which diagonal disorder could affect its clean counterpart, Eq. 1.2.24. This is going to be done using the criteria, classification scheme and theorem of inclusions that were discussed in Chapter 2.

3.2.1 Violation of Harris’ criterion

The relevance of disorder to the behavior of a clean system, according to Harris’ criterion, can be established by knowing the critical exponent 휈, associated with the correlation length of the order parameter related to the phase transition that such a clean system undergoes, and the spatial dimension 푑 of the system (see Section 2.1). As we have seen in Section 1.3.2, the type of system that the BHM describes undergoes a quantum phase transition from a Mott-insulating to a superfluid state that is achieved either by changing the density of the system – indirectly, its chemical potential – or the interaction-tunneling ratio 푈/푡. In the latter case, this can be done at

Figure 3.3: Different types of diagonal-disorder distributions with identical zero mean and unit standard deviation. (a) Box disorder, Eq. 3.1.7. (b) Gaussian disorder, Eq. 3.1.8. (c) Exponential disorder, Eq. 3.1.9. (d) All distributions in the same plot.

constant, commensurate density at the tips of the MI lobes (see Fig. 1.8). It has been shown by Fisher et al. [3] that the density-driven transition is mean-field-like, whereas the pure QPT falls into the universality class of the (푑 + 1)-dimensional XY model [17, 104–106].

For the three-dimensional case that is going to be considered in this dissertation, both transitions have mean-field-like exponents, i.e. 휈 = 1/2. In particular, for the pure QPT, the quantum-to-classical mapping leads to a system exactly at the upper critical dimension of the XY model. In any case, we then have

\[
\nu \cdot d = \frac{1}{2} \cdot 3 = \frac{3}{2} < 2, \qquad (3.2.1)
\]

therefore the clean model violates Harris’ criterion. Consequently, the disorder strength Δ is a relevant operator, and the critical behavior of the system must change.
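Harris' criterion amounts to a one-line check on the pair (휈, 푑); a trivial sketch (the helper name is mine, and exact rationals avoid any floating-point ambiguity at the boundary 휈푑 = 2):

```python
from fractions import Fraction

def disorder_is_relevant(nu, d):
    """Harris criterion: quenched disorder is a relevant perturbation when nu * d < 2."""
    return nu * d < 2

# Mean-field correlation-length exponent nu = 1/2 in d = 3 spatial dimensions:
# nu * d = 3/2 < 2, so the clean critical point is unstable to disorder (Eq. 3.2.1).
print(disorder_is_relevant(Fraction(1, 2), 3))  # → True
```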

3.2.2 Character of the Griffiths singularities

The underpinning mechanism that leads to the destruction of the MI phase, and consequently to the destruction of the clean critical point, is exactly the emergence of Griffiths singularities that arise from the disorder potential locally closing the MI gap, creating regions that support

Figure 3.4: Renormalization flux scheme for the DBHM. The Griffiths phase corresponds to the Bose-glass, which is a gapless state devoid of global phase coherence. The topology of the phase diagram is determined by the presence of attractive fixed points at 푈/푡 = 0 and 푈/푡 = ∞, which define the clean QPT, and the Δ = Δ푐 < ∞ fixed point that governs the Griffiths phase.

local coherence of the wave function. The closing of the MI gap means that the system is now compressible, differently from the MI state: it is possible to locally add particles to the system without any energetic cost. However, these individual regions, having the rare-event character ubiquitous to this kind of phase, are not connected so as to establish global coherence throughout the system; therefore there is no superfluid fraction in the system. These regions correspond to superfluid-like puddles immersed in an incoherent background, subjected to a local external field, since they are inherently different by not being coherently connected. The effective dimension of these regions, 푑푅푅 in the language of Section 2.3, is therefore 푑푅푅 = 1, since they are finite in every spatial dimension and we have to add one extra dimension due to the quantum nature of the system – the imaginary-time direction. Consequently, 푑푅푅 is less than the lower critical dimension of the associated universality class [107], thus the system is in class 2a of Table 2.1.

This new phase, called the Bose-glass², is therefore governed by a finite-disorder-strength fixed point, whereas an unstable fixed point is placed exactly at the clean critical point. As a consequence, the critical behavior of the disordered system, due to the existence of the singularities at Δ > 0, must change, and we expect the system’s critical exponents to fall into a new universality class. The expected resulting RG flux scheme, which indicates the topology of the phase diagram, is shown in Fig. 3.4, which is a system-specific version of Fig. 2.5.
Note that this picture indicates the phase diagram of the system for bounded disorder: the MI persists up to some critical value of the disorder strength, Δ = Δ푐. Conversely, in the case of unbounded distributions, there is always a non-vanishing probability of finding a region that locally closes the MI gap, so the MI is completely absent.

²A lot more features of the Bose-glass are discussed in Section 3.4.

3.2.3 Exigency of an intervening phase

In the seminal work by Fisher et al. [3], it was argued that, even though not fundamentally impossible, a direct SF-MI phase transition in the presence of disorder was unlikely to be observed. For the case of the density-driven transition, this is certainly not possible given the absence of particle-hole symmetry: any extra particles or holes would be localized by the random potential. On the other hand, the pure quantum phase transition at the tips of the lobes is more subtle. In order for particle-hole excitations, which would be gapless in this case, to hop freely across the lattice and thus generate a superfluid state, such dilute pairs must not be localized themselves to begin with, which is very unlikely given both the relevance of disorder to the problem and the highly localized character of the Mott-insulating phase. In contrast, what is expected is that the low-lying excitations are either bound excitons, localized by the random potential, or unbound, separate quasiparticles and quasiholes (in equal number), both of which are again localized. The direct transition would then only be reasonably possible for sufficiently weak disorder. Despite its unlikeliness, a large number of numerical simulations [5, 108–112] and approximate analytical calculations [113–118] verified this possibility, which remained an open question for over 20 years. The Theorem of Inclusions (Section 2.5), which is far-reaching, was actually derived within an effort to put an end to this controversy [7]. From it, it is naturally impossible to observe a direct gapless-SF to gapped-MI phase transition, since it is always possible to find arbitrarily large regions of the competing phases on both sides of the phase boundary. Therefore, a new intervening phase must preclude any direct transition from the SF to the MI [86].
Since what controls the transition to the MI is the energy gap of this state, there is no dependence on the random potential other than through the disorder strength itself, which is the loophole of the theorem that allows the gapless-Bose-glass to gapped-MI transition to take place. The reason why numerical works observed a direct phase transition is finite-size effects. Even though the direct transition is impossible, the character of the intervening phase is related to the proliferation of rare regions that emerge from the Mott-insulator as Griffiths singularities. Being rare, one needs really large systems to have a reasonable probability of finding them, which was not feasible in the 1990s. Approximate calculations that report the same feature were certainly not accurate enough to capture such subtlety [118].

3.3 Phase diagrams

The expected effects of disorder on the Bose-Hubbard model, discussed in the last section, allow us to schematically draw the phase diagram of Fig. 3.5. Notice that for zero disorder strength, Δ = 0, the picture is obviously the same as the one previously shown in Fig. 1.8. For Δ > 0, there are two remarkable features. The first is the emergence of the intervening glassy phase, the Bose-glass (BG), and the simultaneous shrinking of the Mott lobes. This retreat of the domes happens along all directions in parameter space, leaving no room for a direct MI-SF transition – a direct consequence of the Theorem of Inclusions (see Secs. 2.5 and 3.2.3 for details). Secondly, there is a complete destruction of the Mott-insulating phase when disorder is strong enough – the energy gap of this phase is such that 퐸푔푎푝 ∼ 푈, as pointed out in Section 1.3.2, therefore the corresponding phase is expected to vanish for Δ > 푈 when bounded disorder is being considered. In the thermodynamic limit, the MI is always destroyed for unbounded disorder distributions.

Figure 3.5: Illustration of the DBHM phase diagram for increasing disorder strength. Δ = 0 has already been discussed in Section 1.3.4. For Δ > 0, the emergence of the intervening Bose-glass (BG) phase and the destruction of the MI lobes are remarkable features (see text). Figure from Ref. [119].

A more realistic phase diagram, obtained via Stochastic Mean Field Theory (SMFT), was calculated in Refs. [89, 118] and is shown in Fig. 3.6. Even though qualitatively accurate, these calculations show that identifying the Bose-glass phase for weak disorder can be quite a subtle task. However, more precise calculations of Ref. [103], using Local Mean Field (LMF) theory to analyze the superfluid clusters, led to a better resolution of the intervening glassy phase, as the direct comparison of Fig. 3.7 shows. We should mention that in both cases the box-type disorder distribution has been used.

3.3.1 Commensurate and incommensurate fillings

Although this kind of phase diagram is certainly useful for theoretical purposes, it is hard to imagine practical situations where the usage of the chemical potential 휇 as one of the axes is relevant³. For instance, in ultracold atomic gases, it is often the case that the particles are trapped in the artificial lattice, which means that the number of particles is fixed. This also corresponds to the situation in the various simulation methods that work in the canonical ensemble. Therefore, another appropriate and more common type of phase diagram considers the filling of the lattice, i.e. the total number of particles divided by the number of lattice sites.

³This can be relevant when the Local Density Approximation (LDA) is required, since the local 휇 is a proxy for the local density.

Figure 3.6: Phase diagram of the DBHM obtained by Stochastic Mean Field Theory calculations for different disorder strengths. Upper panels show the superfluid order parameter 휓̄ and lower panels show the compressibility 휅. The vertical axes are given by the chemical potential 휇 in units of the on-site interaction energy 푈, while the horizontal ones are the tunneling-interaction ratio 퐽/푈 times the number of nearest neighbors 푍 of the lattice. Figure from Ref. [118].

From now on the filling is denoted by 휌. It is then possible to substitute the disorder strength Δ for 휇 on the vertical axis. Because the Mott-insulator is characterized by an integer number of particles on each lattice site, it only arises for commensurate fillings of the lattice, viz. 휌 = 1, 2, 3, ... . In particular, unit filling has been explored using both numerical and analytical methods, as shown in Fig. 3.8. It is worth noticing that, even though higher commensurate fillings could be studied, in principle they do not bring in any new physics that the 휌 = 1 case does not cover; for numerical methods, working with fewer particles is always more convenient. As the Mott-insulating phase does not exist at incommensurate filling, phase diagrams for this case are much rarer in the literature. Recall that, for a long time, a direct SF-MI transition was speculated. Fig. 3.9 shows an example obtained from Quantum Monte Carlo simulations at 휌 = 0.5. More phase diagrams for incommensurate fillings are discussed in Chapter 6.

Figure 3.7: Phase diagram of the DBHM obtained through two different theoretical approaches for intermediate disorder strength Δ/푈 = 0.6. (a) Local Mean Field theory and analysis of the superfluid clusters (LMF). (b) Stochastic Mean Field theory. Figure from Ref. [103].

3.3.2 Reentrant behavior of the superfluid phase

The most remarkable feature common to the different types of disorder distributions considered here and in other works is the resurgence of superfluidity in the presence of disorder when the corresponding clean system would be totally insulating – the so-called reentrant superfluid phase (RSF) [121]. It can be noticed in Fig. 3.8(d), for instance, by the extension of finite superfluid fractions to regions where it is zero in the clean system, which has a critical point at 푈/푡 = 29.34(2) [40, 122]. Notice that all other commensurate-filling phase diagrams exhibit the same trend. This is an example of the fascinating order-by-disorder phenomenon in quantum systems. The RSF typically arises as a finger and appears to be controlled entirely by the disorder strength Δ: this shape and the qualitative aspects of the feature arise in all three types of disorder that we have considered. The physical underpinning of this process is rooted in the basic percolation mechanism via which the SF arises in these systems [123].

Typically, the destruction of the MI requires the gap to close locally. This mechanism is responsible for the creation of local SF puddles that are ubiquitous in the BG phase [12]. For the additional requirement of globally coherent superflow, the puddles must be connected over the disorder terrain and delocalization must not be too energetically prohibitive. For a given interaction strength and weak disorder, although the creation of SF puddles is possible (for unbounded disorder this will be the case no matter how small the disorder strength), they are too sparse to achieve any global superfluidity. Additionally, with increasing 푈/푡, delocalization is penalized due to energetics and, consequently, the creation of SF puddles is suppressed. These effects explain the behavior of the superfluid fraction for Δ/푈 < 0.5 and 푈/푡 > 29.34(2).
For intermediate disorder strengths, the puddles proliferate and the particles are able to tunnel through the disorder terrain across different puddles, thereby leading to a globally coherent superflow. Thus, the RSF extends to large values of the interaction-tunneling ratio, until the energetic cost of delocalization, set by 푈/푡, is too large to support a superflow. For larger disorder strengths, there are patches of space where, relative to the chemical potential of the system, the disorder is so large that it creates barriers in the form of hills or valleys that the particles cannot traverse, resulting in the loss of global coherence. For increasing disorder, these patches proliferate. The net effect of these tendencies is the resulting RSF finger. To first order, then, the disorder strength plays the dominant role in this aspect.

Figure 3.8: Phase diagrams of the DBHM at unit filling. (a) Three dimensions, box disorder, obtained from Quantum Monte Carlo (QMC) simulations, from Ref. [86]. (b) Two dimensions, box disorder, obtained from LMF theory, from Ref. [103]. (c) Two dimensions, box disorder, obtained from QMC, from Ref. [120]. (d) Three dimensions, exponential disorder, obtained from QMC, from Ref. [18]. Color scale indicates the superfluid fraction.

3.4 The Bose-glass

The emergence of the Bose-glass phase is arguably the most dramatic effect of the addition of disorder to the Bose-Hubbard model. When the lattice contains sites with large enough energy shifts caused by the underlying random potential, the Mott-insulating gap locally closes, allowing more particles to be added to or removed from that region without any energetic cost. In fact, these processes can be energetically favorable depending on the magnitude of the energy shift 휖푖 compared to the on-site interaction energy 푈 or, more generally, the ratio Δ/푈. This renders the system compressible, even though the excitations are now localized sound modes, contrary to the delocalized scenario of the clean system discussed in Section 1.3.3. A single site that is able to close the insulating gap gives rise to a Griffiths singularity.

Figure 3.9: Phase diagram of the DBHM at half filling in three dimensions, with exponential disorder, obtained via QMC simulations. Color scale indicates the superfluid fraction. Figure from Ref. [18].

As the system is suddenly compressible, 휅 can be used as an order parameter for the MI-BG transition [71]. However, this Griffiths-type transition is characterized by the lack of universal critical exponents [14, 45, 46], which is somewhat intuitive given that it is driven by the appearance of rare, random singularities. Despite being rare, in a thermodynamic system there is always a finite probability of finding arbitrarily large, but still finite⁴, regions where the energy shifts attributed to the lattice sites are all able to close the Mott-insulator gap. In addition to rendering the system compressible, these regions can support coherent hopping of particles, mimicking a superfluid-like puddle subjected to a local conjugated field. Even if the energy shifts within a certain region are not all large enough to close the gap, small negative shifts can make multiple occupancy of the sites favorable. Given the repulsion between particles on the same site, such multiple occupancy further contributes to the delocalization of the particles in that region, effectively increasing the size of SF puddles. These larger regions certainly contribute to a larger global compressibility of the Bose-glass phase, which, in spite of being called a glass, can be highly compressible. On the other hand, shifts that are too negatively large contribute even further to the localization of particles. The system may have structures in the form of deep valleys across the disorder terrain that particles are not able to traverse.
Of course, positive shifts always contribute to localization, since particles occupying such sites are energetically penalized, which increases the extension and number of impassable structures in the lattice. This amounts to a general increase in the sparsity of puddles as the disorder strength increases. Another important aspect of this phase can be noticed by considering the Theorem of Inclusions.

⁴The probability of finding such regions is governed by combinatorics and generally has an exponential form in the size of the region; therefore, infinite regions have vanishing probability.
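The exponential form mentioned in the footnote can be made concrete. Assuming independent, identically distributed on-site shifts with some per-site probability 푝 of closing the gap, a region of 푉 sites closes it on every site with probability 푝^푉 (the value 푝 = 0.1 below is purely illustrative, not a number from this work):

```python
# Probability that ALL sites of a region of V sites draw an energy shift
# that closes the MI gap, for i.i.d. disorder with per-site probability p.
# p = 0.1 is an illustrative value, not taken from the thesis.
p = 0.1
for V in (1, 5, 10, 20):
    print(V, p ** V)
```

The probability stays finite for any finite 푉 but is exponentially suppressed, so arbitrarily large rare regions occur in the thermodynamic limit while infinite ones do not.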

Figure 3.10: Crossover between the low-휅 and high-휅 BG. The white dashed line in the left panel indicates the SF-BG phase boundary, while the compressibility 휅 is shown in color scale. The white arrow indicates the regions where the compressibility was calculated to obtain the figure in the right panel. Figure from Ref. [18].

The theorem states that it is always possible to find regions of the competing phases on either side of the phase boundary of a disorder-driven transition. In the present case, it implies that one can always find superfluid-like regions in the Bose-glass phase, and Bose-glass-like regions in the superfluid phase. However, as we have already discussed, the very nature of the Bose-glass is that of local regions that support coherent tunneling of particles immersed in an incoherent background. These “superfluid lakes” that are guaranteed to exist in light of the theorem, especially in the vicinity of the phase boundary, are actually of the same nature as the superfluid puddles that emerge from locally closing the MI energy gap. In fact, the incoherent background in which the superfluid lakes are immersed is of the Bose-glass type, which means that it also contains unconnected SF puddles⁵. The presence of such lakes supposedly leads to a crossover from a low-휅 BG to a high-휅 BG as one passes the tip of the SF-BG finger-like phase boundary, as shown in Fig. 3.10. Moreover, this points to a possible fractal nature of the Bose-glass phase [124, 125]. Taking into account the nature of the SF puddles, the Bose-glass should exhibit the same structure on smaller and smaller scales. A strong reason supporting this conjecture is that, even though we can ascribe different aspects to the superfluid lakes and to the superfluid puddles, it is not possible to differentiate the physical mechanism that causes them. In a heuristic view, a little puddle could very well look like a large lake on a smaller length scale.

⁵It is important to note that the incoherent background we are talking about here is not of the MI type, but rather of the BG type.

Figure 3.11: Onset of superfluidity in terms of the chemical potential 휇, i.e. by density modulation. (a) Non-interacting system (푈 = 0), completely localized, where all bosons occupy the same ground eigenstate 휀0. (b) When interaction is present and 휇 > 휇푐, bosons fill up the dents of the random potential, and some of them occupy extended states that comprise the superfluid component. (c) For 휇 < 휇푐, the particles become localized again, and the system is in the Bose-glass phase. Figure from Ref. [121].

3.4.1 Onset of superfluidity and the percolation picture

As we have discussed in Section 3.2, the clean Bose-Hubbard model undergoes two different types of phase transitions that, in the three-dimensional case, belong to the same universality class. The onset of superfluidity in a random medium, when driven by density modulation, is illustrated in Fig. 3.11. Consider first situation (a), where there is no interaction between the atoms, i.e. 푈/푡 = 0. If the chemical potential 휇 were larger than the single-particle ground eigenstate 휀0⁶ then, since we are considering bosonic particles, an infinite number of bosons would occupy that state, which would correspond to a localized, insulating state. The repulsive interaction is therefore imperative for the stabilization of the system.

When 푈/푡 > 0, described in situation (b), the repulsive force between atoms that occupy the same site can push them out of the level 휀0 and, when the chemical potential is high enough, bosons in the levels below 휇푐 “fill” the dents of the random potential, effectively screening the disorder profile⁷. The remaining bosons occupy extended states above 휇푐, becoming the superfluid component. When the chemical potential is reduced below 휇푐, as described in situation (c), the superfluid component disappears and the system becomes the localized Bose-glass insulator. This precisely describes the mechanism of localization of excess particles or holes that we invoked to rule out a direct SF-MI transition at points away from the tips of the MI lobes (see Section 3.2.3).

Alternatively, the transition between the BG and the SF can be driven either by controlling the ratio 푈/푡 – a pure quantum phase transition – or by modulating the disorder strength Δ – a pure disorder-driven transition. These two possibilities are closely related and can be considered on equal footing given the existence of SF puddles that are ubiquitous in the Bose-glass phase. In fact, such equivalence between them is the very reason why the phase transition takes place by percolation [126]. Consider first a “recently” formed BG state that emerged from the MI phase, viz. the locally coherent regions formed by the presence of Griffiths singularities are sparse, with a reasonable average distance between them (a few lattice sites in each direction would suffice). If we increase Δ from this situation, more such regions are expected to develop, which would then “fill in” the incoherent regions between the puddles. In this case, even if the hopping amplitude is not relatively large, the puddles can get so close that hopping from a puddle to a neighboring one becomes favorable. If there exists a patch of such closely placed puddles through which particles, even if few in number, can hop across the whole system, this then leads to a globally coherent superflow that corresponds to a finite superfluid fraction. Notice that this superfluid fraction can be infinitesimally small in case the puddles “barely percolate”, which indicates a second-order phase transition.

⁶Note that, since we have an underlying random potential in this case, the single-particle eigenstate does not extend throughout the system, even though it can definitely have a finite spatial extension.
⁷There is a strong similarity here with the way that fermionic particles occupy energy levels in order to fulfill Pauli's exclusion principle.
Similarly, when the puddles are still sparse, increasing 푡 can lead to favorable tunneling between them, even though particles may have to hop through sites that, being positively shifted relative to the chemical potential, momentarily increase the total energy of the system – which is made possible by the quantum nature of the particles⁸. However, this energetic cost is soon compensated by the establishment of the global superflow. On the other hand, if we further increase disorder, the development of deep valleys or hills throughout the disorder landscape “localizes” the puddles, forbidding the tunneling of particles across such structures. This eventually breaks up the connectivity that leads to the superfluid state, and the Bose-glass phase is restored. Figure 3.12 summarizes these possibilities. Notice that, even though quite elucidating, the figure does not properly account for the fractal structure of the Bose-glass phase. The nature of the SF puddles in the Bose-glass phase, as well as the percolation scenario for the SF-BG transition, has been confirmed via a synergistic study involving experiments with ultracold atomic gases and large-scale QMC calculations [12]. This transition is a peculiar gapless-SF to gapless-BG percolation-driven phase transition that nonetheless has all the qualities of a QPT [89, 126]. In fact, the critical exponents have recently been calculated and confirmed to belong to the three-dimensional percolation universality class, in particular with 휈 = 0.88(5) [123]. The nature of the low-lying excitations of the two phases appears to be distinct: whereas for the SF they are non-localized sound modes [25], the BG has localized excitations corresponding to the embedded SF puddles, though this requires further studies.
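The percolation mechanism can be caricatured classically (quantum percolation is more permissive, so this is only a rough sketch): declare each site an SF "puddle" site with probability 푝 and ask whether the puddles connect opposite edges of the sample. A minimal two-dimensional illustration, with all names and parameter values chosen purely for this example:

```python
import random
from collections import deque

def spans(grid):
    """True if occupied sites form a connected path from the top row to the bottom row."""
    L = len(grid)
    seen = {(0, c) for c in range(L) if grid[0][c]}
    queue = deque(seen)
    while queue:                      # breadth-first search over occupied sites
        r, c = queue.popleft()
        if r == L - 1:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < L and 0 <= nc < L and grid[nr][nc] and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False

def spanning_fraction(L, p, trials=200, seed=1):
    """Fraction of disorder realizations whose 'puddles' percolate."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        grid = [[rng.random() < p for _ in range(L)] for _ in range(L)]
        hits += spans(grid)
    return hits / trials

# Sparse puddles almost never connect; dense ones almost always do.
print(spanning_fraction(20, 0.3), spanning_fraction(20, 0.8))
```

Well below the 2D site-percolation threshold (푝푐 ≈ 0.593) the spanning fraction is essentially zero, and well above it essentially one; in the DBHM the role of 푝 is played by the density of puddles, controlled by Δ and 푈/푡.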

3.4.2 Order parameters

Given the discussion of the features of the Bose-glass (BG) phase and the corresponding transitions between the superfluid (SF) and Mott-insulating (MI) phases, it is straightforward to recognize that the order parameters discussed for the clean SF-MI phase transition, namely the superfluid fraction 휌푠 and the compressibility per particle 휅 (see Section 1.3.4), are quite suitable to identify each of the three phases of the disordered model, as described in Table 3.1. Notice that the Bose-glass state has physical properties intermediate between those of the MI and SF phases, with the peculiarity of lacking long-range order while having infinite superfluid susceptibility and finite compressibility [3, 5, 6, 10].

⁸In some sense, quantum percolation requires less proximity of ordered regions to allow for long-range order than classical percolation does [126, 127].

Figure 3.12: Illustration of the transitions from the MI phase (top-left) to the BG, BG-SF (bottom-right) and SF-BG (bottom-left); see text for discussion. Figure from Ref. [18].

Phase                 Superfluid fraction 휌푠    Compressibility 휅
Superfluid (SF)       finite                     finite
Bose-glass (BG)       zero                       finite
Mott-insulator (MI)   zero                       zero

Table 3.1: Identification of the phases of the disordered Bose-Hubbard model in terms of suitable order parameters.
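Table 3.1 amounts to a simple decision rule on the two order parameters. A hypothetical helper illustrates it (the function name and tolerance are my own; in practice 휌푠 and 휅 come with statistical error bars from the simulation):

```python
def classify_phase(rho_s, kappa, tol=1e-6):
    """Classify a DBHM ground state from its order parameters (Table 3.1).

    rho_s: superfluid fraction; kappa: compressibility per particle.
    `tol` stands in for the statistical uncertainty of an estimate.
    """
    if rho_s > tol:        # finite superfluid fraction -> superfluid
        return "SF"
    if kappa > tol:        # no superflow but compressible -> Bose-glass
        return "BG"
    return "MI"            # incompressible and gapped -> Mott-insulator

print(classify_phase(0.31, 0.12))  # SF
print(classify_phase(0.0, 0.05))   # BG
print(classify_phase(0.0, 0.0))    # MI
```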

3.5 Finite-temperature effects

As we have seen in Section 1.3.5, the main finite-temperature effect on the clean Bose-Hubbard model is the possible transition to a normal liquid state, which would be a consequence of the destruction of long-range order of the superfluid order parameter by thermal fluctuations. In the presence of disorder, this state would lack superfluid fraction while being compressible, and could thus be wrongly identified as the Bose-glass phase, which has a completely distinct nature. In the normal liquid, the superfluid-like puddles that are pervasive in the Bose-glass may not be supported, since thermal fluctuations would destroy such local order. This poses a serious difficulty in properly identifying the Bose-glass state in experimental disordered optical lattices, which deal with the additional adversity of accessing the system's temperature only through indirect measurements of quantities such as the entropy [12, 91, 128]. However, if disorder is strong enough, thermal fluctuations may not be able to wash out the local order, and the system would still be in the Bose-glass phase.

On the other hand, when the disorder strength is small, thermally excited atoms could incoherently hop across little hills on the disorder terrain, which would then be completely “screened”. In this case, one would expect no difference between physical properties obtained from different disorder realizations even for small lattices, even though the ground state of the system could very well exhibit non-self-averaging behavior of the same properties. A very complete description of temperature-related effects can be found in Refs. [89, 129, 130]. In this dissertation we will always consider temperatures low enough that these effects can be safely neglected. In other words, we will always be concerned with the ground-state properties of the disordered Bose-Hubbard model.

Part II

Methods

Chapter 4

Generic numerical methods

Numerical methods have proved to be of fundamental importance for the development of condensed matter physics throughout the years. Systems relevant to this field are often quite complex, in the sense that it can be extremely hard to distinguish between the different elements that a theory or a model should consider in order to achieve a correct description of an experimentally observed phenomenon. This task is actually a major obstacle for the construction of effective models, a procedure that has been discussed in the context of coarse-grained Hamiltonians in Section 1.1. Numerical techniques can considerably facilitate and improve this “selection” of ingredients that a model must contain since, most of the time, they can readily be employed regardless of the complexity of the terms in effective Hamiltonians.

From another perspective, much of the physics found in these systems is a consequence of the collective behavior of their individual components, which means that the many-body character of the problem is essential. Treating such high-order correlations can be extremely challenging, if not impossible, using analytical tools. In fact, many important effects that emerge in these systems are inherently non-perturbative, given the strongly interacting nature of the components. These are a few reasons for the lack of analytical methods able to cover a satisfactory range of models.

In order to overcome such obstacles, several numerical methods have been developed and improved since the construction of the earliest electronics-based computers in the 1940s [131, 132]. The most straightforward application of computers to perform calculations is their ability to carry out a large number of operations in a short interval of time¹.
Actually, this possibility is very often exploited as a “step-zero” approach to construct more sophisticated methods, and it entails the somewhat naïve, but still quite powerful, ideas of exact diagonalization that I will detail in what follows and that I have used to verify the exactness of the Monte Carlo methods that were fundamental to obtaining the results presented in this dissertation. Even though exact diagonalization techniques are very important, the exponentially increasing price, in both resources and time, is the ultimate handicap of such a direct, “crude” application of numerical methods, chiefly for quantum systems [133–135]. The Monte Carlo method, which I will also discuss in this Chapter, is a widely employed tool to study quantum systems and has been successful in attacking a large number of problems in condensed matter physics and

several other fields. Its application depends on the ability to generate stochastic samples, which is discussed in the context of a few remarkable sampling techniques at the end of the Chapter.

¹“Large” and “short” are, of course, relative terms. This relativity will become clear along this Chapter.

4.1 Exact diagonalization

In the absence of a more embracing analytical framework, and with the increasing availability of computational resources, one important numerical method to investigate quantum systems, at least at a basic level, is the exact diagonalization of Hamiltonians. Recall that the Hamiltonian governs the dynamics and the thermodynamics of a quantum system [61, 136]; therefore, by calculating its eigenstates it is possible to access all of the physical properties of the system. However, as one would suspect, such an achievement must somehow be extremely difficult, otherwise the job of a physicist would be restricted to setting up machines to diagonalize matrices, which we definitely know is not the case. Even though the development of exact diagonalization approaches is certainly an active and important field of research, there are several obstacles that restrict the use of such techniques in a wide range of systems as, for instance, the ultracold atomic gases that are central to this dissertation. In spite of that, the application of more sophisticated numerical techniques to different systems and models relies on benchmarking them in situations where exact methods can be used, viz. they must reproduce exact results. To accomplish that, I will describe in the present Section the procedure of numerically diagonalizing the Bose-Hubbard Hamiltonian in a very straightforward manner, which has been used to establish exact results that were later reproduced by Quantum Monte Carlo methods. A more complete description of exact diagonalization in quantum systems can be found in Ref. [137], and a very detailed application to the Bose-Hubbard model is given in Ref. [138], which unfortunately came to my attention only after I wrote my own codes. Particularly for cold bosons in optical lattices, the recent tutorial of Ref. [139] is quite pedagogical.

4.1.1 Selection of a suitable basis set

Diagonalizing the Hamiltonian operator 퐻^ is the very essence of completely solving a quantum mechanical problem. However, this possibility is seldom available. The main obstacle is that, for almost every relevant problem of quantum mechanics, the size of the vector space where the Hamiltonian operator lives, namely the Hilbert or Fock space, is gigantic. Exactly diagonalizing a matrix translates into finding the roots of its characteristic polynomial, which is analytically possible only for quite small, textbook-like systems. Nonetheless, using computers one can always find diagonalization strategies that, even though they inherently carry errors from finite numerical precision, can be made highly precise and accurate, and are applicable to larger, more complex problems. In order to extract the eigenstates of a Hamiltonian 퐻^, it is necessary to use a basis set to represent its matrix elements. Suppose we have a basis ℬ = {|1⟩, |2⟩, |3⟩, ...} that spans the Hilbert (or Fock) space of the system, such that the matrix elements of 퐻^ are given by 퐻푖푗 = ⟨푖|퐻^|푗⟩.

The matrix representation of 퐻^ in this basis, [퐻]ℬ, is then given by

         ⎡ 퐻11  퐻12  퐻13  ··· ⎤
         ⎢ 퐻21  퐻22  퐻23  ··· ⎥
[퐻]ℬ =  ⎢ 퐻31  퐻32  퐻33  ··· ⎥        (4.1.1)
         ⎣  ⋮    ⋮    ⋮    ⋱  ⎦

By solving the equation

det([퐻]ℬ − 휆1) = 0,        (4.1.2)

where 휆 is the variable to be found and 1 is the unit matrix, one can find the eigenvalues 휖푖 and associated eigenvectors |휑푖⟩ that then diagonalize the operator 퐻^, viz.

퐻^ |휑푖⟩ = 휖푖 |휑푖⟩ .        (4.1.3)

The task of solving such a polynomial equation can of course be made much easier if the structure of the matrix representation of 퐻^ can be made simpler. This can be achieved by a suitable choice of basis set, in the same sense as discussed in Section 1.2.3. In particular, the use of explicit symmetries of the problem is of great value, since it can reveal details of the shape of the Hamiltonian, such as block or triangular structures, which facilitates the diagonalization procedure.
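As a minimal numerical sketch of Eqs. (4.1.2) and (4.1.3) (assuming Python with NumPy rather than the FORTRAN/C++ codes used for this thesis; the 2×2 matrix is a generic toy example, not the DBHM):

```python
import numpy as np

# Matrix representation [H]_B of a Hamiltonian in some chosen basis.
# Toy two-level system: diagonal energies e1, e2 coupled by g (illustrative values).
e1, e2, g = -1.0, 1.0, 0.5
H = np.array([[e1, g],
              [g, e2]])

# eigh solves det([H] - lambda*1) = 0 for Hermitian matrices: it returns the
# eigenvalues in ascending order and orthonormal eigenvectors as columns.
eps, phi = np.linalg.eigh(H)

# Check H |phi_i> = eps_i |phi_i> for every eigenpair.
for i in range(len(eps)):
    assert np.allclose(H @ phi[:, i], eps[i] * phi[:, i])

print(eps)  # two eigenvalues, symmetric about (e1 + e2)/2 = 0
```

For this toy matrix the characteristic polynomial is 휆² = 1 + 푔², so the routine must return ±√1.25, which the assertion above verifies against Eq. (4.1.3) directly.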

4.1.2 Direct numerical diagonalization

Once an appropriate basis set is found, the next step is the application of a numerical method to diagonalize the obtained matrix representation, which is called direct numerical diagonalization. In general, numerically finding roots of polynomials of high order is a quite tricky task. In fact, it is often the case that they are found by generating a matrix that has the same characteristic polynomial and finding its eigenvalues – the inverse process. The most common strategy is then trying to find a unitary matrix $U$ that makes the Hamiltonian diagonal,

$$H \to U^\dagger H U, \qquad (4.1.4)$$
in an iterative way,

$$H \to U_1^\dagger H U_1 \to U_2^\dagger U_1^\dagger H U_1 U_2 \to U_3^\dagger U_2^\dagger U_1^\dagger H U_1 U_2 U_3 \to \ldots \qquad (4.1.5)$$
until the matrix becomes diagonal. The columns of $U = U_1 U_2 U_3 \ldots$ then contain the eigenvectors of $H$. One complication of such a method is that, given its structure, it is highly non-linear in the Hamiltonian terms. For matrices that exhibit stochastic noise, this can be a serious issue, since a small deviation in one of the entries can lead to a completely different eigenvalue/eigenvector set. On the other hand, one is often interested only in a subset of eigenvectors, for instance the low-lying energy states of a system or the highest occupation modes of reduced density matrices. For such purposes, power methods, the Lanczos method and the Arnoldi decomposition are specifically designed [18].

There are several computational libraries that perform such procedures. In particular, for FORTRAN programmers, LAPACK is potentially the most complete one. Another complete package with linear algebra tools is given by the GNU Scientific Library (GSL), more suitable for C/C++ users, which has been widely used in my codes.

4.1.3 Example for the DBHM

As a more concrete example, consider the disordered Bose-Hubbard Hamiltonian in the canonical ensemble, for a one-dimensional system with $L = 2$, i.e. containing two lattice sites,

$$\hat{H}_{\mathrm{can}} = -t\left(\hat{b}_1^\dagger \hat{b}_2 + \hat{b}_2^\dagger \hat{b}_1\right) + \frac{U}{2}\left(\hat{n}_1^2 + \hat{n}_2^2 - \hat{n}_1 - \hat{n}_2\right) + \epsilon_1 \hat{n}_1 + \epsilon_2 \hat{n}_2. \qquad (4.1.6)$$
Let us assume we are interested in the unit filling case, which means that the total number of particles is also two, $N = 2$. Considering the form of the Hamiltonian, a suitable basis set is given by the occupation numbers of each lattice site, $\mathcal{B} = \{|n_1, n_2\rangle\}$, which has the following state-vectors

$$\mathcal{B}_{\mathrm{can}} = \left\{ |2,0\rangle,\, |1,1\rangle,\, |0,2\rangle \right\}. \qquad (4.1.7)$$
The matrix elements are therefore

$$\begin{aligned}
\langle 2,0|\hat{H}_{\mathrm{can}}|2,0\rangle &= U + \epsilon_1 &\quad \langle 1,1|\hat{H}_{\mathrm{can}}|2,0\rangle &= -t &\quad \langle 0,2|\hat{H}_{\mathrm{can}}|2,0\rangle &= 0 \\
\langle 2,0|\hat{H}_{\mathrm{can}}|1,1\rangle &= -t &\quad \langle 1,1|\hat{H}_{\mathrm{can}}|1,1\rangle &= \epsilon_1 + \epsilon_2 &\quad \langle 0,2|\hat{H}_{\mathrm{can}}|1,1\rangle &= -t \\
\langle 2,0|\hat{H}_{\mathrm{can}}|0,2\rangle &= 0 &\quad \langle 1,1|\hat{H}_{\mathrm{can}}|0,2\rangle &= -t &\quad \langle 0,2|\hat{H}_{\mathrm{can}}|0,2\rangle &= U + \epsilon_2, \qquad (4.1.8)
\end{aligned}$$
which gives
$$[H]_{\mathcal{B}_{\mathrm{can}}} = \begin{bmatrix} U + \epsilon_1 & -t & 0 \\ -t & \epsilon_1 + \epsilon_2 & -t \\ 0 & -t & U + \epsilon_2 \end{bmatrix}. \qquad (4.1.9)$$
In this extremely simple case, it is actually possible to find an exact, analytical solution for the problem. However, if one considers the situation with just one more lattice site, $L = 3$ and $N = 3$, the canonical basis already has 10 elements,
$$\mathcal{B}_{\mathrm{can}, L=3} = \left\{ |0,0,3\rangle, |3,0,0\rangle, |0,3,0\rangle, |0,2,1\rangle, |2,0,1\rangle, |1,0,2\rangle, |0,1,2\rangle, |1,2,0\rangle, |2,1,0\rangle, |1,1,1\rangle \right\}, \qquad (4.1.10)$$
so that the Hamiltonian is a $10 \times 10$ matrix that certainly cannot be analytically diagonalized. The canonical basis increases enormously with the system size, which makes even the numerical diagonalization impractical for more than a few lattice sites. For instance, for $L = 4, 5, 6$ the basis has 35, 126 and 462 elements! The situation is even more complicated in the grand-canonical ensemble, where the disordered Bose-Hubbard Hamiltonian for $L = 2$ is given by

$$\hat{H}_{\mathrm{Gcan}} = -t\left(\hat{b}_1^\dagger \hat{b}_2 + \hat{b}_2^\dagger \hat{b}_1\right) + \frac{U}{2}\left(\hat{n}_1^2 + \hat{n}_2^2 - \hat{n}_1 - \hat{n}_2\right) + \epsilon_1 \hat{n}_1 + \epsilon_2 \hat{n}_2 - \mu(\hat{n}_1 + \hat{n}_2). \qquad (4.1.11)$$

Since in this case the number of particles is not fixed, the associated occupation number basis set is effectively infinite. To perform diagonalizations, it is necessary to truncate the basis. In general, a truncation that allows for a maximum occupation number of three times the filling of the system is required. The grand-canonical basis for 퐿 = 2 therefore has 16 elements,

$$\mathcal{B}_{\mathrm{Gcan}, L=2} = \left\{ |0,0\rangle, |1,0\rangle, |2,0\rangle, |3,0\rangle, |0,1\rangle, |1,1\rangle, |2,1\rangle, |3,1\rangle, |0,2\rangle, |1,2\rangle, |2,2\rangle, |3,2\rangle, |0,3\rangle, |1,3\rangle, |2,3\rangle, |3,3\rangle \right\}, \qquad (4.1.12)$$
or $4^L$ elements for any one-dimensional system. For $L = 6$, the basis has 4096 elements against the 462 of the canonical basis. A complete, full diagonalization for such a small system, followed by the calculation of the partition function and thermodynamic properties on a regular workstation (3.47 GHz processor) using GSL routines, takes about one day². By considering low temperatures, so that only a small number of excited states contribute effectively to the average of observables, it is possible to use Lanczos' method to extract a few of the smallest eigenpairs and reduce the required time to about a couple of hours. In comparison, Quantum Monte Carlo methods are able to estimate the same set of properties within a relative error of less than 1% in less than 30 seconds! This example clearly illustrates how a direct application of diagonalization techniques can become incredibly costly for reasonable system sizes.
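To make the discussion concrete, the sketch below diagonalizes the canonical $L = 2$ matrix of Eq. (4.1.9) using Python with NumPy (rather than the LAPACK/GSL routines used for the actual results); the parameter values $t$, $U$, $\epsilon_1$, $\epsilon_2$ are arbitrary choices for illustration only. It also checks the canonical basis sizes quoted above via the bosonic counting formula $\binom{N+L-1}{N}$.

```python
import numpy as np
from math import comb

# Hypothetical parameters, chosen only for illustration (one disorder realization)
t, U = 1.0, 4.0            # hopping and on-site repulsion
eps1, eps2 = 0.3, -0.2     # random on-site energies

# Matrix of Eq. (4.1.9) in the canonical basis {|2,0>, |1,1>, |0,2>}
H = np.array([[U + eps1, -t,          0.0     ],
              [-t,       eps1 + eps2, -t      ],
              [0.0,      -t,          U + eps2]])

# eigh exploits the fact that H is real and symmetric
energies, vectors = np.linalg.eigh(H)
print("eigenvalues:", energies)        # returned in ascending order
print("ground state:", vectors[:, 0])  # columns are the eigenvectors

# Size of the canonical basis at unit filling (N = L): C(N + L - 1, N)
print([comb(2 * L - 1, L) for L in (4, 5, 6)])  # [35, 126, 462], as in the text
```

The same matrix can of course be built programmatically for larger $L$; the point of the example is only how quickly the basis, and hence the matrix, grows.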

4.2 Monte Carlo

Instead of looking for a precise definition of what Monte Carlo methods are, it is useful to identify common features that different members of this class possess. Their essence is to use chance, or random behavior, to investigate the structure of stochastic processes and study phenomena that are governed by such mathematical objects. The primary ideas that gave birth to this widely used and extremely important class of algorithms were conceived within the context of the Manhattan Project, during World War II, and the development of the first electronics-based computers [140]. Over the years, it has developed into a methodology that permeates most of contemporary science, finance and engineering [141]. A very complete guide to the general concepts and applications of such methods can be found in Ref. [142], which is part of a larger set of textbooks on the subject. Here, I will give a couple of examples that illustrate a few features of Monte Carlo techniques.

4.2.1 A picturesque random experiment

Perhaps the most common example that illustrates the substance of Monte Carlo methods is the calculation of $\pi$ using random numbers. I will consider here a slightly changed version that comprises the calculation of the area of a lake using cannon-ball artillery. Suppose that

²$L = 6$ was the largest system for which I performed full exact diagonalization.

Figure 4.1: It is possible to use a randomly-shooting cannon to calculate the area of a lake (a). As the cannon is fired (b), some of the shots will hit the grass and some of them will hit the lake (c). After a certain amount of shots, since the firings are uncorrelated and random, we expect that the whole terrain will be reasonably covered (d). See text for discussion.

we somehow have a cannon that is capable of hitting any point within a square with its projectiles and, inside this square, there is a lake, as in Fig. 4.1(a). A fundamental hypothesis that we have to assume here is that the shots of the cannon are performed randomly, which means that every point within the square has exactly the same probability of being hit. Additionally, subsequent shots are uncorrelated events, in the sense that the result of the previous shot has no influence whatsoever over the next one. We then start our artillery experiment, randomly firing cannon balls all over the place, as described in Fig. 4.1(b). As we keep the artillery on, some cannon balls will hit the terrain outside the lake and some of them will hit the lake, as in Fig. 4.1(c). After a good amount of shots, or perhaps after the powder is all gone, we expect that the shots will have covered a reasonable portion of both the lake and the terrain, as in Fig. 4.1(d). By counting the number of shots that hit the lake we can estimate its area because

$$\frac{\text{Area of the lake}}{\text{Area that the cannon can reach}} = \frac{\text{Number of shots that hit the lake}}{\text{Total number of shots}},$$
and we previously know the area covered by the cannon – it is a square! Of course, the estimation of the area of a lake by using a firing cannon is extremely picturesque. However, this can be made a lot more realistic by considering, for instance, a satellite picture of the lake. The same estimate can be made by calculating the fraction of the number of pixels in the picture that fall into the lake to the total number of pixels in the image. Furthermore, instead of using cannons, we can use random numbers generated by a computer or, more boldly, outcomes from measurements in simple quantum systems [143]. This example also illustrates a few aspects of the Monte Carlo method:

1. The random character of the experiment is fundamentally important. For instance, if the shots of the cannon were instead performed with different intensities but along the same direction, we would definitely not have an estimate of the area, but of a section, or the size of a string, across the lake.

2. The number of times that the random experiment is repeated matters. For a few shots, as in Fig. 4.1(b), we would estimate that the lake does not exist! However, as the number of shots increases, the estimate gets more accurate and precise, which is a direct realization of the Law of large numbers.

3. The same experiment can be used for different purposes. For instance, we could have used a procedure to track the time that each ball that hits the lake takes to get to the bottom to estimate the average depth of the lake.

4. Instead of firing the same cannon several times, we could rather have a large collection of these random cannons that would all shoot at the same time. In the present case, both experiments would give the same result, a fortuity that is strictly related to the concept of ergodicity. Ergodic processes are of paramount importance for Monte Carlo methods to work properly.
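A minimal sketch of the lake experiment, with the cannon replaced by a pseudo-random number generator; the square side, lake shape (a circle), radius and shot counts are arbitrary illustrative choices.

```python
import random
import math

def estimate_lake_area(n_shots, side=2.0, radius=0.8, seed=1):
    """Fire n_shots uniformly into a side x side square and count hits
    inside a circular 'lake' of the given radius centered at the origin."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_shots):
        x = rng.uniform(-side / 2, side / 2)
        y = rng.uniform(-side / 2, side / 2)
        if x * x + y * y < radius * radius:
            hits += 1
    # (hits / total shots) times the area the cannon can reach
    return (hits / n_shots) * side * side

exact = math.pi * 0.8 ** 2   # the circle's true area, for comparison
for n in (100, 10_000, 1_000_000):
    print(n, estimate_lake_area(n), exact)
```

As expected from the discussion above, the estimate approaches the true area as the number of shots grows.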

A final remark regards the fact that we do not really need a random experiment to estimate areas, which are given by

$$\text{Area of the lake} = \iint_{\text{lake}} dx\,dy, \qquad (4.2.1)$$
an operation that can be easily performed using quite primitive numerical methods. However, for more complex problems that require the evaluation of multi-dimensional integrals with intricate boundary conditions – a common demand in quantum many-body systems – the Monte Carlo method is an extremely valuable tool.

4.2.2 Estimators

One of the most remarkable features of the Monte Carlo method is the ability to estimate quantities related to underlying stochastic processes in a controllable way. In a variety of problems, a certain quantity of interest $g$ depends on a random variable $X$ whose time-evolution is governed by a certain stochastic process $\mathcal{S}$. Consider initially that $g$ is measured, or estimated, only after the evolution of the probability distribution $\mathcal{P}(X)$ has led to equilibrium, viz. when $\mathcal{S}$ has reached a stationary state. For practical purposes, what is needed is that both the average and the variance of $X$ are fixed. When such an equilibrium state is achieved, the expected value of $g$ is given by
$$E[g(X)] = \int g(X)\,\mathcal{P}(X)\,dX, \qquad (4.2.2)$$
while its variance is

$$\begin{aligned}
\mathrm{Var}[g(X)] &\equiv E\left[(g(X) - E[g(X)])^2\right] \qquad (4.2.3)\\
&= E\left[(g(X))^2 - 2\,g(X)\,E[g(X)] + (E[g(X)])^2\right] \\
&= E\left[(g(X))^2\right] - 2\,E[g(X)]\,E[g(X)] + (E[g(X)])^2 \\
&= E\left[(g(X))^2\right] - (E[g(X)])^2 \\
&= \int [g(X)]^2\,\mathcal{P}(X)\,dX - \left[\int g(X)\,\mathcal{P}(X)\,dX\right]^2. \qquad (4.2.4)
\end{aligned}$$

In both cases, calculation of the quantities requires complete information about the probability distribution $\mathcal{P}$, which is seldom available³. Notwithstanding that, inspired by our random-firing-cannon example, we could think about designing a specific type of cannon that would sample $\mathcal{P}$, in the sense that the shots would be performed not completely randomly, but rather according to the probability distribution of the random variable $X$. In other words, this designed cannon would allow for the generation of draws from the probability distribution. If we perform a series of shots, or equivalently if we sample a collection of values $\{X_1, X_2, X_3, \ldots, X_N\}$ from $\mathcal{P}$, then

$$G \equiv \frac{1}{N}\sum_{i=1}^{N} g(X_i) \qquad (4.2.5)$$
is a Monte Carlo estimator for the expected value of $g$. Of course, the effort that we need to put in is now hidden in the construction of the cannon, i.e. in how to sample a – most often intricate – probability distribution. Fortunately, there is a set of techniques that were developed over the years to accomplish this task, and some of them are going to be discussed in the next Section. However, before discussing sampling techniques, it is of paramount importance to make explicit how Monte Carlo estimates can be made in a controllable fashion, as noted at the beginning of the present Section. This can be done by noticing that

$$E[G] = E\left[\frac{1}{N}\sum_{i=1}^{N} g(X_i)\right] = \frac{1}{N}\sum_{i=1}^{N} E[g(X_i)] = E[g(X)], \qquad (4.2.6)$$

³See the discussion about population distributions and sample distributions in Appendix A, where random variables, expected values and other moments are further discussed.

and also

$$\begin{aligned}
\mathrm{Var}[G] &= \mathrm{Var}\left[\frac{1}{N}\sum_{i=1}^{N} g(X_i)\right] = E\left[(G - E[G])^2\right] = E\left[G^2\right] - E[G]^2 = E\left[G^2\right] - E[g(X)]^2 \\
&= E\left[\frac{1}{N^2}\sum_{i=1}^{N}\sum_{j=1}^{N} g(X_i)\,g(X_j)\right] - E[g(X)]^2 \\
&= E\left[\frac{1}{N^2}\sum_{i=1}^{N}\sum_{j=1, j\neq i}^{N} g(X_i)\,g(X_j) + \frac{1}{N^2}\sum_{i=1}^{N} g(X_i)^2\right] - E[g(X)]^2 \\
&= \frac{1}{N^2}\sum_{i=1}^{N}\sum_{j=1, j\neq i}^{N} E[g(X_i)]\,E[g(X_j)] + \frac{1}{N^2}\,N\,E\left[g(X)^2\right] - E[g(X)]^2 \\
&= \frac{N(N-1)}{N^2}\,E[g(X)]^2 + \frac{1}{N}\,E\left[g(X)^2\right] - E[g(X)]^2 \\
&= \frac{1}{N}\left\{ E\left[g(X)^2\right] - E[g(X)]^2 \right\} = \frac{\mathrm{Var}[g(X)]}{N}. \qquad (4.2.7)
\end{aligned}$$
(The factorization $E[g(X_i)\,g(X_j)] = E[g(X_i)]\,E[g(X_j)]$ for $i \neq j$ holds because the draws are independent.)
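The $1/N$ shrinkage of $\mathrm{Var}[G]$ can be observed directly. The sketch below is a toy example with $g(X) = X^2$ and $X$ uniform on $(0,1)$ (so that $E[g] = 1/3$ exactly and $\mathrm{Var}[g] = 4/45$); it repeats the whole estimate many times and measures the spread of $G$ itself.

```python
import random
import statistics

# Monte Carlo estimator G of Eq. (4.2.5) for g(X) = X^2, X uniform on (0,1)
def mc_estimate(n, rng):
    return sum(rng.random() ** 2 for _ in range(n)) / n

rng = random.Random(42)
spreads = {}
for n in (10, 1000):
    # repeat the whole estimate 500 times to see the spread of G
    estimates = [mc_estimate(n, rng) for _ in range(500)]
    spreads[n] = statistics.stdev(estimates)
    print(n, statistics.mean(estimates), spreads[n])
```

Increasing $N$ by a factor of 100 shrinks the spread of the estimates by roughly a factor of 10, as Eq. (4.2.7) predicts.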

This last pair of equations, for the expected value of $G$ and its variance, ensures that the estimate of $g$ becomes more precise and more accurate as the number of draws from the probability distribution is increased. Recall that we reached the same conclusion in the calculation of the area of a lake using the random cannon: the more shots, the better the estimate of the area. The accuracy is guaranteed by the fact that $E[G] = E[g(X)]$, whereas the precision is ensured by $\mathrm{Var}[G] = \mathrm{Var}[g(X)]/N$. However, the latter is only true if the variance of $g(X)$ remains fixed, which is related to the condition of equilibrium previously enunciated. Even though the equilibrium requirement is possibly fulfilled by a random variable that does not have a temporal dependence, a similar approach can be applied to stochastic processes not necessarily in stationary states, provided the boundary conditions of the problem are fixed. Suppose, for instance, that we are interested in the evolution of $X = X(t)$ from $t = 0$ to $t = \tau$ and we know that

$$\begin{cases} X(t=0) \equiv X_0 = a \\ X(t=\tau) \equiv X_\tau = b, \end{cases} \qquad (4.2.8)$$
which fixes the boundary conditions. In this type of situation, the property $g$ that we had before translates into a quantity $\mathcal{G}$ that is a functional of $X(t)$, viz. it has a different value for every different “path” that the stochastic variable takes to go from $X_0$ to $X_\tau$. Associated with the stochastic process $\mathcal{S}$ that drives $X(t)$ there is a law which attributes statistical weights to different paths through a measure $\mathcal{D}X$, so that

$$E[\mathcal{G}] = \int_{X_0}^{X_\tau} \mathcal{G}[X(t)]\,\mathcal{D}X, \qquad (4.2.9)$$
which expresses the expected value of $\mathcal{G}$ in terms of a path-integral, or in general a functional integration⁴. In simple terms, $E[\mathcal{G}]$ is given by the sum of the values of $\mathcal{G}$ along all paths $X(t)$ that start at $X_0$ and end at $X_\tau$ in a time $\tau$, multiplied by the respective “probability” attributed to each path. Just like before, we can thus estimate the value of $\mathcal{G}$ using the estimator

$$\mathrm{G} = \frac{1}{N}\sum_{i=1}^{N} \mathcal{G}[X_i(t)], \qquad (4.2.10)$$
where the sum is over a collection of sampled paths $\{X_1(t), X_2(t), \ldots, X_N(t)\}$. These paths are generally called random walks, even though there are several specific names depending on the different stochastic processes that generate them. A good example of this kind of approach is the externally driven Brownian motion, which describes a particle under the action of molecular random forces, whose position is therefore a stochastic variable $X(t)$, subjected to an external field $V(X(t))$. If the particle is at position $X_0$ at $t = 0$, the probability amplitude to find it at $X_\tau$, at $t = \tau$, is given by

$$\rho(X_0, X_\tau; \tau) = \int_{X_0}^{X_\tau} \exp\left\{-\int_0^\tau V(X(t))\,dt\right\} \mathcal{D}_W X, \qquad (4.2.11)$$
where $\mathcal{D}_W X$ is the Wiener measure [145]. The paths generated by this Wiener process, shown in Fig. 4.2, are called Brownian bridges, and they originate from considering only the molecular random forces, not including the external field. This also illustrates an important application of this formalism, which allows one to solve non-linear partial differential equations in terms of expected averages over random walks.
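Brownian bridges such as those of Fig. 4.2 are easy to generate; the sketch below uses the standard construction $B(t) = W(t) - t\,W(1)$, which pins a free random walk $W(t)$ at both ends (step counts and seeds are arbitrary illustrative choices).

```python
import random
import math

def brownian_bridge(n_steps, rng):
    """Sample a free random walk W(t) on [0,1] and pin both ends to zero
    via B(t) = W(t) - t*W(1), a standard construction of the bridge."""
    dt = 1.0 / n_steps
    w = [0.0]
    for _ in range(n_steps):
        w.append(w[-1] + rng.gauss(0.0, math.sqrt(dt)))
    w_end = w[-1]
    return [w_i - (i * dt) * w_end for i, w_i in enumerate(w)]

rng = random.Random(7)
bridges = [brownian_bridge(200, rng) for _ in range(2000)]
# Both end points are pinned to zero; the variance at t = 1/2 should
# approach t(1 - t) = 1/4, a known property of the Brownian bridge.
mid_var = sum(b[100] ** 2 for b in bridges) / len(bridges)
print(bridges[0][0], bridges[0][-1], mid_var)
```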

⁴Actually, path-integrals, which were initially developed in the context of Feynman's approach to quantum mechanics, are more subtle because their “measure” may not be positive-definite, but rather a complex number [144].

Figure 4.2: One-dimensional Brownian bridges with end points 푋0 = 푋1 = 0. Figure from PHYS- 510 course of George Mason University, 2010.

4.3 Sampling techniques

Now that we have developed a procedure to estimate quantities of interest that depend upon some underlying phenomenon of stochastic nature⁵, it is time to discuss the main ingredient of this approach, which comprises the sampling of probability distributions. This can be done for simple situations, such as a sequence of numbers, as well as for more complicated cases of random walks that are configurations in a highly-dimensional phase-space. I will discuss three techniques, with increasing functionality, starting from the most straightforward one, based on a transformation of random variables, and then moving on to rejection techniques and the $M(RT)^2$ algorithm. All of them are better contextualized and certainly more detailed in Ref. [142].

4.3.1 Transformation of random variables

Most sampling techniques used in modern computational algorithms rely on the ability to generate pseudo-random numbers, which are random within an extremely large period. In particular, a common choice is the Mersenne twister generator, which has a period of $2^{19937} - 1$ [146]. This is the generator that has been used in the Monte Carlo codes that produced the results to be presented in this dissertation. It generates uniformly distributed random numbers that can be used to draw samples from a different probability distribution using a transformation of random variables. In what follows, p.d.f. will stand for probability density function.

⁵Its evolution, however, can very well be deterministic.

The transformation of random variables finds a good motivation in the common use of substitutions to perform integrations in basic calculus. Suppose, for instance, that we face the problem
$$I = \int_0^{\sqrt{\pi/2}} 2x \cos x^2\,dx, \qquad (4.3.1)$$
and we do not notice that the integrand $2x\cos x^2$ is the derivative of $\sin x^2$. We would then proceed by substituting $y$ for $x^2$:

1. Transform $y = x^2$

2. Replace the limits, $x = 0 \to y = 0$ and $x = \sqrt{\pi/2} \to y = \pi/2$

3. Noticing the inverse transformation $x = \sqrt{y}$, replace the integrand $2x\cos x^2$ by $2\sqrt{y}\cos y$

4. Using the inverse transformation, $\dfrac{dy}{dx} = 2\sqrt{y}$, so that $dx = \dfrac{dy}{2\sqrt{y}}$

The original problem then becomes

$$I = \int_0^{\pi/2} \cos y\,dy = 1. \qquad (4.3.2)$$
In the general case,

$$I = \int_a^b f(x)\,dx, \qquad (4.3.3)$$
these steps would read

1. Choose a suitable transformation function 푦(푥)

2. Note its inverse 푥(푦)

3. Replace the limits by 푦(푎) and 푦(푏)

4. Replace $dx$ by $\dfrac{dx}{dy}\,dy$,

$$I = \int_{y(a)}^{y(b)} f(x(y))\,\frac{dx}{dy}\,dy. \qquad (4.3.4)$$

Suppose now that $X$ is a random variable with p.d.f. $f_X(x)$; then, by definition,

$$\mathcal{P}\{a \le X \le b\} = \int_a^b f_X(x)\,dx, \qquad (4.3.5)$$

Figure 4.3: Transformation of random variables via $Y = y(X)$. At the left, the p.d.f. of $X$, followed by the transformation $y(x)$ (center) and by the p.d.f. associated with $Y$ (right). The shaded areas must be the same, see text for discussion. Figure from the Probability Course of Cambridge University, 2003-2004. The whole discussion about transformation of variables is strongly motivated by this document.

and we want to transform $X$ into another random variable $Y$ via a continuous function $y$, i.e. $Y = y(X)$. Moreover, if $a \le X < b$ then $y(a) \le Y < y(b)$ and

$$\mathcal{P}\{y(a) \le Y < y(b)\} = \mathcal{P}\{a \le X < b\} = \int_a^b f_X(x)\,dx = \int_{y(a)}^{y(b)} f_X(x(y))\,\frac{dx}{dy}\,dy, \qquad (4.3.6)$$
which makes explicit that the integrand depends entirely on $y$; calling it $g(y)$,

$$\mathcal{P}\{y(a) \le Y < y(b)\} = \int_{y(a)}^{y(b)} g(y)\,dy, \qquad (4.3.7)$$
therefore $g(y)$ is the p.d.f. associated with $Y$. However, the crucial step is the assignment
$$\mathcal{P}\{y(a) \le Y < y(b)\} = \mathcal{P}\{a \le X < b\}, \qquad (4.3.8)$$
which implies that, given some assumptions about the transformation $y(x)$, the value of $Y$ must be in the range $y(a)$ to $y(b)$ and the probability of $Y$ being in this range is the same as the probability of $X$ being in the range $a$ to $b$. In Fig. 4.3, this condition means that the shaded areas are the same. It can be shown that, for a transformation function that is either monotonically increasing or decreasing, this requirement is fulfilled. First, define the cumulative distribution function, c.d.f. from now on, associated with the p.d.f. $f_X(x)$ by
$$F_X(x) = \int_{-\infty}^{x} f_X(x')\,dx', \qquad (4.3.9)$$
and now consider the monotonically increasing case, such that
$$y(X) \le y(x) \iff X \le x, \qquad (4.3.10)$$
therefore $\mathcal{P}\{Y = y(X) \le y(x)\} = \mathcal{P}\{X \le x\}$

$$\implies F_X(x) = F_Y(y) \qquad (4.3.11)$$
and also

$$\frac{dF_X}{dx} = \frac{dF_Y}{dx} = \frac{dF_Y}{dy}\,\frac{dy}{dx} \implies f_X(x) = f_Y(y)\,\frac{dy}{dx}. \qquad (4.3.12)$$
Similarly, for a monotonically decreasing function,

$$\mathcal{P}\{Y = y(X) \le y(x)\} = \mathcal{P}\{X \ge x\}$$
$$\implies F_Y(y) = 1 - F_X(x), \qquad (4.3.13)$$
such that
$$f_X(x) = -f_Y(y)\,\frac{dy}{dx}. \qquad (4.3.14)$$
In either case, the relation between the p.d.f.s resulting from the transformation $Y = y(X)$ is

$$f_X(x) = f_Y(y)\left|\frac{dy}{dx}\right| \implies |f_X(x)\,dx| = |f_Y(y)\,dy|, \qquad (4.3.15)$$
which is the condition we wanted. Therefore, the relationship between the p.d.f. $f_X(x)$, the inverse of a transformation function $x(y)$ and the derived p.d.f. $g(y)$ is

$$g(y) = f_X(x(y))\left|\frac{dx}{dy}\right|. \qquad (4.3.16)$$

In the present context, the relevant question to ask is: given a certain p.d.f. $f_X(x)$ that is known, in the sense that we are able to sample from it, how do we transform these draws into a collection that is distributed according to another p.d.f. $g(y)$ that we do not know how to sample? Of particular interest is the case where $f_X(x)$ is a uniform distribution,

$$f_X(x) = \begin{cases} 1, & \text{if } 0 \le x < 1 \\ 0, & \text{otherwise.} \end{cases} \qquad (4.3.17)$$
As an example, consider that we want to obtain the transformation that leads to exponentially distributed numbers, i.e.

$$g(y) = \lambda e^{-\lambda y}. \qquad (4.3.18)$$
We then have

$$\begin{aligned}
g(y) &= f_X(x(y))\left|\frac{dx}{dy}\right| = f_X(x(y))\,\frac{dx}{dy} \\
\lambda e^{-\lambda y} &= \frac{dx}{dy} \\
dx &= \lambda e^{-\lambda y}\,dy \\
x &= e^{-\lambda y} \implies y = -\frac{1}{\lambda}\log x \qquad (4.3.19)
\end{aligned}$$
as the required transformation. Of course, such a direct procedure is not always available, and different sampling techniques are then necessary.
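The inverse transformation of Eq. (4.3.19) can be checked numerically in a few lines; $\lambda$ and the sample count below are arbitrary illustrative choices.

```python
import random
import math

def sample_exponential(lam, rng):
    """Transform a uniform draw into an exponential one via Eq. (4.3.19),
    y = -(1/lambda) log x."""
    x = rng.random()                 # uniform on [0, 1)
    return -math.log(1.0 - x) / lam  # use 1-x (also uniform) to avoid log(0)

rng = random.Random(3)
lam = 2.0
draws = [sample_exponential(lam, rng) for _ in range(200_000)]
print(sum(draws) / len(draws))  # should approach the exact mean 1/lambda = 0.5
```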

4.3.2 Acceptance-rejection

There is a class of methods for sampling probability distributions that make use of trial values, selected and tested by criteria involving one or more different random variables, after which the trial is either accepted or rejected as a sampled value. If rejected, a new trial is proposed and tested, and the cycle goes on until acceptance takes place. In general, these are called rejection techniques. From now on I will use the convention that the Greek letter $\xi$ represents a draw from a uniform probability distribution in the unit interval $U = (0,1)$, i.e.
$$f(\xi) = \begin{cases} 1, & \text{if } 0 \le \xi < 1 \\ 0, & \text{otherwise.} \end{cases} \qquad (4.3.20)$$

Following Ref. [142], consider that we want to sample $X$ from a complicated p.d.f. in $U$, and that the trial value $X_0 = \xi_1$ is chosen. The test to which $X_0$ is subjected must allow for a larger acceptance where $f_X(x)$ is larger, compared to the acceptance for values that have smaller probability. This is immediately accomplished by accepting $X_0$ with probability proportional to $f_X(X_0)$. For simplicity, consider that $f_X(x)$ has its maximum at $x = 0$. We then choose points within the smallest rectangle that encloses the function $f_X(x)$, with abscissa $X_0 = \xi_1$ and ordinate $\xi_2 f_X(0)$. A test that meets the requirement is

1. Accept $X = X_0$ if $\xi_2 \le f_X(X_0)/f_X(0)$.

Geometrically, points lying above the curve $f_X(x)$ are rejected, while points below the curve are accepted. The accepted abscissae are therefore distributed according to $f_X(x)$. More generally, consider that we want to sample $X$ from a p.d.f. $f_X(x)$, but it is easier for us to sample $Z$ from the p.d.f. $g_Z(z)$. This $Z$ is accepted, $X \leftarrow Z$, with probability $h(z)$, i.e. by drawing $\xi_1$ and verifying that $\xi_1 \le h(Z) < 1$. If not, we sample another $Z$. There is therefore the possibility of success, when $Z$ is accepted such that $X = Z$, and of failure, when $Z$ is rejected. The joint probability of $Z < x$ and $\xi_1 \le h(Z)$ is then
$$\mathcal{P}\{Z < x \text{ and } \xi_1 \le h(Z)\} \equiv \mathcal{P}\{Z < x \text{ and success}\} = \int_{-\infty}^{x} h(z)\,g_Z(z)\,dz, \qquad (4.3.21)$$
and in particular,
$$\mathcal{P}\{Z < \infty \text{ and success}\} = \int_{-\infty}^{+\infty} h(z)\,g_Z(z)\,dz. \qquad (4.3.22)$$
This joint probability can be written as the product of the marginal probability of success, $\mathcal{P}\{\text{success}\}$, and the conditional probability that $Z < x$ given success, $\mathcal{P}\{Z < x \mid \text{success}\}$,

$$\mathcal{P}\{Z < x \text{ and success}\} = \mathcal{P}\{\text{success}\}\;\mathcal{P}\{Z < x \mid \text{success}\}, \qquad (4.3.23)$$
which for the latter case reads

$$\mathcal{P}\{Z < \infty \text{ and success}\} = \mathcal{P}\{\text{success}\}\;\mathcal{P}\{Z < \infty \mid \text{success}\}. \qquad (4.3.24)$$

As we consider real-valued stochastic variables, $\mathcal{P}\{Z < \infty \mid \text{anything}\} = 1$, therefore

$$\mathcal{P}\{Z < \infty \text{ and success}\} = \int_{-\infty}^{+\infty} h(z)\,g_Z(z)\,dz = \mathcal{P}\{\text{success}\}. \qquad (4.3.25)$$
Finally, using Eq. 4.3.23, we have

$$\begin{aligned}
\mathcal{P}\{Z < x \mid \text{success}\} &= \text{distribution of } Z \text{ coming from a certain rejection algorithm} \qquad (4.3.26) \\
&= \frac{\mathcal{P}\{Z < x \text{ and success}\}}{\mathcal{P}\{\text{success}\}} \qquad (4.3.27) \\
&= \int_{-\infty}^{x} h(z)\,g_Z(z)\,dz \bigg/ \int_{-\infty}^{+\infty} h(z)\,g_Z(z)\,dz, \qquad (4.3.28)
\end{aligned}$$
which makes explicit that the p.d.f. resulting from the rejection technique is proportional to
$$\frac{h(z)\,g_Z(z)}{\int_{-\infty}^{+\infty} h(z')\,g_Z(z')\,dz'}. \qquad (4.3.29)$$

However, this is not necessarily $f_X(x)$! A suitable choice for $h(z)$ would then be
$$h(z) = \frac{f(z)/g(z)}{B_h}, \qquad (4.3.30)$$
where $B_h$ is an upper bound for the ratio $f(z)/g(z)$, so that $h(z) \le 1$. With that,
$$\frac{h(z)\,g_Z(z)}{\int_{-\infty}^{+\infty} h(z')\,g_Z(z')\,dz'} = \frac{\frac{f(z)/g(z)}{B_h}\,g(z)}{\int_{-\infty}^{+\infty} \frac{f(z')/g(z')}{B_h}\,g_Z(z')\,dz'} = \frac{f(z)}{\int_{-\infty}^{+\infty} f(z')\,dz'} = f(z), \qquad (4.3.31)$$
which shows that the method with this choice of $h(z)$ generates the p.d.f. of interest. The a priori success probability, $\mathcal{P}\{\text{success}\}$, which is the efficiency $\epsilon$ of the rejection algorithm, is given by

$$\mathcal{P}\{\text{success}\} \equiv \epsilon = \int_{-\infty}^{+\infty} h(z)\,g_Z(z)\,dz = \frac{1}{B_h}\int_{-\infty}^{+\infty} \left[\frac{f(z)}{g(z)}\right] g(z)\,dz = \frac{1}{B_h}, \qquad (4.3.32)$$
so that one should look for the least upper bound, or supremum, of the ratio $f(z)/g(z)$ to achieve optimal acceptance.
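As an illustration of the rejection method with a bound $B_h$, the sketch below samples the toy density $f(x) = 3x^2$ on $(0,1)$ from a uniform proposal $g(x) = 1$, for which $B_h = \sup f/g = 3$ and the expected efficiency is $1/3$; the target density is an arbitrary choice made only for this example.

```python
import random

def rejection_sample(rng):
    """Sample f(x) = 3x^2 on (0,1) with a uniform proposal g(x) = 1.
    Here B_h = sup f/g = 3, so h(z) = f(z)/(B_h g(z)) = z^2 and the
    expected efficiency of Eq. (4.3.32) is 1/B_h = 1/3."""
    while True:
        z = rng.random()    # trial value drawn from g
        xi = rng.random()   # uniform test variable
        if xi <= z * z:     # accept with probability h(z)
            return z

rng = random.Random(11)
draws = [rejection_sample(rng) for _ in range(100_000)]
print(sum(draws) / len(draws))  # exact mean of f(x) = 3x^2 is 3/4
```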

4.3.3 Metropolis algorithm

One of the most powerful methods of sampling probability distributions on modern computers, and certainly the most widespread, is the Metropolis algorithm, introduced in 1953 [147] by Metropolis et al.⁶ The method is guaranteed to sample the desired probability distribution, which can be as complicated as possible, but has a couple of subtleties associated with its construction. The first one regards the fact that it samples the required probability distribution only after an equilibration step, which is related to the use of Markov chains. The second one is that, even after equilibration, subsequent draws can be highly correlated, which relates to the trial movements of the underlying rejection technique that the method employs. In spite of that, it has found enormous success and will be used as the primary engine for the Quantum Monte Carlo method that is relevant to the present dissertation. As we have a general idea of rejection techniques from the discussion in the last Section, I will introduce the second ingredient of the method, which is the Markov chain. In very broad terms, it can be defined as a stochastic process in which the evolution of the random variable has no memory. This means that a subsequent value of $X$, say $X_{n+1}$, depends solely on the current value $X_n$, completely disregarding the way in which $X$ has previously evolved to reach $X_n$. It is sometimes said that the process therefore has independent future and past, conditional on the present state of the system. The method bears a strong resemblance to the behavior of systems described by statistical mechanics that approach equilibrium, when the statistical properties do not depend upon the kinetics of the system. Following once more Ref. [142], by system it is meant a point $x$ in a certain space $\Omega$ that can possibly be thought of as the description of a physical problem, even though this is not necessary at all.
As an example, $x$ could be the positions of all the particles of an ideal gas⁷. By kinetics it is meant the stochastic transition rule that governs the evolution of the system, more specifically a probability distribution $K(X|Y)$ that encapsulates the possible evolution of a system known to be at $Y$ towards $X$. In the case of the ideal gas, $K$ would comprise the present positions and momenta of all atoms at a point $Y$ in order to produce a transition probability to a point $X$. In Monte Carlo simulations, it plays the role of the sampling distribution. A very important condition for the system to evolve towards an equilibrium state and stay there is called detailed balance. If the equilibrium probability distribution to find the system at $X$ is $f(X)$, then the kinetics must satisfy

$$f(Y)\,K(X|Y) = f(X)\,K(Y|X), \qquad (4.3.33)$$
which states that the likelihood of the system moving from $Y$ to $X$ is exactly the same as that of the movement in the reverse direction. Notice that the joint probability of moving from $Y$ to $X$ is expressed as the a priori chance of the system being found in $Y$, viz. $f(Y)$, times the conditional probability that it will move from $Y$ to $X$, viz. $K(X|Y)$. In treating a physical system, usually what is known is the kinetics $K$ and what is wanted is $f$. The Metropolis algorithm deals with

⁶The surnames of the authors are Metropolis, Rosenbluth, Rosenbluth, Teller and Teller; hence it is sometimes referred to as the M(RT)² algorithm.
⁷$\Omega$ usually describes the configurational space of a physical system.

the inverse situation: it finds a convenient and correct kinetics that will lead to equilibration of the system such that the probability of finding it at $X$ is the given $f(X)$. The algorithm is simply stated as follows: transitions are proposed, say from $Y$ to $X'$, using any distribution $T(X'|Y)$; then, by comparing $f(X')$ to $f(Y)$ and also taking into account $T$, the system is either moved to $X'$ (accepted movement) with acceptance probability $A(X'|Y)$, or it remains in $Y$ (rejected movement). The kinetics is then given by

$$K(X|Y) = A(X|Y)\,T(X|Y), \qquad (4.3.34)$$
and the detailed balance condition reads

$$f(Y)\,A(X|Y)\,T(X|Y) = f(X)\,A(Y|X)\,T(Y|X). \qquad (4.3.35)$$
Given the p.d.f. $f(X)$, the method establishes a random walk, i.e. a sequence of random variables $\{X_1, X_2, X_3, \ldots, X_n\}$, each one with an associated probability distribution $\{\varphi_1(X), \varphi_2(X), \ldots, \varphi_n(X)\}$, such that, asymptotically, $X$ is distributed according to $f$,

$$\lim_{n \to \infty} \varphi_n(X) = f(X). \qquad (4.3.36)$$

Notice that $\varphi_1(X)$ can rigorously be any distribution. At each step, the transition probabilities are normalized,
$$\int T(X|Y)\,dX = 1, \qquad (4.3.37)$$
which means that the system will surely evolve to a point $X$ from $Y$, including $X = Y$. By assuming that if it is possible to move from $Y$ to $X$ then it is also possible to perform the opposite movement, we can define

$$q(X|Y) \equiv \frac{T(Y|X)\,f(X)}{T(X|Y)\,f(Y)} \ge 0, \qquad (4.3.38)$$
such that the acceptance probabilities can be calculated via

$$A(X|Y) = \min\{1,\, q(X|Y)\}. \qquad (4.3.39)$$
The algorithm is summarized as follows:

1. At step $n$ of the random walk, $X = X_n$. A possible next value $X'_{n+1}$ is sampled from $T(X'_{n+1}|X_n)$.

2. The probability of accepting X′_{n+1} is computed. If q(X′_{n+1}|X_n) ≥ 1, then A(X′_{n+1}|X_n) = 1. If q(X′_{n+1}|X_n) < 1, then A(X′_{n+1}|X_n) = q(X′_{n+1}|X_n).

3. With probability A(X′_{n+1}|X_n) we set X_{n+1} = X′_{n+1}. There is an element of rejection here: if X′_{n+1} is not accepted, the previous value is used, X_{n+1} = X_n, rather than a new one being sampled.

4. Go back to step 1.

The proof that the method indeed samples the desired p.d.f. f(X) is beyond the scope of the present Chapter, which is meant to introduce generic numerical techniques; it can be found elsewhere, as in [142]. In any case, as we have noticed, the transition probabilities T can be anything, and a very common choice is a uniform distribution,

T(X|Y) = 1/Δ if |X − Y| < Δ/2, and 0 otherwise, (4.3.40)

where Δ is large enough so that the trial moves potentially lead away from the present Y. With this choice, we simply have

q(X|Y) = f(X)/f(Y). (4.3.41)

Since rejections are just as important as acceptances for an equilibrated system, as a rule of thumb one usually sets Δ so that the acceptance ratio, i.e. the fraction of accepted trials to the total number of trials, is about 50%.
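To make the recipe concrete, here is a minimal Python sketch of the M(RT)² walk. The target f is an unnormalized Gaussian (an assumption made purely for this example) and the proposal is the uniform distribution of Eq. 4.3.40:

```python
import math
import random

def metropolis(f, x0, delta, n_steps, rng):
    """M(RT)^2 random walk: uniform proposal T(X|Y) on (Y - delta/2, Y + delta/2),
    acceptance A(X|Y) = min(1, f(X)/f(Y)) as in Eqs. 4.3.39 and 4.3.41."""
    x = x0
    samples = []
    accepted = 0
    for _ in range(n_steps):
        x_trial = x + delta * (rng.random() - 0.5)  # symmetric uniform proposal
        q = f(x_trial) / f(x)                       # Eq. 4.3.41
        if q >= 1.0 or rng.random() < q:            # accept the trial move ...
            x = x_trial
            accepted += 1
        samples.append(x)                           # ... or keep the previous value
    return samples, accepted / n_steps

# target: unnormalized standard Gaussian (only ratios of f enter the algorithm)
f = lambda x: math.exp(-0.5 * x * x)
rng = random.Random(42)
samples, acc_ratio = metropolis(f, 0.0, 4.0, 50000, rng)
mean = sum(samples) / len(samples)
var = sum(s * s for s in samples) / len(samples) - mean ** 2
```

With Δ = 4 the acceptance ratio lands near the 50% rule of thumb, and the sample mean and variance approach the exact values 0 and 1.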

Chapter 5

Stochastic Series Expansion

The primary Quantum Monte Carlo tool used to obtain the results to be presented in this dissertation is the Stochastic Series Expansion method (SSE). It was initially introduced in the early 1990s by A. Sandvik [148] as a generalization of an old method from the 1960s that provides a path to apply Monte Carlo methods to handle systems described by quantum statistical me- chanics [149]. SSE is an extremely robust and versatile method to calculate properties of lattice Hamiltonians, being exact for bosons even though its primary applications were for fermionic sys- tems. In this Chapter, I will introduce its historical roots via Handscomb’s method, which will be subsequently generalized to obtain the SSE formalism. I will also discuss some modifications such as the directed-loop algorithm. The original papers by Sandvik et al. contain everything that is needed to apply the method to systems of interest [148, 150–153]. However, the discussion that follows is strongly based on Ref. [18].

5.1 Handscomb’s method

In a couple of papers from the early 1960s, the mathematician D. C. Handscomb, a known expert in Monte Carlo methods, presented a suitable manner of creating a configurational space for quantum-mechanical systems, whose properties, described by a certain partition function, or more generally by a density operator, could then be easily sampled via generic techniques such as the Metropolis algorithm. The first paper [149], boldly entitled “The Monte Carlo method in quantum statistical mechanics”, describes the general formalism to construct such a configuration space, whereas the second brings a direct application to the Heisenberg model [154]. However, as we shall see, the method relies on the ability to exactly calculate traces of operators, which greatly limits its direct applicability. To start with, consider that the Hamiltonian of the system can be written as a sum of individual Hamiltonians, not necessarily commuting,

H = − Σ_{i=1}^{M} H_i, (5.1.1)

such that the partition function of the system is

풵 = Tr{e^{−βH}} = Tr{exp[β Σ_{i=1}^{M} H_i]}. (5.1.2)

We now use the definition of the exponential of an operator in terms of its Taylor series, such that

풵 = Tr{Σ_{n=0}^{∞} (β^n/n!) (−H)^n} = Tr{Σ_{n=0}^{∞} (β^n/n!) (Σ_{i=1}^{M} H_i)^n} = Tr{Σ_{n=0}^{∞} (β^n/n!) S_{n,M}}. (5.1.3)

The terms in this sum are given by

S_{n,M} = (Σ_{i=1}^{M} H_i)^n, (5.1.4)

or explicitly:

1. M = 1

(a) n = 1: S_{1,1} = H_1^1 = H_1
(b) n = 2: S_{2,1} = H_1^2 = H_1 H_1
(c) n = 3: S_{3,1} = H_1^3 = H_1 H_1 H_1

2. M = 2

(a) n = 1: S_{1,2} = (H_1 + H_2)^1 = H_1 + H_2
(b) n = 2: S_{2,2} = (H_1 + H_2)^2 = H_1H_1 + H_2H_1 + H_1H_2 + H_2H_2
(c) n = 3: S_{3,2} = (H_1 + H_2)^3 = H_1H_1H_1 + H_2H_1H_1 + H_1H_2H_1 + H_2H_2H_1 + H_1H_1H_2 + H_2H_1H_2 + H_1H_2H_2 + H_2H_2H_2

3. M = 3

(a) n = 1: S_{1,3} = (H_1 + H_2 + H_3)^1 = H_1 + H_2 + H_3
(b) n = 2: S_{2,3} = (H_1 + H_2 + H_3)^2 = H_1H_1 + H_1H_2 + H_1H_3 + H_2H_1 + H_2H_2 + H_2H_3 + H_3H_1 + H_3H_2 + H_3H_3
(c) n = 3: S_{3,3} = (H_1 + H_2 + H_3)^3 = H_1H_1H_1 + H_1H_2H_1 + H_1H_3H_1 + H_2H_1H_1 + H_2H_2H_1 + H_2H_3H_1 + H_3H_1H_1 + H_3H_2H_1 + H_3H_3H_1 + H_1H_1H_2 + H_1H_2H_2 + H_1H_3H_2 + H_2H_1H_2 + H_2H_2H_2 + H_2H_3H_2 + H_3H_1H_2 + H_3H_2H_2 + H_3H_3H_2 + H_1H_1H_3 + H_1H_2H_3 + H_1H_3H_3 + H_2H_1H_3 + H_2H_2H_3 + H_2H_3H_3 + H_3H_1H_3 + H_3H_2H_3 + H_3H_3H_3,

and so forth. Notice that S_{n,M} can be expressed as a sum over M^n different sequences of indexes C_n = {l_1, l_2, l_3, ..., l_n}, where the position of the indexes in the sequence indicates the order in which the product of operators is performed,

S_{n,M} = (Σ_{i=1}^{M} H_i)^n ≡ Σ_{C_n} Π_{j=1}^{n} H_{l_j}. (5.1.5)

As an example, in case 2(c) these sequences would be

{C_3} = {(1, 1, 1), (1, 1, 2), (1, 2, 1), (1, 2, 2), (2, 1, 1), (2, 1, 2), (2, 2, 1), (2, 2, 2)},

{퐶2} = {(1, 1), (2, 1), (3, 1), (1, 2), (2, 2), (3, 2), (1, 3), (2, 3), (3, 3)} .
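The sets of sequences listed above can be generated mechanically; a minimal Python sketch (the function name is an arbitrary choice for this illustration) reproduces the M^n counting:

```python
from itertools import product

def index_sequences(M, n):
    """All index sequences C_n = (l_1, ..., l_n) with 1 <= l_i <= M,
    i.e. one sequence per term of (H_1 + ... + H_M)^n -- M^n in total."""
    return list(product(range(1, M + 1), repeat=n))

# case 2(c) of the text: M = 2, n = 3 -> 2^3 = 8 sequences
C3 = index_sequences(2, 3)
# case 3(b): M = 3, n = 2 -> 3^2 = 9 sequences
C2 = index_sequences(3, 2)
```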

In general, {C_n} = {(l_1, l_2, l_3, ..., l_n)} is obtained by letting each index l_i run over 1 ≤ l_i ≤ M. The partition function then becomes

풵 = Tr{Σ_{n=0}^{∞} (β^n/n!) S_{n,M}} = Σ_{n=0}^{∞} (β^n/n!) Tr{S_{n,M}}
 = Σ_{n=0}^{∞} (β^n/n!) Tr{Σ_{C_n} Π_{j=1}^{n} H_{l_j}} = Σ_{n=0}^{∞} Σ_{C_n} (β^n/n!) Tr{Π_{j=1}^{n} H_{l_j}}. (5.1.6)

The expected value of any operator 퐴 of the system is given by [61]

E[A] = (1/풵) Tr{A e^{−βH}}, (5.1.7)

therefore a similar expansion procedure leads to

E[A] = [Σ_{n=0}^{∞} Σ_{C_n} (β^n/n!) Tr{A Π_{j=1}^{n} H_{l_j}}] / [Σ_{n=0}^{∞} Σ_{C_n} (β^n/n!) Tr{Π_{j=1}^{n} H_{l_j}}]. (5.1.8)

This can be further arranged and written as

E[A] = [Σ_{n=0}^{∞} Σ_{C_n} (β^n/n!) Tr{Π_{j=1}^{n} H_{l_j}} · (Tr{A Π_{j=1}^{n} H_{l_j}} / Tr{Π_{j=1}^{n} H_{l_j}})] / [Σ_{n=0}^{∞} Σ_{C_n} (β^n/n!) Tr{Π_{j=1}^{n} H_{l_j}}] → [Σ_{n=0}^{∞} Σ_{C_n} W_n A_n] / [Σ_{n=0}^{∞} Σ_{C_n} W_n], (5.1.9)

where

A_n = Tr{A Π_{j=1}^{n} H_{l_j}} / Tr{Π_{j=1}^{n} H_{l_j}} (5.1.10)

is the “local” value of A in configuration n and

W_n = (β^n/n!) Tr{Π_{j=1}^{n} H_{l_j}} (5.1.11)

is proportional to the probability of occupying such a configuration, and can therefore be recognized as the probability for the system to be found in state n. However, in order to be rigorously identified as a probability, it must satisfy the conditions

W_n > 0 for any n, (5.1.12)

and

Σ_{n=0}^{∞} Σ_{C_n} W_n < ∞, (5.1.13)

which mean that the terms must be positive-definite and that the distribution must be normalizable. Therefore, if we are able to calculate such a probability distribution, we are able to sample it using the techniques from the last Chapter. Notice, however, that these distributions depend upon the calculation of the trace of products of operators. Except for a few specific cases, such quantities cannot be explicitly calculated, thus the method as it stands is not applicable in general. In spite of that, it is instructive to take a deeper look into the structure of the configuration space that is created in this formalism. The state of the system is described by labeled sequences of indexes that correspond to products of terms of the original Hamiltonian, whose traces one needs to calculate in order to obtain the associated statistical weights. Each sequence has a definite size n, which is related to the order of the Taylor expansion under consideration, and a set of n ordered numbers. For instance, the sequence

S = (1, 5, 2, 2, 4, 3, 1)

corresponds to an element of seventh order in the Taylor expansion whose probability is proportional to the trace of the product

H_1 H_3 H_4 H_2 H_2 H_5 H_1.

Each sequence can be seen as an oriented string whose size is fixed by n. Associated with each string there are n beads, each one carrying the corresponding H_i in the sequence,

H_1 ← H_3 ← H_4 ← H_2 ← H_2 ← H_5 ← H_1.

Even though the strings are oriented, since the trace is a cyclic operation, any bead can serve as the starting point. The configuration space can therefore be pictorially seen as composed of strings with different numbers of beads, as shown in Fig. 5.1, so that each string corresponds to a different state.

Figure 5.1: In order to sample the partition function of a quantum-mechanical system, Handscomb’s method creates a configuration space composed of strings carrying a certain number of beads. The number of beads, which is the size of the string, corresponds to the order of the expansion that is being sampled. Attached to each bead there is a number that identifies the corresponding term of the original Hamiltonian. The method attributes a probability to each of these strings, so that the space can be sampled, allowing for the estimation of expected values of observables.

5.2 Extended sampling

Although quite general in principle, Handscomb’s method requires the calculation of probabilities that are proportional to the trace of products of operators, viz. the trace of the strings, in order to correctly sample the space characterized by the partition function of a quantum-mechanical system. For such reason, it is directly applicable only to a few situations where the trace can be analytically calculated [154, 155]. In what follows, I will present a generalization that allows for the sampling of the states that participate in the evaluation of such a trace, which makes the method applicable to virtually any lattice Hamiltonian [148, 150]. This comprises the “birth” of SSE. Recall that, by expanding the density operator in a Taylor series, we were able to write the partition function as

풵 = Σ_{n=0}^{∞} Σ_{C_n} (β^n/n!) Tr{Π_{j=1}^{n} H_{l_j}}. (5.2.1)

At this point, it is convenient to truncate this series at a certain n = L that can be chosen so that the truncation gives controllably small errors, in the sense that the sum is converged for a sufficiently large L. This is not only practical, but actually necessary for applications of the method on computers, since the physical memory of the machine is finite. For such purposes, let us assume that, initially, L is just as large as the machine is capable of dealing with. We then have

풵 = Σ_{n=0}^{∞} Σ_{C_n} (β^n/n!) Tr{Π_{j=1}^{n} H_{l_j}} → Σ_{n=0}^{L} Σ_{C_n} (β^n/n!) Tr{Π_{j=1}^{n} H_{l_j}}. (5.2.2)

Recall also that the size of the strings is given by the order of the expansion, n. A handy modification consists of introducing identity operators into the strings such that all of them have the same size. This can be done by denoting H_0 = 1 and inserting (L − n) of these in every string that has n < L. The order of the expansion is then controlled by the number of non-zero indices in each sequence C_L, which we will denote by n(C_L). The partition function then becomes

풵 = Σ_{n=0}^{L} Σ_{C_n} (β^n/n!) Tr{Π_{j=1}^{n} H_{l_j}} → Σ_{C_L} (β^{n(C_L)}/n(C_L)!) Tr{Π_{j=1}^{L} H_{l_j}}, (5.2.3)

where {C_L} = {(l_1, l_2, l_3, ..., l_L)} is the set of all sequences obtained from permutations of indexes 0 ≤ l_i ≤ M. Furthermore, there is a degeneracy in the insertion of identity operators, which can be done in (L choose n(C_L)) different ways, all of them obviously with the same statistical weight. We thus need to include the factor

(L choose n(C_L))^{−1} = n(C_L)! [L − n(C_L)]!/L!, (5.2.4)

so that the correct partition function is given by

풵 = Σ_{C_L} β^{n(C_L)} ([L − n(C_L)]!/L!) Tr{Π_{j=1}^{L} H_{l_j}}. (5.2.5)
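A quick numerical consistency check, assuming nothing beyond Eqs. 5.2.3-5.2.5: the prefactor β^n (L − n)!/L! of the truncated series is exactly the Taylor weight β^n/n! divided by the binomial degeneracy of placing the identity operators.

```python
from math import comb, factorial

def truncated_weight_prefactor(beta, n, L):
    """Prefactor of Eq. 5.2.5: beta^n (L - n)! / L!"""
    return beta ** n * factorial(L - n) / factorial(L)

def taylor_weight_over_degeneracy(beta, n, L):
    """Taylor weight beta^n / n! of Eq. 5.2.3 divided by the C(L, n)
    equivalent placements of the (L - n) identity operators (Eq. 5.2.4)."""
    return (beta ** n / factorial(n)) / comb(L, n)
```

The two expressions agree for every 0 ≤ n ≤ L, which is precisely why the factor of Eq. 5.2.4 restores the correct partition function.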

With such changes, the configuration space that we need to sample now looks somewhat more homogeneous, as shown in Fig. 5.2. The expected value of operator 퐴 is now written as

E[A] = [Σ_{C_L} β^{n(C_L)} ([L − n(C_L)]!/L!) Tr{A Π_{j=1}^{L} H_{l_j}}] / [Σ_{C_L} β^{n(C_L)} ([L − n(C_L)]!/L!) Tr{Π_{j=1}^{L} H_{l_j}}]. (5.2.6)

In spite of these changes, we are still not able to sample this configuration space if we cannot calculate the probabilities for each string. Suppose then that we calculate the traces over a certain basis set ℬ_α = {|α⟩} of the system, such that the partition function is

Figure 5.2: With the truncation of the series and the insertion of identity operators, which correspond to the empty, red dotted-line beads, all strings have the same size. In this example, a few strings are shown for L = 6.

풵 = Σ_{C_L} β^{n(C_L)} ([L − n(C_L)]!/L!) Σ_α ⟨α| Π_{j=1}^{L} H_{l_j} |α⟩
 = Σ_α Σ_{C_L} β^{n(C_L)} ([L − n(C_L)]!/L!) ⟨α| Π_{j=1}^{L} H_{l_j} |α⟩, (5.2.7)

while the expected value of A is

E[A] = [Σ_α Σ_{C_L} β^{n(C_L)} ([L − n(C_L)]!/L!) ⟨α| A Π_{j=1}^{L} H_{l_j} |α⟩] / [Σ_α Σ_{C_L} β^{n(C_L)} ([L − n(C_L)]!/L!) ⟨α| Π_{j=1}^{L} H_{l_j} |α⟩] (5.2.8)

 = [Σ_α Σ_{C_L} β^{n(C_L)} ([L − n(C_L)]!/L!) ⟨α| Π_{j=1}^{L} H_{l_j} |α⟩ · (⟨α| A Π_{j=1}^{L} H_{l_j} |α⟩ / ⟨α| Π_{j=1}^{L} H_{l_j} |α⟩)] / [Σ_α Σ_{C_L} β^{n(C_L)} ([L − n(C_L)]!/L!) ⟨α| Π_{j=1}^{L} H_{l_j} |α⟩] (5.2.9)

 = [Σ_α Σ_{C_L} W(α, C_L) A(α, C_L)] / [Σ_α Σ_{C_L} W(α, C_L)]. (5.2.10)

Figure 5.3: Bond decomposition for the BHM. Each bond links a site to one of its nearest neighbors.
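For concreteness, bond lists of the kind pictured in Fig. 5.3 can be built as in the sketch below; the function names and the site-indexing convention (i = x + Lx·y, sites labeled from 0) are choices made for this illustration, not part of the method.

```python
def bonds_chain(n_sites):
    """Nearest-neighbor bonds of a 1D chain with periodic boundary
    conditions, each bond counted once: N_b = n_sites."""
    return [(i, (i + 1) % n_sites) for i in range(n_sites)]

def bonds_square(Lx, Ly):
    """Nearest-neighbor bonds of an Lx-by-Ly square lattice with periodic
    boundary conditions: one +x and one +y bond per site, N_b = 2*Lx*Ly,
    and every site belongs to z = 4 bonds."""
    bonds = []
    for y in range(Ly):
        for x in range(Lx):
            i = x + Lx * y
            bonds.append((i, (x + 1) % Lx + Lx * y))    # +x neighbor
            bonds.append((i, x + Lx * ((y + 1) % Ly)))  # +y neighbor
    return bonds
```

With sites labeled from 0, bonds_chain(3) reproduces the three bonds of Fig. 5.5: (0, 1), (1, 2) and (2, 0).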

We can therefore attempt to perform sampling not only over the strings 퐶퐿, but also over the states |훼⟩ ∈ ℬ훼. In order to do so, we need a couple of assumptions about the Hamiltonian 퐻 and the basis set ℬ훼. Regarding 퐻, recall that we have written it as

H = − Σ_{i=1}^{M} H_i,

which is a suitable form, since lattice Hamiltonians can be decomposed into bonds that link different sites of the lattice. Consider, for instance, the Bose-Hubbard Hamiltonian with nearest-neighbor hopping,

H = −t Σ_{<ij>} b†_i b_j + (U/2) Σ_i n̂_i(n̂_i − 1).

A bond, in this case, links the current site to one of its nearest neighbors, as shown in Fig. 5.3. The Hamiltonian can then be written as a sum over bonds as follows:

H = −t Σ_{<ij>} b†_i b_j + (U/2)(1/2) Σ_i Σ_j [n̂_i(n̂_i − 1) + n̂_j(n̂_j − 1)] (5.2.11)
 = −t Σ_{<ij>} b†_i b_j + (U/2)(1/2)(1/z) Σ_{<ij>} [n̂_i(n̂_i − 1) + n̂_j(n̂_j − 1)] (5.2.12)
 = Σ_{<ij>} {−t b†_i b_j + (U/4z) [n̂_i(n̂_i − 1) + n̂_j(n̂_j − 1)]}, (5.2.13)

where z is the number of nearest neighbors of a lattice site. Therefore, H is decomposed into a single sum over bonds,

H = − Σ_{b ∈ <ij>} H_b, (5.2.14)

justifying the previous decomposition. Furthermore, considering the basis set ℬ_α that is chosen to perform the traces, each bond-Hamiltonian H_b can be decomposed into a diagonal part, which does not change the state |α⟩, and an off-diagonal part that does change the state,

H_b = H_b^d + H_b^o. (5.2.15)

In the SSE formalism, the basis set must be chosen so that the off-diagonal terms are non-branching,

H_b^o |α_i⟩ ∝ |α_j⟩, (5.2.16)

with |α_i⟩, |α_j⟩ ∈ ℬ_α. Fortunately, for the BHM and also for the DBHM, the occupation number basis fulfills all these requirements, so that we can use

|α⟩ = |n_1⟩ ⊗ |n_2⟩ ⊗ |n_3⟩ ⊗ ⋯ ⊗ |n_{N_s}⟩ (5.2.17)

as basis elements, where N_s is the total number of lattice sites and n_i is the occupation number of site i. We will proceed now by inserting a number of resolutions of the identity in between beads of the strings using the basis elements,

1 = Σ_α |α⟩⟨α|, (5.2.18)

in the previous expression for the partition function,

풵 = Σ_α Σ_{C_L} β^{n(C_L)} ([L − n(C_L)]!/L!) ⟨α| Π_{j=1}^{L} H_{l_j} |α⟩. (5.2.19)

Notice that each H_{l_j} is now a bond operator that can be either diagonal or off-diagonal. By denoting |α(L)⟩ ≡ |α(0)⟩, we then have

풵 = Σ_{C_L} β^{n(C_L)} ([L − n(C_L)]!/L!) Σ_{α(0)} ⟨α(0)| Π_{j=1}^{L} H_{l_j} |α(0)⟩
 = Σ_{C_L} β^{n(C_L)} ([L − n(C_L)]!/L!) Σ_{α(0)} ⟨α(L)| Π_{j=2}^{L} H_{l_j} H_{l_1} |α(0)⟩
 = Σ_{C_L} β^{n(C_L)} ([L − n(C_L)]!/L!) Σ_{α(0),α(1)} ⟨α(L)| Π_{j=2}^{L} H_{l_j} |α(1)⟩ ⟨α(1)|H_{l_1}|α(0)⟩
 = Σ_{C_L} β^{n(C_L)} ([L − n(C_L)]!/L!) Σ_{α(0),α(1),α(2)} ⟨α(L)| Π_{j=3}^{L} H_{l_j} |α(2)⟩ ⟨α(2)|H_{l_2}|α(1)⟩ ⟨α(1)|H_{l_1}|α(0)⟩
 = Σ_{C_L} β^{n(C_L)} ([L − n(C_L)]!/L!) Σ_{α(0),α(1),...,α(L−1)} {⟨α(L)|H_{l_L}|α(L − 1)⟩ ⟨α(L − 1)|H_{l_{L−1}}|α(L − 2)⟩ × ⟨α(L − 2)|H_{l_{L−2}}|α(L − 3)⟩ ⋯ ⟨α(3)|H_{l_3}|α(2)⟩ ⟨α(2)|H_{l_2}|α(1)⟩ ⟨α(1)|H_{l_1}|α(0)⟩}. (5.2.20)

Figure 5.4: Left panel: World Line picture for the strings of operators that constitute the partition function of a quantum mechanical system. The horizontal axis represents lattice sites that are connected through a bond-operator. The vertical axis represents the propagation levels with periodic boundary conditions (see text for discussion). Right panel: each string can be seen as a sequence of scattering vertices consisting of a bond-operator and four “legs” that correspond to the occupation number of the lattice sites connected by the bonds in two subsequent propagation levels. Figure from Ref. [18].

This form allows for a visualization of the configuration space that is called the World Line picture, shown in Fig. 5.4. In the left panel, the corresponding operator string is

⟨α(6)|H^d_{l_6}|α(5)⟩ ⟨α(5)|H^o_{l_5}|α(4)⟩ ⟨α(4)|H^d_{l_4}|α(3)⟩ ⟨α(3)|H^d_{l_3}|α(2)⟩ ⟨α(2)|H^o_{l_2}|α(1)⟩ ⟨α(1)|H^d_{l_1}|α(0)⟩.

Furthermore, the matrix elements of the beads in between subsequent states,

⟨α(n)|H_n^{o,d}|α(n − 1)⟩

can be seen as a vertex with four “legs”, connecting the two propagation levels (n − 1) and n, with corresponding states |α(n − 1)⟩ and |α(n)⟩, and a bond operator, which can be either diagonal or off-diagonal, that connects two lattice sites. In the occupation number basis set, each leg hence carries the occupation number of the lattice site in the different states, as shown in the right panel of Fig. 5.4. By performing the same procedure for the expected value of an operator A, we arrive at

E[A] = Σ_{C_L} Σ_{α(0),α(1),...,α(L−1)} W(C_L; {α(0), α(1), ..., α(L − 1)}) A(C_L; {α(0), α(1), ..., α(L − 1)}), (5.2.21)

where

A(C_L; {α(0), ..., α(L − 1)}) ≡ [⟨α(L)|A H_{l_L}|α(L − 1)⟩ × ⋯ × ⟨α(1)|H_{l_1}|α(0)⟩] / [⟨α(L)|H_{l_L}|α(L − 1)⟩ × ⋯ × ⟨α(1)|H_{l_1}|α(0)⟩] (5.2.22)

and

W(C_L; {α(0), ..., α(L − 1)}) ≡ (β^{n(C_L)}/풵) ([L − n(C_L)]!/L!) ⟨α(L)|H_{l_L}|α(L − 1)⟩ × ⋯ × ⟨α(1)|H_{l_1}|α(0)⟩. (5.2.23)

Notice that, as the trace operation is cyclic, A can be applied at any scattering vertex of the string. We are now finally in a position to generate configurations according to W using Monte Carlo methods. They are sampled using the Metropolis strategy of Sec. 4.3.3, which is achieved via two different processes: the diagonal update and the loop update.

5.3 Diagonal update

Recall that we have inserted identity operators H_0 = 1 so that all the strings have the same size. However, what controls the order of the expansion in the Taylor series for 풵 is the number of operators in the string C_L that have a non-zero index, viz. the number of operators that are not the identity, which we have denoted by n(C_L). The diagonal update allows for the sampling of this quantity, which means that a bond-operator is either introduced into the string, replacing an identity operator, so that n(C_L) → n(C_L) + 1, or removed by the reverse process, so that n(C_L) → n(C_L) − 1. The operator that is inserted or removed must be a diagonal one. Notice that, in doing so, this update samples the expansion order and partially samples the strings, but does not sample the states {|α(i)⟩}. For simplicity, let us denote n(C_L) by p. The transition probabilities for insertion and removal are

풫{p → p + 1} = N_b β ⟨α(p)|H_{l_p}|α(p − 1)⟩ / (L − p), (5.3.1)

풫{p → p − 1} = (L − p + 1) / [N_b β ⟨α(p)|H_{l_p}|α(p − 1)⟩], (5.3.2)

where N_b is the total number of bonds. In a trial insertion, a bond is randomly selected and tested for a possible update with probability given by Eq. 5.3.1. Similarly, the removal takes out a bond operator and inserts an identity operator with probability given by Eq. 5.3.2. The general procedure then consists of iterating through all scattering vertices. Off-diagonal operators are left unchanged, but they are quite important since they propagate the states {|α(i)⟩}, which is not possible with diagonal operators. Consider an example of this type of update for a one-dimensional lattice with 3 lattice sites, which allows for N_b = 3 bonds, shown in Fig. 5.5, with truncation of the series made for

Figure 5.5: The three lattice sites 1, 2 and 3 are connected by the bonds 1 = b(1, 2), 2 = b(2, 3) and 3 = b(3, 1), where periodic boundary conditions in space are being applied. Figure from Ref. [18].

퐿 = 5. The initial and final states are |훼(0)⟩ = |훼(5)⟩ = |1, 1, 1⟩, i.e. one particle in each site. The string that is going to be iterated is such that 퐶퐿 = (2, 0, 2, 0, 2), which corresponds to

⟨α(5)|H^o_2|α(4)⟩ ⟨α(4)|1|α(3)⟩ ⟨α(3)|H^o_2|α(2)⟩ ⟨α(2)|1|α(1)⟩ ⟨α(1)|H^d_2|α(0)⟩.

After the procedure, the final string is 퐶퐿 = (2, 1, 2, 0, 2), corresponding to

⟨α(5)|H^o_2|α(4)⟩ ⟨α(4)|1|α(3)⟩ ⟨α(3)|H^o_2|α(2)⟩ ⟨α(2)|H^d_1|α(1)⟩ ⟨α(1)|H^d_2|α(0)⟩,

so that the order of the expansion has increased by 1. Details are given in Fig. 5.6.
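A schematic version of this sweep is sketched below. It is deliberately a toy: the diagonal matrix element ⟨α(p)|H_{l_p}|α(p − 1)⟩ is taken as a constant, state propagation through off-diagonal operators is omitted, and the min(1, ·) of the Metropolis acceptance is implicit in the comparison with a random number.

```python
import random

def diagonal_update(op_string, n_bonds, beta, diag_element, rng):
    """One sweep of the diagonal update over the string (Eqs. 5.3.1-5.3.2).
    op_string[k] = 0 marks the identity; k > 0 marks diagonal bond operator k.
    `diag_element` stands in for <alpha(p)|H_{l_p}|alpha(p-1)>, taken constant
    here; a real code evaluates it from the propagated states and leaves
    off-diagonal operators untouched."""
    L = len(op_string)
    n = sum(1 for op in op_string if op != 0)
    for k in range(L):
        if op_string[k] == 0:
            # trial insertion of a randomly chosen diagonal bond operator
            if rng.random() < n_bonds * beta * diag_element / (L - n):
                op_string[k] = 1 + rng.randrange(n_bonds)
                n += 1
        else:
            # trial removal, restoring an identity operator
            if rng.random() < (L - n + 1) / (n_bonds * beta * diag_element):
                op_string[k] = 0
                n -= 1
    return op_string, n

rng = random.Random(7)
string, n_ops = diagonal_update([0] * 20, n_bonds=3, beta=1.0,
                                diag_element=0.5, rng=rng)
```

Starting from an empty string (all identities), repeated sweeps let the expansion order p fluctuate to its equilibrium value.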

5.4 Loop update

Once the order of the expansion is sampled in the diagonal update procedure, we need to sample the states {|α(i)⟩}, which is done by transforming diagonal operators into off-diagonal operators. In this step, all identity operators of the string are removed, which means that the order of the expansion is fixed. In doing so, every vertex in the string sequence contains a bond-operator. The states are going to be sampled through the use of worms that construct closed operator loops. Initially, a random insertion point between the legs of two consecutive vertices is chosen from the entire vertex list. This point can be taken as propagation level 0 due to the equivalence between cyclic permutations of the string. In what follows, propagation levels will be denoted by τ. The insertion is therefore made at a random point in a space-“time” composed of the lattice sites and the propagation levels. Next, a pair of operators

Â†_i Â_i = b̂†_i b̂_i or b̂_i b̂†_i (5.4.1)

is inserted with probability 풫{insertion} = 1/2 at that point. Depending on the direction of propagation, the head of the worm will carry either Â†_i (forward propagation) or Â_i (backward propagation). As it enters the chosen leg with propagation in the forward direction, for instance, it changes the state on that site so that the net state remains the same,

... |α(τ)⟩ Â†_i Â_i ⟨α(τ)| ... → ... Â†_i |T̃_i[α(τ)]⟩ Â_i ⟨α(τ)| ..., (5.4.2)

Figure 5.6: Example of diagonal update for the lattice of Fig. 5.5 in the World Line picture. Here, 퐿 = 5 is the truncation of series and the initial state is |훼(0)⟩ = |훼(5)⟩ = |1, 1, 1⟩. The initial string is 퐶퐿 = (2, 0, 2, 0, 2) (left panel). Each vertex, starting at 0, is tested for insertion or removal according to Eqs. 5.3.1 and 5.3.2. The steps proceed as follows: 1. Vertex connecting propagation levels 0 and 1, which contains a diagonal bond-operator linking sites 2 and 3 is tested for removal, which fails. 2. Vertex connecting levels 1 and 2, which is “empty”, is tested for insertion. In this case, the bond 1 is randomly selected, and the insertion is accepted. 3. Vertex connecting levels 2 and 3 contains an off-diagonal operator, so it is left unchanged. However, it propagates the state from |훼(2)⟩ = |1, 1, 1⟩ to |훼(3)⟩ = |1, 2, 0⟩. 4. Vertex connecting levels 3 and 4 is empty, but in this case insertion fails. 5. The last vertex has an off-diagonal operator, so it is left unchanged, and the state propagates back to |훼(0)⟩. This results in the updated string 퐶퐿 = (2, 1, 2, 0, 2) (right panel). Figure from Ref. [18].

where |T̃_i[α(τ)]⟩ indicates the transformed state resulting from the passage of the worm’s head carrying operator Â†_i. In the case of forward propagation the head carries Â†_i, therefore T̃ = Ã; otherwise, T̃ = Ã†. In this way, if we have

|훼(휏)⟩ ≡ |푛푖⟩ ⊗ |푛푗⟩ , (5.4.3) then for forward propagation

|T̃_i[α(τ)]⟩ = T̃_i |n_i⟩ ⊗ |n_j⟩ = Ã_i |n_i⟩ ⊗ |n_j⟩, (5.4.4)

whereas for backward propagation

⟨T̃_i[α(τ)]| = ⟨n_i| ⊗ ⟨n_j| T̃_i = ⟨n_i| ⊗ ⟨n_j| Ã†_i. (5.4.5)

Figure 5.7: Example of the insertion of a worm to start the loop update. A random insertion point is selected along with a pair of operators Â†Â. For this example, the insertion point is at site 1 and propagation level 1, i.e. in between vertices 1 and 2, such that Â = b̂_1 (left panel). A random direction of propagation of the worm is then selected. In this case, it can go towards vertex 2 (forward direction) or towards vertex 1 (backward direction). Forward: the head of the worm carries Â† = b̂†_1 while its tail remains fixed holding Â = b̂_1. It then enters the leg, bringing a change in the state of site 1 from |1⟩ to |0⟩, so that the net state remains the same, since b̃†_1|0⟩ = |1⟩. In terms of operator strings, what happens is ... ⟨α(2)|H^d_1|α(1)⟩ b̂†_1 b̂_1 ⟨α(1)| ... → ... ⟨α(2)|H^d_1 b̂†_1|b̃_1[α(1)]⟩ b̂_1 ⟨α(1)| ... (central panel). Backward: in this case the head of the worm carries Â = b̂_1 while the fixed tail holds Â† = b̂†_1. The head will then again change the state of site 1 from ⟨1| to ⟨0|, since ⟨0|b̃_1 = ⟨1|. The resulting transformation of the string is then ... |α(1)⟩ b̂†_1 b̂_1 ⟨α(1)|H^d_2|α(0)⟩ ... → ... |α(1)⟩ b̂†_1 ⟨b̃†_1[α(1)]|b̂_1 H^d_2|α(0)⟩ ... (right panel). Figure from Ref. [18].

The tilde ‘∼’ sign over the operators indicates that the transformation is normalized, i.e.

b̃†|n⟩ ≡ (1/√(n + 1)) |n + 1⟩ (5.4.6)

and

b̃|n⟩ ≡ (1/√n) |n − 1⟩. (5.4.7)
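The bookkeeping for these normalized operators can be captured by two small helper functions (hypothetical names, introduced only for this sketch), each returning the amplitude picked up by the worm's head together with the new occupation number:

```python
from math import sqrt

def b_tilde_dag(n):
    """Normalized creation, Eq. 5.4.6: b~+|n> = |n+1>/sqrt(n+1).
    Returns (amplitude, new occupation)."""
    return 1.0 / sqrt(n + 1), n + 1

def b_tilde(n):
    """Normalized annihilation, Eq. 5.4.7: b~|n> = |n-1>/sqrt(n).
    Acting on |0> kills the state (zero amplitude)."""
    if n == 0:
        return 0.0, 0
    return 1.0 / sqrt(n), n - 1
```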

The worm moves unimpeded until the head reaches a vertex that contains a bond operator involving site i. An example of the insertion of a worm is shown in Fig. 5.7. After going through the entrance leg, the worm reaches a scattering vertex that allows it to proceed in multiple ways. Each option involves a Monte Carlo move that satisfies detailed balance,

풫{l → l′, Σ, Â_l → Â′_l} W(l, Σ, Â_l) = 풫{l′ → l, Σ′, Â′_l → Â_l} W(l′, Σ′, Â′_l), (5.4.8)

where l is the entrance leg, l′ is the exit leg, Σ is the current state of the vertex, Â_l is the current operator at entrance leg l, Σ′ is the state of the vertex after the passage of the worm, and Â′_l is the head-operator after the movement of the worm (it can change direction when it scatters!). W is a weight function proportional to the local Hamiltonian operator, while 풫 is the transition probability from one configuration to another that results from the passage of the worm. There are many ways to choose these transition probabilities; however, one needs to be careful with the efficiency of the options, which is optimized in Ref. [156]. For vertices that involve four lattice sites, which is the case in the present dissertation, there are four possible paths for the worm, shown in Fig. 5.8. The transformations in terms of operator strings are:

1. Straight-jump-forward:
⟨α(q + 1)|H_{l_q} Â†_i |Ã_i[α(q)]⟩ Â_i → Â†_i ⟨Ã_i[α(q + 1)]|H_{l_q}|Ã_i[α(q)]⟩ Â_i (5.4.9)

2. Switch-jump-forward:
⟨α(q + 1)|H_{l_q} Â†_i |Ã_i[α(q)]⟩ Â_i → Â†_j ⟨Ã_j[α(q + 1)]|H_{l_q}|Ã_i[α(q)]⟩ Â_i (5.4.10)

3. Switch-jump-backward:
⟨α(q + 1)|H_{l_q} Â†_i |Ã_i[α(q)]⟩ Â_i → ⟨α(q + 1)|H_{l_q}|Ã_i Ã†_j[α(q)]⟩ Â_i Â†_j (5.4.11)

4. Bouncing:
⟨α(q + 1)|H_{l_q} Â†_i |Ã_i[α(q)]⟩ Â_i → ⟨α(q + 1)|H_{l_q}|α(q)⟩ Â†_i Â_i. (5.4.12)

It is important to notice that the first movement, straight-jump-forward, leads to an effective substitution of a diagonal operator for a different one, since it equally changes the occupation number of the same lattice site in the two subsequent propagation levels.
In contrast, the movements that involve a switch, namely switch-jump-forward and switch-jump-backward, lead to an effective substitution of a diagonal operator for an off-diagonal operator, since they change the occupation number of different lattice sites in between the propagation levels. The last movement, which is a bouncing of the worm, undoes the changes made before scattering and needs to be minimized in order to improve the efficiency of the updating process. The insertion of the worm therefore introduces a discontinuity in the World Line that is propagated along with its head. The propagation finishes when the head meets the tail at the insertion point, finalizing one iteration of the loop update. In general, multiple iterations are needed to ensure that subsequent states are not correlated. For a homogeneous system, it is found that worms approximately 10 times larger than the average size of the operator string are a suitable choice. This, however, strongly depends on the physical parameters of the system and can change greatly for non-homogeneous potentials, therefore it should be tested during the equilibration process. It also depends on the observables of interest that are going to be estimated. It is possible that the worm gets prohibitively large, in which case all changes are discarded and a new iteration starts. Generally, this is done for worms larger than 100 times the average string size.

Figure 5.8: When the worm enters the chosen leg, it reaches a scattering vertex that allows for four different paths, which are sampled via the Metropolis algorithm. Notice that the movements switch-jump-forward and switch-jump-backward lead to an effective substitution of a diagonal operator for an off-diagonal one. See text for further discussion. Figure from Ref. [18].

5.5 Observables

Recall that the expected value of observable 퐴 is given by

E[A] = Tr{A e^{−βH}}/풵 = (1/풵) Σ_α ⟨α|A e^{−βH}|α⟩, (5.5.1)

which comprises most of the quantities that are relevant to this dissertation, chiefly thermodynamic quantities such as the energy and the specific heat, and also the superfluid fraction of the system. For this type of observable, the measurement is done after the Monte Carlo update, when there are no discontinuities present. These are called Z-sector observables. However, there is a wider class of quantities that involve measurements with a discontinuity present in configuration space, such as imaginary-time correlation functions. These are therefore measured during the passage of the worm in the loop update procedure, and are called G-sector observables.

5.5.1 Z-sector

This class includes the quantities that are extremely relevant to the results that are going to be discussed in the present dissertation, such as densities n, compressibilities κ, internal energies U, free energies F, and superfluid fractions ρ_s, which are discussed in what follows. It is worth recalling that the whole formalism works in the grand-canonical ensemble, so Hamiltonians H are written explicitly including the chemical potential term,

H = ℋ − μN, (5.5.2)

where μ is the chemical potential, N is the total particle number operator and ℋ is the canonical Hamiltonian.

Grand-potential

Given the form that we are assuming for the Hamiltonian, which is written in the grand-canonical ensemble, we have

E[H] = Ω, (5.5.3)

the grand-potential associated with the system (풵 = e^{−βΩ}). Then,

Ω = Tr{H e^{−βH}}/풵 = (1/풵) Σ_{p=0}^{∞} Σ_α ((−β)^p/p!) ⟨α|H(H^p)|α⟩
 = (1/풵) Σ_{p=0}^{∞} Σ_α ((−β)^{p+1}/(p + 1)!) ((p + 1)/(−β)) ⟨α|H^{p+1}|α⟩
 = (1/풵) Σ_{p=0}^{∞} Σ_α ((−β)^p/p!) (−p/β) ⟨α|H^p|α⟩ = E[−p/β] = −E[p]/β, (5.5.4)

which shows that it is given by the average number of operators in the strings multiplied by the factor −β^{−1}, being quite straightforward to measure within the simulations.
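In a simulation this becomes a one-line estimator: accumulate the number p of non-identity operators of each sampled string and average. A minimal sketch, with a purely illustrative input:

```python
def grand_potential_estimator(p_samples, beta):
    """Eq. 5.5.4: E[H] = -E[p]/beta, with p the number of non-identity
    operators in each sampled operator string."""
    return -sum(p_samples) / len(p_samples) / beta

# illustrative input: strings averaging 12 operators at beta = 2 give -6
```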

Particle density

This is also a very direct quantity to measure, since we work in the occupation number basis. All that is needed is to count the total number of particles after every iteration. More formally, recall that

|α⟩ ≡ |n_1⟩ ⊗ |n_2⟩ ⊗ |n_3⟩ ⊗ ... ⊗ |n_{N_s}⟩,

and with the total particle number operator N given by

N = Σ_{i=1}^{N_s} n̂_i, (5.5.5)

we have

E[N] = Tr{N e^{−βH}}/풵 = (1/풵) Σ_{p=0}^{∞} Σ_α ((−β)^p/p!) ⟨α|(Σ_{i=1}^{N_s} n̂_i) H^p|α⟩. (5.5.6)

Notice that it is also possible and straightforward to measure local densities in the SSE formalism,

E[n_i] = (1/풵) Σ_{p=0}^{∞} Σ_α ((−β)^p/p!) ⟨α|n̂_i H^p|α⟩, (5.5.7)

which is easily extended to densities of particular regions as well.

Compressibility

Recall that, from Eq. 1.3.43, the compressibility is given by

\[
\kappa = \frac{V}{(E[N])^2} \frac{\partial(E[N])}{\partial\mu} . \tag{5.5.8}
\]

To express it in a more suitable form, notice that

\[
\frac{\partial\mathcal{Z}}{\partial\mu} = \frac{\partial}{\partial\mu}\mathrm{Tr}\left\{e^{-\beta H}\right\} = \mathrm{Tr}\left\{\frac{\partial e^{-\beta H}}{\partial\mu}\right\} = \mathrm{Tr}\left\{\beta N e^{-\beta H}\right\} = \beta\,\mathrm{Tr}\left\{N e^{-\beta H}\right\} = \beta\mathcal{Z} E[N] , \tag{5.5.9}
\]

and in the same way

\[
\frac{\partial^2\mathcal{Z}}{\partial\mu^2} = \beta^2 \mathcal{Z}\, E\!\left[N^2\right] . \tag{5.5.10}
\]

Therefore,

\[
\frac{\partial(E[N])}{\partial\mu} = \frac{\partial}{\partial\mu}\left(\frac{1}{\beta\mathcal{Z}}\frac{\partial\mathcal{Z}}{\partial\mu}\right)
= \frac{1}{\beta}\left[\frac{1}{\mathcal{Z}}\frac{\partial^2\mathcal{Z}}{\partial\mu^2} - \frac{1}{\mathcal{Z}^2}\frac{\partial\mathcal{Z}}{\partial\mu}\frac{\partial\mathcal{Z}}{\partial\mu}\right]
= \frac{1}{\beta}\left\{\beta^2 E\!\left[N^2\right] - \beta^2 (E[N])^2\right\} , \tag{5.5.11}
\]

which leads to

\[
\kappa = \frac{\beta V}{(E[N])^2}\left\{E\!\left[N^2\right] - (E[N])^2\right\} , \tag{5.5.12}
\]

which is also easily calculated since the basis is composed of occupation-number states. However, a more suitable form for the compressibility is given by

\[
\kappa \equiv \frac{\partial(E[N])}{\partial\mu} = \beta\left\{E\!\left[N^2\right] - (E[N])^2\right\} , \tag{5.5.13}
\]

since it can be extended to the calculation of local compressibilities,

\[
\kappa_i \equiv \beta\left\{E\!\left[n_i^2\right] - (E[n_i])^2\right\} , \tag{5.5.14}
\]

which would be experimentally accessible using techniques such as the quantum gas microscope [157, 158].
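Eq. (5.5.13) turns the compressibility into a particle-number fluctuation formula, which is trivial to accumulate during a run. A minimal sketch, assuming one total-particle-number measurement per configuration (illustrative helper, not CSSER code):

```cpp
#include <vector>

// Compressibility from particle-number fluctuations, Eq. (5.5.13):
// kappa = beta * ( E[N^2] - (E[N])^2 ).
double compressibility(const std::vector<double>& N_samples, double beta) {
    double m1 = 0.0, m2 = 0.0;  // first and second moments of N
    for (double n : N_samples) { m1 += n; m2 += n * n; }
    m1 /= N_samples.size();
    m2 /= N_samples.size();
    return beta * (m2 - m1 * m1);
}
```

The local version κ_i of Eq. (5.5.14) is identical, fed with per-site occupations instead of totals.
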

Internal energy

From Eq. 5.5.2, we have

\[
E[H] = E[\mathcal{H} - \mu N] = E[\mathcal{H}] - \mu E[N] , \tag{5.5.15}
\]

such that the internal energy is given by

\[
E[\mathcal{H}] = E[H] + \mu E[N] = -\frac{E[p]}{\beta} + \mu E[N] , \tag{5.5.16}
\]

so it is readily obtained from the former quantities.

Heat capacity

This quantity is usually defined as

\[
C_V = E\!\left[\mathcal{H}^2\right] - (E[\mathcal{H}])^2 . \tag{5.5.17}
\]

Note that

\[
(E[\mathcal{H}])^2 = (E[H + \mu N])^2 = (E[H] + \mu E[N])^2 = (E[H])^2 + 2\mu E[H]E[N] + \mu^2 (E[N])^2
\]

and also

\[
E\!\left[\mathcal{H}^2\right] = E\!\left[(H + \mu N)^2\right] = E\!\left[H^2 + 2\mu HN + \mu^2 N^2\right] = E\!\left[H^2\right] + 2\mu E[H]E[N] + \mu^2 E\!\left[N^2\right]
\]

because N and H are commuting operators. Then,

\[
C_V = E\!\left[H^2\right] - (E[H])^2 + \mu^2\left\{E\!\left[N^2\right] - (E[N])^2\right\} = E\!\left[H^2\right] - (E[H])^2 + \frac{\kappa\mu^2}{\beta V}(E[N])^2 . \tag{5.5.18}
\]

Furthermore,

\[
E\!\left[H^2\right] = \frac{\mathrm{Tr}\{H^2 e^{-\beta H}\}}{\mathcal{Z}}
= \frac{1}{\mathcal{Z}} \sum_{p=0}^{\infty} \sum_{\alpha} \frac{(-\beta)^p}{p!} \langle\alpha| H^2 (H^p) |\alpha\rangle
= \frac{1}{\mathcal{Z}} \sum_{p=0}^{\infty} \sum_{\alpha} \frac{(-\beta)^{p+2}}{(p+2)!} \frac{(p+2)(p+1)}{(\beta)^2} \langle\alpha| H^{p+2} |\alpha\rangle
\]
\[
= \frac{1}{\mathcal{Z}} \sum_{p=0}^{\infty} \sum_{\alpha} \frac{(-\beta)^p}{p!} \left[\frac{p(p-1)}{\beta^2}\right] \langle\alpha| H^p |\alpha\rangle = \frac{E[p(p-1)]}{\beta^2} , \tag{5.5.19}
\]

so that

\[
C_V = \frac{1}{\beta^2}\left\{E\!\left[p^2\right] - (E[p])^2 - E[p]\right\} + \frac{\kappa\mu^2}{\beta V}(E[N])^2 , \tag{5.5.20}
\]

which is also a suitable form within the framework of the SSE method.
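The fluctuation part of the estimator (5.5.20) again needs only moments of the expansion order p. A minimal sketch, with a hypothetical function name:

```cpp
#include <vector>

// Energy-fluctuation part of the heat capacity from the expansion orders p,
// Eq. (5.5.20): E[H^2] - (E[H])^2 = ( E[p^2] - (E[p])^2 - E[p] ) / beta^2.
double heat_capacity_fluct(const std::vector<double>& p, double beta) {
    double m1 = 0.0, m2 = 0.0;  // first and second moments of p
    for (double v : p) { m1 += v; m2 += v * v; }
    m1 /= p.size();
    m2 /= p.size();
    return (m2 - m1 * m1 - m1) / (beta * beta);
}
```

The chemical-potential correction term κμ²(E[N])²/(βV) is then added using the compressibility estimator discussed above.
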

Superfluid fraction

For systems with periodic boundary conditions, which is the case for all systems discussed in the present dissertation, this quantity can be calculated using the winding-number prescription of path integrals [30, 35, 36]. In the World Line picture, whenever a particle crosses a boundary a winding has occurred, so we just need to check for off-diagonal operators that connect two lattice sites across the periodic boundary. The fluctuations of the net winding number W are related to the superfluid fraction via

\[
\rho_s = \frac{m L^2}{\hbar^2 \beta E[N]} E\!\left[W^2\right] , \tag{5.5.21}
\]

where L is the one-dimensional length of the periodic cell.
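In a sketch, with ℏ = m = 1 and a list of net winding numbers accumulated over the run (illustrative helper, not the library's interface):

```cpp
#include <vector>

// Superfluid fraction from winding-number fluctuations, Eq. (5.5.21),
// in units hbar = m = 1: rho_s = L^2 E[W^2] / ( beta E[N] ).
double superfluid_fraction(const std::vector<int>& windings,
                           double L, double beta, double mean_N) {
    double w2 = 0.0;  // accumulate E[W^2]
    for (int w : windings) w2 += static_cast<double>(w) * w;
    w2 /= windings.size();
    return L * L * w2 / (beta * mean_N);
}
```
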

5.5.2 G-sector

Observables in this sector take the form of imaginary-time correlation functions that can be used to obtain important quantities such as n-particle Green's functions and reduced density matrices. Of particular interest for this dissertation is the single-particle density matrix, which is directly related to the condensate fraction of the system. I will give an example of the calculation of this quantity following Ref. [18].

Single-particle density matrix

It is a single-particle correlation function measured at equal imaginary times, defined as

\[
\hat{\rho}_1 \equiv \sum_{i,j=1}^{N_s} |i\rangle\, E\!\left[\hat{b}_i^\dagger \hat{b}_j\right] \langle j| , \tag{5.5.22}
\]

where |i⟩ is the single-particle state of site i. Here, it should be taken as the occupation number of lattice site i. The measurement is made during the passage of the worm in the loop-update procedure. Starting with the insertion of the worm at some point in the World Line, we track the states of the system as the discontinuity is propagated. Every time the head of the worm returns to the same location in time, or propagation level, equal-time correlation functions can be measured. Consider the example shown in Fig. 5.9, where the operator pair \(\hat{b}_1^\dagger\hat{b}_1\) is inserted at τ = 1, lattice site i = 1, which in operator-string language is

\[
\cdots |\alpha(2)\rangle \langle\alpha(2)| H_1^d |\alpha(1)\rangle\, \hat{b}_1^\dagger \hat{b}_1 \langle\alpha(1)| \cdots
= \cdots |1,1,1\rangle \langle 1,1,1| H_1^d |1,1,1\rangle\, \hat{b}_1^\dagger \hat{b}_1 \langle 1,1,1| \cdots
\]

Suppose that the worm propagates forward, entering leg 0 on site 1,

\[
\cdots |1,1,1\rangle \langle 1,1,1| H_1^d |1,1,1\rangle\, \hat{b}_1^\dagger \hat{b}_1 \langle 1,1,1| \cdots
\longrightarrow
\cdots |1,1,1\rangle \langle 1,1,1| H_1^d \hat{b}_1^\dagger |0,1,1\rangle\, \hat{b}_1 \langle 1,1,1| \cdots ,
\]

then it switches to the leg connecting lattice site 2 and propagates back,

\[
\cdots |1,1,1\rangle \langle 1,1,1| H_1^d \hat{b}_1^\dagger |0,1,1\rangle\, \hat{b}_1 \langle 1,1,1| \cdots
\longrightarrow
\cdots |1,1,1\rangle \langle 1,1,1| H_1^o |0,2,1\rangle\, \hat{b}_2^\dagger \hat{b}_1 \langle 1,1,1| \cdots ,
\]

as shown in Fig. 5.9(a). Therefore, at τ = 1 again, the form of the operator sequence is

\[
|0,2,1\rangle\, \hat{b}_2^\dagger \hat{b}_1 \langle 1,1,1| ,
\]

which is not of the same form as the strings that compose the partition function \(\mathcal{Z}\), since the trace condition is not fulfilled. The entire operator sequence then would look like

\[
\frac{1}{\mathcal{Z}} \sum_{n=0}^{\infty} \sum_{\{C_n\}} \sum_{\alpha(\tau),\alpha'(\tau)} \frac{\beta^n}{n!}
\left\langle\alpha'(\tau)\middle|\hat{A}_i^\dagger \hat{A}_j\middle|\alpha(\tau)\right\rangle
\langle\alpha(\tau)| H^n |\alpha'(\tau)\rangle \equiv E\!\left[\hat{A}_i^\dagger \hat{A}_j\right] , \tag{5.5.23}
\]

so that the previous worm movement allows for the measurement of \(E[\hat{b}_2^\dagger\hat{b}_1]\). The discontinuous space that the worm creates during its propagation is therefore what is needed to measure correlation functions. It then continues to move, so that

\[
\cdots |0,2,1\rangle\, \hat{b}_2^\dagger \hat{b}_1 \langle 1,1,1| H_0^d |1,1,1\rangle \cdots
\longrightarrow
\cdots |0,2,1\rangle\, \hat{b}_1 \langle 1,2,1| \hat{b}_2^\dagger H_0^d |1,1,1\rangle \cdots ,
\]

following which, by another QMC move, it exits through the leg connecting site 3,

\[
\cdots |0,2,1\rangle\, \hat{b}_1 \langle 1,2,1| \hat{b}_2^\dagger H_0^d |1,1,1\rangle \cdots
\longrightarrow
\cdots |0,2,1\rangle\, \hat{b}_1 \hat{b}_3^\dagger \langle 1,2,0| H_0^o |1,1,1\rangle \cdots ,
\]

as shown in Fig. 5.9(b), which then allows for the measurement of \(E[\hat{b}_3^\dagger\hat{b}_1]\). Finally, another set of movements, shown in Fig. 5.9(c), leads to the head returning to the tail. This final set of movements does not involve the head crossing the propagation level where the tail is held still, so no further equal-time measurements can be made. Naturally, the last movement allows for the estimate of the diagonal density \(E[\hat{b}_1^\dagger\hat{b}_1]\), which is not, however, the best way to perform this estimate.

Figure 5.9: Example of the calculation of equal-time correlation functions during the passage of the worm, which is initially inserted at τ = 1 and lattice site 1. (a) Movement allowing for the measurement of \(E[\hat{b}_2^\dagger\hat{b}_1]\). (b) Movement allowing for the measurement of \(E[\hat{b}_3^\dagger\hat{b}_1]\). (c) Final set of movements leading the head to the tail. See text for detailed discussion. Figure from Ref. [18].

Condensate fraction

The condensate fraction of a system is usually defined as the largest eigenvalue of the single-particle density matrix \(\hat{\rho}_1\) [25, 159]. Therefore, once we have measured \(\hat{\rho}_1\) with the procedure previously discussed, we need to diagonalize it, obtaining a set of eigenvalues Λ = {λ₁, λ₂, λ₃, ...} such that

\[
n_0 \equiv \max\{\Lambda\} \tag{5.5.24}
\]

is the condensate fraction of the system. This can be challenging, since the single-particle density matrix is extremely large for reasonable lattice sizes [18, 160].
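Since full diagonalization is expensive and only the largest eigenvalue is needed, an iterative method suffices. A minimal power-iteration sketch for a real symmetric, positive-semidefinite matrix such as the measured density matrix (illustrative; a production code would rather use a library routine such as Lanczos):

```cpp
#include <cmath>
#include <vector>

// Largest eigenvalue of a real symmetric positive-semidefinite matrix
// (row-major, dim x dim) by power iteration. For the single-particle
// density matrix this yields the condensate fraction n0 = max{Lambda}
// without full diagonalization.
double largest_eigenvalue(const std::vector<double>& rho, int dim,
                          int iters = 200) {
    std::vector<double> v(dim, 1.0), w(dim, 0.0);
    double lambda = 0.0;
    for (int it = 0; it < iters; ++it) {
        // w = rho * v
        for (int i = 0; i < dim; ++i) {
            w[i] = 0.0;
            for (int j = 0; j < dim; ++j) w[i] += rho[i * dim + j] * v[j];
        }
        double norm = 0.0;
        for (int i = 0; i < dim; ++i) norm += w[i] * w[i];
        norm = std::sqrt(norm);
        lambda = norm;  // once v is a unit eigenvector, ||rho v|| -> lambda_max
        for (int i = 0; i < dim; ++i) v[i] = w[i] / norm;
    }
    return lambda;
}
```
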

5.6 CSSER

The implementation of the Stochastic Series Expansion method on computers can be done in several different ways and in different programming languages. Depending on the specificities of the problem one is interested in studying, algorithmic variations can largely improve the efficiency of the techniques discussed in the present Chapter. CSSER is a parallel, SSE-based library written in C++ that was developed by Ushnish Ray around 2014, during his PhD with Professor David Ceperley at the University of Illinois [12, 18, 91, 160]. It is available at https://github.com/ushnishray/CSSER. Even though its main applications so far have been restricted to ultracold atomic gases, the library has a highly abstract structure that allows for the straightforward implementation of different Hamiltonians, with the basic Monte Carlo engines of loop update and diagonal update being model-independent. Some relevant observables, such as thermodynamic quantities, imaginary-time correlation functions and superfluid fractions, are also implemented. I have used this library, along with quite a few modifications, to obtain the results that are discussed in the upcoming part of this dissertation.

Part III

Applications

Chapter 6

Preliminary results

We have so far presented part of the theoretical framework and some of the numerical methods that can be used to study the system that is central to this dissertation, namely the Disordered Bose-Hubbard Model (DBHM). This brings us to the point where the application of these tools is expected. In what follows, I will discuss the usage of the Stochastic Series Expansion (SSE) to simulate the DBHM given by Eq. 3.1.6. The present Chapter introduces preliminaries of both model and method, comprising a comparison between Quantum Monte Carlo (QMC) results and exact diagonalization (ED), and also discussing the phase diagram and order parameters of the DBHM for different lattice fillings. Furthermore, the Local Density Approximation (LDA) is used to gain insight into the features of trapped, non-homogeneous systems that are closely related to experiments.

6.1 Comparison between QMC and exact diagonalization

As we have discussed in Sec. 4.1, exact diagonalization techniques are extremely important for verifying the validity of more abstract numerical methods. Even when the method under consideration is already established, a direct comparison to exact results is imperative to avoid a number of errors from different sources, such as machine compatibility and compilation issues. Furthermore, the construction of an algorithm that exactly solves a certain problem, even in very specific or otherwise inapplicable situations, certainly has pedagogical value, leading to a deeper acquaintance with the problem and possibly to valuable insights into more generic cases. In the case of the DBHM, I have written codes in C++ that perform exact diagonalization of the model in d = 1 spatial dimension, both in the canonical and grand-canonical ensembles. They allow for calculations of a few ground-state observables as well as finite-temperature thermodynamic quantities. However, the lattice size is restricted to only a few sites if one does not want to spend a very long computation time, since the codes perform complete diagonalization. Fig. 6.1 shows a comparison between ED and SSE for the energy, density and compressibility over a variety of samples with different control parameters, which are listed in Table 6.1. The lattice size used was L = 6. There is perfect agreement between the obtained values, which indicates that SSE is indeed capturing the physics of the model. However, perhaps the most important aspect of this comparison is the total time required for the calculation, which was performed on the same computer. ED takes from 12 to 16 hours to finish the diagonalization and the computation of thermal averages using standard C++ libraries. SSE, on the other hand, provides rigorously the same results in about 20 seconds!

Figure 6.1: Thermodynamic quantities compressibility, density and energy (top to bottom) calculated via exact diagonalization (ED) and Stochastic Series Expansion (SSE) for an L = 6 one-dimensional lattice. Each sample has a different set of parameters, shown in Table 6.1.

6.2 Phase diagrams for the DBHM

Another type of preliminary analysis that is important for the study of the DBHM is the construction of phase diagrams through the calculation of the superfluid order parameter ρ_s and the compressibility κ of the system, which makes it possible to identify the phase of the system (SF, BG or MI) for a given set of physical parameters, according to Table 3.1. This allows for a better understanding of the features of the system in terms of its physical properties in different regions of parameter space. There are several possibilities for constructing these maps, with the most popular choice arguably being the interaction-tunneling ratio (U/t) along one direction and the normalized disorder strength Δ/U along the other, at fixed filling, which is defined as the ratio of the average number of particles to the total number of lattice sites. Grand-canonical types of phase diagrams, where one of the axes carries the chemical potential μ, are also relevant and can be used as a mapping from homogeneous systems to trapped systems through the Local Density Approximation (LDA), which incorporates the confinement as a local chemical potential.

Sample   t/U    μ      Δ      β
1        0.5    0.5    0.1    5
2        0.02   0.5    0.1    5
3        0.01   0.5    0.1    5
4        0.01   -0.5   0.1    20
5        0.02   -0.5   0.1    20
6        0.5    -0.5   0.1    20
7        0.5    -0.5   1.0    5
8        0.02   -0.5   1.0    5
9        0.01   -0.5   1.0    5
10       0.01   0.5    1.0    20
11       0.02   0.5    1.0    20
12       0.5    0.5    1.0    20
13       0.5    0.5    0.01   5
14       0.02   0.5    0.01   5

Table 6.1: Parameters used for comparison of thermodynamic quantities obtained via ED and SSE.

6.2.1 Grand-canonical maps

In artificial lattices constructed using counter-propagating lasers to establish a standing-wave pattern, the periodic lattice potential is usually of the form

\[
V(\vec{r}\,) = V_0\left[\sin^2\!\left(\frac{\pi}{a}x\right) + \sin^2\!\left(\frac{\pi}{a}y\right) + \sin^2\!\left(\frac{\pi}{a}z\right)\right] , \tag{6.2.1}
\]

where a is the lattice parameter, the distance between consecutive potential valleys [12, 90, 91, 161]. This one-body external potential is used to construct Wannier states (see Secs. 1.2.2 and 1.2.3), from which the Bose-Hubbard terms t, ε and U are obtained (Eqs. 1.2.16, 1.2.17, 1.2.18). These terms therefore depend on the lattice depth V₀, as shown in Fig. 6.2. The lattice depth is often written as V₀ = sE_R, where E_R is the atomic recoil energy: the kinetic energy imparted to an atom at rest by a photon from the optical lattice. For ⁸⁷Rb atoms and a = 406 nm, we have E_R = 167 nK. A suitable form of phase diagram is then given by fixing s, or equivalently the interaction-tunneling ratio, and calculating physical properties in the μ vs. Δ plane. Fig. 6.3 shows the density, compressibility, superfluid fraction and resulting phase diagram for s = 10 and s = 14, while Fig. 6.4 shows the same quantities for s = 18. Data were obtained using the speckle type of disorder distribution. The most remarkable feature of these phase diagrams is perhaps the almost complete absence of insulating states for shallow lattices (s ≲ 10). The SF-MI transition for the clean system occurs at s = 13.6. Notice that, for s = 14, the Mott-insulating lobes timidly start to form. This is, however, a finite-size effect, since the disorder distribution being considered is unbounded. For larger lattice depths (s ≳ 18), the interaction is so strong as to forbid global coherence even for densities as large as 3 atoms per lattice site, and the phase diagram is mostly taken over by the Bose-glass. It is also worth noticing that the low-density Bose-glass has aberrantly large compressibilities, visible as stripes in the compressibility maps close to the vacuum (zero density). This feature persists regardless of the value of s.

Figure 6.2: Dependence of the Hubbard terms U, t and ε (left to right) on the lattice depth V₀. Figure from Ref. [18].
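The recoil energy quoted above for ⁸⁷Rb can be reproduced with a few lines, taking m(⁸⁷Rb) ≈ 1.443 × 10⁻²⁵ kg (the function name is mine; the physical constants are standard CODATA values):

```cpp
#include <cmath>

// Atomic recoil energy E_R = hbar^2 k^2 / (2 m), with k = pi / a for lattice
// parameter a (a = lambda/2 for counter-propagating beams), returned in
// nanokelvin, i.e. divided by Boltzmann's constant.
double recoil_energy_nK(double mass_kg, double a_m) {
    const double pi   = 3.14159265358979323846;
    const double hbar = 1.054571817e-34;  // J s
    const double kB   = 1.380649e-23;     // J / K
    const double k = pi / a_m;            // recoil wave number
    return hbar * hbar * k * k / (2.0 * mass_kg) / kB * 1e9;
}
```

For m = 1.443 × 10⁻²⁵ kg and a = 406 nm this gives ≈ 167 nK, matching the value used in the text.
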

6.2.2 Trapped systems

The use of the grand-canonical type of phase diagram is further justified by the fact that, in ultracold-atom experiments, a confining potential is necessary to keep the atoms together in order to study interaction-mediated effects; it is usually called a trap and will be denoted by V_trap. This is often achieved using magnetic quadrupolar fields that provide harmonic confinement and also balance gravity, so that

\[
V_{\mathrm{trap}}(r_i) = \frac{m\omega^2}{2}\, r_i^2 , \tag{6.2.2}
\]

where r_i is the distance of lattice site i to the center of the trap and ω is the trap frequency. The DBHM Hamiltonian for the trapped system is then

\[
\hat{H}_{\mathrm{trap}} = -t \sum_{\langle ij\rangle} \hat{b}_i^\dagger \hat{b}_j + \frac{U}{2} \sum_i \hat{n}_i(\hat{n}_i - 1) - \sum_i (\mu - \epsilon_i)\hat{n}_i + \sum_i \frac{m\omega^2}{2}\, r_i^2\, \hat{n}_i . \tag{6.2.3}
\]

The last two terms can be grouped by defining

\[
\mu_i \equiv \mu - \epsilon_i - \frac{m\omega^2}{2}\, r_i^2 , \tag{6.2.4}
\]

so that

\[
\hat{H}_{\mathrm{trap}} = -t \sum_{\langle ij\rangle} \hat{b}_i^\dagger \hat{b}_j + \frac{U}{2} \sum_i \hat{n}_i(\hat{n}_i - 1) - \sum_i \mu_i \hat{n}_i , \tag{6.2.5}
\]
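The mapping (6.2.4) at the heart of the LDA is a one-line function (hypothetical helper; m_omega2 stands for the product mω²):

```cpp
// Local chemical potential in the LDA, Eq. (6.2.4):
// mu_i = mu - eps_i - (m omega^2 / 2) r_i^2.
// The harmonic trap enters as a radially decreasing chemical-potential shift.
double lda_local_mu(double mu, double eps_i, double m_omega2, double r_i) {
    return mu - eps_i - 0.5 * m_omega2 * r_i * r_i;
}
```

Scanning r_i outward from the trap center then traces the straight-down cut through the grand-canonical phase diagrams described in the text.
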

Figure 6.3: Physical properties of the DBHM in the (Δ/U vs. μ/U) plane for s = 10 (left panels) and s = 14 (right panels). Top to bottom: density, compressibility, superfluid fraction and phase diagram. For the last one, the following number-color code is used: (0, blue) = vacuum, (1, green) = MI, (2, yellow) = BG and (3, red) = SF. Data obtained by Ushnish Ray using CSSER.

Figure 6.4: Same as the previous figure, but for s = 18. Data obtained by Ushnish Ray using CSSER.

Equation (6.2.5) indicates that the confining potential can be thought of as a local chemical-potential shift, assuming that the sites can be decoupled. In doing so, as one scans the lattice away from the center of the trap, the local chemical potential decreases, which is equivalent to going straight down in the phase diagrams just shown. This situation is illustrated in Fig. 6.5 and comprises the core of the Local Density Approximation (LDA) in these systems. We can then interpolate different properties in these maps to obtain their radial distributions and, by comparing their values, it is also possible to attribute a phase diagram to the trapped system, as shown in Fig. 6.6 (s = 10) and Fig. 6.7 (s = 18). This reveals that the confining potential renders the system non-homogeneous. The atomic cloud is therefore composed of shells of different phases that can be non-trivially arranged depending on the physical parameters s, Δ and μ. Another remarkable aspect is that, for higher disorder strengths, the radii of the clouds are larger, which can be seen from the negative slope of the vacuum-border line in the phase diagrams of the homogeneous system. This is physically expected, since sites with positive shifts are likely to be unoccupied, while those with negative shifts can only be partially over-occupied because the particles interact repulsively, which leads to an expansion of the system for larger disorder strengths. Notice, however, that calculations for the homogeneous systems are averaged over different disorder realizations. If we consider a single disorder realization, the situation is even richer, since in this case several lattice sites can be empty. It is therefore more challenging to address the structure of the atomic cloud in terms of the compounding phases that relate to local physical properties.
In spite of that, properties that are measured in experiments are inherently time-averaged, which is a proxy to disorder averaging, since the power of the laser used to generate the random potential inevitably drifts over time. It is also worth noting that, even though we are considering phase diagrams for the trapped system, this is merely a pictorial analysis, since, rigorously, phases of matter are only assigned to systems in the thermodynamic limit. In this sense, it is not strictly correct to attribute a "phase" to a certain lattice site, as this property naturally arises from the emerging collective macroscopic behavior of the individual particles over the whole system. This points towards the serious obstacles that trapped systems face concerning the study of critical phenomena, since it can be extremely difficult to distinguish singular behavior of any physical property given their non-homogeneous distributions throughout the atomic cloud.

Figure 6.5: In the Local Density Approximation, the confining potential is incorporated as a local chemical-potential shift, which hence decreases as the distance from the center of the trap increases (left). This is equivalent to scanning straight down in the grand-canonical type of phase diagram (right).

6.2.3 Fixed filling maps

In spite of the importance of the grand-canonical type of phase diagram for applying the LDA to describe trapped, non-homogeneous systems, the most common map employed in the context of the DBHM is certainly the one at fixed lattice filling, denoted by ρ. In this case, as the density is fixed, quantum criticality can be investigated, since phase transitions potentially take place as a consequence of tuning the model parameters t and U. Similarly, disorder-driven phase transitions can be addressed by tuning the disorder strength Δ. For this reason, we will use maps that carry U/t on the horizontal axis and Δ/U on the vertical axis. This kind of map is practical for numerical investigations, since computations are usually done within the occupation-number basis, which gives ready access to both local and total numbers of particles. Additionally, even though the majority of experiments with ultracold atoms use harmonic or more generic non-homogeneous traps, it has recently become possible to construct hard-wall traps that grant homogeneity to the atomic cloud [162, 163]. This innovation is extremely important for benchmarking both standard theories of condensed-matter physics and computer simulations against experimental systems.

Figure 6.6: Radial distributions of physical properties and the resulting phase diagram of the trapped system for s = 10. For the left panels, Δ/U = 1.0 and μ/U = 0; for the right ones, Δ/U = 2.9 and μ/U = −1.5.

Figure 6.7: Radial distributions of physical properties and the resulting phase diagram of the trapped system for s = 18. For the left panels, Δ/U = 1.0 and μ/U = 1.75; for the right ones, Δ/U = 0.01 and μ/U = 1.5.

Incommensurate filling

To start with, consider the case of incommensurate filling, where the average occupation number of a lattice site is not an integer. In this case, the clean system is always in the superfluid state. For ρ < 1, there will be empty lattice sites, so particles are free to hop throughout the lattice without being penalized, regardless of how large the interaction strength U is. Similarly, for ρ > 1 and large interaction strengths, there will be an excess number of particles "on top" of a Mott-insulating background that are also able to condense, since hopping through the lattice lowers the energy of the system. When we introduce disorder to the lattice, the particles that exceed unit filling for ρ > 1 can become localized by valleys and hills of the disorder terrain. Concurrently, the same can happen to the excess holes when ρ < 1; in both cases we therefore expect a transition from the SF to the Bose-glass phase. The order parameters and resulting phase diagrams for fillings ρ = 0.5, ρ = 0.75 and ρ = 1.25 are shown in Figs. 6.8, 6.9 and 6.10, respectively. These calculations were done with the Gaussian type of disorder, and properties were averaged over 40 disorder realizations for an L = 6 three-dimensional cubic lattice. The similar features of the obtained phase diagrams indicate that, as expected, the underlying physical processes that ultimately lead to the behavior of the order parameters are the same over different densities. Even though this sets the shape of the phase diagram, a couple of aspects are worth noticing. The first regards the behavior of the compressibility: this quantity is systematically larger for lower densities, reflecting the fact that it is easier to add particles to the lattice if there are empty sites, a direct consequence of the repulsive nature of the interaction between atoms. The second aspect is the growth of the superfluid region at higher densities.
Notice that, for ρ = 0.5, less than half of the phase diagram is covered by the SF, whereas for ρ = 1.25 there is only a tip of BG in the top-right corner of the map. This, again, originates from the repulsive character of the interaction. At larger densities, the atoms progressively fill up the dents of the disorder terrain, with the remaining ones being more likely to establish global superflow. Both features point to the fundamental manner in which the interaction plays the dominant role in shaping the phase diagram of this system.

Figure 6.8: Order parameters and phase diagram for ρ = 0.5. (a) Superfluid fraction ρ_s. (b) Compressibility per particle κ. (c) Phase diagram: (BG, yellow) = Bose-glass, (SF, red) = superfluid.

Figure 6.9: Order parameters and phase diagram for ρ = 0.75. (a) Superfluid fraction ρ_s. (b) Compressibility per particle κ. (c) Phase diagram.

Figure 6.10: Order parameters and phase diagram for ρ = 1.25. (a) Superfluid fraction ρ_s. (b) Compressibility per particle κ. (c) Phase diagram.

Unit filling

The case of commensurate filling is somewhat more interesting, since it can support the existence of the Mott-insulating state in clean lattices. Consider then the unit-filling case, ρ = 1, shown in Fig. 6.11. There is a striking difference in comparison to the incommensurate phase diagrams, namely the appearance of the superfluid state with a finger-like shape. For Δ = 0, the system is a Mott insulator for U/t ≳ 35.0¹, so this situation offers the possibility of reaching the superfluid state from the MI just by adding disorder to the lattice. This is a remarkable example of the order-by-disorder phenomenon in quantum systems. In fact, as discussed in Sec. 3.3.2, the region covered by the finger shape is called the re-entrant superfluid (RSF). Notice that, as we are considering an unbounded type of disorder distribution (Gaussian), the Mott insulator is absent for Δ > 0. As expected, the Bose-glass phase intervenes between the MI and the SF. Higher commensurate fillings could be considered but, just as in the incommensurate case, the underlying physical mechanisms must be the same as for unit filling, so we do not expect to see any relevant differences. Moreover, the simulation of larger fillings is more expensive, since we need to consider a much larger basis set, as the truncation in the occupation number of each lattice site needs to be enlarged relative to unit filling. Given the richness and peculiarities of the commensurate-filling phase diagram, the upcoming chapters will be devoted to a systematic analysis of disorder properties and statistics of the DBHM at unit filling.

¹The correct value for the clean critical point is U/t = 29.34(2) [40]. Here, due to finite-size effects, the value is a little larger.

Figure 6.11: Order parameters and phase diagram for ρ = 1.0. (a) Superfluid fraction ρ_s. (b) Compressibility per particle κ. (c) Phase diagram.

Chapter 7

Aspects of the disorder ensemble

After investigating the phase diagram of the DBHM in three dimensions using SSE, explicitly confirming features that were expected based on theoretical ideas from Chapter 3, we turn our attention to the process of averaging over the disorder ensemble. As we are considering quenches of disorder, where the energy shifts attributed to lattice sites are fixed in time, we must consider different disorder realizations and account for the statistics of physical properties over them. Even though the control parameters – filling ρ, interaction-tunneling ratio U/t, lattice size L and disorder strength Δ – are the same for each disorder instance, the fact that we are extracting the disorder profile from a probability distribution generates a unique Hamiltonian for each of them. The set of these Hamiltonians is called the disorder ensemble. To be more specific, consider an L = 6 lattice, so that there are 6³ = 216 lattice sites. We then construct one particular Hamiltonian according to Eq. 3.1.6,

\[
\hat{H} = -t \sum_{\langle ij\rangle} \hat{b}_i^\dagger \hat{b}_j + \frac{U}{2} \sum_i \hat{n}_i(\hat{n}_i - 1) - \sum_i (\mu - \epsilon_i)\hat{n}_i ,
\]

where the 216 energy shifts ε_i are sampled from a Gaussian distribution with zero mean and standard deviation Δ, and simulate this Hamiltonian using SSE to obtain its physical properties. However, if we sample the disorder distribution again, the 216 newly sampled numbers will be different – perhaps dramatically – and will lead to a different set of values for the same physical properties. As these two samples are completely equivalent, the only reasonable quantities that we can attribute to the system are disorder averages and other disorder-statistical measures. In principle, a property obtained from a single disorder realization is therefore meaningless. Notice that, even if we consider exactly the same set of 216 numbers, they can be spatially distributed in, roughly speaking¹, 216! ∼ 10⁴¹² different manners, which would certainly yield different values of the physical properties.

The problem of obtaining physical properties in systems with quenched disorder is therefore similar to obtaining thermal averages using the canonical-ensemble prescription of statistical mechanics. The microstates over which the system navigates as a consequence of the energy exchange with the thermal reservoir translate into the different disorder realizations in the disorder ensemble. However, in the canonical case, the system naturally reaches an equilibrium distribution, whereas in the disorder case we must accomplish that by "manually" adding more and more disorder samples to the ensemble. Consequently, not considering a large enough number of samples implies that one is actually measuring properties out of equilibrium in a disorder sense. In what follows, I will discuss this disorder-equilibration process and the size of the fluctuations within the disorder ensemble.

¹Recall that we are considering a cubic lattice, where each spatial direction is equivalent, so the total number of effectively different samples is potentially smaller than L³!.
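Generating one member of the disorder ensemble then amounts to drawing L³ Gaussian numbers. A minimal C++ sketch (illustrative, not the code actually used in this dissertation):

```cpp
#include <random>
#include <vector>

// Draw one quenched disorder realization: L^3 on-site energy shifts eps_i
// sampled from a Gaussian with zero mean and standard deviation Delta.
// A distinct seed produces a distinct member of the disorder ensemble.
std::vector<double> disorder_realization(int L, double Delta, unsigned seed) {
    std::mt19937 gen(seed);
    std::normal_distribution<double> dist(0.0, Delta);
    std::vector<double> eps(static_cast<std::size_t>(L) * L * L);
    for (double& e : eps) e = dist(gen);
    return eps;
}
```
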

7.1 Definition of disorder-statistical quantities

In order to characterize quantities related to the disorder ensemble, it is important to establish a notation that will be used throughout this chapter and the ones to come. For precise definitions of statistical quantities, we refer the reader to Appendix A. To start with, we denote the disorder ensemble for a certain lattice size L by ℰ_Δ(L). Suppose that we are interested in an observable X of the system. The disorder average of X, calculated as the regular, equal-weight mean of the values obtained from different samples², is given by

\[
[X] \equiv \text{disorder-average of } X = \frac{1}{N} \sum_{i\in\mathcal{E}_\Delta(L)} X_i , \tag{7.1.1}
\]

where N is the number of samples in the ensemble and X_i is the particular value of X for sample i. Similarly, the disorder-variance of X is given by

\[
(\Delta X)^2 \equiv \text{disorder-variance of } X = \frac{1}{N-1} \sum_{i\in\mathcal{E}_\Delta(L)} (X_i - [X])^2 . \tag{7.1.2}
\]

An important quantity that captures the size of the fluctuations of X from sample to sample within ℰ_Δ(L) is the relative variance of X,

\[
\mathcal{D}_X(L) \equiv \text{relative variance of } X = \frac{(\Delta X)^2}{[X]^2} . \tag{7.1.3}
\]
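The three statistics of Eqs. (7.1.1)–(7.1.3) can be computed in a simple pass over the per-sample values. A minimal sketch (the struct and function names are mine):

```cpp
#include <vector>

struct DisorderStats {
    double mean;               // [X], Eq. (7.1.1)
    double variance;           // (Delta X)^2 with the N-1 denominator, Eq. (7.1.2)
    double relative_variance;  // D_X = (Delta X)^2 / [X]^2, Eq. (7.1.3)
};

// Disorder statistics of an observable X over an ensemble of per-sample values.
DisorderStats disorder_stats(const std::vector<double>& X) {
    double mean = 0.0;
    for (double x : X) mean += x;
    mean /= X.size();
    double var = 0.0;
    for (double x : X) var += (x - mean) * (x - mean);
    var /= (X.size() - 1);
    return {mean, var, var / (mean * mean)};
}
```
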

Furthermore, the probability distribution of X over ℰ_Δ(L) will be denoted by 𝒫_X(L). Table 7.1 summarizes this notation.

Statistical quantity       Notation
Average                    [X]
Variance                   (ΔX)²
Relative variance          𝒟_X(L)
Probability distribution   𝒫_X(L)

Table 7.1: Notation of statistical quantities over the disorder ensemble ℰΔ(퐿).

²Samples, disorder realizations and disorder instances are interchangeable terms.

7.2 Disorder equilibration

Consider initially the evolution of average values of the order parameters superfluid fraction 휌푠 and compressibility per particle 휅 as we increase the number of samples that compose the disorder ensemble, as shown in Fig. 7.1 for unit filling, 푈/푡 = 22.0 and Δ/푈 = 0.5. It is clear that,

Figure 7.1: Evolution of order parameters compressibility (top) and superfluid fraction (bottom) obtained using Gaussian disorder for a 퐿 = 6 lattice. Control parameters are 푈/푡 = 22.0, Δ/푈 = 0.5, 휌 = 1 and 훽 = 15.0. for both quantities, there is a certain equilibration “time” of about 150 samples after which the average value remains steady. This indicates the minimum number of 퐿 = 6 samples that one needs to confidently estimate the averages of the order parameters for this point in parameter- space. However, different statistical quantities can require different minimum numbers of samples to equilibrate, hence a similar analysis must be performed for every quantity of interest. In particular, Fig. 7.2 shows the evolution of variances and relative variances for the same samples as in the previous figure. As it is possible to notice, derived quantities are subjected tomore statistical noise, requiring more samples to reach equilibrium in the disorder sense. In spite of that, for statistical quantities related to 푋, it is safe to perform estimates after 3 the probability distribution 풫푋 has reached equilibrium . One possible way to verify that is by monitoring the histograms of the obtained 푋푖 values as we consider more and more samples. Once the shape, width and tails of the histogram cease to change, we have effectively explored the disorder ensemble. The evolution of 풫 for the samples considered in Figs. 7.1 and 7.2 are shown in Fig. 7.3. Notice that, even though the averages equilibrate for 150 samples, higher order momenta that reflect the general shape of the probability distribution can take considerably longer. However, in this case we can see that there is little difference between histograms for 300 and 500 samples. This analysis indicates that acquiring knowledge of the probability distributions over the dis- order ensemble can be computationally expensive in the sense that many samples need to be

3It is important to stress that the equilibration of 푋 does not imply the equilibration of another observable 푌 , therefore investigating equilibration independently is necessary.

Figure 7.2: Evolution of variances (left panel) and relative variances (right panel) for the same samples as in Fig. 7.1.

generated. On the other hand, this strongly depends on the size of the fluctuations from sample to sample within the disorder ensemble. If the fluctuations are small, fewer samples are expected to be necessary to correctly address statistical measures and, ultimately, probability distributions.

Figure 7.3: Evolution of probability distributions of superfluid fraction (left panel) and compressibility (right panel) for the same samples as in Figs. 7.1 and 7.2.

7.3 Size of fluctuations over the phase diagram

In order to better understand the nature of the fluctuations of physical quantities from sample to sample within the disorder ensemble, it is interesting to look at their variances for different control parameters. Fig. 7.4 shows both variances and relative variances of the order parameters over the unit-filling phase diagram. Regarding the superfluid fraction, the raw variance obviously decreases as one approaches the BG phase boundary, since this order parameter vanishes at this phase transition. For that reason, the important quantity to look at is rather the relative variance, in panel (b), which shows that as we approach the BG the relative size of the fluctuations increases considerably. This indicates that samples look more different when there are fewer particles in the superfluid state, which means that the atoms in the normal state are more susceptible to the disorder terrain. This assertion can be explained by recalling that atoms in the SF state are able to tunnel coherently throughout the lattice. In order to not lose phase coherence, they certainly cannot be too susceptible to the structures of the random potential. The maps for the compressibility exhibit the same trends, but it is worth noticing that in this case even the raw variances continue to increase from the SF, across the boundary and into the BG. Recall that 휅 is finite over this transition.
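The two maps of Fig. 7.4 are built from per-sample estimates at each point in parameter space. A minimal sketch of the two measures (Python/NumPy; the function name is mine) could read:

```python
import numpy as np

def disorder_fluctuations(samples):
    """Raw variance and relative variance of a quantity over the disorder
    ensemble, from a 1D array of per-sample estimates."""
    x = np.asarray(samples, dtype=float)
    var = x.var()
    return var, var / x.mean() ** 2  # (variance, relative variance)
```

The relative variance is the meaningful measure near the boundary, where the mean of the order parameter itself goes to zero and drags the raw variance down with it.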

Figure 7.4: Fluctuations from sample to sample of the order parameters over the phase diagram for 훽 = 15.0, 퐿 = 6 and Gaussian disorder. (a), (b) Variance and relative variance of the superfluid fraction. (c), (d) Same quantities for the compressibility. The white-dashed line indicates the SF-BG phase boundary.

The general increase of fluctuations as the system approaches a phase transition could be mistaken for the behavior that leads to divergences or discontinuities of susceptibilities at clean critical points. However, in that more common situation, the fluctuations that spread all over the system, amounting to long-range order, are either thermal or quantum in nature. One could then argue that this is what is being observed in Fig. 7.4. Nonetheless, it is fundamentally important to realize that we are strictly observing fluctuations purely from the disorder ensemble. This is done by running the simulations long enough so that the statistical errors associated with the physical properties of every single disorder sample are smaller than 2%. Therefore, any fluctuations arising from the disorder ensemble that are larger than approximately 5% cannot possibly have the same origin as the ones that drive the phase transition⁴. This extremely intriguing feature indicates that disorder-driven phase transitions could actually take place under the same mechanisms that drive their thermal/quantum counterparts. On the other hand, recall that we are considering an 퐿 = 6 lattice that is far away from the size of experimental systems. I will come back to the nature of disorder fluctuations later in the present Chapter and also when we discuss the finite-size scaling of quantities in Chapter 9.

7.4 Differences in probability distributions

The fact that the relative size of fluctuations gets larger as we approach the BG phase boundary indicates that the probability distributions of the order parameters get relatively broader. In order to get an insight into changes of the shape of the distributions over the phase diagram, let us consider the four lowest relative moments, namely the average, relative variance, skewness and kurtosis, along the line Δ/푈 = 0.5 from inside the SF all the way up to the SF-BG phase boundary, as shown in Fig. 7.5 for the superfluid fraction and Fig. 7.6 for the compressibility. Notice that the analyzed points cover the superfluid phase to the largest extent, including the RSF. The relative variances, as previously noticed, show that fluctuations within the disorder ensemble increase greatly as the phase boundary is approached, growing by as much as four orders of magnitude for the superfluid fraction. For this order parameter, the third and fourth relative moments – skewness and excess kurtosis – indicate that the distributions become positively skewed and leptokurtic, deviating from Gaussian behavior. Since we are considering a finite number 푛 of samples, even if they were drawn from a perfectly Gaussian distribution, there would be an associated stochastic noise given by

휎skew ≈ √(6/푛), (7.4.1)

and

휎kurt ≈ √(24/푛), (7.4.2)

which are represented by the orange-dashed lines in both figures. Points that lie within the region delimited by these lines have a 95% chance of being representative of a Gaussian distribution. Points outside this region therefore have a strong tendency to represent non-Gaussian statistics. For the superfluid fraction 휌푠, it is clear that the last couple of points, located on both sides of the phase boundary (푈/푡 = 72.0 and 82.0), strongly suggest that the distribution differs from the ones obtained in regions where 휌푠 is larger. For the compressibility, these deviations are less prominent.
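Eqs. 7.4.1 and 7.4.2 translate directly into a numerical check of whether sampled skewness and excess kurtosis are compatible with Gaussian statistics. A sketch (Python/NumPy; the estimator conventions and function name are mine):

```python
import numpy as np

def moments_vs_gaussian_noise(samples):
    """Sample skewness and excess kurtosis, together with the stochastic
    noise expected for n draws from a perfect Gaussian (Eqs. 7.4.1-7.4.2)."""
    x = np.asarray(samples, dtype=float)
    n = x.size
    z = (x - x.mean()) / x.std()
    skew = np.mean(z**3)
    ex_kurt = np.mean(z**4) - 3.0
    return skew, ex_kurt, np.sqrt(6.0 / n), np.sqrt(24.0 / n)

# A point is "Gaussian-compatible" at the ~95% level if |skewness| and
# |excess kurtosis| fall within about twice their noise bands.
rng = np.random.default_rng(1)
s, k, s_err, k_err = moments_vs_gaussian_noise(rng.normal(size=160))
print(s, k, 2 * s_err, 2 * k_err)
```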

4Rigorously speaking, with a 96% interval of statistical confidence.

Figure 7.5: Relative moments of 휌푠 along Δ/푈 = 0.5 for different interaction strengths all the way up to the SF-BG phase boundary. Once more, 퐿 = 6, 훽 = 15.0 and Gaussian disorder. Analyzed points are shown with red crosses in the left panel. The orange-dashed line indicates a 95% confidence interval with respect to a Gaussian distribution. See text for discussion.

Figure 7.6: Same quantities as in Fig. 7.5 for the compressibility 휅.

To better illustrate this change of behavior, the histograms associated with the relative distributions are shown in Figs. 7.7 (superfluid fraction) and 7.8 (compressibility). Since one of the specific properties of interest is the Gaussian-like nature of the different distributions, we employ a quantile-quantile measure that enables us to study the extent to which a distribution can be ascribed Gaussian properties. The calculated quantities are sorted in ascending order according to their oriented distance from the average in terms of the standard deviation, viz. their normal standard scores

푧푖^data = (푋푖 − [푋]) / √(Δ푋)², (7.4.3)

while theoretical values are calculated from

푧푖^theo = Φ⁻¹[(푖 − 0.5)/푛], (7.4.4)

where 푛 is the number of samples and Φ⁻¹ is the standard normal quantile function. These values are then plotted against each other; the points will lie on a straight line of unit slope if the statistics are Gaussian.
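The construction of Eqs. 7.4.3–7.4.4 can be sketched with the normal quantile function from the Python standard library (`statistics.NormalDist.inv_cdf` plays the role of Φ⁻¹ here; the function name is mine):

```python
import numpy as np
from statistics import NormalDist

def qq_scores(samples):
    """Empirical standard scores (Eq. 7.4.3) paired with theoretical
    Gaussian quantiles (Eq. 7.4.4); Gaussian data lies on z_data = z_theo."""
    x = np.sort(np.asarray(samples, dtype=float))
    n = x.size
    z_data = (x - x.mean()) / x.std()
    z_theo = np.array([NormalDist().inv_cdf((i - 0.5) / n)
                       for i in range(1, n + 1)])
    return z_data, z_theo
```

Plotting `z_data` against `z_theo` produces quantile-quantile plots of the kind shown in Figs. 7.7 and 7.8.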

Figure 7.7: Probability distributions and quantile-quantile plots of 휌푠 for Δ/푈 = 0.5 and 푈/푡 = 22.0, 42.0, 62.0 and 72.0, lattice size 퐿 = 6, 훽 = 15.0 and 160 samples (Gaussian disorder).

The histograms naturally reflect the features that we discussed based on the behavior of the relative moments. The probability distributions close to the BG phase boundary exhibit a non-Gaussian shape, being highly skewed and broad, with a large positive tail. For the superfluid fraction and 푈/푡 = 72.0, which is very close to the tip of the RSF finger, some samples have as much as four times larger amounts of superfluid than the average. Conversely, there is a significant number of samples that have almost no superfluid fraction at all! Notice that, in this case, [휌푠] = 0.6%, so the number of particles in the superfluid state is quite small. Consequently, the way in which the superflow is established across the lattice highly depends on the peculiarities of the disorder terrain, which causes the broadening of the distributions. It is also important to note that the compressibility exhibits the same trends. However, as the interaction-tunnelling ratio decreases and more particles populate the superfluid state, the distributions considerably narrow

and start to exhibit Gaussian-like features. This indicates that the superfluid somewhat “screens” the disorder profile, so that samples look very much alike even for this considerably small lattice size. Even though the deviations from Gaussian behavior are made evident by the histograms, it is convenient to identify such features in the quantile-quantile plots. The reason is that they give a clearer idea of how non-Gaussian a distribution is, also indicating which parts of it deviate from what one would expect of a normal distribution. In particular, they show that even the samples that have order parameters close to the average, with 푧data ∼ 0, are not normally distributed when close to the BG phase. As a final remark, the upshot of these results lies in the fact that a stochastic quantity that is not normally distributed indicates that there is an underlying mechanism driving the process that is not solely led by chance, by randomness. Furthermore, acquiring accurate statistical quantities from such distributions, even the simplest ones, can be very costly in the sense that a large number of samples may be necessary to achieve disorder equilibrium.

Figure 7.8: Same quantities and same samples as in Fig. 7.7, but for the compressibility 휅.

7.5 Origins of non-Gaussian behavior

One of the possible sources of the broadening of the distributions when approaching the SF-BG boundary concerns the size of the samples. As we are, so far, considering an 퐿 = 6 lattice, each sample is generated by sampling 6³ = 216 numbers from the Gaussian distribution of Eq. 3.1.8,

푃gauss(휖) = [1/(√(2휋) Δ)] exp{−휖²/(2Δ²)}.

However, given such finite-size character, the calculated standard deviation of the sampled numbers may not be the same as that of the sampling distribution, namely Δ, which sets the disorder strength. This means that, in principle, different samples could effectively have different disorder strengths. As a result, we would be floating around a certain point in the phase diagram

– take for instance 푈/푡 = 72.0, Δ/푈 = 0.5 – as shown in Fig. 7.9. In fact, the histogram of the calculated standard deviations 휎 for 320 samples shows that this would amount to a 10% variation in the disorder strengths from sample to sample, even though the sampling distribution is rigorously the same⁵. This variation could then lead to large deviations, since the point we are

Figure 7.9: Distribution of the standard deviations of 320 samples for 푈/푡 = 72.0, Δ/푈 = 0.5 (right panel). This effectively means that the disorder strengths are floating around the chosen point, which is critical when close to the phase boundary. See text for further details.

considering is very close to the tip of the RSF finger. The white arrow around the red dot on the left panel of Fig. 7.9 indicates that we would actually be crossing the phase boundaries, entering the BG phase, which would then explain the presence of samples with aberrantly small superfluid fractions. In spite of that, given the symmetry of both the RSF finger around its tip and of the distribution of standard deviations, it is expected that such variation could not possibly lead to the observed non-Gaussian behavior of the order parameters. Moreover, the maps that compose the phase diagram are generated considering a number of different samples, so it is also expected that such a floating effect would somehow be averaged out. A definitive test can be made by re-scaling the energy shifts of every single disorder sample so that all of them have exactly the same standard deviation. We then perform the simulations with the re-scaled samples and compare the distributions of the order parameters, as shown in Fig. 7.10. It is possible to see that the differences between the two distributions, without and with re-scaling, are minimal. Consequently, we can conclude that this is not a factor that should be considered in the study of the fluctuations in the disorder ensemble. Notice that even the width of the distributions remains practically the same. Even though such an analysis does not allow us to point out the cause of the broadening and non-normality of the distributions – it only rules out one candidate – it teaches us that global, relatively small changes in the disorder profile have almost no effect on the resulting order parameters. 
By global we mean that the change is made equally over all lattice sites. Of course,

5This effect is well known in basic statistics. The variance of 푛 normally distributed numbers is distributed according to a chi-squared distribution with 푛 − 1 degrees of freedom.
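The floating of the per-sample disorder strength, and the re-scaling test just described, can be sketched as follows (Python/NumPy; 퐿 = 6 so each sample has 6³ = 216 shifts, and Δ is set to 1 in arbitrary units for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
L, Delta = 6, 1.0
samples = rng.normal(0.0, Delta, size=(320, L**3))  # 320 disorder realizations

# Per-sample standard deviations float around Delta with relative spread
# ~ 1/sqrt(2(n-1)) ~ 5% for n = 216 (the chi-squared result of footnote 5),
# i.e. roughly the 10% sample-to-sample variation (2 sigma) quoted in the text.
stds = samples.std(axis=1, ddof=1)
print(stds.std() / Delta)

# Re-scaling every sample so its standard deviation is exactly Delta:
rescaled = samples * (Delta / stds)[:, None]
print(np.allclose(rescaled.std(axis=1, ddof=1), Delta))  # True
```

Note that the multiplying factors `Delta / stds` are all close to unity, consistent with the observation that the re-scaling barely changes the order-parameter distributions.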

Figure 7.10: Probability distributions of the superfluid fraction (left panel) and compressibility (right panel) before (top) and after (bottom) re-scaling the energy shifts in order to obtain exactly the same disorder strength for every single sample. 320 samples for 푈/푡 = 72.0, Δ/푈 = 0.5, 훽 = 15.0 and 퐿 = 6.

if we multiply the energy shifts by a large factor we certainly expect that the properties would be different. However, in order to correct for the width of the distribution of the standard deviations, the multiplying factor is quite close to unity even for such a small lattice size. It is certainly even less relevant for larger lattices, as this distribution naturally gets narrower. This could indicate that the differences between samples that account for the large width of the distributions of order parameters are related to the way in which the disorder shifts are spatially distributed throughout the lattice. In order to test this conjecture, we can choose a particular sample and randomly shuffle its energy shifts to obtain new disorder realizations that have exactly the same statistical properties. The only difference between these samples is therefore the way in which the energy shifts are spatially distributed. We then simulate the samples to obtain the distributions of the superfluid fractions, as shown in Fig. 7.11. In subfigures (a)–(c), the top panel represents the probability distribution obtained from the usual procedure, where energy shifts are not shuffled, but completely resampled. We shall refer to this as the regular distribution. In subfigure (a), a sample that has 휌푠 = 0.69%, right on top of the average [휌푠], is chosen. The resulting distribution from the shuffling procedure shows little deviation from Gaussian behavior, in contrast to the regular case. 
On the other hand, as we can see in subfigure (d), this distribution has its four lowest relative moments extremely close to those of the regular distribution, which are given by the red-dotted horizontal lines. This indicates that, even though the “average” sample is indeed capable of correctly capturing the relative moments, it fails to reproduce the non-Gaussian features observed in the regular case. This incidentally shows how an analysis of moments up to fourth order can be deceiving in addressing this question.

In subfigure (b), a sample that has a quite low superfluid fraction of 휌푠 = 0.12% is chosen. The resulting distribution from shuffling the energy shifts bears a striking resemblance to the regular

Figure 7.11: Probability distributions obtained by averaging over shuffles of a particular disorder realization. In each subfigure, the top panel represents samples obtained with the usual procedure, where energy shifts are not shuffled, but completely resampled. (a) A sample with superfluid fraction close to the average value is chosen (휌푠 = 0.69%). We then shuffle the energy shifts to obtain 160 new samples that are simulated to obtain the resulting 휌푠 and the pertaining statistics. (b) Same procedure for a sample with lower-than-average 휌푠 = 0.12%. (c) Sample with larger-than-average 휌푠 = 3.06%. The black arrows indicate the positions of the chosen samples. (d) Four lowest relative moments for the previous cases. The red-dotted lines indicate values from the usual procedure. Control parameters are 푈/푡 = 72.0, Δ/푈 = 0.5, 훽 = 15.0 and 퐿 = 6.

one regarding its general shape, as can be noticed by comparing their quantile-quantile plots – they are almost identical. Furthermore, the relative variance in this case is even larger, with the distribution being proportionally more positively skewed and more leptokurtic. Concurrently, in subfigure (c) a sample with larger-than-average superfluid fraction of 휌푠 = 3.06% is chosen, and the distribution obtained from shuffling remarkably exhibits normal, Gaussian behavior. Curiously, its four lowest moments are somewhat close to those of the regular distribution. These results corroborate that the deviations from Gaussian behavior of the distributions have their origin in samples that have almost no superfluid present. This conclusion could be expected, since such deviations are only observed when approaching the BG phase boundary. As a consequence, we are able to identify the mechanism responsible for the appearance of non-Gaussian features as the one that drives the SF-BG phase transition: percolation. The same analysis could have been performed for the compressibility 휅. 
However, as this is not the vanishing order parameter, such effects are less prominent for this quantity when approaching the BG. Moreover, notice that a sample with relatively low 휌푠 does not necessarily have relatively low 휅. In fact, the samples chosen in situations (a)–(c) just described have compressibilities of 2.36 × 10⁻², 2.22 × 10⁻² and 2.12 × 10⁻², respectively – all above the average for the regular distribution, [휅] = 1.46 × 10⁻². In spite of producing relevant results, the shuffling procedure that we just described is not used in any other part of this dissertation. In order to obtain disorder averages and statistics, we use the regular procedure, which consists of sampling the energy shifts from the disorder distribution 푃(휖) for every single sample. A strong reason for this is that, as the primary goal of considering different disorder samples is to confidently explore the disorder ensemble, one needs to sample 푃(휖) to the greatest possible extent, which is definitely not accomplished by the shuffling procedure. In fact, if we consider samples that differ only by a shuffling of the random potential, we are somehow introducing a certain degree of correlation between different samples. Particularly for small lattices, sampling 푃(휖) just once does not allow the system to access the details of such a distribution, especially for unbounded, tailed distributions, as is the case for the Gaussian one that we are considering. However, since in the shuffling procedure the statistical properties – moments of all orders – of the sampled set of numbers that compose the random potential are identical across the different samples, the present discussion allows us to conclude that high-order moments of the sampled 푃(휖), along with the spatial distribution of the energy shifts, play the dominant role in shaping the distributions of order parameters and generic physical properties over the disorder ensemble. 
Recall that, in the regular procedure, we are only ensuring that the random potentials of different samples have the same average and variance – higher moments can be different.
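The shuffling procedure itself is straightforward to state in code (Python/NumPy; the function name is mine): permuting the shifts of one realization preserves every moment of the sampled set while changing only its spatial arrangement.

```python
import numpy as np

def shuffled_realizations(eps, n_shuffles, seed=0):
    """New disorder realizations obtained by randomly permuting the energy
    shifts of a single sample; all moments of the sampled set are preserved."""
    rng = np.random.default_rng(seed)
    eps = np.asarray(eps, dtype=float)
    return [rng.permutation(eps) for _ in range(n_shuffles)]

# Every shuffle has statistics identical to those of the parent sample:
parent = np.random.default_rng(3).normal(0.0, 1.0, size=216)
for s in shuffled_realizations(parent, 160):
    assert np.isclose(s.mean(), parent.mean())
    assert np.isclose(s.std(), parent.std())
```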

Chapter 8

Features of the random potential

As we have seen in the last Chapter, the effects of disorder averaging over different samples from the disorder ensemble are particularly important in the vicinity of the Bose-glass phase. We have shown that in this situation the probability distributions of the order parameters are broad and skewed, a feature that stems from samples that have aberrantly low superfluid fractions. This allowed us to conclude that the percolation mechanism that drives the SF-BG phase transition is responsible for making different samples look relatively more and more different, increasing the fluctuations within the disorder ensemble. In other words, the subtleties of the particular realizations of the random potential become extremely relevant, ultimately determining the strength of the superfluid flow that characterizes the superfluid state.

In the present chapter I will employ maps of physical properties that can be locally calculated in order to address the question of how the random potential is arranged so that it allows for the appearance of relatively large and small superfluid fractions in different samples with the same set of control parameters, samples that are thus completely equivalent. This analysis will further corroborate percolation as the manner by which the superfluid state arises from an insulating system upon the addition of disorder. Consequently, it will make even more explicit that this mechanism is responsible for the already observed non-Gaussian behavior of relevant physical quantities. More importantly, we will be able to relate the features of the random potential to the way in which the single-particle eigenstate with the largest occupation number extends over the lattice, which leads to the resurgence of the superfluid state from the Bose-glass, and to the formation of the Bose-glass from the Mott-insulating state by the addition of disorder and the increase of its strength.

I will also discuss the effects of different distributions from which the random potential is sampled, presenting calculations of superfluid fractions along the phase diagram that will show that, even though qualitative features of the superfluid, in particular the finger-like shape of the re-entrant region, exist regardless of the shape of physically reasonable distributions, quantitative differences arise as a consequence of their details. These differences are explained and justified by results obtained from analyzing the local properties of the system.

8.1 Local properties in different samples

Recall that the superfluid state forms from the connection of superfluid puddles immersed in an incoherent background, a feature that is ubiquitous to the Bose-glass [12]. Therefore, for a point on the phase diagram that approaches the BG from the SF side, we expect that the puddles are barely connected, establishing a weak superflow that results in a low superfluid fraction. A primary quantity that allows for the identification of the puddles is the wave function associated with the single-particle eigenstate with the largest occupation number, which is obtained by diagonalizing the single-particle density matrix of the system. As usual, the associated eigenvalue is, by definition, the condensate fraction of the system. Another important quantity is the local compressibility 휅푖, which comprises the fluctuations of the number of particles occupying lattice site 푖. In this sense, a site through which particles are actively tunnelling will have a larger 휅푖 than a site where the particles are more localized. Notice that this quantity is not related to the global compressibility 휅. In what follows, I will relate these two quantities to the underlying random potential of two particular samples: one with a low superfluid fraction of 휌푠 = 0.03% = 0.05×[휌푠] and the other with a large 휌푠 = 3.00% = 4.53 × [휌푠]. Notice that the superflow in the former is almost a hundred times weaker than in the latter. To start with, consider the three-dimensional maps shown in Fig. 8.1, where the physical properties 휖푖 (top), 휓푖 (center) and 휅푖 (bottom) are plotted in colors, with the low-휌푠 sample on the left panels and the high-휌푠 one on the right panels. For the wave function 휓, lattice sites with 휓푖 < 0.05 have been blanked, and the same has been done for local compressibilities 휅푖 < 0.5, in order to better visualize their spatial structures. 
As one can see, it is quite difficult to point out any particular differences between the maps of the energy shifts 휖푖 that characterize the disorder realizations. However, the wave function of the high-휌푠 sample is noticeably more extended, occupying a larger fraction of the lattice than the one associated with the low-휌푠 sample, which is more localized. The difference regarding the local compressibilities is striking: the plots show that density fluctuations are a lot stronger in the high-휌푠 sample, indicating that particles are able to tunnel through the structures of the random potential more effectively in this case, consistent with a stronger superflow that results in a larger superfluid fraction. Even though these maps allow for an interesting analysis of the distribution of the superfluid puddles, the percolation character of the transition is made more explicit if we consider two-dimensional maps resulting from the integration of properties along one of the spatial dimensions, for instance the z-direction. These are shown in Fig. 8.2 where, again, the left panels show properties of the low-휌푠 sample and the right ones of the high-휌푠 one, the same samples as in the previous figure. We can see that the wave function of the low-휌푠 sample, panel (a), is composed of two main puddles that are slightly connected at the center of the map, whereas for the high-휌푠 one, panel (b), 휓 is more strongly connected throughout the lattice. This is made more evident by the maps of the probability densities |휓|², which show that most of the particles are actually occupying the puddle located at the top of the map in (a). Conversely, |휓|² shows that for (b) the puddles are more homogeneously occupied. 
This is deeply reflected in the maps of the local compressibilities: for the high-휌푠 sample there are quite large density fluctuations on lattice sites within the puddles, while for the low-휌푠 one, even in the puddles the particles are a lot more localized, which is again consistent with the strength of the superflow in both samples. Notice that there is

Figure 8.1: Three-dimensional maps of the local properties 휖푖: energy shift (top), 휓푖: wave function (center), 휅푖: local compressibility (bottom) for different samples. (a) 휌푠 = 0.03% = 0.05 × [휌푠], (b) 휌푠 = 3.00% = 4.53×[휌푠]. Control parameters are 푈/푡 = 72.0, Δ/푈 = 0.5, 훽 = 15.0 and 퐿 = 6.

Figure 8.2: Local properties integrated over the z-direction for the same samples as in the previous figure. From top to bottom: wave function 휓푖, probability density |휓푖|² and local compressibility 휅푖. See text for discussion.

Figure 8.3: Integrated wave function 휓푖 (top) and random potential 휖푖 (bottom) for the same samples of the previous figure. (a) Low-휌푠 sample, (b) high-휌푠 sample. See text for discussion.

a spot of relatively large compressibility in the low-휌푠 sample that corresponds exactly to where the two main puddles that form 휓 for this sample connect. This type of mapping also makes explicit the relation between the spatial distribution of the wave function associated with the macroscopically occupied single-particle state and the particular realization of the random potential. In Fig. 8.3, we show again the wave function 휓푖 and the integrated random potential 휖푖 for the same samples as before: low-휌푠 on the left panels and high-휌푠 on the right ones. It is strikingly evident that the wave function occupies regions where the random potential shifts down the occupation energy of lattice sites, being suppressed where the shifts are positive. However, a fundamental observation is that a more “structured” random potential enhances the superfluid fraction of the system. For the low-휌푠 sample the integrated random potential is quite homogeneous, with almost the whole map occupying the center of the color scale, which corresponds to zero energy shifts. In contrast, the random potential of the high-휌푠 sample has a structure of valleys, where the wave function is large, and hills, where it is suppressed. This corresponds to the mechanism underpinning the resurgence of the superfluid state from the Mott-insulator as we add disorder to the lattice, namely the emergence of the re-entrant superfluid. For the clean system, a sufficiently strong interaction term 푈 ends up localizing the particles, which are then energetically forbidden to hop to previously occupied lattice sites. However, the addition of disorder lowers the cost of multiple occupation in some regions, which allows particles to locally hop – the Bose-glass state. When such regions connect throughout the lattice, the particles can find patches or “halls” through which to coherently tunnel over the lattice, establishing a global superflow that results in a finite superfluid fraction. Up to
Upto a certain point, increasing disorder facilitates the appearance of these patches, further increasing the superfluid fraction. On the other hand, if disorder is too strong there is a greater chancethat the halls are “closed” positive energy shifts that the particles cannot traverse. Similarly, too large negative shifts can also create barriers since particles become localized in these overly occupied lattice sites. In any case, this amounts to the ceasing of the superflow. For comparison purposes, we show in Fig. 8.4 the three-dimensional maps and the corresponding integrated quantities for a sample that has a superfluid fraction close to the average, 휌푠 = 0.62% = 0.93 × [휌푠]. As one would expect, it exhibits intermediate features in between the low-휌푠 and high-휌푠 samples that we have just discussed. In particular, the wave function is quite representative of that. Notice again that the random potential in arranged in a way that creates a hall through which the particles tunnel.

8.2 Effects of different disorder distributions

The re-entrant behavior of the superfluid state of the disordered Bose-Hubbard model at unit filling and three spatial dimensions is present for all three most common, relevant and physically significant types of probability distributions from which the disorder profile is sampled: box [86], speckle-field [18] and Gaussian (Sec. 6.2.3). The finger-like shape of this part of the phase diagram is explained by the formation and destruction of patches in the random potential discussed in the previous Section. Notice that negative shifts relative to the chemical potential result in deep wells that can have two different effects. Intermediate values of 휖, compared to the MI gap ∼ 푈/2, can help lower the energy cost associated with multi-particle occupation, leading to delocalization of particles. However, for too large negative shifts, the wells may be so deep as to localize particles. Positive shifts, on the other hand, increase the cost associated with site occupation and serve to prevent hopping and delocalization. Consider then these three types of distributions and the resulting superfluid fractions calculated for Δ/푈 = 0.5 and different interaction-tunnelling ratios that scan the superfluid phase to the largest extent, shown in Fig. 8.5. These points traverse the interior of the SF phase, starting from the regular SF, through the SF-MI transition point of the clean system, and terminating in the RSF part of the phase diagram. It is evident that, for large superfluid fractions, there is very little difference, as the SF is able to screen the disordered potential. For smaller values of 휌푠, corresponding to an increase in 푈/푡, the effects of disorder become pronounced and the differences in the distributions lead to quantitative differences in the superfluid order parameter. According to these disorder distributions, ∼ 21.13% of the sites of the box disorder have 휖 > Δ, compared to ∼ 15.87% for Gaussian and ∼ 13.53% for speckle distributions. Conversely, for 휖 < −Δ, box and
Conversely, for 휖 < −Δ, box and

Figure 8.4: Maps for a sample with average superfluid fraction 휌푠 = 0.62% = 0.93 × [휌푠]. See text for discussion.

Gaussian disorders have the same numbers as before, but the exponential distribution has no such energy shifts. This means that, in the exponential case, only the positive tail of the distribution contributes to localization effects, whereas the negative side facilitates delocalization. It is then unsurprising that the SF is enhanced for the speckle relative to the other types of disorder.

Figure 8.5: Comparison of 휌푠 for three different distributions of the random potential: box, Gaussian and speckle. (a) Probability distributions from which the energy shifts are sampled. Inset: zoom on the tails of the distributions. (b) Calculated superfluid fractions for different values of 푈/푡 along the superfluid phase with Δ/푈 = 0.5, 훽 = 15.0.

The differences between box and Gaussian disorders are due to the interplay of

1. the number and distribution of sites that have sufficiently large (negative) disorder to create SF puddles;

2. the number and distribution of sites that create patches of impassible terrain either due to large positive or negative shifts relative to the chemical potential.

It appears to be the case that the SF in this regime of parameters is enhanced more by delocalization effects due to 1 than it is suppressed by localization effects due to 2. This is justified by the fact that, even though the Gaussian distribution is unbounded, which permits energy shifts larger in magnitude than in the box distribution, there is very little difference in the values of the superfluid fraction. At the same time, shifts from the speckle distribution can be even larger, since it has a more extended tail, but this effect is compensated and actually reversed by the absence of negative shifts in this distribution.
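The quoted tail fractions can be reproduced under the assumption, inferred from the numbers themselves rather than stated explicitly in the text, that all three distributions are normalized to zero mean and a common standard deviation Δ. A minimal sketch:

```python
import math

def tail_fractions():
    """Fraction of sites with energy shift eps > Delta for three zero-mean
    disorder distributions, each assumed normalized to standard deviation
    Delta. The fractions are scale invariant, so Delta drops out."""
    # Box: uniform on [-sqrt(3)*Delta, sqrt(3)*Delta]
    box = (math.sqrt(3) - 1) / (2 * math.sqrt(3))
    # Gaussian with sigma = Delta: one-sided tail beyond one sigma
    gaussian = 0.5 * math.erfc(1 / math.sqrt(2))
    # Speckle: exponential with mean Delta, shifted to zero mean,
    # so P(eps > Delta) = P(x > 2*Delta) = exp(-2)
    speckle = math.exp(-2)
    return box, gaussian, speckle
```

Evaluating these expressions gives ≈ 21.13%, 15.87% and 13.53% respectively, matching the percentages quoted above.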

Chapter 9

Finite-size scaling of quantities

In the last chapters, we have studied the effects of averaging over different samples of the disorder ensemble ℰΔ for different values of interaction strength and tunnelling amplitude, investigating the behavior of the associated fluctuations that express how relevant particular realizations of the random potential are in determining the global features of the system that are encapsulated in order parameters. This analysis has considered the superfluid part of the phase diagram at unit filling, including the region of the re-entrant superfluid, where we have seen that the mechanism by which the lattice supports a global superflow is conceptually different from that of the regular superfluid. In particular, due to the percolation character of the SF-BG phase transition, we have observed large deviations from Gaussian behavior of the distributions of the order parameters over ℰΔ in the vicinity of the BG, where the superfluid fraction is small. We have seen that in this situation different samples from ℰΔ tend to look relatively much more different in comparison to points in the phase diagram where 휌푠 is larger, which indicates that the superfluid has the “healing” property of screening the random potential, rendering it practically irrelevant for small to intermediate values of the interaction-tunnelling ratio, chiefly for 푈/푡 < 29.34(2), the critical point of the clean system. On the other hand, when there are small amounts of superfluid, differences between samples are extremely relevant since, in this case, the superflow is established by the appearance of hallways or patches in the disorder terrain that allow for the delocalization of atoms via tunnelling across the lattice, as we have demonstrated by a remarkable direct relation between the disorder profile and the condensate wave function, the single-particle eigenstate of the system with the largest occupation.
In spite of these important results, we have not yet considered how the size of the lattice can affect these features. At first glance, larger samples would be able to explore the distribution of the random potential to a better extent, which would facilitate exploring the disorder ensemble in the averaging process, particularly in the sense that one would possibly need fewer samples to confidently estimate the statistics of physical properties of the system. However, in the situations where we have seen that disorder realizations play the dominant role in determining the global behavior, it is unclear to what extent a larger lattice would change the fluctuations within ℰΔ. In the present Chapter, I will use a systematic approach to address the relevance of the lattice size 퐿 in determining the disorder statistics of the order parameters by comparing properties obtained from exploiting disorder ensembles ℰΔ(퐿) for different 퐿, up to sizes that are comparable to experimental systems. We will see that, in spite of large finite-size effects that are present close to the Bose-glass phase, the order parameters remain self-averaging quantities within the entire superfluid phase.

9.1 Disorder averages and fluctuations

We start by considering what happens to the average values of the order parameters [휌푠] and [휅] as the lattice size 퐿 is changed, as shown in Fig. 9.1 for Δ/푈 = 0.5 and different values of 푈/푡. It is possible to see that, for intermediate values of 푈/푡, where the superfluid fraction is large, there is very little difference in the values of both order parameters after averages over disorder samples are performed. This indicates that even the smallest lattice studied, 퐿 = 6, is capable of faithfully capturing the physics of the system from both qualitative and quantitative points of view. However, as 푈/푡 increases and the system approaches the BG phase boundary, with fewer particles occupying the superfluid state, the values of the order parameters start to depend on the lattice size. In particular, for the superfluid fraction at 푈/푡 = 72.0 this dependence is quite noticeable. For larger systems, the resulting superfluid fractions are smaller. In spite of that, there is convergence to a finite value in the thermodynamic limit. Regarding the scaling of the magnitude of fluctuations from sample to sample within the disorder ensembles ℰΔ(퐿), a similar analysis is shown in Fig. 9.2 for the variance and the relative variance of both order parameters along the line Δ/푈 = 0.5. As we have already noticed in Section 7.3, for both 휌푠 and 휅 the general behavior is an increase of the relative variances as we approach the Bose-glass phase. Here, we verify that this increase also happens for larger lattices, at approximately the same rate, as can be seen from the fact that the slopes of the lines of different colors are practically the same. Consequently, this cannot possibly be taken as a finite-size effect; it is an inherent feature of the physical system: samples do look relatively more different close to the phase transition regardless of the lattice size under consideration.
It is also important to notice that, for all the points studied here, the relative variances decrease with increasing lattice size. This can be seen from the fact that lines with different colors never cross, and it points towards the self-averaging property of the order parameters. However, at the point with the largest 푈/푡, the relative variance of the superfluid fraction behaves peculiarly, changing very little as the lattice size increases. We will further discuss the scaling of the relative variances and address the question of self-averaging later in this chapter. As for the raw variances, in the case of the compressibility they increase monotonically with the interaction-tunnelling ratio 푈/푡. Nonetheless, the variance associated with ℰΔ(퐿 = 10) is considerably smaller than that of the 퐿 = 6 ensemble. In contrast, the raw variance of the superfluid fraction has a peak in the middle of the superfluid phase. This is expected because, for 푈/푡 → 0, we have [휌푠] → 100%, so the superfluid is able to completely screen the disorder profile, rendering (Δ휌푠)² → 0. Concurrently, close to the BG phase boundary, [휌푠] → 0 itself, as it is the order parameter of the transition, thus naturally (Δ휌푠)² → 0 as well. This necessarily implies the existence of a maximum in the variance at some point as we cross the SF phase all the way up to the BG. Curiously, the peak is located at values of 푈/푡 close to the clean critical point, which could indicate the presence of a crossover between the SF and the RSF regions. However,
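Operationally, the quantities compared above follow directly from the per-sample estimates collected over the disorder ensemble. A minimal sketch of how the disorder average, variance and relative variance of an order parameter could be computed (the function name is illustrative, not from the text):

```python
import numpy as np

def disorder_statistics(samples):
    """Disorder average [X], variance (dX)^2, and relative variance
    D_X = (dX)^2 / [X]^2 over an ensemble of per-sample estimates,
    e.g. one superfluid-fraction value per disorder realization."""
    x = np.asarray(samples, dtype=float)
    avg = x.mean()
    var = x.var(ddof=1)  # unbiased variance over the disorder ensemble
    return avg, var, var / avg**2
```

The relative variance returned here is the quantity whose decrease with 퐿 signals self-averaging, discussed later in this chapter.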

Figure 9.1: Scaling of the disorder-averaged order parameters for different values of 푈/푡 and Δ/푈 = 0.5, with 훽 = 15.0. (a) Averages of superfluid fraction (left) and compressibility (right), both in logarithmic scale, as a function of the inverse lattice size 1/퐿. Lattice sizes used were 퐿 = 6, 8, 10, 12, 16 and 20. The dashed lines indicate the value for the largest system studied. Error bars are too small to be seen. (b) Superfluid fraction and compressibility maps with red crosses marking the points that are shown in (a).

the peak becomes less pronounced when we increase the lattice size, contrary to what one would expect in a crossover situation. We then conclude that this is a finite-size fortuity. Even though (Δ휅)² does not display a maximum in the present case, we do expect this feature to appear within the BG phase – which means larger values of 푈/푡 than those that we have studied – for the bounded types of disorder distributions that can realize the Mott-insulating phase. The analysis of the two lowest moments of the probability distribution of the order parameters, namely the average and the variance, allows us to conclude that finite-size effects in the process of exploring the disorder ensemble are more prominent when there is less superfluid in the system, an aspect that has been observed several times along this dissertation for different quantities. We could extend the same analysis to higher moments but, as we have previously noticed, given

Figure 9.2: Variances and relative variances of the order parameters for different values of 푈/푡 along Δ/푈 = 0.5, with 훽 = 15.0 and lattice sizes 퐿 = 6, 8 and 10. (a) Compressibility per particle. (b) Superfluid fraction. For completeness, values of the averages for the largest lattice size are also shown. See text for discussion.

the increase of statistical noise in these quantities, they can be deceiving. Instead, we prefer to study the behavior of the probability distributions 풫퐿(푋) under size scaling to the thermodynamic limit.

9.2 Probability distributions

In the previous section, we have analyzed specific statistical quantities related to the different disorder ensembles that can be constructed for different lattice sizes, without trying to establish an explicit equivalence between them. However, recall that what we want to achieve by exploring the disorder ensemble is actually to access the features of the probability distributions of the random potential to the largest extent. In order to compare probability distributions of quantities such as order parameters over these ensembles, we must ensure that their constituent samples explore the disorder distribution evenly. Notice that, if we consider the same number of samples for ensembles of different lattice sizes, the one with the lowest 퐿 will effectively sample the disorder distribution to a lower degree, since each sample has a smaller number of lattice sites. We then define the effective size 푆Δ of the disorder ensemble as

푆Δ ≡ 푁sites × 푁samples, (9.2.1)

where 푁sites = 퐿³ is the number of lattice sites in each sample and 푁samples is the number of samples in each ensemble. The comparison between probability distributions is therefore made for ensembles with the same 푆Δ.¹

¹This is necessary to compare probability distributions. However, for specific statistical quantities such as the average, it suffices that we consider a number of samples large enough so that they are equilibrated in the disorder sense.
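In practice, Eq. (9.2.1) fixes the number of samples at each lattice size once an effective size is chosen. The sketch below uses a hypothetical value 푆Δ ≈ 120 000, chosen because it is roughly consistent with the sample counts quoted later in the text (e.g. 556 samples at 퐿 = 6, 120 at 퐿 = 10); the counts it produces at 퐿 = 8 and 퐿 = 12 differ from the quoted ones by one sample, so this is only an approximate reconstruction:

```python
def n_samples(effective_size, L):
    """Samples needed so that ensembles of different lattice sizes share
    the same effective size S_Delta = L**3 * N_samples (Eq. 9.2.1)."""
    return round(effective_size / L**3)

# Hypothetical effective size, roughly consistent with the text's counts
S = 120_000
counts = {L: n_samples(S, L) for L in (6, 8, 10, 12, 16)}
```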

To start with, consider a point in parameter space that is deep inside the superfluid phase, with 푈/푡 = 22.0 and Δ/푈 = 0.5, which has a superfluid fraction of [휌푠] = 51.77%. Fig. 9.3 shows the obtained histograms of both order parameters for lattice sizes from 퐿 = 6 to 퐿 = 16. For this case, we have 퐿 = 6[556], 8[235], 10[120], 12[70], 16[29], where the number of samples

Figure 9.3: Scaling of 풫퐿 for the relative distributions of 휌푠 (top panels) and 휅 (bottom panels) for 푈/푡 = 22.0, Δ/푈 = 0.5, which is indicated by the red cross in the Δ/푈 vs. 푈/푡 map of [휌푠]. Alongside each distribution is the associated normal probability-probability plot.

considered to construct each ℰΔ(퐿) is indicated in brackets [...]. We notice initially that the distributions of the superfluid order parameter are consistently Gaussian for small to intermediate lattice sizes (퐿 = 6 to 10), whereas this feature extends to the largest lattice studied in the case of the compressibility. In both cases, from 퐿 = 6 to 퐿 = 12, we observe a narrowing of the distributions, indicating that the attenuation of disorder fluctuations for larger lattice sizes is consistent with what one would expect from the Central Limit Theorem. However, there are also two aspects that are extremely important when studying disorder statistics:

1. Considerable deviation from Gaussian behavior in 풫퐿(휌푠) for 퐿 ≥ 12;

2. Both 풫퐿(휌푠) and 풫퐿(휅) broaden from 퐿 = 12 to 퐿 = 16.

Both features indicate the breakdown of the CLT and are, in fact, connected to each other. From the width of the relative histograms, it is possible to conclude that, from 퐿 = 6 to 퐿 = 12, the associated standard deviation goes from about 5% to about 1% in the case of the superfluid fraction. Now recall that, for every single disorder realization, we run the simulation long enough so that the statistical noise due to the Monte Carlo sampling procedure is reduced to 2%. Therefore, in the present case, for intermediate to large lattice sizes, we are observing a mixing of the sampling error within each disorder realization, and the disorder error associated to different samples – the latter is the one that we are interested in. Furthermore, the error associated to sampling is not reduced with increasing lattice sizes – it only decreases if we run simulations longer! That is the reason why we observe feature number 2. Regarding feature number 1, the mixing of two noises makes with that the resulting statistics does not exhibit the same behavior as the one coming from the individual noises, since they are now correlated. We then have no reason to expect Gaussian behavior in this scenario. Notice that deviations from normal behavior are smaller for the compressibility, since the width of the histograms are larger than those for the superfluid fraction, which means that noise coming from disorder averaging are comparatively larger than those coming from MC sampling in this case. These observations make clear the interesting fact that it is actually more challenging to isolate effects coming from exploiting the disorder ensemble when the superfluid fraction is larger. In spite of that, we cannot forget that the magnitudeof deviations coming from sample to sample is very small when 휌푠 is large: in this case, it is only a few percent. Yet another reason for deviation from Gaussian behavior in the largest lattice studied (퐿 = 16) is the small number of samples (29). 
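The mixing of the two noises can be illustrated with the law of total variance: when each measured value is the true per-sample value plus independent MC noise, the observed ensemble variance is approximately the disorder variance plus the MC variance. A sketch with hypothetical noise scales matching the percentages quoted above (1% disorder spread, 2% MC noise):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical scales: sample-to-sample disorder spread and per-sample
# Monte Carlo noise, as orders of magnitude suggested in the text
sigma_dis, sigma_mc = 0.01, 0.02
n = 200_000

true_values = 0.5 + sigma_dis * rng.normal(size=n)      # disorder variation
measured = true_values + sigma_mc * rng.normal(size=n)  # + MC sampling noise

# Law of total variance: Var(measured) = sigma_dis**2 + sigma_mc**2,
# so here the observed histogram width is dominated by the MC noise
observed = measured.var()
expected = sigma_dis**2 + sigma_mc**2
```

When sigma_mc exceeds sigma_dis, as in this sketch, the histogram width stops reflecting the disorder fluctuations of interest, which is exactly the situation described for ρₛ at 퐿 ≥ 12.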
Regarding this, we can say that the histogram for this ensemble is not equilibrated in the disorder sense – more samples are needed. However, increasing the number of 퐿 = 16 samples is computationally expensive, since we would also have to largely increase the number of samples in the ensembles for smaller lattice sizes if we want to keep the effective sizes the same. Next, we consider a point in the phase diagram where the superfluid fraction is smaller: [휌푠] = 2.17% for 푈/푡 = 62.0, Δ/푈 = 0.5, shown in Fig. 9.4. The lattice sizes and numbers of samples studied are the same as in the previous case. Here, we notice that the skewness of the distributions for the smallest lattice size, a feature that has been observed and discussed in Sec. 7.4, is largely reduced for larger lattices, which corresponds to a more homogeneous distribution of the superfluid puddles for different samples. However, notice that the relative distributions are much broader than before. Even for the largest lattice, the relative standard deviation associated with the disorder

Figure 9.4: Same quantities as in Fig. 9.3 but for 푈/푡 = 62.0, Δ/푈 = 0.5.

averages is about 10%, five times larger than the stochastic errors in each sample due to the MC sampling of the partition function of the system. Consequently, we do not observe the same broadening as before, which corroborates our conclusion relating that feature to the mixing of both noises. On the other hand, the fact that in the present case there are also deviations from normality for 퐿 = 16 indicates that this feature is related to the small number of samples rather than to the mixing of the noises.

Finally, we consider a point even closer to the Bose-glass phase boundary: 푈/푡 = 72.0 and Δ/푈 = 0.5 for which [휌푠] = 0.66%. For this point, whose distributions are shown in Fig. 9.5, we have used lattice sizes and number of samples given by 퐿 = 6[1250], 8[527], 10[270], 12[156], 16[66], 20[34]. We observe that the deviations from Gaussian behavior, namely the broadness of

Figure 9.5: Same quantities as in Figs. 9.3 and 9.4 but for 푈/푡 = 72.0, Δ/푈 = 0.5.

the relative distributions 풫퐿(푋/[푋]) for both 휌푠 and 휅 and their concurrent positive skewness – a feature discussed in Sec. 7.4 – are attenuated as the lattice size is increased. In fact, these distributions become remarkably Gaussian-like, as can be noticed by comparing the quantile-quantile plots for increasing values of 퐿. However, in this case, we cannot attribute the mitigation of non-Gaussian features to the “healing” property of the superfluid. Actually, the scaling of 휌푠 to the thermodynamic limit, displayed in Fig. 9.6, shows that the smallest lattice size, 퐿 = 6, overestimates [휌푠] by a factor of three when compared to the 퐿 = 20 lattice: 0.66% against 0.26%.

Figure 9.6: Size scaling of [휌푠] for 푈/푡 = 72.0, Δ/푈 = 0.5. Lattice sizes used were 퐿 = 6, 8, 10, 12, 16 and 20. The red dotted line is a fit to the polynomial 푓(푥) = 푎 + 푏푥², which gives an estimate of [휌푠] = (0.19 ± 0.02)% in the thermodynamic limit 퐿 → ∞.
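This extrapolation can be sketched as a linear fit in the variable 푥², assuming, as the figure suggests, that 푥 = 1/퐿; the data below are hypothetical values generated from the fitted form itself, not the thesis data:

```python
import numpy as np

def extrapolate(Ls, rho_s):
    """Fit rho_s(L) = a + b*(1/L)**2 and return a, the L -> infinity
    estimate (a linear fit in the variable x**2 with x = 1/L)."""
    x2 = 1.0 / np.asarray(Ls, dtype=float) ** 2
    b, a = np.polyfit(x2, np.asarray(rho_s, dtype=float), 1)
    return a

# Hypothetical data following the fitted form exactly (a = 0.19, b = 17.0)
Ls = np.array([6.0, 8.0, 10.0, 12.0, 16.0, 20.0])
rho = 0.19 + 17.0 / Ls**2
```

On exact data of this form the fit recovers the thermodynamic-limit value 푎 to numerical precision; on real data the scatter of the points sets the uncertainty of the extrapolation.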

We conclude, then, that the reason why the distributions get narrower and Gaussian-like is solely that the percolation mechanism, for larger lattice sizes, is somewhat more “homogeneous”. Physically, what happens is that, for larger lattices, it becomes less and less likely to obtain realizations of the random potential that allow for aberrantly large or low intensities of the superflow. In other words, the superfluid puddles and the formation of patches throughout the lattice are more evenly distributed for large lattice sizes. As we have seen in Sec. 8.1, the wave function associated with the single-particle eigenstates tends to occupy regions where the energy shifts of the disordered potential are predominantly negative. In Fig. 9.7, we show that the distribution of such regions is remarkably more homogeneous for an 퐿 = 12 lattice than for the smallest, 퐿 = 6 (Fig. 8.3). Consequently, we expect that, in the thermodynamic limit, one single disorder realization is capable of capturing the physics of the system concerning its superfluid properties: within the SF phase, the order parameters are self-averaging quantities, which will be corroborated in the next section. As a final remark on this section, in spite of the narrowing of the distributions, we notice that, for 퐿 = 20, a lattice size and number of atoms already comparable to experimental systems [12], the effects of different realizations of the random potential correspond to a variation of about 20% to 30% in the values of the superfluid fraction. This stresses that both finite-size scaling and disorder averaging are imperative for numerical calculations close to the BG phase boundary and that, if one is able to experimentally control the uncertainty in parameters such as the disorder

Figure 9.7: Integrated disorder shifts 휖푖 for an 퐿 = 12 lattice. In the left panel, the sample with the smallest superfluid fraction registered in the disorder ensemble with 156 samples: 휌푠 = 0.005%. In the right panel, the one with the largest: 휌푠 = 0.8%.

strength Δ on a scale smaller than the width of these distributions, one must explicitly address the problem of averaging over different disorder realizations when measuring properties related to the superfluid.

9.3 Relative variances and the self-averaging question

In order to properly address the question of the self-averaging property of the order parameters, we must study the scaling of the relative variances. Recall from Sec. 2.4 that, for the three-dimensional system that the DBHM describes, an observable 푋 is a strongly self-averaging quantity if 풟푋(퐿) ∼ 1/퐿³, whereas it is a weakly self-averaging quantity if

풟푋(퐿) ∼ 1/퐿ᵃ, with 푎 < 3. Recall also that non-self-averaging features are expected to be relevant only when the correlation length 휉 of the property 푋 is large or, in more specific cases, when there are large statistical fluctuations related to the random potential, which comprise the physics of Griffiths phases. For that reason, we will only present the scaling of 풟휌푠,휅 for points in the phase diagram close to the Bose-glass phase, approached from the superfluid side. Consider initially the point 푈/푡 = 62.0, Δ/푈 = 0.5, shown in Fig. 9.8. We notice that the data for the compressibility are consistent with an exponent that corresponds to strong self-averaging, even though the smallest lattice size appears to deviate from the 3-power law due to the presence of the finite-size effects that have already been discussed. The situation for the superfluid fraction is more subtle since, as we have also verified, it exhibits more prominent finite-size

Figure 9.8: Scaling of the relative variances for 푈/푡 = 62.0, Δ/푈 = 0.5. Lattice sizes used were 퐿 = 6, 8, 10, 12 and 16. See text for discussion.

effects. It seems to be the case that the exponent is smaller than 3, which would indicate weak self-averaging for this quantity. However, notice that, as we increase the lattice size, fits with powers approaching the value 3.0 look more suitable. This scenario is even clearer if we consider the point 푈/푡 = 72.0, Δ/푈 = 0.5, shown in Fig. 9.9. Recall that, for this case, we have more samples and have scaled to larger lattices (퐿 = 20), therefore obtaining a better estimate of the relative variances. It is evident that the

Figure 9.9: Scaling of the relative variances of the compressibility (left) and superfluid fraction (right) for 푈/푡 = 72.0, Δ/푈 = 0.5. Lattice sizes used were 퐿 = 6, 8, 10, 12 and 16. See text for discussion.

compressibility exhibits strong self-averaging behavior, since its relative variance undoubtedly scales with an 푎 = 3 power law, as indicated by the dashed dark-green line that falls right on the calculated points. The situation for the superfluid order parameter is, again, more complicated. Although the general trend shows a decrease in the relative variance with increasing lattice size, we are unable to scale to very large values of 퐿 and perform sufficient disorder averaging in order to confidently estimate the scaling exponent. To get some idea of what the scaling might be, we fit two different subsets of the data that disregard small lattice sizes, in order to reduce the bias associated with the large non-Gaussian behavior. The red dashed line is obtained by excluding the 퐿 = 6 points, and the corresponding fit gives 푏 = 2.13 ± 0.14. We also fit the blue dashed line by additionally excluding the 퐿 = 8, 10 lattice sizes, obtaining 푏 = 2.48 ± 0.11. For larger values of 퐿 it might very well be the case that 푏 → 3, thereby suggesting strong self-averaging even this close to the SF-BG boundary. On the other hand, for practical purposes, even weak self-averaging is remarkable, given that the critical point cannot support self-averaging in any finite system.
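The exponent fits described above amount to linear regression in log-log space. A minimal sketch, using synthetic relative variances generated with a known exponent rather than the thesis data:

```python
import numpy as np

def scaling_exponent(Ls, D):
    """Estimate a in D_X(L) ~ 1/L**a from a log-log linear fit,
    i.e. the negative slope of log D versus log L."""
    slope, _ = np.polyfit(np.log(np.asarray(Ls, dtype=float)),
                          np.log(np.asarray(D, dtype=float)), 1)
    return -slope

# Synthetic relative variances decaying with a known exponent a = 3,
# mimicking the strongly self-averaging compressibility
Ls = np.array([6.0, 8.0, 10.0, 12.0, 16.0, 20.0])
D = 0.4 / Ls**3
```

Excluding the smallest lattice sizes before fitting, as done for the superfluid fraction, simply restricts the arrays passed to this fit; the spread of the exponents obtained from different subsets gives a sense of the systematic uncertainty.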

Chapter 10

Concluding remarks

I have presented a detailed study of the consequences of considering different realizations of quenched diagonal disorder in the Bose-Hubbard model when calculating properties related to the superfluid phase. In the language that I have developed here, this study comprises the exploitation of statistical properties of the disorder ensemble. Even though these properties can be excessively technical, I have tried to connect them to the physical properties of the system as much as possible. In particular, we have seen that the SF has the “healing” property of screening the disorder profile, which implies a certain degree of insensitivity to the specificities of the disorder when there are reasonably large amounts of atoms occupying the SF state. Conversely, close to the Bose-glass phase, the system is highly sensitive to the disorder realizations. These features were observed by monitoring the size of the fluctuations of the order parameters – superfluid fraction and compressibility – within the disorder ensemble for a range of interaction-tunnelling ratios and disorder strengths. With particular relevance, we have explicitly shown, via the calculation of the single-particle eigenstate with the largest occupation number, that the wave function of the condensate, strictly related to the establishment of the superflow, is tightly connected to the peculiarities of the disorder terrain. More specifically, the wave function tends to occupy regions where the random potential lowers the occupation energy of the lattice sites. These regions, when connected throughout the lattice, support a global superfluid that is expressed in terms of a finite superfluid fraction, whose intensity is directly related to how such connections are made. These results corroborate our understanding of the percolation mechanism describing different aspects of the phase diagram.
We were able to show that the strength of the disorder distribution plays the dominant role in determining the qualitative aspects of the phase diagram. The shape of the distribution from which the random potential is sampled comes into effect when quantitative comparisons are concerned and also with regard to finite-size effects. The study of the disorder statistics of the SF phase points, beyond any doubt, to the self-averaging of 휅 and 휌푠. This property extends throughout the SF phase, all the way up to the SF-BG boundary. The self-averaging is of the strong type for 휅. Although we have not been able to determine the exact value of the scaling exponent for 휌푠, it appears to be at least weakly self-averaging. As a consequence, we expect that most experiments with ultra-cold atomic gases can safely report observable values without being concerned about disorder averaging. Even near the SF-BG interface, we suspect that the statistical and systematic errors associated with imaging and time-of-flight based measurements will be much larger than the errors related to disorder averaging. However, finite-size errors might still be significant, especially when system sizes are small. Our results are directly applicable to different types of disorder experiments, as we have studied the effects of the unbounded Gaussian type of disorder, which is related to the exponential type of disorder of speckle fields [8], as well as the more idealistic box type of disorder that can be realized in experiments with homogeneous traps [162, 163]. Although we have only reported data for 휌푠 and 휅, the general self-averaging behavior extends to other quantities that we have also studied, such as the energy of the system, which exhibits strong self-averaging, and the condensate fraction (푛0).
The latter exhibits features similar to those of the superfluid fraction, apparently being at least weakly self-averaging, but once again we were unable to determine the exact value of the scaling of the relative variances for this case, which is even more subtle because it is computationally challenging to attenuate the statistical errors that come from the sampling procedure, since 푛0 results from the diagonalization of the single-particle density matrix of the system – a non-linear operation. We have also considered larger values of the disorder strength Δ/푈 and found results in complete agreement with the conclusions presented here. It is also fundamentally important that the observed decrease of the relative variance with the lattice size is a feature observed in all three types of disorder distributions that were used: exponential, Gaussian and uniform. In the future, it would be interesting to see if the present analysis can be extended to the Bose-glass phase in order to characterize when non-self-averaging behavior sets in. There are also interesting prospects with regard to studying the percolation problem in these quantum systems. Particularly, owing to the tunnelling type of phenomena that governs the connection of percolating clusters, there might be significant differences in the fractal properties of the transition when compared against the standard classical picture [126]. Furthermore, I am interested in applying tools from statistical mechanics to the disorder ensemble, in a similar fashion to what is done with granular, jammed systems [164]. I would also like to explore the qualitative differences that can arise by considering annealed disorder in the Bose-Hubbard model, which could also be done within the framework of SSE.

References

[1] Bruno R. de Abreu et al. “Properties of the superfluid in the disordered Bose-Hubbard model”. In: Physical Review A 98.2 (2018). doi: 10.1103/physreva.98.023628.
[2] Daniel S. Fisher and Matthew P. A. Fisher. “Onset of superfluidity in random media”. In: Physical Review Letters 61.16 (1988), pp. 1847–1850. doi: 10.1103/physrevlett.61.1847.
[3] Matthew P. A. Fisher et al. “Boson localization and the superfluid-insulator transition”. In: Physical Review B 40.1 (1989), pp. 546–570. doi: 10.1103/physrevb.40.546.
[4] M. Ma, B. I. Halperin, and P. A. Lee. “Strongly disordered superfluids: Quantum fluctuations and critical behavior”. In: Physical Review B 34.5 (1986), pp. 3136–3143. doi: 10.1103/physrevb.34.3136.
[5] Werner Krauth, Nandini Trivedi, and David Ceperley. “Superfluid-insulator transition in disordered boson systems”. In: Physical Review Letters 67.17 (1991), pp. 2307–2310. doi: 10.1103/physrevlett.67.2307.
[6] Richard T. Scalettar, Ghassan George Batrouni, and Gergely T. Zimanyi. “Localization in interacting, disordered, Bose systems”. In: Physical Review Letters 66.24 (1991), pp. 3144–3147. doi: 10.1103/physrevlett.66.3144.
[7] L. Pollet et al. “Absence of a Direct Superfluid to Mott Insulator Transition in Disordered Bose Systems”. In: Physical Review Letters 103.14 (2009). doi: 10.1103/physrevlett.103.140402.
[8] M. White et al. “Strongly Interacting Bosons in a Disordered Optical Lattice”. In: Physical Review Letters 102.5 (2009). doi: 10.1103/physrevlett.102.055301.
[9] T. Micklitz, C. A. Müller, and A. Altland. “Strong Anderson Localization in Cold Atom Quantum Quenches”. In: Physical Review Letters 112.11 (2014). doi: 10.1103/physrevlett.112.110602.
[10] Marisa Pons and Anna Sanpera. “Vortex stability in Bose-Einstein condensates in the presence of disorder”. In: Physical Review A 95.3 (2017). doi: 10.1103/physreva.95.033626.
[11] Thereza Paiva et al. “Cooling Atomic Gases With Disorder”. In: Physical Review Letters 115.24 (2015).
doi: 10.1103/physrevlett.115.240402.
[12] Carolyn Meldgin et al. “Probing the Bose-Glass–Superfluid Transition using Quantum Quenches of Disorder”. In: Nature Physics 12.7 (2016), pp. 646–649. doi: 10.1038/nphys3695.

[13] Valentin V. Volchkov et al. “Measurement of Spectral Functions of Ultracold Atoms in Disordered Potentials”. In: Physical Review Letters 120.6 (2018). doi: 10.1103/physrevlett.120.060404. [14] Thomas Vojta. “Phases and phase transitions in disordered quantum systems”. In: AIP, 2013. doi: 10.1063/1.4818403. [15] Shang-Keng Ma. Modern Theory of Critical Phenomena (Frontiers in Physics). Addison-Wesley, 1976. isbn: 0805366709. [16] Philip W. Anderson. Basic Notions Of Condensed Matter Physics (Advanced Books Classics). Westview Press / Addison-Wesley, 1997. isbn: 978-0201328301. [17] P. M. Chaikin and T. C. Lubensky. Principles of Condensed Matter Physics. Cambridge University Press, 2012. isbn: 9780511813467. [18] Ushnish Ray. “Properties of dirty bosons in disordered optical lattices”. PhD thesis. University of Illinois at Urbana-Champaign, Apr. 2015.

[19] L. P. Kadanoff. “Scaling laws for Ising models near 푇푐”. In: Physics 2 (1966), pp. 263–272. [20] K. Wilson. “The renormalization group and the 휖 expansion”. In: Physics Reports 12.2 (1974), pp. 75–199. doi: 10.1016/0370-1573(74)90023-4. [21] Kenneth G. Wilson. “The renormalization group and critical phenomena”. In: Reviews of Modern Physics 55.3 (1983), pp. 583–600. doi: 10.1103/revmodphys.55.583. [22] B. D. Josephson. “The discovery of tunnelling supercurrents”. In: Reviews of Modern Physics 46.2 (1974), pp. 251–254. doi: 10.1103/revmodphys.46.251. [23] Matthew Robert White. “Ultracold atoms in a disordered optical lattice”. PhD thesis. University of Illinois at Urbana-Champaign, 2009. [24] Markus Greiner et al. “Quantum phase transition from a superfluid to a Mott insulator in a gas of ultracold atoms”. In: Nature 415.6867 (2002), pp. 39–44. doi: 10.1038/415039a. [25] Anthony J. Leggett. Quantum Liquids: Bose Condensation and Cooper Pairing in Condensed-Matter Systems (Oxford Graduate Texts). Oxford University Press, 2006. isbn: 978-0198526438. [26] L. Tisza. “Transport Phenomena in Helium II”. In: Nature 141.3577 (1938), pp. 913–913. doi: 10.1038/141913a0. [27] F. London. “The 휆-Phenomenon of Liquid Helium and the Bose-Einstein Degeneracy”. In: Nature 141.3571 (1938), pp. 643–644. doi: 10.1038/141643a0. [28] L. Landau. “Theory of the Superfluidity of Helium II”. In: Physical Review 60.4 (1941), pp. 356–358. doi: 10.1103/physrev.60.356. [29] Anthony J. Leggett. “A theoretical description of the new phases of liquid 3He”. In: Reviews of Modern Physics 47.2 (1975), pp. 331–414. doi: 10.1103/revmodphys.47.331. [30] D. M. Ceperley. “Path integrals in the theory of condensed helium”. In: Reviews of Modern Physics 67.2 (1995), pp. 279–355. doi: 10.1103/revmodphys.67.279.

[31] James F. Annett. Superconductivity, Superfluids, and Condensates (Oxford Master Series in Physics). Oxford University Press, 2004. isbn: 978-0-19-850756-7. [32] Immanuel Bloch. “Ultracold quantum gases in optical lattices”. In: Nature Physics 1.1 (2005), pp. 23–30. doi: 10.1038/nphys138. [33] N. N. Bogoljubov. “On a new method in the theory of superconductivity”. In: Il Nuovo Cimento 7.6 (1958), pp. 794–805. doi: 10.1007/bf02745585. [34] Ana Maria Rey et al. “Bogoliubov approach to superfluidity of atoms in an optical lattice”. In: Journal of Physics B: Atomic, Molecular and Optical Physics 36.5 (2003), pp. 825–841. doi: 10.1088/0953-4075/36/5/304. [35] E. L. Pollock and D. M. Ceperley. “Path-integral computation of superfluid densities”. In: Physical Review B 36.16 (1987), pp. 8343–8352. doi: 10.1103/physrevb.36.8343. [36] V. G. Rousseau. “Superfluid density in continuous and discrete spaces: Avoiding misconceptions”. In: Physical Review B 90.13 (2014). doi: 10.1103/physrevb.90.134503. [37] J. Larson, A. Collin, and J. P. Martikainen. “Multiband bosons in optical lattices”. In: Physical Review A 79.3 (2009). doi: 10.1103/physreva.79.033603. [38] Carlos A. Parra-Murillo, Javier Madroñero, and Sandro Wimberger. “Two-band Bose-Hubbard model for many-body resonant tunneling in the Wannier-Stark system”. In: Physical Review A 88.3 (2013). doi: 10.1103/physreva.88.032119. [39] Wei Xu, Maxim Olshanii, and Marcos Rigol. “Multiband effects and the Bose-Hubbard model in one-dimensional lattices”. In: Physical Review A 94.3 (2016). doi: 10.1103/physreva.94.031601. [40] B. Capogrosso-Sansone, N. V. Prokof’ev, and B. V. Svistunov. “Phase diagram and thermodynamics of the three-dimensional Bose-Hubbard model”. In: Physical Review B 75.13 (2007). doi: 10.1103/physrevb.75.134302. [41] Peter B. Weichman. “Dirty Bosons: Twenty Years Later”. In: Modern Physics Letters B 22.27 (2008), pp. 2623–2647. doi: 10.1142/s0217984908017187. [42] A. B. Harris.
“Effect of random defects on the critical behaviour of Ising models”. In: Journal of Physics C: Solid State Physics 7.9 (1974), pp. 1671–1692. doi: 10.1088/0022-3719/7/9/009. [43] A. B. Harris. “The ‘Harris criterion’ lives on”. In: Journal of Physics: Condensed Matter 28.42 (2016), p. 421006. doi: 10.1088/0953-8984/28/42/421006. [44] Thomas Vojta and Rastko Sknepnek. “Critical points and quenched disorder: From Harris criterion to rare regions and smearing”. In: physica status solidi (b) 241.9 (2004), pp. 2118–2127. doi: 10.1002/pssb.200404798. [45] Thomas Vojta. “Rare region effects at classical, quantum and nonequilibrium phase transitions”. In: Journal of Physics A: Mathematical and General 39.22 (2006), R143–R205. doi: 10.1088/0305-4470/39/22/r01.

[46] Thomas Vojta. “Quantum Griffiths Effects and Smeared Phase Transitions in Metals: Theory and Experiment”. In: Journal of Low Temperature Physics 161.1-2 (2010), pp. 299–323. doi: 10.1007/s10909-010-0205-4. [47] Thomas Vojta and José A. Hoyos. “Criticality and Quenched Disorder: Harris Criterion Versus Rare Regions”. In: Physical Review Letters 112.7 (2014). doi: 10.1103/physrevlett.112.075702. [48] Abel Weinrib and B. I. Halperin. “Critical phenomena in systems with long-range-correlated quenched disorder”. In: Physical Review B 27.1 (1983), pp. 413–427. doi: 10.1103/physrevb.27.413. [49] J. T. Chayes et al. “Finite-Size Scaling and Correlation Lengths for Disordered Systems”. In: Physical Review Letters 57.24 (1986), pp. 2999–3002. doi: 10.1103/physrevlett.57.2999. [50] Ferenc Pázmándi, Richard T. Scalettar, and Gergely T. Zimányi. “Revisiting the Theory of Finite Size Scaling in Disordered Systems: 휈 Can Be Less than 2/d”. In: Physical Review Letters 79.25 (Dec. 1997), pp. 5130–5133. doi: 10.1103/physrevlett.79.5130. [51] Robert B. Griffiths. “Nonanalytic Behavior Above the Critical Point in a Random Ising Ferromagnet”. In: Physical Review Letters 23.1 (1969), pp. 17–19. doi: 10.1103/physrevlett.23.17. [52] Barry M. McCoy and Tai Tsun Wu. “Random Impurities as the Cause of Smooth Specific Heats Near the Critical Temperature”. In: Physical Review Letters 21.8 (1968), pp. 549–551. doi: 10.1103/physrevlett.21.549. [53] Barry M. McCoy and Tai Tsun Wu. “Theory of a Two-Dimensional Ising Model with Random Impurities. I. Thermodynamics”. In: Physical Review 176.2 (1968), pp. 631–643. doi: 10.1103/physrev.176.631. [54] B. M. McCoy and T. T. Wu. “Theory of a Two-Dimensional Ising Model with Random Impurities. II. Spin Correlation Functions”. In: Physical Review 188.2 (1969), pp. 982–1013. doi: 10.1103/physrev.188.982. [55] M. J. Thill and D. A. Huse. “Equilibrium behaviour of quantum Ising spin glass”.
In: Physica A: Statistical Mechanics and its Applications 214.3 (1995), pp. 321–355. doi: 10.1016/0378-4371(94)00247-q. [56] Muyu Guo, R. N. Bhatt, and David A. Huse. “Quantum Griffiths singularities in the transverse-field Ising spin glass”. In: Physical Review B 54.5 (1996), pp. 3336–3342. doi: 10.1103/physrevb.54.3336. [57] H. Rieger and A. P. Young. “Griffiths singularities in the disordered phase of a quantum Ising spin glass”. In: Physical Review B 54.5 (1996), pp. 3328–3335. doi: 10.1103/physrevb.54.3328. [58] A. P. Young and H. Rieger. “Numerical study of the random transverse-field Ising spin chain”. In: Physical Review B 53.13 (1996), pp. 8486–8498. doi: 10.1103/physrevb.53.8486.

[59] Thomas Vojta and Jörg Schmalian. “Quantum Griffiths effects in itinerant Heisenberg magnets”. In: Physical Review B 72.4 (2005). doi: 10.1103/physrevb.72.045438. [60] H. F. Trotter. “On the product of semi-groups of operators”. In: Proceedings of the American Mathematical Society 10.4 (1959), pp. 545–545. doi: 10.1090/s0002-9939-1959-0108732-6. [61] Richard P. Feynman. Statistical Mechanics: A Set Of Lectures (Advanced Books Classics). CRC Press, 2018. isbn: 978-0201360769. [62] M. V. Berry. “Quantal Phase Factors Accompanying Adiabatic Changes”. In: Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 392.1802 (1984), pp. 45–57. doi: 10.1098/rspa.1984.0023. [63] E. Y. Loh et al. “Sign problem in the numerical simulation of many-electron systems”. In: Physical Review B 41.13 (1990), pp. 9301–9307. doi: 10.1103/physrevb.41.9301. [64] D. Belitz and Thomas Vojta. “How generic scale invariance influences quantum and classical phase transitions”. In: Reviews of Modern Physics 77.2 (2005), pp. 579–632. doi: 10.1103/revmodphys.77.579. [65] Amnon Aharony and A. B. Harris. “Absence of Self-Averaging and Universal Fluctuations in Random Systems near Critical Points”. In: Physical Review Letters 77.18 (1996), pp. 3700–3703. doi: 10.1103/physrevlett.77.3700. [66] Soumen Roy and Somendra M. Bhattacharjee. “Is small-world network disordered?” In: Physics Letters A 352.1-2 (2006), pp. 13–16. doi: 10.1016/j.physleta.2005.10.105. [67] Shai Wiseman and Eytan Domany. “Critical behavior of the random-bond Ashkin-Teller model: A Monte Carlo study”. In: Physical Review E 51.4 (1995), pp. 3074–3086. doi: 10.1103/physreve.51.3074. [68] Andrea De Martino and Andrea Giansanti. “Percolation and lack of self-averaging in a frustrated evolutionary model”. In: Journal of Physics A: Mathematical and General 31.44 (1998), pp. 8757–8771. doi: 10.1088/0305-4470/31/44/005. [69] P. E. Berche et al. “Bond dilution in the 3D Ising model: a Monte Carlo study”.
In: The European Physical Journal B 38.3 (2004), pp. 463–474. doi: 10.1140/epjb/e2004-00141-x. [70] Jurij Šmakov and Erik Sørensen. “Universal Scaling of the Conductivity at the Superfluid-Insulator Phase Transition”. In: Physical Review Letters 95.18 (2005). doi: 10.1103/physrevlett.95.180603. [71] Frank Krüger, Seungmin Hong, and Philip Phillips. “Two distinct Mott-insulator to Bose-glass transitions and breakdown of self-averaging in the disordered Bose-Hubbard model”. In: Physical Review B 84.11 (2011). doi: 10.1103/physrevb.84.115118. [72] Anthony Hegg, Frank Kruger, and Philip W. Phillips. “Breakdown of self-averaging in the Bose glass”. In: Physical Review B 88.13 (2013). doi: 10.1103/physrevb.88.134206. [73] Ray Ng and Erik S. Sørensen. “Quantum Critical Scaling of Dirty Bosons in Two Dimensions”. In: Physical Review Letters 114.25 (2015). doi: 10.1103/physrevlett.114.255701.

[74] C. Larmier et al. “Finite-size effects and percolation properties of Poisson geometries”. In: Physical Review E 94.1 (2016). doi: 10.1103/physreve.94.012130. [75] C. Itoi. “Universal Nature of Replica Symmetry Breaking in Quantum Systems with Gaussian Disorder”. In: Journal of Statistical Physics 167.5 (2017), pp. 1262–1279. doi: 10.1007/s10955-017-1778-y. [76] Zhenjiu Wang, Fakher F. Assaad, and Francesco Parisen Toldin. “Finite-size effects in canonical and grand-canonical quantum Monte Carlo simulations for fermions”. In: Physical Review E 96.4 (2017). doi: 10.1103/physreve.96.042131. [77] Subhadeep Roy. “Predictability and Strength of a Heterogeneous System: The Role of System Size and Disorder”. In: Physical Review E 96.4 (2017), p. 042142. doi: 10.1103/physreve.96.042142. [78] Victor Dotsenko et al. “Self-averaging in the random two-dimensional Ising ferromagnet”. In: Physical Review E 95.3 (2017). doi: 10.1103/physreve.95.032118. [79] Shai Wiseman and Eytan Domany. “Lack of self-averaging in critical disordered systems”. In: Physical Review E 52.4 (1995), pp. 3469–3484. doi: 10.1103/physreve.52.3469. [80] Shai Wiseman and Eytan Domany. “Finite-Size Scaling and Lack of Self-Averaging in Critical Disordered Systems”. In: Physical Review Letters 81.1 (1998), pp. 22–25. doi: 10.1103/physrevlett.81.22. [81] Amnon Aharony, A. Brooks Harris, and Shai Wiseman. “Critical Disordered Systems with Constraints and the Inequality 휈 > 2/푑”. In: Physical Review Letters 81.2 (1998), pp. 252–255. doi: 10.1103/physrevlett.81.252. [82] Shai Wiseman and Eytan Domany. “Self-averaging, distribution of pseudocritical temperatures, and finite size scaling in critical disordered systems”. In: Physical Review E 58.3 (1998), pp. 2938–2951. doi: 10.1103/physreve.58.2938. [83] Karim Bernardet, Ferenc Pázmándi, and G. G. Batrouni. “Disorder Averaging and Finite-Size Scaling”. In: Physical Review Letters 84.19 (2000), pp. 4477–4480. doi: 10.1103/physrevlett.84.4477. [84] H. Chamati, E. Korutcheva, and N. S. Tonchev. “Finite-size scaling in disordered systems”. In: Physical Review E 65.2 (2002). doi: 10.1103/physreve.65.026129. [85] S. Chowdhury et al. “Configuration and self-averaging in disordered systems”. In: Indian Journal of Physics 90.6 (2015), pp. 649–657. doi: 10.1007/s12648-015-0789-2. [86] V. Gurarie et al. “Phase diagram of the disordered Bose-Hubbard model”. In: Physical Review B 80.21 (2009). doi: 10.1103/physrevb.80.214519. [87] Matthew James Pasienski. “Disordered insulator in an optical lattice”. PhD thesis. University of Illinois at Urbana-Champaign, 2011. [88] S. Q. Zhou and D. M. Ceperley. “Construction of localized wave functions for a disordered optical lattice and analysis of the resulting Hubbard model parameters”. In: Physical Review A 81.1 (2010). doi: 10.1103/physreva.81.013402.

[89] Ulf Bissbort, Ronny Thomale, and Walter Hofstetter. “Stochastic mean-field theory: Method and application to the disordered Bose-Hubbard model at finite temperature and speckle disorder”. In: Physical Review A 81.6 (2010). doi: 10.1103/physreva.81.063643. [90] M. Pasienski et al. “A disordered insulator in an optical lattice”. In: Nature Physics 6.9 (2010), pp. 677–680. doi: 10.1038/nphys1726. [91] David McKay et al. “Metastable Bose-Einstein condensation in a strongly correlated optical lattice”. In: Physical Review A 91.2 (2015). doi: 10.1103/physreva.91.023625. [92] Fei Lin, T. A. Maier, and V. W. Scarola. “Disordered Supersolids in the Extended Bose-Hubbard Model”. In: Scientific Reports 7.1 (2017). doi: 10.1038/s41598-017-13040-9. [93] Pinaki Sengupta and Stephan Haas. “Quantum Glass Phases in the Disordered Bose-Hubbard Model”. In: Physical Review Letters 99.5 (2007). doi: 10.1103/physrevlett.99.050403. [94] Omjyoti Dutta et al. “Non-standard Hubbard models in optical lattices: a review”. In: Reports on Progress in Physics 78.6 (2015), p. 066001. doi: 10.1088/0034-4885/78/6/066001. [95] K. V. Krutitsky et al. “Ultracold bosons in lattices with binary disorder”. In: Physical Review A 77.5 (2008). doi: 10.1103/physreva.77.053609. [96] Jens Kisker and Heiko Rieger. “Bose-glass and Mott-insulator phase in the disordered boson Hubbard model”. In: Physical Review B 55.18 (1997), R11981–R11984. doi: 10.1103/physrevb.55.r11981. [97] Nikolay Prokof’ev and Boris Svistunov. “Superfluid-Insulator Transition in Commensurate Disordered Bosonic Systems: Large-Scale Worm Algorithm Simulations”. In: Physical Review Letters 92.1 (2004). doi: 10.1103/physrevlett.92.015703. [98] Anand Priyadarshee et al. “Quantum Phase Transitions of Hard-Core Bosons in Background Potentials”. In: Physical Review Letters 97.11 (2006). doi: 10.1103/physrevlett.97.115703. [99] Peter Hitchcock and Erik S. Sørensen.
“Bose-glass to superfluid transition in the three-dimensional Bose-Hubbard model”. In: Physical Review B 73.17 (2006). doi: 10.1103/physrevb.73.174523. [100] Fei Lin, Erik S. Sørensen, and D. M. Ceperley. “Superfluid-insulator transition in the disordered two-dimensional Bose-Hubbard model”. In: Physical Review B 84.9 (2011). doi: 10.1103/physrevb.84.094507. [101] Hannes Meier and Mats Wallin. “Quantum Critical Dynamics Simulation of Dirty Boson Systems”. In: Physical Review Letters 108.5 (2012). doi: 10.1103/physrevlett.108.055701. [102] Lode Pollet. “A review of Monte Carlo simulations for the Bose–Hubbard model with diagonal disorder”. In: Comptes Rendus Physique 14.8 (2013), pp. 712–724. doi: 10.1016/j.crhy.2013.08.005.

[103] A. E. Niederle and H. Rieger. “Superfluid clusters, percolation and phase transitions in the disordered, two-dimensional Bose–Hubbard model”. In: New Journal of Physics 15.7 (2013), p. 075029. doi: 10.1088/1367-2630/15/7/075029. [104] Peter B. Weichman and Ranjan Mukhopadhyay. “Particle-hole symmetry and the dirty boson problem”. In: Physical Review B 77.21 (2008). doi: 10.1103/physrevb.77.214516. [105] Elliott Lieb, Theodore Schultz, and Daniel Mattis. “Two soluble models of an antiferromagnetic chain”. In: Annals of Physics 16.3 (1961), pp. 407–466. doi: 10.1016/0003-4916(61)90115-4. [106] D. D. Betts and M. H. Lee. “Critical Properties of the XY Model”. In: Physical Review Letters 20.26 (1968), pp. 1507–1510. doi: 10.1103/physrevlett.20.1507. [107] N. D. Mermin and H. Wagner. “Absence of Ferromagnetism or Antiferromagnetism in One- or Two-Dimensional Isotropic Heisenberg Models”. In: Physical Review Letters 17.22 (1966), pp. 1133–1136. doi: 10.1103/physrevlett.17.1133. [108] Miloje Makivić, Nandini Trivedi, and Salman Ullah. “Disordered bosons: Critical phenomena and evidence for new low energy excitations”. In: Physical Review Letters 71.14 (1993), pp. 2307–2310. doi: 10.1103/physrevlett.71.2307. [109] Mats Wallin et al. “Superconductor-insulator transition in two-dimensional dirty boson systems”. In: Physical Review B 49.17 (1994), pp. 12115–12139. doi: 10.1103/physrevb.49.12115. [110] Ramesh V. Pai et al. “One-Dimensional Disordered Bosonic Hubbard Model: A Density-Matrix Renormalization Group Study”. In: Physical Review Letters 76.16 (1996), pp. 2937–2940. doi: 10.1103/physrevlett.76.2937. [111] Prasenjit Sen, Nandini Trivedi, and D. M. Ceperley. “Simulation of Flux Lines with Columnar Pins: Bose Glass and Entangled Liquids”. In: Physical Review Letters 86.18 (2001), pp. 4092–4095. doi: 10.1103/physrevlett.86.4092. [112] Ji-Woo Lee, Min-Chul Cha, and Doochul Kim. “Phase Diagram of a Disordered Boson Hubbard Model in Two Dimensions”.
In: Physical Review Letters 87.24 (2001). doi: 10.1103/physrevlett.87.247006. [113] Lizeng Zhang and Michael Ma. “Real-space renormalization-group study of hard-core dirty bosons”. In: Physical Review B 45.9 (1992), pp. 4855–4863. doi: 10.1103/physrevb.45.4855. [114] Kanwal G. Singh and Daniel S. Rokhsar. “Real-space renormalization study of disordered interacting bosons”. In: Physical Review B 46.5 (1992), pp. 3002–3008. doi: 10.1103/physrevb.46.3002. [115] Ferenc Pázmándi, Gergely Zimányi, and Richard Scalettar. “Mean-Field Theory of the Localization Transition of Hard-Core Bosons”. In: Physical Review Letters 75.7 (1995), pp. 1356–1359. doi: 10.1103/physrevlett.75.1356.

[116] Ferenc Pázmándi and Gergely T. Zimányi. “Direct Mott insulator-to-superfluid transition in the presence of disorder”. In: Physical Review B 57.9 (1998), pp. 5044–5047. doi: 10.1103/physrevb.57.5044. [117] Jiansheng Wu and Philip Phillips. “Minimal model for disorder-induced missing moment of inertia in solid 4He”. In: Physical Review B 78.1 (2008). doi: 10.1103/physrevb.78.014515. [118] U. Bissbort and W. Hofstetter. “Stochastic mean-field theory for the disordered Bose-Hubbard model”. In: EPL (Europhysics Letters) 86.5 (2009), p. 50007. doi: 10.1209/0295-5075/86/50007. [119] Leonardo Fallani, Chiara Fort, and Massimo Inguscio. “Bose-Einstein Condensates in Disordered Potentials”. In: Advances in Atomic, Molecular, and Optical Physics (2008), pp. 119–160. doi: 10.1016/s1049-250x(08)00012-8. [120] Ş. G. Söyler et al. “Phase Diagram of the Commensurate Two-Dimensional Disordered Bose-Hubbard Model”. In: Physical Review Letters 107.18 (2011). doi: 10.1103/physrevlett.107.185301. [121] Naomichi Hatano. “Reentrant Superfluid-Insulator Transitions of Random Boson Hubbard Models”. In: Journal of the Physical Society of Japan 64.5 (1995), pp. 1529–1551. doi: 10.1143/jpsj.64.1529. [122] Mateusz Łącki, Bogdan Damski, and Jakub Zakrzewski. “Locating the quantum critical point of the Bose-Hubbard model through singularities of simple observables”. In: Scientific Reports 6.1 (2016). doi: 10.1038/srep38340. [123] Zhiyuan Yao et al. “Critical Exponents of the Superfluid-Bose Glass Transition in Three Dimensions”. In: Physical Review Letters 112.22 (2014). doi: 10.1103/physrevlett.112.225301. [124] S. H. Liu. “Fractals and Their Applications in Condensed Matter Physics”. In: Solid State Physics. Elsevier, 1986, pp. 207–273. doi: 10.1016/s0081-1947(08)60370-7. [125] Tsuneyoshi Nakayama and Kousuke Yakubo. Fractal Concepts in Condensed Matter Physics: v. 140 (Springer Series in Solid-State Sciences). Springer, 2013. isbn: 978-3-662-05193-1. [126] D. Stauffer and A.
Aharony. Introduction To Percolation Theory. Taylor & Francis, 1994. isbn: 9781420074796. [127] Thomas Vojta and José A. Hoyos. “Quantum phase transitions on percolating lattices”. In: Recent Progress in Many-Body Theories. World Scientific, 2008. doi: 10.1142/9789812779885_0030. [128] S. Trotzky et al. “Suppression of the critical temperature for superfluidity near the Mott transition”. In: Nature Physics 6.12 (2010), pp. 998–1004. doi: 10.1038/nphys1799. [129] K. V. Krutitsky, A. Pelster, and R. Graham. “Mean-field phase diagram of disordered bosons in a lattice at nonzero temperature”. In: New Journal of Physics 8.9 (2006), pp. 187–187. doi: 10.1088/1367-2630/8/9/187.

[130] P. Buonsante et al. “Mean-field phase diagram of cold lattice bosons in disordered potentials”. In: Physical Review A 76.1 (2007). doi: 10.1103/physreva.76.011602. [131] A. W. Burks. “Electronic Computing Circuits of the ENIAC”. In: Proceedings of the IRE 35.8 (1947), pp. 756–767. doi: 10.1109/jrproc.1947.234265. [132] Arthur W. Burks and Alice R. Burks. “First General-Purpose Electronic Computer”. In: IEEE Annals of the History of Computing 3.4 (1981), pp. 310–389. doi: 10.1109/mahc.1981.10043. [133] Richard P. Feynman. “Simulating physics with computers”. In: International Journal of Theoretical Physics 21.6-7 (1982), pp. 467–488. doi: 10.1007/bf02650179. [134] Richard P. Feynman. “Quantum mechanical computers”. In: Foundations of Physics 16.6 (1986), pp. 507–531. doi: 10.1007/bf01886518. [135] E. Schachinger, H. Mitter, and H. Sormann, eds. Recent Progress in Many-Body Theories. Springer US, 1995. doi: 10.1007/978-1-4615-1937-9. [136] P. A. M. Dirac. The Principles of Quantum Mechanics (International Series of Monographs on Physics). Clarendon Press, 1982. isbn: 978-0198520115. [137] H. Q. Lin et al. “Exact Diagonalization Methods for Quantum Systems”. In: Computers in Physics 7.4 (1993), p. 400. doi: 10.1063/1.4823192. [138] J. M. Zhang and R. X. Dong. “Exact diagonalization: the Bose–Hubbard model as an example”. In: European Journal of Physics 31.3 (2010), pp. 591–602. doi: 10.1088/0143-0807/31/3/016. [139] David Raventós et al. “Cold bosons in optical lattices: a tutorial for exact diagonalization”. In: Journal of Physics B: Atomic, Molecular and Optical Physics 50.11 (2017), p. 113001. doi: 10.1088/1361-6455/aa68b1. [140] Dobriyan M. Benov. “The Manhattan Project, the first electronic computer and the Monte Carlo method”. In: Monte Carlo Methods and Applications 22.1 (2016). doi: 10.1515/mcma-2016-0102. [141] Dirk P. Kroese et al. “Why the Monte Carlo method is so important today”.
In: Wiley Interdisciplinary Reviews: Computational Statistics 6.6 (2014), pp. 386–392. doi: 10.1002/wics.1314. [142] Malvin H. Kalos and Paula A. Whitlock. Monte Carlo Methods. Wiley-VCH, 2008. isbn: 978-3-527-40760-6. [143] Xiongfeng Ma et al. “Quantum random number generation”. In: npj Quantum Information 2.1 (2016). doi: 10.1038/npjqi.2016.21. [144] R. P. Feynman. “Space-Time Approach to Non-Relativistic Quantum Mechanics”. In: Reviews of Modern Physics 20.2 (1948), pp. 367–387. doi: 10.1103/revmodphys.20.367. [145] I. M. Gel’fand and A. M. Yaglom. “Integration in Functional Spaces and its Applications in Quantum Physics”. In: Journal of Mathematical Physics 1.1 (1960), pp. 48–69. doi: 10.1063/1.1703636.

[146] Makoto Matsumoto and Takuji Nishimura. “Mersenne twister: a 623-dimensionally equidistributed uniform pseudo-random number generator”. In: ACM Transactions on Modeling and Computer Simulation 8.1 (1998), pp. 3–30. doi: 10.1145/272991.272995. [147] Nicholas Metropolis et al. “Equation of State Calculations by Fast Computing Machines”. In: The Journal of Chemical Physics 21.6 (1953), pp. 1087–1092. doi: 10.1063/1.1699114. [148] Anders W. Sandvik and Juhani Kurkijärvi. “Quantum Monte Carlo simulation method for spin systems”. In: Physical Review B 43.7 (1991), pp. 5950–5961. doi: 10.1103/physrevb.43.5950. [149] D. C. Handscomb. “The Monte Carlo method in quantum statistical mechanics”. In: Mathematical Proceedings of the Cambridge Philosophical Society 58.04 (1962), p. 594. doi: 10.1017/s0305004100040639. [150] A. W. Sandvik. “A generalization of Handscomb’s quantum Monte Carlo scheme-application to the 1D Hubbard model”. In: Journal of Physics A: Mathematical and General 25.13 (1992), pp. 3667–3682. doi: 10.1088/0305-4470/25/13/017. [151] A. W. Sandvik, R. R. P. Singh, and D. K. Campbell. “Quantum Monte Carlo in the interaction representation: Application to a spin-Peierls model”. In: Physical Review B 56.22 (1997), pp. 14510–14528. doi: 10.1103/physrevb.56.14510. [152] Anders W. Sandvik. “Finite-size scaling of the ground-state parameters of the two-dimensional Heisenberg model”. In: Physical Review B 56.18 (1997), pp. 11678–11690. doi: 10.1103/physrevb.56.11678. [153] Anders W. Sandvik. “Stochastic series expansion method with operator-loop update”. In: Physical Review B 59.22 (1999), R14157–R14160. doi: 10.1103/physrevb.59.r14157. [154] D. C. Handscomb. “A Monte Carlo method applied to the Heisenberg ferromagnet”. In: Mathematical Proceedings of the Cambridge Philosophical Society 60.01 (1964), p. 115. doi: 10.1017/s030500410003752x. [155] I. V. Rozhdestvensky and I. A. Favorsky. “Handscomb Monte-Carlo Method for S = 1/2 Transverse Ising Model”.
In: Molecular Simulation 9.3 (1992), pp. 213–222. doi: 10.1080/08927029208047428. [156] Fabien Alet, Stefan Wessel, and Matthias Troyer. “Generalized directed loop method for quantum Monte Carlo simulations”. In: Physical Review E 71.3 (2005). doi: 10.1103/physreve.71.036706. [157] Waseem S. Bakr et al. “A quantum gas microscope for detecting single atoms in a Hubbard-regime optical lattice”. In: Nature 462.7269 (2009), pp. 74–77. doi: 10.1038/nature08482. [158] Debayan Mitra et al. “Quantum gas microscopy of an attractive Fermi–Hubbard system”. In: Nature Physics 14.2 (2017), pp. 173–177. doi: 10.1038/nphys4297. [159] Oliver Penrose and Lars Onsager. “Bose-Einstein Condensation and Liquid Helium”. In: Physical Review 104.3 (1956), pp. 576–584. doi: 10.1103/physrev.104.576.

[160] Ushnish Ray and David M. Ceperley. “Revealing the condensate and noncondensate distributions in the inhomogeneous Bose-Hubbard model”. In: Physical Review A 87.5 (2013). doi: 10.1103/physreva.87.051603. [161] David Chen et al. “Quantum Quench of an Atomic Mott Insulator”. In: Physical Review Letters 106.23 (2011). doi: 10.1103/physrevlett.106.235304. [162] Raphael Lopes et al. “Quantum Depletion of a Homogeneous Bose-Einstein Condensate”. In: Physical Review Letters 119.19 (2017). doi: 10.1103/physrevlett.119.190404. [163] Biswaroop Mukherjee et al. “Homogeneous Atomic Fermi Gases”. In: Physical Review Letters 118.12 (2017). doi: 10.1103/physrevlett.118.123401. [164] S. F. Edwards and D. V. Grinev. “Granular materials: Towards the statistical mechanics of jammed configurations”. In: Advances in Physics 51.8 (2002), pp. 1669–1684. doi: 10.1080/0001873021000030780. [165] William Feller. An Introduction to Probability Theory and Its Applications, Vol. 1, 3rd Edition. Wiley, 1968. isbn: 978-0471257080. [166] William G. Cochran. Sampling Techniques, 3rd Edition. John Wiley & Sons, 1977. isbn: 978-0471162407. [167] Frederick James. Statistical Methods in Experimental Physics: 2nd Edition. World Scientific Publishing Company, 2006. isbn: 978-9812705273. [168] William Feller. An Introduction to Probability Theory and Its Applications, Vol. 2, 2nd Edition. John Wiley & Sons, Inc., 1971. isbn: 978-0471257097. [169] Paul Newbold, William Carlson, and Betty Thorne. Statistics for Business and Economics (8th Edition). Pearson, 2012. isbn: 978-0132745659. [170] A. Marte et al. “Feshbach Resonances in Rubidium 87: Precision Measurement and Analysis”. In: Physical Review Letters 89.28 (2002). doi: 10.1103/physrevlett.89.283202. [171] Cheng Chin et al. “Feshbach resonances in ultracold gases”. In: Reviews of Modern Physics 82.2 (2010), pp. 1225–1286. doi: 10.1103/revmodphys.82.1225.

Appendices

Appendix A

Statistics toolbox

The statistical quantities used throughout this dissertation, mainly in Part III, have their definitions in any good textbook on the subject [165–167]. However, these definitions can vary slightly from reference to reference, which could lead to misunderstandings or, more seriously, to mismatches if a numerical comparison of calculations is desired. To avoid such problems, this appendix defines every statistical quantity that I have used to obtain the results presented here. I shall also briefly mention some of the techniques used to estimate population quantities from samples.

Population and sample statistics

Consider that, for some reason, we are interested in the percentage of fans that support a certain football team among all other fans that, in one way or another, are interested in this sport within one single country – let us assume it is Brazil. To obtain the definitive answer, we would first have to ask every single Brazilian whether or not he or she is interested in football, and then ask what team he or she supports. Let us rule out pathological cases such as people that like football but do not support any team (this does not even make sense, by the way). It is also worth ruling out the subjective degree that the answer might have: some people could say they are interested in something when, in a larger context, it could be shown that they are not really interested. We disregard this bias, assuming and accepting only binary answers: yes or no.

However, asking every single Brazilian is not a feasible task if we do not want to spend a lot of money and a lot of time. Indeed, many people would have died and many others would have been born in the meantime! In this case, obtaining the population1 percentage is simply impractical. What we can do instead is ask a certain number of people – not the whole country – and try to estimate this percentage. This group of people, smaller than the whole population, is called a sample. Statistics, or more specifically inference, provides us with tools to gauge the extent to which estimates from samples are representative of the underlying

population measures.

It is evident that accessing properties of the population itself is an extremely rare possibility. However, depending on some specific details, samples can provide very good and reliable estimates, in the sense that they can be made both precise and accurate. At first sight, it is somewhat intuitive that larger samples should give us better estimates: by enlarging the sample size we should be able to capture the peculiarities of the population’s distribution to a greater degree. Nonetheless, obtaining larger samples can sometimes be very costly, so this is not always the most desirable route to refined statistics. Details such as the shape and width of the population’s distribution are of primary importance. Features such as broadness, asymmetry and multi-modal behavior can greatly increase the difficulty of obtaining good estimates, even for simple quantities.

The example of our football poll also illustrates the possibility of biases. Suppose, for instance, that instead of asking every single Brazilian in the country, we put the questions only to those that live in the state of São Paulo. It seems obvious that the results from this sample are going to be strongly skewed towards the local teams – Palmeiras, São Paulo, Santos and Portuguesa. We then say that our estimate of the percentage of fans that support these teams is a biased estimate. Removing this kind of bias from sample measures is an art that can enormously improve the confidence of estimates of quantities related to the population’s distribution, and at the end of this appendix I shall discuss one such technique that, even though quite rugged, is very handy.

1The population here comprises every person that is interested in football, i.e. answered “yes” to the first question.
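The effect of such a sampling bias is easy to demonstrate numerically. The sketch below uses entirely made-up team frequencies (the numbers are assumptions, purely for illustration) and compares an estimate drawn from the whole population with one drawn only from a São Paulo-like subgroup:

```python
import random

random.seed(42)

# Hypothetical population: team preferences differ between two regions.
# All frequencies below are invented for illustration only.
rest_of_country = ["Palmeiras"] * 200 + ["Flamengo"] * 500 + ["Grêmio"] * 300
sao_paulo = ["Palmeiras"] * 600 + ["Flamengo"] * 250 + ["Grêmio"] * 150
whole = rest_of_country + sao_paulo  # the full (fictitious) population

def estimate(sample, team):
    """Sample proportion of fans supporting `team`."""
    return sum(1 for fan in sample if fan == team) / len(sample)

unbiased = random.sample(whole, 400)      # drawn from the whole population
biased = random.sample(sao_paulo, 400)    # drawn from São Paulo only

print(f"true fraction:     {estimate(whole, 'Palmeiras'):.3f}")
print(f"unbiased estimate: {estimate(unbiased, 'Palmeiras'):.3f}")
print(f"biased estimate:   {estimate(biased, 'Palmeiras'):.3f}")
```

The unbiased sample estimate scatters around the true population fraction, while the geographically restricted sample systematically overestimates the local team's support, no matter how large the sample is made.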

Distributions and random variables

Perhaps the most useful context in which to discuss the statistical quantities relevant to this dissertation is that of random variables. We assume that there is an underlying, unknown distribution from which we are interested in estimating some property. The football poll works quite well as an example, but let us consider another one. We have a closed box that is known to be filled with colored balls, and we want to estimate the percentage of black balls inside the box. To do so, we poke a hole in the box and randomly pick one of the balls. The color of the picked ball is our random variable, which we denote by X. Usually a random variable is assigned numbers that represent the purpose of the variable; in this case we could assign the value 0 to black, 1 to white, 2 to red, 3 to blue, and so forth. Our estimate of the percentage of the color black, for instance, is strictly related to the estimate of the probability distribution 𝒫(X) of our random variable. We do not know this a priori probability distribution – perhaps only the person who prepared the box knows it exactly. However, after randomly picking, say, 10 balls and obtaining 2 black, 3 white, 3 red and 2 blue balls, we can estimate that the percentage of black balls in the box is 2/10, i.e. 𝒫(X = 0) ≈ 20%. Even though this certainly works if we keep drawing balls out of the box, it is not the most useful technique if we are interested in the general behavior and shape of 𝒫 rather than in its details. The complete knowledge of a probability distribution comprises the specification of the value of 𝒫(X) for each of the possible values that the random variable X may take. However, a rigorous theorem [168] proves that if we know all of the moments of the distribution 𝒫, we have exactly the same amount of information as when we know 𝒫(X) itself. In what follows, we define these moments up to fourth order.
We shall assume that it is possible to assign an infinite number of outcomes to X, which will be a discrete random variable; the discussion is easily generalized to continuous variables. Important note: in what follows, whenever we refer to a property of the underlying population distribution, we use plain Greek letters. When we refer to the corresponding sample quantity, we put a ^ symbol – a circumflex, or hat – on top of it. For instance, σ̂ is the sample standard deviation while σ is the population standard deviation.

Expected value

This is an important concept needed to properly define the other statistical quantities that will be employed. The name captures its essence: it represents the expected value of a certain variable, or more generally of a certain function, over the population distribution. As such, it can only be known with certainty if we have access to the whole distribution, and it is therefore often used to define population-related quantities. However, as we shall see, it also facilitates the calculation of sample-related quantities, since expected values can be estimated. If X is a random variable, a wide range of quantities related to the distribution of X can be expressed in terms of functions of X, which we denote by g(X). The expected value of g is then given by

E[g(X)] = \sum_{i \in \text{population}} g(X = x_i)\, \mathcal{P}(X = x_i).    (A.0.1)

Expected values have the important property of being linear in their arguments. If X and Y are two random variables and a, b two constants, then

E[aX + bY] = a\,E[X] + b\,E[Y].    (A.0.2)

This can be further extended to include functions since, in general, they are defined in terms of their Taylor series. Yet another important property is that, when X and Y are independent, uncorrelated random variables², we have

E[X \cdot Y] = E[X] \cdot E[Y].    (A.0.3)

Average

Usually, the average of a random variable X over the whole underlying population distribution is called the expected value of X and is denoted by E[X], very often shortened to E[X] = μ. Its definition is given by

E[X] \equiv \mu = \sum_{i \in \text{population}} x_i\, \mathcal{P}(X = x_i),    (A.0.4)

where i is an index that runs over all possible outcomes x_i of the variable X within the whole population.

²This can have several meanings but, for our purposes, it means that measuring X does not influence the measurement of Y.

The sample average of a random variable, [X̂], also called the mean, is given by

[\hat{X}] \equiv \hat{\mu} = \frac{1}{N_s} \sum_{i \in \text{sample}} x_i,    (A.0.5)

where i now runs over the outcomes obtained from the sample, which has accessed the population N_s times. Note that, for some particular value x = y, \frac{1}{N_s} \sum_{i \in \text{sample},\, x_i = y} 1 is an estimate of \mathcal{P}(X = y). Since we do not have any previous knowledge of the population distribution, and the estimate of the average does not require any other previously estimated quantity, this is an unbiased estimate of the expected value E[X].

Variance

The population variance σ² is defined in terms of the expected value

\sigma^2 \equiv E\left[(X - \mu)^2\right] = \sum_{i \in \text{population}} (x_i - \mu)^2\, \mathcal{P}(X = x_i).    (A.0.6)

It is sometimes useful to employ an alternate form that comes from the expansion

E\left[(X - \mu)^2\right] = E\left[X^2 - 2X\mu + \mu^2\right] = E\left[X^2\right] - 2\mu E[X] + \mu^2 = E\left[X^2\right] - \mu^2 = E\left[X^2\right] - E[X]^2,    (A.0.7)

particularly when considering the sample variance σ̂², which can be readily expressed as

\hat{\sigma}^2 = \widehat{\left[(X - [\hat{X}])^2\right]} = \frac{1}{N_s} \sum_{i \in \text{sample}} (x_i - [\hat{X}])^2 = [\hat{X^2}] - [\hat{X}]^2.    (A.0.8)

This means that the sample variance can be written as the sample average of the square of the random variable X minus the square of the sample average of X. Since both quantities have to be estimated, the sample variance as defined is a biased estimator. This can be seen by noticing that

E\left[\hat{\sigma}^2\right] = E\left[ [\hat{X^2}] - [\hat{X}]^2 \right] = E\left[ \frac{1}{N_s} \sum_{i \in \text{sample}} x_i^2 - \frac{1}{N_s^2} \Big( \sum_{i \in \text{sample}} x_i \Big)^2 \right]

= \frac{1}{N_s} \sum_{i \in \text{sample}} E\left[x_i^2\right] - \frac{1}{N_s^2} \sum_{i,j \in \text{sample}} E\left[x_i x_j\right]

= \frac{1}{N_s} \sum_{i \in \text{sample}} E\left[x_i^2\right] - \frac{1}{N_s^2} \sum_{i \in \text{sample}} E\left[x_i^2\right] - \frac{1}{N_s^2} \sum_{i \neq j \in \text{sample}} E[x_i]\, E[x_j]

= \left( \frac{1}{N_s} - \frac{1}{N_s^2} \right) N_s\, E\left[x_i^2\right] - \frac{1}{N_s^2} \sum_{i \neq j \in \text{sample}} E[x_i]\, E[x_j]

= \left( 1 - \frac{1}{N_s} \right) E\left[x_i^2\right] - \frac{1}{N_s^2} \sum_{i \neq j \in \text{sample}} \mu \cdot \mu

= \frac{N_s - 1}{N_s} E\left[x_i^2\right] - \frac{1}{N_s^2} N_s (N_s - 1)\, \mu^2 = \frac{N_s - 1}{N_s} \left\{ E\left[x_i^2\right] - \mu^2 \right\} = \frac{N_s - 1}{N_s}\, \sigma^2,    (A.0.9)

where we have used the fact that E[X] = E[x_i], since all drawings from the distribution are equivalent, and also that E[x_i x_j] = E[x_i] E[x_j] for i ≠ j, since the drawings are independent and uncorrelated. This result shows that σ̂² has a bias of order 1/N_s: the smaller the sample, the more biased the estimate of the population variance. Fortunately, in this case the bias is explicit, and it is therefore quite easy to correct. An unbiased estimate of the population variance is then given by

\hat{\sigma}^2_{\text{unbiased}} = \frac{1}{N_s - 1} \sum_{i \in \text{sample}} (x_i - [\hat{X}])^2,    (A.0.10)

which is the form used in this dissertation. It is straightforward to verify that E\left[\hat{\sigma}^2_{\text{unbiased}}\right] = \sigma^2.
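As a sanity check, the bias derived in Eq. A.0.9 can be observed numerically. The sketch below (plain Python; the uniform population, whose true variance is 1/12, is chosen purely for illustration) averages both estimators over many small samples:

```python
import random

random.seed(42)

def biased_var(xs):
    # sigma^2_hat with the 1/N_s normalization of Eq. A.0.8
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def unbiased_var(xs):
    # sigma^2_unbiased with the 1/(N_s - 1) normalization of Eq. A.0.10
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Population: uniform on [0, 1), true variance 1/12 ~ 0.0833
N_s, trials = 5, 200_000
acc_b = acc_u = 0.0
for _ in range(trials):
    xs = [random.random() for _ in range(N_s)]
    acc_b += biased_var(xs)
    acc_u += unbiased_var(xs)

print(acc_b / trials)  # approx (N_s - 1)/N_s * 1/12 ~ 0.0667, as Eq. A.0.9 predicts
print(acc_u / trials)  # approx 1/12 ~ 0.0833
```

The biased estimator undershoots the true variance by exactly the factor (N_s − 1)/N_s of Eq. A.0.9, while the corrected one does not.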

Standard-deviation

The standard deviation σ comprises the spread of the values that a random variable X can take, as shown in Fig. A.1. It is directly related to the variance σ²,

\sigma = \sqrt{\sigma^2},    (A.0.11)

and the most common reason for its use is that it has the same physical units as the random variable itself, which is not the case for the variance.

Figure A.1: Distributions with different standard deviations. Figure from [169], available online at slideplayer.com.

For this quantity, estimators can be readily constructed from sample measures using the same recipe as before,

\hat{\sigma} = \sqrt{\hat{\sigma}^2}.    (A.0.12)

Recall that σ̂² is a biased estimator for σ², therefore a more appropriate form is given by

\hat{\sigma} = \sqrt{\hat{\sigma}^2_{\text{unbiased}}},    (A.0.13)

where σ̂²_unbiased is defined in Eq. A.0.10. However, in spite of using an unbiased estimator for the variance, this does not guarantee that σ̂ is free of biases. Indeed, it is a biased estimator, the cause being the very application of the square-root operation. In this case, removing the bias is not as easy as it is for the variance, and other correction methods have to be used.

Standard-error

The standard error is a quantity defined exclusively for sample-related properties. It is a measure of the precision of an estimate made, of course, by an estimator. This means that, whenever we estimate a certain property g, there is an associated standard error given by

\Delta g = \frac{\hat{\sigma}_g}{\sqrt{N_s}},    (A.0.14)

where N_s is the size of the sample and σ̂_g is an estimator of the standard deviation of g. The most common situations associate standard errors with estimates of averages. This definition is perhaps further elucidated in Appendix B, which exploits the Central Limit Theorem, and also in Section 4.2, where Monte Carlo estimators are discussed.

Figure A.2: Hypothetical skewed distributions. Figure from Wikipedia.
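As a concrete illustration, the sketch below (plain Python, on hypothetical Gaussian data of known mean 10 and standard deviation 2) computes the sample mean together with its standard error Δ = σ̂/√N_s:

```python
import random
import math

random.seed(1)
# Hypothetical sample of N_s = 400 draws from a population with
# true mean 10.0 and true standard deviation 2.0
sample = [random.gauss(10.0, 2.0) for _ in range(400)]

N_s = len(sample)
mean = sum(sample) / N_s
var_unb = sum((x - mean) ** 2 for x in sample) / (N_s - 1)  # Eq. A.0.10
std_err = math.sqrt(var_unb / N_s)                          # Eq. A.0.14

print(abs(mean - 10.0) < 0.5)      # True: estimate lies within a few standard errors
print(abs(std_err - 0.1) < 0.03)   # True: sigma / sqrt(N_s) = 2 / 20 = 0.1
```

Quadrupling the sample size halves the standard error, which is the practical content of the 1/√N_s factor.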

Skewness

This quantity is defined, in similarity to the variance, as

\zeta = E\left[ \left( \frac{X - \mu}{\sigma} \right)^3 \right],    (A.0.15)

and is also called the third normalized moment of the population distribution. It is a measure of the asymmetry of the distribution of a random variable around its average. Positively skewed distributions have tails to the right, while negatively skewed distributions have tails to the left, as shown in Fig. A.2. Perfectly symmetric distributions have zero skewness. The sample skewness ζ̂ is therefore readily calculated following the same recipe as for the other quantities previously defined,

\hat{\zeta} = \frac{1}{N_s} \frac{1}{(\hat{\sigma}^2_{\text{unbiased}})^{3/2}} \sum_{i \in \text{sample}} (x_i - \hat{\mu})^3.    (A.0.16)

As may be noticed, as we consider higher moments of the distribution (the variance is the second moment), biases can become more and more pronounced. ζ̂ is also a biased estimator and care must be taken when considering small samples. For instance, the estimated skewness of a small sample can be large in magnitude even though the underlying distribution is perfectly symmetric. In such cases, bias removal and the calculation of associated errors are imperative for a more trustworthy analysis.
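This small-sample effect is easy to reproduce numerically. The following Python sketch (using a standard normal population, chosen purely for illustration) evaluates Eq. A.0.16 for many tiny and a few large samples drawn from a perfectly symmetric distribution:

```python
import random

random.seed(7)

def sample_skewness(xs):
    # zeta_hat of Eq. A.0.16, with the unbiased variance of Eq. A.0.10
    n = len(xs)
    m = sum(xs) / n
    var_u = sum((x - m) ** 2 for x in xs) / (n - 1)
    return sum((x - m) ** 3 for x in xs) / n / var_u ** 1.5

# Symmetric population (standard normal): the true skewness is zero,
# yet tiny samples routinely give |zeta_hat| of order one.
small = [abs(sample_skewness([random.gauss(0, 1) for _ in range(8)]))
         for _ in range(1000)]
large = [abs(sample_skewness([random.gauss(0, 1) for _ in range(800)]))
         for _ in range(50)]

print(max(small) > 1.0)  # True: small samples can look strongly skewed
print(max(large) < 0.5)  # True: large samples hug the true value 0
```

The spread of ζ̂ shrinks with sample size, so a large apparent skewness from a handful of points carries little weight.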

Kurtosis

Another quantity that is often used to characterize a distribution is the kurtosis κ, defined here as the normalized fourth moment of the distribution,

\kappa = E\left[ \left( \frac{X - \mu}{\sigma} \right)^4 \right],    (A.0.17)

Figure A.3: (A) Normal distribution, κ_exc = 0. (B) Platykurtic distribution, κ_exc < 0. (C) Leptokurtic distribution, κ_exc > 0. Figure from what-when-how.

which as usual can be defined for a sample through

\hat{\kappa} = \frac{1}{N_s} \frac{1}{(\hat{\sigma}^2_{\text{unbiased}})^{2}} \sum_{i \in \text{sample}} (x_i - \hat{\mu})^4.    (A.0.18)

Most of the time, the actually relevant quantity to analyze is the so-called excess kurtosis, defined as

\kappa_{\text{exc}} = \kappa - 3,    (A.0.19)

with a similar definition for the sample excess kurtosis. The reason is that a perfect normal (Gaussian) distribution has κ = 3, so the excess kurtosis measures how "tailed" a distribution is compared to a normal one, as shown in Fig. A.3. As in the case of the skewness, this is also a highly biased estimator that needs to be properly handled in the case of small samples.
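A quick numerical check of Eqs. A.0.18–A.0.19 (plain Python; the normal and uniform populations are chosen purely for illustration, the latter being platykurtic with true excess kurtosis −6/5):

```python
import random

random.seed(3)

def excess_kurtosis(xs):
    # kappa_hat of Eq. A.0.18 minus 3, as in Eq. A.0.19
    n = len(xs)
    m = sum(xs) / n
    var_u = sum((x - m) ** 2 for x in xs) / (n - 1)
    kappa = sum((x - m) ** 4 for x in xs) / n / var_u ** 2
    return kappa - 3.0

n = 200_000
normal = [random.gauss(0, 1) for _ in range(n)]   # true excess kurtosis: 0
uniform = [random.random() for _ in range(n)]     # true excess kurtosis: -6/5

print(abs(excess_kurtosis(normal)) < 0.05)        # True
print(abs(excess_kurtosis(uniform) + 1.2) < 0.05) # True: platykurtic
```

With samples this large the estimator is close to the population values; for small samples the same code exhibits the strong bias mentioned above.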

Jack-knife resampling

The question of biases in estimates made from samples is a subtle issue, as one may have noticed from the definitions of the relevant statistical quantities used in this dissertation. The subtlety increases immensely when (i) the quantity to be estimated has a complicated relation to the random variable and (ii) the sample is small. In all cases treated in this thesis, the samples were at least of moderate size and the calculated quantities were mostly relative moments. Nonetheless, removing biases is still important. There are several methods that correct biases, and some of them work better for certain estimators than others. Perhaps the one that predates most common methods, such as bootstrapping, is jack-knife resampling. It is very handy because it can always be applied and, in some situations, it is actually the only possibility for treating biases. It has the convenient feature that, if the bias of an estimator can be expanded in powers of the inverse sample size, it removes the first-order bias. The idea of the method is, given a sample of size N_s, to calculate the quantity of interest for subsamples of size (N_s − 1) obtained by removing one individual of the sample at a time. By comparing the average of these "jacked" values to the quantity estimated over the whole sample, we can have an idea of how biased the estimate is. On formal footing, consider that θ̂ is the calculated estimator for a sample of size N_s, i.e.

\hat{\theta} = \frac{1}{N_s} \sum_{i \in \text{sample}} \theta(X = x_i).    (A.0.20)

Consider also that θ̂_i is the same estimator calculated for a subsample from which the individual i, which previously provided θ(X = x_i), has been removed,

\hat{\theta}_i = \frac{1}{N_s - 1} \sum_{j \in \text{sample},\, j \neq i} \theta(X = x_j).    (A.0.21)

Of course, there are N_s such estimators, and we denote their average by θ̂_jack,

\hat{\theta}_{\text{jack}} = \frac{1}{N_s} \sum_{i=1}^{N_s} \hat{\theta}_i.    (A.0.22)

The jack-knife estimate of the bias on θ̂ is then given by

\beta_{\hat{\theta}} = (N_s - 1) \cdot (\hat{\theta}_{\text{jack}} - \hat{\theta}),    (A.0.23)

and the resulting bias-corrected jack-knife estimate of θ is

\hat{\theta}_{\text{corrected}} = N_s \cdot \hat{\theta} - (N_s - 1) \cdot \hat{\theta}_{\text{jack}}.    (A.0.24)

In some specific cases, this can also remove second-order biases. It is a nice exercise to show that the jack-knife corrected estimate of the sample variance σ̂² leads exactly to the expression derived for σ̂²_unbiased in Eq. A.0.10.
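That exercise can also be checked numerically. The sketch below (plain Python, on an arbitrary Gaussian sample used purely for illustration) applies Eqs. A.0.20–A.0.24 to the biased variance of Eq. A.0.8 and compares the result with Eq. A.0.10:

```python
import random

random.seed(11)

def biased_var(xs):
    # The plug-in variance of Eq. A.0.8 (1/N_s normalization)
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def jackknife_corrected(xs, estimator):
    n = len(xs)
    theta = estimator(xs)                                        # Eq. A.0.20
    jacked = [estimator(xs[:i] + xs[i + 1:]) for i in range(n)]  # Eq. A.0.21
    theta_jack = sum(jacked) / n                                 # Eq. A.0.22
    return n * theta - (n - 1) * theta_jack                      # Eq. A.0.24

xs = [random.gauss(0, 1) for _ in range(12)]
m = sum(xs) / len(xs)
unbiased = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)         # Eq. A.0.10

# The jack-knife correction of the plug-in variance coincides, up to
# floating-point round-off, with the unbiased estimator
print(abs(jackknife_corrected(xs, biased_var) - unbiased) < 1e-9)  # True
```

Since the bias of σ̂² is exactly of first order in 1/N_s (Eq. A.0.9), the jack-knife removes it completely in this case.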

Appendix B

Central Limit Theorem

The goal of this appendix is solely to expose the concept encapsulated in the Central Limit Theorem (CLT), namely that the sampling distribution of the sample mean is normally distributed, with a precise standard deviation, provided the draws from the original distribution – which need not be normal at all – are uncorrelated. I do not intend to derive the theorem, which can be found elsewhere in an extremely rigorous fashion [168], nor to discuss its diverse forms and versions. Here I will try to be practical, giving an example that hopefully illustrates the idea behind the CLT which, as the reader may have noticed, has been invoked a number of times along the text. Consider that we, imbued with the World Cup (WC) spirit, are interested in the average number of goals scored in a match since the very first event, which took place in 1930. In a similar way as in the previous appendix, we define a random variable X that represents the possible outcomes of a match, for instance: {0 = (0 − 0), 1 = (1 − 0), 2 = (1 − 1, 2 − 0), 3 = (2 − 1, 3 − 0), 4 = (2 − 2, 3 − 1, 4 − 0), 5 = (3 − 2, 4 − 1, 5 − 0), ...} and so on. Notice that the winner (or loser) does not come into play, as we are only interested in the number of goals; for our purposes 2 − 0 = 0 − 2. The first important concept regarding the CLT is that the probability distribution of X, P(X), can be rigorously anything. The only assumption we make about it is that its variance is finite, which is reasonable within our example. In particular, we expect P(X) to be quite skewed since, obviously, we must expect P(1) > P(5), for instance – from common sense, 1 − 0 is a more likely score than 3 − 2 or 5 − 0. In other words, P(X) is not a Gaussian distribution at all. We then define the average μ and variance σ² of P(X) as usual,

\mu = \frac{1}{N_G} \sum_{i \in \text{all WC}} X_i,    (B.0.1)

\sigma^2 = \frac{1}{N_G} \sum_{i \in \text{all WC}} (X_i - \mu)^2,    (B.0.2)

where N_G is the total number of games played in all editions of the World Cup¹. The CLT then gives us valuable information about the shape of the probability distribution 𝒫(Y) of the random variable Y that is the average number of goals scored in one single realization of the World Cup:

Y_j = \frac{1}{n_G} \sum_{i \in \text{WC}(j)} X_i,    (B.0.3)

where n_G is the number of matches in a single WC². More specifically, the theorem shows that 𝒫(Y) is normally distributed with average

\mu_Y = \mu    (B.0.4)

and variance

\sigma_Y^2 = \frac{\sigma^2}{n_G},    (B.0.5)

i.e.,

\mathcal{P}(Y) = \frac{1}{\sqrt{2\pi\sigma^2/n_G}} \exp\left[ -\frac{(Y - \mu)^2}{2\sigma^2/n_G} \right].    (B.0.6)

The first important observation is that the average of X is the same as the average of Y. The second point, which is the one I explore to the largest extent along this dissertation, is that the variance of Y is reduced compared to the variance of X. This indicates that if more games were played within each WC, the distribution of values of the average number of goals in a single realization would be narrower. In particular, if n_G were quite large, one single realization of the WC would be enough to tell us, confidently, the average number of goals scored in a WC event. The two assumptions that we need for the CLT to work are that σ² < ∞ and that X_i and X_j are uncorrelated, viz. the number of goals scored in match i does not influence the number of goals scored in match j³. As I am not demonstrating this result, the reader may very well be skeptical about it. I then recommend accessing this wonderful application (http://onlinestatbook.com/stat_sim/sampling_dist/index.html), which discusses the very same features that I have tried to illustrate with my example.
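The statement of Eqs. B.0.4–B.0.5 can also be verified with a short simulation. The sketch below (plain Python; the skewed "goals per match" distribution and its weights are invented purely for illustration) draws many realizations of the mean of n_G uncorrelated variables:

```python
import random

random.seed(5)

# A skewed, decidedly non-Gaussian "goals per match" distribution (invented weights)
outcomes = [0, 1, 2, 3, 4, 5, 6]
weights = [10, 20, 25, 20, 13, 8, 4]

def draw():
    return random.choices(outcomes, weights)[0]

# Population moments computed directly from the weights
tot = sum(weights)
mu = sum(o * w for o, w in zip(outcomes, weights)) / tot
var = sum((o - mu) ** 2 * w for o, w in zip(outcomes, weights)) / tot

# Each "tournament" is the mean of n_G uncorrelated draws (Eq. B.0.3)
n_G, realizations = 64, 20_000
means = [sum(draw() for _ in range(n_G)) / n_G for _ in range(realizations)]

m_Y = sum(means) / realizations
v_Y = sum((y - m_Y) ** 2 for y in means) / realizations

print(abs(m_Y - mu) < 0.02)          # True: mu_Y = mu          (Eq. B.0.4)
print(abs(v_Y - var / n_G) < 0.005)  # True: sigma_Y^2 = sigma^2 / n_G (Eq. B.0.5)
```

A histogram of `means` would furthermore look Gaussian even though the parent distribution is skewed, which is the content of Eq. B.0.6.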

¹Up to the last event, the Brazil 2014 WC, 836 matches had been played, with μ = 3.23. Unfortunately I could not find the value of σ² and have not felt compelled to calculate it, but it certainly fits our condition σ² < ∞.
²For practical reasons, let us assume that this number is the same for every WC(j), which is not true.
³Football lovers will certainly feel that this assumption is a weak point of the example.

Appendix C

Example of experimental setup

Even though ultracold atomic gases have the main ingredients that constitute a quantum many-body system, being therefore amenable to several theoretical and computational tools, the advent of optical lattices has been crucial for improving our knowledge of quantum states of matter. At several points along this dissertation I have tried to elucidate theoretical arguments and results in the light of experiments that can realize the Bose-Hubbard model, including the disordered case. For this reason, a typical experimental setup is presented in this appendix. Several atomic species can be used in cold-atom experiments. I have particularly chosen 87Rb atoms, which are, of course, bosons: the nuclei contain 37 protons and 50 neutrons and the atoms are electrically neutral. 87Rb has a scattering length a = 100 a₀ at low temperatures, where a₀ = 5.29 × 10⁻¹¹ m is the Bohr radius. However, this quantity can be tuned using a magnetic field through the so-called Feshbach resonance [170, 171]. In fact, one can tune the interaction strength and ultimately change its sign, meaning that instead of repelling each other the atoms would attract. In spite of that, as I have done along this dissertation, I shall only consider the repulsive case, since this feature is essential for the stability of the bosonic system. This isotope, less abundant than 85Rb (≈ 28% natural abundance), is extremely long-lived, with a half-life on the order of 10¹¹ years¹. The cubic optical lattice in which these atoms "mimic" the physics of the Bose-Hubbard model is constructed from the standing-wave pattern of a set of counter-propagating laser beams, shown in red in Fig. C.1a. Experiments usually require harmonic traps of frequency ω to keep the atomic cloud confined, even though homogeneous traps are now possible [162]. The wavelength of these lasers can change, leading to different lattice parameters.
In this example, we have λ = 812 nm, and consequently a lattice parameter c = λ/2 = 406 nm, which for 87Rb atoms sets a recoil energy E_R = \hbar^2 \pi^2 / 2mc^2 = 167 nK. This corresponds to the energy imparted to an atom at rest by the absorption of a photon from the lasers that compose the optical lattice, and it sets the energy scale of the system. On top of that, in order to generate the disordered lattice, a speckle field is employed using a 532 nm laser beam, shown in green in Fig. C.1a, that is projected through a holographic diffuser. The disorder strength Δ, shown in Fig. C.1b, corresponds to the intensity of this laser beam. The autocorrelation lengths along the transverse and longitudinal directions are, respectively, 570 nm and 3 μm, as shown in Fig. C.1d. However, these directions are oriented at different angles relative to the optical-lattice beams; the projected lengths are then 790 and 650 nm. Notice that these values are of the same order as the lattice parameter c, which allows for the assertion of uncorrelated disorder that we have used throughout this dissertation.

¹It eventually β⁻-decays to 87Sr.

Figure C.1: Illustration of the experimental setup of a disordered optical lattice. (a) Counter-propagating laser beams, in red, are used to construct the optical lattice. The disordered potential is generated by another, superposed laser that goes through a holographic lens, producing a speckle field, shown in green. (b) The resulting lattice potential from the combination of the lasers; the disorder strength Δ is set by the diffusive light intensity. (c) Example of a speckle distribution. (d) Measured speckle intensity used to calculate the autocorrelation function. See text for discussion. Figure from Ref. [8].

Index of acronyms

c.d.f.: Cumulative distribution function, 98
p.d.f.: Probability density function, 96

BG: Bose glass, 74, 77, 78, 81, 82, 130, 138, 144, 146, 148–150, 153, 155, 162, 163, 170
BHM: Bose-Hubbard Model, 19, 23, 66, 71, 113

CLT: Central Limit Theorem, 21, 63, 64, 194, 198, 199

DBHM: Disordered Bose-Hubbard Model, 19, 20, 66, 69, 71, 113, 129, 130, 132, 135, 140, 141, 171

ED: Exact diagonalization, 129

FBZ: First Brillouin zone, 38

GP: Griffiths phase, 56
GR: Griffiths region, 57
GSL: GNU Scientific Library, 89, 90

HSP: Higher Symmetry Phase, 49, 50, 53, 54, 56

IRFP: Infinite randomness fixed point, 56

LDA: Local density approximation, 75, 129, 130, 134, 135
LMF: Local Mean Field, 75
LSP: Lower Symmetry Phase, 49, 53, 54, 56, 57

MI: Mott insulator, 36, 46, 47, 72–75, 77, 80–82, 130, 139, 159

NL: Normal liquid, 47
NSA: Non self-averaging, 62

PT: Phase transition, 58

QMC: Quantum Monte Carlo, 46, 82, 125
QPT: Quantum phase transition, 45, 72, 73, 82

RG: Renormalization Group, 17, 30, 49, 54, 60, 73
RSF: Reentrant superfluid, 77, 139, 146, 148, 150, 159, 163

SF: Superfluid, 35, 46, 47, 74, 77, 79, 81–83, 130, 138, 139, 145, 146, 155, 159, 161, 163, 170, 174
SMFT: Stochastic Mean Field Theory, 75
SSA: Strong self-averaging, 62
SSE: Stochastic series expansion, 20, 105, 109, 113, 122, 124, 127, 129, 141, 175

WGL: Wilson-Ginzburg-Landau, 52, 55, 60
WSA: Weak self-averaging, 62