Variational Monte Carlo Method for Fermions
Vijay B. Shenoy ([email protected])
Indian Institute of Science, Bangalore
December 26, 2009

Acknowledgements
Sandeep Pathak
Department of Science and Technology

Plan of the Lectures on Variational Monte Carlo Method
Motivation, overview and key ideas
Details of VMC
Application to superconductivity

The Strongly Correlated Panorama...
[Figure: phase diagrams of representative materials of the strongly correlated electron family — (A) bilayer manganites, with antiferromagnetic (AF), ferromagnetic (FM) and globally disordered regions; (B) generic phase diagram of the high-Tc superconductors (SG = spin glass); (C) single-layered ruthenates, evolving from a superconducting (SC) state to an AF insulator as the bandwidth is tuned; (D) Co oxides, with SC, charge-ordered (CO) and magnetic regimes; (E) the organic k-(BEDT-TTF)2Cu[N(CN)2]Cl salt, with a metal–insulator coexistence region; (F) Ce-based heavy-fermion materials. (Dagotto, Science (2005))]

...and now, Cold Atom Quantum Simulators
(Bloch et al., RMP (2009))
Hamiltonian...for here, or to go?

The Strategy
Identify the "key degrees of freedom" of the system of interest
Identify the key interactions
Write a Hamiltonian (e.g. the t-J model)...reduce the problem to a model
"Solve" the Hamiltonian .... Nature(Science)/Nature Physics/PRL/PRB...

"Solvers"
Mean Field Theories (MFT)
Slave Particle Theories (and associated MFTs)
Dynamical Mean Field Theories (DMFT)
Exact Diagonalization
Quantum Monte Carlo Methods (e.g. Determinantal)
Variational Monte Carlo (VMC)
...
AdS/CFT

Great Expectations
Prerequisites
◮ Graduate quantum mechanics including "second quantization"
◮ Ideas of "Mean Field Theory"
◮ Some familiarity with "classical" superconductivity
At the end of the lectures, you can/will have
◮ Understood the concepts of VMC
◮ Know-how to write your own code
◮ A free code (pedestrianVMC)
◮ Seen examples of VMC in strongly correlated systems

References
NATO Series Lecture Notes by Nightingale and Umrigar, see http://www.phys.uri.edu/~nigh/QMC-NATO/webpage/abstracts/lectures.html
Morningstar, hep-lat/070202
Ceperley, Chester and Kalos, PRB 17, 3081 (1977)

VMC in a Nutshell
A strategy to study ground (and some special excited) state(s)
"Guess" the ground state
Use Schrödinger's variational principle to optimize the ground state
Calculate observables of interest with the optimal state
...

Hall of Fame
Bardeen, Cooper, Schrieffer
Laughlin
Quote/question from A2Z, "Plain Vanilla", JPCM (2004):
"The philosophy of this method (VMC) is analogous to that used by BCS for superconductivity, and by Laughlin for the fractional quantum Hall effect: simply guess a wavefunction. Is there any better way to solve a non-perturbative many-body problem?"
Variational Principle
Eigenstates of the Hamiltonian satisfy
$$H|\Psi\rangle = E|\Psi\rangle$$
where $E$ is the energy eigenvalue
Alternate way of stating this
◮ Define an "energy functional" for any given state $|\Phi\rangle$: $E[\Phi] = \langle\Phi|H|\Phi\rangle$
◮ Among all states $|\Phi\rangle$ that satisfy $\langle\Phi|\Phi\rangle = 1$, those that extremize $E[\Phi]$ are eigenstates of $H$. Exercise: Prove this!
◮ The state(s) that minimize(s) $E[\Phi]$ is (are) the ground state(s)

Variational Principle in Action...A Simple Example
Ground state of the H$_2^+$ molecule with a given nuclear separation
"Guess" the wave function (a linear combination of atomic orbitals)
$$|\psi\rangle = a|1\rangle + b|2\rangle$$
where $|1\rangle$, $|2\rangle$ are atomic orbitals centered on protons 1 and 2
Find the energy function (taking everything real)
$$E(a, b) = H_{11}a^2 + 2H_{12}ab + H_{22}b^2$$
(symbols have the obvious meaning)
Minimize (with the appropriate normalization constraint) to find $|\psi_g\rangle = \frac{1}{\sqrt{2}}\left(|1\rangle + |2\rangle\right)$ (for $H_{12} < 0$)
Question: Is this the actual ground state?
The example shows the key ideas (and limitations) of the variational approach!

Variational Approach in Many Body Physics
Guess the ground state wavefunction $|\Psi\rangle$
$|\Psi\rangle$ depends on some variational parameters $\alpha_1, \alpha_2, \ldots$ which "contain the physics" of the wavefunction
◮ Example,
$$|\mathrm{BCS}\rangle = \prod_{k}\left(u_k + v_k\, c^\dagger_{k\uparrow} c^\dagger_{-k\downarrow}\right)|0\rangle$$
where $u_k$, $v_k$ are the "$\alpha_i$s"
In general, the $\alpha_i$s are chosen
◮ To realize some sort of "order" or "broken symmetry" (for example, magnetic order)...there could be more than one order present (for example, superconductivity and magnetism)
◮ To enforce some sort of "correlation" that we "simply" know exists
and can include all of the "effects" above
...your guess is not as good as mine!

Variational Principle vs. MFT
Mean Field Theory
◮ Replace $H$ by $H_{MF}$, which is usually quadratic
◮ Use the Feynman identity $F \le F_{MF} + \langle H - H_{MF}\rangle_{MF}$ ($F$ is the free energy)
◮ Kills "fluctuations"
Variational Principle
◮ Works with the full Hamiltonian
◮ Non-perturbative
◮ Well designed wavefunctions can include fluctuations above MFT
Usually MFT provides us with an intuitive starting point...we can then "well design" the variational wavefunction to include the fluctuations that we miss

Overall Strategy of Variational Approach
Summary of the variational approach
1. Design a ground state $|\Psi(\alpha_i)\rangle$
2. Energy function
$$E(\alpha_i) = \frac{\langle\Psi(\alpha_i)|H|\Psi(\alpha_i)\rangle}{\langle\Psi(\alpha_i)|\Psi(\alpha_i)\rangle}$$
3. Find the $\alpha_i^m$ that minimize (extremize) $E(\alpha_i)$ and obtain $|\Psi_G\rangle$ (a minimal numerical sketch of steps 1–3 is given below)
4. Study $|\Psi_G\rangle$
This explains the "Variational" part of VMC ... But...why Monte Carlo?
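Before turning to the Monte Carlo part, here is a minimal numerical sketch of steps 1–3 for the two-orbital H$_2^+$ example above. It is not part of the lectures: the matrix elements and the one-parameter trial state $|\psi(\theta)\rangle = \cos\theta\,|1\rangle + \sin\theta\,|2\rangle$ (which builds in normalization) are assumptions chosen purely for illustration.

```python
import numpy as np

# Two-orbital toy Hamiltonian in the (|1>, |2>) basis; the numerical matrix
# elements are illustrative placeholders, not values from the lecture.
H11, H22, H12 = -1.0, -1.0, -0.2          # H12 < 0, as assumed on the slide
H = np.array([[H11, H12],
              [H12, H22]])

def energy(theta):
    """Step 2: E(theta) = <psi|H|psi> for the normalized trial state
    |psi(theta)> = cos(theta)|1> + sin(theta)|2>, so <psi|psi> = 1."""
    psi = np.array([np.cos(theta), np.sin(theta)])
    return psi @ H @ psi

# Step 3: minimize E over the single variational parameter theta.
thetas = np.linspace(0.0, np.pi, 2001)
energies = np.array([energy(t) for t in thetas])
theta_opt = thetas[np.argmin(energies)]

print("optimal theta      :", theta_opt)            # ~ pi/4, i.e. (|1>+|2>)/sqrt(2)
print("variational energy :", energies.min())
print("exact ground energy:", np.linalg.eigvalsh(H)[0])
```

For $H_{12} < 0$ the scan settles at $\theta \approx \pi/4$, the symmetric combination quoted above, and the variational energy matches the exact $2\times 2$ eigenvalue because this real one-parameter family already contains the exact ground state of the two-orbital problem. For the actual molecule the two-orbital subspace is itself an approximation, which is the point of the slide's question.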
Variational Approach: The Difficulty
In many cases the variational programme can be carried out analytically, when we have a well controlled analytical expression for the state, e.g., the BCS state
In most problems of interest we do not have a simple enough analytical expression for the wavefunction (e.g., the Gutzwiller wavefunction $|\Psi_{\mathrm{gutz}}\rangle = g^{D}\,|\Psi_{\mathrm{free}}\rangle$)
In many of these cases we have expressions for the wavefunction which "look simple in real space", say, $\Psi_\alpha(r_{1\uparrow}, r_{2\uparrow}, \ldots, r_{N_\uparrow\uparrow}, r_{1\downarrow}, r_{2\downarrow}, \ldots, r_{N_\downarrow\downarrow})$, where the $r_{i\sigma}$ are the coordinates of our electrons on a lattice (we will work exclusively with lattices; most ideas can be taken to the continuum in a straightforward way); $\Psi_\alpha$ has the necessary permutation symmetries, and the subscript $\alpha$ reminds us of the underlying parameters
Call $C \equiv (r_{1\uparrow}, r_{2\uparrow}, \ldots, r_{N_\uparrow\uparrow}, r_{1\downarrow}, r_{2\downarrow}, \ldots, r_{N_\downarrow\downarrow})$ a configuration of the electrons
Now,
$$E(\alpha_i) = \frac{\sum_C \Psi_\alpha^*(C)\, H\Psi_\alpha(C)}{\sum_C \Psi_\alpha^*(C)\,\Psi_\alpha(C)}$$
i.e., we need to do a sum over all configurations $C$ of our $2N$-electron system
Note that in most practical cases we need to sum the denominator as well, since we use unnormalized wavefunctions!
Sums...how difficult can they be?
Consider 50 ↑ electrons and 50 ↓ electrons on a lattice with 100 sites....How many configurations are there? Num. configurations $\sim 10^{60}$!!!!
And this is the difficulty! ... Solution...use the Monte Carlo method to perform the sum!

"Monte Carlo" of VMC
How do we use Monte Carlo to evaluate $E(\alpha_i)$? Use the following trick
Poor Notation Warning: $H\Psi(C) \equiv \langle C|H|\Psi\rangle$
$$E(\alpha_i) = \frac{\sum_C \Psi_\alpha^*(C)\, H\Psi_\alpha(C)}{\sum_C \Psi_\alpha^*(C)\,\Psi_\alpha(C)} = \sum_C \frac{\Psi_\alpha^*(C)\,\Psi_\alpha(C)}{\sum_{C'} \Psi_\alpha^*(C')\,\Psi_\alpha(C')}\;\frac{H\Psi_\alpha(C)}{\Psi_\alpha(C)} = \sum_C P(C)\,\frac{H\Psi_\alpha(C)}{\Psi_\alpha(C)}$$
where
$$P(C) = \frac{|\Psi_\alpha(C)|^2}{\sum_{C'} \Psi_\alpha^*(C')\,\Psi_\alpha(C')}$$
is the probability of the configuration $C$; obviously $\sum_C P(C) = 1$
Note that we do not know the denominator of $P(C)$
For any operator $O$,
$$\langle O \rangle = \sum_C P(C)\,\frac{O\Psi_\alpha(C)}{\Psi_\alpha(C)}$$

Idea of Monte Carlo Sums
The key idea is not to sum over all configurations $C$, but to visit the "most important" configurations and add up their contributions
The most important configurations are those that have "high" probability..., but we do not know the denominator of $P(C)$
We need to develop a scheme where we start with a configuration $C_1$ and "step" to another configuration $C_2$ comparing $P(C_1)$ and $P(C_2)$...and continue on
Thus we do a "walk" in the space of configurations depending on $P(C)$...spending "most of our time" in high-probability configurations...this is importance sampling
Goal: construct a set of rules (an algorithm) to "step" in the configuration space that performs the necessary importance sampling (a minimal sampling sketch is given below, after the Markov-chain setup)
Such a stepping produces a sequence of configurations as a function of steps or "time"...but how do we create this "dynamics"?

Markov Chains
Let us set notation: label all the configurations (yes, all $10^{60}$) using natural numbers $C_1, C_2, \ldots, C_m, \ldots$ etc; we will use $P(1) \equiv P(C_1)$ and in general $P(m) \equiv P(C_m)$, identifying the configuration $C_m$ with its label $m$
We construct a transition (Markov) matrix $W(n, m) \equiv W(n \leftarrow m)$ which defines the probability of moving from configuration $m$ to configuration $n$ in a step; we consider only $W$s that are independent of time
The probability of taking a step from $m \rightarrow n$ is independent of how the system got to the state $m$...such stochastic processes are called Markov chains
Clearly $\sum_n W(n, m) = 1$, and $W(n, m) \ge 0$
Exercise: If $\lambda$ is a right eigenvalue of the matrix $W$, show that $|\lambda| \le 1$.
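To make the importance-sampling idea concrete, here is a minimal sketch (not from the lectures) of a Markov-chain walk whose visits are distributed according to $P(C) \propto |\Psi(C)|^2$. The "configuration" is just a single particle on a small 1D lattice and the amplitude psi is invented for illustration; the acceptance rule is the standard Metropolis choice, one common way of building a $W(n, m)$ with the properties listed above. Note that only ratios $|\Psi(C_2)/\Psi(C_1)|^2$ ever appear, so the unknown denominator of $P(C)$ is never needed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "configuration space": a single particle on a 1D lattice of L sites.
# psi(x) is an invented illustrative amplitude (not the lecture's wavefunction);
# only the ratio |psi(x_new)/psi(x)|^2 is used, so the normalization of
# P(x) = |psi(x)|^2 / sum_x' |psi(x')|^2 never enters.
L = 50

def psi(x):
    return np.exp(-0.5 * ((x - L / 2) / 5.0) ** 2)

def observable(x):
    # Stand-in for the local quantity O Psi(C) / Psi(C); here simply x.
    return x

x = L // 2                                    # starting configuration C_1
samples = []
for step in range(20000):
    x_new = (x + rng.choice([-1, 1])) % L     # propose a step to a nearby configuration
    # Metropolis acceptance: accept with probability min(1, P(C_2)/P(C_1)).
    if rng.random() < min(1.0, (psi(x_new) / psi(x)) ** 2):
        x = x_new
    if step >= 2000:                          # discard a crude equilibration period
        samples.append(observable(x))

# Importance-sampled estimate of <O> = sum_C P(C) O Psi(C)/Psi(C)
print("estimated <x>:", np.mean(samples))     # should be close to L/2 = 25 by symmetry
```

In a real VMC calculation the same loop would run over many-electron configurations $C$ (a move typically hops one electron to a nearby empty site) and would accumulate the local quantity $H\Psi_\alpha(C)/\Psi_\alpha(C)$ introduced above.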