
4 Importance sampling and update algorithms

4.1 Canonical ensemble

• One of the most common uses of the Monte Carlo method is to study the thermodynamics of some statistical model defined on a lattice. For example, we may have a (real-valued, say) field φ_x, defined at each lattice coordinate x, with the Hamiltonian (energy functional) H(φ). We shall discuss here regular (hyper)cubic lattices, where x_i = a n_i, n_i = 0 ... N_i, and a is the lattice spacing (in physical units).

• The most common statistical ensemble we meet is the canonical ensemble, which is defined by the partition function

      Z = ∫ [dφ] exp[−H(φ)/k_B T] .

  Here [dφ] ≡ ∏_x dφ_x. The free energy is defined as F = −k_B T ln Z.

• We would like to evaluate Z, and especially the thermal average of some quantity A:

      ⟨A⟩ = (1/Z) ∫ [dφ] A(φ) exp[−H(φ)/k_B T] .

  The dimensionality of the integral is huge: (# points in space) × (dim. of field φ_x) ∼ 10^6 – 10^9. Thus, some kind of (quasi?) Monte Carlo integration is required.

• The integral is also very strongly peaked:

      ∫ [dφ] exp[−H(φ)/k_B T] = ∫ dE n(E) e^{−E/k_B T} ,

  which is an integral over a sharply localized distribution. Here n(E) is the density of states at energy E, i.e.

      n(E) = ∫ [dφ] δ(H(φ) − E) .

  Only states with E ≲ ⟨E⟩ contribute, which is an exponentially small fraction of the whole phase space. This kills standard "simple sampling" Monte Carlo, which is homogeneous over the phase space. This is a form of the so-called overlap problem – we are not sampling the interesting configurations nearly often enough.

4.2 Importance sampling

Importance sampling takes care of the overlap problem:

• Select N configurations φ_i, chosen with the Boltzmann probability

      p(φ) = exp[−H(φ)/k_B T] / Z .

  The set {φ_i} is called an ensemble of configurations.

  Note that in practice we cannot calculate Z! Thus, we actually do not know the normalization of p(φ). However, this is sufficient for Monte Carlo simulation and measuring thermal averages.
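The overlap problem can be made concrete with a minimal sketch (a hypothetical toy model, not from the notes): N non-interacting spins s_i = ±1 with H = ∑_i s_i, whose exact thermal average is ⟨H⟩ = −N tanh(1/k_B T). "Simple sampling" draws configurations uniformly and reweights them with the Boltzmann factor; at low temperature almost all of the weight sits on the rare low-energy configurations, which the uniform sampler essentially never visits, so the estimate is badly biased.

```python
import math
import random

# Hypothetical toy model: N independent spins s_i = ±1, H = sum_i s_i.
# Exact canonical average: <H> = -N tanh(1/kT).
random.seed(1)
N, kT = 20, 0.5
samples = 20000

# "Simple sampling": draw configurations uniformly, reweight by exp(-H/kT).
num = den = 0.0
for _ in range(samples):
    H = sum(random.choice((-1, 1)) for _ in range(N))
    w = math.exp(-H / kT)
    num += H * w
    den += w

exact = -N * math.tanh(1.0 / kT)
# The dominant configurations (H near -N) occur with probability ~2^-N
# under uniform sampling, so the reweighted estimate misses them.
print(f"simple sampling estimate: {num/den:.2f}   exact: {exact:.2f}")
```

The estimator is formally unbiased, but its variance is exponentially large in N: the few lowest-energy configurations that carry nearly all the Boltzmann weight are almost never drawn.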
• Measure the observable A_i = A(φ_i) from each configuration.

• The expectation value of A is now

      ⟨A⟩ ≡ (1/N) ∑_i A_i → ⟨⟨A⟩⟩ , as N → ∞.    (2)

  If now the values of the A_i's are of ∼ equal magnitude for all i, all of the configurations contribute with the same order of magnitude. This is the central point of importance sampling in Monte Carlo simulations.

  The result (2) follows directly from the general results we got in Sect. 1 about importance sampling Monte Carlo integration, or even more directly by realizing that the A_i-values obviously come from the distribution

      p_A(A′) = ∫ [dφ] p(φ) δ(A′ − A[φ])

      ⇒  Ā ≡ ∫ dA A p_A(A) = ∫ [dφ] A[φ] p(φ) = ⟨⟨A⟩⟩ .

4.3 How to generate configurations with p(φ) = exp[−H(φ)/k_B T]/Z?

• Directly – from scratch – we do not know how to do this! (Except in some limited simple cases.)

• The basic method (which is really behind almost all of the modern Monte Carlo simulation technology) is based on the proposal by Metropolis, Rosenbluth, Rosenbluth, Teller and Teller (J. Chem. Phys. 21, 1087 (1953)): we can modify existing configurations in small steps, Monte Carlo update steps. Thus, we obtain a Markov chain of configurations

      φ_1 ↦ φ_2 ↦ φ_3 ↦ ... ↦ φ_n .

  With suitably chosen updates and large enough n, the configuration φ_n is a "good" sample of the canonical probability p(φ).

  How does this work? Let us look at it in detail:

• Let f be an "update", a modification of an existing configuration φ: φ ↦ φ′. It can be a "small" update, of one spin variable or even a component of it (if φ is a vector), or an arbitrarily "large" one, for example of all of the d.o.f.'s (usually just a repeated application of small updates).

• The transition probability W_f(φ ↦ φ′) is the probability distribution of configurations φ′, obtained by one application of f on some (fixed) configuration φ. It naturally has the following properties:

      W_f(φ ↦ φ′) ≥ 0 ,   ∫ dφ′ W_f(φ ↦ φ′) = 1 .

  The latter property comes from the fact that f must take φ somewhere with probability 1.
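Result (2) can be illustrated with a short sketch (an illustrative example, not from the notes). Here we cheat and draw configurations directly from a known Boltzmann distribution — a single variable with H(φ) = φ²/2 and k_B T = 1, i.e. the unit Gaussian — and measure A(φ) = φ², whose exact thermal average is ⟨⟨A⟩⟩ = 1:

```python
import random

# Illustrative check of Eq. (2): configurations drawn with probability p(phi)
# give <A> = (1/N) sum_i A_i -> <<A>> as N grows.  Target: unit Gaussian
# (H = phi^2/2, k_B T = 1); observable A(phi) = phi^2, exact average 1.
random.seed(0)

for N in (100, 10000, 1000000):
    est = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(N)) / N
    print(N, est)   # approaches <<A>> = 1 as N grows
```

Every sampled A_i is of order 1 here, so each configuration contributes with the same order of magnitude — exactly the situation importance sampling is designed to create.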
• We can also apply the update f to probability distributions p(φ):

      p(φ) ↦ p′(φ′) ≡ ∫ dφ p(φ) W_f(φ ↦ φ′) .

  This follows directly from the fact that we can apply f to an ensemble of configurations from the distribution p(φ).

• What requirements must a "good" update f fulfill?

  A) f must preserve the equilibrium (Boltzmann) distribution p_eq(φ) ∝ exp[−H(φ)/k_B T]:

      ∫ dφ W_f(φ ↦ φ′) p_eq(φ) = p_eq(φ′) .

  In other words, p_eq(φ) is a fixed point of f.

  B) f must be ergodic: starting from any configuration, repeated application of f brings us arbitrarily close to any other configuration. This implies that ∫ dφ W_f(φ ↦ φ′) > 0 for any φ′.

• These two simple and rather intuitive properties are sufficient to guarantee the following two important results:

  I) Any initial ensemble approaches the equilibrium canonical ensemble p_eq as f is applied to it ("thermalization").

  II) If we sum up the distribution of all of the configurations in a single Markov chain φ_0 ↦ φ_1 ↦ φ_2 ↦ φ_3 ..., the distribution approaches p_eq as the number of configurations → ∞.

• Proof: let p(φ) be the probability distribution of the initial ensemble of configurations. After applying f once to all configs in the ensemble, we obtain the distribution p′(φ′). Let us now look at how the norm ||p − p_eq|| evolves:

      ||p′ − p_eq|| ≡ ∫ dφ′ |p′(φ′) − p_eq(φ′)|
                    = ∫ dφ′ | ∫ dφ W_f(φ ↦ φ′) (p(φ) − p_eq(φ)) |
                    ≤ ∫ dφ′ ∫ dφ W_f(φ ↦ φ′) |p(φ) − p_eq(φ)| = ||p − p_eq||

  (the second line uses the fixed-point property A; the third uses the triangle inequality and the normalization of W_f). Thus, we get closer to equilibrium when f is applied. Nevertheless, this is not sufficient to show that we really get there; it might happen that ||p − p_eq|| reaches a plateau without going to zero. This would mean that there exists another fixed point ensemble besides the equilibrium one, i.e. an ensemble which is preserved by f. However, ergodicity forbids this: let us assume that p_fix is another fixed point, and let ||p_fix − p_eq|| > 0.
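The contraction property ||p′ − p_eq|| ≤ ||p − p_eq|| is easy to check numerically in a finite state space. The following sketch (an illustrative construction, not from the notes) builds a random 4-state stochastic matrix W, finds its equilibrium distribution, and verifies that the L1 distance to equilibrium never increases under repeated application of W:

```python
import numpy as np

# Illustrative 4-state check of the contraction bound ||p' - p_eq|| <= ||p - p_eq||.
rng = np.random.default_rng(0)

# Random transition matrix with columns summing to 1: p'_i = sum_j W[i, j] p[j].
W = rng.random((4, 4))
W /= W.sum(axis=0)

# Equilibrium distribution = eigenvector of W with eigenvalue 1 (Perron-Frobenius).
vals, vecs = np.linalg.eig(W)
p_eq = np.real(vecs[:, np.argmax(np.real(vals))])
p_eq /= p_eq.sum()

p = np.array([1.0, 0.0, 0.0, 0.0])   # far-from-equilibrium starting ensemble
dists = []
for _ in range(10):
    dists.append(np.abs(p - p_eq).sum())
    p = W @ p

# The L1 distance to equilibrium is non-increasing step by step.
assert all(a >= b - 1e-12 for a, b in zip(dists, dists[1:]))
print(dists[0], dists[-1])
```

Because this particular W is strictly positive (every transition is allowed), it is also ergodic, so the distance in fact decays to zero rather than reaching a plateau — mirroring the argument in the proof above.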
Then we can split the φ's into two non-empty sets: set A has those configurations where p_fix(φ) − p_eq(φ) ≥ 0, and set B the rest. Because of ergodicity, there must be some configurations φ′ for which W_f(φ ↦ φ′) is non-zero both for some φ in set A and for some other φ in set B. Thus, on the 3rd line above the ≤ becomes a strict <, and p_fix cannot be a fixed point.

4.4 Detailed balance

The standard update algorithms satisfy the following detailed balance condition:

      W_f(φ ↦ φ′) / W_f(φ′ ↦ φ) = p_eq(φ′) / p_eq(φ) = e^{(H(φ) − H(φ′))/k_B T} .

Obviously, p_eq(φ) is a fixed point of this update. Thus, (I) detailed balance and (II) ergodicity are sufficient for a correct Monte Carlo update. Detailed balance is not a necessary condition; there may be updates which do not satisfy detailed balance. But it is then much more difficult to prove that such updates preserve the Boltzmann distribution. Thus, all of the commonly used update algorithms satisfy detailed balance. The physical interpretation of detailed balance is the microscopic reversibility of the update algorithm.

4.5 Common transition probabilities

The detailed balance condition is a restriction on W_f; however, it is still very general. For concreteness, let us now consider an update step of one variable (a lattice spin, for example). Some of the most common choices for W_f are (all of these satisfy detailed balance):

• Metropolis (Metropolis et al., 1953):

      W_f(φ ↦ φ′) = C × { exp[−δH/k_B T]   if δH > 0
                        { 1                  otherwise ,

  where δH ≡ H(φ′) − H(φ) is the change in energy, and C is a normalization constant.

• Glauber:

      W_f(φ ↦ φ′) = (C/2) [1 − tanh(δH/2k_B T)] = C / (1 + exp[δH/k_B T]) .

• Heat bath:

      W_f(φ ↦ φ′) = C exp[−H(φ′)/k_B T] .

  This does not depend on the old φ at all!

• Overrelaxation: The standard overrelaxation as presented in the literature is a special non-ergodic update. It assumes that the energy functional H has a symmetry with respect to some reflection in the configuration space.
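The Metropolis rule above is simple to implement for a single real variable. The following sketch (an illustrative example, not from the notes) samples H(φ) = φ²/2 with k_B T = 1, i.e. the unit Gaussian, using a symmetric random-shift proposal and the acceptance probability min(1, exp(−δH)):

```python
import math
import random

# Illustrative single-variable Metropolis update for H(phi) = phi^2/2, k_B T = 1,
# so the target distribution is the unit Gaussian p(phi) ~ exp(-phi^2/2).
random.seed(2)

def metropolis_step(phi, step=1.0):
    phi_new = phi + random.uniform(-step, step)   # symmetric proposal
    dH = 0.5 * phi_new**2 - 0.5 * phi**2
    # accept with probability min(1, exp(-dH)); otherwise keep the old phi
    if dH <= 0 or random.random() < math.exp(-dH):
        return phi_new
    return phi

phi, chain = 5.0, []          # deliberately far-from-equilibrium start
for i in range(200000):
    phi = metropolis_step(phi)
    if i > 1000:              # discard the thermalization phase
        chain.append(phi)

mean = sum(chain) / len(chain)
var = sum(x * x for x in chain) / len(chain) - mean**2
print(f"mean ~ {mean:.3f}, variance ~ {var:.3f}")   # expect ~0 and ~1
```

Note that the symmetric proposal makes the proposal densities cancel in the detailed-balance ratio, so only the Boltzmann factors of the old and new configurations matter — which is why the unknown normalization Z never enters.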
For example, there may exist some α so that

      H(α − φ) = H(φ)

for any φ, so that p(φ) = p(φ′). Then the reflection ("overrelaxation") is

      φ ↦ φ′ = α − φ .

This satisfies detailed balance, because H(φ) and p(φ) do not change. However, it obviously is not ergodic! One has to mix overrelaxation with other updates to achieve ergodicity. Nevertheless, overrelaxation is a very useful update, because it generally reduces autocorrelations better than other updates (to be discussed).

• Generalized overrelaxation: It is possible to generalize the overrelaxation update to a non-symmetric case. This is not discussed in the standard literature, but the method is often useful. Thus, let us now define the "overrelaxation" update through the transfer function

      W_f(φ ↦ φ′) = δ(φ′ − F(φ)) ,

i.e. the update is deterministic: φ ↦ φ′ = F(φ). The function F is chosen so that (with p ∝ exp[−H(φ)/k_B T])

      A) p(φ) dφ = p(φ′) dφ′  ⇒  |dφ′/dφ| = |dF(φ)/dφ| = exp[ (H(F(φ)) − H(φ)) / k_B T ] ,

      B) F[F(φ)] = φ .
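Conditions A) and B) can be verified for a concrete non-symmetric case. The following sketch (an illustrative example, not from the notes) takes the exponential distribution p(φ) = e^(−φ), φ > 0, i.e. H(φ) = φ with k_B T = 1. The deterministic map F(φ) = −ln(1 − e^(−φ)) reflects the cumulative distribution u = 1 − e^(−φ) to 1 − u, so it is an involution that preserves the equilibrium ensemble:

```python
import math
import random

# Illustrative generalized overrelaxation for p(phi) = exp(-phi), phi > 0
# (H(phi) = phi, k_B T = 1): F reflects the CDF u = 1 - e^-phi to 1 - u.
def F(phi):
    return -math.log(1.0 - math.exp(-phi))

# B) F is an involution: F(F(phi)) = phi.
for phi in (0.1, 0.7, 2.5):
    assert abs(F(F(phi)) - phi) < 1e-12

# A) Jacobian condition |dF/dphi| = exp[H(F(phi)) - H(phi)], via finite difference.
phi, eps = 0.8, 1e-6
jac = abs(F(phi + eps) - F(phi - eps)) / (2 * eps)
assert abs(jac - math.exp(F(phi) - phi)) < 1e-6

# The map preserves the ensemble: exponential samples stay exponential.
random.seed(3)
xs = [random.expovariate(1.0) for _ in range(100000)]
ys = [F(x) for x in xs]
print(sum(xs) / len(xs), sum(ys) / len(ys))   # both means should be ~1
```

As in the symmetric case, this update is deterministic and non-ergodic on its own (it moves along a fixed one-dimensional orbit), so in practice it would be mixed with an ergodic update such as Metropolis.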