
University of Rhode Island DigitalCommons@URI

Physics Faculty Publications

2000

Monte Carlo Computation of Correlation Times of Independent Relaxation Modes at Criticality

M. P. Nightingale, University of Rhode Island, [email protected]

H. W. J. Blöte


Terms of Use: All rights reserved under copyright.

Citation/Publisher Attribution: Nightingale, M. P., & Blöte, H. W. J. (2000). Monte Carlo computation of correlation times of independent relaxation modes at criticality. Physical Review B, 62(2), 1089-1101. doi: 10.1103/PhysRevB.62.1089. Available at: http://dx.doi.org/10.1103/PhysRevB.62.1089

PHYSICAL REVIEW B, VOLUME 62, NUMBER 2, 1 JULY 2000-II

Monte Carlo computation of correlation times of independent relaxation modes at criticality

M. P. Nightingale, Department of Physics, University of Rhode Island, Kingston, Rhode Island 02881

H. W. J. Blöte, Faculty of Applied Sciences, Delft University, P.O. Box 5046, 2600 GA Delft, The Netherlands, and Lorentz Institute, Leiden University, Niels Bohrweg 2, P.O. Box 9506, 2300 RA Leiden, The Netherlands

(Received 13 January 2000)

We investigate aspects of the universality of Glauber critical dynamics in two dimensions. We compute the dynamic exponent z and numerically corroborate its universality for three different models in the static Ising universality class and for five independent relaxation modes. We also present evidence for universality of amplitude ratios, which shows that, as far as dynamic behavior is concerned, each model in a given universality class is characterized by a single nonuniversal metric factor which determines the overall time scale. This paper also discusses in detail the variational and projection methods that are used to compute relaxation times with high accuracy.

I. INTRODUCTION

Critical-point behavior is a manifestation of power-law divergences of the correlation length and the correlation time. The power laws that describe the divergence of the correlation length on approach of the critical point are expressed in terms of critical exponents that depend on the direction of this approach, which may, e.g., be ordering-field like or temperature like. The exponents describing the singularities in thermodynamic quantities can be expressed in terms of the same exponents. In addition to the exponents defining these power laws, another critical exponent, viz., the dynamic exponent z, is required for the singularities in the dynamics. This exponent z is defined by the relationship that holds between the correlation length ξ and the correlation time τ, namely, τ ∝ ξ^z.

One of the directions along which one can approach the critical singularity is the finite-size direction; i.e., one increases the system size L while keeping the independent thermodynamic variables at their infinite-system critical values. In this case, ξ ∝ L, so that τ ∝ L^z. This relation has been used extensively to obtain the dynamic exponent z from finite-size calculations.

In this paper we deal with universality of dynamic critical-point behavior. One would not expect systems in different static universality classes to have the same dynamic exponents, and even within the same static universality class, different dynamics may have different exponents. For instance, in the case of the Ising model, Kawasaki dynamics, which satisfies a local conservation law,^1 has a larger value of z than Glauber dynamics,^2 in which such a conservation law is absent. Also the introduction of nonlocal spin updates, as realized, e.g., in cluster algorithms, is known to lead to different dynamic universal behavior.^{3-5} Conservation laws and nonlocal updates tend to have a large effect on the numerical value of the dynamic exponents, but until fairly recently, numerical resolution of the expected differences between dynamic exponents of systems in different static universality classes, for dynamics with local updates, has been elusive. This is caused by the difficulty of obtaining the required accuracy in estimates of the dynamic critical exponent. Under these circumstances it is evident that only limited progress has been made with respect to the interesting questions regarding dynamic universality classes.

In this paper we present a detailed exposition of a method of computing dynamic exponents with high accuracy.^{6,7} We consider single spin-flip Glauber dynamics. This is defined by a Markov matrix, and computation of the correlation time is viewed here as an eigenvalue problem, since correlation times can be obtained from the subdominant eigenvalues of the Markov matrix.

If a thermodynamic system is perturbed out of equilibrium, different thermodynamic quantities relax back at different rates. More generally, there are infinitely many independent relaxation modes for a system in the thermodynamic limit. Let us label the models within a given universality class by means of κ, and denote by τ_{Liκ} the relaxation time of relaxation mode i of a system of linear size L. In this paper we present strong numerical evidence that, as indeed theory suggests, at criticality the relaxation times have the following factorization property:

τ_{Liκ} ≈ m_κ A_i L^z,   (1)

where m_κ is a nonuniversal metric factor, which differs for different representatives of the same universality class, as indicated; A_i is a universal amplitude, which depends on the mode i; and z is the universal dynamic exponent introduced above.

While the relaxation time of the slowest relaxation mode is obtained from the second-largest eigenvalue of the Markov matrix, lower-lying eigenvalues yield the relaxation times of faster modes. To compute these, we construct, employing a Monte Carlo method, variational approximants for several eigenvectors. These approximants are called optimized trial vectors. The corresponding eigenvalues can then be estimated by evaluating, with Monte Carlo techniques, the overlaps of these trial vectors and the corresponding matrix elements of the Markov matrix in the truncated basis spanned by these optimized trial vectors.
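For illustration only (the numbers below are synthetic and hypothetical, not results from this paper), the finite-size relation τ ∝ L^z mentioned above turns the determination of z into a straight-line fit on a log-log scale:

```python
import numpy as np

# Synthetic illustration: if tau_L = A * L**z, then z is the slope
# of log(tau) versus log(L); z_true and A are hypothetical values.
z_true, A = 2.17, 0.8
L = np.array([8, 12, 16, 24, 32], dtype=float)
tau = A * L ** z_true

# Least-squares fit of log(tau) = z*log(L) + log(A)
z_fit, logA = np.polyfit(np.log(L), np.log(tau), 1)
```

In practice the fit must also account for corrections to scaling, which is why the paper's actual finite-size analysis is considerably more involved.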


It should be noted that both the optimization scheme and the evaluation of these matrix elements depend critically on the fact that the Markov matrix is sparse. That is, the number of configurations accessible from any given configuration is equal to the number of sites only, rather than the number of possible spin configurations.

Given such fixed trial vectors, this approach has the advantage of simplicity and high statistical accuracy, but the disadvantage is that the results are subject to systematic, variational errors, which vanish only in the ideal limit where the variational vectors become exact eigenvectors or span an invariant subspace of the Markov matrix. Since this condition is rarely satisfied in cases of practical interest, a projection Monte Carlo method is then used to reduce the systematic error, but this is at the expense of an increase of the statistical errors. The method we use in this paper is a combination and generalization of the work of Umrigar et al.^8 and that of Ceperley and Bernu.^9

To summarize, the Monte Carlo method discussed here consists of two phases. In the first phase, trial vectors are optimized. The ultimate, yet unattainable, goal of this phase is to construct exact eigenvectors. In this phase of the computation, very small Monte Carlo samples are used, consisting typically of no more than a few thousand spin configurations. In the second phase, one performs a standard Monte Carlo computation in which one reduces statistical errors by increasing the length of the computation rather than the quality of the variational approximation.

The computed correlation times, derived from the partial solution of the eigenvalue problem as sketched above, are used in a finite-size analysis to compute the dynamic critical exponent z. We verify its universality for several models in the static universality class of the two-dimensional Ising model. We also address another manifestation of dynamic universality. As was mentioned, in addition to the usual static critical exponents, there is only one new exponent that governs the leading singularities of critical dynamics, viz., z. Similarly, one would expect that, within the context of Glauber dynamics, the description of time-dependent critical-point amplitudes requires only a single nonuniversal metric factor to determine the time scale of each different model within a given universality class. Our results corroborate this idea, which is the immediate generalization to critical dynamics of work on static critical phenomena by Privman and Fisher.^{10}

In this paper we apply the techniques outlined above to three different two-dimensional Ising models subject to Glauber-like spin dynamics. These models are defined on a simple quadratic lattice of size L×L with periodic boundary conditions. The Hamiltonian H, defined on a general spin configuration S = (s_1, s_2, ...), is given by

H(S)/kT = −K Σ_{⟨ij⟩} s_i s_j − K′ Σ_{[kl]} s_k s_l,   (2)

where the first summation is over all nearest-neighbor pairs of sites of the square L×L lattice, the second summation is over all next-nearest-neighbor pairs, and the Ising variables s_i, ..., s_l assume the values ±1. Periodic boundaries are used throughout. In particular, we focus on models described by three values of the ratio β = K′/K, namely β = −1/4, 0 (nearest-neighbor model), and 1 (equivalent-neighbor model). The nonplanar models for β ≠ 0 are not exactly solvable and their critical points are known only approximately. Yet it was demonstrated to a high degree of numerical accuracy that they belong to the static Ising universality class.^{11,12} For the nearest-neighbor model the critical coupling is K = (1/2) ln(1+√2); for the other two models the estimated critical points are K = 0.190 192 680 7(2) for β = 1 and K = 0.697 220 7(2) for β = −1/4.^{12,13}

We use heat-bath dynamics with random site selection. The single-spin-flip dynamics is determined by the Markov matrix P defined as follows. The element P(S′,S) is the transition probability of going from configuration S to S′. If S and S′ differ by more than one spin, P(S′,S) = 0. If the two configurations differ by precisely one spin,

P(S′,S) = (1/2L²) {1 − tanh[(H(S′) − H(S))/2kT]},   (3)

where L² is the total number of spins. The diagonal elements P(S,S) follow from the conservation of probability,

Σ_{S′} P(S′,S) = 1,   (4)

where S′ runs over all 2^{L²} possible spin configurations.

We denote the probability of finding spin configuration S at time t by ρ_t(S). By construction, the stationary state of the Markov process is the equilibrium distribution

ρ_∞(S) = exp[−H(S)/kT]/Z ≡ ψ_B(S)²/Z,   (5)

where the normalization factor Z is the partition function. The dynamical process defined by Eq. (3) is constructed so as to satisfy detailed balance, which is equivalent to the statement that the matrix P̂ with elements

P̂(S′,S) ≡ P(S′,S) ψ_B(S)/ψ_B(S′)   (6)

is symmetric. Therefore the eigenvalues of P are real. The Markov matrix determines the time evolution of ρ_t(S), i.e.,

ρ_{t+1}(S) = Σ_{S′} P(S,S′) ρ_t(S′).   (7)

The simultaneous probability ρ_{t′,t′+t}(S,S′) that the system is in state S at time t′ and in state S′ at time t′+t is

ρ_{t′,t′+t}(S,S′) = P^t(S′,S) ρ_{t′}(S),   (8)

where P^t(S′,S) denotes the (S′,S) element of the tth power of the matrix P. For sufficiently large times t′, one may take ρ_{t′}(S) = ρ_∞(S).
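A minimal sketch of this dynamics for the nearest-neighbor model (β = 0); not the paper's production code. The factor 1/(2L²) in Eq. (3) is realized here as random site selection (probability 1/L² per site) combined with a flip probability (1/2){1 − tanh[(H(S′) − H(S))/2kT]}:

```python
import numpy as np

def heat_bath_sweep(spins, K, rng):
    """One sweep (L^2 single-spin updates) of the heat-bath dynamics
    of Eq. (3) for the nearest-neighbor model, dimensionless coupling K."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)        # random site selection
        # sum over the four nearest neighbors with periodic boundaries
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * K * spins[i, j] * nn          # [H(S') - H(S)]/kT
        # flip probability (1/2){1 - tanh(dE/2)}, Eq. (3)
        if rng.random() < 0.5 * (1.0 - np.tanh(0.5 * dE)):
            spins[i, j] *= -1

rng = np.random.default_rng(1)
L = 16
K = 0.5 * np.log(1.0 + np.sqrt(2.0))             # critical coupling, beta = 0
spins = rng.choice(np.array([-1, 1]), size=(L, L))
for _ in range(100):
    heat_bath_sweep(spins, K, rng)
```

Because tanh is odd, this update rule satisfies detailed balance with respect to the Boltzmann distribution, which is the content of Eqs. (5) and (6).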

The autocorrelation function C_A(t) of an observable A, defined as the average with respect to time t′ of A(t′)A(t+t′), can then equivalently be written as the ensemble average ⟨A(t′)A(t′+t)⟩ for large t′. Thus

C_A(t) = lim_{t′→∞} Σ_S Σ_{S′} A(S) A(S′) ρ_{t′,t′+t}(S,S′),   (9)

where A(S) denotes the value of A in spin configuration S. After substitution of Eq. (8) and expansion of A(S)ρ_{t′}(S) in right-hand eigenvectors of the Markov matrix, it follows at once that the time-dependent correlation functions of a system of size L have the following form:

C_A(t) = Σ_i c_i (sgn λ_{Li})^t exp[−t/(L²τ_{Li})],   (10)

where the dependence on the specific model κ has been suppressed in denoting by τ_{Li} the relaxation times of the independent modes of the equilibration process. The τ_{Li} are determined by the eigenvalues of the Markov matrix. We denote these eigenvalues λ_{Li} (i = 0,1,2,...,2^{L²}−1) and order them so that 1 = λ_{L0} > |λ_{L1}| ≥ |λ_{L2}| ≥ ···. Note that conservation of probability implies that λ_{L0} = 1; by construction, the corresponding right-hand eigenvector is the Boltzmann distribution. The relaxation times are given by

τ_{Li} = −1/[L² ln|λ_{Li}|]   (i = 1,2,...).   (11)

The factor L² is inserted because, as usual, time is measured in units of one flip per spin, which corresponds to L² iterations of the process described by Eq. (7).

Note that the matrix P has the same symmetry properties as the Hamiltonian and the Boltzmann distribution. In addition to spin inversion, these symmetries include translations, reflections, and rotations of the L×L lattice. It follows that each eigenvector of P, as well as its associated relaxation mode, has distinct symmetry properties that can be characterized by a set of ``quantum numbers.'' For instance, the eigenvector associated with the second-largest eigenvalue is antisymmetric under spin inversion and invariant under translations, reflections, and rotations. It describes the relaxation of the total magnetization; this process, with relaxation time τ_{L1}, is thus the slowest relaxation mode contained in the stochastic matrix.
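Equations (3), (4), (6), and (11) can be checked end to end by exact diagonalization for a very small lattice. The sketch below (not code from the paper) builds the full 512×512 Markov matrix for the 3×3 nearest-neighbor model, symmetrizes it as in Eq. (6), and extracts the slowest relaxation time from the subdominant eigenvalue:

```python
import numpy as np

def markov_matrix(L, K):
    """Explicit single-spin-flip matrix of Eqs. (3)-(4) for the L x L
    nearest-neighbor model; exhaustive enumeration, so tiny L only."""
    n = L * L
    states = np.array([[1 - 2 * ((s >> k) & 1) for k in range(n)]
                       for s in range(2 ** n)])
    def energy(row):                     # H(S)/kT with periodic boundaries
        g = row.reshape(L, L)
        return -K * ((g * np.roll(g, 1, 0)).sum() + (g * np.roll(g, 1, 1)).sum())
    E = np.array([energy(r) for r in states], dtype=float)
    P = np.zeros((2 ** n, 2 ** n))
    for s in range(2 ** n):
        for k in range(n):               # one-spin-flip elements, Eq. (3)
            sp = s ^ (1 << k)
            P[sp, s] = 0.5 * (1.0 - np.tanh(0.5 * (E[sp] - E[s]))) / n
        P[s, s] = 1.0 - P[:, s].sum()    # conservation of probability, Eq. (4)
    return P, E

L = 3
K = 0.5 * np.log(1.0 + np.sqrt(2.0))     # critical coupling
P, E = markov_matrix(L, K)
psi_B = np.exp(-0.5 * E)                 # psi_B(S)^2 is the Boltzmann weight
Phat = P * psi_B[np.newaxis, :] / psi_B[:, np.newaxis]   # Eq. (6); symmetric
lam = np.sort(np.linalg.eigvalsh(Phat))[::-1]            # lam[0] = 1
tau_1 = -1.0 / (L ** 2 * np.log(abs(lam[1])))            # Eq. (11)
```

The numerical symmetry of Phat is a direct check of detailed balance, and tau_1 is the relaxation time of the magnetization mode in units of sweeps. The exponential growth of the state space is exactly why the paper resorts to variational and projection Monte Carlo methods for larger L.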
In this work, we restrict ourselves to relaxation modes that are invariant under the geometric symmetries of the spin lattice. However, in addition to eigenvectors that are symmetric under spin inversion, we include antisymmetric ones, so as to obtain the longest relaxation time. As a consequence of this restriction to geometric invariance, spatially nonhomogeneous relaxation processes fall outside the scope of this work.

By design, the stationary state of the Markov process is the equilibrium state

ρ_∞(S) = exp[−H(S)/kT]/Z ≡ ψ_B(S)²/Z,   (12)

where the normalization factor Z is the partition function. The dynamical process defined by Eq. (3) is constructed so as to satisfy detailed balance, which is equivalent to the statement that the matrix P̂ with elements

P̂(S′,S) ≡ [1/ψ_B(S′)] P(S′,S) ψ_B(S)   (13)

is symmetric.

The layout of the rest of this paper is as follows. In Sec. II we discuss the general principles of the method used in this paper. The expressions in this section feature exhaustive summation over all spin configurations, which renders them useless for practical computations. The Monte Carlo summation methods employed instead are discussed in Sec. III. Trial vectors, a vital ingredient of the method, are discussed in Sec. IV, and numerical results are discussed in Sec. V. Finally, Sec. VI contains a discussion of issues that remain to be addressed by future work.

II. EXACT SUMMATION

A. Variational approximation

Single eigenvector

In this section we discuss trial vector optimization, which is used to reduce the statistical errors in the Monte Carlo computation of eigenvalues of the Markov matrix. First, we review the case of a single eigenvalue,^{8,14,15} and then we generalize to optimization of multiple trial vectors. In this section, we discuss the exact expressions involving summation over all possible spin configurations. In cases of practical interest, these expressions cannot be evaluated as written; for their approximate evaluation one uses the Monte Carlo methods discussed in Sec. III.

A powerful method of optimizing a single, many-parameter trial vector, say |ψ_T⟩, is minimization of the variance of the configurational eigenvalue, which in the context of quantum Monte Carlo is called the local energy. That is, define ψ_T(S) = ⟨S|ψ_T⟩ for an arbitrary configuration S. We wish to satisfy the eigenvalue equation

ψ′_T(S) = λ ψ_T(S),   (14)

where the prime indicates matrix multiplication by P̂, i.e., f′(S) ≡ Σ_{S′} P̂(S,S′) f(S′) for any function f defined on the spin configurations. Even if ψ_T is not an eigenvector, one can define the configurational eigenvalue by

λ_c(S) = ψ′_T(S)/ψ_T(S)  if ψ_T(S) ≠ 0,  and  λ_c(S) = 0  otherwise.   (15)

If ψ_T is not an eigenvector, Eq. (14) gives an overdetermined set of equations for λ, for a given ψ_T and a sufficiently big set of configurations S. One can obtain a least-squares estimate of the eigenvalue λ by minimizing the squared residual of Eq. (14). This yields the usual variational estimate

λ̄(p) = ⟨ψ_T|P̂|ψ_T⟩ / ⟨ψ_T|ψ_T⟩ = [Σ_S ψ′_T(S) ψ_T(S)] / [Σ_S ψ_T(S)²] = [Σ_S λ_c(S) ψ_T(S)²] / [Σ_S ψ_T(S)²],   (16)

which is the average of the configurational eigenvalue λ_c.

The standard Rayleigh-Ritz variational method, which can be used for the largest eigenvalue, consists in maximization of λ̄(p) with respect to the parameters p. However, one can formulate a different optimization criterion as follows. The gradient of λ̄(p) with respect to ψ_T(S) is

∂λ̄(p)/∂ψ_T(S) = 2 [ψ′_T(S) − λ̄(p) ψ_T(S)] / Σ_{S′} ψ_T(S′)².   (17)

Clearly, this gradient vanishes for any eigenvector, and this suggests as an alternative optimization criterion minimization of the magnitude of the gradient of a normalized trial vector ψ_T. With respect to Eq. (14), this corresponds to minimization of the normalized squared residual

χ²(p) = Σ_S [ψ′_T(S) − λ̄(p) ψ_T(S)]² / Σ_S ψ_T(S)² = Σ_S [λ_c(S) − λ̄(p)]² ψ_T(S)² / Σ_S ψ_T(S)² = ⟨ψ_T|(P̂ − λ̄)²|ψ_T⟩ / ⟨ψ_T|ψ_T⟩,   (18)

which, as shown, equals the variance of the configurational eigenvalue.

B. Multiple eigenvectors

Minimization of χ²(p) is a valid criterion for any eigenvector, but if it is used without the equivalent of an orthogonalization procedure, one would in practice simply keep reproducing an approximation to the same eigenvector, the dominant one most of the time. Since orthogonalization is not easily implemented with Monte Carlo methods, we utilize straightforward generalizations of Eqs. (14) and (16) to deal with more than one eigenvalue and eigenvector. Equation (18) is a little problematic in this respect, as will become clear.

Suppose we start from a set of n trial vectors ψ_Ti (i = 0,1,...,n−1). We can then write Eq. (14) in matrix form:

ψ′_Ti(S) = Σ_{j=0}^{n−1} Λ̂_ij ψ_Tj(S).   (19)

As before, the prime on the left-hand side indicates matrix multiplication by P̂, and again Eq. (19) for all i and S forms an overdetermined set of equations for the matrix elements Λ̂_ij. These equations have no solution unless the n basis vectors ψ_Ti span an invariant subspace of the matrix P̂, which in nontrivial applications is of course never the case. Again, however, one can solve for the matrix elements Λ̂_ij in a least-squares sense. This yields

Λ̂ = P̂N̂⁻¹,   (20)

where

N̂_ij = Σ_S ψ_Ti(S) ψ_Tj(S) = ⟨ψ_Ti|ψ_Tj⟩   (21)

and

P̂_ij = Σ_S ψ′_Ti(S) ψ_Tj(S) = ⟨ψ_Ti|P̂|ψ_Tj⟩.   (22)

Note that although these matrix elements depend on the normalization of the |ψ_Ti⟩, the matrix Λ̂ is invariant under an overall change of normalization.

By diagonalization of the n×n matrix Λ̂ one obtains an approximate, partial eigensystem of the Markov matrix. More specifically, suppose that

Λ̂ = D⁻¹ diag(λ̃_0, ..., λ̃_{n−1}) D.   (23)

The eigenvalues λ̃_0 > λ̃_1 ≥ ··· ≥ λ̃_{n−1} of Λ̂ are variational lower bounds for the exact eigenvalues of the Markov matrix P, in the sense that λ̃_i ≤ λ_i,^{16} if the exact eigenvalues are numbered such that λ_0 > λ_1 ≥ ···, in contrast with the convention used in the discussion following Eq. (10). This property is a consequence of the interlacing property of the eigenvalues of symmetric matrices and their submatrices, also known as the separation theorem. Note that in denoting the eigenvalues we omit the index L indicating system size where this is not confusing. The approximate eigenvectors ψ̃_i are given by

ψ̃_i = Σ_{j=0}^{n−1} D_ij ψ_Tj,   (24)

which can be verified as follows: multiply Eq. (19) through by D_ki and sum on i to verify that ψ̃′_k is proportional to ψ̃_k. The expressions derived above are usually^9 derived by starting from the linear combinations given in this last equation. The D_ij are then treated as variational parameters, determined by requiring stationarity of the Rayleigh quotient. This yields the following equation for the D_ij:

Σ_j D_ij P̂_jk = λ̃_i Σ_j D_ij N̂_jk,   (25)

a generalized eigenvalue problem equivalent to the eigenvalue problem defined by Λ̂ of Eq. (20).

Next we discuss the generalization to more than one trial vector of the variance minimization of Eq. (18). In this context it is important to keep in mind that the variational approximation is invariant under replacement of the basis vectors by a nonsingular linear superposition: such a replacement yields a similarity transformation of Λ̂ and leaves the approximate eigenvalues and eigenvectors invariant.
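Equations (20)-(25) amount to a Rayleigh-Ritz computation in the span of the trial vectors. A minimal numerical sketch (a random symmetric matrix standing in for P̂; not the paper's code) illustrating that the resulting λ̃_i are lower bounds on the exact eigenvalues, as stated above:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 40, 3
A = rng.standard_normal((m, m))
Phat = (A + A.T) / 2                       # symmetric stand-in for P-hat
exact = np.sort(np.linalg.eigvalsh(Phat))[::-1]

# n imperfect trial vectors (rows): top eigenvectors plus noise
vecs = np.linalg.eigh(Phat)[1]
Psi = vecs[:, -n:].T + 0.05 * rng.standard_normal((n, m))

N = Psi @ Psi.T                            # overlap matrix, Eq. (21)
Pm = Psi @ Phat @ Psi.T                    # projected matrix, Eq. (22)

# Generalized eigenvalue problem (25), solved via N^(-1/2):
w, U = np.linalg.eigh(N)
Nm12 = U @ np.diag(1.0 / np.sqrt(w)) @ U.T
approx = np.sort(np.linalg.eigvalsh(Nm12 @ Pm @ Nm12))[::-1]
# Separation theorem: approx[i] <= exact[i] for each i
```

Solving via N^(-1/2) is one standard route; diagonalizing the nonsymmetric product Pm @ inv(N) of Eq. (20) gives the same eigenvalues.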
Of course, one would like to have an optimization criterion that shares this invariance. The squared residual of Eq. (19) fails in this respect: this sum is not even invariant under a simple rescaling of the basis functions ψ_Ti, and there is no obvious normalization comparable to the one used in Eq. (18).

One way to perform the optimization in an invariant way is, for each choice of the optimization parameters, to compute the linear combinations Σ_j V_ij|ψ_Tj⟩, where V is the n×n matrix such that VΛ̂V⁻¹ is diagonal. Each of these linear combinations defines a χ²_i via Eq. (18), and the parameters in the basis functions can then be optimized by minimization of these n sums of squares. One may define a convex sum of these χ²_i and optimize all parameters of all basis functions simultaneously with respect to this combined object function, or, as we did in our computations, one can perform the optimization iteratively, one vector at a time, for eigenvalues at increasing distance from the top of the spectrum.

Another approach that also yields invariant results is to perform the optimization by dividing the set of configurations into several subsets and computing a matrix Λ̂ for each subset. One can then minimize the variance of the eigenvalues over these subsets. This is the procedure we followed to produce the results reported in this paper.

We have not investigated which of the two procedures described above is superior. Both have a problem in common, namely, that for a wide class of variational basis vectors they give rise to a singular or nearly singular optimization problem. This is a consequence of the fact that the basis states are not unique, even if the eigenvalue problem has a unique solution. For the optimization problem this means that there are many almost equivalent solutions, a problem commonly encountered when one performs (nonlinear) least-squares parameter fits.

More specifically, if the basis vectors are such that a linear combination of trial vectors can be expressed exactly (or to good approximation) in the same functional form as the trial vectors themselves, then there is a gauge symmetry (or an approximate gauge symmetry) that yields a class of equivalent (or almost equivalent) solutions of the minimization problem. That is, if the |ψ_Ti⟩ are a solution, the Σ_j V_ij|ψ_Tj⟩ are an equivalent (or almost equivalent) solution for any V. This problem can be solved straightforwardly by fixing the gauge and performing the optimization subject to the constraint that Σ_j V_ij|ψ_Tj⟩ ∝ |ψ_Ti⟩. If the gauge symmetry holds only approximately, this additional constraint may produce a suboptimal solution.

C. Beyond the variational approximation

The eigenvalues obtained by the variational scheme discussed in the previous sections have a bias caused by the admixture of eigenvectors in the part of the spectrum that is being ignored. This variational bias can in principle be reduced arbitrarily, as follows.^9 Let us introduce generalized matrices with elements

N̂_ij(t) = ⟨ψ_Ti|P̂^t|ψ_Tj⟩   (26)

and

P̂_ij(t) = ⟨ψ_Ti|P̂^{t+1}|ψ_Tj⟩.   (27)

For t = 0 these expressions reduce to Eqs. (21) and (22). One can view the matrix elements for t > 0 as having been obtained by the substitution |ψ_Ti⟩ → P̂^{t/2}|ψ_Ti⟩. Expansion in the exact eigenvectors immediately shows that the spectral weights of ``undesirable'' eigenvectors with less dominant eigenvalues are reduced, so that the vectors P̂^{t/2}|ψ_Ti⟩ span a more nearly invariant subspace of P̂ than the original states. This process, however, becomes numerically unstable as t → ∞, since in that case all basis vectors of the same symmetry collapse onto the corresponding dominant state.

III. MONTE CARLO SUMMATION

Obviously, the summation over all spin configurations used in the expressions of the previous section can, in general, be done only for small systems. In this section, we discuss the Monte Carlo estimators of the expressions presented above. In principle, matrix multiplication involves summation over all configurations and therefore is not practically feasible. However, for the dynamics we consider in this paper the summation required for the matrix multiplication by P̂ in ⟨S|P̂|ψ_T⟩ is an exception, since for a given S there are only L² configurations S′ from which S can be reached with one or fewer spin flips, and these are the only configurations for which P(S,S′) does not vanish. For all other configuration sums a Monte Carlo method is used.

To produce a Monte Carlo estimate of χ²(p) as given in Eq. (18), sample M_c spin configurations S_α, with α = 1,...,M_c, from the Boltzmann distribution ψ_B(S_α)². This yields a Monte Carlo estimate of λ̄(p):

λ̄(p) ≈ Σ_α ψ̂′_T(S_α) ψ̂_T(S_α) / Σ_α ψ̂_T(S_α)²,   (28)

where

ψ̂_T(S_α) = ψ_T(S_α)/ψ_B(S_α)   (29)

and

ψ̂′_T(S_α) = ψ′_T(S_α)/ψ_B(S_α).   (30)

Similarly,

χ²(p) ≈ Σ_α [ψ̂′_T(S_α) − λ̄ ψ̂_T(S_α)]² / Σ_α ψ̂_T(S_α)².   (31)

Parameter optimization for a single vector is done by generating a sample of a few thousand configurations and subsequently varying the parameters p while keeping this sample fixed. The same applies to the optimization of more than one vector.
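The single-vector estimators (28)-(31) can be sketched with a small symmetric stand-in matrix; here psi_B is just a positive vector playing the role of the Boltzmann weights, and configurations are plain indices. For an exact eigenvector the estimate is independent of the sample and the residual vanishes, which is the zero-variance property exploited throughout this section (a sketch under these assumptions, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(1)
m = 12
A = rng.standard_normal((m, m))
Phat = (A + A.T) / 2                       # symmetric stand-in for P-hat
vals, vecs = np.linalg.eigh(Phat)
psi_B = np.abs(vecs[:, 0]) + 0.1           # positive stand-in "Boltzmann" vector

def mc_estimates(psi_T, sample):
    """Estimators (28) and (31) over sampled configuration indices."""
    psi_hat = psi_T / psi_B                # Eq. (29)
    psi_hat_p = (Phat @ psi_T) / psi_B     # Eq. (30)
    den = np.sum(psi_hat[sample] ** 2)
    lam_bar = np.sum(psi_hat_p[sample] * psi_hat[sample]) / den        # (28)
    chi2 = np.sum((psi_hat_p[sample] - lam_bar * psi_hat[sample]) ** 2) / den  # (31)
    return lam_bar, chi2

# draw configuration indices from the distribution psi_B(S)^2
p = psi_B ** 2 / np.sum(psi_B ** 2)
sample = rng.choice(m, size=500, p=p)

# for an exact eigenvector the estimate is sample independent and chi2 = 0
lam_bar, chi2 = mc_estimates(vecs[:, -1], sample)
```

For a trial vector that is not an eigenvector, chi2 is positive and measures the quality of the variational approximation, which is exactly the quantity minimized during parameter optimization.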

For the optimization of more than one vector, estimates of the required matrix elements N̂_ij and P̂_ij are computed as

N̂_ij ≈ Σ_α ψ̂_Ti(S_α) ψ̂_Tj(S_α) ≡ Ñ_ij   (32)

and

P̂_ij ≈ Σ_α ψ̂′_Ti(S_α) ψ̂_Tj(S_α) ≡ P̃_ij.   (33)

We attached tildes to the symbols on the right-hand sides of Eqs. (32) and (33) to indicate that the corresponding quantities are stochastic variables, which is important to keep in mind in the following discussion.

Since the matrix P̂ is symmetric, one might be inclined to symmetrize its estimator P̃_ij with respect to i and j. This symmetrization, however, destroys the zero-variance principle satisfied by the expressions as written. As mentioned before, the eigensystem of Λ̂ is obtained exactly and without statistical noise if the basis vectors ψ_Ti are linear combinations of n exact eigenvectors. In that ideal case, Λ̂ is uniquely determined by Eq. (19) even if the equation is applied only to a subset of configurations S. The same holds for a weighted subset, as represented by a Monte Carlo sample. Even though the matrices P̃ and Ñ themselves depend on the weights and on the subset, the factors responsible for statistical noise cancel in the product P̃Ñ⁻¹. To demonstrate this, we write the estimators in matrix form:

Ñ = Ψ̃Ψ̃†   (34)

and

P̃ = Ψ̃′Ψ̃†,   (35)

where Ψ̃ is a rectangular matrix with elements Ψ̃_iα = ψ̂_Ti(S_α), and Ψ̃′_iα = ψ̂′_Ti(S_α). Equation (19) in matrix form becomes

Ψ̃′ = Λ̂Ψ̃.   (36)

Clearly, if this last equation holds, P̃Ñ⁻¹ = Ψ̃′Ψ̃†(Ψ̃Ψ̃†)⁻¹ = Λ̂ without statistical noise, as announced.

We have assumed that one matrix multiplication by the Markov matrix can be done exactly; repeated multiplications rapidly become intractable. This is a problem for the computation of the matrix elements given in Eqs. (26) and (27). To obtain a statistical estimate of these matrix elements, one generates a time series with the Markov matrix P. One then exploits the fact that in the steady state of the Markov process, the relative probability of finding configurations S_1, S_2, ..., S_{t+1} in immediate succession is given by

P(S_{t+1}|S_t) ··· P(S_2|S_1) ψ_B(S_1)².   (37)

For a Monte Carlo run of length M_c, this property allows us to write

N̂_ij(t) = Σ_{S_0,...,S_t} ψ_Ti(S_t) P̂(S_t|S_{t−1}) ··· P̂(S_1|S_0) ψ_Tj(S_0)
        = Σ_{S_0,...,S_t} ψ̂_Ti(S_t) ψ̂_Tj(S_0) P(S_t|S_{t−1}) ··· P(S_1|S_0) ψ_B(S_0)²   (38)
        ≈ M_c⁻¹ Σ_{σ=1}^{M_c} ψ̂_Ti(S_{σ+t}) ψ̂_Tj(S_σ).   (39)

Similarly,

P̂_ij(t) = Σ_{S_0,...,S_{t+1}} ψ_Ti(S_{t+1}) P̂(S_{t+1}|S_t) ··· P̂(S_1|S_0) ψ_Tj(S_0)
        = Σ_{S_0,...,S_t} ψ̂′_Ti(S_t) ψ̂_Tj(S_0) P(S_t|S_{t−1}) ··· P(S_1|S_0) ψ_B(S_0)²   (40)
        ≈ (2M_c)⁻¹ Σ_{σ=1}^{M_c} [ψ̂′_Ti(S_{σ+t}) ψ̂_Tj(S_σ) + ψ̂′_Ti(S_σ) ψ̂_Tj(S_{σ+t})].   (41)

The first term in expression (41) follows immediately from expression (40); to obtain the second term, one has to use the time-reversal symmetry of a stochastic process that satisfies detailed balance, viz.,

P(S_{t+1}|S_t) ··· P(S_2|S_1) ψ_B(S_1)² = P(S_1|S_2) ··· P(S_t|S_{t+1}) ψ_B(S_{t+1})².   (42)

Again, these estimators satisfy the zero-variance principle mentioned above, as long as the expressions are used as written, i.e., without symmetrization with respect to i and j.

FIG. 1. Prefactor ψ_1/ψ_B of the first subdominant eigenvector of the Markov matrix vs total magnetization M for a 3×3 nearest-neighbor Ising (β = 0) lattice. For each value of M, ψ_1(S)/ψ_B(S) is plotted for all configurations S with M(S) = M. All 512 points are plotted, but because of symmetries many coincide.
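The cancellation P̃Ñ⁻¹ = Λ̂ can be verified numerically with a stand-in symmetric matrix: when the trial vectors span an exact invariant subspace, an arbitrary weighted subset of "configurations" returns the exact eigenvalues with zero statistical noise. A sketch under those assumptions (not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 30, 3
A = rng.standard_normal((m, m))
Phat = (A + A.T) / 2
vals, vecs = np.linalg.eigh(Phat)

# trial vectors spanning an exact invariant subspace: an arbitrary
# nonsingular mixture V of the top n eigenvectors
V = rng.standard_normal((n, n)) + 2.0 * np.eye(n)
Psi = V @ vecs[:, -n:].T                   # Psi[i, S] plays psi-hat_Ti(S)
Psi_p = Psi @ Phat                         # primed vectors (Phat symmetric)

# an arbitrary weighted subset of configurations, standing in for a
# Monte Carlo sample; the noise factors cancel in Ptilde @ inv(Ntilde)
idx = rng.choice(m, size=8, replace=False)
w = rng.random(8) + 0.5
Psi_t, Psi_tp = Psi[:, idx] * w, Psi_p[:, idx] * w
Ntilde = Psi_t @ Psi_t.T                   # Eq. (34)
Ptilde = Psi_tp @ Psi_t.T                  # Eq. (35)
Lam = Ptilde @ np.linalg.inv(Ntilde)       # equals Lambda-hat exactly
est = np.sort(np.linalg.eigvals(Lam).real)
```

Symmetrizing Ptilde by hand would spoil this exact cancellation, which is the point made in the text about keeping the estimators as written.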

FIG. 2. Prefactor ψ_1/ψ_B of the first subdominant eigenvector of the Markov matrix vs total magnetization M for a 4×4 nearest-neighbor Ising (β = 0) lattice.

FIG. 4. Prefactor ψ_2/ψ_B of the second subdominant eigenvector of the Markov matrix vs total magnetization M for a 5×5 nearest-neighbor Ising (β = 0) lattice.

IV. TRIAL VECTORS

As we mentioned, the form of the trial vectors used in these calculations is a major factor determining the statistical accuracy of the results. It is not too difficult to make an initial guess for the form of the eigenvector corresponding to the second-largest eigenvalue of the Markov matrix. Numerically exact calculations for small systems show that this eigenvector is antisymmetric under spin inversion, which is a manifestation of the longevity of fluctuations of the magnetization and not a peculiarity of small systems.

This suggests the following initial approximation of the eigenvector belonging to the second-largest eigenvalue, the first subdominant eigenvector of the symmetrized Markov matrix P̂:

ψ_T1(S) = m ψ_B(S),   (43)

where m is the average magnetization. Figures 1-3 are plots of ψ_1(S)/ψ_B(S) versus the total magnetization M = L²m for the exact eigenvector ψ_1 computed for 3×3, 4×4, and 5×5 nearest-neighbor Ising systems. For all three, the prefactor m in Eq. (43) clearly captures a significant part of the truth, but there are two shortcomings. First of all, there is scatter, which indicates that ψ_1(S)/ψ_B(S) is a function of more than just the magnetization. Second, the ``curve'' is nonlinear. The latter problem can be cured quite easily by replacing m in Eq. (43) by an odd polynomial m(1 + a_2m² + ···), with coefficients a_k to be determined variationally.

Similarly, computations for small systems (see Sec. V A for further details) suggest that the second subdominant eigenvector is even under spin inversion, as illustrated in Fig. 4. A trial vector of this form is readily constructed by replacing m on the right-hand side of Eq. (43) by a polynomial even in m. It turns out that the general picture as just described is largely independent of L.

More in general, the plots shown in Figs. 1-7 strongly suggest that the subdominant eigenvectors of the Markov matrix P, subject to the imposed spin, rotation, and translation symmetries, are reasonably approximated by the Boltzmann distribution multiplied by a mode-dependent function of the magnetization. As can be seen in Figs. 1-7, the number of nodes of this prefactor increases by 1 as one steps down the spectrum, but it is also clear that, especially for the less dominant eigenvectors, the residual variance is significant. To begin to address the problem of the scatter and to improve the trial vectors systematically, it is necessary to identify other important variables besides the magnetization and to incorporate them in the trial vector.

␺ ␺ FIG. 3. Prefactor 1 / B of the first subdominant eigenvector of the Markov matrix vs total magnetization M fora5ϫ5 nearest- ␤ϭ ␺ ␺ neighbor Ising ( 0) lattice. Although up to 400 configuration FIG. 5. Prefactor 3 / B of the third subdominant eigenvector collapse onto a single point, crowding prevents individual resolu- of the Markov matrix vs total magnetization M fora5ϫ5 nearest- tion of most data points. neighbor Ising (␤ϭ0) lattice. 1096 M. P. NIGHTINGALE AND H. W. J. BLO¨ TE PRB 62

Ϯ Ј Ϯ ␺ ͑S͒ϭ␺ ͑S͒ͫ a ͑m͒ϩ ͚ a ͑m͒m m ␦ ϩ T B k1k2 k1 k2 k1 k2,0 k1 ,k2

Ј ϯ ϩ ͚ a ͑m͒m m m ␦ ϩ ϩ k1k2k3 k1 k2 k3 k1 k2 k3,0 k1 ,k2 ,k3

ϩ ...ͬ. ͑45͒

The primes attached to the summation signs indicate that ϭ terms with ki (0,0) are excluded. The coefficients Ϯ aϮ, a ,... are polynomials in m, which are either odd k1k2 or even under spin inversion and are to be chosen according to the desired symmetry. Rotation and reflection symmetries ␺ ␺ of the lattice are imposed by equating coefficients of the FIG. 6. Prefactor 4 / B of the fourth subdominant eigenvector of the Markov matrix vs total magnetization M fora5ϫ5 nearest- appropriate monomials in m. neighbor Ising (␤ϭ0) lattice. The results reported in Ref. 6 were obtained using a more complicated version of Eq. ͑45͒, namely, identify other important variables besides the magnetization Ϫ Ј Ϫ ␺ ͑S͒ϭ␺ ͑S͒ͫ a ͑m͒ϩ ͚ a ͑m͒m m ␦ ϩ and to incorporate them in the trial vector. We tried multi- T B k k k k k k ,0 k ,k 1 2 1 2 1 2 spin correlations involving nearby spins but after consider- 1 2 able failed experimentation we established that long- Ј ϩ ϩ ͚ a ͑m͒m m m ␦ ϩ ϩ ϩ ...ͬ k k k k k k ,0 wavelength fluctuations of the magnetization are the suitable k1k2k3 1 2 3 1 2 3 k1 ,k2 ,k3 variables. This is reasonable when one compares, e.g., the ␺ ␺ eigenvalue equations for 0(SЈ) and 1(SЈ) and realizes ϩ Ј ϩ ϫͫ a ͑m͒ϩ ͚ a ͑m͒m m ␦ ϩ k1k2 k1 k2 k1 k2,0 that the eigenvalues differ only very little from unity except k1 ,k2 for very small systems. We therefore used the Fourier com- ponents of the spin configuration, which are defined by Ј Ϫ ϩ ͚ a ͑m͒m m m ␦ ϩ ϩ k1k2k3 k1 k2 k3 k1 k2 k3,0 k1 ,k2 ,k3

L L 2␲i ϩ ͬ. ͑46͒ ϭ Ϫ2 ͫ ͑ ϩ ͒ͬ ͑ ͒ ••• mk L ͚ ͚ exp k1l1 k2l2 sl l , 44 ϭ ϭ 1 2 l1 1 l2 1 L In those calculations also the coupling constant appearing in the Boltzmann factor was treated as a variational parameter, where kϭ(k ,k ) with 0рk ,k ϽL, and s denotes the but it turned out that the optimal value of this parameter was 1 2 1 2 l1l2 spin at lattice site (l ,l ). Note that mϵm . If we restrict indistinguishable from the critical coupling. It does not seem 1 2 0,0 ͑ ͒ ourselves to eigenvectors that are translationally invariant, that the more complicated form of expression 46 resulted in the arguments presented in the previous paragraph yield the a major improvement, but we did not perform a systematic following trial odd or even vectors: comparison of these trial vectors. The coefficients in the trial vector are treated as varia- tional parameters. As in all nonlinear fitting problems it is important to use parameters parsimoniously, and to do so one has to establish a hierarchy among these parameters. The scheme we used was to iterate the following step: ͑a͒ sys- tematically add terms of increasing degree in m; ͑b͒ when this saturates, increase the degree of terms with products of Þ mk with mk (0,0). The effectivity of this variational approach using low- momentum Fourier components, as described here, becomes apparent when one compares the variational eigenvalues with the exact numerical ones. For instance, the difference in the case of the second eigenvalue of the Lϭ5 nearest- neighbor model was only 2ϫ10Ϫ7.
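With the +i sign convention and the L⁻² prefactor of Eq. (44), the Fourier components m_k coincide with a standard two-dimensional inverse FFT of the spin array. The following sketch (an illustration, not the authors' code) verifies this correspondence for a random ±1 configuration:

```python
import numpy as np

# Fourier components of an L x L spin configuration, Eq. (44):
#   m_k = L^{-2} sum_{l1,l2} exp[(2*pi*i/L)(k1*l1 + k2*l2)] s_{l1 l2}.
# The sum over l = 1..L equals the sum over l = 0..L-1 by periodicity,
# and the +i sign with the L^{-2} prefactor is exactly numpy's inverse FFT.
def fourier_components(s):
    return np.fft.ifft2(s)

rng = np.random.default_rng(0)
L = 5
s = rng.choice([-1, 1], size=(L, L))
m = fourier_components(s)

# m_{0,0} is the average magnetization m of the configuration.
assert np.isclose(m[0, 0].real, s.mean())

# Cross-check one component against the explicit double sum of Eq. (44).
k1, k2 = 1, 2
explicit = sum(
    np.exp(2j * np.pi * (k1 * l1 + k2 * l2) / L) * s[l1, l2]
    for l1 in range(L)
    for l2 in range(L)
) / L**2
assert np.isclose(m[k1, k2], explicit)
```

In the trial vectors of Eqs. (45) and (46), only the low-momentum components of this array enter, combined into translation-invariant products by the Kronecker-delta constraints.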

FIG. 7. Prefactor ψ_5/ψ_B of the fifth subdominant eigenvector of the Markov matrix vs total magnetization M for a 5×5 nearest-neighbor Ising (β=0) lattice.

V. NUMERICAL RESULTS

A. Exact eigenvectors for small systems

The full, symmetric Markov matrix P̂ for an L×L Ising model is a 2^{L²}×2^{L²} matrix, so that exact numerical calculations are possible only for very small systems; see, e.g., results for L ≤ 4 in Ref. 18. In the present work, we performed such exact computations for systems up to L=5. In order to restrict the numerical task, we chose representations of P̂ in subspaces with the appropriate symmetries. Two distinct symmetries were chosen, both of which impose invariance of the eigenvectors of P̂ with respect to geometric translation, rotation, and mirror inversion. The vectors were chosen to be either even or odd under spin inversion. This reduced the dimensionality of P̂ by almost a factor of 400. In this way, the computation of a restricted set of eigenvectors became feasible for the resulting matrices of order 86 056 for the L=5 cases. For the diagonalization we made use of sparse-matrix methods and the conjugate-gradient method (see, e.g., Refs. 19 and 20), which computes the eigenvector with the largest eigenvalue. Subsequent orthogonalization with respect to this eigenvector yields the eigenvector with the second largest eigenvalue, and further eigenvectors can be obtained similarly. Thus we obtained exact numerical solutions for six eigenvalues λ_{Li} and their corresponding eigenvectors ψ_i(S) (i = 0, …, 5) of the eigenvalue equation

Σ_{S′} P̂(S,S′) ψ_i(S′) = λ_{Li} ψ_i(S).  (47)

The largest eigenvalue λ_{L0} is equal to 1, in accordance with the conservation of probability; its corresponding eigenvector satisfies ψ_0(S) = ψ_B(S), as follows from detailed balance. It is even with respect to spin inversion: ψ_0(S) = ψ_0(−S), where −S is obtained from S by inverting all spins. For all system sizes and models included here, we observed that the six leading eigenvectors, ordered according to the magnitude of their eigenvalues, alternate between the odd and even subspaces: the first eigenvector is even, the second one is odd, the third one is even, and so on, with the caveat that for L=2, e.g., the odd subspace contains only two independent states. As we discussed above, the resulting eigenvectors provide useful information on how to construct trial vectors; moreover, knowledge of accurate eigenvalues for L ≤ 5 provided a powerful test of the Monte Carlo method, the results of which are presented in the following subsection.

B. Monte Carlo calculations

All simulations took place at the respective critical points of the models considered. This point is known exactly in the case of the nearest-neighbor model [K_c = ln(1+√2)/2], and was determined numerically for the other two models (Ref. 12): K_c = 0.190 192 680 7(2) for the equivalent-neighbor model and K_c = 0.697 220 7(2) for the model with antiferromagnetic next-nearest-neighbor interactions. The finite-size scaling analysis presented in Ref. 12 showed that, to the extent they are compatible with the numerical results, deviations from Ising universal behavior are extremely small. The raw simulation data used in the current paper include the data on which were based the numerical results for the largest relaxation time of the nearest-neighbor model, reported in Ref. 6. The latter results were obtained from 8×10⁸ Monte Carlo samples for systems with finite sizes up to L=15. The trial vector used for these computations was of the form given in expression (46) and used up to 36 variational parameters. Also included in the present analysis are the simulations reported in Ref. 7, which contained 1.2×10⁸ Monte Carlo samples for all three models with system sizes up to L=20.

In addition, new simulations for each of the three models were performed, with a length of 2×10⁸ Monte Carlo samples for system sizes up to L=20 and of 1.6×10⁸ Monte Carlo samples for system size L=21. These new simulations used up to 89 variational parameters in the trial functions for each eigenvector of each model.

In order to suppress biases due to deviations from randomness, we made use of a random number generator which combines two different binary shift registers, as described and discussed in Ref. 21.

The required Fourier components of the spatial magnetization distribution were sampled at intervals of one sweep for the smallest systems, up to about 15 sweeps for the largest ones. The Monte Carlo calculation of the autocorrelation times (actually the eigenvalues of the Markov matrix) was performed for each run as a whole as well as separately for a number of up to 1024 blocks into which the run was split. This procedure enabled us to estimate the statistical errors. Furthermore, the calculation of the eigenvalues according to P̂(t)N̂(t)⁻¹ = Λ̂(t) still depends on the time displacements t [see Eqs. (39) and (41)]. The calculation of Λ̂(t) was performed for time displacements tL⁻² = 0, 1, 2, … up to 10 or 20 of the above-mentioned intervals. For small t these eigenvalue estimates reflect variational bias due to residual contributions of relaxation modes decaying faster than the mode for which the trial vector was constructed. If the relaxation times of these faster modes are considerably shorter than that of the mode under investigation, one can clearly see a fast convergence of the eigenvalue estimate as a function of t. Convergence, however, occurs to a level that is only approximately constant because of the correlated statistical noise whose effect still depends on t. With increasing t, one can also observe that the statistical errors increase. The latter effect, which is as slow as the pertinent relaxation mode, occurs when the correlations of the Monte Carlo sample are decreasing significantly with t. This situation was indeed observed for the largest eigenvalues; the data converged well with t before the coherence of the sampled data was lost. It was thus rather simple to select a "best estimate" of those eigenvalues. However, the situation for the smaller eigenvalues investigated here was much more difficult, because the relative differences between subsequent autocorrelation times are much smaller. The numerical results for the eigenvalues are listed in Ref. 22.

C. Determination of the dynamic exponent

In two-dimensional Ising models, finite-size corrections are known that decay with finite size as L⁻², and powers thereof may also be expected. In the absence of information on possible additional finite-size corrections of a different type that could occur in dynamic phenomena, we try to describe the finite-size data for the various autocorrelation times, as given in Eq. (11), by the formula

τ_L ≈ L^z Σ_{k=0}^{n_c} α_k L^{−2k}.  (48)

Here z is the dynamic exponent, α_k a finite-size amplitude, and n_c is the number of correction-to-scaling terms included. Not explicitly shown in this notation is that the autocorrelation times depend on the relaxation mode and the model.

On the basis of Eq. (48), a considerable number of least-squares fits were applied to the numerical results for the autocorrelation times. For each model and relaxation mode, one may vary both n_c, the number of correction terms, and the low-L cutoff specifying the minimal system size included in the fit. The smaller the number of corrections, the larger the low-L cutoff must be chosen in order to obtain an acceptable squared residual χ². A selection of fits that displays the numerical trends is presented in Ref. 22.

The "best fits" were chosen on the basis of the χ² criterion, the dependence on the low-L cutoff, and the mutual consistency of fits with different n_c. The fits are summarized in Table I. Since the errors are not only of a statistical nature, but also depend on residual bias in the autocorrelation times and on subjective choices made in the selection of the best fits, we quote error bars equal to two standard deviations as obtained from statistical considerations only. We believe that these 2σ error estimates are conservative in the case of the analysis of the second and third largest eigenvalues of the β≥0 models. The β=−1/4 model was found to be numerically less well behaved: the statistical errors, as well as the corrections to scaling, appear to be larger. Also, the construction of trial vectors was somewhat less successful than in the cases of the β≥0 models.

TABLE I. Best estimates for the dynamic exponent z for five relaxation modes in three Ising-like models. These results were selected from a much larger set of least-squares fits, obtained for different choices of the minimum system size and of the number of corrections taken into account (see Ref. 22). The error estimate in the last decimal place of each entry is listed in parentheses, and is taken to be two standard deviations in the best fit.

Mode   β=−1/4     β=0         β=1
1      2.164(3)   2.1660(10)  2.1667(5)
2      2.166(3)   2.167(1)    2.167(1)
3      2.164(5)   2.170(2)    2.167(1)
4      2.17(1)    2.162(4)    2.170(8)
5      2.15(2)    2.17(1)     2.17(1)

The new data are somewhat more accurate than, and consistent with, our previous work (Ref. 7). They are also consistent with the results of Wang and Hu (Ref. 23) for the slowest relaxation mode of a different set of Ising-like models. They provide a clear confirmation of universality of the dynamic exponent, with regard to relaxation modes as well as models. Our best estimate z = 2.1667(5) applies to the slowest (odd) relaxation mode of the equivalent-neighbor (β=1) model. This result for z is consistent with most of the recently published values. This agreement includes the results of Stauffer (Ref. 24) on damage spreading in the Ising model. (The value listed in Ref. 6 was incorrectly quoted.) It is slightly larger than the value 2.14 derived by Alexandrowicz (Ref. 25) on the basis of a scaling argument.

Next we address the question whether the finite-size divergence of the autocorrelation times at criticality can be described by a dynamic exponent z=2 when a logarithmic factor is included. This possibility was suggested by Domany (Ref. 26), and pursued by Swendsen (Ref. 27) and Stauffer (Ref. 28), who used very large lattices and found that this possibility seems inconsistent with one way of analyzing the data, but not with a different one. Further references concerning this question are given in Ref. 29.

Although the present work is restricted to very small system sizes, the data are relatively accurate. Thus we tried to fit the following form to the finite-size data for the slowest relaxation mode:

τ_L ≈ L^z (1 + b ln L) Σ_{k=0}^{n_c} α_k L^{−2k}.  (49)

Fixing z=2 and taking b as a variable parameter, we found that this form could describe the data well for large enough n_c. The quality, as determined by the χ² criterion, of a number of such fits is shown in Table II. For comparison we include fits in which z is a variable parameter without a logarithmic correction.

TABLE II. Comparison between fits to the autocorrelation times, with and without a logarithmic finite-size dependence. These fits apply to the slowest relaxation mode. The first column shows the minimum system size included, the second the model, the third the number of correction terms included. The fourth column displays the squared residual χ²_z obtained when the dynamic exponent z was left free and the amplitude b of the logarithm was fixed at zero. The fifth column shows the squared residual χ²_b when z was fixed at the value 2, while b was left free. The sixth column lists the number of degrees of freedom of the fit for comparison.

L ≥   Model   n_c   χ²_z   χ²_b   d_f
5     β=0     1     239.   274.   76
6     β=0     1     98.    187.   72
7     β=0     1     83.5   127.   67
8     β=0     1     68.7   87.9   62
9     β=0     1     63.8   71.9   57
10    β=0     1     55.9   57.5   52
8     β=1     1     56.0   71.1   51
9     β=1     1     50.1   57.4   47
10    β=1     1     49.4   49.7   43
4     β=0     2     89.2   341.   76
5     β=0     2     85.0   136.   75
6     β=0     2     83.3   97.2   71

The results in Table II indicate that the fits with a variable exponent are usually better than those which include a logarithmic term; i.e., the residual χ² decreases faster when the low-L cutoff is increased. This is especially apparent for small n_c and for the β=0 and β=1 models, where the statistical accuracy is optimal.

Finally, we tried a fit according to Eq. (49) with both z and b as free parameters. The resolution of both parameters simultaneously is quite hard, and lies near the limit of what can be gleaned from the present data. For the nearest-neighbor model we find, using system sizes L = 8 and larger and n_c = 1, that z = 2.13 ± 0.07 and b = 0.05 ± 0.09, which again fails to support the presence of a logarithmic term.
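The structure of a fit of the form of Eq. (48) can be sketched as follows. This is an illustrative toy with synthetic, noise-free data (the exponent z = 13/6 and the amplitudes below are inputs of the example, not the paper's fitted values); for a fixed trial z the model is linear in the amplitudes α_k, so only z requires a nonlinear search, here a simple grid scan:

```python
import numpy as np

# Synthetic autocorrelation times of the form of Eq. (48) with n_c = 1:
# tau_L = L^z (alpha_0 + alpha_1 L^{-2}).  The exponent and amplitudes
# are inputs of this toy example, not values taken from the paper.
z_true = 13.0 / 6.0
alpha0, alpha1 = 1.0, -0.4
L = np.arange(5, 22, dtype=float)
tau = L**z_true * (alpha0 + alpha1 * L**-2)

def residual(z):
    """For fixed z, Eq. (48) is linear in the amplitudes alpha_k; fit them
    by linear least squares and return the sum of squared residuals."""
    X = np.column_stack([L**z, L**(z - 2.0)])  # columns L^z and L^{z-2}
    coef, *_ = np.linalg.lstsq(X, tau, rcond=None)
    return float(np.sum((X @ coef - tau) ** 2))

# Scan the single nonlinear parameter z on a grid and keep the minimum.
grid = np.linspace(2.0, 2.3, 3001)
z_fit = grid[np.argmin([residual(z) for z in grid])]
assert abs(z_fit - z_true) < 1e-3
```

In an application to real data one would weight the residuals by the statistical errors of the τ_L, vary the low-L cutoff and n_c as described in the text, and judge the results by the χ² criterion.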

TABLE III. Best estimates for the finite-size amplitudes of five relaxation modes in three Ising-like models. These results were selected from a much larger set of least-squares fits obtained for different choices of the minimum system size and of the number of corrections taken into account (see Ref. 22). The error estimate in the last decimal place of each entry is listed in parentheses, and is taken to be two standard deviations of the best fit. The amplitudes can be written as the product of mode-dependent and model-dependent constants (see text); the difference in the last decimal place between the amplitudes and this product is shown between square brackets.

Mode/model   κ=1             κ=2              κ=3
1 (odd)      6.763(6)[−10]   4.4089(13)[8]    2.8312(5)[0]
2 (even)     0.2516(2)[2]    0.16364(5)[0]    0.10510(2)[0]
3 (odd)      0.1188(1)[1]    0.07727(3)[0]    0.04963(1)[0]
4 (even)     0.07195(7)[3]   0.04677(4)[−4]   0.03008(3)[2]
5 (odd)      0.0466(1)[−1]   0.03041(3)[−1]   0.01956(2)[2]
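As an illustration of the factorization test discussed in the text, the central values of Table III can be decomposed into a product A_i m_κ by an unweighted least-squares fit in log space. This is only a sketch: the paper's fit is weighted by the statistical errors, so the numbers produced here differ slightly from the quoted m_κ and A_i.

```python
import numpy as np

# Central amplitude values from Table III; rows are modes 1-5,
# columns the models kappa = 1, 2, 3 (error bars ignored).
T = np.array([
    [6.763,   4.4089,  2.8312 ],
    [0.2516,  0.16364, 0.10510],
    [0.1188,  0.07727, 0.04963],
    [0.07195, 0.04677, 0.03008],
    [0.0466,  0.03041, 0.01956],
])

# In log space the factorization alpha_0 = A_i * m_kappa is an additive
# two-way model log T[i,k] = log A_i + log m_k; its unweighted
# least-squares solution is given by row and column means.
logT = np.log(T)
row, col, grand = logT.mean(axis=1), logT.mean(axis=0), logT.mean()

# Fix the gauge freedom by the paper's convention A_1 = 1.
m = np.exp(row[0] + col - grand)  # nonuniversal metric factors m_kappa
A = np.exp(row - row[0])          # mode-dependent constants A_i

# The rank-one product reproduces Table III to a few parts in 10^3.
fit = np.outer(A, m)
assert np.max(np.abs(fit / T - 1.0)) < 0.01
```

Even this crude unweighted fit lands close to the quoted values (m_κ ≈ 6.77, 4.41, 2.83 with A_1 = 1), which is the content of the amplitude-universality statement.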

D. Universality of finite-size amplitudes

In order to determine the leading amplitudes α_0 more precisely, we repeated the fits as used for the determination of the dynamic exponent, but with the value of the latter fixed at z = 13/6. We note that the combined results for z are consistent with this fraction. A considerable number of fits were made, and "best estimates" of the amplitudes are presented in Table III.

As mentioned in Ref. 7, according to a modest generalization of accepted ideas on universality, the finite-size amplitudes α_0 of the autocorrelation times should satisfy α_0 = A_i m_κ, where the m_κ are nonuniversal, model-dependent constants; the subscript κ refers to the specific model. We use the notation κ = 2 + sgn(β), so that κ=1 refers to the model with ferromagnetic nearest-neighbor and antiferromagnetic next-nearest-neighbor couplings, κ=2 denotes the nearest-neighbor model, and κ=3 refers to the equivalent-neighbor model. The A_i, i = 1, …, 5, are mode-dependent constants, whose ratios are universal. Since only the product matters, we are free to choose an arbitrary value for one of these constants. We chose to fix A_1 = 1, so that all A_i should become universal constants.

The remaining constants A_i (i = 2, …, 5) and m_κ were fitted accordingly to the amplitudes listed in Table III. The result of this least-squares fit is m_1 = 6.773 ± 0.003, m_2 = 4.4081 ± 0.0009, m_3 = 2.8312 ± 0.0005, A_2 = 0.037 123 ± 0.000 008, A_3 = 0.017 530 ± 0.000 004, A_4 = 0.010 618 ± 0.000 006, and A_5 = 0.006 901 ± 0.000 005. This fit has eight degrees of freedom and χ² = 9.0, in good agreement with the assumptions of dynamic universality, and suggesting that our 2σ error estimates are not unrealistic. The differences between our amplitude estimates and the fitted product A_i m_κ are included in Table III, in units of the last decimal place listed.

VI. DISCUSSION

The analysis presented above produces an apparently highly accurate estimate of the dynamic critical exponent z = 2.1667 ± 0.0005 for the case of the equivalent-neighbor model, where the statistical accuracy and the convergence are best. However, even in this case it is not possible to rule out a divergence of the relaxation time of the form L²(1 + b ln L). Nevertheless, our results viewed in their totality make this behavior rather unlikely. First, the assumption of this logarithmic form yields fits that converge less rapidly, as mentioned above. Second, one would have to have b ≈ 1/6 universally, independent of model and relaxation mode, since our results for the range of system sizes we studied are consistent with a divergence of the form τ_L ∝ L^z ∝ L^{13/6} ≈ L²(1 + (1/6) ln L). Universality of the amplitude of a logarithmic correction would be quite unusual and does not fit into any theoretical framework of which we are aware.

Some open questions remain regarding the optimized variational vectors to which this computation owes its accuracy. Variational basis vectors of the general form given in Eq. (45) are special in the sense that all parameters enter linearly. The method outlined here does not require this feature, and in fact it was not present in previous computations, reported in Ref. 6.

For most of the results presented here we used trial vectors with linear parameters optimized by minimization of the variance of the configurational eigenvalues. As an apparently equivalent alternative, the full set of symmetrized monomials in the Fourier coefficients of the spin configuration could be chosen as basis vectors rather than the linear combinations defined in Eq. (45). With this choice, the basis vectors would not have contained any parameters, but employing this bigger truncated basis, the same linear parameters would have been reintroduced by computing the matrix elements N̂_ij and P̂_ij and solving the generalized eigenvalue problem defined by Eq. (25). In this way, we would have obtained the coefficients for which the Rayleigh quotient is stationary, at least if the summation over configurations could have been done exactly. Proceeding in this way, we could have altogether skipped the optimization scheme based on minimization of the variance of the configurational eigenvalues [cf. Eq. (18) and Sec. II B].

The obvious question is what is accomplished by the nonlinear minimization of the variance of the configurational eigenvalues. We do not yet have a convincing answer to this question.

On the one hand, it is not difficult to show that the zero-variance principle holds for individual eigenstates. That is, if an eigenvector can be represented exactly as a linear combination of the basis vectors, the variational (t=0) calculation will already produce the exact result even if the eigenvalue problem contains eigenvectors that cannot be represented exactly in the truncated basis. This is in fact precisely what happens for the dominant even eigenvector: this vector is represented exactly even in the truncated basis we use, and indeed its eigenvalue is reproduced exactly. On the other hand, our tentative numerical experiments show that the optimization method produces more accurate results, which is not surprising when one considers that the optimized basis functions give rise to much smaller truncated basis sets. This in turn yields a generalized eigenvalue problem involving much smaller matrices that are numerically and statistically much more robust.

A related problem with a large basis set is that the matrix N̂ in the generalized eigenvalue problem of Eq. (25) becomes numerically singular. In fact, the only way in which we were able to obtain meaningful results at all was by performing the inversion in the usual regularized fashion, as follows. Use the fact that N̂ is symmetric and non-negative definite to write it in the form

N̂ = W diag(μ_1², …, μ_n²) W†.  (50)

Then define a regularized inverse of N̂^{1/2} as follows:

N̄^{−1/2} = W diag(μ̄_1^{−1}, …, μ̄_n^{−1}) W†,  (51)

where μ̄_i^{−1} = μ_i^{−1} if μ_i exceeds a suitably chosen threshold, e.g., the square root of the machine accuracy, and μ̄_i^{−1} = 0 otherwise. The nonvanishing eigenvalues of N̄^{−1/2} P̂ N̄^{−1/2} then yield a subset of the eigenvalues of P̂ that are least affected by the numerical singularity of N̂.
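The regularized inversion of Eqs. (50) and (51) can be sketched as follows. This is an illustrative implementation for a generic symmetric matrix pair, not the authors' code; in particular, the choice of threshold relative to the largest μ is our assumption.

```python
import numpy as np

def spectrum_regularized(P, N, thresh=None):
    """Subset of eigenvalues of the generalized problem P c = lambda N c,
    via the regularization of Eqs. (50)-(51): write N = W diag(mu^2) W^T,
    replace 1/mu_i by 0 whenever mu_i falls below a threshold (default:
    square root of machine precision, taken relative to the largest mu),
    and diagonalize Nbar^{-1/2} P Nbar^{-1/2}."""
    if thresh is None:
        thresh = np.sqrt(np.finfo(float).eps)
    mu2, W = np.linalg.eigh(N)              # N = W diag(mu2) W^T
    mu = np.sqrt(np.clip(mu2, 0.0, None))
    inv_mu = np.zeros_like(mu)
    keep = mu > thresh * mu.max()
    inv_mu[keep] = 1.0 / mu[keep]
    Nbar_inv_half = (W * inv_mu) @ W.T      # = W diag(mubar^{-1}) W^T
    return np.linalg.eigvalsh(Nbar_inv_half @ P @ Nbar_inv_half)

# Sanity check: with N equal to the identity the regularization is inactive,
# and the routine must reproduce the ordinary spectrum of P.
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
P = (A + A.T) / 2                            # generic symmetric test matrix
lam = spectrum_regularized(P, np.eye(6))
assert np.allclose(lam, np.linalg.eigvalsh(P))
```

When N is rank deficient, the directions below the threshold are simply projected out, so the routine returns eigenvalues of the pencil restricted to the numerically well-conditioned subspace, which is the behavior described in the text.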
Although further modifications of the computational procedures may lead to additional improvements of our technique, the numerical results obtained thus far are already quite promising, and the question arises what further applications are obvious in the field of dynamics of Monte Carlo methods.

For instance, it seems quite possible to apply the present techniques in three dimensions and to spin-conserving Kawasaki dynamics (Ref. 1), although it is clear that the construction of trial vectors will have to be modified. In the same context, we note that direct application of the method used in this paper to the dynamics of cluster algorithms (Refs. 3–5) is frustrated by the requirement that one should be able to compute ⟨S|P̂|ψ_T⟩ numerically exactly. An additional problem for such dynamics, from the perspective of our approach, is that the concept of correlation time has to be handled carefully in this context.

Let us demonstrate this point by means of the following thought experiment: the application of the Wolff algorithm to the ferromagnetic, critical Ising model. As usual, we define the autocorrelation times in terms of the eigenvalues of the stochastic matrix. In order to enable a comparison with other types of dynamics, we choose our unit of time as L^{2d−2y_h} Wolff steps (L is the linear system size, d the dimensionality, and y_h the magnetic renormalization exponent). Since the average Wolff cluster consists of a number of sites proportional to L^{2y_h−d}, this choice guarantees that, under equilibrium conditions, an average number of order L^d spins is processed per unit of time.

Because of the efficiency of the Wolff algorithm, only a few units of time are needed to generate an independent spin configuration under practical circumstances. However, if the fully ordered antiferromagnetic state is chosen as the initial spin configuration, a number of Wolff steps of order L^d is required to remove the antiferromagnetic order; i.e., its relaxation to equilibrium is anomalously slow. A less extreme but related phenomenon is observed under practical Wolff simulation conditions at equilibrium: from time to time, large critical fluctuations occur that bring the system into a state of relatively large disorder and small magnetization. These configurations are relatively long lived. In the time autocorrelation function of the magnetization, this phenomenon translates into a slower-than-exponential decay (Ref. 30), at least on the numerically accessible time scale. In the language of Eq. (10), such a situation follows if one assumes the existence of anomalously large autocorrelation times τ_{Li} associated with anomalously small amplitudes c_i. Under these circumstances we cannot exclude the possibility that the longest relaxation time following from the Markov matrix for the Wolff simulation of a finite system corresponds with an extremely unlikely deviation from equilibrium. Since this kind of fluctuation may have too low a probability to be of practical significance, these considerations suggest the possibility that the time needed to generate an "independent configuration" is not simply related to the second largest eigenvalue of the Markov matrix, but rather to some intricate average, possibly involving the complete spectrum.

ACKNOWLEDGMENTS

It is our pleasure to acknowledge stimulating discussions with Bob Swendsen. This research was supported by the U.S. National Science Foundation (NSF) through Grant No. DMR-9725080. This research was conducted in part using the resources of the Cornell Theory Center, which received major funding from the NSF and New York State, with additional support from the Advanced Research Projects Agency (ARPA), the National Center for Research Resources at the National Institutes of Health (NIH), the IBM Corporation, and other members of the center's Corporate Research Institute. This research is supported in part by the Dutch FOM foundation ("Stichting voor Fundamenteel Onderzoek der Materie"), which is financially supported by the NWO ("Nederlandse Organisatie voor Wetenschappelijk Onderzoek").

1. K. Kawasaki, in Phase Transitions and Critical Phenomena, edited by C. Domb and M. S. Green (Academic, New York, 1972), Vol. 2.
2. R. J. Glauber, J. Math. Phys. 4, 294 (1963).
3. R. H. Swendsen and J. S. Wang, Phys. Rev. Lett. 58, 86 (1987).
4. U. Wolff, Phys. Rev. Lett. 60, 1461 (1988).
5. J. R. Heringa and H. W. J. Blöte, Phys. Rev. E 57, 4976 (1998).
6. M. P. Nightingale and H. W. J. Blöte, Phys. Rev. Lett. 76, 4548 (1996).
7. M. P. Nightingale and H. W. J. Blöte, Phys. Rev. Lett. 80, 1007 (1998).
8. C. J. Umrigar, K. G. Wilson, and J. W. Wilkins, Phys. Rev. Lett. 60, 1719 (1988); C. J. Umrigar, K. G. Wilson, and J. W. Wilkins, in Computer Simulation Studies in Condensed Matter Physics, Recent Developments, edited by D. P. Landau, K. K. Mon, and H. B. Schüttler, Springer Proceedings in Physics (Springer, Berlin, 1988).
9. D. M. Ceperley and B. Bernu, J. Chem. Phys. 89, 6316 (1988); also see B. Bernu, D. M. Ceperley, and W. A. Lester, Jr., ibid. 93, 552 (1990); W. R. Brown, W. A. Glauser, and W. A. Lester, Jr., ibid. 103, 9721 (1995).
10. V. Privman and M. E. Fisher, Phys. Rev. B 30, 322 (1984).
11. H. W. J. Blöte and M. P. Nightingale, Physica A 134, 274 (1985).
12. M. P. Nightingale and H. W. J. Blöte, Physica A 251, 211 (1998).
13. M. P. Nightingale and H. W. J. Blöte, J. Phys. A 15, L33 (1982).
14. M. P. Nightingale, in Computer Simulation Studies in Condensed Matter Physics, edited by D. P. Landau, K. K. Mon, and H. B. Schüttler, Springer Proceedings in Physics (Springer, Berlin, 1997).
15. M. P. Nightingale and C. J. Umrigar, in Recent Advances in Quantum Monte Carlo Methods, edited by W. A. Lester, Jr. (World Scientific, Singapore, 1997).
16. J. K. L. MacDonald, Phys. Rev. 43, 830 (1933).
17. J. H. Wilkinson, The Algebraic Eigenvalue Problem (Clarendon Press, Oxford, 1965), p. 103; G. H. Golub and C. F. van Loan, Matrix Computations (The Johns Hopkins University Press, Baltimore, 1989), p. 411.
18. M. P. Nightingale and H. W. J. Blöte, Physica A 104, 352 (1980).
19. M. P. Nightingale, Proc. K. Ned. Akad. Wet., Ser. B: Phys. Sci. 82, 235 (1979).
20. M. P. Nightingale, V. S. Viswanath, and G. Müller, Phys. Rev. B 48, 7696 (1993).
21. H. W. J. Blöte, E. Luijten, and J. R. Heringa, J. Phys. A 28, 6289 (1995).
22. See EPAPS Document No. E-PRBMDO-62-042026 for appendices. This document may be retrieved via the EPAPS homepage (http://www.aip.org/pubservs/epaps.html) or from ftp.aip.org in the directory /epaps/. See the EPAPS homepage for more information.
23. F.-G. Wang and C.-K. Hu, Phys. Rev. E 56, 2310 (1997).
24. D. Stauffer, J. Phys. A 26, L599 (1993).
25. Z. Alexandrowicz, Physica A 189, 148 (1992).
26. E. Domany, Phys. Rev. Lett. 52, 871 (1984).
27. R. H. Swendsen (private communication).
28. D. Stauffer, Int. J. Mod. Phys. C 10, 931 (1999).
29. D. Stauffer, Physica A 244, 344 (1997).
30. W. Kerler, Phys. Rev. D 47, R1285 (1993).