
MONTE CARLO METHODS

The Monte Carlo Method in Science and Engineering

Since 1953, researchers have applied the Monte Carlo method to a wide range of areas. Specialized algorithms have also been developed to extend the method's applicability and efficiency. The author describes some of the algorithms that have been developed to perform Monte Carlo simulations in science and engineering.

Jacques G. Amar, University of Toledo

Monte Carlo methods are algorithms for solving various kinds of computational problems by using random numbers (or rather, pseudorandom numbers). Monte Carlo simulations play an important role in computational science and engineering, with applications ranging from materials science to biology to quantum physics. They also play an important role in a variety of other fields, including computer imaging, architecture, and economics. Nicholas Metropolis suggested the name "Monte Carlo" (in reference to the famous casino in Monaco) in one of the first applications of the Monte Carlo method in physics.1 Because of the repetitive nature of a typical Monte Carlo algorithm, as well as the large number of calculations involved, the Monte Carlo method is particularly suited to calculation using a computer.

Monte Carlo methods are particularly useful for problems that involve a large number of degrees of freedom. For example, deterministic methods of numerical integration operate by taking several evenly spaced samples from a function. While this might work well for functions of one variable, such methods can be very inefficient for functions of several variables. For example, to numerically integrate a function of an N-dimensional vector (where N = 100) with a grid of 10 points in each dimension would require the evaluation of 10^100 points, which is far too many to be computed. Monte Carlo methods provide a way out of this exponential time increase: as long as the function is reasonably well behaved, it can be estimated by randomly selecting points in N-dimensional space and then taking an appropriate average of the function values at these points. By the central limit theorem, this method will display 1/√N convergence; that is, quadrupling the number of sampled points will halve the error, regardless of the number of dimensions.

Another very important application for Monte Carlo simulations is optimization. The traveling salesman problem is an example of an optimization problem that is very difficult to solve using conventional methods, but that might be approximately solved via Monte Carlo methods. A variety of Monte Carlo methods such as stochastic tunneling,2 simulated annealing,3 genetic algorithms,4 and parallel tempering5 have been developed to handle such optimization problems.
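To make the high-dimensional integration argument above concrete, the following is a minimal Python sketch (not from the original article); the integrand, the dimension, and the number of sampled points are arbitrary illustrative choices. Doubling the number of samples should reduce the estimated error by roughly a factor of 1/√2, independent of the dimension.

import math
import random

def f(x):
    # Illustrative integrand: the mean of the squared components.
    # Its exact integral over the unit hypercube is 1/3 in any dimension.
    return sum(xi * xi for xi in x) / len(x)

def mc_integrate(func, dim, n_samples):
    # Estimate the integral of func over [0, 1]^dim by averaging the
    # function values at n_samples uniformly random points.
    total = 0.0
    total_sq = 0.0
    for _ in range(n_samples):
        x = [random.random() for _ in range(dim)]
        fx = func(x)
        total += fx
        total_sq += fx * fx
    mean = total / n_samples
    # One-sigma statistical error estimate; by the central limit theorem
    # it decreases as 1/sqrt(n_samples), regardless of dim.
    variance = max(total_sq / n_samples - mean * mean, 0.0)
    return mean, math.sqrt(variance / n_samples)

estimate, error = mc_integrate(f, dim=100, n_samples=100_000)
print(f"estimate = {estimate:.5f} +/- {error:.5f} (exact value is 1/3)")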
Further Reading

Given the vast literature on Monte Carlo simulations, it is virtually impossible to discuss all the methods that have been developed in an article of this length. For fairly recent surveys of the literature, see the books on Monte Carlo methods by M.H. Kalos and P.A. Whitlock,1 D.P. Landau and K. Binder,2 and B.A. Berg,3 as well as a recent Los Alamos National Laboratory conference proceedings on the Monte Carlo method in the physical sciences.4 A fairly recent discussion of extended ensemble methods is provided in a review article by Yukito Iba.5 For a good description of classical Monte Carlo simulations of fluids, the book by Allen and Tildesley6 is also recommended.

References
1. M.H. Kalos and P.A. Whitlock, Monte Carlo Methods, John Wiley & Sons, 1986.
2. D.P. Landau and K. Binder, A Guide to Monte Carlo Simulations in Statistical Physics, Cambridge Univ. Press, 2000.
3. B.A. Berg, Markov Chain Monte Carlo Simulations and Their Statistical Analysis, World Scientific, 2004.
4. J.E. Gubernatis, ed., "The Monte Carlo Method in the Physical Sciences," Am. Inst. of Physics Conf. Proc., no. 690, Am. Inst. of Physics, 2003.
5. Y. Iba, "Extended Ensemble Monte Carlo," Int'l J. Modern Physics C, vol. 12, no. 5, 2001, pp. 623–656.
6. M.P. Allen and D.J. Tildesley, Computer Simulation of Liquids, Oxford Univ. Press, 2001.

One of the first uses of Monte Carlo simulations is described in the classic article by Nicholas C. Metropolis, Arianna W. Rosenbluth, Marshall N. Rosenbluth, Augusta H. Teller, and Edward Teller.1 In this work, the general Metropolis algorithm is first described along with its application to the equation of state of fluids. Using the Los Alamos National Laboratory's MANIAC computer (11,000 operations per second), Metropolis and his colleagues obtained results for the equation of state of the hard disk fluid by performing Monte Carlo simulations of 2D systems with 56 and 224 particles. Since then, the computational power of a single processor has increased by approximately a factor of 10^5, and the Monte Carlo algorithm has become increasingly sophisticated. This article describes some of the algorithms that have been developed to perform both equilibrium and nonequilibrium Monte Carlo simulations of a variety of systems of interest in biology, physics, chemistry, materials science, and engineering.

Metropolis-Hastings Monte Carlo and Detailed Balance

As in the original article by Metropolis and his colleagues, in many scientific applications, researchers use Monte Carlo simulations to sample as accurately as possible the properties of a many-body system within a given statistical distribution or ensemble. An example is the Gibbs canonical ensemble with probability distribution P_i = exp(-E_i/k_B T)/Z, where E_i is the energy of the system in configuration i, k_B is Boltzmann's constant, T is the temperature, and Z is the partition function.

Starting with a given initial state i, two steps are usually involved in generating the next Monte Carlo state. First, a possible new state or trial configuration j is selected with a trial selection probability or rate T_ij. Then the new configuration is either accepted with probability P_ij^acc, and the system makes a transition from state i to state j, or it is rejected with probability 1 - P_ij^acc. The overall transition rate from state i to state j is thus given by the transition matrix w_ij = T_ij P_ij^acc.

The sequence of configurations generated in a Monte Carlo simulation is generally referred to as a Markov chain because the transition rate or probability depends on the current state but not on previous states. To generate a Markov chain of states with the desired probability distribution P_i, the overall transition probabilities w_ij should satisfy the detailed balance condition

w_{ij} P_i = w_{ji} P_j,   (1)

which implies that the desired distribution P_i is a stationary state. Assuming ergodicity (that is, a nonzero multitransition probability of reaching any allowed state of the system from any other allowed state), this condition further implies that, in such a Monte Carlo simulation, the system will approach the equilibrium ensemble distribution. Formally, the ergodicity requirement also implies that the transition matrix w must satisfy

[w^n]_{ij} > 0   (2)

for n > n_max for all i, j. Although the ergodicity properties of a particular many-body system are difficult to study, it is believed, in general, that almost any reasonable choice of allowed trial moves will satisfy ergodicity.

Several possible forms for the acceptance probabilities P_ij^acc satisfy the detailed balance condition in Equation 1. The simplest and most commonly used corresponds to the Metropolis-Hastings rule,6

P_{ij}^{acc} = \min\left(1, \frac{T_{ji} P_j}{T_{ij} P_i}\right).   (3)

Another option is the "symmetric" Barker expression,7

P_{ij}^{acc} = \frac{P_j T_{ji}}{P_i T_{ij} + P_j T_{ji}}.   (4)

Many Monte Carlo simulations, including the ones carried out in the original article by Metropolis and his colleagues,1 use symmetric trial configuration selection rates T_ij = T_ji. Thus, in a typical Metropolis Monte Carlo simulation of the canonical ensemble with equilibrium distribution P_i ~ exp(-E_i/k_B T), the acceptance probability P_ij^acc for a transition from state i to state j can be written as

P_{ij}^{acc} = \min\left(1, \exp[-\beta (E_j - E_i)]\right),   (5)

where β = 1/k_B T. An example is the Ising spin model with "spin-flip" or Glauber dynamics: at each step, a spin is randomly selected from the lattice and then flipped with an appropriate acceptance probability. In Monte Carlo simulations of fluids with velocity-independent interactions,1 the velocity (momentum) degrees of freedom can be integrated out; only the atomic positions are important. Thus, the energy E_i in Equation 5 can be taken to include only the configuration-dependent portion of the energy.
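As a concrete illustration of Equation 5, here is a minimal Python sketch (not from the original article) of single-spin-flip Metropolis Monte Carlo for a 2D Ising model with nearest-neighbor coupling J and periodic boundary conditions; the lattice size, temperature, and number of sweeps are arbitrary illustrative choices. A Glauber simulation would simply replace the min(1, exp(-βΔE)) acceptance with the symmetric form of Equation 4.

import math
import random

def metropolis_sweep(spins, beta, J=1.0):
    # One Monte Carlo sweep: L*L attempted single-spin flips, each accepted
    # with probability min(1, exp(-beta * dE)) as in Equation 5.
    L = len(spins)
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        # Sum over the four nearest neighbors (periodic boundary conditions).
        nn = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        # Energy change for flipping spin (i, j): dE = 2 * J * s_ij * nn.
        dE = 2.0 * J * spins[i][j] * nn
        if dE <= 0.0 or random.random() < math.exp(-beta * dE):
            spins[i][j] = -spins[i][j]

# Illustrative usage: sample the canonical ensemble at temperature T (k_B = 1).
L, T, n_sweeps = 16, 2.0, 2000
beta = 1.0 / T
spins = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
for _ in range(n_sweeps):
    metropolis_sweep(spins, beta)
m = abs(sum(sum(row) for row in spins)) / (L * L)
print(f"|magnetization| per spin after {n_sweeps} sweeps: {m:.3f}")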
If P_i^surv(t) = exp(-t/τ) denotes the probability that the system has not yet made a transition out of state i after a time t, the corresponding distribution of transition times is

P^{tr}(t) = -\frac{dP_i^{surv}(t)}{dt} = \tau^{-1} e^{-t/\tau},   (6)

which implies that the average transition time is given by ⟨t⟩ = τ. Such a distribution of transition times t can be generated using the expression

t = -\tau \ln(r),   (7)

where r is a uniform random number between 0 and 1.

Constant-NPT Ensemble

While Equation 5 may be used to perform Metropolis Monte Carlo simulations in the Gibbs canonical ensemble, researchers have extended the Monte Carlo method to a variety of other ensembles. For example, in classical simulations of molecular liquids and gases in the constant NPT ensemble (where N is the number of particles, P is the pressure, and T is the temperature), the configurational average of a quantity A can be rewritten as8

\langle A \rangle_{NPT} = Z_{NPT}^{-1} \int_0^\infty dV\, e^{-\beta P V}\, V^N \int_0^1 d^{3N}s\, A(s)\, e^{-\beta U(s)},   (8)

where Z_NPT is the corresponding partition function, V = L^3 is the volume of the system, and U(s) is the configurational energy expressed in terms of the scaled coordinates s.
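Returning to Equation 7 above, the following minimal Python sketch (not from the original article) shows how uniform random numbers are converted into transition times distributed according to Equation 6; the value of τ and the sample size are arbitrary illustrative choices.

import math
import random

def draw_transition_time(tau):
    # Equation 7: t = -tau * ln(r), with r uniform on (0, 1].  Times drawn
    # this way follow the distribution of Equation 6, tau**-1 * exp(-t / tau).
    r = 1.0 - random.random()   # avoid r == 0, where ln(r) would diverge
    return -tau * math.log(r)

# Check that the sample mean approaches the average transition time <t> = tau.
tau, n = 2.5, 100_000
mean_t = sum(draw_transition_time(tau) for _ in range(n)) / n
print(f"sample mean = {mean_t:.3f}, expected tau = {tau}")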