Monte Carlo Sampling Methods

[1] Monte Carlo Sampling Methods

Jasmina L. Vujic
Nuclear Engineering Department
University of California, Berkeley
Email: [email protected]
Phone: (510) 643-8085, Fax: (510) 643-9685

[2] Monte Carlo

Monte Carlo is a computational technique based on constructing a random process for a problem and carrying out a NUMERICAL EXPERIMENT by N-fold sampling from a random sequence of numbers with a PRESCRIBED probability distribution.

x - random variable

x̂ = (1/N) ∑_{i=1}^{N} xi

x̂ - the estimated or sample mean of x
x̄ - the expectation or true mean value of x

If a problem can be given a PROBABILISTIC interpretation, then it can be modeled using RANDOM NUMBERS.

[3] Monte Carlo

• Monte Carlo techniques came from the complicated diffusion problems that were encountered in the early work on atomic energy.
• 1772: Comte de Buffon - earliest documented use of random sampling to solve a mathematical problem.
• 1786: Laplace suggested that π could be evaluated by random sampling.
• Lord Kelvin used random sampling to aid in evaluating time integrals associated with the kinetic theory of gases.
• Enrico Fermi was among the first to apply random sampling methods to study neutron moderation in Rome.
• 1947: Fermi, John von Neumann, Stan Frankel, Nicholas Metropolis, Stan Ulam and others developed computer-oriented Monte Carlo methods at Los Alamos to trace neutrons through fissionable materials.

[4] Monte Carlo

Monte Carlo methods can be used to solve:

a) Problems that are stochastic (probabilistic) by nature:
   - particle transport,
   - telephone and other communication systems,
   - population studies based on the statistics of survival and reproduction.

b) Problems that are deterministic by nature:
   - evaluation of integrals,
   - solving systems of algebraic equations,
   - solving partial differential equations.

[5] Monte Carlo

Monte Carlo methods are divided into:

a) ANALOG, where the natural laws are PRESERVED - the game played is the analog of the physical problem of interest (i.e., the history of each particle is simulated exactly),

b) NON-ANALOG, where, in order to reduce the required computational time, the strict analog simulation of particle histories is abandoned (i.e., we CHEAT!).

Variance-reduction techniques:
   - Absorption suppression
   - History termination and Russian Roulette
   - Splitting and Russian Roulette
   - Forced Collisions
   - Source Biasing

[6] Example 1: Evaluation of Integrals

(Figure: f(x) plotted on [a, b] inside a rectangle of height fmax, with accepted points under the curve and rejected points above it.)

I = ∫_a^b f(x) dx - the area under the function f(x)
R = (b − a)·fmax - the area of the bounding rectangle

P = I/R is the probability that a random point in the rectangle lies under f(x), thus I = R·P.

Step 1: Choose a random point (x1, y1): x1 = a + (b − a)·ξ1 and y1 = fmax·ξ2.
Step 2: If y1 ≤ f(x1), accept the point; if y1 > f(x1), reject the point.
Step 3: Repeat this process N times; Ni is the number of accepted points.
Step 4: Determine P = Ni/N and the value of the integral I = R·Ni/N.
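A minimal Python sketch of the four steps above (the function name, arguments, and the x² test integrand are illustrative, not part of the original slides):

import random

def hit_or_miss_integral(f, a, b, fmax, n=100000):
    """Estimate I = integral of f(x) over [a, b] by the hit-or-miss method."""
    R = (b - a) * fmax                        # area of the bounding rectangle
    n_accepted = 0
    for _ in range(n):
        x = a + (b - a) * random.random()     # Step 1: random point in the rectangle
        y = fmax * random.random()
        if y <= f(x):                         # Step 2: accept if the point lies under f(x)
            n_accepted += 1                   # Step 3: count accepted points, Ni
    return R * n_accepted / n                 # Step 4: I ~ R * Ni / N

# Example: the integral of x^2 over [0, 1] is 1/3
print(hit_or_miss_integral(lambda x: x * x, 0.0, 1.0, 1.0))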
[7] Major Components of a Monte Carlo Algorithm

• Probability distribution functions (pdf's) - the physical (or mathematical) system must be described by a set of pdf's.
• Random number generator - a source of random numbers uniformly distributed on the unit interval must be available.
• Sampling rule - a prescription for sampling from the specified pdf, assuming the availability of random numbers on the unit interval.
• Scoring (or tallying) - the outcomes must be accumulated into overall tallies or scores for the quantities of interest.
• Error estimation - an estimate of the statistical error (variance) as a function of the number of trials and other quantities must be determined.
• Variance reduction techniques - methods for reducing the variance in the estimated solution, and hence the computational time of the Monte Carlo simulation.
• Parallelization and vectorization - efficient use of advanced computer architectures.

[8] Probability Distribution Functions

Random variable, x - a variable that takes on particular values with a frequency that is determined by some underlying probability distribution.

Continuous probability distribution: P{a ≤ x ≤ b}
Discrete probability distribution: P{x = xi} = pi

[9] PDFs and CDFs (continuous)

Probability Density Function (PDF) - continuous
• f(x), with f(x) dx = P{x ≤ x' ≤ x + dx}
• 0 ≤ f(x), and ∫_{−∞}^{∞} f(x) dx = 1
• P{a ≤ x ≤ b} = ∫_a^b f(x) dx

Cumulative Distribution Function (CDF) - continuous
• F(x) = ∫_{−∞}^{x} f(x') dx' = P{x' ≤ x}
• 0 ≤ F(x) ≤ 1
• 0 ≤ dF(x)/dx = f(x)
• ∫_a^b f(x') dx' = P{a ≤ x ≤ b} = F(b) − F(a)

[10] PDFs and CDFs (discrete)

Probability Density Function (PDF) - discrete
• f(xi) = pi·δ(x − xi)
• 0 ≤ f(xi)
• ∑_i f(xi)·Δxi = 1, or ∑_i pi = 1

Cumulative Distribution Function (CDF) - discrete
• F(x) = ∑_{xi < x} pi = ∑_{xi < x} f(xi)·Δxi
• 0 ≤ F(x) ≤ 1

(Figure: a discrete PDF with spikes p1, p2, p3 at x1, x2, x3, and the corresponding step-function CDF rising through p1, p1+p2, p1+p2+p3 = 1.)

[11] Sampling from a Given Discrete Distribution

Given f(xi) = pi and ∑_i pi = 1, i = 1, 2, ..., N, partition the unit interval at the points

0, p1, p1+p2, p1+p2+p3, ..., p1+p2+...+pN = 1.

With ξ uniformly distributed on 0 ≤ ξ ≤ 1, P{x = xk} = pk = P{ξ ∈ dk} (dk being the k-th subinterval), so x = xk is selected whenever

∑_{i=1}^{k−1} pi ≤ ξ < ∑_{i=1}^{k} pi.

[12] Sampling from a Given Continuous Distribution

If f(x) and F(x) represent the PDF and CDF of a random variable x, if ξ is a random number distributed uniformly on [0,1] with PDF g(ξ) = 1, and if x is such that F(x) = ξ, then for each ξ there is a corresponding x, and the variable x is distributed according to the probability density function f(x).

Proof: For each ξ in (ξ, ξ + Δξ) there is an x in (x, x + Δx). Assume the PDF for x is q(x); show that q(x) = f(x):

q(x)·Δx = g(ξ)·Δξ = Δξ = (ξ + Δξ) − ξ = F(x + Δx) − F(x)
q(x) = [F(x + Δx) − F(x)]/Δx → f(x)

Thus, if x = F^{-1}(ξ), then x is distributed according to f(x).

[13] Monte Carlo Codes

Categories of random sampling:
• Random number generator - uniform PDF on [0,1]
• Sampling from analytic PDF's - normal, exponential, Maxwellian, ...
• Sampling from tabulated PDF's - angular PDF's, spectra, cross sections

For Monte Carlo codes:
• Random numbers, ξ, are produced by the random number generator on [0,1]
• Non-uniform random variates are produced from the ξ's by
   - direct inversion of CDFs
   - rejection methods
   - transformations
   - composition (mixtures)
   - sums, products, ratios, ...
   - table lookup + interpolation
   - lots (!) of other tricks
• Random sampling typically takes < 10% of total CPU time.
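A minimal Python sketch of the two sampling rules just described - discrete CDF inversion (slide [11]) and direct inversion of a continuous CDF for the exponential PDF (slide [12]); the function names and the example probabilities are illustrative:

import math
import random

def sample_discrete(p):
    """Sample index k with probability p[k] by inverting the discrete CDF (slide [11])."""
    xi = random.random()
    cumulative = 0.0
    for k, pk in enumerate(p):
        cumulative += pk              # cumulative = p1 + ... + p_{k+1}
        if xi < cumulative:           # sum_{i<k} pi <= xi < sum_{i<=k} pi
            return k
    return len(p) - 1                 # guard against floating-point round-off

def sample_exponential():
    """Direct inversion for f(x) = e^{-x}: F(x) = 1 - e^{-x} = xi gives x = -log(1 - xi)."""
    return -math.log(1.0 - random.random())

counts = [0, 0, 0]
for _ in range(100000):
    counts[sample_discrete([0.2, 0.5, 0.3])] += 1
print(counts)                         # roughly proportional to [0.2, 0.5, 0.3]
print(sample_exponential())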
[14] Random Number Generator

Pseudo-random numbers:
• Not strictly "random", but good enough
   - pass statistical tests
   - reproducible sequence
• Uniform PDF on [0,1]
• Must be easy to compute, must have a large period

Multiplicative congruential method:
• Algorithm:
   S0 = initial seed, odd integer, < M
   Sk = G·S_{k-1} mod M, k = 1, 2, ...
   ξk = Sk / M
• Typical (VIM, MCNP):
   Sk = 5^19 · S_{k-1} mod 2^48
   ξk = Sk / 2^48
   period = 2^46 ≈ 7.0 × 10^13

[15] Direct Sampling (Direct Inversion of CDFs)

Direct solution of x̂ ← F^{-1}(ξ).

Sampling procedure:
• Generate ξ
• Determine x̂ such that F(x̂) = ξ

Advantages:
• Straightforward mathematics and coding
• "High-level" approach

Disadvantages:
• Often involves complicated functions
• In some cases, F(x) cannot be inverted (e.g., Klein-Nishina)

[16] Rejection Sampling

Used when the inverse of the CDF is costly or impossible to find.

Select a bounding function, g(x), such that
• c·g(x) ≥ f(x) for all x
• g(x) is an easy-to-sample PDF

Sampling procedure:
• Sample x̂ from g(x), e.g. by inverting its CDF: x̂ ← G^{-1}(ξ1)
• Test: ξ2·c·g(x̂) ≤ f(x̂)
   - if true, accept x̂, done
   - if false, reject x̂ and try again

Advantages:
• Simple computer operations

Disadvantages:
• "Low-level" approach, sometimes hard to understand

[17] Table Look-Up

Used when f(x) is given in the form of a histogram.

(Figure: a histogram PDF with bin values f1, f2, ..., fi on x1, x2, ..., xi, and the corresponding piecewise-linear CDF F(x), with ξ mapped back to x.)

Then, by linear interpolation,

F(x) = [(x − x_{i-1})·Fi + (xi − x)·F_{i-1}] / (xi − x_{i-1}),

and solving F(x) = ξ for x gives

x = [(xi − x_{i-1})·ξ − xi·F_{i-1} + x_{i-1}·Fi] / (Fi − F_{i-1}).

[18] Sampling Multidimensional Random Variables

If the random quantity is a function of two or more random variables that are independent, the joint PDF and CDF can be written as

f(x, y) = f1(x)·f2(y),   F(x, y) = F1(x)·F2(y).

EXAMPLE: Sampling the direction of an isotropically scattered particle in 3D.

Ω = Ωx·i + Ωy·j + Ωz·k = u·i + v·j + w·k

dΩ/4π = sinθ dθ dφ / 4π = −d(cosθ) dφ / 4π = −dμ dφ / 4π

f(Ω) = f1(μ)·f2(φ) = (1/2)·(1/2π)

F1(μ) = ∫_{−1}^{μ} f1(μ') dμ' = (μ + 1)/2 = ξ1,  or  μ = 2ξ1 − 1

F2(φ) = ∫_{0}^{φ} f2(φ') dφ' = φ/2π = ξ2,  or  φ = 2πξ2

[19] Probability Density Functions and Direct Sampling Methods

Linear (L1, L2): f(x) = 2x, 0 < x < 1
   x ← √ξ1

Exponential (E): f(x) = e^{−x}, 0 < x
   x ← −log ξ1

2D isotropic (C): f(ρ) = 1/2π, ρ = (u, v)
   u ← cos 2πξ1
   v ← sin 2πξ1

3D isotropic (I1, I2): f(Ω) = 1/4π, Ω = (u, v, w)
   u ← 2ξ1 − 1
   v ← √(1 − u²)·cos 2πξ2
   w ← √(1 − u²)·sin 2πξ2

Maxwellian (M1, M2, M3): f(x) = (2/(T√π))·√(x/T)·e^{−x/T}, 0 < x
   x ← −T·[log ξ1 + log ξ2·cos²(πξ3/2)]

Watt spectrum (W1, W2, W3): f(x) = (2e^{−ab/4}/√(πa³b))·e^{−x/a}·sinh √(bx), 0 < x
   w ← a·[−log ξ1 − log ξ2·cos²(πξ3/2)]
   x ← w + a²b/4 + (2ξ4 − 1)·√(a²bw)

Normal (N1, N2): f(x) = (1/(σ√(2π)))·e^{−(1/2)·((x−μ)/σ)²}
   x ← μ + σ·√(−2 log ξ1)·cos 2πξ2
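A short Python sketch of two entries from the table above - the 3D isotropic direction (I1, I2) and the normal distribution (N1, N2). The function names are illustrative, and 1 − ξ1 is used inside the logarithm to avoid log(0):

import math
import random

def isotropic_direction():
    """3D isotropic unit vector (u, v, w), as in the (I1, I2) entry of the table."""
    u = 2.0 * random.random() - 1.0           # u = 2*xi1 - 1 (cosine of the polar angle)
    phi = 2.0 * math.pi * random.random()     # phi = 2*pi*xi2
    s = math.sqrt(1.0 - u * u)
    return u, s * math.cos(phi), s * math.sin(phi)

def sample_normal(mu=0.0, sigma=1.0):
    """Box-Muller form of the (N1, N2) entry: x = mu + sigma*sqrt(-2 log xi1)*cos(2 pi xi2)."""
    xi1, xi2 = random.random(), random.random()
    return mu + sigma * math.sqrt(-2.0 * math.log(1.0 - xi1)) * math.cos(2.0 * math.pi * xi2)

print(isotropic_direction())
print(sample_normal(10.0, 2.0))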