An Invitation to the Monte Carlo Method: Simple Integration and Simulation


addprocs(4)
import PyCall
@everywhere using PyCall
@pyimport IPython.display as IPyD

1.0 - Introduction

IPyD.Image(url="https://upload.wikimedia.org/wikipedia/commons/8/8a/STAN_ULAM_HOLDING_THE_FERMIAC.jpg", width=400)

1.0.1 - From Solitaire to Neutron Screening: The History of the Monte Carlo Method

In a nutshell, a Monte Carlo method is a computational algorithm that relies on repeated random sampling (either via a true random number generator or with respect to some probability distribution). By accepting or rejecting these random points according to conditions placed on the system, one can estimate some observable or quantity by performing statistical analysis in the large-number limit.

The first true Monte Carlo-style calculations were carried out by famed physicist and Nobel laureate Enrico Fermi in the 1930s, who used statistical sampling to study neutron diffusion. The mechanical embodiment of his approach is the brass and glass analog computer now known as the Monte Carlo trolley, or FERMIAC (pictured above in the hands of mathematician Stanislaw Ulam, whom we'll get to in a moment), built at Los Alamos shortly after the early electronic computer ENIAC (the FERMIAC's namesake) came online. Fermi and his colleagues used the FERMIAC to study the properties of neutrons as they traveled through a material. The placement of the trolley's drums was determined by the material in question and by the energy of the neutrons ("slow" or "fast"), the latter chosen using a pseudorandom number. The neutron then traveled to a new location (also determined by a set of random numbers), and the trolley rolled along, drawing the path of the neutron on top of a scale drawing of the device under study. If the neutron entered a new material, the drums were recalibrated. Below, we see the FERMIAC being used for a calculation:

IPyD.Image(url="https://upload.wikimedia.org/wikipedia/commons/8/8d/FERMIAC.jpg", width=450)

The first full-blown Monte Carlo simulation was born out of the post-war period of nuclear physics research at Los Alamos National Laboratory. Although Enrico Fermi was the first to use such statistical sampling for the study of neutron physics, it was the Polish-American mathematician Stanislaw Ulam who took the critical step of generalizing the method to large-scale computation. Ulam's insight can be traced back to the mid-1940s, not to an attempt to solve some difficult physical problem or design a new thermonuclear weapon, but to his love of solitaire. He wondered whether there was any way to calculate the probability of winning a game of Canfield solitaire, a notoriously difficult variant of the traditional card game known for its low probability of winning. Despite the simple nature of the game, the combinatorics behind a winning game are highly non-trivial. Instead of attempting to find an analytical solution, Ulam imagined playing a hundred games and building an approximate probability distribution by empirically observing how many games were won and how many were lost. Such a simulation could be done on a computer, stochastically sampling the probability density by playing a large number of games via pseudorandom number generation and building up the probability distribution piece by piece. The importance of such a method in the physical sciences was noticed by Ulam's colleague at Los Alamos, the polymath and father of game theory, John von Neumann.
Together with von Neumann, Ulam was able to apply his method of empirically building probability distributions to the problem of radiation shielding. With the average distance a single neutron could travel before striking a nucleus determined experimentally, Ulam's empirical construction of a general probability distribution could be used to estimate the penetration depth of an ensemble of neutrons in a given material. In essence, von Neumann treated the history of a single neutron as a game of solitaire, sampling the probability distribution via pseudorandom number generation to determine whether a given neutron would scatter, collide, or induce fission. A "genealogy" of each neutron could then be generated. With the creation of the ENIAC (Electronic Numerical Integrator and Computer) at the University of Pennsylvania in early 1946, computer simulations of such a system finally became a reality, and soon Ulam and von Neumann's algorithm was applied to a variety of difficult statistical problems, ranging from the hydrogen bomb to problems in physical chemistry. Ulam and von Neumann chose the moniker "Monte Carlo" at the suggestion of fellow Los Alamos physicist Nicholas Metropolis, in reference to an uncle of Ulam's who was known to frequent the famous casino in Monaco. We'll learn more about Metropolis and his contributions to the now-famous Monte Carlo method later in these notes.

The implementation of the method differs greatly depending on what system (physical, mathematical, economic, etc.) we are talking about. Our goal is to dive into quantum Monte Carlo for many-body systems, but to do that we have to build up a dictionary of basic functions and algorithms for stochastically sampling a given probability distribution simply and efficiently, as well as for visualizing the results. For that, we begin with a simple Monte Carlo integration example, then proceed to importance sampling, and from there dip our toes into some simple physical systems of historical interest before we tackle the big guns.

1.0.2 - The Theory behind Monte Carlo Integration

The reason for taking Monte Carlo integration as our first example is threefold. First, it's a nice, simple introduction that people can understand without any knowledge of physics. Second, we can generalize the functions used in this code for future, more complex Monte Carlo programs. Third, Monte Carlo integration is, in fact, a natural generalization of ordinary discrete (quadrature) integration.

To see this, we start with the definition of some probability distribution function $p(x)$. The probability of finding a random variable drawn from this distribution between points $a$ and $b$ is then given by

$$P(a \le x \le b) = \int_a^b p(x)\, dx.$$

In the case of a uniformly random distribution on the interval $[a, b]$, it is important to note that

$$p(x) = \frac{1}{b - a}.$$

If we want the expected value of some function $f(x)$ over the probability distribution, we just have to integrate $f$ against this probability distribution:

$$\langle f \rangle = \int_a^b f(x)\, p(x)\, dx.$$

Therefore, assuming a uniformly random distribution as above, we can estimate the integral of $f(x)$ from $a$ to $b$ as follows:

$$\int_a^b f(x)\, dx = (b - a)\, \langle f \rangle \approx \frac{b - a}{N} \sum_{i=1}^{N} f(x_i),$$

where the $x_i$ are $N$ points sampled uniformly from $[a, b]$. Because we are considering a uniformly random distribution, $\langle f \rangle$ is very easy to estimate: stochastically sample points across the given range and average the corresponding values of $f$ (or, in the "hit-or-miss" variant, count the fraction of random points that fall "within" the region bounded by the function). This gives us a strategy for Monte Carlo integration: sum the values of the function at stochastically sampled points, divide by the total number of points sampled, and then multiply by the total length/area/volume of the sampling region. Note that this is very similar to Simpson's rule or the trapezoidal rule, except that here we sum over random points of the function rather than systematically fitting a polynomial to approximate the integral.
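This recipe is short enough to write down directly. The following is a minimal Julia sketch; the function name mc_integrate and the test integrand are illustrative choices, not code from the original notes:

# Monte Carlo estimate of the integral of f over [a, b] via uniform sampling
function mc_integrate(f, a, b, N)
    total = 0.0
    for _ in 1:N
        x = a + (b - a) * rand()   # uniform random point in [a, b)
        total += f(x)
    end
    return (b - a) * total / N     # interval length times the sample mean of f
end

# Example: integral of sin(x) from 0 to pi, whose exact value is 2
println(mc_integrate(sin, 0.0, pi, 10^6))

With $10^6$ samples the estimate typically agrees with the exact value to a few parts in a thousand, and the statistical error shrinks like $1/\sqrt{N}$.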
1.1 - Our First Monte Carlo Simulation: Numerically finding $\pi$

1.1.1 - Buffon's Needle and Lazzarini's approximation of Pi

IPyD.Image(url="https://upload.wikimedia.org/wikipedia/commons/5/5e/Buffon_1707-1788.jpg", width=350)

The calculation of $\pi$ via stochastic sampling is nothing new to Monte Carlo. One of the first estimations of $\pi$ via random simulation is an application of Buffon's needle problem, a mathematical puzzle first proposed by Georges-Louis Leclerc, Comte de Buffon (shown above), in the 18th century. To understand the problem, imagine that we drop a needle on a wood-panel floor, with each panel of the same width. What is the probability that the needle will fall on the border between two panels?

If we take a needle of length $\ell$ and a floor with boards of width $t$ (with $\ell \le t$), we can quantify this probability by defining the angle $\theta$ between the needle and the direction perpendicular to the panel borders. If the end of the needle that is farthest from a border lies within a horizontal distance $\ell \cos\theta$ of that border, the needle will cross it. We can thus integrate $\ell \cos\theta$ over $\theta$ from $0$ to $\pi/2$ and divide by the total corresponding width in this region to obtain the solution to Buffon's problem:

$$P = \frac{\int_0^{\pi/2} \ell \cos\theta \, d\theta}{\int_0^{\pi/2} t \, d\theta} = \frac{2\ell}{\pi t}.$$

The application of Buffon's probability to the experimental determination of $\pi$ was famously carried out by Mario Lazzarini at the dawn of the 20th century. By simply rearranging the above formula, we find a relationship for $\pi$ that depends on a Monte Carlo-style stochastic sampling:

$$\pi = \frac{2\ell}{P t}.$$

Due to the simple nature of this system, Buffon's needle problem is a reasonable starting point: let's see if we can obtain a value for $\pi$. The algorithm is simple:

1) Drop N needles of length $\ell$ onto a "board" whose panel borders are a distance $t$ apart.
2) Count the number of needles that land on a border.
3) Calculate the probability $P$ (the fraction of needles on a border), and subsequently find the value of $\pi$ via the equation above.

The easiest way to do this is by setting the width to $t = 1$ in some arbitrary units. We can then initialize the number of needles on the border to zero (e.g., on_border = 0) and loop through the Monte Carlo steps. During each step, we'll throw a needle of length $\ell$ onto the board, determine whether it lies on a border, and, if it did indeed fall on the border between two panels, increment on_border. If we take the panel borders to run in the y-direction, then we can simplify our work by simulating only the initial x-coordinate of the needle's head and the angle between the needle and the horizontal axis. The code is given below:

# MC_Buffon
# author(s): Joshuah Heath
#
# function to calculate the
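A minimal sketch of the routine just described is the following; the name buffon_pi and the defaults $\ell = t = 1$ are illustrative assumptions, not the original MC_Buffon code:

# Buffon's needle estimate of pi (sketch; assumes l <= t, so a needle
# can cross at most one border). Borders run along y at x = 0, t, 2t, ...
function buffon_pi(N; l = 1.0, t = 1.0)
    on_border = 0
    for _ in 1:N
        x = t * rand()              # x-coordinate of the needle's head within a panel
        theta = 2 * pi * rand()     # angle between the needle and the horizontal axis
        tail = x + l * cos(theta)   # x-coordinate of the needle's other end
        if tail < 0.0 || tail > t   # the needle straddles a border
            on_border += 1
        end
    end
    P = on_border / N               # observed crossing probability, approx. 2l / (pi * t)
    return 2 * l / (P * t)          # rearranged Buffon formula for pi
end

println(buffon_pi(10^6))

Throwing $10^6$ needles typically gives $\pi$ to within about 0.01; as with the integration example above, the statistical error decreases like $1/\sqrt{N}$.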