1. Monte Carlo Integration


The most common use for Monte Carlo methods is the evaluation of integrals. This is also the basis of Monte Carlo simulations (which are actually integrations). The basic principles hold true in both cases.

1.1. Basics

• The basic idea becomes clear from the "dartboard method" of integrating the area of an irregular domain: choose points randomly within a rectangular box enclosing the domain. Then

  $$ \frac{A}{A_{\rm box}} = p(\text{hit inside area}) \approx \frac{\#\text{ hits inside area}}{\#\text{ hits inside box}} $$

  as the number of hits $\to \infty$.

• Buffon's (1707–1788) needle experiment to determine $\pi$:
  – Throw needles of length $l$ on a grid of lines a distance $d$ apart.
  – The probability that a needle falls on a line is
    $$ P = \frac{2l}{\pi d}. $$
  – Aside: Lazzarini (1901) performed Buffon's experiment with 34080 throws and obtained
    $$ \pi \approx \frac{355}{113} = 3.14159292, $$
    a way too good result! (See "π through the ages", http://www-groups.dcs.st-and.ac.uk/~history/HistTopics/Pi_through_the_ages.html)

1.2. Standard Monte Carlo integration

• Let us consider an integral in $d$ dimensions:
  $$ I = \int_V d^d x\, f(x). $$
  Let $V$ be a $d$-dimensional hypercube, with $0 \le x_\mu \le 1$ (for simplicity).

• Monte Carlo integration:
  - Generate $N$ random vectors $x_i$ from a flat distribution ($0 \le (x_i)_\mu \le 1$).
  - As $N \to \infty$,
    $$ \frac{V}{N} \sum_{i=1}^{N} f(x_i) \to I. $$
  - Error: $\propto 1/\sqrt{N}$, independent of $d$! (Central limit theorem.)

• "Normal" numerical integration methods (Numerical Recipes):
  - Divide each axis into $n$ evenly spaced intervals.
  - Total number of points: $N \equiv n^d$.
  - Error: $\propto 1/n$ (midpoint rule), $\propto 1/n^2$ (trapezoidal rule), $\propto 1/n^4$ (Simpson).

• If $d$ is small, Monte Carlo integration has much larger errors than standard methods.

• When is MC as good as Simpson? Let us compare the errors:
  $$ 1/n^4 = 1/N^{4/d} = 1/\sqrt{N} \quad\Rightarrow\quad d = 8. $$
  In practice, MC integration becomes better when $d \gtrsim 6$–$8$.

• In lattice simulations $d = 10^6$–$10^8$!

• In practice, $N$ becomes just too large very quickly for the standard methods: already at $d = 10$, if $n = 10$ (pretty small), $N = 10^{10}$. Too much!
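To make the scaling concrete, here is a minimal sketch (not part of the original notes) of standard Monte Carlo integration over the unit hypercube with NumPy; the helper name `mc_integrate` and the test integrand $f(x) = \prod_\mu x_\mu$, with exact integral $2^{-d}$, are arbitrary choices for illustration:

```python
import numpy as np

def mc_integrate(f, d, N, rng):
    # Plain Monte Carlo: average f over N points drawn flat in [0,1]^d.
    # The hypercube volume is V = 1, so the estimate is the sample mean.
    x = rng.random((N, d))
    return np.mean(f(x))

# Test integrand f(x) = prod_mu x_mu, with exact integral 2^(-d).
f = lambda x: np.prod(x, axis=1)

rng = np.random.default_rng(42)
for d in (2, 10):
    for N in (10**3, 10**5):
        est = mc_integrate(f, d, N, rng)
        print(f"d={d:2d}  N={N:6d}  estimate={est:.6f}  exact={2.0**-d:.6f}")
```

The absolute error shrinks roughly as $1/\sqrt{N}$ at both $d = 2$ and $d = 10$, whereas a grid method with the same budget $N = 10^5$ at $d = 10$ would have only $n \approx 3$ points per axis.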
1.3. Why is the error $\propto 1/\sqrt{N}$?

• This is due to the central limit theorem (see, for example, L. E. Reichl, Statistical Physics).

• Let $y_i = V f(x_i)$ (absorbing the volume into $f$) be the value of the integral using one random coordinate (vector) $x_i$. The result of the integral, after $N$ samples, is
  $$ z_N = \frac{y_1 + y_2 + \dots + y_N}{N}. $$

• Let $p(y_i)$ be the probability distribution of $y_i$, and $P_N(z_N)$ the distribution of $z_N$. Now
  $$ P_N(z_N) = \int dy_1 \dots dy_N\, p(y_1) \cdots p(y_N)\, \delta\Big(z_N - \sum_i y_i/N\Big). $$

• Now
  $$ 1 = \int dy\, p(y), \qquad
     \langle y \rangle = \int dy\, y\, p(y), \qquad
     \langle y^2 \rangle = \int dy\, y^2\, p(y), $$
  $$ \sigma^2 = \langle y^2 \rangle - \langle y \rangle^2 = \langle (y - \langle y \rangle)^2 \rangle. $$
  Here, and in the following, $\langle \alpha \rangle$ means the expectation value of $\alpha$ (the mean of the distribution $p_\alpha(\alpha)$).

• The error of the Monte Carlo integration (of length $N$) is defined as the width of the distribution $P_N(z_N)$, or more precisely,
  $$ \sigma_N^2 = \langle z_N^2 \rangle - \langle z_N \rangle^2. $$

• Naturally $\langle z_N \rangle = \langle y \rangle$.

• Let us calculate $\sigma_N$:
  - Define the Fourier transform of $p(y)$:
    $$ \phi(k) = \int dy\, e^{ik(y - \langle y \rangle)}\, p(y). $$
  - Similarly, the Fourier transform of $P_N(z_N)$ is
    $$ \Phi_N(k) = \int dz_N\, e^{ik(z_N - \langle z_N \rangle)}\, P_N(z_N)
       = \int dy_1 \dots dy_N\, e^{i(k/N)(y_1 - \langle y \rangle + \dots + y_N - \langle y \rangle)}\, p(y_1) \cdots p(y_N)
       = [\phi(k/N)]^N. $$
  - Expanding $\phi(k/N)$ in powers of $k/N$ ($N$ large), we get
    $$ \phi(k/N) = \int dy\, e^{i(k/N)(y - \langle y \rangle)}\, p(y) = 1 - \frac{1}{2}\frac{k^2 \sigma^2}{N^2} + \dots $$
    (the linear term vanishes because $\langle y - \langle y \rangle \rangle = 0$). Because of the oscillating exponent, $\phi(k/N)$ will decrease as $|k|$ increases, and $\phi(k/N)^N$ will decrease even more quickly. Thus,
    $$ \Phi_N(k) = \left[ 1 - \frac{1}{2}\frac{k^2 \sigma^2}{N^2} + O\Big(\frac{k^3}{N^3}\Big) \right]^N \longrightarrow e^{-k^2 \sigma^2/2N}. $$
  - Taking the inverse Fourier transform we obtain
    $$ P_N(z_N) = \frac{1}{2\pi} \int dk\, e^{-ik(z_N - \langle y \rangle)}\, \Phi_N(k)
       = \frac{1}{2\pi} \int dk\, e^{-ik(z_N - \langle y \rangle)}\, e^{-k^2 \sigma^2/2N}
       = \sqrt{\frac{N}{2\pi\sigma^2}}\, \exp\left[ -\frac{N (z_N - \langle y \rangle)^2}{2\sigma^2} \right]. $$

• Thus, the distribution $P_N(z_N)$ approaches a Gaussian as $N \to \infty$, and
  $$ \sigma_N = \sigma/\sqrt{N}. $$

1.4. In summary: error in MC integration

If we integrate
$$ I = \int_V dx\, f(x) $$
with Monte Carlo integration using $N$ samples, the error is
$$ \sigma_N = V \sqrt{\frac{\langle f^2 \rangle - \langle f \rangle^2}{N}}. $$
In principle, the expectation values here are exact properties of the distribution of $f$, i.e.
$$ \langle f \rangle = \frac{1}{V} \int dx\, f(x) = \int dy\, y\, p(y), \qquad
   \langle f^2 \rangle = \frac{1}{V} \int dx\, f^2(x) = \int dy\, y^2\, p(y). $$
Here $p(y)$ is the distribution of the values $y = f(x)$ when the $x$-values are sampled with a flat distribution. $p(y)$ can be obtained as follows: the probability distribution of the $x$-coordinate is flat, $p_x(x) = C = \text{const.}$, and thus
$$ p_x(x)\, dx = C\, \frac{dx}{dy}\, dy \equiv p(y)\, dy
   \quad\Rightarrow\quad
   p(y) = C/(dy/dx) = C/f'(x(y)). $$

Of course, we do not know the expectation values beforehand! In practice, they are substituted by the corresponding Monte Carlo estimates
$$ \langle f \rangle \approx \frac{1}{N} \sum_i f(x_i), \qquad
   \langle f^2 \rangle \approx \frac{1}{N} \sum_i f^2(x_i). \qquad (1) $$
Actually, in this case we must also use
$$ \sigma_N = V \sqrt{\frac{\langle f^2 \rangle - \langle f \rangle^2}{N - 1}}, $$
because using (1) with the denominator $N$ would underestimate $\sigma_N$.

Often in the literature the real expectation values are denoted by $\langle\langle x \rangle\rangle$, as opposed to the Monte Carlo estimates $\langle x \rangle$.

The error obtained in this way is called the 1-σ error; i.e. the error is the width of the Gaussian distribution of
$$ f_N = \frac{1}{N} \sum_i f(x_i). $$
It means that the true answer is within $V \langle f \rangle \pm \sigma_N$ with ~68% probability. This is the most common error value cited in the literature.

This assumes that the distribution of $f_N$ really is Gaussian. The central limit theorem guarantees that this is so, provided that $N$ is large enough (except in some pathological cases). However, in practice $N$ is often not large enough for the Gaussianity to hold! The distribution of $f_N$ can then be seriously non-Gaussian, usually skewed. More refined methods can be used in these cases (for example, modified jackknife or bootstrap error estimation). However, this is often ignored, because there are simply not enough samples to deduce the true distribution (or people are lazy).
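As a concrete illustration of the formulas above, here is a sketch (again not from the original notes; the helper name `mc_integrate_err` is ours) of an estimator returning both the integral estimate and its 1-σ error, using the $N - 1$ denominator:

```python
import numpy as np

def mc_integrate_err(f, d, N, rng, V=1.0):
    # Monte Carlo estimate of I = V<f> together with its 1-sigma error:
    #   <f>  ~ (1/N) sum_i f(x_i),   <f^2> ~ (1/N) sum_i f(x_i)^2,
    #   sigma_N = V * sqrt((<f^2> - <f>^2) / (N - 1)).
    fx = f(rng.random((N, d)))
    mean_f = np.mean(fx)
    var_f = np.mean(fx**2) - mean_f**2
    return V * mean_f, V * np.sqrt(var_f / (N - 1))
```

If the Gaussian approximation holds, the true value of the integral should fall within estimate $\pm\,\sigma_N$ in roughly 68% of independent runs.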
Example: convergence in MC integration

• Let us study the following integral:
  $$ I = \int_0^1 dx\, x^2 = 1/3. $$

• Denote $y = f(x) = x^2$ and $x = f^{-1}(y) = \sqrt{y}$.

• If we sample $x$ with a flat distribution, $p_x(x) = 1$, $0 < x \le 1$, the probability distribution $p_f(y)$ of $y = f(x)$ can be obtained from
  $$ p_x(x)\, dx = p_x(x) \left|\frac{dx}{dy}\right| dy \equiv p_f(y)\, dy
     \quad\Rightarrow\quad
     p_f(y) = \left|\frac{dx}{dy}\right| = 1/|f'(x)| = \frac{1}{2} x^{-1} = \frac{1}{2} y^{-1/2}, $$
  where $0 < y \le 1$.

• In more than one dimension: $p_f(\vec y) = |\partial x_i / \partial y_j|$, the Jacobian determinant.

• The result of a Monte Carlo integration using $N$ samples $x_i$, $y_i = f(x_i)$, is $I_N = (y_1 + y_2 + \dots + y_N)/N$.

• What is the probability distribution $P_N$ of $I_N$? This tells us how badly the results of the (fixed length) MC integration scatter.
  $$ P_1(z) = p_f(z) = \frac{1}{2\sqrt{z}} $$
  $$ P_2(z) = \int_0^1 dy_1\, dy_2\, p_f(y_1)\, p_f(y_2)\, \delta\Big(z - \frac{y_1 + y_2}{2}\Big)
     = \begin{cases} \pi/2 & 0 < z \le 1/2 \\ \arcsin\dfrac{1 - z}{z} & 1/2 < z \le 1 \end{cases} $$
  $$ P_3(z) = \int_0^1 dy_1\, dy_2\, dy_3\, p_f(y_1)\, p_f(y_2)\, p_f(y_3)\, \delta\Big(z - \frac{y_1 + y_2 + y_3}{3}\Big) = \dots $$

$P_N$, measured from the results of $10^6$ independent MC integrations for each $N$:

[Figure: left panel shows $P_1$, $P_2$, $P_3$ for $0 \le z \le 1$; right panel shows $P_{10}$, $P_{100}$, $P_{1000}$, narrowing around $1/3$, for $0.3 \le z \le 0.4$.]

The expectation value is $1/3$ for all $P_N$. The width of the Gaussian is
$$ \sigma_N = \sqrt{\frac{\langle\langle f^2 \rangle\rangle - \langle\langle f \rangle\rangle^2}{N}} = \sqrt{\frac{4}{45 N}}, $$
where
$$ \langle\langle f^2 \rangle\rangle = \int_0^1 dx\, f^2(x) = \int_0^1 dy\, y^2\, p_f(y) = 1/5. $$
For example, using $N = 1000$, $\sigma_{1000} \approx 0.0094$. Plotting
$$ C \exp\left[ -\frac{(x - 1/3)^2}{2 \sigma_{1000}^2} \right] $$
in the figure above we obtain a curve which is almost indistinguishable from the measured $P_{1000}$ (blue dashed) curve.

In practice, the expectation value and the error are of course measured from a single Monte Carlo integration. For example, using again $N = 1000$ and the Monte Carlo estimates
$$ \langle f \rangle = \frac{1}{N} \sum_i f_i, \qquad
   \langle f^2 \rangle = \frac{1}{N} \sum_i f_i^2, \qquad
   \sigma_N = \sqrt{\frac{\langle f^2 \rangle - \langle f \rangle^2}{N - 1}}, $$
we obtain the following results (here 4 separate MC integrations):

1: 0.34300 ± 0.00950
2: 0.33308 ± 0.00927
3: 0.33355 ± 0.00931
4: 0.34085 ± 0.00933

Thus, both the average and the error estimate can be reliably calculated in a single MC integration.
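A short sketch reproducing this experiment (the seed and the number of runs are arbitrary choices):

```python
import numpy as np

# Four independent MC integrations of I = int_0^1 x^2 dx = 1/3, N = 1000 each.
rng = np.random.default_rng(1)
N = 1000
for run in range(1, 5):
    fx = rng.random(N) ** 2                # y_i = f(x_i) = x_i^2, x_i flat in (0,1)
    mean_f = np.mean(fx)
    sigma = np.sqrt((np.mean(fx**2) - mean_f**2) / (N - 1))
    print(f"{run}: {mean_f:.5f} +- {sigma:.5f}")
```

Each run should land within about $\pm 0.0094$ of $1/3$, consistent with $\sigma_{1000} = \sqrt{4/45000} \approx 0.0094$.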