Sampling and Monte Carlo Integration


Sampling and Monte Carlo Integration

Michael Gutmann
Probabilistic Modelling and Reasoning (INFR11134)
School of Informatics, University of Edinburgh
Spring semester 2018

Recap

Learning and inference often involve intractable integrals, e.g.

- Marginalisation:
  $$p(x) = \int_y p(x, y)\, dy$$
- Expectations:
  $$\mathbb{E}[g(x) \mid y_o] = \int g(x)\, p(x \mid y_o)\, dx$$
  for some function $g$.
- For unobserved variables, the likelihood and the gradient of the log-likelihood:
  $$L(\theta) = p(\mathcal{D}; \theta) = \int_u p(u, \mathcal{D}; \theta)\, du, \qquad \nabla_\theta \ell(\theta) = \mathbb{E}_{p(u \mid \mathcal{D}; \theta)}\left[\nabla_\theta \log p(u, \mathcal{D}; \theta)\right]$$

Notation: $\mathbb{E}_{p(x)}$ is sometimes used to indicate that the expectation is taken with respect to $p(x)$.

Recap (continued)

Learning and inference often involve intractable integrals, e.g.

- For unnormalised models with intractable partition functions:
  $$L(\theta) = \frac{\tilde{p}(\mathcal{D}; \theta)}{\int_x \tilde{p}(x; \theta)\, dx}, \qquad \nabla_\theta \ell(\theta) \propto m(\mathcal{D}; \theta) - \mathbb{E}_{p(x; \theta)}\left[m(x; \theta)\right]$$
- Combined case of unnormalised models with intractable partition functions and unobserved variables.
- Evaluation of intractable integrals can sometimes be avoided by using other learning criteria (e.g. score matching).
- Here: methods to approximate integrals like those above using sampling.

Program

1. Monte Carlo integration
   - Approximating expectations by averages
   - Importance sampling
2. Sampling

Averages with iid samples

- Tutorial 7: for Gaussians, the sample average is an estimate (MLE) of the mean (expectation) $\mathbb{E}[x]$:
  $$\bar{x} = \frac{1}{n} \sum_{i=1}^n x_i \approx \mathbb{E}[x]$$
- Gaussianity is not needed: assume the $x_i$ are iid observations of $x \sim p(x)$. Then
  $$\mathbb{E}[x] = \int x\, p(x)\, dx \approx \bar{x}_n, \qquad \bar{x}_n = \frac{1}{n} \sum_{i=1}^n x_i$$
- The subscript $n$ reminds us that we used $n$ samples to compute the average.
- Approximating integrals by means of sample averages is called Monte Carlo integration.
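As a minimal sketch of the estimator above (not part of the original slides; the function name and the Gaussian test case are my choices), the sample average of iid draws approximates the expectation:

```python
import random
import statistics

def mc_mean(sampler, n):
    """Monte Carlo estimate of E[x]: the average of n iid draws."""
    return statistics.fmean(sampler() for _ in range(n))

random.seed(0)
# x ~ N(2, 1), so E[x] = 2; the sample average should land close to 2
xbar = mc_mean(lambda: random.gauss(2.0, 1.0), n=100_000)
```

With $n = 10^5$ samples the standard deviation of the estimate is $1/\sqrt{n} \approx 0.003$, so the average is tightly concentrated around the true mean.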
Averages with iid samples (continued)

- The sample average is unbiased:
  $$\mathbb{E}[\bar{x}_n] = \frac{1}{n} \sum_{i=1}^n \mathbb{E}[x_i] \stackrel{*}{=} \frac{n}{n} \mathbb{E}[x] = \mathbb{E}[x]$$
  (*: the "identically distributed" assumption is used, not independence)
- Variability:
  $$\mathbb{V}[\bar{x}_n] = \mathbb{V}\left[\frac{1}{n} \sum_{i=1}^n x_i\right] \stackrel{*}{=} \frac{1}{n^2} \sum_{i=1}^n \mathbb{V}[x_i] = \frac{1}{n} \mathbb{V}[x]$$
  (*: the independence assumption is used)
- The squared error decreases as $1/n$:
  $$\mathbb{V}[\bar{x}_n] = \mathbb{E}\left[(\bar{x}_n - \mathbb{E}[x])^2\right] = \frac{1}{n} \mathbb{V}[x]$$

Averages with iid samples (continued)

- Weak law of large numbers:
  $$\Pr\left(|\bar{x}_n - \mathbb{E}[x]| \geq \epsilon\right) \leq \frac{\mathbb{V}[x]}{n \epsilon^2}$$
- As $n \to \infty$, the probability for the sample average to deviate from the expected value goes to zero. We say that the sample average converges in probability to the expected value.
- The speed of convergence depends on the variance $\mathbb{V}[x]$.
- Different "laws of large numbers" exist that make different assumptions.

Chebyshev's inequality

- The weak law of large numbers is a direct consequence of Chebyshev's inequality.
- Chebyshev's inequality: let $s$ be some random variable with mean $\mathbb{E}[s]$ and variance $\mathbb{V}[s]$. Then
  $$\Pr\left(|s - \mathbb{E}[s]| \geq \epsilon\right) \leq \frac{\mathbb{V}[s]}{\epsilon^2}$$
- This means that for all random variables:
  - the probability to deviate more than three standard deviations from the mean is less than $1/9 \approx 0.11$ (set $\epsilon = 3\sqrt{\mathbb{V}[s]}$);
  - the probability to deviate more than six standard deviations is less than $1/36 \approx 0.03$.
  These are conservative values; for many distributions, the probabilities will be smaller.

Proofs (not examinable)

- Chebyshev's inequality follows from Markov's inequality.
- Markov's inequality: for a random variable $y \geq 0$,
  $$\Pr(y \geq t) \leq \frac{\mathbb{E}[y]}{t} \qquad (t > 0)$$
- Chebyshev's inequality is obtained by setting $y = |s - \mathbb{E}[s]|$:
  $$\Pr\left(|s - \mathbb{E}[s]| \geq t\right) = \Pr\left((s - \mathbb{E}[s])^2 \geq t^2\right) \leq \frac{\mathbb{E}\left[(s - \mathbb{E}[s])^2\right]}{t^2}.$$
  Chebyshev's inequality follows with $t = \epsilon$, and because $\mathbb{E}[(s - \mathbb{E}[s])^2]$ is the variance $\mathbb{V}[s]$ of $s$.
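The $1/n$ shrinkage of $\mathbb{V}[\bar{x}_n]$ can be checked empirically; a small sketch (the helper name and the Uniform(0,1) test case, for which $\mathbb{V}[x] = 1/12$, are my choices, not from the slides):

```python
import random
import statistics

def var_of_mean(n, reps=2000):
    """Empirical variance of the sample average of n iid Uniform(0,1) draws,
    estimated by repeating the averaging `reps` times."""
    means = [statistics.fmean(random.random() for _ in range(n))
             for _ in range(reps)]
    return statistics.pvariance(means)

random.seed(1)
# Theory: V[x̄_n] = V[x]/n = 1/(12 n), so n * var_of_mean(n) ≈ 1/12
v10 = var_of_mean(10)
v100 = var_of_mean(100)
```

Multiplying each estimate by its $n$ should recover roughly $1/12 \approx 0.083$, and the ratio `v10 / v100` should be close to 10, reflecting the $1/n$ decay.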
Proofs (not examinable, continued)

Proof of Markov's inequality: let $t$ be an arbitrary positive number and $y$ a one-dimensional non-negative random variable with pdf $p$. We can decompose the expectation of $y$ using $t$ as split-point:
$$\mathbb{E}[y] = \int_0^\infty u\, p(u)\, du = \int_0^t u\, p(u)\, du + \int_t^\infty u\, p(u)\, du.$$
Since $u \geq t$ in the second term, we obtain the inequality
$$\mathbb{E}[y] \geq \int_0^t u\, p(u)\, du + \int_t^\infty t\, p(u)\, du.$$
The second term is $t$ times the probability that $y \geq t$, so that
$$\mathbb{E}[y] \geq \int_0^t u\, p(u)\, du + t \Pr(y \geq t) \geq t \Pr(y \geq t),$$
where the second inequality holds because the first term is non-negative. This gives Markov's inequality:
$$\Pr(y \geq t) \leq \frac{\mathbb{E}[y]}{t} \qquad (t > 0)$$

Averages with correlated samples

- When computing the variance of the sample average,
  $$\mathbb{V}[\bar{x}_n] = \frac{\mathbb{V}[x]}{n},$$
  we assumed that the samples are identically and independently distributed.
- The variance shrinks with increasing $n$, and the average becomes more and more concentrated around $\mathbb{E}[x]$.
- Corresponding results exist for the case of statistically dependent samples $x_i$; they are known as "ergodic theorems".
- These are important for the theory of Markov chain Monte Carlo methods but require advanced mathematical theory.
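Markov's inequality can also be checked numerically; a small sketch (the Exponential(1) test case and variable names are my choices, not from the slides). For $y \sim \text{Exp}(1)$ we have $\mathbb{E}[y] = 1$ and $\Pr(y \geq t) = e^{-t}$, which at $t = 3$ is about $0.05$, well below the Markov bound $1/3$:

```python
import random

random.seed(4)
n = 200_000
samples = [random.expovariate(1.0) for _ in range(n)]  # y >= 0, E[y] = 1

t = 3.0
frac = sum(s >= t for s in samples) / n   # empirical Pr(y >= t), about exp(-3)
bound = (sum(samples) / n) / t            # empirical Markov bound E[y]/t
```

The gap between `frac` and `bound` illustrates the earlier remark that Markov/Chebyshev bounds are conservative.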
More general expectations

- So far, we have considered
  $$\mathbb{E}[x] = \int x\, p(x)\, dx \approx \frac{1}{n} \sum_{i=1}^n x_i, \qquad x_i \sim p(x).$$
- This generalises to
  $$\mathbb{E}[g(x)] = \int g(x)\, p(x)\, dx \approx \frac{1}{n} \sum_{i=1}^n g(x_i), \qquad x_i \sim p(x).$$
- The variance of the approximation, if the $x_i$ are iid, is $\frac{1}{n} \mathbb{V}[g(x)]$.

Example (based on a slide from Amos Storkey)

$$\mathbb{E}[g(x)] = \int g(x)\, \mathcal{N}(x; 0, 1)\, dx \approx \frac{1}{n} \sum_{i=1}^n g(x_i), \qquad x_i \sim \mathcal{N}(x; 0, 1)$$

for $g(x) = x$ and $g(x) = x^2$.

[Figure: left panel shows the sample average as a function of the number of samples $n$; right panel shows its variability (0.5 quantile: solid, 0.1 and 0.9 quantiles: dashed). Both stabilise as $n$ grows.]

Example (based on a slide from Amos Storkey)

The same approximation for $g(x) = \exp(0.6 x^2)$.

[Figure: left panel shows the sample average as a function of the number of samples $n$; right panel shows its variability (0.5 quantile: solid, 0.1 and 0.9 quantiles: dashed). The average fluctuates strongly and the variability does not decline.]

Example (continued)

- Indicators that something is wrong:
  - strong fluctuations in the sample average as $n$ increases;
  - large, non-declining variability.
- Note: the integral is not finite:
  $$\int \exp(0.6 x^2)\, \mathcal{N}(x; 0, 1)\, dx = \frac{1}{\sqrt{2\pi}} \int \exp(0.6 x^2) \exp(-0.5 x^2)\, dx = \frac{1}{\sqrt{2\pi}} \int \exp(0.1 x^2)\, dx = \infty,$$
  but for any $n$ the sample average is finite and may be mistaken for a good approximation.
- Check the variability when approximating an expected value by a sample average!
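The contrast between the well-behaved and the pathological integrand can be reproduced in a few lines; a minimal sketch (function names are my choices, not from the slides):

```python
import math
import random
import statistics

def mc_expectation(g, n):
    """(1/n) * sum g(x_i) with x_i ~ N(0, 1)."""
    return statistics.fmean(g(random.gauss(0.0, 1.0)) for _ in range(n))

random.seed(2)
# Well-behaved case: E[x^2] = 1 under N(0, 1), variance of g(x) is finite
est_x2 = mc_expectation(lambda x: x * x, 100_000)

# Pathological case: E[exp(0.6 x^2)] is infinite, yet every finite-n average
# is finite; rerunning with different seeds gives wildly different values,
# which is exactly the warning sign discussed above.
est_bad = mc_expectation(lambda x: math.exp(0.6 * x * x), 100_000)
```

No fixed tolerance can be stated for `est_bad`: since the true integral diverges, the estimate never settles, whereas `est_x2` concentrates around 1.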
Approximating general integrals

- If the integral does not correspond to an expectation, we can smuggle in a pdf $q$ to rewrite it as an expected value with respect to $q$:
  $$I = \int g(x)\, dx = \int g(x) \frac{q(x)}{q(x)}\, dx = \int \frac{g(x)}{q(x)}\, q(x)\, dx = \mathbb{E}_{q(x)}\left[\frac{g(x)}{q(x)}\right] \approx \frac{1}{n} \sum_{i=1}^n \frac{g(x_i)}{q(x_i)},$$
  with $x_i \sim q(x)$ (iid).
- This is the basic idea of importance sampling.
- $q$ is called the importance (or proposal) distribution.

Choice of the importance distribution

- Call the approximation $\hat{I}$:
  $$\hat{I} = \frac{1}{n} \sum_{i=1}^n \frac{g(x_i)}{q(x_i)}$$
- $\hat{I}$ is unbiased by construction:
  $$\mathbb{E}[\hat{I}] = \mathbb{E}_q\left[\frac{g(x)}{q(x)}\right] = \int \frac{g(x)}{q(x)}\, q(x)\, dx = \int g(x)\, dx = I$$
- Variance:
  $$\mathbb{V}[\hat{I}] = \frac{1}{n} \mathbb{V}_q\left[\frac{g(x)}{q(x)}\right] = \frac{1}{n} \mathbb{E}_q\left[\left(\frac{g(x)}{q(x)}\right)^2\right] - \frac{1}{n} \underbrace{\mathbb{E}_q\left[\frac{g(x)}{q(x)}\right]^2}_{I^2}$$
  It depends on the second moment.

Choice of the importance distribution (continued)

- The second moment is
  $$\mathbb{E}_q\left[\left(\frac{g(x)}{q(x)}\right)^2\right] = \int \left(\frac{g(x)}{q(x)}\right)^2 q(x)\, dx = \int \frac{g(x)^2}{q(x)}\, dx = \int |g(x)| \frac{|g(x)|}{q(x)}\, dx.$$
- Bad: $q(x)$ is small when $|g(x)|$ is large.
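The importance sampling estimator above can be sketched concretely (the choices of $g$, the proposal $q$, and all names are mine, not from the slides). Take $g(x) = \exp(-x^2/2)$, whose integral over the real line is $\sqrt{2\pi} \approx 2.507$, and a Gaussian proposal $q = \mathcal{N}(0, 2^2)$ whose heavier tails keep the weights $g/q$ bounded:

```python
import math
import random

SQRT_2PI = math.sqrt(2.0 * math.pi)

def g(x):
    # Unnormalised Gaussian bump; its integral over R is sqrt(2*pi)
    return math.exp(-0.5 * x * x)

def q_pdf(x, sigma=2.0):
    # Proposal density N(0, sigma^2); wider than g so g/q stays bounded
    return math.exp(-0.5 * (x / sigma) ** 2) / (sigma * SQRT_2PI)

def importance_sample(n, sigma=2.0):
    """I = integral of g(x) dx ≈ (1/n) sum g(x_i)/q(x_i), x_i ~ q."""
    total = 0.0
    for _ in range(n):
        x = random.gauss(0.0, sigma)
        total += g(x) / q_pdf(x, sigma)
    return total / n

random.seed(3)
I_hat = importance_sample(100_000)  # true value: sqrt(2*pi) ≈ 2.5066
```

Choosing a proposal narrower than $g$ would make $q(x)$ small where $|g(x)|$ is still large, blowing up the second moment of the weights, which is exactly the failure mode described above.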