Lab 4: Monte Carlo Integration and Variance Reduction

Lecturer: Zhao Jianhua
Department of Statistics, Yunnan University of Finance and Economics

Task

The objective of this lab is to learn methods for Monte Carlo integration and variance reduction, including Monte Carlo integration, antithetic variables, control variates, importance sampling, stratified sampling, and stratified importance sampling.

1 Monte Carlo Integration

1.1 Simple Monte Carlo estimator

1.1.1 Example 5.1 (Simple Monte Carlo integration)

Compute a Monte Carlo (MC) estimate of

    \theta = \int_0^1 e^{-x} \, dx

and compare the estimate with the exact value.

m <- 10000
x <- runif(m)
theta.hat <- mean(exp(-x))
print(theta.hat)
## [1] 0.6324415
print(1 - exp(-1))
## [1] 0.6321206

The estimate is \hat{\theta} \approx 0.6324 and the exact value is \theta = 1 - e^{-1} \approx 0.6321.

1.1.2 Example 5.2 (Simple Monte Carlo integration, cont.)

Compute a MC estimate of \theta = \int_2^4 e^{-x} \, dx and compare the estimate with the exact value of the integral.

m <- 10000
x <- runif(m, min = 2, max = 4)
theta.hat <- mean(exp(-x)) * 2
print(theta.hat)
## [1] 0.1168929
print(exp(-2) - exp(-4))
## [1] 0.1170196

The estimate is \hat{\theta} \approx 0.1169 and the exact value is \theta = e^{-2} - e^{-4} \approx 0.1170.

1.1.3 Example 5.3 (Monte Carlo integration, unbounded interval)

Use the MC approach to estimate the standard normal cdf

    \Phi(x) = \int_{-\infty}^{x} \frac{1}{\sqrt{2\pi}} e^{-t^2/2} \, dt.

Since the integration covers an unbounded interval, we break the problem into two cases, x \ge 0 and x < 0, and use the symmetry of the normal density to handle the second case.

To estimate \theta = \int_0^x e^{-t^2/2} \, dt for x > 0, we could generate random U(0, x) numbers, but the parameters of the uniform distribution would then change with each value of x. We prefer an algorithm that always samples from U(0, 1), via a change of variables. Making the substitution y = t/x, we have dt = x \, dy and

    \theta = \int_0^1 x e^{-(xy)^2/2} \, dy.

Thus \theta = E_Y[x e^{-(xY)^2/2}], where the random variable Y has the U(0, 1) distribution.
Generate iid U(0, 1) random numbers u_1, \dots, u_m, and compute

    \hat{\theta} = \overline{g_m(u)} = \frac{1}{m} \sum_{i=1}^{m} x e^{-(u_i x)^2/2}.

The sample mean \hat{\theta} \to E[\hat{\theta}] = \theta as m \to \infty. If x > 0, the estimate of \Phi(x) is 0.5 + \hat{\theta}/\sqrt{2\pi}. If x < 0, compute \Phi(x) = 1 - \Phi(-x).

x <- seq(0.1, 2.5, length = 10)
m <- 10000
u <- runif(m)
cdf <- numeric(length(x))
for (i in 1:length(x)) {
    g <- x[i] * exp(-(u * x[i])^2/2)
    cdf[i] <- mean(g)/sqrt(2 * pi) + 0.5
}
Phi <- pnorm(x)
print(round(rbind(x, cdf, Phi), 3))
##      [,1]  [,2]  [,3]  [,4]  [,5]  [,6]  [,7]  [,8]  [,9] [,10]
## x    0.10 0.367 0.633 0.900 1.167 1.433 1.700 1.967 2.233 2.500
## cdf  0.54 0.643 0.737 0.816 0.879 0.925 0.957 0.978 0.991 0.999
## Phi  0.54 0.643 0.737 0.816 0.878 0.924 0.955 0.975 0.987 0.994

Now the estimates \hat{\theta} for ten values of x are stored in the vector cdf. Compare the estimates with the values \Phi(x) computed (numerically) by the pnorm function. The MC estimates appear to be very close to the pnorm values. (The estimates will be worse in the extreme upper tail of the distribution.)

1.1.4 Example 5.4 (Example 5.3, cont.)

Let I(\cdot) be the indicator function and Z \sim N(0, 1). Then for any constant x we have E[I(Z \le x)] = P(Z \le x) = \Phi(x). Generate a random sample z_1, \dots, z_m from the standard normal distribution. Then the sample mean

    \widehat{\Phi}(x) = \frac{1}{m} \sum_{i=1}^{m} I(z_i \le x) \to E[I(Z \le x)] = \Phi(x).

x <- seq(0.1, 2.5, length = 10)
m <- 10000
z <- rnorm(m)
dim(x) <- length(x)
p <- apply(x, MARGIN = 1, FUN = function(x, z) {
    mean(z < x)
}, z = z)
Phi <- pnorm(x)
print(round(rbind(x, p, Phi), 3))
##      [,1]  [,2]  [,3]  [,4]  [,5]  [,6]  [,7]  [,8]  [,9] [,10]
## x   0.100 0.367 0.633 0.900 1.167 1.433 1.700 1.967 2.233 2.500
## p   0.548 0.648 0.741 0.815 0.877 0.925 0.956 0.976 0.987 0.994
## Phi 0.540 0.643 0.737 0.816 0.878 0.924 0.955 0.975 0.987 0.994

Compared with Example 5.3, there is better agreement with pnorm in the upper tail, but worse agreement near the center.
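The tail-versus-center comparison can also be checked directly by estimating the standard error of each estimator. The following is a minimal sketch, not part of the original lab (the helper name se.compare is illustrative), estimating the per-replicate standard errors of the Example 5.3 and Example 5.4 estimators of \Phi(x):

```r
# Hypothetical helper (not from the lab text): estimated standard errors
# of the Example 5.3 estimator (change-of-variables integrand) and the
# Example 5.4 estimator (normal indicator) of Phi(x), from m replicates.
se.compare <- function(x, m = 10000) {
    u <- runif(m)
    z <- rnorm(m)
    g <- x * exp(-(u * x)^2/2)/sqrt(2 * pi)  # Example 5.3 contribution
    h <- as.integer(z < x)                   # Example 5.4 indicator
    c(se.ex53 = sd(g)/sqrt(m), se.ex54 = sd(h)/sqrt(m))
}
set.seed(1)
se.compare(0.5)  # near the center the Example 5.3 estimator has smaller SE
se.compare(2.5)  # in the upper tail the indicator estimator has smaller SE
```

This agrees with the observation above: the indicator method is preferable in the upper tail, while the change-of-variables estimator is much more precise near the center.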
1.1.5 Example 5.5 (Error bounds for MC integration)

Estimate the variance of the estimator in Example 5.4, and construct approximate 95% confidence intervals for estimates of \Phi(2) and \Phi(2.5).

x <- 2
m <- 10000
z <- rnorm(m)
g <- (z < x)  # the indicator function
v <- mean((g - mean(g))^2)/m
cdf <- mean(g)
c(cdf, v)
## [1] 9.771000e-01 2.237559e-06
c(cdf - 1.96 * sqrt(v), cdf + 1.96 * sqrt(v))
## [1] 0.9741681 0.9800319

The probability P(I(Z < x) = 1) is \Phi(2) \approx 0.9772. The variance of g(X) is therefore approximately (0.9772)(1 - 0.9772)/10000 \approx 2.223e-06. The MC estimate 2.238e-06 of the variance is quite close to this value.

The probability P(I(Z < x) = 1) is \Phi(2.5) \approx 0.995. The MC estimate 5.272e-07 of the variance is approximately equal to the theoretical value (0.995)(1 - 0.995)/10000 = 4.975e-07.

2 Antithetic Variables

Refer to Example 5.3, estimation of the standard normal cdf

    \Phi(x) = \int_{-\infty}^{x} \frac{1}{\sqrt{2\pi}} e^{-t^2/2} \, dt.

Now use antithetic variables, and find the approximate reduction in standard error. The target parameter is \theta = E_U[x e^{-(xU)^2/2}], where U has the U(0, 1) distribution.

By restricting the simulation to the upper tail, the function g(\cdot) is monotone, so the hypothesis of Corollary 5.1 is satisfied. Generate random numbers u_1, \dots, u_{m/2} \sim U(0, 1) and compute half of the replicates using

    Y_j = g^{(j)}(u) = x e^{-(u_j x)^2/2}, \quad j = 1, \dots, m/2,

as before, but compute the remaining half of the replicates using

    Y'_j = x e^{-((1 - u_j) x)^2/2}, \quad j = 1, \dots, m/2.

The sample mean

    \hat{\theta} = \frac{1}{m} \sum_{j=1}^{m/2} \left( x e^{-(u_j x)^2/2} + x e^{-((1 - u_j) x)^2/2} \right)
                 = \frac{1}{m/2} \sum_{j=1}^{m/2} \frac{x e^{-(u_j x)^2/2} + x e^{-((1 - u_j) x)^2/2}}{2}

converges to E[\hat{\theta}] = \theta as m \to \infty. If x > 0, the estimate of \Phi(x) is 0.5 + \hat{\theta}/\sqrt{2\pi}; if x < 0, compute \Phi(x) = 1 - \Phi(-x).

2.1 Example 5.6 (Antithetic variables)

MC.Phi below implements MC estimation of \Phi(x), computing the estimate with or without antithetic sampling. The MC.Phi function could be made more general by adding an argument naming the integrand function (see integrate).
MC.Phi <- function(x, R = 10000, antithetic = TRUE) {
    u <- runif(R/2)
    if (!antithetic) v <- runif(R/2) else v <- 1 - u
    u <- c(u, v)
    cdf <- numeric(length(x))
    for (i in 1:length(x)) {
        g <- x[i] * exp(-(u * x[i])^2/2)
        cdf[i] <- mean(g)/sqrt(2 * pi) + 0.5
    }
    cdf
}

Compare estimates obtained from a single MC experiment:

x <- seq(0.1, 2.5, length = 5)
Phi <- pnorm(x)
set.seed(123)
MC1 <- MC.Phi(x, anti = FALSE)
set.seed(123)
MC2 <- MC.Phi(x)
print(round(rbind(x, MC1, MC2, Phi), 5))
##        [,1]    [,2]    [,3]    [,4]    [,5]
## x   0.10000 0.70000 1.30000 1.90000 2.50000
## MC1 0.53983 0.75825 0.90418 0.97311 0.99594
## MC2 0.53983 0.75805 0.90325 0.97132 0.99370
## Phi 0.53983 0.75804 0.90320 0.97128 0.99379

The approximate reduction in variance can be estimated for a given x by a simulation under both methods:

m <- 1000
MC1 <- MC2 <- numeric(m)
x <- 1.95
for (i in 1:m) {
    MC1[i] <- MC.Phi(x, R = 1000, anti = FALSE)
    MC2[i] <- MC.Phi(x, R = 1000)
}
print(sd(MC1))
## [1] 0.006874616
print(sd(MC2))
## [1] 0.0004392972
print((var(MC1) - var(MC2))/var(MC1))
## [1] 0.9959166

3 Control Variates

3.0.1 Example 5.7 (Control variate)

Apply the control variate approach to compute

    \theta = E[e^U] = \int_0^1 e^u \, du,

where U \sim U(0, 1). By direct integration, \theta = e - 1 \approx 1.718282. If the simple MC approach is applied with m replicates, the variance of the estimator is Var(g(U))/m, where

    Var(g(U)) = Var(e^U) = E[e^{2U}] - \theta^2 = \frac{e^2 - 1}{2} - (e - 1)^2 \approx 0.2420351.

A natural choice for a control variate is f(U) = U, where U \sim U(0, 1). Then E[U] = 1/2, Var(U) = 1/12, and

    Cov(e^U, U) = 1 - (1/2)(e - 1) \approx 0.1408591.

Hence

    c^* = \frac{-Cov(e^U, U)}{Var(U)} = -12 + 6(e - 1) \approx -1.690309.

Our controlled estimator is \hat{\theta}_{c^*} = e^U - 1.690309(U - 0.5).
For m replicates, m \, Var(\hat{\theta}_{c^*}) is

    Var(e^U) - \frac{Cov(e^U, U)^2}{Var(U)} = \frac{e^2 - 1}{2} - (e - 1)^2 - 12 \left( 1 - \frac{e - 1}{2} \right)^2
    \approx 0.2420351 - 12(0.1408591)^2 \approx 0.003940175.

The percent reduction in variance using the control variate compared with the simple MC estimate is

    100 \times \frac{0.2420351 - 0.003940175}{0.2420351} \approx 98.37\%.

Empirically comparing the simple MC estimate with the control variate approach:

m <- 10000
a <- -12 + 6 * (exp(1) - 1)
U <- runif(m)
T1 <- exp(U)                  # simple MC
T2 <- exp(U) + a * (U - 1/2)  # controlled
mean(T1)
## [1] 1.711419
mean(T2)
## [1] 1.717676
(var(T1) - var(T2))/var(T1)
## [1] 0.9837252

illustrating that the percent reduction of about 98.37% in variance derived above is approximately achieved in this simulation.
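In Example 5.7 the constant c^* is available in closed form. When Cov(g(U), f(U)) or Var(f(U)) is unknown, c^* can instead be estimated from a pilot sample. A hedged sketch (the pilot-sample scheme and variable names are illustrative, not from the lab text):

```r
# Estimate c* = -Cov(e^U, U)/Var(U) from a pilot sample, then apply the
# control variate with the estimated coefficient on fresh replicates.
set.seed(123)
pilot <- runif(1000)
c.hat <- -cov(exp(pilot), pilot)/var(pilot)

m <- 10000
U <- runif(m)
T2 <- exp(U) + c.hat * (U - 1/2)  # controlled estimator, estimated c*
mean(T2)  # should be close to theta = e - 1 = 1.718282
c.hat     # should be close to the analytic value -1.690309
```

Because the pilot sample is independent of U and E[U - 1/2] = 0, the controlled estimator with estimated coefficient remains unbiased for \theta.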