Generating Random Variates
CHAPTER 8  Generating Random Variates

8.1 Introduction
8.2 General Approaches to Generating Random Variates
    8.2.1 Inverse Transform
    8.2.2 Composition
    8.2.3 Convolution
    8.2.4 Acceptance-Rejection
    8.2.5 Special Properties
8.3 Generating Continuous Random Variates
8.4 Generating Discrete Random Variates
    8.4.3 Arbitrary Discrete Distribution
8.5 Generating Random Vectors, Correlated Random Variates, and Stochastic Processes
    8.5.1 Using Conditional Distributions
    8.5.2 Multivariate Normal and Multivariate Lognormal
    8.5.3 Correlated Gamma Random Variates
    8.5.4 Generating from Multivariate Families
    8.5.5 Generating Random Vectors with Arbitrarily Specified
Marginal Distributions and Correlations
    8.5.6 Generating Stochastic Processes
8.6 Generating Arrival Processes
    8.6.1 Poisson Process
    8.6.2 Nonstationary Poisson Process
    8.6.3 Batch Arrivals

8.1 Introduction

Algorithms to produce observations ("variates") from some desired input distribution (exponential, gamma, etc.)
The formal algorithm depends on the desired distribution, but all algorithms have the same general form:
    Generate one or more IID U(0, 1) random numbers → apply a transformation (depends on the desired distribution) → return X ~ the desired distribution
Note the critical importance of a good random-number generator (Chap. 7)
There may be several algorithms for a desired input distribution; we want the algorithm to be:
Exact: X has exactly (not approximately) the desired distribution
    Example of an approximate algorithm: treat Z = U1 + U2 + ...
+ U12 − 6 as N(0, 1)
    The mean and variance are correct, and the CLT gives approximate normality
    But the range is clearly incorrect: this Z is confined to [−6, 6], while a true normal is unbounded
Efficient: low storage; fast (both marginal generation and setup); efficient regardless of parameter values (robust)
Simple: easy to understand and implement (often a tradeoff against efficiency)
Requires only U(0, 1) input
One U → one X, if possible (for speed, and for synchronization in variance reduction)

8.2 General Approaches to Generating Random Variates

Five general approaches to generating a univariate random variate from a distribution:
- Inverse transform
- Composition
- Convolution
- Acceptance-rejection
- Special properties

8.2.1 Inverse Transform

The simplest method (in principle), and in some ways the "best"; known to Gauss

Continuous case
Suppose X is continuous with cumulative distribution function (CDF) F(x) = P(X ≤ x) for all real x, and that F is strictly increasing over all x
Algorithm:
1. Generate U ~ U(0, 1) (random-number generator)
2. Find X such that F(X) = U and return this value X
Step 2 involves solving the equation F(X) = U for X; the solution is written X = F⁻¹(U), i.e., we must invert the CDF F
Inverting F might be easy (exponential) or difficult (normal), in which case numerical methods may be necessary (and worthwhile: the result can be made "exact" up to machine accuracy)

Proof (assume F is strictly increasing for all x): for a fixed value x0,
P(returned X ≤ x0) = P(F⁻¹(U) ≤ x0)         (definition of X in the algorithm)
                   = P(F(F⁻¹(U)) ≤ F(x0))   (F is monotone increasing)
(def.
of inverse function)
                   = P(U ≤ F(x0))
                   = P(0 ≤ U ≤ F(x0))       (U ≥ 0 with probability 1)
                   = F(x0) − 0              (U ~ U(0, 1))
                   = F(x0)                  (as desired)

Proof by picture: pick a fixed value x0; then X1 ≤ x0 if and only if U1 ≤ F(x0), so
P(X1 ≤ x0) = P(U1 ≤ F(x0)) = F(x0), by the definition of a CDF
[Figure: U1 on the vertical axis is mapped through the CDF curve down to X1 on the horizontal axis]

Example of a continuous inverse-transform derivation
Weibull(α, β) distribution, with parameters α > 0 and β > 0
Density: f(x) = α β^(−α) x^(α−1) e^(−(x/β)^α) for x > 0, and f(x) = 0 otherwise
CDF: F(x) = ∫ f(t) dt (from −∞ to x) = 1 − e^(−(x/β)^α) for x > 0, and F(x) = 0 otherwise
Solve U = F(X) for X:
    U = 1 − e^(−(X/β)^α)
    e^(−(X/β)^α) = 1 − U
    −(X/β)^α = ln(1 − U)
    X/β = [−ln(1 − U)]^(1/α)
    X = β[−ln(1 − U)]^(1/α)
Since 1 − U ~ U(0, 1) as well, we can replace 1 − U by U to get the final algorithm:
1. Generate U ~ U(0, 1)
2. Return X = β(−ln U)^(1/α)

Intuition behind the inverse-transform method (Weibull(α = 1.5, β = 6) example): [figure not reproduced]
The algorithm in action: [figure not reproduced]

Discrete case
Suppose X is discrete with CDF F(x) = P(X ≤ x) for all real x and probability mass function p(xi) = P(X = xi), where x1, x2, ... are the possible values X can take on
Algorithm:
1. Generate U ~ U(0, 1) (random-number generator)
2. Find the smallest positive integer I such that U ≤ F(xI)
3.
Return X = xI
Step 2 involves a "search" of some kind; several computational options:
- Direct left-to-right search: good if the p(xi)'s are fairly constant
- If the p(xi)'s vary a lot, first sort them into decreasing order and check the largest first, ..., the smallest last; the search is then more likely to terminate quickly
- Exploit special properties of the form of the p(xi)'s, if possible
Unlike the continuous case, the discrete inverse-transform method can always be used for any discrete distribution (though it may not be the most efficient approach)
Proof: from the picture above, P(X = xi) = p(xi) in every case

Example of the discrete inverse-transform method
Discrete uniform distribution on 1, 2, ..., 100: xi = i, and p(xi) = p(i) = P(X = i) = 0.01 for i = 1, 2, ..., 100
[Figure: the CDF F(x) is a staircase rising by 0.01 at each of x = 1, 2, 3, ..., 98, 99, 100, from 0.01 up to 1.00]
"Literal" inverse-transform search:
1. Generate U ~ U(0, 1)
2. If U ≤ 0.01, return X = 1 and stop; else go on
3. If U ≤ 0.02, return X = 2 and stop; else go on
4. If U ≤ 0.03, return X = 3 and stop; else go on
...
100. If U ≤ 0.99, return X = 99 and stop; else go on
101. Return X = 100
Equivalently (on a U-for-U basis):
1. Generate U ~ U(0, 1)
2.
Return X = ⌊100U⌋ + 1

Generalized inverse-transform method
Valid for any CDF F(x): return X = min{x: F(x) ≥ U}, where U ~ U(0, 1)
- Continuous, possibly with flat spots (i.e., not strictly increasing)
- Discrete
- Mixed continuous-discrete

Problems with the inverse-transform approach:
- Must invert the CDF, which may be difficult (numerical methods)
- May not be the fastest or simplest approach for a given distribution

Advantages of the inverse-transform approach:
Facilitates variance-reduction techniques; an example:
- Goal: study the effect of speeding up a bottleneck machine, comparing the current machine with a faster one
- To model the speedup, change the service-time distribution for this machine
- Run 1 (baseline, current system): use inverse transform to generate service-time variates for the existing machine
- Run 2 (faster machine): use inverse transform to generate service-time variates for the proposed machine
- Use the very same U(0, 1) random numbers for both variates (common random numbers):
    The two service times will be positively correlated with each other
    Inverse transform makes this correlation as strong as possible, in comparison with other variate-generation methods
    This reduces the variability in the estimate of the effect of the faster machine, and can have a big effect on estimate quality (or on the computational effort required)
    This is the main reason why inverse transform is often regarded as "the best" method
Other advantages:
- Generating from truncated distributions (pp. 447-448 and Problem 8.4 of SMA)
- Generating order statistics without sorting, for reliability models: given Y1, Y2, ..., Yn IID ~ F and wanting Y(i), we could generate the Yi's directly and sort; alternatively, return Y(i) = F⁻¹(V), where V ~ beta(i, n − i + 1)

8.2.2 Composition

We want to generate from the CDF F, but inverse transform is difficult or slow
Suppose we can find other CDFs F1, F2, ... (a finite or infinite list) and weights p1, p2, ... (pj ≥ 0 and p1 + p2 + ⋯ = 1) such that for all x,
    F(x) = p1F1(x) + p2F2(x) + ⋯
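As a minimal sketch in Python (added for illustration; the function names are not from the text): the first function implements the Weibull inverse transform X = β(−ln U)^(1/α) derived above, the second the generalized discrete inverse transform (return the smallest xI with U ≤ F(xI)), and the third uses the composition idea just stated to sample the standard Laplace density f(x) = 0.5e^(−|x|), which is the equal-weight mixture of an Exp(1) density on x > 0 and its mirror image on x < 0.

```python
import math
import random

def weibull_variate(alpha, beta, u=None):
    # Inverse transform for Weibull(alpha, beta): X = beta * (-ln U)^(1/alpha).
    # Assumes the generator delivers U in (0, 1), so log(u) is defined.
    if u is None:
        u = random.random()
    return beta * (-math.log(u)) ** (1.0 / alpha)

def discrete_inverse_transform(xs, ps, u=None):
    # Generalized discrete inverse transform: return the smallest x_I
    # with U <= F(x_I).  xs are the possible values (in order), ps their
    # probabilities; a left-to-right search accumulates the CDF.
    if u is None:
        u = random.random()
    cum = 0.0
    for x, p in zip(xs, ps):
        cum += p
        if u <= cum:
            return x
    return xs[-1]  # guard against floating-point roundoff in the last step

def laplace_variate():
    # Composition: f(x) = 0.5*f1(x) + 0.5*f2(x), where f2 is Exp(1) on
    # x > 0 and f1 is its mirror image on x < 0.  Step 1 picks the
    # component J with P(J = j) = 0.5; step 2 generates from that
    # component by inverse transform.
    u1 = random.random()                   # selects the component J
    x = -math.log(1.0 - random.random())   # Exp(1) by inverse transform
    return x if u1 < 0.5 else -x
```

Note that the first two routines consume exactly one U per returned X, which is what makes the common-random-numbers synchronization described above work; the composition sampler necessarily consumes two.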
(Equivalently, we can decompose the density f(x) or mass function p(x) into a convex combination of other density or mass functions)
Algorithm:
1. Generate a positive random integer J such that P(J = j) = pj
2. Return X with CDF FJ (given J = j, X is generated independently of J)
Proof: for fixed x,
P(returned X ≤ x) = Σ_j P(X ≤ x | J = j) P(J = j)    (condition on J = j)
                  = Σ_j P(X ≤ x | J = j) pj          (distribution of