[1]

Monte Carlo Sampling Methods

Jasmina L. Vujic

Nuclear Engineering Department, University of California, Berkeley

Email: [email protected] phone: (510) 643-8085 fax: (510) 643-9685

UCBNE, J. Vujic

Monte Carlo [2]

Monte Carlo is a computational technique based on constructing a random process for a problem and carrying out a NUMERICAL EXPERIMENT by N-fold sampling from a random sequence of numbers with a PRESCRIBED probability distribution.

x - the random variable

x̂ = (1/N) ∑_{i=1}^{N} x_i

x̂ - the estimated or sample mean of x
x̄ - the expectation or true mean value of x

If a problem can be given a PROBABILISTIC interpretation, then it can be modeled using RANDOM NUMBERS.
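The sample-mean estimator above is easy to demonstrate in code. A minimal Python sketch (the function name `sample_mean` and the uniform-variable example are illustrative, not from the slides):

```python
import random

def sample_mean(sample, n):
    """Estimate the true mean by the N-fold sample mean: x_hat = (1/N) * sum of x_i."""
    return sum(sample() for _ in range(n)) / n

# Example: x uniform on [0, 1]; the true mean is 0.5
random.seed(1)
x_hat = sample_mean(random.random, 100_000)
```

As N grows, the estimate x̂ converges to the true mean with statistical error shrinking like 1/√N.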

Monte Carlo [3]

• Monte Carlo techniques came from the complicated diffusion problems that were encountered in the early work on atomic energy.
• 1772 - Comte de Buffon: earliest documented use of random sampling to solve a mathematical problem.
• 1786 - Laplace suggested that π could be evaluated by random sampling.
• Lord Kelvin used random sampling to aid in evaluating time integrals associated with the kinetic theory of gases.
• Enrico Fermi was among the first to apply random sampling methods, to study neutron moderation in Rome.
• 1947 - Fermi, John von Neumann, Stan Ulam and others developed computer-oriented Monte Carlo methods at Los Alamos to trace neutrons through fissionable materials.


Monte Carlo [4]

Monte Carlo methods can be used to solve:

a) Problems that are STOCHASTIC (probabilistic) by nature:
- particle transport,
- telephone and other communication systems,
- population studies based on the statistics of survival and reproduction.

b) Problems that are DETERMINISTIC by nature:
- the evaluation of integrals,
- solving systems of algebraic equations,
- solving partial differential equations.


Monte Carlo [5]

Monte Carlo methods are divided into:

a) ANALOG, where the natural laws are PRESERVED - the game played is the analog of the physical problem of interest (i.e., the history of each particle is simulated exactly),

b) NON-ANALOG, where, in order to reduce the required computational time, the strict analog simulation of particle histories is abandoned (i.e., we CHEAT!)

Variance-reduction techniques:
- Absorption suppression
- History termination and Russian Roulette
- Splitting and Russian Roulette
- Forced Collisions
- Source Biasing

Example 1: Evaluation of Integrals [6]

(Figure: function f(x) on [a, b] bounded above by f_max; random points below the curve are accepted, points above it are rejected.)

I = ∫_a^b f(x) dx - the area under the function f(x); R = (b - a)·f_max - the area of the bounding rectangle.

P = I/R is the probability that a random point in the rectangle lies under f(x), thus I = R·P.

Step 1: Choose a random point (x1, y1): x1 = a + (b - a)·ξ1 and y1 = f_max·ξ2
Step 2: If y1 ≤ f(x1), accept the point; if y1 > f(x1), reject the point
Step 3: Repeat this process N times; Ni - the number of accepted points
Step 4: Determine P = Ni/N and the value of I = R·Ni/N
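The four steps above can be sketched as a short Python routine (the name `hit_or_miss` and the x² test integral are illustrative):

```python
import random

def hit_or_miss(f, a, b, fmax, n):
    """Estimate I = integral of f over [a, b] by accept/reject counting: I = R*Ni/N."""
    n_acc = 0
    for _ in range(n):
        x = a + (b - a) * random.random()  # Step 1: random point (x1, y1)
        y = fmax * random.random()
        if y <= f(x):                      # Step 2: accept if under the curve
            n_acc += 1
    return (b - a) * fmax * n_acc / n      # Step 4: I = R * Ni / N

# Example: integral of x^2 over [0, 1] is 1/3
random.seed(2)
estimate = hit_or_miss(lambda x: x * x, 0.0, 1.0, 1.0, 200_000)
```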


Major Components of Monte Carlo [7]

Major Components of a Monte Carlo Algorithm

• Probability distribution functions (pdf's) - the physical (or mathematical) system must be described by a set of pdf's.
• Random number generator - a source of random numbers uniformly distributed on the unit interval must be available.
• Sampling rule - a prescription for sampling from the specified pdf, assuming the availability of random numbers on the unit interval.
• Scoring (or tallying) - the outcomes must be accumulated into overall tallies or scores for the quantities of interest.
• Error estimation - an estimate of the statistical error (variance) as a function of the number of trials and other quantities must be determined.
• Variance reduction techniques - methods for reducing the variance in the estimated solution, to reduce the computational time of the Monte Carlo simulation.
• Parallelization and vectorization - efficient use of advanced computer architectures.


Probability Distribution Functions [8]


Random Variable, x - a variable that takes on particular values with a frequency that is determined by some underlying probability distribution.

Continuous Probability Distribution

P{a ≤ x ≤ b}

Discrete Probability Distribution

P{x = x_i} = p_i


Probability Distribution Functions [9]

PDFs and CDFs (continuous)

Probability Density Function (PDF) - continuous
• f(x) dx = P{x ≤ x' ≤ x + dx}
• 0 ≤ f(x), ∫_{-∞}^{∞} f(x) dx = 1
• P{a ≤ x ≤ b} = ∫_a^b f(x) dx

Cumulative Distribution Function (CDF) - continuous
• F(x) = ∫_{-∞}^{x} f(x') dx' = P{x' ≤ x}
• 0 ≤ F(x) ≤ 1
• dF(x)/dx = f(x) ≥ 0
• ∫_a^b f(x') dx' = P{a ≤ x ≤ b} = F(b) - F(a)

Probability Distribution Functions [10]

PDFs and CDFs (discrete)

Probability Density Function (PDF) - discrete
• f(x_i) = p_i δ(x - x_i)
• 0 ≤ f(x_i)
• ∑_i f(x_i) Δx_i = 1, or ∑_i p_i = 1

Cumulative Distribution Function (CDF) - discrete
• F(x) = ∑_{x_i < x} p_i = ∑_{x_i < x} f(x_i) Δx_i
• 0 ≤ F(x) ≤ 1

(Figure: bars p1, p2, p3 at x1, x2, x3 for f(x), and the step CDF F(x) rising from p1 to p1+p2 to p1+p2+p3 = 1.)

Monte Carlo & Random Sampling [11]

Sampling from a given discrete distribution

Given f(x_i) = p_i and ∑_i p_i = 1, i = 1, 2, ..., N, divide the unit interval into bins d_k:

0, p1, p1+p2, p1+p2+p3, ..., p1+p2+...+pN = 1

Generate ξ uniformly on [0,1], with 0 ≤ ξ ≤ 1. Then P{x = x_k} = p_k = P{ξ ∈ d_k}, i.e., select the x_k such that

∑_{i=1}^{k-1} p_i ≤ ξ < ∑_{i=1}^{k} p_i
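The bin search above is a few lines of Python (a sketch; the linear scan over cumulative sums and the example probabilities are illustrative):

```python
import random

def sample_discrete(probs):
    """Return index k such that sum(p[:k]) <= xi < sum(p[:k+1])."""
    xi = random.random()
    cum = 0.0
    for k, p in enumerate(probs):
        cum += p
        if xi < cum:
            return k
    return len(probs) - 1  # guard against floating-point round-off when sum(p) = 1

random.seed(3)
counts = [0, 0, 0]
for _ in range(100_000):
    counts[sample_discrete([0.2, 0.5, 0.3])] += 1
```

The observed frequencies approach the prescribed p_i as the number of trials grows.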

Monte Carlo & Random Sampling [12]

Sampling from a given continuous distribution

If f(x) and F(x) represent the PDF and CDF of a random variable x, if ξ is a random number distributed uniformly on [0,1] with PDF g(ξ) = 1, and if x is such that

F(x) = ξ

then for each ξ there is a corresponding x, and the variable x is distributed according to the probability density function f(x).

Proof:

For each ξ in (ξ, ξ+Δξ) there is an x in (x, x+Δx). Assume that the PDF for x is q(x). Show that q(x) = f(x):

q(x)Δx = g(ξ)Δξ = Δξ = (ξ+Δξ) - ξ = F(x+Δx) - F(x)

q(x) = [F(x+Δx) - F(x)]/Δx → f(x) as Δx → 0

Thus, if x = F⁻¹(ξ), then x is distributed according to f(x).
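As a concrete instance, the exponential PDF can be inverted in closed form. A minimal sketch (the choice of the exponential distribution is illustrative, not from the slide):

```python
import math
import random

def sample_exponential(lam):
    """f(x) = lam*exp(-lam*x), F(x) = 1 - exp(-lam*x).
    Setting F(x) = xi and inverting gives x = -log(1 - xi)/lam,
    so x is distributed according to f(x)."""
    xi = random.random()
    return -math.log(1.0 - xi) / lam

random.seed(4)
xs = [sample_exponential(2.0) for _ in range(100_000)]
mean = sum(xs) / len(xs)  # the true mean is 1/lam = 0.5
```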

Monte Carlo & Random Sampling [13]

Monte Carlo Codes

Categories of Random Sampling
• Random number generator - uniform PDF on [0,1]
• Sampling from analytic PDF's - normal, exponential, Maxwellian, ...
• Sampling from tabulated PDF's - angular PDF's, spectra, cross sections, ...

For Monte Carlo Codes...
• Random numbers, ξ, are produced by the R.N. generator on [0,1]
• Non-uniform random variates are produced from the ξ's by
  - direct inversion of CDFs
  - rejection methods
  - transformations
  - composition (mixtures)
  - sums, products, ratios, ...
  - table lookup + interpolation
  - lots (!) of other tricks ...
• Random sampling typically takes < 10% of total CPU time

Random Sampling Methods [14]

Random Number Generator

Pseudo-Random Numbers
• Not strictly "random", but good enough:
  - pass statistical tests
  - reproducible sequence
• Uniform PDF on [0,1]
• Must be easy to compute, must have a large period

Multiplicative congruential method
• Algorithm:
  S0 = initial seed (odd integer, < M)
  Sk = G·S(k-1) mod M, k = 1, 2, ...
  ξk = Sk / M
• Typical (vim, mcnp):
  Sk = 5¹⁹·S(k-1) mod 2⁴⁸
  ξk = Sk / 2⁴⁸
  period = 2⁴⁶ ≈ 7.0 × 10¹³
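The multiplicative congruential recurrence is a few lines of Python (a sketch using the 5¹⁹ mod 2⁴⁸ constants quoted above; the seed value is illustrative):

```python
M = 2 ** 48
G = 5 ** 19

def lcg(seed, n):
    """S_k = G*S_{k-1} mod M; xi_k = S_k/M gives uniform numbers on [0, 1)."""
    s = seed  # initial seed: odd integer, < M
    out = []
    for _ in range(n):
        s = (G * s) % M
        out.append(s / M)
    return out

stream = lcg(12345, 1000)
```

Restarting from the same seed reproduces the sequence exactly, which is what makes Monte Carlo runs repeatable and debuggable.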

Random Sampling Methods [15]

Direct Sampling (Direct Inversion of CDFs)

Direct solution of x̂ ← F⁻¹(ξ)

Sampling Procedure:
• Generate ξ
• Determine x̂ such that ξ = F(x̂)

(Figure: F(x) rising from 0 to 1; the level ξ on the vertical axis maps to x̂ on the horizontal axis.)

Advantages
• Straightforward coding
• "High-level" approach

Disadvantages
• Often involves complicated functions
• In some cases, F(x) cannot be inverted (e.g., Klein-Nishina)

Random Sampling Methods [16]

Rejection Sampling

Used when the inverse of the CDF is costly or impossible to find. Select a bounding function, g(x), such that
• c·g(x) ≥ f(x) for all x
• g(x) is an easy-to-sample PDF

Sampling Procedure:
• Sample x̂ from g(x): x̂ ← G⁻¹(ξ1)
• Test: ξ2·c·g(x̂) ≤ f(x̂)
  if true, accept x̂ - done
  if false, reject x̂ and try again

(Figure: f(x) lying under the bounding curve c·g(x); points under f(x) are accepted, points between f(x) and c·g(x) are rejected.)

Advantages
• Simple computer operations

Disadvantages
• "Low-level" approach, sometimes hard to understand
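A generic rejection loop in Python (a sketch; the helper names are illustrative, and f(x) = 2x bounded by c = 2 times the uniform PDF serves as the example):

```python
import random

def rejection_sample(f, g_sample, g_pdf, c):
    """Repeat: draw x_hat from g, accept when xi2 * c * g(x_hat) <= f(x_hat)."""
    while True:
        x = g_sample()                              # x_hat from G^-1(xi1)
        if random.random() * c * g_pdf(x) <= f(x):  # the acceptance test
            return x                                # accept; else loop again

# Example: f(x) = 2x on [0, 1], bounded by c = 2 times the uniform PDF g(x) = 1
random.seed(6)
xs = [rejection_sample(lambda x: 2.0 * x, random.random, lambda x: 1.0, 2.0)
      for _ in range(100_000)]
mean = sum(xs) / len(xs)  # the mean of f(x) = 2x is 2/3
```

The efficiency of the loop is 1/c, so a tight bounding function matters.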

Random Sampling Methods [17]

Table Look-Up

Used when f(x) is given in the form of a histogram, with tabulated CDF values F_i = F(x_i).

(Figure: histogram f(x) over bins x1, x2, ..., x_{i-1}, x_i, and the piecewise-linear CDF F(x) with the sampled level ξ marked.)

Then, by linear interpolation,

F(x) = [(x - x_{i-1})·F_i + (x_i - x)·F_{i-1}] / (x_i - x_{i-1})

and setting F(x) = ξ,

x = [(x_i - x_{i-1})·ξ - x_i·F_{i-1} + x_{i-1}·F_i] / (F_i - F_{i-1})
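The interpolation formula translates directly into a table-lookup sampler. A sketch (the `bisect` bin search and the uniform-CDF example table are illustrative):

```python
import bisect
import random

def sample_tabulated(x_grid, F_grid):
    """Invert a tabulated CDF: locate i with F_{i-1} <= xi < F_i, then apply
    x = [(x_i - x_{i-1})*xi - x_i*F_{i-1} + x_{i-1}*F_i] / (F_i - F_{i-1})."""
    xi = random.random()
    i = bisect.bisect_left(F_grid, xi)
    i = max(1, min(i, len(F_grid) - 1))  # clamp to a valid interval
    x0, x1 = x_grid[i - 1], x_grid[i]
    F0, F1 = F_grid[i - 1], F_grid[i]
    return ((x1 - x0) * xi - x1 * F0 + x0 * F1) / (F1 - F0)

# Example: tabulated CDF of the uniform distribution, F(x) = x
random.seed(7)
xs = [sample_tabulated([0.0, 0.5, 1.0], [0.0, 0.5, 1.0]) for _ in range(10_000)]
```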

Random Sampling Methods [18]

Sampling Multidimensional Random Variables

If the random quantity is a function of two or more random variables that are independent, the joint PDF and CDF can be written as

f(x, y) = f1(x)·f2(y)

F(x, y) = F1(x)·F2(y)

EXAMPLE: Sampling the direction of an isotropically scattered particle in 3D

Ω = Ω_x i + Ω_y j + Ω_z k = u i + v j + w k

f(Ω) dΩ = dΩ/4π = sinθ dθ dϕ/4π = -d(cosθ) dϕ/4π = -dμ dϕ/4π

f(Ω) = f1(μ)·f2(ϕ) = (1/2)·(1/2π)

F1(μ) = ∫_{-1}^{μ} f1(μ') dμ' = (μ + 1)/2 = ξ1, or μ = 2ξ1 - 1

F2(ϕ) = ∫_0^{ϕ} f2(ϕ') dϕ' = ϕ/2π = ξ2, or ϕ = 2πξ2
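The two inversions above give a compact 3-D isotropic direction sampler. A sketch (the function name and the (u, v, w) return convention are illustrative):

```python
import math
import random

def isotropic_direction():
    """mu = cos(theta) = 2*xi1 - 1 and phi = 2*pi*xi2,
    giving a unit vector (u, v, w) uniform on the sphere."""
    mu = 2.0 * random.random() - 1.0
    phi = 2.0 * math.pi * random.random()
    s = math.sqrt(1.0 - mu * mu)  # sin(theta)
    return (s * math.cos(phi), s * math.sin(phi), mu)

random.seed(8)
u, v, w = isotropic_direction()
```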

Random Sampling Methods [19]

Probability Density Function → Direct Sampling Method

Linear:
f(x) = 2x, 0 < x < 1
x ← √ξ or x ← max(ξ1, ξ2)

Maxwellian (M1, M2, M3):
f(x) = [2/(√π T^{3/2})]·√x·e^{-x/T}, 0 < x
x ← T[-log ξ1 - log ξ2 cos²(πξ3/2)]

Watt Spectrum (W1, W2, W3):
f(x) = [2e^{-ab/4}/√(πa³b)]·e^{-x/a}·sinh√(bx), 0 < x
w ← a[-log ξ1 - log ξ2 cos²(πξ3/2)]
x ← w + a²b/4 + (2ξ4 - 1)·√(a²bw)

Normal (N1, N2):
f(x) = [1/(σ√2π)]·e^{-(1/2)[(x-µ)/σ]²}
x ← µ + σ·√(-2 log ξ1)·cos(2πξ2), or x ← µ + σ·√(-2 log ξ1)·sin(2πξ2)

Random Sampling Examples [20]

Example — 2D Isotropic

f(ρ) = 1/(2π), ρ = (u, v)

Rejection (old vim):

      SUBROUTINE AZIRN_VIM( S, C )
      IMPLICIT DOUBLE PRECISION (A-H, O-Z)
  100 R1 = 2.*RANF() - 1.
      R1SQ = R1*R1
      R2 = RANF()
      R2SQ = R2*R2
      RSQ = R1SQ + R2SQ
      IF (1.-RSQ) 100, 105, 105
  105 S = 2.*R1*R2/RSQ
      C = (R2SQ-R1SQ)/RSQ
      RETURN
      END

Direct (racer, new vim):

      subroutine azirn_new( s, c )
      implicit double precision (a-h,o-z)
      parameter ( twopi = 2.*3.14159265 )
      phi = twopi*ranf()
      c = cos(phi)
      s = sin(phi)
      return
      end

Random Sampling Examples [21]

Example — Watt Spectrum

f(x) = [2e^{-ab/4}/√(πa³b)]·e^{-x/a}·sinh√(bx), 0 < x

Rejection (mcnp)
• Based on Algorithm R12 from the 3rd Monte Carlo Sampler, Everett & Cashwell
• Define K = 1 + ab/8, L = a[K + (K² - 1)^{1/2}], M = L/a - 1
• Set x ← -log ξ1, y ← -log ξ2
• If [y - M(x + 1)]² ≤ bLx, accept: return Lx; otherwise, reject

Direct (new vim)
• Sample from a Maxwellian in the C-of-M frame, then transform to the lab frame:
  w ← a[-log ξ1 - log ξ2 cos²(πξ3/2)]
  x ← w + a²b/4 + (2ξ4 - 1)·√(a²bw)
  (assumes isotropic emission from a fission fragment moving with constant velocity in the C-of-M frame)
• Unpublished sampling scheme, based on the original Watt spectrum derivation
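The direct (new vim) scheme is short enough to sketch in Python. The constants a = 0.965 MeV and b = 2.29 MeV⁻¹ are commonly quoted Watt parameters for thermal fission of U-235 and serve only as an example here:

```python
import math
import random

def sample_watt(a, b):
    """Maxwellian in the C-of-M frame, then transform to the lab frame:
    w = a*(-log xi1 - log xi2 * cos^2(pi*xi3/2));
    x = w + a^2*b/4 + (2*xi4 - 1)*sqrt(a^2*b*w)."""
    xi1 = 1.0 - random.random()  # in (0, 1], safe for log
    xi2 = 1.0 - random.random()
    xi3 = random.random()
    xi4 = random.random()
    w = a * (-math.log(xi1) - math.log(xi2) * math.cos(math.pi * xi3 / 2.0) ** 2)
    return w + a * a * b / 4.0 + (2.0 * xi4 - 1.0) * math.sqrt(a * a * b * w)

random.seed(10)
xs = [sample_watt(0.965, 2.29) for _ in range(50_000)]
mean = sum(xs) / len(xs)  # expected mean of the Watt spectrum is 3a/2 + a^2*b/4
```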

Random Sampling Examples [22]

Example — Linear PDF

f(x) = 2x, 0 ≤ x ≤ 1

Rejection (strictly, this is not "rejection", but it has the same flavor):

if ξ1 ≥ ξ2 then x̂ ← ξ1, else x̂ ← ξ2

or   x̂ ← max(ξ1, ξ2)

or   x̂ ← 1 - |ξ1 - ξ2|

Direct Inversion

F(x) = x², 0 ≤ x ≤ 1

x̂ ← √ξ
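Both recipes can be checked side by side in Python (a sketch; both samplers target the same f(x) = 2x):

```python
import math
import random

random.seed(11)
N = 100_000

# "Rejection-flavored" scheme: x_hat = max(xi1, xi2)
xs_max = [max(random.random(), random.random()) for _ in range(N)]

# Direct inversion: F(x) = x^2, so x_hat = sqrt(xi)
xs_inv = [math.sqrt(random.random()) for _ in range(N)]

mean_max = sum(xs_max) / N  # the mean of f(x) = 2x is 2/3
mean_inv = sum(xs_inv) / N
```

The max-of-two trick avoids the square root entirely, which mattered on older hardware.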

Random Sampling Examples [23]

Example — Collision Type Sampling

Assume (for photon interactions):

μ_tot = μ_cs + μ_fe + μ_pp

Define

p1 = μ_cs/μ_tot, p2 = μ_fe/μ_tot, and p3 = μ_pp/μ_tot

with p1 + p2 + p3 = 1. Then divide the unit interval:

0, p1, p1+p2, p1+p2+p3 = 1

Collision event: Photoeffect if p1 < ξ < p1 + p2
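Collision-type selection is the discrete sampling of slide [11] applied to the three cross sections. A sketch (the μ values and event names are made-up illustrative numbers, not physical data):

```python
import random

def collision_type(mu_cs, mu_fe, mu_pp):
    """Pick the interaction from where xi falls among the cumulative probabilities."""
    mu_tot = mu_cs + mu_fe + mu_pp
    xi = random.random()
    if xi < mu_cs / mu_tot:
        return "compton"          # xi < p1
    elif xi < (mu_cs + mu_fe) / mu_tot:
        return "photoeffect"      # p1 < xi < p1 + p2
    return "pair_production"      # otherwise

random.seed(12)
events = [collision_type(0.4, 0.35, 0.25) for _ in range(100_000)]
frac_pe = events.count("photoeffect") / len(events)  # should approach p2 = 0.35
```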

[24]

How Do We Model A Complicated Physical System?

a) Need to know the physics of the system
b) Need to derive equations describing the physical processes
c) Need to generate material-specific data bases (cross sections for particle interactions, kerma factors, Q-factors)
d) Need to "translate" the equations into a computer program (code)
e) Need to "describe" the geometrical configuration of the system to the computer
f) Need an adequate computer system
g) Need a physicist smart enough to do the job and dumb enough to want to do it.

Transport Equation [25]

Time-Dependent Particle Transport Equation (Boltzmann Transport Equation):

(1/v)·∂ψ(r,E,Ω,t)/∂t + Ω·∇ψ(r,E,Ω,t) + Σ_t(E)·ψ(r,E,Ω,t) =

∫_0^∞ dE' ∫_{4π} dΩ' Σ_s(E'→E, Ω'→Ω)·ψ(r,E',Ω',t)

+ [χ(E)/4π] ∫_0^∞ dE' ∫_{4π} dΩ' νΣ_f(E')·ψ(r,E',Ω',t)

+ (1/4π)·Q(r,E,t)
