Monte Carlo methods

Prof. Mike Giles
[email protected]
Oxford University Mathematical Institute

In option pricing there are two main approaches:

1. Monte Carlo methods for estimating expected values of financial payoff functions based on underlying assets. E.g., we want to estimate E[f(S(T))] where

       S(T) = S_0 exp( (r - σ²/2) T + σ W(T) )

   and W(T) is the driving Brownian motion at terminal time T.

2. Numerical approximation of the PDE which describes the evolution of the conditional expectation

       u(s, t) = E[ f(S(T)) | S(t) = s ]

   This is usually less costly than MC when there are very few underlying assets (M ≤ 3), but much more expensive when there are many.

Geometric Brownian Motion

In this first lecture, we consider M underlying assets, each modelled by Geometric Brownian Motion:

    dS_i = r S_i dt + σ_i S_i dW_i

so Itô calculus gives us

    S_i(T) = S_i(0) exp( (r - σ_i²/2) T + σ_i W_i(T) )

in which each W_i(T) is Normally distributed with zero mean and variance T.

We can use standard Random Number Generation software (e.g. the randn function in MATLAB) to generate samples of each W_i(T), but there is a problem . . .

Correlated Brownian Motions

Different assets do not behave independently – on average, they tend to move up and down together.

This is modelled by introducing correlation between the driving Brownian motions, so that

    E[ W_i(T) W_j(T) ] = Ω_ij T

where Ω_ij is the correlation coefficient.

How do we generate samples of the W_i(T)?

Correlated Normal Random Variables

Suppose x is a vector of independent N(0,1) variables, and define a new vector y = Lx.

Each element of y is Normally distributed, with E[y] = L E[x] = 0 and

    E[y yᵀ] = E[L x xᵀ Lᵀ] = L E[x xᵀ] Lᵀ = L Lᵀ,

since E[x xᵀ] = I because
- elements of x are independent  ⟹  E[x_i x_j] = 0 for i ≠ j
- elements of x have unit variance  ⟹  E[x_i²] = 1

To get E[y yᵀ] = Ω, we need to find L such that

    L Lᵀ = Ω

L is not uniquely defined – the simplest choice is to use a Cholesky factorization, in which L is lower-triangular with a positive diagonal.

In MATLAB, this is achieved using

L = chol(Omega,'lower');
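As a quick sketch (not part of the original slides) one can check empirically that y = Lx has the desired covariance; the correlation value 0.3 and the sample size are arbitrary illustrative choices.

% Illustrative sketch: correlated Normals via Cholesky factorisation
rho   = 0.3;                       % assumed correlation, for illustration only
Omega = [1 rho; rho 1];            % target correlation matrix
L     = chol(Omega,'lower');       % lower-triangular factor with L*L' = Omega

N = 1e6;
x = randn(2,N);                    % independent N(0,1) variables
y = L*x;                           % correlated Normal variables

disp(y*y'/N)                       % sample estimate of E[y y'] - should be close to Omega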


Basket Call Option

In a basket call option, the discounted payoff function is

    f = exp(-rT) ( (1/M) Σ_i S_i(T) - K )⁺

where K is the strike, r is the risk-free interest rate, and (x)⁺ ≡ max(x, 0).

Monte Carlo estimation of E[f(S(T))] is very simple – we generate N independent samples of W(T), compute S(T), and then average to get

    f̄_N ≡ N^(-1) Σ_{n=1}^{N} f^(n) ≈ E[f]

Monte Carlo Error and CLT

This MC estimate is unbiased, meaning that the average error is zero:

    E[ε_N] = 0

where ε_N is the error f̄_N - E[f].

In addition, the Central Limit Theorem proves that for large N the error is asymptotically Normally distributed:

    ε_N ∼ σ N^(-1/2) Z

with Z a N(0,1) random variable and σ² the variance of f:

    σ² = V[f] ≡ E[ (f - E[f])² ]

CLT

This means that

    P[ N^(1/2) σ^(-1) |ε_N| < s ] ≈ 1 - 2 Φ(-s),

where Φ(s) is the Normal CDF (cumulative distribution function).

Typically we use s = 3, corresponding to a 3-standard-deviation confidence interval, with 1 - 2Φ(-3) ≈ 0.997.

Hence, with probability 99.7%, we have

    N^(1/2) σ^(-1) |ε_N| < 3   ⟹   |ε_N| < 3 σ N^(-1/2)

This bounds the accuracy, but we need an estimate for σ.

Empirical Variance

Given N samples, the empirical variance is

    σ̃² = N^(-1) Σ_{n=1}^{N} ( f^(n) - f̄_N )² = N^(-1) Σ_{n=1}^{N} ( f^(n) )² - ( f̄_N )²

σ̃² is a slightly biased estimator for σ² – an unbiased estimator is

    σ̂² = N/(N-1) σ̃²
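An illustrative sketch (not from the original slides) of how these estimators might be combined into a confidence interval; the vector f below is a placeholder for any set of i.i.d. payoff samples.

% Sketch: MC estimate with a 3-standard-deviation confidence interval
N = 1e5;
f = randn(1,N) + 10;               % placeholder payoff samples, illustration only

fbar  = sum(f)/N;                  % MC estimate of E[f]
sig2  = sum(f.^2)/N - fbar^2;      % slightly biased empirical variance
sig2u = N/(N-1)*sig2;              % unbiased variance estimator
ci    = 3*sqrt(sig2u/N);           % 3-sigma half-width, ~99.7% confidence

fprintf('estimate = %f +/- %f\n', fbar, ci)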

Applications

Geometric Brownian Motion for a single asset:

    S(T) = S_0 exp( (r - σ²/2) T + σ W(T) )

For the European call option,

    f(S) = exp(-rT) ( S - K )⁺

where K is the strike price.

For the numerical tests we will consider a European call with r = 0.05, σ = 0.2, T = 1, S_0 = 110, K = 100. The analytic value is known for comparison.

[Figure: MC error versus number of paths N (up to 10⁶), with 3-standard-deviation lower and upper bounds; true value = 17.663.]

The upper and lower bounds are given by

    Mean ± 3 σ̃ / √N

so there is more than a 99.7% probability that the true value lies within these bounds.

MATLAB code:

r=0.05; sig=0.2; T=1; S0=110; K=100;

N = 1:1000000;
Y = randn(1,max(N));               % Normal random variables

S = S0*exp((r-sig^2/2)*T + sig*sqrt(T)*Y);
F = exp(-r*T)*max(0,S-K);

sum1 = cumsum(F);                  % cumulative summation of
sum2 = cumsum(F.^2);               % payoff and its square
val  = sum1./N;
sd   = sqrt(sum2./N - val.^2);


err = european_call(r,sig,T,S0,K,'value') - val;

plot(N,err, ...
     N,err-3*sd./sqrt(N), ...
     N,err+3*sd./sqrt(N))
axis([0 length(N) -1 1])
xlabel('N'); ylabel('Error')
legend('MC error','lower bound','upper bound')

[Figure: empirical CDF of MC values from 100 independent tests, each with 100 samples, compared to the CLT prediction.]


[Figure: empirical CDF of MC values from 10⁴ independent tests, each with 100 samples, compared to the CLT prediction.]

[Figure: empirical CDF of MC values from 10⁴ independent tests, each with 10⁴ samples, compared to the CLT prediction.]


Basket call option

5 underlying assets starting at S_0 = 100, with a call option on the arithmetic mean with strike K = 100.

Geometric Brownian Motion model, r = 0.05, T = 1, volatility σ = 0.2 and correlation matrix

    Ω = [ 1    0.1  0.1  0.1  0.1
          0.1  1    0.1  0.1  0.1
          0.1  0.1  1    0.1  0.1
          0.1  0.1  0.1  1    0.1
          0.1  0.1  0.1  0.1  1   ]

MATLAB code:

r=0.05; sigma=0.2; rho=0.1; T=1; K=100; S0=100;

N = 10^5;                          % number of MC samples

Omega = eye(5) + rho*(ones(5)-eye(5));
L = chol(Omega,'lower');           % Cholesky factorisation

W = sqrt(T)*L*randn(5,N);
S = S0.*exp((r-0.5*sigma^2)*T + sigma*W);

S = 0.2*sum(S,1);                  % average asset value
F = exp(-r*T)*max(S-K,0);          % call option

val = sum(F)/N                     % mean and its std. dev.
sd  = sqrt( (sum(F.^2)/N - val.^2)/(N-1) )

Variance Reduction

Monte Carlo is a very simple method; it gets complicated when we try to reduce the variance, and hence the number of samples required.

There are several approaches:
- antithetic variables
- control variates
- importance sampling
- stratified sampling
- Latin hypercube sampling
- quasi-Monte Carlo

We will discuss control variates.

Review of elementary results

If a, b are random variables, and λ, µ are constants, then

    E[a + µ] = E[a] + µ
    V[a + µ] = V[a]
    E[λ a]   = λ E[a]
    V[λ a]   = λ² V[a]
    E[a + b] = E[a] + E[b]
    V[a + b] = V[a] + 2 Cov[a,b] + V[b]

where

    V[a]     ≡ E[ (a - E[a])² ] = E[a²] - (E[a])²
    Cov[a,b] ≡ E[ (a - E[a]) (b - E[b]) ]


If a, b are independent random variables then

    E[ f(a) g(b) ] = E[f(a)] E[g(b)]

Hence Cov[a,b] = 0 and therefore V[a + b] = V[a] + V[b].

Extending this to a set of N i.i.d. (independent, identically distributed) random variables x_n, we have

    V[ Σ_{n=1}^{N} x_n ] = Σ_{n=1}^{N} V[x_n] = N V[x]

and so

    V[ N^(-1) Σ_{n=1}^{N} x_n ] = N^(-1) V[x]

Control Variates

Suppose we want to approximate E[f] using a simple Monte Carlo average f̄.

If there is another payoff g for which we know E[g], we can use ḡ - E[g] to reduce the error in f̄ - E[f].

How? By defining a new estimator

    f̂ = f̄ - λ ( ḡ - E[g] )

Again unbiased, since E[f̂] = E[f̄] = E[f].

For a single sample,

    V[ f - λ (g - E[g]) ] = V[f] - 2 λ Cov[f,g] + λ² V[g]

For an average of N samples,

    V[ f̄ - λ (ḡ - E[g]) ] = N^(-1) ( V[f] - 2 λ Cov[f,g] + λ² V[g] )

To minimise this, the optimum value for λ is

    λ = Cov[f,g] / V[g]

The resulting variance is

    N^(-1) V[f] ( 1 - (Cov[f,g])² / (V[f] V[g]) ) = N^(-1) V[f] ( 1 - ρ² )

where ρ is the correlation between f and g.

The challenge is to choose a good g which is well correlated with f – the covariance, and hence the optimal λ, can be estimated from the data.



For a basket option with M underlying assets, we know that for each asset

    E[ exp(-rT) S_i(T) ] = S_i(0)

so we could use

    g = exp(-rT) (1/M) Σ_{i=1}^{M} S_i(T)

with

    E[g] = (1/M) Σ_{i=1}^{M} S_i(0)

The numerical test will do the simpler scalar case with M = 1.

Application

MATLAB code, part 1 – estimating the optimal λ:

r=0.05; sig=0.2; T=1; S0=110; K=100;

N = 1000;
Y = randn(1,N);                    % Normal random variable
S = S0*exp((r-sig^2/2)*T + sig*sqrt(T)*Y);
F = exp(-r*T)*max(0,S-K);
C = exp(-r*T)*S;

Fave = sum(F)/N;
Cave = sum(C)/N;
lam  = sum((F-Fave).*(C-Cave)) / sum((C-Cave).^2);

MATLAB code, part 2 – control variate estimation:

N = 1e5;
Y = randn(1,N);                    % Normal random variable
S = S0*exp((r-sig^2/2)*T + sig*sqrt(T)*Y);

F  = exp(-r*T)*max(0,S-K);
C  = exp(-r*T)*S;
F2 = F - lam*(C-S0);

Fave  = sum(F)/N;
F2ave = sum(F2)/N;
sd  = sqrt( sum((F -Fave ).^2)/(N*(N-1)) );
sd2 = sqrt( sum((F2-F2ave).^2)/(N*(N-1)) );

Results:

>> cv

estimated price (without CV) = 17.624089 +/- 0.178710
estimated price (with CV)    = 17.651112 +/- 0.045128
exact price                  = 17.662954


Final words

Monte Carlo estimation is very simple, and efficient when there are multiple underlying assets

Need to generate correlated Normal random variables

Central Limit Theorem (CLT) is very important in giving a confidence interval for the computed value

The use of a control variate with known expected value can greatly reduce the number of MC samples required

Monte Carlo Methods – Lecture 2
Prof. Mike Giles, Oxford University Mathematical Institute

Greeks

In Monte Carlo applications we don't just want to know the expected discounted value of some payoff

    V = E[ f(S(T)) ]

For hedging and risk analysis, we also want to know a whole range of "Greeks" corresponding to first and second derivatives of V with respect to various parameters:

    Δ = ∂V/∂S_0,    Γ = ∂²V/∂S_0²,    ρ = ∂V/∂r,    Vega = ∂V/∂σ.


Finite difference sensitivities

If V(θ) = E[f(S(T))] for an input parameter θ is sufficiently differentiable, then the sensitivity ∂V/∂θ can be approximated by a one-sided finite difference

    ∂V/∂θ = ( V(θ+Δθ) - V(θ) ) / Δθ + O(Δθ)

or by a central finite difference

    ∂V/∂θ = ( V(θ+Δθ) - V(θ-Δθ) ) / (2Δθ) + O((Δθ)²)

The clear advantage of this approach is that it is very simple to implement (hence the most popular in practice?)

However, the disadvantages are:
- expensive (2 extra sets of calculations for central differences)
- significant bias error if Δθ is too large
- machine roundoff errors if Δθ is too small
- large variance if f(S(T)) is discontinuous and Δθ is small

(This approach is referred to as getting Greeks by “bumping” the input parameters.)


Let X^(i)(θ+Δθ) and X^(i)(θ-Δθ) be the values of f(S(T)) obtained for different MC samples, so the central difference estimate for ∂V/∂θ is given by

    Ŷ = (1/(2Δθ)) ( N^(-1) Σ_{i=1}^{N} X^(i)(θ+Δθ) - N^(-1) Σ_{i=1}^{N} X^(i)(θ-Δθ) )
      = (1/(2NΔθ)) Σ_{i=1}^{N} ( X^(i)(θ+Δθ) - X^(i)(θ-Δθ) )

If independent samples are taken for both X^(i)(θ+Δθ) and X^(i)(θ-Δθ), then

    V[Ŷ] ≈ (1/(2NΔθ))² N ( V[X(θ+Δθ)] + V[X(θ-Δθ)] )
         ≈ (1/(2NΔθ))² N · 2 V[f]
         = V[f] / ( 2 N (Δθ)² )

which is very large for Δθ ≪ 1.



It is much better for X^(i)(θ+Δθ) and X^(i)(θ-Δθ) to use the same set of random inputs.

If X^(i)(θ) is differentiable with respect to θ, then

    X^(i)(θ+Δθ) - X^(i)(θ-Δθ) ≈ 2Δθ ∂X^(i)/∂θ

and hence

    V[Ŷ] ≈ N^(-1) V[ ∂X/∂θ ],

which behaves well for Δθ ≪ 1, so one should choose a small value for Δθ to minimise the bias due to the finite differencing.

However, there are problems if Δθ is chosen to be extremely small. In finite precision arithmetic, computing X(θ) has an error which is approximately random with r.m.s. magnitude δ:

    single precision:  δ ≈ 10^(-6) |X|
    double precision:  δ ≈ 10^(-14) |X|

Hence, one should choose the bump Δθ so that

    | X(θ+Δθ) - X(θ-Δθ) | ≫ δ

Most banks probably use double precision to be safe.
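A minimal sketch (not from the original slides) of "bumping" with common random numbers for the single-asset call used earlier; the bump size dS0 = 1 is an arbitrary illustrative choice.

% Sketch: central finite difference estimate of Delta = dV/dS0, reusing the
% SAME Normal samples for the up and down bumps (common random numbers)
r=0.05; sig=0.2; T=1; S0=110; K=100;
dS0 = 1;                           % illustrative bump size
N = 1e5;
Y = randn(1,N);                    % one set of random inputs, reused for both bumps

Sp = (S0+dS0)*exp((r-sig^2/2)*T + sig*sqrt(T)*Y);
Sm = (S0-dS0)*exp((r-sig^2/2)*T + sig*sqrt(T)*Y);
Fp = exp(-r*T)*max(0,Sp-K);
Fm = exp(-r*T)*max(0,Sm-K);

delta = sum(Fp-Fm)/(2*dS0*N)       % central difference estimate of Delta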


Next, we analyse the variance of the finite difference estimator when the payoff is discontinuous.

The problem is that a small bump in the asset S can produce a big bump in the payoff – the payoff is not differentiable.

[Figure: sketch of a discontinuous (digital) payoff as a function of S, with a jump at S = K.]

What is the probability that S(θ ± Δθ) will be on different sides of the discontinuity?

The separation of S(θ+Δθ) and S(θ-Δθ) is O(Δθ), so

    P( |S(θ) - K| < O(Δθ) ) = O(Δθ)

For such paths the payoff difference is O(1), so dividing by 2Δθ gives an O(Δθ^(-1)) contribution. This leads to a variance which is O(N^(-1) Δθ^(-1)).


So, a small Δθ gives a large variance, while a large Δθ gives a large finite difference discretisation error. To determine the optimum choice we use this result:

    E[ ( Ŷ - E[Y] )² ] = V[Ŷ] + ( E[Ŷ] - E[Y] )²

    Mean Square Error = variance + (bias)²

In our case, we have

    MSE ∼ a / (N Δθ) + b Δθ⁴

which can be minimised by choosing Δθ appropriately.

Pathwise sensitivities

We start with

    V ≡ E[ f(S(T)) ] = ∫ f(S(T)) p_W(W) dW

where p_W(W) is the probability density function for W(T), and differentiate this to get

    ∂V/∂θ = ∫ (∂f/∂S) (∂S(T)/∂θ) p_W dW = E[ (∂f/∂S) (∂S(T)/∂θ) ]

with ∂S(T)/∂θ being evaluated at fixed W.


This leads to the estimator

    (1/N) Σ_{i=1}^{N} (∂f/∂S)(S^(i)) ∂S^(i)/∂θ

which is the derivative of the usual price estimator

    (1/N) Σ_{i=1}^{N} f(S^(i))

This gives incorrect estimates when f(S) is discontinuous. E.g. for a digital put, ∂f/∂S = 0 almost everywhere, so the estimated value of the Greek is zero – clearly wrong. (A pathwise sketch for the smooth call payoff is given below.)

To handle payoffs which do not have the necessary continuity/smoothness one can modify the payoff.

For digital options it is common to use a piecewise linear approximation to limit the magnitude of Δ near maturity – this avoids large transaction costs. A bank selling the option will price it conservatively (i.e. over-estimate the price).

[Figure: sketch of a digital payoff replaced by a piecewise linear approximation near the strike K.]
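For the smooth European call under GBM, ∂S(T)/∂S_0 = S(T)/S_0 and ∂f/∂S = exp(-rT) for S > K (zero otherwise), so a pathwise estimate of Δ might look like the following sketch (not from the original slides; parameters reuse the earlier single-asset example).

% Sketch: pathwise sensitivity estimate of Delta for a European call under GBM
r=0.05; sig=0.2; T=1; S0=110; K=100;
N = 1e5;
Y = randn(1,N);
S = S0*exp((r-sig^2/2)*T + sig*sqrt(T)*Y);

dSdS0 = S/S0;                      % pathwise derivative dS(T)/dS0
dfdS  = exp(-r*T)*(S>K);           % derivative of discounted payoff w.r.t. S(T)

delta = sum(dfdS.*dSdS0)/N         % pathwise estimate of Delta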

SDE Path Simulation

So far, we have considered European options with assets satisfying Geometric Brownian Motion SDEs.

Now we consider the more general case in which the solution to the SDE needs to be approximated because
- the option is path-dependent, and/or
- the SDE is not integrable

Euler-Maruyama method

The simplest approximation for the scalar SDE

    dS = a(S,t) dt + b(S,t) dW

is the forward Euler scheme, which is known as the Euler-Maruyama approximation when applied to SDEs:

    Ŝ_{n+1} = Ŝ_n + a(Ŝ_n, t_n) h + b(Ŝ_n, t_n) ΔW_n

Here h is the timestep, Ŝ_n is the approximation to S(nh), and the ΔW_n are i.i.d. N(0, h) Brownian increments.

For ODEs, the O(h) accuracy of the forward Euler method is considered poor, but for SDEs it is very hard to do better.
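A minimal sketch (not from the original slides) of Euler-Maruyama path simulation, applied here to GBM so that a(S,t) = rS and b(S,t) = σS; the number of timesteps and the call strike are illustrative choices.

% Sketch: Euler-Maruyama simulation of N paths of dS = r*S dt + sig*S dW
r=0.05; sig=0.5; T=1; S0=100; K=110;
M = 64;  h = T/M;                  % number of timesteps and timestep size
N = 1e5;                           % number of paths

S = S0*ones(1,N);
for n = 1:M
  dW = sqrt(h)*randn(1,N);         % i.i.d. N(0,h) Brownian increments
  S  = S + r*S*h + sig*S.*dW;      % Euler-Maruyama update
end

val = sum(exp(-r*T)*max(0,S-K))/N  % MC estimate of the European call value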

Weak convergence

In finance applications, we are mostly concerned with weak errors, i.e. the error in the expected payoff. For a European payoff f(S(T)) this is

    E[ f(S(T)) ] - E[ f(Ŝ_{T/h}) ]

and it is of order α if

    E[ f(S(T)) ] - E[ f(Ŝ_{T/h}) ] = O(h^α)

For a path-dependent option, the weak error is the difference between E[f(S)], where f(S) is a function of the entire path S(t), and the expectation of a corresponding approximate payoff computed from the discrete path Ŝ.

Key theoretical result (Bally and Talay, 1995): if p(S) is the p.d.f. for S(T) and p̂(S) is the p.d.f. for Ŝ_{T/h} computed using the Euler-Maruyama approximation, then if a(S,t) and b(S,t) are Lipschitz w.r.t. S, t,

    || p - p̂ ||₁ = O(h)

and hence for bounded payoffs

    E[ f(S(T)) ] - E[ f(Ŝ_{T/h}) ] = O(h)

This holds even for digital options with discontinuous payoffs f(S). Earlier theory covered only European options such as put and call options with Lipschitz payoffs.

Numerical demonstration: Geometric Brownian Motion

    dS = r S dt + σ S dW

r = 0.05, σ = 0.5, T = 1

European call: S_0 = 100, K = 110.

Plot shows the weak error versus the analytic expectation using 10⁸ paths, and also the Monte Carlo error (3 standard deviations).

[Figure: comparison to exact solution – weak error and MC error versus timestep h, on log-log axes.]


The previous plot showed the difference between the exact expectation and the numerical approximation. What if the exact solution is unknown? Compare approximations with timesteps h and 2h.

If

    E[ f(S(T)) ] - E[ f(Ŝ^h_{T/h}) ] ≈ a h

then

    E[ f(S(T)) ] - E[ f(Ŝ^{2h}_{T/2h}) ] ≈ 2 a h

and so

    E[ f(Ŝ^h_{T/h}) ] - E[ f(Ŝ^{2h}_{T/2h}) ] ≈ a h

To minimise the number of paths that need to be simulated, it is best to use the same driving Brownian path when doing the 2h and h approximations – i.e. take the Brownian increments for the h simulation and sum them in pairs to get the Brownian increments for the 2h simulation.

This is like using the same driving Brownian paths for finite difference Greeks. The variance is lower because the h and 2h paths are close to each other (strong convergence).
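A sketch (not from the original slides) of the h versus 2h comparison for GBM with a call payoff, where the coarse Brownian increments are obtained by summing the fine increments in pairs; parameters are illustrative.

% Sketch: estimate of E[f(S^h)] - E[f(S^2h)] using the SAME Brownian path for both
r=0.05; sig=0.5; T=1; S0=100; K=110;
M = 64;  h = T/M;                  % fine timestep; coarse timestep is 2h
N = 1e5;

Sf = S0*ones(1,N);                 % fine path (timestep h)
Sc = S0*ones(1,N);                 % coarse path (timestep 2h), same Brownian path
for n = 1:M/2
  dW1 = sqrt(h)*randn(1,N);
  dW2 = sqrt(h)*randn(1,N);
  Sf  = Sf + r*Sf*h + sig*Sf.*dW1;            % two fine steps
  Sf  = Sf + r*Sf*h + sig*Sf.*dW2;
  Sc  = Sc + r*Sc*(2*h) + sig*Sc.*(dW1+dW2);  % one coarse step, summed increments
end

werr = sum(exp(-r*T)*(max(0,Sf-K)-max(0,Sc-K)))/N   % estimate of a*h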

b b MC Lecture 2 – p. 21 MC Lecture 2 – p. 22

[Figure: European call, comparison to 2h approximation – weak error and MC error versus timestep h.]

Barrier option

Some path-dependent options give only O(√h) weak convergence if the numerical payoff is not constructed carefully.

A down-and-out call option has discounted payoff

    exp(-rT) ( S_T - K )⁺ 1_{ min_t S(t) > B }

i.e. it is like a standard call option except that it pays nothing if the minimum value drops below the barrier B.

The natural numerical discretisation of this is

    exp(-rT) ( Ŝ_M - K )⁺ 1_{ min_n Ŝ_n > B }

where Ŝ_M is the Euler-Maruyama value at the final timestep.

Numerical demonstration: Geometric Brownian Motion

    dS_t = r S_t dt + σ S_t dW_t

r = 0.05, σ = 0.5, T = 1

Down-and-out call: S_0 = 100, K = 110, B = 90.

Plots show the weak error versus the analytic expectation using 10⁶ paths, and the difference from the 2h approximation using 10⁵ paths.

(We don't need as many paths as before because the weak errors are much larger in this case.)

[Figure: barrier option, comparison to exact solution – weak error and MC error versus timestep h.]


[Figure: barrier option, comparison to 2h approximation – weak error and MC error versus timestep h.]

Lookback option

A floating-strike lookback call option has discounted payoff

    exp(-rT) ( S_T - min_{[0,T]} S_t )

The natural numerical discretisation of this is

    exp(-rT) ( Ŝ_M - min_n Ŝ_n )


[Figure: lookback option, comparison to exact solution – weak error and MC error versus timestep h.]

[Figure: lookback option, comparison to 2h approximation – weak error and MC error versus timestep h.]

Brownian Bridge

To recover O(h) weak convergence we need some theory.

Consider simple Brownian motion with constant drift and volatility over a single timestep,

    dS = a dt + b dW

Two key results: conditional on the end values S_n and S_{n+1}, the probability of the minimum dropping below B within the timestep is

    P( min_{t_n < t < t_{n+1}} S(t) < B | S_n, S_{n+1} ) = exp( -2 (S_n - B)⁺ (S_{n+1} - B)⁺ / (b² h) )

and the minimum within the timestep can itself be sampled directly (the formula is used in Alternative 2 below).

Barrier option

Returning now to the barrier option, how do we define the numerical payoff?

First, calculate the Ŝ_n as usual using the Euler-Maruyama method.

Second, there are two alternatives:
- use the (approximate) probability of crossing the barrier directly
- sample (approximately) the minimum in each timestep

Alternative 1: treating the drift and volatility as being approximately constant within each timestep, the probability of having crossed the barrier within timestep n is

    P_n = exp( -2 (Ŝ_{n+1} - B)⁺ (Ŝ_n - B)⁺ / ( b²(Ŝ_n, t_n) h ) )

The probability at the end of not having crossed the barrier is Π_n (1 - P_n), and so the payoff is

    exp(-rT) ( Ŝ_M - K )⁺ Π_n (1 - P_n)

I prefer this approach because it is differentiable – good for Greeks. (A sketch implementation for GBM follows.)
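A sketch (not from the original slides) of Alternative 1 for the down-and-out call under GBM, so that b(S,t) = σS; parameters follow the earlier numerical demonstration.

% Sketch: down-and-out call priced with the Brownian-bridge crossing probability
% (Alternative 1), assuming GBM so that b(S,t) = sig*S
r=0.05; sig=0.5; T=1; S0=100; K=110; B=90;
M = 64;  h = T/M;
N = 1e5;

S    = S0*ones(1,N);
surv = ones(1,N);                  % running product of (1 - P_n)
for n = 1:M
  Snew = S + r*S*h + sig*S.*(sqrt(h)*randn(1,N));
  P    = exp(-2*max(Snew-B,0).*max(S-B,0)./(sig^2*S.^2*h));  % crossing probability
  surv = surv.*(1-P);
  S    = Snew;
end

val = sum(exp(-r*T)*max(0,S-K).*surv)/N   % barrier option estimate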


Alternative 2: again treating the drift and volatility as being approximately constant within each timestep, define the minimum within timestep n as

    M_n = ½ ( Ŝ_{n+1} + Ŝ_n - sqrt( (Ŝ_{n+1} - Ŝ_n)² - 2 b²(Ŝ_n, t_n) h log U_n ) )

where the U_n are i.i.d. uniform [0,1] random variables.

The payoff is then

    exp(-rT) ( Ŝ_M - K )⁺ 1_{ min_n M_n > B }

With this approach one can stop the path calculation as soon as one M_n drops below B.

[Figure: barrier option, comparison to exact solution – weak error and MC error versus timestep h.]

[Figure: barrier option, h versus 2h comparison – weak error and MC error versus timestep h.]

Lookback option

This is treated in a similar way to Alternative 2 for the barrier option.

We construct a minimum M_n within each timestep and then the payoff is

    exp(-rT) ( Ŝ_M - min_n M_n )

This is differentiable, so good for Greeks – unlike Alternative 2 for the barrier option. (A sketch implementation for GBM follows.)
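A sketch (not from the original slides) of the lookback payoff using the sampled within-timestep minima, again assuming GBM so that b(S,t) = σS; parameters are illustrative.

% Sketch: floating-strike lookback call using the Brownian-bridge sampled minimum
% within each timestep (assumes GBM, so b(S,t) = sig*S)
r=0.05; sig=0.5; T=1; S0=100;
M = 64;  h = T/M;
N = 1e5;

S    = S0*ones(1,N);
Smin = S0*ones(1,N);               % running minimum over the sampled timestep minima
for n = 1:M
  Snew = S + r*S*h + sig*S.*(sqrt(h)*randn(1,N));
  U    = rand(1,N);                % i.i.d. uniform [0,1] variables
  Mn   = 0.5*(S + Snew - sqrt((Snew-S).^2 - 2*(sig^2*S.^2*h).*log(U)));
  Smin = min(Smin,Mn);
  S    = Snew;
end

val = sum(exp(-r*T)*(S-Smin))/N    % lookback option estimate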


[Figure: lookback option, comparison to the true solution – weak error and MC error versus timestep h.]

[Figure: lookback option, h versus 2h comparison – weak error and MC error versus timestep h.]

Final Words

The Greeks are very important.

"Bumping" is the standard approach, but pathwise sensitivity analysis is more efficient and more accurate, provided the payoff does not have a discontinuity.

The Euler-Maruyama method is used for path-dependent payoffs and general SDEs.

It gives O(h) weak convergence – the error (bias) in the expected value.

Care must be taken with some path-dependent options to achieve this order of convergence.
