Monte Carlo Simulation
Lecture 4: Monte Carlo Simulation
Lecture notes by Jan Palczewski, with additions by Andrzej Palczewski.

We know from Discrete Time Finance that one can compute a fair price for an option by taking an expectation
$$E_Q\big[e^{-rT} X\big].$$
Therefore, it is important to have algorithms that compute the expectation of a random variable with a given distribution. In general we cannot do this exactly; only an approximation is at our disposal. In previous lectures we studied methods to generate independent realizations of a random variable. We can use this knowledge to generate a sample from the distribution of $X$. The main theoretical ingredients that allow us to compute approximations of an expectation from a sample of a given distribution are provided by the Law of Large Numbers and the Central Limit Theorem.

Let $X_1, X_2, \ldots$ be a sequence of independent, identically distributed random variables with expected value $\mu$ and variance $\sigma^2$. Define the sequence of averages
$$Y_n = \frac{X_1 + X_2 + \cdots + X_n}{n}, \qquad n = 1, 2, \ldots$$
(Law of Large Numbers) $Y_n$ converges to $\mu$ almost surely as $n \to \infty$.
Let
$$Z_n = \frac{(X_1 - \mu) + (X_2 - \mu) + \cdots + (X_n - \mu)}{\sqrt{n}}.$$
(Central Limit Theorem) The distribution of $Z_n$ converges to $N(0, \sigma^2)$.

Assume we have a random variable $X$ with unknown expectation and variance, $a = EX$, $b^2 = \operatorname{Var} X$. We are interested in computing an approximation to $a$ and possibly $b$. Suppose we are able to take independent samples of $X$ using a pseudo-random number generator. We know from the strong law of large numbers that the average of a large number of samples can give a good approximation to the expected value.

Therefore, if we let $X_1, X_2, \ldots, X_M$ denote a sample from the distribution of $X$, one might expect that
$$a_M = \frac{1}{M} \sum_{i=1}^{M} X_i$$
is a good approximation to $a$. To estimate the variance we use the following formula (notice that we divide by $M - 1$ and not by $M$):
$$b_M^2 = \frac{\sum_{i=1}^{M} (X_i - a_M)^2}{M - 1}.$$

Assessment of the precision of the estimation of $a$ and $b^2$
By the central limit theorem we know that $\sum_{i=1}^{M} X_i$ behaves approximately like an $N(Ma, Mb^2)$-distributed random variable. Therefore $a_M - a$ is approximately $N\big(0, \tfrac{b^2}{M}\big)$. In other words,
$$a_M - a \sim \frac{b}{\sqrt{M}}\, Z,$$
where $Z \sim N(0,1)$. Therefore $a_M$ converges to $a$ with speed $O\big(\tfrac{b}{\sqrt{M}}\big)$.

More quantitatively,
$$P\Big(a - \frac{1.96\, b}{\sqrt{M}} \le a_M \le a + \frac{1.96\, b}{\sqrt{M}}\Big) = 0.95.$$
This is equivalent to
$$P\Big(a_M - \frac{1.96\, b}{\sqrt{M}} \le a \le a_M + \frac{1.96\, b}{\sqrt{M}}\Big) = 0.95.$$
Replacing the unknown $b$ by the approximation $b_M$, we see that the unknown expected value $a$ lies in the interval
$$\Big[\, a_M - \frac{1.96\, b_M}{\sqrt{M}},\; a_M + \frac{1.96\, b_M}{\sqrt{M}} \,\Big]$$
approximately with probability 0.95. The interval above is called a 95 percent confidence interval.

Confidence intervals explained
If $Z \sim N(\mu, \sigma^2)$, then
$$P(\mu - 1.96\,\sigma < Z < \mu + 1.96\,\sigma) = 0.95,$$
because $\Phi(1.96) = 0.975$. To construct a confidence interval with a confidence level $\alpha$ different from 0.95, we have to find a number $A$ such that
$$\Phi(A) = 1 - \frac{1 - \alpha}{2}.$$
Then the $\alpha$-confidence interval satisfies
$$P(\mu - A\sigma < Z < \mu + A\sigma) = \alpha.$$

The width of the confidence interval is a measure of the accuracy of our estimate. In 95% of cases the true value lies in the confidence interval. Beware: in the remaining 5% of cases it lies outside the interval! The width of the confidence interval depends on two factors:
the number of simulations $M$,
the variance of the variable $X$.

$$\Big[\, a_M - \frac{1.96\, b_M}{\sqrt{M}},\; a_M + \frac{1.96\, b_M}{\sqrt{M}} \,\Big]$$
1. The size of the confidence interval shrinks like the inverse square root of the number of samples. This is one of the main disadvantages of the Monte Carlo method.
2. The size of the confidence interval is directly proportional to the standard deviation of $X$. This indicates that the Monte Carlo method works better the smaller the variance of $X$ is. This leads to the idea of variance reduction, which we shall discuss later.
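A minimal Python sketch of this estimation procedure (sample, average, estimate the variance, report a 95 percent confidence interval). NumPy is assumed, and the helper name mc_estimate as well as the example random variable $X = e^Z$ with $Z \sim N(0,1)$ are illustrative choices, not part of the lecture notes:

```python
import numpy as np

def mc_estimate(sample_X, M, z=1.96, seed=0):
    """Monte Carlo estimate a_M of E[X] together with a confidence interval."""
    rng = np.random.default_rng(seed)
    x = sample_X(rng, M)                  # M independent samples of X
    a_M = x.mean()                        # sample mean a_M
    b_M = x.std(ddof=1)                   # sample std b_M (note the division by M - 1)
    half_width = z * b_M / np.sqrt(M)     # 1.96 b_M / sqrt(M) for a 95% interval
    return a_M, (a_M - half_width, a_M + half_width)

# Illustrative example: X = exp(Z), Z ~ N(0,1); the true mean is exp(1/2) ~ 1.6487.
a_M, ci = mc_estimate(lambda rng, M: np.exp(rng.standard_normal(M)), M=100_000)
print(a_M, ci)
```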
Monte Carlo in a nutshell
To compute $a = EX$ we generate $M$ independent samples of $X$ and compute $a_M$. In order to monitor the error we also approximate the variance by $b_M^2$ and report the confidence interval
$$\Big[\, a_M - \frac{1.96\, b_M}{\sqrt{M}},\; a_M + \frac{1.96\, b_M}{\sqrt{M}} \,\Big].$$
Important! People often use confidence intervals with other confidence levels, e.g. 99% or even 99.9%. Then instead of 1.96 we use a different number.

Monte Carlo put into action
We can now apply Monte Carlo simulation to the computation of option prices. We consider a European-style option $\psi(S_T)$ with a payoff function $\psi$ depending on the terminal stock price. We assume that under a risk-neutral measure the stock price $S_t$ at $t \ge 0$ is given by
$$S_t = S_0 \exp\Big(\big(r - \tfrac{1}{2}\sigma^2\big) t + \sigma W_t\Big).$$
Here $W_t$ is a Brownian motion.

We know that $W_t \sim \sqrt{t}\, Z$ with $Z \sim N(0,1)$. We can therefore write the stock price at expiry $T$ as
$$S_T = S_0 \exp\Big(\big(r - \tfrac{1}{2}\sigma^2\big) T + \sigma\sqrt{T}\, Z\Big).$$
We can compute a fair price for $\psi(S_T)$ by taking the discounted expectation
$$E\Big[\, e^{-rT}\, \psi\Big(S_0\, e^{(r - \frac{1}{2}\sigma^2)T + \sigma\sqrt{T}\, Z}\Big) \Big].$$
This expectation can be computed via the Monte Carlo method with the following algorithm:

    for i = 1 to M
        compute an N(0,1) sample ξ_i
        set S_i = S_0 exp( (r − σ²/2) T + σ √T ξ_i )
        set V_i = e^{−rT} ψ(S_i)
    end
    set a_M = (1/M) Σ_{i=1}^{M} V_i

The output $a_M$ provides an approximation of the option price. To assess the quality of our computation, we compute an approximation to the variance,
$$b_M^2 = \frac{1}{M - 1} \sum_{i=1}^{M} (V_i - a_M)^2,$$
to construct a confidence interval.

[Figure: confidence intervals versus sample size.]
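Below is a minimal Python sketch of this pricing algorithm, again assuming NumPy; the function name mc_european_price, the vanilla-call payoff, and the numerical parameters are illustrative choices rather than part of the lecture:

```python
import numpy as np

def mc_european_price(payoff, S0, r, sigma, T, M, seed=0):
    """Plain Monte Carlo price of a European option psi(S_T) under the
    risk-neutral GBM model, together with a 95% confidence interval."""
    rng = np.random.default_rng(seed)
    xi = rng.standard_normal(M)                                   # xi_i ~ N(0,1)
    S_T = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * xi)
    V = np.exp(-r * T) * payoff(S_T)                              # discounted payoffs V_i
    a_M = V.mean()
    b_M = V.std(ddof=1)
    half_width = 1.96 * b_M / np.sqrt(M)
    return a_M, (a_M - half_width, a_M + half_width)

# Illustrative example: a call with strike K = 100 and made-up market parameters.
price, ci = mc_european_price(lambda s: np.maximum(s - 100.0, 0.0),
                              S0=100.0, r=0.05, sigma=0.2, T=1.0, M=100_000)
print(price, ci)
```

The explicit for-loop of the pseudocode is replaced here by a vectorised computation over all $M$ samples; the arithmetic is identical.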
Variance Reduction Techniques
Antithetic variates
Control variates
Importance sampling (not discussed here)
Stratified sampling (not discussed here)

We have seen before that the size of the confidence interval is determined by the value of
$$\frac{\sqrt{\operatorname{Var}(X)}}{\sqrt{M}},$$
where $M$ is the sample size. We would like to find a method that decreases the width of the confidence interval by means other than increasing the sample size $M$. A simple idea is to replace $X$ by a random variable $Y$ which has the same expectation but a lower variance. In this case we can compute the expectation of $X$ via the Monte Carlo method using $Y$ instead of $X$. Since the variance of $Y$ is lower than the variance of $X$, the results are better.

The question is how to find such a random variable $Y$ with a smaller variance than $X$ and the same expectation. There are two main methods to find $Y$:
the method of antithetic variates,
the method of control variates.

Antithetic Variates
The idea is as follows. Next to $X$ consider a random variable $Z$ which has the same expectation and variance as $X$ but is negatively correlated with $X$, i.e. $\operatorname{cov}(X, Z) \le 0$. Now take as $Y$ the random variable
$$Y = \frac{X + Z}{2}.$$
Obviously $E(Y) = E(X)$. On the other hand,
$$\operatorname{Var}(Y) = \operatorname{cov}\Big(\frac{X+Z}{2}, \frac{X+Z}{2}\Big) = \frac{1}{4}\Big(\operatorname{Var}(X) + 2\underbrace{\operatorname{cov}(X, Z)}_{\le 0} + \underbrace{\operatorname{Var}(Z)}_{=\operatorname{Var}(X)}\Big) \le \frac{1}{2}\operatorname{Var}(X).$$
With this we can reduce the variance by at least a factor of 2.

On the way to finding Z...
In the extreme case, if we knew that $E(X) = 0$, we could just take $Z = -X$. Then $Y = 0$ would give the right result, even deterministically. The covariance condition above would then hold trivially, since $\operatorname{cov}(X, -X) = -\operatorname{Var}(X)$. But in general we do not know the expectation; the expectation is what we want to compute, so this naive idea is not applicable. It does, however, put us on the right track!

The following lemma is a step further towards identifying a suitable candidate for $Z$, and hence for $Y$.
Lemma. Let $X$ be an arbitrary random variable and $f$ a monotonically increasing or monotonically decreasing function. Then $\operatorname{cov}\big(f(X), f(-X)\big) \le 0$.
Let us now consider the case of a random variable of the form $f(U)$, where $U$ is a standard normally distributed random variable, i.e. $U \sim N(0, 1)$. The standard normal distribution is symmetric, and hence also $-U \sim N(0, 1)$. It then follows that $f(U) \sim f(-U)$; in particular, they have the same expectation!

Therefore, in order to compute the expectation of $X = f(U)$, we can take $Z = f(-U)$ and define
$$Y = \frac{f(U) + f(-U)}{2}.$$
If we now assume that the map $f$ is monotonically increasing, then we conclude from the lemma that
$$\operatorname{cov}\big(f(U), f(-U)\big) \le 0,$$
and we finally obtain
$$E\Big[\frac{f(U) + f(-U)}{2}\Big] = E\big[f(U)\big], \qquad \operatorname{Var}\Big(\frac{f(U) + f(-U)}{2}\Big) \le \frac{1}{2}\operatorname{Var}\big(f(U)\big).$$
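Applied to the option-pricing example, the antithetic construction can be sketched as follows. This is a minimal sketch under the same illustrative payoff and parameters as before, with the hypothetical helper name mc_price_antithetic, and again assumes NumPy:

```python
import numpy as np

def mc_price_antithetic(payoff, S0, r, sigma, T, M, seed=0):
    """Antithetic-variates Monte Carlo price of a European option psi(S_T).
    Each draw xi is paired with -xi, and Y_i = (f(xi) + f(-xi)) / 2."""
    rng = np.random.default_rng(seed)
    xi = rng.standard_normal(M)
    disc = np.exp(-r * T)
    drift = (r - 0.5 * sigma**2) * T
    vol = sigma * np.sqrt(T)
    f_plus = disc * payoff(S0 * np.exp(drift + vol * xi))   # f(U)
    f_minus = disc * payoff(S0 * np.exp(drift - vol * xi))  # f(-U)
    Y = 0.5 * (f_plus + f_minus)                            # antithetic samples Y_i
    a_M = Y.mean()
    b_M = Y.std(ddof=1)
    half_width = 1.96 * b_M / np.sqrt(M)
    return a_M, (a_M - half_width, a_M + half_width)

# The call payoff is monotone in xi, so the lemma guarantees cov(f(U), f(-U)) <= 0.
price, ci = mc_price_antithetic(lambda s: np.maximum(s - 100.0, 0.0),
                                S0=100.0, r=0.05, sigma=0.2, T=1.0, M=100_000)
print(price, ci)
```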