Lecture 5 Finite Difference Methods
Chapter 5 Finite Difference Methods
Math6911 S08, HM Zhu

References
1. Chapters 5 and 9, Brandimarte
2. Section 17.8, Hull
3. Chapter 7, "Numerical Analysis", Burden and Faires

Outline
• Finite difference (FD) approximation to the derivatives
• Explicit FD method
• Numerical issues
• Implicit FD method
• Crank-Nicolson method
• Dealing with American options
• Further comments

5.1 Finite difference approximations

Finite-difference mesh
• Aim: approximate the values of the continuous function f(t, S) on a set of discrete points in the (t, S) plane
• Divide the S-axis into equally spaced nodes a distance ∆S apart, and the t-axis into equally spaced nodes a distance ∆t apart
• The (t, S) plane becomes a mesh with mesh points (i∆t, j∆S)
• We are interested in the values of f(t, S) at the mesh points (i∆t, j∆S), denoted f_{i,j} = f(i∆t, j∆S)

[Figure: the finite-difference mesh. The S-axis runs from 0 to S_max = M∆S in steps of ∆S, the t-axis from 0 to T = N∆t in steps of ∆t; the unknown value f_{i,j} sits at the mesh point (i∆t, j∆S).]

Black-Scholes Equation for a European option with value V(S, t)

  ∂V/∂t + (1/2)σ²S² ∂²V/∂S² + rS ∂V/∂S − rV = 0    (5.1)

where 0 < S < +∞ and 0 ≤ t < T, with proper final and boundary conditions.

Notes:
• This is a second-order parabolic partial differential equation, solved backward in time from the final condition at t = T
• Its solution is sufficiently well behaved, i.e. the problem is well-posed

Finite difference approximations
The basic idea of FDM is to replace the partial derivatives by approximations obtained from Taylor expansions near the point of interest. For example,

  ∂f(S, t)/∂t = lim_{∆t→0} [f(S, t + ∆t) − f(S, t)] / ∆t ≈ [f(S, t + ∆t) − f(S, t)] / ∆t

for small ∆t, using the Taylor expansion at the point (S, t):

  f(S, t + ∆t) = f(S, t) + ∂f(S, t)/∂t ∆t + O((∆t)²)

Forward-, backward-, and central-difference approximations to 1st-order derivatives

  Forward:  ∂f(t, S)/∂t = [f(t + ∆t, S) − f(t, S)] / ∆t + O(∆t)
  Backward: ∂f(t, S)/∂t = [f(t, S) − f(t − ∆t, S)] / ∆t + O(∆t)
  Central:  ∂f(t, S)/∂t = [f(t + ∆t, S) − f(t − ∆t, S)] / (2∆t) + O((∆t)²)

[Figure: the backward, forward, and central stencils on the t-axis, using the points t − ∆t, t, and t + ∆t.]

Symmetric central-difference approximation to 2nd-order derivatives

  ∂²f/∂S² = [f(t, S + ∆S) − 2 f(t, S) + f(t, S − ∆S)] / (∆S)² + O((∆S)²)

To derive it, use Taylor expansions of f(t, S + ∆S) and f(t, S − ∆S) around the point (t, S):
  f(t, S + ∆S) = ?
  f(t, S − ∆S) = ?

Finite difference approximations at the mesh points

  Forward difference:  ∂f/∂t ≈ (f_{i+1,j} − f_{i,j}) / ∆t,     ∂f/∂S ≈ (f_{i,j+1} − f_{i,j}) / ∆S
  Backward difference: ∂f/∂t ≈ (f_{i,j} − f_{i−1,j}) / ∆t,     ∂f/∂S ≈ (f_{i,j} − f_{i,j−1}) / ∆S
  Central difference:  ∂f/∂t ≈ (f_{i+1,j} − f_{i−1,j}) / (2∆t),  ∂f/∂S ≈ (f_{i,j+1} − f_{i,j−1}) / (2∆S)

As to the second derivative, we have:

  ∂²f/∂S² ≈ [ (f_{i,j+1} − f_{i,j}) / ∆S − (f_{i,j} − f_{i,j−1}) / ∆S ] / ∆S
          = (f_{i,j+1} − 2 f_{i,j} + f_{i,j−1}) / (∆S)²

(A small numerical check of these approximations is sketched below.)

• Depending on which combination of schemes we use in discretizing the equation, we obtain explicit, implicit, or Crank-Nicolson methods
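The following short Python check is not part of the original notes; it illustrates the error behaviour of the forward, central, and second-order central differences on an arbitrary smooth test function (the choice f(x) = sin x and the step sizes are illustrative). The forward-difference error shrinks roughly linearly in the step size, the central-difference errors roughly quadratically, matching the O(∆t) and O((∆t)²) terms above.

import numpy as np

# Smooth test function and its exact derivatives (illustrative choice)
f   = np.sin
df  = np.cos
d2f = lambda x: -np.sin(x)

x = 1.0
for h in (1e-1, 1e-2, 1e-3):
    fwd  = (f(x + h) - f(x)) / h                      # forward difference, O(h)
    ctr  = (f(x + h) - f(x - h)) / (2 * h)            # central difference, O(h^2)
    ctr2 = (f(x + h) - 2 * f(x) + f(x - h)) / h**2    # 2nd-derivative central difference, O(h^2)
    print(f"h={h:g}:  fwd err={abs(fwd - df(x)):.2e}  "
          f"ctr err={abs(ctr - df(x)):.2e}  "
          f"2nd err={abs(ctr2 - d2f(x)):.2e}")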
• We also need to discretize the boundary and final conditions accordingly. For example, for a European call:

  Final condition:     f_{N,j} = max(j∆S − K, 0),            for j = 0, 1, ..., M
  Boundary conditions: f_{i,0} = 0,                           for i = 0, 1, ..., N
                       f_{i,M} = S_max − K e^{−r(N−i)∆t},     for i = 0, 1, ..., N

where S_max = M∆S.

5.2.1 Explicit Finite-Difference Method

Explicit Finite Difference Method
In ∂f/∂t + rS ∂f/∂S + (1/2)σ²S² ∂²f/∂S² = rf, at the point (i∆t, j∆S), set

  backward difference: ∂f/∂t ≈ (f_{i,j} − f_{i−1,j}) / ∆t
  central difference:  ∂f/∂S ≈ (f_{i,j+1} − f_{i,j−1}) / (2∆S)

and

  ∂²f/∂S² ≈ (f_{i,j+1} − 2 f_{i,j} + f_{i,j−1}) / (∆S)²,   rf = r f_{i,j},   S = j∆S

Rewriting the equation, we get an explicit scheme:

  f_{i−1,j} = a*_j f_{i,j−1} + b*_j f_{i,j} + c*_j f_{i,j+1}    (5.2)

where

  a*_j = (1/2) ∆t (σ²j² − rj)
  b*_j = 1 − ∆t (σ²j² + r)
  c*_j = (1/2) ∆t (σ²j² + rj)

for i = N−1, N−2, ..., 1, 0 and j = 1, 2, ..., M−1.

Numerical computation dependency
[Figure: the value at the mesh point ((i−1)∆t, j∆S) is computed from the three known values at time i∆t, at the asset levels (j−1)∆S, j∆S, and (j+1)∆S.]

Implementation
1. Starting with the final values f_{N,j}, we apply (5.2) to solve for f_{N−1,j}, 1 ≤ j ≤ M−1. We use the boundary conditions to determine f_{N−1,0} and f_{N−1,M}.
2. Repeat the process to determine f_{N−2,j}, and so on.

Example
We compare the explicit finite difference (EFD) solution for a European put with the exact Black-Scholes formula, where T = 5/12 yr, S0 = $50, K = $50, σ = 30%, r = 10%.
  Black-Scholes price: $2.8446
  EFD method with S_max = $100, ∆S = 2, ∆t = 5/1200: $2.8288
  EFD method with S_max = $100, ∆S = 1, ∆t = 5/4800: $2.8406

Example (stability)
We repeat the comparison for the same European put (T = 5/12 yr, S0 = $50, K = $50, σ = 30%, r = 10%), varying only the grid:
  Black-Scholes price: $2.8446
  EFD method with S_max = $100, ∆S = 2,   ∆t = 5/1200: $2.8288
  EFD method with S_max = $100, ∆S = 1.5, ∆t = 5/1200: $3.1414
  EFD method with S_max = $100, ∆S = 1,   ∆t = 5/1200: -$2.8271E22
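The scheme (5.2) and the put example above can be reproduced in a few lines. The sketch below is not from the original notes; variable names and the lower boundary condition f_{i,0} = K e^{−r(N−i)∆t} for the put are my own choices. With ∆S = 2 and ∆t = 5/1200 it should give a value close to the $2.83 reported above; rerunning it with ∆S = 1 and ∆t = 5/1200 is expected to blow up, as in the stability example.

import numpy as np

def european_put_explicit_fd(S0, K, r, sigma, T, Smax, dS, dt):
    """Explicit finite-difference scheme (5.2) for a European put (a minimal sketch)."""
    M = int(round(Smax / dS))           # number of S steps, Smax = M*dS
    N = int(round(T / dt))              # number of time steps, T = N*dt
    S = dS * np.arange(M + 1)
    j = np.arange(1, M)                 # interior nodes j = 1, ..., M-1

    # Coefficients a*_j, b*_j, c*_j of (5.2)
    a = 0.5 * dt * (sigma**2 * j**2 - r * j)
    b = 1.0 - dt * (sigma**2 * j**2 + r)
    c = 0.5 * dt * (sigma**2 * j**2 + r * j)

    f = np.maximum(K - S, 0.0)          # final condition: put payoff at i = N

    for i in range(N - 1, -1, -1):      # march backward in time: i = N-1, ..., 0
        f_new = np.empty_like(f)
        f_new[1:M] = a * f[0:M-1] + b * f[1:M] + c * f[2:M+1]
        f_new[0] = K * np.exp(-r * (N - i) * dt)   # boundary at S = 0 (assumed choice for a put)
        f_new[M] = 0.0                             # boundary at S = Smax
        f = f_new

    return np.interp(S0, S, f)          # S0 lies on a grid node for these parameters

price = european_put_explicit_fd(S0=50, K=50, r=0.10, sigma=0.30,
                                 T=5/12, Smax=100, dS=2, dt=5/1200)
print(price)    # close to the $2.8288 quoted above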
5.2.2 Numerical Stability

Numerical accuracy
The accuracy of a numerical solution depends on:
• The problem itself
• The discretization scheme used
• The numerical algorithm used

Conditioning issue
Suppose we have a mathematically posed problem: y = f(x), where y is to be evaluated given an input x. Let x* = x + δx for a small change δx.
If f(x*) is near f(x), then we call the problem well-conditioned. Otherwise, it is ill-posed/ill-conditioned.

Conditioning issue
• The conditioning issue is related to the problem itself, not to the specific numerical algorithm; the stability issue is related to the numerical algorithm
• One cannot expect a good numerical algorithm to solve an ill-conditioned problem any more accurately than the data warrant
• But a bad numerical algorithm can produce poor solutions even to well-conditioned problems

Conditioning issue
The concept "near" can be measured using further information about the particular problem:

  |f(x*) − f(x)| / |f(x)| ≤ C |δx| / |x|,   f(x) ≠ 0

where C is called the condition number of the problem. If C is large, the problem is ill-conditioned.

Floating point number & error
Let x be any real number.
  Infinite decimal expansion:      x = ± 0.x_1 x_2 x_3 … × 10^e
  Truncated floating point number: x ≈ fl(x) = ± 0.x_1 x_2 … x_d × 10^e
where x_1 ≠ 0, 0 ≤ x_i ≤ 9, d is an integer (the precision of the floating point system), and e is a bounded integer.
Floating point (roundoff) error: fl(x) − x

Error propagation
When additional calculations are done, these floating point errors accumulate.
Example: Let x = −0.6667 and fl(x) = −0.667 × 10^0, where d = 3.
  Floating point error: fl(x) − x = −0.0003
  Error propagation:    fl(x)² − x² = 0.00040011

Numerical stability or instability
Stability ensures that the error between the numerical solution and the exact solution remains bounded as the numerical computation progresses. That is, f(x*) (the solution of a slightly perturbed problem) is near f*(x) (the computed solution).
For our mesh, stability concerns the behavior of f_{i,j} − f(i∆t, j∆S) as the computation progresses, for fixed discretization steps ∆t and ∆S.

Convergence issue
Convergence of the numerical algorithm concerns the behavior of f_{i,j} − f(i∆t, j∆S) as ∆t, ∆S → 0, for fixed values (i∆t, j∆S).
For a well-posed linear initial value problem and a consistent scheme:
  Stability ⇔ Convergence
(Lax's equivalence theorem; Richtmyer and Morton, "Difference Methods for Initial-Value Problems", 2nd ed., 1967)

Numerical accuracy
These factors contribute to the accuracy of a numerical solution. We can find a "good" estimate if our problem is well-conditioned and the algorithm is stable:
  Stable:           f*(x) ≈ f(x*)
  Well-conditioned: f(x*) ≈ f(x)
  Together:         f*(x) ≈ f(x)

5.2.3 Financial Interpretation of Numerical Instability

Financial interpretation of instability (Hull, pages 423-424)
If ∂f/∂S and ∂²f/∂S² are assumed to be the same at (i+1, j) as they are at (i, j), we obtain equations of the form:

  f_{i,j} = â_j f_{i+1,j−1} + b̂_j f_{i+1,j} + ĉ_j f_{i+1,j+1}    (5.3)

where

  â_j = [1/(1 + r∆t)] ( (1/2)σ²j²∆t − (1/2)rj∆t ) = π_d / (1 + r∆t)
  b̂_j = [1/(1 + r∆t)] ( 1 − σ²j²∆t )              = π_0 / (1 + r∆t)
  ĉ_j = [1/(1 + r∆t)] ( (1/2)σ²j²∆t + (1/2)rj∆t ) = π_u / (1 + r∆t)

for i = N−1, N−2, ..., 1, 0 and j = 1, 2, ..., M−1.

Explicit finite difference methods
[Figure: trinomial stencil; f_{i,j} is obtained from f_{i+1,j+1} with weight π_u, f_{i+1,j} with weight π_0, and f_{i+1,j−1} with weight π_d.]
These coefficients can be interpreted as probabilities times a discount factor. If one of these probabilities is negative, instability occurs (checked numerically in the sketch at the end of this section).

Explicit finite difference method as a trinomial tree
Check that the mean and variance of the change in the asset price over ∆t implied by these "probabilities" match geometric Brownian motion.
Expected value of the increase in asset price during ∆t:

  E[∆S] = (−∆S) π_d + 0 · π_0 + (∆S) π_u = r j∆S ∆t = r S ∆t

Variance of the increment:

  E[(∆S)²] = (∆S)² π_d + 0² · π_0 + (∆S)² π_u = σ² j² (∆S)² ∆t = σ² S² ∆t
  Var[∆S] = E[(∆S)²] − (E[∆S])² = σ² S² ∆t − (r S ∆t)² ≈ σ² S² ∆t

which is coherent with geometric Brownian motion in a risk-neutral world.

Change of variable
Define Z = ln S. The B-S equation becomes

  ∂f/∂t + (r − σ²/2) ∂f/∂Z + (1/2)σ² ∂²f/∂Z² = rf

The corresponding difference equation is

  (f_{i+1,j} − f_{i,j})/∆t + (r − σ²/2)(f_{i+1,j+1} − f_{i+1,j−1})/(2∆Z)
      + (1/2)σ² (f_{i+1,j+1} − 2 f_{i+1,j} + f_{i+1,j−1})/(∆Z)² = r f_{i,j}

or

  f_{i,j} = α*_j f_{i+1,j−1} + β*_j f_{i+1,j} + γ*_j f_{i+1,j+1}    (5.4)
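As a closing check tying the financial interpretation (5.3) back to the stability example, the sketch below (not part of the original notes) computes the weights π_d, π_0, π_u on the three grids used earlier. For ∆S = 2 the middle weight π_0 stays positive across the grid, while for ∆S = 1.5 and ∆S = 1 it becomes strongly negative at the larger j values, which is consistent with the oscillating and exploding prices reported in the stability example.

import numpy as np

def explicit_fd_weights(sigma, r, dt, M):
    """Weights pi_d, pi_0, pi_u of the trinomial interpretation (5.3), for j = 1, ..., M-1."""
    j = np.arange(1, M)
    pi_d = 0.5 * sigma**2 * j**2 * dt - 0.5 * r * j * dt
    pi_0 = 1.0 - sigma**2 * j**2 * dt
    pi_u = 0.5 * sigma**2 * j**2 * dt + 0.5 * r * j * dt
    return pi_d, pi_0, pi_u

sigma, r, dt, Smax = 0.30, 0.10, 5/1200, 100
for dS in (2.0, 1.5, 1.0):                       # the three grids from the stability example
    M = int(round(Smax / dS))
    pi_d, pi_0, pi_u = explicit_fd_weights(sigma, r, dt, M)
    print(f"dS = {dS}: min pi_d = {pi_d.min():.2e}, "
          f"min pi_0 = {pi_0.min():.2e}, min pi_u = {pi_u.min():.2e}")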