Pricing the American Option Using Itô's Formula and Optimal Stopping Theory
U.U.D.M. Project Report 2014:3

Jonas Bergström

Degree project in mathematics, 15 credits
Supervisor and examiner: Erik Ekström
January 2014
Department of Mathematics, Uppsala University

Abstract. In this thesis the goal is to arrive at results concerning the value of American options and a formula for the perpetual American put option. For the stochastic dynamics of the underlying asset I look at two cases. The first is the standard Black-Scholes model and the second allows the asset to jump to zero, i.e. to default. To achieve the goals stated above, the first couple of sections introduce some basic concepts in probability, such as processes and information. Before introducing Itô's formula, this paper contains a not too rigorous introduction to stochastic differential equations and stochastic integration. Then the Black-Scholes model is introduced, followed by a section about optimal stopping theory, in order to arrive at the American option.

Acknowledgements. I would like to thank my supervisor Erik Ekström for his support. His input has been essential for the progress and structure of this thesis.

1. Introduction

This section contains the definition of one of the most important building blocks in continuous probability, namely the Wiener process. This is followed by definitions of information and martingales.

Definition 1.1. A Wiener process, $W = \{W_t,\ t \geq 0\}$, starting from $W_0 = 0$, is a continuous-time stochastic process taking values in $\mathbb{R}$ such that
• $W$ has independent increments, i.e. $W_v - W_u$ and $W_t - W_s$ are independent whenever $u \leq v \leq s \leq t$;
• $W_{s+t} - W_s \sim N(0, t)$.

Definition 1.2. The symbol $\mathcal{F}_t^W$ denotes the information generated by the process $W = \{W_s,\ s \geq 0\}$ for $s \in [0, t]$. If $Y$ is a stochastic process such that $Y_t \in \mathcal{F}_t^W$, then $Y$ is said to be adapted to the filtration $\mathcal{F}_t^W$.
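The two defining properties in Definition 1.1 can be illustrated numerically. The following is a minimal sketch (not part of the thesis): it samples discretized Wiener paths and checks that $W_T \sim N(0, T)$ and that increments over disjoint intervals are uncorrelated.

```python
import numpy as np

# Illustrative sketch: sample Wiener increments on a grid over [0, T]
# and check the two defining properties of Definition 1.1 empirically.
rng = np.random.default_rng(0)

n_paths, n_steps, T = 100_000, 100, 1.0
dt = T / n_steps

# W_{t+dt} - W_t ~ N(0, dt), drawn independently across steps
increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.cumsum(increments, axis=1)   # W on the grid (W_0 = 0 omitted)

W_T = W[:, -1]                      # terminal values W_T ~ N(0, T)
print(np.var(W_T))                  # ~ T = 1.0

# Independent increments: W_{1/2} - W_0 and W_1 - W_{1/2} are uncorrelated
first_half = W[:, n_steps // 2 - 1]
second_half = W_T - first_half
print(np.corrcoef(first_half, second_half)[0, 1])   # ~ 0.0
```

With 100,000 paths the sampling error in both estimates is well below one percent.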
This simply means that $Y$ can be observed at time $t$. For example, let $Y_s := e^{-rs} W_s$. Then $Y_t \in \mathcal{F}_t^W$, because, given the trajectory of $W$ between $0$ and $t$, the value of $Y_t$ can be determined. If we instead define $Y_s := e^{-rs} W_{s+\epsilon}$, $\epsilon > 0$, then $Y_t \notin \mathcal{F}_t^W$, since $W_{t+\epsilon}$ lies in the "future" beyond our information at time $t$.

Definition 1.3. (Martingale) The process $X_t \in \mathcal{F}_t^W$ is called a martingale if
• $E[|X_t|] < \infty$;
• $E[X_t \mid \mathcal{F}_s^W] = X_s$ for all $s \leq t$.
$X_t \in \mathcal{F}_t^W$ is called a submartingale if $E[X_t \mid \mathcal{F}_s^W] \geq X_s$ for all $s \leq t$, and a supermartingale if $E[X_t \mid \mathcal{F}_s^W] \leq X_s$ for all $s \leq t$.

2. Stochastic Differential Equations

Let $X_t$ be a stochastic process that resembles the value of an asset. What is a reasonable way to mathematically construct price evolutions in continuous time? That is, what can be said about $dX_t = X_{t+dt} - X_t$? One can assume that $X$ should change proportionally with the increment of time, $dt$. Driven by the asset's fundamental values and market expectations, $dt$ should be amplified by a deterministic function of the asset; call this function $u(X_t)$. Thus one arrives at $dX_t = u(X_t)\,dt$. To make this model more realistic one also adds a non-deterministic term $dW_t = W_{t+dt} - W_t \sim N(0, dt)$, amplified by a deterministic function $\sigma$ that depends on the same variables as $u$. The result is what is called a stochastic differential equation (SDE), which describes the local dynamics of a stochastic process in continuous time:
$$dX_t = u(X_t)\,dt + \sigma(X_t)\,dW_t, \qquad X_0 = x,$$
where the solution to this system is
$$X_t = x + \int_0^t u(X_s)\,ds + \int_0^t \sigma(X_s)\,dW_s.$$
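An SDE of this form can be simulated by discretizing time and replacing $dW_t$ with finite increments, the Euler-Maruyama scheme. The sketch below is mine, not the thesis's; it assumes the particular choice $u(x) = \mu x$, $\sigma(x) = \sigma x$ (a Black-Scholes-style specification) purely for illustration, and compares the sample mean at $T$ with the known value $E[X_T] = x_0 e^{\mu T}$ for that choice.

```python
import numpy as np

# Sketch (assumed example: u(x) = mu*x, sigma(x) = sigma*x).
# Euler-Maruyama discretization of dX = u(X) dt + sigma(X) dW:
#     X_{t+dt} ~ X_t + u(X_t) dt + sigma(X_t) (W_{t+dt} - W_t)
rng = np.random.default_rng(1)

mu, sigma, x0, T = 0.05, 0.2, 100.0, 1.0
n_paths, n_steps = 50_000, 200
dt = T / n_steps

u = lambda x: mu * x
s = lambda x: sigma * x

X = np.full(n_paths, x0)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)   # dW ~ N(0, dt)
    X = X + u(X) * dt + s(X) * dW

# For u(x) = mu*x the expectation satisfies E[X_T] = x0 * exp(mu*T)
print(X.mean(), x0 * np.exp(mu * T))
```

Refining the grid (larger `n_steps`) reduces the discretization bias of the scheme.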
3. Stochastic Integrals

This section is devoted to giving a good interpretation of
$$\int_0^t f(s)\,dW_s.$$
If one assumes that $f$ is a simple function on $[0, t]$, meaning that $[0, t]$ can be split into smaller intervals on each of which $f$ is equal to some constant, then one can formulate the stochastic integral as
$$\int_0^t f(s)\,dW_s = \sum_{k=0}^{n-1} f(t_k)\,(W_{t_{k+1}} - W_{t_k}),$$
where $0 = t_0 < \dots < t_n = t$. For a non-simple $f$ one creates a sequence of simple functions $f_n$ with certain properties such that
$$\int_0^t f(s)\,dW_s = \lim_{n \to \infty} \int_0^t f_n(s)\,dW_s.$$

Proposition 3.1. For any process $f$ satisfying the conditions
• $E[f^2] < \infty$;
• $f(\tau)$ is adapted to $\mathcal{F}_\tau^W$;
it holds that
$$E\left[\int_s^t f(\tau)\,dW_\tau \,\middle|\, \mathcal{F}_s^W\right] = 0.$$

Proof. In this proof one assumes that $f$ is simple, because the full proof is outside the scope of this thesis. Write
$$\int_s^t f(\tau)\,dW_\tau = \sum_{k=0}^{n-1} f(\tau_k)\,(W_{\tau_{k+1}} - W_{\tau_k}),$$
where $s = \tau_0 < \dots < \tau_n = t$. From the law of iterated expectations it is true, for each $k$, that
$$E[f(\tau_k)(W_{\tau_{k+1}} - W_{\tau_k}) \mid \mathcal{F}_s^W] = E\big[\,E[f(\tau_k)(W_{\tau_{k+1}} - W_{\tau_k}) \mid \mathcal{F}_{\tau_k}^W]\,\big|\,\mathcal{F}_s^W\big].$$
Now $f(\tau_k)$ depends on the values of the process $W$ up to time $\tau_k$, which are all given by $\mathcal{F}_{\tau_k}^W$. Because of independent increments, $W_{\tau_{k+1}} - W_{\tau_k}$ does not depend on the interval $[s, \tau_k]$ and hence is independent of $f(\tau_k)$. It follows that
$$E[f(\tau_k)(W_{\tau_{k+1}} - W_{\tau_k}) \mid \mathcal{F}_{\tau_k}^W] = f(\tau_k)\,E[W_{\tau_{k+1}} - W_{\tau_k}] = 0,$$
and therefore
$$E\left[\int_s^t f(\tau)\,dW_\tau \,\middle|\, \mathcal{F}_s^W\right] = \sum_{k=0}^{n-1} E[0 \mid \mathcal{F}_s^W] = 0. \qquad\Box$$

Theorem 3.2. Assume that $dX_t = u(X_t)\,dt + \sigma(X_t)\,dW_t$. If $u = 0$ $P$-a.s. for all $t$, then $X_t$ is a martingale.

Proof. The equation $dX_t = u(X_t)\,dt + \sigma(X_t)\,dW_t$ has the solution
$$X_t = X_s + \int_s^t u(X_\tau)\,d\tau + \int_s^t \sigma(X_\tau)\,dW_\tau, \qquad s \leq t.$$
Taking conditional expectations, and applying Proposition 3.1 to the stochastic integral, yields
$$E[X_t \mid \mathcal{F}_s^W] = E[X_s \mid \mathcal{F}_s^W] + E\left[\int_s^t u(X_\tau)\,d\tau \,\middle|\, \mathcal{F}_s^W\right] + E\left[\int_s^t \sigma(X_\tau)\,dW_\tau \,\middle|\, \mathcal{F}_s^W\right] = X_s + E\left[\int_s^t u(X_\tau)\,d\tau \,\middle|\, \mathcal{F}_s^W\right] = X_s,$$
since $u = 0$. $\Box$
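The role of adaptedness in Proposition 3.1 can be seen numerically. The sketch below (my illustration, not from the thesis) takes the simple adapted integrand $f(t_k) = W_{t_k}$, so the Itô sum uses the left endpoint of each interval and has mean zero; evaluating the integrand at the right endpoint instead "peeks" at the increment and destroys this property.

```python
import numpy as np

# Sketch: Ito sums for the adapted integrand f(t_k) = W_{t_k} versus a
# non-adapted (look-ahead) integrand, over a partition of [0, T].
rng = np.random.default_rng(2)

n_paths, n_steps, T = 100_000, 100, 1.0
dt = T / n_steps
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.cumsum(dW, axis=1)
W_left = np.hstack([np.zeros((n_paths, 1)), W[:, :-1]])   # W_{t_k}, W_{t_0} = 0

# sum_k f(t_k)(W_{t_{k+1}} - W_{t_k}) with f(t_k) = W_{t_k}: adapted,
# so each term has zero mean, as Proposition 3.1 predicts.
ito_sum = np.sum(W_left * dW, axis=1)
print(ito_sum.mean())            # ~ 0.0

# Right endpoint W_{t_{k+1}}: not adapted; E[sum_k W_{t_{k+1}} dW_k] = T.
bad_sum = np.sum(W * dW, axis=1)
print(bad_sum.mean())            # ~ T = 1.0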
4. Itô's Formula

Itô's formula helps to give a description of the local dynamics of a stochastic process that is a function of an underlying process with a given stochastic differential.

Theorem 4.1. (Itô's Formula) Assume that the stochastic process $X = \{X_t,\ t \geq 0\}$ has the differential
$$dX_t = u(X_t)\,dt + \sigma(X_t)\,dW_t,$$
where $u, \sigma \in \mathcal{F}_t^W$. Define a new stochastic process $f(t, X_t)$, where $f$ is assumed to be smooth. Given the multiplication table
$$(dt)^2 = 0, \qquad dt \cdot dW_t = 0, \qquad (dW_t)^2 = dt,$$
the stochastic differential of $f$ becomes
$$(1)\qquad df = \frac{\partial f}{\partial t}\,dt + \frac{\partial f}{\partial x}\,dX_t + \frac{1}{2}\frac{\partial^2 f}{\partial x^2}\,(dX_t)^2,$$
or equivalently
$$df = \left(\frac{\partial f}{\partial t} + u\frac{\partial f}{\partial x} + \frac{1}{2}\sigma^2\frac{\partial^2 f}{\partial x^2}\right)dt + \sigma\frac{\partial f}{\partial x}\,dW_t.$$

Proof. To prove that (1) holds for all $t$ we look at the Taylor expansion around the fixed point $(t, x)$:
$$f(t+h, x+k) = f(t, x) + h\frac{\partial f}{\partial t}(t, x) + k\frac{\partial f}{\partial x}(t, x) + \frac{1}{2}\left(h^2\frac{\partial^2 f}{\partial t^2} + 2hk\frac{\partial^2 f}{\partial t\,\partial x} + k^2\frac{\partial^2 f}{\partial x^2}\right) + I,$$
$$I = \frac{1}{3!}\left(h\frac{\partial}{\partial t} + k\frac{\partial}{\partial x}\right)^3 f(t+sh,\, x+sk),$$
where $0 < s < 1$. Now let $h \to dt$ and $k \to dX_t$ to get
$$f(t+dt,\, x+dX_t) = f(t, x) + \frac{\partial f}{\partial t}\,dt + \frac{\partial f}{\partial x}\,dX_t + \frac{1}{2}\frac{\partial^2 f}{\partial t^2}\,(dt)^2 + \frac{\partial^2 f}{\partial t\,\partial x}\,dt \cdot dX_t + \frac{1}{2}\frac{\partial^2 f}{\partial x^2}\,(dX_t)^2 + I.$$
Since $(dt)^2 = 0$, $dt \cdot dX_t = dt\,(u\,dt + \sigma\,dW_t) = 0$, and $\left(\frac{\partial}{\partial t}\,dt + \frac{\partial}{\partial x}\,dX_t\right)^3 f = 0$, it follows that
$$df = f(t+dt,\, x+dX_t) - f(t, x) = \frac{\partial f}{\partial t}\,dt + \frac{\partial f}{\partial x}\,dX_t + \frac{1}{2}\frac{\partial^2 f}{\partial x^2}\,(dX_t)^2.$$
Now we obtain
$$(dX_t)^2 = u^2(dt)^2 + 2u\sigma\,dt \cdot dW_t + \sigma^2(dW_t)^2 = \sigma^2\,dt$$
$$\Rightarrow\quad df = \left(\frac{\partial f}{\partial t} + u\frac{\partial f}{\partial x} + \frac{1}{2}\sigma^2\frac{\partial^2 f}{\partial x^2}\right)dt + \sigma\frac{\partial f}{\partial x}\,dW_t. \qquad\Box$$

Theorem 4.2. (Feynman–Kac) Assume that $F$ solves the boundary value problem
$$\frac{\partial F}{\partial t} + u\frac{\partial F}{\partial x} + \frac{1}{2}\sigma^2\frac{\partial^2 F}{\partial x^2} - rF = 0, \qquad F(T, s) = \Phi(s).$$
Also assume that $E\left[\left|\sigma(X_s)\frac{\partial F}{\partial x}(s, X_s)\,e^{-rs}\right|^2\right] < \infty$ and that $\sigma(X_t)\frac{\partial F}{\partial x}\,e^{-rt}$ is adapted to $\mathcal{F}_t^W$. Assume that $X_s$ is the solution to
$$dX_s = u(X_s)\,ds + \sigma(X_s)\,dW_s, \qquad X_t = x.$$
Then it follows that
$$F(t, x) = e^{-r(T-t)}\,E_{t,x}[\Phi(X_T)],$$
where $E_{t,x}[\,\cdot\,] = E[\,\cdot \mid X_t = x]$.

Proof. To prove this, define a new stochastic process
$$f(s, X_s) = e^{-rs} F(s, X_s)$$
and use Itô's formula.
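The heuristic rule $(dW_t)^2 = dt$ in the multiplication table can be made plausible numerically: over a partition of $[0, T]$, the quadratic variation $\sum_k (W_{t_{k+1}} - W_{t_k})^2$ concentrates around $T$ as the mesh shrinks. A minimal sketch (my illustration, not from the thesis):

```python
import numpy as np

# Sketch of (dW)^2 = dt: the sum of squared Wiener increments over [0, T]
# approaches T as the partition is refined (quadratic variation of W).
rng = np.random.default_rng(3)

T = 1.0
for n_steps in (10, 100, 10_000):
    dW = rng.normal(0.0, np.sqrt(T / n_steps), size=n_steps)
    print(n_steps, np.sum(dW ** 2))   # -> T = 1.0 as n_steps grows

# By contrast, sum_k (dt)^2 = T^2 / n_steps -> 0, and sum_k dt * dW_k
# = (T / n_steps) * W_T -> 0, matching the other two table entries.
```

The variance of $\sum_k (\Delta W_k)^2$ is $2T^2/n$, so the fluctuation around $T$ vanishes as $n \to \infty$.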
This gives
$$df = e^{-rt}\left(-rF + \frac{\partial F}{\partial t} + u\frac{\partial F}{\partial x} + \frac{1}{2}\sigma^2\frac{\partial^2 F}{\partial x^2}\right)dt + \sigma e^{-rt}\frac{\partial F}{\partial x}\,dW_t,$$
where $u = u(X_t)$, $\sigma = \sigma(X_t)$, $F = F(t, X_t)$, etc. The solution to this SDE is
$$f(T, X_T) = f(t, X_t) + \int_t^T e^{-rs}\left(-rF + \frac{\partial F}{\partial t} + u\frac{\partial F}{\partial x} + \frac{1}{2}\sigma^2\frac{\partial^2 F}{\partial x^2}\right)ds + \int_t^T \sigma e^{-rs}\frac{\partial F}{\partial x}\,dW_s,$$
where the integrands are evaluated at $(s, X_s)$.
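The Feynman-Kac representation can be checked by Monte Carlo in a case solvable by hand. The example below is mine, not the thesis's: it assumes $u = 0$, constant $\sigma$, and $\Phi(x) = x^2$, for which $E_{t,x}[\Phi(X_T)] = x^2 + \sigma^2 (T-t)$ and $F(t,x) = e^{-r(T-t)}(x^2 + \sigma^2 (T-t))$ satisfies the boundary value problem of Theorem 4.2.

```python
import numpy as np

# Sketch (assumed example): u = 0, sigma constant, Phi(x) = x^2, so
#     X_T = x + sigma * (W_T - W_t),  X_T ~ N(x, sigma^2 * (T - t)),
# and F(t, x) = exp(-r*(T-t)) * (x^2 + sigma^2*(T-t)) solves
#     F_t + u F_x + (1/2) sigma^2 F_xx - r F = 0,  F(T, x) = Phi(x).
rng = np.random.default_rng(4)

r, sigma, x, t, T = 0.03, 0.4, 2.0, 0.0, 1.0
n_paths = 200_000

W_incr = rng.normal(0.0, np.sqrt(T - t), size=n_paths)   # W_T - W_t
X_T = x + sigma * W_incr

# Feynman-Kac: F(t, x) = exp(-r*(T-t)) * E_{t,x}[Phi(X_T)]
mc_value = np.exp(-r * (T - t)) * np.mean(X_T ** 2)
exact = np.exp(-r * (T - t)) * (x ** 2 + sigma ** 2 * (T - t))
print(mc_value, exact)
```

The two printed values agree to Monte Carlo accuracy, consistent with the representation $F(t,x) = e^{-r(T-t)} E_{t,x}[\Phi(X_T)]$.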