LECTURE 4: STOCHASTIC DIFFERENTIAL EQUATIONS AND SOLUTIONS

Date: November 3, 2019.

Let us consider the following simple stochastic ordinary differential equation:

    dX(t) = -\lambda X(t)\, dt + dW(t), \quad \lambda > 0.    (0.1)

It can be readily verified by Ito's formula that the process

    X(t) = e^{-\lambda t} x_0 + \int_0^t e^{-\lambda (t-s)}\, dW(s)    (0.2)

satisfies Equation (0.1). By the Kolmogorov continuity theorem, the solution is Hölder continuous of order less than 1/2 in time, since

    E[|X(t) - X(s)|^2] \le (t - s)^2 \Big( \frac{2}{\lambda} + x_0^2 \Big) + |t - s|.    (0.3)

This simple model shows that the solution to a stochastic differential equation is Hölder continuous of order less than 1/2 and thus does not have derivatives in time. This low regularity of solutions leads to concerns in SODEs (and their numerical methods) that do not arise for ODEs.

1. Existence and uniqueness of strong solutions

Let (\Omega, \mathcal{F}, P) be a probability space and (W(t), \mathcal{F}_t^W) = ((W_1(t), \dots, W_m(t))^\top, \mathcal{F}_t^W) be an m-dimensional standard Wiener process, where \mathcal{F}_t^W, 0 \le t \le T, is an increasing family of \sigma-subalgebras of \mathcal{F} induced by W(t). Consider the system of Ito SODEs

    dX = a(t, X)\, dt + \sum_{r=1}^m \sigma_r(t, X)\, dW_r(t), \quad t \in (t_0, T], \quad X(t_0) = x_0,    (1.1)

where X, a, \sigma_r are m-dimensional column vectors and x_0 is independent of W. We assume that a(t, x) and \sigma(t, x) are sufficiently smooth and globally Lipschitz.

Remark 1.1. Under mild conditions, the SODEs (1.1) can be rewritten in the Stratonovich sense. Equation (1.1) then takes the form

    dX = [a(t, X) - c(t, X)]\, dt + \sum_{r=1}^m \sigma_r(t, X) \circ dW_r(t), \quad t \in (t_0, T], \quad X(t_0) = x_0,    (1.2)

where

    c(t, X) = \frac{1}{2} \sum_{r=1}^m \frac{\partial \sigma_r(t, X)}{\partial x} \sigma_r(t, X),

and \partial \sigma_r / \partial x is the Jacobian matrix of the column vector \sigma_r:

    \frac{\partial \sigma_r}{\partial x} = \Big( \frac{\partial \sigma_r}{\partial x_1}, \dots, \frac{\partial \sigma_r}{\partial x_m} \Big) = \begin{pmatrix} \partial \sigma_{1,r}/\partial x_1 & \cdots & \partial \sigma_{1,r}/\partial x_m \\ \vdots & \ddots & \vdots \\ \partial \sigma_{m,r}/\partial x_1 & \cdots & \partial \sigma_{m,r}/\partial x_m \end{pmatrix}.

We write f \in L_{ad}(\Omega; L^2([a, b])) if f(t) is adapted to \mathcal{F}_t and f(t, \omega) \in L^2([a, b]), i.e.,

    L_{ad}(\Omega; L^2([a, b])) = \Big\{ f(t, \omega) : f(t, \omega) \text{ is } \mathcal{F}_t\text{-measurable and } P\Big( \int_a^b f_s^2\, ds < \infty \Big) = 1 \Big\}.

Here \{\mathcal{F}_t, a \le t \le b\} is a filtration such that
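The explicit solution (0.2) can be checked numerically: discretizing (0.1) with the standard Euler-Maruyama scheme and evaluating (0.2) along the same Brownian increments should give nearly identical values. A minimal Python sketch (NumPy assumed; the parameter values and step count below are illustrative, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)
lam, x0, T, n = 2.0, 1.0, 1.0, 10_000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), n)  # Brownian increments on a uniform grid

# Euler-Maruyama for dX = -lam * X dt + dW
X = x0
for dw in dW:
    X += -lam * X * dt + dw

# Exact solution (0.2): X(T) = e^{-lam T} x0 + int_0^T e^{-lam(T-s)} dW(s),
# with the Ito integral approximated as a left-point sum on the same increments
t = np.arange(n) * dt
X_exact = np.exp(-lam * T) * x0 + np.sum(np.exp(-lam * (T - t)) * dW)

# The two values agree up to discretization error
assert abs(X - X_exact) < 0.05
```

The agreement improves as dt shrinks, consistent with (0.2) being the strong solution of (0.1).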
• for each t, f(t) and W(t) are \mathcal{F}_t-measurable, i.e., f(t) and W(t) are adapted to the filtration \mathcal{F}_t;
• for any s \le t, the increment W(t) - W(s) is independent of the \sigma-field \mathcal{F}_s.

Definition 1.2 (A strong solution to an SODE). We say that X(t) is a (strong) solution to the SDE (1.1) if
• a(t, X(t)) \in L_{ad}(\Omega; L^1([c, d])),
• \sigma(t, X(t)) \in L_{ad}(\Omega; L^2([c, d])),
• and X(t) satisfies the following integral equation a.s.:

    X(t) = x + \int_0^t a(s, X(s))\, ds + \int_0^t \sigma(s, X(s))\, dW(s).    (1.3)

In general, it is difficult to give a necessary and sufficient condition for the existence and uniqueness of strong solutions; usually, we can only give sufficient conditions.

Theorem 1.3 (Existence and uniqueness). Suppose X_0 is \mathcal{F}_0-measurable with E[X_0^2] < \infty, and the coefficients a, \sigma satisfy the following conditions:
• (Lipschitz condition) a and \sigma are Lipschitz continuous, i.e., there is a constant K > 0 such that

    |a(x) - a(y)| + \sum_{r=1}^m |\sigma_r(x) - \sigma_r(y)| \le K |x - y|;

• (Linear growth) a and \sigma grow at most linearly, i.e., there is a constant C > 0 such that

    |a(x)| + |\sigma(x)| \le C(1 + |x|).

Then the SDE above has a unique strong solution, and the solution has the following properties:
• X(t) is adapted to the filtration generated by X_0 and W(s), s \le t;
• E[\int_0^t X^2(s)\, ds] < \infty.

See [Øksendal, 2003, Chapter 5] for a proof. Here are some examples where the conditions of the theorem are satisfied.

• (Geometric Brownian motion) For \mu, \sigma \in \mathbb{R},

    dX(t) = \mu X(t)\, dt + \sigma X(t)\, dW(t), \quad X_0 = x.

• (Sine process) For \sigma \in \mathbb{R},

    dX(t) = \sin(X(t))\, dt + \sigma\, dW(t), \quad X_0 = x.

• (Modified Cox-Ingersoll-Ross process) For \theta_1, \theta_2 \in \mathbb{R} with \theta_1 + \theta_2^2/2 > 0,

    dX(t) = -\theta_1 X(t)\, dt + \theta_2 \sqrt{1 + X(t)^2}\, dW(t), \quad X_0 = x.

Remark 1.4. The Lipschitz condition in the theorem is also known as the global Lipschitz condition.
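Among these examples, geometric Brownian motion is a convenient test case because its strong solution is known in closed form: applying Ito's formula to log X(t) gives X(t) = x exp((\mu - \sigma^2/2) t + \sigma W(t)). The following Python sketch (illustrative parameters, not from the lecture) compares an Euler-Maruyama path with this closed form along the same Brownian path:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, x0, T, n = 0.05, 0.2, 1.0, 1.0, 20_000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), n)
W_T = dW.sum()  # W(T) along this path

# Euler-Maruyama for dX = mu * X dt + sigma * X dW
X = x0
for dw in dW:
    X += mu * X * dt + sigma * X * dw

# Closed-form strong solution evaluated along the same Brownian path
X_exact = x0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * W_T)

# Strong (pathwise) error of Euler-Maruyama is O(sqrt(dt)) here
assert abs(X - X_exact) < 0.02
```

Note that here the diffusion coefficient is multiplicative, so unlike the additive-noise example (0.1), the discretization error decays only at rate sqrt(dt).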
A straightforward generalization is the one-sided Lipschitz condition (global monotone condition)

    (x - y)^\top (a(x) - a(y)) + p_0 \sum_{r=1}^m |\sigma_r(x) - \sigma_r(y)|^2 \le K |x - y|^2, \quad p_0 > 0,

and the growth condition can also be generalized as

    x^\top a(x) + p_1 \sum_{r=1}^m |\sigma_r(x)|^2 \le C(1 + |x|^2).

Theorem 1.5 (Regularity of the solution). Under the conditions of Theorem 1.3, the solution is continuous, and for every p \ge 1 there exists a constant C > 0, depending only on t, such that

    E[|X(t) - X(s)|^{2p}] \le C |t - s|^p.

The proof of this theorem relies on the Burkholder-Davis-Gundy inequality. Then, by the Kolmogorov continuity theorem, we can conclude that the solution is only Hölder continuous with exponent less than 1/2, which is the same as Brownian motion.

2. Solution methods

The process (0.2) is a special case of the Ornstein-Uhlenbeck process, which satisfies the equation

    dX(t) = \kappa (\theta - X(t))\, dt + \sigma\, dW(t),    (2.1)

where \kappa, \sigma > 0 and \theta \in \mathbb{R}. The solution to (2.1) can be obtained by the change of variable Y(t) = \theta - X(t). By Ito's formula,

    dY(t) = -\kappa Y(t)\, dt + \sigma\, d(-W(t)).

As in (0.2), the solution is

    Y(t) = e^{-\kappa t} Y_0 + \sigma \int_0^t e^{-\kappa (t-s)}\, d(-W(s)).    (2.2)

Then, since X(t) = \theta - Y(t), we have

    X(t) = X_0 e^{-\kappa t} + \theta (1 - e^{-\kappa t}) + \sigma \int_0^t e^{-\kappa (t-s)}\, dW(s).

In more general cases, we can use similar ideas to find explicit solutions to SODEs.

2.1. The integrating factor method. We apply the integrating factor method to solve nonlinear SDEs of the form

    dX(t) = f(t, X(t))\, dt + \sigma(t) X(t)\, dW(t), \quad X_0 = x,    (2.3)

where f is a continuous deterministic function from \mathbb{R}_+ \times \mathbb{R} to \mathbb{R}.

• Step 1. Solve the equation dG(t) = \sigma(t) G(t)\, dW(t). Its solution is

    G(t) = \exp\Big( \int_0^t \sigma(s)\, dW(s) - \frac{1}{2} \int_0^t \sigma^2(s)\, ds \Big).

The integrating factor is defined by F(t) = G^{-1}(t). It can be readily verified that F(t) satisfies

    dF(t) = -\sigma(t) F(t)\, dW(t) + \sigma^2(t) F(t)\, dt.

• Step 2. Let X(t) = G(t) C(t), so that C(t) = F(t) X(t).
Then, by the product rule, (2.3) can be written as

    d(F(t) X(t)) = F(t) f(t, X(t))\, dt,

so C(t) satisfies the following "deterministic" ODE:

    dC(t) = F(t) f(t, G(t) C(t))\, dt.    (2.4)

• Step 3. Once we obtain C(t), we recover X(t) from X(t) = G(t) C(t).

Remark 2.1. When (2.4) cannot be solved explicitly, we may use numerical methods to obtain C(t).

Example 2.2. Use the integrating factor method to solve the SDE

    dX(t) = (X(t))^{-1}\, dt + \alpha X(t)\, dW(t), \quad X_0 = x > 0,

where \alpha is a constant.

Solution. Here f(t, x) = x^{-1} and F(t) = \exp(-\alpha W(t) + \frac{\alpha^2}{2} t). We only need to solve

    dC(t) = F(t) [G(t) C(t)]^{-1}\, dt = \frac{F^2(t)}{C(t)}\, dt.

This gives d(C(t))^2 = 2 F^2(t)\, dt and thus

    (C(t))^2 = 2 \int_0^t \exp(-2\alpha W(s) + \alpha^2 s)\, ds + x^2.

Since the initial condition is x > 0, we take C(t) > 0, so that

    X(t) = G(t) C(t) = \exp\Big( \alpha W(t) - \frac{\alpha^2}{2} t \Big) \sqrt{ 2 \int_0^t \exp(-2\alpha W(s) + \alpha^2 s)\, ds + x^2 } > 0.

2.2. Moment equations of solutions. For a more complicated SODE, we cannot obtain a solution that can be written explicitly in terms of W(t). For example, the modified Cox-Ingersoll-Ross model

    dX(t) = \kappa (\theta - X(t))\, dt + \sigma \sqrt{X(t)}\, dW(t), \quad X_0 = x,    (2.5)

does not have an explicit solution. However, we can say a bit more about the moments of the process X(t). Write (2.5) in its integral form:

    X(t) = x + \kappa \int_0^t (\theta - X(s))\, ds + \sigma \int_0^t \sqrt{X(s)}\, dW(s),    (2.6)

and using Ito's formula gives

    X^2(t) = x^2 + (2\kappa\theta + \sigma^2) \int_0^t X(s)\, ds - 2\kappa \int_0^t X^2(s)\, ds + 2\sigma \int_0^t (X(s))^{3/2}\, dW(s).    (2.7)

From this equation and the properties of the Ito integral, we can obtain the moments of the solution.
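For instance, taking expectations in (2.6) eliminates the Ito integral, whose expectation vanishes, and yields the ODE m'(t) = \kappa(\theta - m(t)) for the first moment m(t) = E[X(t)], so m(t) = \theta + (x - \theta) e^{-\kappa t}. A Monte Carlo sketch in Python is consistent with this (illustrative parameters; the truncation max(X, 0) inside the square root is a common device to keep the scheme defined and is not part of the model):

```python
import numpy as np

rng = np.random.default_rng(2)
kappa, theta, sigma, x0, T = 1.0, 1.0, 0.3, 2.0, 1.0
n_steps, n_paths = 500, 20_000
dt = T / n_steps

# Euler-Maruyama for dX = kappa*(theta - X) dt + sigma*sqrt(X) dW,
# simulated over many independent paths at once
X = np.full(n_paths, x0)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    X += kappa * (theta - X) * dt + sigma * np.sqrt(np.maximum(X, 0.0)) * dW

# First moment predicted by taking expectations in (2.6)
m_exact = theta + (x0 - theta) * np.exp(-kappa * T)

# Sample mean matches up to Monte Carlo and discretization error
assert abs(X.mean() - m_exact) < 0.02
```

The second moment can be treated the same way: taking expectations in (2.7) gives a linear ODE for E[X^2(t)] driven by the already-known first moment.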