Basic Numerical Solution Methods for Differential Equations


Sebastian Merkel

February 14, 2019

1 Ordinary Differential Equations (ODEs)

1.1 Preliminaries

• A first-order ordinary differential equation is an equation of the form
$$f(y, y', x) = 0 \tag{1}$$
with a function $f: \mathbb{R}^n \times \mathbb{R}^n \times \mathbb{R} \to \mathbb{R}^n$. A solution is a differentiable function $y: \mathbb{R} \to \mathbb{R}^n$ with the property
$$\forall x \in \mathbb{R}: \quad f(y(x), y'(x), x) = 0.$$
Definitions are adjusted in an obvious way if $f$ is just defined on a subset of $\mathbb{R}^n \times \mathbb{R}^n \times \mathbb{R}$ and/or one is only interested in a solution $y$ defined on a subset of $\mathbb{R}$.

• A $k$-th order ODE is similarly an equation of the form
$$f(y, y', \dots, y^{(k)}, x) = 0$$
with a function $f: \mathbb{R}^n \times \cdots \times \mathbb{R}^n \times \mathbb{R} \to \mathbb{R}^n$. Such an equation can always be reduced to a $k \cdot n$-dimensional first-order ODE by associating it with an equivalent equation for the $k \cdot n$-dimensional vector function
$$z := \begin{pmatrix} y \\ y' \\ \vdots \\ y^{(k-1)} \end{pmatrix},$$
whose first derivative is
$$z' = \begin{pmatrix} y' \\ y^{(2)} \\ \vdots \\ y^{(k)} \end{pmatrix}.$$
The equivalent equation is
$$h(z, z', x) = 0$$
with
$$h\left( \begin{pmatrix} z_1 \\ \vdots \\ z_k \end{pmatrix}, \begin{pmatrix} z'_1 \\ \vdots \\ z'_k \end{pmatrix}, x \right)_i = \begin{cases} f_i(z_1, z'_1, \dots, z'_k, x), & i = 1, \dots, n \\ z'_{j-1,i} - z_{j,i}, & j \in \{2, \dots, k\},\ i = (j-1)n + 1, \dots, jn \end{cases}$$
for $i = 1, \dots, k \cdot n$, where $z_1, \dots, z_k$ denote the $n$-dimensional blocks of $z$. For this reason we focus in the following on first-order ODEs.

• An equation of the form (1) is called (fully) implicit.
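As an illustration of this reduction (an example not taken from the notes), consider the scalar ($n = 1$) second-order ODE $y'' = -y$: with $z = (y, y')$, the equivalent first-order system is $z' = (z_2, -z_1)$. A minimal Python sketch, with illustrative names:

```python
# Reduce the 2nd-order ODE y'' = -y to a first-order system.
# With z = (y, y'), the equivalent system is z' = (z_2, -z_1).

def z_prime(x, z):
    """Right-hand side of the equivalent first-order system."""
    y, y_dot = z
    return [y_dot, -y]  # components (y', y''), using y'' = -y

# The initial value must pin down y(x0) AND y'(x0) jointly:
z0 = [1.0, 0.0]  # y(0) = 1, y'(0) = 0; the exact solution is y(x) = cos(x)
```

Any first-order ODE solver can now be applied to `z_prime` with initial value `z0`.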
Often, the equation can be written in explicit form,
$$y' = g(x, y). \tag{2}$$

• For explicit ODEs there are simple sufficient conditions for existence and uniqueness of a solution satisfying $y(x_0) = y_0$ for a given initial value $(x_0, y_0) \in \mathbb{R} \times \mathbb{R}^n$:$^1$

– Existence (Peano theorem): if $g$ is continuous, then a solution $y$ to (2) such that $y(x_0) = y_0$ exists locally around $x_0$ (meaning a solution function exists on some interval $[x_0 - \varepsilon, x_0 + \varepsilon]$).

– Existence and uniqueness (Picard–Lindelöf theorem): if $g$ is continuous and satisfies a local Lipschitz condition with respect to $y$, that is, there is a constant $L < \infty$ such that $|g(x, y) - g(x, z)| \le L |y - z|$ for all $y, z$ in some neighborhood of $y_0$ and all $x$ in some neighborhood of $x_0$, then there is a unique solution to (2) satisfying $y(x_0) = y_0$, at least locally around $x_0$ (meaning there is an interval $[x_0 - \varepsilon, x_0 + \varepsilon]$ and a unique solution on that interval).

Local Lipschitz conditions are satisfied, for example, if $\frac{\partial g}{\partial y}(x, y)$ exists and is bounded (locally around $(x_0, y_0)$), so regularly existence and uniqueness is not a problem in applications. Note that these local solutions can usually be extended to a "global" solution if they do not blow up or reach inadmissible values in finite time.

1.2 Numerical Solutions of ODEs

1.2.1 Explicit Euler Method

Let the following objects be given: some explicit ODE of the form (2), an initial condition $(x_0, y_0)$ and a desired solution domain $[x_0, \bar{x}]$. A simple solution approach is to discretize the solution domain $[x_0, \bar{x}]$ into $N + 1$ points,
$$x_0 < x_1 < \cdots < x_N = \bar{x},$$
and approximate the derivatives $y'(x_{i-1}) = \frac{dy}{dx}(x_{i-1})$ by finite differences $\frac{y_i - y_{i-1}}{x_i - x_{i-1}}$ for $i = 1, \dots, N$, where $y_i$ is an approximation for the unknown $y(x_i)$.
With this approximation, equation (2) evaluated at the points $x_{i-1}$ becomes
$$\frac{y_i - y_{i-1}}{x_i - x_{i-1}} = g(x_{i-1}, y_{i-1}),$$
which can be solved for $y_i$:
$$y_i = y_{i-1} + g(x_{i-1}, y_{i-1})(x_i - x_{i-1}). \tag{3}$$
Together with the initial value $y_0$, equation (3) defines a simple recursion for the approximate function values $\{y_i\}_{i=0}^{N}$.

Alternatively, one may view the Euler method as a sequence of first-order Taylor approximations to the function $y$: a Taylor expansion around $x_{i-1}$ evaluated at $x_i$ yields
$$y(x_i) \approx y(x_{i-1}) + y'(x_{i-1})(x_i - x_{i-1}) = y(x_{i-1}) + g(x_{i-1}, y(x_{i-1}))(x_i - x_{i-1}),$$
where the second equality holds because $y$ satisfies ODE (2). Replacing the true function values $y(x_{i-1})$ and $y(x_i)$ by their approximations $y_{i-1}$ and $y_i$ yields again recursion (3).

$^1$ If the first-order system is a reduction from a $k$-th order system as above, this means that at $x_0$ the values of all first $k - 1$ derivatives must be pinned down by the initial value.

1.2.2 Implicit Euler Method

Again, let an initial condition $(x_0, y_0)$, a solution domain $[x_0, \bar{x}]$ and a discretization $\{x_i\}_{i=0}^{N}$ of that domain be given. The explicit Euler method approximates derivatives $y'(x_{i-1})$ by $\frac{y_i - y_{i-1}}{x_i - x_{i-1}}$ and uses the ODE in the points $\{x_0, \dots, x_{N-1}\}$ to derive an explicit recursion for $\{y_i\}_{i=0}^{N}$. The implicit Euler method instead approximates $y'(x_i)$ by $\frac{y_i - y_{i-1}}{x_i - x_{i-1}}$ and uses the ODE in the points $\{x_1, \dots, x_N\}$, which leads to the equations
$$\frac{y_i - y_{i-1}}{x_i - x_{i-1}} = g(x_i, y_i), \quad i = 1, \dots, N.$$
Now, there is in general no explicit solution for $y_i$; instead, one has to solve a (potentially nonlinear) equation system in each step to find them. While this looks much more complicated than the explicit method, the implicit method often has superior stability properties (compare Problem 4 of Problem Set 1). This method does not require the ODE to be given in explicit form (2). It works equally for a fully implicit ODE (1).
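The two Euler variants can be sketched in a few lines of Python. This is a minimal illustration, not code from the notes; the implicit variant sidesteps the nonlinear solve by using the linear test equation $y' = \lambda y$, for which the implicit step $y_i = y_{i-1} + \lambda y_i (x_i - x_{i-1})$ has a closed-form solution (the grid, $\lambda$, and all names are illustrative):

```python
import math

def explicit_euler(g, x, y0):
    """Recursion (3): y_i = y_{i-1} + g(x_{i-1}, y_{i-1}) * (x_i - x_{i-1})."""
    ys = [y0]
    for i in range(1, len(x)):
        h = x[i] - x[i - 1]
        ys.append(ys[-1] + g(x[i - 1], ys[-1]) * h)
    return ys

def implicit_euler_linear(lam, x, y0):
    """Implicit Euler for g(x, y) = lam * y: solving
    y_i = y_{i-1} + lam * y_i * h gives y_i = y_{i-1} / (1 - lam * h).
    For a general g, each step instead requires a nonlinear solve."""
    ys = [y0]
    for i in range(1, len(x)):
        h = x[i] - x[i - 1]
        ys.append(ys[-1] / (1.0 - lam * h))
    return ys

lam = -1.0
x = [0.1 * i for i in range(11)]  # uniform grid on [0, 1], step 0.1
ye = explicit_euler(lambda x_, y: lam * y, x, 1.0)
yi = implicit_euler_linear(lam, x, 1.0)
# Both approximate the exact solution y(x) = exp(-x) at the grid points.
```

On this stable test problem both variants converge as the step width shrinks; the stability advantage of the implicit method only becomes visible for stiff problems (large negative $\lambda$ relative to the step width), as explored in the problem set referenced above.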
Then the equation system to solve in each step is
$$f\left(y_i, \frac{y_i - y_{i-1}}{x_i - x_{i-1}}, x_i\right) = 0.$$

1.2.3 Other Methods

There are many other methods that we do not cover explicitly. Two common classes of methods (both exist in implicit and explicit variants) are:

• Runge–Kutta methods: like the Euler method, they are single-step methods in that, when computing the solution $y_i$ over one discretization interval $[x_{i-1}, x_i]$, they only use the function value information from the last step $(x_{i-1}, y_{i-1})$ as an initial condition. However, each step is computed using multiple stages with the goal of increasing the order of consistency.$^2$ Each stage involves an evaluation of $g$ or, in the case of implicit methods, an equation to be solved. Matlab's standard solvers for nonstiff problems are mainly based on these methods.

• Multi-step methods: use information $(x_j, y_j, y'_j)$ from multiple previous steps $j \le i - 1$ when computing $(y_i, y'_i)$, with the aim of gaining efficiency. Implicit versions of (linear) multi-step methods have worse stability properties than certain single-step methods for consistency orders larger than 2.

$^2$ In each step, a numerical scheme produces an error, the so-called truncation error. The method is said to be consistent of order $p$ if the truncation error in step $i$ converges to 0 faster than $(x_i - x_{i-1})^p$ as the step width $x_i - x_{i-1}$ approaches 0. Euler methods are consistent of order 1.

Details on solvers pre-implemented in Matlab can be found in the ODE section of the Matlab help. You can also execute the command odeexamples for example code using the different Matlab solvers.

2 Partial Differential Equations (PDEs)

2.1 Some Basic Definitions

A general partial differential equation in $n$ variables of order $k$ is an equation of the form
$$F\left(x, u, Du, D^2 u, \dots, D^k u\right) = 0,$$
where $x \in \mathbb{R}^n$, $u: \mathbb{R}^n \to \mathbb{R}$ is a function and
$$D^1 u = \left(\frac{\partial u}{\partial x_i}\right)_{i=1,\dots,n}, \quad D^2 u = \left(\frac{\partial^2 u}{\partial x_{i_1} \partial x_{i_2}}\right)_{i_1, i_2 = 1,\dots,n}, \quad D^3 u = \left(\frac{\partial^3 u}{\partial x_{i_1} \partial x_{i_2} \partial x_{i_3}}\right)_{i_1, i_2, i_3 = 1,\dots,n},$$
etc.
A (classical) solution $u$ is a $k$ times continuously differentiable function that satisfies the equation for all $x$. The solution theory of PDEs is much more complex than that of ODEs, and often solutions only exist according to some weaker solution concept.$^3$ The focus here will be on second-order PDEs$^4$ and in particular on PDEs that have the following quasilinear structure:
$$\sum_{i=1}^{n} \sum_{j=1}^{n} a_{ij}(x, u, Du) \frac{\partial^2 u}{\partial x_i \partial x_j}(x) + G(x, u, Du) = 0. \tag{4}$$
Despite the name "quasilinear", such PDEs can be highly nonlinear. These PDEs are typically classified into parabolic, hyperbolic and elliptic equations, although this classification does not cover all possible cases:$^5$

• The equation is called parabolic if the matrix $(a_{ij}(x, y, z))$ is (positive or negative) semidefinite with exactly one eigenvalue assuming the value 0, for all $x \in \mathbb{R}^n$, $y \in \mathbb{R}$, $z \in \mathbb{R}^n$.

• The equation is called hyperbolic if the matrix $(a_{ij}(x, y, z))$ has one eigenvalue that is positive or negative and the remaining $n - 1$ eigenvalues have the opposite sign (but are $\ne 0$), for all $x \in \mathbb{R}^n$, $y \in \mathbb{R}$, $z \in \mathbb{R}^n$.

• The equation is called elliptic if the matrix $(a_{ij}(x, y, z))$ is (positive or negative) definite for all $x \in \mathbb{R}^n$, $y \in \mathbb{R}$, $z \in \mathbb{R}^n$.

Equations typically encountered in economics and finance are elliptic or parabolic. If the equation is parabolic, then the eigenvalue 0 of the $(a_{ij}(x, y, z))$ matrix usually has a Euclidean basis vector as an eigenvector. The associated coordinate can usually be interpreted as "time" and is separated in notation

$^3$ This may also be relevant for economists.
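The classification above can be checked numerically at a single point by inspecting the eigenvalue signs of the (symmetric) coefficient matrix $(a_{ij})$. A minimal sketch, with illustrative function name and tolerance; note the definitions quantify over all $(x, y, z)$, while this only checks one evaluation of the matrix:

```python
import numpy as np

def classify(a, tol=1e-10):
    """Classify a symmetric coefficient matrix a = (a_ij) at one point
    by counting positive, negative, and (numerically) zero eigenvalues."""
    ev = np.linalg.eigvalsh(a)          # eigenvalues of a symmetric matrix
    n = len(ev)
    pos = int(np.sum(ev > tol))
    neg = int(np.sum(ev < -tol))
    zero = n - pos - neg
    if zero == 0 and (pos == n or neg == n):
        return "elliptic"               # definite
    if zero == 1 and (pos == n - 1 or neg == n - 1):
        return "parabolic"              # semidefinite, exactly one zero eigenvalue
    if zero == 0 and ((pos == 1 and neg == n - 1) or (neg == 1 and pos == n - 1)):
        return "hyperbolic"             # one eigenvalue of one sign, rest opposite
    return "not classified"

# Textbook examples (coordinates ordered (t, x) resp. (x, y)):
# heat equation u_t - u_xx = 0:    a = diag(0, -1)  -> parabolic
# wave equation u_tt - u_xx = 0:   a = diag(1, -1)  -> hyperbolic
# Laplace equation u_xx + u_yy = 0: a = diag(1, 1)  -> elliptic
```

For the heat equation the zero eigenvalue indeed belongs to the Euclidean basis vector of the $t$-coordinate, matching the "time" interpretation above.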
