Numerical Methods for Ordinary Differential Equations
Numerical methods for ordinary differential equations
Ulrik Skre Fjordholm
May 1, 2018

Chapter 1: Introduction

Consider the ordinary differential equation (ODE)

$$\dot{x}(t) = f(x(t), t), \qquad x(0) = x_0, \tag{1.1}$$

where $x_0 \in \mathbb{R}^d$ and $f \colon \mathbb{R}^d \times \mathbb{R} \to \mathbb{R}^d$. Under certain conditions on $f$ there exists a unique solution of (1.1), and for certain types of functions $f$ (such as when (1.1) is separable) there are techniques available for computing this solution. However, for most "real-world" examples of $f$ we have no idea what the solution actually looks like. We are left with no choice but to approximate the solution $x(t)$.

Assume that we would like to compute the solution of (1.1) over a time interval $t \in [0, T]$ for some $T > 0$.¹ The most common approach to finding an approximation of the solution of (1.1) starts by partitioning the time interval into a set of points $t_0, t_1, \dots, t_N$, where $t_n = nh$ is a time step and $h = \frac{T}{N}$ is the step size.² A numerical method then computes an approximation of the actual solution value $x(t_n)$ at time $t = t_n$. We will denote this approximation by $y_n$. The basis of most numerical methods is the following simple computation: integrate (1.1) over the time interval $[t_n, t_{n+1}]$ to get

$$x(t_{n+1}) = x(t_n) + \int_{t_n}^{t_{n+1}} f(x(s), s) \, ds. \tag{1.2}$$

Although we could replace $x(t_n)$ and $x(t_{n+1})$ by their approximations $y_n$ and $y_{n+1}$, we cannot use the formula (1.2) directly because the integrand depends on the exact solution $x(s)$. What we can do, however, is to approximate the integral in (1.2) by some quadrature rule, and then approximate $x$ at the quadrature points. More specifically, we consider a quadrature rule

$$\int_{t_n}^{t_{n+1}} g(s) \, ds \approx (t_{n+1} - t_n) \sum_{k=1}^{K} b_k \, g\big(s^{(k)}\big),$$

where $g$ is any function, $b_1, \dots, b_K \in \mathbb{R}$ are quadrature weights and $s^{(1)}, \dots, s^{(K)} \in [t_n, t_{n+1}]$ are the quadrature points. Two basic requirements of this approximation are that if $g$ is nonnegative then the quadrature approximation is also nonnegative, and that the rule correctly integrates constant functions.
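As a concrete illustration, the generic quadrature approximation above can be sketched in a few lines of Python (a sketch; the function name `quadrature` is ours, not from these notes):

```python
import math

def quadrature(g, a, b, weights, points):
    """Approximate the integral of g over [a, b] by
    (b - a) * sum_k b_k * g(s^(k)), for given weights b_k
    and quadrature points s^(k) in [a, b]."""
    return (b - a) * sum(w * g(s) for w, s in zip(weights, points))

# With nonnegative weights summing to 1, the rule preserves
# nonnegativity and integrates constants exactly. Here: the
# one-point rule with its node at the interval midpoint.
exact = 1.0 - math.cos(1.0)   # integral of sin over [0, 1]
approx = quadrature(math.sin, 0.0, 1.0, [1.0], [0.5])
print(abs(approx - exact))    # small quadrature error
```

Any choice of weights and points fits this template; the rules discussed next differ only in those two lists.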
It is straightforward to see that these requirements translate to $b_1, \dots, b_K \geq 0$ and $b_1 + \dots + b_K = 1$.

¹ In this note we will only consider a time interval $t \in [0, T]$ for some positive end time $T > 0$, but it is fully possible to work with negative times or even the whole real line $t \in \mathbb{R}$.
² The more sophisticated adaptive time stepping methods use a step size $h$ which can change from one time step to another. We will not consider such methods in this note.

Some popular methods include the midpoint rule ($K = 1$, $b_1 = 1$, $s^{(1)} = \frac{t_n + t_{n+1}}{2}$), the trapezoidal rule ($K = 2$, $b_1 = b_2 = \frac{1}{2}$, $s^{(1)} = t_n$, $s^{(2)} = t_{n+1}$) and Simpson's rule ($K = 3$, $b_1 = b_3 = \frac{1}{6}$, $b_2 = \frac{4}{6}$, $s^{(1)} = t_n$, $s^{(2)} = \frac{t_n + t_{n+1}}{2}$, $s^{(3)} = t_{n+1}$). As you might recall, these three methods have an error of $O(h^3)$, $O(h^3)$ and $O(h^5)$, respectively, where $h = t_{n+1} - t_n$ denotes the length of the interval.³

Applying the quadrature rule to (1.2) yields

$$y_{n+1} = y_n + h \sum_{k=1}^{K} b_k f\big(y_n^{(k)}, s_n^{(k)}\big), \tag{1.3}$$

where $s_n^{(k)} \in [t_n, t_{n+1}]$ are the quadrature points and $y_n^{(k)} \approx x\big(s_n^{(k)}\big)$. The initial approximation $y_0$ is simply set to the initial data prescribed in (1.1), $y_0 = x_0$.

If the quadrature points $\big(y_n^{(k)}, s_n^{(k)}\big)$ are either $(y_n, t_n)$ or $(y_{n+1}, t_{n+1})$ (or some combination of these) then we can solve (1.3) for $y_{n+1}$ and get a viable numerical method for (1.1). Two such methods, the explicit and implicit Euler methods, are the topic of Chapter 2. However, if we want to construct more accurate numerical methods then we have to include quadrature points at times $s_n^{(k)}$ which lie strictly between $t_n$ and $t_{n+1}$, and consequently we need some way of computing the intermediate values $y_n^{(k)} \approx x\big(s_n^{(k)}\big)$. A systematic way of computing these values is given by the so-called Runge–Kutta methods. These methods converge to the exact solution much faster than the Euler methods, and will be the topic of Chapter 4.
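These error orders are easy to check numerically. The sketch below (in Python; the helper names and the test integrand $e^s$ are ours) measures the error of each rule on a single interval and halves $h$: the midpoint and trapezoidal errors shrink by roughly $2^3 = 8$, Simpson's by roughly $2^5 = 32$.

```python
import math

def quad(g, a, b, weights, points):
    # Generic rule: (b - a) * sum_k b_k * g(s^(k)).
    return (b - a) * sum(w * g(s) for w, s in zip(weights, points))

def single_interval_error(rule, g, G, a, h):
    # Error of one rule applied to g on [a, a + h];
    # G is an antiderivative of g, giving the exact integral.
    b = a + h
    weights, points = rule(a, b)
    return abs(quad(g, a, b, weights, points) - (G(b) - G(a)))

# Weights and points of the three rules on an interval [a, b].
midpoint  = lambda a, b: ([1.0], [(a + b) / 2])
trapezoid = lambda a, b: ([0.5, 0.5], [a, b])
simpson   = lambda a, b: ([1/6, 4/6, 1/6], [a, (a + b) / 2, b])

for name, rule in [("midpoint", midpoint), ("trapezoid", trapezoid), ("simpson", simpson)]:
    e1 = single_interval_error(rule, math.exp, math.exp, 0.0, 0.2)
    e2 = single_interval_error(rule, math.exp, math.exp, 0.0, 0.1)
    print(name, e1 / e2)  # ratio near 2^p: about 8, 8 and 32
```

The observed ratios are slightly above the asymptotic values because $h = 0.2$ is not yet small, but the orders are clearly visible.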
Some natural questions arise when deriving numerical methods for (1.1): How large is the approximation error for a fixed step size $h > 0$? Do the computed solutions converge to $x(t)$ as $h \to 0$? How fast do they converge, and can we increase the speed of convergence? Is the method stable? The purpose of these notes is to answer all of these questions for some of the most commonly used numerical methods for ODEs.

Basic assumptions

In this note we will assume that $f$ is continuously differentiable, bounded, and has bounded first derivatives. That is, we assume that $f \in C^1(\mathbb{R}^d \times \mathbb{R};\, \mathbb{R}^d)$ and that there are constants $M > 0$ and $K > 0$ such that

$$\sup_{x \in \mathbb{R}^d,\; t \in \mathbb{R}} |f(x, t)| \leq M, \tag{1.4a}$$

$$\sup_{(x, t) \in \mathbb{R}^d \times \mathbb{R}} \|D_x f(x, t)\| \leq K, \tag{1.4b}$$

$$\sup_{(x, t) \in \mathbb{R}^d \times \mathbb{R}} \left| \frac{\partial f}{\partial t}(x, t) \right| \leq K. \tag{1.4c}$$

(Here, $D_x f$ denotes the Jacobian matrix of $f$, $\big(D_x f(x, t)\big)_{i,j} = \frac{\partial f_i}{\partial x_j}(x, t)$, and $\|\cdot\|$ denotes the matrix norm.) In particular, $f$ is Lipschitz in $x$ with Lipschitz constant $K$, so these assumptions guarantee that Picard iterations converge, and we can conclude that there exists a solution for all times $t \in \mathbb{R}$. Uniqueness of the solution follows from the Lipschitz bound on $f$ together with Gronwall's lemma.

³ Here we use "big O notation". If $e_h$ is some quantity depending on a parameter $h > 0$ (such as the error in a numerical approximation), then $e_h = O(h^p)$ means that there exists some constant $C > 0$ such that for small values of $h$ we can bound the error as $|e_h| \leq C h^p$.

The assumptions in (1.4) can be replaced by local bounds, although one then needs to be more careful in some of the computations in this note. In particular, the solution might not exist for all times. Without going into detail we state here that all the results in this note remain true (with minor modifications) assuming only local versions of the bounds (1.4).

Chapter 2: Euler's methods

The simplest quadrature rule that we can use in (1.3) is the one-point method $K = 1$, $b_1 = 1$, $s_n^{(1)} = t_n$.
This yields the forward Euler or explicit Euler method

$$y_{n+1} = y_n + h f(y_n, t_n). \tag{2.1}$$

As its name implies, this method is explicit, meaning that the approximation $y_{n+1}$ at the next time step is given by a formula depending only on the approximation $y_n$ at the current time step. Explicit methods are easy to use on a computer because we can type the formula more or less directly into the source code. Another popular one-point quadrature is $K = 1$, $b_1 = 1$, $s_n^{(1)} = t_{n+1}$, which gives the backward Euler or implicit Euler method

$$y_{n+1} = y_n + h f(y_{n+1}, t_{n+1}). \tag{2.2}$$

This method is implicit in the sense that one has to solve an algebraic equation in order to find $y_{n+1}$ as a function of $y_n$ alone. Both Euler methods can be seen as first-order Taylor expansions of $x(t)$ around $t = t_n$ and $t = t_{n+1}$, respectively.

2.1 Example. Consider the ODE (1.1) with $d = 1$ and $f(x) = ax$ for some $a \in \mathbb{R}$. The forward Euler method is

$$y_{n+1} = y_n + h f(y_n) = y_n (1 + ah).$$

The implicit Euler method is

$$y_{n+1} = y_n + h f(y_{n+1}) = y_n + ah\, y_{n+1},$$

and solving for $y_{n+1}$ yields the explicit expression

$$y_{n+1} = \frac{y_n}{1 - ah}.$$

In both cases the method can be written as the difference equation $y_{n+1} = b y_n$, whose solution is $y_n = b^n x_0$. Thus, the explicit Euler method can be written as $y_n = (1 + ah)^n x_0$ and the implicit Euler method as $y_n = (1 - ah)^{-n} x_0$. Using the fact that $h = \frac{T}{N}$, it is a straightforward exercise in calculus to show that for either scheme, the end-time solution $y_N$ converges to the exact value $x(T) = e^{aT} x_0$ as $N \to \infty$.

Figure 2.1 shows a computation with both schemes using the parameter $a = 1$ and initial data $x_0 = 1$.
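Example 2.1 is easy to reproduce on a computer. The sketch below (in Python; the function name `euler_linear` is ours) iterates the two one-step recursions derived above and shows $y_N$ approaching $e^{aT} x_0$ as $N$ grows.

```python
import math

def euler_linear(a, x0, T, N, implicit=False):
    """Apply forward or backward Euler to x' = a x, x(0) = x0,
    using N steps of size h = T / N, and return y_N."""
    h = T / N
    # Each step multiplies y by (1 + ah) for forward Euler,
    # or by 1 / (1 - ah) for backward Euler.
    factor = 1.0 / (1.0 - a * h) if implicit else 1.0 + a * h
    y = x0
    for _ in range(N):
        y = factor * y
    return y

a, x0, T = 1.0, 1.0, 5.0
exact = math.exp(a * T) * x0
for N in (30, 60, 120):
    fwd = euler_linear(a, x0, T, N)
    bwd = euler_linear(a, x0, T, N, implicit=True)
    print(N, abs(fwd - exact), abs(bwd - exact))  # both errors shrink with N
```

For $a = 1$, $x_0 = 1$, $T = 5$ this matches the setting of Figure 2.1: the errors of both schemes are large for $N = 30$ and roughly halve when $N$ is doubled, consistent with first-order convergence.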
While the error in both schemes increases over time, the error decreases as the resolution $N$ is increased.

[Figure 2.1: Forward and backward Euler computations for the linear, one-dimensional ODE in Example 2.1 up to time $T = 5$ using $N = 30$ (left) and $N = 60$ (right) time steps. Both panels plot $y$ against $t$ for the forward Euler, backward Euler and exact solutions.]