Appendix I: Numerical Methods for Ordinary Differential Equations


1 Introduction

Mathematica® integrates systems of ordinary differential equations by means of internal routines that are not visible to the user. These ordinary differential equation solvers are black boxes devised by specialists: given the mathematical problem as input, they return the evolution of the state variable as output. In principle, the user may therefore forget about the numerical scheme actually used and trust the internal library. However, we believe that, when using computer libraries, it is always useful to know which methods are employed and what their peculiarities are. The technical details are left to numerical analysis courses.

This appendix reports the basic knowledge on some numerical methods commonly used for the integration of initial and boundary-value problems; completeness is not, however, claimed. These methods also support the solution of initial-boundary value problems, which were briefly dealt with in Chapter 9. The appendix is written in the form of a user's manual and summarizes the essential features of some well-known methods, weighing their pros and cons and pointing out, for instance, their accuracy and stability properties.

2 Numerical Methods for Initial-Value Problems

In order to understand the concepts of accuracy and stability, consider the simplest conceivable method for solving initial-value problems, the Euler method. If one knows the state variable at time t_i and wants to compute its value at time t_{i+1} = t_i + h, the simplest idea consists in approximating the solution of

\[
\frac{du}{dt} = f(t,u), \qquad u(t_i) = u_i,
\tag{I.2.1}
\]

with

\[
u_{i+1} = u_i + h\, f(t_i, u_i),
\tag{I.2.2}
\]

or, equivalently,

\[
\frac{u_{i+1} - u_i}{h} = f(t_i, u_i).
\tag{I.2.3}
\]

This discretization corresponds to a Taylor expansion of the solution u stopped at first order, i.e., to following the direction of the tangent to the solution at t_i, as shown in Figure I.1a.

Figure I.1 - Differential representation of (a) the forward Euler method and (b) the Crank-Nicholson method.

The truncation error T_err is defined as the norm of the difference between the solution of the differential equation and the numerical solution, divided by the time step used in the numerical scheme. It can be proved, under suitable regularity assumptions, that

\[
T_{\mathrm{err}} = O(h),
\tag{I.2.4}
\]

i.e., the error goes like h and one has a first-order method. In general, if the truncation error goes like h^p, the method is of order p, which means that halving the time step reduces the truncation error by a factor of 2^p. This is what is generally meant by the accuracy of a method.

Another important requirement of a numerical scheme is that errors should not be amplified without bound as time goes by, i.e., the time step has to be chosen so that the scheme is stable. It can be proved that each scheme is characterized by a domain A in the complex plane, called the stability region, which can be used to determine a condition on the time step to be used in the integration. The following procedure should be used (a sketch of the corresponding check is given after the list):

  • Starting from a nonlinear model, consider its linearized form du/dt = Au, obtained for instance by linearization about the initial condition;

  • Compute the eigenvalues λ_1, ..., λ_n of A, the so-called spectrum of the linearized model;

  • Choose, if possible, a not too restrictive time step h such that all hλ_i (i = 1, ..., n) belong to the stability region.
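The procedure above can be illustrated with a short Python sketch, added here for illustration and not part of the original text. The model f, the linearization point, and the finite-difference Jacobian are hypothetical choices; NumPy is assumed to be available, and the stability test used is the forward Euler disc |1 + hλ| < 1 discussed in the next paragraph.

# Sketch of the spectrum-based step-size check described above.
# The model f and its linearization point are hypothetical placeholders.
import numpy as np

def f(t, u):
    # Hypothetical nonlinear model: a weakly damped pendulum-like system.
    return np.array([u[1], -np.sin(u[0]) - 0.1 * u[1]])

def jacobian(t, u, eps=1e-7):
    # Linearize f about the state u by finite differences: A ~ df/du.
    n = len(u)
    A = np.zeros((n, n))
    f0 = f(t, u)
    for j in range(n):
        du = np.zeros(n)
        du[j] = eps
        A[:, j] = (f(t, u + du) - f0) / eps
    return A

def forward_euler_stable(h, eigenvalues):
    # Forward Euler stability region: the disc of radius 1 centered at -1,
    # i.e. |1 + h*lambda| < 1 for every eigenvalue of the linearized model.
    return all(abs(1.0 + h * lam) < 1.0 for lam in eigenvalues)

u0 = np.array([0.5, 0.0])            # hypothetical initial condition
lams = np.linalg.eigvals(jacobian(0.0, u0))
for h in (0.5, 0.1, 0.01):
    print(h, forward_euler_stable(h, lams))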
If all the products hλ_i lie inside the stability region, the numerical errors are not amplified during the integration, and the scheme can therefore be considered for simulating the original nonlinear model. If, instead, some eigenvalues remain outside the stability region, one must be aware that the numerical errors grow exponentially in time.

For instance, in the case of the forward Euler method (I.2.2), the stability region A is the disc in the complex plane bounded by the unit circle centered at -1. This means that $A \cap \Re = [-2, 0]$ and $A \cap \Im = \{0\}$, where $\Re$ and $\Im$ denote, respectively, the real and imaginary axes. Hence the method can be used if the spectrum of the ordinary differential equation contains a negative real eigenvalue λ, since one can set h < 2/|λ| to force hλ into the stability region. However, it cannot be used if the spectrum contains a purely imaginary eigenvalue (corresponding to an undamped oscillation), because one is never able to force it inside the stability region, and it is not convenient if the real part of an eigenvalue is much smaller in magnitude than its imaginary part (small damping).

Runge-Kutta methods

Runge-Kutta methods are popular for their adaptability and versatility and work quite well for all nonstiff problems. They are obtained by evaluating the function f at different values of t and u and by suitably combining these values. In this way it is possible to obtain methods of any order of accuracy. For instance, one has

2nd order:
\[
u_{i+1} = u_i + \frac{h}{2}\,(K_1 + K_2),
\tag{I.2.5}
\]
where
\[
K_1 = f(t_i, u_i), \qquad K_2 = f(t_i + h,\, u_i + h K_1);
\]

3rd order:
\[
u_{i+1} = u_i + \frac{h}{6}\,(K_1 + 4K_2 + K_3),
\tag{I.2.6}
\]
where
\[
K_1 = f(t_i, u_i), \qquad
K_2 = f\!\left(t_i + \tfrac{h}{2},\, u_i + \tfrac{h}{2} K_1\right), \qquad
K_3 = f\!\left(t_i + h,\, u_i - h K_1 + 2 h K_2\right);
\]

4th order:
\[
u_{i+1} = u_i + \frac{h}{6}\,(K_1 + 2K_2 + 2K_3 + K_4),
\tag{I.2.7}
\]
where
\[
K_1 = f(t_i, u_i), \qquad
K_2 = f\!\left(t_i + \tfrac{h}{2},\, u_i + \tfrac{h}{2} K_1\right), \qquad
K_3 = f\!\left(t_i + \tfrac{h}{2},\, u_i + \tfrac{h}{2} K_2\right), \qquad
K_4 = f(t_i + h,\, u_i + h K_3).
\]

Recalling that the initial-value problem with initial condition u(t_i) = u_i can be written in integral form as
\[
u(t) = u_i + \int_{t_i}^{t} f(s, u(s))\, ds,
\tag{I.2.8}
\]
and, in particular,
\[
u(t_{i+1}) = u_i + \int_{t_i}^{t_{i+1}} f(s, u(s))\, ds,
\tag{I.2.9}
\]
it can be observed that both the third- and the fourth-order methods are closely related to Simpson's rule applied to the integral in (I.2.9).

The stability region of the second-order method has an ellipsoidal shape, while those of the third- and fourth-order methods are bean-shaped. In more detail,
\[
\begin{aligned}
\text{2nd order:}\quad & A \cap \Re = [-2, 0] \quad\text{and}\quad A \cap \Im = \{0\},\\
\text{3rd order:}\quad & A \cap \Re = [-2.51, 0] \quad\text{and}\quad A \cap \Im = [-\sqrt{3}, \sqrt{3}],\\
\text{4th order:}\quad & A \cap \Re = [-2.79, 0] \quad\text{and}\quad A \cap \Im = [-2\sqrt{2}, 2\sqrt{2}].
\end{aligned}
\tag{I.2.10}
\]

Third- and fourth-order Runge-Kutta methods are therefore recommended when dealing with problems for which the spectrum is not available, since their stability regions include both a part of the imaginary axis and a part of the negative real axis. The disadvantage of these schemes is that several evaluations of f have to be performed per time step. Therefore, if the computation of f is very heavy, they may demand an excessive amount of labor per step and become inconvenient.

With respect to the multistep methods that will be dealt with later on, Runge-Kutta methods have the advantage of being self-starting and adaptive, in the sense that the time step can be changed at any moment according to an estimate of the local error. Unfortunately, this estimate is not as simple to obtain as it is for the predictor-corrector methods (as we shall see). One can either integrate again using a method of the same order but with half the time step, or integrate again with a higher-order scheme. The time step is then decreased if the difference between the two values (divided by h) is larger than a specified maximum tolerance, and it can be increased if this difference is smaller than the minimum required tolerance. A sketch of such a step-halving check follows.
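The following Python sketch, added here for illustration and not taken from the text, implements one step of the fourth-order scheme (I.2.7) together with the step-halving error check just described: one step of size h is compared with two steps of size h/2, and the step size is halved or doubled accordingly. The tolerances and adjustment factors are hypothetical choices.

# Sketch of the classical fourth-order Runge-Kutta step (I.2.7) and of the
# step-halving error estimate described above. Tolerances and step-size
# adjustment factors are illustrative, not prescriptions of the text.
import numpy as np

def rk4_step(f, t, u, h):
    K1 = f(t, u)
    K2 = f(t + h / 2, u + h / 2 * K1)
    K3 = f(t + h / 2, u + h / 2 * K2)
    K4 = f(t + h, u + h * K3)
    return u + h / 6 * (K1 + 2 * K2 + 2 * K3 + K4)

def adaptive_rk4(f, t, u, h, t_end, tol_max=1e-6, tol_min=1e-8):
    # Integrate from t to t_end, estimating the local error by comparing one
    # step of size h with two steps of size h/2 (difference divided by h).
    while t < t_end:
        h = min(h, t_end - t)
        coarse = rk4_step(f, t, u, h)
        half = rk4_step(f, t, u, h / 2)
        fine = rk4_step(f, t + h / 2, half, h / 2)
        err = np.linalg.norm(fine - coarse) / h
        if err > tol_max:
            h /= 2                 # error too large: halve the step and retry
            continue
        t, u = t + h, fine         # accept the more accurate value
        if err < tol_min:
            h *= 2                 # error very small: allow a larger step
    return u

# Usage on a hypothetical test problem du/dt = -u, u(0) = 1.
print(adaptive_rk4(lambda t, u: -u, 0.0, np.array([1.0]), 0.1, 1.0))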
Of course, such error checking is time consuming.

Crank-Nicholson and Predictor-Corrector Methods

All the previous methods compute the approximation u_{i+1} at t = t_{i+1} on the sole basis of the solution u_i at the previous step t = t_i. The Crank-Nicholson method, instead, averages the values of f at t_i and t_{i+1}, so that it can be written as

\[
u_{i+1} = u_i + \frac{h}{2}\left[ f(t_i, u_i) + f(t_{i+1}, u_{i+1}) \right].
\tag{I.2.11}
\]

This corresponds to proceeding for half the time step along the value of the derivative at t_i and for the remaining part along the derivative at t_{i+1} (see Figure I.1b), or to evaluating the integral in (I.2.9) by the trapezoidal rule (see Figure I.2b): the integral of f(t, u) is approximated by the integral of the polynomial of degree 1 (the linear interpolant) joining with a straight line the nodes P_i = (t_i, f(t_i, u_i)) and P_{i+1} = (t_{i+1}, f(t_{i+1}, u_{i+1})). The method obtained in this way is second order. Its stability region is the whole negative half plane, and therefore

\[
A \cap \Re = (-\infty, 0] \qquad \text{and} \qquad A \cap \Im = (-\infty, +\infty).
\tag{I.2.12}
\]

The size of the stability region is one reason why this method is very popular, in spite of its not too high accuracy.

To obtain the value u_{i+1}, the Crank-Nicholson method thus uses both the information at time t_i and at time t_{i+1}, together with the linear interpolant, to approximate the integral in (I.2.9). One could generalize this procedure by exploiting more information from the past. For instance, taking into account P_{i-1} = (t_{i-1}, f(t_{i-1}, u_{i-1})), one can compute the parabola through P_{i-1}, P_i and P_{i+1} and approximate the same area considered before with the area under the parabola. Proceeding further, one can also consider P_{i-2} and compute the cubic interpolant. According to the number of nodes used, one obtains the following implicit multistep methods (or Adams-Moulton methods):

\[
\begin{aligned}
\text{1st order:}\quad & u_{i+1} = u_i + h\, f(t_{i+1}, u_{i+1}),\\
\text{2nd order:}\quad & u_{i+1} = u_i + \frac{h}{2}\left[ f(t_{i+1}, u_{i+1}) + f(t_i, u_i) \right],\\
\text{3rd order:}\quad & u_{i+1} = u_i + \frac{h}{12}\left[ 5 f(t_{i+1}, u_{i+1}) + 8 f(t_i, u_i) - f(t_{i-1}, u_{i-1}) \right],\\
\text{4th order:}\quad & u_{i+1} = u_i + \frac{h}{24}\left[ 9 f(t_{i+1}, u_{i+1}) + 19 f(t_i, u_i) - 5 f(t_{i-1}, u_{i-1}) + f(t_{i-2}, u_{i-2}) \right].
\end{aligned}
\tag{I.2.13}
\]

The first-order scheme is called the backward Euler method, while the second-order scheme is the Crank-Nicholson method.
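Since the schemes in (I.2.11) and (I.2.13) are implicit, each step requires the solution of an equation for u_{i+1}, which is in general nonlinear. As an added illustration, not part of the original text, the following Python sketch advances one Crank-Nicholson step by a simple fixed-point iteration started from a forward Euler predictor; the tolerance and iteration limit are hypothetical choices, and this iteration converges only when h is small enough relative to the Lipschitz constant of f.

# Sketch of one Crank-Nicholson step (I.2.11). The implicit equation for
# u_{i+1} is solved by fixed-point iteration started from a forward Euler
# predictor; the tolerance and iteration limit are illustrative.
import numpy as np

def crank_nicholson_step(f, t, u, h, tol=1e-12, max_iter=50):
    fi = f(t, u)
    u_next = u + h * fi                                   # predictor (I.2.2)
    for _ in range(max_iter):
        u_new = u + h / 2 * (fi + f(t + h, u_next))       # corrector (I.2.11)
        if np.linalg.norm(u_new - u_next) < tol:
            return u_new
        u_next = u_new
    return u_next

# Usage on the hypothetical test problem du/dt = -u, u(0) = 1, with h = 0.1.
u = np.array([1.0])
t, h = 0.0, 0.1
for _ in range(10):
    u = crank_nicholson_step(lambda t, u: -u, t, u, h)
    t += h
print(u)   # close to exp(-1)

Using an explicit predictor and then applying the implicit formula as a corrector, possibly only once, is the basic idea behind the predictor-corrector methods mentioned in the title of this subsection.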
