Numerical Solution of Ordinary Differential Equations

NUMERICAL SOLUTION OF ORDINARY DIFFERENTIAL EQUATIONS

Kendall Atkinson, Weimin Han, David Stewart
University of Iowa, Iowa City, Iowa

A JOHN WILEY & SONS, INC., PUBLICATION

Copyright © 2009 by John Wiley & Sons, Inc. All rights reserved. Published by John Wiley & Sons, Inc., Hoboken, New Jersey. Published simultaneously in Canada. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 646-8600, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008.

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

For general information on our other products and services please contact our Customer Care Department within the U.S. at 877-762-2974, outside the U.S. at 317-572-3993, or fax 317-572-4002. Wiley also publishes its books in a variety of electronic formats. Some content that appears in print, however, may not be available in electronic format.

Library of Congress Cataloging-in-Publication Data:
Numerical Solution of Ordinary Differential Equations / Kendall E. Atkinson . . . [et al.].
p. cm.—(Wiley series in ???????)
"Wiley-Interscience."
Includes bibliographical references and index.
ISBN ????????????? (pbk.)
1. Numerical analysis. 2. Ordinary differential equations. I. Atkinson, Kendall E. II. Series.
QA31.????.???? 2008 510.??????-???

MATLAB® is a trademark of The MathWorks, Inc. and is used with permission. The MathWorks does not warrant the accuracy of the text or exercises in this book. This book's use or discussion of MATLAB® software or related products does not constitute endorsement or sponsorship by The MathWorks of a particular pedagogical approach or particular use of the MATLAB® software.

Printed in the United States of America.
10 9 8 7 6 5 4 3 2 1

To Alice, Huidi, and Sue

Preface

This book is an expanded version of supplementary notes that we used for a course on ordinary differential equations for upper-division undergraduate students and beginning graduate students in mathematics, engineering, and sciences. The book introduces the numerical analysis of differential equations, describing the mathematical background for understanding numerical methods and giving information on what to expect when using them.
As a reason for studying numerical methods as a part of a more general course on differential equations, many of the basic ideas of the numerical analysis of differential equations are tied closely to the theoretical behavior associated with the problem being solved. For example, the criteria for the stability of a numerical method are closely connected to the stability of the differential equation problem being solved.

This book can be used for a one-semester course on the numerical solution of differential equations, or it can be used as a supplementary text for a course on the theory and application of differential equations. In the latter case, we present more about numerical methods than would ordinarily be covered in a class on ordinary differential equations. This allows the instructor some latitude in choosing what to include, and it allows the students to read further into topics that may interest them. For example, the book discusses methods for solving differential algebraic equations (Chapter 10) and Volterra integral equations (Chapter 12), topics not commonly included in an introductory text on the numerical solution of differential equations.

We also include MATLAB® programs to illustrate many of the ideas that are introduced in the text. Much is to be learned by experimenting with the numerical solution of differential equations. The programs in the book can be downloaded from the following website.

http://www.math.uiowa.edu/NumericalAnalysisODE/

This site also contains graphical user interfaces for use in experimenting with Euler's method and the backward Euler method. These are to be used from within the framework of MATLAB.

Numerical methods vary in their behavior, and the many different types of differential equation problems affect the performance of numerical methods in a variety of ways. An excellent book for "real world" examples of solving differential equations is that of Shampine, Gladwell, and Thompson [74].

The authors would like to thank Olaf Hansen, California State University at San Marcos, for his comments on reading an early version of the book. We also express our appreciation to John Wiley Publishers.
CONTENTS

Introduction

1 Theory of differential equations: An introduction
  1.1 General solvability theory
  1.2 Stability of the initial value problem
  1.3 Direction fields
  Problems

2 Euler's method
  2.1 Definition of Euler's method
  2.2 Error analysis of Euler's method
  2.3 Asymptotic error analysis
      2.3.1 Richardson extrapolation
  2.4 Numerical stability
      2.4.1 Rounding error accumulation
  Problems

3 Systems of differential equations
  3.1 Higher-order differential equations
  3.2 Numerical methods for systems
  Problems

4 The backward Euler method and the trapezoidal method
  4.1 The backward Euler method
  4.2 The trapezoidal method
  Problems

5 Taylor and Runge–Kutta methods
  5.1 Taylor methods
  5.2 Runge–Kutta methods
      5.2.1 A general framework for explicit Runge–Kutta methods
  5.3 Convergence, stability, and asymptotic error
      5.3.1 Error prediction and control
  5.4 Runge–Kutta–Fehlberg methods
  5.5 MATLAB codes
  5.6 Implicit Runge–Kutta methods
      5.6.1 Two-point collocation methods
  Problems

6 Multistep methods
  6.1 Adams–Bashforth methods
  6.2 Adams–Moulton methods
  6.3 Computer codes
      6.3.1 MATLAB ODE codes
  Problems

7 General error analysis for multistep methods
  7.1 Truncation error
  7.2 Convergence
  7.3 A general error analysis
      7.3.1 Stability theory
      7.3.2 Convergence theory
      7.3.3 Relative stability and weak stability
  Problems

8 Stiff differential equations
  8.1 The method of lines for a parabolic equation
      8.1.1 MATLAB programs for the method of lines
  8.2 Backward differentiation formulas
  8.3 Stability regions for multistep methods
  8.4 Additional sources of difficulty
      8.4.1 A-stability and L-stability
      8.4.2 Time-varying problems and stability
  8.5 Solving the finite-difference method
  8.6 Computer codes
  Problems

9 Implicit RK methods for stiff differential equations
  9.1 Families of implicit Runge–Kutta methods
  9.2 Stability of Runge–Kutta methods
  9.3 Order reduction
  9.4 Runge–Kutta methods for stiff equations in practice
  Problems

10 Differential algebraic equations
  10.1 Initial conditions and drift
  10.2 DAEs as stiff differential equations
  10.3 Numerical issues: higher index problems
  10.4 Backward differentiation methods for DAEs
      10.4.1 Index 1 problems
      10.4.2 Index 2 problems
  10.5 Runge–Kutta methods for DAEs
      10.5.1 Index 1 problems
      10.5.2 Index 2 problems
  10.6 Index three problems from mechanics
      10.6.1 Runge–Kutta methods for mechanical index 3 systems
  10.7 Higher index DAEs
  Problems

11 Two-point boundary value problems
  11.1 A finite-difference method
      11.1.1 Convergence
      11.1.2 A numerical example
      11.1.3 Boundary conditions involving the derivative
  11.2 Nonlinear two-point boundary value problems
      11.2.1 Finite difference methods
      11.2.2 Shooting methods
      11.2.3 Collocation methods
      11.2.4 Other methods and problems
  Problems

12 Volterra integral equations
  12.1 Solvability theory
      12.1.1 Special equations
  12.2 Numerical methods
      12.2.1 The trapezoidal method
      12.2.2 Error for the trapezoidal method
      12.2.3 General schema for numerical methods
  12.3 Numerical methods: Theory
      12.3.1 Numerical stability
      12.3.2 Practical numerical stability
  Problems

Appendix A. Taylor's Theorem
Appendix B. Polynomial interpolation

References
Index

Introduction

Differential equations are among the most important mathematical tools used in producing models in the physical sciences, biological sciences, and engineering. In this text, we consider numerical methods for solving ordinary differential equations, that is, those differential equations that have only one independent variable. The differential equations we consider in most of the book are of the form

Y'(t) = f(t, Y(t)),

where Y(t) is an unknown function that is being sought. The given
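For instance (a standard illustration, not quoted from the book's text), the simplest equation of this form is the linear model problem, whose exact solution is available in closed form:

```latex
\[
  Y'(t) = \lambda\, Y(t), \qquad Y(0) = Y_0
  \quad\Longrightarrow\quad
  Y(t) = Y_0\, e^{\lambda t},
\]
% here f(t, Y) = \lambda Y; this model problem is the usual test case for the
% numerical methods listed in the contents above.
```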
Recommended publications
  • Numerical Integration
    Chapter 12: Numerical Integration. Numerical differentiation methods compute approximations to the derivative of a function from known values of the function. Numerical integration uses the same information to compute numerical approximations to the integral of the function. An important use of both types of methods is estimation of derivatives and integrals for functions that are only known at isolated points, as is the case with, for example, measurement data. An important difference between differentiation and integration is that for most functions it is not possible to determine the integral via symbolic methods, but we can still compute numerical approximations to virtually any definite integral. Numerical integration methods are therefore more useful than numerical differentiation methods, and are essential in many practical situations. We use the same general strategy for deriving numerical integration methods as we did for numerical differentiation methods: We find the polynomial that interpolates the function at some suitable points, and use the integral of the polynomial as an approximation to the integral of the function. This means that the truncation error can be analysed in basically the same way as for numerical differentiation. However, when it comes to round-off error, integration behaves differently from differentiation: Numerical integration is very insensitive to round-off errors, so we will ignore round-off in our analysis. The mathematical definition of the integral is basically via a numerical integration method, and we therefore start by reviewing this definition. We then derive the simplest numerical integration method, and see how its error can be analysed. We then derive two other methods that are more accurate, but for these we just indicate how the error analysis can be done.
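    The simplest instance of this strategy is linear interpolation on each subinterval, which gives the composite trapezoidal rule; the short sketch below (the integrand is my own choice, not from the chapter) illustrates it.

    ```matlab
    % A minimal sketch of the strategy described above: interpolate the integrand
    % on each subinterval by a linear polynomial and integrate that polynomial
    % exactly, which yields the composite trapezoidal rule.  The integrand is an
    % assumption, chosen so a reference value is available via erf.
    f = @(x) exp(-x.^2);                  % integrand to be integrated over [a, b]
    a = 0;  b = 1;  n = 100;              % interval and number of subintervals
    x = linspace(a, b, n+1);
    h = (b - a) / n;
    T = h * (0.5*f(x(1)) + sum(f(x(2:end-1))) + 0.5*f(x(end)));  % trapezoidal rule
    fprintf('trapezoidal rule: %.8f   reference: %.8f\n', T, sqrt(pi)/2*erf(1));
    ```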
  • On the Numerical Solution of Equations Involving Differential Operators with Constant Coefficients 1
    The author acknowledges with thanks the aid of Dolores Ufford, who assisted in the calculations. Yudell L. Luke, Midwest Research Institute, Kansas City 2, Missouri. 1 W. E. Milne, "The remainder in linear methods of approximation," NBS, Jn. of Research, v. 43, 1949, p. 501-511. 2 W. E. Milne, Numerical Calculus, p. 108-116. 3 M. Bates, On the Development of Some New Formulas for Numerical Integration. Stanford University, June, 1929. 4 M. E. Youngberg, Formulas for Mechanical Quadrature of Irrational Functions. Oregon State College, June, 1937. (The author is indebted to the referee for references 3 and 4.) 6 E. L. Kaplan, "Numerical integration near a singularity," Jn. Math. Phys., v. 26, April, 1952, p. 1-28.

    On the Numerical Solution of Equations Involving Differential Operators with Constant Coefficients. 1. The General Linear Differential Operator. Consider the differential equation of order n, (1) Ly + F(y, x) = 0, where the operator L is defined by Ly = Σ_k P_k(x) d^k y / dx^k, and the functions P_k(x) and F(y, x) are such that a solution y and its first m derivatives exist in 0 < x < X. In the special case when (1) is linear the solution can be completely determined by the well known method of variation of parameters when n independent solutions of the associated homogeneous equations are known. Thus for the case when F(y, x) is independent of y, the solution of the non-homogeneous equation can be obtained by mere quadratures, rather than by laborious stepwise integrations. It does not seem to have been observed, however, that even when F(y, x) involves the dependent variable y, the numerical integrations can be so arranged that the contributions to the integral from the upper limit at each step of the integration, at the time when y is still unknown at the upper limit, drop out.
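    The "mere quadratures" mentioned above can be made concrete in the second-order case; the following variation-of-parameters formula is a standard illustration and is not quoted from the paper. For y'' + p(x) y' + q(x) y = g(x), with independent homogeneous solutions y_1, y_2 and Wronskian W = y_1 y_2' - y_2 y_1', a particular solution is

    ```latex
    \[
      y_p(x) \;=\; -\,y_1(x)\int \frac{y_2(x)\,g(x)}{W(x)}\,dx
              \;+\; y_2(x)\int \frac{y_1(x)\,g(x)}{W(x)}\,dx .
    \]
    ```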
  • The Original Euler's Calculus-Of-Variations Method: Key to Lagrangian Mechanics for Beginners
    The original Euler's calculus-of-variations method: Key to Lagrangian mechanics for beginners. Jozef Hanc, Technical University, Vysokoskolska 4, 042 00 Kosice, Slovakia. Leonhard Euler's original version of the calculus of variations (1744) used elementary mathematics and was intuitive, geometric, and easily visualized. In 1755 Euler (1707-1783) abandoned his version and adopted instead the more rigorous and formal algebraic method of Lagrange. Lagrange's elegant technique of variations not only bypassed the need for Euler's intuitive use of a limit-taking process leading to the Euler-Lagrange equation but also eliminated Euler's geometrical insight. More recently Euler's method has been resurrected, shown to be rigorous, and applied as one of the direct variational methods important in analysis and in computer solutions of physical processes. In our classrooms, however, the study of advanced mechanics is still dominated by Lagrange's analytic method, which students often apply uncritically using "variational recipes" because they have difficulty understanding it intuitively. The present paper describes an adaptation of Euler's method that restores intuition and geometric visualization. This adaptation can be used as an introductory variational treatment in almost all of undergraduate physics and is especially powerful in modern physics. Finally, we present Euler's method as a natural introduction to computer-executed numerical analysis of boundary value problems and the finite element method. I. INTRODUCTION. In his pioneering 1744 work The method of finding plane curves that show some property of maximum and minimum, Leonhard Euler introduced a general mathematical procedure or method for the systematic investigation of variational problems.
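    As a minimal sketch of the "direct variational method" idea mentioned above (the integrand and boundary data below are illustrative assumptions, not taken from the paper), one can represent y by its nodal values, replace the functional by a finite sum, and minimize numerically:

    ```matlab
    % A sketch of a direct variational approach: approximate the functional
    % J[y] = int_0^1 ( y'(x)^2 + y(x)^2 ) dx by a finite sum using forward
    % differences, and minimize over the interior nodal values of y.
    % Integrand and boundary data y(0)=0, y(1)=1 are illustrative assumptions.
    n  = 10;  h = 1/n;  x = (0:n)'*h;
    ya = 0;  yb = 1;
    Jh = @(y)  h * sum( (diff(y)/h).^2 + ((y(1:end-1) + y(2:end))/2).^2 );
    J  = @(yi) Jh([ya; yi(:); yb]);        % functional of the interior values only
    opts = optimset('MaxFunEvals', 1e5, 'MaxIter', 1e5);
    yi = fminsearch(J, x(2:end-1), opts);  % straight line as the initial guess
    y  = [ya; yi; yb];
    plot(x, y, '.-'), xlabel('x'), ylabel('y_h(x)')
    ```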
  • Improving Numerical Integration and Event Generation with Normalizing Flows — HET Brown Bag Seminar, University of Michigan —
    Improving Numerical Integration and Event Generation with Normalizing Flows | HET Brown Bag Seminar, University of Michigan | Claudius Krause, Fermi National Accelerator Laboratory, September 25, 2019. In collaboration with: Christina Gao, Stefan Höche, Joshua Isaacson. arXiv: 191x.abcde

    Monte Carlo simulations are increasingly important: MC event generation is needed for signal and background predictions, and the required CPU time will increase in the next years. [Figures: projected ATLAS computing needs (https://twiki.cern.ch/twiki/bin/view/AtlasPublic/ComputingandSoftwarePublicResults), and Sherpa/Pythia CPU hours and trial frequencies versus jet multiplicity for W+jets at the LHC at 14 TeV, pT,j > 20 GeV, |η_j| < 6; Stefan Höche, Stefan Prestel, Holger Schulz [1905.05120; PRD].] The bottlenecks for evaluating large final-state multiplicities are a slow evaluation of the matrix element and a low unweighting efficiency.
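    The role of a learned sampling density can be seen in a toy importance-sampling example (entirely my own illustration under stated assumptions, not code from the talk); normalizing flows aim to construct such densities automatically:

    ```matlab
    % Toy comparison of plain Monte Carlo and importance sampling for
    % I = int_0^1 exp(-50*(x-1/2)^2) dx.  All choices here are assumptions.
    rng(1);  N = 1e5;
    f  = @(x) exp(-50*(x - 0.5).^2);
    % plain Monte Carlo with uniform samples on [0, 1]
    xu = rand(N, 1);
    Iu = mean(f(xu));   su = std(f(xu)) / sqrt(N);
    % importance sampling from a normal density N(0.5, 0.15^2) truncated to [0, 1]
    xq = 0.5 + 0.15*randn(N, 1);
    xq = xq(xq >= 0 & xq <= 1);                      % truncation (discards very little)
    Z  = erf(0.5 / (0.15*sqrt(2)));                  % normalization of the truncation
    q  = @(x) exp(-(x - 0.5).^2 / (2*0.15^2)) / (0.15*sqrt(2*pi)) / Z;
    w  = f(xq) ./ q(xq);                             % importance weights
    Iq = mean(w);   sq = std(w) / sqrt(numel(xq));
    fprintf('uniform:    %.6f +/- %.6f\n', Iu, su);
    fprintf('importance: %.6f +/- %.6f\n', Iq, sq);
    ```

    Because the proposal density roughly matches the shape of the integrand, the weights vary little and the statistical error drops sharply for the same number of samples.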
  • Polarization Fields and Phase Space Densities in Storage Rings: Stroboscopic Averaging and the Ergodic Theorem
    Physica D 234 (2007) 131–149, www.elsevier.com/locate/physd. Polarization fields and phase space densities in storage rings: Stroboscopic averaging and the ergodic theorem. James A. Ellison, Klaus Heinemann, Department of Mathematics and Statistics, University of New Mexico, Albuquerque, NM 87131, United States. Received 1 May 2007; received in revised form 6 July 2007; accepted 9 July 2007; available online 14 July 2007. Communicated by C.K.R.T. Jones. Abstract: A class of orbital motions with volume preserving flows and with vector fields periodic in the "time" parameter θ is defined. Spin motion coupled to the orbital dynamics is then defined, resulting in a class of spin–orbit motions which are important for storage rings. Phase space densities and polarization fields are introduced. It is important, in the context of storage rings, to understand the behavior of periodic polarization fields and phase space densities. Due to the 2π time periodicity of the spin–orbit equations of motion the polarization field, taken at a sequence of increasing time values θ, θ + 2π, θ + 4π, ..., gives a sequence of polarization fields, called the stroboscopic sequence. We show, by using the Birkhoff ergodic theorem, that under very general conditions the Cesàro averages of that sequence converge almost everywhere on phase space to a polarization field which is 2π-periodic in time. This fulfills the main aim of this paper in that it demonstrates that the tracking algorithm for stroboscopic averaging, encoded in the program SPRINT and used in the study of spin motion in storage rings, is mathematically well-founded.
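    The averaging idea can be seen in a much simpler setting (a toy scalar signal of my own choosing, not the paper's spin–orbit system): sampling at θ0 + 2πn and forming Cesàro averages filters out the non-periodic part.

    ```matlab
    % Toy stroboscopic/Cesaro averaging: a signal with a 2*pi-periodic part plus an
    % incommensurate part.  Averaging the samples g(theta0 + 2*pi*n) over n
    % recovers the periodic part at theta0, by equidistribution (ergodic averaging).
    g       = @(th) sin(th) + 0.7*sin(sqrt(2)*th);   % periodic + quasi-periodic parts
    theta0  = 1.3;                                   % phase at which we average
    N       = 10000;
    n       = (0:N-1)';
    samples = g(theta0 + 2*pi*n);                    % stroboscopic sequence
    cesaro  = cumsum(samples) ./ (1:N)';             % running Cesaro averages
    fprintf('Cesaro average after %d terms: %.4f   periodic part sin(theta0) = %.4f\n', ...
            N, cesaro(end), sin(theta0));
    ```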
  • Approximations in Numerical Analysis, Cont'd
    Jim Lambers, MAT 460/560, Fall Semester 2009-10, Lecture 7 Notes. These notes correspond to Section 1.3 in the text. Approximations in Numerical Analysis, cont'd. Convergence. Many algorithms in numerical analysis are iterative methods that produce a sequence {αn} of approximate solutions which, ideally, converges to a limit α that is the exact solution as n approaches ∞. Because we can only perform a finite number of iterations, we cannot obtain the exact solution, and we have introduced computational error. If our iterative method is properly designed, then this computational error will approach zero as n approaches ∞. However, it is important that we obtain a sufficiently accurate approximate solution using as few computations as possible. Therefore, it is not practical to simply perform enough iterations so that the computational error is determined to be sufficiently small, because it is possible that another method may yield comparable accuracy with less computational effort. The total computational effort of an iterative method depends on both the effort per iteration and the number of iterations performed. Therefore, in order to determine the amount of computation that is needed to attain a given accuracy, we must be able to measure the error in αn as a function of n. The more rapidly this function approaches zero as n approaches ∞, the more rapidly the sequence of approximations {αn} converges to the exact solution α, and as a result, fewer iterations are needed to achieve a desired accuracy. We now introduce some terminology that will aid in the discussion of the convergence behavior of iterative methods.
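    A concrete way to "measure the error in αn as a function of n" (my own toy example, not from the notes) is to run an iteration with a known limit and inspect successive error ratios:

    ```matlab
    % The fixed-point iteration x_{n+1} = cos(x_n) converges linearly, with
    % e_{n+1} ~ |sin(alpha)| * e_n, so the observed error ratio settles near |sin(alpha)|.
    alpha = fzero(@(x) x - cos(x), 0.7);   % reference value of the fixed point
    x   = 1;
    err = zeros(40, 1);
    for n = 1:40
        x      = cos(x);
        err(n) = abs(x - alpha);           % computational error after n iterations
    end
    fprintf('observed ratio e_40/e_39 = %.4f,   |sin(alpha)| = %.4f\n', ...
            err(40)/err(39), abs(sin(alpha)));
    ```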
  • On the Three Methods for Bounding the Rate of Convergence for Some Continuous-Time Markov Chains
    On the Three Methods for Bounding the Rate of Convergence for some Continuous-time Markov Chains. Alexander Zeifman, Yacov Satin, Anastasia Kryukova, Rostislav Razumchik, Ksenia Kiseleva, Galina Shilova. Abstract. Consideration is given to the three different analytical methods for the computation of upper bounds for the rate of convergence to the limiting regime of one specific class of (in)homogeneous continuous-time Markov chains. This class is particularly suited to describe evolutions of the total number of customers in (in)homogeneous M/M/S queueing systems with possibly state-dependent arrival and service intensities, batch arrivals and services. One of the methods is based on the logarithmic norm of a linear operator function; the other two rely on Lyapunov functions and differential inequalities, respectively. Less restrictive conditions (compared to those known from the literature) under which the methods are applicable, are being formulated. Two numerical examples are given. It is also shown that for homogeneous birth-death Markov processes defined on a finite state space with all transition rates being positive, all methods yield the same sharp upper bound. (arXiv:1911.04086v2 [math.PR], 24 Feb 2020.) Affiliations: A. Zeifman — Vologda State University, Vologda, Russia; Institute of Informatics Problems, Federal Research Center "Computer Science and Control", Russian Academy of Sciences, Moscow, Russia; Vologda Research Center of the Russian Academy of Sciences, Vologda, Russia; E-mail: [email protected]. Y. Satin — Vologda State University, Vologda, Russia. A. Kryukova — Vologda State
  • CONVERGENCE RATES of MARKOV CHAINS 1. Orientation 1
    CONVERGENCE RATES OF MARKOV CHAINS. Steven P. Lalley.

    Contents:
    1. Orientation
       1.1. Example 1: Random-to-Top Card Shuffling
       1.2. Example 2: The Ehrenfest Urn Model of Diffusion
       1.3. Example 3: Metropolis-Hastings Algorithm
       1.4. Reversible Markov Chains
       1.5. Exercises: Reversibility, Symmetries and Stationary Distributions
    2. Coupling
       2.1. Coupling and Total Variation Distance
       2.2. Coupling Constructions and Convergence of Markov Chains
       2.3. Couplings for the Ehrenfest Urn and Random-to-Top Shuffling
       2.4. The Coupon Collector's Problem
       2.5. Exercises
       2.6. Convergence Rates for the Ehrenfest Urn and Random-to-Top
       2.7. Exercises
    3. Spectral Analysis
       3.1. Transition Kernel of a Reversible Markov Chain
       3.2. Spectrum of the Ehrenfest random walk
       3.3. Rate of convergence of the Ehrenfest random walk

    1. ORIENTATION. Finite-state Markov chains have stationary distributions, and irreducible, aperiodic, finite-state Markov chains have unique stationary distributions. Furthermore, for any such chain the n-step transition probabilities converge to the stationary distribution. In various applications – especially in Markov chain Monte Carlo, where one runs a Markov chain on a computer to simulate a random draw from the stationary distribution – it is desirable to know how many steps it takes for the n-step transition probabilities to become close to the stationary probabilities. These notes will introduce several of the most basic and important techniques for studying this problem, coupling and spectral analysis. We will focus on two Markov chains for which the convergence rate is of particular interest: (1) the random-to-top shuffling model and (2) the Ehrenfest urn model.
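    A quick numerical look at this question (an illustrative sketch of my own, not taken from the notes) tracks the total variation distance to stationarity for a lazy Ehrenfest urn:

    ```matlab
    % Ehrenfest urn on states {0,...,N} (number of balls in urn A).  The plain chain
    % is periodic, so we use its lazy version (I + P)/2, which is aperiodic and has
    % the same stationary law Binomial(N, 1/2).
    N = 20;
    P = zeros(N+1);
    for k = 0:N
        if k > 0, P(k+1, k)   = k/N;     end          % a ball leaves urn A
        if k < N, P(k+1, k+2) = (N-k)/N; end          % a ball enters urn A
    end
    Plazy = (eye(N+1) + P) / 2;
    pi0   = arrayfun(@(k) nchoosek(N, k), 0:N) / 2^N; % stationary binomial law
    p     = [1, zeros(1, N)];                         % start with urn A empty
    for n = 1:100
        p = p * Plazy;                                % n-step distribution
        if mod(n, 20) == 0
            fprintf('n = %3d   TV distance = %.4f\n', n, 0.5*sum(abs(p - pi0)));
        end
    end
    ```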
  • Chapter 16: Differential Equations
    Chapter 16: Differential Equations. A vast number of mathematical models in various areas of science and engineering involve differential equations. This chapter provides a starting point for a journey into the branch of scientific computing that is concerned with the simulation of differential problems. We shall concentrate mostly on developing methods and concepts for solving initial value problems for ordinary differential equations: this is the simplest class (although, as you will see, it can be far from being simple), yet a very important one. It is the only class of differential problems for which, in our opinion, up-to-date numerical methods can be learned in an orderly and reasonably complete fashion within a first course text. Section 16.1 prepares the setting for the numerical treatment that follows in Sections 16.2–16.6 and beyond. It also contains a synopsis of what follows in this chapter. Numerical methods for solving boundary value problems for ordinary differential equations receive a quick review in Section 16.7. More fundamental difficulties arise here, so this section is marked as advanced. Most mathematical models that give rise to differential equations in practice involve partial differential equations, where there is more than one independent variable. The numerical treatment of partial differential equations is a vast and complex subject that relies directly on many of the methods introduced in various parts of this text. An orderly development belongs in a more advanced text, though, and our own description in Section 16.8 is downright anecdotal, relying in part on examples introduced earlier. 16.1 Initial value ordinary differential equations. Consider the problem of finding a function y(t) that satisfies the ordinary differential equation (ODE) dy/dt = f(t, y), a ≤ t ≤ b.
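    A minimal sketch of the simplest numerical treatment of such an initial value problem (the right-hand side and data below are my own assumptions) is forward Euler:

    ```matlab
    % Forward Euler for dy/dt = f(t,y), y(a) = y0, on [a, b] with n steps:
    % y_{i+1} = y_i + h * f(t_i, y_i).
    f  = @(t, y) -2*t*y;                 % example f(t, y); exact solution exp(-t^2)
    a  = 0;  b = 2;  y0 = 1;  n = 200;
    h  = (b - a) / n;
    t  = a + (0:n)'*h;
    y  = zeros(n+1, 1);  y(1) = y0;
    for i = 1:n
        y(i+1) = y(i) + h * f(t(i), y(i));   % one Euler step
    end
    fprintf('Euler: y(2) = %.6f   exact: %.6f\n', y(end), exp(-b^2));
    ```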
  • Differential Equations
    Differential Equations. Overview of differential equations; initial value problems; explicit numeric methods; implicit numeric methods; modular implementation. Physics-based simulation is an algorithm that produces a sequence of states over time under the laws of physics. What is a state? The simulation advances the state by an update of the form x_{i+1} = x_i + Δx, where the increment Δx is supplied by an integrator that combines the Newtonian laws with forces such as gravity, wind gusts, and elastic forces. What is a differential equation? It describes the relation between an unknown function and its derivatives. An ordinary differential equation (ODE) is a relation that contains functions of only one independent variable and its derivatives. An ODE is an equality involving a function and its derivatives: ẋ(t) = f(x(t)), where f is a known function that evaluates the time derivative of the unknown state at a given time. What does it mean to "solve" an ODE? Symbolic solutions: standard introductory differential equation courses focus on finding solutions analytically; linear ODEs can be solved by integral transforms; use DSolve[eqn,x,t] in Mathematica. For example, the differential equation ẋ(t) = −k x(t) has solution x(t) = e^(−kt). Numerical solutions: in this class, we will be concerned with numerical solutions. The derivative function f is regarded as a black box: given a numerical value x and t, the black box returns the time derivative of x.
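    The "black box" view suggests a modular integrator that only needs a function handle for f; below is a minimal sketch (the example derivative and all names are assumptions) using one explicit midpoint step per update:

    ```matlab
    % A generic one-step integrator: the derivative function f is a black box
    % handle passed to an explicit midpoint (RK2) step.
    f       = @(t, x) -x;                        % example "black box" derivative
    stepMid = @(f, t, x, h) x + h * f(t + h/2, x + (h/2)*f(t, x));   % one RK2 step
    h = 0.1;  t = 0;  x = 1.0;
    for i = 1:50
        x = stepMid(f, t, x, h);                 % advance the state: x_{i+1} = x_i + dx
        t = t + h;
    end
    fprintf('x(%.1f) = %.6f   exact exp(-t) = %.6f\n', t, x, exp(-t));
    ```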
  • Introduction to Numerical Analysis
    INTRODUCTION TO NUMERICAL ANALYSIS. Cho, Hyoung Kyu, Department of Nuclear Engineering, Seoul National University.

    10. NUMERICAL INTEGRATION
    10.1 Background
    10.2 Euler's Methods
    10.3 Modified Euler's Method
    10.4 Midpoint Method
    10.5 Runge-Kutta Methods
    10.6 Multistep Methods
    10.7 Predictor-Corrector Methods
    10.8 System of First-Order Ordinary Differential Equations
    10.9 Solving a Higher-Order Initial Value Problem
    10.10 Use of MATLAB Built-In Functions for Solving Initial-Value Problems
    10.11 Local Truncation Error in Second-Order Runge-Kutta Method
    10.12 Step Size for Desired Accuracy
    10.13 Stability
    10.14 Stiff Ordinary Differential Equations

    10.1 Background. An ordinary differential equation is a differential equation that has one independent variable. In a first-order ODE, the first derivative of the dependent variable is taken with respect to the independent variable. Example: the rates of water inflow and outflow give the time rate of change of the mass in a tank, and hence an equation for the rate of height change. This is a time-dependent problem: the independent variable is time and the dependent variable is the water level. To obtain a specific solution, a first-order ODE must have an initial condition or constraint that specifies the value of the dependent variable at a particular value of the independent variable. Typical time-dependent problems therefore come with an initial condition and form an initial value problem (IVP). In many situations an analytical solution is not possible! The numerical solution of a first-order ODE is a set of discrete points that approximate the function y(x) over the domain of the solution, split into N subintervals. Overview of numerical methods used for solving a first-order ODE: start from the initial value, then estimate the value at a second nearby point, a third point, and so on, using either a single-step or a multistep approach.
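    The built-in-solver route of Section 10.10 can be sketched as follows (the right-hand side, modelling inflow minus outflow of the tank, and all numbers are my own assumptions):

    ```matlab
    % Solving a first-order IVP dy/dt = f(t,y), y(t0) = y0, with a MATLAB built-in solver.
    f      = @(t, y) (1 + 0.5*sin(t)) - 0.3*sqrt(max(y, 0));   % inflow minus outflow (assumed)
    tspan  = [0 40];
    y0     = 2.0;                                              % initial water level
    [t, y] = ode45(f, tspan, y0);                              % adaptive Runge-Kutta solver
    plot(t, y), xlabel('t'), ylabel('water level y(t)')
    ```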
  • Numerical Methods for Evolutionary Systems, Lecture 2 Stiff Equations
    Numerical Methods for Evolutionary Systems, Lecture 2. C. W. Gear, Celaya, Mexico, January 2007. Stiff Equations. The history of stiff differential equations goes back 55 years to the very early days of machine computation. The first identification of stiff equations as a special class of problems seems to have been due to chemists [chemical engineers] in 1952 (C. F. Curtiss and J. O. Hirschfelder, Integration of stiff equations, Proc. of the National Academy of Sciences of U.S., 38 (1952), pp. 235–243). For 15 years stiff equations presented serious difficulties and were hard to solve, both in chemical problems (reaction kinetics) and increasingly in other areas (electrical engineering, mechanical engineering, etc.) until around 1968 when a variety of methods began to appear in the literature. The nature of the problems that leads to stiffness is the existence of physical phenomena with very different speeds (time constants) so that, while we may be interested in relatively slow aspects of the model, there are features of the model that could change very rapidly. Prior to the availability of electronic computers, one could seldom solve problems that were large enough for this to be a problem, but once electronic computers became available and people began to apply them to all sorts of problems, we very quickly ran into stiff problems. In fact, most large problems are stiff, as we will see as we look at them in a little detail. Suppose the family of solutions to an ODE looks like the figure below. This looks to be an ideal problem for integration because, almost no matter where we start, the final solution finishes up on the blue curve.
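    A standard toy example of this behavior (my own choice, not from the lecture) is y' = −1000 (y − sin t) + cos t, whose solutions are rapidly attracted to the slow curve sin t; an explicit method must take a tiny step to stay stable, while an implicit one does not:

    ```matlab
    % Stiff test problem: y' = -1000*(y - sin(t)) + cos(t), y(0) = 1,
    % exact solution y(t) = sin(t) + exp(-1000*t).
    lambda = 1000;  h = 0.01;  tEnd = 1;  n = round(tEnd/h);
    t  = (0:n)'*h;
    yf = zeros(n+1, 1);  yb = zeros(n+1, 1);  yf(1) = 1;  yb(1) = 1;
    for i = 1:n
        % forward (explicit) Euler
        yf(i+1) = yf(i) + h*(-lambda*(yf(i) - sin(t(i))) + cos(t(i)));
        % backward (implicit) Euler; the implicit equation is linear, so solve directly
        yb(i+1) = (yb(i) + h*(lambda*sin(t(i+1)) + cos(t(i+1)))) / (1 + lambda*h);
    end
    exact = sin(tEnd) + exp(-lambda*tEnd);
    fprintf('forward Euler: %.3e   backward Euler: %.6f   exact: %.6f\n', ...
            yf(end), yb(end), exact);
    ```

    With h = 0.01 the explicit iterate blows up (its amplification factor is |1 − 1000h| = 9), while the implicit iterate tracks sin t closely at the same step size.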