Numerical Methods for Ordinary Differential Equations


Numerical methods for ordinary differential equations

Ulrik Skre Fjordholm

May 1, 2018

Chapter 1  Introduction

Consider the ordinary differential equation (ODE)

$$\dot{x}(t) = f(x(t), t), \qquad x(0) = x_0 \tag{1.1}$$

where $x_0 \in \mathbb{R}^d$ and $f \colon \mathbb{R}^d \times \mathbb{R} \to \mathbb{R}^d$. Under certain conditions on $f$ there exists a unique solution of (1.1), and for certain types of functions $f$ (such as when (1.1) is separable) there are techniques available for computing this solution. However, for most "real-world" examples of $f$, we have no idea what the solution actually looks like. We are left with no choice but to approximate the solution $x(t)$.

Assume that we would like to compute the solution of (1.1) over a time interval $t \in [0, T]$ for some $T > 0$.¹ The most common approach to finding an approximation of the solution of (1.1) starts by partitioning the time interval into a set of points $t_0, t_1, \dots, t_N$, where $t_n = nh$ is a time step and $h = T/N$ is the step size.² A numerical method then computes an approximation of the actual solution value $x(t_n)$ at time $t = t_n$. We will denote this approximation by $y_n$. The basis of most numerical methods is the following simple computation: integrate (1.1) over the time interval $[t_n, t_{n+1}]$ to get

$$x(t_{n+1}) = x(t_n) + \int_{t_n}^{t_{n+1}} f(x(s), s)\,ds. \tag{1.2}$$

Although we could replace $x(t_n)$ and $x(t_{n+1})$ by their approximations $y_n$ and $y_{n+1}$, we cannot use the formula (1.2) directly because the integrand depends on the exact solution $x(s)$. What we can do, however, is to approximate the integral in (1.2) by some quadrature rule, and then approximate $x$ at the quadrature points. More specifically, we consider a quadrature rule

$$\int_{t_n}^{t_{n+1}} g(s)\,ds \approx (t_{n+1} - t_n) \sum_{k=1}^{K} b_k\, g\bigl(s^{(k)}\bigr)$$

where $g$ is any function, $b_1, \dots, b_K \in \mathbb{R}$ are quadrature weights and $s^{(1)}, \dots, s^{(K)} \in [t_n, t_{n+1}]$ are the quadrature points. Two basic requirements of this approximation are that if $g$ is nonnegative then the quadrature approximation is also nonnegative, and that the rule correctly integrates constant functions. It is straightforward to see that these requirements translate to $b_1, \dots, b_K \geq 0$ and $b_1 + \dots + b_K = 1$.

¹ In this note we will only consider a time interval $t \in [0, T]$ for some positive end time $T > 0$, but it is fully possible to work with negative times or even the whole real line $t \in \mathbb{R}$.

² The more sophisticated adaptive time stepping methods use a step size $h$ which can change from one time step to another. We will not consider such methods in this note.

Some popular methods include the midpoint rule ($K = 1$, $b_1 = 1$, $s^{(1)} = \tfrac{t_n + t_{n+1}}{2}$), the trapezoidal rule ($K = 2$, $b_1 = b_2 = \tfrac{1}{2}$, $s^{(1)} = t_n$, $s^{(2)} = t_{n+1}$) and Simpson's rule ($K = 3$, $b_1 = b_3 = \tfrac{1}{6}$, $b_2 = \tfrac{4}{6}$, $s^{(1)} = t_n$, $s^{(2)} = \tfrac{t_n + t_{n+1}}{2}$, $s^{(3)} = t_{n+1}$). As you might recall, these three quadrature rules have an error of $O(h^3)$, $O(h^3)$ and $O(h^5)$, respectively, where $h = t_{n+1} - t_n$ denotes the length of the interval.³

Applying the quadrature rule to (1.2) yields

$$y_{n+1} = y_n + h \sum_{k=1}^{K} b_k\, f\bigl(y_n^{(k)}, s_n^{(k)}\bigr) \tag{1.3}$$

where $s_n^{(k)} \in [t_n, t_{n+1}]$ are the quadrature points and $y_n^{(k)} \approx x\bigl(s_n^{(k)}\bigr)$. The initial approximation $y_0$ is simply set to the initial data prescribed in (1.1), $y_0 = x_0$.

If the quadrature points $\bigl(y_n^{(k)}, s_n^{(k)}\bigr)$ are either $(y_n, t_n)$ or $(y_{n+1}, t_{n+1})$ (or some combination of these) then we can solve (1.3) for $y_{n+1}$ and get a viable numerical method for (1.1). Two such methods, the explicit and implicit Euler methods, are the topic of Chapter 2.
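To make the quadrature rules above concrete, the following Python sketch (not part of the original notes; the function names and the test integrand are illustrative choices) applies the midpoint, trapezoidal and Simpson rules to a single interval of length $h$ and checks their error orders numerically.

```python
import numpy as np

# One-interval quadrature rules from the text, written as h * sum_k b_k * g(s^(k)).
def midpoint(g, a, h):
    return h * g(a + h / 2)                                # K=1, b1=1, s = midpoint

def trapezoidal(g, a, h):
    return h * (g(a) + g(a + h)) / 2                       # K=2, b1=b2=1/2, endpoints

def simpson(g, a, h):
    return h * (g(a) + 4 * g(a + h / 2) + g(a + h)) / 6    # K=3, b=(1/6, 4/6, 1/6)

# Error check on a smooth integrand: the exact integral of sin over [a, a+h].
g = np.sin
exact = lambda a, h: np.cos(a) - np.cos(a + h)

a = 0.3
for h in [0.1, 0.05, 0.025]:
    errs = [abs(rule(g, a, h) - exact(a, h)) for rule in (midpoint, trapezoidal, simpson)]
    print(f"h={h:.3f}  midpoint={errs[0]:.2e}  trapezoidal={errs[1]:.2e}  simpson={errs[2]:.2e}")
```

Halving $h$ shrinks the midpoint and trapezoidal errors by roughly a factor of $8$ and Simpson's error by roughly a factor of $32$, matching the orders quoted above. Note that Simpson's rule needs the integrand at the midpoint of the interval, a point strictly between $t_n$ and $t_{n+1}$; this is exactly the situation discussed next for the ODE setting.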
However, if we want to construct more accurate numerical methods then we have to include quadrature points at times $s_n^{(k)}$ which lie strictly between $t_n$ and $t_{n+1}$, and consequently we need some way of computing the intermediate values $y_n^{(k)} \approx x\bigl(s_n^{(k)}\bigr)$. A systematic way of computing these points is given by the so-called Runge–Kutta methods. These are methods which converge to the exact solution much faster than the Euler methods, and will be the topic of Chapter 4.

Some natural questions arise when deriving numerical methods for (1.1): How large is the approximation error for a fixed step size $h > 0$? Do the computed solutions converge to $x(t)$ as $h \to 0$? How fast do they converge, and can we increase the speed of convergence? Is the method stable? The purpose of these notes is to answer all of these questions for some of the most commonly used numerical methods for ODEs.

Basic assumptions

In this note we will assume that $f$ is continuously differentiable, is bounded and has bounded first derivatives; that is, we assume that $f \in C^1(\mathbb{R}^d \times \mathbb{R};\, \mathbb{R}^d)$ and that there are constants $M > 0$ and $K > 0$ such that

$$\sup_{x \in \mathbb{R}^d,\ t \in \mathbb{R}} |f(x, t)| \leq M, \tag{1.4a}$$

$$\sup_{(x,t) \in \mathbb{R}^d \times \mathbb{R}} \|D_x f(x, t)\| \leq K, \tag{1.4b}$$

$$\sup_{(x,t) \in \mathbb{R}^d \times \mathbb{R}} \left|\frac{\partial f}{\partial t}(x, t)\right| \leq K. \tag{1.4c}$$

(Here, $D_x f$ denotes the Jacobian matrix of $f$, $\bigl(D_x f(x, t)\bigr)_{i,j} = \frac{\partial f_i}{\partial x_j}(x, t)$, and $\|\cdot\|$ denotes the matrix norm.) In particular, $f$ is Lipschitz in $x$ with Lipschitz constant $K$, so these assumptions guarantee that the Picard iterations converge, and we can conclude that there exists a solution for all times $t \in \mathbb{R}$. Uniqueness of the solution follows from the Lipschitz bound on $f$ together with Grönwall's lemma.

³ Here we use big-O notation: if $e_h$ is some quantity depending on a parameter $h > 0$ (such as the error in a numerical approximation), then $e_h = O(h^p)$ means that there exists some constant $C > 0$ such that for small values of $h$ we can bound the error as $|e_h| \leq C h^p$.

The assumptions in (1.4) can be replaced by local bounds, although one then needs to be more careful in some of the computations in this note. In particular, the solution might not exist for all times. Without going into detail we state here that all the results in this note remain true (with minor modifications) assuming only local versions of the bounds (1.4).

Chapter 2  Euler's methods

The absolutely simplest quadrature rule that we can use in (1.3) is the one-point rule $K = 1$, $b_1 = 1$, $s_n^{(1)} = t_n$. This yields the forward Euler or explicit Euler method

$$y_{n+1} = y_n + h f(y_n, t_n). \tag{2.1}$$

As its name implies, this method is explicit, meaning that the approximation $y_{n+1}$ at the next time step is given by a formula depending only on the approximation $y_n$ at the current time step. Explicit methods are easy to use on a computer because we can type the formula more or less directly into the source code. Another popular one-point quadrature is $K = 1$, $b_1 = 1$, $s_n^{(1)} = t_{n+1}$, which gives the backward Euler or implicit Euler method

$$y_{n+1} = y_n + h f(y_{n+1}, t_{n+1}). \tag{2.2}$$

This method is implicit in the sense that one has to solve an algebraic relation in order to find $y_{n+1}$ as a function of $y_n$ alone. Both of the Euler methods can be seen as first-order Taylor expansions of $x(t)$ around $t = t_n$ and $t = t_{n+1}$, respectively.

2.1 Example. Consider the ODE (1.1) with $d = 1$ and $f(x) = ax$ for some $a \in \mathbb{R}$.
The forward Euler method is

$$y_{n+1} = y_n + h f(y_n) = (1 + ah)\, y_n.$$

The implicit Euler method is

$$y_{n+1} = y_n + h f(y_{n+1}) = y_n + ah\, y_{n+1},$$

and solving for $y_{n+1}$ yields the explicit expression

$$y_{n+1} = \frac{y_n}{1 - ah}.$$

In both cases the method can be written as the difference equation $y_{n+1} = b y_n$, whose solution is $y_n = b^n x_0$. Thus, the explicit Euler method can be written as $y_n = (1 + ah)^n x_0$ and the implicit Euler method as $y_n = (1 - ah)^{-n} x_0$. Using the fact that $h = T/N$, it is a straightforward exercise in calculus to show that for either scheme the end-time solution $y_N$ converges to the exact value $x(T) = e^{aT} x_0$ as $N \to \infty$.

Figure 2.1 shows a computation with both schemes using the parameter $a = 1$ and initial data $x_0 = 1$. While the error in both schemes increases over time, this error decreases as the resolution $N$ is increased.

[Figure 2.1: Forward and backward Euler computations for the linear, one-dimensional ODE in Example 2.1 up to time $T = 5$ using $N = 30$ (left) and $N = 60$ (right) time steps. Both panels plot $y$ against $t$ and show the forward Euler, backward Euler and exact solutions.]
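As a small, self-contained illustration of Example 2.1 (not part of the original notes; the helper names are ours and the parameter values mirror Figure 2.1), the following Python sketch evaluates the closed-form forward and backward Euler iterates for $\dot{x} = ax$ and shows the end-time values approaching $e^{aT} x_0$ as $N$ grows.

```python
import numpy as np

def forward_euler_linear(a, x0, T, N):
    """Explicit Euler for x' = a*x: y_{n+1} = (1 + a*h) y_n, so y_N = (1 + a*h)^N * x0."""
    h = T / N
    return (1 + a * h) ** N * x0

def backward_euler_linear(a, x0, T, N):
    """Implicit Euler for x' = a*x: y_{n+1} = y_n / (1 - a*h), so y_N = (1 - a*h)^(-N) * x0."""
    h = T / N
    return (1 - a * h) ** (-N) * x0

a, x0, T = 1.0, 1.0, 5.0
exact = np.exp(a * T) * x0
for N in [30, 60, 120, 240]:
    fe = forward_euler_linear(a, x0, T, N)
    be = backward_euler_linear(a, x0, T, N)
    print(f"N={N:4d}  forward={fe:10.2f}  backward={be:10.2f}  exact={exact:10.2f}")
```

For $a > 0$ the forward Euler values approach the exact end-time value from below and the backward Euler values approach it from above, consistent with the behaviour seen in Figure 2.1; both errors shrink as $N$ is increased.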
Recommended publications
  • Deep Hamiltonian Networks Based on Symplectic Integrators
    Deep Hamiltonian networks based on symplectic integrators Aiqing Zhu1,2, Pengzhan Jin1,2, and Yifa Tang1,2,* 1LSEC, ICMSEC, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China 2School of Mathematical Sciences, University of Chinese Academy of Sciences, Beijing 100049, China *Email address: [email protected] (arXiv:2004.13830v1 [math.NA], 23 Apr 2020) Abstract HNets is a class of neural networks on grounds of physical prior for learning Hamiltonian systems. This paper explains the influences of different integrators as hyper-parameters on the HNets through error analysis. If we define the network target as the map with zero empirical loss on arbitrary training data, then the non-symplectic integrators cannot guarantee the existence of the network targets of HNets. We introduce the inverse modified equations for HNets and prove that the HNets based on symplectic integrators possess network targets and the differences between the network targets and the original Hamiltonians depend on the accuracy orders of the integrators. Our numerical experiments show that the phase flows of the Hamiltonian systems obtained by symplectic HNets do not exactly preserve the original Hamiltonians, but preserve the network targets calculated; the loss of the network target for the training data and the test data is much less than the loss of the original Hamiltonian; the symplectic HNets have more powerful generalization ability and higher accuracy than the non-symplectic HNets in addressing predicting issues. Thus, the symplectic integrators are of critical importance for HNets. Key words. Neural networks, HNets, Network target, Inverse modified equations, Symplectic integrator, Error analysis. 1 Introduction Dynamical systems play a critical role in shaping our understanding of the physical world.
  • Symplectic Integrators with Adaptive Time Steps
    Symplectic integrators with adaptive time steps A S Richardson and J M Finn T-5, Applied Mathematics and Plasma Physics, Los Alamos National Laboratory, Los Alamos, NM, 87545, USA E-mail: [email protected], [email protected] Abstract. In recent decades, there have been many attempts to construct symplectic integrators with variable time steps, with rather disappointing results. In this paper we identify the causes for this lack of performance, and find that they fall into two categories. In the first, the time step is considered a function of time alone, Δ = Δ(t). In this case, backwards error analysis shows that while the algorithms remain symplectic, parametric instabilities arise because of resonance between oscillations of Δ(t) and the orbital motion. In the second category the time step is a function of phase space variables, Δ = Δ(q, p). In this case, the system of equations to be solved is analyzed by introducing a new time variable τ with dt = Δ(q, p) dτ. The transformed equations are no longer in Hamiltonian form, and thus are not guaranteed to be stable even when integrated using a method which is symplectic for constant Δ. We analyze two methods for integrating the transformed equations which do, however, preserve the structure of the original equations. The first is an extended phase space method, which has been successfully used in previous studies of adaptive time step symplectic integrators. The second, novel, method is based on a non-canonical mixed-variable generating function. Numerical trials for both of these methods show good results, without parametric instabilities or spurious growth or damping.
  • Numerical Solution of Ordinary Differential Equations
    NUMERICAL SOLUTION OF ORDINARY DIFFERENTIAL EQUATIONS Kendall Atkinson, Weimin Han, David Stewart University of Iowa Iowa City, Iowa A JOHN WILEY & SONS, INC., PUBLICATION Copyright © 2009 by John Wiley & Sons, Inc. All rights reserved. Published by John Wiley & Sons, Inc., Hoboken, New Jersey. Published simultaneously in Canada.
  • Some Robust Integrators for Large Time Dynamics
    Razafindralandy et al. Adv. Model. and Simul. in Eng. Sci. (2019) 6:5 https://doi.org/10.1186/s40323-019-0130-2 REVIEW Open Access Some robust integrators for large time dynamics Dina Razafindralandy1*, Vladimir Salnikov2, Aziz Hamdouni1 and Ahmad Deeb1 *Correspondence: dina.razafi[email protected] 1Laboratoire des Sciences de l'Ingénieur pour l'Environnement (LaSIE), UMR 7356 CNRS, Université de La Rochelle, Avenue Michel Crépeau, 17042 La Rochelle Cedex, France. Full list of author information is available at the end of the article. Abstract This article reviews some integrators particularly suitable for the numerical resolution of differential equations on a large time interval. Symplectic integrators are presented. Their stability on exponentially large time is shown through numerical examples. Next, Dirac integrators for constrained systems are exposed. An application on chaotic dynamics is presented. Lastly, for systems having no exploitable geometric structure, the Borel–Laplace integrator is presented. Numerical experiments on Hamiltonian and non-Hamiltonian systems are carried out, as well as on a partial differential equation. Keywords: Symplectic integrators, Dirac integrators, Long-time stability, Borel summation, Divergent series Introduction In many domains of mechanics, simulations over a large time interval are crucial. This is, for instance, the case in molecular dynamics, in weather forecast or in astronomy. While many time integrators are available in the literature, only a few of them are suitable for large time simulations. Indeed, many numerical schemes fail to correctly predict the expected physical phenomena, such as energy preservation, as the simulation time grows. For equations having an underlying geometric structure (Hamiltonian systems, variational problems, Lie symmetry group, Dirac structure, ...), geometric integrators appear to be very robust for large time simulation.
  • High-Order Regularised Symplectic Integrator for Collisional Planetary Systems
    A&A 628, A32 (2019) https://doi.org/10.1051/0004-6361/201935786 Astronomy & Astrophysics © A. C. Petit et al. 2019 High-order regularised symplectic integrator for collisional planetary systems Antoine C. Petit, Jacques Laskar, Gwenaël Boué, and Mickaël Gastineau ASD/IMCCE, CNRS-UMR8028, Observatoire de Paris, PSL University, Sorbonne Université, 77 avenue Denfert-Rochereau, 75014 Paris, France e-mail: [email protected] Received 26 April 2019 / Accepted 15 June 2019 ABSTRACT We present a new mixed variable symplectic (MVS) integrator for planetary systems that fully resolves close encounters. The method is based on a time regularisation that allows keeping the stability properties of the symplectic integrators while also reducing the effective step size when two planets encounter. We used a high-order MVS scheme so that it was possible to integrate with large time-steps far away from close encounters. We show that this algorithm is able to resolve almost exact collisions (i.e. with a mutual separation of a fraction of the physical radius) while using the same time-step as in a weakly perturbed problem such as the solar system. We demonstrate the long-term behaviour in systems of six super-Earths that experience strong scattering for 50 kyr. We compare our algorithm to hybrid methods such as MERCURY and show that for an equivalent cost, we obtain better energy conservation. Key words. celestial mechanics – planets and satellites: dynamical evolution and stability – methods: numerical
  • Chapter 16: Differential Equations
    Chapter 16 Differential Equations A vast number of mathematical models in various areas of science and engineering involve differential equations. This chapter provides a starting point for a journey into the branch of scientific computing that is concerned with the simulation of differential problems. We shall concentrate mostly on developing methods and concepts for solving initial value problems for ordinary differential equations: this is the simplest class (although, as you will see, it can be far from being simple), yet a very important one. It is the only class of differential problems for which, in our opinion, up-to-date numerical methods can be learned in an orderly and reasonably complete fashion within a first course text. Section 16.1 prepares the setting for the numerical treatment that follows in Sections 16.2–16.6 and beyond. It also contains a synopsis of what follows in this chapter. Numerical methods for solving boundary value problems for ordinary differential equations receive a quick review in Section 16.7. More fundamental difficulties arise here, so this section is marked as advanced. Most mathematical models that give rise to differential equations in practice involve partial differential equations, where there is more than one independent variable. The numerical treatment of partial differential equations is a vast and complex subject that relies directly on many of the methods introduced in various parts of this text. An orderly development belongs in a more advanced text, though, and our own description in Section 16.8 is downright anecdotal, relying in part on examples introduced earlier. 16.1 Initial value ordinary differential equations Consider the problem of finding a function y(t) that satisfies the ordinary differential equation (ODE) dy/dt = f(t, y), a ≤ t ≤ b.
  • FROST: A Momentum-Conserving CUDA Implementation of a Hierarchical Fourth-Order Forward Symplectic Integrator
    MNRAS 000, 1–18 (2020) Preprint 8 January 2021 Compiled using MNRAS LATEX style file v3.0 FROST: a momentum-conserving CUDA implementation of a hierarchical fourth-order forward symplectic integrator Antti Rantala1, Thorsten Naab1, Volker Springel1 1Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, D-85748, Garching, Germany Accepted XXX. Received YYY; in original form ZZZ ABSTRACT We present a novel hierarchical formulation of the fourth-order forward symplectic integrator and its numerical implementation in the GPU-accelerated direct-summation N-body code FROST. The new integrator is especially suitable for simulations with a large dynamical range due to its hierarchical nature. The strictly positive integrator sub-steps in a fourth-order symplectic integrator are made possible by computing an additional gradient term in addition to the Newtonian accelerations. All force calculations and kick operations are synchronous so the integration algorithm is manifestly momentum-conserving. We also employ a time-step symmetrisation procedure to approximately restore the time-reversibility with adaptive individual time-steps. We demonstrate in a series of binary, few-body and million-body simulations that FROST conserves energy to a level of |ΔE/E| ∼ 10^−10 while errors in linear and angular momentum are practically negligible. For typical star cluster simulations, we find that FROST scales well up to N_GPU^max ∼ 4 × N / 10^5 GPUs, making direct summation N-body simulations beyond N = 10^6 particles possible on systems with several hundred and more GPUs. Due to the nature of hierarchical integration the inclusion of a Kepler solver or a regularised integrator with post-Newtonian corrections for close encounters and binaries in the code is straightforward.
  • Introduction to Numerical Analysis
    INTRODUCTION TO NUMERICAL ANALYSIS Cho, Hyoung Kyu Department of Nuclear Engineering Seoul National University 10. NUMERICAL INTEGRATION 10.1 Background 10.2 Euler's Methods 10.3 Modified Euler's Method 10.4 Midpoint Method 10.5 Runge‐Kutta Methods 10.6 Multistep Methods 10.7 Predictor‐Corrector Methods 10.8 System of First‐Order Ordinary Differential Equations 10.9 Solving a Higher‐Order Initial Value Problem 10.10 Use of MATLAB Built‐In Functions for Solving Initial‐Value Problems 10.11 Local Truncation Error in Second‐Order Runge‐Kutta Method 10.12 Step Size for Desired Accuracy 10.13 Stability 10.14 Stiff Ordinary Differential Equations 10.1 Background Ordinary differential equation: a differential equation that has one independent variable. A first‐order ODE: the first derivative of the dependent variable with respect to the independent variable. Example: rates of water inflow and outflow; the time rate of change of the mass in the tank; the equation for the rate of height change. Time dependent problem: independent variable is time, dependent variable is the water level. To obtain a specific solution, a first‐order ODE must have an initial condition or constraint that specifies the value of the dependent variable at a particular value of the independent variable. In typical time‐dependent problems: initial condition, initial value problem (IVP). First order ODE statement: general form; flow lines; analytical solution. In many situations an analytical solution is not possible! Numerical solution of a first‐order ODE: a set of discrete points that approximate the function y(x); domain of the solution: , N subintervals. Overview of numerical methods used for solving a first‐order ODE: start from the initial value, then estimate the value at a second nearby point, a third point, and so on. Single‐step and multistep approach.
  • Numerical Methods for Evolutionary Systems, Lecture 2 Stiff Equations
    Numerical Methods for Evolutionary Systems, Lecture 2 C. W. Gear Celaya, Mexico, January 2007 Stiff Equations The history of stiff differential equations goes back 55 years to the very early days of machine computation. The first identification of stiff equations as a special class of problems seems to have been due to chemists [chemical engineers] in 1952 (C.F. Curtiss and J.O. Hirschfelder, Integration of stiff equations, Proc. of the National Academy of Sciences of U.S., 38 (1952), pp 235–243.) For 15 years stiff equations presented serious difficulties and were hard to solve, both in chemical problems (reaction kinetics) and increasingly in other areas (electrical engineering, mechanical engineering, etc) until around 1968 when a variety of methods began to appear in the literature. The nature of the problems that lead to stiffness is the existence of physical phenomena with very different speeds (time constants) so that, while we may be interested in relatively slow aspects of the model, there are features of the model that could change very rapidly. Prior to the availability of electronic computers, one could seldom solve problems that were large enough for this to be a problem, but once electronic computers became available and people began to apply them to all sorts of problems, we very quickly ran into stiff problems. In fact, most large problems are stiff, as we will see as we look at them in a little detail. Suppose the family of solutions to an ODE looks like the figure below. This looks to be an ideal problem for integration because almost no matter where we start the final solution finishes up on the blue curve.
  • An Introduction to Symplectic Integrators
    An Introduction to Symplectic Integrators Keirn Munro [email protected] Abstract We aim to outline some of the basic techniques of numerical integration as it pertains to computing flows of Hamiltonian systems. The method of symplectic integrators has gained popularity in recent years as a relatively stable approach to long time simulations of Hamiltonian mechanics, namely pendulum and celestial body problems. The approach makes use of the symplectic structure to move beyond naive analytic techniques and refine existing approaches. Background and Notation Let (M, ω) be a smooth symplectic manifold along with its 2-form. Given a Hamiltonian H and a starting point z0, the task at hand is to numerically solve for the flow of X_H starting at z0, z(t). The numerical algorithms, or integrators, we will use to do this are those smooth families of diffeomorphisms on M, Ψ : M × ℝ → M, such that Ψ_Δt(z0) = Ψ(z0, Δt) ≈ z(Δt). To simulate flow, the user iteratively applies Ψ_Δt to z0. When we hold Δt fixed, we may also require that Ψ_Δt is a symplectomorphism and shall herein refer to maps of this form as symplectic integrators. We begin first with a result paraphrased from Proposition 5.4.3 in [5]: Proposition (Conservation of Energy): Given a Hamiltonian function H : M → ℝ and an associated vector field X_H with flow φ_t : M → M, H ∘ φ_t is constant in t. Proof. Proceeding by direct computation and letting x ∈ M, d/dt|_{t=t_0} H(φ_t(x)) = X_H(H)(φ_{t_0}(x)) = dH(X_H(φ_{t_0}(x))) = −(ι_{X_H} ω(X_H))(φ_{t_0}(x)) = −ω(X_H, X_H)(φ_{t_0}(x)) = 0 by anti-symmetry, establishing the result.
  • On Yoshida's Method for Constructing Explicit Symplectic Integrators for Separable Hamiltonian Systems
    Faculty of Science and Engineering, Mathematics and Applied Mathematics. On Yoshida's Method For Constructing Explicit Symplectic Integrators For Separable Hamiltonian Systems Bachelor's Project Mathematics July 2019 Student: J.C. Pim First supervisor: Dr. M. Seri Second assessor: Dr. A.E. Sterk Abstract We describe Yoshida's method for separable Hamiltonians H = T(p) + V(q). Hamiltonians of this form occur frequently in classical mechanics, for example the n-body problem, as well as in other fields. The class of symplectic integrators constructed are explicit, reversible, of arbitrary even order, and have bounded energy growth. We give an introduction to Hamiltonian mechanics, symplectic geometry, and Lie theory. We compare the performance of these integrators to more commonly used methods such as Runge-Kutta, using the ideal pendulum and Kepler problem as examples. Contents: 1 Introduction 4, 2 Preliminaries And Prerequisites 5, 2.1 Manifolds, Vector Fields, And Differential Forms 5, 2.2 The Matrix Exponential 6, 2.3 The Vector Field Exponential 7, 3 Hamiltonian Mechanics 9, 3.1 Motivating Examples 10, 4 Symplectic Geometry 15, 4.1 Symplectic Vector Spaces 15, 4.2 Symplectic Manifolds 17, 5 Lie Theory And BCH Formula 21, 6 Symplectic Integrators 28, 6.1 Separable Hamiltonians 28, 6.2 Reversible Hamiltonians 29, 6.3 Yoshida's Method For Separable Hamiltonians 30, 6.4 Properties Of The Yoshida Symplectic Integrators 38, 6.5 Backward Error Analysis 38, 7 Numerical Simulations 43, 7.1 The Ideal Pendulum 43, 7.2 The Kepler Problem 46, 8 Conclusion 50, References 51. 1 Introduction Many interesting phenomena in science can be modelled by Hamiltonian systems, for example the n-body problem, oscillators, problems in molecular dynamics, and models of electric circuits [11].