Revision of ODEs for CompPhys
July 16, 2013

Ordinary Differential Equations

ODEs are quite easy.

Numerical Solutions thereto

Solving ODEs numerically is necessarily a matter of approximation, since computers are not continuous machines but use discrete numbers, and they additionally cannot deal with arbitrary-precision numbers[1], so the whole integral has to be discretized. Both of these effects have a role to play in the deviation of the algorithm from the true value of the integral.

[1] Well, they obviously can if you want them to, but the libraries are so slow they may as well not.

We need only study 1st order ODEs

We can decompose an arbitrarily high-order differential equation into a system of first-order ODEs, which we can then solve using the algorithms that follow. For example,

    y^{(n)} = f(\{y^{(k)}\}_{k=0,\dots,n-1},\, x)    (1)

can be manipulated into a series of first order equations by defining

    y_0 = y, \qquad y_k' = y_{k+1}, \quad k = 0,\dots,n-2    (2)

which gives us

    y_0' = y_1
    y_1' = y_2
    \vdots
    y_{n-2}' = y_{n-1}
    y_{n-1}' = f(\{y_k\}_{k=0,\dots,n-1},\, x)    (3)

In vector form, this is just y' = f(y, x).

Integration Algorithms

Euler Method

Euler's method is pretty dumb, but it gets the job done.[2]

[2] Actually, it doesn't.

[Figure: Solution of y' = -2.3y computed with the Euler method with step size h = 1 (blue squares) and h = 0.7 (red circles). The black curve shows the exact solution.]

The Euler method is first order, which means that the local error (error per step) is proportional to the square of the step size, and the global error (error at a given time) is proportional to the step size. It also suffers from stability problems. For these reasons, the Euler method is not often used in practice. It serves, however, as the basis for constructing more complicated methods.

Instability

The Euler method is also pretty unstable. If the Euler method is applied to the linear equation y' = ky, then the numerical solution is unstable if the product hk lies outside the region

    \{ z \in \mathbb{C} \;:\; |z + 1| \le 1 \}

Modification: Backward Euler method

A simple modification of the Euler method which eliminates the stability problems is the backward Euler method:

    y_{n+1} = y_n + h f(t_{n+1}, y_{n+1})

This differs from the standard Euler method in that the function f is evaluated at the end point of the step instead of the starting point. The backward Euler method is an implicit method, meaning that the formula has y_{n+1} on both sides, so at every step we have to solve an equation. This makes the implementation more costly. Other modifications of the Euler method that help with stability yield the exponential Euler method or the semi-implicit Euler method.

Taylor Expansion Method

The Euler method can be thought of as a first-order Taylor expansion method. The local error of the Taylor-expansion algorithm of order p is O(h^{p+1}); the global error is O(h^p). The main disadvantage of this approach is that it requires recursively computing possibly high partial derivatives of f(y, x).
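To make the difference between the forward and backward Euler steps concrete, here is a minimal sketch in the same Python style as the rk4 routine later in these notes. The function names euler_step and backward_euler_step are illustrative, and the linear test equation y' = ky is assumed so that the implicit backward-Euler equation can be rearranged in closed form rather than solved with a root-finder.

    def euler_step(y, h, k):
        # forward Euler: y_{n+1} = y_n + h*f(t_n, y_n), with f(t, y) = k*y
        return y + h * k * y

    def backward_euler_step(y, h, k):
        # backward Euler: y_{n+1} = y_n + h*k*y_{n+1}; for this linear f
        # the implicit equation rearranges to an explicit update:
        return y / (1.0 - h * k)

    # With k = -2.3 and h = 1, as in the figure above, hk lies outside
    # |z + 1| <= 1: the forward-Euler iterates flip sign and grow, while
    # the backward-Euler iterates decay like the true solution.
    y_fwd, y_bwd = 1.0, 1.0
    for n in range(5):
        y_fwd = euler_step(y_fwd, 1.0, -2.3)
        y_bwd = backward_euler_step(y_bwd, 1.0, -2.3)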
Midpoint Method

Further modification of the Euler method leads to the midpoint method:

    y_{n+1} = y_n + h f\!\left(t_n + \tfrac{h}{2},\; y_n + \tfrac{h}{2} f(t_n, y_n)\right)    (4)

The name of the method comes from the fact that in the formula above the function f is evaluated at t = t_n + h/2, which is the midpoint between t_n, at which the value of y(t) is known, and t_{n+1}, at which the value of y(t) needs to be found.

[Figure: Illustration of the midpoint method, assuming that y_n equals the exact value y(t_n). The midpoint method computes y_{n+1} so that the red chord is approximately parallel to the tangent line at the midpoint (the green line).]

The local error at each step of the midpoint method is of order O(h^3), giving a global error of order O(h^2). Thus, while more computationally intensive than Euler's method, the midpoint method generally gives more accurate results. The method is an example of a class of higher-order methods known as Runge-Kutta methods.

[Figure: Illustration of numerical integration for the equation y' = y, y(0) = 1. Blue is the Euler method; green, the midpoint method; red, the exact solution y = e^t. The step size is h = 1.]

Leapfrog Method

Leapfrog leaves you waiting for more, testament to the fact it performs so poor. It is good for differential equations of the form \ddot{x} = F(x), or equivalently \dot{v} = F(x), \dot{x} \equiv v, particularly in the case of a dynamical system of classical mechanics. Such problems often take the form

    \ddot{x} = -\nabla V(x)

Leapfrog integration is equivalent to updating positions x(t) and velocities v(t) = \dot{x}(t) at interleaved time points, staggered in such a way that they `leapfrog' over each other. For example, the position is updated at integer time steps and the velocity is updated at integer-plus-a-half time steps. Leapfrog integration is a second order method, in contrast to Euler integration, which is only first order, yet it requires the same number of function evaluations per step. Unlike Euler integration, it is stable for oscillatory motion, as long as the time-step \Delta t is constant and \Delta t \le 2/\omega, where \omega is the frequency of the oscillation.

Why it's useful

• It is time-reversible: one can integrate forward n steps, and then reverse the direction of integration and integrate backwards n steps to arrive at the same starting position.

• It has a symplectic nature, which implies that it conserves the (slightly modified) energy of dynamical systems. This is especially useful when computing orbital dynamics, as other integration schemes, such as the Runge-Kutta method, do not conserve energy and allow the system to drift substantially over time.

Verlet Integration

Verlet integration is a numerical method used to integrate Newton's equations of motion. It is frequently used to calculate trajectories of particles in molecular dynamics simulations and computer graphics. The Verlet integrator offers greater stability, as well as other properties that are important in physical systems such as time-reversibility and preservation of the symplectic form on phase space, at no significant additional cost over the simple Euler method.

Error

The local error in position of the Verlet integrator is O(\Delta t^4), and the local error in velocity is O(\Delta t^2). The global error in position, in contrast, is O(\Delta t^2), and the global error in velocity is O(\Delta t^2).

Numerov

Numerov's algorithm uses Taylor-expansion ideas and the particular structure of the ODE in question. It is good for equations of the form y''(x) + k(x)y(x) = 0. Equations of such a form include manipulations of the time-independent Schrödinger equation, for example.

    \left(1 + \tfrac{h^2}{12} k_{n+1}\right) y_{n+1} = 2\left(1 - \tfrac{5h^2}{12} k_n\right) y_n - \left(1 + \tfrac{h^2}{12} k_{n-1}\right) y_{n-1} + O(h^6)    (5)

As the error term makes clear, the method provides 6th order local accuracy.
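Since Eq. (5) gives y_{n+1} in terms of the two previous points, a single Numerov step is only a few lines of Python. This is a minimal sketch under the assumptions above (y'' + k(x)y = 0, uniform step h); the function name numerov_step and argument layout are illustrative.

    def numerov_step(y_prev, y_curr, k_prev, k_curr, k_next, h):
        # one step of Numerov's method for y'' + k(x)*y = 0,
        # obtained by rearranging Eq. (5) for y_{n+1}
        c = h * h / 12.0
        return (2.0 * (1.0 - 5.0 * c * k_curr) * y_curr
                - (1.0 + c * k_prev) * y_prev) / (1.0 + c * k_next)

Two starting values, y_0 and y_1, are needed to prime the recursion. A quick sanity check is y'' = -y, i.e. k ≡ 1, whose exact solution y = sin x the stepper reproduces to high accuracy.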
Runge-Kutta

[Figure: RK4 in pictorial form.]

    def rk4(x, h, y, f):
        # advance one RK4 step: returns (x_{n+1}, y_{n+1})
        k1 = h * f(x, y)
        k2 = h * f(x + 0.5*h, y + 0.5*k1)
        k3 = h * f(x + 0.5*h, y + 0.5*k2)
        k4 = h * f(x + h, y + k3)
        return x + h, y + (k1 + 2*(k2 + k3) + k4)/6.0

Here y_{n+1} is the RK4 approximation of y(t_{n+1}), and the next value (y_{n+1}) is determined by the present value (y_n) plus the weighted average of four increments, where each increment is the product of the size of the interval, h, and an estimated slope specified by the function f on the right-hand side of the differential equation.

• k1 is the increment based on the slope at the beginning of the interval, using y (Euler's method);

• k2 is the increment based on the slope at the midpoint of the interval, using y + k1/2 (recall that each k already contains the factor h);

• k3 is again the increment based on the slope at the midpoint, but now using y + k2/2;

• k4 is the increment based on the slope at the end of the interval, using y + k3.

In averaging the four increments, greater weight is given to the increments at the midpoint. The weights are chosen such that if f is independent of y, so that the differential equation is equivalent to a simple integral, then RK4 reduces to Simpson's rule. The RK4 method is a fourth-order method, meaning that the error per step is on the order of O(h^5), while the total accumulated error has order O(h^4).

Implicit

Unfortunately, explicit Runge-Kutta methods are generally unsuitable for the solution of stiff equations because their region of absolute stability is small. This issue is especially important in the solution of partial differential equations. The instability of explicit Runge-Kutta methods motivates the development of implicit methods. An implicit Runge-Kutta method has the form

    y_{n+1} = y_n + \sum_{i=1}^{s} b_i k_i,

where

    k_i = h f\!\left(t_n + c_i h,\; y_n + \sum_{j=1}^{s} a_{ij} k_j\right), \quad i = 1, \dots, s.

The consequence of this difference is that at every step a system of algebraic equations has to be solved. This increases the computational cost considerably.

Crank-Nicolson...

...combines two methods:

Explicit...

...given by:

    \frac{u_i^{n+1} - u_i^n}{\Delta t} = \frac{D}{(\Delta x)^2} \left( u_{i+1}^n - 2u_i^n + u_{i-1}^n \right)    (6)

Implicit...

...given by:

    \frac{u_i^{n+1} - u_i^n}{\Delta t} = \frac{D}{(\Delta x)^2} \left( u_{i+1}^{n+1} - 2u_i^{n+1} + u_{i-1}^{n+1} \right)    (7)

Giving us:

    \frac{u_i^{n+1} - u_i^n}{\Delta t} = \frac{D}{2(\Delta x)^2} \left[ \left( u_{i+1}^n - 2u_i^n + u_{i-1}^n \right) + \left( u_{i+1}^{n+1} - 2u_i^{n+1} + u_{i-1}^{n+1} \right) \right]    (8)

In order to use Crank-Nicolson, a grid is needed.
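To make that last point concrete, here is a minimal sketch of one Crank-Nicolson step for u_t = D u_xx on a uniform grid, assuming fixed (Dirichlet) boundary values. Eq. (8) defines a linear system A u^{n+1} = B u^n; the function name crank_nicolson_step is illustrative, and for brevity the sketch builds dense NumPy matrices where a practical implementation would exploit the tridiagonal structure.

    import numpy as np

    def crank_nicolson_step(u, D, dt, dx):
        # one step of Eq. (8): solve A u^{n+1} = B u^n, with the first
        # and last grid points held fixed (Dirichlet boundaries)
        r = D * dt / (2.0 * dx * dx)
        n = len(u)
        A = np.eye(n)   # boundary rows stay as identity: u fixed there
        B = np.eye(n)
        for i in range(1, n - 1):
            A[i, i-1], A[i, i], A[i, i+1] = -r, 1.0 + 2.0*r, -r
            B[i, i-1], B[i, i], B[i, i+1] = r, 1.0 - 2.0*r, r
        return np.linalg.solve(A, B @ u)

Each call advances the grid values u by one time step \Delta t; because the scheme is implicit, the step is unconditionally stable, at the cost of solving the linear system.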