A brief note on linear multistep methods

Alen Alexanderian*

*North Carolina State University, Raleigh, NC, USA. E-mail: [email protected] Last revised: August 15, 2018.

Abstract

We review some key results regarding linear multistep methods for solving initial value problems. Moreover, we discuss some common classes of linear multistep methods.

1 Basic theory

Here we provide a summary of key results regarding stability and consistency of linear multistep methods. We focus on the initial value problem

    y' = f(x, y), \quad x \in [a, b], \quad y(a) = y_0.    (1.1)

Here y(x) \in \mathbb{R} is the state variable, and all throughout we assume that f satisfies a uniform Lipschitz condition with respect to y.

Definition of a multistep method. The generic description of a linear multistep method is as follows: define the nodes x_n = a + nh, n = 0, \ldots, N, with h = (b - a)/N. The general form of a multistep method is

    y_{n+1} = \sum_{j=0}^{p} \alpha_j y_{n-j} + h \sum_{j=-1}^{p} \beta_j f(x_{n-j}, y_{n-j}), \quad n \ge p.    (1.2)

Here \{\alpha_j\}_{j=0}^{p} and \{\beta_j\}_{j=-1}^{p} are constant coefficients, and p \ge 0. If either \alpha_p \ne 0 or \beta_p \ne 0, the method is called a (p+1)-step method. The initial values y_0, y_1, \ldots, y_p must be obtained by other means (e.g., an appropriate explicit method). Note that if \beta_{-1} = 0 the method is explicit, and if \beta_{-1} \ne 0 the method is implicit. Also, here we use the convention that y(x_n) is the exact value of y at x_n and y_n is the numerically computed approximation to y(x_n); i.e., y_n \approx y(x_n).

Truncation error and accuracy. The local truncation error of the method is defined by

    T_n(y, h) = \frac{1}{h} \Big[ y(x_{n+1}) - \Big( \sum_{j=0}^{p} \alpha_j y(x_{n-j}) + h \sum_{j=-1}^{p} \beta_j f(x_{n-j}, y(x_{n-j})) \Big) \Big].    (1.3)

Note that in many texts \tau_n(y) is used to denote the T_n(y, h) defined above; the present notation is adapted from [2]. Note also that we can rewrite the expression for the local truncation error as

    T_n(y, h) = \frac{1}{h} \Big[ y(x_{n+1}) - \Big( \sum_{j=0}^{p} \alpha_j y(x_{n-j}) + h \sum_{j=-1}^{p} \beta_j y'(x_{n-j}) \Big) \Big],

which defines the local truncation error for integrating y'(x). Next, we define

    \tau(h) := \max_{x_p \le x_n \le b} |T_n(y, h)|.

We say the method is consistent if

    \tau(h) \to 0 \quad \text{as } h \to 0.    (1.4)

The speed of convergence of the numerical solution to the true solution is related to the speed of convergence in (1.4). Hence, we need to understand the conditions under which

    \tau(h) = O(h^m)    (1.5)

as h \to 0. The largest value of m for which this holds is called the order of the method (1.2).

Theorem 1.1 ([1, Theorem 6.5]). Let m \ge 1 be an integer. For (1.4) to hold for all continuously differentiable functions y(x), that is, for the method (1.2) to be consistent, it is necessary and sufficient that

    \sum_{j=0}^{p} \alpha_j = 1, \qquad -\sum_{j=0}^{p} j \alpha_j + \sum_{j=-1}^{p} \beta_j = 1.    (1.6)

For (1.5) to hold for all y(x) that are m+1 times continuously differentiable, it is necessary and sufficient that (1.6) hold and that

    \sum_{j=0}^{p} (-j)^i \alpha_j + i \sum_{j=-1}^{p} (-j)^{i-1} \beta_j = 1, \quad i = 2, 3, \ldots, m.

Examples. The following is an example of a linear multistep method, known as the midpoint method:

    y_{n+1} = y_{n-1} + 2h f(x_n, y_n), \quad n \ge 1.

Notice that y_1 needs to be computed before the iterations can begin. This can be done via one step of explicit Euler: y_1 = y_0 + h f(x_0, y_0).
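Before turning to its truncation error, here is a minimal numerical sketch of the midpoint method with an explicit Euler starting step. This is an illustration added to this write-up, not part of the original note; the function name midpoint_method, the use of NumPy, and the test problem y' = -y, y(0) = 1 are assumptions made for demonstration.

import numpy as np

def midpoint_method(f, a, b, y0, N):
    # Two-step midpoint method: y_{n+1} = y_{n-1} + 2h f(x_n, y_n),
    # with y_1 obtained from one step of explicit Euler (hypothetical helper).
    h = (b - a) / N
    x = a + h * np.arange(N + 1)
    y = np.empty(N + 1)
    y[0] = y0
    y[1] = y[0] + h * f(x[0], y[0])  # explicit Euler starting step
    for n in range(1, N):
        y[n + 1] = y[n - 1] + 2 * h * f(x[n], y[n])
    return x, y

# Test problem y' = -y, y(0) = 1, whose exact solution is exp(-x).
for N in (20, 40, 80):
    x, y = midpoint_method(lambda x, y: -y, 0.0, 1.0, 1.0, N)
    print(N, abs(y[-1] - np.exp(-1.0)))

Doubling N (halving h) should reduce the error at x = 1 by roughly a factor of four, consistent with the second-order accuracy computed next.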
The truncation error for this method is given by

    T_n(y, h) = \frac{1}{h} \big[ y(x_{n+1}) - y(x_{n-1}) - 2h y'(x_n) \big].    (1.7)

To compute this, we use the Taylor expansions of y(x_{n+1}) and y(x_{n-1}):

    y(x_{n+1}) = y(x_n) + h y'(x_n) + \frac{h^2}{2} y''(x_n) + \frac{h^3}{6} y'''(x_n) + O(h^4),
    y(x_{n-1}) = y(x_n) - h y'(x_n) + \frac{h^2}{2} y''(x_n) - \frac{h^3}{6} y'''(x_n) + O(h^4).

Substituting the Taylor expansions of y(x_{n+1}) and y(x_{n-1}) in (1.7), we have

    T_n(y, h) = \frac{1}{h} \Big[ y(x_n) + h y'(x_n) + \frac{h^2}{2} y''(x_n) + \frac{h^3}{6} y'''(x_n) + O(h^4)
                 - y(x_n) + h y'(x_n) - \frac{h^2}{2} y''(x_n) + \frac{h^3}{6} y'''(x_n) + O(h^4) - 2h y'(x_n) \Big]
              = \frac{h^2}{3} y'''(x_n) + O(h^3).

This shows, in particular, that the method is second order.

The following is another example of a linear multistep method:

    y_{n+1} = 3 y_n - 2 y_{n-1} + \frac{h}{2} \big[ f(x_n, y_n) - 3 f(x_{n-1}, y_{n-1}) \big], \quad n \ge 1.    (1.8)

Using a similar approach as above, we can compute the truncation error for this method:

    T_n(y, h) = \frac{7}{12} h^2 y'''(x_n) + O(h^3).

Thus, this is also a second order method.

A simple but useful convergence result. The following convergence result is Theorem 6.6 in [1]. While this is not the most general convergence result for linear multistep methods, it applies to a wide class of such methods.

Theorem 1.2. Consider solving the initial value problem (1.1) using the linear multistep method (1.2) with a step size h to obtain \{y_n\}_{n=0}^{N} that approximate \{y(x_n)\}_{n=0}^{N}. Let the initial errors satisfy

    \eta(h) := \max_{0 \le n \le p} |y(x_n) - y_n| \to 0 \quad \text{as } h \to 0.

Assume that the method is consistent, and assume that the coefficients \{\alpha_j\}_{j=0}^{p} in (1.2) are non-negative. Then the method is convergent, and

    \max_{a \le x_n \le b} |y(x_n) - y_n| \le c_1 \eta(h) + c_2 \tau(h),

where c_1 and c_2 are suitable constants. If the method is of order m, and if the initial errors satisfy \eta(h) = O(h^m), then the speed of convergence is O(h^m).

Remark 1.3. The above result states that to ensure a rate of convergence of O(h^m) for the method (1.2), it is necessary that the method be of order m, i.e., each step has an error of O(h^{m+1}) (recall that the truncation error (1.3) is this error divided by h). However, the initial values y_0, \ldots, y_p need to be computed with only an accuracy of O(h^m). Thus, the initial values can be computed with a lower order method. This is also noted in [1, p. 361]. As an example, and as mentioned before, the midpoint method, which is of second order, can be initialized with explicit Euler, which is only first order accurate.

General stability and convergence theory. To obtain a more general convergence result, without the non-negativity assumption on the \alpha_j, we need to investigate the stability of the method (1.2) more closely. The general theory is rather involved and is discussed in texts such as [1, 2, 3]. The stability of (1.2) is linked to the polynomial

    \rho(r) = r^{p+1} - \sum_{j=0}^{p} \alpha_j r^{p-j}.

We note immediately that, by the consistency condition, \rho(1) = 0. Let r_0, r_1, \ldots, r_p denote the roots of \rho, repeated according to their multiplicity, and let r_0 = 1. The method (1.2) satisfies the root condition if the following hold:

    |r_j| \le 1, \quad j = 0, 1, \ldots, p,    (1.9)
    |r_j| = 1 \;\Rightarrow\; \rho'(r_j) \ne 0.    (1.10)

We note that (1.9) requires that all roots of \rho lie in the closed unit disc \{z \in \mathbb{C} : |z| \le 1\}, and (1.10) states that any roots on the boundary of the unit disc must be simple roots (i.e., of multiplicity one) of \rho(r). A key result on linear multistep methods says that (1.2) is stable if and only if the root condition holds.
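Since the root condition involves only the coefficients \alpha_0, \ldots, \alpha_p, it can be checked numerically from the roots of \rho. The sketch below is an illustration added to this write-up (the helper name satisfies_root_condition is hypothetical); it assumes NumPy and simply tests each root of \rho against (1.9) and (1.10).

import numpy as np

def satisfies_root_condition(alpha, tol=1e-12):
    # rho(r) = r^(p+1) - alpha_0 r^p - ... - alpha_p, with alpha = [alpha_0, ..., alpha_p]
    coeffs = np.concatenate(([1.0], -np.asarray(alpha, dtype=float)))
    for r in np.roots(coeffs):
        if abs(r) > 1 + tol:
            return False  # a root outside the closed unit disc violates (1.9)
        if abs(abs(r) - 1) <= tol and abs(np.polyval(np.polyder(coeffs), r)) <= tol:
            return False  # a repeated root on the unit circle violates (1.10)
    return True

print(satisfies_root_condition([0.0, 1.0]))   # midpoint method: rho(r) = r^2 - 1      -> True
print(satisfies_root_condition([3.0, -2.0]))  # method (1.8):    rho(r) = r^2 - 3r + 2 -> False

The two calls anticipate the hand computations in the next paragraph.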
The general convergence result for linear multistep methods states that a consistent linear multistep method is convergent if and only if it satisfies the root condition; see [1, 2] for more details.

As an example, consider the midpoint method described above. The characteristic polynomial for that method is given by

    \rho(r) = r^2 - 1,

which has roots r = \pm 1. Therefore, the method satisfies the root condition, and is thus stable. Next, we study the stability of (1.8); the method has \alpha_0 = 3 and \alpha_1 = -2, and thus its characteristic polynomial is

    \rho(r) = r^2 - 3r + 2 = (r - 1)(r - 2),

which has roots r_1 = 1 and r_2 = 2 and thus violates the root condition. Therefore, the method (1.8) is unstable.

2 Some commonly used linear multistep methods

We will focus on the following classes of methods:

1. Backward differentiation formulas (BDFs)
2. Adams methods
   (a) Adams–Bashforth (explicit)
   (b) Adams–Moulton (implicit)
3. Predictor–corrector methods

2.1 Backward differentiation formulas

These methods are obtained based on numerical differentiation formulas. Basic idea: interpolate past values of y(x), and then differentiate the interpolating polynomial to approximate y'. More specifically, let q(x) be the polynomial of degree \le p that interpolates y(x) at the points x_{n+1}, x_n, \ldots, x_{n-p+1}, for some p \ge 1:

    q(x) = \sum_{j=-1}^{p-1} y(x_{n-j}) \ell_j(x),    (2.1)

where the \ell_j(x) are the elementary Lagrange polynomials.
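To make this interpolate-then-differentiate idea concrete, the weights of a BDF can be recovered by differentiating the Lagrange basis polynomials at x_{n+1}. The sketch below is an illustration added to this write-up (the helper name bdf_weights and the placement of x_{n+1} at the origin are assumptions, not part of the note); for p = 2 it returns approximately (3/2, -2, 1/2)/h, i.e., the BDF2 formula (3/2) y_{n+1} - 2 y_n + (1/2) y_{n-1} = h f(x_{n+1}, y_{n+1}).

import numpy as np

def bdf_weights(p, h=1.0):
    # Weights w_k such that sum_k w_k * y(x_{n+1-k}) approximates y'(x_{n+1}),
    # obtained by differentiating the interpolant through x_{n+1}, x_n, ..., x_{n-p+1}.
    nodes = -h * np.arange(p + 1)      # node x_{n+1-k}, measured relative to x_{n+1}
    weights = np.zeros(p + 1)
    for k, xk in enumerate(nodes):
        others = np.delete(nodes, k)
        # Lagrange basis polynomial l_k, built from its roots and normalized so l_k(xk) = 1
        lk = np.poly1d(others, r=True) / np.prod(xk - others)
        weights[k] = lk.deriv()(0.0)   # l_k'(x_{n+1})
    return weights

print(bdf_weights(2))  # expected: [ 1.5 -2.   0.5 ]

Setting \sum_k w_k y_{n+1-k} = f(x_{n+1}, y_{n+1}) then yields the implicit BDF step.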