Splitting Methods for Non-Autonomous Linear Systems
Sergio Blanes¹, Fernando Casas², Ander Murua³

Abstract. We present splitting methods for numerically solving a certain class of explicitly time-dependent linear differential equations. Starting from an efficient method for the autonomous case and making use of the formal solution obtained with the Magnus expansion, we show how to get the order conditions for the non-autonomous case. We also build a family of sixth-order integrators whose performance is clearly superior to previous splitting methods on several numerical examples.

¹ Instituto de Matemática Multidisciplinar, Universidad Politécnica de Valencia, E-46022 Valencia, Spain. Email: [email protected]
² Departament de Matemàtiques, Universitat Jaume I, E-12071 Castellón, Spain. Email: [email protected]
³ Konputazio Zientziak eta A.A. saila, Informatika Fakultatea, EHU/UPV, Donostia/San Sebastián, Spain. Email: [email protected]

1 Introduction

Composition methods constitute a widespread procedure for the numerical integration of differential equations, especially in the context of geometric integration. In this work we consider a particular class of partitioned linear systems which frequently appears when discretising many PDEs. Specifically,

    x' = M(t)\,y, \qquad y' = -N(t)\,x,                                            (1)

with x(t_0) = x_0 \in \mathbb{R}^{d_1}, y(t_0) = y_0 \in \mathbb{R}^{d_2}, M : \mathbb{R} \to \mathbb{R}^{d_1 \times d_2} and N : \mathbb{R} \to \mathbb{R}^{d_2 \times d_1}. Systems of the form

    \bar{x}' = \bar{M}(t)\,\bar{y} + f(t), \qquad \bar{y}' = -\bar{N}(t)\,\bar{x} + g(t),   (2)

are also of type (1), since the solution (\bar{x}(t), \bar{y}(t)) of (2) corresponds to the solution x(t) = (\bar{x}(t)^T, 1)^T, y(t) = (\bar{y}(t)^T, 1)^T of an enlarged system (1) with

    M(t) = \begin{pmatrix} \bar{M}(t) & f(t) \\ 0^T & 0 \end{pmatrix}, \qquad
    N(t) = \begin{pmatrix} \bar{N}(t) & -g(t) \\ 0^T & 0 \end{pmatrix},             (3)

and initial conditions x(t_0) = (\bar{x}(t_0)^T, 1)^T, y(t_0) = (\bar{y}(t_0)^T, 1)^T. Similarly, the second-order differential equation

    x'' = D(t)\,x + g(t),                                                          (4)

with D : \mathbb{R} \to \mathbb{R}^{d \times d} and g : \mathbb{R} \to \mathbb{R}^{d}, can be considered as a special case of (1) with

    M(t) = \begin{pmatrix} I_d & 0 \\ 0^T & 0 \end{pmatrix}, \qquad
    N(t) = \begin{pmatrix} -D(t) & -g(t) \\ 0^T & 0 \end{pmatrix}.                  (5)

Equation (1) describes the evolution of many relevant physical systems. In particular, the space discretization of the Schrödinger equation can be reformulated as an N-degrees-of-freedom classical linear Hamiltonian system with Hamiltonian equations of the form (1) [7, 8, 18], and the time-dependent Maxwell equations can also be expressed in this way [16]. On the other hand, the numerical integration of some nonlinear PDEs (such as the nonlinear Schrödinger equation) is frequently done by solving the linear part separately; in many cases this part can be written as (1) and constitutes the most problematic piece of the procedure.

Denoting z = (x, y)^T, one may write (1) as z' = \Lambda(t)\,z, where

    \Lambda(t) = A(t) + B(t),                                                      (6)

and

    A(t) = \begin{pmatrix} 0 & M(t) \\ 0 & 0 \end{pmatrix}, \qquad
    B(t) = \begin{pmatrix} 0 & 0 \\ -N(t) & 0 \end{pmatrix}.                        (7)

This system can be numerically solved by using, for instance, Magnus integrators [13, 11, 5]. These methods require computing the exponential of matrices of dimension (d_1 + d_2) \times (d_1 + d_2). If d_1 + d_2 \gg 1, the exponentiation can be prohibitively costly. For this reason, new methods which only involve matrix–vector products of the form M(t)y and N(t)x are highly desirable.

The purpose of the present work is to adapt to equation (1) the procedure presented in [1] in a more general setting. The approach combines the Magnus expansion with efficient splitting methods for autonomous problems. Here we summarize the main ideas involved and illustrate the technique by constructing several integrators for equations (6)–(7) which outperform previous algorithms. The new methods have the form (for a time step of size h)

    z(t+h) \approx e^{A_m(t,h)} e^{B_m(t,h)} \cdots e^{A_1(t,h)} e^{B_1(t,h)} z(t),  (8)

where the matrices A_i(t,h) and B_i(t,h) are given by

    A_i(t,h) = h \sum_{j=1}^{k} \rho_{ij}\, A(t + c_j h), \qquad
    B_i(t,h) = h \sum_{j=1}^{k} \sigma_{ij}\, B(t + c_j h),                         (9)

with appropriately chosen real parameters c_j, \rho_{ij}, \sigma_{ij}.
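To make the structure of (8)–(9) concrete, here is a minimal Python sketch of the simplest member of this family: a second-order, Strang-type "kick–drift–kick" step using a single quadrature node c_1 = 1/2. The test problem, the coefficients, and the convergence check are illustrative choices of ours, not the sixth-order methods constructed in this paper.

```python
import numpy as np

# One step of a scheme of the form (8)-(10) for x' = M(t) y, y' = -N(t) x.
# Simplest instance: a Strang-type composition with both matrices frozen
# at the midpoint node c_1 = 1/2 (a second-order method).
def step(x, y, t, h, M, N):
    tm = t + 0.5 * h
    y = y - 0.5 * h * (N(tm) @ x)   # half kick:  exp(B-factor)
    x = x + h * (M(tm) @ y)         # full drift: exp(A-factor)
    y = y - 0.5 * h * (N(tm) @ x)   # half kick:  exp(B-factor)
    return x, y

# Illustrative test problem: x'' = -(1 + 0.5 sin t) x written in the form (1).
M = lambda t: np.eye(1)
N = lambda t: np.array([[1.0 + 0.5 * np.sin(t)]])

def integrate(h, T=10.0):
    x, y, t = np.array([1.0]), np.array([0.0]), 0.0
    for _ in range(int(round(T / h))):
        x, y = step(x, y, t, h, M, N)
        t += h
    return x[0]

ref = integrate(1e-4)              # fine-grid reference solution
e1 = abs(integrate(0.01) - ref)
e2 = abs(integrate(0.005) - ref)
print(e1 / e2)                     # close to 4 for a second-order method
```

Note that only matrix–vector products with M(t) and N(t) appear, which is precisely the feature that makes schemes of the form (10) attractive when d_1 + d_2 is large.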
Since in our case

    e^{A_i(t,h)} = \begin{pmatrix} I & M_i(t,h) \\ 0 & I \end{pmatrix}, \qquad
    e^{B_i(t,h)} = \begin{pmatrix} I & 0 \\ -N_i(t,h) & I \end{pmatrix},

then (8) can be written as z(t+h) \approx K(t,h)\, z(t), where

    K(t,h) = e^{A_m(t,h)} e^{B_m(t,h)} \cdots e^{A_1(t,h)} e^{B_1(t,h)}           (10)
           = \begin{pmatrix} I & M_m \\ 0 & I \end{pmatrix}
             \begin{pmatrix} I & 0 \\ -N_m & I \end{pmatrix} \cdots
             \begin{pmatrix} I & M_1 \\ 0 & I \end{pmatrix}
             \begin{pmatrix} I & 0 \\ -N_1 & I \end{pmatrix},

and obviously

    M_i = h \sum_{j=1}^{k} \rho_{ij}\, M(t + c_j h), \qquad
    N_i = h \sum_{j=1}^{k} \sigma_{ij}\, N(t + c_j h).                             (11)

Notice that when M(t) and N(t) are constant, K(t,h) reduces to

    K(h) = e^{h a_m A} e^{h b_m B} \cdots e^{h a_1 A} e^{h b_1 B}                  (12)
         = \begin{pmatrix} I & a_m h M \\ 0 & I \end{pmatrix}
           \begin{pmatrix} I & 0 \\ -b_m h N & I \end{pmatrix} \cdots
           \begin{pmatrix} I & a_1 h M \\ 0 & I \end{pmatrix}
           \begin{pmatrix} I & 0 \\ -b_1 h N & I \end{pmatrix},

where

    a_i = \sum_{j=1}^{k} \rho_{ij}, \qquad b_i = \sum_{j=1}^{k} \sigma_{ij}, \qquad i = 1, \ldots, m.   (13)

In the autonomous case there exists an extensive list of splitting methods for separable systems in the literature (see [9, 12, 14, 15, 17] and references therein). In addition, for partitioned linear systems extremely efficient methods can be constructed due to the special structure of the system [7, 2, 3]. Our goal here is to start from a set of coefficients a_i, b_i (i = 1, \ldots, m) which provides an efficient method for the autonomous case, and then to find appropriate values of c_i, \rho_{ij}, \sigma_{ij} such that (13) holds and (10) leads to a good method for the non-autonomous system (1).

For the convenience of the reader (and potential user of the new class of integration methods) we collect in Table 1 two algorithms implementing schemes (12) and (10) for the numerical integration of equation (1) in the autonomous and the non-autonomous case, respectively. In the latter situation, the proposed algorithm requires computing and storing M(t + c_j h), N(t + c_j h), j = 1, \ldots, k, at each step. We assume that the linear combinations (11) can be computed efficiently. This is the case, in particular, when M(t) = \sum_{i=1}^{l} f_i(t)\, M^{[i]} with M^{[i]} constant matrices and f_i(t) scalar functions, for a small value of l (and similarly for N(t)).

Table 1: Algorithms for the numerical integration of (1) using J steps of length h = t/J: (Algorithm 1) with scheme (12) for the autonomous case, and (Algorithm 2) with scheme (10) for the non-autonomous case.
Algorithm 1: autonomous

    x_0 = x(0); y_0 = y(0)
    do n = 1, J
        do i = 1, m
            y_i = y_{i-1} - b_i h N x_{i-1}
            x_i = x_{i-1} + a_i h M y_i
        enddo
        x_0 = x_m; y_0 = y_m
        if (output) then
            xout(t_n) = x_0; yout(t_n) = y_0
        endif
    enddo

Algorithm 2: non-autonomous

    x_0 = x(0); y_0 = y(0); t_n = t_0
    do n = 1, J
        do i = 1, k
            M_i = M(t_n + c_i h); N_i = N(t_n + c_i h)
        enddo
        do i = 1, m
            M̂ = \rho_{i1} M_1 + \cdots + \rho_{ik} M_k
            N̂ = \sigma_{i1} N_1 + \cdots + \sigma_{ik} N_k
            y_i = y_{i-1} - h N̂ x_{i-1}
            x_i = x_{i-1} + h M̂ y_i
        enddo
        x_0 = x_m; y_0 = y_m; t_n = t_n + h
        if (output) then
            xout(t_n) = x_0; yout(t_n) = y_0
        endif
    enddo

2 Order conditions

One possible approach for deriving the conditions to be satisfied by the coefficients c_i, \rho_{ij}, \sigma_{ij} of a method of order, say, p, is to formally build a solution of equation (6) with the Magnus expansion. It is well known that z(t) can be formally written as

    z(t+h) = e^{\Omega(t,h)}\, z(t),                                               (14)

where \Omega(t,h) = \sum_{k=1}^{\infty} \Omega_k(t,h) and each \Omega_k(t,h) is a multiple integral of combinations of nested commutators containing k matrices \Lambda(t) [13]. This constitutes the so-called Magnus expansion of the solution. An important feature of this expansion is that, when the solution of (6) evolves in a Lie group G, then e^{\Omega(t,h)} stays in G even if the series is truncated, provided that \Lambda(t) belongs to the Lie algebra associated with G [10].

It is possible to obtain \Omega_k(t,h) explicitly by inserting into the recurrence defining the Magnus expansion a Taylor series of the matrices A(t) and B(t). In fact, to take advantage of the time-symmetry property of the solution, which implies that

    \Omega(t+h, -h) = -\Omega(t,h),                                                (15)

it is more convenient to expand around t + h/2. More specifically, if we denote

    \alpha_i = \frac{1}{(i-1)!} \left. \frac{d^{\,i-1} A(s)}{ds^{\,i-1}} \right|_{s = t+h/2}, \qquad
    \beta_i = \frac{1}{(i-1)!} \left. \frac{d^{\,i-1} B(s)}{ds^{\,i-1}} \right|_{s = t+h/2},

so that

    A(t + h/2 + \tau) = \alpha_1 + \alpha_2 \tau + \alpha_3 \tau^2 + \cdots,       (16)
    B(t + h/2 + \tau) = \beta_1 + \beta_2 \tau + \beta_3 \tau^2 + \cdots,

then \Omega(t,h) in (14) can be expanded as

    \Omega(t,h) = \sum_{n \geq 1} h^n \sum_{k=1}^{n} \Omega_{k,n}(t,h),            (17)

where each \Omega_{k,n}(t,h) is a linear combination of nested commutators [\mu_{i_1}, \mu_{i_2}, \ldots, \mu_{i_k}] := [\mu_{i_1}, [\mu_{i_2}, [\ldots, \mu_{i_k}]]], with \mu_{i_j} = \alpha_{i_j} or \mu_{i_j} = \beta_{i_j} for each j = 1, \ldots, k, and i_1 + \cdots + i_k = n. Furthermore, \Omega_{k,n}(t,h) = 0 for even values of n, \Omega_{k,k}(t,h) = 0 for k > 1, and \Omega_{1,1}(t,h) = \alpha_1 + \beta_1. In particular, up to order h^6 one has [5]

    \Omega = h\,\Omega_{1,1} + h^3 (\Omega_{1,3} + \Omega_{2,3}) + h^5 (\Omega_{1,5} + \Omega_{2,5} + \Omega_{3,5} + \Omega_{4,5}) + \mathcal{O}(h^7),   (18)

where (for simplicity, we omit the arguments (t,h))

    \Omega_{1,1} = \alpha_1 + \beta_1, \qquad
    \Omega_{1,3} = \frac{1}{12} (\alpha_3 + \beta_3), \qquad
    \Omega_{2,3} = \frac{1}{12} \big( [\alpha_2, \beta_1] + [\beta_2, \alpha_1] \big),

    \Omega_{1,5} = \frac{1}{80} (\alpha_5 + \beta_5), \qquad
    \Omega_{2,5} = \frac{1}{240} \big( [\alpha_2, \beta_3] + [\beta_2, \alpha_3] \big) + \frac{1}{80} \big( [\alpha_4, \beta_1] + [\beta_4, \alpha_1] \big),

    \Omega_{3,5} = \frac{1}{360} \big( -[\alpha_1, \beta_3, \alpha_1] + [\alpha_1, \beta_1, \alpha_3] - [\beta_1, \alpha_3, \beta_1] + [\beta_1, \alpha_1, \beta_3] \big)
                 + \frac{1}{240} \big( [\alpha_1, \beta_2, \alpha_2] - [\alpha_2, \beta_1, \alpha_2] + [\beta_1, \alpha_2, \beta_2] - [\beta_2, \alpha_1, \beta_2] \big),

    \Omega_{4,5} = \frac{1}{720} \big( [\alpha_1, \beta_1, \alpha_1, \beta_2] - [\beta_1, \alpha_1, \beta_2, \alpha_1] + [\beta_1, \alpha_1, \beta_1, \alpha_2] - [\alpha_1, \beta_1, \alpha_2, \beta_1] \big).
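The truncated series (18) can be checked numerically. The following sketch is our own illustration: the choices M(t) = cos t, N(t) = 1 + t², the finite-difference evaluation of the coefficients α_i, β_i, and the fine-grid reference propagator are all assumptions made for the test. It builds h Ω_{1,1} + h³(Ω_{1,3} + Ω_{2,3}) for the 2×2 system (6)–(7) and verifies that the local error of its exponential behaves as O(h⁵), as predicted by the absence of even-order terms in the expansion.

```python
import numpy as np
from scipy.linalg import expm

# A(t), B(t) as in (7) with d1 = d2 = 1, M(t) = cos t, N(t) = 1 + t^2.
def A(t): return np.array([[0.0, np.cos(t)], [0.0, 0.0]])
def B(t): return np.array([[0.0, 0.0], [-(1.0 + t * t), 0.0]])

def comm(X, Y):
    return X @ Y - Y @ X

def omega_h3(t, h):
    """h*Omega_{1,1} + h^3*(Omega_{1,3} + Omega_{2,3}), with the Taylor
    coefficients alpha_i, beta_i of (16) obtained by central differences."""
    s, d = t + 0.5 * h, 1e-4
    a1, b1 = A(s), B(s)
    a2 = (A(s + d) - A(s - d)) / (2 * d)                 # alpha_2 = A'(s)
    b2 = (B(s + d) - B(s - d)) / (2 * d)
    a3 = (A(s + d) - 2 * A(s) + A(s - d)) / (2 * d**2)   # alpha_3 = A''(s)/2!
    b3 = (B(s + d) - 2 * B(s) + B(s - d)) / (2 * d**2)
    O11 = a1 + b1
    O13 = (a3 + b3) / 12.0
    O23 = (comm(a2, b1) + comm(b2, a1)) / 12.0
    return h * O11 + h**3 * (O13 + O23)

def reference(t, h, n=2000):
    """Accurate propagator over [t, t+h]: product of exponential-midpoint substeps."""
    K, dt = np.eye(2), h / n
    for j in range(n):
        s = t + (j + 0.5) * dt
        K = expm(dt * (A(s) + B(s))) @ K
    return K

t0 = 0.3
errs = [np.linalg.norm(expm(omega_h3(t0, h)) - reference(t0, h))
        for h in (0.1, 0.05)]
print(errs[0] / errs[1])   # roughly 2^5 = 32: the neglected terms are O(h^5)
```

Halving h reduces the defect of the truncated exponential by roughly 2⁵, consistent with the first neglected block h⁵(Ω_{1,5} + Ω_{2,5} + Ω_{3,5} + Ω_{4,5}) in (18).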