
MODIFIED MAGNUS EXPANSION IN APPLICATION TO HIGHLY OSCILLATORY DIFFERENTIAL EQUATIONS

MARIANNA KHANAMIRYAN

Abstract. In this work we introduce a novel approach, combining the modified Magnus method with Filon quadrature, for the solution of systems of highly oscillatory linear and non-linear differential equations. Both methods are extremely efficient in the setting of high oscillation compared to conventional ideas. This paper presents and highlights the advantages and superior performance of numerical methods which are designed from the outset for intrinsically highly oscillatory dynamical systems.

Key words. High oscillation, dynamical systems, Filon quadrature, Magnus method, modified Magnus method, waveform relaxation methods.

AMS subject classifications. 65L05, 65P, 34B, 34E05, 34C15, 34C60

1. Introduction. Stiff dynamical systems are an active area of scientific research, with numerous publications in the field. Classical and even advanced scientific literature provides us with effective methods for smooth solutions; however, so far theory and computational tools for highly oscillatory dynamical systems have been scarce. Even the most advanced techniques using exponential integrators deliver a very poor numerical solution in the presence of high oscillation, unless the step-size h of numerical integration is chosen to be infinitesimal compared to the increasing frequency ω of oscillation, [7], introducing a strong requirement on the product hω. Conversely, numerical techniques empowered by Filon quadrature and related techniques, based on the asymptotic expansion rather than the Taylor expansion, represent a rather efficient and competitive approach compared to conventional methods, [13, 14, 16, 17]. Not only does a large step-size become affordable, but the highly oscillatory nature of the problem also works to our advantage, and our approximation improves as the frequency of oscillation grows. We first proceed with an introduction of the numerical methods and main results, and provide a detailed computational framework for our methods in the upcoming sections.

1.1. Matrix transformation. Given,

    I[f] = \int_a^b X_\omega(t) f(t)\, dt, \qquad X_\omega' = A_\omega X_\omega, \qquad X_\omega = e^{\Omega},   (1.1)

with a matrix-valued kernel X_ω depending on a real parameter ω ≫ 1 and satisfying the matrix linear differential equation (1.1). We assume that A_ω is a non-singular matrix with variable coefficients, σ(A_ω) ⊂ iR, ‖A_ω^{-1}‖ ≪ 1, and f ∈ R^d is a smooth vector-valued function. Numerical solution of this type of integral, and in particular of the matrix linear differential equation in (1.1), is of practical importance in application to the solution of linear and non-linear systems of highly oscillatory differential equations,

    y' = A_\omega y + f, \qquad y(0) = y_0,   (1.2)

with analytic time-dependent solution,

    y(t) = X_\omega(t) y_0 + \int_0^t X_\omega(t - \tau) f(\tau)\, d\tau = X_\omega y_0 + I[f].   (1.3)

We apply the matrix transformation introduced in [17] to obtain the asymptotic expansion of the integral in (1.1).

The first step is to vectorize and transpose the matrix Xω ,

    X_\omega = \begin{pmatrix} x_{11} & x_{12} & \dots & x_{1d} \\ x_{21} & x_{22} & \dots & x_{2d} \\ \vdots & \vdots & \ddots & \vdots \\ x_{d1} & x_{d2} & \dots & x_{dd} \end{pmatrix},

resulting in a vector \bar X_\omega,

    \bar X_\omega = [\bar x_1, \bar x_2, \dots, \bar x_d]^\intercal,

whereby each entry element \bar x_i is a column vector from the matrix X_ω above,

    \bar x_i = \begin{pmatrix} x_{1i} \\ x_{2i} \\ \vdots \\ x_{di} \end{pmatrix}.

DEFINITION 1.1. The matrix direct sum of n matrices constructs a block diagonal matrix from a set of square matrices,

    \bigoplus_{i=1}^{n} A_i = \operatorname{diag}(A_1, A_2, \dots, A_n) = \begin{pmatrix} A_1 & 0 & \dots & 0 \\ 0 & A_2 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & A_n \end{pmatrix}.

DEFINITION 1.2. The Kronecker product, or direct product, denoted by ⊗, is an operation on two matrices of arbitrary size resulting in a block matrix. It gives the matrix of the tensor product with respect to a standard choice of basis. If A is an m × n matrix and B is a p × q matrix, then the Kronecker product A ⊗ B is the mp × nq block matrix,

    A \otimes B = \begin{pmatrix} a_{11}B & \dots & a_{1n}B \\ \vdots & \ddots & \vdots \\ a_{m1}B & \dots & a_{mn}B \end{pmatrix},

with elements defined by

    c_{\alpha\beta} = a_{ij} b_{kl}, \qquad \text{where}

    \alpha = p(i - 1) + k, \qquad \beta = q(j - 1) + l.

The vector \bar X_ω satisfies the linear differential equation \bar X_\omega' = B_\omega \bar X_\omega, with B_\omega = \bigoplus_{i=1}^{d} A_\omega representing the direct d-tuple sum of the original matrix A_ω. The second step is to scale an identity matrix I by taking its direct product with the vector-valued function f^\intercal = [f_1, f_2, \dots, f_d],

    F = f_{1\times d} \otimes I_{d\times d}.
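The identity behind this construction, X_ω(t)f(t) = F(t)\bar X_ω(t) pointwise and hence under the integral sign, is easy to verify numerically. The sketch below is purely illustrative (random data and our own variable names, not part of the paper), using NumPy's `kron`:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3

# Random stand-ins for the kernel X_w(t) and the vector f(t) at a fixed t.
X = rng.standard_normal((d, d))
f = rng.standard_normal(d)

# Column-wise vectorization: X_bar stacks the columns x_1, ..., x_d of X.
X_bar = X.flatten(order="F")

# F = f_{1 x d} (x) I_{d x d}: a d x d^2 block row [f_1 I | f_2 I | ... | f_d I].
F = np.kron(f, np.eye(d))

# The integrands X f and F X_bar agree pointwise, hence so do the integrals.
ok = np.allclose(X @ f, F @ X_bar)
```

Column-stacking with `order="F"` matches the column-wise vectorization of X_ω above.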

Finally, we arrive at the equality,

    \int_a^b X_\omega f\, dt = \int_a^b F \bar X_\omega\, dt.

The latter is suitable for integration by parts and for the representation of the asymptotic expansion of the integral (1.1), providing us with the theoretical background to proceed with the Filon quadrature.

1.2. The asymptotic method. Hereafter we apply the big O notation, defined element-wise for matrix- and vector-valued functions, [17].

THEOREM 1.3. [17] Postulate that

    I[f] = \int_a^b X_\omega(t) f(t)\, dt, \qquad \text{and} \qquad X_\omega' = A_\omega(t) X_\omega,

where A_ω is an arbitrary non-singular matrix, σ(A_ω) ⊂ iR, ω is a real parameter and f ∈ R^d is a smooth vector-valued function. If F_\omega = O(\hat F), B_\omega^{-1} = O(\hat B) and \bar X_\omega = O(\hat X_\omega), then

    I[f] - Q_s^A[f] = (-1)^s \int_a^b \sigma_s(t) \bar X_\omega(t)\, dt = O(\hat F \hat B^{s+1} \hat X), \qquad \text{as } \omega \to \infty.

1.3. The Filon method. For the Filon quadrature, we construct an r-degree polynomial interpolation v of the vector-valued function f in (1.1),

    v(t) = \sum_{l=1}^{\nu} \sum_{j=0}^{\theta_l - 1} \alpha_{l,j}(t) f^{(j)}(t_l),

which agrees with the function values and derivatives, v^{(j)}(t_l) = f^{(j)}(t_l), at the node points a = t_1 < t_2 < \dots < t_\nu = b, with associated multiplicities \theta_1, \theta_2, \dots, \theta_\nu, for j = 0, 1, \dots, \theta_l - 1 and l = 1, 2, \dots, \nu. By definition, the Filon-type method is

    Q_s^F[f] = \int_a^b X_\omega(t) v(t)\, dt = \sum_{l=1}^{\nu} \sum_{j=0}^{\theta_l - 1} \beta_{l,j} f^{(j)}(t_l),

where \beta_{l,j} = \int_a^b X_\omega(t) \alpha_{l,j}(t)\, dt.

THEOREM 1.4. [17] Postulate that

    I[f] = \int_a^b X_\omega(t) f(t)\, dt, \qquad \text{and} \qquad X_\omega' = A_\omega(t) X_\omega,

where A_ω is an arbitrary non-singular matrix, σ(A_ω) ⊂ iR, ω is a real parameter and f ∈ R^d is a smooth vector-valued function. If F_\omega = O(\hat F), B_\omega^{-1} = O(\hat B) and \bar X_\omega = O(\hat X_\omega), then

    I[f] - Q_s^F[f] = (-1)^s \int_a^b \sigma_s(t) \bar X_\omega(t)\, dt = O(\hat F \hat B^{s+1} \hat X), \qquad \text{as } \omega \to \infty.
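In the simplest scalar setting, X_ω(t) = e^{iωt}, a Filon-type rule with end-point Hermite data can be sketched in a few lines. The test integrand f(t) = 1/(1 + t²) and all function names below are our own illustration (not the paper's vector-valued construction); the moments \int_a^b t^k e^{i\omega t}\, dt are computed exactly by integration by parts:

```python
import numpy as np

def filon_moments(a, b, omega, r):
    # Exact moments m_k = int_a^b t^k e^{i*omega*t} dt via integration by parts.
    iw = 1j * omega
    m = [(np.exp(iw * b) - np.exp(iw * a)) / iw]
    for k in range(1, r + 1):
        m.append((b**k * np.exp(iw * b) - a**k * np.exp(iw * a)) / iw
                 - k * m[k - 1] / iw)
    return m

def filon(a, b, omega, f, df):
    # Cubic Hermite interpolant p of f (values + first derivatives at the two
    # end points), then exact integration of p(t) e^{i*omega*t} via the moments.
    V = np.array([[1, a, a**2,   a**3],
                  [0, 1, 2*a, 3*a**2],
                  [1, b, b**2,   b**3],
                  [0, 1, 2*b, 3*b**2]], dtype=float)
    c = np.linalg.solve(V, np.array([f(a), df(a), f(b), df(b)]))
    return sum(ck * mk for ck, mk in zip(c, filon_moments(a, b, omega, 3)))

f  = lambda t: 1 / (1 + t**2)
df = lambda t: -2 * t / (1 + t**2)**2
omega = 50.0
approx = filon(0.0, 1.0, omega, f, df)

# Brute-force reference on a fine grid (trapezoidal rule).
t = np.linspace(0.0, 1.0, 400_001)
g = f(t) * np.exp(1j * omega * t)
ref = np.sum((g[1:] + g[:-1]) / 2) * (t[1] - t[0])
err = abs(approx - ref)
```

With ω = 50 the error of this two-point rule is already small, and it improves as the frequency grows, in line with the asymptotic error bound above.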

THEOREM 1.5. [17] Let \theta_1, \theta_\nu \ge s. Then the numerical order of the Filon-type method is equal to r = \sum_{l=1}^{\nu} \theta_l - 1.

Needless to say, \theta_1, \theta_\nu \ge s define the smoothness data of the function f used at the end points, and hence contribute to the numerical order of the Filon method.

1.4. Magnus expansion. At this stage we find it suitable to draw our attention to the numerical solution of the matrix ordinary differential equation,

    X_\omega' = A_\omega X_\omega, \qquad X_\omega(0) = I, \qquad X_\omega = e^{\Omega}.   (1.4)

Here Ω represents the Magnus expansion, an infinite recursive series,

    \Omega(t) = \int_0^t A_\omega(\tau)\, d\tau
    + \frac{1}{2} \int_0^t \Big[ A_\omega(\tau), \int_0^\tau A_\omega(\xi)\, d\xi \Big]\, d\tau
    + \frac{1}{4} \int_0^t \Big[ A_\omega(\tau), \int_0^\tau \Big[ A_\omega(\xi), \int_0^\xi A_\omega(\zeta)\, d\zeta \Big]\, d\xi \Big]\, d\tau
    + \frac{1}{12} \int_0^t \Big[ \Big[ A_\omega(\tau), \int_0^\tau A_\omega(\xi)\, d\xi \Big], \int_0^\tau A_\omega(\zeta)\, d\zeta \Big]\, d\tau
    + \cdots.   (1.5)

From [5], we know that Ω(t) satisfies the following non-linear differential equation,

    \Omega' = \sum_{j=0}^{\infty} \frac{B_j}{j!} \operatorname{ad}_\Omega^j(A_\omega) = \operatorname{dexp}_\Omega^{-1}(A_\omega(t)), \qquad \Omega_0 = 0,   (1.6)

with B_j being the Bernoulli numbers, and the adjoint operator defined as

    \operatorname{ad}_\Omega^0(A_\omega) = A_\omega, \qquad \operatorname{ad}_\Omega(A_\omega) = [\Omega, A_\omega] = \Omega A_\omega - A_\omega \Omega, \qquad \operatorname{ad}_\Omega^{j+1}(A_\omega) = [\Omega, \operatorname{ad}_\Omega^j A_\omega].   (1.7)

Solving this equation for Ω with Picard iteration results in an infinite recursive series, and we use the truncated expansion \Omega_s in approximation of Ω, [18]. Magnus methods preserve peculiar geometric properties of the solution, ensuring that if X_ω is in a matrix Lie group G and A_ω is in the associated Lie algebra \mathfrak{g} of G, then the numerical solution after discretization stays on the manifold. Moreover, Magnus methods preserve the time-symmetric properties of the solution after discretization: for any a > b, X_\omega(a,b)^{-1} = X_\omega(b,a) yields \Omega(a,b) = -\Omega(b,a). These properties appear to be valuable in applications.

1.5. Main results. In our previous works, [16, 17], we developed numerical methods for dynamical systems with constant and variable coefficients, where the representation of the matrix X_ω in (1.1), an infinite Magnus series, was truncated and evaluated numerically. In the present work we expand our view. First, we employ Filon quadrature with the Magnus expansion to solve the integrals in (1.6). Second, we introduce a combination of the modified Magnus method and Filon quadrature in the evaluation of (1.6). Third, we apply the developed solutions in the numerical approximation of (1.3).

2. The Magnus method. In this section we focus on Magnus methods for the approximation of the matrix-valued function X_ω in (1.4). There is a large list of publications available on Lie-group methods; here we refer to some of them: [1, 2, 4, 8, 9, 12, 15, 20, 21].

Given the representation for Ω(t) in (1.6), numerical evaluation of the commutator brackets with conventional quadrature rules is well presented in the literature, [2], [12], [8]. For example, one chooses a symmetric grid c_1, c_2, \dots, c_\nu, taking Gaussian points symmetric with respect to 1/2, and considers the set \{A_1, A_2, \dots, A_\nu\}, with A_k = hA(t_0 + c_k h), k = 1, 2, \dots, \nu. Linear combinations of this basis form an adjoint basis \{B_1, B_2, \dots, B_\nu\}, with

    A_k = \sum_{l=1}^{\nu} \Big( c_k - \frac{1}{2} \Big)^{l-1} B_l, \qquad k = 1, 2, \dots, \nu.

In this basis the sixth-order method, with Gaussian points c_1 = \frac{1}{2} - \frac{\sqrt{15}}{10}, c_2 = \frac{1}{2}, c_3 = \frac{1}{2} + \frac{\sqrt{15}}{10} and A_k = hA(t_0 + c_k h), is

    \Omega(t_0 + h) \approx B_1 + \frac{1}{12} B_3 - \frac{1}{12}[B_1, B_2] + \frac{1}{240}[B_2, B_3]   (2.1)
    + \frac{1}{360}[B_1, [B_1, B_3]] - \frac{1}{240}[B_2, [B_1, B_2]] + \frac{1}{720}[B_1, [B_1, [B_1, B_2]]],   (2.2)

where

    B_1 = A_2, \qquad B_2 = \frac{\sqrt{15}}{3}(A_3 - A_1), \qquad B_3 = \frac{10}{3}(A_3 - 2A_2 + A_1).

This can be reduced further and written in a more compact manner [8], [12],

    \Omega(t_0 + h) \approx B_1 + \frac{1}{12} B_3 + P_1 + P_2 + P_3,

where

    P_1 = \Big[ B_2, \frac{1}{12} B_1 + \frac{1}{240} B_3 \Big],   (2.3)
    P_2 = \Big[ B_1, \Big[ B_1, \frac{1}{360} B_3 - \frac{1}{60} P_1 \Big] \Big],   (2.4)
    P_3 = \frac{1}{20} [B_2, P_1].   (2.5)

A more profound approach, taking the Taylor expansion of A(t) around the point t_{1/2} = t_0 + \frac{h}{2}, was introduced in [2],

    A(t) = \sum_{i=0}^{\infty} a_i (t - t_{1/2})^i, \qquad \text{with} \qquad a_i = \frac{1}{i!} \frac{d^i A(t)}{dt^i} \Big|_{t = t_{1/2}}.   (2.6)

This can be substituted in the univariate integrals of the form

    B^{(i)} = \frac{1}{h^{i+1}} \int_{-h/2}^{h/2} t^i A(t_{1/2} + t)\, dt, \qquad i = 0, 1, 2, \dots,

to obtain a new basis,

    B^{(0)} = a_0 + \frac{1}{12} h^2 a_2 + \frac{1}{80} h^4 a_4 + \frac{1}{448} h^6 a_6 + \cdots,   (2.7)
    B^{(1)} = \frac{1}{12} h a_1 + \frac{1}{80} h^3 a_3 + \frac{1}{448} h^5 a_5 + \cdots,   (2.8)
    B^{(2)} = \frac{1}{12} a_0 + \frac{1}{80} h^2 a_2 + \frac{1}{448} h^4 a_4 + \frac{1}{2304} h^6 a_6 + \cdots.   (2.9)
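As a consistency check, the compact form (2.3)–(2.5) should agree with the expanded commutator sum through O(h⁶), since B_1 = O(h), B_2 = O(h²), B_3 = O(h³). The numerical sketch below (random matrices, our own code and names, not from the paper) confirms this:

```python
import numpy as np

def comm(X, Y):
    return X @ Y - Y @ X

def omega_expanded(B1, B2, B3):
    # Sixth-order Magnus truncation in expanded commutator form, cf. (2.1)-(2.2).
    return (B1 + B3/12 - comm(B1, B2)/12 + comm(B2, B3)/240
            + comm(B1, comm(B1, B3))/360 - comm(B2, comm(B1, B2))/240
            + comm(B1, comm(B1, comm(B1, B2)))/720)

def omega_compact(B1, B2, B3):
    # Compact form with P_1, P_2, P_3, cf. (2.3)-(2.5).
    P1 = comm(B2, B1/12 + B3/240)
    P2 = comm(B1, comm(B1, B3/360 - P1/60))
    P3 = comm(B2, P1)/20
    return B1 + B3/12 + P1 + P2 + P3

rng = np.random.default_rng(1)
C1, C2, C3 = (rng.standard_normal((4, 4)) for _ in range(3))

def gap(h):
    # B_k = h A(t_0 + c_k h) implies B1 ~ h, B2 ~ h^2, B3 ~ h^3.
    return np.linalg.norm(omega_expanded(h*C1, h**2*C2, h**3*C3)
                          - omega_compact(h*C1, h**2*C2, h**3*C3))

# The discrepancy is O(h^7): halving h should shrink it by about 2^7 = 128.
ratio = gap(0.5) / gap(0.25)
```

A ratio near 2⁷ = 128 confirms that the two forms differ only by O(h⁷) terms, beyond the order of the method.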

In these terms a second-order method reads e^{\Omega} = e^{hB^{(0)}} + O(h^3). Whereas for a sixth-order method, \Omega = \sum_{i=1}^{4} \tilde\Omega_i + O(h^7), one needs to evaluate only four commutators,

    \tilde\Omega_1 = h B^{(0)},   (2.10)
    \tilde\Omega_2 = h^2 \Big[ B^{(1)}, \frac{3}{2} B^{(0)} - 6 B^{(2)} \Big],   (2.11)
    \tilde\Omega_3 + \tilde\Omega_4 = h^2 \Big[ B^{(0)}, \Big[ B^{(0)}, \frac{1}{2} h B^{(2)} - \frac{1}{60} \tilde\Omega_2 \Big] \Big] + \frac{3}{5} h \big[ B^{(1)}, \tilde\Omega_2 \big].   (2.12)

The numerical behaviour of the fourth and sixth order classical Magnus methods is illustrated in Figures 2.1, 2.2, 2.3, 2.4, 2.5 and 2.6. The method is applied to solve the Airy equation y''(t) = -ty(t) with [1,0]^⊺ initial conditions, t ∈ [0, 1000], for various step-sizes, h = 1/4, h = 1/10 and h = 1/25. Comparison shows that for larger step-sizes h both the fourth and sixth order methods give similar results, as illustrated in Figures 2.1, 2.2, 2.4 and 2.5. However, for smaller step-sizes the sixth order Magnus method improves more rapidly in approximation than the fourth order method, Figures 2.3 and 2.6.
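The coefficients in (2.7)–(2.9) follow from the even moments of t^i over [-h/2, h/2]. A scalar sanity check, with the illustrative choice A(t) = cos t and t_{1/2} = 0 (so that a_i = A^{(i)}(0)/i!, i.e. a_0 = 1, a_2 = -1/2, a_4 = 1/24, a_6 = -1/720):

```python
import numpy as np

h = 0.3

# Left-hand side of (2.7): B^(0) = (1/h) int_{-h/2}^{h/2} cos(t) dt, exactly.
lhs = 2 * np.sin(h / 2) / h

# Right-hand side: a_0 + (1/12) h^2 a_2 + (1/80) h^4 a_4 + (1/448) h^6 a_6.
rhs = 1 - h**2 / 12 / 2 + h**4 / 80 / 24 - h**6 / 448 / 720

err = abs(lhs - rhs)   # the truncation error is O(h^8)
```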


FIG. 2.1. Global error of the fourth order Magnus method for the Airy equation y''(t) = -ty(t) with [1,0]^⊺ initial conditions, 0 ≤ t ≤ 1000 and step-size h = 1/4.

The following theorem provides a sufficient condition for the convergence of the Magnus series for a bounded linear operator A(t) in a Hilbert space, [3], extending Theorem 3 from [19], where the same result is stated for real matrices.

THEOREM 2.1. (F. Casas, [3]) Consider the differential equation X' = A(t)X defined in a Hilbert space \mathcal{H} with X(0) = I, and let A(t) be a bounded linear operator on \mathcal{H}. Then the Magnus series Ω(t) in (1.6) converges in the interval t ∈ [0, T) such that

    \int_0^T \|A(\tau)\|\, d\tau < \pi,   (2.13)

and the sum Ω(t) satisfies \exp\Omega(t) = X(t).

This result is applicable to bounded operators, whereas we consider matrices with the property σ(A_ω) ⊂ iR and ω → ∞. In this respect we refer to the work on Magnus integrators for the Schrödinger equation, [6],

    i \frac{d\psi}{dt} = H(t)\psi, \qquad \psi(t_0) = \psi_0.


FIG. 2.2. Global error of the fourth order Magnus method for the Airy equation y''(t) = -ty(t) with [1,0]^⊺ initial conditions, 0 ≤ t ≤ 1000 and step-size h = 1/10.


FIG. 2.3. Global error of the fourth order Magnus method for the Airy equation y''(t) = -ty(t) with [1,0]^⊺ initial conditions, 0 ≤ t ≤ 1000 and step-size h = 1/25.

The time-dependent Hamiltonian H(t) is a finite-dimensional Hermitian operator. As the result of discretization of an unbounded operator, typically a sum of a discretized negative Laplacian and a time-dependent potential, H(t) may have an arbitrarily large norm. In [6], the authors provide asymptotically sharp error bounds for Magnus integrators in the framework applied to the time-dependent Schrödinger equation, with no restrictions on the smallness of, nor bounds on, h‖H(t)‖, where h stands for the step-size of the numerical integrator.

In the present work we introduce an alternative method to solve equations of the kind (1.4). We show that applying Filon quadrature in combination with the Magnus method or the modified Magnus method to evaluate the integrals in the Magnus expansion results in higher accuracy in the solution of (1.4), and we extend the solution of the matrix exponential to the solution of X_ω in (1.3), satisfying (1.4). Initially, we test our methods on the following Airy-type equation,

    y''(t) + g(t) y(t) = 0, \qquad g(t) > 0 \ \text{for} \ t > 0, \qquad \text{and} \qquad \lim_{t \to \infty} g(t) = +\infty.   (2.14)
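A classical Magnus baseline for such equations is easy to prototype: truncating (1.5) after the first term and freezing the coefficient matrix at the step midpoint gives the second-order exponential midpoint rule, whose matrix exponential is available in closed form for g(t) = t. The code below is an illustrative sketch of ours (with a self-convergence test standing in for an exact solution):

```python
import numpy as np

def expA(tm, h):
    # Closed-form e^{h A(tm)} for A(tm) = [[0, 1], [-g(tm), 0]], g(t) = t, tm > 0.
    q = np.sqrt(tm)
    return np.array([[np.cos(q*h),      np.sin(q*h)/q],
                     [-q*np.sin(q*h),   np.cos(q*h)]])

def magnus2(t0, t1, y0, h):
    # Second-order Magnus truncation (exponential midpoint rule):
    # Omega ~ h A(t_n + h/2) on each step.  Start at t0 > 0 so q > 0.
    y, t = np.array(y0, dtype=float), t0
    for _ in range(int(round((t1 - t0) / h))):
        y = expA(t + h/2, h) @ y
        t += h
    return y

y1 = magnus2(1.0, 10.0, [1.0, 0.0], 1e-2)
y2 = magnus2(1.0, 10.0, [1.0, 0.0], 5e-3)
y3 = magnus2(1.0, 10.0, [1.0, 0.0], 2.5e-3)

# Self-convergence: for a second-order method the error ratio approaches 2^2 = 4.
ratio = np.linalg.norm(y1 - y2) / np.linalg.norm(y2 - y3)
```

The observed ratio near 4 confirms second-order behaviour; the higher-order schemes above follow the same pattern with additional commutator terms.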


FIG. 2.4. Global error of the sixth order Magnus method for the Airy equation y''(t) = -ty(t) with [1,0]^⊺ initial conditions, 0 ≤ t ≤ 1000 and step-size h = 1/4.


FIG. 2.5. Global error of the sixth order Magnus method for the Airy equation y''(t) = -ty(t) with [1,0]^⊺ initial conditions, 0 ≤ t ≤ 1000 and step-size h = 1/10.

We can rewrite this ordinary differential equation in a vector form,

    y' = A_\omega(t) y, \qquad \text{where} \qquad A_\omega(t) = \begin{pmatrix} 0 & 1 \\ -g(t) & 0 \end{pmatrix}.   (2.15)

The equation is highly oscillatory due to the large imaginary spectrum of the matrix A_ω. We can now apply Filon quadrature to evaluate Ω and the corresponding matrix exponential.

3. The modified Magnus method. In this section we continue our discussion of the solution of the matrix differential equation, and introduce the modified Magnus method, in combination with Filon quadrature, to solve the linear oscillator

    y' = A_\omega(t) y, \qquad y(0) = y_0, \qquad \sigma(A_\omega) \subset i\mathbb{R}, \quad \omega \to \infty.   (3.1)

Introducing a local change of variables, we write the solution in the form,

    y(t) = e^{(t - t_n) A_\omega(t_n + \frac{1}{2}h)} x(t - t_n), \qquad t \ge t_n.   (3.2)


FIG. 2.6. Global error of the sixth order Magnus method for the Airy equation y''(t) = -ty(t) with [1,0]^⊺ initial conditions, 0 ≤ t ≤ 1000 and step-size h = 1/25.

Observe that,

    y' = A_\omega\Big(t_n + \frac{1}{2}h\Big) e^{(t - t_n)A_\omega(t_n + \frac{1}{2}h)} x(t - t_n) + e^{(t - t_n)A_\omega(t_n + \frac{1}{2}h)} x'(t - t_n)
       = A_\omega(t) y(t)
       = A_\omega(t) e^{(t - t_n)A_\omega(t_n + \frac{1}{2}h)} x(t - t_n).   (3.3)

    A_\omega\Big(t_n + \frac{1}{2}h\Big) e^{(t - t_n)A_\omega(t_n + \frac{1}{2}h)} x(t - t_n) - A_\omega(t) e^{(t - t_n)A_\omega(t_n + \frac{1}{2}h)} x(t - t_n) = -e^{(t - t_n)A_\omega(t_n + \frac{1}{2}h)} x'(t - t_n).   (3.4)

In summary, x(t) satisfies the following differential equation,

x′ = B(t)x, x(0)= yn, (3.5) with

    B(t) = e^{-tA_\omega(t_{n+1/2})} \big[ A_\omega(t_n + t) - A_\omega(t_{n+1/2}) \big] e^{tA_\omega(t_{n+1/2})}, \qquad \text{where} \qquad t_{n+1/2} = t_n + \frac{h}{2}.

Recalling our original equation for y(t), we develop its numerical solution in terms of x(t),

    y_{n+1} = e^{hA_\omega(t_n + h/2)} x_n,   (3.6)
    x_n = e^{\tilde\Omega_n} y_n.   (3.7)

It is evident that in the representation for \tilde\Omega_n the commutator brackets are now formed by the matrix B(t). It is possible to reduce the cost of evaluation of the matrix B(t) by simplifying

it, [11]. Denote q = \sqrt{g(t_n + \frac{1}{2}h)} and v(t) = g(t_n + t) - g(t_n + \frac{1}{2}h). Then

    B(t) = v(t) \begin{pmatrix} (2q)^{-1}\sin 2qt & q^{-2}\sin^2 qt \\ -\cos^2 qt & -(2q)^{-1}\sin 2qt \end{pmatrix}   (3.8)
         = v(t) \begin{pmatrix} q^{-1}\sin qt \\ -\cos qt \end{pmatrix} \begin{pmatrix} \cos qt & q^{-1}\sin qt \end{pmatrix},   (3.9)

and for the product,

    B(t)B(s) = v(t)v(s) \frac{\sin q(s - t)}{q} \begin{pmatrix} q^{-1}\sin qt \\ -\cos qt \end{pmatrix} \begin{pmatrix} \cos qs & q^{-1}\sin qs \end{pmatrix}.   (3.10)
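The rank-one factorization (3.9) can be verified directly against the definition of B(t) in the Airy case g(t) = t; the numerical values below are our own illustration:

```python
import numpy as np

tn, h, t = 2.0, 0.1, 0.03
g = lambda s: s                        # Airy-type equation: g(t) = t
q = np.sqrt(g(tn + h/2))               # frozen frequency, q^2 = g(t_{n+1/2})
v = g(tn + t) - g(tn + h/2)

def expA(s):
    # Closed-form e^{s A} for the frozen matrix A = [[0, 1], [-q^2, 0]].
    return np.array([[np.cos(q*s),      np.sin(q*s)/q],
                     [-q*np.sin(q*s),   np.cos(q*s)]])

# Definition: B(t) = e^{-tA} [ A_w(t_n + t) - A_w(t_{n+1/2}) ] e^{tA}.
A_t   = np.array([[0.0, 1.0], [-g(tn + t), 0.0]])
A_mid = np.array([[0.0, 1.0], [-q**2,      0.0]])
B_def = expA(-t) @ (A_t - A_mid) @ expA(t)

# Rank-one form (3.9): B(t) = v(t) (q^{-1} sin qt, -cos qt)^T (cos qt, q^{-1} sin qt).
u = np.array([np.sin(q*t)/q, -np.cos(q*t)])
w = np.array([np.cos(q*t),    np.sin(q*t)/q])
B_rank1 = v * np.outer(u, w)

match = np.allclose(B_def, B_rank1)
```

The same outer-product structure yields the product formula (3.10), since the inner product of the two trigonometric vectors collapses to q^{-1} sin q(s - t).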

It was shown in [11] that

    \|B(t)\|^2 = \cos^2 qt + \frac{\sin^2 qt}{q^2} \qquad \text{and} \qquad B(t) = O(t - t_{n+1/2}).

Given the compact representation of the matrix B(t) with oscillatory entries, we solve Ω(t) in (1.6) with a Filon-type method, approximating the functions in B(t) by a polynomial \tilde v(t), for example by Hermite interpolation, as in the classical Filon-type method. Furthermore, in our approximation we use end points only, although the method is general and more nodes of approximation may be required.

The performance of the modified Magnus method is better than that of the classical Magnus method for a number of reasons. Firstly, the fact that the matrix B is small, B(t) = O(t - t_{n+1/2}), contributes a higher order correction to the solution. Secondly, the order of the modified Magnus method increases from p = 2s to p = 3s + 1, [10].

In Figure 3.1 we present the global error of the fourth order modified Magnus method with exact integrals for the Airy equation y''(t) = -ty(t) with [1,0]^⊺ initial conditions, the time interval 0 ≤ t ≤ 2000 and step-size h = 1/5. This can be compared with the global error of the FM method applied to the same equation with exactly the same conditions and step-size, Figure 3.2. In Figures 3.3 and 3.4 we compare the fourth order classical Magnus method with the remarkable performance of the fourth order FM method, applied to the Airy equation with a large step-size equal to h = 1/2. In Figures 3.5 and 3.6 we solve the Airy equation with the fourth order FM method with step-sizes h = 1/4 and h = 1/10 respectively.

3.1. The modified WRFM. Applications of the modified Magnus method include sys- tems of highly oscillatory non-linear equations

    y' = A_\omega(t) y + f(t, y), \qquad y(t_0) = y_0,   (3.11)

with analytic solution

    y(t) = X_\omega(t) y_0 + \int_0^t X_\omega(t - \tau) f(\tau, y(\tau))\, d\tau = X_\omega y_0 + I[f].   (3.12)

We apply the modified Magnus techniques to the system of non-linear ODEs by introducing the following change of variables,

    y(t) = e^{(t - t_n)A_\omega(t_n)} x(t - t_n), \qquad t \ge t_0,   (3.13)


FIG. 3.1. Global error of the fourth order modified Magnus method with exact evaluation of the integral commutators for the Airy equation y''(t) = -ty(t) with [1,0]^⊺ initial conditions, 0 ≤ t ≤ 2000 and step-size h = 1/5.


FIG. 3.2. Global error of the fourth order FM method with end points only and multiplicities all 2 for the Airy equation y''(t) = -ty(t) with [1,0]^⊺ initial conditions, 0 ≤ t ≤ 2000 and step-size h = 1/5.

and solving the system locally with a constant matrix Aω , while x(t) satisfies a non-linear highly oscillatory equation

    x'(t) = B_\omega(t) x(t) + e^{-tA_\omega(t_n)} f\big(t, e^{tA_\omega(t_n)} x(t)\big), \qquad t \ge 0,   (3.14)

with exactly the same matrix B_ω as for the linear equations,

    B_\omega(t) = e^{-tA_\omega(t_n)} \big[ A_\omega(t) - A_\omega(t_n) \big] e^{tA_\omega(t_n)}.   (3.15)

The analytic solution of the new system is of the form

    x(t) = X_\omega(t) y_0 + \int_0^t X_\omega(t - \tau) e^{-\tau A_\omega(t_n)} f\big(\tau, e^{\tau A_\omega(t_n)} x(\tau)\big)\, d\tau.   (3.16)

We solve the system applying the WRF method, [16],

    x^{[0]}(t) = y_0,   (3.17)
    x^{[m+1]}(t) = \bar X(t) y_0 + \int_0^t \bar X(t - \tau) e^{-\tau A(t_n)} f\big(\tau, e^{\tau A(t_n)} x^{[m]}(\tau)\big)\, d\tau.   (3.18)


FIG. 3.3. Global error of the fourth order FM method with end points only and multiplicities all 2 for the Airy equation y''(t) = -ty(t) with [1,0]^⊺ initial conditions, 0 ≤ t ≤ 2000 and step-size h = 1/2.


FIG. 3.4. Global error of the fourth order Magnus method for the Airy equation y''(t) = -ty(t) with [1,0]^⊺ initial conditions, 0 ≤ t ≤ 2000 and step-size h = 1/2.

REFERENCES

[1] A. Marthinsen and B. Owren, Integration methods based on rigid frames, Norwegian University of Science & Technology Tech. Report, (1997).
[2] S. Blanes, F. Casas, and J. Ros, Improved high order integrators based on the Magnus expansion, BIT, 40 (2000), pp. 434–450.
[3] F. Casas, Sufficient conditions for the convergence of the Magnus expansion, J. Phys. A, 40 (2007), pp. 15001–15017.
[4] P. E. Crouch and R. Grossman, Numerical integration of ordinary differential equations on manifolds, J. Nonlinear Sci., 3 (1993), pp. 1–33.
[5] F. Hausdorff, Die symbolische Exponentialformel in der Gruppentheorie, Berichte der Sächsischen Akademie der Wissenschaften (Math. Phys. Klasse), 58 (1906), pp. 19–48.
[6] M. Hochbruck and C. Lubich, On Magnus integrators for time-dependent Schrödinger equations, SIAM J. Numer. Anal., 41 (2003), pp. 945–963.
[7] M. Hochbruck and A. Ostermann, Exponential integrators, Acta Numerica, 19 (2010), pp. 209–286.
[8] A. Iserles, Brief introduction to Lie-group methods, in Collected Lectures on the Preservation of Stability under Discretization (Fort Collins, CO, 2001), SIAM, Philadelphia, PA, 2002, pp. 123–143.
[9] A. Iserles, On the discretization of double-bracket flows, Found. Comput. Math., 2 (2002), pp. 305–329.


FIG. 3.5. Global error of the fourth order FM method with end points only and multiplicities all 2 for the Airy equation y''(t) = -ty(t) with [1,0]^⊺ initial conditions, 0 ≤ t ≤ 2000 and step-size h = 1/4.


FIG. 3.6. Global error of the fourth order FM method with end points only and multiplicities all 2 for the Airy equation y''(t) = -ty(t) with [1,0]^⊺ initial conditions, 0 ≤ t ≤ 2000 and step-size h = 1/10.

[10] A. Iserles, On the global error of discretization methods for highly-oscillatory ordinary differential equations, BIT, 42 (2002), pp. 561–599.
[11] A. Iserles, On the method of Neumann series for highly oscillatory equations, BIT, 44 (2004), pp. 473–488.
[12] A. Iserles, Magnus expansions and beyond, to appear in Proc. MPIM, Bonn, (2009).
[13] A. Iserles and S. P. Nørsett, On quadrature methods for highly oscillatory integrals and their implementation, BIT, 44 (2004), pp. 755–772.
[14] A. Iserles and S. P. Nørsett, Efficient quadrature of highly oscillatory integrals using derivatives, Proc. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci., 461 (2005), pp. 1383–1399.
[15] A. Iserles and A. Zanna, Efficient computation of the matrix exponential by generalized polar decompositions, SIAM J. Numer. Anal., 42 (2005), pp. 2218–2256.
[16] M. Khanamiryan, Quadrature methods for highly oscillatory linear and nonlinear systems of ordinary differential equations. I, BIT, 48 (2008), pp. 743–761.
[17] M. Khanamiryan, Quadrature methods for highly oscillatory linear and nonlinear systems of ordinary differential equations. II, submitted to BIT, (2009).
[18] W. Magnus, On the exponential solution of differential equations for a linear operator, Comm. Pure Appl. Math., 7 (1954), pp. 649–673.
[19] P. C. Moan and J. Niesen, Convergence of the Magnus series, Found. Comput. Math., 8 (2008), pp. 291–301.
[20] H. Munthe-Kaas, Runge–Kutta methods on Lie groups, BIT, 38 (1998), pp. 92–111.

[21] A. Zanna, The method of iterated commutators for ordinary differential equations on Lie groups, Tech. Rep. DAMTP 1996/NA12, University of Cambridge, (1996).