A Fast Method for Solving the Helmholtz Equation Based on Wave Splitting
JELENA POPOVIC
Licentiate Thesis, Stockholm, Sweden 2009. TRITA-CSC-A 2009:11, ISSN 1653-5723, ISRN KTH/CSC/A--09/11-SE, ISBN 978-91-7415-370-5. KTH School of Computer Science and Communication, SE-100 44 Stockholm, Sweden.
Academic dissertation which, by permission of Kungl Tekniska högskolan (KTH), is submitted for public examination for the degree of Licentiate of Technology in Numerical Analysis on Monday, June 12, 2009, at 13:15 in room D41, Main Building, Lindstedtsvägen 3, Kungl Tekniska högskolan, Stockholm.
© Jelena Popovic, 2009
Printed by: Universitetsservice US AB
Abstract
In this thesis, we propose and analyze a fast method for computing the solution of the Helmholtz equation in a bounded domain with a variable wave speed function. The method is based on wave splitting: the Helmholtz equation is first split into one-way wave equations, which are then solved iteratively for a given tolerance. The source functions depend on the wave speed function and on the solutions of the one-way wave equations from the previous iteration. The solution of the Helmholtz equation is then approximated by the sum of the one-way solutions at every iteration. To reduce the computational cost, the source functions are thresholded, and in the domain where they vanish the one-way wave equations are solved with geometrical optics (GO) at a computational cost independent of the frequency. Elsewhere, the equations are fully resolved with a Runge-Kutta method. We have been able to show rigorously in one dimension that the algorithm is convergent and that, for fixed accuracy, the computational cost is just $O(\omega^{1/p})$ for a p-th order Runge-Kutta method, where $\omega$ is the frequency. This has been confirmed by numerical experiments.
Preface
The author of this thesis contributed to the ideas presented, performed the numerical com- putations and wrote all parts of the manuscript, except the appendix that is written by Dr. Olof Runborg.
v
Acknowledgments
First of all I would like to thank my supervisor Olof Runborg for his patience and help. His ideas, encouragement and support helped me a lot to understand and solve many mathematical problems. I am very happy to have the opportunity to work with him. I would also like to thank Jesper Oppelstrup for his comments and advice. A big thank you to all my colleagues at the department, especially to Sara, Mohammad, Murtazo, Tomas, Håkon and Henrik. Having you not only as my colleagues but also as my friends means a lot to me. I wish to thank the Nikolic and Minic families for their love, support and understanding. Special thanks to Vladimir for making me laugh even when I am not in the mood. I am very grateful to my parents and my brother for everything they do for me. Thank you for always being by my side and never letting me go down. Finally, I would like to thank Miki for believing in me and encouraging me every time I lose faith in myself.
Contents
Preface v
Acknowledgments vii
Contents viii
1 Introduction 1
1.1 Wave Propagation Problems 1
1.2 Geometrical Optics 4
1.3 Numerical Methods 6

2 A Fast Method for Helmholtz Equation 15
2.1 Derivation of the Method in One Dimension 15
2.2 Formulation in Two Dimensions 19

3 Analysis 21
3.1 Utility Results 21
3.2 Estimates of One-Way Solutions 23
3.3 Convergence of the Algorithm 26

4 Error Analysis for Numerical Implementation 33
4.1 Numerical Implementation 33
4.2 Error Analysis 36
5 Numerical Experiments 43
Bibliography 61
A Runge–Kutta Schemes for Linear Scalar Problems 65
Chapter 1
Introduction
Simulation of high frequency wave propagation is important in many engineering and scientific disciplines. Currently the interest is driven by new applications in wireless communication (cell phones, Bluetooth) and photonics (optical fibers, filters, switches). Simulation is also used in more classical applications. Some examples in electromagnetism are antenna design and radar signature computation. In acoustics, simulation is used for noise prediction, underwater communication and medical ultrasonography. One significant difficulty with numerical simulation of such problems is that the wavelength is very short compared to the computational domain, and thus many discretization points are needed to resolve the solution. Consequently, the computational complexity of standard numerical methods grows algebraically with the frequency. Therefore, the development of effective numerical methods for high frequency wave propagation problems is important. Recently, a new class of methods has been proposed which, in principle, can solve the wave propagation problem at a cost almost independent of the frequency for a fixed accuracy. The main interest in the new methods has been for scattering problems. These methods have mostly been derived through heuristic arguments, and proving the low cost rigorously is rather difficult, although there are now a few precise proofs about computational cost and accuracy in terms of the frequency. For problems set in a domain much less has been done. The focus of this thesis is on such problems. We develop a fast method for solving the Helmholtz equation in a domain with varying wave speed. We show rigorously that in one dimension the asymptotic computational cost of the method only grows slowly with the frequency, for fixed accuracy.
1.1 Wave Propagation Problems
The basic equation that describes wave propagation problems mathematically is the wave equation,
$$\Delta u(x, t) - \frac{1}{c(x)^2} \frac{\partial^2 u(x, t)}{\partial t^2} = 0, \qquad (x, t) \in \Omega \times \mathbb{R}^+, \tag{1.1}$$
where c is the speed of propagation, which can be variable, and $\Omega \subset \mathbb{R}^n$ is an open domain. The equation is subject to appropriate initial and boundary conditions. The initial conditions are of the form
$$u(x, t) = f(x), \quad u_t(x, t) = g(x), \qquad (x, t) \in \Omega \times \{t = 0\},$$
where f and g are given functions. Boundary conditions can be Dirichlet, i.e.,
$$u(x, t) = \varphi(t), \qquad (x, t) \in \partial\Omega \times \mathbb{R}^+,$$
or Neumann, i.e.,
$$u_n(x, t) = \varphi(t), \qquad (x, t) \in \partial\Omega \times \mathbb{R}^+,$$
or Robin, which is a linear combination of Neumann and Dirichlet boundary conditions. The wave equation is used in acoustics, elasticity and electromagnetism. In acoustics, the evolution of the pressure and particle velocity is modeled by this equation. The equations that describe the motion in an elastic solid are equations for the pressure waves (P-waves) and the shear waves (S-waves). In 1D problems the resulting equations can be decomposed into independent sets of equations for P-waves and S-waves, but in higher dimensions this is not the case. The unknowns are the velocity and shear strain or shear stress. Eliminating one of the unknowns from the system, it can be shown that the other satisfies the wave equation. The equations used to model problems in electromagnetics are Maxwell's equations,
$$\nabla \times H = J + \frac{\partial D}{\partial t}, \qquad \nabla \times E = -\frac{\partial B}{\partial t}, \qquad \nabla \cdot D = \rho, \qquad \nabla \cdot B = 0,$$
where H is the magnetic field, J is the current density, D is the electric displacement, E is the electric field, B is the magnetic flux density and ρ is the electric charge density. Assuming time-harmonic waves, i.e., waves of the form
$$u(x, t) = v(x) e^{i\omega t},$$
where ω is the frequency, the wave equation can be reduced to the Helmholtz equation,
$$\Delta v(x) + \frac{\omega^2}{c(x)^2} v(x) = 0, \qquad x \in \Omega. \tag{1.2}$$
The equation is augmented with boundary conditions, which can be Dirichlet, i.e.,
$$v(x) = g, \qquad x \in \partial\Omega,$$
or Neumann, i.e.,
$$v_n(x) = g, \qquad x \in \partial\Omega, \tag{1.3}$$
or a Robin boundary condition. Thus, the Helmholtz equation represents the time-harmonic form of the wave equation. Henceforth we refer to problems of type (1.1) and (1.2) as domain problems.

Another type of wave propagation problem is the scattering problem. Scattering problems concern the propagation of waves that collide with some object. More precisely, let Ω be an object, a scatterer, that is illuminated by an incident wave field $u^{inc}$. Then the wave field that is generated when $u^{inc}$ collides with Ω is the scattered field $u^s$ (see Figure 1.1). There are many examples of scattering problems. One example in nature is a rainbow, which appears when sunlight is scattered by rain drops. Another example is the scattering of light by the air, which is the reason for the blue color of the sky. To describe the scattering problem mathematically, let us consider an impenetrable obstacle Ω in $\mathbb{R}^3$ and assume that the speed of propagation is c(x) = 1 outside Ω. The obstacle is illuminated by some known incident wave $u^{inc}(x, t)$ which solves the constant coefficient Helmholtz equation pointwise, for example a plane wave $e^{i\omega x \cdot \hat{d}}$ in a direction $\hat{d}$. We also assume that $u^{tot} = u^{inc} + u^s = 0$
Figure 1.1: Scattering problem.
on the boundary of the obstacle ∂Ω. Then, the scattering problem is to find the scattered field us(x) such that it satisfies the Helmholtz equation
$$\Delta u^s(x) + \omega^2 u^s(x) = 0, \qquad x \in \mathbb{R}^3 \setminus \bar{\Omega}, \tag{1.4}$$
with boundary condition
$$u^s(x) = -u^{inc}(x), \qquad x \in \partial\Omega, \tag{1.5}$$
and the Sommerfeld radiation condition
$$\lim_{r \to \infty} r \left( \frac{\partial u^s}{\partial r} - i\omega u^s \right) = 0, \tag{1.6}$$
where r = |x|, which guarantees that the scattered wave is outgoing. Instead of boundary condition (1.5), other boundary conditions can be used, as above in (1.3). Using Green's representation formula, the scattering problem can be reformulated as a boundary integral equation, which represents the integral form of the problem. One formulation that is suitable for high-frequency problems [14] is
$$\frac{1}{2} \frac{\partial u}{\partial n}(x) + \int_{\partial\Omega} \left( \frac{\partial \Phi(x, y)}{\partial n(x)} - i\eta\, \Phi(x, y) \right) \frac{\partial u}{\partial n}(y)\, ds(y) = f(x), \qquad x \in \partial\Omega, \tag{1.7}$$
where
$$f(x) = \frac{\partial u^{inc}}{\partial n}(x) - i\eta\, u^{inc}(x),$$
η = η(ω) > 0 is a coupling parameter that ensures the well-posedness of (1.7), and ∂u/∂n is to be determined. Here Φ(x, y, ω) denotes the standard fundamental solution of the Helmholtz equation in $\mathbb{R}^3$, which becomes increasingly oscillatory as ω → ∞. One important difference between scattering problems in a homogeneous medium and the domain problems lies in the way the waves are scattered. In domain problems the waves are scattered wherever the wave speed changes, while in scattering problems the waves are scattered only from the surface of the scatterer.
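For reference (a standard fact, not stated explicitly above), the outgoing fundamental solution referred to here is

```latex
\Phi(x, y, \omega) = \frac{e^{i\omega|x-y|}}{4\pi|x-y|}, \qquad x \neq y,
```

whose phase factor $e^{i\omega|x-y|}$ is what makes the kernel of (1.7) increasingly oscillatory as $\omega \to \infty$.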
1.2 Geometrical Optics
High frequency wave propagation problems require many discretization points or elements to resolve the solution and therefore become computationally very expensive. Instead, asymptotic approximations such as geometrical optics (GO) can be used; see for example [21, 62, 52]. GO is applicable to domain problems as well as to scattering problems. The key idea in GO is that the highly oscillatory solution is represented as a product of a slowly varying amplitude function A and an exponential function of the slowly varying phase function φ multiplied by the large parameter ω. Hence, instead of the oscillating wave field, the unknowns in GO are the phase and the amplitude, which vary on a much coarser scale than the full solution (see Figure 1.2). They are therefore easier to compute numerically, at a cost independent of the frequency. However, the approximation is only accurate for large frequencies. It typically requires that variations in the speed of propagation c(x) are on a scale much larger than the wavelength. Consider the Helmholtz equation (1.2). In the GO
Figure 1.2: Real part of the solution of the Helmholtz equation (top), the amplitude (middle) and the phase (bottom). Note that the amplitude and the phase vary on a much coarser scale than the solution.

approximation, its solution is sought in the form
$$u(x) = e^{i\omega\phi(x)} \sum_{k=0}^{\infty} A_k(x) (i\omega)^{-k}. \tag{1.8}$$
Substituting (1.8) in (1.2) and equating to zero the coefficients of the powers of ω yields the Hamilton-Jacobi type eikonal equation for the phase φ,
$$|\nabla\phi| = \frac{1}{c}, \tag{1.9}$$
and a system of linear transport equations
$$2\nabla\phi \cdot \nabla A_0 + \Delta\phi\, A_0 = 0,$$
$$\vdots \tag{1.10}$$
$$2\nabla\phi \cdot \nabla A_{n+1} + \Delta\phi\, A_{n+1} = -\Delta A_n,$$
for the amplitudes $A_k$, $k = 0, 1, \ldots$. For large frequencies, the terms $A_k$, $k > 0$, in (1.8) can be discarded, so only the transport equation for $A_0$ remains to be solved in (1.10), and $u(x) \approx A_0(x) e^{i\omega\phi(x)}$. If the eikonal equation (1.9) is solved through the method of characteristics, GO can be formulated as a system of ordinary differential equations (ODEs). Let $p = \nabla\phi$ and define a Hamiltonian $H(x, p) = c(x)|p|$. Then, instead of (1.9), we solve the following system of ODEs:
$$\frac{dx}{dt} = \nabla_p H(x, p), \tag{1.11}$$
$$\frac{dp}{dt} = -\nabla_x H(x, p), \tag{1.12}$$
where $(x(t), p(t))$ is a bicharacteristic related to the Hamiltonian $H(x, p)$. The parametrization t corresponds to the phase of the wave, $\phi(x(t)) = \phi(x_0) + t$. There is also a system of ODEs for the amplitude. Finally, one can introduce a kinetic model of GO. It is based on the interpretation that rays are trajectories of particles following Hamiltonian dynamics. Let p be the slowness vector defined above and introduce the phase space (t, x, p), such that the evolution of particles in this space is given by the system of equations (1.11)–(1.12). Letting f(t, x, p) be a particle density function, it will satisfy the Liouville equation [6, 20]
$$f_t + \nabla_p H \cdot \nabla_x f - \nabla_x H \cdot \nabla_p f = 0, \tag{1.13}$$
where $\nabla_p H$ and $\nabla_x H$ are given by (1.11)–(1.12). A disadvantage of asymptotic approximations such as GO is that they cannot capture some typical wave properties, such as diffraction. Diffracted waves are produced when the incident field hits edges, corners or vertices of the obstacle, or when the incident wave hits the tangent points of a smooth scatterer (creeping waves). To overcome the problem with diffracted fields, an asymptotic expansion with correction terms was proposed by Keller [35] in his geometrical theory of diffraction (GTD). GTD makes up for the GO solution on the boundary of the scatterer where some terms of this solution vanish. For example, the GTD expansion of the solution accounts for the phase and the amplitude of the diffracted waves. It does so by means of several laws of diffraction which allow one to use the same principles as in GO to assign a field to each diffracted ray. Another disadvantage of GO is that it fails at caustics, where waves focus and the amplitude $A_0$ is unbounded. One way to remedy this is to use Gaussian beams [44, 49, 3].
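The ray equations (1.11)–(1.12) are typically integrated with standard ODE solvers such as Runge-Kutta methods. A minimal Python sketch for the Hamiltonian H(x, p) = c(x)|p| follows; the speed function, step counts and initial data are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

def c(x):
    # Hypothetical smooth wave speed (an assumption for illustration).
    return 1.0 + 0.2 * x[0]

def grad_c(x, h=1e-6):
    # Gradient of c by central finite differences.
    g = np.zeros(2)
    for i in range(2):
        e = np.zeros(2); e[i] = h
        g[i] = (c(x + e) - c(x - e)) / (2.0 * h)
    return g

def ray_rhs(y):
    # y = (x, p). With H(x, p) = c(x)|p| the ray equations (1.11)-(1.12) read
    #   dx/dt = grad_p H = c(x) p / |p|,   dp/dt = -grad_x H = -|p| grad c(x).
    x, p = y[:2], y[2:]
    pn = np.linalg.norm(p)
    return np.concatenate([c(x) * p / pn, -pn * grad_c(x)])

def trace_ray(x0, p0, T=1.0, n=1000):
    # Classical fourth-order Runge-Kutta; the parameter t is the phase,
    # phi(x(t)) = phi(x0) + t.
    y = np.concatenate([x0, p0])
    dt = T / n
    for _ in range(n):
        k1 = ray_rhs(y)
        k2 = ray_rhs(y + 0.5 * dt * k1)
        k3 = ray_rhs(y + 0.5 * dt * k2)
        k4 = ray_rhs(y + dt * k3)
        y = y + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y[:2], y[2:]

x0 = np.array([0.0, 0.0])
p0 = np.array([1.0, 0.0]) / c(x0)     # |p| = 1/c, so H = c|p| = 1
x, p = trace_ray(x0, p0)
# H is conserved along rays, so |p(t)| c(x(t)) should stay close to 1.
print(np.linalg.norm(p) * c(x))
```

Since H is conserved along solutions of (1.11)–(1.12), the printed value is a simple consistency check of the integrator.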
1.3 Numerical Methods
In this section we describe different numerical methods that can be used for the numerical solution of wave propagation problems. We first consider classical direct numerical methods and GO based methods. We then discuss more recent developments: methods that have an almost frequency independent computational cost, and so-called time-upscaling methods. The method developed in this thesis is strongly related to these new types of methods.
Direct Numerical Methods One way to solve partial differential equations numerically is to use finite difference methods, in which the derivatives are approximated by the differences of the neighboring points on a grid [29]. When a PDE is time dependent, one can discretize it first in space (semi- discretization) and then use some ODE solver to advance in time. Different choice of ODE schemes together with a space discretization of a certain order results in different finite difference methods. One example is the Crank-Nicolson method that is obtained when the second order space discretization is combined with the trapezoidal rule. For explicit methods special attention should be paid to the stability issues, i.e. time step must be shorter then the space discretization step (CFL condition). The equation can also be discretized on a staggered grid which is in some cases computationally advantageous. One standard staggered finite difference scheme for wave equations is the Yee scheme introduced by Kane S. Yee in 1966 [63] and further developed by Taflove in the 70s [55, 56]. Finite volume methods are direct methods that are based on the integral form of the equation instead on the differential equation itself [40]. Similarly to the finite difference methods, the domain is first split into grid cells or finite volumes which do not necessarily have to be of the same size. The unknown is then the approximation of the average of the solution over each cell. In 1D intervals are used for finite volumes, in 2D one can use quadrilaterals and approximate the line integrals by the average of the function values on each side of the line, or triangles and approximate the line integrals along the sides by the function values at neighboring triangles. In finite volume methods it is easy to use unstructured grids which makes those methods suitable for problems with complex geometry. However, the extension of finite volume methods to higher order accuracy can be very complicated. 
Finite element methods (FEM) are based on the variational formulation of the problem, which is obtained when the equation is multiplied by test functions and integrated over the domain using integration by parts [24, 7]. For a discrete approximation of the problem, the function space S on which the variational formulation is defined is replaced by a finite dimensional subspace S_h. The approximation u_h of the solution u is expressed as a linear combination of a finite number of basis functions φ_j(x), which are defined to be continuous and nonzero only on small subdomains. In that way, time independent problems are transformed into a sparse system of equations, and time dependent problems are transformed into an initial value problem for a system of ODEs with sparse matrices. The unknowns are the coefficients in the linear combination of basis functions. In the time dependent case, the vector holding the time derivative of the solution is multiplied by a matrix (the mass matrix). The method therefore becomes implicit, and a system of algebraic equations has to be solved at every time step, which can be computationally costly. There are techniques to avoid this problem, for example approximating the mass matrix by a diagonal matrix ('mass lumping'). A great advantage of FEM is that it is mathematically well motivated, with rigorous error estimates. Furthermore, if there is an energy estimate for the original problem, the stability of the space approximation follows directly. As in finite volume methods, unstructured grids are easy to use, and FEM is thus also very well suited for problems with complex geometries. It should be noted, however, that adaptivity, which can be easily implemented in FEM, plays only a limited role in wave propagation problems, since the same wavelength has to be resolved everywhere.
In discontinuous Galerkin methods [13], similarly to FEM, the original problem is transformed into a weak formulation by multiplying the underlying equation by test functions and integrating by parts. The difference is that the integration is not done over the whole domain but over subintervals centered around the discretization points, similarly to finite volume methods. The requirement on the continuity of the basis functions, and thus of the solution, is relaxed, allowing for discontinuities. The basis functions are chosen to be nonzero only on one subinterval. Compared to FEM, in the time dependent case the problem is transformed into a system of ODEs which can be solved with a fast explicit difference method, i.e., no algebraic system of equations has to be solved at every time step. However, the stability of the solution does not follow automatically as in FEM.

In spectral methods [59, 29, 31], the solution is approximated as a linear combination of smooth basis functions, like Chebyshev or Legendre polynomials or trigonometric functions. The difference from finite element methods is that the basis functions here are usually nonzero over the whole domain. Often, spectral methods involve the use of the Fourier transform. The solution of the problem is first written as its Fourier transform and then substituted into the PDE. In that way, a system of ODEs is obtained, which can then be solved with an ODE solver. The Fourier expansion of the solution normally requires the problem to be periodic. For nonperiodic problems other basis functions are recommended, for example Chebyshev or Legendre polynomials. Spectral methods have high, exponential, order of accuracy, but can be difficult to use in complex geometries.

The direct methods described above can also be applied to the Helmholtz equation. The relative advantages and disadvantages of the methods are the same in terms of handling complex geometry and boundary conditions.
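To make the kind of linear system these discretizations produce concrete: discretizing the 1D Helmholtz equation with second-order finite differences gives a sparse, symmetric but indefinite matrix, which can be solved iteratively with GMRES. A hedged sketch, with all parameter values chosen only for illustration:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# 1D Helmholtz model problem v'' + omega^2 v = f on (0,1), v(0) = v(1) = 0
# (constant speed c = 1; parameters are assumptions for illustration).
omega = 20.0
N = 300
h = 1.0 / (N + 1)
x = np.linspace(h, 1.0 - h, N)

off = np.ones(N - 1) / h**2
diag = (-2.0 / h**2 + omega**2) * np.ones(N)
A = sp.diags([off, diag, off], [-1, 0, 1], format='csr')  # symmetric, indefinite
f = np.exp(-500.0 * (x - 0.5) ** 2)                       # localized source

# GMRES handles the indefiniteness; without a preconditioner the
# convergence deteriorates as omega (and hence N) grows.
v, info = spla.gmres(A, f, restart=N, maxiter=20)
res = np.linalg.norm(A @ v - f) / np.linalg.norm(f)
print(res)
```

The small relative residual confirms convergence here, but for large ω and in higher dimensions such unpreconditioned iterations become impractically slow, which is the difficulty discussed below.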
Finite difference, finite element and finite volume methods lead to sparse systems of equations where the number of unknowns N depends on the frequency, $N \sim \omega$. For high frequencies, these systems are large, and direct solvers like Gaussian elimination become computationally too expensive in higher dimensions. The alternative is to use iterative methods. However, there is a difficulty in this approach as well: since the system of equations is indefinite and ill-conditioned, iterative methods have a slow convergence rate. The convergence rate can be improved by preconditioning, but finding a good preconditioner for the Helmholtz equation is still a challenge [25]. The indefiniteness makes it difficult to use multigrid and the conjugate gradient method. Typically, Krylov subspace methods like GMRES or Bi-CGSTAB are used instead. Preconditioners can for instance be based on incomplete LU-factorization [43] or on fast transform methods [47, 39, 27, 18]. More recently, preconditioners based on a complex shifted Helmholtz operator have been introduced [26]. For scattering problems one needs to solve an integral equation of type (1.7); see for example [14, 36]. Let us consider a simplified version of (1.7) in 1D:
$$\int_a^b \Phi(x, y, \omega)\, u(y)\, ds(y) = f(x), \qquad x \in [a, b]. \tag{1.14}$$
One way to solve (1.14) numerically is to use the collocation method. The idea in this method is to approximate u(x) by
$$u(x) = \sum_{j=1}^{N} c_j \varphi_j(x),$$
where {ϕj(x)} are basis functions in [a, b]. Next, let xj be a set of points in [a, b]. Usually, xj are chosen to be centers of the supports of basis functions (see Figure 1.3). Then, the problem is to find the unknown coefficients {cj} such that
$$\sum_{j=1}^{N} c_j \int_a^b \Phi(x_i, y, \omega)\, \varphi_j(y)\, ds(y) = f(x_i), \qquad i = 1, \ldots, N, \tag{1.15}$$
is satisfied at $\{x_i\}$. Hence, the problem is transformed into a linear system of equations. Since the matrix of the system is dense, the computational cost of direct methods is normally $O(N^3)$. Thus, when N is large, direct methods are not feasible. The alternative is to use iterative methods, which reduce the computational cost to $O(N^2)$. However, for large N this can still be expensive. This problem can be solved with fast multipole methods, which reduce the complexity to $O(N \log N)$ for any fixed accuracy [50, 23]. Besides the collocation method, there are other numerical methods that can be used for the numerical solution of boundary integral equations, for example the Nyström method or the Galerkin method.
Figure 1.3: Collocation method.
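A small Python sketch of the collocation idea follows. All choices here are illustrative assumptions: for numerical robustness it solves a second-kind variant, $u(x) + \int_0^1 K(x,y)u(y)\,dy = f(x)$, with a smooth model kernel rather than the oscillatory first-kind equation (1.14):

```python
import numpy as np

# Collocation with piecewise-constant basis functions phi_j supported on
# subintervals of width h, collocating at the midpoints x_i. The kernel,
# grid and manufactured solution are assumptions for illustration only.
N = 80
h = 1.0 / N
xm = (np.arange(N) + 0.5) * h            # collocation points = midpoints

def K(x, y):
    return np.exp(-10.0 * (x - y) ** 2)  # smooth model kernel (assumed)

# Entry (i, j): delta_ij + int K(x_i, y) phi_j(y) dy, the integral
# approximated by the midpoint rule on each subinterval.
X, Y = np.meshgrid(xm, xm, indexing='ij')
M = np.eye(N) + h * K(X, Y)

u_true = np.cos(3.0 * xm)                # manufactured solution
f = M @ u_true                           # consistent right-hand side
coef = np.linalg.solve(M, f)             # coefficients c_j ~ u(x_j)
print(np.max(np.abs(coef - u_true)))
```

Because the right-hand side is manufactured with the same quadrature, the recovered coefficients match the manufactured solution to rounding error; this checks the linear-algebra setup, not the discretization error.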
In general, direct numerical methods for high-frequency wave propagation problems need a certain number of grid points or elements per wavelength to maintain a fixed accuracy. Hence, if N is the number of grid points in each coordinate direction, one needs at least $N \sim \omega$. The computational complexity for time dependent problems is normally $O(N^{d+1})$ for a d-dimensional problem. For the d-dimensional Helmholtz equation, the cost is at least $O(N^d)$, but often higher. The complexity of the methods based on an integral formulation of the problem can be reduced to almost $O(N^{d-1})$. The computational cost for all formulations thus grows algebraically with the frequency ω. Therefore, at sufficiently high frequencies ($\omega \gg 1$), direct numerical simulation is no longer feasible.
GO-based Methods Methods based on the GO approximation are suitable for high-frequency wave propagation problems. For the different mathematical models for GO, there are different numerical methods. PDE-based methods are used for the eikonal and transport equations (1.9) and (1.10). The equations are solved directly on a uniform grid to control the error. The eikonal 1.3. NUMERICAL METHODS 9 equation is a Hamilton-Jacobi type equation and it has a unique viscosity solution which can be computed numerically by finite difference methods for example. Some of them are upwind-based methods like the fast marching method [53, 61] and the fast sweeping method [60]. For time dependent eikonal equations one can use high-resolution methods of ENO and WENO type [46]. However, at points where the correct solution is multivalued the viscosity solution is not enough because the eikonal and the transport equations describe only one unique wave at a time. Therefore, some other numerical techniques must be used. Some examples are the big ray tracing method, the slowness matching method and the domain decomposition based on detecting kinks in the viscosity solution. In all methods, the solution is obtained by solving several eikonal equations. Ray tracing methods [12, 33, 37, 57] are used to solve the ray equations (1.11)–(1.12). That system can be augmented with another system of ODEs for the amplitude. The ODEs are solved with standard numerical methods for example second or fourth order Runge-Kutta methods. The phase and the amplitude are calculated along the rays and the interpolation must be used to obtain the solution in points other than the grid points. However in regions with diverging or crossing rays this method is not efficient. Methods based on the kinetic formulation of GO are phase space methods. 
The Liouville equation (1.13) has a large number of independent variables, and therefore a large number of unknowns has to be used in numerical methods, which can be computationally inefficient. To overcome this problem one can use wave-front methods and moment-based methods. In wave-front methods a wave front is evolved following the kinetic formulation. Some of the wave front methods are the level set method [45] and the segment projection method [22, 58]. In moment-based methods, a kinetic transport equation set in the high-dimensional phase space (t, x, p) is approximated by a finite system of moment equations in the reduced space (t, x). In that way, the number of unknowns is decreased. For more details, see [6, 51].
Methods with Weakly Frequency Dependent Cost The situation described above can be summarized as follows: For direct methods the com- putational cost grows with frequency for fixed accuracy, while for GO methods the accuracy grows with frequency for fixed computational cost. Unfortunately, the frequency range and accuracy requirement of many realistic problems often fall in between what is tractable with either of these approaches. Recently a new class of algorithms have been proposed that combine the cost advantage of GO methods with the accuracy advantage of direct methods. They are thus characterized by a computational cost that grows slowly with the frequency, while at the same time being accurate also for moderately high frequencies. It should be noted that one needs at least O(ω) points to resolve the solution. If that many points are used, then the computational cost is, of course, at least O(ω). In these methods, however, one does not resolve the solution, but computes instead GO like solutions. Therefore, one can only expect to get the full solution in O(1) points. The main interest of the new methods have been for scattering problems [8, 38, 16, 1, 32]. Those methods are based on the integral formulation of the Helmholtz equation (1.7). The key step is to write the surface potentials as a slowly varying function multiplied by a fast phase variation. Instead of approximating the unknown function v := ∂u/∂n directly, the following ansatz is used v(x, ω) ≈ ωeiωφ(x)V (x), x ∈ ∂Ω. (1.16) The basic idea is that, using asymptotic analysis, the phase function φ can be determined in such a way that V (x) is much less oscillatory than the original unknown function v. More 10 CHAPTER 1. INTRODUCTION precisely, φ(x) is taken as the phase of the geometrical optics approximation. For instance, when uinc is a plane wave in direction dˆ, then φ = x · dˆ. Hence, the ansatz becomes
$$v(x, \omega) = \omega e^{i\omega x \cdot \hat{d}} V(x, \omega), \tag{1.17}$$
where V now varies slowly with ω. Each side of (1.7) is then multiplied by $e^{-i\omega x \cdot \hat{d}}$ to obtain
$$\frac{1}{2} V + DV - i\eta SV = i(\omega\, \hat{d} \cdot \hat{n} - \eta), \tag{1.18}$$
where the operators S and D are defined as follows:
$$SV(x) = \int_{\partial\Omega} \Phi(x, y, \omega)\, e^{i\omega(y - x) \cdot \hat{d}}\, V(y)\, dy, \tag{1.19}$$
$$DV(x) = \int_{\partial\Omega} \frac{\partial \Phi(x, y, \omega)}{\partial n(x)}\, e^{i\omega(y - x) \cdot \hat{d}}\, V(y)\, dy. \tag{1.20}$$
Thus, the problem becomes to find the amplitude V(x, ω) by solving the integral equation (1.18). The amplitude varies slowly away from the shadow boundaries and can be represented by a fixed set of grid points, i.e., independently of the frequency. The integral equation can be solved, for example, with the collocation method, in which one needs to compute integrals of the type given in (1.15). Although the amplitude is a slowly varying function, the integrals cannot be computed independently of the frequency by direct numerical methods, since their kernels are oscillatory (cf. (1.19)–(1.20)). To overcome this problem, an extension of the method of stationary phase has been suggested [8]. Since the values of the integrands and their derivatives at critical points make the only significant contributions to the oscillatory integrals, an integration procedure based on localization around those points was introduced. Since the amplitude near shadow boundaries varies rapidly, those regions are treated with special consideration. However, the method works mainly for problems with convex scatterers. For more information about these methods, see [11]. The extension of the method to non-convex scatterers is proposed in [9, 17]. Rigorous proofs of the low cost of these methods are difficult and require detailed results on the asymptotic behaviour of the exact solution near shadow boundaries. An example of such a result, due to Dominguez, Graham and Smyshlyaev [16], concerns convex smooth scatterers in 2D. They show that for a Galerkin discretization with N unknowns, where the integrals corresponding to (1.19), (1.20) are computed exactly, the relative error in V(y) can be estimated as
$$\text{rel. err} \sim k^{\alpha} \left( \left( \frac{k^{1/9}}{N} \right)^{6} + e^{-Ck^{1/3}} \right),$$
where α is a small exponent less than one. Hence, if N is slightly larger than $k^{1/9}$, the error is controlled.
For full domain problems much less has been done. In some sense this is more difficult than the scattering problem in a homogeneous medium, because the waves are reflected at all points where the wave speed changes, not only at the surface of a scatterer. One attempt at lowering the computational cost along the lines above has been to use "plane wave basis functions" in finite element methods [48, 28, 10]. The method can be seen as a discontinuous Galerkin method with a particular choice of basis functions. However, except in simple cases these methods do not reduce the complexity by more than a constant factor. The method proposed by Giladi and Keller [28] is a hybrid numerical method for the Helmholtz equation in which the finite element method is combined with GO. The idea is to determine the phase factor, which corresponds to the plane wave direction, a priori, by solving the eikonal equation for the phase using ray tracing, and then to determine the amplitude by a finite element method, choosing asymptotically derived basis functions which incorporate the phase factor. A tailored finite-point method for the numerical simulation of the Helmholtz equation at high frequency in heterogeneous media was recently proposed by Han and Huang [30]. They suggest approximating the wave speed function c(x) by a piecewise constant function and then solving the equation on every subdomain exactly. The speed function is assumed to be smooth and monotone in each interval.
Time Upscaling Methods
Time upscaling methods have been proposed to overcome the problem with the computational complexity of direct simulation of time dependent wave propagation. The aim is to reduce the extra cost incurred by the time–stepping. There is an interesting connection between these methods and methods with frequency independent cost, which will be described at the end of this section. Let us now consider a d–dimensional problem and suppose there are N discretization points in every space direction and M discretization points in time. For explicit methods, due to the stability (CFL) and accuracy requirements it is necessary to have M ∼ N^r, where r = 1 for hyperbolic problems and r = 2 for parabolic problems. Then, the computational cost for a time interval of O(1) is O(N^{d+r}). With time upscaling the computational cost can be reduced to O(N^d log N) or even to O(N^d) while maintaining the same accuracy. For advection and parabolic equations with spatially varying coefficients, time upscaling methods are proposed in [19]. The authors suggest using the fast wavelet transform together with truncation. Consider the following evolution equation
∂_t u + L(x, ∂_x)u = 0, x ∈ Ω ⊂ R^d, t > 0,
u(x, 0) = u_0(x),

where L is a differential operator. After discretization, collecting the unknowns in a vector u ∈ R^{N^d}, the equation becomes
u^{n+1} = Au^n, (1.21)
u^0 = u_0,
where A is an N^d × N^d matrix approximating the operator ∂_t + L. The computational complexity is of order O(N^{d+r}). Clearly, (1.21) is equivalent to
u^n = A^n u_0. (1.22)
Then, repeated squaring of A can be used to compute the solution of (1.22) in log_2 M steps for M = 2^m, where m is an integer, i.e. one can compute A, A^2, A^4, ..., A^{2^m} in m = log_2 M steps. Since the matrix A is sparse, the cost of squaring is O(N^d). However, the later squarings involve almost dense matrices and the computational cost is then O(N^{3d}). The total cost becomes O(N^{3d} log M), which is more expensive than solving (1.21) directly. Instead, Engquist, Osher and Zhong in [19] suggest a wavelet representation of A, which can decrease the computational complexity of the repeated squaring algorithm. More precisely,
the solution of (1.22) can be computed in m = log_2 M steps in the following way
B := SAS^{−1},
B := TRUNC(B^2, ε)   (iterate m steps), (1.23)
u^n := S^{−1}BSu^0.
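The structure of the truncated repeated-squaring iteration (1.23) can be sketched in a few lines. This is a minimal illustration only: the basis change S defaults to the identity rather than an actual fast wavelet transform, and the function names are invented for the example, not taken from [19].

```python
import numpy as np

def trunc(B, eps):
    """The TRUNC operator: zero all entries with magnitude below eps."""
    C = B.copy()
    C[np.abs(C) < eps] = 0.0
    return C

def repeated_squaring(A, u0, m, eps, S=None):
    """Approximate u^M = A^M u0 for M = 2**m by truncated repeated squaring.

    S plays the role of the (wavelet) change of basis; the identity is used
    by default, so only the algebraic structure of (1.23) is shown here.
    """
    n = A.shape[0]
    if S is None:
        S = np.eye(n)
    Sinv = np.linalg.inv(S)
    B = S @ A @ Sinv
    for _ in range(m):              # m = log2(M) squarings with truncation
        B = trunc(B @ B, eps)
    return Sinv @ B @ S @ u0
```

For ε = 0 the sketch reproduces A^M u0 exactly; the cost reduction in [19] comes from the sparsity that the wavelet basis and the truncation preserve through the squarings.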
The matrix S corresponds to a fast wavelet transform and the truncation operator TRUNC sets all elements that are less than ε to 0. It is obvious that the algorithm (1.23) is equivalent to (1.22) for ε = 0, and it was shown in [19] that there exists a small enough ε such that the result of the algorithm (1.23) is close to that of (1.22). Through truncation the algebraic problem with dense matrices is transformed to a problem with sparse matrices. The authors also showed that the cost to compute the solution of the 1D hyperbolic equation can be reduced from O(N^2) to O(N log^3 N) for a fixed accuracy. Furthermore, for d–dimensional parabolic problems the computational complexity can be reduced from O(N^{d+2}) to O(N^d (log N)^3). A similar technique is proposed in [15] by Demanet and Ying. They consider the 2D wave equation
u_tt − c^2(x)∆u = 0, x ∈ [0, 1]^2,
u(x, 0) = u_0, u_t(x, 0) = u_1,

with periodic boundary conditions. They assume c(x) ∈ C^∞ to be positive and bounded away from zero and propose to transform the wave equation into a first order system of equations,
v_t = Av, v(t = 0) = v_0,

where v = (u_t, u_x) and A is a 2-by-2 matrix of operators. The system is then solved up to time t = T using wave atoms, which provide a sparse representation of the matrix, combined with repeated squaring and truncation as above. On an N by N grid the computational cost is reduced from the O(N^3 log N) of pseudo-spectral methods to O(N^{2+δ}) for arbitrarily small δ for a naive version of the wave atom method. For the final algorithm they proved rigorously that the computational cost is between O(N^{2.25}) and O(N^3), depending on the initial data and the structure of c(x). The key of the proof is to show that the solution operator e^{At} remains almost sparse for all t in the wave atom basis. Another fast time upscaling method, for the one-dimensional wave equation, is proposed by Stolk [54]. The idea here is first to rewrite the wave equation as a system of one–way wave equations, then transform them into wavelet bases and solve using multiscale stepping, which means that the fine spatial scales are solved with longer time steps and the coarse spatial scales are solved with shorter time steps. The computational cost is only O(N) for fixed accuracy. Hence, it can be noted that when the wave equation is solved with time upscaling methods with repeated squaring, the solution is given on a grid that is coarse in time and dense in space. One can also solve the wave equation by solving Helmholtz equations for each frequency component. This is like taking a Fourier/Laplace transform in time. A space discretization with N points then corresponds to N Helmholtz equations for N frequency components. If the Helmholtz equation is solved with a frequency independent method, then the total cost is O(N), but the solution is obtained only in O(1) points. After an inverse FFT for each point, at a cost of O(N log N), the solution of the wave equation is obtained on a grid that is dense in time but coarse in space, at a total cost O(N log N), i.e.
the same as in time upscaling, see Figure 1.4.
Figure 1.4: In time upscaling methods the solution is given on a grid that is coarse in time (left). In a frequency independent numerical method the solution would be obtained on a grid that is coarse in space (right).
Organization of the Thesis
The thesis is organized as follows. In Chapter 2 we describe our method. In Chapter 3 we derive estimates of the solution to the one–way wave equation and show the convergence of the algorithm. In Chapter 4 we carry out the error analysis of the numerical implementation and show that the computational cost depends only weakly on the frequency. Numerical examples are given in Chapter 5.
Chapter 2
A Fast Method for Helmholtz Equation
We focus on the high frequency wave propagation problem in a bounded domain with a varying wave–speed function. The problem is described by the Helmholtz equation augmented with non-reflecting boundary conditions that also incorporate an incoming wave. The computational complexity of direct numerical methods applied to this problem grows algebraically with the frequency ω. The computational cost is at least O(ω^d) for a d-dimensional problem. Therefore, direct numerical simulations become inapplicable for sufficiently high frequencies and some other methods have to be used. We propose a fast method for solving the Helmholtz equation based on wave–splitting. The Helmholtz equation is split into one–way wave equations (backward and forward), which are then solved with appropriate boundary conditions and approximate forcing functions. In a domain where the forcing function is equal to zero GO is used, and some standard p-th order numerical method otherwise. The result is an approximation of a type of Bremmer series [5]. In this chapter we present the model and describe our method. In later chapters we will show that the algorithm is convergent and that for fixed accuracy, the computational cost is O(ω^{1/p}) for a p-th order numerical method. We consider mainly a model problem in one dimension, but we will also describe how the algorithm can be extended to higher dimensions. Although there are already some interesting applications in one dimension, e.g. inverse problems for transmission lines [41], most practical problems are of course in higher dimensions. Our focus in this thesis is on the analysis of the method and on showing rigorous error and cost estimates. For higher dimensions, however, this is a much harder problem and we will restrict the analysis to the one-dimensional case.
2.1 Derivation of the Method in One Dimension
Consider the 1D Helmholtz equation
u_xx + (ω^2/c(x)^2) u = ωf, x ∈ [−L, L], (2.1)
where ω is the frequency and c(x) is the wave–speed function, such that supp(c_x) ⊂ (−L, L). We augment equation (2.1) with the following boundary conditions
u_x(−L) − iωu(−L) = −2iωA, (2.2)
u_x(L) + iωu(L) = 0, (2.3)

where A is the amplitude of the incoming wave. At high frequencies GO is a good approximation of the solution. We want to find a way to correct for the errors it makes at lower frequencies. A natural idea would be to use the system of WKB equations (1.10). However, (1.8) does not converge, even in simple settings; it is only an asymptotic series. More precisely, a problem with (1.10) is that it only describes waves traveling in one direction. In reality, waves are reflected wherever c_x ≠ 0. Hence, to incorporate two directions we make the ansatz

iωv + c(x)v_x − (1/2)c_x(x)v = F, (2.4)
iωw − c(x)w_x + (1/2)c_x(x)w = F. (2.5)

This describes wave propagation in the right-going (v) and left-going (w) direction. Moreover, if z = v + w, then it satisfies the following equation (see Lemma 1)
c^2 z_xx + ω^2 z = −2iωF + α(x)z,
where
α(x) = (1/2)cc_xx − (1/4)c_x^2. (2.6)
Thus, if F = α(x)z/2iω, then z is the solution of the Helmholtz equation (2.1) with f = 0. We now make an expansion of v and w in powers of ω,
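The identity above can be checked by a direct computation. The following sketch (an independent verification, not the proof of Lemma 1 as given in the thesis) fills in the intermediate steps:

```latex
\text{Let } z = v + w \text{ and } y = v - w.
\text{ Subtracting (2.5) from (2.4) gives}
\[
  i\omega y + c z_x - \tfrac{1}{2} c_x z = 0
  \quad\Longrightarrow\quad
  y = \tfrac{i}{\omega}\Bigl( c z_x - \tfrac{1}{2} c_x z \Bigr),
\]
\text{while adding them gives } i\omega z + c y_x - \tfrac{1}{2} c_x y = 2F.
\text{ Substituting } y \text{ and simplifying,}
\[
  c y_x - \tfrac{1}{2} c_x y
  = \tfrac{i}{\omega}\Bigl( c^2 z_{xx}
    - \bigl( \tfrac{1}{2} c c_{xx} - \tfrac{1}{4} c_x^2 \bigr) z \Bigr),
\]
\text{so multiplying the sum equation by } -i\omega \text{ yields}
\[
  c^2 z_{xx} + \omega^2 z = -2 i \omega F + \alpha(x) z .
\]
```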
v = Σ_n r_n ω^{−n}, w = Σ_n s_n ω^{−n}. (2.7)
Then, (2.4) and (2.5) become
Σ_n [ iωr_n + c(x)∂_x r_n − (1/2)c_x(x)r_n − (α(x)/2i)(r_{n−1} + s_{n−1}) ] ω^{−n} = 0,
Σ_n [ iωs_n − c(x)∂_x s_n + (1/2)c_x(x)s_n − (α(x)/2i)(r_{n−1} + s_{n−1}) ] ω^{−n} = 0.
Defining v_n = r_n ω^{−n} and w_n = s_n ω^{−n} and equating terms with the same powers of ω to zero, we obtain the methods below. In contrast to (1.8), the series (2.7) converge quickly for large ω.
Method 1
Let
iωv_n + c(x)∂_x v_n − (1/2)c_x(x)v_n = −(1/(2iω)) f_n(x) (2.8)
iωw_n − c(x)∂_x w_n + (1/2)c_x(x)w_n = −(1/(2iω)) f_n(x) (2.9)
for x ∈ [−L, L] and n ≥ 0, with the initial conditions:
v_n(−L) = A for n = 0, v_n(−L) = 0 for n > 0, (2.10)
and
w_n(L) = 0, ∀n. (2.11)
The right hand side in (2.8) and (2.9) is defined by the following expressions
f_0(x) = ωf(x), (2.12)
f_{n+1}(x) = −α(x)(v_n(x) + w_n(x)). (2.13)
Assume we solve equations (2.8) and (2.9) for n = 0, 1, 2, ..., m with the given initial conditions (2.10) and (2.11) and the source function f_n defined by (2.12)–(2.13). Then, it can be shown that the solution u(x) of the Helmholtz equation (2.1) can be approximated by the following sum
u(x) ≈ z_m = Σ_{k=0}^{m} (v_k(x) + w_k(x)). (2.14)
Remark 1. This is similar to the Bremmer series [5], where
iωv_n(x) + c(x)∂_x v_n(x) − (c_x(x)/2)v_n(x) = −(c_x(x)/2)w_{n−1}(x),
iωw_n(x) − c(x)∂_x w_n(x) + (c_x(x)/2)w_n(x) = (c_x(x)/2)v_{n−1}(x),

with no ω^{−1} in the right hand side. The convergence is more subtle but has been shown in [2, 34, 4, 42]. We prefer (2.8)–(2.9), as it clearly separates waves of different size in terms of ω^{−1}, which leads to a simpler analysis. Thus, we can solve equations (2.8) and (2.9) with some p-th order ODE numerical method m ≥ 0 times and approximate the solution of (2.1) by (2.14). However, the computational complexity still grows algebraically with the frequency ω.
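As an illustration of Method 1, the sweeps (2.8)–(2.9) can be integrated with a standard fourth-order Runge–Kutta method (p = 4): (2.8) from −L to L, (2.9) from L to −L, accumulating the iterates as in (2.14). This is only a sketch under simplifying assumptions (linear interpolation of f_n, one shared grid for both sweeps); all function names and interfaces are invented for the example, not taken from the thesis implementation.

```python
import numpy as np

def rk4_sweep(rhs, y0, x):
    """Integrate y' = rhs(x, y) along the (possibly decreasing) grid x with RK4."""
    y = np.empty(len(x), dtype=complex)
    y[0] = y0
    for i in range(len(x) - 1):
        h = x[i + 1] - x[i]
        k1 = rhs(x[i], y[i])
        k2 = rhs(x[i] + h / 2, y[i] + h * k1 / 2)
        k3 = rhs(x[i] + h / 2, y[i] + h * k2 / 2)
        k4 = rhs(x[i] + h, y[i] + h * k3)
        y[i + 1] = y[i] + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return y

def method1(omega, c, cx, alpha, f, L, A, m, N):
    """Sketch of Method 1: m+1 sweeps of the one-way equations (2.8)-(2.9).

    c, cx, alpha, f are callables for c(x), c_x(x), alpha(x), f(x).
    Returns the grid and the approximation z_m of (2.14).
    """
    x = np.linspace(-L, L, N)
    fn = lambda s: omega * f(s)               # f_0 = omega*f, eq. (2.12)
    z = np.zeros(N, dtype=complex)
    for n in range(m + 1):
        src = np.array([fn(s) for s in x])
        F = lambda s: np.interp(s, x, src.real) + 1j * np.interp(s, x, src.imag)
        # right-going (2.8): v' = (-i*omega*v + c_x*v/2 - f_n/(2i*omega)) / c
        rhs_v = lambda s, v: (-1j*omega*v + 0.5*cx(s)*v - F(s)/(2j*omega)) / c(s)
        v = rk4_sweep(rhs_v, A if n == 0 else 0.0, x)
        # left-going (2.9): integrated from x = L down to x = -L
        rhs_w = lambda s, w: (1j*omega*w + 0.5*cx(s)*w + F(s)/(2j*omega)) / c(s)
        w = rk4_sweep(rhs_w, 0.0, x[::-1])[::-1]
        z += v + w
        vloc, wloc = v.copy(), w.copy()
        # next source (2.13): f_{n+1} = -alpha * (v_n + w_n)
        fn = lambda s, vl=vloc, wl=wloc: -alpha(s) * (
            np.interp(s, x, vl.real) + 1j * np.interp(s, x, vl.imag)
            + np.interp(s, x, wl.real) + 1j * np.interp(s, x, wl.imag))
    return x, z
```

For constant c the coupling coefficient α vanishes, so the iteration terminates after the first sweep and returns a single right-going wave of constant amplitude, which is a convenient sanity check.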
Method 2
Now, note that (2.8) and (2.9) can be simplified when f_n = 0. Then, using the ansatz
v_n = Ae^{iωφ}
in (2.8) we obtain equations for A and φ,
∂_x φ = 1/c(x),  ∂_x A = (c_x(x)/(2c(x))) A(x). (2.15)
This is in fact GO and can be solved independently of the frequency. Similar equations can be obtained when the ansatz is used in (2.9). Thus, the computational cost estimate can be improved by approximating the forcing functions. We can do the following: replace f_n by f̂_n with
f̂_n(x) = 0 if |f_n(x)| < Tol_n, and f̂_n(x) = f_n(x) otherwise, (2.16)
where
Tol_n = ωTol/(2^{n+1}L) (2.17)
for some tolerance Tol. More precisely, let f̂_0 = ωf and define
iωv̂_n + c(x)∂_x v̂_n − (1/2)c_x(x)v̂_n = −(1/(2iω)) f̂_n(x) (2.18)
iωŵ_n − c(x)∂_x ŵ_n + (1/2)c_x(x)ŵ_n = −(1/(2iω)) f̂_n(x) (2.19)
with initial data
v̂_n(−L) = A for n = 0, v̂_n(−L) = 0 for n > 0, (2.20)
and
ŵ_n(L) = 0, ∀n. (2.21)
Moreover,
f_{n+1}(x) = −α(x)(v̂_n(x) + ŵ_n(x)). (2.22)
Again it can be shown that the solution u(x) of the Helmholtz equation (2.1) can be approximated by
u(x) ≈ ẑ_m = Σ_{k=0}^{m} (v̂_k(x) + ŵ_k(x)). (2.23)
Thus, equations (2.18)–(2.19) can be solved independently of the frequency in a domain where f̂_n = 0. In a domain where f̂_n ≠ 0 a direct ODE numerical method can be used. The solution of (2.1) is then approximated by (2.23). Hence, the algorithm for computing the solution of (2.1)–(2.3) is as follows: choose some tolerance Tol and, for n = 0, 1, 2, ..., do the following
1. Replace the function f_n by f̂_n defined by (2.16). If f̂_n ≡ 0 stop, else
2. Compute v̂_n and ŵ_n from (2.18) and (2.19). In a domain where f̂_n ≠ 0 use a direct p-th order numerical method with stepsize Δx_f = Tol^{1/p}/ω^{1+1/p}, otherwise use geometrical optics with stepsize Δx_c = ωΔx_f, Figure 2.1,
3. Compute û_n = v̂_n + ŵ_n and add û_n to û_{n−1}. Note that in a Runge-Kutta scheme the function f̂_n may need to be evaluated in between the grid points. To obtain those values of f̂_n, we use p-th order interpolation.
4. Compute f_{n+1} from v̂_n and ŵ_n, go to 1.
The solution of (2.1)–(2.3) is approximated by Σ_k û_k. These methods converge rapidly for large ω, as we prove in the following chapters. We will also show that the computational cost is proportional to ω^{1/p} for a fixed tolerance, where p is the order of the numerical scheme.
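The thresholding (2.16)–(2.17) and the stepsize choice in step 2 are straightforward to express in code. A small sketch, with helper names invented for the example:

```python
import numpy as np

def tol_n(n, omega, Tol, L):
    """Level-dependent threshold (2.17): Tol_n = omega*Tol / (2^(n+1) L)."""
    return omega * Tol / (2 ** (n + 1) * L)

def threshold(fn_values, tol):
    """The cut-off (2.16): zero f_n wherever |f_n| < tol, keep it elsewhere."""
    return np.where(np.abs(fn_values) < tol, 0.0, fn_values)

def stepsizes(omega, Tol, p):
    """Fine step for the p-th order direct solver and coarse step for GO."""
    dx_f = Tol ** (1.0 / p) / omega ** (1.0 + 1.0 / p)
    dx_c = omega * dx_f
    return dx_f, dx_c
```

Note that the coarse GO step Δx_c is ω times larger than Δx_f, which is where the reduced cost in the regions with f̂_n = 0 comes from.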
Figure 2.1: The function |f_n|. In a domain where |f_n(x)| < Tol_n, f̂_n(x) = 0 and we can use GO to solve (2.18) and (2.19); otherwise f̂_n(x) = f_n(x) and some p-th order ODE numerical scheme can be used.
2.2 Formulation in Two Dimensions
Consider the 2D Helmholtz equation
∆u + (ω^2/c(x)^2) u = ωf(x). (2.24)
In 1D we split (2.24) into two equations, one for waves propagating to the right and the other for waves propagating to the left. However, in 2D, waves do not propagate only backward and forward. They propagate asymptotically in the direction ∇φ, where φ solves the eikonal equation
|∇φ|^2 = 1/c(x)^2.
Then, a natural extension of the 1D method would be to split the Helmholtz equation (2.24) into the following equations
iωv_n + c(x)^2 ∇φ_n·∇v_n − (c(x)^2 ∆φ_n/2) v_n = f_n/(2iω), (2.25)
iωw_n − c(x)^2 ∇φ_n·∇w_n − (c(x)^2 ∆φ_n/2) w_n = f_n/(2iω), (2.26)

where the functions f_n have to be determined in such a way that z_m = Σ_{j=0}^{m}(v_j + w_j) is a good approximation of the solution of the Helmholtz equation. Following the same arguments as in 1D, we conclude that

f_{n+1}(x) = (α(x)/2)(v_n + w_n) + ∇φ_n^⊥ · ∇( c(x)^2 ∇φ_n^⊥ · ∇(v_n + w_n) ), (2.27)

where ∇φ^⊥ · ∇φ = 0 and
α(x) = −∇ · ( (c^2 ∆φ/2) ∇φ ) − c^2 (∆φ)^2/4. (2.28)
In general, the method in 2D would then be the same as in 1D, i.e., approximate f_n by f̂_n when f_n is small and solve the equations on a coarse grid in the domain where f_n is small and on a dense grid otherwise. As in the 1D case, when f_n = 0, v_n and w_n solve GO equations. Note that, unlike in one dimension, c = constant does not imply that α = 0. Hence, GO is not exact when a wave front is curved. Moreover, the ∇(v_n + w_n) term in (2.27) may now possibly be large. However, when v_n is a wave of the form v_n = Ae^{iωφ_n}, then ∇φ_n^⊥ · ∇v_n = ∇φ_n^⊥ · ∇(Ae^{iωφ_n}) = O(1). There are a number of additional difficulties:
• Caustics appear. This implies that ∇φ may not exist everywhere. That means that one has to resolve everything also in an area around caustics. (The size of this area is still small, though.)
• Multiple crossing waves. One will need a formula for computing φ_{n+1} from v_n, w_n and φ_n.
• The analysis is much harder. There are no L^∞ estimates with ω-independent constants available.

Chapter 3
Analysis
In this chapter we analyze the method when (2.18) and (2.19) are solved exactly. We first derive estimates of the solutions of equations (2.8) and (2.9) and their derivatives. We then show that ẑ_m in (2.23) converges to the solution of (2.1)–(2.3). Next, we give an error estimate in terms of n and Tol. As a side effect we obtain an L^∞ estimate of the solution of the Helmholtz equation in 1D. We also show that the size of the domain where a direct method, which resolves the wavelength, is used is proportional to O(ω^{1/p}). Hence, we prove the following
Theorem 1. Assume ω > 1, c(x) ∈ C^2[−L, L], and
2β‖α‖_{L^1}/ω ≤ δ_0 < 1,  β = (1/2)√(c_max/c_min^3). (3.1)
If u is the solution of (2.1)–(2.3), then