A Comparison of Factorization-Free Eigensolvers with Application to Cavity Resonators

Peter Arbenz
Institute of Scientific Computing, ETH Zentrum, CH-8092 Zurich, Switzerland
[email protected]

Abstract. We investigate eigensolvers for the generalized eigenvalue problem $Ax = \lambda Mx$ with symmetric $A$ and symmetric positive definite $M$ that do not require matrix factorizations. We compare several variants of Rayleigh quotient minimization and the Jacobi-Davidson algorithm by means of large-scale finite element problems originating from the design of resonant cavities of particle accelerators.

1 Introduction

In this paper we consider the problem of computing a few of the smallest eigenvalues and corresponding eigenvectors of the generalized eigenvalue problem
\[
  A x = \lambda M x \tag{1}
\]
without factorizing either $A$ or $M$. Here, the real $n \times n$ matrices $A$ and $M$ are symmetric and symmetric positive definite, respectively.

If it is feasible, the Lanczos algorithm combined with a spectral transformation [1] is the method of choice for solving (1). The spectral transformation requires the solution of linear systems $(A - \sigma M)x = y$, $\sigma \in \mathbb{R}$, which may be carried out by a direct or an iterative solver. The factorization of $A - \sigma M$ may be impossible due to memory constraints. The systems can instead be solved iteratively; however, the solutions must then be computed very accurately in order that the Lanczos three-term recurrence remains correct.

In earlier studies [2, 3] we found that for large eigenvalue problems the Jacobi-Davidson algorithm [4] was superior to the Lanczos algorithm and to the restarted Lanczos algorithm as implemented in ARPACK [5]. While the Jacobi-Davidson algorithm retains a high rate of convergence, it poses only mild accuracy requirements on the solution of the so-called correction equation, at least in the initial iteration steps.

In this paper we continue our investigations on eigensolvers and their preconditioning. We include block Rayleigh quotient minimization algorithms [6] and the locally optimal block preconditioned conjugate gradient (LOBPCG) algorithm by Knyazev [7] in our comparison. We conduct our experiments on an eigenvalue problem originating in the design of the new RF cavity of the 590 MeV ring cyclotron installed at the Paul Scherrer Institute (PSI) in Villigen, Switzerland.
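For orientation, the following is a minimal SciPy sketch of the shift-and-invert baseline discussed above; it is not the factorization-free approach studied in this paper, and the function name, matrices, and shift value are illustrative placeholders. The call factorizes $A - \sigma M$ internally, which is precisely what becomes infeasible for very large problems.

```python
# Sketch of the shift-and-invert baseline via SciPy (for reference only).
import scipy.sparse.linalg as spla

def eigs_shift_invert(A, M, k=5, sigma=0.1):
    """Eigenvalues of A x = lambda M x closest to the shift sigma.

    A, M: sparse symmetric matrices (M positive definite).  The shift sigma
    should be chosen near the wanted eigenvalues.  eigsh factorizes
    A - sigma*M internally; this is what exhausts memory for large 3D meshes.
    """
    vals, vecs = spla.eigsh(A, k=k, M=M, sigma=sigma, which='LM')
    return vals, vecs
```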
2 The application: the cavity eigenvalue problem

After separation of the time and space variables and after elimination of the magnetic field intensity, the Maxwell equations become the eigenvalue problem
\[
  \operatorname{curl}\operatorname{curl}\mathbf{e}(x) = \lambda\,\mathbf{e}(x), \quad \operatorname{div}\mathbf{e} = 0, \quad x \in \Omega, \qquad
  \mathbf{n}\times\mathbf{e} = 0, \quad x \in \partial\Omega, \tag{2}
\]
where $\mathbf{e}$ is the electric field intensity. We assume that $\Omega$, the cavity, is a simply connected, bounded domain in $\mathbb{R}^3$ with a polyhedral boundary $\partial\Omega$. Its interior is in vacuum and its metallic surfaces are perfectly conducting.

Following Kikuchi [8] we discretize (2) as: find $(\lambda_h, \mathbf{e}_h, p_h) \in \mathbb{R} \times N_h \times L_h$ such that $\mathbf{e}_h \neq 0$ and
\[
  \begin{aligned}
  &\text{(a)} \quad (\operatorname{curl}\mathbf{e}_h, \operatorname{curl}\Psi_h) + (\operatorname{grad} p_h, \Psi_h) = \lambda_h (\mathbf{e}_h, \Psi_h), && \forall\, \Psi_h \in N_h,\\
  &\text{(b)} \quad (\mathbf{e}_h, \operatorname{grad} q_h) = 0, && \forall\, q_h \in L_h,
  \end{aligned} \tag{3}
\]
where $N_h \subset H_0(\operatorname{curl};\Omega) = \{\, \mathbf{v} \in L^2(\Omega)^3 \mid \operatorname{curl}\mathbf{v} \in L^2(\Omega)^3,\ \mathbf{v}\times\mathbf{n} = 0 \text{ on } \partial\Omega \,\}$ and $L_h \subset H_0^1(\Omega)$. The domain $\Omega$ is triangulated by tetrahedra. In order to avoid spurious modes we choose the subspaces $N_h$ and $L_h$ to be, respectively, the Nédélec (or edge) elements $N_h^{(k)}$ of degree $k$ [9-11] and the well-known Lagrange (or node-based) finite elements consisting of piecewise polynomials of degree $\leq k$. In this paper we exclusively deal with $k = 2$. In order to employ a multilevel preconditioner we use hierarchical bases and write [11]
\[
  N_h^{(2)} = N_h^{(1)} \oplus \bar N_h^{(2)}, \qquad L_h^{(2)} = L_h^{(1)} \oplus \bar L_h^{(2)}. \tag{4}
\]
Let $\{\Phi_i\}_{i=1}^{n}$ be a basis of $N_h^{(2)}$ and $\{\varphi_l\}_{l=1}^{m}$ be a basis of $L_h^{(2)}$. Then (3) defines the matrix eigenvalue problem
\[
  \begin{pmatrix} A & C \\ C^T & O \end{pmatrix}
  \begin{pmatrix} x \\ y \end{pmatrix}
  = \lambda
  \begin{pmatrix} M & O \\ O & O \end{pmatrix}
  \begin{pmatrix} x \\ y \end{pmatrix}, \tag{5}
\]
where $A$ and $M$ are $n \times n$ and $C$ is $n \times m$, with elements
\[
  a_{ij} = (\operatorname{curl}\Phi_i, \operatorname{curl}\Phi_j), \qquad
  m_{ij} = (\Phi_i, \Phi_j), \qquad
  c_{il} = (\Phi_i, \operatorname{grad}\varphi_l).
\]

Fig. 1. Example of matrices A (a) and M (b) for quadratic edge elements.

The non-zero structures of $A$ and $M$ are depicted in Fig. 1. The block of zero rows/columns in $A$ corresponds to those curl-free basis functions of $\bar N_h^{(2)}$ that are the gradient of some $\varphi \in \bar L_h^{(2)}$. The non-zero block in the upper-left corner of $A$, corresponding to the basis functions of $N_h^{(1)}$, is rank-deficient. The rank deficiency equals $\dim L_h^{(1)}$.

The reason for approximating the electric field $\mathbf{e}$ by Nédélec elements and the Lagrange multipliers by Lagrange finite elements is that [9, §5.3]
\[
  \operatorname{grad} L_h^{(k)} = \{\, \mathbf{v}_h \in N_h^{(k)} \mid \operatorname{curl}\mathbf{v}_h = 0 \,\}. \tag{6}
\]
By (3)(b), $\mathbf{e}_h$ lies in the orthogonal complement of $\operatorname{grad} L_h^{(k)}$. On this subspace $A$ in (5) is positive definite. Notice that $\mathbf{e}_h$ is divergence-free only in a discrete sense. Because of (6) we can write $\operatorname{grad}\varphi_l = \sum_{j=1}^{n} \eta_{jl}\Phi_j$, whence
\[
  (\Phi_i, \operatorname{grad}\varphi_l) = \sum_{j=1}^{n} (\Phi_i, \Phi_j)\,\eta_{jl}, \qquad \text{or} \qquad C = MY, \tag{7}
\]
where $Y = ((\eta_{jl})) \in \mathbb{R}^{n\times m}$. In a similar way one obtains
\[
  H := C^T Y = Y^T M Y, \qquad h_{kl} = (\operatorname{grad}\varphi_k, \operatorname{grad}\varphi_l). \tag{8}
\]
$H$ is the system matrix that is obtained when the Poisson equation is solved with the Lagrange finite elements $L_h^{(k)}$. Notice that $Y$ is very sparse. We have already mentioned that the gradient of a basis function $\varphi_k \in \bar L_h^{(2)}$ is an element of the first set of basis functions of $\bar N_h^{(2)}$. So, $m_1$ rows of $Y$ have a single entry 1. The gradient of the piecewise linear basis function corresponding to vertex $k$, say, is a linear combination (with coefficients $\pm 1$) of the basis functions of $N_h^{(1)}$ whose corresponding edge has vertex $k$ as one of its endpoints.

Equation (7) means that $C^T x = 0$ is equivalent to requiring $x$ to be $M$-orthogonal to the eigenspace $\mathcal{N}(A) = \mathcal{R}(Y)$ corresponding to the eigenvalue 0. Thus, the solutions of (5) are precisely the eigenpairs of
\[
  A x = \lambda M x \tag{9}
\]
corresponding to the positive eigenvalues. We could therefore compute the desired eigenpairs $(\lambda_j, x_j)$ of (5) by means of (9) alone. The computed eigenvectors corresponding to positive eigenvalues would automatically satisfy the constraint $C^T x_j = 0$. This is actually done if the linear systems of the form $(A - \sigma M)x = y$ that appear in the eigensolver can be solved by direct methods. Numerical experiments showed, however, that the high-dimensional zero eigenspace has a negative effect on the convergence rates if preconditioned iterative solvers have to be used.

3 Positive definite formulations of the eigenvalue problem

In this section we present two alternative formulations of the matrix eigenvalue problem (5) that should make its solution easier. The goal is to obtain generalized eigenvalue problems in which both matrices are positive definite.

3.1 The nullspace method

We can achieve this goal in a straightforward way by performing the computations in the space spanned by the eigenvectors corresponding to the positive eigenvalues. This space is $\mathcal{R}(Y)^{\perp_M} = \mathcal{R}(C)^{\perp} = \mathcal{N}(C^T)$, i.e. the nullspace of $C^T$, whence the name of the method. Formally, we solve
\[
  A|_{\mathcal{N}(C^T)}\, x = \lambda\, M|_{\mathcal{N}(C^T)}\, x. \tag{10}
\]
In the iterative solvers, we apply the $M$-orthogonal projector $P_{\mathcal{N}(C^T)} = I - Y H^{-1} C^T$ onto $\mathcal{N}(C^T)$ whenever a vector is not in this space, i.e. for generating starting vectors and after solving with the preconditioner. The eigenvalue problem (10) has dimension $n - m$. However, the vector $x \in \mathcal{N}(C^T)$ has $n$ components.
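The projector can be applied matrix-free. The following is a minimal sketch of $P_{\mathcal{N}(C^T)} = I - Y H^{-1} C^T$ in Python/SciPy, assuming the sparse matrices $Y$ and $C$ of (7) are available; the variable names and the use of a sparse LU factorization of $H$ are illustration choices, not prescriptions of the paper.

```python
# Minimal sketch of the M-orthogonal projector P = I - Y H^{-1} C^T of Sect. 3.1.
import scipy.sparse.linalg as spla

def make_projector(Y, C):
    """Return a LinearOperator applying P = I - Y H^{-1} C^T, with H = C^T Y."""
    H = (C.T @ Y).tocsc()    # H = C^T Y = Y^T M Y: sparse, symmetric positive definite
    H_lu = spla.splu(H)      # factorize H once; reused for every projection

    def apply_P(x):
        # subtract from x its component in R(Y), leaving the part in N(C^T)
        return x - Y @ H_lu.solve(C.T @ x)

    n = Y.shape[0]
    return spla.LinearOperator((n, n), matvec=apply_P)
```

A sparse Cholesky factorization of $H$ would be the natural choice here; the sparse LU routine is used in the sketch only because it is readily available in SciPy.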
3.2 The approach of Arbenz-Drmač [12]

Let $P$ be a permutation matrix and
\[
  \tilde Y = P Y = \begin{pmatrix} Y_1 \\ Y_2 \end{pmatrix}, \qquad Y_2 \in \mathbb{R}^{m \times m} \text{ nonsingular}. \tag{11}
\]
Let
\[
  \tilde A = P^T A P = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}, \qquad
  \tilde M = P^T M P = \begin{pmatrix} M_{11} & M_{12} \\ M_{21} & M_{22} \end{pmatrix}, \qquad
  \tilde C = P^T C = \begin{pmatrix} C_1 \\ C_2 \end{pmatrix}.
\]
Then the $n - m$ eigenvalues of the eigenvalue problem
\[
  A_{11}\,\hat x = \lambda\,(M_{11} - C_1 H^{-1} C_1^T)\,\hat x \tag{12}
\]
are precisely the $n - m$ positive eigenvalues of (5). The matrix on the right-hand side of (12) should not be formed explicitly, as it is full.

3.3 Discussion

These approaches have in common that they require the solution of systems of equations involving the matrix $H$. In the present application this is the discretized Laplace operator in the Lagrange finite element space $L_h^{(2)}$. The order of $H$ is about the same as the dimension of the 'coarse' space $N_h^{(1)}$. Therefore, we solve systems with $H$ by a direct method. The method of Arbenz-Drmač has the advantage that the order of the eigenvalue problem is smaller by at least one eighth. Thus, less memory is required.

4 Solving the matrix eigenvalue problem

Factorization-free methods are the most promising for effectively solving very large eigenvalue problems
\[
  A x = \lambda M x. \tag{13}
\]
The factorization of $A$ or $M$, or of a linear combination of the two, which is needed if a spectral transformation like shift-and-invert is applied, requires far too much memory. If the linear systems arising from the shift-and-invert spectral transformation in a Lanczos-type method are solved iteratively, then high accuracy is required to maintain the three-term recurrence.
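To illustrate what such a factorization-free solve looks like in practice, here is a minimal sketch using the LOBPCG implementation available in SciPy; the matrices, the preconditioner, the parameter values, and the function name are placeholders for illustration, and the experiments in this paper use the author's own implementations of the compared methods.

```python
# Minimal sketch of a factorization-free eigensolve with LOBPCG [7] via SciPy.
import numpy as np
import scipy.sparse.linalg as spla

def smallest_eigs_lobpcg(A, M, nev=5, prec=None, seed=0):
    """Approximate the nev smallest eigenpairs of A x = lambda M x.

    A, M: sparse matrices (or LinearOperators); prec: optional preconditioner
    given as a LinearOperator, e.g. a multilevel preconditioner.
    Only products with A and M and applications of prec are required;
    neither A nor M nor any shifted combination is factorized.
    """
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((A.shape[0], nev))   # random starting block
    vals, vecs = spla.lobpcg(A, X, B=M, M=prec,
                             largest=False, tol=1e-8, maxiter=500)
    return vals, vecs
```

In the nullspace formulation of Sect. 3.1, the starting block and the output of the preconditioner would additionally be passed through the projector $P_{\mathcal{N}(C^T)}$ so that the iteration stays in the positive definite subspace.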
