MATHEMATICAL ENGINEERING TECHNICAL REPORTS

Iterative Refinement for Symmetric Eigenvalue Decomposition Adaptively Using Higher-Precision Arithmetic

Takeshi OGITA and Kensuke AISHIMA

METR 2016-11    June 2016

DEPARTMENT OF MATHEMATICAL INFORMATICS
GRADUATE SCHOOL OF INFORMATION SCIENCE AND TECHNOLOGY
THE UNIVERSITY OF TOKYO
BUNKYO-KU, TOKYO 113-8656, JAPAN

WWW page: http://www.keisu.t.u-tokyo.ac.jp/research/techrep/index.html

The METR technical reports are published as a means to ensure timely dissemination of scholarly and technical work on a non-commercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.

Takeshi OGITA
Division of Mathematical Sciences
School of Arts and Sciences
Tokyo Woman's Christian University
[email protected]

Kensuke AISHIMA
Department of Mathematical Informatics
Graduate School of Information Science and Technology
The University of Tokyo
Kensuke [email protected]

June 2016

Abstract

Efficient refinement algorithms are proposed for symmetric eigenvalue problems. The structure of the proposed algorithms is straightforward, primarily comprising matrix multiplications. We first present a basic algorithm for improving all the eigenvectors associated with well-separated eigenvalues. We show that it quadratically converges in exact arithmetic, provided that a modestly accurate initial guess is given.
The convergence rate is also preserved in finite precision arithmetic if the working precision in the algorithm is sufficiently high, i.e., it is indispensable to double the working precision in each iteration. Moreover, for multiple eigenvalues, we prove quadratic convergence under a technical assumption, whenever all the simple eigenvalues are well separated. We emphasize that this approach to multiple eigenvalues overcomes the limitation of our analysis to real matrices, resulting in the extension of the proof to Hermitian matrices. On the basis of the basic algorithm, we propose the complete version of a refinement algorithm which can also improve the eigenvectors associated with clustered eigenvalues. The proposed algorithms construct an accurate eigenvalue decomposition of a real symmetric matrix by iteration, up to the limit of computational precision. Numerical results demonstrate excellent performance of the proposed algorithms in terms of convergence rate and overall computational cost, and show that the basic algorithm is considerably faster than a naive approach using multiple-precision arithmetic.

1 Introduction

Let A be a real symmetric n × n matrix. We are concerned with the symmetric eigenvalue problem Ax = λx, where λ ∈ ℝ is an eigenvalue of A and x ∈ ℝ^n is an eigenvector of A associated with λ. Solving this problem is important because it plays a significant role in scientific computing. Excellent overviews can be found in [22, 26].

Throughout the paper, I and O denote the identity and the zero matrices of appropriate size, respectively. Moreover, ‖ · ‖ denotes the Euclidean norm for vectors and the spectral norm for matrices. For legibility, if necessary, we distinguish between approximate quantities and computed results, e.g., for some quantity α we write α̃ for an approximation of α and α̂ for a computed result for α.
The relative rounding error unit of ordinary floating-point arithmetic is denoted by u. For example, u = 2^(-53) for IEEE 754 binary64. For simplicity, we basically handle only real matrices. The discussions in this paper can be extended to Hermitian matrices, as we will mention in Subsection 4.3. Moreover, the discussion for the symmetric eigenvalue problem can readily be extended to the generalized symmetric (or Hermitian) definite eigenvalue problem Ax = λBx, where A and B are real symmetric (or Hermitian) with B positive definite.

1.1 Our purpose

This paper aims to develop an algorithm for calculating an arbitrarily accurate result of the eigenvalue decomposition

    A = XDX^T,    (1)

where X is the n × n orthogonal matrix whose ith column is the eigenvector x^(i) of A (called the eigenvector matrix), and D is the n × n diagonal matrix whose diagonal elements are the corresponding eigenvalues λ_i ∈ ℝ, i.e., D_ii = λ_i for i = 1, …, n. For this purpose we discuss iterative refinement methods for (1) together with the convergence analysis.

Several efficient numerical algorithms for (1) have been developed, such as the bisection method with inverse iteration, the QR algorithm, the divide-and-conquer algorithm or the Multiple Relatively Robust Representations (MRRR) algorithm via Householder's tridiagonalization, and the Jacobi algorithm. For details, see [9, 10, 13, 14, 22, 26] and the references cited therein. Since they have actively been studied in numerical linear algebra for decades, there are highly reliable implementations of them, such as Linear Algebra Package (LAPACK) routines.

We stress that we do not intend to compete with such existing algorithms but to develop a refinement algorithm for improving the results obtained by any of them. Such an algorithm is useful if the quality of the results is not satisfactory. Namely, the proposed algorithm can be regarded as a supplement to the existing ones for constructing (1).
In fact, we assume that some computed result X̂ for (1) can be obtained by backward stable algorithms in ordinary floating-point arithmetic. Our analysis provides a sufficient condition for the convergence of the iterations.

In our proposed algorithms, the use of higher-precision arithmetic is mandatory, but is basically restricted to matrix multiplication, which accounts for most of the computational cost. For example, an approach used in Extra Precise Basic Linear Algebra Subroutines (XBLAS) [18] and other accurate and efficient algorithms for dot products [20, 23, 24] and matrix products [21] based on error-free transformations are available for practical implementation.

1.2 Background

A naive and possible approach to achieving an accurate eigenvalue decomposition is to use a multiple-precision arithmetic library such as MPFR [19] with GMP [12] in Householder's tridiagonalization and the subsequent algorithm. In general, however, we do not know in advance how much arithmetic precision suffices to achieve the desired accuracy for results. Moreover, the use of such multiple-precision arithmetic for entire computations is often much more time-consuming than ordinary floating-point arithmetic, owing to the difficulty of optimization for today's computer architectures. Therefore, we prefer the approach of iterative refinement, rather than that of simply using multiple-precision arithmetic.

There exist several refinement algorithms for eigenvalue problems that are based on Newton's method (cf. e.g., [3, 5, 11, 25]). Since this sort of algorithm is designed to improve eigenpairs (λ, x) ∈ ℝ × ℝ^n individually, applying such a method to all eigenpairs requires O(n^4) arithmetic operations.
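The error-free transformations mentioned above underlie the higher-precision matrix products the algorithms require. As a minimal illustration (our own sketch, far simpler than XBLAS or the cited algorithms [20, 23, 24]), Knuth's two-sum splits a floating-point addition into the rounded result and its exact rounding error, which a compensated summation can then accumulate and add back:

```python
def two_sum(a, b):
    """Knuth's error-free transformation: returns (s, e) with
    s = fl(a + b) and s + e == a + b exactly in binary64."""
    s = a + b
    bb = s - a                      # the part of b that made it into s
    e = (a - (s - bb)) + (b - bb)   # the part that was rounded away
    return s, e

def sum2(values):
    """Compensated summation in the style of Ogita, Rump, and Oishi:
    accumulate the exact per-step rounding errors, add them back at the end."""
    s = 0.0
    sigma = 0.0
    for x in values:
        s, e = two_sum(s, x)
        sigma += e
    return s + sigma
```

For example, `sum([1e100, 1.0, -1e100])` loses the middle term entirely and returns 0.0, whereas `sum2` recovers 1.0; applying the same idea to products (via FMA or Dekker splitting) yields the accurate dot and matrix products cited in the text.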
To reduce the computational cost, one may consider preconditioning by Householder's tridiagonalization of A using ordinary floating-point arithmetic, such as T ≈ Ĥ^T A Ĥ, where T is a tridiagonal matrix and Ĥ is an approximately orthogonal matrix involving rounding errors. However, this is not a similarity transformation, so the original problem is slightly perturbed. Therefore, the accuracy of the eigenpairs is limited by the orthogonality of Ĥ.

Simultaneous iteration or Grassmann–Rayleigh quotient iteration in [1] can potentially be used to refine eigenvalue decompositions. However, such methods require the use of higher-precision arithmetic for the orthogonalization of the approximate eigenvectors, and hence we cannot restrict the higher-precision arithmetic to matrix multiplication. Moreover, Wilkinson [26, Chapter 9, pp. 637–647] explained the refinement of eigenvalue decompositions for general square matrices, mentioning Jahn's method [17, 4]. Such methods rely on a similarity transformation C := X̂^(-1) A X̂ computed with high accuracy for a computed result X̂ for X, which requires an accurate solution of the linear system X̂C = AX̂ for C, and breaks the symmetry of A.

Alternatively, the Jacobi algorithm is useful for improving the accuracy of all computed eigenvectors. In addition, Davies and Modi [6] proposed a direct method for completing the symmetric eigenvalue decomposition of nearly diagonal matrices. However, owing to rounding errors in floating-point arithmetic, it is difficult to compute the eigenvectors corresponding to clustered eigenvalues with high accuracy. In other words, higher-precision arithmetic is required for computing accurate eigenvectors corresponding to clustered eigenvalues. We will mention the details in Section 2.
With this background, we try to derive a simple and efficient iterative refinement algorithm for simultaneously improving the accuracy of all the eigenvectors with quadratic convergence, which requires O(n^3) operations per iteration. In fact, the proposed algorithm can be regarded as a variant of Newton's method, and therefore its quadratic convergence is naturally derived.

1.3 Our idea

The idea behind the proposed algorithm is as follows. For a computed eigenvector matrix X̂ for (1), define E ∈ ℝ^(n×n) such that X = X̂(I + E). Then we aim to compute a sufficiently precise approximation Ẽ of E using the following two relations:

    X^T X = I     (orthogonality)
    X^T A X = D   (diagonality)    (2)

After obtaining Ẽ, we can update X′ := X̂(I + Ẽ). If necessary, we iterate the process as X̂^(ν+1) := X̂^(ν)(I + Ẽ^(ν)). Under some conditions, we prove Ẽ^(ν) → O and X̂^(ν) → X, where the convergence rates are quadratic. Using (2), we will concretely derive a basic refinement algorithm (Algorithm 1 in Section 3), which works perfectly for the eigenvectors associated with well-separated eigenvalues.
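To illustrate how the two relations in (2) yield a concrete update, one can expand X^T X = I and X^T A X = D to first order in E, which determines Ẽ entrywise from the residuals R := I − X̂^T X̂ and S := X̂^T A X̂. The NumPy sketch below is our own reconstruction of such a Newton-type step for the well-separated case, not the paper's Algorithm 1 verbatim (which appears in Section 3 and evaluates the residual matrix products in higher precision; the sketch runs entirely in binary64):

```python
import numpy as np

def refine_step(A, X):
    """One Newton-type refinement step sketched from relations (2).

    First-order expansion of (I+E)^T (I-R) (I+E) = I gives E + E^T = R,
    and of (I+E)^T S (I+E) = D gives, for i != j,
        S_ij + lam_j R_ij + (lam_i - lam_j) E_ij = 0.
    Assumes well-separated eigenvalues (no zero denominators below).
    """
    n = A.shape[0]
    R = np.eye(n) - X.T @ X           # orthogonality residual
    S = X.T @ A @ X                   # near-diagonal similarity
    # Diagonal relation S_ii + lam_i R_ii = lam_i gives refined eigenvalues.
    lam = np.diag(S) / (1.0 - np.diag(R))
    denom = lam[np.newaxis, :] - lam[:, np.newaxis]   # lam_j - lam_i
    np.fill_diagonal(denom, 1.0)                      # placeholder, overwritten
    E = (S + R * lam[np.newaxis, :]) / denom          # off-diagonal entries
    np.fill_diagonal(E, np.diag(R) / 2.0)             # from E + E^T = R
    return X @ (np.eye(n) + E), lam
```

One step maps an orthogonality residual of size ε to roughly ε², which is the quadratic convergence claimed for the basic algorithm; in double precision the improvement stalls near u unless the products R and S are computed with doubled working precision, as the paper stipulates.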
